HANDBOOK ON ETHICAL ISSUES RELATED TO ALGORITHMS, AUTOMATION AND AI
List of References
Adams‐Prassl, J., Binns, R. and Kelly‐Lyth, A. (2022). Directly Discriminatory Algorithms. Modern Law
Review, Vol. 86, Issue 1, pages 144-175.
Allhutter, D., Cech, F., Fischer, F., Grill, G., & Mager, A. (2020). Algorithmic profiling of job seekers in
Austria: how austerity politics are made effective. Frontiers in Big Data, Vol. 3, Article 5.
Buolamwini, J. & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial
Gender Classification. In Sorelle A. Friedler and Christo Wilson (eds.), Conference on Fairness,
Accountability and Transparency, FAT 2018, 23-24 February 2018, New York, NY, USA. Proceedings of
Machine Learning Research, Vol. 81, pages 77-91, PMLR.
Casilli, A. "An 'End-to-End' Approach to Ethical AI", Institut Polytechnique de Paris. See Casilli's AI talk @
ETUI, available at: https://www.etui.org/events/ai-talks-etui-what-really-ethical-ai
Chapman, P., Clinton, J., Kerber, R., Khabaza, T., Reinartz, T., Shearer, C., & Wirth, R. (2000). CRISP-DM
1.0: Step-by-step data mining guide.
Corbett-Davies, S., & Goel, S. (2018). The Measure and Mismeasure of Fairness: A Critical Review of
Fair Machine Learning. In arXiv [stat.ML]. Available at: http://arxiv.org/abs/1808.00023
Council of Europe, Ad Hoc Committee on Artificial Intelligence (CAHAI). (2020). AI Ethics Guidelines:
European and Global Perspectives. Provisional report by Marcello Ienca and Effy Vayena, available at:
https://rm.coe.int/cahai-2020-07-fin-en-report-ienca-vayena/16809eccac
Dastin, J. (2018). 'Amazon scraps secret AI recruiting tool that showed bias against women', Reuters (10
October), available at: www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-
secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
European Commission, Directorate-General for Justice and Consumers, Gerards, J., Xenidis, R., (2021).
Algorithmic discrimination in Europe: challenges and opportunities for gender equality and non-
discrimination law, Publications Office, available at: https://data.europa.eu/doi/10.2838/544956
Gerards, J., Schäfer, M.T., Vankan, A. & Muis, I. Impact Assessment: Fundamental rights and
algorithms (Netherlands Ministry of the Interior and Kingdom Relations, 2022), available at:
https://www.government.nl/documents/reports/2021/07/31/impact-assessment-fundamental-rights-and-
algorithms
Gupta, A.H. (2019). 'Are Algorithms Sexist?', The New York Times (15 November), available at:
www.nytimes.com/2019/11/15/us/apple-card-goldman-sachs.html
Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against
algorithmic discrimination under EU law. Common Market Law Review, Vol. 55, Issue 4, pages
1143-1185.
Hardt, M., Price, E., & Srebro, N. (2016). Equality of Opportunity in Supervised Learning. Advances in
Neural Information Processing Systems, pages 3315-3323. Available at: https://arxiv.org/abs/1610.02413
Ienca, M., & Vayena, E. (2020). On the responsible use of digital data to tackle the COVID-19 pandemic.
Nature Medicine, Vol. 26, Issue 4, pages 463-464.
Jobin, A., Ienca, M. & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine
Intelligence, Vol. 1, pages 389–399, available at: https://doi.org/10.1038/s42256-019-0088-2
Kamishima, T., Akaho, S., Asoh, H., & Sakuma, J. (2012). Fairness-Aware Classifier with Prejudice
Remover Regularizer. Machine Learning and Knowledge Discovery in Databases, pages 35-50. Available
at: https://link.springer.com/chapter/10.1007/978-3-642-33486-3_3