Ethics & Bias in Artificial Intelligence

The Vienna Deep Learning Meetup and the Centre for Informatics and Society invite you to an evening of discussion on the topic of Ethics and Bias in AI. As promising as machine learning techniques are in terms of their potential to do good, the technologies raise a number of ethical questions and are prone to biases that can subvert their well-intentioned goals.

Machine learning systems, from simple spam filters or recommender systems to Deep Learning and AI, have already arrived in many parts of society. Which web search results, job offers, product ads and social media posts we see online, even what we pay for food, mobility or insurance – all these decisions are already being made or supported by algorithms, many of which rely on statistical and machine learning methods. As they permeate society more and more, we also discover the real-world impact of these systems due to the inherent biases they carry. For instance, criminal risk scoring used to determine bail for defendants in US district courts has been found to be biased against black people [1], and analyses of word embeddings have shown that they reaffirm gender stereotypes because of biased training data. While a general consensus seems to exist that such biases are almost inevitable, proposed responses range from accepting the bias as a factual representation of an unfair society to mathematical approaches that try to detect and counteract bias in machine learning training data and in the resulting algorithms.
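
As a concrete illustration of the word-embedding point above, here is a minimal sketch in Python of one common way such bias is probed: projecting occupation words onto a "he"–"she" direction. The vectors below are hypothetical toy values for illustration only; a real analysis would use a pretrained embedding model such as word2vec or GloVe.

import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional toy embeddings (illustrative values only).
emb = {
    "he":       np.array([ 0.9, 0.1, 0.0, 0.2]),
    "she":      np.array([-0.9, 0.1, 0.0, 0.2]),
    "engineer": np.array([ 0.5, 0.7, 0.1, 0.0]),
    "nurse":    np.array([-0.5, 0.7, 0.1, 0.0]),
}

# A simple gender direction: the difference between "he" and "she".
gender_direction = emb["he"] - emb["she"]

# If the training corpus associated an occupation more with one gender,
# its projection onto this direction is skewed towards that end.
for word in ("engineer", "nurse"):
    print(word, round(cosine(emb[word], gender_direction), 3))

In this toy example, "engineer" scores positive (towards "he") and "nurse" negative (towards "she"), mimicking the stereotype-like skew reported for real embeddings trained on large text corpora.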

Besides producing biased results, many machine learning methods and applications already in use today raise complex ethical questions. Should governments use machine learning and AI methods to determine the trustworthiness of their citizens (cf. [3])? Should the use of algorithmic systems that are known to have biases be tolerated to benefit some while disadvantaging others? Is it ethical to develop AI technologies that might soon replace many jobs currently performed by humans? And how do we keep AI and automation technologies from widening society’s divides, such as the digital divide or income inequality?

These and many more questions call for a broad, multidisciplinary discussion to ensure a fair and beneficial future for AI and related technologies. This event aims to provide a platform for that debate in the form of two keynotes and a panel discussion with five international experts from a range of scientific fields.

This event has already happened. Watch the recording of the live stream, or take a look at the photos!

Keynotes

Prof. Moshe Vardi
“Deep Learning and the Crisis of Trust in Computing”

Prof. Sarah Spiekermann-Hoff
“The Big Data Illusion and its Impact on Flourishing with General AI”

Panelists: Ethics and Bias in AI

Prof. Moshe Vardi
Karen Ostrum George Distinguished Service Professor in Computational Engineering, Rice University

Prof. Peter Purgathofer
Centre for Informatics and Society / Institute for Visual Computing & Human-Centered Technology, TU Wien

Prof. Sarah Spiekermann-Hoff
Institute for Management Information Systems, WU Vienna

Prof. Mark Coeckelbergh
Professor of Philosophy of Media and Technology, Department of Philosophy, University of Vienna

Dr. Christof Tschohl
Scientific Director at Research Institute AG & Co KG

Moderator: Markus Mooslechner, Terra Mater Factual Studios

Agenda

18:30 – 19:00     Welcome
19:00 – 19:30     Deep Learning and the Crisis of Trust in Computing, Prof. Moshe Vardi, Rice University
19:30 – 20:00     The Big Data Illusion and its Impact on Flourishing with General AI, Prof. Sarah Spiekermann-Hoff, WU Vienna
20:00 – 21:30     Panel Discussion
21:30 – 23:00     Networking, Buffet

The evening will be complemented by networking & discussions over snacks and drinks.

This event is a joint effort between the Vienna Deep Learning Meetup group and the Centre for Informatics and Society of TU Wien.

Organizers:

Thomas Lidy, Alexander Schindler, Jan Schlüter and Florian Cech

May 7, 2018
Prechtl-Saal, TU Wien