AMS algorithm – a sociotechnical study

In 2020, the Austrian Public Employment Service (AMS) will introduce an algorithm to help allocate subsidies for the training of jobseekers. The so-called “AMS algorithm” is controversial. The ITA Wien – in cooperation with the Chamber of Labour and TU Wien – is analyzing the technical characteristics and social consequences of the system. […]

The AMS algorithm lacks an instruction leaflet

The following was published in the Austrian newspaper ‘Futurezone’ on 3 October 2019 (see Dem AMS-Algorithmus fehlt ein Beipackzettel).

We are six scientists from TU Vienna, WU Vienna and the University of Vienna, with diverse backgrounds in Artificial Intelligence, Mathematics, Business Informatics, Cognitive Science, Social Sciences and Science and Technology Studies. We have been researching the AMS algorithm for some time now and are puzzled by the current debate surrounding it.

Genuine Transparency

Time and again, the AMS has spoken of transparency with regard to the AMS algorithm. Whether and to what (productive) extent transparency exists is measured by the degree to which a scientific discourse based on verifiable facts and data is possible. Genuine transparency would mean that the AMS provides these verifiable facts and data. Unfortunately, this has not yet happened to a satisfactory degree: of the 96 model variants that together form the algorithmic system, only two have been published, one of them only on request. Moreover, the corresponding error rates of the 96 model variants remain largely unknown.
It is unacceptable, and contradicts the postulate of transparency, that extensive correspondence and several inquiries were required to receive even a fraction of the necessary information. Rather, the AMS, as a bearer of public responsibility, should proactively fulfill its promise of transparency and make the corresponding model variants, data and facts available in a verifiable, comprehensible and sufficiently anonymized form, so that they can be analyzed in the course of a broad democratic discourse. Assessing whether the available transparency suffices for scientific debate is the responsibility of the scientific community; it cannot be replaced by claims made by AMS officers on their private websites. Such publications on private channels are not verifiable and are not subject to adequate control by constitutional institutions. Instead, communication should run through the appropriate official channels of the AMS, where genuine transparency should be practiced.
Science thrives on critical engagement with a shared base of information. With regard to the AMS algorithm, this common information foundation does not yet exist and must urgently be created. Remarkably, it has hardly grown even after a year of media debate: the public knows almost as little about the actual deployment of automated systems at the AMS as it did a year ago.

Which technology is it actually about?

Contrary to the claims made by the AMS board, the AMS algorithm is based on training data – personal data from the previous four years together with ex-post observations of the outcomes – and produces forecasts using the 96 statistical models mentioned above. The system is therefore subject to the same sources of error, such as bias, as other systems built on training data. Beyond the question of what exactly does or does not count as AI, the discussion should focus on the applicability and meaningfulness of the chosen technical method, as well as on its risks and problems.
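To make this concrete, the sketch below shows the kind of mechanism at work: a logistic-regression-style model (the form the system is reported to use) that maps a jobseeker’s recorded attributes to an ‘integration chance’, followed by threshold rules that assign a group. All feature names, coefficients and cut-offs here are hypothetical illustrations, not the AMS’s actual values.

```python
# Minimal sketch, assuming a logistic-regression-style model of the kind
# the AMS system is reported to use. Feature names, coefficients and
# thresholds are hypothetical illustrations, not the published values.
import math

COEFFICIENTS = {
    "intercept": 0.8,
    "female": -0.14,                  # a negative weight lowers the score
    "age_over_50": -0.60,
    "childcare_obligations": -0.15,
    "prior_employment_days": 0.002,   # per day employed in recent years
}

def integration_chance(person: dict) -> float:
    """Predicted probability of labour-market integration (logit model)."""
    z = COEFFICIENTS["intercept"]
    for feature, weight in COEFFICIENTS.items():
        if feature != "intercept":
            z += weight * person.get(feature, 0)
    return 1.0 / (1.0 + math.exp(-z))

def assign_group(short_term: float, long_term: float) -> str:
    """Threshold rules mapping predicted chances to groups A/B/C.
    The cut-offs mirror publicly reported figures but are assumptions here."""
    if short_term >= 0.66:
        return "A"  # high integration chance
    if long_term < 0.25:
        return "C"  # low integration chance
    return "B"      # medium integration chance

person = {"female": 1, "childcare_obligations": 1, "prior_employment_days": 400}
p = integration_chance(person)
print(assign_group(short_term=p, long_term=p * 0.8))  # prints "A" here
```

Even in this toy form, the structural issue discussed below is visible: a negative coefficient on an attribute such as gender systematically lowers the predicted chance of everyone who carries it.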
The increasing trend towards the automation of administrative technologies also brings with it a responsibility towards citizens in particular and society in general: in this context it is absolutely necessary to treat the people at the centre of these systems with dignity and as whole persons, rather than taking a reductionist data view as the measure of the system’s success. It should be beyond dispute that such a system must not be piloted on structurally disadvantaged groups without external evaluation – that is, before the system has matured and its maturity has been confirmed by independent scientific experts. This demand has remained unanswered since the announcement of the ‘evaluation phase’ in October 2018.

The application of algorithms as a question of principle

The implementation of such automated systems by the public sector is a fundamental decision and must be discussed in a democratically legitimized society. Sufficient transparency is part of this discourse, and it is only adequate if it actually enables its addressees to conduct that discourse. So far, this debate has only been touched upon; many questions remain open, and without answers to them a truly transparent social debate is not possible. Answering them requires verifiable data and facts.
It should also be noted that the current application of the AMS algorithm fails to comply with international standards in many respects. In early 2018, the Council of Europe published recommendations for the public-sector use of algorithms (co-authored by Ben Wagner, one of the authors of this article), and the AMS’s current use of algorithms contravenes them. Various colleagues from the scientific community have also explicitly warned against the current use of algorithms by the AMS, both in media reports and at scientific events, such as an event at the University of Vienna on algorithms in job placement on 23 April 2019. Beyond transparency, state-of-the-art practice would also mean involving end users such as jobseekers and AMS counsellors in the development of information systems, to ensure that their needs and views are properly taken into account.

Women are disadvantaged

Contrary to the claims of the AMS executive, the question of potential discrimination against groups of people is by no means resolved either. The principle of equal treatment states clearly that no one may be discriminated against on the basis of gender or ethnicity in accessing goods or services available to the public. Irrespective of whether classification into group A, B or C is more or less desirable for a given individual, the segmentation of the persons served by the AMS inevitably means that some groups receive different subsidies because of their gender. The AMS may vehemently deny that this disadvantages women in particular, but that denial remains an unproven assertion until concrete, scientifically reliable data are published.
From a scientific point of view, this assertion moreover seems questionable, especially since further discriminatory effects – such as cumulative disadvantage, where several structural disadvantages intersect (e.g. women with childcare obligations and a migration background) – remain unaddressed. The personal disadvantage that individuals can suffer from an incorrect classification, which can only be proven retrospectively, is also highly problematic and must be investigated from the viewpoint of discrimination. Since only a fraction of the error rates of the 96 model variants have been published, it is reasonable to assume that some of these models have a worse hit rate than others – which would mean that the affected person segments are discriminated against through higher error rates and incorrect allocations.
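To illustrate the last point, here is a minimal sketch, on entirely invented audit data, of the per-segment error analysis that the unpublished error rates currently make impossible: if some segments are misclassified far more often than others, those segments bear the burden of incorrect allocations.

```python
# Sketch: comparing misclassification rates across person segments.
# The audit records below are invented; the point is that an acceptable
# overall error rate can hide much higher error rates for some groups.
from collections import defaultdict

# (segment, predicted_group, actually_appropriate_group) - hypothetical data
records = [
    ("women_childcare", "C", "B"), ("women_childcare", "C", "C"),
    ("women_childcare", "B", "A"), ("women_childcare", "C", "B"),
    ("men_under_30",    "A", "A"), ("men_under_30",    "B", "B"),
    ("men_under_30",    "A", "A"), ("men_under_30",    "B", "A"),
]

errors = defaultdict(lambda: [0, 0])  # segment -> [misclassified, total]
for segment, predicted, actual in records:
    errors[segment][0] += predicted != actual
    errors[segment][1] += 1

for segment, (wrong, total) in errors.items():
    print(f"{segment}: error rate {wrong / total:.0%} ({wrong}/{total})")
# women_childcare: error rate 75% (3/4)
# men_under_30: error rate 25% (1/4)
```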

Decisions are made by counsellors

The argument that decisions will continue to be made by counsellors is untenable from a scientific point of view: since the official launch of the system, AMS decisions have been made jointly by counsellors and a system that issues recommendations in the form of group assignments. It has been demonstrated repeatedly that the mere existence of such a system influences counsellors’ decisions. How exactly the algorithm influences those decisions cannot be quantified without an external audit and an analysis based on a transparent, common data pool.
The AMS’s claim that counsellors are encouraged to question the system’s assessment and “check the value calculated by the computer” is highly problematic without a clear description of how this check should be carried out. The “internal guidelines” referred to so far, which are supposed to regulate the allocation of measures in light of the algorithmic result, have not been published. Yet even the best guidelines would not be able to prevent a problematic influence of the algorithm.

A more efficient AMS as a goal

The rarely questioned recourse to efficiency as the ultimate goal harbors its own pitfalls. Contrary to the assumption that the new system will make their work easier and more efficient, AMS counsellors find themselves in a new field of tension: on the one hand they are expected to use the new system as an aid, on the other they are required to intervene as a “social corrective” whenever they believe the system is wrong. In addition to looking after jobseekers, they now also have to operate an algorithmic system and, above all, understand it critically enough to recognize and correct its mistakes. This additional effort naturally reduces the time spent with jobseekers and stands in contrast to the supposed efficiency of the new system. Precisely these questions – the exact operationalization, and the internal rules and guidelines for dealing with this tension – require absolute transparency to enable a critical dialogue about the objectives and expectations of such a system.

The algorithm without an instruction leaflet

Finally, a note on the handling of uncertainty, which the AMS likes to address with a comparison: the algorithm, it argues, is like a “medication”, because it produces “forecasts into the future with probabilities and not 100% accuracy”. Such risks, the argument goes, must be dealt with in a democratic society.
The comparison of the AMS algorithm with medication is instructive, because medication illustrates precisely how the public sector customarily handles risks and probabilities. A drug must be tested in clinical trials over a period of years, and the studies and their underlying data must be publicly accessible and transparent. A systematic review by independent external entities is mandatory. Even then, the medication still carries a residual risk, which is why package inserts are mandatory, so that people can decide before taking a medication whether they want to use it or not. The AMS algorithm, by contrast, lacks both the clinical studies and the package leaflet.
This is crucial because, as a person affected by the algorithm, you have no possibility of opting out of it. As an affected individual, you are at the mercy of a system whose risks you cannot assess yourself. In the health sector, it is hard to imagine government institutions distributing drugs without leaflets. Why, then, should the distribution of resources by the AMS to people in vulnerable positions be any different?
Florian Cech (TU Vienna), Fabian Fischer (TU Vienna), Gabriel Grill (TU Vienna, University of Michigan), Soheil Human (WU Vienna, University of Vienna), Paola Lopez (University of Vienna), Ben Wagner (WU Vienna)

Radio interview about the ‘AMS Algorithmus’ – Radio NJOY 91.3FM

To talk about current critical research on, and issues with, the AMS algorithm, Florian Cech joined the ‘Wissenschaftsradio’ for an interview on Radio NJOY 91.3FM on October 1st, 2019.

A copy of the interview is available for streaming: ‘AMS: Neuer Algorithmus darf diskriminieren?’

Panel discussion “Social Media: Wie politische Kommunikation im digitalen Zeitalter funktioniert”

We’re excited to participate in the following event (the event will be held in German):

Social Media

How political communication works in the digital age

Technology has upended political communication: social media have become an important multiplier, decisions are made on the basis of data, and target groups can be addressed with precision.

How does political communication work today? Which mechanisms lead to success? Have big data and artificial intelligence become indispensable? Which channels are relevant, which digital strategies successful? And how great is the actual influence of Facebook, Twitter and co. on public opinion?

Experts will discuss these questions on 19 September at the Haus der Musik. The keynote will be given by Yussi Pick (Pick & Barth Digital Strategies GmbH). He will then be joined in discussion by Florian Cech (TU Wien), Lena Doppel-Prix (digital strategist), Klemens Ganner (APA-DeFacto), Nina Hoppe (HOPPE – Strategia. Politica. Media.) and Dieter Zirnig (neuwal).


Details and sign-up available at https://eventmaker.at/apa/social_media/.

Call for Case Studies @ C&T 2019

3-7 June 2019, TU Wien, Vienna, Austria
https://2019.comtech.community/casestudies.html

The International Conference on Communities and Technologies (C&T) is the premier international forum on the complex connections between communities – both physical and virtual – and information and communication technologies. The theme of C&T 2019 is “Transforming Communities”, embracing a dynamic view of communities and paying particular attention to the roles of technologies in the making, un-making, and re-making of communities (see the Call for Papers for more).

C&T 2019, for the first time, will host two separate Case Studies tracks: […]

Call for Papers: ECIS 2019 Workshop – Engineering Accountable Information Systems

The function of information systems in society is increasingly the focus of information systems research. One outstanding challenge is how to build information systems that promote accountability, as part of a wider debate on fairness, accountability, and transparency principles (e.g., ACM FAT*). This need for accountability must be reflected on all layers of systems engineering, ranging from process analysis to the concrete technical components and primitives employed during implementation.

Based on our existing research, we believe that three specific areas of information systems research are most relevant in this context: user cognition and human behaviour in relation to interface design; automated decision-making and decision-support systems; and critical business processes with a particular need for accountability. In all these and further areas, accountability in complex information systems needs to be addressed through technical mechanisms. Notwithstanding existing high-level considerations on accountability in information systems, the concrete engineering and implementation of such mechanisms has so far received only limited attention. Similarly, the challenges that arise when abstract accountability concepts are transformed into concrete technologies, as well as the critical evaluation of the resulting implementations, are only rarely covered by existing research.

We see this workshop as an opportunity to close these gaps by engaging with the existing debate on accountability while integrating key knowledge from the information systems community. Importantly, our approach to accountable systems is not focused solely on high-level norms of system development, but also, indispensably, incorporates practical questions of engineering and designing actual systems. We therefore encourage submissions from the above-mentioned and other areas of accountability engineering, as long as they sufficiently incorporate questions of concrete systems engineering and implementation. […]