October 3, 2019
The following was published in the Austrian newspaper ‘Futurezone’ on 03.10.2019 (see Dem AMS-Algorithmus fehlt ein Beipackzettel, “The AMS algorithm lacks a package insert”).
We are six scientists from TU Vienna, WU Vienna and the University of Vienna, with diverse backgrounds in Artificial Intelligence, Mathematics, Business Informatics, Cognitive Science, Social Sciences, and Science and Technology Studies. We have been researching the AMS algorithm for some time and are puzzled by the current public debate about it.
Genuine Transparency
Time and again, the AMS has spoken of transparency with regard to the AMS algorithm. Whether and to what (productive) extent transparency exists is measured by the degree to which a scientific discourse based on verifiable facts and data is possible. Genuine transparency would mean that the AMS itself provides these verifiable facts and data. Unfortunately, this has not yet happened to a satisfactory degree: of the 96 model variants that are bundled to form the algorithmic system, only two have been published, one of them only on request. The corresponding error rates of the 96 model variants also remain largely unknown.
It is unacceptable, and contradicts the postulate of transparency, that extensive correspondence and several inquiries were required to receive even a fraction of the necessary information. Rather, the AMS, as the bearer of public responsibility, should proactively deliver the promised transparency and make the corresponding model variants, data and facts available in a verifiable, comprehensible and sufficiently anonymized form, so that they can be analysed in the course of a broad democratic discourse. Assessing whether the available transparency suffices for a sound scientific debate is the responsibility of the scientific community and cannot be replaced by claims made by AMS officers on their private websites. Such publications on private channels are not verifiable and are not subject to adequate control by constitutional institutions. Communication should instead run through the appropriate official channels of the AMS, where genuine transparency should be practiced.
Science thrives on critical engagement with a shared base of information. With regard to the AMS algorithm, this shared information base does not yet exist and must urgently be established. Remarkably, it has hardly grown even after a year of media debate: the public knows almost as little about the actual deployment of automated systems at the AMS as it did a year ago.
Which technology is it actually about?
Contrary to the claims made by the AMS board, the AMS algorithm is based on training data, in the form of personal data from the previous four years together with ex-post observations of the outcomes, and produces forecasts based on the 96 statistical models mentioned above. The system is therefore subject to the same sources of error, such as bias, as other systems based on training data. Rather than debating definitions of what exactly is or is not AI, the discussion should focus on the applicability and meaningfulness of the chosen technical method as well as on its risks and problems.
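To illustrate the underlying mechanism, here is a minimal sketch of such a forecasting system, assuming a simple logistic regression trained on historical outcomes. The features, data and model family are hypothetical stand-ins, since the actual AMS models are not public:

    # Illustrative sketch only: a classifier trained on historical outcomes.
    # All feature names and data are hypothetical, not the AMS's actual model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical records: [age, prior_employment_days]
    # with ex-post labels: 1 = found a job within the observation window.
    X_train = np.array([[25, 300], [52, 40], [31, 200], [47, 10]])
    y_train = np.array([1, 0, 1, 0])

    model = LogisticRegression().fit(X_train, y_train)

    # The forecast for a new jobseeker is a probability, not a certainty.
    p_reemployment = model.predict_proba(np.array([[38, 120]]))[0, 1]
    print(f"Predicted reemployment probability: {p_reemployment:.2f}")

    # Crucial point: such a model can only reproduce patterns in the
    # historical data. If past outcomes reflect structural disadvantage,
    # the forecast inherits that bias.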
The increasing trend towards the automation of administrative technologies also brings with it a responsibility towards citizens in particular and society in general: in this context it is absolutely necessary to treat the people at the centre of these systems with dignity and as whole persons, rather than taking a reductionist data view as the measure of the system's success. It should be a matter of public discourse that such a system must not be piloted on structurally disadvantaged groups before it has matured and has been confirmed as such by independent scientific experts in an external evaluation, a demand that has remained unanswered since the announcement of the ‘evaluation phase’ in October 2018.
The application of algorithms as a question of principle
The implementation of such automated systems by the public sector is a fundamental decision and must be discussed in a democratically legitimized society. Part of this discourse is sufficient transparency, which is only adequate if it actually enables its addressees to conduct this discourse. So far, this debate has only been touched upon; many questions remain unanswered, and without answers to them a truly transparent social debate is not possible. To answer these questions, verifiable data and facts are necessary.
It should also be noted that the current application of the AMS algorithm fails to comply with international standards in many respects. In early 2018, the Council of Europe published recommendations for the public use of algorithms (one of the authors of this article, Ben Wagner, co-authored them), which the current use of algorithms by the AMS contradicts. Various colleagues from the scientific community have also explicitly warned against the current use of algorithms by the AMS, both in media reports and at scientific events, such as an event at the University of Vienna on algorithms in job placement on 23 April 2019. In addition to transparency, being state of the art would also mean involving end users such as jobseekers and AMS counsellors in the development of such information systems, to ensure that their needs and views are adequately taken into account.
Women are disadvantaged
Contrary to the claims of the AMS executive, the question of potential discrimination against groups of people is by no means resolved either. The principle of equal treatment clearly states that no one may be discriminated against on the basis of gender or ethnicity in access to goods or services available to the public. Irrespective of whether classification into group A, B or C is more or less desirable for a given individual, the segmentation of the persons served by the AMS inevitably means that some groups receive different subsidies because of their gender. The AMS may vehemently deny that women in particular are affected, but this denial remains an unproven assertion until concrete, scientifically reliable data are published.
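As a purely hypothetical illustration of why segmentation has this consequence: fixed cut-offs combined with a group-dependent term in the score mechanically push otherwise identical individuals into different groups, and thus towards different subsidies. The coefficient and thresholds below are invented for this sketch and are not the AMS's actual values:

    # Purely hypothetical illustration of threshold-based segmentation.
    # Coefficient and cut-offs are invented, NOT the actual AMS values.
    def assign_group(base_score: float, is_female: bool) -> str:
        """Map a score to segments A/B/C via fixed cut-offs."""
        score = base_score
        if is_female:
            score -= 0.05  # hypothetical group-dependent coefficient
        if score >= 0.66:
            return "A"  # high prospects
        if score >= 0.25:
            return "B"  # medium prospects
        return "C"      # low prospects

    # Two otherwise identical individuals near a cut-off:
    print(assign_group(0.68, is_female=False))  # -> "A"
    print(assign_group(0.68, is_female=True))   # -> "B": different subsidies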
From a scientific point of view, the AMS's denial moreover seems questionable, especially since further discriminatory effects, such as cumulative disadvantage at the intersection of several structurally disadvantaged groups (e.g. women with childcare obligations and a migration background), remain unaddressed. The personal disadvantage that individuals can suffer from an incorrect classification, which can only be proven retrospectively, is also highly problematic and must be investigated from the viewpoint of discrimination. Since only a fraction of the error rates of the 96 model variants have been published, it is reasonable to assume that some of these models have a worse hit rate than others, which would mean that the affected segments of persons are discriminated against through higher error rates and incorrect allocations.
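The following sketch shows the kind of per-segment error analysis that published error rates would make possible; the predictions and outcomes are invented, precisely because the real figures are not available:

    # Hypothetical check for unequal error rates across segments.
    # With the real per-model error rates unpublished, outside
    # researchers cannot currently perform this analysis.
    import numpy as np

    def error_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        """Fraction of misclassified cases."""
        return float(np.mean(y_true != y_pred))

    # Invented predictions/outcomes for two population segments:
    seg1_true, seg1_pred = np.array([1, 0, 1, 1, 0]), np.array([1, 0, 1, 0, 0])
    seg2_true, seg2_pred = np.array([1, 0, 0, 1, 0]), np.array([0, 0, 1, 0, 0])

    r1 = error_rate(seg1_true, seg1_pred)
    r2 = error_rate(seg2_true, seg2_pred)
    print(f"segment 1: {r1:.0%}, segment 2: {r2:.0%}")
    # Unequal rates (here 20% vs 60%) would mean one segment is
    # systematically misallocated more often than the other.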
Decisions are made by counsellors
The argument that decisions will continue to be made by counsellors is untenable from a scientific point of view: since the official launch of the system, decisions at the AMS have been made jointly by counsellors and a system that issues recommendations in the form of group assignments. It has been demonstrated repeatedly in the scientific literature that the mere existence of such a system influences the decisions of the counsellors. How exactly the algorithm influences those decisions cannot be quantified without an external audit and an analysis based on a transparent, shared data pool.
The AMS's claim that counsellors are encouraged to question the system's assessment and “check the value calculated by the computer” is highly problematic without a clear description of how this check is to be carried out. The “internal guidelines” referred to so far, which are intended to regulate the allocation of measures in light of the algorithmic result, have not been published. Yet even the best guidelines would not be sufficient to prevent the algorithm from exerting a problematic influence.
A more efficient AMS as a goal
The rarely questioned recourse to efficiency as the ultimate goal harbors certain pitfalls. Contrary to the assumption that the new system will make their work easier and more efficient, AMS counsellors find themselves in a new field of tension: on the one hand, they are expected to use the new system as an aid; on the other, they are required to intervene as a “social corrective” whenever they believe the system is wrong. In addition to advising the unemployed, they now also have to operate an algorithmic system and, above all, understand it critically enough to recognize and correct its mistakes. This additional effort naturally reduces the time spent with jobseekers and stands in contrast to the supposed efficiency of the new system. Precisely these questions of exact operationalization and of the internal rules and guidelines for handling this tension demand absolute transparency, so that a critical dialogue about the objectives and expectations of such a system becomes possible.
An algorithm without a package insert
Finally, a note on the handling of uncertainty, an analogy the AMS likes to invoke: according to the AMS, the algorithm is comparable to a “medication”, since it is about “forecasts into the future with probabilities and not 100% accuracy”. Such risks, the argument goes, must be dealt with in a democratic society.
The comparison of the AMS algorithm with medications is instructive insofar as medications illustrate how the public sector customarily handles risks and probabilities. A drug must be tested in clinical trials over a period of years, and the studies and the basis of the investigation must be publicly accessible and transparent. A systematic review by independent external entities is mandatory. Even then a residual risk remains, which is why package inserts are mandatory, so that people can decide before taking a medication whether or not they want to use it. The AMS algorithm, by contrast, lacks both the clinical studies and the package insert.
This is crucial because a person affected by the algorithm has no possibility to decide against the AMS algorithm. As an affected individual, you are at the mercy of a system whose risks you cannot assess yourself. In the health sector, it is hard to imagine government institutions distributing drugs without package inserts. Why, then, should the distribution of resources by the AMS to people in vulnerable positions be any different?
Florian Cech (TU Vienna), Fabian Fischer (TU Vienna), Gabriel Grill (TU Vienna, University of Michigan), Soheil Human (WU Vienna, University of Vienna), Paola Lopez (University of Vienna), Ben Wagner (WU Vienna)