On the European AI Strategy

This was a busy week. I participated in the AI Europe Stakeholder Summit: A European Strategy for Artificial Intelligence, organized by the Section for the Single Market, Production and Consumption of the European Economic and Social Committee (EESC) - great event, great people, great debate! 😊
Also, as a member of the European AI Alliance, I have submitted several topics on AI ethics for discussion by the High-Level Expert Group on AI.
Here is a summary of my contributions, supplemented by the reasoning behind some of them.

* * *

First and foremost, I have an uneasy feeling that Europe has not yet clarified to itself what the purpose of AI in, and for, Europe should be. I mean, AI is cool and all, but why are we in? What is Europe trying to achieve, what problems are we trying to solve with AI? Are we simply jumping on a shiny bandwagon, with no clear idea what to do next once we are on board?

I tried to find an answer to these questions in the EPSC strategic note on "The Age of Artificial Intelligence: Towards a European Strategy for Human-Centric Machines" (March 2018) and the European Commission communication on "Artificial Intelligence for Europe" (April 2018).

In its introduction, the Communication reminds us that "beyond making our lives easier, AI is helping us to solve some of the world's biggest challenges: from treating chronic diseases or reducing fatality rates in traffic accidents to fighting climate change or anticipating cybersecurity threats. [...] These are some of the many examples of what we know AI can do across all sectors, from energy to education, from financial services to construction." (p. 2) But these are isolated and disconnected benefits, not underpinned by an overarching theme that points to a chief objective.

This Communication then sets out a European initiative on AI (p. 4), which aims to:
  • Boost the EU's technological and industrial capacity and AI uptake across the economy.
  • Prepare for socio-economic changes brought about by AI by encouraging the modernization of education and training systems, nurturing talent, anticipating changes in the labor market, supporting labor market transitions and adaptation of social protection systems.
  • Ensure an appropriate ethical and legal framework, based on the Union's values and in line with the Charter of Fundamental Rights of the EU.
The argument and recommendations of the strategic note, which run along much the same lines, are summarized on its first page:
Finding a policy response to what is undoubtedly ‘the next big thing’ is both urgent and challenging. Europe needs an ambitious and rapid deployment strategy, covering both business and public administration. This must go hand in hand with a world-class research and science strategy, as well as an international drive to claim its stake in what is for now a heated race between the United States and China for global dominance. In addition to creating an enabling environment for AI, Europe must use its widely recognized values and principles to build global regulatory norms and frameworks that ensure a human-centric and ethical development of this technology.
To me, the two documents tell us what, but are very vague about why and to what end. It seems that we are reluctantly joining what is essentially a US-China race, in order to avoid the worst - falling too far behind and becoming irrelevant on the global stage. I have to admit this might work as a strategic goal, but it is hardly inspiring or uplifting.

In contrast, the Chinese "Next Generation AI Development Plan", adopted a year ago, is aimed at "enhancing national power" and at "the great rejuvenation of the Chinese nation" - pretty straightforward, and more effective in mobilizing energies and directing efforts.

The uneasy feeling is compounded by the fact that there is no institutional guarantee at the EU level of continuity in strategic thinking on this fundamental challenge - a challenge that requires long-term vision and commitment because, as Stephen Hawking put it, "this could be the biggest event in the history of our civilization - or the worst". Don't get me wrong: I am a fan of President Juncker, and I do not mean to belittle the initiatives and achievements of the current Commission. My concern stems from the fact that I cannot think of a current European institutional arrangement that embodies and / or ensures the pursuit and implementation of a long-term strategic vision for Europe.

Coming back now to the issue of purpose. To simplify grossly, I strongly suggest that we decide whether the main thrust should be towards a more competitive and innovative European industry, or towards improving the quality of life of all Europeans - or something else altogether. Will, or should, AI and the wider current of digital transformation be put in the service of deeper European integration, leading to a political Union? (Imagine, though, the massive backlash such an idea would generate...)
Based on the overall purpose of AI in Europe, and underpinned by the European value system, we should then develop a priority ranking by type and domain: what kinds of AI do we want, and where do we want them?

* * *

There is a distinction between the operational, analytical, and predictive uses / facets of AI, as detailed in the Chatham House report:
  • In analytical roles, AI systems might allow fewer humans to make higher-level decisions, or to automate repetitive tasks such as monitoring sensors set up to ensure treaty compliance. In these roles, AI may well change – and in some ways it has already changed – the structures through which human decision-makers understand the world. But the ultimate impact of those changes is likely to be attenuated rather than transformative.
  • Predictive uses of AI could have more acute impacts, though likely on a longer timeframe. Such uses may change how policymakers and states understand the potential outcomes of specific courses of action. This could, if such systems become sufficiently accurate and trusted, create a power gap between those actors equipped with such systems and those without - with notably unpredictable results.
  • Operational uses of AI are unlikely to fully materialize in the near term. The regulatory, ethical and technological hurdles to fully autonomous vehicles, weapons and other physical-world systems such as robotic personal assistants are very high – although rapid progress towards overcoming these barriers is being made. In the longer term, however, such systems could radically transform not only the way decisions are made but the manner in which they are carried out.
In addition to these types, it seems that we have already made the first steps towards what I call - for lack of a better term - a philosopher AI: one that understands and develops concepts, makes inferences, and more generally interprets and gives meaning to the world as it experiences it. On the negative side, these abilities could lead to unforeseen and unexpected biases, meaning the selective use of information according to the AI's worldview and value system. I can easily imagine such an AI eventually being able to formulate value-based objectives, and to develop strategies and plans to achieve them - and, given what it would pick up from watching the news and surfing the internet, I dread what those objectives might be.
At Kyndi, computer scientists are writing code in Prolog, a programming language that dates to the 1970s. It was designed for the reasoning and knowledge representation side of AI, which processes facts and concepts and tries to complete tasks that are not always well defined. Kyndi has been able to use very little training data to automate the generation of facts, concepts and inferences: its system can train on 10 to 30 scientific documents of 10 to 50 pages each, and once trained, the software can identify concepts and not just words, reading 1,000 documents in seven hours. Kyndi thus serves as a tireless digital assistant, identifying the documents and passages that require human judgment.
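Kyndi's actual code is of course not public, but a toy Prolog sketch (all facts and names below are invented for illustration) shows the flavor of this symbolic approach: knowledge lives in facts and rules, and the engine derives new facts - inferences - on demand.

```prolog
% Hypothetical sketch, not Kyndi's code: facts extracted from documents,
% plus a rule from which the Prolog engine infers new knowledge.

concept(graphene, material).
concept(copper, material).
property(graphene, conducts_electricity).
property(copper, conducts_electricity).

% Rule: a material that conducts electricity is a candidate conductor.
candidate_conductor(X) :-
    concept(X, material),
    property(X, conducts_electricity).
```

Querying ?- candidate_conductor(X). yields X = graphene and X = copper - facts never stated explicitly, but inferred from the rule. This is the "reasoning and knowledge representation side of AI" mentioned above, and it needs no large training corpus.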
Another distinction is between localized and extended AI: the former has a clearly circumscribed material footprint, while the latter is dispersed in a network. How do we regulate a system that cannot be pinpointed geographically?

All this makes me think that a single, one-size-fits-all set of ethical rules for an undifferentiated AI might not be the proper approach. I believe each of the types described above needs to incorporate a distinct set of checks and failsafes - all of them, of course, derived from the values we choose to uphold, but adapted to the respective specifics.

* * *

On granting AI legal personality: should we, and are we prepared to, allow AI systems to operate autonomously and anonymously in financial matters, including making payments / donations to political action groups?
One group of experts argues that legal personality for AI is a bad idea, and that robot rights would violate human rights.

* * *

Should we encourage a profession whose job description is the algorithmization of higher-level non-routine cognitive tasks? Should we allow the practice of this profession in Europe, and / or on European citizens?
Judging from the Robot Coach HuBot on display at the AI Europe Stakeholder Summit, the voluntary transfer of one's knowledge and skills to AI / robots is already accepted - or at least acceptable.


* * *

Last but not least, we should keep in mind the issue of fairness. The current debate about eliminating AI bias should be complemented with careful consideration of the necessary bias we would want incorporated in AI systems - e.g., the relaxation of economic or logistical optimization criteria when they conflict with the protection of vulnerable persons.
Let's imagine two cases of an AI without the type of bias we should rightfully endeavor to eliminate, but also without purposely designed fairness-oriented rules ("affirmative action" bias). A local government uses AI for traffic infrastructure and flow optimization; the AI recommends investments and resource deployments that, while making a lot of economic and logistical sense, disadvantage areas inhabited by older and / or financially deprived people. Another case could be a company using a workforce optimization AI, which leads to disproportionately higher layoffs among single moms. In both cases the AI solution might be hailed as optimal and objective, yet be deeply offensive to a sense of fairness - as the sketch below illustrates.
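Staying with Prolog for consistency, here is a minimal sketch of the traffic-infrastructure case (all plan names, costs and affected groups are invented) showing how a single purposely designed fairness rule changes which solution counts as "optimal":

```prolog
% Hypothetical sketch: plan(Name, Cost, GroupsDisadvantaged).
plan(ring_road, 100, [elderly_district]).
plan(tram_line, 130, []).

% Purely economic optimization: pick the cheapest plan.
cheapest(Plan) :-
    plan(Plan, Cost, _),
    \+ (plan(_, Cheaper, _), Cheaper < Cost).

% Fairness-constrained optimization: the cheapest plan that
% disadvantages no vulnerable group.
fair_choice(Plan) :-
    plan(Plan, Cost, []),
    \+ (plan(_, Cheaper, []), Cheaper < Cost).
```

Querying ?- cheapest(P). returns ring_road - "optimal and objective", but unfair to the elderly district; ?- fair_choice(P). returns the costlier tram_line. The point is not the few lines of code but the design decision they encode: fairness does not emerge from optimization, it has to be written in.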

* * *

My other posts relevant to the debate