A regulation-centric European Strategy on Artificial Intelligence is inadequate

The intention of the European Strategy on Artificial Intelligence to reduce harm while maximizing individual and social benefits is laudable, and I fully support this humanistic, human-centric approach, which sets Europe apart.

What I am not enthusiastic about is that a regulation-centric perspective seems to have become the cornerstone of this strategy. Currently, Europe appears to favor regulating AI (setting rules and criteria for its operation) over investing in AI and generally creating a favorable environment for its development.
Such an understanding of the European approach is not far off the mark, considering that the recent European Commission proposal for a considerable investment in a European digital program speaks of "helping spread AI across the European economy and society" and "boosting investments to make the most out of AI, while taking into account the socio-economic changes brought about by AI and to ensure an appropriate ethical and legal framework" (section 2). There is no explicit mention whatsoever of investing in AI development.
In contrast, an EPSC strategic note on "The Age of Artificial Intelligence: Towards a European Strategy for Human-Centric Machines", published in March this year, makes the case for both "creating an enabling framework favoring investment in AI, and setting global AI quality standards" (p. 5). Also, the European Commission communication on "Artificial Intelligence for Europe" (April 2018) states in the opening paragraphs of the section titled "The Way Forward" that
The EU should be ahead of technological developments in AI and ensure they are swiftly taken up across its economy. This implies stepping up investments to strengthen fundamental research and make scientific breakthroughs, upgrade AI research infrastructure, develop AI applications in key sectors from health to transport, facilitate the uptake of AI and the access to data. Joint effort by both the public (national and EU levels) and private sectors are needed to gradually increase overall investments by 2020 and beyond, in line with the EU's economic weight and investments on other continents. (p. 6-7)
I suspect that the prominence of the regulatory perspective is at least partly due to the success and mostly positive reception of GDPR.
GDPR is related to AI because AI is data-heavy, and the regulation deals with the collection, circulation, and processing of data. The link is also mentioned in the aforementioned strategic note, in a section called "The General Data Protection Regulation – leading the world towards a better AI?" (p. 6).
I believe that GDPR is also relevant because its success so far lends support to the idea that a regulation-centric AI strategy is efficient, sufficient, and beneficial. But it seems to me that the analogy could be misleading.
GDPR works (or appears to, so far) because media and social media companies have massive market penetration (a large share of the population uses their services) and a huge stake and incentive to be present in the European market, and therefore to comply with GDPR. In contrast, non-European AI providers could be reluctant to enter the European market if the cost of compliance with regulation is higher than the forgone (short-term) revenue. This is acknowledged by the authors of the strategic note, but quickly brushed aside. The foreseen European regulation is indeed expected to make AI development and use in Europe more difficult; nevertheless, the assumption is that the underlying value set will eventually prevail globally (how? why?), so whoever incorporates it from the outset will have a competitive advantage.
Europe has an opportunity to set global standards to reach the highest level of welfare for citizens, gaining trust and thereby setting the ground for a stable and broad level of acceptance of the new technology, not only in Europe but, over time, also in other parts of the world. In the short term this can imply additional hurdles for companies willing to invest in Europe. However, in the long run it is likely that higher standards will prevail, so the companies that gain early trust among users could have a competitive advantage. (EPSC strategic note, p. 5)
Even if we subscribe to this optimistic vision, the question is how long it will take for non-European companies to become willing to invest in Europe, and how far behind Europe will be by then.
One way this scenario could unfold, assuming European technological leadership, is that the European "human-centric AI" perspective and values (as embodied in regulations) are carried all over the world by a wave of superior and popular European AI. For the moment, this seems highly unlikely.

I suspect the real effect of such regulation, in the context of a dearth of investment, will be that Europe falls even further behind in both AI development and deployment, and thus possibly in productivity as well. The competitive advantage of companies using non-EU-compliant AI could become so large as to trigger massive support from European companies, and possibly from the European public, for repealing or watering down European AI regulation.

It is also not at all clear to me how US- or Chinese-made AI will come under European jurisdiction, or why it should comply with European regulations. I emphasize non-European AI because the US and China are at the forefront of AI development, while Europe trails significantly behind.
I can easily imagine situations where European citizens and consumers feel the consequences of companies or governments using AI located outside the EU; a key question is how to make these non-European AI systems comply with European regulations and philosophy.
Maybe not even GDPR (or an extended GDPR+) offers enough protection. As I understand it, GDPR regulates personal data that can be linked to a specific person. But let's imagine a local government that uses a Chinese AI for traffic infrastructure and flow optimization. It shares anonymous traffic data, together with geo-aggregated commercial and demographic information; none of these, in my current understanding, is protected by GDPR. The AI then recommends investments in and deployment of transport infrastructure that, while making a lot of economic and logistic sense and keeping the local government happy, disadvantage areas inhabited by older and/or financially deprived people. Another case could be a company using an American workforce optimization AI, which leads to disproportionately higher layoffs among single mothers.
In both cases the AI solution might be hailed as optimal and objective, but, as has been repeatedly demonstrated, it might very well be flawed due to inbuilt biases that are apparent to neither the designer nor the user. There is also a potentially more important flaw: the system does NOT have the right biases built in, such as relaxing economic or logistical criteria when they conflict with the protection of vulnerable people.

We could also split the area of applicability of the regulation into input, process, and output. Input (primarily data) is to some extent covered by GDPR (or by an upcoming, much extended GDPR+). Regulating the process, the AI black box, is meaningless without access to the algorithms, which is extremely unlikely in general, and particularly so for AI systems physically located outside European territory. I suspect a significant proportion of the planned AI regulation will focus on the output, the effects of applying a particular AI system. The real-life consequences of AI could be identified either by running scenarios and simulations with the respective algorithms, or by observing them, belatedly and perhaps incompletely and imprecisely. The first approach is virtually impossible for non-European AI; the second is mired in the methodological difficulty inherent in any attempt to analytically separate the impact of multiple factors of various natures, not to mention that the conclusions of such an analysis would be heavily contested by various stakeholders.

Another point is that one has to understand what one wants to regulate. Imagine the internet regulated and policed by people who only know and understand the telephone and the postal system; the outcome would inevitably be belated, inadequate, and even hilarious. Without homegrown talent and capabilities, who is going to educate and update the ethicists and legislators? Who is going to provide a reality check for the present and a realistic, comprehensive perspective on the future?

To sum up, even extensive, well-conceived, and well-intentioned regulation could meaningfully cover only a small part of the AI ecosystem, and maybe not the most important part. If not supplemented by serious investment in development, it will most likely relegate Europe to the role of taker instead of maker, with considerable negative social and economic effects.

* * *

My other posts relevant to the debate: