
AI: Hyper-competent yet fallible

The last ten years of AI development, driven primarily by machine learning and deep learning, have yielded incredible breakthroughs. Progress has outpaced most researchers’ most optimistic expectations. AIs now beat humans at a growing list of tasks, from playing Go and driving cars to detecting cancer and even composing music. The Digital Summit is rightly discussing the implications this technology has for the future of work and government.

But evidence is also mounting that indiscriminate adoption of AI brings with it unexpected consequences, sometimes amusing, sometimes serious. Some examples:

  • Machine learning systems have been shown to reinforce ethnic, gender and other biases when used to make loan decisions, grant parole or identify potential criminals.
  • AIs learn from their environment in unexpected ways. For instance, Microsoft’s AI chatbot “Tay” learned to use racist and derogatory language in a matter of hours.
  • These tools are far from omniscient and make errors of judgment. Amazon’s facial recognition algorithm, for instance, misidentified numerous elected politicians as criminals.
  • AI can also be intentionally tricked through so-called “adversarial attacks” that are often undetectable to human observers.

AI and algorithmic tools can also be put to malicious uses, including manipulation, hacking and warfare. AI research is open and many core tools are available as open source, which makes it hard to keep AI from proliferating into the hands of bad actors.

In short, AI is like many powerful new technologies – its blind adoption brings about unexpected consequences. The speed of AI development and adoption makes the problem more urgent. And AI differs in one key respect: it is deployed in autonomous tools that interact independently with complex environments, and even engineers have difficulty explaining why AIs make certain decisions.

The real danger arises when we over-rely on these tools and fail to put in place safeguards, oversight and redress mechanisms. We need responsible use of AI – not just for its own sake, but because reaping the benefits of AI will require broad public and political support, enabling regulation and the avoidance of serious backlash.

Recent history does not augur well when it comes to preparing for the risks of technology adoption: data protection and cybersecurity both morphed into serious public problems before governments mounted a policy response and, at least in the case of cybersecurity, it remains unclear whether companies have internalized the real costs.

Yet this time may be different. The good news is that a robust and broad multi-stakeholder debate has emerged incredibly rapidly. A few short years ago, academic and activist communities started issuing calls for action around the responsible use of AI: the Fairness, Accountability, and Transparency conference; the Asilomar AI Principles; the 100 Year Study on AI; and open letters on the military use of AI.

Remarkably, the reaction from governments and corporations has been swift. Numerous national AI strategies have noted the importance of ethical and responsible AI. Companies like Google and Microsoft have announced their own internal AI ethics guidelines – with clear red lines and areas of work they will not engage in. International processes are underway to provide both the technical standards and the legal norms to underpin the responsible use of AI (notably at the IEEE, in the EU and the Council of Europe, and at the UN).

Nevertheless, there will be serious challenges ahead. Getting this right requires unprecedented coordination across many domains and disciplines. And even in the best case, most of the measures proposed have costs: testing, standardization and legal compliance all take time and money.

 

Luukas Ilves, Deputy Director and Senior Fellow at the Lisbon Council & speaker at the Tallinn Digital Summit 2018