Text by Adrian Brown, Executive Director at Centre for Public Impact
The 1865 Locomotive Act, passed by the British Parliament, set out a series of legal restrictions on new “horseless carriages”, whose growing use was causing consternation among the general public. The primary requirement of the Act was that these early cars be preceded by a person walking “not less than sixty yards” in front of them, waving a red flag at all times; as a result, it became known as the “Red Flag Act”.
To our modern minds this seems ridiculous, especially given that the vehicles were also restricted to a maximum speed of 2 miles per hour, but the public fear of this new technology was very real. How could a solo human driver be trusted to adequately captain such a powerful machine without the assistance of an extra pair of eyes and ears? At least the drivers of horse-drawn carriages could rely on the help of their horses to avoid dangers on the road.
Today, the growing use of artificial intelligence is raising similar concerns amongst the public (albeit about machines taking over from humans rather than humans taking over from horses). While techno-enthusiasts champion the wide array of benefits that these technologies can offer to society, legitimate voices of concern are raised about the implications for privacy, accountability and equality.
At the Centre for Public Impact (CPI) we have been exploring the impact of artificial intelligence on government. We are particularly interested in how AI technologies can be used to automate or augment decision-making and to improve policymaking and service delivery in government. Our research has highlighted that while the potential for positive disruption is indeed great, a troubling gap exists between the pace of technological change and the level of support and understanding amongst policymakers, frontline professionals and the general public.
This “legitimacy gap”, between what is possible with AI and what people are comfortable with, must be understood and addressed if the benefits of AI technologies are to be unlocked in the public sector. From citizens to frontline workers, and right up to the most senior political leaders, we need an open conversation about how these technologies work, what the risks might be, and how those risks can be mitigated if people are to feel comfortable with the pace of change.
Only when people are more supportive of the objectives, the design and the implementation of AI will the true potential of these exciting new technologies be unlocked. Strengthening this legitimacy will require crafting a shared vision of what is possible and offering meaningful opportunities for the public and civil servants to debate and scrutinise possible uses of AI in government, taking their concerns seriously and responding with empathy and authenticity.
For some, this may seem like pointless flag-waving in front of a slow-moving vehicle. On this view, people’s fears are misplaced, the technology is safe, and worrying too much about building broad-based support will needlessly slow down progress. This view is mistaken.
We are still at the very earliest stages of the AI journey in government, and if we don’t construct a solid platform of legitimacy today, the likelihood is that we will store up public concern and anxiety for tomorrow. How we approach this question of strengthening legitimacy for AI in government, as well as how and where to start applying AI in government, will be the topics of our roundtable session at the Tallinn Digital Summit in October, and we welcome all contributions to this important debate.
The Red Flag Act was eventually repealed and replaced by the Locomotives on Highways Act in 1896. The new Act permitted speeds of between 12 and 14 miles per hour, and subsequent legislation raised the limit further as the public became more comfortable with horseless carriages. So, in one sense, it is thanks to those early flag wavers that we have the modern vehicles of today, right up to and including autonomous vehicles.