Why do we not yet have autonomous cars, and why are home-help robots not yet ubiquitous? Sector experts and organisations keep asking what barriers must be overcome to enable the adoption of autonomous systems. It is a question that has been asked for well over a decade in workshops, in various open fora, and probably over lots of coffee, or indeed something stronger. Plenty of cash (billions of dollars in the case of autonomous cars) has been poured into their development, and a fair proportion of that has probably been spent on marketing and on lobbying various governments to support the ideas. There is certainly a lot of hype. Within autonomous systems R&D there have been at least two major peaks of investment in Artificial Intelligence (AI) over the past 30 years, feeding off that hype. A good presentation on this was given by Melanie Mitchell: The Collapse of Artificial Intelligence.
Investment has mostly been focussed on what can be done with the various flavours of Artificial Intelligence, rather than on the limitations of AI that need to be addressed. This does not build trust within the developer community, let alone with the wider public.
The term ‘work’ can have many connotations. AI may do some of the things desired, some of the time, but there is a limit to how far it can be shown that it will always do what is needed, all of the time. For a start, AI developers often cannot define a set of desired behaviours or requirements for the software system, so it becomes difficult, realistically, to verify that the system behaves correctly. People also worry about AI doing things it was never intended to do; that is by no means a complete list of concerns, but it is scary nevertheless. The public and regulators also want to know what happens to the behaviour of the AI when (not ‘if’) something goes wrong. Defining all of these behaviours is difficult but necessary, and should cover secure operation as well as safe operation. Without that subsequent assurance, how can enough trust be built for these highly complex systems to be deployed? Furthermore, AI developers will also say that they don’t necessarily know why the software does what it does, let alone how. This certainly doesn’t help build trust.
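To make the point about defined behaviours concrete, here is a minimal sketch in Python (all names and numbers are hypothetical, not taken from any real system): a runtime monitor that only passes on an AI controller’s output while it stays inside an explicitly specified safety envelope, and substitutes a defined fallback when it does not. The check is only possible at all because the desired behaviour was written down in the first place.

```python
from dataclasses import dataclass

@dataclass
class Command:
    speed_mps: float  # commanded speed, metres per second

@dataclass
class SafetyEnvelope:
    max_speed_mps: float    # defined upper speed limit
    min_clearance_m: float  # defined minimum obstacle clearance

def monitored(ai_command: Command, clearance_m: float, env: SafetyEnvelope) -> Command:
    # Pass the AI's output through only while it stays inside the
    # explicitly defined envelope; otherwise take the defined fallback.
    if ai_command.speed_mps <= env.max_speed_mps and clearance_m >= env.min_clearance_m:
        return ai_command
    return Command(speed_mps=0.0)  # defined behaviour when something goes wrong

# Usage: an out-of-envelope request from the AI is overridden by the fallback.
env = SafetyEnvelope(max_speed_mps=5.0, min_clearance_m=2.0)
print(monitored(Command(speed_mps=8.0), clearance_m=3.0, env=env))  # Command(speed_mps=0.0)
```

This is a sketch of one assurance pattern, not a complete solution: without the written envelope there is nothing against which the AI’s behaviour can be verified.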
There is a really good BBC programme, presented by Prof Jim Al-Khalili and available on various platforms, which explains both the expectations and the reality of AI.