Limit the Machine: Ethics and AI
Imagine one of the world’s major present-day wars, if it were being fought using artificial intelligence (AI).
I wrote that in a letter to civic and community leaders a few weeks ago. Shortly afterwards, my attention was drawn to a news report that reminded me how quickly a future possibility can become a present reality.
The report centred on one of the world’s most respected and advanced intelligence services, which had used machine learning to identify tens of thousands of potential human targets based on their links to an enemy organisation. (Note the word “potential” here.)
When we consider the huge present and future impact of artificial intelligence on warfare, but also on the arts, jobs, policing and much more, one thing is abundantly clear. We urgently need to move beyond talk-fests between politicians, or between politicians and BigTechies, to lock in some tight regulations for developing and testing new and existing AI models.
Governments must stop waiting for BigTech companies to regulate themselves. As we’ve seen time and again with social media, they’re either incapable or unwilling to do so.
Recent history is replete with examples of big tech companies that have failed in their responsibility to be self-governing and to act in ways that are transparent and for the common good.
In the Cambridge Analytica scandal, Facebook allowed the personal data of up to 87 million users to be harvested and used without their explicit permission.
Twitter has admitted to letting advertisers access its users' personal data to improve the targeting of marketing campaigns. In 2019, Google was fined nearly $57 million (50 million euros) by the French data protection authority for failing to clearly inform users how their personal data was being used.
BigTech groups seem to see little or no reason to be accountable or socially responsible. In the eyes of the TechnoKings who run these multi-nationals, their primary accountability is to company shareholders.
For all their claims about promoting the common good - and they have delivered some genuine benefits - their major motivation is making money, mainly through targeted advertising based on the data we hand over every time we use one of their platforms.
BigTech’s record is even more troubling when you consider how deeply they’re involved in research and development of AI, the social impact of which will outweigh that of social media by a huge margin.
They can’t be trusted to be tough on machine intelligence without governments looking over their shoulders, armed with strong regulations.
Now, it's one thing for governments to make laws; it's quite another to institute laws that will stand the test of time. Laws that will not simply withstand the inevitable legal challenges BigTech will throw at them, but will remain fit for purpose even as AI grows in power and global reach.
Elon Musk recently estimated that we might see a one-hundred-fold improvement in the power and accuracy of AI within just a year or two. Meanwhile, technology engineers now talk seriously about developing, within a few years, an artificial general intelligence (AGI), which can perform any task at least as well as a human.
Some are also confident that we’ll see an artificial super-intelligence (ASI) emerge within perhaps a decade or two. In theory, a super-intelligence would be able to think at a level that’s far beyond the reach of our best and brightest human minds.
It's a staggering and unsettling notion - especially when you consider that the term "artificial intelligence" barely featured in mainstream conversation until a few years ago.
To make AI regulations that are tough and relevant enough to last, governments need help to identify the ethics that should guide AI’s development. So, what ethical questions should be shaping that discussion?
Intelligent machines, dumb people?
First, there's the question of "IA versus AI". Is our goal with AI to make humans smarter, or to build machines of such sophistication that they eventually look down at us and say, "You are yesterday's news"?
In other words, should we build machines that make humans smarter, through human intelligence augmentation (IA), or focus primarily on artificial machine intelligence (AI)?
Since perhaps the dawn of time, human beings have believed themselves to be unique among all of earth's creatures. And not without good reason. For one thing, we possess important capacities that no other animal shares. One of them is the ability to communicate through complex language and symbolism.
Using this ability we share abstract ideas, pass knowledge from one generation to another, and collaborate in large, organised groups.
Humans also possess a remarkable level of cognitive flexibility and creativity, with a gift for imagining and planning for the future. Our ability to use advanced tools is without parallel in the animal kingdom.
Our social structures, and the diversity and complexity of our cultures, are exceptional; we build intricate social and belief systems.
Perhaps the greatest of all differentiating factors is our highly developed sense of self-awareness and our capacity for introspection and moral reasoning. Taken together, these traits give us an extraordinary level of adaptability and the ability to impact our world in very significant ways.
The idea that humans are singularly gifted among all of earth’s creatures has led to a belief that we are uniquely responsible for the welfare of all other creatures, and our natural habitat itself.
This notion is embedded in some of the world’s major religions.
It’s at least as old as the book of Genesis, which insists that humans have been equipped and empowered to exercise loving stewardship over the natural order.
This type of thinking has been foundational to many human civilizations, including our own. It also motivated the pioneers of the first environmental movements, many of whom drew inspiration from religious texts like the Bible.
To accept that humans carry such a weighty responsibility is to accept that it can't easily be abdicated. To surrender it would be to demean our own value.
What’s more, it would be a tragic mistake to surrender control or stewardship to a form of “intelligence” that knows nothing of human empathy or human conscience.
A Self-Disciplined Machine?
A second major ethical question revolves around the concept of deliberation. Will the AIs of today and tomorrow be able to rein themselves in, even if we do not or cannot?
Do we want AI that is capable of acting impulsively, without weighing possible consequences or, more importantly, without weighing them in an unselfish way? Should we allow AI to display a form of antisocial personality disorder, disregarding rules and social norms?
No Off-Switch?
Another ethical question revolves around the idea of latency. Will AI systems go on producing outcomes long after their use-by date - or long after we might think we've deactivated them?
Latency is a big problem in warfare. Land mines and other devices continue to maim and kill civilians long after the end of a conflict.
An estimated 26,000 people worldwide are maimed or killed by hidden landmines each year. Afghanistan is among the countries worst affected by these latent killers, containing ten per cent of the world's estimated 100 million mines.
Landmines and unexploded ordnance reduce the land available for agriculture, habitation and infrastructure development. This stifles economic growth, increases unemployment, and leads to housing shortages and malnutrition.
This type of latency leaves a permanent mark on the lives of many thousands, if not millions, the world over.
It seems very likely that many AI platforms, apps and algorithms will develop a type of autonomy which puts them beyond human restriction. Perhaps some will remain hidden from human sight for decades or more, constantly evolving via networked machine intelligence-sharing.
Do we really want AI that develops an extended life of its own, acting with little or no reference to human interests and adapting its programming to justify that?
These are just a few examples of the many and varied ethical issues that confront the developers of AI models. The problem is that, though there are groups looking seriously at these things, they tend to operate independently of the groups developing the models - and with little attention from governments.
Developers must be made to act with a level of deliberation and caution - and a healthy scepticism about AI - that reflects the enormous potential power of the technological beast they’re unleashing. Machine intelligence, at all of its levels, holds potentially epoch-shaping implications for generations to come.
While BigTech leaders dream of life on the moon and Mars, perhaps they should also pause to consider the future of life here below.