Craig Berry, energy and technology writer, says that the ethics of Artificial Intelligence will be shaped by corporations and their interests unless there is a concerted effort to subject AI to democratic forums such as citizens' assemblies
A PERTINENT question about Artificial Intelligence (AI), and one rarely asked, is this: who designs the ethics of AI systems, and in whose interest are they designed?
AI is expected to have a significant impact on our lives, but consider the impact it would have had if Cambridge Analytica had possessed high-functioning AI systems. Not only would it have been used to increase profitability, but it would also have been used to exploit data about us, coordinating that data to shift our opinions towards those of its clients.
To understand AI, we need to understand how it is developed and how it operates within the capitalist system in which it is embedded. The fears that people associate with intelligent robots, inhumane machines bent on extracting maximum resources and profit with little concern for the people they harm, are the same concerns people have had about corporations for hundreds of years. This fear is rooted in the lack of control people have over these systems.
In 2010, the BP oil spill led to the deaths of 11 people and devastated the environment of the Gulf of Mexico. Yet not a single person went to jail. AI in the hands of large corporations' legal teams would mean AI techniques being used to escape corporate liability through legal loopholes, allowing those corporations to act with impunity.
In the 1950s, John McCarthy, who coined the term “artificial intelligence”, wrote in his notes: “Once one system of epistemology is programmed and works, no other will be taken seriously unless it also leads to intelligent programmes.” His suggestion was that influence, not authority, could decide the scientific consensus in his field. AI doesn’t need to “solve intelligence”, as DeepMind has claimed; it just needs to outshine its competitors.
“Intelligence” has often been used as a tool for domination throughout history. Aristotle appealed to the “natural law” of social hierarchy to explain why women, slaves and animals were to be subjugated by intellectual men. As AI enables large-scale automated categorisation, it must now contend with a society asking profound questions about identity, including race, gender, sexuality and colonialism.
Machine learning can differentiate between cancerous and benign moles because of the knowledge we embed in these systems. Yet when directed at the complexities of people’s lives, careless labels can oppress and do harm by asserting false authority. When encountering fluidity, categorisations can be inadequate if they come from the perspective of those without an understanding of that fluidity.
As AI moves from research environments to real-world decision-making, it goes from being a computer science challenge to being a business and societal challenge as well. To ensure that societal values are reflected in algorithms and AI technologies, society must have the capacity to decide for itself what AI ethics should be. To begin this process, we should look to citizens’ assemblies to consider the socio-political aspects of AI.
An assembly representative of the electorate should be established to consider the most important issues related to the future of AI. By deliberating on these issues with the help of expert opinion, its members can form an understanding of AI and establish how it relates to people. From there, we can establish how we wish AI to develop. As AI has been developed predominantly by men, this process would also allow us to understand the aspects of AI ethics that specifically affect women.
Technology is a tool which helps shape our society. If control of these tools is concentrated in the hands of a select few, technology will be shaped by their character and their biases. This is the current trajectory of AI systems, which continue to be developed by corporations and used to meet their demands. In our supposedly representative democracy, we need to develop systems which ensure that everyone in society is represented in the development of AI, because the current arrangement isn’t working.
Questions of inequality are often answered by enhancing democracy. AI has been born into an unequal society, and if we want to end that inequality we need to democratise these systems so that they are used for the benefit of everyone. If we want AI to be ethical, we need it to be democratic.
