policy@Manchester Comment on Socially Responsible AI
Policymakers should do more to ensure the development of artificial intelligence (AI) is both democratic and socially responsible.
20:33 18 May 2018
A recent report published by policy@Manchester, a University of Manchester initiative set up to connect researchers working on artificial intelligence, urges policymakers and regulators to do more to ensure the technology is deployed responsibly. Entitled ‘On AI and Robotics: Developing Policy for the Fourth Industrial Revolution’, the report claims that the development of AI is often subject to bias, which can result in discriminatory systems.
Dr Barbara Ribeiro, one of the authors of the report, said: “Just like with any other new technology, policymakers must not take for granted what they currently understand as the public benefit or public value of AI. Instead, they should let the potential end users and beneficiaries explain their own concerns.”
“Ensuring social justice in AI development is essential,” she added. “AI technologies rely on big data and the use of algorithms, which influence decision-making in public life and on matters such as social welfare, public safety and urban planning.”
“In these ‘data-driven’ decision-making processes some social groups may be excluded, either because they lack access to devices necessary to participate or because the selected datasets do not consider the needs, preferences and interests of marginalised and disadvantaged people.”
The report also covers robotics, stressing the need to raise awareness of its capabilities and limitations, and of how it can be applied across different industries.