Human decisions and human rights should be at the core of AI regulation, says UN adviser


It’s not quite time for machines to be taking over, according to a University of Stirling expert in Artificial Intelligence (AI), who says a human touch and close oversight are crucial to developing such technologies. 

Human-centric decisions and human rights are central to a raft of new AI regulation proposals from Markus Krebsz, an expert in risk management and AI, neurotechnology and robotics, and a leading adviser to the United Nations. 

The proposed way forward for governments, published in a UN paper this week, asserts that machines should not trump humans when it comes to products with embedded AI and other digital technology.   

The paper also calls for governments to put human rights – in particular safeguards to protect children – at the heart of any regulatory framework, alongside the environment, sustainability and reducing the global digital divide. 

Due to the horizontal nature of digital regulation of goods, the paper covers a wide range of products that make use of AI or other embedded digital technologies, with far-reaching implications and few exceptions. Internet-enabled toys, consumer drones, industrial robots, sophisticated medical diagnostic tools, smart speakers, and the newest ranges of security cameras and smartphones are all covered by the proposals. Among the exceptions are self-driving cars, autonomous weapons and stand-alone AI that is not embedded in products. 

Mr Krebsz, an Honorary Professor of the University of Stirling Management School, said: “Smart devices – including home gaming, mobile phones, robots and drones – increasingly have AI systems embedded within them, and therefore have the potential to be impacted by these new regulation proposals. It is imperative that our fundamental rights as citizens, from privacy, personal and data protection to potential harm to the environment, are addressed when governments draw up their regulatory frameworks.” 

The paper makes the case for the widespread, globally aligned use of common international standards and compliance regimes, and suggests that only AI-embedded products that comply with these standards be put on national markets.  


Mr Krebsz, who is also Founding Director of the Human-AI Institute, has been leading efforts for the United Nations Economic Commission for Europe Working Party 6 (UNECE WP.6) to produce global guidance for member countries since early 2020. He added: “Proper regulation shouldn’t be seen as stifling AI development. In fact, it will do the opposite – it will encourage sustainable innovation with the aim of ensuring safe infrastructures and helping to reduce trade barriers for products with embedded AI.

“Regulating products with embedded AI has its challenges – for example, the products and technology are continuously evolving, and that brings with it questions of whether these products remain compliant following firmware updates. But I hope the proposals provide a solid basis for the governments of member states to develop their own horizontal digital regulatory framework for products with embedded AI, and to provide greater certainty as well as reducing risks for all stakeholders – policymakers, industry and the public.”  

Following the UN working party’s recent meeting in Geneva, member states encouraged Mr Krebsz to continue this work and tasked the project team with drafting a UN Declaration and developing a Common Regulatory Agreement – both expected in spring 2024. 

Heidi Lund, Chair of UNECE WP.6, said: “The regulation of digital innovation, including AI, is currently discussed all over the world and approached in many different ways by individual countries and markets, often ignoring the need to embrace the multitude of legitimate regulatory concerns related to a digital product such as product safety, cybersecurity, resilience, and privacy.  

“Without proper cross-sectoral regulatory impact analysis, and without considering larger societal concerns, governments risk not only missing new risks and vulnerabilities, but will also be facing increased regulatory fragmentation and unnecessary barriers to trade. The new UNECE guidance is well timed to provide best practice to policymakers and regulators to address these challenges.” 

When adopted, the UN Declaration and Common Regulatory Agreement will impact 193 member states and more than eight billion people worldwide. 

The paper, The regulatory compliance of products with embedded artificial intelligence or other digital technologies, is now available. 
