With the passing of the United Nations Resolution on Artificial Intelligence, the discourse on regulating AI has entered a new phase.
A global acknowledgement of the risks associated with AI systems and the urgent need to promote responsible use was at the centre of the adopted resolution.
It was recognised that unethical and improper use of AI systems would impede the achievement of the 2030 Sustainable Development Goals (SDGs), weakening the ongoing efforts across all three dimensions — social, environmental, and economic.
Another contentious aspect mentioned in the UN resolution is the possible adverse impact of AI on the workforce.
It would be imperative, especially for developing and least developed countries, to devise a response as the labour market in such countries is increasingly vulnerable to the use of such systems.
In addition to its workforce, the impact on small and medium entrepreneurs also needs to be ascertained.
The Resolution has shed light on the future implications of AI systems and the urgent need to adopt collaborative action.
EU’s approach
The EU recently passed the AI Act, the first comprehensive law establishing rules and regulations governing AI systems.
Taking a risk-based approach, the Act classifies AI systems into four categories, namely unacceptable, high, limited, and minimal risk, and prescribes guidelines for each.
The Act prescribes an absolute ban on applications that threaten citizens' rights, including the manipulation of human behaviour, emotion recognition, and mass surveillance.
The landmark legislation highlights two important considerations: acknowledging the compliance burden placed on business enterprises and start-ups, and regulating the much-deliberated Generative AI systems such as ChatGPT.
China’s stand on AI
China's approach focuses on promoting AI tools and innovation while safeguarding against any future harm to the nation's social and economic goals.
China released, in phases, a regulatory framework addressing three issues:
content moderation, which includes identification of content generated through any AI system;
personal data protection, with a specific focus on the need to procure users’ consent before accessing and processing their data;
algorithmic governance, with a focus on security and ethics while developing and running algorithms over any gathered dataset.
U.K.’s framework
The U.K. has adopted a principled, context-based approach in its ongoing efforts to regulate AI systems.
The approach requires mandatory consultations with regulatory bodies, expanding its technical know-how and expertise in better regulating complex technologies while bridging regulatory gaps, if any.
The U.K. has thus resorted to a decentralised, soft-law approach rather than regulating AI systems through stringent legal rules.
This is in striking contrast to the EU approach.
India’s position
India will be home to over 10,000 deep tech start-ups by 2030.
In this direction, a ₹10,300 crore allocation was approved for the IndiaAI Mission to strengthen the country's AI ecosystem through enhanced public-private partnerships and support for start-ups.
Amongst other initiatives, the allocation would be used to deploy 10,000 Graphics Processing Units (GPUs), develop Large Multimodal Models (LMMs), and support other AI-based research collaborations and innovation projects.
India’s response must align with its commitment towards the SDGs while also ensuring that economic growth is maintained.
This would require the judicious use of AI systems to offer solutions that further innovation while mitigating its risks.
A gradual phase-led approach appears more suitable for India’s efforts towards a fair and inclusive AI system.