What is the Hiroshima AI Process?
The annual Group of Seven (G-7) Summit, hosted by Japan, took place in Hiroshima from May 19-21, 2023.
The G-7 Hiroshima Leaders’ Communiqué initiated the Hiroshima AI Process (HAP).
It is an effort by the G-7 bloc to determine a way forward to regulate Artificial Intelligence (AI).
It reflects a commitment to promote human-centric and trustworthy AI based on the OECD AI Principles and to foster collaboration so that the benefits of AI technologies are maximised for all.
The G-7 is determined to work with others to:
Advance international discussions on inclusive AI governance.
Promote interoperability between AI governance frameworks to achieve the common vision and goal of trustworthy AI, in line with shared democratic values.
Encourage international organisations such as:
the OECD (Organisation for Economic Co-operation and Development) to consider analysis of the impact of policy developments, and
the Global Partnership on AI (GPAI) to conduct practical projects.
The discussions under the HAP could include topics such as governance, the safeguarding of intellectual property rights including copyright, the promotion of transparency, responses to foreign information manipulation including disinformation, and the responsible utilisation of these technologies.
Importance of the HAP
It will highlight the shared values and standards that can be used to derive guiding principles (fairness, accountability, transparency, and safety) for the regulation of AI.
It will help align AI development and implementation with values such as freedom, democracy, and human rights.
What does the process entail?
An emphasis on multi-stakeholder international cooperation indicates that the HAP isn’t expected to address AI regulation from a State-centric perspective.
Instead, it recognises the importance of involving multiple stakeholders in AI governance processes and of ensuring that those processes are fair and transparent.
Challenges:
Divergence among G-7 countries.
G-7 countries are pursuing their own paths to regulate AI instead of waiting for the outcomes of the HAP.
The relationship between AI and Intellectual Property Rights (IPR) is not clear.
It is unclear whether training a generative AI model, such as ChatGPT, on copyrighted material constitutes a copyright violation.
There have been several conflicting interpretations and judicial pronouncements.