What are ‘giant artificial intelligences’ (gAIs)?
They are designed from the top down, on the premise that the model will acquire the smaller details on its own.
These systems are intended for many use cases, including legal services, teaching students, generating policy suggestions and even providing scientific insights.
gAIs are thus intended to be tools that automate what has so far been assumed impossible to automate: knowledge work.
Concerns
gAIs could lead to the loss of languages, which would hurt the diversity of our very thoughts.
The risk of such language loss stems from the bias of models trained only on the languages that already populate the Internet, which is mostly English (~60%).
A model is likely to be biased in other ways too: by religion (for example, more websites preach Christianity than any other religion), by sex and by race.
Local knowledge comes only from experience; we can call it ‘knowledge of the territory’. This knowledge is abstracted away by gAIs.
This ‘knowledge of the territory’ can only be captured by the people doing the tasks that gAIs are trying to replace.
Way forward
Artificially slow the rate of progress in AI commercialisation to allow time for democratic inputs.
Ensure there are diverse models being developed.
‘Diversity’ here means multiple solutions to the same question, like independent cartographers preparing different atlases under different incentives: some will focus on the flora, others on the fauna.
Research on diversity suggests that the longer independent approaches are explored before converging on a common solution, the better the outcome.
A better outcome is critical when dealing with the stakes involved in artificial general intelligence.