Canada Health Alliance

“Sorry in Advance!”: Rapid Rush to Deploy Generative A.I. Risks a Wide Array of Automated Harms

Rick Claypool and Cheyenne Hunt

Generative A.I. tools like ChatGPT are creating a huge amount of buzz – especially among the Big Tech corporations best positioned to profit from them. Boosters say A.I. will change the world in ways that make everyone rich – and some detractors say it could kill us all. Separate from the frightening threats that may materialize as the technology evolves are the real-world harms that the rush to release and monetize these tools can cause – and, in many cases, is already causing. This report compiles these harms and categorizes them into five broad areas of concern:

Damaging Democracy: Misinformation-spreading spambots aren’t new, but generative A.I. tools make it easy for bad actors to mass-produce deceptive political content. Increasingly powerful audio- and video-production A.I. tools are making authentic content harder to distinguish from synthetic content.

Consumer Concerns: Businesses trying to maximize profits using generative A.I. are using these tools to gobble up user data, manipulate consumers, and concentrate advantages among the biggest corporations. Scammers are using them to engage in increasingly sophisticated rip-off schemes.

Worsening Inequality: Generative A.I. tools risk perpetuating and exacerbating systemic biases such as racism and sexism. They give bullies and abusers new ways to harm victims and, if their widespread deployment proves consequential, risk significantly accelerating economic inequality.

Undermining Worker Rights: Companies developing A.I. tools use texts and images created by humans to train their models – and employ low-wage workers abroad to help filter out disturbing and offensive content. Automating media creation, as some A.I. does, risks deskilling and replacing media-production work performed by humans.

Environmental Concerns: Training and maintaining generative A.I. tools requires significant expansions in computing power – expansions that are growing faster than technology developers’ ability to absorb the added demand through efficiency advances. Mass deployment is expected to require that some of the biggest tech companies increase their computing power – and, thus, their carbon footprints – by four or five times.

The goal of this report is to reframe the conversation around generative A.I. to ensure that the public and policymakers have a say in how these new technologies might upend our lives. Until meaningful government safeguards are in place to protect the public from the harms of generative A.I., we need a pause.
