Last week, both B.C. Premier David Eby and federal AI Minister Evan Solomon suggested that firms like OpenAI may be subject to Canadian jurisdiction.
This comes in the wake of the horrific Tumbler Ridge mass shooting.
After the incident, in which eight people besides the shooter were killed, six of them children, it emerged that the shooter had recently been banned from her OpenAI account; the company said in a statement that it had identified “misuses of our models in furtherance of violent activities.”
There were also reports that staffers inside OpenAI considered alerting authorities to the disturbing exchanges between the teen and the chatbot, but ultimately decided not to.
There were doubtless other off-ramps that could have prevented this tragedy – improving mental health supports in small communities should be a government priority, for one thing – but this issue stands out.
OpenAI, a foreign-based company, saw the red flags. Rather than put the choice of whether to intervene in the hands of police or provincial health officials, it made its own, unaccountable decision.
That’s the problem with OpenAI, and it’s a problem across much of the tech ecosystem.
While Solomon suggested that Canadians should have been involved in the decision of whether to warn in such cases, Eby went further.
“I can’t think of a better example of where we need to start on a regulation than ensuring that when these companies have information that harm is going to be caused to people, that they will report that to the police,” Eby said on March 5.
He said he wants to make it a legal obligation for all companies operating AI chat services.
It’s a good start, and a good template for social media, online commerce, and video-sharing sites, too.
We give up a huge amount of information to the online world, and that data is mined for the profit of major corporations.
Yet we barely regulate these companies. Around the world, governments have begun imposing age limits on social media services and placing some regulations on pornographic sites.
We’re at the very beginning of regulating online environments, because we’re at the beginning of understanding the trade-offs between their benefits and harms.
We’re very much like the Victorians, who noticed that the air around all those smokestacks wasn’t that pleasant, and that the gunk factories were dumping into rivers seemed to be bad for the fish.
For all the benefits of our new online tools, we’ve also seen how they can create all kinds of harms – from multiple reports of peculiar delusional disorders seemingly fostered by chatbots, to medical misinformation and conspiracy theories spreading like wildfire through social media.
For-profit online platforms and software compete for our attention. They are designed to maximize the time we spend scrolling, watching, chatting. We know there are harms wrapped up in what they are selling to us.
The right place for accountability and control is not within the companies making the profits.
It’s with democratically elected governments.
It’s long past time to rein in the infosphere that we all live with, every day of our lives.