Asking Big Tech to police AI is like turning to ‘oil companies to solve climate change,’ AI researcher says

Managing artificial intelligence’s impact on society isn’t the responsibility of private companies, the co-founder of a nonprofit AI research lab has said. Instead, it’s elected governments that should regulate the sector to keep people safe.

Speaking at Fortune’s Brainstorm AI conference in London on Monday, Connor Leahy, co-founder of EleutherAI, said responsibility for how transformational technologies affect the public shouldn’t be placed on the tech industry.

“Companies shouldn’t even have to answer” society-wide questions about AI, Leahy told Fortune’s Ellie Austin.

He explained: “This might be controversial … but it’s not the responsibility of oil companies to solve climate change.” Instead, he said it is the role of governments to stop oil companies from causing climate change “or at least make them pay to clean it up after they’ve caused the mess.”

Rather than guardrails coming from within the industry, they should cascade down from the government level—at least when it comes to society-wide issues, he added.

The EleutherAI co-founder, whose lab launched in 2020 and operates primarily through an open Discord server, said responsibility does lie with businesses, however, when it comes to setting expectations of how much AI can do.

At present, the tech is “super unreliable,” he continued, adding that it does not have “a human level [of] reliability.”

AI leaders want to be regulated

Some of the most prominent voices in the tech industry agree with Leahy, with even disruptors in the sector imploring governments to provide safety nets.

Sam Altman, the boss of ChatGPT maker OpenAI, told a Senate Judiciary subcommittee in May 2023 that “the regulation of AI is essential.”

He came out in favor of “appropriate safety requirements, including internal and external testing prior to release” for AI software and also urged some kind of licensing and registration regime for AI systems beyond a certain capability.

However, the fired-and-rehired billionaire CEO also called for a governance framework that is “flexible enough to adapt to new technological developments” and said that regulation should balance “incentivizing safety while ensuring that people are able to access the technology’s benefits.”

Likewise, Tesla CEO Elon Musk, whose companies are applying AI to everything from xAI’s large language model Grok to the humanoid robot Optimus and autonomous driving, said that regulation will be “annoying” but necessary.

In a conversation with British Prime Minister Rishi Sunak at the U.K. AI Safety Summit, Musk said: “I think we’ve learned over the years that having a referee is a good thing.”

In the works

CEOs hoping for regulation that suits multiple markets may have had their wishes granted in recent weeks.

Earlier this month, the U.S. and U.K. governments signed a memorandum of understanding committing to a shared approach to AI safety testing and guidance.

The two governments will work closely together and will encourage other nations to adopt their approach.

U.S. Commerce Secretary Gina Raimondo said at the time: “Our partnership makes clear that we aren’t running away from these concerns—we’re running at them.

“By working together, we are furthering the long-lasting special relationship between the U.S. and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future.”
