OpenAI Leadership Warns of Superintelligent AI and Shares Thoughts on Governance

Stating that “superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” and highlighting the need to “get it right.”

In a recent blog post written by key members of OpenAI's leadership (Sam Altman, Greg Brockman, and Ilya Sutskever), the group expressed its concerns surrounding superintelligent AI and proposed possible regulatory next steps.

The post argued that these future AI systems will be dramatically more capable than even artificial general intelligence (AGI) and that it is important for regulation to get ahead of the technology where possible.

According to OpenAI, within the next decade, AI systems are expected to surpass “expert skill levels in most domains and generate as much productivity as today’s largest corporations.” While this advancement offers immense opportunities, it also presents unique challenges far more significant than those posed by any previous technology.

To address these concerns, OpenAI proposed several key ideas, chief among them a call for coordination among leading development efforts to ensure the safe and responsible integration of superintelligence into society. It also suggested that major governments and regulatory authorities could establish a collaborative project or limit the rate of AI capability growth through a collective agreement.

Additionally, OpenAI advocated for the establishment of an international authority, akin to the International Atomic Energy Agency (IAEA), to oversee superintelligence efforts. It explained that this authority would inspect systems, enforce safety standards, and regulate the deployment and security of superintelligent systems.

Despite these calls for oversight, OpenAI emphasized that “It’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits).”

Most importantly, OpenAI expressed that people around the world should be able to democratically decide the bounds and defaults for AI systems, explaining “We continue to think that, within these wide bounds, individual users should have a lot of control over how the AI they use behaves.”

Even so, various AI tools have already been misused for malicious purposes like fake ransom calls, the sale of fake songs imitating legitimate artists, and, most recently, a manipulated image of an explosion at the Pentagon.

While OpenAI seems focused on superintelligence, more common AI systems are already fueling large-scale manipulation. When the photo of the explosion circulated online, it was picked up by a variety of news agencies that covered it as real news.

The photo was posted by a user operating a fake but verified Twitter account deceptively named “Bloomberg Feed,” and as it gained traction, financial markets reacted rapidly, with the S&P 500 experiencing a slight dip. The authentic Bloomberg suggested the event was “possibly [the] first instance of an AI-generated image moving the market.”

Just days before this event, Altman had shared his broader concerns around AI before Congress, stating “If this technology goes wrong, it can go quite wrong.”

That said, both Altman and his colleagues expressed in their recent post that “We believe it’s going to lead to a much better world than what we can imagine today,” citing examples in education, creative work, and personal productivity, and adding that “the world faces a lot of problems that we will need much more help to solve; this technology can improve our societies, and the creative ability of everyone to use these new tools is certain to astonish us.”

Ultimately, they believe that the reward outweighs the risk, explaining that “[b]ecause the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.”

In other news, Apple’s latest job listings call for generative AI experts.
