OpenAI CEO Sam Altman Says GPT-4 Is “Going to Disappoint” People With High Expectations

OpenAI released GPT-3 in 2020 and used an improved version, GPT-3.5, to create ChatGPT, the popular chatbot that took the internet by storm two months ago.

During an interview with StrictlyVC, Altman answered many AI-related questions, including several about the much-anticipated GPT-4 language model. When questioned about the GPT-4 release timeframe, Altman said:

It’ll come out at some point when we are confident we can do it safely and responsibly.

As soon as OpenAI mentioned the upcoming GPT-4 model, the most excitable members of Silicon Valley and the AI community started calling it a big step forward and making wild predictions about it, particularly about the model’s number of parameters.

Responding to a question about one viral (and incorrect) chart that compares parameters in GPT-3 (175 billion) to those in GPT-4 (100 trillion), Altman said it’s far from the truth:

The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from.

Altman continued by saying,

People are begging to be disappointed, and they will be. The hype is just like… We don’t have an actual AGI, and that’s sort of what’s expected of us.

The AGI Altman referred to is artificial general intelligence: an AI system with emergent intelligence of its own and at least human-level capabilities across a wide range of fields.

Altman confirmed that OpenAI plans to add a video-generating model to its lineup of AI-powered tools, without providing a timeframe for its release. “It will come. I wouldn’t want to make a confident prediction about when,” said Altman. He went on to say:

We’ll try to do it, other people will try to do it … It’s a legitimate research project. It could be pretty soon; it could take a while.

Talking Freedom, Education, and Competition With Google

During the interview, Altman shared his view on the need for AI models with different viewpoints, saying:

The world can say, ‘Okay, here are the rules. Here are the very broad absolute rules of a system.’ But within that, people should be allowed very different things that they want their AI to do.

From that, he went on to say,

If you want the super never-offend, safe-for-work model, you should get that, and if you want an edgier one that is creative and exploratory but says some stuff you might not be comfortable with, or some people might not be comfortable with, you should get that. And I think there will be many systems in the world that will have different settings of the values they enforce.

He ended by saying:

And really what I think — but this will take longer — is that you, as a user, should be able to write up a few pages of ‘here’s what I want; here are my values; here’s how I want the AI to behave’ and it reads it and thinks about it and acts exactly how you want because it should be your AI.

Regarding educators’ concerns that students could use the chatbot to cheat, Altman said that OpenAI and others are working on a solution.

Discussing the possible introduction of watermarks and similar techniques meant to help educators identify AI-generated work, Altman said that focusing on them for too long is futile because a “determined person will get around them.”

Altman believes we all need to adapt to AI-generated text, pointing to how calculators changed what math classes test for; he said this is just a more extreme version of that shift. He added:

I would much rather have ChatGPT teach me something than go read a textbook.

When asked about the future of AI and the possible transition to AGI, Altman said the shift to more advanced technology might not be as abrupt as people expect it to be:

The closer we get, the harder time I have answering. Because I think it’s going to be much blurrier and much more of a gradual transition than people think.

He believes the gradual transition is a good thing because society needs time to prepare for AGI and learn to live with it. He argued:

Starting these [product releases] now [makes sense], where the stakes are still relatively low, rather than just put out what the whole industry will have in a few years with no time for society to update.

With the success of ChatGPT and rumors of a ChatGPT-powered version of Bing, Altman addressed predictions that OpenAI might be the end of Google:

I think whenever someone talks about a technology being the end of some other giant company, it’s usually wrong. I think people forget they get to make a countermove here, and they’re like pretty smart, pretty competent.

He ended by saying,

I do think there’s a change for search that will probably come at some point — but not as dramatically as people think in the short term.
