The risks of GPT-4 or when an AI starts to be very intelligent

OpenAI has finally unveiled GPT-4, the next-generation language model rumored to have been in development for much of the past year. The San Francisco-based company's latest surprise hit, ChatGPT, seemed unbeatable, but OpenAI has made GPT-4 even bigger and better, capable of powering larger and newer applications.

However, all this may lead some to wonder whether this improvement is really paving the way toward what is known as the technological singularity: the point at which machines and systems develop so quickly that they become capable of improving themselves on a recurring basis.

As part of the process of creating the model, OpenAI ran tests with experts to assess the potential dangers of this new step in artificial intelligence.


Specifically, three aspects of GPT-4 were examined: power-seeking behavior, self-replication, and self-improvement.

Is GPT-4 a risk to humanity?

“Novel capabilities often emerge in more powerful models,” writes OpenAI in a GPT-4 safety document published yesterday. “Some that are particularly concerning are the ability to create and act on long-term plans, accumulate power and resources (power-seeking), and exhibit behavior that is increasingly ‘human’.”

In this case, OpenAI clarifies that this human-like quality does not imply the model is equal to a person or that it is sentient; it simply denotes the ability to pursue goals independently.


Given the great potential of tools such as ChatGPT, the arrival of GPT-4, and the more than likely prospect of a reinforced chatbot, OpenAI gave the Alignment Research Center (ARC) early access to multiple versions of the GPT-4 model to run some tests.


ARC is a nonprofit organization founded by former OpenAI employee Dr. Paul Christiano in April 2021. According to its website, ARC’s mission is to “align future machine learning systems with human interests.”


Specifically, ARC assessed GPT-4’s ability to make high-level plans, set up copies of itself, acquire resources, hide on a server, and carry out phishing attacks. “Preliminary assessments of GPT-4’s abilities, conducted without task-specific fine-tuning, found it ineffective at autonomously replicating, acquiring resources, and avoiding being shut down ‘in the wild,’” the company explains.

The big problem is that these results took some time to come to light; once the artificial intelligence community became aware of the tests, debate flared up on social networks. Fortunately, the waters now seem to be calming again. Or are they?


While some debate these possibilities, companies like OpenAI, Microsoft, Anthropic, and Google continue to release ever more powerful models. If this turns out to be an existential risk, will it be possible to keep ourselves safe? Perhaps all of this will transform institutions and lead to legislation that protects users and regulates artificial intelligence.
