Secret Project Q*: Is superhuman artificial intelligence already among us?

For a few days now, everyone has been talking about the secret and apparently dangerous Q* project, said to be the first artificial intelligence model with superhuman capabilities.

It was just two or three weeks ago that the always-closed doors of the mysterious OpenAI, creator of the ChatGPT artificial intelligence, surprisingly swung open to the media: the board abruptly fired CEO Sam Altman; hundreds of employees threatened to resign in protest; Microsoft seized the initiative and offered Altman the position of CEO of its artificial intelligence unit; and in the end Altman returned to his post while the media analyzed the whole story from every angle. But for all these reports and dramatic stories, we still don’t know exactly how OpenAI will develop its technology or what more powerful projects Altman has in mind for the future.

Our very limited understanding of OpenAI’s plans revealed itself when Reuters and the tech website The Information wrote in two separate reports that, before Altman was fired, several OpenAI researchers had raised concerns about a major development in an apparently dangerous project: an algorithm-oriented project with the mysterious name Q*.

“The new model was able to solve certain mathematical problems with the help of huge computing resources,” Reuters wrote, citing an anonymous source. Although the problems were only at an elementary level, the model’s 100 percent success rate in solving them has made researchers very optimistic about Q*’s future.

The Information website also wrote that Q* is a significant advance that will lead to the development of “much more powerful AI models” and that “the speed of the project’s development has alarmed some researchers concerned about the safety of AI.”

Sam Altman

These two reports were enough to ignite a fire of speculation and worry. Was Q* somehow related to Altman’s firing? Is the project as powerful as the rumors say? Could it be a sign that OpenAI is getting closer to its goal of achieving AGI, artificial intelligence at the level of science-fiction movies? Can the Q* algorithm really solve complex tasks as well as, or even better than, humans? Is the robot apocalypse closer than we thought?

Although the name of the secret Q* project surfaced a week or two ago, we still don’t know much about it. Altman confirmed the project’s existence in a new interview with The Verge, but said, “I can’t say anything about the unpleasant leak of this project,” and contented himself with a series of vague sentences about “meaningful and rapid progress” and about developing the company’s projects “safely and profitably.”

What could the mysterious Q* project be?

From what we read in the reports, Q* is an algorithm that can solve elementary math problems like nothing we have seen before. You may think solving such problems is no great feat; after all, if a 10- or 11-year-old can solve them, then even the weakest artificial intelligence should handle them easily. But in the world of AI the story is different, so much so that some OpenAI researchers apparently believe Q* could be the first sign of improved “reasoning”, that is, the use of logic to solve new problems, in artificial intelligence models.

The power of reasoning is one of the key and still missing elements of strong artificial intelligence

For years, researchers have been trying to develop artificial intelligence models that can solve mathematical problems correctly. Language models such as GPT-4, which powers the ChatGPT chatbot, can handle mathematical problems to a very limited extent, but not reliably enough for every scenario.

Currently, we do not have an algorithm, or even a proper architecture, for reliably solving mathematical problems with artificial intelligence. Deep learning and the transformer neural networks that underlie language models are excellent at finding patterns and telling cats apart from trees, but simply exploiting this capability is not enough to achieve strong artificial intelligence.

Interestingly, mathematics is used as a benchmark for testing the reasoning power of artificial intelligence models precisely because it is easy for researchers to pose a new problem, and reaching a solution requires understanding abstract concepts and planning step by step.

If AGI is unleashed, there will be a global catastrophe

The ability to reason is one of the key and still missing elements of more intelligent, general-purpose artificial intelligence systems; the kind of system OpenAI calls “Artificial General Intelligence,” or AGI. According to the company, if such a system entered the real world, it would outperform humans at most tasks, and if it one day slipped out of human control, a global disaster would follow.

When we piece together the Q* reports and relate them to the most talked-about problems in AI these days, we arrive at a project that OpenAI announced about seven months ago, in May, claiming that a technique called “process supervision” had achieved powerful new results.

OpenAI’s chief scientist and co-founder Ilya Sutskever played a role in this project; he was also a key figure in Sam Altman’s dismissal from the company, although after the OpenAI crisis he reversed his decision and welcomed Altman back with open arms. According to The Information, Sutskever has led the development of Q*.

Ilya Sutskever has led the development of Q*

The May OpenAI project focused on reducing the logical errors of large language models through “process supervision.” In process supervision, the artificial intelligence model is trained to analyze, step by step, the reasoning required to solve a problem, which increases the algorithm’s chance of reaching the correct answer. The project showed how this technique can help large language models, which often make simple errors on elementary math problems, handle such problems more effectively.
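To make the distinction concrete, here is a minimal, hypothetical Python sketch of outcome supervision versus process supervision. The step_scorer below is an illustrative stand-in for a learned reward model; none of these names or details come from OpenAI’s actual code.

```python
# Hypothetical sketch: outcome supervision vs. process supervision.
# The "step_scorer" is a toy stand-in for a learned reward model.
from typing import Callable, List

def outcome_score(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: one signal for the whole solution,
    based only on whether the final answer is right."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_score(solution_steps: List[str],
                  step_scorer: Callable[[str], float]) -> List[float]:
    """Process supervision: every intermediate step gets its own reward,
    so a faulty step can be penalized even if the final answer is right."""
    return [step_scorer(step) for step in solution_steps]

# Toy example: a two-step solution to "What is 12 * 12 + 1?"
steps = ["12 * 12 = 144", "144 + 1 = 145"]

def toy_step_scorer(step: str) -> float:
    # Stand-in for a reward model: here we simply re-check the arithmetic.
    lhs, rhs = step.split("=")
    return 1.0 if eval(lhs) == int(rhs) else 0.0

print(outcome_score("145", "145"))            # 1.0 -- only the end result counts
print(process_score(steps, toy_step_scorer))  # [1.0, 1.0] -- per-step feedback
```

The intuition is that per-step rewards give the training signal a way to reward correct reasoning rather than lucky answers.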

According to many artificial intelligence experts, improving large language models in this way is the next step toward making them more practical. As Stanford University professor Andrew Ng, who led artificial intelligence labs at Google and Baidu, puts it: “Large language models are not very good at solving mathematical problems. Of course, we humans are in the same situation. But if you give me a pen and paper, my multiplication will be much better than these models. However, I think it is not that difficult to improve language models with a memory that can run the multiplication algorithm.”
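Ng’s remark about a memory that can “run the multiplication algorithm” is essentially the tool-use pattern: let the model hand exact arithmetic off to ordinary code instead of predicting digits itself. Below is a toy, hypothetical sketch of that idea; the keyword dispatcher stands in for a real language model and is not any actual LLM API.

```python
# Hypothetical tool-use sketch: route arithmetic to exact code
# instead of letting a language model "guess" the digits.
import re

def calculator_tool(expression: str) -> str:
    """Exact arithmetic that language models are unreliable at."""
    a, op, b = re.match(r"(\d+)\s*([*+\-])\s*(\d+)", expression).groups()
    a, b = int(a), int(b)
    return str({"*": a * b, "+": a + b, "-": a - b}[op])

def answer(question: str) -> str:
    # Toy dispatcher standing in for a model's tool-call decision.
    match = re.search(r"\d+\s*[*+\-]\s*\d+", question)
    if match:
        return calculator_tool(match.group())
    return "(would fall back to the language model)"

print(answer("What is 1234 * 5678?"))  # 7006652, computed exactly
```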

There are other clues about the nature of Q*. The project’s name may be a reference to “Q-learning,” a form of reinforcement learning, used to build game bots and to improve ChatGPT, in which an algorithm learns to solve a problem through positive or negative feedback.
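For readers unfamiliar with the technique, here is a minimal tabular Q-learning sketch in Python on a toy five-state chain: the agent gradually learns a table of Q-values from a reward given only at the goal. The environment and constants are purely illustrative and have no known connection to OpenAI’s project.

```python
import random

# Toy environment: states 0..4 in a chain; reward +1 only at state 4.
N_STATES = 5
ACTIONS = [-1, +1]                    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.3

# Q-table: estimated future reward for every (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: clamp to the chain, reward +1 at the goal."""
    next_state = max(0, min(N_STATES - 1, state + action))
    return next_state, (1.0 if next_state == N_STATES - 1 else 0.0)

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # Core Q-learning update: move Q toward reward + discounted best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy should point right (+1) at every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```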

Some also believe that Q* may be related to the A* search algorithm, which is widely used in software to find the best path to a goal.
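For reference, here is a compact sketch of the classic A* search on a small grid, assuming four-way movement and a Manhattan-distance heuristic: the algorithm always expands the node with the lowest estimated total cost f = g + h. Any link between this decades-old algorithm and OpenAI’s Q* is, again, pure speculation.

```python
import heapq

def a_star(grid, start, goal):
    """Cheapest path from start to goal on a grid; grid[r][c] == 1 is a wall."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]  # entries: (f = g + h, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(maze, (0, 0), (2, 0)))  # routes around the wall via the right column
```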

The website The Information gives us another clue:

Sutskever’s significant progress on this project allowed OpenAI to overcome the limitations of obtaining high-quality data for training new models. In this project, computer-generated data was used to train the new models, rather than data obtained from the real world or the internet.

According to The Information’s explanation, it seems that in the Q* project the algorithms were trained with synthetic data, a method recently used to train more powerful artificial intelligence models.
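To illustrate what “synthetic data” can mean here, below is a hypothetical sketch of a template-based generator that produces elementary arithmetic problems along with step-by-step solutions. Real pipelines are far more sophisticated; this only shows the general idea of computer-generated training examples, not OpenAI’s method.

```python
import random

# Hypothetical template-based generator for synthetic math training data.
def make_example(rng: random.Random) -> dict:
    """One arithmetic problem with a step-by-step solution and final answer."""
    a, b, c = rng.randint(2, 20), rng.randint(2, 20), rng.randint(2, 20)
    return {
        "question": f"What is {a} * {b} + {c}?",
        "steps": [f"{a} * {b} = {a * b}", f"{a * b} + {c} = {a * b + c}"],
        "answer": a * b + c,
    }

rng = random.Random(0)  # fixed seed so the dataset is reproducible
dataset = [make_example(rng) for _ in range(3)]
for example in dataset:
    print(example)
```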

When we put all these clues together, we can conclude that Q* may be a project that uses massive amounts of synthetic, computer-generated data together with reinforcement learning techniques to teach a kind of large language model to perform tasks such as elementary mathematical calculations.

To make the story even more complicated, machine learning scientist Nathan Lambert has written in detail about the possible nature of Q*. In summary, Lambert believes that the Q* project uses reinforcement learning and several other techniques to improve a large language model’s ability to solve tasks through step-by-step reasoning. He says this method may help ChatGPT get better at solving mathematical problems, but it is far from clear that it has produced an artificial intelligence system that could one day escape human control.

So, is Q* a kind of AGI or not?

Until OpenAI itself speaks about the true nature of this project, we cannot be sure. But the uncertainty reveals one of the oldest facts about artificial intelligence research: opinions about developments in this field differ sharply at the moment they occur. It takes a long time for scientists to agree on whether an algorithm or project is truly a breakthrough, because more researchers must first confirm how repeatable, effective, and widely applicable the proposed idea is.

For example, consider the transformer architecture that underlies large language models and ChatGPT. When Google researchers developed it in 2017, it was hailed as a major breakthrough, but few predicted it would become so central to today’s generative AI. Only after OpenAI scaled the transformer up with huge amounts of data and computing resources did other artificial intelligence companies adopt it and push the boundaries of image, text, and even video generation.

Power in the field of artificial intelligence

In artificial intelligence research, as in any other scientific field, the rise and fall of ideas is not a matter of pure meritocracy. Usually, the scientists and companies with the most resources and the biggest platforms have the most influence.

In the artificial intelligence industry, power is concentrated in the hands of a few companies, including Meta, Google, OpenAI, Microsoft, and Anthropic. Right now, this imperfect consensus-building process is the best we have, but it is growing more limited every day, because research that was once done largely in the open is now conducted in secret.

Artificial intelligence research is now conducted in complete secrecy

Over the past ten years, as big tech companies became aware of the tremendous commercial potential of artificial intelligence, they lured students away from academia and into Silicon Valley with very tempting offers. Many Ph.D. students no longer wait to receive their degrees before joining these companies’ laboratories, and many researchers who decide to stay at universities receive funding from the same companies for their projects. These days, a great deal of AI research is done at tech companies that try to hide their most valuable achievements from their commercial rivals.

OpenAI is one of the companies that has clearly stated that the goal of all its projects is to achieve AGI. The secretive company attributes this secrecy to the dangers of artificial intelligence, saying that anything that could accelerate the path to superintelligence must be strictly monitored and controlled; otherwise, it may become a threat to humanity.

Of course, OpenAI has openly admitted that keeping its projects secret also lets it keep its distance from competitors. “Developing GPT-4 is not an easy task,” OpenAI chief scientist Ilya Sutskever told The Verge in March. “Almost all of the company’s employees were involved in developing this language model for a very long time. There are so many companies that want to do exactly what we do.”

Should we be afraid of the Q* project?

People who, like OpenAI’s founders, are concerned about the threat artificial intelligence poses to humanity fear that capabilities such as reasoning will lead to the emergence of unbridled AI. If such AI systems were allowed to set their own goals and interfere with the physical and digital worlds, major safety concerns would arise.

But although the ability to solve mathematical problems may bring us one step closer to powerful artificial intelligence systems, solving this kind of problem does not mean the emergence of superintelligence. Nor is this the first time a new model has sparked AGI controversy: last year, researchers made similar claims about Gato, the all-in-one model developed by Google DeepMind.

According to MIT Technology Review, Gato is a system that “learns several different tasks at the same time and can switch between them and learn new skills without forgetting the previous skills.” Gato is a model that can play Atari games, caption pictures, chat, and even stack blocks with a real robot arm.

At the time, some AI researchers claimed that DeepMind was “on the verge” of achieving AGI because Gato could perform so many different tasks well. The same thing happened with the claims that Google’s LaMDA had become self-aware. Controversy and uproar over artificial intelligence is nothing new; it simply erupts each time from a different project and company.

These controversies may be an extremely effective tool for raising companies’ profiles, but they do more harm than good to the AI industry, because they distract people from the real and tangible problems of AI. That said, rumors of a powerful artificial intelligence model could serve as a wake-up call for a largely unregulated tech industry; the European Union, for its part, is on the verge of finalizing its Artificial Intelligence Act.

One of the biggest struggles among lawmakers these days is over how much power tech companies should have to regulate their own AI models. Not long ago, the first-ever AI Safety Summit was held to seek ways of controlling artificial intelligence before it becomes unbridled.

Ultimately, it’s safe to say that almost every company active in the field of artificial intelligence is looking to achieve AGI, and OpenAI would like to be the first to reach that goal. The company’s researchers have predicted that they are only about ten years away from reaching AGI. Who knows? Maybe Q* is a big step in that direction.
