OpenAI releases o3 and o4-mini
Still brute force, not AGI
- Based on reports circulating online, the models' ability to solve human-written problems with known answers (closed-domain tasks) in certain professional fields has improved substantially, reaching expert level.
- Although hallucinations are greatly reduced, especially in High mode, the models still make some elementary mistakes.
- Judging from the repeated postponement of GPT-5, improvement of the base model seems to have hit a bottleneck.
- The newly released High mode has the base model invoke a large number of tools in the background to tackle harder problems; this consumes substantial compute, which is why it is so expensive (see the tool-calling sketch after this list).
- The earlier hope was that, as training data grew, LLMs might at some point make a qualitative leap out of quantitative accumulation. But the public data on the Internet has been largely exhausted, and OpenAI has even called on experts and scholars to contribute data sources. It appears they may have gathered some highly specialized data, which would explain the large gains of o3 and o4-mini in certain professional fields, but this does not seem to have triggered the qualitative leap from LLM to AGI.
- Yann LeCun may be right: the LLM road may be nearing its end, and new approaches are needed, such as the world models he advocates.
- If artificial neural networks could develop basic logical reasoning by observing the world, and could learn on their own to adjust their own network structure, then perhaps a real AGI could emerge.
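
To make the tool-use point above concrete, here is a minimal sketch of the public function-calling loop exposed through the OpenAI Python SDK. It is not OpenAI's internal o3 tool pipeline; the `get_utc_time` tool, the prompt, and the `o4-mini` model id are illustrative assumptions.

```python
# A minimal sketch of a model calling a tool via the public OpenAI Python SDK.
# The get_utc_time tool, the prompt, and the "o4-mini" model id are assumptions
# for illustration; OpenAI's internal o3 tool use is not publicly documented.
import json
from datetime import datetime, timezone

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_utc_time",
        "description": "Return the current UTC time as an ISO 8601 string.",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}]

messages = [{"role": "user", "content": "What time is it in UTC right now?"}]

first = client.chat.completions.create(
    model="o4-mini",  # assumed model id
    messages=messages,
    tools=tools,
)
msg = first.choices[0].message

if msg.tool_calls:
    # The model asked for the tool: run it locally and send the result back.
    call = msg.tool_calls[0]
    result = {"utc_time": datetime.now(timezone.utc).isoformat()}
    messages.append(msg)
    messages.append({
        "role": "tool",
        "tool_call_id": call.id,
        "content": json.dumps(result),
    })
    second = client.chat.completions.create(
        model="o4-mini",
        messages=messages,
        tools=tools,
    )
    print(second.choices[0].message.content)
else:
    print(msg.content)
```

The point of the sketch is only the shape of the loop: the model decides when a tool is needed, the caller executes it, and the result is fed back for a final answer. Running many such rounds per question is what makes the High mode so resource-hungry.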
Applications
Although current LLMs fall short of AGI, they can already do some very useful things (a minimal API sketch follows the list):
- Assisting with writing code
- In-depth research
- Translation
- Learning support
- Companionship chat
- Automating routine work tasks
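
As one concrete example from the list, here is a minimal translation call, again assuming the public OpenAI Python SDK and that the `o4-mini` model id is available; the prompt is purely illustrative.

```python
# A minimal sketch of the "Translation" use case, assuming the public OpenAI
# Python SDK and the "o4-mini" model id; swap in whatever model and prompt
# fit the real task.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o4-mini",  # assumed model id
    messages=[{
        "role": "user",
        "content": "Translate into English: 大语言模型还不是通用人工智能。",
    }],
)
print(response.choices[0].message.content)
```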