GPT5

  • Altman teased it mysteriously for a long time, and OpenAI has finally released GPT5

    • Before the release, he hinted that it had reached the level of AGI
    • Although GPT5 is very strong, it is by no means AGI
    • Personally, I think GPT5’s comprehensive knowledge across professional fields comes mainly from OpenAI’s accumulated data, which greatly reduces hallucinations; it also calls tools in the background to solve problems
  • A true AGI should be able to make new discoveries through logical deduction from first principles. GPT5 does not appear to have reached that level yet, and a level beyond that would be self-awareness

  • By contrast, I feel that DeepMind’s current approach is the right one.

  • I have used KataGo before. In practice, the strength of the various Go models ends up roughly the same after training; when two Go engines play each other, what they are really competing on is the computing power behind them.

  • I feel that current LLMs have basically reached the same situation.

  • At present, there is not much difference between the major models; the competition over model capability has become a competition over data accumulation, tool use, and the computing power behind them.

  • I deployed OpenAI’s open-source GPT-OSS-20B myself and asked it a few questions; it feels comparable to the similarly sized QWEN-30B-A3B-2507 (a sketch of this kind of side-by-side test appears at the end of this section).

  • GPT-OSS-20B’s coding ability is also noticeably weaker than that of Alibaba’s coding model QWEN-CODER-30B-A3B.

  • So OpenAI’s open-source small model is roughly on par with other companies’ models of the same size.
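
  A minimal sketch of the kind of side-by-side local test described above, using the Hugging Face transformers library. The model IDs, the sample question, and the generation settings are illustrative assumptions on my part, not a record of the exact setup used; swap in whatever checkpoints you actually deployed.

```python
# Sketch: ask the same question to two locally deployed open-weight chat models
# and print both answers for a rough side-by-side comparison.
# Assumed (illustrative) Hugging Face model IDs; replace with your own checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_IDS = (
    "openai/gpt-oss-20b",                 # assumed ID for GPT-OSS-20B
    "Qwen/Qwen3-30B-A3B-Instruct-2507",   # assumed ID for QWEN-30B-A3B-2507
)

def ask(model_id: str, question: str, max_new_tokens: int = 512) -> str:
    """Load a chat model, send one user question, and return the decoded answer."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype="auto"
    )
    messages = [{"role": "user", "content": question}]
    # Build the model-specific chat prompt and tokenize it.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated answer.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    question = "Explain the difference between a mutex and a semaphore."
    for model_id in MODEL_IDS:
        print(f"=== {model_id} ===")
        print(ask(model_id, question))
```

  This only gives an informal impression, of course; a proper comparison would use a fixed benchmark rather than a handful of hand-picked questions.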