DeepSeek-Prover V2 Released
Mathematical reasoning ability is a necessary condition for strong artificial intelligence
- I have always believed that mathematical reasoning ability is a necessary condition for building strong artificial intelligence, but of course not a sufficient condition;
 - Other necessary conditions include an understanding of the real world;
 - DeepSeek is on the right track;
 - Today Microsoft released the phi4-mini-reasoning model, which once again shows that, with RL applied on top, even the small phi4-mini can reach the level of o1-mini;
 - Therefore, it is essential to build a neural network with reasoning at its core;
 - Such a reasoning-centric network should be trainable on only the most fundamental and profound data;
 - At present we still do not know which data is useful, which is redundant, and which is even harmful; given that, the first step is to scale up the amount of data,
 - for example, expanding from the 7B of Prover V1.5 to the 671B of Prover V2;
 - The next step should be subtraction: remove what is unnecessary and keep only the essence;
 - As for how to streamline, perhaps we can borrow nature's survival-of-the-fittest strategy and let different models compete against one another, much as AlphaGo Zero did (see the selection-loop sketch after this list);
 - This strategy should be workable for mathematical reasoning models; after all, formal mathematical propositions form a closed system with precise, mechanically checkable answers (see the Lean example at the end);
 - A perception model of the real world could likewise evolve through survival of the fittest;
 - Only when a model can truly reason on its own and understand the world can it become, through self-learning, genuinely strong artificial intelligence.
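
As a rough illustration of the survival-of-the-fittest idea in the list above, here is a minimal sketch in Python. Everything in it is hypothetical and chosen for illustration only: the toy "models", the `verifier`, and the helper names `evaluate` and `select_fittest` are stand-ins, not DeepSeek's actual training pipeline. The point is merely that an exact checker gives a noise-free fitness signal, so candidate models can compete and only the fittest survive.

```python
import random

def evaluate(model, problems, verifier):
    """Fitness = fraction of problems whose proof attempt the verifier accepts."""
    solved = sum(1 for p in problems if verifier(p, model(p)))
    return solved / len(problems)

def select_fittest(population, problems, verifier, keep=2):
    """Keep only the top-scoring candidate models; the rest are discarded."""
    ranked = sorted(population, key=lambda m: evaluate(m, problems, verifier), reverse=True)
    return ranked[:keep]

# Toy stand-ins: a "model" maps a problem to a proof attempt, and the "verifier"
# checks the attempt exactly (as a proof assistant's kernel would).
def make_model(skill):
    return lambda p: p if random.random() < skill else None  # a "proof" or a failure

problems = list(range(50))
verifier = lambda p, proof: proof == p
population = [make_model(s) for s in (0.3, 0.5, 0.8, 0.9)]
survivors = select_fittest(population, problems, verifier)
```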
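
On why mathematics is a "closed system": in a proof assistant such as Lean, which DeepSeek-Prover targets, a proposition either has a kernel-certified proof or it does not; there is no partial credit. A deliberately trivial example (the theorem name is my own, for illustration):

```lean
-- The Lean 4 kernel either certifies this proof or rejects it outright,
-- giving a precise, automatically checkable signal for a prover model.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```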