The process of automatically translating mathematical statements written in natural language into formal specifications and proofs is called “autoformalization”. To move beyond current formalization tools and proof checkers, researchers have developed a new method based on an AI capable of carrying out this process. More efficient and faster than humans at this task, it could ultimately contribute to the discovery of new mathematics.
Mathematical proofs (or demonstrations) can now be verified by computers. This first requires, however, a “translation” of the proof – which mixes formulas, mathematical symbols, and natural language – into a specific language the machine can understand. This work is very time-consuming: it took about ten years to formalize (and verify) the proof of the Kepler conjecture, which describes the densest way to stack a collection of spherical objects.
An automated system would not only reduce the cost of current formalization efforts; it could also connect the research fields that automate specific aspects of mathematical reasoning with the vast body of knowledge written exclusively in natural language. “A successful autoformalization system could advance the fields of formal verification, program synthesis, and artificial intelligence,” write the researchers in the paper describing their approach.
Autoformalization with large language models
Only a very small fraction of mathematical knowledge has been formalized, let alone formally proven. Several projects apply machine learning to translating natural language into formal languages, but they are limited to languages for which large corpora are available on the web (for example, Python among programming languages).
Formal mathematical data are very rare: the Archive of Formal Proofs, a library of machine-checked proofs, weighs only 180 MB – less than 0.18% of the training data of Codex, an artificial intelligence created by OpenAI that analyzes natural language and generates code in response. This AI was trained on the large amount of text and programming data available on the web. It notably powers GitHub Copilot, a code auto-completion tool; it can satisfy about 37% of requests and thus helps increase programming speed.
To develop an AI capable of formalizing mathematical problems, Yuhuai Wu, a researcher at Google, and his colleagues came up with the idea of using Codex – on the assumption that there are similarities between programming languages and formal mathematical languages.
They supplied Codex with a set of 150 problems taken from high-school mathematics competitions. It turned out that a significant portion (25.3%) of the problems were perfectly translated into a language compatible with Isabelle – an open-source proof assistant. According to the team, most of the failed translations stem from the model’s lack of understanding of some of the mathematical concepts involved (due to discrepancies between formal and informal definitions).
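To give an idea of what such a translation looks like, here is an illustrative example (not taken from the study) in Lean, a proof assistant comparable to Isabelle. The informal statement appears as a comment above its formal counterpart, and the one-line proof relies on the Mathlib library:

```lean
import Mathlib

-- Informal statement: "The sum of two odd integers is even."
-- Formal counterpart, as an autoformalization system might produce it:
theorem sum_of_odds_is_even (a b : ℤ) (ha : Odd a) (hb : Odd b) :
    Even (a + b) :=
  ha.add_odd hb  -- Mathlib lemma: Odd m → Odd n → Even (m + n)
```

The difficulty of autoformalization lies precisely in this gap: the informal sentence says nothing about types, hypotheses, or library lemmas, all of which must be made explicit for the proof checker.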
Automatic formalization better than human formalization
To test the effectiveness of this process, the researchers submitted to Codex problems that had already been formalized by humans, thus obtaining a second set of formal statements. An automated theorem prover was then used to attempt to verify both versions. Result: the Codex formalizations turned out to be easier to prove than the human ones. “Our approach results in a new state-of-the-art result on the miniF2F theorem-proving benchmark, improving the proof rate from 29.6% to 35.2%,” the researchers note.
The team also explored the reverse translation, called informalization: translating Isabelle code into natural language. Of the 38 examples tested, 36 were translated into a “logically coherent” statement, of which 29 (76%) were more or less accurate. Conclusion: the success rate of informalization is significantly higher than that of formalization.
The success rates of autoformalization may seem modest, but given the small amount of Isabelle code included in Codex’s training data, the fact that the model can produce syntactically correct code is already remarkable, the researchers note. Moreover, there is virtually no aligned data pairing natural language with Isabelle code.
The team believes the success rate can be improved quickly, to the point of competing with the best mathematicians. Autoformalization would not only improve existing models; it could also be applied to many verification tasks (in both software and hardware design).
A major challenge remains, however, in applying this model to mathematical research, much of which is written in LaTeX, a document preparation language with its own syntax and commands. Interpreting and translating such content can be very complicated for neural networks.
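Part of the difficulty is that LaTeX describes presentation rather than meaning: the same formula can be typeset in several superficially different ways, all of which a model must recognize as equivalent. A simple illustration (the examples are ours, not drawn from the study):

```latex
% Three LaTeX renderings of the same statement, "a squared plus b squared equals c squared":
$a^2 + b^2 = c^2$
\( a^{2}+b^{2}=c^{2} \)
\begin{equation*}
  a^2 + b^2 = c^2
\end{equation*}
```

Custom macros and author-defined commands make the problem worse, since their meaning is only recoverable from definitions elsewhere in the document.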