Tech giant Microsoft has unveiled a new AI prompting technique called the “Algorithm of Thoughts” (AoT), designed to make large language models like ChatGPT more efficient and more human-like in their reasoning.
The new approach is a natural next step for the company, which has invested heavily in AI, and particularly in OpenAI, the creator of DALL-E, ChatGPT, and the powerful GPT language models.
Microsoft says the AoT technique is a potential game-changer, as it “guides the language model through a more streamlined problem-solving path,” according to a published research paper. The approach uses “in-context learning,” enabling the model to explore different solutions systematically and in an organized manner.
The result? Faster, less resource-intensive problem-solving.
“Our technique outperforms previous single-query methods and is on par with a recent multi-query approach employing extensive tree search,” the paper states. “Intriguingly, our results suggest that instructing a model with an algorithm can lead to performance surpassing the algorithm itself.”
The researchers claim the technique gives the model an improved “intuition” by optimizing its search process.
A Human-Algorithmic Hybrid?
The AoT method addresses the limitations of current in-context learning techniques like the “Chain-of-Thought” (CoT) approach. CoT sometimes provides incorrect intermediate steps, whereas AoT guides the model using algorithmic examples for more reliable results.
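As a rough illustration of the difference (a sketch of my own, not taken from the paper; the prompt wording is hypothetical), a CoT prompt shows the model a worked linear chain of steps, while an AoT-style prompt embeds an algorithmic search trace, including dead ends and backtracking, for the model to imitate:

```python
# Hypothetical prompt sketches contrasting Chain-of-Thought (CoT)
# with an Algorithm of Thoughts (AoT) style example, using the
# "Game of 24" puzzle (combine four numbers to reach 24).
# The exact wording is illustrative, not the paper's prompts.

cot_prompt = """Q: Use 4, 9, 10, 13 to reach 24.
A: 13 - 9 = 4; 10 - 4 = 6; 4 * 6 = 24.
Answer: (13 - 9) * (10 - 4) = 24."""

aot_prompt = """Q: Use 4, 9, 10, 13 to reach 24.
A: Explore candidate first steps like a depth-first search:
1. Try 4 + 9 = 13 -> remaining (13, 10, 13): no path to 24, backtrack.
2. Try 13 - 9 = 4 -> remaining (4, 4, 10):
   2a. Try 10 - 4 = 6 -> remaining (4, 6): 4 * 6 = 24. Success.
Answer: (13 - 9) * (10 - 4) = 24."""
```

The AoT example demonstrates not just correct steps but the search behavior itself: trying an option, recognizing a dead end, and backtracking, which is what guides the model toward more reliable intermediate reasoning.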
AoT draws inspiration from both humans and machines to improve the performance of a generative AI model. While humans excel in intuitive cognition, algorithms are known for their organized, exhaustive exploration. The research paper says that the Algorithm of Thoughts seeks to “fuse these dual facets to augment reasoning capabilities within LLMs.”
Microsoft says this hybrid technique enables the model to overcome human working memory limitations, allowing more comprehensive analysis of ideas.
Unlike CoT’s linear reasoning or the “Tree of Thoughts” (ToT) technique, AoT permits flexible contemplation of different options for sub-problems, maintaining efficacy with minimal prompting. It also rivals external tree-search tools, efficiently balancing costs and computations.
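The organized, exhaustive exploration that AoT asks the model to emulate in-context can be sketched as a plain depth-first search over arithmetic sub-problems. This is a minimal sketch in ordinary Python, not the paper's code, and the function name `solve_24` is my own:

```python
from itertools import permutations

def solve_24(nums, target=24, eps=1e-6):
    """Depth-first search over pairwise combinations of the remaining
    numbers -- the kind of systematic, backtracking exploration AoT
    has the model emulate. `nums` is a list of (value, expression)
    pairs; returns an expression string reaching `target`, or None."""
    if len(nums) == 1:
        value, expr = nums[0]
        return expr if abs(value - target) < eps else None
    for (a, ea), (b, eb) in permutations(nums, 2):
        rest = list(nums)
        rest.remove((a, ea))
        rest.remove((b, eb))
        # Each candidate operation spawns a smaller sub-problem.
        candidates = [(a + b, f"({ea}+{eb})"),
                      (a - b, f"({ea}-{eb})"),
                      (a * b, f"({ea}*{eb})")]
        if abs(b) > eps:
            candidates.append((a / b, f"({ea}/{eb})"))
        for value, expr in candidates:
            result = solve_24(rest + [(value, expr)])
            if result is not None:
                return result  # full solution found; stop searching
    return None  # dead end for every option: backtrack

print(solve_24([(4, "4"), (9, "9"), (10, "10"), (13, "13")]))
```

Running this prints one valid expression equal to 24. AoT's insight is that showing a model a trace of this kind of search, rather than running the search externally over many queries as tree-search tools do, lets a single prompt elicit comparable exploration.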
Overall, AoT represents a shift from supervised learning to integrating the search process itself. With refinements to prompt engineering, researchers believe this approach can enable models to solve complex real-world problems efficiently while also reducing their carbon impact.
Given its substantial AI investments, Microsoft seems well-positioned to incorporate AoT into advanced systems like GPT-4. Though challenging, teaching language models to “think” in this more human way could be transformative.