Welcome to AGI Advance, Turing’s weekly recap of the most important AI & AGI developments.
This week, we’re diving into advanced post-training techniques, the latest in LLM reasoning frameworks, and AI’s evolving role in the real world.
What we're thinking
At Turing, we are refining post-training methodologies to enhance AGI model accuracy and reasoning depth. Our focus this week:
- Beyond Chain of Thought (CoT): The Tree of Thoughts (ToT) framework allows AI models to fork, explore multiple paths, and backtrack, improving decision-making and sample efficiency (see the first sketch after this list).
- Process Reward Models (PRMs): These models verify AI reasoning step by step, reducing errors and enhancing structured learning.
- Self-consistency in AI: Running multiple reasoning paths and selecting the most consistent answer reduces stochastic errors, leading to more reliable AI outputs (sketched in the second example below).
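To make the ToT idea concrete, here is a minimal search sketch in Python. It is our illustration, not code from any particular implementation: the `propose` and `score` callables are hypothetical stand-ins for model calls, where `propose` forks a partial path into candidate next thoughts and `score` rates how promising a path looks (a PRM-style step verifier could serve as that scorer).

```python
import heapq

def tree_of_thoughts(propose, score, root, beam_width=3, max_depth=4):
    """Breadth-limited search over partial reasoning paths.

    `propose(path)` and `score(path)` are assumed model calls: one
    forks a path into candidate next thoughts, the other rates it.
    """
    frontier = [(root,)]  # each entry is a tuple of thoughts so far
    for _ in range(max_depth):
        # Fork: extend every surviving path with each proposed thought.
        candidates = [
            path + (thought,)
            for path in frontier
            for thought in propose(path)
        ]
        if not candidates:
            break
        # Prune: keep only the top-scoring partial paths. Dropping a weak
        # path and continuing from a stronger sibling is the backtracking.
        frontier = heapq.nlargest(beam_width, candidates, key=score)
    return max(frontier, key=score)
```

Unlike a single CoT rollout, a bad early step does not doom the whole answer here; it simply loses its place in the beam.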
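Self-consistency is even simpler to sketch. In the toy version below, `model.generate` is a hypothetical sampling call, and we assume each trace ends with an "Answer:" line; the core idea is just sampling plus majority vote:

```python
from collections import Counter

def extract_answer(trace: str) -> str:
    """Pull the final answer from a reasoning trace.

    Assumes traces end with a line like 'Answer: 42'; real parsing
    depends on how the prompt is formatted.
    """
    for line in reversed(trace.splitlines()):
        if line.startswith("Answer:"):
            return line.removeprefix("Answer:").strip()
    return trace.strip().splitlines()[-1]

def self_consistent_answer(model, prompt: str, n_samples: int = 10) -> str:
    """Sample several reasoning paths and return the majority answer."""
    answers = [
        extract_answer(model.generate(prompt, temperature=0.8))
        for _ in range(n_samples)
    ]
    # Independent stochastic errors rarely agree with each other,
    # so the most frequent final answer is usually the reliable one.
    return Counter(answers).most_common(1)[0][0]
```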
These techniques bring us closer to AGI models that reason and refine their own thought processes—a critical step toward more trustworthy, autonomous AI.
What we're saying
Insights from Turing’s leadership on the latest AGI developments and industry shifts:
- The News: Baidu will open-source ERNIE 4.5 by June 30, drop chatbot premium tiers, and launch ERNIE 5 in H2 2025.
Ellie Chio, Engineering Leader:
"ERNIE’s last major release was in Oct 2023—long in LLM terms. Unlike DeepSeek and Qwen, it hasn’t been seen as a technical leader, but Baidu’s expertise in Chinese language processing is unmatched. Open-sourcing now invites scrutiny, but if their long development cycle has paid off, this could be a major shake-up in China’s AI race."
- The News: A study reveals that LLMs mirror biases in their training data when analyzing U.S. Supreme Court cases, rather than aligning with public opinion surveys.
Ellie Chio, Engineering Leader:
"LLMs, like search engines, can appear biased when trained on non-verifiable data (opinions, perspectives). This is often reinforced by moderation filters, limiting viewpoint diversity. A more balanced approach—using embedding-based similarity or refining post-training datasets—can help align models without restricting discourse."
What we're reading
We’re diving into three cutting-edge AI research papers this week:
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models
A framework that enables LLMs to explore multiple reasoning paths, evaluate intermediate steps, and improve problem-solving across math, creative writing, and logic-intensive tasks.
- ALMA: Alignment with Minimal Annotation
Meta’s latest research on self-bootstrapping alignment, showing that AI models can align effectively with only 9,000 labeled examples, cutting reliance on extensive human annotation.
- Self-Consistency Improves Chain of Thought Reasoning
This study highlights how aggregating multiple reasoning paths significantly boosts accuracy and reliability, particularly for complex mathematical and logical problems.
Where we’ll be
Turing will be at two major AI conferences in the coming months—join us to discuss the future of AGI:
- ICLR 2025 [Singapore | Apr 24 – 28]
A top-tier deep learning conference covering representation learning, AI optimization, and theoretical advancements.
- MLSys 2025 [Santa Clara, CA | May 12 – 15]
A major event focused on the intersection of machine learning and systems, discussing efficient AI model training, distributed learning, and AI hardware innovations.
If you’re attending, reach out—we’d love to connect and exchange insights!
Stay ahead with AGI Advance
Turing is leading the charge in bridging AI research with real-world applications. Subscribe to AGI Advance for weekly insights into breakthroughs, research, and industry shifts that matter.
Want to accelerate your business with AI?
Talk to one of our solutions architects and start innovating with AI-powered talent.