Ant Group Pushes AI Frontiers With Open-Source Trillion-Parameter Model
Chinese fintech powerhouse Ant Group has just made a major move in the global AI race. The company has launched a new open-source large language model (LLM) called Ling-1T, which packs a staggering one trillion parameters. Ant says the new model delivers stronger maths and coding abilities than rival systems from DeepSeek and OpenAI, raising the bar in the fast-evolving AI world.
A New Benchmark in Reasoning and Performance
According to Ant, the Ling-1T model shows “superior complex reasoning ability and overall advantage” compared with both open-source and closed-source competitors. The Hangzhou-based company says the model shines in several areas, including code generation, software development, competition-level mathematics, and logical reasoning.
Ant shared that Ling-1T achieved higher scores than DeepSeek-V3.1-Terminus, Moonshot AI’s Kimi-K2-0905, and OpenAI’s GPT-5-main on major benchmarks such as LiveCodeBench and the American Invitational Mathematics Examination (AIME).

Outperforming on Maths Challenges
One of Ling-1T’s standout results came on the AIME benchmark, where it reached an impressive 70.42 per cent accuracy with an average output of over 4,000 tokens per problem. Ant said the performance was “on par” with Google’s Gemini-2.5-Pro and surpassed the results posted by DeepSeek, OpenAI and Moonshot.
Building on Previous Breakthroughs
This new release follows Ant’s earlier trillion-parameter model, Ring-1T-preview, which the company introduced just last month. Ant described that earlier model as the world’s first open-source, trillion-parameter thinking model.
With Ling-1T now joining the lineup, it’s clear that competition in China’s AI scene is heating up. Other Chinese players, including DeepSeek and Alibaba, have also been busy rolling out their own cutting-edge models.
A Race Among China’s AI Heavyweights
Last month, DeepSeek introduced an “experimental” version of its V3 foundation model, called V3.2-Exp, which offers more efficient training and inference while cutting API costs by more than 50 per cent compared with earlier versions. That update came just a week after the release of DeepSeek-V3.1-Terminus, which the company described as “the first step towards the agent era.”
Meanwhile, Alibaba launched its biggest AI model so far, Qwen-3-Max-Preview, in early September – another trillion-parameter system designed to push performance boundaries.
Why Parameters Matter
Parameters are the numerical weights an LLM learns during training; they determine how the model interprets and generates text. More parameters generally mean stronger performance and reasoning, but they also demand far more computing power to train and run.
For comparison, OpenAI’s GPT-4.5 is believed to have between 5 and 7 trillion parameters, making it one of the largest models in the world.
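For readers curious where such figures come from, here is a minimal Python sketch that estimates the parameter count of a plain decoder-only transformer from its basic dimensions. The formula (attention and feed-forward weights per layer, plus a token-embedding table) and the configuration numbers are illustrative assumptions only; they do not describe the actual architecture of Ling-1T, GPT-4.5 or any other model named in this article, and they ignore details such as mixture-of-experts layouts, biases and normalisation layers.

```python
# Rough parameter-count estimate for a dense decoder-only transformer.
# The configuration below is hypothetical, chosen only to land in the
# same order of magnitude as a trillion-parameter model.

def transformer_param_count(d_model: int, n_layers: int, d_ff: int, vocab_size: int) -> int:
    """Approximate parameter count, ignoring biases and layer norms."""
    attention = 4 * d_model * d_model      # Q, K, V and output projections
    feed_forward = 2 * d_model * d_ff      # up- and down-projection matrices
    per_layer = attention + feed_forward
    embeddings = vocab_size * d_model      # token embedding table
    return n_layers * per_layer + embeddings


if __name__ == "__main__":
    total = transformer_param_count(
        d_model=24576, n_layers=128, d_ff=98304, vocab_size=128000
    )
    print(f"~{total / 1e12:.2f} trillion parameters")  # prints ~0.93 trillion
```

Because doubling the hidden size roughly quadruples the per-layer weight count, each jump in model scale carries a steep increase in training and serving cost.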
Ant’s Growing AI Ambitions
Ant Group, best known as the operator of Alipay, first entered the AI space in 2023 with its self-developed financial LLM. Since then, the company has expanded its AI family to include the Ling series of non-thinking language models, the Ring series of thinking models, and the Ming series of multimodal models capable of handling text, images, audio, and video.
Each series is designed for different use cases, and Ant says its ultimate goal is to build “practical, inclusive AGI services that benefit everyone.”
Pushing Toward Practical AGI
Artificial General Intelligence, or AGI, refers to AI systems that can match or even surpass human capabilities across a wide range of tasks. With Ling-1T, Ant Group is taking another confident step in that direction, adding fresh momentum to the rapidly intensifying global race for smarter, more capable AI systems.