Where is AI heading?
We’re almost a quarter of the way through 2025, and the trajectory of AI isn’t quite what was expected at the end of 2024.
While people were boasting that AI agents would take over the world, we haven’t yet witnessed anything groundbreaking except Manus AI, which is closed source with only a few demos released, so it’s hard to be very confident in it.
Instead, I see some clear new trends that might shape AI advancements this year:
1. Reasoning LLMs
DeepSeek-R1 has been a game-changer, shifting everyone’s attention to the importance of reasoning in LLMs. Unlike earlier models that often relied on supervised fine-tuning, DeepSeek-R1-Zero demonstrated that reinforcement learning alone could cultivate advanced reasoning abilities. This has led to a number of reasoning LLMs being released in a very short span of time.
Notable releases: DeepSeek-R1, Claude 3.7 Sonnet, Gemini 2.0 Thinking series
2. Audio AI Models/Agents
Towards the end of 2024, we saw a surge in speech generation models, and this trend has only solidified since then. These models are now capable of producing human-like speech, making interactions with AI more natural and engaging. Their applications range from virtual assistants to real-time language translation, indicating they’re here to stay.
Notable releases: Sesame CSM-1b, Zyphra Zonos
3. Pure Reinforcement Learning & Distillation
DeepSeek-R1’s success has spotlighted training techniques like reinforcement learning and model distillation. These methods have proven effective in enhancing LLMs’ reasoning capabilities without the need for extensive supervised fine-tuning. This shift has opened new avenues for developing more efficient and capable AI models.
DeepSeek-R1’s paper even led to a US stock market sell-off!
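To make the distillation idea concrete: a small “student” model is trained to match a large “teacher” model’s softened output distribution, rather than just the hard labels. Below is a minimal illustrative sketch of the classic soft-target distillation loss (this is the textbook formulation, not DeepSeek’s actual training recipe, which combined RL with fine-tuning on reasoning traces):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    z = [x / T for x in logits]
    m = max(z)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [v / total for v in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) at temperature T. The T**2 factor keeps
    # gradient magnitudes comparable across different temperatures.
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student predictions
    kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
    return (T ** 2) * kl

# Identical logits -> zero loss; mismatched logits -> positive loss.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # 0.0
print(distillation_loss([0.0, 2.0, 0.0], [2.0, 0.5, -1.0]) > 0)  # True
```

In practice this loss is minimized with gradient descent over a large corpus, often mixed with the ordinary cross-entropy loss on ground-truth labels.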
4. Agentic AI: Still Finding Its Ground
Despite the hype, autonomous AI agents haven’t fully met expectations yet. Manus AI is one of the few notable releases making waves, but the field as a whole is still in its early stages, working towards more reliable and versatile autonomous systems.
Notable releases: Manus AI, Google Co-Scientist
5. China Leading the AI Race
In a surprising turn, China has overtaken the USA in the AI race. Chinese tech firms like DeepSeek, Alibaba, Baidu, and Tencent have been rapidly releasing state-of-the-art AI models, not just in text generation but also in video creation. Their strategy of open-sourcing these models has accelerated innovation and positioned them at the forefront of AI development.
OpenAI is really struggling to stay relevant this year
6. Small-Sized LLMs: Big Wins in Compact Packages
This one was expected, and it has delivered on its hype.
The development of smaller LLMs, some with fewer than 1 billion parameters, has been a significant achievement. These compact models are not limited to text generation but extend to video generation and text-to-speech applications, making advanced AI more accessible and efficient.
Notable releases: Wan2.1 T2V 1.3B, Sesame CSM-1b, DeepSeek-R1 distilled models
7. Model Context Protocol: Gaining Traction
Introduced by Anthropic last year, the Model Context Protocol (MCP) is now gaining the recognition it deserves. MCP enhances AI-agent interactions by providing structured context, which could be pivotal in advancing AI-agent applications.
The Blender MCP is already trending on the internet; it gives Claude access to code directly in Blender to generate graphics.
Read more about MCP here: What is the Model Context Protocol (MCP)?
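For a sense of what “structured context” means in practice: MCP is built on JSON-RPC 2.0, where a client first asks a server which tools it exposes and then invokes one by name. Here is a simplified sketch of the two message shapes (the `create_cube` tool and its arguments are hypothetical, loosely inspired by the Blender use case):

```python
import json

# Step 1: the client discovers what tools the MCP server exposes.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Step 2: the client (e.g. Claude) invokes a tool by name with
# structured arguments. Tool name and arguments here are made up.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_cube",
        "arguments": {"size": 2.0, "location": [0, 0, 0]},
    },
}

# Messages are serialized as JSON over the client-server transport.
print(json.dumps(call_tool_request, indent=2))
```

The value of the protocol is that any model client can talk to any tool server using these same message shapes, instead of each integration inventing its own ad hoc format.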
8. Vibe Coding
Coined by AI expert Andrej Karpathy, “vibe coding” refers to a new approach where developers leverage AI to handle much of the coding process. By providing high-level instructions, developers can rely on AI to generate code, making the development process more intuitive and efficient. This paradigm shift is transforming how we approach software development.
The internet is filled with applications and folks posting about their experience with Vibe Coding.
General patterns
Apart from these, a few general patterns I’m observing are:
- Open source is beating closed source: No one expected we would have near-exact replicas of paid models for free. Hail Hugging Face for this! And because of it, OpenAI is struggling big time to figure out how to make money.
- LLM wrappers are the future: Any wrapper that solves a business problem can yield you millions! The focus is no longer just on improved LLMs, but on applications built on top of them. Cursor.ai, Replit, Manus AI, etc. are already killing it.
- Programming is shifting: While it may not become obsolete, the way we write code is very different from a few years back. I can hardly remember anything I coded from scratch in the last three months.
- Job market: Though it is still rough, some roles around AI are emerging; if you are comfortable, make the transition. Moving into AI isn’t that tough, and it’s the need of the hour.
- Don’t put too much weight on AI influencers/leaders: Nowadays, everyone is making bizarre statements just to stay relevant. Don’t blindly trust anyone, even if they’re heading OpenAI! They will do everything to sell their stuff to you. That’s how sales works!
So, what’s the takeaway? AI isn’t just evolving — it’s reshaping industries at a speed no one expected. From reasoning LLMs like DeepSeek-R1 setting new benchmarks to China’s rapid innovations leading the race, the AI landscape is shifting fast. Open-source models are making cutting-edge AI accessible to everyone, and new training methods are unlocking even smarter systems.
Whether you’re a developer exploring vibe coding or a business leveraging LLM wrappers for real-world impact, one thing is clear: staying ahead means adapting. The job market is shifting, and those who embrace these trends will be at the forefront of the AI revolution. The future is here — time to keep up.
Emerging AI Trends in 2025 was originally published in Data Science in your pocket on Medium.