Over the past two years, we have seen three distinct waves in the evolution of generative AI. The first focused on building ever-larger frontier models with more compute, more data and more scaling; this wave led to the development and deployment of AI assistants. In the second wave, we saw the emergence of small and mid-sized AI models (open-weight or open-source) with performance comparable to frontier models, and models began to commoditize. In the ongoing third wave, AI agents and agentic platforms focus on test-time scaling and inference as they enable process reengineering. As the pace of evolution accelerates, AI leaders continue to realize more meaningful results from AI. Across business domains, it is the AI High Performers who have embraced this evolutionary approach when it comes to building foundational capabilities that can create value. Here is how the pioneers within these AI powerhouses realize the possibilities of AI:

They enrich their AI Development Flywheel

Their AI development flywheel is a cycle of continuous iteration. Spanning problem identification, research, data procurement and enrichment, model building, validation and deployment, the flywheel is a repeatable cycle suited to fast-paced experimental trials. These are sprints, but they are positioned as milestones within a marathon. For this flywheel to produce more winning solutions, it needs a feeder source that supplies a rich repository of business and technology problems effectively framed as AI/ML problems. Cultivating a culture of rapid iteration and learning, teams are encouraged to launch and test features quickly and continuously, and to fine-tune models and prompts. The key to successful AI experimentation is speed and agility: quickly validate ideas, measure impact, and iterate to build better AI applications. With the right tools and processes in place, these organizations intentionally train their teams in this new paradigm of AI development.
DeepSeek, for example, conducted sustained research and experimentation over a long period, continually evolving its models before succeeding with its latest releases.

They are big supporters of First Principles AI

This is the frontrunners' preferred way to gain new insights into data strategy, AI infrastructure design and optimization, deepening algorithmic efficiency, improving model performance, and slashing development time before and after training. First Principles AI involves breaking down complex business problems, framed as ML/AI problems, into their basic components. The aim is to reach the core building blocks of the problem and construct a solution without using existing models and techniques as a starting point. In many ways, this is a re-examination of the status quo. In the context of AI, it means revisiting the principles of machine learning, neural networks and data science from scratch to enable groundbreaking innovations.

They invest in Hybrid AI Infrastructure

AI High Performers understand the benefits of combining the strengths of rented cloud-based AI infrastructure with owned infrastructure for scaling innovation. They embrace a hybrid approach that lets their AI flywheels operate optimally. When their AI experiments mature into viable solutions ready to expand, they often also need infrastructure that can scale rapidly. Pay-as-you-go rental models are ideal for this scaling phase, which is utilized only at the tail end of the more regular experimentation cycles. In contrast, investment in owned AI infrastructure provides full control and deeper customization, making it ideal for the predictable, continuous load of AI experimentation. Owning infrastructure can yield significant cost savings in the long run, and ownership also strengthens data security and compliance while supporting an efficient operational environment.
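The rent-versus-own trade-off described above can be made concrete with a simple break-even comparison. The sketch below uses entirely hypothetical figures (GPU-hour rates, capital expenditure, amortization period) that are not from the article; it only illustrates the reasoning that a predictable, continuous experimentation load tends to favor owned infrastructure, while bursty scaling favors rental.

```python
# A minimal break-even sketch for the rent-vs-own decision.
# All numbers below are hypothetical placeholders for illustration.

def monthly_rental_cost(gpu_hours: float, rate_per_gpu_hour: float) -> float:
    """Cost of renting cloud GPUs for one month of workload."""
    return gpu_hours * rate_per_gpu_hour

def monthly_owned_cost(capex: float, amortization_months: int,
                       monthly_opex: float) -> float:
    """Amortized monthly cost of owned infrastructure (hardware plus
    power, space, and operations)."""
    return capex / amortization_months + monthly_opex

# Hypothetical steady experimentation load: 2,000 GPU-hours per month.
rent = monthly_rental_cost(gpu_hours=2_000, rate_per_gpu_hour=3.0)
own = monthly_owned_cost(capex=120_000, amortization_months=36,
                         monthly_opex=1_500)

# With a continuous load, the amortized owned cost undercuts rental;
# with occasional bursts, the comparison flips toward renting.
print(f"rent=${rent:,.0f}/mo, own=${own:,.0f}/mo, "
      f"cheaper option: {'own' if own < rent else 'rent'}")
```

Under these illustrative assumptions, the continuous load makes ownership cheaper per month; an organization whose usage is concentrated in short scaling bursts would see the opposite result.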
They are fast followers too

Frontrunners recognize that, despite widespread concerns about data privacy and the demands of responsible AI, the benefits of existing open-source models and AI services cannot be denied. Even as AI High Performers build new models and capabilities, they deploy these existing models on their own infrastructure, including private clouds, on-premises infrastructure, custom hardware (such as Raspberry Pi), and AI-enabled devices. Such deployments can fully comply with global frameworks that ensure ethical and fair AI.

As more and more organizations continue to expand their AI programs and plan to increase their AI investments in the coming months, these learnings can guide broader efforts to deliver purposeful AI results.

– By Mohammed Rafee Tarafdar, Chief Technology Officer of Infosys
