As demand for AI continues to rise, recent findings from Bain & Company’s sixth annual Global Technology Report revealed that US$2 trillion in annual revenue will be needed by 2030 to fund the computing power required to support this demand. The report stated that global AI compute requirements could reach 200 gigawatts, with the US accounting for half of that power.
The report also revealed the accelerating pace at which organizations are adopting agentic AI. Specifically, four levels of development are emerging, with Level 2 (single-task agentic workflows) and Level 3 (cross-system workflow orchestration) attracting most capital today.
While agentic AI adoption is accelerating, GenAI is delivering only modest gains. Despite nearly two-thirds of software firms using AI coding tools, productivity boosts are just 10–15%. The report argues that the real value of GenAI will come from embedding AI across the full software life cycle, not just coding.
One reason for the slow pace of embedding AI across the full software life cycle could be data sovereignty concerns. As the report reveals, sovereign AI is accelerating fragmentation, with tariffs and the US–China rivalry driving decoupling, while diverging national strategies make full-stack independence unrealistic. With no global standards likely, definitions of “responsible AI” vary widely, leaving firms to navigate increasingly regionalized ecosystems.
Anne Hoecker, Partner and Head of the Global Technology, Media & Telecommunications Practice at Bain & Company, explains more about the report’s findings to CRN Asia.
Looking at the findings of the study, there seems to be a shortage of AI infrastructure to support the demand. How will this impact AI development and deployment?
The pace of AI development has been very rapid over the last two years. We have spent time tracking the industry’s choke points in building the compute infrastructure required to support AI demand. Early on, it was the availability of GPUs and specifically the semiconductor packaging capacity needed for advanced GPUs.
As we look at the mid to long term, the constraint is power, followed by the labor required for data center construction. Hyperscalers and others in the market are investing heavily to add the data center capacity demanded by AI training and inferencing. Utilities around the world are looking at how to solve the power generation and delivery needs. Without sufficient data center capacity, it is more difficult to train the next generation of models, and AI service levels could be impacted.
Will the need for sovereign AI enable better data protection practices in organizations? Will this also mean AI development and deployment needs to focus more on data management capabilities?
Data is incredibly central to any AI deployment. So even before we think about the impacts of sovereign AI, implementing AI for internal productivity in an enterprise will increase the importance of data management capabilities within the organization. And then we can layer on the sovereign AI requirements which will further increase the level of sophistication required for data management and how companies think about their own internal IT infrastructure.
Different regulations and requirements across countries will raise the bar for some companies as they look to release products globally and even roll out internal productivity tools to a global workforce.
Will sovereign AI also influence the deployment of humanoid robotics?
Humanoid robotics is a space where we see a lot of innovation in both China and the United States as well as other parts of the world. Overall, we have been tracking a trend toward decoupling of the technology supply chain between China and the US for some time. For example, we see this with communications equipment and semiconductors – and it will possibly continue with robotics. The models and mechanical components being used will likely look different between robotics companies in the US and China.
Embedding AI beyond coding would mean deployment beyond proofs of concept for most companies. What’s holding them back? Is it just the lack of skills and infrastructure, or costs as well?
We see that many companies have launched interesting AI use cases across areas such as sales, marketing, and customer support. Often, companies struggle to scale those use cases to see a real ROI. A few factors make scaling challenging. First, companies need to rethink the underlying business process from end to end with AI in mind. Process redesign takes time, but it is critical to remove the bottlenecks that limit the full potential of AI use cases.
Data quality can also be a challenge, as there are almost always multiple data sets and systems that need to be accessed to get a full view of a customer or supplier. Finally, there is a big change management component, as you need to train your workforce to work differently – but behavioral change can be hard, and it requires a structured approach.
Lastly, can SMBs afford to use agentic AI capabilities? What opportunities will be available for them?
There are a lot of opportunities for SMBs to use agentic AI capabilities. In some sense, they may even have an advantage, with less system, process, and data complexity to deal with. There are plenty of small startups doing an incredible amount with a very small staff – and that is because of how they are using AI.