Intelligence Everywhere: The Computing Platform Behind AI's Next Phase
More than 200 miles above Earth, a satellite’s processors churn through streams of climate data.
They produce forecasts up to 1,000 times faster than traditional systems—collapsing days of analysis into seconds.
Two miles beneath the ocean’s surface, an autonomous vehicle prowls through complete darkness.
It charts unexplored terrain, making split-second navigation decisions with no link to the surface—intelligence operating in complete isolation.
AI is making decisions instantly and locally, without waiting for direction from distant data centers. The same technology allowing satellites and underwater vehicles to process data and act autonomously is reshaping everything from factories to hospitals to the smartphones in our pockets.
“AI is enabling intelligence and insight everywhere,” says Will Abbey, Executive VP and Chief Commercial Officer of Arm (Nasdaq: ARM), a leading technology company that designs AI compute platforms. “Whether in a device operating under extreme conditions or not, it unlocks tangible benefits and enhances experiences.”

Real business outcomes are emerging
These examples aren’t futuristic outliers; they signal a broad transformation. Across industries, business leaders are quantifying AI’s growing impact with concrete forecasts.
Bloomberg Intelligence finds that pharmaceutical companies anticipate reductions in drug development costs and significantly faster time to market. Industrial firms forecast 30% cuts in product-development cycles, with 80% expecting AI to drive meaningful recurring revenue growth. And automotive executives expect AI to lift sales and profit by 9% over the next three years.
Delivering AI at scale requires efficiency, not just power. The question for leaders is whether they can deploy intelligence everywhere it creates value—cloud to edge, factory to smartphone—without fragmenting their operations or overwhelming their energy budgets. The answer lies in the technology choices being made today, and in understanding where intelligence must operate to deliver value.
Where intelligence needs to live
What’s driving the shift to distributed AI (DAI) across devices? Bloomberg analysis reveals the AI industry is expanding from training large language models (LLMs) in centralized data centers to running continuous inference at the point of decision. This shift is creating sustained demand for computing spread across cloud, edge and devices—an evolution mirrored by rising interest in purpose-built models and small language models suited for edge environments.
Bloomberg analysts project that autonomous AI agents could represent a $270 billion market by 2032. These agents apply intelligence at the point of action: reviewing contracts locally, managing supply chains and controlling manufacturing processes in real time.
The AI infrastructure challenge is compounding. Bloomberg Intelligence projects that data center equipment spending will grow from $46 billion in 2024 to $73 billion by 2028, and that data center power demand will more than double by 2035.
With energy becoming a constraint, the question for organizations is: Where should computing happen to maximize efficiency and value?
“AI is no longer confined to massive data centers; it’s moving closer to where data is created,” says Abbey. “That shift means decisions can happen in real time, with greater privacy and efficiency.”
Why edge AI is accelerating
Market growth reflects this architectural evolution. The edge AI market is projected to grow from $21 billion in 2024 to between $57 billion and $67 billion by 2030, and the broader edge computing infrastructure market is expected to reach $249 billion by 2030, up from $168 billion in 2025.
These projections reflect AI workloads moving to where decisions must happen immediately.
In a smart factory, edge AI detects defects on production lines instantly, with no time lost sending data to the cloud and waiting for analysis. In hospitals, AI models run locally on medical devices, keeping sensitive patient data secure while delivering diagnostic insights. And smartphones run AI features like Face ID even when offline, processing everything locally without needing a connection.
These cases represent a fundamental shift, as AI workloads are being distributed strategically—in the cloud for training and enterprise analytics, and at the edge for instant decisions where privacy and speed matter most.
“Processing at the source—on-device and at the edge—cuts latency, strengthens privacy and lowers costs, helping with real-time decisions and better customer experiences,” Abbey says. “As intelligence decentralizes, entirely new edge services become possible.”
Distributed AI requires flexible, high-performance, energy-efficient compute platforms
Organizations need platforms that can deploy AI seamlessly from centralized data centers to billions of edge devices, using a distributed AI approach optimized for efficiency at scale.
Leaders will solve this challenge by choosing a compute platform that pairs flexibility with compatibility. Rather than fragmenting efforts across incompatible systems, they're building on foundations that allow customization while preserving ecosystem unity.
“From a technology perspective, distributing AI unlocks scalability and resilience,” Abbey says. “From a human perspective, it means people and businesses benefit from intelligence that’s responsive and adaptive to their needs: whether in a factory, a hospital, or at home.”
The commercial impact of purpose-built compute is clear. Major hyperscalers including Google, Microsoft and AWS are building on the Arm compute platform, gaining better performance per watt (PPW) and significant cost-of-ownership savings. “AI compute is not an island; it is part of a larger workload—some classic compute, some machine learning,” Abbey says.
Applications across industries
This compute flexibility extends beyond performance. Arm's model allows companies to customize solutions for specific workloads, optimized for unique infrastructure demands.

This design philosophy is playing out across the AI market. In the automotive industry, AI systems underpin vehicles that deliver near-real-time driver assistance capabilities and a personalized driving experience. Across industries, smart manufacturing deploys high-performance edge AI for adaptive production lines, driving operational efficiency and measurable cost savings.
“Even small savings in latency or energy use multiply into enormous savings at scale,” says Abbey. “With power becoming a rate limiter, businesses must begin building for what’s next: platforms optimized for future workloads and maximized for intelligence per watt.”
That means rethinking infrastructure holistically—not evaluating AI agents, hardware or software in isolation, but reinventing systems from the ground up, based on what actually drives ROI.
With its heritage in energy-efficient, heterogeneous compute systems, Arm’s platform provides up to 10x more energy efficiency than traditional architectures. At the gigawatt scale, that translates to billions in operating expense savings.
“In the next decade, every workload will have an element of AI,” says Abbey. “The demand for efficiency and scale will only grow.”
The scale of transformation is staggering.
This year alone will add 16 zettaflops of AI compute capacity.
If every person on Earth performed one calculation per second, it would take 63,000 years to match that scale.
Source: Arm, Open Compute Project
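The comparison above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes a world population of roughly 8 billion (a figure not stated in the source) and treats the 16 zettaflops as a total of 16 × 10²¹ operations:

```python
# Rough check of the stat above: 16 zettaFLOPs of AI compute vs.
# every person on Earth performing one calculation per second.
# Assumption (not from the source): world population of ~8 billion.

ZETTA = 10**21
total_ops = 16 * ZETTA            # 16 zettaFLOPs of added AI compute
people = 8 * 10**9                # ~8 billion people
ops_per_second = people * 1       # one calculation per person per second

seconds_needed = total_ops / ops_per_second
years_needed = seconds_needed / (365.25 * 24 * 3600)

print(f"{years_needed:,.0f} years")  # ~63,000 years, consistent with the claim
```

The result lands at roughly 63,000 years, matching the article's figure under these assumptions.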
The next frontier is already emerging: agentic AI systems that make autonomous decisions across supply chains; on-device intelligence that operates entirely without connectivity, processing, learning and adapting locally with zero latency; and AI models that learn across millions of devices simultaneously while keeping user data private. These aren’t distant capabilities but are already being built today.
As this shift unfolds, platforms that combine efficiency at scale with broad developer ecosystems are best positioned to carry intelligence everywhere. Arm architecture has become a common layer across smartphones, automobiles, data centers and the emerging edge, giving companies a consistent way to deploy AI wherever it delivers the most value.
From satellites to smartphones and from cloud to edge, the transformation is well underway. The foundational compute platforms chosen by companies will determine which businesses thrive in the era of intelligence everywhere—and which are left on the sidelines of possibilities we haven’t yet imagined.
