Qualcomm and Alphabet's Google are making significant strides in the artificial intelligence hardware market, introducing new chips that promise to redefine computing for both personal devices and vast data centers. Qualcomm recently unveiled its Snapdragon X2 Elite, boasting an 80 TOPS AI engine for laptops, while Google's Axion processors, its first custom Arm-based CPUs, are now generally available for cloud customers. These developments intensify competition with established players like Nvidia, Intel, and AMD, signaling a new era of performance and energy efficiency in AI computing.
Qualcomm's Aggressive Push into AI PCs and Data Centers
Qualcomm is aggressively advancing its presence in the AI-powered PC market with the recent unveiling of its Snapdragon X2 Elite Extreme System-on-a-Chip (SoC). Announced in November 2025, this next-generation laptop processor features a Hexagon NPU 6 capable of delivering 80 Tera Operations Per Second (TOPS) of AI performance, a 78% increase over its predecessor, the Snapdragon X Elite, which offered 45 TOPS. Early benchmarks suggest the Hexagon NPU 6 is the fastest laptop NPU to date, outperforming Apple's M4 and Intel's Core Ultra 9 288V.
The Snapdragon X2 Elite is designed to power advanced "agentic workflows" and generative AI applications directly on devices, emphasizing both raw performance and power efficiency. Qualcomm says the new NPU delivers up to 1.6 times the performance of the X Elite's at the same power draw during intensive tasks. The earlier Snapdragon X Elite, launched in mid-2024, had already showcased impressive AI capabilities, scoring 1,787 in the UL Procyon AI benchmark and significantly surpassing the Apple M3 (898 points) and the Intel Core Ultra 7 155H (480 points). Qualcomm claimed the X Elite delivered 2.6 times better energy efficiency than the M3 and 5.4 times better than the Core Ultra 7.
Beyond personal computing, Qualcomm is also making a bold move into the data center market with its new AI200 and AI250 accelerator chips, unveiled on November 3, 2025. These chips are specifically designed for AI inference workloads in data centers, aiming to challenge Nvidia's dominant position in this lucrative segment. Qualcomm is betting on the energy efficiency and cost-effectiveness of its designs, which draw on its extensive experience in mobile technology, to attract major cloud providers. The company has already secured its first customer, Saudi Arabia's AI startup Humain, which plans to deploy 200 megawatts of computing systems based on these chips starting in 2026. Qualcomm is in discussions with other large buyers, including Microsoft, Amazon, and Meta Platforms, for potential deployments.
Google's Axion Processors Challenge Cloud Giants
Alphabet's Google officially entered the custom Arm-based CPU market for data centers with the announcement of its Axion processors on April 9, 2024, at the Google Cloud Next conference. These processors are built using Arm's Neoverse V2 architecture and are designed for a wide range of general-purpose computing and AI inference workloads in the cloud. The Axion processors became generally available to Google Cloud customers in late 2024.
Google claims that Axion processors offer significant performance and efficiency advantages. They deliver up to 30% better performance than the fastest Arm-based cloud instances currently available. Furthermore, Google states Axion provides up to 50% better performance and 60% better energy efficiency compared to comparable current-generation x86-based processors from AMD and Intel. Early adopters have reported substantial gains; Dave Zolotusky, Principal Engineer at Spotify, stated that their tests showed "roughly 250% better performance" on their workloads using Axion. Paramount Global also utilizes Axion, achieving 33% faster video encode times.
The introduction of Axion processors positions Google in direct competition with other major cloud providers, such as Amazon Web Services (AWS) with its Graviton processors and Microsoft with its Cobalt chips, both of which have developed their own Arm-based CPUs for data centers. Axion is integrated across various Google Cloud services, including Google Compute Engine, Google Kubernetes Engine (GKE), Dataproc, Cloud SQL, and AlloyDB for PostgreSQL. This strategic move expands Google's long-standing investment in custom silicon, which also includes its Tensor Processing Units (TPUs) for AI and Tensor chips for Pixel mobile devices.
Reshaping the Competitive AI Hardware Market
The global AI chips market is experiencing rapid expansion, projected to grow from an estimated $83.80 billion in 2025 to $459.00 billion by 2032, at a compound annual growth rate of 27.5%. This growth is fueled by massive investments in data center infrastructure, with McKinsey forecasting $6.7 trillion in global data center capital expenditures through 2030, largely driven by AI-specific systems.
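As a quick sanity check, the cited growth rate follows from the standard compound-annual-growth-rate formula, CAGR = (end/start)^(1/years) − 1, applied to the dollar figures above (a minimal sketch; the figures are taken from the paragraph, not from the underlying market report):

```python
# Verify that growing from $83.80B (2025) to $459.00B (2032)
# implies the roughly 27.5% compound annual growth rate cited.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end/start)**(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

rate = cagr(83.80, 459.00, 2032 - 2025)
print(f"Implied CAGR: {rate:.1%}")  # -> Implied CAGR: 27.5%
```

The implied rate comes out to about 27.5% per year, consistent with the projection as quoted.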
Qualcomm's and Google's latest chip announcements intensify the competition across different segments of this booming market. Qualcomm's Snapdragon X2 Elite directly challenges Apple's M-series and Intel's Core Ultra chips in the laptop space, while its new data center accelerators aim to carve out a share from Nvidia's near-monopoly in AI GPUs. Google's Axion processors, meanwhile, escalate the race among cloud providers to offer highly optimized, energy-efficient computing solutions, reducing reliance on traditional x86 architectures.
The industry's shift towards custom, Arm-based silicon underscores a broader trend where companies seek greater control over hardware design to optimize for specific AI workloads, energy efficiency, and cost. While Nvidia remains the undisputed market leader in AI hardware, especially for training large AI models, the increasing demand for AI inference at the edge and in diverse cloud environments creates significant opportunities for new competitors. Analysts predict Alphabet could sell up to 1 million AI chips by 2027, highlighting the growing importance of proprietary AI hardware. These developments promise to drive further innovation, offer customers more choices, and potentially lower the overall cost of AI computing.