Arm Q2 FYE26: third straight billion-dollar quarter, CSS momentum, and Neoverse’s cloud AI land grab
Arm’s second quarter of fiscal 2026 was not just another AI themed earnings beat. Revenue reached 1.14 billion dollars, up 34 percent year on year and above the top of guidance. It is the third straight quarter above 1 billion dollars. More importantly, the mix is shifting in the way Arm has been promising since its IPO. Royalties are growing on Armv9 and Compute Subsystems, Neoverse is gaining share in cloud CPUs, and CSS designs are turning Arm from a core supplier into a system level partner.
Headline numbers and where the growth came from
Arm reported Q2 FYE26 revenue of 1.14 billion dollars, a 34 percent increase compared to the same quarter last year. That beats the high end of guidance and keeps the company on a clear growth path.
Royalty revenue reached 620 million dollars, up 21 percent year on year. That side of the business is where the long term leverage sits, and the drivers are exactly what Arm has been telling investors to watch:
- More shipments based on Armv9 rather than older architectures, which carry higher royalties per chip.
- More Arm based CPUs inside data centers, especially Neoverse based designs used alongside AI accelerators.
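As a quick sanity check on those growth rates, the implied year-ago figures can be backed out from the reported numbers. The snippet below is a back-of-the-envelope sketch, not a reconciliation of Arm's actual filings.

```python
# Back-of-the-envelope check: derive implied year-ago figures from the
# reported Q2 FYE26 numbers and growth rates (all values in millions of USD).
q2_fy26_revenue = 1_140      # total revenue, reported
q2_fy26_royalty = 620        # royalty revenue, reported
revenue_growth = 0.34        # up 34 percent year on year
royalty_growth = 0.21        # up 21 percent year on year

implied_prior_revenue = q2_fy26_revenue / (1 + revenue_growth)
implied_prior_royalty = q2_fy26_royalty / (1 + royalty_growth)

print(f"Implied year-ago revenue: ~{implied_prior_revenue:.0f}M")               # ~851M
print(f"Implied year-ago royalty: ~{implied_prior_royalty:.0f}M")               # ~512M
print(f"Royalty share of Q2 revenue: {q2_fy26_royalty / q2_fy26_revenue:.0%}")  # ~54%
```

On those figures, royalties are a bit over half of revenue, and the licensing-heavy remainder actually grew faster this particular quarter, which is exactly why the shift toward Armv9 and CSS royalties matters for the long term trajectory.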
License revenue is still important, but Arm’s strategy is clearly to move customers onto newer designs and more complete platforms so that each chip built on Arm IP generates more dollars over its lifetime.
CSS strategy: moving from IP blocks to full platforms
Compute Subsystems are the clearest example of Arm trying to sell more than individual cores. In Q2, Arm signed three new CSS licenses, one in smartphones, one in tablets and one in data centers. That brings the total to 19 CSS licenses across 11 companies, with five customers already shipping CSS based chips.
Samsung is now using CSS for Exynos, which means the top four Android phone vendors all have devices built on CSS based platforms. That matters because CSS is not just a handful of CPU cores. It is a pre designed compute platform with cores, interconnect, memory controllers and other pieces that Arm has already validated together.
The goal is to reduce integration risk and time to market for partners. Instead of everyone assembling their own SoC topology from menus of IP, Arm offers a reference system that has already been tuned for performance, power and software compatibility. That pushes more of the system architecture up into Arm’s domain rather than leaving it entirely with the SoC vendor.
Lumex CSS smartphone AI platform in detail
Q2 also included the formal launch of Lumex CSS, Arm’s latest smartphone platform. Lumex combines new Arm C1 CPUs with Scalable Matrix Extension 2 for CPU based AI acceleration and a Mali G1 Ultra GPU.
Arm is quoting three headline improvements for Lumex reference designs compared to the prior generation:
- Up to 5 times faster AI performance on the CPU side.
- Up to 3 times better energy efficiency in some AI workloads.
- Up to 2 times higher combined AI and graphics performance in certain use cases.
The company is also leaning hard on ecosystem support. Apps such as Alipay, Gmail and YouTube are targeting Lumex so that more AI work can be offloaded onto the device. MediaTek is integrating Lumex configurations into upcoming smartphone SoCs, which are expected to ship in flagships from Oppo and vivo.
The practical impact is that more personalisation, summarisation and generative tasks that would previously have gone to the cloud can now run on Arm based mobile silicon. That gives Arm more ways to monetise AI trends without having to own the cloud stack directly.
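In practice, software has to gate those on-device paths on what the silicon underneath actually supports. The sketch below is a minimal illustration of one way to do that on 64-bit Arm Linux, assuming the kernel lists flags such as sme and sme2 in the Features line of /proc/cpuinfo; the flag names and the fallback policy are illustrative, not taken from Arm's materials.

```python
# Minimal sketch: decide whether to run an AI feature on-device or fall back,
# based on the CPU feature flags reported by the Linux kernel on arm64.
# Assumes /proc/cpuinfo exposes a "Features" line with flags such as "asimd",
# "sme" and "sme2"; flag names and the fallback policy are illustrative.
import platform

def cpu_features() -> set:
    """Return the lowercase CPU feature flags, or an empty set if unavailable."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.lower().startswith("features"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

def pick_inference_path() -> str:
    """Choose an execution path for an on-device AI workload."""
    if platform.machine() != "aarch64":
        return "cloud"                          # not a 64-bit Arm device
    features = cpu_features()
    if "sme2" in features:
        return "on-device, SME2 matrix path"    # Lumex-class CPUs
    if "asimd" in features:
        return "on-device, NEON fallback"
    return "cloud"

if __name__ == "__main__":
    print(pick_inference_path())
```

The details differ by operating system and framework, but the shape of the decision is the same: detect what the CPU offers, then keep as much of the AI work on the device as the hardware allows.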
Neoverse and cloud AI: Arm CPUs around the accelerators
The cloud story in Q2 is Neoverse. Arm says that more than 1 billion Neoverse CPU cores have now shipped into cloud infrastructure, and it expects Arm based designs to account for nearly half of the CPUs deployed by top hyperscalers this year.
This is not about replacing GPUs. The GPU remains the main engine for AI training and many inference workloads. What Neoverse does is occupy the control plane and support plane around those accelerators. Every serious AI cluster still needs CPUs for orchestration, storage, networking, security, and pre and post processing of data.
When a large share of those CPUs are Arm based, the architecture collects a royalty on almost every rack that matters, regardless of which vendor sells the accelerators. That is the quiet leverage behind the Neoverse strategy.
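To make that division of labour concrete, here is a deliberately simplified sketch of a serving node: general purpose CPU cores handle request handling, preprocessing and batching, while the accelerator only ever sees ready-made batches. The run_on_accelerator stub and every detail around it are hypothetical; the point is where the CPU work sits, not any particular vendor's stack.

```python
# Simplified sketch of the CPU-side work that surrounds an accelerator in an
# AI serving node. Preprocessing and batching run on general purpose cores
# (the role Neoverse plays in the article); the accelerator is a black box.
# `run_on_accelerator` is a stand-in stub, not a real API.
import queue
import threading

BATCH_SIZE = 4
requests: queue.Queue = queue.Queue()
batches: queue.Queue = queue.Queue()

def preprocess(text: str) -> list:
    """CPU-bound stand-in for tokenisation / feature extraction."""
    return [ord(c) % 256 for c in text]

def cpu_batcher() -> None:
    """Collect incoming requests and group them into accelerator-sized batches."""
    batch = []
    while True:
        item = requests.get()
        if item is None:                  # shutdown signal
            if batch:
                batches.put(batch)
            batches.put(None)
            return
        batch.append(preprocess(item))
        if len(batch) == BATCH_SIZE:
            batches.put(batch)
            batch = []

def run_on_accelerator(batch: list) -> list:
    """Stub for the GPU/accelerator call; in reality this is the vendor runtime."""
    return [sum(tokens) for tokens in batch]

def accelerator_loop() -> None:
    """Drain ready batches and hand them to the accelerator."""
    while True:
        batch = batches.get()
        if batch is None:
            return
        print("accelerator output:", run_on_accelerator(batch))

if __name__ == "__main__":
    threading.Thread(target=cpu_batcher, daemon=True).start()
    worker = threading.Thread(target=accelerator_loop)
    worker.start()
    for req in ["hello", "world", "arm", "neoverse", "ai"]:
        requests.put(req)
    requests.put(None)                    # signal shutdown
    worker.join()
```

Every request that reaches the accelerator passes through CPU code first, which is why the architecture of those CPUs matters commercially even when the accelerators get the headlines.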
Google is the headline example. According to Arm, Google has already migrated more than 30,000 cloud applications to Arm based instances, including Gmail and YouTube, and aims to move most of its more than 100,000 internal applications over time. That is a strong signal to the rest of the cloud ecosystem that Arm servers are ready for production workloads, not just niche tests.
How AI shows up in Arm’s P&L
A lot of companies are talking about AI. In Arm’s numbers, it shows up in practical ways.
- Device side AI lifts the value of each smartphone, tablet and PC SoC built on modern Arm designs, especially Lumex class CSS platforms.
- Cloud AI infrastructure increases the number of Neoverse based CPUs deployed alongside GPUs and other accelerators.
- As customers refresh their designs to take advantage of AI, they are more likely to move from older cores to Armv9 and CSS, which carry better royalty terms.
The result is that AI is not a separate product line. It is a multiplier applied across Arm’s existing end markets. The Q2 ramp in royalties is consistent with that picture.
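To make that multiplier concrete, here is a toy model of royalty revenue under a mix shift. None of the per-chip royalty figures below come from Arm's disclosures; they are hypothetical numbers chosen only to show the mechanic.

```python
# Toy model of the royalty "multiplier": the same number of chips ships, but
# the mix shifts toward Armv9 and CSS designs with a higher royalty per chip.
# All rates, mixes and unit counts are hypothetical illustrations.

def royalty_revenue(units_m: float, mix: dict, per_chip: dict) -> float:
    """Royalty revenue in millions of USD: units x mix share x royalty per chip."""
    return sum(units_m * share * per_chip[tier] for tier, share in mix.items())

per_chip = {"armv8": 0.10, "armv9": 0.18, "armv9_css": 0.30}   # hypothetical USD per chip
units_m = 1_000                                                # hypothetical units (millions), held flat

old_mix = {"armv8": 0.70, "armv9": 0.25, "armv9_css": 0.05}
new_mix = {"armv8": 0.40, "armv9": 0.40, "armv9_css": 0.20}

before = royalty_revenue(units_m, old_mix, per_chip)
after = royalty_revenue(units_m, new_mix, per_chip)
print(f"Flat units, old mix: {before:.0f}M")                     # 130M
print(f"Flat units, new mix: {after:.0f}M")                      # 172M
print(f"Growth from mix shift alone: {after / before - 1:.0%}")  # ~32%
```

The real rates are confidential and vary by deal, but the mechanic is the point: if the mix keeps tilting toward Armv9 and CSS, royalty revenue can grow meaningfully even when unit volumes in mature markets stay flat.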
Licensing pipeline and ecosystem depth
The CSS license count is worth watching because it hints at future royalty growth. Nineteen CSS licenses across 11 companies, with five customers already shipping, means a growing set of SoCs built on Arm's reference platforms rather than one off designs.
That has two important side effects.
- CSS designs tend to use newer architectures, so they drag the licensee base toward Armv9 and newer cores where Arm’s take per chip is higher.
- Each CSS platform becomes a small ecosystem. Once a SoC vendor commits to Lumex or similar, device makers and software vendors have an incentive to optimise for that specific combination of CPU, GPU and interconnect.
Arm’s developer ecosystem is the background enabler. With tens of millions of developers and millions of apps already tuned for Arm, new platforms like Lumex and Neoverse instances have a large base of existing software to draw from. That does not show up directly in a single quarter’s revenue, but it makes ramps faster and less risky.
Risks behind the clean growth story
The quarter looks strong, but the risk profile is not trivial. A few points stand out.
- Hyperscaler capex sensitivity. Neoverse growth is tied directly to hyperscaler investment. If AI and cloud infrastructure spending slows or becomes more volatile, Arm’s server side royalty growth will feel it.
- CSS execution risk. When Arm sells more complete subsystems, it also takes on more responsibility for system level performance. A serious issue in a major CSS based platform would hit more than one chip partner.
- Competitive pressure. Intel and AMD are pushing their own AI narratives hard. The RISC-V ecosystem is gathering momentum at the low end and in some custom designs. Arm has a lead in many segments, but that lead has to be defended every generation.
- Valuation expectations. The market already prices Arm as a structural AI winner. That sets a high bar for how many quarters like this it needs to deliver before investors start questioning the multiple.
My read on where Arm really stands after Q2
Looking past the headline growth, Q2 FYE26 is a useful checkpoint on Arm’s strategy rather than a surprise. The company said it would push customers to Armv9, sell more system level IP through CSS, and capture a meaningful share of the CPU side of AI data centers. The quarter is broadly consistent with that plan.
On the engineering and product side, a few points are worth calling out.
- The CSS and Lumex story suggests that Arm’s smartphone and PC push is now more about shipping full reference platforms than just cores. That should make performance more predictable and time to market shorter for partners, at the cost of more responsibility sitting with Arm.
- Neoverse’s position inside AI infrastructure is stronger than a simple CPU market share number suggests. If Arm based CPUs end up fronting a large fraction of GPU heavy clusters, Arm benefits from the AI build out even when someone else sells the accelerators.
- The focus on Armv9 royalties shows that Arm understands its leverage. The more customers migrate off older designs, the more room Arm has to keep growing revenue even if unit volumes in mature markets such as phones stay flat.
The main open questions are about durability and competition. Can Arm maintain close to 50 percent share of hyperscaler CPU deployments if cloud budgets tighten or hyperscalers experiment more with RISC-V? Can CSS based platforms hold or grow share in smartphones and PCs once Intel, AMD and Qualcomm respond with their own tightly integrated AI designs?
For now, Q2 says that Arm is one of the few companies in the AI cycle that gets paid at almost every layer from phones and PCs in the hand to the control plane of AI clusters in the rack. It does not need to win the GPU race to profit from AI. It needs to keep its architecture in the path wherever general compute and AI converge, and this quarter suggests it is doing that successfully.
