The James Webb Space Telescope is one of the most technically complex instruments humanity has ever built. It sits at the L2 Lagrange point, roughly 1.5 million kilometers from Earth, and peers into the universe in infrared light with a precision previously impossible. The onboard computer running all of this is a BAE Systems RAD750, a radiation-hardened processor based on the PowerPC 750 architecture. It has a maximum clock speed of 200 MHz. It is, for all practical purposes, a chip designed at the turn of the millennium.
That is not an accident or a cost-cutting measure. The RAD750 had flown on more than 150 spacecraft before JWST launched. It was proven, space-qualified, and trusted. NASA, as an institution, does not change what is working. The Curiosity and Perseverance Mars rovers both run the same chip. For missions that cost billions and cannot be repaired, conservatism is rational.
But conservatism has limits. Future mission requirements, covering autonomous planetary landing, real-time AI-driven science processing, and multi-node distributed sensor networks on the lunar surface, cannot be met by a 200 MHz single-core processor. The computational gap between what NASA is flying today and what it needs to fly tomorrow is not incremental. It is, by NASA’s own figures, a factor of at least 100.
That is the problem NASA set out to solve in 2022. The solution it reached for was a 12-core RISC-V SoC built around processor IP from SiFive. The chip, now formally branded by Microchip Technology as the PIC64-HPSC, sits at the center of NASA’s High-Performance Spaceflight Computing (HPSC) program. It is also running behind its published schedule as of early 2026.
What SiFive is bringing to the table
SiFive was founded in 2015 by Krste Asanovic, Yunsup Lee, and Andrew Waterman, the researchers responsible for the RISC-V ISA at UC Berkeley. Where Arm charges licensing fees for access to its architecture, and Intel owns x86 outright, RISC-V is an open standard. Anyone can build a RISC-V processor without paying royalties. SiFive commercialized this by developing a portfolio of high-performance, licensable processor IP that chip designers can incorporate into their own SoCs.
The specific core at the center of the HPSC program is the SiFive Intelligence X280, a 64-bit RISC-V processor with vector-processing extensions conforming to the RVV (RISC-V Vector) standard. The X280 features a 512-bit vector register length and a decoupled scalar and vector pipeline, which lets the vector unit execute behind the scalar pipeline and improves memory-level parallelism by allowing memory loads to commit early. In scientific and signal-processing workloads, the throughput advantage over a scalar-only core like the RAD750 is substantial.
SiFive has claimed, and substantiated, that the X280 delivers orders of magnitude more compute performance than existing space processors on representative science and autonomy workloads. That is not primarily a clock speed advantage. The RAD750 processes data one element at a time. The X280 processes 512-bit-wide vectors per cycle, meaning it can perform the matrix arithmetic underpinning neural networks, image processing, and scientific transforms at a fundamentally different rate. The attraction for NASA was clear: a mature, shipping product with a validated design, and an open ISA that meant the same compilers, operating systems, and software libraries used in terrestrial HPC would, with suitable porting work, run on HPSC hardware.
A decade of RISC-V: why SiFive’s commitment matters
All of this is worth contextualizing with a direct observation: SiFive has earned the right to be taken seriously on this, and that was not a foregone conclusion.
The founders did not license someone else’s ISA and build a business around it. They built the ISA, and then built the company. That is a different kind of commitment to an architecture, and it shows in the depth of the engineering work SiFive has done on RISC-V over the past decade. Nobody understands RISC-V’s design trade-offs more thoroughly than the people who made them.
In 2021, Intel was reportedly in acquisition discussions with SiFive at a valuation of over $2 billion. Those talks ended without a deal. SiFive raised independent capital, brought in new leadership in the form of CEO Patrick Little, and kept going. That decision looks considerably better in retrospect. Intel’s own position in the semiconductor market has deteriorated substantially since 2021. SiFive, by contrast, has continued to build a portfolio that now spans embedded IoT to data center AI, and the commercial traction is real and verifiable.
Its IP is designed into automotive chips targeting 2025 and 2026 model years across multiple Tier 1 automotive partners. Two Tier 1 US semiconductor companies adopted the X100 series in 2025. The NVLink Fusion partnership with NVIDIA, announced in January 2026, makes SiFive the first RISC-V vendor integrated into NVIDIA’s high-bandwidth GPU interconnect fabric. Red Hat released a developer edition of RHEL for SiFive’s HiFive Premier P550 board. NVIDIA announced it is porting CUDA to RISC-V, using that same board as the starting point. These are not paper announcements. They are engineering commitments from organizations that do not make them lightly.
What SiFive has demonstrated over the past ten years is that RISC-V is not an academic exercise or a hobbyist alternative. It is a commercially viable ISA with competitive performance, a rapidly expanding software ecosystem, and enough industry momentum that the world’s largest GPU company has concluded it cannot afford to ignore it. That is a substantive achievement, built through sustained engineering work and deliberate commercial strategy across a period when the outcome was genuinely uncertain.
The X280 cores at the center of the PIC64-HPSC are a direct product of that decade of work. They are not experimental silicon. They are production IP, validated across automotive, edge AI, and industrial applications. NASA is not taking a flyer on an unproven design. It is flying IP from a company with a demonstrated track record and the institutional knowledge to support it over the long term. When you are designing hardware intended to last multiple decades in a radiation environment, that matters.
The PIC64-HPSC: what the chip actually looks like
In September 2022, NASA’s Jet Propulsion Laboratory awarded a $50 million firm-fixed-price contract to Microchip Technology, with SiFive named as the CPU IP supplier. Microchip contributed significant internal R&D funding in addition to the NASA contract value to complete the program.
The resulting SoC, formally unveiled by Microchip as the PIC64-HPSC in July 2024, is a 12-core RISC-V design manufactured on GlobalFoundries’ 12LP+ process node. Eight of those cores are SiFive X280 (or X288 in some Microchip documentation) vector-processing cores organized into two clusters of four, with four additional general-purpose RISC-V cores for system control and application compute. The application complex is backed by SiFive’s WorldGuard security model, which provides fine-grained, hardware-enforced isolation across up to 32 security domains.
The headline compute figures are up to 2 TOPS for int8 workloads and 1 TFLOPS for bfloat16 floating-point, courtesy of the vector units. Those numbers will not set any records in 2026. Still, the context is a chip designed to survive the radiation environment of deep space, operate reliably at extreme temperatures, and function correctly even when individual components fail. Delivering this level of compute under those constraints is a genuinely hard engineering problem.
Beyond the processor cores, the PIC64-HPSC integrates a suite of connectivity that no previous space computer has come close to offering. That includes a 240 Gbps Time Sensitive Networking (TSN) Ethernet switch with 10GbE endpoints, PCIe Gen 3 and Compute Express Link (CXL) 2.0 with x4 or x8 configurations, RMAP-compatible SpaceWire ports with internal routers, and RDMA over Converged Ethernet (RoCEv2) hardware accelerators enabling low-latency, zero-copy data transfers from remote sensors directly into DDR4 memory. The chip also supports post-quantum cryptographic algorithms and a hardware hypervisor for mixed-criticality workloads.
Fault tolerance is built in at multiple levels. The design is radiation-hardened-by-design (RHBD) across the entire die, with Dual-Core Lockstep (DCLS) for safety-critical processing, split-mode operation for configuring the cores across different fault-containment domains, and hardware error detection and correction throughout the memory subsystem. Power management is handled via configurable power islands and clock gating, allowing the chip to scale its consumption to match mission phases. The package options include a space-qualified organic QML Class-Y package, confirming the chip’s intended flight qualification path.

A 100x leap, on paper
NASA’s stated goal for the HPSC program is to deliver at least 100 times the computational capacity of current spaceflight computers. That figure is not marketing language. It reflects the specific comparison between the RAD750 and the X280-based compute complex across the science and autonomy workloads that define NASA’s mission roadmap.
The RAD750 runs at up to 200 MHz on a 32-bit scalar PowerPC 750 architecture. The PIC64-HPSC runs eight 64-bit RISC-V X280 cores with 512-bit vector units, plus four additional general-purpose cores, on a modern fabrication process with substantially better performance-per-watt characteristics. For workloads involving vector math, the improvement is multiplicative: wider vectors, more cores, and a higher clock rate all compound.
Where the 100x figure gets interesting is in the software context. The RAD750’s ecosystem is narrow and specialist. Writing or porting software for it requires familiarity with tools and frameworks rarely used anywhere outside the space industry, which limits the developer pool and raises development costs. The HPSC is designed to support the same software environments used in HPC on the ground: Linux, real-time operating systems such as RTEMS and Wind River VxWorks, the Xen hypervisor, and NASA’s core Flight Software (cFS) framework. The cFS team has already completed integration with the HPSC platform at NASA’s Goddard Space Flight Center, creating a day-one software foundation that mission teams can build on without starting from scratch. That is the long-term payoff for accepting the higher risk of a newer, less flight-proven architecture.

Figure 2: RAD750 vs PIC64-HPSC specification comparison. The RAD750 is a 200 MHz single-core scalar processor on a 1990s-era architecture. The PIC64-HPSC is a 12-core, 64-bit RISC-V SoC with 512-bit vector units and a suite of modern connectivity interfaces that have never previously been available in a space-qualified processor.
What the schedule said, and where things stand
The original published timeline for the HPSC program called for initial chip availability in 2024 and for space-qualified hardware to be ready in 2025. When Microchip formally announced the PIC64-HPSC in July 2024, it stated that the availability date for early access partner samples was “in 2025.” NASA’s official HPSC program page, as last updated in September 2025, stated that the project “will produce its first processors in early 2025.”
That language is telling. By September 2025, “early 2025” had already elapsed without a publicly confirmed first-silicon delivery, and NASA was still using the future tense. As of early 2026, there has been no announcement of first silicon reaching customers, no confirmed early access program shipments, and no named flight program that has selected the HPSC for integration. The chip appears to be running at least one year behind its original schedule.
This is not especially surprising in isolation. Radiation-hardened SoC development is among the most technically demanding work in the semiconductor industry. The combination of RHBD cell libraries, fault-tolerant architecture, full die-level radiation tolerance, and the advanced connectivity suite of the PIC64-HPSC makes this one of the most complex space processors ever attempted. Tape-out iterations on complex, rad-hard designs regularly slip, and delays at this scale are a normal part of the process. What is less normal is the institutional context surrounding the delay.

JPL’s turbulent two years
The HPSC program is managed by NASA’s Jet Propulsion Laboratory in Pasadena, California. Since early 2024, JPL has conducted four rounds of workforce reductions, cutting its headcount from approximately 6,500 employees to roughly 4,500. That is a loss of around a quarter of the lab’s total workforce over roughly 20 months.
The initial cuts, in January and February 2024, were driven by the collapse of the Mars Sample Return mission budget. MSR had seen its projected costs balloon to the point where an independent review found it had a “near zero probability” of making its planned launch date. NASA slashed MSR funding, JPL lost the work, and approximately 630 employees and contractors departed in those first two rounds.
Further cuts followed in November 2024 (325 positions) and October 2025 (approximately 550 positions, or around 11 percent of the remaining workforce). The October 2025 round came during a US government shutdown, amid the Trump administration’s proposed budget that called for a 24 to 25 percent cut to NASA’s overall funding. The January 2025 Eaton Fire in the foothills near Pasadena compounded the situation by destroying the homes of more than 200 JPL employees and temporarily displacing approximately 20 percent of the lab’s staff.
A congressional appropriations bill passed in January 2026 with bipartisan support (82 to 15 in the Senate) rejected the administration’s proposed deep cuts to NASA. The bill formally canceled MSR while allocating $110 million for a new Mars Future Missions program, and funded NASA broadly at near-FY2025 levels. That provided some stability. But as space advocacy groups have noted, the institutional damage from 2025 was substantial. The agency lost more than 4,000 civil servants and thousands of contractors in a single year. The specialized expertise they carry does not regenerate quickly.
For the HPSC program, this matters in a specific and practical way. Radiation-hardened processor development depends on sustained engineering effort by teams with highly specialized knowledge: rad-hard VLSI design, fault-tolerant architecture, space-qualified packaging, and qualification testing against military and NASA standards. That expertise is not abundant in the commercial semiconductor industry. Whether the workforce disruptions directly affected HPSC’s schedule cannot be confirmed from public information. Still, it is implausible that four rounds of layoffs and a year of institutional chaos did not affect a program that JPL leads.
SiFive, meanwhile, is not standing still.
While the HPSC program has been navigating delays, SiFive as a company has made considerable progress. In September 2025, SiFive launched its second-generation Intelligence IP family. The lineup includes the entirely new X160 Gen 2 and X180 Gen 2, targeting far-edge IoT and embedded AI applications, alongside upgraded X280 Gen 2, X390 Gen 2, and XM Gen 2 variants for higher-performance applications. The XM Gen 2 adds a scalable matrix engine for demanding AI inference workloads. All five products feature enhanced capabilities for scalar, vector, and matrix processing. First silicon from the Gen 2 family is expected in Q2 2026, and all five IP products are available for licensing.
In January 2026, SiFive announced it was joining NVIDIA’s NVLink Fusion ecosystem, becoming the first RISC-V IP vendor to do so. This opens the door to RISC-V-based CPUs in data center AI systems that interoperate with NVIDIA GPUs via the high-bandwidth NVLink-C2C interface. Arm, Intel, and AWS were already on board; SiFive is the first RISC-V entrant. None of this accelerates the PIC64-HPSC’s schedule directly, but it reinforces that SiFive is a commercially stable, growing company with an increasingly central role in the global semiconductor industry.
Why this program matters beyond the schedule slip
The HPSC represents something more fundamental than just a faster space processor. It is NASA’s deliberate attempt to end its dependence on closed, proprietary silicon ecosystems and the specialist toolchains that go with them. Every previous NASA processor has carried its own narrow ecosystem, accessible only to developers versed in tools rarely used outside the space industry. This drove up costs, lengthened development timelines, and created a structural talent bottleneck.
The HPSC’s use of RISC-V, and specifically of SiFive’s X280 with its standard RVV vector extensions, is an attempt to break that pattern. RISC-V has an open ISA, standard tools including GCC and LLVM compiler support, a growing Linux ecosystem, and an increasingly large developer base. If NASA can build future mission software on the same frameworks used in ground-based HPC and edge AI development, the barrier to entry for mission software drops substantially.
NASA’s own planning reflects this ambition. The HPSC is described as the intended processor for “virtually every future space mission,” covering lunar surface operations, planetary exploration, crewed deep space systems, and advanced science instruments. The integration of NASA’s cFS framework with the HPSC ecosystem and JPL’s ongoing work to validate HPSC for its future mission computing roadmap both suggest the commitment to the program is genuine rather than aspirational.
Where things actually stand
The honest summary is that the PIC64-HPSC is a technically ambitious chip running at least a year behind schedule, managed by a program office operating under significant institutional stress, against a backdrop of the most severe NASA budget disruption in the agency’s recent history.
The program is still active. The chip is still being developed. NASA’s published program pages continue to reference HPSC as the foundation of its future computing architecture. Microchip continues to develop the PIC64-HPSC product line and its ecosystem. An HPSC Workshop was held for the second consecutive year in 2025, with Microchip scheduled to provide an update on current readiness. “Readiness,” rather than “availability,” is the kind of language you use when hardware has not yet shipped to customers.
The original 2024 first-silicon target came and went without a public announcement of delivery. The 2025 space-qualified hardware target is also now in the past without confirmation. The most likely scenario is that the schedule has slipped due to a combination of the normal difficulties inherent in rad-hard SoC development at this level of complexity, compounded by institutional disruption at JPL over 2024 and 2025. Qualified first silicon in 2026, with space-grade hardware qualification following in 2026 or 2027, would be a plausible revised expectation based on current public information.
For a chip intended to fly for the next several decades, replacing hardware originally designed in the late 1990s, a year or two of schedule slip is not a program-ending event. The RAD750 that the HPSC is replacing took years to qualify and has been flying reliably since the early 2000s. Whether the PIC64-HPSC will achieve the same longevity will depend on whether Microchip, SiFive, and NASA can complete the qualification process and get the chip into a flight program. By all available indications, that work is ongoing.
Final Note: SiFive is headquartered in San Mateo, California. The HPSC program is managed by NASA’s Jet Propulsion Laboratory in Pasadena, California, and funded through NASA’s Space Technology Mission Directorate. Microchip Technology, the SoC manufacturer and program prime contractor, is headquartered in Chandler, Arizona. GlobalFoundries is manufacturing the PIC64-HPSC on its 12LP+ process node as an onshore, trusted supply chain provider approved by the US Department of Defense.