NAND Shortage And SSD Price Spike: Why Phison Thinks The Pain Could Last A Decade

I thought the NAND rollercoaster was finally calming down. Prices were back to sane levels, 2TB PCIe 4.0 SSDs were routinely discounted, and you could tell people “just buy a decent NVMe, it’ll be fine” without checking the DRAM market first. Then Phison’s CEO popped up and basically said: enjoy it while it lasts, because the cheap SSD era is about to get flattened by AI data centers and it might not come back for a long time.

What Phison is actually warning about

Phison sits in a very interesting spot. They do not make NAND wafers themselves, but their controllers sit on a huge number of consumer and OEM SSDs. They see both sides: NAND vendors locking in long term deals with hyperscalers, and SSD makers trying to secure enough flash to build drives people actually want to buy.

In a recent round of interviews and coverage, Phison’s CEO has put a pretty blunt label on what is coming. NAND prices have already more than doubled off the bottom, all of 2026’s NAND production is effectively sold, and we are staring at what he calls a memory supercycle that could keep supply tight for as much as ten years. From his seat, 2026 is the year shortages bite hard, and 2027 is when the pain fully shows up in SSD prices for the rest of us.

Translated into plain PC-builder language: the days of cheap 2TB and 4TB SSDs might be over for a while, not because vendors are greedy this quarter, but because AI build-outs are inhaling flash at a rate the industry did not tool up for.

Why AI data centers change the math overnight

AI training systems are loud and obvious: racks of GPUs, custom accelerators, exotic cooling. The less glamorous bit that sits next to them is storage. Training and serving models at scale is a read-heavy, write-steady workload that wants low latency and high throughput. That is not a natural fit for spinning disks any more, so hyperscalers are very happy to replace huge banks of HDDs with SSD-based storage tiers when the numbers add up.

Multiple reports now point to enterprise HDDs being on backorder horizons counted in years, and AI data centers switching to QLC SSDs for bulk storage as capacities and costs converge. At the same time, NAND roadmaps are pushing toward taller 3D stacks with higher layer counts and faster interfaces, which make flash denser and potentially cheaper per bit, but also more capital intensive to build in the first place.

When you suddenly have hyperscalers, cloud providers and AI platforms saying “we’ll take everything you can make through 2026 and 2027, thanks”, the market stops being a nice balance between consumer drives, phones, and a bit of enterprise. It becomes “feed AI first, everyone else gets what is left.”

How we got from oversupply to a “supercycle” this fast

The whiplash here is worth unpacking. Not long ago, NAND vendors were drowning in inventory. Smartphone demand softened, PC shipments fell off a cliff after the pandemic spike, and a lot of QLC and TLC wafers were piling up. Prices fell, and we got the nice side of the cycle: 1TB and 2TB NVMe drives at ridiculous discounts, decent Gen 4 drives well under £100, and TLC still being viable on mid-range models.

Vendors did what they always do in that situation: cut wafer starts, delay new capacity, and try to ride it out. Then the AI boom properly hit. Training clusters grew from “a few thousand GPUs” to “hundreds of thousands”, and the storage footprint had to grow alongside them. You can compress model checkpoints only so far. Datasets are messy, logs grow forever, and backups have to live somewhere.

That is where the supercycle language comes from. You have rising baseline demand from phones, consoles, PCs, and general servers, plus an AI layer on top that is both new and extremely aggressive. It is not just “more of the same”; it is a second consumer glued to the side of the first one, and it has more money.

Why this is worse than the last DRAM spike for SSD buyers

We have done memory spikes before. DRAM went insane around the 2017–2018 window and stayed high for an uncomfortably long time. The difference this time is that the pressure is not just on the “RAM” side of the house. NAND is getting hit as well, and in some verticals it is being treated as strategic infrastructure rather than just a component.

Hyperscalers will happily sign multi-year, take-or-pay deals for flash if it lets them lock in capacity and pricing ahead of rivals. Once those contracts are in place, spot buyers – the people who fill retail SSDs and many OEM designs – are negotiating over the remainder. That almost guarantees more volatile pricing for consumer drives, even if headline contract prices look stable.

So where the last DRAM cycle was “your RAM kit is annoyingly expensive for a while”, this one threatens to be “your SSD is expensive, and the high capacity tier you really wanted never quite hits the price point you were waiting for.”

What this means for consumer SSD pricing through 2027

If we take Phison’s warnings at face value, the picture for consumer SSDs over the next few years looks something like this: no catastrophic shortage where shelves are empty, but a long period where prices are stubbornly high compared to what you might expect from the underlying process shrinks and layer count improvements.

On the technical side, the NAND industry is still marching toward 300-, 400-, even 500-layer 3D stacks and faster interfaces, which on paper should give us cheaper, denser drives. In a normal cycle that would show up as 4TB and 8TB NVMe drives becoming mainstream in gaming PCs, and 1TB becoming the absolute minimum for any halfway serious machine. In a constrained cycle, those improvements do not translate cleanly into street pricing. Instead, they mostly show up as margin protection for the vendors and more competitive bids for the big AI contracts.

The likely pattern for retail SSDs

If you build or upgrade PCs regularly, you will probably see a pattern like this over the next 18–24 months:

  • Entry-level 500GB and 1TB drives stay around their current price points or creep up slightly, with more QLC and DRAM-less designs in the mix.
  • 2TB drives, which used to feel like the sweet spot, stop getting much cheaper and may even tick up during tight quarters.
  • 4TB and larger drives stubbornly refuse to drop into the “impulse buy” bracket and stay aspirational unless you catch a rare sale.
  • Better controllers and firmware show up, but they are more focused on endurance and QoS for mixed workloads than on raw sequential numbers for consumer benchmarks.

The most annoying bit is that the numbers on the box may keep climbing. We will see ever higher advertised read and write figures, more PCIe 5.0 models, and marketing slides full of 10 GB/s+ claims. Underneath, the price-per-terabyte curve will not be as friendly as we had hoped for this stage of the NAND roadmap.

QLC becomes the default, not the exception

One clear knock-on effect is the march of QLC. When your bit cost is under pressure and your biggest customers are pushing for crazy capacities per rack unit, you use fewer dies and squeeze more bits out of them. That inevitably means more QLC and, eventually, PLC in the mix.

On the consumer side, that does not mean your boot drive is going to explode. It does mean more SSDs with relatively small SLC caches and noticeable write cliffs under sustained loads, especially at the lower capacities. If you copy a lot of large files, work with 4K video, or hammer your drives with game installs and updates, you will feel that more often.
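
If you want to see where a drive’s SLC cache runs out rather than take the spec sheet’s word for it, a crude test is to write a large amount of data in chunks and watch the per-chunk throughput. Below is a minimal Python sketch of that idea; the test path and sizes are placeholders, it needs enough free space on the target drive, and it is a blunt instrument compared to the sustained-write testing in a proper review.

```python
import os
import time

# Rough sketch: write a large file in fixed-size chunks and log per-chunk
# throughput. On drives with a small SLC cache, speeds typically fall off
# a cliff once the cache is exhausted. TEST_FILE is a placeholder path on
# the drive you want to probe; make sure you have the free space.
TEST_FILE = "/mnt/scratch/write_cliff_test.bin"   # hypothetical mount point
CHUNK_MB = 512
TOTAL_CHUNKS = 128        # 64 GiB total, enough to blow past most SLC caches
chunk = os.urandom(CHUNK_MB * 1024 * 1024)

with open(TEST_FILE, "wb") as f:
    for i in range(TOTAL_CHUNKS):
        start = time.perf_counter()
        f.write(chunk)
        f.flush()
        os.fsync(f.fileno())              # force the chunk onto the drive
        elapsed = time.perf_counter() - start
        print(f"chunk {i:03d}: {CHUNK_MB / elapsed:.0f} MB/s")

os.remove(TEST_FILE)
```

If the numbers stay roughly flat all the way through, the drive handles sustained writes well; a sharp drop partway through is the write cliff showing up.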

The irony is that AI also pushes enterprise vendors toward higher endurance and better QoS on their own SSDs. So at the top of the stack you get fancy, high endurance TLC and clever controllers tuned for consistent latency. At the bottom, you get very cost-optimised QLC designs hoping you never push them into their worst case behaviour long enough to notice.

How PC builders should adapt their SSD strategy

All of this sounds a bit doom and gloom, but from a practical PC-building perspective, you do have options. The trick is to stop assuming that SSD capacity will keep getting cheaper every quarter and start planning around a choppy, possibly flat pricing curve for a few years.

Buy capacity, not just speed

If NAND stays tight and SSD makers lean harder on QLC, one of the worst feelings in a couple of years’ time will be realising you cheaped out on capacity when it was relatively affordable. If you look at your own usage and know that 2TB is already marginal, it may be worth stretching to 4TB now rather than expecting that size to fall into bargain territory later.

In other words, move your mental slider from “I’ll get the fastest 1TB and upgrade later” to “I’ll get a sensible controller and NAND mix, but I want enough space to not care about every new game install.” For many gaming and general-purpose builds, that probably means treating 2TB as the realistic minimum and 4TB as a target if budget allows.
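
To make the “stretch to 4TB or not” question less of a gut call, it helps to put rough numbers on your own growth. The snippet below is a back-of-the-envelope headroom calculation; the usage and growth figures are invented placeholders, so swap in your own.

```python
# Back-of-the-envelope headroom check: given what you use today and a rough
# yearly growth rate, how long until a 2 TB or 4 TB drive feels cramped?
# All numbers below are illustrative assumptions, not measurements.
current_tb = 1.4           # what you actually use today, in TB
growth_per_year_tb = 0.4   # new games, projects, footage added per year
usable_fraction = 0.85     # keep roughly 15% free so the SSD can breathe

for capacity_tb in (2, 4):
    headroom_tb = capacity_tb * usable_fraction - current_tb
    years = headroom_tb / growth_per_year_tb
    print(f"{capacity_tb} TB drive: about {years:.1f} years before it feels full")
```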

Be picky about controllers and DRAM

In a tight NAND market, the corners get cut where most buyers do not look. That usually means cheaper controllers, DRAM-less designs, skinny SLC caches, and aggressive write-combining. If you can stomach reading a few proper SSD reviews before buying, it is worth favouring drives with:

  • Proven controllers from the big names (Phison, Innogrit, Silicon Motion, in-house controllers from Samsung/WD etc.).
  • Real DRAM, especially on 2TB and larger drives.
  • Decent sustained write behaviour once the SLC cache is exhausted.

You do not need to chase the absolute fastest PCIe 5.0 numbers. A good PCIe 4.0 drive with solid firmware will feel the same in most real workloads and will age better than a cheap Gen 5 model built to hit a headline speed at the expense of everything else.

Use multiple drives cleverly

One way to hedge against both pricing and endurance is to split roles. For example:

  • A smaller, higher quality TLC drive (1TB or 2TB) for your OS, main applications, and “I cannot lose this” data.
  • A larger, cheaper QLC drive (2TB–4TB) for game libraries, scratch space, and bulk storage where performance is less critical.

That way you put the heavier random write and mixed-load work on the SSD that can handle it, and you treat the cheaper, higher-capacity drive as semi-disposable bulk storage. If QLC gets hit harder by the pricing wave, you at least have options to juggle those roles or reuse drives across builds.

What this means for system integrators and OEMs

If you run a small system integration business or you are the “PC person” for a company, this NAND situation changes how you spec and price machines. The days of quietly slipping in a 2TB NVMe because “it hardly costs more” might end. Procurement will notice when storage lines jump, and you will have to justify it.

The key is to stop treating consumer SSD pricing as a given and start modelling a few scenarios: one where SSDs hold roughly steady but never get much cheaper, one where they rise 10–20 percent over the next couple of years, and one where high capacities become premium options again. If you quote for three years of workstation refresh or small server builds, that matters.
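
If that sounds abstract, a toy model is enough to see how far the scenarios diverge over a refresh cycle. The sketch below assumes a hypothetical 2TB drive price and made-up yearly movements for each scenario; it shows the shape of the exercise, not a forecast.

```python
# Toy scenario model for storage spend over a three-year refresh.
# Prices, volumes, and yearly movements are illustrative assumptions.
machines_per_year = 20
drive_price_today = 180.0          # assumed cost of the 2 TB drive you normally spec

scenarios = {
    "flat":        [0.00, 0.00, 0.00],   # prices hold steady, never get cheaper
    "creep":       [0.10, 0.05, 0.05],   # 10-20% rise spread over the period
    "premium_4tb": [0.20, 0.10, 0.00],   # high capacities re-rated as premium options
}

for name, yearly_moves in scenarios.items():
    price, total = drive_price_today, 0.0
    for move in yearly_moves:
        price *= 1 + move
        total += price * machines_per_year
    print(f"{name:12s} roughly {total:,.0f} spent on drives over three years")
```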

Enterprise and prosumer: think ahead on refresh cycles

For prosumer and small enterprise setups, the NAND supercycle message is simple. Do not assume that your next storage-heavy refresh will be cheaper than your last one. If anything, budget a little extra for SSDs, especially if you are planning on moving file servers or VMs from spinning disks to flash.

It also strengthens the case for proper monitoring and lifecycle planning. If replacing SSDs gets more expensive, you want good data on write amplification, remaining life and performance drift over time so that you are not yanking drives early “just in case”. Investing a bit of time in setting up SMART monitoring and logging now can pay for itself when the next refresh rolls around and you can confidently extend or shorten drive lifetimes based on real behaviour rather than gut feel.
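
As a starting point, something as simple as periodically dumping a few SMART counters to a log gives you the trend data to make those calls. Here is a minimal sketch, assuming smartmontools 7.0 or newer (for JSON output) is installed and the script has the privileges to query the device; the device path and log file name are placeholders.

```python
import datetime
import json
import subprocess

# Minimal periodic SMART logger. DEVICE and the log file name are placeholders;
# adjust for your drives and drop this into cron or a scheduled task.
DEVICE = "/dev/nvme0"

raw = subprocess.run(
    ["smartctl", "-a", "--json", DEVICE],
    capture_output=True, text=True, check=True,
).stdout
data = json.loads(raw)

# NVMe health fields reported by smartctl; names differ on SATA drives.
health = data.get("nvme_smart_health_information_log", {})
record = {
    "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
    "percentage_used": health.get("percentage_used"),
    "data_units_written": health.get("data_units_written"),
    "media_errors": health.get("media_errors"),
}

# Append to a JSON-lines file you can graph later to spot drift over time.
with open("ssd_health.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
print(record)
```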

Why vendors are not rushing to flood the market

Whenever a component gets tight, the natural question is “why don’t they just build more capacity?” The answer, as usual, is that fab capex is painful, slow, and not something you throw at a trend you are not sure will last. The AI boom looks durable, but NAND vendors have very fresh scars from overbuilding in the last cycle and watching prices collapse.

New 3D NAND fabs aimed at 300+ or 400+ layer processes are staggeringly expensive. They also require very tight integration between process, design, and packaging to get acceptable yields. If you are a NAND maker, you would rather sell everything you can build at a healthy margin for as many years as possible than rush into a buildout that might overshoot demand just as the AI curve flattens.

That is one reason Phison’s CEO talks about a decade of tightness. It is not that we physically cannot build enough NAND. It is that the industry has very little appetite for going back to the days of oversupply and bargain-bin pricing that wrecks earnings. A steady, controlled shortage looks much more attractive from a balance sheet perspective.

AI-specific SSDs hoover up the good stuff

Another twist is the emergence of AI-specific SSD designs. We already see Kioxia and Nvidia talking about 100 million IOPS drives aimed squarely at AI servers, with exotic controllers and firmware tuned for inference workloads. Those drives are not being built from junk wafers. They will use some of the best NAND available, and they will soak up controller engineering talent and packaging capacity.

When the high end of the SSD market forks into AI-optimised designs, it pulls effort and volume upward. That is great for the companies that win those design slots. It is less great for the part of the market that just wants reliable 4TB drives at sensible prices for creative workstations and gaming rigs.

How this plays with HDDs and tiered storage

One possible escape route is to fall back to old friends: spinning hard drives. For bulk cold storage, backup sets, and media libraries, HDDs are still hard to beat on sheer cost per terabyte. The problem is that the same AI data centers that are eating NAND are also starting to eat HDDs for colder tiers. Enterprise-class drives are already seeing extended lead times and firm pricing as cloud providers pre-book supply.

At home and in small offices, this is the right time to sit down and decide what really needs SSD-level performance and what does not. If you have a big Steam library full of titles you rarely touch, or terabytes of raw footage you only dip into occasionally, it may be time to normalise using HDDs for that and keeping SSDs for live projects, OS, and “hot” data.

That does not mean going back to the days of suffering through a 5,400 RPM system drive. It does mean accepting that tiered storage is not just something hyperscalers do. A small NAS with a couple of HDDs and perhaps an SSD cache layer can soak up a lot of data you do not want to pay flash prices for.

So, should you buy SSDs now or wait?

Any time someone talks about shortages and price spikes, the immediate instinct is to ask whether you should rush out and buy hardware before it gets worse. I am not a fan of panic buying, but there are some common sense guidelines you can follow given what we know.

If you are building or upgrading in the next 6–12 months

If you already planned a new build or a major upgrade in the next year, I would not delay it hoping SSDs get cheaper. Plan your storage budget assuming prices hold steady or tick up a little. If you can afford to stretch capacity now, especially to 2TB or 4TB on your main drive, it is worth considering.

The worst case is you end up having spent slightly more than the theoretical future minimum. The best case is that you avoid being stuck in 2027 with a cramped 1TB boot drive watching 4TB prices refuse to drop.

If you have plenty of SSD space already

If your current machine has enough SSD capacity and you are not regularly hitting 90–100 percent usage, you do not need to join any rush. Keep an eye on your usage trends and the market, but do not let “shortage” headlines push you into stockpiling drives you have no immediate use for.
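
Tracking that does not need anything fancy. A scheduled snapshot like the sketch below, with the paths adjusted for your own machine, is enough to tell you whether you are genuinely trending toward full drives or just reacting to headlines.

```python
import datetime
import shutil

# Quick usage snapshot to run on a schedule, so you can see how fast your
# drives actually fill up over months. The paths below are examples; use
# your own mount points or drive letters.
PATHS = ["/", "/mnt/games"]        # e.g. "C:\\", "D:\\" on Windows

for path in PATHS:
    usage = shutil.disk_usage(path)
    pct = usage.used / usage.total * 100
    print(f"{datetime.date.today()} {path}: "
          f"{usage.used / 1e12:.2f} TB of {usage.total / 1e12:.2f} TB used ({pct:.0f}%)")
```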

Use this time to tighten your backup and storage strategy instead. Make sure you have at least one external backup, ideally on a mix of SSD and HDD, and think about where a cheap NAS might fit into your setup if flash pricing gets uncomfortable.

If you are running lots of SSDs in a homelab

Homelab people are in an awkward spot. You are exactly the kind of user who can feel NAND pricing swings because you care about both capacity and behaviour under load. The good news is that you have the skills to be creative. RAIDed HDD pools with SSD caches, ZFS with special vdevs, careful use of SMR drives for really cold data – there are lots of tricks to reduce your dependence on high capacity SSDs without wrecking performance for the workloads that matter.

The general rule still applies: do not expect NVMe shelves to magically get cheaper. Treat any genuinely good SSD deal you see over the next year or two as an opportunity, but not as a guarantee that more will follow.

Final thoughts: the end of the “cheap SSD” mental model

The biggest shift here is psychological. For years, we have been conditioned to think of SSD pricing as a one-way street, with occasional bumps when a fab hiccups. Bigger drives get cheaper, then even bigger drives take their place at the top of the stack, and the cycle repeats. AI has broken that model, at least for now.

When a controller vendor like Phison says out loud that all of 2026’s NAND is essentially spoken for, and that shortages could linger for up to a decade, it is a sign that flash has moved into the same category as GPUs and high bandwidth memory. It is no longer just a silent component that tags along for the ride. It is a strategic resource that big players are willing to fight over and pre-pay for.

For PC builders, small shops, and enthusiasts, that does not mean SSDs vanish. It means we have to be a bit more deliberate. Buy capacity when it makes sense. Favour quality where it matters. Use HDDs and tiered storage to soak up the boring bits. And stop assuming that next year’s 4TB drive will be half the price just because the layer count went up again.

If the supercycle narrative plays out, we are not going back to 2019-style “1TB for pocket change” any time soon. But with a bit of planning, you can ride out the NAND crunch without letting it dictate every build decision you make.
