I highly doubt you'll find M.2 drives filling that niche (cheap boot drives for rack servers), either. 2.5" drives can be replaced without opening the machine, too, which is a major win - every time you pull the machine out on its rails and pop the top is another opportunity for cables to come out or other things to go wrong.
M.2 boot drives for servers have been popular for years. There's a whole product segment of server boot drives that are relatively low capacity, sometimes even using the consumer form factor (80mm long instead of 110mm) but still including power loss protection. Marvell even made a hardware RAID0/1 controller for NVMe specifically to handle this use case. Nobody's adding a SAS HBA to a server that didn't already need one, and nobody's making any cheap low-port-count SAS HBAs.
Anything 14th generation (x4x) or later has M.2 BOSS support, and in 2026 you shouldn't be buying anything older than 14th gen anyway. But yes, cheap SSDs serve well as ESXi boot drives.
I bought 2 of the 870 QVOs a few years ago and put them in software RAID 0 for my steam library. They cost significantly less per TB than the M.2 drives at the time.
It’s a shame. I’m really enjoying their SATA 8TB QLC SSDs in RAID0 for mostly read-only data. It seems like I cannot scale my system vertically in the same manner. :/
The storage markets I can think of, off the top of my head:
1. individual computers
2. hobbyist NAS, which may cross over at the high end into the pro audio/video market
3. enterprise
4. cloud
#1 is all NVMe. It's dominated by laptops, and desktops (which are still 30% or so of shipments) are probably at the high end of the performance range.
#2 isn't a big market, and takes what they can get. Like #3, most of them can just plug in SAS drives instead of SATA.
#3 - there's an enterprise market for capacity drives with a lower per-device cost overhead than NVMe - it's surprisingly expensive to build a box that will hold dozens of NVMe drives - but SAS is twice as fast as SATA, and you can re-use the adapters and mechanicals that you're already using for SATA. (pretty much every non-motherboard SATA adapter is SAS/SATA already, and has been that way for a decade)
#4 - cloud uses capacity HDDs and both performance and capacity NVMe. They probably buy >50% of the HDD capacity sold today; I'm not sure what share of the SSD market they buy. The vendors produce whatever the big cloud providers want; I assume this announcement means SATA SSDs aren't on their list.
I would guess that SATA will stay on the market for a long time in two forms:
- crap SSDs, for the die-hards on HN and other places :-)
- HDDs, because they don't need the higher SAS transfer rate for the foreseeable future, and for the drive vendor it's probably just a different firmware load on the same silicon.
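As a rough sanity check on the "SAS is twice as fast as SATA" point, here's a small sketch of nominal per-link throughput derived from line rates and encoding overhead (real drives land a bit below these numbers):

```python
# Nominal per-link throughput for the interfaces discussed above.
# Line rate divided by encoding overhead; real-world numbers land a bit lower.

links = {
    # name: (line rate in Gbit/s, encoded bits per payload byte)
    "SATA 3 (6 Gb/s, 8b/10b)":  (6.0, 10),
    "SAS-3 (12 Gb/s, 8b/10b)":  (12.0, 10),
    "PCIe 4.0 x4 (128b/130b)":  (16.0 * 4, 130 / 16),  # 16 GT/s per lane, 4 lanes
}

for name, (gbit, bits_per_byte) in links.items():
    mb_per_s = gbit * 1000 / bits_per_byte
    print(f"{name:26s} ~{mb_per_s:,.0f} MB/s")  # ~600, ~1200, ~7877
```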
I agree hobbyist NAS is niche but it's very useful: less noise, lower electricity bills, and not that much space either, e.g. if you can find 3x Samsung 870 QVO drives at 8TB, you can have a super solid 16TB NAS with redundancy (or 24TB without). Not to mention it's compact; an ITX-sized PC can do quite a lot of work.
SATA SSDs are in a weird space. HDDs are cheaper and more reliable for large storage pools. NVMe is everywhere, provides quick speeds, and can go even faster if you need that. There just aren't many use cases where SATA SSDs are the best option.
Are any SATA SSDs actually built to sink heat into the enclosure? E.g. the 860 Pro released in 2018 has a PCB taking up a third of its plastic enclosure with no heatsinks to speak of: https://www.myfixguide.com/samsung-860-pro-ssd-teardown/
And even in worst-case hammering of drives, thermally throttled NVMe drives can still sustain higher speeds than SATA drives.
Lots of consumer SATA SSDs don't have any thermal pads between the PCB and the case, and plastic cases are common. Heat just isn't a problem for a drive that's only drawing 2-3W under load.
And most consumer NVMe SSDs don't need any extra cooling for normal use cases, because consumer workloads only generate bursts of high-speed IO and don't sustain high power draw long enough for cooling to be a serious concern.
In the datacenter space where it is actually reasonable to expect drives to be busy around the clock, nobody's been trying to get away with passive cooling even for SATA SSDs.
SATA SSDs have one advantage though - capacity. You don't see M.2 form factor SSDs going well over 8TB, but in the larger SATA form factor you can find >8TB drives easily. Samsung had the best offering for this recently - the Samsung SSD 870 QVO.
The enterprise world has U.2, but us plebs don't really have a comparable alternative.
China has also wised up and is limiting supplies. Their B2C marketplaces are listing fewer and fewer >1TB SSDs, and even for the ones still on sale I've seen prices double in the span of two months.
> down-votes can't stop China. Tariffs can though...
People like you and me pay tariffs, not China. You realize that, right? And how will that stop China? Tariffs mostly hurt American consumers and producers. Just ask farmers.
First, cost != price. Pricing is in part based on competitive product availability. So if the cost of a product plus the tariff is greater than the price of a competing product, there is pressure to reduce that price. There's also pressure to produce elsewhere, such as domestically, to avoid the tariff altogether.
This is a large part of why the tariffs have in fact not had the dramatic impact on all pricing that some have suggested would happen. It's been largely a negotiation tactic first, and second, many products have plenty of margin and competition to allow for pricing to remain relatively level even in the face of tariffs... so it absolutely can, in fact, be a burden borne by Chinese manufacturers through lower margins, rather than US importers simply eating the cost of tariffs.
He's being downvoted because it's a dumb, knee-jerk comment. This has nothing to do with RAM, the thing getting really expensive at the moment, and Samsung isn't even stopping SSD production (which would be worth getting really mad about). It's about stopping production for a specific interface which has long since been saturated by even the cheapest, crummiest SSDs.
SATA SSDs don't really have much of a reason to exist anymore (and to the extent they do, certainly not by Samsung, who specializes in the biggest, baddest, fastest drives you can buy and is probably happy to leave the low end of the market to others).
There are more non-Crucial suppliers of Micron-based RAM than just Crucial... they can pick up the slack. Micron simply wanted to redirect resources to supporting larger contracts with other suppliers over direct consumer sales. The market isn't shrinking as a result.
I would suspect the same with Samsung exiting SATA (not NVMe) drives... their chips are likely to be used by other manufacturers, but even then maybe not, as SATA is much slower than what most solid-state memory and controllers are capable of supporting. There's also a massive low-end market of competition for SATA SSDs, and Samsung's sales there are likely not the best overall.
Samsung makes fast, expensive storage, but even cheap storage can max out SATA, so there's no point in Samsung trying to compete in the dwindling SATA space.
Does this mean that we'll start to see SATA replaced with faster interfaces in the future? Something like U.2/U.3 that's currently available to the enterprise?
The first NVMe over PCIe consumer drive was launched a decade ago.
It's hard to even find new PC builds using SATA drives.
SATA was phased out many years ago. The primary market for SATA SSDs is upgrading old systems or maybe the absolute lowest cost system integrators at this point, but it's a dwindling market.
It's for HDDs. We still use those for massive storage.
NVMe via m.2 remains more than fine for covering the consumer SSD use cases.
The problem is that you only get a pitiful number of M.2 slots on mainstream motherboards.
A lot of modern boards come with 3 or more - that's what mine has. And with modern density, that's a LOT of storage. I have two 4TB drives!
You could even get more using a PCIe NVMe expansion card, since it's all over PCIe anyway.
My desktop motherboard has 4... not sure how many you need, even if 8TB drives are pretty pricey. Though actual PCIe lanes in consumer CPUs are limited. If you bump up to Threadripper, you can use PCIe to M.2 adapters to add lots of drives.
On top of what the others have said, any faster interface you replace SATA with will have the same problem set because it's rooted in the total bandwidth to the CPU, not the form factor of the slot.
E.g. going with the suggested U.2 still leaves you looking for free PCIe lanes to feed it.
Three is not pitiful. Three is plenty for mainstream use cases, which is what mainstream motherboards are designed for.
We used to have motherboards with six or twelve SATA ports. And SATA HDDs have way more capacity than the paltry (yet insanely expensive) options available with NVMe.
We used to want to connect SSDs, hard drives and optical drives, all to SATA ports. Now, mainstream PCs only need one type of internal drive. Hard drives and optical drives are solidly out of the mainstream and have been for quite a while, so it's natural that motherboards don't need as many ports.
> Now, mainstream PCs only need one type of internal drive
More so, it would only need one drive. ODDs have been dead for at least 10 years and most people never need another internal drive at all.
I still use an ODD for ripping... that said, I'm using a USB3 BRW drive and it's been fine for what I need.
This article is talking about SATA SSDs, not HDDs. While the NVMe spec does allow for NVMe HDDs, it seems silly to waste even one PCIe lane on an HDD. SATA HDDs continue to make sense.
And I'm saying that assuming M.2 slots are sufficient to replace SATA is folly, because that only covers SSDs.
And SATA SSDs do make sense: they are significantly more cost-effective than NVMe and trivial to expand. Compare the simplicity, ease, and cost of building an array/pool of many disks composed of either 2.5" SATA SSDs or M.2 NVMe, and get back to me when you have a solution that can scale to 8, 14, or 60 disks as easily and cheaply as the SATA option can. There are many cases where the performance of SSDs going over AHCI (or SAS) is plenty and you don't need to pay the cost of going to full-on PCIe lanes per disk.
> And SATA SSDs do make sense, they are significantly more cost effective than NVMe
That doesn't seem to be what the vendors think, and they're probably in a better position to know what's selling well and how much it costs to build.
We're probably reaching the point where the up-front costs of qualifying new NAND with old SATA SSD controllers and updating the firmware to properly manage the new NAND is a cost that cannot be recouped by a year or two of sales of an updated SATA SSD.
SATA SSDs are a technological dead end that's no longer economically important for consumer storage or large scale datacenter deployments. The one remaining niche you've pointed to (low-performance storage servers) is not a large enough market to sustain anything like the product ecosystem that existed a decade ago for SATA SSDs.
It's not enough if you have four SSDs of 4TB each, for instance.
Is it not fair to say 4x4 TB SSDs is an example of at least a prosumer use case (the barrier there is more like ~10 drives before needing workstation/server gear)? Joe Schmoe is doing better than half of Steam gamers if he's rocking a single 2 TB SSD as his primary drive.
The MSI motherboard I use has 3, and with a PCIe expansion card installed, I have 7 M.2 drives. There are some expansion cards with 8 M.2 slots. You can also get SATA-to-M.2 devices, or my favorite is a USB-C enclosure that holds 2 M.2 drives. I'm getting great speeds from that little device.
Most consumer motherboards have 2-3 M.2 slots.
You can buy cheap add-in cards to use PCIe slots as M.2 slots, too.
If you need even more slots, there are add-in cards with PCIe switches which allow you to install 10+ M.2 drives via a single PCIe slot.
It's more likely that third-party integrators will look after the demand for SAS/SATA SSD devices, and the demand won't go away, because SAS multiplexers are cheap while NVMe/PCIe is point-to-point and expensive to make switching hardware for.
Likely we'd need a different protocol to make scaling up the number of high speed SSDs in a single box to work well.
SATA just needs to be retired. It's already been replaced; we don't need Yet Another Storage Interface. Considering consumer I/O chipsets are already implemented in such a way that they take 4 (or generally, a few) upstream lanes of $CurrentGenPCIe to the CPU and bifurcate/multiplex them out (providing USB, SATA, NVMe, etc. I/O), we should just remove the SATA cost/manufacturing overhead entirely and focus on keeping the cost of that PCIe switching/chipset down for consumers (and stop double-stacking chipsets, AMD; motherboards are pricey enough). Or even just integrate better bifurcation support on the CPUs themselves, as some already support it (typically via converting x16 on the "top"/"first" PCIe slot to x4/x4/x4/x4).
Going forward, SAS should just replace SATA where NVMe over PCIe is for some reason a problem (e.g. price), even on the consumer side, as it would still support existing legacy SATA devices.
Storage-related interfaces (I'm aware there's some overlap here, but the point is, there are already plenty of options and plenty of nuances to deal with; let's not add to them without good reason):
- NVMe PCIe
- M.2 and all of its keys/lengths/clearances
- U.2 (SFF-8639) and U.3 (SFF-TA-1001)
- EDSFF (which is a very large family of things)
- FibreChannel
- SAS and all of its permutations
- Oculink
- MCIO
- Let's not forget USB4/Thunderbolt supporting Tunnelling of PCIe
Obligatory: https://imgs.xkcd.com/comics/standards_2x.png
I think it's becoming reasonable to think consumer storage could be a limited number of soldered NVMe devices and NVMe M.2 slots, complemented by contemporary USB for more expansion. That USB expansion might be some kind of JBOD chassis, whether that's a pile of SATA drives or additional M.2 drives.
The main problem is having proper translation of device management features, e.g. SMART diagnostics or similar getting back to the host. But from a performance perspective, it seems reasonable to switch to USB once you are multiplexing drives over the same, limited IO channels from the CPU to expand capacity rather than bandwidth.
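On the SMART point: many (not all) USB-to-SATA bridge chips support SCSI/ATA Translation (SAT), which smartmontools can drive explicitly. A minimal sketch, assuming smartctl is installed and using a placeholder device path:

```python
# Minimal sketch: query SMART health through a USB-to-SATA bridge using
# smartmontools' SAT passthrough ("-d sat"). Assumes smartctl is installed and
# that the enclosure's bridge chip actually supports SAT; many cheap ones don't.
import subprocess
import sys

device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdX"  # placeholder path

result = subprocess.run(
    ["smartctl", "-d", "sat", "-H", device],  # -H: overall health self-assessment
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```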
Once you get out of this smallest consumer expansion scenario, I think NAS takes over as the most sensible architecture for small office/home office settings.
Other SAN variants really only make sense in datacenter architectures where you are trying to optimize for very well-defined server/storage traffic patterns.
Is there any drawback to going towards USB for multiplexed storage inside a desktop PC or NAS chassis too? It feels like the days of RAID cards are over, given the desire for host-managed, software-defined storage abstractions.
Does SAS still have some benefit here?
I wouldn't trust any USB-attached storage to be reliable enough for anything more than periodic incremental backups and verification scrubs. USB devices disappear from the bus too often for me to want to rely on them for online storage.
OK, I see that is a potential downside. I can actually remember way back when we used to see sporadic disconnects and bus resets for IDE drives in Linux and it would recover and keep going.
I wonder what it would take to get the same behavior out of USB as for other "internal" interconnects, i.e. treat it as attached storage and retry/reconnect instead of deciding any ephemeral disconnect is a "removal event"...?
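I don't know of a clean kernel-side knob for that; at the application level the best you can do is paper over short drops with a retry-and-reopen loop. A rough illustration of the idea only, not a real fix:

```python
# Illustrative only: retry reads across a transient USB disconnect by
# re-opening the device node. This papers over short drops at the application
# level; it is not the same as the kernel treating the disk as non-removable.
import time

DEVICE = "/dev/disk/by-id/usb-ExampleEnclosure-0:0"  # hypothetical stable path
RETRIES, DELAY_S = 10, 3.0

def read_block(offset: int, length: int) -> bytes:
    for attempt in range(1, RETRIES + 1):
        try:
            with open(DEVICE, "rb") as dev:
                dev.seek(offset)
                return dev.read(length)
        except OSError as exc:  # device vanished or I/O error mid-read
            print(f"attempt {attempt}: {exc}; waiting for the device to return")
            time.sleep(DELAY_S)
    raise RuntimeError("device did not reappear")

print(len(read_block(0, 4096)), "bytes read")
```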
FWIW, I've actually got a 1 TB Samsung "pro" NVMe/M.2 drive in an external case, currently attached to a spare Ryzen-based Thinkpad via USB-C. I'm using it as an alternate boot drive to store and play Linux Steam games. It performs quite well. I'd say it's qualitatively like the OEM internal NVMe drive when doing disk-intensive things, but maybe that is bottlenecked by the Linux LUKS full-disk encryption?
Also, this is essentially a docked desktop setup. There's nothing jostling the USB cable to the SSD.
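One way to check the LUKS guess is cryptsetup's built-in cipher benchmark; if the aes-xts numbers are well above what a 10 Gb/s USB link can carry (roughly 1 GB/s), encryption probably isn't the bottleneck. A small sketch, assuming cryptsetup is installed:

```python
# Quick check of whether dm-crypt/LUKS throughput could plausibly be the
# bottleneck: "cryptsetup benchmark" reports raw cipher speed in memory.
# Assumes cryptsetup is installed; LUKS2 defaults to aes-xts-plain64.
import subprocess

out = subprocess.run(["cryptsetup", "benchmark"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if "xts" in line:  # show only the XTS modes LUKS typically uses
        print(line)
```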
As @wtallis already said, a lot of external USB stuff is just unreliable.
Right now I'm looking over my display at 4 different USB-A hubs and 3 different enclosures that I'm not sure what to do with (I likely can't even sell them; they'd go for like 10-20 EUR and deliveries go for 5 EUR, so why bother; I'll likely just dump them at some point). _All_ of them were marketed as 24/7, not needing cooling, etc. _All_ of them could not last two hours of constant hammering, and it was not even a load at 100% of the bus; more like 60-70%. All began disappearing and reappearing every few minutes (I am presuming after the overheating subsided).
Additionally, for my future workstation at least I want everything inside. If I get an [e]ATX motherboard and the PC case for it then it would feel like a half-solution if I then have to stack a few drives or NAS-like enclosures at the side. And yeah I don't have a huge villa. Desk space can become a problem and I don't have cabinets or closets / storerooms either.
SATA SSDs fill a very valid niche to this day: quieter, less power-hungry, smaller NAS-like machines. Sure, not mainstream, and I get how giants like Samsung think, but to claim they are no longer desirable tech like many in this thread do is a bit misinformed.
I recognize the value in some kind of internal expansion once you are talking about an ATX or even uATX board and a desktop chassis. I just wonder if the USB protocol can be hardened for this using some appropriate internal cabling. Is it an intrinsic problem with the controllers and protocol, or more related to the cheap external parts aimed at consumers?
Once you get to uATX and larger, this could potentially be via a PCIe adapter card too, right? For an SSD scenario, I think some multiplexer card full of NVMe M.2 slots makes more sense than trying to stick to an HDD array physical form factor. I think this would effectively be a PCIe switch?
I've used LSI MegaRAID cards in the past to add a bunch of ports to a PC. I combined this with a 5-in-3 disk subsystem in a desktop PC. This is where the old 3x 5.25" drive bay space could be occupied by one subsystem with 5x 3.5" HDD hot-swap trays. I even found out how to re-flash such a card to convert it from RAID to a basic SATA/SAS HBA for JBOD service, since I wanted to use OS-based software RAID concepts instead.
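For the OS-managed route, once the card is flashed to plain HBA/JBOD duty the drives show up as ordinary block devices and something like mdadm (or ZFS) provides the redundancy. A hedged sketch that only assembles the command (device names are hypothetical placeholders):

```python
# Sketch of the OS-managed software RAID approach once the controller is in
# plain HBA/JBOD mode. Device names are hypothetical placeholders; verify them
# before running anything for real.
import subprocess

devices = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # 5-bay cage
cmd = [
    "mdadm", "--create", "/dev/md0",
    "--level=6",                         # RAID6: survives two drive failures
    f"--raid-devices={len(devices)}",
    *devices,
]
print("would run:", " ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment only after checking the device list
```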
I wonder if this move has anything to do with SATA SSDs being a common upgrade for older PCs, but those will just go in the trash now that Windows 10 is EOL and Windows 11 will refuse to run on most of them? (I assume only a small percentage will be switched to Linux instead.)
If I were to bet on my hunches: at least half, probably more, of that 20% buying SATA SSDs is momentum from people who didn't know they could get a better-performing M.2 NVMe drive for the same price. Few people are upgrading PCs with SSDs for the first time in 2025, and those that are probably didn't really need SATA; they just searched for SATA and saw SATA.
I don't really know how one would get numbers for any of the above one way or the other though.
I prefer SSDs because the connector is so much more accessible. Ripping out the video card and futzing with the pain of that tiny NVME screw is no fun.
I am almost never IO blocked where the performance difference between the two matters. I guess when I do the initial full backup image of my drive, but after that, everything is incremental.
> I prefer SSDs because the connector is so much more accessible. Ripping out the video card and futzing with the pain of that tiny NVME screw is no fun.
This doesn't make sense as written. I suspect you meant to say "SATA SSDs" (or just "SATA") in the first sentence instead of "SSDs", and M.2 instead of NVMe in the second sentence. This kind of discussion is much easier to have when it isn't polluted by sloppy misnaming.
Keep in mind 'SATA SSD' != '2.5" SSD' as m.2 SSDs can be SATA as well.
Even then, I suppose how the M.2 vs 2.5" SATA mounting comparison turns out depends on the specific system. E.g. on this PC the main NVMe slot is above the GPU, but mounting a 2.5" SSD is 4 screws on a custom sled plus cabling once mounted. If it were the other way around, with the NVMe slot screw-in only below the GPU while the SSD had an easy mount, then it might be a different story.
I remember running my first NVMe drive via a PCIe adapter on my i7-4790K about a decade ago... man, that was a game changer for sure, almost as much as going from HDD to SSD on SATA around 2009.
On the other hand, NVMe has been around for >10 years (since Z97 in 2014, I guess).
I've been buying only Samsung for about seven or eight years. I got a four-bay M.2 Thunderbolt 4 RAID enclosure in 2022 and I couldn't be happier with it. It absolutely smokes everything else I have (other than my internal SSD).
Tech news has been quite the bummer in the last few months. I'm running out of things to anticipate in my nerd hobby.
Anything exciting has been in NVMe over various physical form factors for the last decade; discontinuing SATA only helps on the future-anticipation front.
I've noticed there aren't a lot of reasonable home/SMB M.2 NVMe NAS options for motherboards and enclosures.
SATA SSDs still seem like the way you have to go for a 5 to 8 drive system (boot disk + 4+ drives in RAID6).
It seems like it's rare to find M.2 drives with the sort of things you'd want in a NAS (PLP, reasonably high DWPD, good controllers, etc.), and you've also got to contend with managing heat in a way I never had to with 2.5" or 3.5" drives. I would imagine the sort of people doing NVMe for NAS/SAN/servers are all probably using U.2 or U.3 (I know I do).
I've been doing my home NASes in m.2 NVMe for years now with 12 disks on one and 22 disks on another (backup still HDD though):
DWPD: Between the random TeamGroup drives in the main NAS and the WD Red Pro HDDs in the backup, the write limits are actually about the same, with the bonus that reads are free on the SSDs: things like scheduled ZFS scrubs don't count as 100 TB of usage across the pool each time (rough endurance math below).
Heat: Actually easier to manage than the HDDs. The drives are smaller (so denser for the same wattage), but the peak wattage is lower than the idle spinning wattage of the HDDs and there isn't a large physical buffer between the hot parts and the airflow. My normal case airflow keeps them at <60 °C under sustained benching of all of the drives raw, and more like <40 °C given ZFS doesn't like to go above 8 GB/s in this setup anyway. If you select $600 top-end SSDs with high-wattage controllers shipping with heatsinks you might have more of a problem; otherwise it's like 100 W max for the 22 drives and easy enough to cool.
PLP: More problematic if this is part of your use case, as NVMe drives with PLP will typically lead you straight into enterprise pricing. Personally my use case is more "on demand large file access" with extremely low churn data regularly backed up for the long term and I'm not at a loss if I have an issue and need to roll back to yesterday's data, but others who use things more as an active drive may have different considerations.
The biggest downsides I ran across were:
- Loading up all of the lanes on a modern consumer board works in theory but can be buggy as hell in practice: anything from boot becoming EXTREMELY long, to sometimes not working at all, to PCIe errors during operation. A used Epyc in a normal PC case is the way to go instead.
- It costs more, obviously
- Not using a chassis designed for massive numbers of drives with hot-swap access can make installation and troubleshooting quite the pain.
The biggest upsides (other than the obvious ones) I ran across were:
- No spinup drain on the PSU
- No need to worry about drive powersaving/idling <- pairs with -> whole solution is quiet enough to sit in my living room without hearing drive whine.
- I don't look like a struggling fool trying to move a full chassis around :)
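To put rough numbers on the DWPD comparison above, endurance ratings convert to total bytes written straightforwardly; the sketch below uses placeholder ratings, not the specs of any particular drive:

```python
# Rough endurance arithmetic for the DWPD comparison above.
# The ratings below are placeholders, not the specs of any particular drive.

def tbw(capacity_tb: float, dwpd: float, warranty_years: float) -> float:
    """Total terabytes written permitted by a DWPD rating over the warranty."""
    return capacity_tb * dwpd * 365 * warranty_years

# e.g. a 4 TB SSD rated at 0.3 drive-writes-per-day for 5 years:
print(f"{tbw(4, 0.3, 5):,.0f} TB written allowed")   # ~2,190 TB
# versus an HDD with a workload rating of, say, 300 TB/year for 5 years,
# where the workload rating counts reads as well as writes:
print(f"{300 * 5:,} TB transferred allowed")         # 1,500 TB
```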
It's also quite difficult to find a 2280 M.2 SATA SSD. I had an old laptop that only takes 2280 M.2 SATA SSDs.
It's always one of the two: M.2 but PCIe/NVMe, or SATA but not M.2.
Fwiw, SATA and NVMe are mutually incompatible concepts for a single device; SATA drives use AHCI to wrap ATA commands in a SCSI-shaped queuing mechanism called command lists over the SATA bus, while NVMe (M.2/U.2/add-in) drives talk NVMe protocol (multiple queues) over PCIe.
For a drive, yes, SATA and NVMe are mutually exclusive. The M.2 slot can provide both options. But if you have a machine with a M.2 slot that's only wired for SATA but not PCIe, your choices for drives to put in that slot have been quite limited for a long time.
There were even M.2 PCIe-connected AHCI drives - both not-SATA and not-NVMe. The Samsung SM951 was one. You can find them on eBay but not much elsewhere.
At least the Samsung and SanDisk PCIe AHCI M.2 drives were only for PC OEMs and were not officially sold as retail products. There were gray-market resellers, but overall it was a niche and short-lived format. Especially because any system that shipped with a PCIe M.2 slot could gain NVMe capability if the OEM deigned to release an appropriate UEFI firmware update.
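Since connector, form factor, and protocol get conflated a lot in this thread, here's a quick way to see which transport each disk in a box is actually using; a small sketch assuming util-linux's lsblk is available:

```python
# Show which transport each whole disk is actually using (sata, nvme, usb, ...),
# since the M.2 form factor alone doesn't tell you. Relies on util-linux's lsblk.
import subprocess

out = subprocess.run(
    ["lsblk", "--nodeps", "-o", "NAME,TRAN,MODEL,SIZE"],  # --nodeps: disks only
    capture_output=True, text=True,
).stdout
print(out)
```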
When it comes to ready-made home/SMB-grade NASes, plenty of options have popped up in the last year or two: Terramaster F8, Flashstor 6 or 12, BeeLink ME mini N150 (6x NVMe). It's just QNAP and Synology who seem not interested.
Probably because QNAP and Synology pricing is rent-seeking behavior built on per-drive-bay pricing models.
How well do PCIe to M.2 adapters work for a custom NAS? Slot-wise you should be able to get 16 M.2 devices per motherboard with, for example, a Supermicro consumer board.
The difficulty with PCIe to M.2 adapters is you usually can't use bifurcation below x4, and active PCIe switches got very expensive after PCIe 3.0.
Used multiport SATA HBA cards are inexpensive on eBay. Multiport NVMe cards are either passive (relying on bifurcation, giving you 4x x4 from an x16 slot) or active and very expensive.
I don't see how you get to 16 m.2 devices on a consumer socket without lots of expense.
Not to mention, the physical x16 slot may be running in x8 mode if you're using a video card.
I don't think there are any consumer boards which support this?
In practice you can put 4 drives in the x16 slot intended for a GPU, 1 drive each in any remaining PCIe slots, plus whatever is available onboard. 8 should be doable, but I doubt you can go beyond 12.
I know there are some $2000 PCIe cards with onboard switches so you can stick 8 NVMe drives on there - even with an x1 upstream connection - but at that point you're better off going for a Threadripper board.
Can you point to a specific motherboard? 16 separate PCIe links of any width sounds rather high for a consumer platform.
C9X299-RPGF
https://www.supermicro.com/en/products/motherboard/C9X299-RP...
A few generations old, and HEDT, which isn't exactly consumer but ok. I see one for $100 on ebay, so that's not awful either.
Even that gives you one M.2 slot, and 8/8/8/16 on the x16 slots, if you have the right CPU. Assuming those can all bifurcate down to x4 (which is most common), that gets you 10 M.2 slots out of the 40 lanes. That's more than you'd get on a modern desktop board, but it's not 16 either.
For home use, you're in a tricky spot; can't get it in one box, so horizontal scaling seems like a good avenue. But in order to do horizontal scaling, you probably need high speed networking, and if you take lanes for that, you don't have many lanes left for storage. Anyway, I don't think there's much simple software to scale out storage over multiple nodes; there's stuff out there, but it's not simple and it's not really targeted towards a small node count. But, if you don't really need high speed, a big array of spinning disks is still approachable.
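The lane math above generalizes into a quick budget calculation. A sketch using the numbers from this subthread (x4 per drive, passive bifurcation only down to x4, slot layouts as examples):

```python
# Quick lane-budget arithmetic for "how many x4 M.2 drives fit", assuming
# passive bifurcation only down to x4 (no PCIe switch cards). Slot layouts
# below are examples taken from this subthread.
LANES_PER_DRIVE = 4

def m2_drive_count(slot_widths, onboard_m2=0):
    return onboard_m2 + sum(width // LANES_PER_DRIVE for width in slot_widths)

hedt_slots = [16, 8, 8, 8]    # 40 CPU lanes spread across four slots
desktop_slots = [16]          # a single x16 slot left free for bifurcation
print("HEDT board:   ", m2_drive_count(hedt_slots, onboard_m2=1), "drives")    # 11
print("desktop board:", m2_drive_count(desktop_slots, onboard_m2=2), "drives") # 6
```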
That's a workstation board, not a regular consumer board, and it is over 5 years old by now - it has even been discontinued by Supermicro.
Building a new system with that in 2025 would be a bit silly.
If you want to go big in capacity, which is something you usually want for NAS, m.2 becomes super expensive.
I don't know if you consider it "reasonable", but the Gigabyte Aorus TRX boards even from 6 years ago came with a free PCIe expansion card that held 8 M.2 sticks, up to 32 TB on a consumer board. It's eATX, of course, so quite a bit bigger than an appliance NAS, and the socket is for a Threadripper, more suitable for a hypervisor than a NAS, but if you're willing to blow five to ten grand and be severely overprovisioned, you can build a hell of a rig.
Are you sure? I've seen plenty of motherboards bundle a PCIe riser to passively bifurcate the PCIe slot to support four M.2 drives in an x16 slot or two in an x8 slot, but doing eight M.2 drives in one PCIe slot would either require a PCIe switch that would be too expensive for a free bundled card, or require PCIe bifurcation down to two lanes per link, which I don't think any workstation CPUs have ever supported. And 32TB is possible with just four M.2 SSDs.
What I want to know is if this is the beginning of the end of the SATA era. Once one major player leaves, others are sure to follow, and soon quality no longer matters, and finally the tech atrophies. I don't want to be forced to have my spinning platters connected via NVMe and a series of connector adapters.
Consumer chipsets have long supported USB, SATA, and PCIe using shared PHYs giving motherboard vendors some flexibility to decide which IO lanes they would like to wire up to SATA connectors vs USB connectors vs PCIe/M.2 connectors. (This worked great when all three were in the 5-6Gbps range.) Since SATA is now the slowest of those three interfaces, it doesn't really drive up the die cost much. It's pretty cheap for the chipset to continue to have a few SATA MACs on die, and giving the motherboard vendor the option to use the PHYs for USB or PCIe means there's no significant opportunity cost or inflation to the pin count to support SATA.
We've already seen the typical number of SATA ports on a consumer desktop motherboard drop from six to four or two. We'll probably go through a period where zero is common but four is still an option on some motherboards with the same silicon, before SATA gets removed from the silicon.
PCIe SATA adapters will likely be around forever. They may be problematic to boot from, but A) I'm sure your OS isn't on a spinning disk, and B) by the time PCIe SATA adapters disappear the entire concept of a PC will be an outlawed or legacy retro thing anyway.
Agreed... although it would be nice to find easier support for older formats, they do stick around for a while. I've got a 3.5" USB floppy in my garage that's been sitting for over a decade now, and even then it was long after CD/BR and thumb drives were the norm.
> I don't want to be forced to have my spinning platters connected via NVM
It's called a PCIe disk controller, and you've just grown accustomed to having one built into the south bridge.
> I don't want to be forced to have my spinning platters connected via NVMe and a series of connector adapters.
I want to build a mini PC-based 3D printed NAS box with a SATA backplate with that exact NVME connector adapter setup!
https://makerworld.com/en/models/1644686-n5-mini-a-3d-printe...
The reality is, as long as you have PCIe you can do pretty much whatever you want, and it's not a big deal.
Techradar article may be fake or a rumor
https://wccftech.com/no-samsung-isnt-phasing-out-of-the-cons...
I can't say I'm surprised, but I am disappointed. The SATA SSD market has basically turned into a dumping ground for low quality flash and controllers, with the 870s being the only consistently good drives still in production after Crucial discontinued the MX500.
It's the end of an era.
The thing is, what's the market for them?
If you care even remotely about speed, you'll get an NVMe drive. If you're a data hoarder who wants to connect 50 drives, you'll go for spinning rust. Enterprise will go for U.3.
So what's left? An upgrade for grandma's 15-year-old desktop? A borderline-scammy pre-built machine where the listed spec is "1TB SSD" and they used the absolute cheapest drive they can find? Maybe a boot drive for some VM host?
Cheaper, sturdier, and more easily swappable than NVMe while still being far faster than spinning disks. I use them basically as independent cartridges: this one's work, that one's a couple TB of raw video files plus the associated editor project, that one has games and movies. I can confidently travel with 3-4 unprotected in my bag.
There's probably a similar-cost USB-C solution these days, and I use a USB adapter if I'm not at my desktop, but in general I like the format.
Did that for a while until I invested in a NAS... at that point those early SSDs became drives for my RPi projects, which worked well enough until I gave all my RPi hardware away earlier this year... those 12+ year old SSDs are still running without issue.
Where do you add more storage after you've used up your one or two NVMe M.2 slots?
I would think an SSD is going to be better than a spinning disk, even with the limits of SATA, if you want to archive things or work with larger data or whatever.
Counterpoint: who needs that much fast storage?
4 M.2 NVMe drives is quite doable, and you can put 8TB drives in each. There are very few people who need more than 32TB of fast data access, who aren't going to invest in enterprise hardware instead.
Pre-hype, for bulk storage SSDs are around $70/TB, whereas spinning drives are around $17/TB. Are you really willing to pay that much more for slightly higher speeds on that once-per-month access to archived data?
In reality you're probably going to end up with a 4TB NVMe drive or two for working data, and a bunch of 20TB+ spinning drives for your data archive.
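To put those $/TB figures in perspective (archive size picked arbitrarily for illustration):

    # rough cost comparison at the ballpark prices quoted above
    ssd_per_tb = 70    # bulk SSD, USD/TB (pre-hype)
    hdd_per_tb = 17    # capacity HDD, USD/TB
    archive_tb = 40    # arbitrary archive size
    print(ssd_per_tb * archive_tb)  # 2800
    print(hdd_per_tb * archive_tb)  # 680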
You can actually get a decent 4TB USB-C drive from Samsung. For most home users those are fast and big enough. If you get a mac, the SSD is soldered on the main board typically. And you can get up to 8TB now. That's a trend that some other laptop builders are probably following. There's no need for separate SATA drives anymore except for a shrinking group of enthusiast home builders.
I have a couple of 2TB USB-C SSDs. I haven't bought a separate SATA drive in well over a decade. My last home built PC broke around 2013.
Only SATA made it common for motherboards or adapters to support more than 2-4 hard drives. We're back to what we used to do before SATA: when you're out of space you replace the smallest drive with something larger.
There are SATA SSD enclosures for M.2 drives. Those are cheap enough now that granny can still upgrade her old PC on the cheap.
Link? An adapter allowing a M.2 SATA SSD to be used in a 2.5" SATA enclosure is cheap and dead simple: just needs a 5V to 3.3V regulator. But that doesn't help. Connecting a M.2 NVMe SSD to a SATA host port would be much more exotic, and I don't recall ever hearing about someone producing the silicon necessary to make that work.
PCIe expansion cards? SATA isn't free and takes away from having potentially more PCIe lanes, so the only real difference here is the connector.
PCIe expansion card with M.2 slots?
(SSDs are "fine", just playing devil's advocate.)
> Maybe a boot drive for some VM host?
Actually that's a really common use - I've bought a half dozen or so Dell rack mount servers in the last 5 years or so, and work with folks who buy orders of magnitude more, and we all spec RAID0 SATA boot drives. If SATA goes away, I think you'll find low-capacity SAS drives filling that niche.
I highly doubt you'll find M.2 drives filling that niche, either. 2.5" drives can be replaced without opening the machine, too, which is a major win - every time you pull the machine out on its rails and pop the top is another opportunity for cables to come out or other things to go wrong.
M.2 boot drives for servers have been popular for years. There's a whole product segment of server boot drives that are relatively low capacity, sometimes even using the consumer form factor (80mm long instead of 110mm) but still including power loss protection. Marvell even made a hardware RAID0/1 controller for NVMe specifically to handle this use case. Nobody's adding a SAS HBA to a server that didn't already need one, and nobody's making any cheap low-port-count SAS HBAs.
Anything from the x4x generation onward has M.2 BOSS support, and in 2026 you shouldn't be buying anything older than 14th gen anyway. But yes, cheap SSDs serve well as ESXi boot drives.
I bought 2 of the 870 QVOs a few years ago and put them in software RAID 0 for my steam library. They cost significantly less per TB than the M.2 drives at the time.
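Back-of-the-envelope for what that stripe buys you, assuming each drive tops out near the SATA III ceiling (~550 MB/s sequential; exact figures vary by drive):

    # rough aggregate sequential throughput of a two-drive SATA RAID 0
    per_drive_mb_s = 550   # approx. SATA III limit
    drives = 2
    print(per_drive_mb_s * drives)  # ~1100 MB/s, still well short of a single x4 NVMe drive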
I have some older SATA SSDs in my PC currently. I'd not buy a new one; too slow compared to NVMe.
It’s a shame. I’m really enjoying their SATA 8TB QLC SSDs in RAID0 for mostly read-only data. It seems like I cannot scale my system vertically in the same manner. :/
The storage markets I can think of, off the top of my head: 1. individual computers, 2. hobbyist NAS (which may cross over at the high end into the pro audio/video market), 3. enterprise, 4. cloud.
#1 is all NVMe. It's dominated by laptops, and desktops (which are still 30% or so of shipments) are probably at the high end of the performance range.
#2 isn't a big market, and takes what they can get. Like #3, most of them can just plug in SAS drives instead of SATA.
#3 - there's an enterprise market for capacity drives with a lower per-device cost overhead than NVMe - it's surprisingly expensive to build a box that will hold dozens of NVMe drives - but SAS is twice as fast as SATA, and you can re-use the adapters and mechanicals that you're already using for SATA. (pretty much every non-motherboard SATA adapter is SAS/SATA already, and has been that way for a decade)
#4 - cloud uses capacity HDDs and both performance and capacity NVMe. They probably buy >50% of the HDD capacity sold today; I'm not sure what share of the SSD market they buy. The vendors produce whatever the big cloud providers want; I assume this announcement means SATA SSDs aren't on their list.
I would guess that SATA will stay on the market for a long time in two forms:
- crap SSDs, for the die-hards on HN and other places :-)
- HDDs, because they don't need the higher SAS transfer rate for the foreseeable future, and for the drive vendor it's probably just a different firmware load on the same silicon.
I agree hobbyist NAS is niche but it's very useful: less noise, lower electricity bills, and not that much less capacity, i.e. if you can find 3x Samsung 870 QVO drives at 8TB, you can have a super solid 16TB NAS with redundancy (or 24TB without). Not to mention compact; you can have an ITX-sized PC do quite a lot of work.
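The capacity math there, assuming single-drive redundancy (RAID 5 / RAIDZ1-style parity):

    # usable capacity of 3 x 8 TB, with and without one drive of parity
    drives, tb_each = 3, 8
    print(drives * tb_each)        # 24 TB, no redundancy
    print((drives - 1) * tb_each)  # 16 TB with single-parity redundancy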
Earlier: https://news.ycombinator.com/item?id=46266070
Probably no longer profitable and they can change that production capacity to something that is.
I haven't even seen a SATA SSD in 5+ years. Don't know anyone that uses them.
SATA SSDs are in a weird space. HDDs are cheaper and more reliable for large storage pools. NVMe is everywhere, provides those quick speeds, and is even faster if you need that. There just aren't many use cases where SATA SSDs are the best option.
A SATA SSD has a huge heatsink attached to it, which is crucial for 24/7 use. NVMe needs active cooling to survive.
Are any SATA SSDs actually built to sink heat into the enclosure? E.g. the 860 Pro released in 2018 has a PCB taking up a third of the plastic enclosure, with no heatsinks to speak of: https://www.myfixguide.com/samsung-860-pro-ssd-teardown/
And even in worst-case hammering of drives, thermally throttled NVMe drives can still sustain higher speeds than SATA drives.
It isn't plastic, though, it's aluminum.
Lots of consumer SATA SSDs don't have any thermal pads between the PCB and the case, and plastic cases are common. Heat just isn't a problem for a drive that's only drawing 2-3W under load.
And most consumer NVMe SSDs don't need any extra cooling for normal use cases, because consumer workloads only generate bursts of high-speed IO and don't sustain high power draw long enough for cooling to be a serious concern.
In the datacenter space where it is actually reasonable to expect drives to be busy around the clock, nobody's been trying to get away with passive cooling even for SATA SSDs.
SATA SSDs have one advantage though - their size. You don't see M.2 form factor SSDs going well over 8TB, but in the larger SATA form factor you can find >8TB drives easily. Samsung had the best offering for this recently, the Samsung SSD 870 QVO. The enterprise world has U.2, but us plebs don't really have a comparable alternative.
No advantage over SAS here - it's the same form factor.
Problem here is I haven't seen SAS connectors on any consumer motherboard.
Yeah, you need an adapter. Search on eBay for "freeNAS" and "LSI" and you'll find a bunch listed for way under $100.
Fsck this cartel... I hope China will fill these gaps and help restore normal prices.
China has also wised up and is limiting supplies. Their B2C marketplaces are offering fewer and fewer >1TB SSDs, and even for those still on sale I've seen prices double in the span of two months.
They aren't limiting supplies, they can't scale up the production: https://www.reuters.com/commentary/breakingviews/chinas-chip...
You will be down-voted to hell for this comment, but luckily their down-votes can't stop China. Tariffs can though...
> down-votes can't stop China. Tariffs can though...
People like you and me pay tariffs. Not China. You realize that, right? And how will that stop China? Tariffs mostly hurt American consumers and producers. Just ask farmers.
First, cost != price. Pricing is in part based on competitive product availability. So if the cost of a product + tariff is greater than the cost of a competing product, there is pressure to reduce that cost. There's also pressure to produce elsewhere, such as domestically to avoid the tariff altogether.
This is a large part of why the tariffs have in fact not had the dramatic impact on all pricing that some have suggested would happen. It's been largely a negotiation tactic first; and second, many products have plenty of margin and competition to allow pricing to remain relatively level even in the face of tariffs... so it absolutely can, in fact, be a burden borne by Chinese manufacturers lowering their margins rather than US importers simply eating the cost of tariffs.
He's being downvoted because it's a dumb, knee-jerk comment. This has nothing to do with RAM, the thing getting really expensive at the moment, and Samsung isn't even stopping SSD production (which would be worth getting really mad about). It's about stopping production for a specific interface which has long since been saturated by even the cheapest, crummiest SSDs.
SATA SSDs don't really have much of a reason to exist anymore (and to the extent they do, certainly not by Samsung, who specializes in the biggest, baddest, fastest drives you can buy and is probably happy to leave the low end of the market to others).
Funnily enough, I wasn't even downvoted yet :D
But you see, it's hard to post smarter comments when the title and the article don't help...
People complain about slow storage, yet at the same time they seem happy to pay AWS 10x for less IOPS and bandwidth.
What does this have to do with consumer SATA SSDs?
If Samsung (maybe) ends SSD production and Crucial is exiting the consumer business, what is the next best alternative for SSD products?
I thought Samsung was the de facto choice for high-quality SSD products.
SATA, not NVMe; they will still be making SSDs.
There are more non-Crucial suppliers of Micron-based RAM than Crucial itself... they can pick up the slack. Micron simply wanted to redirect resources to supporting larger contracts with other suppliers over direct consumer support. The market isn't shrinking as a result.
I would suspect the same with Samsung exiting SATA (not NVMe) drives... their chips are likely to be used by other manufacturers, though maybe not, since SATA is much slower than what most solid-state memory and controllers are capable of supporting. There's also a massive low-end market of competition for SATA SSDs, and Samsung's sales there are likely not the best overall.