
> I expect the price of NVMe drives to drop over the next few years until they're cheap enough that the majority of computers are running NVMe drives.

Price no longer has anything to do with it. PC OEMs are simply not shipping SATA SSDs any more, and major drive vendors have started to discontinue their client (OEM) SATA SSD product lines. We're just waiting for the SATA-based PC install base to be retired.



My mobo has many more SATA ports than M.2 slots. I expect there will be hybrid systems for quite a while.


One SSD is sufficient for almost all consumer systems. The only reason to want more than two SSDs is if you're re-using at least one old tiny SSD in a new machine. SATA ports will stick around in desktops only for the sake of hard drives. There may be a few niches left where using several SATA SSDs in a workstation still makes some kind of sense, and obviously not all server platforms have migrated to NVMe yet. But as far as influencing the direction and design of consumer systems, SATA SSDs have only slightly more relevance than optical disc drives.


Drive price doesn't scale linearly with capacity; you can save a fair bit of money by sticking with multiple smaller-capacity drives instead of one big one.


Drive price per GB doesn't even scale monotonically with capacity. Right now, the best price per GB is usually on 1TB or 2TB models. And if you need more than two such devices with SSD performance, you're far outside the bounds of consumer computing and into workstation territory.


It depends on what you mean by "consumer computing", really. The single largest app using up my disk space is Steam, and if I only had a single 1 TB SSD, it'd be full by now.


I have 8 SATA SSDs in my workstation; are there motherboards that could run a similar NVMe setup?


With PCIe lane bifurcation, you won’t even need a PCIe switch on your expansion card. I have 10 Samsung 980 Pro PCIe SSDs in my AMD ThreadRipper PRO/WX machine (2 in motherboard M.2 slots, and 2 x “expansion cards” that hold 4 SSDs each). I had to configure PCIe bifurcation in the BIOS, so the lanes connected to a PCIe x16 slot are treated as 4 x PCIe x4 instead.

So far the best aggregate results with io_uring are 10.5M 4K IOPS and 66.5 GB/s with large reads...
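For anyone wondering what driving that kind of load looks like, here's a minimal liburing sketch of the submit/reap loop for 4K random reads against a single device. The device path, queue depth and I/O count are made-up placeholders, and the numbers above were presumably gathered with a proper tool like fio (--ioengine=io_uring); this only illustrates the pattern:

    /* Minimal io_uring 4K random-read sketch (build: gcc -O2 rr.c -luring).
       Illustrative only: device path, queue depth and I/O count are
       placeholders; a real benchmark (e.g. fio with --ioengine=io_uring)
       handles timing, multiple jobs, polling modes and error paths. */
    #define _GNU_SOURCE            /* for O_DIRECT */
    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define QD   64                /* queue depth per batch */
    #define BS   4096              /* 4K block size */
    #define IOS  (1 << 20)         /* total reads to issue */
    #define SPAN (1ULL << 30)      /* random offsets within the first 1 GiB */

    int main(void)
    {
        int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);  /* placeholder device */
        if (fd < 0) { perror("open"); return 1; }

        struct io_uring ring;
        if (io_uring_queue_init(QD, &ring, 0) < 0) return 1;

        void *bufs[QD];
        for (int i = 0; i < QD; i++)
            posix_memalign(&bufs[i], BS, BS);      /* O_DIRECT wants aligned buffers */

        long done = 0;
        while (done < IOS) {
            for (int i = 0; i < QD; i++) {         /* queue a batch of QD reads */
                struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
                off_t off = (off_t)((rand() % (SPAN / BS)) * BS);
                io_uring_prep_read(sqe, fd, bufs[i], BS, off);
            }
            io_uring_submit(&ring);
            for (int i = 0; i < QD; i++) {         /* reap the whole batch */
                struct io_uring_cqe *cqe;
                if (io_uring_wait_cqe(&ring, &cqe) == 0)
                    io_uring_cqe_seen(&ring, cqe);
                done++;
            }
        }
        io_uring_queue_exit(&ring);
        printf("completed %ld 4K reads\n", done);
        return 0;
    }

To aggregate across 10 drives you'd presumably run a loop like this per device (pinned to different cores) and sum the results.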


That's incredible, which PCIe cards are you using?


10 x Samsung 980 PRO SSDs (PCIe 4.0)

2 x ASUS Hyper M.2 X16 PCIe 4.0 X4 Expansion Card

When doing research, I was careful to buy only kit that can achieve full PCIe 4.0 speed and not some old PCIe 3.0 stuff that's "compatible with PCIe 4". This applies both to SSDs and expansion cards...

Edit: It's worth adding that your CPU(s)/mobo must have enough PCIe lanes when trying to get max throughput. 10 SSDs need 40 PCIe lanes dedicated to them (many HEDT CPUs top out at 36 or 44 lanes, mainstream consumer parts have far fewer, and some lanes are used for other stuff!). The new AMD ThreadRipper PRO WX has 128 :-)
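Back-of-envelope check (assuming ~1.97 GB/s of raw bandwidth per PCIe 4.0 lane, so ~7.9 GB/s per x4 link and ~7 GB/s of rated sequential reads per 980 PRO):

    10 SSDs x PCIe 4.0 x4  = 40 lanes
    10 x ~7 GB/s           ~ 70 GB/s theoretical aggregate

so the measured 66.5 GB/s is roughly 95% of what the drives themselves are rated for.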


I wasn't including the workstation market when I referred to what PC OEMs are doing.

Are you using 8 consumer SATA SSDs in your workstation? Is it for the sake of increased capacity, or for the sake of increased performance? Because it's pretty easy now to match the performance of an 8-drive SATA RAID-0 with a single NVMe drive, but 8TB consumer NVMe SSDs are still 50% more expensive than 8TB consumer SATA SSDs.

(Also, even 8 SATA ports is above average for consumer motherboards; it looks like about 17% of the retail desktop motherboard models currently on the market have at least 8 SATA ports.)


Increased capacity. I started with 4 spinning disks, replaced them with SSDs a while ago and then grew it to 8.


You can get NVMe PCIe cards that have an on-board PCIe switch. Random example: here's[1] one with 4 M.2 slots sharing an x8 PCIe slot.

Obviously sustained bandwidth is limited to that of effectively two NVMe devices (rough math below), but if you're doing lots of random I/O I guess it's a win.

[1]: https://www.aliexpress.com/item/4000034598072.html
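Rough math behind "effectively two devices", assuming the switch's uplink is PCIe 3.0 x8 (~0.985 GB/s per lane):

    x8 uplink          ~ 7.9 GB/s shared by all four drives
    each M.2 slot (x4) ~ 3.9 GB/s

Four drives behind the switch can collectively move only about as much data as two directly attached ones, but small random reads rarely saturate the uplink anyway.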


Yes, using PCIe expansion cards. I know of an AMD board that ships with 5 (3 on the board, 2 with a PCIe card). Could easily add more.


Is this with Threadripper boards?


3 M.2 slots is common even on AMD's mainstream X570 and B550 platforms. I don't know if any of those motherboards also bundle riser cards for further M.2 PCIe SSDs, but they do support PCIe bifurcation so you can run your GPU at PCIe 4.0 x8 and use the second x16/x8 slot to run two more SSDs in a passive riser purchased separately.


The B550 Aorus Master I just got is unique in its bifurcation. It either maps the CPU's Gen 4 PCIe lanes to the main x16 slot plus one M.2 x4 slot, or drops the x16 slot to x8 and then offers three Gen 4 M.2 x4 slots. The other two PCIe slots are Gen 3 through the B550 chipset.

I chose this board for its absurdly overkill CPU voltage regulation. The PCIe configuration seems like a pretty good compromise, though.


No, just an X570, the MSI “Godlike”. You can also just buy PCIe cards with M.2 slots for drives.


Right now that's more the domain of storage-oriented server platforms that can run on the order of 10-40 NVMe devices. You get this sort of imbalance where one or two high-performance NVMe devices at full throughput can push more I/O than a single high-end network link.
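Rough numbers for that imbalance (assuming ~7 GB/s per PCIe 4.0 x4 drive):

    one NVMe drive    ~  7 GB/s  ~  56 Gb/s
    two NVMe drives   ~ 14 GB/s  ~ 112 Gb/s
    one 100 GbE port  = 12.5 GB/s = 100 Gb/s

So two fast drives already outrun a single 100 GbE link, and even one drive fills more than half of it.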


Sure, you could run 24 NVMe drives with HighPoint PCIe 4.0 RAID cards on a TRX40 board. But most boards still have around 10 SATA ports, so you can run those as well. It will be great when SATA is replaced by U.2, but who knows when that happens.



