I suppose the only way to get them is to wait until the drives are sold as refurbished.
Buy server, sell drives, steal underpants, profit!
Wait until next year, when Seagate launches its first 30TB HAMR HDD.
So in 2024 then. Let’s wait for the price.
That is what next year means in this case, yes.
Congrats on knowing it’s currently 2023 I guess.
Will never buy anything Seagate ever again! Fuck them!
Shame this sub doesn’t have an original source policy, because this regurgitated article absolutely sucks.
Will never buy Seagate again. .|. them. Worst support, RMA, and customer care ever.
From a quick skim, the author seems not to have noted that these are host-managed, so they're not particularly useful individually.
Instead of all of the management logic living on the drive itself, it's handled at the appliance level to balance load across disks, so these wouldn't work in standard NAS devices unless Seagate provides a binary or API for Synology or QNAP to implement in their firmware.
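For anyone unfamiliar with what host-managed means in practice (assuming these follow the usual zoned-storage model, which the article doesn't actually spell out): the drive exposes zones that only accept sequential writes at a per-zone write pointer, and tracking those pointers is the host's job. A toy Python sketch, with a made-up zone size, just to illustrate the constraint:

```python
# Toy model of a host-managed zone: writes are only accepted
# sequentially at the write pointer. The zone size is a made-up
# number, purely for illustration.
class Zone:
    def __init__(self, size_mb=256):
        self.size = size_mb
        self.write_pointer = 0  # next writable offset in the zone

    def write(self, offset, length_mb):
        if offset != self.write_pointer:
            # The drive rejects anything non-sequential; the host
            # (here, the enclosure controller) has to track this.
            raise IOError(f"write at {offset}, pointer at {self.write_pointer}")
        if offset + length_mb > self.size:
            raise IOError("write crosses zone boundary")
        self.write_pointer += length_mb

zone = Zone()
zone.write(0, 64)      # OK: sequential from the start of the zone
zone.write(64, 64)     # OK: continues at the write pointer
try:
    zone.write(0, 16)  # rejected: no in-place overwrites
except IOError as e:
    print("drive said no:", e)
```

A normal NAS filesystem assumes it can overwrite any block in place, which is why a drive like this wouldn't just drop into a Synology.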
Soon, 100TB in a 3.5" for $500!
Already a thing if you ignore that price thing
I’m not touching Seagate drives ever again… Fucking pile of rust.
What brand do you trust?
I have lost 12 Seagate drives in the last 10 years. I have had good success with Toshiba and WD. I always check Backblaze drive stats now before buying new drives.
m-m-m-must not f-f-fap
That’s a big rack.
But Solidigm SSDs are still ahead on power, speed, and capacity at 61.44 TB.
Rather read the original source, which is way better: https://blocksandfiles.com/2023/11/15/seagate-hamr-drives-come-to-corvault/
I wonder how they do this. Are the drives even SAS/NVMe/some standard interface, or are they fully proprietary? What “logic” is being done on the controller/backplane vs. in the drive itself?
If they have moved significant amounts of logic, such as bad-block management, to the backplane, it's an interesting further example of the tech industry coming "full circle" (e.g. we started out using terminals, then went to locally running software, and now we're slowly moving back towards hosted software via web apps/VDI). I see no practical reason to do this other than (theoretically) reducing manufacturing costs and (definitely) pushing vendor lock-in. Not like we haven't seen that sort of thing before, though, e.g. NetApp messing with firmware on drives.
However, if they just mean that the 29TB disks are standard SAS drives, that the enclosure firmware implements some sort of proprietary filesystem, and that the disks are only officially supported in their enclosure but could operate on their own as big 29TB drives, then we could in theory get these drives used and stick them in any NAS running ZFS or similar. (I'm reminded of how Intel originally pitched the small 16/32GB Optanes as "accelerators", and for a short time people weren't sure if you could just use them as tiny NVMe SSDs; it turned out you could. I have a Linux box that uses a 16GB Optane as a boot/log/cache drive and it works beautifully. Similarly, those 800GB "Oracle accelerators" are just SSDs; one of them is the VM store in my VM box.)
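If anyone does end up with a pulled drive, the quick way to tell which scenario applies (on Linux, at least) is to check what the kernel reports for the device. A rough sketch; swap in whatever letter the drive actually enumerates as:

```python
# Probe a drive's identity and zoned model via Linux sysfs.
from pathlib import Path

dev = "sdb"  # replace with the pulled drive's device name
base = Path("/sys/block") / dev

def read_attr(path):
    try:
        return path.read_text().strip()
    except FileNotFoundError:
        return "n/a"

print("vendor:", read_attr(base / "device/vendor"))
print("model: ", read_attr(base / "device/model"))
# sysfs reports size in 512-byte sectors regardless of logical block size
sectors = int(read_attr(base / "size"))
print(f"capacity: {sectors * 512 / 1e12:.2f} TB")
# "none" = conventional drive; "host-aware"/"host-managed" = zoned
print("zoned: ", read_attr(base / "queue/zoned"))
```

If `queue/zoned` says `none` and the capacity looks right, it's the second scenario and ZFS should be perfectly happy with it.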
This is all nice, but when it takes three weeks to check the volume or add a drive, that's gonna suck. With spinning media there's a benefit to more, smaller drives, since you can read/write from many at once. I'm not saying I wouldn't want these, just that if I didn't have petabytes of data, I'd stick with more drives of a smaller size, unless of course their speed increases. Spinning media isn't getting faster as quickly as it's getting bigger, so when you're scrubbing (or doing anything, really), I'd expect 5x20TB to finish a lot quicker than 1x100TB. As I see it, this is niche for me.
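To put rough numbers on that (assuming ~250 MB/s sustained, which is generous as an average across a big HDD, and that the drives in an array scrub in parallel):

```python
# Rough scrub-time comparison: drives in an array scrub concurrently,
# so wall-clock time scales with per-drive capacity, not total capacity.
# 250 MB/s sustained is an assumed average, not a measured figure.
SPEED_MB_S = 250

def scrub_hours(drive_tb):
    return drive_tb * 1e6 / SPEED_MB_S / 3600

print(f"5 x 20 TB: ~{scrub_hours(20):.0f} h (per drive, in parallel)")
print(f"1 x 100 TB: ~{scrub_hours(100):.0f} h")
```

So roughly 22 hours versus 4-5 days, before accounting for any other load on the pool.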
“The 4RU chassis can be fitted with 106 3.5-inch hard drives…”
106 x 29 = 3074 terabytes
2.5 petabytes = 2500 terabytes
3074 - 2500 = 574 terabytes
574 / 29 = 19.79, so about 19 or 20 of the 106 drives are used for overhead (parity? hot spares?).
I have a feeling that this might be slightly out of my budget.
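For anyone who wants to fiddle with the numbers above (note this is all in decimal terabytes, and the quoted 2.5 PB may already account for formatting or spare overhead):

```python
# Sanity check of the arithmetic above (decimal units throughout).
drives, tb_per_drive = 106, 29
raw_tb = drives * tb_per_drive       # 3074 TB raw
usable_tb = 2.5 * 1000               # 2.5 PB quoted by the article
overhead_tb = raw_tb - usable_tb     # 574 TB unaccounted for
print(f"{overhead_tb / tb_per_drive:.1f} drives of overhead")  # ~19.8
```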