What storage manufacturer do you swear by, and why?

There are quite a few brands to choose from when buying hard disks or SSDs, but which one do you find the most reliable? Personally I've had great experiences with Seagate, but I heard ChrisTitus had the opposite experience with them.

So I'm curious: which manufacturers do people here swear by, and why? Which ones have you had the worst experience with?

exu ,
@exu@feditown.com avatar

For hard drives, Toshiba, though Seagate would be my second pick. Fuck WD.

On SSDs I go on Wikipedia and look at a list of flash + controller manufacturers and pick one of those. (Samsung, Kioxia (I think), Sandisk)

corsicanguppy ,

SSD? Crucial.

They officially supported a non-standard setup I had, and they were totally there for me in a jam. For spinning rust I'll go with other names, but for SSDs it's Crucial first. I don't even care about the cost.

helenslunch ,

I've seen L1Techs recommend Solidigm several times now. These are also often the cheapest drives around. If I were in the market I'd take a close look at them.

talkingpumpkin ,
@talkingpumpkin@lemmy.world avatar

With the very limited number of drives one may use at home, just get the cheapest ones (*), use RAID and assume some drive may fail.

(*) whose performance meets your needs, from reputable enough sources

You can look at the Backblaze stats if you like stats, but if you have ten drives, a 3% failure rate means practically the same as 1% or 0.5%: either way, use RAID and assume some drive may fail.

Also, I don't know how good a reliability predictor the manufacturer is (as in every sector, reliability varies from model to model). And you would basically go by price anyway unless you need a quantity of drives so large that the stats become meaningful for you (wouldn't Backblaze use a single manufacturer otherwise?)
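
To put rough numbers on that (the AFR values are purely illustrative, and I'm assuming failures are independent):

```python
# Chance of at least one failure among n drives in a year,
# assuming independent failures: 1 - (1 - p)^n
def p_any_failure(afr: float, n: int) -> float:
    return 1 - (1 - afr) ** n

for afr in (0.005, 0.01, 0.03):  # 0.5%, 1%, 3% annual failure rate
    print(f"AFR {afr:.1%}: P(>=1 of 10 drives fails) = {p_any_failure(afr, 10):.1%}")
```

Even the "good" 0.5% rate leaves roughly a 1-in-20 yearly chance of losing one drive out of ten, and 3% pushes it past 1-in-4, so you plan for failure either way.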

teawrecks ,

Assume your hard drives will fail. Any time I get a new NAS drive, I do a burn-in test (a simple badblocks run; it can take a few days depending on the size of the drive, but you can test multiple drives in parallel) to get them past the first ledge of the bathtub curve, and then I put them in a RAIDZ2 pool and assume each one will fail one day.

Therefore, it's not about buying the best drives so they never fail, because they will fail. It's about buying the most cost-effective drive for your purpose (price vs. average lifespan vs. size). For that, definitely refer to the Backblaze report someone else linked.

Bonehead ,

I learned a long time ago that the manufacturer doesn't matter much in the long run. They all have a bad model occasionally. I have 500GB Seagate drives that still work, and some 1TB drives that died within a year. I've had good luck with recent WD Red 4TB drives, but my 2TB Green drives have all died on me. I had some Hitachi Deskstar drives that worked perfectly for years when no one would touch them because of a bad production run. I currently have a Toshiba 8TB that I had never heard of before, but it has seemed rock solid for the last year.

Pick a size that you want, look at what's available, and research the reasonably priced ones to see if anyone is complaining about them. Review sites can be useful, but raw complaints in user forums will give you a better idea of which ones to avoid.

rentar42 ,

Can confirm the statistics: I recently consolidated about a dozen old hard disks of various ages; quite a few of them had a couple of bad blocks and 2 actually failed. One disk was especially noteworthy in that it was still fast, error-free and without complaints. That one was a Seagate ST3000DM001, a model so notoriously bad that it has its own Wikipedia entry: https://en.wikipedia.org/wiki/ST3000DM001
Other "better" HDDs were entirely unresponsive.

Statistics only really matter if you have many, many samples. Most people (even enthusiasts with a homelab) won't be buying hundreds of HDDs in their life.

conorab ,

When buying disks, do some research on the exact model to ensure they are not SMR drives if you plan on using them in RAID. Some manufacturers will not tell you, and SMR can do anything from tanking write performance to making the RAID reject the drive entirely.

See: arstechnica.com/…/caveat-emptor-smr-disks-are-bei…
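
A toy version of that pre-purchase check might look like the sketch below. The `KNOWN_SMR_HINTS` list is my own illustrative placeholder, not an authoritative database (the WD Red EFAX models are among the device-managed SMR drives that coverage dealt with); always verify against the manufacturer's spec sheets:

```python
# Toy pre-purchase check: flag model numbers known/suspected to be SMR.
# The substrings below are illustrative only -- verify against the
# manufacturer's own spec sheets before buying.
KNOWN_SMR_HINTS = ("EFAX",)  # e.g. WD Red WD40EFAX (device-managed SMR)

def looks_smr(model: str) -> bool:
    model = model.upper()
    return any(hint in model for hint in KNOWN_SMR_HINTS)

print(looks_smr("WD40EFAX"))  # True  -- risky for RAID rebuilds
print(looks_smr("WD40EFRX"))  # False -- the CMR counterpart
```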

YodaDaCoda ,

2 of my 6 disks are failing thanks to WD's EFAX line

Bastards

RedEyeFlightControl ,
@RedEyeFlightControl@lemmy.world avatar

Hard disks, WD/HGST.

I've had good luck with EMC and NetApp for enterprise solutions, Synology for SMB-class NAS storage, and rely on TrueNAS/ZFS on Supermicro hardware at home, which has been rock solid for years and years.

BigMikeInAustin ,

With spinning disks, I preferred Seagate over Western Digital, and then moved to HGST.

Back in those days, Western Digital had the best warranty, and I used it on every Western Digital drive. But that still meant several days without a drive, and I still needed a backup drive.

So it was better to buy two drives at 1.3x the price of one Western Digital. And then I realized that none of the Seagate or HGST drives ever failed on me.

For SATA SSDs, I just get a 1TB to maximize the cache and wear leveling, and pick a brand where the name can be pronounced.

For NVMe, for a work performance drive, I pick a 2TB drive with the best write cache and sustained write speed at second-tier pricing.

For a general NVMe drive, I pick at least 1TB from anyone who has been around long enough to have reviews written about them.

LanternEverywhere ,

Yup, knock on wood, I've had lots of Seagate drives over the decades and never had one go bad. I've had two WD drives and they both failed.

jkrtn ,

Why does 1TB help with the wear leveling?

BigMikeInAustin ,

In general and simplifying, my understanding is:

There is the area where data is written, and there is the File Allocation Table that keeps track of where files are placed.

When part of a file needs to be overwritten (either because data is inserted or data has changed), the new data is actually written to a new area and the old data is left as-is. The File Allocation Table is updated to point to the new area.

Eventually, as the disk gets used, writes circle back to a space that was previously written to but is no longer in use, and that data gets physically overwritten.

Each time a spot is physically overwritten, it very very slightly degrades.

With a larger disk, it takes longer to come back to a spot that has already been written to.

Oversimplifying: previously written data that is no longer part of a file is effectively lost, the way shredding a paper effectively loses whatever was written on it, and more securely than on a spinning disk.

teawrecks ,

Afaik, the wear and tear on SSDs these days is handled under the hood by the firmware.

Concepts like files, FATs, and copy-on-write are filesystem-specific. I believe that even if a filesystem were to deliberately write to the same location repeatedly to intentionally degrade an SSD, the firmware would intelligently shift its block mapping around under the hood so as to spread out the wear. If the SSD detects that a block is producing errors (bad parity bits), it will mark it as bad and map in a new block. To the filesystem, there's still perfectly good storage at that address, albeit with a potential one-off read error.

A larger SSD just gives the firmware more spare blocks to pull from.
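
A toy model of that last point, assuming the firmware does nothing smarter than spreading the same write load evenly over however many physical blocks it has (real controllers are far more sophisticated):

```python
# Toy wear-leveling model: the same number of logical writes spread
# round-robin over the physical blocks the firmware has available.
# More physical blocks -> fewer erase cycles per block.
def max_erases_per_block(total_writes: int, physical_blocks: int) -> int:
    base, extra = divmod(total_writes, physical_blocks)
    return base + (1 if extra else 0)

writes = 1_000_000
for blocks in (1_000, 2_000):  # e.g. a drive vs. one twice its size
    print(f"{blocks} blocks: {max_erases_per_block(writes, blocks)} erases/block")
```

Doubling the physical blocks halves the worst-case erase count per block, which is the whole argument for bigger drives lasting longer under the same workload.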

skittlebrau ,

Does that mean that manually overprovisioning SSDs isn't necessary for maximising endurance? E.g. partitioning a 1TB SSD as 500GB.

BigMikeInAustin ,

That would be called under-provisioning.

I haven't read anything about how an SSD deals with partitions, so I don't know for sure.

Since the controller intercepts the calls for specific locations, I'm inclined to believe the controller does not care about the concept of partitions and does not segregate any chips, so it would spread all writes across all of the chips.

skittlebrau ,

Isn't it overprovisioning, because you're artificially limiting the usable capacity of a volume?

techtarget.com/…/overprovisioning-SSD-overprovisi…

teawrecks ,

As the other person said, I don't think the SSD knows about partitions or makes any assumptions based on partitioning; it just knows whether you've written data to a certain location, and it could be smart enough to know how often you write there. So if you keep writing data to a single location, it could remap that logical address to different physical memory so that you don't wear it out.

I say "could" because it really depends on the vendor. This is where one brand could be smart and spend the time writing firmware that extends the life of their drive, while another could cheap out and skip straight to selling you a drive that will die sooner.

It's also worth noting that drives have an unreported pool of spare sectors they can use if one is detected as bad. I don't know if you can see the total remaining spare sectors, but the pool typically scales with the size of the drive. You can at least see how many bad sectors have been reallocated using S.M.A.R.T.

BigMikeInAustin ,

[Thread, post or comment was deleted by the author]

teawrecks ,

Seriously? Why be like this? It feels like a Lemmy thing for people to have a chip on their shoulder all the time.

You shared your understanding, and then I shared mine (in fewer words). I also summarized in one sentence at the bottom. I was just trying to have a conversation, sorry.

jkrtn ,

I thought you meant 1TB was a sort of peak performer (better than 2+ TB) in this area. From the description, it's more like 1TB is the minimum durability you want in a drive, but larger drives are better?

BigMikeInAustin ,

From the drives I have seen, there are usually 3 write-cache sizes.

Usually the smallest write-cache is for drives 128GB or smaller; sometimes the 256GB drives are also here.

Usually the middle-size write-cache is for 512GB and sometimes 256GB drives.

Usually the largest write-cache is only in 1TB and bigger drives.

Performance-wise for writes, you want the biggest write cache, so you want at least a 1TB drive.

For the best wear leveling, you want a drive as big as you can afford, while also looking at the makeup of the memory chips. In order of longest-lasting first: single-level, multi-level, triple-level, quad-level.

jkrtn ,

This is great, thank you! My next drive is going to be fast and durable.

BigMikeInAustin ,

An analogy is writing everything on one piece of paper with a pencil. When you need to change or remove something, you cross it out instead of erasing, and write the new data to a clean part of the paper. When there are no more clean areas, you use the eraser to erase a crossed-off section.

The larger the paper, the less frequently you come back to the same area with the eraser.

Using an eraser on paper slowly degrades the paper until that section tears and never gets used again.

blahsay ,

Toshiba, oddly enough. I've been burnt by the big names like Seagate a few times now.

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

I think you are good as long as you avoid the cheap brands.

deadbeef ,

I swear allegiance to the one true storage vendor, Micropolis. The Micropolis 1323A is the embodiment of perfection in storage, basking in the glow of the only holy storage interconnect, MFM.

I wait patiently for the return of Micropolis so that I may serve as their humble servant.

MangoPenguin ,
@MangoPenguin@lemmy.blahaj.zone avatar

None of them, because every manufacturer has made good and bad products. Seagate had really bad 3TB drives, which gave them a lot of that reputation.

I just buy whatever fits my budget for HDDs and keep proper backups in place. I think almost all of my HDDs are 'refurbished' ones.

For SSDs I look for one with a good TBW rating and a cache in it. Typically I'll go for used enterprise SSDs as well.

PoliticallyIncorrect ,
@PoliticallyIncorrect@lemmy.world avatar

Definitely Western Digital for used drives. Some time ago I sold three old IDE drives from 15 or 20 years ago and they were still working perfectly. I don't know about today's WD drives, but they used to be very good, at least for me.

catloaf ,

None; I buy whatever is cheapest so that I can have spares for an eventual failure.

Whatever is cheapest out of established brands, that is: WD, Seagate, etc., not JIOFUI-brand drives from Amazon.
