
Storage


NAS

DroboShare network storage 'robot'

Review: Network-attached storage (NAS) boxes are all very well, but they're not what you'd call user friendly. Arch-geeks love them for storing and streaming content, but a fair few folk would prefer a simpler yet equally robust way of making storage available on a network. Enter Data Robotics' oddly named Drobo, an external storage system designed for a high level of data resilience, now augmented with DroboShare, an add-on that allows a couple of Drobos to be accessed over a network.

Data Robotics' Drobo: dark but well-lit

First, the Drobo. The drive LEDs have a very simple colour scheme, clearly explained on a sticker inside the front panel, which is held in place magnetically so it's a doddle to remove and replace.

Drobo comes without drives: you need to add two or more 3.5in SATA HDDs yourself, and you can slide in up to four. We populated a Drobo with two drives: an 80GB unit and a 250GB drive.
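For a sense of what that mixed pair yields, here is a minimal sketch of the commonly cited capacity rule for Drobo's storage layout: protected space is the sum of all drives minus the largest one, which is held in reserve so the array survives any single drive failure. The rule and the helper function are assumptions for illustration, not a vendor calculator.

```python
def drobo_usable_gb(drive_sizes_gb):
    """Rough protected capacity for a Drobo-style mixed-drive array.

    Assumption: usable space = total capacity minus the largest drive,
    which is reserved for redundancy (single-drive-failure protection).
    """
    if len(drive_sizes_gb) < 2:
        raise ValueError("needs at least two drives")
    return sum(drive_sizes_gb) - max(drive_sizes_gb)

# The review's setup: an 80GB drive plus a 250GB drive.
print(drobo_usable_gb([80, 250]))            # 80 -> effectively mirrored
# A fuller chassis, with all four bays populated.
print(drobo_usable_gb([80, 250, 250, 250]))  # 580
```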

Google's Disk Failure Experience

Google released a fascinating research paper titled Failure Trends in a Large Disk Drive Population (PDF) at this year's File and Storage Technologies (FAST '07) conference. Google collected data on a population of 100,000 disk drives, analyzed it, and wrote it up for our delectation.

In yet another twist of consumer-driven IT, the disks Google studied, PATA and SATA drives, are the same drives you and I would buy for personal use. As an ironic result, we now have better data on drive failures for cheap drives than the enterprise does for its much costlier FC and SCSI "enterprise" disks with their much higher MTBFs.

Google found surprising results in five areas:

- The validity of manufacturers' MTBF specs
- The usefulness of SMART statistics
- Workload and drive life
- Age and drive failure
- Temperature and drive failure

I'll give you the skinny on each after a note about MTBF and AFR.

Vendor MTBF and Google AFR

Mean Time Between Failure (MTBF) is a statistical measure of a large population, not a prediction for any individual drive; Google instead reports an Annualized Failure Rate (AFR), the percentage of drives that fail in a given year.
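To make the MTBF/AFR distinction concrete, here is a small sketch, my own arithmetic rather than anything from the post, converting a vendor MTBF into the nominal annualized failure rate it implies under a constant-hazard (exponential) failure model:

```python
import math

HOURS_PER_YEAR = 8760  # 24 x 365, assuming the drive is always powered on

def nominal_afr(mtbf_hours: float) -> float:
    """Annualized failure rate implied by a vendor MTBF figure.

    Assumes an exponential (constant-hazard) failure model, so the
    probability of failing within one year is 1 - exp(-8760 / MTBF).
    """
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# A 1,000,000-hour "enterprise" MTBF implies a nominal AFR of ~0.87%.
print(f"{nominal_afr(1_000_000):.2%}")
# Google's observed AFRs ran well above that for most drive-age groups.
```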

Everything You Know About Disks Is Wrong

Update II: NetApp has responded. I'm hoping other vendors will as well.

Which do you believe?

- Costly FC and SCSI drives are more reliable than cheap SATA drives.
- RAID 5 is safe because the odds of two drives failing in the same RAID set are so low.
- After infant mortality, drives are highly reliable until they reach the end of their useful life.
- Vendor MTBFs are a useful yardstick for comparing drives.

According to one of the "Best Paper" award winners at FAST '07, none of these claims is backed by empirical evidence.

Beyond Google

Yesterday's post discussed a Google-authored paper on disk failures. Google's wasn't even the best: Bianca Schroeder of CMU's Parallel Data Lab won the "academic computer science" best-paper award for Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?, so it is very heavy on statistics, including some cool techniques like the autocorrelation function.

Key observations from Dr. Schroeder's research

Maybe consumer stuff gets kicked around more.

Infant mortality?
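Returning to the RAID 5 belief in the list above: it is easy to put rough numbers on. The sketch below is back-of-the-envelope arithmetic of my own, not something from either paper. It uses the textbook assumption that drive failures are independent with a constant hazard rate derived from the AFR; Schroeder's autocorrelation finding (failures cluster in time) means real-world odds are worse than this model predicts. The array size, AFR, and rebuild window are illustrative assumptions.

```python
import math

HOURS_PER_YEAR = 8760

def p_second_failure(n_drives: int, afr: float, rebuild_hours: float) -> float:
    """Chance that at least one surviving drive fails during a RAID 5
    rebuild, assuming independent failures with a constant hazard rate
    derived from the annualized failure rate (the textbook model).
    """
    hazard = -math.log(1 - afr) / HOURS_PER_YEAR   # failures per hour
    p_one = 1 - math.exp(-hazard * rebuild_hours)  # one drive, one window
    return 1 - (1 - p_one) ** (n_drives - 1)       # any of the survivors

# Illustrative: an 8-drive set, 3% AFR (closer to the observed rates than
# vendor MTBFs imply), and a 24-hour rebuild window.
print(f"{p_second_failure(8, 0.03, 24):.4%}")  # roughly 0.06% per rebuild
# Correlated failures, which Schroeder's autocorrelation analysis found,
# push the true odds above this independence-based estimate.
```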