The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems. Updated 09/08/11: total multiple-system storage 197.4 TB, total single/internal-system storage 45 TB. Newest rack pics.

The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Disk activity doesn't show up well with the flash (I took two different shots to try to show the difference), so I also included a slightly blurry no-flash pic which shows all LEDs as active. New 30 x 3TB drives bought from Fry's. No-flash pic (shows disk activity, but blurry). Two flash pics (clear, but disk activity washed out). Back of the rack, showing cables going to boxes with just a SAS expander in them. Older rack pics: rack from the side (without the side panel on yet); rack from the front (machines in order: zeroshell, kaizoku, chikan, osol, dekabutsu); cables going from the rack to the closet in the master bedroom, which in turn goes to my desk with my monitor, speakers, mouse, etc.

ATLAS My Virtualized unRAID server. Hardware Build and ESXi install.

ATLAS My Virtualized unRAID server

Hardware install notes: original hardware unboxing. The 650-watt Corsair power supply pictured was not going to cut it, so I used the spare 750-watt Seasonic I had from an earlier sale; I just need to swap it out of a workstation and put the 650 into that machine instead. In addition, the Seasonic is Gold certified, which is a bonus for an always-on PC. The first step was to assemble everything into the Norco as if I were going to install unRAID: I installed the motherboard, RAM, CPU, one of the MV8s, and the power supply for testing. Current build photo with 32GB RAM, E-1240, 2x M1015, expander, Corsair Pro SSDs, and custom power cables. I don't think I need to go into detail here; I'll assume you can assemble the hardware. Plug in the power and Ethernet cables to both the IPMI and LAN2 (use LAN1 for ESXi; LAN2 is for bare-metal unRAID). After this step, I stuck the ESXi flash drive into the internal USB port. Yes, it is still blank; this is for the BIOS config step.
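After the cabling step it's handy to confirm that the IPMI port and the LAN2 interface actually answer on the network before moving on to the BIOS and ESXi work. Here is a minimal sketch of that check, assuming hypothetical addresses and Linux-style ping flags (neither comes from the original build notes):

```python
import subprocess

# Hypothetical addresses for the out-of-band IPMI port and the LAN2 interface
# reserved for unRAID; substitute whatever your DHCP server actually handed out.
HOSTS = {"IPMI": "192.168.1.50", "LAN2 (unRAID)": "192.168.1.51"}

for name, addr in HOSTS.items():
    # Single ping with a 2-second timeout; return code 0 means a reply came back.
    ok = subprocess.run(
        ["ping", "-c", "1", "-W", "2", addr], capture_output=True
    ).returncode == 0
    print(f"{name:14s} {addr:14s} {'reachable' if ok else 'no reply'}")
```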

Areca ARC-4036 6Gb/s SAS/SATA JBOD Enclosure Review - Areca, LSI, Sandisk and Patriot Perform! The storage industry is shifting from 3Gb/s devices to the new 6Gb/s specification, which has been a boon to manufacturers, since new host-side hardware is needed to handle the faster devices that 6Gb/s enables.

Areca ARC-4036 6Gb/s SAS/SATA JBOD Enclosure Review - Areca, LSI, Sandisk and Patriot Perform!

The 3Gb/s specification reigned for roughly five years, allowing user speeds of up to 300 MB/s. With the advent of “disruptive” technologies like SSDs, the limits of 3Gb/s were reached faster than expected. Now, with devices able to saturate the specification, 6Gb/s and its 600 MB/s speeds could not have been integrated in a more timely and appropriate manner.
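The 300 MB/s and 600 MB/s figures fall straight out of the link encoding: SATA/SAS uses 8b/10b, so only eight of every ten bits on the wire carry data. A quick sketch of that arithmetic:

```python
# Worked numbers behind the 300 MB/s and 600 MB/s figures: SATA/SAS links use
# 8b/10b encoding, so every 10 bits on the wire carry 8 bits of payload.

def usable_mb_per_s(line_rate_gbit: float) -> float:
    """Convert a raw SATA/SAS line rate (Gbit/s) to usable MB/s."""
    data_bits_per_s = line_rate_gbit * 1e9 * 8 / 10   # strip 8b/10b overhead
    return data_bits_per_s / 8 / 1e6                  # bits -> bytes -> MB

for rate in (3.0, 6.0):
    print(f"{rate:.0f} Gb/s link -> {usable_mb_per_s(rate):.0f} MB/s")
# 3 Gb/s link -> 300 MB/s
# 6 Gb/s link -> 600 MB/s
```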

As an added bonus, the specification is backwards compatible with the existing 3Gb/s specification, allowing users to upgrade the different portions of their storage subsystems at their leisure.

Tom's CPU Architecture Shootout: 16 CPUs, One Core Each, And 3 GHz: A Real (Theoretical) Performance Shootout. The CPU landscape is really complex.

Tom's CPU Architecture Shootout: 16 CPUs, One Core Each, And 3 GHz : A Real (Theoretical) Performance Shootout

Both AMD and Intel offer tons of different models. But how would today’s processors perform if they didn't have multiple cores? We take 16 different CPUs and compare them all using a single core running at 3 GHz. Ever since AMD and Intel started cramming more processing cores into their CPUs, potential performance has grown faster, thanks to parallelization, than it did back when single-core CPUs were king. Back then, pushing higher frequencies and improving performance per clock were the only ways to speed things up. We all know that more advanced manufacturing technologies are paving the way for more cores per CPU, and that clock rates are slowly creeping up as well.

Prerequisites and Processors

In preparation for this article, we had to look at the processors available to us for benchmarking.
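Reduced to arithmetic, the shootout's premise is that single-threaded throughput scales roughly with instructions per clock (IPC) times frequency, so locking every chip to one core at 3 GHz isolates the architectural (IPC) differences. A small illustration with invented IPC numbers (placeholders, not the article's results):

```python
# Relative single-core performance at a fixed clock is driven by IPC alone.
# The IPC values below are invented placeholders, not benchmark data.
CLOCK_GHZ = 3.0
hypothetical_ipc = {"arch_a": 1.00, "arch_b": 1.30, "arch_c": 0.80}

baseline = hypothetical_ipc["arch_a"] * CLOCK_GHZ
for arch, ipc in hypothetical_ipc.items():
    relative = (ipc * CLOCK_GHZ) / baseline
    print(f"{arch}: {relative:.2f}x the baseline at the same 3 GHz clock")
```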

Goliath: My Norco RPC-4224 build. Goliath, formerly my WHSv1 media server named Leviathan.

Goliath: My Norco RPC-4224 build.

The WHS server was a test of WHS's Drive Extender. Previously I was hosting my media off a Windows 2008 R2 box with sixteen 2TB Samsung F4s in RAID 6. I wanted a greener alternative to having to spin 16 drives 24x7 just to watch a movie or access a single file; in my house there is almost always someone on the media server.
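To put the "greener" goal in rough numbers, here is a back-of-the-envelope comparison of keeping all 16 drives spinning 24x7 (a RAID 6 array spins as a unit) versus spinning up only the single drive that holds the requested file. The 5 W idle figure is an assumed ballpark for a 3.5" drive, not a measurement from this build:

```python
# Back-of-the-envelope energy comparison: 16 always-spinning drives (RAID 6,
# whole array spins together) versus one drive spun up on demand (unRAID-style).
IDLE_W_PER_DRIVE = 5.0        # assumed idle draw per spinning 3.5" drive
HOURS_PER_YEAR = 24 * 365

def kwh_per_year(drives_spinning: int) -> float:
    """Annual energy (kWh) for a given number of continuously spinning drives."""
    return drives_spinning * IDLE_W_PER_DRIVE * HOURS_PER_YEAR / 1000

always_on = kwh_per_year(16)
one_drive = kwh_per_year(1)
print(f"16 drives 24x7: {always_on:.0f} kWh/yr")
print(f" 1 drive 24x7:  {one_drive:.0f} kWh/yr")
print(f"upper-bound saving: {always_on - one_drive:.0f} kWh/yr")
```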

Besides greener, I wanted simple, set-and-forget. WHS was my first step in testing that. My next step was to be WHS2011 and FlexRAID, but I went to step 3, bypassing step 2: unRAID. After weeks of trolling the forums and playing with the free version, I feel I am ready to buy my key and go into pre-production. Right now I feel pretty positive about this step, enough so that I'll be putting my data on it. My network consists of several 2008 R2 boxes, several with RAID 5 arrays. Goliath OS at time of building: 5.0-beta6a, Pro license (I live on the edge); 5B14 now.

DIY Rack Cabinet 8U for only $20 with Ikea Lack.