
Storage


A Storage Architecture Guide - article on StorageSearch.com. Storage standards are weak standards, driven by component considerations; network standards are strong standards, driven by system considerations.

A weak standard for Direct Attached Storage (DAS): today, more than 95% of all computer storage devices, such as disk drives, disk arrays and RAID systems, are directly attached to a client computer through various adapters, with standardized software protocols such as SCSI, Fibre Channel and others.

This type of storage is alternatively called captive storage, server-attached storage or direct attached storage (DAS), as illustrated in Figure 1. The committees that established these standards, however, allowed such wide flexibility in interoperability that there are many variations of SCSI and Fibre Channel (FC) for the many available UNIX and Windows NT systems.

The Storage Architect | Storage, Virtualisation & Cloud: Comparison. Most people focus on the wires, but the difference in protocols is actually the most important factor. For instance, one common argument is that SCSI is faster than Ethernet and is therefore better.

Why? Mainly, people will say the TCP/IP overhead cuts the efficiency of data transfer, so Gigabit Ethernet gives you throughput of 600-800 Mbps rather than 1,000 Mbps. But consider this: the next version of SCSI (due date ??) will double the speed; the next version of Ethernet (available in beta now) will multiply the speed by a factor of 10. Which will be faster?

The wires:
- NAS uses TCP/IP networks: Ethernet, FDDI, ATM (perhaps TCP/IP over Fibre Channel someday)
- SAN uses Fibre Channel
- Both NAS and SAN can be accessed through a VPN for security

The protocols:
- NAS uses TCP/IP and NFS/CIFS/HTTP
- SAN uses encapsulated SCSI
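
To put rough numbers on the overhead argument above, here is a minimal back-of-the-envelope sketch in Python. The overhead percentages and link speeds are illustrative assumptions drawn from the 600-800 Mbps figure quoted for Gigabit Ethernet and the "double" vs. "ten times" generations discussed above, not measurements.

# Back-of-the-envelope throughput comparison. Overhead figures are
# assumptions (TCP/IP eating roughly 20-40% of the raw line rate),
# consistent with the 600-800 Mbps quoted above for Gigabit Ethernet.

def effective_mbps(raw_mbps, overhead_fraction):
    """Usable throughput after protocol overhead (0.0 to 1.0)."""
    return raw_mbps * (1.0 - overhead_fraction)

links = {
    "Gigabit Ethernet + TCP/IP (assume 30% overhead)": effective_mbps(1_000, 0.30),
    "10 Gigabit Ethernet + TCP/IP (assume 30% overhead)": effective_mbps(10_000, 0.30),
    "Doubled SCSI/Fibre Channel link (assume 5% overhead)": effective_mbps(2_000, 0.05),
}

for name, mbps in links.items():
    print(f"{name}: ~{mbps:.0f} Mbps usable")

Even with the heavier protocol stack, the tenfold jump in raw Ethernet speed swamps a doubling on the SCSI side, which is the point the passage is making.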

Storage area network. A SAN does not provide file abstraction, only block-level operations. However, file systems built on top of SANs do provide file-level access, and are known as SAN filesystems or shared disk file systems. Historically, data centers first created "islands" of SCSI disk arrays as direct-attached storage (DAS), each dedicated to an application and visible as a number of "virtual hard drives" (i.e. LUNs).[1] Essentially, a SAN consolidates such storage islands using a high-speed network. Operating systems maintain their own file systems on their own dedicated, non-shared LUNs, as though they were local to themselves. Despite this lack of sharing, SANs help to increase storage capacity utilization, since multiple servers consolidate their private storage space onto the disk arrays. Common uses of a SAN include the provision of transactionally accessed data that requires high-speed block-level access to the hard drives, such as email servers, databases, and high-usage file servers.

SAN vs. NAS - What Is the Difference Between SAN and NAS Network Technologies? Question: SAN vs NAS - What Is the Difference? Answer: A NAS is a single storage device that operates on data files, while a SAN is a local network of multiple devices that operate on disk blocks.
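
From the host's point of view, that file-versus-block distinction looks roughly like the following minimal Python sketch. The mount point and device name are hypothetical; the point is only that NAS storage is consumed through ordinary file-system calls on a network share, while a SAN LUN is addressed as a raw block device.

import os

# File-level access (NAS): the NAS box owns the file system; the client
# just opens a path on an NFS or CIFS mount. Path is hypothetical.
with open("/mnt/nas_share/report.txt", "rb") as f:
    data = f.read(4096)

# Block-level access (SAN): the LUN shows up as a local disk; the host
# formats it itself or reads raw blocks by offset. Device name is
# hypothetical, and raw reads like this normally require root.
fd = os.open("/dev/sdb", os.O_RDONLY)
os.lseek(fd, 2048 * 512, os.SEEK_SET)   # jump to block 2048 (512-byte blocks)
block = os.read(fd, 512)
os.close(fd)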

SAN vs NAS technology: a SAN commonly utilizes Fibre Channel interconnects, while a NAS typically makes Ethernet and TCP/IP connections. SAN vs NAS usage model: the administrator of a home or small-business network can connect one NAS device to their LAN, whereas administrators of larger enterprise networks may require many terabytes of centralized file storage or very high-speed file transfer operations. SAN/NAS convergence: as Internet technologies like TCP/IP and Ethernet have proliferated worldwide, some SAN products are making the transition from Fibre Channel to the same IP-based approach NAS uses.
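
One concrete form of that IP-based approach is iSCSI, which carries SCSI commands over ordinary TCP/IP. As a rough illustration only, the sketch below wraps the Linux open-iscsi initiator from Python; the portal address and target name are hypothetical, and it assumes iscsiadm is installed and a target is reachable on the LAN.

import subprocess

# Hypothetical iSCSI portal and target name; assumes a Linux host with
# the open-iscsi initiator (iscsiadm) installed.
PORTAL = "192.168.1.50"
TARGET = "iqn.2010-01.com.example:storage.lun1"

# Step 1: SendTargets discovery - ask the portal which targets it exposes.
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    check=True,
)

# Step 2: log in to the target; the LUN then appears to the OS as a
# local block device, just like direct-attached storage.
subprocess.run(
    ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
    check=True,
)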

In virtualization circles, there is a heated war between iSCSI partisans and Fibre Channel storage area network (SAN) proponents. Some believe that Fibre Channel SANs are faster; others, that an iSCSI storage infrastructure is cheaper. But these generalizations do not capture the complexities of the debate. Fibre Channel SANs enjoy their own connective fabric and a more optimized protocol stack, for example. But iSCSI storage infrastructures can ride on top of an existing Ethernet investment, and link aggregation can be easier than with Fibre Channel configurations.

The widespread embrace of virtualization, along with its overall cost reductions, has now brought the iSCSI-vs.-Fibre Channel question to the fore. So, what does this mean?

Fibre Channel vs. iSCSI: The war continues | Data Explosion. In the beginning there was Fibre Channel (FC), and it was good. If you wanted a true SAN -- versus shared direct-attached SCSI storage -- FC is what you got.

But FC was terribly expensive, requiring dedicated switches and host bus adapters, and it was difficult to support in geographically distributed environments. Then, around six or seven years ago, iSCSI hit the SMB market in a big way and slowly began its climb into the enterprise. The intervening time has seen a lot of ill-informed wrangling about which one is better; sometimes the iSCSI-vs.-FC debate has reached the level of a religious war. Now that we're about a year down the pike after the ratification of the FCoE (FC over Ethernet) standard, things aren't much better.

NAS vs iSCSI vs DAS vs Fiber Channel vs Internal Storage… A Simple Explaination - Up & Running Technologies Calgary. If you are confused by NAS, DAS, iSCSI… and want a one-minute explanation, then this is the post for you! Internal storage is the classic: a bunch of disks inside a standard (usually Windows) server, configured in whatever RAID you want. DAS used to be good old SCSI-connected disk arrays but is now primarily external SAS connections to a pile of SATA or SAS disks.

A NAS is basically a server with a bunch of disks in it that you can create RAID arrays with and then share out the space on your network as you see fit. This used to confuse me because I did not understand that the root of a NAS is just another server; it sounds confusing but is actually quite simple in theory. Fibre Channel is simply a very expensive way to physically connect your disk array to a server. For a smaller customer I sourced a Netgear ReadyNAS Pro 6TB, which I will configure as RAID 10, giving my customer speeds of about 80 MB/s, three disks of redundancy, and 3TB of space for about $2,200.
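
The capacity side of that quote is easy to sanity-check. Below is a minimal Python sketch of the standard usable-capacity rules, using six 1 TB disks to mirror the 6 TB ReadyNAS example above (the per-disk size is an assumption about how the 6 TB is made up).

# Usable capacity for common RAID levels, given n identical disks.
# Disk count and size mirror the ReadyNAS example: assume six 1 TB disks.

def usable_tb(n_disks, disk_tb, level):
    if level == "RAID 0":       # striping only, no redundancy
        return n_disks * disk_tb
    if level == "RAID 1":       # every disk mirrors the same data
        return disk_tb
    if level == "RAID 5":       # one disk's worth of parity
        return (n_disks - 1) * disk_tb
    if level == "RAID 10":      # mirrored pairs, striped together
        return (n_disks // 2) * disk_tb
    raise ValueError(f"unknown level: {level}")

for level in ("RAID 0", "RAID 5", "RAID 10"):
    print(f"{level}: {usable_tb(6, 1.0, level):.0f} TB usable out of 6 TB raw")

RAID 10 on six disks gives the 3TB of usable space quoted above, and each mirrored pair can survive the loss of one disk, which is where the "three disks of redundancy" figure comes from.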