
VMware Storage Integration & Top Storage Vendors

Research done in collaboration with Stuart Miniman and Nick Allen

Introduction

In April 2011 Wikibon ran a survey looking at the area of storage and VMware. The results showed that EMC and NetApp had a clear lead in the number of respondents that selected them as the best VMware storage and as the primary VMware storage vendor. Wikibon has further analyzed the results of the survey, including a detailed analysis of the degree of integration.

Figure 1 – Relative Positioning of VMware Storage Integration by Vendor. Source: Wikibon Survey April 2011, n=361, and detailed analysis of vendor implementations.

Wikibon believes that practitioners can use the methodology in this report to help position the importance of integration features in their own VMware storage decisions and to help decide which vendors to include in RFPs.

Vendors and Storage Assessed

Wikibon looked at the major storage offerings from the six largest storage vendors.

Rating Importance of Integration Features

Conclusions

VMware's Virtual SAN Threatens Traditional Data Storage Models

VMware has launched the final piece of its software-defined data center puzzle: a virtual SAN product called Virtual SAN. The product has been in beta testing for the last six months with around 12,000 customers, but there were still plenty of surprise announcements at the launch event on March 6. The biggest of these was the maximum size of a Virtual SAN cluster. Previously VMware had said this would be 8 server nodes, and then 16, but VMware CEO Pat Gelsinger announced that the product has been upgraded to support 32 nodes. "This is a monster," said Gelsinger, echoing the monster VM concept the company introduced in 2011 with vSphere 5. Now for some math: he added that performance scales linearly - a 16-node setup offers 1M IOPS, a 32-node one offers 2M IOPS - and, despite his comment about 32 nodes being monstrous, he hinted that even larger Virtual SANs are on the roadmap: "There will be more in the future…"
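Gelsinger's "math" can be sanity-checked with simple shell arithmetic - a sketch that takes the linear-scaling claim at face value (the 1M and 2M IOPS figures come from the keynote; everything else is derived):

```shell
# Per-node IOPS implied by the 16-node figure (1,000,000 IOPS total)
per_node=$((1000000 / 16))
echo "IOPS per node: $per_node"

# Linear scaling predicts the 32-node figure announced at launch
echo "32-node total: $((32 * per_node))"
```

If scaling really is linear, each node contributes 62,500 IOPS, and doubling the node count to 32 lands exactly on the announced 2M figure.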

VMworld 2014: VMware vCloud Air and ViPR Object Storage

This one is short and sweet! The vCloud Hybrid Service is no more - it is now VMware vCloud Air! Furthermore, it keeps getting bigger (more locations) and better (more capabilities: DR as a Service, Backup as a Service, and Platform as a Service). One of the new additions is one of the industry's richest web-scale, geo-dispersed and efficient object (and HDFS!) stores - which can translate into the lowest-cost model. No cloud service is really complete without PaaS and object stores, and it's great that the vCloud Air service is getting them… What is delivering the underlying object/HDFS storage? Yup - the VMware vCloud Air Object Storage service runs on EMC ViPR Object! For those who doubt the concept of the Federation: sure, you can see examples where the Federation parties are "open" (which we are - see how VMware and EMC are embracing OpenStack a little differently, or Pivotal running on AWS, or EMC's HDFS offerings partnering with Cloudera). What do you think?

VMware Storage and Software-defined Storage (SDS) Solutions Blog Posts

Oregon State University, a public institution with more than 26,000 students and growing VDI workloads, wanted a high-performance storage tier for its VDI environment. However, it wanted the solution to be up and running before the school summer session began, and to be easy to operate and scale on an ongoing basis without requiring large upfront investments. Continue reading

Welcome to the next installment of our vSphere PowerCLI 5.8 walkthrough series on the new cmdlets for vSphere Storage Policy Based Management. Earlier installments covered:

Introduction to vSphere Storage Policies
Creating vSphere Storage Policies
Associating vSphere Storage Policies

In this article we will take the next step and illustrate how to leverage vSphere Storage Policies to enhance the provisioning of new VMs, using the vSphere Web Client and using PowerCLI. PowerCLI cmdlets referenced in this blog article: New-VM, Get-SpbmCompatibleStorage, Get-SpbmEntityConfiguration, Set-SpbmEntityConfiguration. Continue reading
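The cmdlets named in that excerpt chain together roughly as follows. This is a hedged sketch, not taken from the original series: the policy name "Gold", the VM name "App01", and the resource pool are hypothetical, parameter names may vary by PowerCLI version, and a vCenter session opened with Connect-VIServer is assumed:

```powershell
# Find a datastore compatible with a storage policy, create a VM on it,
# then associate the policy with the new VM.
$policy = Get-SpbmStoragePolicy -Name "Gold"            # hypothetical policy name
$ds     = Get-SpbmCompatibleStorage -StoragePolicy $policy |
          Select-Object -First 1                        # first compatible datastore
$vm     = New-VM -Name "App01" -Datastore $ds -ResourcePool "Pool01"  # hypothetical names
$vm | Set-SpbmEntityConfiguration -StoragePolicy $policy              # apply the policy
```

The key design point of SPBM is the middle step: instead of an administrator hand-picking a datastore, Get-SpbmCompatibleStorage returns only storage whose advertised capabilities satisfy the policy.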

Converged Storage for VMware

Imagine what you can achieve by combining the power of VMware vSphere with storage platforms designed to eliminate the challenges of server and client virtualization. HP's next generation of Converged Storage solutions are designed to enhance the benefits of VMware vSphere, VMware View, and VMware vCloud solutions. With HP storage supporting your VMware deployments, you are able to:

Optimize VM density, availability and business continuity
Simplify provisioning and management with less storage and vSphere complexity
Save on storage by increasing capacity utilization and efficiency

HP Converged Storage for VMware Environments

HP 3PAR StoreServ Storage - delivers best-in-class, hardware-assisted integration with VMware vSphere along with guarantees to double your VM density and cut your capacity requirements in half.
HP StoreOnce Backup - reduces backup data in vSphere environments by up to 20x by eliminating duplicated data.
HP VirtualSystem - Solution brief (PDF 274 KB)

Increasing VM density

VMware storage: SAN configuration basics VMware storage entails more than simply mapping a logical unit number (LUN) to a physical server. VMware’s vSphere enables system administrators to create multiple virtual servers on a single physical server chassis. The underlying hypervisor, vSphere ESXi, can use both internal and external storage devices for guest virtual machines. In this article we will discuss the basics of using storage area network (SAN) storage on vSphere and the factors administrators should consider when planning a shared SAN storage deployment. VMware storage: SAN basics vSphere supports internally-connected disks that include JBODs, hardware RAID arrays, solid-state disks and PCIe SSD cards. SAN storage, however, provides a shared, highly available and resilient storage platform that can scale to a multi-server deployment. It is possible to use NAS and SAN-based storage products with vSphere, but in this article we will consider only SAN, or block-based devices. VMware file system and datastores
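The internal and SAN-backed devices described above, and the datastores built on them, can be inspected directly from an ESXi host's shell. These are standard esxcli namespaces; the output is host-specific:

```shell
# List all storage devices visible to the host (local disks and SAN LUNs)
esxcli storage core device list

# List mounted filesystems, i.e. VMFS datastores and NFS mounts
esxcli storage filesystem list
```

The device list is the quickest way to confirm that a newly zoned and masked LUN is actually visible to the hypervisor before creating a VMFS datastore on it.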

VMware wants to be the VMware of Networking « IT 2.0

By Massimo, on April 17th, 2012

There have been a lot of discussions lately about SDN (Software Defined Networking). Arguably, SDN may mean a lot of different things to a lot of different people. If you ask the likes of Facebook, Google and academic researchers, they will probably tell you that SDN is all about gaining full visibility (and control) over how packets flow on the network. People and organizations closer to the commercial world may tell you that SDN is all about creating an abstraction layer (virtualization, anyone?). I'd like to focus on the latter definition of SDN. A few weeks ago Cisco's Lauren Cooney asked a question on Twitter along the lines of "how would you define SDN?". SDN purists may very well argue that this PDF did not include important aspects of SDN such as self-service capabilities and a proper API to access these functionalities. I also hear a lot of discussions about VMware missing credibility in the networking space. Massimo.

VMware vSphere Storage Appliance (VSA) for Shared Storage End of Availability

VMware is announcing the End of Availability of all vSphere Storage Appliance versions, effective April 1, 2014. After this date you will no longer be able to purchase this product. All support and maintenance for vSphere Storage Appliance 5.5 will be unaffected and will continue to follow the Enterprise Infrastructure Support Policy. The End of General Support date for customers with vSphere Storage Appliance 5.5 remains September 19, 2018. Support contracts can be renewed beyond End of Availability until End of General Support. Customers interested in moving to a new VMware software-defined storage solution may elect to upgrade to VMware Virtual SAN.

VMware vSphere Best Practices - VMwaremine - Mine of knowledge about virtualization

VMware Tools for nested ESXi - If you have nested ESXi running in your homelab environment, this is something you should look at. VMware Tools for nested ESXi is a very cool fling from VMware Labs; it works just like on any other guest operating system (Linux, Windows). read more

OpenSSL heartbleed bug - VMware products - You are most probably aware of the recent OpenSSL heartbleed finding. The bug was independently discovered by security firm Codenomicon and a Google Security engineer.

vCAC 6.0 most annoying installation error - During my POC on vCAC I had to struggle with different types of issues, but the most annoying one occurred during the IaaS component installation.

vCO workflow - change vCPU count - Recently I started to develop and build my own vCenter Orchestrator workflows.

Unable to bond EtherChannel on ESXi 5.5

I couldn't seem to get an EtherChannel working properly on an ESXi 5.5 host. I'm not using vCenter, and I was attempting to aggregate the links to a Cisco 3750. I referred to the KB article below: I configured IP hash on the ESXi host, set the src-dst-ip aggregation algorithm on the switch, and disabled LACP. However, when I disabled LACP per the guide, I lost connectivity to the ESXi host altogether, although all of the ports in the EtherChannel showed '(P)' and correctly established a bundle. If I re-configured the EtherChannel to enable LACP, I could ping and connect to the host, but all of the ports in the EtherChannel showed '(I)', which means they were operating independently. So I presume it's the host configuration rather than the switch configuration that is incorrect - can anyone advise?

KB Article: VMware KB: Sample configuration of EtherChannel / Link Aggregation Control Protocol (LACP) with ESXi/ESX and Cisco/H…

Switch config: CM_CORE#sh run int po1 Building configuration... end
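For context: standard vSwitches (i.e. without vCenter and a distributed switch) support only static EtherChannel paired with the "Route based on IP hash" teaming policy, not LACP; the '(I)' state above is the switch falling back to independent ports when its LACP negotiation gets no answer from the host. A matching static switch-side configuration looks roughly like this (a sketch; the interface range and channel number are hypothetical and must match the ports actually cabled to the host):

```
! Static EtherChannel (no LACP) to pair with a vSwitch set to IP hash
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode on        ! "on" = static; "active"/"passive" = LACP
!
port-channel load-balance src-dst-ip
```

On the host side, every port group on that vSwitch (including the management network) must also be set to "Route based on IP hash"; a port group left on the default policy behind a static channel is a common cause of the total loss of connectivity described above.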

Connect to VMware virtual machines using Remote Desktop

Had a short training on VMware on Tuesday; the software development department finally got official permission (read: got a license) to use VMware Workstation. I'm no stranger to virtual machines (VMs) - I started playing with Virtual PC 2005 a few years back, and I understood the general concepts of hardware virtualization. The biggest problem I have with VMs in general is the slowness; I'd rather develop directly on my PC, which is faster. Can't say I've delved deep into it, but I know enough to utilize it and be dangerous. Regardless, virtual machines provide a way to simulate multiple computers, and I've done 3-tier software testing (client to app server using WCF, and app server to a SQL 2005 backend) to verify our framework can support both 2-tier (client -> DB) and 3-tier deployments. Fast forward to the current time: I'd like to be able to do some coding on Windows 7; unfortunately Windows 7 is not quite sanctioned yet for deployment, and it's a pain to have to dual-boot. . .

untitled

As an architect, I talk to many vendors and customers, and often the conversation is about what makes one storage vendor better than another for VMware. I'm not going to focus on performance and array-type features; what I do want to cover is the integration points that storage vendors can offer between their products and VMware. When it comes to VMware storage integrations, it comes down to two questions. First, can the storage device do what others can - in other words, does it cover the table stakes? Second, what integrations can a vendor offer that are unique to that vendor?

vSphere API for Array Integration (VAAI): VAAI has been around for a few years now.
vSphere APIs for Storage Awareness (VASA): VASA was released with vSphere 5.0, and it is there to identify storage capabilities.
Storage Replication Adapter (SRA): An SRA is used when deploying Site Recovery Manager (SRM) from VMware.
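Whether a given array actually advertises the VAAI primitives is easy to verify from the host itself; the command below is standard esxcli, and its output is host- and array-specific:

```shell
# Show VAAI primitive support (ATS, Clone, Zero, Delete) for each device
esxcli storage core device vaai status get
```

This is a practical way to separate the table stakes from marketing claims: a device that reports the primitives as unsupported gains nothing from the vendor's VAAI branding.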

Install VMware Tools in Ubuntu Linux (VMwareTools-1.0.5-8017.tar.gz file)

Q. How do I install VMware Tools (virtual server tools) in Ubuntu Linux to improve the performance of the guest (VM) system?

A. VMware Tools is designed to fine-tune your virtualization. It provides an essential performance boost to the guest operating system.

Step #1: Boot Ubuntu Linux - Start your Ubuntu Linux VM.

Step #2: Install VMware Tools - Select VM Menu > Install VMware Tools, then click Install to install the tools.

Step #3: Install the tools from the virtual CD-ROM - VMware Workstation / Server will temporarily connect the virtual machine's first virtual CD-ROM drive to the ISO image file that contains the VMware Tools installer for your guest operating system, and you are ready to begin the installation process.

How do I start / stop / restart VMware Tools from the VM itself? Use the commands as follows:

$ sudo /etc/init.d/vmware-tools start
$ sudo /etc/init.d/vmware-tools stop
$ sudo /etc/init.d/vmware-tools restart
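Inside the guest, step #3 typically comes down to mounting the virtual CD-ROM, extracting the tarball, and running the installer. This is a sketch of the usual sequence, not from the original article; the exact tarball name and mount point vary by release:

```shell
# Mount the VMware Tools virtual CD-ROM attached by the host
sudo mkdir -p /mnt/cdrom
sudo mount /dev/cdrom /mnt/cdrom

# Extract the installer tarball to /tmp (name varies by release)
tar -xzf /mnt/cdrom/VMwareTools-*.tar.gz -C /tmp

# Run the Perl installer, accepting the defaults
sudo /tmp/vmware-tools-distrib/vmware-install.pl --default
```

After the installer finishes, the init script shown above (/etc/init.d/vmware-tools) becomes available for starting and stopping the tools.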

VMware vSphere 5 Host NIC Network Design Layout and vSwitch Configuration [Major Update] | Tech Blog

This is an update to an older post; I wanted to overhaul it for the Indy VMUG... This was also another VMworld submission that didn't get the votes. See what you guys are missing out on? As vSphere has progressed, my current 6-, 10-, and 12-NIC designs have slowly become dated. The assumption behind these physical NIC designs is that the hosts are configured with Enterprise Plus licensing so all the vSphere features can be used. The key to any design is discovering requirements.

Design considerations: Discovering Requirements, Network Infrastructure, IP Infrastructure, Storage, Multiple-NIC vMotion, Fault Tolerance, vSphere or vCloud, Strive for Redundancy & Performance.

Network Infrastructure - The size of the business will usually dictate the type of servers you acquire. Rackmount servers have the same compute potential as blade servers, but you also get a greater failure domain. The number of VMs your host will hold should be an indicator of whether 1GbE or 10GbE networking is appropriate.

IP Infrastructure
