
VMware


Resize LUN protected by Recoverpoint and SRM - PhilVirtual

This is not a task that I do very often, but when I do, I seem to forget the “gotchas” in this process.

Documentation is not cohesive, so below are my steps to follow. **If you have RecoverPoint 3.5 SP1 or later and the VNX splitter, with a VNX running OE for Block 05.32 or later at all copies, you can use Dynamic LUN Resizing instead. See the RecoverPoint Administrator Guide, page 188.** VMware KB: Commands to monitor snapshot deletion in VMware ESXi/ESX. VMware KB: Performing a Recovery using the Web Client in VMware vCenter Site Recovery Manager 5.8 reports the error: Failed to connect Site Recovery Manager Server(s).
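The post’s actual step-by-step didn’t survive the copy. Purely as a rough sketch of the ESXi-side portion (rescan storage and grow the VMFS extent once the array/RecoverPoint side has been resized), with a made-up device ID, it might look like this:

```
# Made-up device ID for illustration only - substitute your own.
DEVICE=naa.60000000000000000000000000000001

# After the LUN has been grown on the array / RecoverPoint side,
# rescan all HBAs and refresh VMFS metadata on the host:
esxcli storage core adapter rescan --all
vmkfstools -V

# Confirm the host now reports the larger size:
esxcli storage core device list -d $DEVICE | grep -i size

# Grow the VMFS extent into the new space (partition 1 in this example).
# Note: the partition itself may first need to be extended (partedUtil),
# or the datastore can simply be expanded from the vSphere Client instead.
vmkfstools --growfs "/vmfs/devices/disks/${DEVICE}:1" "/vmfs/devices/disks/${DEVICE}:1"
```

The RecoverPoint/SRM side of the procedure, which is what the original article is really about, is not sketched here.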

VMware KB: Permanent Device Loss (PDL) and All-Paths-Down (APD) in vSphere 5.x and 6.x. vMotion of a virtual machine fails with error: Insufficient capacity on each physical CPU. Upgrading a vSphere Host Using vSphere Update Manager (vSOM). Installing VMware vSphere Update Manager 5.5 on a Microsoft Windows platform.

Fix - No coredump target has been configured

Once I built my ESXi 5.5 hosts in my test lab and added them to my vCenter environment, they both had a yellow warning prompt stating: “No coredump target has been configured. Host core dumps cannot be saved.” The good news is I was able to fix this, so I thought I would document the solution in case anyone else has the same issue. All the details are below.

Situation: Both my hosts have ESXi 5.5 installed on a USB flash drive (each 32GB in size), which may have been the cause of the issue. Just so we are all on the same page, the warning message I was seeing on the host was: “Error: No coredump target has been configured.”

Solution: To resolve the “No coredump target has been configured” error, complete the following steps (see the esxcli sketch below). That solution was successful for one of my ESXi hosts; however, on the second host, when I ran step 2 to find the active diagnostic partition, one was already configured… yet I was still getting the “No coredump target has been configured” error.
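The numbered steps themselves didn’t copy across; the usual esxcli sequence for this warning looks roughly like the following (a sketch, not necessarily the author’s exact commands):

```
# 1. List any diagnostic (coredump) partitions the host can see:
esxcli system coredump partition list

# 2. Check which partition, if any, is currently active/configured:
esxcli system coredump partition get

# 3. Let the host pick an accessible partition automatically ("smart" selection):
esxcli system coredump partition set --enable true --smart

#    ...or point it at a specific partition (hypothetical device:partition shown):
# esxcli system coredump partition set --partition="naa.xxxxxxxxxxxxxxxx:7"

# 4. Re-check that an active partition is now set:
esxcli system coredump partition get
```

On a host booted from a USB flash drive there may be no suitable local partition at all, which matches the situation described above.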

More info: Configuring a diagnostic coredump partition on an ESXi 5.x host.

SRM 5.8 customization

Some companies have built out their disaster recovery site with a stretched layer 2 network, or even a disjoint layer 2 network that shares the same IP addresses with their production sites. This is great because VMs don’t need to change IP addresses if there is a failover event. This post goes over what options you have if you need to change IP addresses during your failover.

Site Recovery Manager 5.5 Documentation Center. Correlating VMware product build numbers to update levels. How to Install latest ESXi VMware Patch - [Guide]. Updating an ESXi/ESX host using VMware vCenter Update Manager 4.x and 5.x. Untitled. Exporting esxtop performance data as a CSV file and manipulating it from the command-line. Troubleshooting ESX/ESXi virtual machine performance issues. Esx2 using esxtop.

Setting the number of cores per CPU in a virtual machine.

VMware PowerCLI – Suppress vCenter Certificate warnings – Pragmatic IO

A very quick post this one, but it covers an annoying thing I often see left unchecked in environments that is simple to fix in a minute. Yes, this is an old topic, but it’s still all too evident today. This should be in your “just do it” category. It’s the awful certificate warning displayed in your PowerShell session when you connect to a vCenter Server (or directly to an ESXi host) that hasn’t had the default SSL certs replaced. For whatever reason (I don’t judge) it can be a PITA for busy admins to bother sorting out and replacing the SSL certs, or perhaps they are “CLI-shy” and just don’t see it.

It’s the PowerCLI equivalent of this (which almost always just gets installed or ignored). For me though, if you can’t fix it, you should at least SUPPRESS it and not fill that session up with yellow clutter and delay your script connection times. The setting lives in the PowerCLI configuration context, and the current settings can be displayed as sketched below.
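A minimal sketch using the standard PowerCLI configuration cmdlets (the vCenter name is a placeholder):

```
# Show the current PowerCLI configuration, including InvalidCertificateAction
Get-PowerCLIConfiguration

# Stop the yellow certificate warning for this user's future sessions
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Scope User -Confirm:$false

# Connect as usual; the warning is no longer printed
Connect-VIServer -Server vcenter.lab.local   # placeholder vCenter name
```

The -Scope parameter controls whether the change applies to the current session only, the current user, or all users on the machine.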

Using PowerShell to create a Virtual Machine Inventory in VMware and Export it to a CSV File

Hi all! In this blog I will explain an easy way to generate a virtual machine inventory and export it to a CSV file. We will be using the “Get-VM” cmdlet and piping it to the “Export-Csv” cmdlet to get the information we need, as in the sketch below.
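A minimal sketch of that pipeline (the extra properties are just common picks, not necessarily the ones from the original post):

```
# Assumes an existing connection, e.g. Connect-VIServer vcenter.lab.local (placeholder)

# Pull every VM, keep a few useful properties, and write them out as CSV
Get-VM |
    Select-Object Name, PowerState, NumCpu, MemoryGB,
                  @{Name = 'Host'; Expression = { $_.VMHost.Name }} |
    Export-Csv -Path 'C:\Temp\vm-inventory.csv' -NoTypeInformation
```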

Uploading diagnostic information for VMware through the Secure FTP portal. Collecting diagnostic information for VMware ESX/ESXi. Recovering a lost partition table on a VMFS volume. Recreating a missing VMFS datastore partition in VMware vSphere 5.0/5.1/5.5. Cannot remount a datastore after an unplanned permanent device loss (PDL).

Configure HP ILO directly on ESXi server - Virtual to the Core

I recently installed the HP drivers on some customers’ servers following these two great guides:

Vsphere esxi vcenter server 55 setup mscs. Connecting to VMware vSphere Web Client fails with the error: HTTP Status 404.

Applying New IP Addresses to vCenter, ESXi Hosts, and Plugins - Wahl Network

In an earlier post, I discussed my focus on a new network design for the lab. This post continues along that journey with a focus on vCenter, plugins, ESXi hosts, and gotchas within a vSphere 5.5 environment. Hope it helps! The first thing I wanted to look at was my vCenter plugin URLs. These are the addresses used to talk with each plugin. If they were hard-coded using an IP address, they’d need to be updated to either a new IP address or a DNS name. The fastest way to dump all of the URLs for my plugins was to crank out a quick PowerCLI script (a rough equivalent is sketched below). Most of them were fine, or pointed towards the local server itself, but a few – such as my vCenter Operations Manager plugin – would need to be fixed.
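The author’s script isn’t shown, but a quick PowerCLI snippet against the ExtensionManager view gets at the same information (a sketch, assuming an existing vCenter connection):

```
# List each registered vCenter extension (plugin) and the URL(s) it points at
(Get-View ExtensionManager).ExtensionList | ForEach-Object {
    $ext = $_
    $ext.Server | Select-Object @{Name = 'Plugin'; Expression = { $ext.Key }}, Url
}
```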

vCenter 5.1 upgrade. vSphere 5.5 - Part 2 - vCenter Single Sign On (SSO) 5.5 (inc U1) Install - VMadmin.co.uk. vSphere 5.5 - Part 4 - vCenter Server 5.5 (inc U1) Install - VMadmin.co.uk. Combined Upgrade Procedure for VMware ESXi and Cisco Nexus 1000V VEM. System logs are stored on non-persistent storage. HA Error: The number of heartbeat datastores for host is 1, which is less than required: 2.

vSphere 5.5 - Enable SNMP - EverythingShouldBeVirtual

I am going through setting up Solarwinds Virtual Manager and needed to enable SNMP on my vSphere 5.5 hosts.

The SNMP service is set to start automatically, but it would not start without generating an error when I attempted to start it, as seen below. So, in case you run into the same thing, it is as simple as running the following commands on your individual hosts from a console session. Replace YOUR_STRING with the community string that you would like to use. The firewall commands allow any host to poll SNMP.

esxcli system snmp set --communities YOUR_STRING
esxcli system snmp set --enable true
esxcli network firewall ruleset set --ruleset-id snmp --allowed-all true
esxcli network firewall ruleset set --ruleset-id snmp --enabled true
/etc/init.d/snmpd restart
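After the restart, the agent’s running configuration can be double-checked (not part of the original write-up, just a quick sanity check):

```
# Print the current SNMP agent configuration (enable flag, communities, port, ...)
esxcli system snmp get
```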

A Cisco Nexus 1000v VEM installed on an ESX or ESXi host fails to respond with the error: Failed to get card domain - returned error 1. Convsa_55_guide. Cluster warning for ESXi Shell and SSH appear on an ESXi 5.x host.

Unable to bond Etherchannel on ESXi 5.5

I couldn’t seem to get an EtherChannel working properly on an ESXi 5.5 host. I’m not using vCenter and was attempting to aggregate the links to a Cisco 3750. I referred to the KB article below and configured IP hash on the ESXi host, set the SRC-DST-IP aggregation algorithm on the switch, and disabled LACP. However, when I disabled LACP per the guide, I lost connectivity to the ESXi host altogether, even though all of the ports in the EtherChannel showed '(P)' and correctly established a bundle. If I re-configured the EtherChannel to enable LACP, I could ping and connect to the host, but all of the ports in the EtherChannel showed '(I)', which means they were operating independently. So I presume it’s the host configuration rather than the switch configuration that is incorrect, but can anyone advise?
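For what it’s worth, the combination a standalone standard vSwitch supports is a static channel-group (no LACP, which needs a distributed switch) plus IP-hash teaming on the host; a rough sketch with example interface and vSwitch names:

```
! Cisco 3750 side (example interfaces) - static EtherChannel, no LACP
port-channel load-balance src-dst-ip
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode on

# ESXi side - set the vSwitch teaming policy to "Route based on IP hash"
esxcli network vswitch standard policy failover set --vswitch-name vSwitch0 --load-balancing iphash
```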

ESXi 5.0 & DELL R720 Network Connectivity Loss. Copying a template from an ESX host to a remote ESX host in a different datacenter using SCP. Enabling debug logging for VMware Tools within a guest operating system. VMware Tools on a Windows virtual machine fails with the error: Exception 0xc0000005 (access violation). Cluster warning for ESXi Shell and SSH appear on an ESXi 5.x host. Adding an ESX/ESXi host to Nexus 1000v vDS fails with the error: vDS operation failed on host <hostname>, got (vmodl.fault.SystemError) exception.

Installing async drivers on VMware ESXi 5.x and ESXi 6.0.x. Update Manager is not available in a vCenter Server configured in linked mode. Stopping, starting, or restarting the vSphere Update Manager service. vMotion Fails at 14% - Resolution.

vMotion Fails At 14% – with at least one solution

Ever found yourself with an issue and spent hours trying to find a solution, while none of the Google (or Bing) search engine results fixed your problem? Well, I did just that today. Trying to update our VMware clusters, I noticed some VMs were not willing to vMotion to another node and that the task stalled at 14%, with of course the all-clarifying error “Operation timed out” and, from Tasks & Events, “Cannot migrate <VM> from host X, datastore X to host Y, datastore X”. My solution: it turns out that, in my case, there was a vmx-***.vswp file left over from a failed DRS migration. During a DRS migration, because the VMX is started on both nodes, each VMX process creates a process swap file, with -1 or -2 in the name. So when you browse the datastore your failing VM is located on and open up the folder of the VM, you should see those two vmx-***.vswp files.

The oldest one is most likely the one you need to delete.
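A quick sketch of checking for and clearing the leftover file from an ESXi shell session (datastore, VM, and file names below are placeholders; make sure no migration is in flight and copy the file elsewhere before deleting it):

```
# Look for leftover VMX swap files in the VM's folder (placeholder names)
cd /vmfs/volumes/MyDatastore/MyVM
ls -lh vmx-*.vswp

# After confirming which one is stale (typically the older of the two), remove it
rm vmx-MyVM-1234567890-1.vswp   # placeholder file name
```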