V2V conversion using VMware Converter fails at 97% with error: Unable to find the system volume, reconfiguration is not possible


I tried to move a Windows 7 VM from ESXi to vSphere, but as the conversion neared completion the Converter reported that it had failed.

After a little searching I found this KB article from VMware:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1037507

It describes a long process of booting the original source machine into an OS repair mode and using BCDEDIT to reset the partition settings used during the boot process.

I didn’t go through the full repair process; instead I booted the source VM and ran the three BCDEDIT commands that the article lists. Only the first one succeeded:

bcdedit /set {bootmgr} device partition=C:

while the other two commands returned a syntax error:

bcdedit /set {default} device partition=C:

bcdedit /set {default} osdevice partition=C:
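
I never pinned down the cause of the syntax error, but one thing worth trying is listing the BCD store first to confirm that the {default} identifier actually exists; bcdedit /enum is a standard command for this:

bcdedit /enum

If {default} is not defined, the output shows the real identifier GUID to substitute into the two failing commands. The commands also need to be run from an elevated command prompt.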

In the end, out of curiosity, I checked the destination VM: even though the job had errored, the conversion had still completed and the VM was listed as powered off in the Virtual Infrastructure.

Strangely, it powered on and worked as normal! I can only assume that the error was some kind of registration problem, as the VM functions correctly and its partitions are listed correctly.

 


How to access the BIOS screen in a VMware virtual machine


I sometimes find that the POST screen passes too quickly, which makes it pretty much impossible to access the BIOS.

  • If you are using vSphere or ESXi, connect to the server using the vSphere or VI client

  • Right-click the VM whose BIOS you need to access

  • Select Edit Settings, then Options

  • Click on Boot Options and, in the right-hand pane, tick the box under the Force BIOS setup field


  • Restart the VM and the BIOS setup will load automatically. This will only happen once, so if you need to access it multiple times you will need to set the option again each time (alternatively, see the .vmx settings below)
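
If you would rather not click through the client each time, the same effect can be had by editing the VM’s .vmx file while the VM is powered off. Both of the options below are standard VMX settings, though it is worth confirming the behaviour on your ESX version. To force the BIOS setup screen on the next boot only:

bios.forceSetupOnce = "TRUE"

Or, to slow the POST screen down so there is time to press the setup key manually (the value is in milliseconds):

bios.bootDelay = "5000"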

Virtual Machine in ESX 4.0 Virtual Infrastructure hangs when powering on as it attempts to relocate the VM to a different vSphere node


I experienced this recently with a server that had been inadvertently powered down. I restarted it as normal via the vSphere client, but the progress bar got stuck at 1%.

Looking at the event details, it stated that vCenter was relocating the server to a different node within the infrastructure before actually powering it on. I have no idea why, as the original vSphere node had no issues or resource overload. The actual migration between nodes appeared to be hung.

To resolve this I restarted the vCenter server and was then able to start the VM successfully.
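
A full reboot worked for me, but restarting just the VirtualCenter service should achieve the same thing. On a Windows-based vCenter 4.0 installation the service short name is vpxd (worth confirming in services.msc before relying on it):

net stop vpxd
net start vpxd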

Enabling root SSH login on an ESX or vSphere host


After rebuilding a vSphere node I forgot that SSH is disabled by default for the root account.

As I hadn’t created a new user with root permissions, which would have allowed me to log in over SSH, I was locked out, because the sshd service does not allow root logins by default.

To enable root login for SSH and SCP:

· At the server itself, log in to the console as root

· Edit the configuration file for SSH with the following command:

nano /etc/ssh/sshd_config

Locate the line that starts with PermitRootLogin (under the #Authentication heading) and change the no to yes.

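The line should go from:

PermitRootLogin no

to:

PermitRootLogin yes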

· Save the file by first pressing Ctrl-O and then Enter. Exit with Ctrl-X.

· Restart the sshd service with the command:

service sshd restart

Test by connecting remotely using an SSH client such as PuTTY.
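
From any machine with an OpenSSH client the quickest test is a one-liner; the hostname below is just a placeholder for your own host:

ssh root@esx-host.example.com

SCP works in the same way once root login is enabled, for example:

scp /tmp/test.txt root@esx-host.example.com:/tmp/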

Is it possible to rename a VMware vSphere or ESX datastore whilst it is live?


You can rename a datastore via the vSphere client without any impact on the running VMs in that datastore.

The name that you see is a friendly label that vCenter uses for the datastore. The underlying UUID of the datastore is not changed, so there is no impact on the vSphere host, the datastore, or the VMs.

If you feel unsure then by all means migrate the VMs off the datastore before making the name change.
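
For anyone who prefers to script the change, the rename can also be done from PowerCLI. This is a minimal sketch assuming you are already connected to vCenter with Connect-VIServer; both datastore names are placeholders:

Get-Datastore -Name "OldDatastoreName" | Set-Datastore -Name "NewDatastoreName"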

When you add a VM back into the vCenter inventory it is grayed out and marked (invalid)


I had this recently when I made a change to the .vmx file, removed the VM from the inventory, and then attempted to add it back into the inventory to reinitialise it.

Although I saw many explanations relating to services on the vSphere host and to restarting the vCenter service on the vCenter server, my issue was caused by a corrupt speech mark introduced by copying and pasting configuration syntax into the .vmx file from Notepad.

As soon as I replaced the corrupt speech mark and added the VM back in, it reinitialised successfully.
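
To illustrate with a hypothetical .vmx entry (the key name here is just an example), VMX values must be wrapped in plain straight quotes, but word processors and web pages often substitute curly quotes that the parser rejects:

scsi0:0.fileName = "myvm.vmdk"

scsi0:0.fileName = “myvm.vmdk”

The first line uses straight quotes and is valid; the second uses curly quotes pasted from a formatted document and will leave the VM showing as (invalid).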

vSphere nodes power down VMs after a power outage or network isolation


We have experienced a couple of strange issues with our vSphere nodes and hosted VMs when there was a power outage and/or a temporary loss of network connectivity.

We have a vSphere cluster connected to a SAN via FC and configured with HA and FT.

After a power test we lost a couple of physical switches (they were not on a UPS and power cycled). The vSphere nodes seemed to become non-responsive, but what had actually happened was that all the VMs, including the vCenter VM, had been powered down.

After some investigation we found that the problem was caused by a default isolation setting in HA, which has the host isolation response set to Shut Down.

To stop the problem happening again we changed the host isolation response to Leave powered on.
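
The setting lives in the cluster’s HA options in the vSphere client (Edit Settings, VMware HA, Virtual Machine Options). If you would rather change it from PowerCLI, something like the following should work, assuming your PowerCLI version supports the -HAIsolationResponse parameter (DoNothing corresponds to leaving the VMs powered on); the cluster name is a placeholder:

Get-Cluster -Name "ProductionCluster" | Set-Cluster -HAIsolationResponse DoNothing -Confirm:$false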

VMware’s explanation makes sense and can be found on page 14 of the vSphere Availability Guide, under the Failure Detection and Host Network Isolation section. Here is the link: http://www.vmware.com/pdf/vsphere4/r40/vsp_40_availability.pdf