— pissing into the wind


Let me start by saying this:  I hate VCSA 6.5.  I hate the fact that I have to use Flash (which EVERYONE is dropping support for) to manage my enterprise environment.  Flash and Java… good riddance.  I didn’t even realize I didn’t have Flash installed until I had to manage this stupid thing and needed the plugin for IE11.

I recently upgraded(?) my Windows vCenter 6.0 installation to VCSA 6.5.  I couldn’t get the migration to work and this is just at home, so I did a clean install, recreated my 6.0 environment, and reattached my hosts to the new vCenter.  One of the things that I’ve been struggling with since then is not being able to deploy new OVAs or upload files to my datastores (short of just using scp).  Scouring the internet, I came across a couple of VMware KB articles that solved my issues:

This one talks about the issue and this one solves it.  Basically you have to either set up a valid cert on the VCSA or trust the built-in one signed by the VCSA CA.  Now I can even use Edge (for now) to access vCenter.


A few weeks ago, I was banging my head on the table trying to get the management port group working on a NIC team/EtherChannel for a client.  They use Netgear switches, so I was kinda feeling my way through the GUI to make it work.  Everything looked right, but I still couldn't get the stupid EtherChannel working.  Every time I plugged in both NICs on the 2-NIC channel, the link would drop.  It would come right back up when I removed one of the links.  I got fed up and blamed the switches.

Normally when you create an EtherChannel you also go into the vSwitch properties and enable "Route based on IP hash" as the load balancing algorithm.  As it turns out, THE MANAGEMENT NETWORK PORT GROUP DOES NOT INHERIT THIS SETTING IN 4.1.  I followed the instructions tonight and the EtherChannel works like a champ now (again?) at the client site.

%#$@#%#$!!!


I recently went through the exercise of upgrading my standalone VMware ESXi 4.1 server.  The process is pretty easy.

Grab the vSphere CLI from:  http://www.vmware.com/support/developer/vcli/

Grab the relevant patches.  Make sure you select the correct version of ESX/ESXi.  I facepalmed when I realized I was trying to update using the ESX (not ESXi) packages and nothing was happening.  http://www.vmware.com/patch/download/

Move the downloaded zip file to C:\Program Files (x86)\VMware\VMware vSphere CLI\bin.  In a command prompt run:  C:\Program Files (x86)\VMware\VMware vSphere CLI\bin>vihostupdate.pl -server <server ip> -i -b <updatefile>.zip

Reboot and that’s it.
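Put together, the whole sequence looks roughly like this (the host IP and bundle file name below are placeholders; -i/-b are short for --install/--bundle):

```
cd "C:\Program Files (x86)\VMware\VMware vSphere CLI\bin"

rem Put the host in maintenance mode first if it's running VMs.
vihostupdate.pl --server 192.168.1.50 --username root -i -b <updatefile>.zip

rem Verify which bulletins got installed, then reboot the host.
vihostupdate.pl --server 192.168.1.50 --username root --query
```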


Promiscuous mode needs to be enabled on the vSwitch if you are using bridge mode.  Remember that before you facepalm.
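On 4.1 this lives in the vSphere Client under vSwitch Properties → Security → Promiscuous Mode → Accept.  For reference, on newer ESXi builds (5.x and later) the same setting can be flipped from the shell; the vSwitch name here is a placeholder:

```
# Allow promiscuous mode on the vSwitch backing the bridged network
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 --allow-promiscuous=true
```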


I’m going to call this a successful migration with a couple issues:

1)  I didn’t/forgot to set the block size of the vmfs when I installed ESXi.  The default is 1MB.  This means my virtual disks are limited to 256GB.  Not a big deal for now.  I plan on putting a set of 2TB disks on the controller as well at a later point in time and I’ll remember to set the block size to 8MB then.  As a workaround in the meantime, I’m just using dynamic disks on Windows when I need more than a 256GB disk.
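The VMFS-3 block size to max file size relationship is a straight doubling from that 1MB/256GB starting point, which is why 8MB is the size to pick for 2TB vmdks.  A quick sketch of the table:

```python
# VMFS-3: max file size scales linearly with block size.
# 1 MB blocks -> 256 GB max vmdk, doubling from there.  (Sizes are the
# usual round numbers; the real limits are a few hundred bytes short.)
def max_vmdk_gb(block_size_mb):
    return 256 * block_size_mb

for bs in (1, 2, 4, 8):
    print(f"{bs} MB block size -> {max_vmdk_gb(bs)} GB max file")
```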

2)  The Perc 6i card got hot.  It never failed or exploded, but it was hot enough to burn my finger the instant I touched the heatsink.  These things were meant to be in servers with airflow going over them, so that’s understandable.  I installed a 40mm chipset cooler to solve the heat issue.  Here are some pics:


Heatsink removed

Chipset sans heatsink

New heatsink mounted

Side shot.  Notice that I reused the clips from the original heatsink to attach the upgraded one.

This does take up the PCI slot immediately next to the controller, so keep that in mind if you decide to go this route.

So now I'm on ESXi 4.1 with 2TB of RAID 10 storage.  I've added a couple more servers and increased the memory on a couple of others, where before I was completely maxed out on memory and couldn't do anything more.  Huzzah!


The migration of my Hyper-V R2 environment to ESXi 4.1 is at the halfway point right now.  At this point, all the systems have been imported into VMware Workstation 7.1, which I have installed on my desktop.  The process didn't go as smoothly as I had hoped.  For starters, I couldn't use the built-in importer in Workstation.  It just wouldn't work, and I didn't want to troubleshoot all day.  I downloaded the standalone converter, but when I tried to P2V remote systems, it would only let me select VMware Infrastructure destinations.  So, I ended up installing the converter on all my Windows systems; run it locally and you can send the result to a network share.  Also, GPT is NOT supported by the converter.  This is in the readme, but of course I didn't read it until I ran into problems.  Pretty much the only way to get around it is to move the data onto another drive that's been set up as MBR.

Linux systems didn't go as smoothly either.  The latest version of the converter doesn't come in a *nix flavor.  Maybe I'm wrong, but I couldn't find it.  I ended up using WinImage to convert the vhd file to a vmdk.  Then I created a new virtual machine in Workstation 7.1 with roughly the same specs as the Hyper-V machine and told it to use the existing vmdk I had just created.  The machine boots up fine, but there's no NIC.  VMware Tools has to be installed, then /etc/udev/rules.d/70-persistent-net.rules has to be edited.  All I did was comment out the first line for the tulip driver and change the line for the e1000 to be eth0 instead of eth1.  Reboot and it should be good to go from there.  See this post for more info.
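The edited 70-persistent-net.rules ends up looking something like this (MAC addresses are placeholders, and your distro may include a couple of extra match keys):

```
# Old Hyper-V legacy (tulip) NIC -- commented out after the P2V:
# SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:15:5d:xx:xx:xx", KERNEL=="eth*", NAME="eth0"

# VMware e1000 NIC, renamed from eth1 to eth0 so the old network config still applies:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:xx:xx:xx", KERNEL=="eth*", NAME="eth0"
```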

The Perc 6i card is installed, and the 2TB RAID 10 is building now.  I'll install ESXi 4.1 after that and hopefully everything will import nice and smooth.


So I'm in the mood to upgrade my virtual server.  Right now it's running a Phenom II quad-core with 8GB of RAM.  There are 2 320GB disks in a RAID 1 using the onboard NVIDIA controller.

The disk configuration was a big deciding factor when I was trying to choose between Hyper-V R2 or ESXi 4.x.  Simply put, ESXi doesn’t recognize the controller as anything more than a standard SATA controller, so RAID and thus ESXi were a no-go.

Microsoft's built-in management tools for Hyper-V never appealed to me.  I don't feel like I can do enough to the host OS.  Plus, getting them to work in the first place is an ordeal in itself.  See http://blogs.technet.com/b/jhoward/archive/2008/03/28/part-1-hyper-v-remote-management-you-do-not-have-the-requested-permission-to-complete-this-task-contact-the-administrator-of-the-authorization-policy-for-the-computer-computername.aspx.  Now that everything is up and running, I don't want to touch it lest I break anything.  I hate this kind of feeling with systems and replace them with something more manageable ASAP.

I'm not sure how well System Center works for managing multiple Hyper-V hosts in an enterprise, but vCenter works very well and it's quite robust.  I feel like it's a very complete management solution for a virtual machine environment.  I digress though; this is just for home and only 1 machine.

Another thing that I don’t like is the lack of memory overcommit.  Hyper-V won’t let me provision more than 7 of the 8GB for the guest systems.  As I experiment and put in new systems, this is becoming a real hard limit and I’m pretty much stuck right now.

So, I've made the decision to do what it takes to get onto ESXi.  The first thing I need to do is replace the RAID controller.  I picked up a Dell Perc 6i WITH battery (score!) off of eBay for cheap.  Almost all of these controllers do NOT come with brackets, so I had to purchase one from Mouser Electronics.  My plan is to go with at least 2 1TB drives for the OS and at least 2 2TB drives for a file server, all at a minimum of RAID 1.  I might do something else if I can pick up more drives, but no RAID 0.  To get this going, I need a 32-pin to 4x SATA cable.  One can be had from Dell or Amazon for about $20.

Once this is in, I’m going to have to P2V all the servers from Hyper-V hell using VMware Workstation on my desktop as purgatory before I bring up the ESXi host and then import them into VMware heaven.

I'm then going to up the memory on a couple of systems to see what performance is like when I overcommit.  If it's acceptable, then I'll be happy for about 5 minutes.  At some point, I'm going to pull all that RAM and add four 4GB sticks to max out the system at 16GB anyway.

Right now I’m just waiting on the cables from Amazon and then I have to order hard drives from Newegg.  One other thing I’m worried about is heat.  The case I have does not have any cooling over the hard drives and I noticed the ones in place now are pretty hot.  That may be another cost that I’m eventually going to have to consider, but I’ll cross that bridge when I get to it.
