I could have accomplished this project by swapping hard drives whenever I needed to change hypervisors, but that felt more like accepting the problem than solving it.
My initial plan was to divide my six hard drives into four logical volumes at the RAID controller level. Unfortunately, the RAID controller in the servers I was using (I'm going to refrain from plugging any one vendor, at least as long as I'm operating on other people's hardware) doesn't provide this functionality. For a brief moment I flirted with the idea of carving up the disks with partitions, until a closer look at the ESXi and Xen installers revealed that neither offers any control over the partition table beyond selecting an installation volume.
Instead of telling this story in the order it happened, I'm going to first go over the hiccups I encountered along the way. That way, in case you're like me and already started this project before doing your research, you can plan a bit before reading the whole article.
VMware Boot Bank Corruption
I'm not sure how I accomplished this on the first server, because I couldn't replicate it on the second. My best guess is that Windows was automounting the VMware boot banks (they're formatted FAT32). Anywho, for quite a long time I was stuck with the error "Not a VMware Boot Bank". VMware provides very little information on this error, and none of it applied to my situation. Unfortunately, VMware also doesn't seem to provide a way to repair a corrupted boot bank, so I was forced to reinstall. Afterwards I ran these commands to prevent Windows from auto-mounting these volumes:
diskpart
automount disable
exit
Windows installing onto the wrong drives
While Microsoft is nice enough to ask which drive you'd like to install Windows on, that doesn't mean the installer will listen. It will put your C: drive on the selected partition, but it installs the 100MB System Reserved partition, which it boots from, wherever it sees fit. On a system like mine, with lots of unformatted drives and Windows going onto something other than the first volume, it chose to put the System Reserved partition and the Windows MBR on the first volume set, ensuring that Windows would be booted automatically. I ended up fixing this by leaving only the disks for Windows in the server during its installation, then inserting the other disks after it was done.
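If pulling drives isn't an option, it should also be possible to relocate the boot files onto the Windows disk after the fact with diskpart and bcdboot. A rough sketch from an elevated command prompt; the disk and partition numbers and the S: drive letter are assumptions for illustration, so adjust them to your layout (and you may still need to fix the BIOS boot order afterwards):

```shell
:: Build a diskpart script that marks the boot partition on the disk
:: Windows actually lives on as active and gives it a drive letter.
:: Disk/partition numbers below are assumptions -- check `list disk` first.
(
  echo select disk 1
  echo select partition 1
  echo active
  echo assign letter=S
) > fix-boot.txt
diskpart /s fix-boot.txt

:: Copy the boot files onto that partition so booting no longer
:: depends on the stray System Reserved partition on the other disk.
bcdboot C:\Windows /s S:
```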
XenServer is scared of VMFS volumes
Citrix, it's OK to be scared, but you need to face your fears. Every time I booted the Xen installer it died when it got to the point of scanning the drives, with the error "Could not parse Sgdisk". From what I found via a quick Google search, this is due to the XenServer 6 installer's inability to handle VMFS volumes. Citrix, this is no way to convert followers away from VMware. I resolved this by making sure the disk I was installing Xen to was formatted, and by pulling the other disks out of the server during the XenServer installation.
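Another way out, if you'd rather not pull drives, is to wipe the leftover VMFS signatures from the offending disk before running the installer. Here's a sketch against a throwaway image file rather than a real device, since dd is destructive; substitute your actual device path with great care:

```shell
# Stand-in image for the disk that still carries VMFS leftovers
truncate -s 16M fake-disk.img

# Plant a fake signature where partition data would sit
printf 'VMFS leftovers' | dd of=fake-disk.img bs=1 seek=512 conv=notrunc status=none

# Zero the first MiB -- this clears the MBR/GPT header and most of the
# filesystem magic that trips up the installer's sgdisk scan. Note that
# a GPT disk keeps a backup header at the *end*, so on a real GPT disk
# zero the last MiB too, or use `sgdisk --zap-all` on the device instead.
dd if=/dev/zero of=fake-disk.img bs=1M count=1 conv=notrunc status=none
```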
XenServer is scared of GPT volumes
I'm not sure if this was specific to my setup, or why it was occurring, but I had to follow these instructions to get past a GPT error which occurred during the installation of Xen 6:
1. Boot from the XenServer 6.0.0 install CDROM.