We’ve been lucky enough to get our hands on a Dell Virtual SAN ready node based on the FX2 hardware platform. The FX2 chassis contains 4 x FC430 compute nodes, 2 x storage sleds containing the SSD drives, and 2 x 10Gb IOA modules. Each node has 3 x 800GB and 1 x 200GB SSD drives mapped from the storage sleds, and these are presented in pass-through mode.
Initial High Level Configuration
To set up the FX2 I have installed ESXi 6.0 U2 on each of the four FC430 nodes and deployed the vCenter Server Appliance (VCSA). All four ESXi hosts have been added to a new cluster which I’ve called VSAN, and I’ve configured the network prerequisites for VSAN traffic. I’ve uplinked each IOA to my 10Gb switch using a LAG (by default the IOAs are configured to LAG) and made sure the internal ports come online.
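If you prefer scripting to clicking, the same initial steps can be done with PowerCLI. The sketch below assumes a freshly deployed VCSA; the vCenter address, datacenter, host names, port group and IP range are just placeholders from my lab, so substitute your own.

```powershell
# Minimal PowerCLI sketch of the initial build. The vCenter address, datacenter,
# host names, root password, port group and IP range are lab placeholders.
Connect-VIServer -Server vcsa.lab.local

# Create the cluster and add the four FC430 nodes
$cluster = New-Cluster -Name "VSAN" -Location (Get-Datacenter -Name "Lab")
"esx01","esx02","esx03","esx04" | ForEach-Object {
    Add-VMHost -Name "$($_).lab.local" -Location $cluster -User root -Password 'VMware1!' -Force
}

# Add a VMkernel adapter tagged for Virtual SAN traffic on each host
$octet = 11
foreach ($vmhost in Get-Cluster -Name "VSAN" | Get-VMHost) {
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch0" -PortGroup "vSAN" `
        -IP "192.168.50.$octet" -SubnetMask "255.255.255.0" -VsanTrafficEnabled:$true
    $octet++
}
```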
Don’t forget that, as this is an FX2 chassis, it is possible to streamline large deployments using server templates, either via the CMC or via Dell OpenManage Essentials if licensed correctly. My previous blog series covers the FX2 in more detail.
http://www.definetomorrow.co.uk/blog/2016/2/29/dell-fx2-part-1-introduction-and-use-cases
VSAN Configuration
Once the initial vSphere configuration is complete the next step is to create a VSAN datastore using the local storage in each host. To do this, edit the cluster settings, select Virtual SAN > General, and click Configure.
This will launch the VSAN configuration wizard. There are options for disk claiming, enabling deduplication and compression, and configuring fault domains or stretched clusters. To keep things simple I have chosen to claim my disks manually, and I don’t require compression, dedupe or fault domains at this stage.
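For reference, this step can also be scripted. Below is a minimal PowerCLI equivalent of enabling Virtual SAN with manual disk claiming; note that deduplication/compression and fault domains are wizard (or vSAN cmdlet) options rather than Set-Cluster parameters, so they aren’t shown here.

```powershell
# Enable Virtual SAN on the existing cluster with manual disk claiming -
# the PowerCLI equivalent of this wizard step
Set-Cluster -Cluster "VSAN" -VsanEnabled:$true -VsanDiskClaimMode Manual -Confirm:$false
```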
Clicking next takes you to the network validation page. This checks that the VSAN VMkernel adapters have been set up correctly.
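The same check is easy to do from PowerCLI if you want to confirm the tagging before running the wizard; this simply lists any VMkernel adapters in the cluster with VSAN traffic enabled.

```powershell
# List any VMkernel adapters in the cluster that are tagged for Virtual SAN traffic
Get-Cluster -Name "VSAN" | Get-VMHost |
    Get-VMHostNetworkAdapter -VMKernel |
    Where-Object { $_.VsanTrafficEnabled } |
    Select-Object VMHost, Name, IP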
The next step is to configure the disks in each of the hosts. The screenshot below shows the disks detected in one host: 3 x 800GB and 1 x 200GB SSDs are available. The 800GB disks will be configured in the capacity tier and the 200GB disk in the cache tier.
Once all the disks have been claimed the wizard will show how much total capacity has been claimed for each tier. I have around 8.5TB configured in the capacity tier and around 800GB in the cache tier.
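If you wanted to script the disk claiming instead, a rough PowerCLI sketch is below. It picks the cache and capacity devices purely by size, so sanity-check the canonical names it finds on each host first; also bear in mind that in an all-flash design the capacity SSDs may need to be tagged as capacity flash before they can be claimed, which I haven’t shown here.

```powershell
# Build one all-flash disk group per host: the 200GB SSD as cache,
# the three 800GB SSDs as capacity. Disk selection is purely by size here,
# so check the canonical names before running this against real hosts.
foreach ($vmhost in Get-Cluster -Name "VSAN" | Get-VMHost) {
    $disks    = Get-ScsiLun -VmHost $vmhost -LunType disk
    $cache    = ($disks | Where-Object { $_.CapacityGB -lt 300 }).CanonicalName
    $capacity = ($disks | Where-Object { $_.CapacityGB -ge 300 -and $_.CapacityGB -lt 1000 }).CanonicalName
    New-VsanDiskGroup -VMHost $vmhost -SsdCanonicalName $cache -DataDiskCanonicalName $capacity
}
```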
Jumping back over to the hosts and cluster view I can now see the available capacity is around 8.5TB. I can also see all the tests for VSAN health have passed. Clicking the “Monitor Virtual SAN health” link will take you to a more detailed view.
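You can confirm the same thing from PowerCLI; “vsanDatastore” below is the default name given to the new datastore.

```powershell
# Confirm the vSAN datastore exists and check its capacity
# ("vsanDatastore" is the default name)
Get-Datastore -Name "vsanDatastore" | Select-Object Name, CapacityGB, FreeSpaceGB

# Later PowerCLI releases also add a dedicated health check cmdlet:
# Test-VsanClusterHealth -Cluster "VSAN"
```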
Storage Policies
By default there is a single policy configured called Virtual SAN Default Storage Policy. This has a basic configuration and is applied to all objects if no other policy is selected when provisioning a virtual machine. Creating a new storage policy is quite straightforward: click the “Create New VM Storage Policy” icon to launch the wizard.
Give the policy a name and choose “VSAN” from the drop-down menu. I’m not going to go through the options now, but I’ll include a link to a great blog post by Cormac Hogan which explains them below.
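Policies can also be created with the SPBM cmdlets in PowerCLI. Below is a minimal sketch that builds a policy with “number of failures to tolerate” set to 1; the policy name is just an example.

```powershell
# Build a simple vSAN policy: number of failures to tolerate = 1.
# The capability name comes from the VSAN storage provider; the policy name is an example.
$ftt     = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 1
$ruleSet = New-SpbmRuleSet -AllOfRules $ftt
New-SpbmStoragePolicy -Name "vSAN - FTT1" -Description "Tolerate one host failure" -AnyOfRuleSets $ruleSet
```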
Once the policies are configured they can be applied when deploying a new virtual machine or on the fly by changing the policy applied to an existing virtual machine. To do this right click the VM and choose VM Policies > Edit VM Storage Policies.
In the example below I have changed the policy to another I’ve created called “vSAN – Cache Reservation”. The GUI will tell you what the impact of making this change will be. In this example the storage policy has a cache reservation of 5%, so 2GB will be reserved on the cache tier by this VM.
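The same change can be made with PowerCLI; a sketch is below. The VM name “web01” is a placeholder, and the policy is applied to both the VM home object and its hard disks.

```powershell
# Apply an existing policy to a VM and its hard disks.
# "web01" and the policy name are placeholders.
$policy = Get-SpbmStoragePolicy -Name "vSAN - Cache Reservation"
$vm     = Get-VM -Name "web01"

Get-SpbmEntityConfiguration -VM $vm | Set-SpbmEntityConfiguration -StoragePolicy $policy
Get-SpbmEntityConfiguration -HardDisk ($vm | Get-HardDisk) | Set-SpbmEntityConfiguration -StoragePolicy $policy
```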
Monitoring Virtual SAN
Now the hardware has been set up and storage policies created, I have deployed several more virtual machines to the cluster. Performance and consumption data can be found by selecting the VSAN cluster, clicking Monitor, then the Virtual SAN tab. Here you can view information on physical disks, virtual disks, resyncing components, health and capacity, and perform proactive tests. There is a nice graphic showing the breakdown of consumed capacity. As this is an all-flash configuration running 6.2, I have enabled deduplication and compression to get even more effective capacity.
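Space efficiency and space usage can also be handled from the command line, although the dedicated vSAN cmdlets shown below arrived in later PowerCLI releases than the build I used here, so treat this as a sketch rather than something I ran against this exact cluster.

```powershell
# Enable deduplication and compression on the cluster, then review space usage.
# These vSAN-specific cmdlets came in later PowerCLI releases.
Set-VsanClusterConfiguration -Configuration (Get-Cluster -Name "VSAN") -SpaceEfficiencyEnabled:$true
Get-VsanSpaceUsage -Cluster "VSAN"
```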
Clicking the performance tab allows you to see information on IOPS, bandwidth and latency. This can be viewed at the virtual machine level or at the VSAN backend level.
Summary
If you’re looking at deploying Virtual SAN, perhaps for a VDI deployment or a remote office, then the FX2 ready nodes are certainly worth looking at. Not only do they provide a large amount of compute in a small footprint, they are also really scalable. The configuration above provided over 8TB of all-flash capacity consuming just 2U of rack space. You can easily scale upwards and manage up to 20 chassis under a single CMC. Don’t forget, if you’re concerned about chassis resilience, you have the option to configure fault domains and/or stretched clusters in 6.2. Once again thanks to Mark Maclean at Dell for loaning us an FX2.
Useful Links
http://cormachogan.com/2013/09/10/vsan-part-7-capabilities-and-vm-storage-policies/