IsilonSD – Part 2: Test Platform

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

So… last post I figured out that IsilonSD Edge needs (at least) three separate ESX hosts, each with (at least) seven independent, local disks. That means I can't deploy it on the nested ESXi VSAN environment I built, but I really want to get IsilonSD up and running.

Since my previous attempt to wing it didn't go so well, I'm also going to do a little more planning. I need to create ESX hosts that meet the criteria, plus plan out things like the cluster name, network settings, IP addresses, and DNS zones.

For the ESX hosts, my solution is to run nested ESXi (a virtual machine running ESX on top of a physical machine running ESX). This lets me provide the independent disks, as well as multiple ESX hosts, without all the hardware. It also helps with the networking needs, since I can create virtual switches to provide the internal and external networks.

To build this test platform, we’ll cover 4 main areas:

  • ESX Hosts
  • External Network
  • Internal Network
  • SmartConnect

ESX Hosts

For testing IsilonSD Edge, I'm going to make four virtual machines and configure them as ESX hosts. Each of these will need four NICs (two for ESX purposes, two for IsilonSD) and nine hard drives (two for ESX again, seven for IsilonSD). I'm hosting all the hard drives in a single datastore, which happens to be SSD. For physical networking, my host only has a single network card connected, so I've leveraged virtual switches without a network card to simulate a private management network.

A snapshot of my single VM configuration is below:

IsilonSD_NestedESXDiagram1

With the first virtual machine created, I simply clone it three times so I have four exact replicas. Why four? It will allow a three-node cluster to start; then I can test adding (and removing) a node without data loss, the same as we would with a physical Isilon.
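If you want to sanity-check the clones before going further, a quick script against vCenter can count the disks and NICs on each nested host. Below is a minimal sketch using pyVmomi (a third-party library); the vCenter address, credentials, and the 'nested-esx' VM name prefix are placeholders from my lab, not anything IsilonSD itself dictates.

    # Minimal sketch: verify each nested ESX VM has the hardware IsilonSD needs.
    # Assumes pyVmomi is installed (pip install pyvmomi); names and credentials
    # below are lab placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only: skip certificate checks
    si = SmartConnect(host='vcenter.lab.vernex.io', user='administrator@vsphere.local',
                      pwd='********', sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        if not vm.name.lower().startswith('nested-esx'):   # placeholder naming convention
            continue
        devices = vm.config.hardware.device
        disks = [d for d in devices if isinstance(d, vim.vm.device.VirtualDisk)]
        nics = [d for d in devices if isinstance(d, vim.vm.device.VirtualEthernetCard)]
        # Each nested host should show 9 disks (2 for ESX + 7 for IsilonSD) and 4 NICs.
        print(f"{vm.name}: {len(disks)} disks, {len(nics)} NICs")

    view.Destroy()
    Disconnect(si)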

Note the 'guest OS' for the virtual machines is VMware ESXi 6.x. This is a nice feature of vSphere that helps you keep track of your nested ESXi VMs. Keep in mind, though, that nesting vSphere is NOT supported by VMware; you cannot call and ask for help. That's not a concern here, given I can't call EMC for Isilon either since I'm using the Free and Frictionless downloads. This is not a production-grade configuration by any stretch.

IsilonSD_NestedESXDiagramx4

Once all four virtual machines exist on my physical ESX host, installing ESX is just an ISO attach away.

After installing ESX on all my virtual hosts, I add them to my existing vCenter as hosts. vCenter doesn't know these are virtual machines and treats them the same as a physical ESX host.
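Adding the hosts can also be scripted rather than clicking through the Add Host wizard four times. Here's a hedged sketch with pyVmomi; the cluster name 'Nested-Isilon', the host name, and the credentials are placeholders for my lab.

    # Sketch: add one nested host to a vCenter cluster via the API instead of the wizard.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only: skip certificate checks
    si = SmartConnect(host='vcenter.lab.vernex.io', user='administrator@vsphere.local',
                      pwd='********', sslContext=ctx)
    content = si.RetrieveContent()

    # Find the target cluster by name ('Nested-Isilon' is a placeholder).
    clusters = content.viewManager.CreateContainerView(content.rootFolder,
                                                       [vim.ClusterComputeResource], True)
    cluster = next(c for c in clusters.view if c.name == 'Nested-Isilon')

    spec = vim.host.ConnectSpec(hostName='nested-esx-01.lab.vernex.io',
                                userName='root', password='********', force=True)
    try:
        WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))
    except vim.fault.SSLVerifyFault as err:
        # vCenter wants the host's certificate thumbprint confirmed; retry with it.
        spec.sslThumbprint = err.thumbprint
        WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))

    clusters.Destroy()
    Disconnect(si)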

I've placed these virtual hosts into a vCenter cluster, but this is purely for aesthetics, to keep them organized. I won't enable normal cluster features such as HA and DRS, given Isilon cannot leverage them, nor does it need them. Also, since there is no shared storage between these hosts, you cannot do standard vMotion (enhanced vMotion is always possible, but that's another blog).

Here you can see those virtual machines with ESX installed masquerading as vSphere hosts:

IsilonSD_NestedESXCluster

I'll leverage Cluster A, which you see in the screenshots, for the Management Server and the InsightIQ server. Cluster A is another cluster of nested ESXi VMs I used for testing VSAN; it also has the Virtual Infrastructure port group available, so all the IsilonSD Edge VMs can be on the same logical segment.

External Network

The Isilon external network is what faces the NAS clients. In my environment, I have a vSphere Distributed Virtual Switch port group called 'Virtual Infrastructure' where I place my core systems. This is also where vCenter and the ESX hosts sit, and it's what I'll use for Isilon, since there is no firewall/router between the Isilon and the clients I'll connect to it.

The Virtual Infrastructure network space is 10.0.0.0/24; I've set aside a range of IP addresses for Isilon in this network:

  • .50 for the management server
  • .151-.158 for the nodes
  • .159 for SmartConnect
  • .149 for InsightIQ

You MUST have a contiguous range for your nodes, but all other IP addresses are personal preference.

For use in the deployment steps:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.0.151
-High IP Range: 10.0.0.158
-MTU: 1500
-Gateway: 10.0.0.1
-DNS Servers: 10.0.0.11
-Search Domains: lab.vernex.io
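Since the node addresses have to be a contiguous block inside the subnet, I like to double-check the plan before typing it into the deployment wizard. A small sketch using Python's standard ipaddress module with the values above:

    # Sanity-check the external IP plan with Python's standard ipaddress module.
    import ipaddress

    network = ipaddress.ip_network('10.0.0.0/24')
    low = ipaddress.ip_address('10.0.0.151')
    high = ipaddress.ip_address('10.0.0.158')
    others = {
        'management server': ipaddress.ip_address('10.0.0.50'),
        'SmartConnect':      ipaddress.ip_address('10.0.0.159'),
        'InsightIQ':         ipaddress.ip_address('10.0.0.149'),
    }

    # The node range must be contiguous addresses that fall inside the subnet...
    node_ips = [ipaddress.ip_address(i) for i in range(int(low), int(high) + 1)]
    assert all(ip in network for ip in node_ips)
    print(f"{len(node_ips)} node addresses reserved: {low} - {high}")

    # ...and none of the other reserved addresses should collide with it.
    for name, ip in others.items():
        assert ip not in node_ips, f"{name} overlaps the node range"
        print(f"{name}: {ip}")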

Internal Network

Physical Isilon appliances use dedicated InfiniBand switches to interconnect the nodes. This non-blocking, low-latency, high-bandwidth network allows the nodes to communicate with each other and stripe data across nodes, providing hardware resiliency. For IsilonSD Edge, Ethernet over a virtual network is used for the same purpose. If you were deploying this on physical hosts, you could bind the Isilon internal network to anything all hosts have access to: the same network as vMotion, or a dedicated network if you prefer. Obviously, 10Gb is preferable, and I would recommend diversifying your physical connections using failover or LACP at the vSwitch/VDS level.

For my lab, I have a vSphere DVS for private management traffic; this is bound to the virtual switch on my host that has no physical NIC associated with it, so it's a private network on the physical host underneath my nested ESXi instances. I already use this DVS for VSAN traffic, so I merely created an additional port group for Isilon named PG_Isilon.

Because this is essentially a dedicated, non-routable network, the IP addresses don't really matter. But to keep things clean, I use a range set aside for private traffic (10.0.101.0/24) and use the same last octets as on my external network.

For use in the deployment:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.101.151
-High IP Range: 10.0.101.158
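To confirm the internal plan lines up with the external one (same last octet per node, and two /24s that don't overlap), here's a small continuation of the earlier ipaddress sketch:

    # Pair up external and internal addresses per node and confirm the /24s are distinct.
    import ipaddress

    external_net = ipaddress.ip_network('10.0.0.0/24')
    internal_net = ipaddress.ip_network('10.0.101.0/24')
    assert not external_net.overlaps(internal_net)

    for last_octet in range(151, 159):          # nodes use .151-.158 on both networks
        ext = ipaddress.ip_address(f'10.0.0.{last_octet}')
        internal = ipaddress.ip_address(f'10.0.101.{last_octet}')
        assert ext in external_net and internal in internal_net
        print(f"node .{last_octet}: external {ext} / internal {internal}")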

SmartConnect

For those not familiar with Isilon, SmartConnect is the technique used to load balance clients across the multiple nodes in the cluster. Isilon protects data across nodes using custom code, but to interoperate with the vast variety of clients it uses standard protocols such as SMB, CIFS, and NFS. For these protocols there still is no industry-standard method for spreading load across multiple servers (NFS does have the ability for transparent failover, which Isilon supports), so the approach here is a beautiful blend of powerful and simplistic: by delegating a zone in your enterprise DNS for the Isilon cluster to manage, SmartConnect will hand out IP addresses to clients based on different load balancing options appropriate for your workloads, such as round robin (the default) or others like least connections.

To prepare for deploying an IsilonSD Edge cluster, we're going to modify DNS to extend a delegation to Isilon. Configuring DNS ahead of time makes the deployment simple. If you're running Windows DNS, here are the quick steps (if you're using BIND or something similar, this is just a delegation and should be very similar in your config files).

Launch the Windows/Active Directory DNS Administration Tool

IsilonSD_NewDNSDelgation

Locate the parent zone you wish to extend; here I use lab.vernex.io.

Right-click on the parent zone and select New Delegation.

Enter the name of the delegated zone; ideally this will be your cluster name. For my deployment, it's Isilon01.

IsilonSD_Delgation1

Enter the IP address you intend to assign to Isilon SmartConnect

IsilonSD_Delgation2

That's it. When the Isilon cluster is deployed and SmartConnect is running, you'll be able to navigate to a CIFS share like \\Isilon01.lab.vernex.io; your DNS will pass the request to the Isilon DNS server, which will reply with the IP address of a node that can accept your workload. This same DNS name works for managing the Isilon cluster as well.
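If you want to watch SmartConnect do its job once the cluster is up, repeated lookups of the delegated name should rotate through the node IPs with the default round-robin policy. A minimal check using only Python's standard library (the zone name is the one from my lab):

    # Watch SmartConnect hand out node addresses for the delegated name.
    import socket
    from collections import Counter

    seen = Counter()
    for _ in range(10):
        # Each call asks the OS resolver for an A record; with round robin the
        # answers should cycle through 10.0.0.151-158 (a DNS cache in the path
        # can mask the rotation).
        seen[socket.gethostbyname('Isilon01.lab.vernex.io')] += 1

    for ip, count in seen.items():
        print(f"{ip}: answered {count} of 10 lookups")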

Quick tip: you can CNAME anything else to Isilon01.lab.vernex.io, so I could make File.lab.vernex.io a CNAME pointing to SmartConnect. This is an excellent way to replace multiple file servers with a single Isilon.

For use in the deployment:
-Zone Name: Isilon01.lab.vernex.io
-SmartConnect Service IP: 10.0.0.159