
IsilonSD – Part 3: Deploy a Cluster (Successfully)

With proper planning and setup of the prerequisites (see Part 2), the actual deployment of the IsilonSD Edge cluster is fairly straightforward. If you experience issues during this section, it's very likely because you don't have the proper configuration in place (that was my downfall in Part 1), so revisit the previous steps. That said, let's dive in and make a cluster.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

At a high level, you're going to do a few things:

  1. Deploy the IsilonSD Edge Management Server
  2. Setup IsilonSD Edge Management Server Password
  3. Complete IsilonSD Edge Management Server Boot-up
  4. Configure Management Server vSphere Link & Upload Isilon Node Template
  5. Open the IsilonSD Edge vSphere Web Client Plug-in
  6. Deploy the IsilonSD Edge Cluster

Here’s the detail.

Deploy the IsilonSD Edge Management Server

*Note: the video has no sound; it's meant for following along with the steps.
This is your standard OVA deployment. As long as you're using the “EMC_IsilonSD_Edge_MS_x.x.x.ova” file from the download and providing an IP address accessible to vCenter, you can deploy this just about anywhere.

Follow along in the video on the left if you’re not familiar with the OVA process.

Once the OVA deployment launches, find the deployment task in the vSphere task pane and keep an eye on its progress.
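If you'd rather script this step than click through the wizard, the same deployment can be driven with VMware's ovftool CLI from Python. This is a minimal sketch, assuming ovftool is installed; the vCenter address, datastore, port group, and VM name below are placeholders for your own lab.

```python
# Hedged sketch: deploy the management server OVA with ovftool via Python.
# All names below (vCenter host, datastore, port group) are placeholders.
import subprocess

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=IsilonSD-MS",                # VM name to show in vCenter
    "--datastore=SSD-Datastore",         # placeholder datastore
    "--network=Virtual Infrastructure",  # port group reachable by vCenter
    "EMC_IsilonSD_Edge_MS_x.x.x.ova",
    "vi://administrator%40vsphere.local@vcenter.lab.vernex.io/Datacenter/host/ClusterA",
]
subprocess.run(cmd, check=True)  # raises CalledProcessError if ovftool fails
```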

Setup IsilonSD Edge Management Server password

IsilonSD_ManagementBootPasswordChange

Once the OVA deployment is complete and the virtual machine is booting up, you'll need to open the console and watch the initialization process. I generally recommend this with any OVA deployment, as you'll see any errors as the first-boot configuration occurs. For the IsilonSD Edge Management Appliance, it's required: you must set the administrator password here.

Complete IsilonSD Edge Management Server Boot-up

IsilonSD_ManagementConsoleBlueScreen

After entering your password, the server will continue its first-boot process and configuration. When you reach this screen (what I call the blue screen of start), you're ready to proceed. Open a browser and navigate to the URL shown on the blue screen next to IsilonSD Management Server.

 

Configure Management Server vSphere Link & Upload Isilon Node Template

When you navigate to the URL provided by the blue screen, after accepting the untrusted certificate, you'll be prompted for logon credentials. This is NOT the password you provided during boot-up. I failed to read the documentation and assumed it was, resulting in much frustration.

Logon using:
username: admin
password: sunshine

After successful logon, and accepting the EULA, you have just a couple of steps, which you can follow along with in the video on the right:

  1. Adjust the admin password
  2. Register your vCenter
  3. Upload the Isilon Virtual Node template
    1. This is “EMC_IsilonSD_Edge_x.x.x.ova” in your download

*Note: the video has no sound; it's meant for following along with the steps.

 

Open the IsilonSD Edge vSphere Web Client Plug-in

Wait for the OVA template to upload; this may take up to ten minutes depending on your environment. Once it's complete, you'll be ready to move on to actually creating the IsilonSD cluster through the vSphere Web Client plug-in, which the Management Server installed when you registered vCenter. Ensure you close out all browser windows and open a new session to your vSphere Web Client.

IsilonSD_vCenterDatacenter

Select the datacenter where you deployed the management server (the datacenter, not the cluster; another spot where I lost some time).

 

IsilonSD_ManageTab

In the right-hand pane of vSphere, under the Manage tab, you should now see two new sub-tabs, Create IsilonSD Cluster and Manage IsilonSD Cluster.

IsilonSD_vCenterIsilonTabs

 

Deploy the IsilonSD Edge Cluster

*Note: the video has no sound; it's meant for following along with the steps.

Follow along in the video above:

  1. Check the box next to your license
  2. Adjust your node resources
    1. For my deployment, I started with 3 nodes, adjusting the Cluster Capacity from the default 2 TB down to the minimum 1152 GB (64 GB per data disk * 6 data disks * 3 nodes; the seventh disk on each host is journal/boot). See the worked math right after this list.
  3. Clicking Next on the Requirements tab will search the virtual datacenter in your vCenter for ESX hosts that can satisfy the requirements you provided, including having the independent drives that meet the capacity requirement
    1. Should the process fail to find the necessary hosts, you'll see a message like the one below. Don't get discouraged; look over the requirements again to ensure everything is in order, and try restarting the Inventory Service too.
    2. IsilonSD_NoQualifiedHosts
  4. When the search for hosts is successful, you’ll see a list of hosts available to select, such as
    1. IsilonSD_HostSelection
  5. Next, select all the hosts you wish to add to the cluster (if you prepared more than 3, consider selecting just 3 now, as in the next post we'll walk through adding an additional node).
  6. For each host, you’ll need to select the disks and their associated role (Data Disk, Journal, Boot Disk or Journal & Boot Disk).
    1. Remember, you need at LEAST 6 data disks per host; you won't get this far without them, but you won't get any farther if you don't select them.
    2. In our scenario, we select 6x 68 GB data disks, and the final 28 GB disk as Journal & Boot Disk
    3. You’ll also need to select the External Network Port Group and Internal Network Port Group
    4. IsilonSD_HostDriveSelection
  7. After setting up all hosts with the exact same configuration, you'll move on to the Cluster Identity screen
    1. IsilonSD_ClusterIdentity
    2. Cluster Name (this is used in the management interface to name the cluster)
    3. Root Password
    4. Admin Password
    5. Encoding (I’d leave this alone)
    6. Timezone
    7. ESRS Information (only populate this if you have a production license)
  8. Next will be your network settings.
    1. IsilonSD_ClusterNetworking
    2. External Network
    3. Internal Network
    4. SmartConnect
  9. You have a final screen to verify all your settings; look them over carefully (the full deployment will take a while), then click Next.
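Here's the worked minimum-capacity math referenced in step 2, as a quick Python sanity check; the per-disk size and counts match the 3-node layout used in this deployment.

```python
# Minimum cluster capacity for this deployment: only data disks count
# toward capacity; the seventh disk on each host is journal/boot.
GB_PER_DATA_DISK = 64    # per-data-disk size implied by the 1152 GB minimum
DATA_DISKS_PER_NODE = 6  # six data disks per host
NODES = 3

print(GB_PER_DATA_DISK * DATA_DISKS_PER_NODE * NODES)  # 1152 GB
```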

At this point, patience is key; do not interrupt the process. An OVA will be deployed for every node, then all of those unformatted disks will be turned into datastores, then VMDK files will be placed on each datastore; finally, all the nodes will boot and configure themselves. If everything goes as planned, your reward will look like this:

IsilonSD_ClusterCreationSuccess

To verify everything, point your browser to your SmartConnect address, in our case https://Isilon01.lab.vernex.io:8080. If you get a OneFS logon prompt, you should be in business!

IsilonSD_OneFSLogonPrompt
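If you want to script that check, here's a minimal sketch using Python's requests library against the SmartConnect name from this lab; verify=False is only because the cluster still presents a self-signed certificate at this point.

```python
# Quick reachability probe for the OneFS web UI on port 8080.
import requests

resp = requests.get("https://Isilon01.lab.vernex.io:8080",
                    verify=False, timeout=10)  # lab cert is self-signed
print(resp.status_code)  # any HTTP response means the logon page is serving
```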

 

You should also be able to navigate in Windows to your SmartConnect address (recall ours is \\Isilon01.lab.vernex.io\) and see the IFS share. This is the initial administrator share that you'd disable in a production environment. Likewise, in *nix you can NFS attach to Isilon01.lab.vernex.io:/IFS.
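A quick way to confirm both protocol paths without mapping a drive is to probe the SMB and NFS listeners behind the SmartConnect name; a sketch, assuming the standard ports 445 and 2049.

```python
# Probe SMB (445) and NFS (2049) listeners behind SmartConnect.
import socket

for port in (445, 2049):
    with socket.create_connection(("Isilon01.lab.vernex.io", port), timeout=5) as s:
        print(f"port {port} answered from {s.getpeername()[0]}")
```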


IsilonSD – Part 2: Test Platform

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

So… last post I figured out that IsilonSD Edge needs (at least) three separate ESX hosts, each with (at least) 7 independent, local disks. So I cannot deploy this on the nested ESXi VSAN I made, but I really want to get IsilonSD up and running.

Since my previous attempt to wing it didn't go so well, I'm also going to do a little more planning. I need to create ESX hosts that meet the criteria, plus plan out things like the cluster name, network settings, IP addresses, and DNS zones.

For the ESX hosts, my solution is to run nested ESXi (a virtual machine running ESX on top of a physical machine running ESX). This allows me to provide the independent disks, as well as multiple ESX hosts, without all the hardware. It will also help facilitate the networking needs, by making virtual switches to provide the internal and external networks.

To build this test platform, we’ll cover 4 main areas:

  • ESX Hosts
  • External Network
  • Internal Network
  • SmartConnect

ESX Hosts

For testing IsilonSD Edge, I'm going to make four virtual machines and configure them as ESX hosts. Each of these will need four vNICs (two for ESX purposes, two for IsilonSD) and nine hard drives (2 for ESX again, and 7 for IsilonSD). I'm hosting all the hard drives on a single datastore; it happens to be SSD. For physical networking, my host only has a single network card connected, so I've leveraged virtual switches without a network card to simulate a private management network.

A snapshot of my single VM configuration is below:

IsilonSD_NestedESXDiagram1

With the first virtual machine created, I simply clone it three times, so I have four exact replicas. Why four? It will allow a three-node cluster to start; then I can test adding (and removing) a node without data loss, the same as we would with a physical Isilon.

Note the ‘guest OS’ for the virtual machines is VMware ESXi 6.x. This is a nice feature of vSphere to help you keep track of your nested ESXi VMs. Keep in mind, though, that nesting vSphere is NOT supported by VMware; you cannot call and ask for help. That's not a concern here, given I can't call EMC for Isilon either since I'm using the Free and Frictionless downloads. This is not a production-grade configuration by any stretch.

IsilonSD_NestedESXDiagramx4

Once all four virtual machines existed on my physical ESX host, installing ESX was just an ISO attach away.

After installing ESX on all my virtual hosts, I then add them to my existing vCenter as hosts. vCenter doesn’t know these are virtual machines and treats them the same as a physical ESX host.

I've placed these virtual hosts into a vCenter cluster. However, this is only for aesthetic purposes, to keep them organized. I won't enable normal cluster features such as HA and DRS, given Isilon cannot leverage them, nor does it need them. Plus, given there is no shared storage between these hosts, you cannot do standard vMotion (enhanced vMotion is always possible, but that's another blog).

Here you can see those virtual machines with ESX installed masquerading as vSphere hosts:

IsilonSD_NestedESXCluster
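To double-check that every nested host really presents all of its drives to vCenter, a hedged pyVmomi sketch like the following can enumerate each host's local SCSI disks. The hostname and credentials are placeholders, and this assumes the pyvmomi package is installed.

```python
# Count the SCSI disks vCenter sees on each host (placeholder credentials).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip cert validation
si = SmartConnect(host="vcenter.lab.vernex.io",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        disks = [lun for lun in host.config.storageDevice.scsiLun
                 if isinstance(lun, vim.host.ScsiDisk)]
        print(f"{host.name}: {len(disks)} disks")  # expect 9 per nested host
finally:
    Disconnect(si)
```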

I'll leverage the Cluster A you see in the screenshots for the Management Server and InsightIQ server. Cluster A is another cluster of nested ESXi VMs I used for testing VSAN; it also has the Virtual Infrastructure port group available, so all the IsilonSD Edge VMs can be on the same logical segment.

External Network

The Isilon external network is what faces the NAS clients. In my environment, I have a vSphere Distributed Virtual Switch port group called ‘Virtual Infrastructure’ where I place my core systems. This is also where vCenter and the ESX hosts sit, and it's what I'll use for Isilon, as there is no firewall/router between the Isilon and what I'll connect to it.

The Virtual Infrastructure network space is 10.0.0.0/24; I've set aside a range of IP addresses for Isilon in this network:
.50 is the management server
.151-.158 for nodes
.159 for SmartConnect
.149 for InsightIQ
You MUST have a contiguous range for your nodes, but all other IP addresses are personal preference.

For use in the deployment steps:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.0.151
-High IP Range: 10.0.0.158
-MTU: 1500
-Gateway: 10.0.0.1
-DNS Servers: 10.0.0.11
-Search Domains: lab.vernex.io
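Since the node range must be contiguous, a quick check with Python's ipaddress module can confirm the plan above covers enough addresses:

```python
# Verify the low-high node range is large enough for the planned nodes.
import ipaddress

low = ipaddress.ip_address("10.0.0.151")
high = ipaddress.ip_address("10.0.0.158")
count = int(high) - int(low) + 1
print(count)  # 8 contiguous node addresses
assert count >= 3, "need at least one address per node"
```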

Internal Network

The physical Isilon appliances use dedicated InfiniBand switches to interconnect the nodes. This non-blocking, low-latency, high-bandwidth network allows the nodes to communicate with each other to stripe data across nodes, providing hardware resiliency. For IsilonSD Edge, Ethernet over a virtual network is used for this same purpose. If you were deploying this on physical nodes, you could bind the Isilon internal network to anything that all hosts have access to: the same network as vMotion, or a dedicated network if you prefer. Obviously, 10Gb is preferable, and I would recommend diversifying your physical connections using failover or LACP at the vSwitch/VDS level.

For my lab, I have a vSphere DVS for Private Management traffic; this is bound to the virtual switch of my host that has no actual NIC associated with it. It’s a private network on the physical host under my nested ESXi instances. I use this DVS for VSAN traffic already, so I merely created an additional port group for Isilon named PG_Isilon.

Because this is essentially a dedicated, non-routable network, the IP addresses do not matter. But to keep things clean, I use a range set aside for private traffic (10.0.101.0/24) and use the same last octets as my external network.

For use in the deployment:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.101.151
-High IP Range: 10.0.101.158
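For reference, the same idea on a standard vSwitch (rather than the DVS I use here) could be scripted with pyVmomi roughly like this: an uplink-less vSwitch plus a port group for the Isilon internal traffic. The switch and port group names are placeholders, and host is a vim.HostSystem obtained as in the earlier sketch.

```python
# Hedged sketch: create an uplink-less standard vSwitch plus the Isilon
# internal port group on one host. Names are placeholders.
from pyVmomi import vim

def create_internal_network(host: vim.HostSystem) -> None:
    net_sys = host.configManager.networkSystem
    # No physical NIC is bound, matching the private network described above.
    net_sys.AddVirtualSwitch(
        vswitchName="vSwitch_IsilonInternal",
        spec=vim.host.VirtualSwitch.Specification(numPorts=16))
    net_sys.AddPortGroup(
        portgrp=vim.host.PortGroup.Specification(
            name="PG_Isilon", vlanId=0,
            vswitchName="vSwitch_IsilonInternal",
            policy=vim.host.NetworkPolicy()))
```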

SmartConnect

For those not familiar with Isilon, SmartConnect is the technique used for load balancing clients across the multiple nodes in the cluster. Isilon protects data across nodes using custom code, but to interoperate with the vast variety of clients, standard protocols such as SMB, CIFS, and NFS are used. For these there still is no industry-standard method for spreading load across multiple servers (NFS does have the ability for transparent failover, which Isilon supports), and Isilon's approach is a beautiful blend of powerful and simplistic. By delegating a zone in your enterprise DNS for the Isilon cluster to manage, SmartConnect will hand out IP addresses to clients based on different load balancing options appropriate for your workloads, such as round robin (the default) or others like least connections.

To prepare for deploying an IsilonSD Edge cluster, we're going to modify the DNS to extend this delegation to Isilon. Configuring the DNS ahead of time makes the deployment simple. If you're running Windows DNS, here are the quick steps (if you're using BIND or something similar, this is a delegation and should be very similar in your config files).

Launch the Windows/Active Directory DNS Administration Tool

IsilonSD_NewDNSDelgation

Locate the parent zone you wish to extend; here I use lab.vernex.io.

Right-click on the parent zone and select New Delegation.

Enter the name of the delegated zone; this ideally will be your cluster name, for my deployment Isilon01.

IsilonSD_Delgation1

Enter the IP address you intend to assign to Isilon SmartConnect.

IsilonSD_Delgation2

That's it. When the Isilon cluster is deployed and SmartConnect is running, you'll be able to navigate to a CIFS share like \\Isilon01.lab.vernex.io; your DNS will pass the request to the Isilon DNS server, which will reply with the IP address of a node that can accept your workload. This same DNS name works for managing the Isilon cluster as well.

Quick tip: you can CNAME anything else to Isilon01.lab.vernex.io, so I could make File.lab.vernex.io a CNAME pointed at SmartConnect. This is an excellent way to replace multiple file servers with a single Isilon.

For use in the deployment:
-Zone Name: Isilon01.lab.vernex.io
-SmartConnect Service IP: 10.0.0.159
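Once the cluster is up, you can watch SmartConnect's round-robin in action with a few repeated lookups; a sketch using the dnspython package (an assumption, installed via pip install dnspython):

```python
# Resolve the delegated zone name a few times; SmartConnect should rotate
# through node IPs under the default round-robin policy.
import dns.resolver

for _ in range(4):
    answer = dns.resolver.resolve("Isilon01.lab.vernex.io", "A")
    print([rr.address for rr in answer])
```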
