
UnityVSA – Part 4: VVOLs

In part 1 of this series I shared that the move to HTML5 was the most exciting part of Unity. Frankly, this was because of the increased compatibility and performance of Unisphere for Unity, but even more so because it signals EMC shifting to HTML5 across the board (I hope).

If there is one storage feature inside Unity that excites me the most, it has to be VVOL support. So in this post, I'm going to dive into VVOLs on the UnityVSA we set up previously. As VVOLs is a framework, every implementation is going to differ slightly. As such, the switch to VVOLs itself, and the Unity flavor of it, is going to require an adjustment in the architectural and management practices we've adopted over the years.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

First, for those not familiar, a little on VVOLs themselves. VVOL stands for Virtual Volumes; in essence, it simplifies the layers between the virtual machine and the storage array while allowing both vSphere and the storage array to have a deeper understanding of each other. A VVOL itself directly correlates to a virtual disk attached to a virtual machine, as well as the configuration file and swap file every VM has. Enabling VVOLs is VASA (vSphere Storage APIs for Storage Awareness), with which the array can describe the attributes of the storage presented to vSphere. These two core tenets of VVOLs allow the vSphere layer to see deeper into the storage, while the storage layer can see the more granular virtual machine and virtual disk usage.

In practice, this provides a better management framework for the move most in the vSphere realm have been making: creating larger datastores with a naming convention that denotes the storage features (flash, tiering, snapshots, etc.). Where previously vSphere admins needed to learn these conventions to determine where to place VMs, with VVOLs this can be abstracted into Storage Policies, with the vSphere admin simply selecting the appropriate policy during creation.

So new terms and concepts to become familiar with:

  • Storage Provider
    • Configured within vCenter, this is the link to the VASA provider, which in turn shares the storage system details with vSphere.
    • For Unity, the VASA provider is built in and requires no extra configuration on the Unity side.
  • Protocol EndPoint
    • This is the storage side access point that vSphere communicates with; they work across protocols and replace LUNs and mount points.
    • On Unity, Protocol Endpoints are created automatically through the VVOL provisioning process.
  • Storage Container
    • This essentially replaces the LUN, though a storage container is much more than a LUN ever was, as it can contain multiple types of storage on the array, which effectively means it can have multiple LUNs.
    • In vSphere a storage container maps to a VVOL Datastore (shown in the normal datastore section of vSphere).
    • Unity has mirrored this naming in Unisphere, calling the storage container ‘Datastore’.
    • In Unity a Datastore can contain multiple Capability Profiles (which, if you remember, in Unity are synonymous with Pools).

To fully explore and demonstrate the VVOL functionality in Unity, we're going to perform several sets of actions. I'm going to share these in video walkthroughs (with sound), as there are multiple steps.

  1. Create additional pools and capability profiles on the UnityVSA, then configure vSphere and Unity with the appropriate connections for VVOLs
  2. Provision a VVOL Datastore with multiple capability profiles and provision a test virtual machine on the new VVOL Datastore
  3. Create a vSphere Storage Policy and relocate the VM data
  4. Create advanced vSphere Storage Policies, extending the VM to simulate a production database server

 

First some prep work and connecting vSphere and Unity:

  • Add 4 new virtual disks to the UnityVSA VM
  • Create two new Unity pools
    • 1 with 75GB as single tier
    • 1 with 15GB, 25GB and 55GB as multi-tier with FastVP
  • Link Unisphere/Unity to our vCenter
  • Create a Storage Provider link in vSphere to the Unity VASA Provider
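If you'd rather script the sanity check, the Unity REST API can confirm the new pools before you move on. Below is a rough sketch: the hostname and credentials are placeholders for my lab values, and the endpoint and header are my read of the Unisphere Management REST API, so check the guide for your Unity release.

```python
# Rough sketch: confirm the new Unity pools exist via the REST API.
# Assumptions: /api/types/pool/instances and the X-EMC-REST-CLIENT header
# match your Unity/UnityVSA release; names and credentials are placeholders.
import requests

requests.packages.urllib3.disable_warnings()   # lab appliance, self-signed cert

UNITY = "https://unity01.lab.vernex.io"        # placeholder UnityVSA address
AUTH = ("admin", "Password123#")               # default lab credentials

resp = requests.get(
    UNITY + "/api/types/pool/instances",
    params={"fields": "name,sizeTotal,sizeFree"},
    headers={"X-EMC-REST-CLIENT": "true"},
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

for entry in resp.json().get("entries", []):
    pool = entry["content"]
    print(pool["name"], pool.get("sizeTotal"), pool.get("sizeFree"))
```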

 

Next, let’s provision the VVOL Datastore or “Storage Container”:

  • Create a Unity Datastore (aka “Storage Container”) with three Capability Profiles (as such, three pools)
  • Create a vSphere VVOL Datastore
  • Investigate VVOL Datastore attributes
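If you want to confirm the result from the vSphere side without clicking around, a quick pyVmomi loop over the datastore inventory does the trick. This is only a sketch: the vCenter address and credentials are placeholders, and it assumes a Virtual Volumes datastore reports its summary type as VVOL.

```python
# Rough sketch: list datastores with their type so the new Virtual Volumes
# datastore shows up. Assumption: a VVOL datastore reports summary.type == "VVOL".
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab vCenter, self-signed cert
si = SmartConnect(host="vcenter.lab.vernex.io", # placeholder vCenter and credentials
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

for ds in view.view:
    s = ds.summary
    print("{0:30} type={1:6} capacity={2:.0f} GB".format(
        s.name, s.type, s.capacity / (1024.0 ** 3)))

Disconnect(si)
```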

 

Provisioning a virtual machine on the new storage looks the same as traditional datastores, but there is more than meets the eye:

  • Create a new virtual machine on the VVOL Datastore
  • Investigate where the VM files are placed
  • See the VM details inside Unisphere
  • Create a simple Storage Policy in vSphere
  • Adjust the virtual machine storage policy and watch the storage allocation adjustment
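One scripted way to investigate where the VM files landed is to read the VM's file layout through pyVmomi. Again just a sketch with placeholder names; it lists every file backing the VM, and the bracketed datastore prefix on each path tells you where it lives.

```python
# Rough sketch: print every file backing the test VM (config, swap, VMDKs);
# the bracketed prefix of each path (e.g. "[VVOL-DS01] ...") is the datastore.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.vernex.io",   # placeholder vCenter and credentials
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "vvol-test-vm")   # placeholder VM name

for f in vm.layoutEx.file:
    print("{0:15} {1:>14} bytes  {2}".format(f.type, f.size, f.name))

Disconnect(si)
```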

 

Now let's consider more advanced usage of VVOLs. With the ability to create custom tags in Unisphere Capability Profiles, we have an unlimited mechanism to describe the storage in our own words. You could use these tags to create application-specific pools, and thus vSphere Storage Policies for admins to target VMs related to an application. You could also use tags for tiers (Web, App, DB), or, in the example below, we're going to create vSphere Storage Policies and Unity capability tags to partition a database server into Boot, DB and Backup storage types.

  • Modify our three Capability Profiles to add tags: Boot, DB and Backup.
  • Create vSphere Storage Policies for each of these tags.
  • Adjust the boot files of our test VM to leverage the Boot Storage Policy
  • Add additional drives to our test VM, leveraging our DB and Backup Storage Policies; investigate where these files were placed

 

Hopefully now you not only have a better understanding of how to set up and configure VVOLs on EMC Unity storage, but also a deeper understanding of the VVOL technology in general. This framework opens brand new doors in your management practices; imagine a large Unity array with multiple pools and capabilities all being provisioned through one Storage Container and VVOL Datastore, leveraging Storage Policies to manage your data placement rather than carving up numerous LUNs.

With the flexibility of Storage Policies, you can further inform the administrators creating and managing virtual servers on what storage characteristics are available. If you have multiple arrays that support VVOLs and/or VSAN, your policies can work across arrays and even vendors. This abstraction allows further consistency inside vSphere, streamlining management and operations tasks.

You can see how, over time, this technology has advantages over the traditional methods we've been using for virtual storage provisioning. However, before you start making plans to buy a new Unity array and replace all your vSphere storage with VVOLs, know that, as with any new technology, there are still some limitations. Features like array-based replication, snapshots, even quiescing VMs are all lagging a bit behind the VVOL release, and their impact is highly dependent on your environment and usage patterns. I expect quick enhancements in this area, so research the current state and talk with your VMware and EMC reps/partners.

By | May 27th, 2016|EMC, Home Lab, Storage, Train Yourself|2 Comments

UnityVSA – Part 2: Deploy

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

So, to start this off let's go to the EMC Software Download page, navigating to UnityVSA to get the download. You'll receive a login prompt; if you have not registered for an EMC account previously, you'll need one to access the download. Once logged in, you'll start downloading the OVA file for UnityVSA.

Yes, the last time I went immediately to downloading OVAs without reading the manuals, my installation failed miserably. But I'm going to try an installation unassisted every time; maybe I'm a glutton for punishment, but I choose to be optimistic that installations should be intuitive, so I prefer trying unaided first.

That's just me; you might want to check out the Installation Guide and FAQs. Here are the quick details though... in order to run the VSA you'll need to be able to run a vSphere virtual machine with 2 cores (2GHz+ recommended), 12GB of RAM, and 85GB of hard drive space, plus however much extra space you want to give the VSA to present as storage. The VSA does not provide any RAID protection inside the virtual array, so if you want to avoid data loss from hardware failure, ensure the storage underneath the VSA is RAID protected (or VSAN, or a SAN itself). You're also going to want at least two IP addresses set aside; I'd recommend about five for a full configuration. Depending on your network environment, you might want to have multiple networks available in vSphere. The UnityVSA has multiple virtual network cards to provide ethernet redundancy. One of those virtual NICs is for management traffic, which you could place on a separate virtual switch (and thus VLAN) if your topology supports this. My lab is basically flat, so I'll be binding all virtual NICs to one virtual network.
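If you want to sanity-check a host against those numbers before deploying, vCenter already knows most of them; here's a rough pyVmomi sketch (host name, vCenter address and credentials are placeholders) that compares cores, memory and free datastore space to the VSA requirements.

```python
# Rough sketch: compare an ESXi host against the UnityVSA requirements quoted
# above (2 cores, 12GB RAM, 85GB of disk plus whatever you present as storage).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.vernex.io",   # placeholder vCenter and credentials
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.lab.vernex.io")  # placeholder host

cores = host.summary.hardware.numCpuCores
mem_gb = host.summary.hardware.memorySize / (1024.0 ** 3)
free_gb = max(ds.summary.freeSpace for ds in host.datastore) / (1024.0 ** 3)

print("cores={0} (need 2+)  memory={1:.0f} GB (need 12+)  "
      "largest free datastore={2:.0f} GB (need 85+ plus VSA storage)".format(
          cores, mem_gb, free_gb))

Disconnect(si)
```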

With the downloaded OVA in hand, the initial deployment is indeed incredibly easy. If you're familiar with OVA deployments, this won't be very exciting. If you're not familiar with deploying an OVA, you can follow along in the video below. I'm going to be deploying UnityVSA on my nested ESXi cluster, running VSAN 6.2. You could install directly on an ESXi host, or even VMware Workstation or Fusion; but that's another post.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In vCenter, right-click the cluster and select “Deploy OVF Template”.
  2. Enter the location where you saved the UnityVSA OVA file.
  3. Select the check box “Accept extra configuration details”; this OVA contains resource reservations, which we'll touch on below.
  4. Provide a name; this will be the virtual machine name, not the actual name of the Unity array.
  5. Select the datastore to house the UnityVSA; in my case I'm putting this on VMware Virtual SAN.
  6. Select which networks you wish the UnityVSA bound to.
  7. Configure the “System Name”; this is the actual name of the VSA, and I'd recommend matching the virtual machine name you entered above.
  8. Configure the networking; I'm a fan of leveraging DHCP for virtual appliance deployments when possible, as it eliminates some troubleshooting.
  9. I do NOT recommend checking the box “Power on after deployment”; the OVA deployment does not include any storage for provisioning pools, and for the initial wizard to work, you'll want to add some additional virtual hard drives.

 

UnityVSA_ResourceReservations

 

A couple notes on the virtual machine itself to be aware of. EMC’s intent with UnityVSA is to provide a production-grade virtual storage array. To ensure adequate performance, when the OVA is deployed the resulting virtual machine has reservations in place on both CPU and memory. These reservations will ensure the ESX host/cluster underneath the VSA can deliver the appropriate amount of resources for the virtual array to function. If you are planning on running UnityVSA in a supported, production environment, I recommend leaving these reservations. Even if you have enough resources to make the reservations unnecessary, I’m sure EMC Support will take issue with you removing them should you need to open a case. If this is just for testing and you are tight on resources, you can remove these reservations.
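For reference, you can read those reservations straight off the deployed VM; a small pyVmomi sketch follows (the VM name is a placeholder for whatever you called yours).

```python
# Rough sketch: show the CPU/memory reservations the UnityVSA OVA applied.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.vernex.io",   # placeholder vCenter and credentials
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "UnityVSA01")    # placeholder VM name

print("CPU reservation (MHz):", vm.config.cpuAllocation.reservation)
print("Memory reservation (MB):", vm.config.memoryAllocation.reservation)

Disconnect(si)
```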


Next, let’s add that extra hard drive I talked about and power this up.

 

*Note: there is no sound; the video is just to follow along with the steps.

UnityVSA-InitialConfigurationWizard

  1. In vCenter, right click the UnityVSA and Edit Settings
  2. At the bottom under New Device, use the drop-down box to select “New Hard Disk” and press “Add”
  3. Enter the size; I'm making a VMDK with 250GB. If you use the arrow to expand the hard drive settings, you can place it on a separate datastore; if you want to use FAST VP to tier between datastores, this is where you'll want to customize the hard drive. I'm putting this new hard drive with the VM on VSAN.
  4. Once the reconfigure task is complete, right click your VM again and select “Power On”
  5. I highly recommend opening the console to the VM to watch the boot up, especially on first boot as the UnityVSA will configure itself.
  6. When the console shows a Linux prompt, walk away. Seriously, walk away; get a coffee, take a smoke break, or work on something else. More is happening behind the scenes at this point and the VSA is not ready. You'll only get frustrated, and this watched pot will not seem to boil.
  7. Ok, are you back? Close the console and go back to the virtual machine; ensure VMware Tools are running and the IP address shows up. If you chose DHCP as I did, you'll need to note the assigned IP address. If you don't see the IP address, the VSA is still not ready (see, I told you to walk away).
  8. Once the IP appears in vCenter, you're safe to open a browser over SSL to that address.
  9. You should receive a logon prompt, the default login is:
    1. user: admin
    2. pw: Password123#
  10. Did you log in? Do you see an “Initial Configuration” wizard? Congratulations then, the VSA is up.
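If walking away isn't your style, you can let a script do the waiting. Here's a rough pyVmomi sketch that polls the VM until VMware Tools report running and an IP address appears; names and credentials are placeholders for my lab.

```python
# Rough sketch: poll the VM until VMware Tools report running and an IP shows
# up -- the same "is it ready yet?" check as steps 6-8, minus the watched pot.
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.vernex.io",   # placeholder vCenter and credentials
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "UnityVSA01")    # placeholder VM name

while not (vm.guest.toolsRunningStatus == "guestToolsRunning"
           and vm.guest.ipAddress):
    print("VSA still booting, checking again in 60 seconds...")
    time.sleep(60)

print("VSA is up, browse to https://" + vm.guest.ipAddress)

Disconnect(si)
```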

In the next post in this series, we'll leverage this wizard to fast-track configuring the UnityVSA; but if you want, you can cancel the wizard at this point and configure the array manually, or restart the wizard later through preferences.

For now, as evidenced by this post, the deployment of UnityVSA was straightforward and exactly what we've all come to expect of virtual appliances. From download to up and running, user interaction time was less than 2 minutes, with overall deploy time around 30 minutes.


By | May 24th, 2016|EMC, Home Lab, Storage|3 Comments

vSphere HTML5 Web Client – Fling

Last week, a small light appeared at the end of the dim tunnel called the vSphere Web Client. If you're a vSphere engineer, or even just a user managing applications or servers, you know what I'm talking about.

The vSphere Web Client has been problematic since day one. Some days it feels like you need an incantation to get the web client working, with Adobe Flash, browser security, and custom plug-ins. Even when you do, the slow response time of the web client directly impacts your productivity. Then there has been the very slow migration from the C# client, with components such as vCenter Update Manager just recently being available in the web client. Plus, only with the recent fling activity around the ESXi Embedded Host Client can we see a world where we don’t need the C# client at all.

Speaking of flings: the vSphere HTML5 Web Client is currently a VMware Fling, a technology preview built by VMware engineers with the intent that the community explore and test it, providing feedback. Often flings make it into the product in a future release, though that is largely dependent on the feedback from the community. This one needs our feedback!

If you are running vSphere 6, I highly encourage you to install the vSphere HTML5 Web Client. Currently, the deployment is through a vApp with the web server hosting the interface on a separate virtual machine from any of your vCenter Servers. This means there is very low risk to your environment, as you aren’t modifying your existing vCenter, rather extending it through the SSO engine to a separate web server.

At first glance, the instructions for the fling appear a little involved, though trust me they are not. All told it took me about 10 minutes to set up the H5 Web Client. The instructions are very detailed, so I won't repeat them in depth; though I captured my deployment if you are interested.

I have both VCSA and vCenter for Windows in my lab; the fling will work with either (and will support an existing Enhanced Linked Mode setup like mine). I went the Windows route as that's where my Platform Services Controller and SSO server live.

  1. Download the OVA and the batch file
  2. Execute the batch file, which generates three config files
  3. Deploy the vSphere HTML5 Web Client OVA
  4. Once the new vApp is online, follow the instructions to create three new directories
  5. Using a tool such as WinSCP, copy the three files from your vCenter server to the vSphere Web Client server
  6. Set the NTP server
  7. Start the web server
  8. Browse vCenter in beautiful native HTML5: no plug-ins, no Flash, no fiddling with your browser security. (If you are not already logged in to vSphere, the H5 site will redirect you to SSO for authentication)
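If you'd rather script step 5 than drive WinSCP by hand, an SFTP copy works just as well. The sketch below uses paramiko; the file names, destination directory and credentials are placeholders, so substitute the names the batch file actually generated and the directories the fling instructions had you create.

```python
# Rough sketch: push the three generated config files to the H5 web client
# appliance over SFTP (a scripted stand-in for WinSCP). Everything below is a
# placeholder -- use the real file names and directories from the fling docs.
import paramiko

APPLIANCE = "h5client.lab.vernex.io"                                     # placeholder address
FILES = ["webclient.properties", "store.properties", "ds.properties"]   # placeholder names
DEST_DIR = "/etc/vmware/vsphere-client/config/"                          # placeholder directory

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(APPLIANCE, username="root", password="changeme")             # placeholder credentials
sftp = ssh.open_sftp()

for name in FILES:
    sftp.put(name, DEST_DIR + name)    # local file -> appliance
    print("copied", name)

sftp.close()
ssh.close()
```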

 

*Note: there is no sound; the video is just to follow along with the steps. The video is not sped up; this is the actual deployment time.

Again, this is the very first preview and as such the functionality is limited. Here are just a few screenshots while you're waiting on the download.


By | April 6th, 2016|Home Lab, VMWare|0 Comments

IsilonSD – Part 4: Adding & Removing Nodes

One of the core functions of the Isilon product is scaling. In its physical incarnation you can scale up to 144 nodes with over 50 petabytes in a single namespace. Those limits are a product of the hardware; as drives and switches get bigger, so can Isilon scale bigger. Even so, you can still start with only three nodes. When nodes are added to the cluster, it increases the storage and performance; existing data is re-balanced across all the nodes after the addition. Likewise, you can remove a node, proactively migrating the data from the leaving node without sacrificing data protection; an excellent way to lifecycle-replace your hardware. This tenet of Isilon, coupled with non-disruptive software upgrades, means there is no pre-set lifespan to an Isilon cluster. With SmartPools' ability to tier storage by node type, you can leverage older nodes for less frequently accessed data, maximizing your investment.

IsilonSD Edge has that same ability, though slightly muted given you’re limited to six nodes and 36TB (for now hopefully). I wanted to walk through the exercise, to see how node changes are accomplished in the virtual version, which is different from the physical version.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

 

Adding a Node

Adding a node to the IsilonSD Edge cluster is very easy, as long as you have an ESX host with all the criteria ready. If you recall from building our test platform, we built a fourth node for this very purpose.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. Click the + button
  5. The Management Server will again search for candidate hosts; if any are found, it will allow you to select them.
  6. Again select the disks and their roles, and then proceed; all the cluster and networking information already exists.

 

Just like creating the cluster, the OVA will be deployed, data stores created (if you used raw disks) and the IsilonSD node brought online. This time the node will be added into the cluster, which will start a rebalance effort to re-stripe the data across all the nodes, including the new one.
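If you want to keep an eye on that rebalance without babysitting the GUI, `isi status` over SSH gives a quick read of the cluster while it re-stripes. A rough sketch (lab cluster name, placeholder password):

```python
# Rough sketch: watch the restripe after adding a node by running `isi status`
# over SSH every few minutes. Cluster address and credentials are placeholders.
import time
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("Isilon01.lab.vernex.io", username="root", password="your-root-password")

for _ in range(12):                         # roughly an hour of checks
    stdin, stdout, stderr = ssh.exec_command("isi status")
    print(stdout.read().decode())
    time.sleep(300)

ssh.close()
```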

Keep in mind, IsilonSD Edge can scale up to six nodes, so if you start with three you can double your space.

Removing a Node

Removing a node is just as straightforward as adding one, as long as you have four or more nodes. This action can take a very long time, depending on how much data you have, as all the data must be re-striped before the node can be removed from the cluster.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. Select the node to evict (in our case, node 4)
  5. Click the – (minus) button.
  6. Double check the node and select Yes
  7. Wait until the node Status turns to StopFail

 

During the smartfail operation, should you log onto the IsilonSD Edge administrator GUI, you'll notice in the Cluster Overview that the node you are removing has a warning light next to it. This is also a good summary screen to gauge the progress of the smartfail, by comparing the % column of the node you're evicting to the other nodes. In the picture below the node we chose to remove is now <1% used, while the other 3 nodes are at 4% or 5%, meaning we're almost there.

IsilonSD_RemoveNodeClusterOverview

 

 

Drilling into that node is the best way to understand why it has a warning; there you will see the message that the node is being smartfailed.

IsilonSD_RemoveNodeSmartFailMessage

When the smartfail is complete, you still have some cleanup activities.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. The node you previously set to evict should be red in the Status
  5. Select the node, then click on the trash icon.
  6. This will delete the virtual machine and associated VMDKs

 

If you provided IsilonSD unformatted disks, the datastores the wizard created will still exist and you might want to clean them up. If you want to re-add the node, you'll need to wait a while, or restart the vCenter Inventory Service, as it takes a bit to update.

By | March 31st, 2016|EMC, Home Lab, Storage, VMWare|1 Comment

IsilonSD – Part 3: Deploy a Cluster (Successfully)

With proper planning and setup of the prerequisites (see Part 2), the actual deployment of the IsilonSD Edge cluster is fairly straight-forward. If you experience issues during this section (see Part 1) it’s very likely because you don’t have the proper configuration, so revisit the previous steps. That said, let’s dive in and make a cluster.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

At a high level, you're going to do a few things:

  1. Deploy the IsilonSD Edge Management Server
  2. Setup IsilonSD Edge Management Server Password
  3. Complete IsilonSD Edge Management Server Boot-up
  4. Configure Management Server vSphere Link & Upload Isilon Node Template
  5. Open the IsilonSD Edge vSphere Web Client Plug-in
  6. Deploy the IsilonSD Edge Cluster

Here’s the detail.

Deploy the IsilonSD Edge Management Server

*Note: there is no sound; the video is just to follow along with the steps.
This is your standard OVA deployment; as long as you're using the “EMC_IsilonSD_Edge_MS_x.x.x.ova” file from the download and providing an IP address accessible to vCenter, you can deploy this just about anywhere.

Follow along in the video on the left if you’re not familiar with the OVA process.

Once the OVA deployment launches, ensure you find the deployment task in the vSphere Task Pane and keep an eye on the progress.

Setup IsilonSD Edge Management Server password

IsilonSD_ManagementBootPasswordChange

Once the OVA is complete and the virtual machine is booting up, you'll need to open the console and watch the initialization process. Generally I recommend this with any OVA deployment, as you'll see if there are any errors as the first boot configuration occurs. For the IsilonSD Edge Management Appliance, it's required in order to set the administrator password.

Complete IsilonSD Edge Management Server Boot-up

IsilonSD_ManagementConsoleBlueScreen

After entering your password, the server will continue its first boot process and configuration. When you reach this screen (what I call the blue screen of start) you're ready to proceed. Open a browser and navigate to the URL provided on the blue screen next to IsilonSD Management Server.

 

Configure Management Server vSphere Link & Upload Isilon Node Template

When you navigate to the URL provided by the blue screen, after accepting the unauthorized certificate, you'll be prompted for logon credentials. This is NOT the password you provided during the boot-up. I failed to read the documentation and assumed it was, resulting in much frustration.

Logon using:
username: admin
password: sunshine

After successful logon, and accepting the EULA, you have just a couple steps, which you can follow along in the video on the right:

  1. Adjust the admin password
  2. Register your vCenter
  3. Upload the Isilon Virtual Node template
    1. This is “EMC_IsilonSD_Edge_x.x.x.ova” in your download

*Note: there is no sound; the video is just to follow along with the steps.

 

Open the IsilonSD Edge vSphere Web Client Plug-in

Wait for the OVA template to upload; this may take up to ten minutes depending on your environment. Once complete, you'll be ready to move on to actually creating the IsilonSD cluster through the vSphere Web Client plug-in that was installed by the Management Server when you registered vCenter. Ensure you close out all the browser windows and open a new session to your vSphere Web Client.

IsilonSD_vCenterDatacenter

Select the datacenter where you deployed the management server (not the cluster, again where I lost some time).

 

IsilonSD_ManageTab

In the right-hand pane of vSphere, under the Manage tab, you should now see two new sub-tabs, Create IsilonSD Cluster and Manage IsilonSD Cluster.

IsilonSD_vCenterIsilonTabs

 

Deploy the IsilonSD Edge Cluster

*Note: there is no sound; the video is just to follow along with the steps.

Follow along in the video above:

  1. Check the box next to your license
  2. Adjust your node resources
    1. For my deployment, I started with 3 nodes, adjusting the Cluster Capacity from the default 2 TB to the minimum 1152 GB (64 GB minimum drive size × 6 data drives × 3 nodes; see the quick math after this list)
  3. Clicking Next on the Requirements tab will search the virtual datacenter in your vCenter for ESX hosts that can satisfy the requirements you provided, including having those independent drives that meet the capacity requirement
    1. Should the process fail to find the necessary hosts, you'll see a message like this. Don't get discouraged; look over the requirements again to ensure everything is in order, and try restarting the Inventory Service too.
    2. IsilonSD_NoQualifiedHosts
  4. When the search for hosts is successful, you’ll see a list of hosts available to select, such as
    1. IsilonSD_HostSelection
  5. Next, select all the hosts you wish to add to the cluster (if you prepared more than 3, consider selecting only 3 now, as in the next post we'll walk through adding an additional node).
  6. For each host, you’ll need to select the disks and their associated role (Data Disk, Journal, Boot Disk or Journal & Boot Disk).
    1. Remember, you need at LEAST 6 data disks; you won't get this far without them, but you won't get farther if you don't select them.
    2. In our scenario, we selected 6x 68GB data disks, and a final 28GB disk for Journal & Boot Disk
    3. You’ll also need to select the External Network Port Group and Internal Network Port Group
    4. IsilonSD_HostDriveSelection
  7. After setting up all hosts with the exact same configuration, you'll move on to the Cluster Identity settings
    1. IsilonSD_ClusterIdentity
    2. Cluster Name (this is used in the management interface to name the cluster)
    3. Root Password
    4. Admin Password
    5. Encoding (I’d leave this alone)
    6. Timezone
    7. ESRS Information (only populate this if you have a production license)
  8. Next will be your network settings.
    1. IsilonSD_ClusterNetworking
    2. External Network
    3. Internal Network
    4. SmartConnect
  9. You'll have a final screen to verify all your settings; look them over (the full deployment will take a while) and click Next.
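As promised in step 2, here's the quick math behind that 1152 GB minimum. The drive size and counts reflect my understanding of the IsilonSD Edge minimums, so double-check them against the current documentation.

```python
# The quick math behind the 1152 GB minimum from step 2: smallest supported
# data drive x data drives per node x nodes (the 7th disk is journal/boot,
# so it doesn't count toward capacity).
min_drive_gb = 64
data_drives_per_node = 6
nodes = 3

print(min_drive_gb * data_drives_per_node * nodes, "GB")   # -> 1152 GB
```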

At this point, patience is key; do not interrupt the process. An OVA will be deployed for every node, then all of those unformatted disks will be turned into datastores, then VMDK files put on each datastore; finally all the nodes will boot and configure themselves. If everything goes as planned, your reward will look like this:

IsilonSD_ClusterCreationSuccess

To verify everything, point your browser to your SmartConnect name on port 8080, in our case https://Isilon01.lab.vernex.io:8080; if you get a OneFS logon prompt, you should be in business!
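If you want to script that check, a one-shot HTTP request against the SmartConnect name on port 8080 confirms the OneFS web interface is answering; a minimal sketch:

```python
# Rough sketch: confirm the OneFS web interface answers on port 8080 at the
# SmartConnect name before you try to log on (self-signed cert, so verify=False).
import requests

requests.packages.urllib3.disable_warnings()

url = "https://Isilon01.lab.vernex.io:8080"
resp = requests.get(url, verify=False, timeout=10)
print(url, "->", resp.status_code)    # any HTTP response means the WebUI is up
```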

IsilonSD_OneFSLogonPrompt

 

You should also be able to navigate in Windows to your SmartConnect address (recall ours is \\Isilon01.lab.vernex.io\) and see the IFS share. This is the initial administrator share that in a production environment you'd disable. Likewise, on *nix you can NFS-mount Isilon01.lab.vernex.io:/ifs.


By | March 30th, 2016|EMC, Home Lab, Storage, VMWare|1 Comment

IsilonSD – Part 2: Test Platform

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

So… last post I figured out IsilonSD Edge needs (at least) three separate ESX hosts, with (at least) 7 independent, local disks. So I cannot deploy this on the nested ESXi VSAN I made, but I really want to get IsilonSD up and running.

Since my previous attempt to wing it didn't go so well, I'm also going to do a little more planning. I need to create ESX hosts that meet the criteria, plus plan out things like cluster name, network settings, IP addresses and DNS zones.

For the ESX hosts, my solution is to run nested ESXi (a virtual machine running ESX on top of a physical machine running ESX). This allows me to provide the independent disks, as well as multiple ESX hosts, without all the hardware. It will also help facilitate the networking needs, by making virtual switches to provide the internal and external networks.

To build this test platform, we’ll cover 4 main areas:

  • ESX Hosts
  • External Network
  • Internal Network
  • SmartConnect

ESX Hosts

For testing IsilonSD Edge, I'm going to make four virtual machines and configure them as ESX hosts. Each of these will need four vNICs (two for ESX purposes, two for IsilonSD) and nine hard drives (2 for ESX again, and 7 for IsilonSD). I'm hosting all the hard drives in a single datastore; it happens to be SSD. For physical networking, my host only has a single network card connected, so I've leveraged virtual switches without a network card to simulate a private management network.

A snapshot of my single VM configuration is below:

IsilonSD_NestedESXDiagram1

With the first virtual machine created, now I simply clone it three times, so I have four exact replicas. Why four? It will allow a three-node cluster to start; then I can test adding (and removing) a node without data loss, the same as we would with a physical Isilon.

Note the ‘guest OS’ for the virtual machines is VMware ESXi 6.x. This is a nice feature of vSphere to help you keep track of your nested ESXi VMs. Though keep in mind, nesting vSphere is NOT supported by VMware; you cannot call and ask for help. Not a concern here given I can't call EMC for Isilon either since I'm using Free and Frictionless downloads. This is not a production-grade configuration by any stretch.

IsilonSD_NestedESXDiagramx4

Once all four virtual machines existed on my physical ESX host, installing ESX is just an ISO attach away.

After installing ESX on all my virtual hosts, I then add them to my existing vCenter as hosts. vCenter doesn’t know these are virtual machines and treats them the same as a physical ESX host.

I've placed these virtual hosts into a vCenter cluster. However, this is only for aesthetic purposes, to keep them organized. I won't enable normal cluster features such as HA and DRS, given Isilon cannot leverage them, nor does it need them. Plus, given there is no shared storage between these hosts, you cannot do standard vMotion (enhanced vMotion is always possible, but that's another blog).

Here you can see those virtual machines with ESX installed masquerading as vSphere hosts:

IsilonSD_NestedESXCluster

I’ll leverage Cluster A you see in the screenshots for the Management Server and InsightIQ server. Cluster A is another cluster of nested ESXi VMs I used for testing VSAN; it also has the Virtual Infrastructure port group available so all the IsilonSD Edge VMs can be on the same logical segment.

External Network

The Isilon External network is what faces the NAS clients. In my environment, I have a vSphere Distributed Virtual Switch port group called ‘Virtual Infrastructure’ where I place my core systems. This is also where vCenter and the ESX hosts sit, and it's what I'll use for Isilon, as there is no firewall/router between the Isilon and what I'll connect to it.

Virtual Infrastructure network space is 10.0.0.0/24; I’ve set aside a range of IP addresses for Isilon in this network.
.50 is the management server
.151-158 for nodes
.159 for SmartConnect
.149 for InsightIQ.
You MUST have contiguous ranges for your nodes, but all other IP addresses are personal preference.

For use in the deployment steps:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.0.151
-High IP Range: 10.0.0.158
-MTU: 1500
-Gateway: 10.0.0.1
-DNS Servers: 10.0.0.11
-Search Domains: lab.vernex.io
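Before handing these addresses to the deployment wizard, it's worth confirming nothing is already squatting on them. A quick, rough ping sweep of the range set aside above:

```python
# Rough sketch: ping the addresses set aside above (.50, .149, .151-.159) to
# make sure nothing is already using them before the deployment wizard does.
# Assumes a Linux/macOS ping (-c); adjust the flags for Windows (-n).
import subprocess

for last_octet in [50, 149] + list(range(151, 160)):
    ip = "10.0.0.{0}".format(last_octet)
    result = subprocess.call(["ping", "-c", "1", "-W", "1", ip],
                             stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL)
    print("{0:12} {1}".format(ip, "IN USE" if result == 0 else "free"))
```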

Internal Network

The physical Isilon appliances use dedicated InfiniBand switches to interconnect the nodes. This non-blocking, low-latency, high-bandwidth network allows the nodes to communicate with each other to stripe data across nodes, providing hardware resiliency. For IsilonSD Edge, Ethernet over a virtual network is used for this same purpose. If you were deploying this on physical nodes, you could bind the Isilon internal network to anything that all hosts have access to: the same network as vMotion, or a dedicated network if you prefer. Obviously, 10Gb is preferable, and I would recommend diversifying your physical connections using failover or LACP at the vSwitch/VDS level.

For my lab, I have a vSphere DVS for Private Management traffic; this is bound to the virtual switch of my host that has no actual NIC associated with it. It’s a private network on the physical host under my nested ESXi instances. I use this DVS for VSAN traffic already, so I merely created an additional port group for Isilon named PG_Isilon.

Because this is essentially a dedicated, non-routable network, the IP addresses do not matter. But to keep things clean, I use a range set aside for private traffic (10.0.101.0/24) and use the same last octet as on my external network.

For use in the deployment:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.101.151
-High IP Range: 10.0.101.158

SmartConnect

For those not familiar with Isilon, SmartConnect is the technique used for load balancing clients across the multiple nodes in the cluster. Isilon protects the data across nodes using custom code, but to interoperate with the vast variety of clients, standard protocols such as SMB/CIFS and NFS are used. For these, there still is no industry-standard method for spreading load across multiple servers (NFS does have the ability for transparent failover, which Isilon supports), and the approach here is a beautiful blend of powerful and simplistic. By delegating a zone in your enterprise DNS for the Isilon cluster to manage, SmartConnect will hand out IP addresses to clients based on different load-balancing options appropriate for your workloads, such as round robin (the default) or others like least connections.

To prepare for deploying an IsilonSD Edge cluster, we're going to modify the DNS to extend this delegation to Isilon. Configuring the DNS ahead of time makes the deployment simple. If you're running Windows DNS, here are the quick steps (if you're using BIND or something similar, this is a delegation and should be very similar in your config files).

Launch the Windows/Active Directory DNS Administration Tool

IsilonSD_NewDNSDelgation

Locate the parent zone you wish to extend; here I use lab.vernex.io.

Right click on the parent zone and select New Delegation


Enter the name of the delegated zone; this ideally will be your cluster name, for my deployment Isilon01.

IsilonSD_Delgation1


Enter the IP address you intend to assign to Isilon SmartConnect

IsilonSD_Delgation2


That's it. When the Isilon cluster is deployed and SmartConnect is running, you'll be able to navigate to a CIFS share like \\Isilon01.lab.vernex.io; your DNS will pass this request to the Isilon DNS server, which will reply with the IP address of a node that can accept your workload. This same DNS name works for managing the Isilon cluster as well.

Quick tip: you can CNAME anything else to Isilon01.lab.vernex.io, so I could make File.lab.vernex.io a CNAME pointed at SmartConnect. This is an excellent way to replace multiple file servers with a single Isilon.

For use in the deployment:
-Zone Name: Isilon01.lab.vernex.io
-SmartConnect Service IP: 10.0.0.159
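Once the cluster is up, you can verify the delegation is doing its job by resolving the SmartConnect name a few times and watching the answers rotate through the node addresses. A small sketch using dnspython:

```python
# Rough sketch: verify the delegation by resolving the SmartConnect name a few
# times; with the default round-robin policy the answers should rotate through
# the node IPs (.151-.158). Requires dnspython; on older versions use
# dns.resolver.query() instead of resolve().
import dns.resolver

for _ in range(5):
    answer = dns.resolver.resolve("Isilon01.lab.vernex.io", "A")
    print([rr.address for rr in answer])
```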

 

By | March 28th, 2016|EMC, Home Lab, Storage, VMWare|1 Comment