Monthly Archives: March 2016

IsilonSD – Part 4: Adding & Removing Nodes

One of the core functions of the Isilon product is scaling. In its physical incarnation you can scale up to 144 nodes and over 50 petabytes in a single namespace. Those limits are set by the hardware; as drives and switches get bigger, so can an Isilon cluster. Even so, you can still start with only three nodes. When nodes are added to the cluster, storage and performance increase, and existing data is re-balanced across all the nodes after the addition. Likewise, you can remove a node, proactively migrating the data off the departing node without sacrificing data protection; an excellent way to lifecycle-replace your hardware. This tenet of Isilon, coupled with non-disruptive software upgrades, means there is no pre-set life-span to an Isilon cluster. With SmartPools’ ability to tier storage by node type, you can leverage older nodes for less frequently accessed data, maximizing your investment.

IsilonSD Edge has that same ability, though slightly muted given you’re limited to six nodes and 36TB (for now, hopefully). I wanted to walk through the exercise to see how node changes are accomplished in the virtual version, which differs from the physical version.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

 

Adding a Node

Adding a node to the IsilonSD Edge cluster is very easy, as long as you have an ESX host ready that meets all the criteria. If you recall from building our test platform, we built a fourth node for this very purpose.

*Note: the video has no sound; it’s provided so you can follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. Click the + button
  5. The Management Server will again search for candidates; if any are found, it will allow you to select them.
  6. Again, select the disks and their roles, then proceed; all the cluster and networking information already exists.

 

Just like creating the cluster, the OVA will be deployed, datastores created (if you used raw disks), and the IsilonSD node brought online. This time the node will be added into the cluster, which kicks off a rebalance to re-stripe the data across all the nodes, including the new one.
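
If you’d rather watch the restripe from the command line than from the GUI, you can SSH to the cluster and run isi status, which shows per-node capacity and any running jobs. Here’s a minimal Python sketch using paramiko; the host name and credentials are placeholders from my lab, and it assumes SSH access to the cluster is enabled (it is by default).

    # Sketch: poll `isi status` over SSH to watch the restripe progress.
    # Host and credentials are lab placeholders.
    import time
    import paramiko

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab only
    ssh.connect("Isilon01.lab.vernex.io", username="root", password="...")

    for _ in range(10):                        # check once a minute for 10 minutes
        _, stdout, _ = ssh.exec_command("isi status")
        print(stdout.read().decode())          # node usage plus active job status
        time.sleep(60)

    ssh.close()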

Keep in mind, IsilonSD Edge can scale up to six nodes, so if you start with three you can double your space.

Removing a Node

Removing a node is just as straightforward as adding one, as long as you have four or more nodes. The action can take a very long time, depending on how much data you have, because all the data must be re-striped before the node can be removed from the cluster.

*Note: the video has no sound; it’s provided so you can follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. Select the node to evict (in our case, node 4)
  5. Click the – (minus) button.
  6. Double-check the node and select Yes
  7. Wait until the node’s Status turns to StopFail

 

During the smartfail operation, should you log onto the IsilonSD Edge administrator GUI, you’ll notice in the Cluster Overview that the node you are removing has a warning light next to it. This is also a good summary screen for gauging the progress of the smartfail, by comparing the % column of the node you’re evicting to the other nodes. In the picture below the node we chose to remove is now <1% used, while the other three nodes are at 4% or 5%, meaning we’re almost there.

IsilonSD_RemoveNodeClusterOverview

Drilling into that node is the best way to understand why it has a warning; there you will see the message that the node is being smartfailed.

IsilonSD_RemoveNodeSmartFailMessage

When the smartfail is complete, you still have some cleanup activities.

*Note: the video has no sound; it’s provided so you can follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. The node you previously set to evict should show a red Status
  5. Select the node, then click the trash icon
  6. This will delete the virtual machine and its associated VMDKs

 

If you provided IsilonSD unformatted disks, the datastores the wizard created will still exist, and you may want to clean them up. If you want to re-add the node, you’ll need to wait a while, or restart the vCenter Inventory Service, as it takes a bit to update.

By | March 31st, 2016 | EMC, Home Lab, Storage, VMWare | 1 Comment

IsilonSD – Part 3: Deploy a Cluster (Successfully)

With proper planning and setup of the prerequisites (see Part 2), the actual deployment of the IsilonSD Edge cluster is fairly straightforward. If you experience issues during this section (see Part 1), it’s very likely because you don’t have the proper configuration, so revisit the previous steps. That said, let’s dive in and make a cluster.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

At a high level, you’re going to do a few things:

  1. Deploy the IsilonSD Edge Management Server
  2. Setup IsilonSD Edge Management Server Password
  3. Complete IsilonSD Edge Management Server Boot-up
  4. Configure Management Server vSphere Link & Upload Isilon Node Template
  5. Open the IsilonSD Edge vSphere Web Client Plug-in
  6. Deploy the IsilonSD Edge Cluster

Here’s the detail.

Deploy the IsilonSD Edge Management Server

*Note: the video has no sound; it’s provided so you can follow along with the steps.
This is your standard OVA deployment; as long as you’re using the “EMC_IsilonSD_Edge_MS_x.x.x.ova” file from the download and providing an IP address accessible to vCenter, you can deploy it just about anywhere.

Follow along in the video on the left if you’re not familiar with the OVA process.

Once the OVA deployment launches, ensure you find the deployment task in the vSphere Task Pane and keep an eye on the progress.
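
If you’d rather script this than click through the wizard, ovftool can push the OVA straight into vCenter. Below is a hedged sketch in Python; every name, path, and locator is a placeholder from my lab (ovftool will prompt for the vCenter password), not something prescribed by the IsilonSD documentation.

    # Sketch: deploy the Management Server OVA with ovftool.
    # Assumes ovftool is installed and on the PATH; all names are lab placeholders.
    import subprocess

    subprocess.run([
        "ovftool",
        "--acceptAllEulas",
        "--name=IsilonSD-MgmtServer",
        "--datastore=SSD-Datastore01",
        "--network=Virtual Infrastructure",    # port group reachable by vCenter
        "--powerOn",
        "EMC_IsilonSD_Edge_MS_x.x.x.ova",
        "vi://administrator%40vsphere.local@vcenter.lab.vernex.io/Lab/host/ClusterA/",
    ], check=True)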

Setup IsilonSD Edge Management Server password

IsilonSD_ManagementBootPasswordChange

Once the OVA deployment is complete and the virtual machine is booting up, you’ll need to open the console and watch the initialization process. Generally I recommend this with any OVA deployment, as you’ll see if any errors occur during the first-boot configuration. For the IsilonSD Edge Management Appliance it’s required, as you have to set the administrator password.

Complete IsilonSD Edge Management Server Boot-up

IsilonSD_ManagementConsoleBlueScreen

After entering your password, the server will continue its first-boot process and configuration. When you reach this screen (what I call the blue screen of start) you’re ready to proceed. Open a browser and navigate to the URL shown on the blue screen next to IsilonSD Management Server.

 

Configure Management Server vSphere Link & Upload Isilon Node Template

When you navigate to the URL provided by the blue screen, after accepting the untrusted certificate, you’ll be prompted for logon credentials. This is NOT the password you provided during boot-up. I failed to read the documentation and assumed it was, resulting in much frustration.

Logon using:
username: admin
password: sunshine

After a successful logon, and accepting the EULA, you have just a couple of steps, which you can follow along with in the video on the right:

  1. Adjust the admin password
  2. Register your vCenter
  3. Upload the Isilon Virtual Node template
    1. This is "EMC_IsilonSD_Edge_x.x.x.ova" in your download

*Note: the video has no sound; it’s provided so you can follow along with the steps.

 

Open the IsilonSD Edge vSphere Web Client Plug-in

Wait for the OVA template to upload; this may take up to ten minutes depending on your environment. Once complete, you’ll be ready to move on to actually creating the IsilonSD cluster through the vSphere Web Client plug-in that was installed by the Management Server when you registered vCenter. Ensure you close all your browser windows and open a new session to your vSphere Web Client.

IsilonSD_vCenterDatacenter

Select the datacenter where you deployed the management server (not the cluster; again, where I lost some time).

 

IsilonSD_ManageTab

In the right-hand pane of vSphere, under the Manage tab, you should now see two new sub-tabs, Create IsilonSD Cluster and Manage IsilonSD Cluster.

IsilonSD_vCenterIsilonTabs

 

Deploy the IsilonSD Edge Cluster

*Note: the video has no sound; it’s provided so you can follow along with the steps.

Follow along in the video above:

  1. Check the box next to your license
  2. Adjust your node resources
    1. For my deployment, I started with 3 nodes, adjusting the Cluster Capacity from the default 2 TB to the minimum 1152 GB (64 GB per drive * 6 data drives per node * 3 nodes)
  3. Clicking Next on the Requirements tab will search the virtual datacenter in your vCenter for ESX hosts that can satisfy the requirements you provided, including having the independent drives that meet the capacity requirement
    1. Should the process fail to find the necessary hosts, you’ll see a message like this. Don’t get discouraged; look over the requirements again to ensure everything is in order, and try restarting the Inventory Service too.
    2. IsilonSD_NoQualifiedHosts
  4. When the search for hosts is successful, you’ll see a list of hosts available to select, such as:
    1. IsilonSD_HostSelection
  5. Next, select all the hosts you wish to add to the cluster (if you prepared more than three, consider selecting three now; in the next post we’ll walk through adding an additional node).
  6. For each host, you’ll need to select the disks and their associated role (Data Disk, Journal, Boot Disk or Journal & Boot Disk).
    1. Remember, you need at LEAST 6 data disks; you won’t have gotten this far without them, but you won’t get any farther if you don’t select them.
    2. In our scenario, we select six 68GB data disks, and a final 28GB disk for Journal & Boot Disk
    3. You’ll also need to select the External Network Port Group and Internal Network Port Group
    4. IsilonSD_HostDriveSelection
  7. After setting up all hosts with the exact same configuration, you’ll move on to the Cluster Identity screen
    1. IsilonSD_ClusterIdentity
    2. Cluster Name (this is used in the management interface to name the cluster)
    3. Root Password
    4. Admin Password
    5. Encoding (I’d leave this alone)
    6. Timezone
    7. ESRS Information (only populate this if you have a production license)
  8. Next will be your network settings.
    1. IsilonSD_ClusterNetworking
    2. External Network
    3. Internal Network
    4. SmartConnect
  9. A final screen lets you verify all your settings; look them over (the full deployment will take a while), then click Next.
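
One tip before you kick it off: if you’d like more visibility than the plug-in’s progress bar, a small pyVmomi loop can poll vCenter’s recent tasks while the deployment runs. A minimal sketch (the vCenter address and credentials are placeholders from my lab):

    # Sketch: poll vCenter's recent tasks during the cluster deployment.
    import ssl
    import time
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()     # lab only: self-signed certificate
    si = SmartConnect(host="vcenter.lab.vernex.io",
                      user="administrator@vsphere.local",
                      pwd="...", sslContext=ctx)

    for _ in range(30):                        # watch for about 15 minutes
        for task in si.content.taskManager.recentTask:
            info = task.info
            print(info.descriptionId, info.state, info.progress)
        time.sleep(30)

    Disconnect(si)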

At this point patience is key; do not interrupt the process. An OVA will be deployed for every node, then all of those unformatted disks will be turned into datastores, then VMDK files placed on each datastore; finally all the nodes will boot and configure themselves. If everything goes as planned, your reward will look like this:

IsilonSD_ClusterCreationSuccess

To verify everything, point your browser at your SmartConnect address, in our case https://Isilon01.lab.vernex.io:8080; if you get a OneFS logon prompt, you should be in business!

IsilonSD_OneFSLogonPrompt

 

You should also be able to navigate in Windows to your SmartConnect address (recall ours is \\Isilon01.lab.vernex.io\) and see the IFS share. This is the initial administrator share that you’d disable in a production environment. Likewise, in *nix you can NFS-mount //Isilon01.lab.vernex.io:/IFS
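
To sanity-check the protocol endpoints without mounting anything, you can poke the SMB and NFS ports on the SmartConnect name. A trivial sketch (the FQDN is from my lab; each lookup may land on a different node thanks to SmartConnect):

    # Sketch: confirm SMB (445) and NFS (2049) answer on the SmartConnect name.
    import socket

    for port in (445, 2049):
        with socket.create_connection(("Isilon01.lab.vernex.io", port), timeout=5):
            print(f"port {port} is answering")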

By | March 30th, 2016 | EMC, Home Lab, Storage, VMWare | 1 Comment

IsilonSD – Part 2: Test Platform

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

So… last post I figured out IsilonSD Edge needs (at least) three separate ESX hosts, each with (at least) 7 independent, local disks. That means I cannot deploy it on the nested ESXi VSAN cluster I made, but I really want to get IsilonSD up and running.

Since my previous attempt to wing it didn’t go so well, I’m also going to do a little more planning. I need to create ESX hosts that meet the criteria, plus plan out things like the cluster name, network settings, IP addresses, and DNS zones.

For the ESX hosts, my solution is to run nested ESXi (a virtual machine running ESX on top of a physical machine running ESX). This allows me to provide the independent disks, as well as multiple ESX hosts, without all the hardware. It will also help facilitate the networking needs, by making virtual switches to provide the internal and external networks.

To build this test platform, we’ll cover 4 main areas:

  • ESX Hosts
  • External Network
  • Internal Network
  • SmartConnect

ESX Hosts

For testing IsilonSD Edge, I’m going to make four virtual machines and configure them as ESX hosts. Each of these will need four NICs (two for ESX purposes, two for IsilonSD) and nine hard drives (2 for ESX again, and 7 for IsilonSD). I’m hosting all the hard drives in a single datastore; it happens to be SSD. For physical networking, my host only has a single network card connected, so I’ve leveraged virtual switches without a network card to simulate a private management network.

A snapshot of my single VM configuration is below:

IsilonSD_NestedESXDiagram1

With the first virtual machine created, I now simply clone it three times, so I have four exact replicas. Why four? It will allow a three-node cluster to start; then I can test adding (and removing) a node without data loss, the same as we would with a physical Isilon.
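
Cloning is easy enough in the UI, but if you script your lab builds, the equivalent in pyVmomi looks roughly like this. A sketch only: the vCenter address, credentials, and VM names are placeholders from my lab, and the clones land in the template’s own folder and resource pool.

    # Sketch: clone the nested-ESXi template VM three times.
    import ssl
    from pyVim.connect import SmartConnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()     # lab only: self-signed certificate
    si = SmartConnect(host="vcenter.lab.vernex.io",
                      user="administrator@vsphere.local", pwd="...", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    template = next(vm for vm in view.view if vm.name == "NestedESX-01")

    spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(pool=template.resourcePool),
        powerOn=False)

    for n in (2, 3, 4):
        WaitForTask(template.Clone(folder=template.parent,
                                   name=f"NestedESX-0{n}", spec=spec))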

Note the ‘guest OS’ for the virtual machines is VMware ESXi 6.x. This is a nice feature of vSphere to help you keep track of your nested ESXi VMs. Keep in mind, though, that nesting vSphere is NOT supported by VMware; you cannot call and ask for help. That’s not a concern here, given I can’t call EMC for Isilon either since I’m using the Free and Frictionless downloads. This is not a production-grade configuration by any stretch.

IsilonSD_NestedESXDiagramx4

Once all four virtual machines exist on my physical ESX host, installing ESX is just an ISO attach away.

After installing ESX on all my virtual hosts, I then add them to my existing vCenter as hosts. vCenter doesn’t know these are virtual machines and treats them the same as a physical ESX host.
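
Before pointing the IsilonSD wizard at these hosts, it’s worth confirming each one actually presents the seven independent disks. A small pyVmomi sketch, reusing the connection from the clone example above; the ScsiDisk type and localDisk checks are my interpretation of the requirement, not EMC’s validation logic.

    # Sketch: count the local disks each nested host presents to vSphere.
    # Assumes an existing pyVmomi connection `si` (see the clone sketch).
    from pyVmomi import vim

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    for host in view.view:
        disks = [lun for lun in host.config.storageDevice.scsiLun
                 if isinstance(lun, vim.host.ScsiDisk)]
        # localDisk may be unset on some backends; treat "not False" as local
        local = [d for d in disks if getattr(d, "localDisk", None) is not False]
        print(host.name, "-", len(local), "local disks")
        for d in local:
            gb = d.capacity.block * d.capacity.blockSize / 1024**3
            print(f"  {d.canonicalName}: {gb:.0f} GB")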

I’ve placed these virtual hosts into a vCenter cluster. However, this is only for aesthetic purposes, to keep them organized. I won’t enable normal cluster features such as HA and DRS, given Isilon cannot leverage them, nor does it need them. Plus, given there is no shared storage between these hosts, you cannot do standard vMotion (enhanced vMotion is always possible, but that’s another blog).

Here you can see those virtual machines with ESX installed masquerading as vSphere hosts:

IsilonSD_NestedESXCluster

I’ll leverage the Cluster A you see in the screenshots for the Management Server and InsightIQ server. Cluster A is another cluster of nested ESXi VMs I used for testing VSAN; it also has the Virtual Infrastructure port group available, so all the IsilonSD Edge VMs can be on the same logical segment.

External Network

The Isilon external network is what faces the NAS clients. In my environment, I have a vSphere Distributed Virtual Switch port group called ‘Virtual Infrastructure’ where I place my core systems. This is where vCenter and the ESX hosts sit, and it’s what I’ll use for Isilon, as there is no firewall/router between the Isilon and what I’ll connect to it.

The Virtual Infrastructure network space is 10.0.0.0/24; I’ve set aside a range of IP addresses for Isilon in this network:
.50 for the management server
.151-.158 for nodes
.159 for SmartConnect
.149 for InsightIQ
You MUST have a contiguous range for your nodes, but all other IP addresses are personal preference.

For use in the deployment steps:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.0.151
-High IP Range: 10.0.0.158
-MTU: 1500
-Gateway: 10.0.0.1
-DNS Servers: 10.0.0.11
-Search Domains: lab.vernex.io

Internal Network

The physical Isilon appliances use dedicated InfiniBand switches to interconnect the nodes. This non-blocking, low-latency, high-bandwidth network allows the nodes to communicate with each other to stripe data across nodes, providing hardware resiliency. For IsilonSD Edge, Ethernet over a virtual network is used for this same purpose. If you were deploying this on physical nodes, you could bind the Isilon internal network to anything all hosts have access to: the same network as vMotion, or a dedicated network if you prefer. Obviously, 10Gb is preferable, and I would recommend diversifying your physical connections using failover or LACP at the vSwitch/VDS level.

For my lab, I have a vSphere DVS for private management traffic; this is bound to a virtual switch on my host that has no actual NIC associated with it. It’s a private network on the physical host underneath my nested ESXi instances. I use this DVS for VSAN traffic already, so I merely created an additional port group for Isilon, named PG_Isilon.

Because this is essentially a dedicated, non-routable network, the IP addresses do not matter. But to keep things clean I use a range set aside for private traffic (10.0.101.0/24), and I use the same last octet as on the external network.

For use in the deployment:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.101.151
-High IP Range: 10.0.101.158

SmartConnect

For those not familiar with Isilon, SmartConnect is the technique used for load balancing clients across the multiple nodes in the cluster. Isilon protects data across nodes using custom code, but to interoperate with the vast variety of clients, standard protocols such as SMB, CIFS, and NFS are used. For these, there is still no industry-standard method for spreading load across multiple servers (NFS does have the ability for transparent failover, which Isilon supports). Isilon’s approach is a beautiful blend of powerful and simplistic: by delegating a zone in your enterprise DNS for the Isilon cluster to manage, SmartConnect hands out IP addresses to clients based on whichever load-balancing option fits your workloads, such as round robin (the default) or others like least connections.

To prepare for deploying an IsilonSD Edge cluster, we’re going to modify the DNS to extend this delegation to Isilon. Configuring the DNS ahead of time makes the deployment simple. If you’re running Windows DNS, here are the quick steps (if you’re using BIND or something similar, this is a delegation and should be very similar in your config files).

Launch the Windows/Active Directory DNS Administration Tool

IsilonSD_NewDNSDelgation

Locate the parent zone you wish to extend; here I use lab.vernex.io.

Right-click on the parent zone and select New Delegation

Enter the name of the delegated zone; this ideally will be your cluster name, which for my deployment is Isilon01.

IsilonSD_Delgation1

Enter the IP address you intend to assign to Isilon SmartConnect

IsilonSD_Delgation2

That’s it. When the Isilon cluster is deployed and SmartConnect is running, you’ll be able to navigate to a CIFS share like \\Isilon01.lab.vernex.io; your DNS will pass the request to the Isilon DNS server, which will reply with the IP address of a node that can accept your workload. This same DNS name works for managing the Isilon cluster as well.

Quick tip: you can CNAME anything else to Isilon01.lab.vernex.io, so I could make File.lab.vernex.io a CNAME pointed at SmartConnect. This is an excellent way to replace multiple file servers with a single Isilon.
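
Once the cluster is up, the delegation is easy to verify from any client with dnspython (the resolve call is the dnspython 2.x API; on 1.x use query instead). SmartConnect answers with a very low TTL, so repeated lookups should rotate through the node IPs.

    # Sketch: verify the SmartConnect delegation answers and rotates IPs.
    import dns.resolver

    # The parent zone should hand back the delegation for the cluster zone.
    for rr in dns.resolver.resolve("Isilon01.lab.vernex.io", "NS"):
        print("delegated to:", rr)

    # Repeated A lookups should walk the node IPs (round robin is the default).
    for _ in range(4):
        answer = dns.resolver.resolve("Isilon01.lab.vernex.io", "A")
        print("node IP:", answer[0])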

For use in the deployment:
-Zone Name: Isilon01.lab.vernex.io
-SmartConnect Service IP: 10.0.0.159

 

By | March 28th, 2016 | EMC, Home Lab, Storage, VMWare | 1 Comment

IsilonSD – Part 1: Quick Deploy (or how I failed to RTM)

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

As I mentioned in my previous post, EMC recently released the “software defined” version of Isilon, their leading enterprise scale-out NAS solution. If you’re familiar with Isilon, you might already know there has been a virtual Isilon (called the Isilon Simulator) for years now. The virtual Isilon would run on a laptop with VMware Workstation/Fusion, or on vSphere in your datacenter. I’ve purchased and installed Isilon in multiple organizations, for several different use-cases; the Isilon Simulator was a great solution for testing changes pre-production as well as for familiarizing engineers with the interface. The Isilon Simulator is not supported, and up until recently you had to know the right people to even get ahold of it.

With the introduction of IsilonSD Edge, we now have a virtualized Isilon that is fully supported, available for download and purchase through your favorite EMC sales rep. It runs the same codebase as the physical appliances, with some adjustments for the virtual world. As we discussed, there is a “free” version for use in non-production as part of EMC’s Free and Frictionless movement. I’ve run the Isilon Simulator personally for years, so I want to leverage this latest release of IsilonSD Edge as the new test Isilon in my home lab.

A quick stop at the IsilonSD Edge download page and I’m quickly pulling down the bits. While waiting a few moments for the 2GB download, I review some of the links; there is a YouTube video on Aggregating Unused Storage, another that covers FAQs on IsilonSD Edge, and one more that talks about Expanding the Data Lake to the Edge. These all cover what I assumed: you’ll want at least three ESX hosts to provide physical diversity, you can run other workloads on these hosts, and the ultimate goal of this software is to extend Isilon into edge scenarios, such as branch offices.

Opening the downloaded ZIP file, I find a couple of OVA files, plus the installation instructions. I reviewed a couple of the FAQs linked from the product page, though I didn’t spend much time on the installation guide; nor did I watch the YouTube Demo: Install and Configure IsilonSD Edge. I like to figure some things out on my own; that’s half the fun of a lab, right? I did see under the system requirements a mention of support for VSAN-compatible hardware, referencing the VMware HCL for VSAN. I just recently set up VSAN in my home lab, so that, coupled with the fact that I’ve run the Isilon Simulator, means I’m good to go.

Fast forward through a couple failed installations, re-reading the FAQs, more failed installations, then reading the actual manual… here’s the catch.

You have to have local disks on each ESX node.

More specifically, you need to have directly attached storage… 
        without hardware RAID protection or VSAN.

Plus, you need at least 7 of these directly attached unprotected disks, per node.

While this wasn’t incredibly clear to me in the documentation, once you know it, you’ll see that it is stated; but given IsilonSD is running on VMDK files, I glossed over the parts of the documentation that (vaguely) spelled this out. If you’ve deployed the Isilon Simulator, or any OVA for that matter, you’re used to selecting where the virtual machines are deployed; I assumed this would be the same for IsilonSD and that I could choose the storage location.

However, IsilonSD comes with a vCenter plug-in that deploys the cluster; as part of that deployment, it scans for hosts and disks that meet these specific requirements. Moreover, during the deployment IsilonSD leverages a little-used feature in vSphere to create virtual serial port connections over the network, which the Isilon nodes use to communicate with the Management Server; this is how the cluster is configured. So deploying IsilonSD nodes by hand isn’t an option (you can still use the Isilon Simulator, which you can deploy more manually).

I’m going to stop here and touch on some thoughts; I’ll elaborate more on this in a later post, once I actually have IsilonSD Edge working.

I do not know any IT shop that has ESX hosts with locally attached, independent disks (again, not in a RAID group or under any type of data protection). We’ve worked hard as VM engineers to always build shared storage so we can use things like vMotion.

The marketing talks about capturing unused storage, about running IsilonSD on the same hosts as other workloads, in fact on the same storage as other VMs; but I’m not sure who has unused capacity that is also independent disks.

I certainly wouldn’t recommend running virtual machines on storage without any type of RAID-like protection. Maybe some folks have a server with some disks they never put into play, but 7 disks? And at least three servers with 7 disks each?

I know there are organizations with lean budgets, and this might be the best they can afford, but are shops like that licensing and running vCenter ($)? Are they looking at a virtual Isilon ($)?

Call me perplexed, but I’m going to put off thinking about this, as I still want to get this running in my lab. Since I don’t have three servers and 21 disks lying around at home, I’ll need to figure out a way to create a test platform for IsilonSD to run on.

Be back soon…

 

By | March 25th, 2016 | EMC, Home Lab, Storage, Uncategorized | 3 Comments

Free and Frictionless – A Series

One of the most common statements I’ve made to vendors over the years is “why should I pay to test your software?”. To this day I still don’t understand it: if I’m going to purchase software to run in my production environment, why should I have to pay to run that software for our development and testing needs? It seems counter-intuitive; in my mind, having easy access to software that IT can test and develop against increases the probability of it being chosen for a project. Having software be free in non-production allows developers to ensure it is properly leveraged; it also encourages accurate testing and helps operations ensure it’s ready to be run in production. In my experience this results not only in more use of the software in production (which means more sales for the vendor), but in more operational knowledge (which means less support needed from the vendor).

Companies offer different solutions in an attempt to solve this. Microsoft does it well with TechNet and MSDN subscriptions, which for a small yearly fee license your IT staff rather than the servers; you get some limited support and, recently, even cloud credits. Many companies provide time-bombed versions of their software; this helps in the evaluation phase to test installation, but falls short for ongoing development needs, not to mention that operations teams gain no experience. Some vendors will steeply discount non-production, though most only do this during the purchasing process, and I’ve seen a wide range of how well this gets negotiated (if at all).

There is no doubt in my mind that this challenge is a significant factor in the growth of open-source software. With the ability to easily download software, without time limits and without a sales discussion, the time to start being productive in developing a solution is dramatically reduced. I’ve made this very choice: downloading free software and beginning the project while things like budget were still not finalized. The software can be kept running in non-production, and when it moves into production, support contracts can begin. You don’t need to pay upfront, before prototyping, before a decision is made, and before any business value is being derived.

This is why I’ve been ecstatic that EMC is making a movement toward a model that allows the free use of software in non-production, even for products that are not using an open-source license. They refer to this approach as ‘Free and Frictionless’. It doesn’t apply to all their software, but the list is growing; currently it includes products like ScaleIO, VNX, ECS and, recently added, Isilon. The Free and Frictionless products are available for download, without support, but without time-bombs either. In most cases there are restrictions, such as the total amount of manageable storage. These limitations are easy to understand and work with, and they fully deliver on my age-old question: “why should I pay to test your software?”

I’m going to spend a little time with these offerings. Many of them I’ve run in production, at scale, so I’m interested in how well they stack up in their virtual forms. I’ll also explore some products I haven’t run before.

By | March 24th, 2016 | EMC, Home Lab, Storage, VMWare | 10 Comments