IsilonSD Edge

IsilonSD – Part 6: Final Thoughts

Now that I’ve spent some time with IsilonSD, how does it compare to my experience with its physical big brother? Where does the software-defined version fit?

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

Overview (TL;DR version)

I’m excited by the entrance of this product into the virtualization space. Isilon is a robust product that can be tuned for multiple use cases and workloads. Even though Isilon has years of product development behind it and is currently on its eighth major software version, the virtual product is technically V1. As with any first version, there are some areas to work on. From my limited time with IsilonSD, I believe this is everything its physical big brother is, in a smaller, virtual package. However, it also brings along some of the limitations of its physical past. Limitations to be aware of, but also limitations I believe EMC will be working to remove in vNext of IsilonSD.

If you ran across this blog because of interest in IsilonSD, I hope you can test the product, either with physical nodes or with the test platform I’ve put together. Only with customer testing and feedback can the product mature into what it’s capable of becoming.

Deep Dive (long version)

From running Isilons in multiple use cases and companies, I always wanted the ability to get smaller Isilon models for my branch offices. I’ve worked in environments with hundreds of physical locations of varying sizes, and in many of them we wanted a universal file solution in each spoke, replicating back to a hub, regardless of the location’s size. The startup cost for a new Isilon cluster was prohibitive for a smaller site, leading us to leverage Windows File Servers (an excellent file server solution, but that’s a different post) for those situations. That bifurcated our file services stack, which increased complexity in management, not just of the file storage itself, but of ancillary needs like monitoring and backups.

Given I’ve been running a virtualized Isilon simulator for as long as I’ve been purchasing and implementing the Isilon solution, leveraging a virtualized Isilon for these branch office scenarios was always on my wish list. When I heard the rumors that an actual virtual product was in the works (vIMSOCOOL), I expected the solution to target this desire. When IsilonSD Edge was released, as I read the documentation, I continued with this expectation. I watched YouTube videos that said this was the exact use case.

It’s taken actually setting up the product in a lab to understand that IsilonSD Edge is not the product I expected it to be. Personally, though the solution is by its nature ‘software defined’ since it includes no hardware, it doesn’t quite fit the definition I’ve come to believe SD stands for. This is less a virtual Isilon, or software-defined Isilon, than it is “bring your own hardware”, IsilonBYOH if you will.

IsilonBYOH is, on its merits, an exciting product and highlights what makes Isilon successful: a great piece of software sitting on commodity servers. This approach is what has allowed Isilon to become the product it is, supporting a plethora of node types as well as hard drive technologies. You can configure a super-fast, flash-based NFS NAS as ultra-reliable storage behind web servers, where you store the data once and give all nodes shared access. You can leverage the multi-tenancy options to provide mixed storage in a heterogeneous environment: NFS to service servers and CIFS for end users, talking to both LDAP and Active Directory, tiering between node types to maximize performance for newer files and cost for older ones. You can create a NAS for high-capacity video editing needs, where current data sits on SSD for screaming-fast performance, then moves to HDD when the project is complete. You can even create an archive-class storage array with cloud-competitive pricing to store aged data, knowing you can easily scale, support multiple host types and, if ever needed, incorporate faster nodes to increase performance.

With this new product, you can now start even smaller, purchasing your own hardware, running on your own network, and still leverage the same management and monitoring tools, even the same remote support. Plus you can replicate it just the same, including to traditional Isilon appliances.

However, to me, leveraging IsilonSD Edge does call for purchasing hardware, not simply adding this to your existing vSphere cluster and capturing unused storage. IsilonSD Edge, while running on vSphere, requires locally attached, independent hard drives. This excludes leveraging VSAN, which means no VxRail (and all the competitive HCIAs). It also means no ROBO hardware such as Dell VRTX (and all the similar competitive offerings); in fact, just having RAID excludes you from using IsilonSD. These hardware requirements, specifically the dedicated disks, turn into limitations. Unless you’re in a position to dedicate three servers, which you’ll likely need to buy new to meet the requirements, you’re probably not putting this out in your remote/branch offices, even though that’s the goal of the ‘Edge’ part of the name.

When you buy those new nodes, you’d probably go ahead and leverage solid state drives; the cost of locally attached SATA SSDs is quickly reaching parity with traditional hard drives. But understand, IsilonSD Edge will not take advantage of those faster drives like its physical incarnation does… no metadata caching with the SD version. Nor can the SD version provide any tiering through SmartPools (you can still control the data protection scheme with SmartPools, and obviously you’ll get a speed boost with SSD).

Given all this, the use cases for IsilonSD Edge get very narrow. With the inability to put IsilonSD Edge on top of ROBO designs, the likelihood of needing to buy new hardware, coupled with the 36TB overall limit of the software-defined version of Isilon, I struggle to identify a production scenario that is a good fit. The best-case scenario in my mind is purchasing hardware with enough drives to run both IsilonSD and VSAN, side-by-side, on separate drives; this would require at least nine drives per server (more, really), so you’re talking some larger machines, and again, a narrow fit.
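To make the sizing ceiling concrete, here’s a rough sketch of how quickly the 36TB overall limit gets hit. This is my own illustration, not an EMC sizing tool; the node and drive counts come from the product limits discussed in this series, while the drive sizes are made up.

```python
# Illustrative only: raw capacity of an IsilonSD Edge layout versus the
# product's 36 TB overall cluster limit. Drive sizes are hypothetical.

MAX_CLUSTER_TB = 36  # overall IsilonSD Edge limit

def raw_capacity_tb(nodes, data_drives_per_node, drive_size_tb):
    """Raw (pre-protection) cluster capacity in TB."""
    return nodes * data_drives_per_node * drive_size_tb

def fits_limit(nodes, data_drives_per_node, drive_size_tb):
    """True if the layout stays within the 36 TB product cap."""
    return raw_capacity_tb(nodes, data_drives_per_node, drive_size_tb) <= MAX_CLUSTER_TB

# Six nodes with six 1 TB data drives each exactly hits the cap...
print(raw_capacity_tb(6, 6, 1.0))  # 36.0
# ...so moving to 2 TB drives at full node count is already over the limit.
print(fits_limit(6, 6, 2.0))       # False
```

In other words, at the maximum six-node count, the cap works out to roughly 1TB of raw capacity per data drive, which is small by 2016 drive standards.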

To me, this product is less about today and more about tomorrow; release one setting the foundation for the future opportunity of virtual Isilon.

What is that opportunity?

For starters, running IsilonSD Edge on VxRail, even deploying it directly through the VxRail marketplace; by this, I mean running the IsilonSD Edge VMDK files on the VSAN datastore.

Before you say the Isilon protection scheme would double-down storage needs on the VSAN model, keep in mind you can configure per-VM policies in VSAN. Setting Failures To Tolerate (FTT) of 0 is not recommended, but this is why it exists. Let Isilon provide data protection while playing in the VSAN sandbox. Leverage DRS groups and rules to configure anti-affinity for the Isilon virtual nodes, keeping them on separate hosts. Would VSAN introduce latency compared to physical disk? Quite probably… though in the typical ROBO scenario that’s not the largest concern. I was able to push 120Mbps onto my IsilonSD Edge cluster, and that was with nested ESXi all running on one host.
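Here’s a back-of-envelope model of why the per-VM FTT policy matters so much in this layered design. The numbers are my own assumptions, not EMC or VMware guidance: I model Isilon’s own protection loosely as an N+1-style overhead on a small cluster (about 2/3 of raw becoming usable; real OneFS overhead varies by protection level), and the raw VSAN capacity is made up.

```python
# Hypothetical capacity model: Isilon protection layered on top of VSAN.
# FTT=1 means VSAN keeps two copies of every object, so the two protection
# layers stack; FTT=0 lets Isilon protect alone.

def usable_tb(raw_tb, vsan_ftt):
    """Usable TB after VSAN replication and a rough 2/3 Isilon efficiency."""
    vsan_copies = vsan_ftt + 1     # FTT=1 -> 2 copies, FTT=0 -> 1 copy
    return raw_tb / vsan_copies * 2 / 3  # assumed N+1-ish Isilon overhead

raw = 12.0  # TB of raw VSAN capacity, illustrative
print(usable_tb(raw, vsan_ftt=1))  # 4.0 -> both layers protect
print(usable_tb(raw, vsan_ftt=0))  # 8.0 -> Isilon protects alone
```

Double protection halves the usable space; an FTT=0 policy scoped to just the Isilon node VMs avoids that, while the rest of the cluster keeps its normal policy.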

All of this doesn’t just apply to VxRail, but to its competitors in the hyper-converged appliance space, as well as a wide range of products targeted at small installations. To expand on the small installation scenario: if IsilonSD had lower data protection options, like VSAN does, to remove the need for six disks per node, or even three nodes, it could fit in smaller situations. Why not trust the RAID protection beneath the VM and still leverage Isilon for the robust NAS features it provides? Meaning run a single-node Isilon; after all, those remote offices are likely providing file services with Windows or Linux VMs, relying on vSphere HA/DRS for availability, and server RAID (or VSAN) for data loss prevention. Isilon has a rich feature set outside of just data protection across nodes. Even a single-node Isilon with SyncIQ back to a mothership has compelling use cases.

On the other side of the spectrum, putting IsilonSD in a public cloud provider, where you don’t control the hardware and storage, has quite a few use cases. Yes, Isilon has CloudPools technology, which extends an Isilon into public cloud models that provide object storage. But a virtual Isilon running in, say, vCloud Air or Virtustream, with SyncIQ to your on-premises Isilon, opens quite a few doors, such as for those looking at public cloud disaster-recovery-as-a-service solutions. Or moving to the cloud while still having a bunker on-premises for securing your data.

Outside of the need for independent drives, this is an Isilon, running on vSphere. That’s… awesome! As I mentioned before, this opens some big opportunities should EMC continue down this path. Plus, it’s Free and Frictionless, meaning you can do the exact same testing I’ve done. If you are an Isilon customer today, GO GET THIS. It’s a great way to test out changes, upgrades, command line scripts, etc.

If you are running the Free and Frictionless version, outside of the 36TB and six-node limit, you also do NOT get licenses for SyncIQ, SmartLock or CloudPools.

I’ll say, given I went down this road from my excitement about Free and Frictionless, these missing licenses are a little disappointing. I’ve run SyncIQ and SmartLock, two great features, and was looking forward to testing them and having them handy to help answer questions I get when talking about Isilon.

CloudPools, while I have not run, is something that I’ve been incredibly excited about for years leading up to its release, so I’ll admit I wish it were in the Free and Frictionless version, if only a small amount of storage to play with.

Wrapping up: there are countless IT organizations out there, and I’ve never met one that wasn’t unique. Even with some areas I’d like to see improved with this product, undoubtedly IsilonSD Edge will apply to quite a few shops. In fact, I’ve heard some customers were asking for a BYOH Isilon approach, so maybe this is really for them (if so, the 36TB seems limiting). If you’re looking at IsilonSD Edge, I’d love to hear why; maybe I missed something (certainly I have). Reach out, or use the comments.

If you are looking into IsilonSD Edge, outside of the drive/node requirements, here are some things that caught my eye.

While the FAQs state you can run other virtual machines on the same hosts, I would advise against it. If you had enough physical drives to split them between IsilonSD and VSAN, it could be done. You could also use NFS, iSCSI or Fibre Channel for datastores, but this is overly complex and in all likelihood more expensive than simply having dedicated hardware for IsilonSD Edge (or really, just buying the physical Isilon product). And given the datastores used by the IsilonSD Edge nodes are unprotected, putting a VM on them means you are just asking for a drive to fail and to lose that VM.

Because you are dedicating physical drives to a virtual machine, you cannot vMotion the IsilonSD virtual nodes. This means you cannot leverage DRS (Distributed Resource Scheduler), which in turn means you cannot leverage vSphere Update Manager to automatically patch the hosts (as it relies on moving workloads around during maintenance).

The IsilonSD virtual nodes do NOT have VMware Tools. This means you cannot shut down the virtual machines from inside vSphere (for patching or otherwise); rather, you’ll need to enter the OneFS administrator CLI and shut down the Isilon node, then go perform ESX host maintenance. If you have reporting in place to ensure your virtual machines have VMware Tools installed, running and at the supported version (something I highly recommend), you’ll need to adjust it. Other systems that leverage VMware Tools, such as Infrastructure Navigator, will not work either.

I might be overlooking something (I hope so), but I cannot find a way to expand the storage on an existing node. In my testing scenario, I built the minimal configuration of six data drives of a measly 64GB each. I could not figure out how to increase this space, which is something we’re all accustomed to on vSphere (in fact, quickly growing a VM’s resources is a cornerstone of virtualization). I can increase the overall capacity by adding nodes, but this requires additional ESX hosts. If this is true, again the idea of using ‘unclaimed capacity’ for IsilonSD Edge is marginalized.

IsilonSD wants nodes in a pool to be configured the same, specifically with the same number and size of drives. This is understandable, as it spreads data across all the drives in the pool equally. However, this lessens the value of ‘capturing unused capacity’. Aside from the unprotected storage point, if you were to have free storage on drives, your ability to deploy IsilonSD will be constrained by the volume with the lowest free space, as all the VMDK files (virtual drives) have to be the same size. Even if you had twenty-one independent disks across three nodes, if just one of them had less free space than the rest, that free space dictates the drive size you can configure.

Even though I’m not quite sure where this new product fits or what problem it solves, that’s true of many products when they first release. It’s quite possible this will open new doors no one knew were closed, and if nothing else, I’m ecstatic EMC is pursuing a virtual version of the product. After all, this is just version 1… what would you want in version 2? Respond in the comments!

By | April 4th, 2016|EMC, Home Lab, Opinions, Storage|2 Comments

IsilonSD – Part 5: Monitoring Activity

For my deployment of IsilonSD Edge, I want to keep it running in my lab; installing systems is often far easier than operating them (especially troubleshooting issues). However, an idle system isn’t really a good way to get exposure, so I need to put a little activity on this cluster, plus monitor it.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

This is just my lab, so here is my approach to doing more with IsilonSD than simply deploying it:

  • Deploy InsightIQ (EMC’s dedicated Isilon monitoring suite)
  • Move InsightIQ Data to IsilonSD Edge Cluster
  • Synchronize Software Repository
  • Mount Isilon01 as vSphere Datastore
  • Load Test

Deploy InsightIQ

InsightIQ is EMC’s custom-built monitoring application for Isilon. Personally, this was one of the top reasons I selected Isilon years ago when evaluating NAS solutions. I’m a firm believer that the ability to monitor a solution should be a key deciding factor in product selection.

Without going too deep into InsightIQ itself (that’s another blog), it provides the ability to monitor the performance of the Isilon, including the client perspective of that performance. You can drill into the latency of operations by IP address; when I first purchased an Isilon array, this mattered because the incumbent solution was having numerous performance problems, and the lack of visibility into why was causing a severe customer satisfaction issue.

InsightIQ monitors the nodes, cluster communication, and even does file analysis to help administrators understand where their space is consumed and by what type of files.

Deploying InsightIQ is a typical OVA process. We’ve collected the information necessary in previous posts, so I’ll be brief; in fact, you can probably wing it on this one if you want.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In the vSphere Web Client, deploy an OVA
  2. Provide the networking information and datastore for the InsightIQ appliance
  3. After the OVA deploy is complete, open the console to the VM, where you’ll need to enter the root password
  4. Navigate your browser to the IP address you entered, logging in as root, with the password you created in the console
  5. Add the Isilon cluster to InsightIQ and wait while it discovers all the nodes.

 

Move InsightIQ Data to IsilonSD Edge Cluster

You can imagine collecting performance data and file statistics will consume quite a bit of storage. By default InsightIQ will store all this data on the virtual machine, so I moved the InsightIQ datastore onto the Isilon cluster itself. While this is a little circular, InsightIQ will generate some load writing the monitoring data, which in turn will give it something to monitor; for our lab purposes this provides some activity.

Simply log into InsightIQ and, under Settings -> Datastore, change the location to NFS Mounted Datastore. By default Isilon shares out /IFS; in production this should ALWAYS be changed, but for a lab we’ll leverage the default export path.

IsilonSD_InsightIQDSMove

If you do this immediately after deploying InsightIQ, it will be very quick. If, however, you’ve been collecting data, you’ll be presented with information about the progress of the migration, refreshing the browser will provide updates.

IsilonSD_InsightIQDSMoveProgress

Synchronize Software Repository

I have all my ISO files, keys, OVAs and software installers on a physical NAS; this makes it very easy to mount via NFS to all my hosts, physical and nested, as a datastore for quickly installing software in my lab. Because of this, I use this repository daily. So, to ensure I’m actually utilizing IsilonSD and continuing to learn about it post-setup, I’m going to use IsilonSD to keep a copy of this software repository, mounting all my nested ESXi hosts to it.

I still need my physical NAS for my physical hosts; if I lose the IsilonSD, I don’t want to lose all my software and be unable to reinstall. I want the physical NAS and IsilonSD to stay in sync too. My simple solution is to leverage robocopy to sync the two file systems; the added benefit is that I also get regular load on IsilonSD.

Delving into robocopy is a whole different post, but here is my incredibly simple batch routine. It mirrors my primary NAS software repository to the Isilon. This runs nightly now.

:START
REM /MIR mirrors the tree, /MT:64 uses 64 copy threads,
REM /R:0 /W:0 skips busy files immediately, /ZB falls back to backup mode.
robocopy \\nas\software\ \\isilon01\ifs\software\ /MIR /MT:64 /R:0 /W:0 /ZB
REM GOTO restarts the mirror as soon as a pass completes; for a true
REM nightly run, drop the loop and schedule the copy via Task Scheduler.
GOTO START

Upon first execution, I see traffic onto IsilonSD in InsightIQ. Even though this is nested ESXi, with the virtual Isilon nodes sharing compute, network, memory and disk, I see a fairly healthy external throughput rate, peaking around 100Mb/s.

IsilonSD_InsightIQRobocopyThroughput

 

When the copy process is complete, looking in the OneFS administrator console will show the data has been spread across the nodes (under HDD Used).

IsilonSD_HDDLoaded

Mount Isilon01 as vSphere Datastore

Generally speaking, I would not recommend Isilon for VMware storage. Isilon is built for file services, and its specialty is sequential access workloads. For small workloads, if you have an Isilon for file services already, an Isilon datastore will work; but there are better solutions for vSphere data stores in my opinion.

For my uses in the lab though, with my software repository being replicated onto Isilon, mounting an Isilon NFS export as a datastore will not only allow me to access those ISO files but also open multiple concurrent connections to monitor.

*Note: there is no sound; the video is just to follow along with the steps.
Mounting an NFS datastore to Isilon is exactly the same as any other NFS NAS.

You MUST use the FQDN to allow SmartConnect to balance the connections.
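To see why the FQDN matters, here’s a toy simulation of the behavior (my own sketch, not the real SmartConnect resolver, and the node IPs are hypothetical): SmartConnect answers each DNS query for the cluster name with a different node IP, while mounting by a literal IP pins every client to one node.

```python
# Toy model of SmartConnect-style DNS balancing: queries for the cluster
# FQDN rotate through node IPs; a raw IP bypasses balancing entirely.
from itertools import cycle

NODE_IPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical node IPs
_rr = cycle(NODE_IPS)

def resolve(name):
    """Return the next node IP for the cluster FQDN; pass raw IPs through."""
    if name == "isilon01.lab.vernex.io":
        return next(_rr)
    return name

# Three hosts mounting by FQDN each land on a different node...
print([resolve("isilon01.lab.vernex.io") for _ in range(3)])
# ...while mounting by IP piles every connection onto one node.
print([resolve("10.0.0.11") for _ in range(3)])
```

The real mechanism is a DNS delegation to the cluster’s SmartConnect service IP, but the effect on datastore mounts is the same: FQDN mounts spread across nodes, IP mounts don’t.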

 

With the datastore mounted, if you go back into the OneFS administrator console; you can see the connections were spread across the nodes.

IsilonSD_ConnectionSpread

Now I have a purpose to regularly use my IsilonSD Edge cluster, keeping it around for upgrade testing, referencing while talking to others, etc. Again, with the EMC Free and Frictionless license, I’m not going to run out of time; I can keep using this.

Load Test

Even though I have an ongoing use for IsilonSD, I want it to do a little more than just serve as a software share, to ensure it’s really working well. So I’ll use IOMeter to put a little load on it.
I’m running IsilonSD Edge on 4 nested ESXi virtual machines, which in turn are all running on one physical host. So IsilonSD is sharing compute, memory, disk and network across the 4 IsilonSD nodes (plus I have dozens of other servers running on this host). Needless to say, this is not going to handle a high amount of load, nor provide the lowest latency. So, while I’m going to use IOMeter to put some load on my new IsilonSD Edge cluster, and typically I would record all the details of a performance test, this time I’m not, especially given I’m generating load from virtual machines on the same host.

Given Isilon runs on x86 servers, it would be incredibly interesting to see a scientific comparison between physical Isilon and IsilonSD Edge on like-for-like hardware. In my personal experience with virtualization there is negligible overhead, but I have to wonder what difference InfiniBand makes.

In this case, my point of load testing is not to ascertain the latency or IOPS, but merely to put the storage device under some stress for a couple of hours to ensure it’s stable. So I created a little load, peaking around 80Mbps and 150 IOPS, but running for about 17 hours (overnight).

Below are some excerpts from InsightIQ, happily the next morning the cluster was running fine, even given the load. During the test, the latency fluctuated widely (as you’d expect due to the level of contention my nested environment creates). From an end user perspective, it was still usable.

IsilonSD_LoadTest1
IsilonSD_LoadTest2
IsilonSD_LoadTest3
IsilonSD_LoadTest4

In my next post I’m going to wrap this up and share my thoughts on IsilonSD Edge.

By | April 1st, 2016|EMC, Home Lab, VMWare|1 Comment

IsilonSD – Part 4: Adding & Removing Nodes

One of the core functions of the Isilon product is scaling. In its physical incarnation you can scale up to 144 nodes with over 50 petabytes in a single namespace. Those limits are because of hardware; as drives and switches get bigger, so can Isilon scale bigger. Even so, you can still start with only three nodes. When nodes are added to the cluster, storage and performance increase; existing data is re-balanced across all the nodes after the addition. Likewise, you can remove a node, proactively migrating the data off the departing node without sacrificing data protection; an excellent way to lifecycle-replace your hardware. This tenet of Isilon, coupled with non-disruptive software upgrades, means there is no pre-set lifespan for an Isilon cluster. With SmartPools’ ability to tier storage by node type, you can leverage older nodes for less frequently accessed data, maximizing your investment.

IsilonSD Edge has that same ability, though slightly muted given you’re limited to six nodes and 36TB (for now hopefully). I wanted to walk through the exercise, to see how node changes are accomplished in the virtual version, which is different from the physical version.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

 

Adding a Node

Adding a node to the IsilonSD Edge cluster is very easy, as long as you have an ESX host with all the criteria ready. If you recall from building our test platform, we built a fourth node for this very purpose.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. Click the + button
  5. The Management Server will again search for candidate hosts; if any are found, it will allow you to select them.
  6. Again select the disks and their roles, and then proceed; all the cluster and networking information already exists.

 

Just like creating the cluster, the OVA will be deployed, data stores created (if you used raw disks) and the IsilonSD node brought online. This time the node will be added into the cluster, which will start a rebalance effort to re-stripe the data across all the nodes, including the new one.

Keep in mind, IsilonSD Edge can scale up to six nodes, so if you start with three you can double your space.

Removing a Node

Removing a node is just as straight-forward as adding one, as long as you have four or more nodes. This action can take a very long time, depending on how much data you have, as before the node can be removed from the cluster all the data must be re-striped.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. Select the node to evict (in our case, node 4)
  5. Click the – (minus) button.
  6. Double check the node and select Yes
  7. Wait until the node Status turns to StopFail

 

During the smartfail operation, should you log onto the IsilonSD Edge administrator GUI, you’ll notice in the Cluster Overview that the node you are removing has a warning light next to it. This is also a good summary screen to gauge the progress of the smartfail, by comparing the % column of the node you’re evicting to the other nodes. In the picture below, the node we chose to remove is now <1% used, while the other 3 nodes are at 4% or 5%, meaning we’re almost there.

IsilonSD_RemoveNodeClusterOverview

Drilling into that node is the best way to understand why it has a warning; there you will see the message that the node is being smartfailed.

IsilonSD_RemoveNodeSmartFailMessage

When the smartfail is complete, you still have some cleanup activities.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. The node you previously set to evict should be red in the Status
  5. Select the node, then click on the trash icon.
  6. This will delete the virtual machine and associated VMDKs

 

If you provided IsilonSD unformatted disks, the datastores the wizard created will still exist, and you might want to clean them up. If you want to re-add the node, you’ll need to wait a while, or restart the vCenter Inventory Service, as it takes a bit to update.

By | March 31st, 2016|EMC, Home Lab, Storage, VMWare|1 Comment

IsilonSD – Part 3: Deploy a Cluster (Successfully)

With proper planning and setup of the prerequisites (see Part 2), the actual deployment of the IsilonSD Edge cluster is fairly straight-forward. If you experience issues during this section (see Part 1) it’s very likely because you don’t have the proper configuration, so revisit the previous steps. That said, let’s dive in and make a cluster.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

High Level, you’re going to do a few things:

  1. Deploy the IsilonSD Edge Management Server
  2. Setup IsilonSD Edge Management Server Password
  3. Complete IsilonSD Edge Management Server Boot-up
  4. Configure  Management Server vSphere Link & Upload Isilon Node Template
  5. Open the IsilonSD Edge vSphere Web Client Plug-in
  6. Deploy the IsilonSD Edge Cluster

Here’s the detail.

Deploy the IsilonSD Edge Management Server

*Note: there is no sound; the video is just to follow along with the steps.
This is your standard OVA deployment; as long as you’re using the “EMC_IsilonSD_Edge_MS_x.x.x.ova” file from the download and providing an IP address accessible to vCenter, you can deploy this just about anywhere.

Follow along in the video on the left if you’re not familiar with the OVA process.

Once the OVA deployment launches, ensure you find the deployment task in the vSphere Task Pane and keep an eye on the progress.

Setup IsilonSD Edge Management Server password

IsilonSD_ManagementBootPasswordChange

Once the OVA deployment is complete and the virtual machine is booting up, you’ll need to open the console and watch the initialization process. Generally I recommend this with any OVA deployment, as you’ll see any errors that occur during the first-boot configuration. For the IsilonSD Edge Management Appliance, it’s required in order to set the administrator password.

Complete IsilonSD Edge Management Server Boot-up

IsilonSD_ManagementConsoleBlueScreen

After entering your password, the server will continue its first-boot process and configuration. When you reach this screen (what I call the blue screen of start), you’re ready to proceed. Open a browser and navigate to the URL provided on the blue screen next to IsilonSD Management Server.

 

Configure Management Server vSphere Link & Upload Isilon Node Template

When you navigate to the URL provided by the blue screen, after accepting the unauthorized certificate, you’ll be prompted for logon credentials. This is NOT the password you provided during boot-up. I failed to read the documentation and assumed it was, resulting in much frustration.

Logon using:
username: admin
password: sunshine

After successful logon, and accepting the EULA, you have just a couple steps, which you can follow along in the video on the right:

  1. Adjust the admin password
  2. Register your vCenter
  3. Upload the Isilon Virtual Node template
    1. This is “EMC_IsilonSD_Edge_x.x.x.ova” in your download

*Note: there is no sound; the video is just to follow along with the steps.

 

Open the IsilonSD Edge vSphere Web Client Plug-in

Wait for the OVA template to upload, this may take up to ten minutes depending on your environment. Once complete, you’ll be ready to move on to actually creating the IsilonSD cluster through the vSphere Web Client plug-in that was installed by the Management Server when you registered vCenter. Ensure you close out all the browser windows and open a new session to your vSphere Web Client.

IsilonSD_vCenterDatacenter

Select the datacenter where you deployed the management server (not the cluster; again, where I lost some time).

 

IsilonSD_ManageTab

In the right-hand pane of vSphere, under the Manage tab, you should now see two new sub-tabs, Create IsilonSD Cluster and Manage IsilonSD Cluster.

IsilonSD_vCenterIsilonTabs

 

Deploy the IsilonSD Edge Cluster

*Note: there is no sound; the video is just to follow along with the steps.

Follow along in the video above:

  1. Check the box next to your license
  2. Adjust your node resources
    1. For my deployment, I started with 3 nodes, adjusting the Cluster Capacity from the default 2 TB to the minimum 1152 GB (64 GB × 6 data drives × 3 nodes)
  3. Clicking Next on the Requirements tab will search the virtual datacenter in your vCenter for ESX hosts that can satisfy the requirements you provided, including having those independent drives that meet the capacity requirement
    1. Should the process fail to find the necessary hosts, you’ll see a message like this. Don’t get discouraged, look over the requirements again to ensure everything is in order, try restarting the Inventory Service too.
    2. IsilonSD_NoQualifiedHosts
  4. When the search for hosts is successful, you’ll see a list of hosts available to select, such as
    1. IsilonSD_HostSelection
  5. Next, select all the hosts you wish to add to the cluster (if you prepared more than 3, consider selecting 3 now, as the next post we’ll walk through adding an additional node).
  6. For each host, you’ll need to select the disks and their associated role (Data Disk, Journal, Boot Disk or Journal & Boot Disk).
    1. Remember, you need at LEAST 6 data disks per node; you won’t get this far without them, but you also won’t get farther if you don’t select them here.
    2. In our scenario, we selected 6x 68GB data disks, and a final 28GB disk for Journal & Boot Disk
    3. You’ll also need to select the External Network Port Group and Internal Network Port Group
    4. IsilonSD_HostDriveSelection
  7. After setting up all hosts with the exact same configuration, you'll move on to the Cluster Identity
    1. IsilonSD_ClusterIdentity
    2. Cluster Name (this is used in the management interface to name the cluster)
    3. Root Password
    4. Admin Password
    5. Encoding (I’d leave this alone)
    6. Timezone
    7. ESRS Information (only populate this if you have a production license)
  8. Next will be your network settings.
    1. IsilonSD_ClusterNetworking
    2. External Network
    3. Internal Network
    4. SmartConnect
  9. A final screen lets you verify all your settings; look them over carefully (the full deployment will take a while), then click Next.
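The sizing in step 2 is worth sanity-checking before you commit: the wizard's 1152 GB minimum works out to the usable size per data disk, times the data disks per node, times the node count. A minimal sketch using my deployment's numbers (the 64 GB usable figure and six data disks per node reflect my build, not fixed product limits; the 68 GB VMDKs provisioned later leave a little headroom above it):

```python
# Minimum IsilonSD Edge cluster capacity for my deployment.
# Each node contributes only its data disks; the 7th drive is Journal & Boot.
GB_PER_DATA_DISK = 64    # usable capacity counted per data disk (assumption from my build)
DATA_DISKS_PER_NODE = 6  # 7 drives per node minus 1 for Journal & Boot
NODES = 3                # minimum node count for a cluster

def cluster_capacity_gb(disk_gb: int, disks_per_node: int, nodes: int) -> int:
    """Raw cluster capacity in GB, before any protection overhead."""
    return disk_gb * disks_per_node * nodes

print(cluster_capacity_gb(GB_PER_DATA_DISK, DATA_DISKS_PER_NODE, NODES))  # 1152
```

If you size your data disks differently, plug your own numbers in; the wizard will reject a Cluster Capacity the selected disks can't satisfy.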

At this point, patience is key; do not interrupt it. An OVA will be deployed for every node, then all of those unformatted disks will be turned into datastores, then VMDK files will be placed on each datastore; finally, all the nodes will boot and configure themselves. If everything goes as planned, your reward will look like this:

IsilonSD_ClusterCreationSuccess

To verify everything, point your browser to your SmartConnect address, in our case https://Isilon01.lab.vernex.io:8080; if you get a OneFS logon prompt, you should be in business!

IsilonSD_OneFSLogonPrompt

 

You should also be able to navigate in Windows to your SmartConnect address (recall ours is \\Isilon01.lab.vernex.io\) and see the IFS share. This is the initial administrator share that you'd disable in a production environment. Likewise, on *nix you can mount it via NFS from Isilon01.lab.vernex.io:/ifs


By | March 30th, 2016|EMC, Home Lab, Storage, VMWare|1 Comment

IsilonSD – Part 2: Test Platform

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

So… last post I figured out IsilonSD Edge needs (at least) three separate ESX hosts, each with (at least) 7 independent, local disks. So I cannot deploy this on the nested ESXi VSAN I made, but I really want to get IsilonSD up and running.

Since my previous attempt to wing it didn't go so well, I'm also going to do a little more planning. I need to create ESX hosts that meet the criteria, plus plan out things like the cluster name, network settings, IP addresses and DNS zones.

For the ESX hosts, my solution is to run nested ESXi (a virtual machine running ESX on top of a physical machine running ESX). This allows me to provide the independent disks, as well as multiple ESX hosts, without all the hardware. It will also help facilitate the networking needs, by making virtual switches to provide the internal and external networks.

To build this test platform, we’ll cover 4 main areas:

  • ESX Hosts
  • External Network
  • Internal Network
  • SmartConnect

ESX Hosts

For testing IsilonSD Edge, I'm going to make four virtual machines and configure them as ESX hosts. Each of these will need four vNICs (two for ESX purposes, two for IsilonSD) and nine hard drives (two for ESX again, and seven for IsilonSD). I'm hosting all the hard drives in a single datastore; it happens to be SSD. For physical networking, my host only has a single network card connected, so I've leveraged virtual switches without a network card to simulate a private management network.

A snapshot of my single VM configuration is below:

IsilonSD_NestedESXDiagram1

With the first virtual machine created, now I simply clone it three times, so I have four exact replicas. Why four? It will allow a three-node cluster to start; then I can test adding (and removing) a node without data loss, the same as we would with a physical Isilon.

Note the ‘guest OS’ for the virtual machines is VMware ESXi 6.x. This is a nice feature of vSphere to help you keep track of your nested ESXi VMs. Keep in mind, though, that nesting vSphere is NOT supported by VMware; you cannot call and ask for help. That's not a concern here, given I can't call EMC for Isilon either since I'm using the Free and Frictionless downloads. This is not a production-grade configuration by any stretch.

IsilonSD_NestedESXDiagramx4

Once all four virtual machines existed on my physical ESX host, installing ESX was just an ISO attach away.

After installing ESX on all my virtual hosts, I then add them to my existing vCenter as hosts. vCenter doesn’t know these are virtual machines and treats them the same as a physical ESX host.

I’ve placed these virtual hosts into a vCenter cluster. However, this is purely for aesthetic purposes, to keep them organized. I won’t enable normal cluster features such as HA and DRS, given Isilon cannot leverage them, nor does it need them. Plus, given there is no shared storage between these hosts, you cannot do standard vMotion (enhanced vMotion is always possible, but that’s another blog).

Here you can see those virtual machines with ESX installed masquerading as vSphere hosts:

IsilonSD_NestedESXCluster

I’ll leverage Cluster A you see in the screenshots for the Management Server and InsightIQ server. Cluster A is another cluster of nested ESXi VMs I used for testing VSAN; it also has the Virtual Infrastructure port group available so all the IsilonSD Edge VMs can be on the same logical segment.

External Network

The Isilon external network is what faces the NAS clients. In my environment, I have a vSphere Distributed Virtual Switch port group called ‘Virtual Infrastructure’ where I place my core systems. This is also where vCenter and the ESX hosts sit, and it's what I’ll use for Isilon, as there is no firewall/router between the Isilon and what I’ll connect to it.

The Virtual Infrastructure network space is 10.0.0.0/24; I’ve set aside a range of IP addresses for Isilon in this network:

  • .50 – the management server
  • .151–.158 – the nodes
  • .159 – SmartConnect
  • .149 – InsightIQ

You MUST have a contiguous range for your nodes, but all other IP addresses are personal preference.

For use in the deployment steps:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.0.151
-High IP Range: 10.0.0.158
-MTU: 1500
-Gateway: 10.0.0.1
-DNS Servers: 10.0.0.11
-Search Domains: lab.vernex.io
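Since the node range must be contiguous, it's worth double-checking the plan before typing it into the wizard. A quick sketch using Python's ipaddress module, with the lab values from above (swap in your own subnet and range):

```python
import ipaddress

# External network plan from above: 10.0.0.0/24 with nodes at .151-.158.
network = ipaddress.ip_network("10.0.0.0/24")
low = ipaddress.ip_address("10.0.0.151")
high = ipaddress.ip_address("10.0.0.158")

# Enumerate the contiguous node range and confirm every address fits the subnet.
node_ips = [ipaddress.ip_address(i) for i in range(int(low), int(high) + 1)]
assert all(ip in network for ip in node_ips)

print(len(node_ips))              # 8 addresses -> room for up to 8 nodes
print(node_ips[0], node_ips[-1])  # 10.0.0.151 10.0.0.158
```

Eight addresses leaves room to grow a three-node cluster, which is exactly what the next post takes advantage of when adding a node.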

Internal Network

The physical Isilon appliances use dedicated InfiniBand switches to interconnect the nodes. This non-blocking, low-latency, high-bandwidth network allows the nodes to communicate with each other to stripe data across nodes, providing hardware resiliency. For IsilonSD Edge, Ethernet over a virtual network is used for this same purpose. If you were deploying this on physical nodes, you could bind the Isilon internal network to anything that all hosts have access to: the same network as vMotion, or a dedicated network if you prefer. Obviously, 10Gb is preferable, and I would recommend diversifying your physical connections using failover or LACP at the vSwitch/VDS level.

For my lab, I have a vSphere DVS for Private Management traffic; this is bound to the virtual switch of my host that has no actual NIC associated with it. It’s a private network on the physical host under my nested ESXi instances. I use this DVS for VSAN traffic already, so I merely created an additional port group for Isilon named PG_Isilon.

Because this is essentially a dedicated, non-routable network, the IP addresses do not matter. But to keep things clean, I use a range set aside for private traffic (10.0.101.0/24) and reuse the same last octet as the external network.

For use in the deployment:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.101.151
-High IP Range: 10.0.101.158
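The same-last-octet convention is easy to express as a helper, which is handy if you're scripting your planning sheet. This is purely my lab's convention, not anything IsilonSD requires; a minimal sketch:

```python
import ipaddress

def internal_ip(external_ip: str, internal_net: str = "10.0.101.0/24"):
    """Map an external node IP to its internal twin by reusing the last octet."""
    last_octet = int(external_ip.rsplit(".", 1)[1])
    net = ipaddress.ip_network(internal_net)
    return net.network_address + last_octet  # IPv4Address arithmetic

# Each external node address gets a matching internal address:
print(internal_ip("10.0.0.151"))  # 10.0.101.151
print(internal_ip("10.0.0.158"))  # 10.0.101.158
```

Keeping the octets aligned makes it obvious at a glance which internal address belongs to which node when you're troubleshooting later.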

SmartConnect

For those not familiar with Isilon, SmartConnect is the technique used for load balancing clients across the multiple nodes in the cluster. Isilon protects the data across nodes using custom code, but to interoperate with the vast variety of clients, standard protocols such as SMB, CIFS and NFS are used. For these, there still is no industry-standard method for spreading load across multiple servers (NFS does have the ability for transparent failover, which Isilon supports). SmartConnect's approach is a beautiful blend of powerful and simplistic: by delegating a zone in your enterprise DNS for the Isilon cluster to manage, SmartConnect will hand out IP addresses to clients based on different load-balancing options appropriate for your workloads, such as round robin (the default) or others like least connection.

To prepare for deploying an IsilonSD Edge cluster, we're going to modify the DNS to extend this delegation to Isilon. Configuring the DNS ahead of time makes the deployment simple. If you're running Windows DNS, here are the quick steps (if you're using BIND or something similar, this is a delegation and should be very similar in your config files).

Launch the Windows/Active Directory DNS Administration Tool

IsilonSD_NewDNSDelgation

Locate the parent zone you wish to extend, here I use lab.vernex.io.

Right-click on the parent zone and select New Delegation


Enter the name of the delegated zone; ideally this will be your cluster name, for my deployment Isilon01.

IsilonSD_Delgation1


Enter the IP address you intend to assign to Isilon SmartConnect

IsilonSD_Delgation2


That’s it. When the Isilon cluster is deployed and SmartConnect is running, you'll be able to navigate to a CIFS share like \\Isilon01.lab.vernex.io; your DNS will pass the request to the Isilon DNS server, which will reply with the IP address of a node that can accept your workload. This same DNS name works for managing the Isilon cluster as well.

Quick tip: you can CNAME anything else to Isilon01.lab.vernex.io; for example, I could point File.lab.vernex.io at SmartConnect with a CNAME. This is an excellent way to replace multiple file servers with a single Isilon.

For use in the deployment:
-Zone Name: Isilon01.lab.vernex.io
-SmartConnect Service IP: 10.0.0.159

 

By | March 28th, 2016|EMC, Home Lab, Storage, VMWare|1 Comment

Free and Frictionless – A Series

One of the most common statements I’ve made to vendors over the years is “why should I pay to test your software?”. To this day I still don’t understand this: if I’m going to purchase software to run in my production environment, why should I have to pay to run it for our development and testing needs? It seems counter-intuitive; in my mind, having easy access to software that IT can test and develop against increases the probability of it being chosen for a project. Having software be free in non-production allows developers to ensure it is properly leveraged, encourages accurate testing, and helps operations ensure it’s ready to run in production. In my experience, not only does this result in more use of the software in production (which means more sales for the vendor), but also more operational knowledge (which means less support needed from the vendor).

Companies offer different solutions to attempt to solve this. Microsoft does it well with TechNet and MSDN subscriptions, where for a small yearly fee you license your IT staff rather than the servers; you get some limited support, and recently even cloud credits. Many companies provide time-bombed versions of their software; this helps in the evaluation phase to test installation, but falls short for ongoing development needs, not to mention that operations teams gain no experience. Some vendors will steeply discount non-production, though most only do so during the purchasing process, and I’ve seen a wide range of how well this gets negotiated (if at all).

There is no doubt in my mind that this challenge is a significant factor in the growth of open-source software. With the ability to easily download software, without time limits and without a sales discussion, the time to start being productive in developing a solution is dramatically reduced. I’ve made this very choice: downloading free software and beginning the project while things like budget are still not finalized. The software can be kept running in non-production, and when it's moved into production, support contracts can begin. You don’t need to pay upfront, before prototyping, before a decision is made, and before any business value is being derived.

This is why I’ve been ecstatic that EMC is making a movement towards a model that allows the free use of software in non-production, even for products that don't use an open-source license model. They refer to this approach as ‘Free and Frictionless’. It doesn’t apply to all their software, but the list is growing; it currently includes products like ScaleIO, VNX, ECS and, recently added, Isilon. The Free and Frictionless products are available for download, without support, but without time-bombs either. In most cases there are restrictions, such as the total amount of manageable storage. These limitations are easy to understand and work with, and fully deliver on my age-old question: “why should I pay to test your software?”

I’m going to spend a little time with these offerings. Many of them I’ve run in production, at scale, so I’m interested in how well they stack up in their virtual forms. I’ll also explore some products I haven’t run before.

By | March 24th, 2016|EMC, Home Lab, Storage, VMWare|10 Comments