
UnityVSA – Part 4: VVOLs

In part 1 of this series I shared that the move to HTML5 was the most exciting part of Unity. Frankly, that was partly because of the increased compatibility and performance of Unisphere for Unity, but more so because it signals EMC shifting to HTML5 across the board (I hope).

If there is one storage feature inside Unity that excites me most, it has to be VVOL support. So in this post, I'm going to dive into VVOLs on the UnityVSA we set up previously. Because VVOLs is a framework, every implementation is going to differ slightly; the switch to VVOLs in general, and the Unity flavor in particular, will require an adjustment to the architectural and management practices we've adopted over the years.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

First, for those not familiar, a little on VVOLs themselves. VVOL stands for Virtual Volumes; in essence, it simplifies the layers between the virtual machine and the storage array while allowing both vSphere and the storage array to have a deeper understanding of each other. A VVOL itself correlates directly to an object belonging to a virtual machine: each virtual disk, plus the configuration file and swap file every VM has. Underpinning VVOLs is VASA (vSphere Storage APIs for Storage Awareness), through which the array describes the attributes of the storage presented to vSphere. These two core tenets of VVOLs allow the vSphere layer to see deeper into the storage, while the storage layer sees granular, per-VM and per-virtual-disk usage.

In practice, this provides a better management framework for the move most in the vSphere realm have been making: creating larger datastores with a naming convention that denotes the storage features (flash, tiering, snapshots, etc.). Where previously vSphere admins needed to learn those conventions to decide where to place VMs, with VVOLs this is abstracted into Storage Policies, and the vSphere admin simply selects the appropriate policy during VM creation.

So new terms and concepts to become familiar with:

  • Storage Provider
    • Configured within vCenter, this is the link to the VASA provider, which in turn shares the storage system details with vSphere.
    • For Unity, the VASA provider is built in and requires no extra configuration on the Unity side.
  • Protocol EndPoint
    • This is the storage side access point that vSphere communicates with; they work across protocols and replace LUNs and mount points.
    • On Unity, the Protocol Endpoints are created automatically through the VVOL provisioning process.
  • Storage Container
    • This essentially replaces the LUN, though a storage container is much more than a LUN ever was: it can contain multiple types of storage on the array, which effectively means it can hold what used to be multiple LUNs.
    • In vSphere a storage container maps to a VVOL Datastore (shown in the normal datastore section of vSphere).
    • Unity has mirrored this naming in Unisphere, calling the storage container ‘Datastore’.
    • In Unity a Datastore can contain multiple Capability Profiles (which, if you remember, map one-to-one to Pools in Unity).
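
As an aside, you can see how these objects surface on the vSphere side programmatically. Below is a minimal pyVmomi sketch, not part of the walkthrough itself, that lists datastores and flags which ones are VVOL storage containers; the vCenter hostname and credentials are placeholders for your environment.

```python
# Minimal pyVmomi sketch: list datastores and flag the VVOL-backed ones.
# The vCenter hostname and credentials are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host='vcenter.lab.vernex.io', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # summary.type reads 'VVOL' for a VVOL datastore (storage container),
        # versus 'VMFS' or 'NFS' for traditional datastores
        print(f"{ds.summary.name}: type={ds.summary.type}, "
              f"capacity={ds.summary.capacity // 2**30} GiB")
    view.Destroy()
finally:
    Disconnect(si)
```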

To fully explore and demonstrate the VVOL functionality in Unity, we're going to perform several sets of actions. I'm going to share these as video walkthroughs (with sound), since there are multiple steps.

  1. Create additional pools and capability profiles on the UnityVSA then configure vSphere and Unity with appropriate connection for VVOL
  2. Provision a VVOL Datastore with multiple capability profiles and provision a test virtual machine on the new VVOL Datastore
  3. Create a vSphere Storage Policy and relocate the VM data
  4. Create advanced vSphere Storage Policies, extending the VM to simulate a production database server

 

First some prep work and connecting vSphere and Unity:

  • Add 4 new virtual disks to the UnityVSA VM
  • Create two new Unity pools
    • 1 with 75GB as single tier
    • 1 with 15GB, 25GB and 55GB as multi-tier with FastVP
  • Link Unisphere/Unity to our vCenter
  • Create a Storage Provider link in vSphere to the Unity VASA Provider

 

Next, let’s provision the VVOL Datastore or “Storage Container”:

  • Create a Unity Datastore (aka “Storage Container”) with three Capability Profiles (as such, three pools)
  • Create a vSphere VVOL Datastore
  • Investigate VVOL Datastore attributes

 

Provisioning a virtual machine on the new storage looks the same as traditional datastores, but there is more than meets the eye:

  • Create a new virtual machine on the VVOL Datastore
  • Investigate where the VM files are placed
  • See the VM details inside Unisphere
  • Create a simple Storage Policy in vSphere
  • Adjust the virtual machine storage policy and watch the storage allocation adjustment

 

Now let's consider more advanced usage of VVOLs. With the ability to create custom tags in Unisphere Capability Profiles, we have an open-ended mechanism to describe the storage in our own words. You could use these tags to create application-specific pools, and thus vSphere Storage Policies that let admins target VMs related to an application. You could also use tags for tiers (Web, App, DB). In the example below, we're going to create vSphere Storage Policies and Unity capability tags to partition a database server into Boot, Data and Backup storage types.

  • Modify our three Capability Profiles to add tags: Boot, DB and Backup.
  • Create vSphere Storage Policies for each of these tags.
  • Adjust the boot files of our test VM to leverage the Boot Storage Policy
  • Add additional drives to our test VM, leveraging our DB and Backup Storage Policies; investigate where these files were placed

 

Hopefully you now have not only a better understanding of how to set up and configure VVOLs on EMC Unity storage, but a deeper understanding of the VVOL technology in general. This framework opens brand new doors in your management practices: imagine a large Unity array with multiple pools and capabilities all provisioned through one Storage Container and VVOL Datastore, leveraging Storage Policies to manage your data placement rather than carving up numerous LUNs.

With the flexibility of Storage Policies, you can better inform the administrators creating and managing virtual servers about what storage characteristics are available. If you have multiple arrays that support VVOLs and/or VSAN, your policies can work across arrays and even vendors. This abstraction allows further consistency inside vSphere, streamlining management and operations tasks.

You can see how, over time, this technology has advantages over the traditional methods we've been using for virtual storage provisioning. However, before you start making plans to buy a new Unity array and replace all your vSphere storage with VVOLs, know that, as with any new technology, there are still some limitations. Features like array-based replication, snapshots, even quiescing VMs are all lagging a bit behind the VVOL release, and how much that matters is highly dependent on your environment and usage patterns. I expect quick enhancements in this area, so research the current state and talk with your VMware and EMC reps/partners.


UnityVSA – Part 3: Initial Configuration Wizard

When last we met, we had deployed a UnityVSA virtual appliance; it was up and running, sitting at the prompt for the Initial Configuration Wizard.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

If you're following along and took a break, your browser session to Unisphere likely expired, which means you also lost the prompt for the Initial Configuration Wizard. Don't fret; you can get it back. In the upper left corner of Unisphere, click on the Preferences button. On the main preferences page, there is a link to launch the Initial Configuration Wizard again.

UnityVSA_PreferencesButton

UnityVSA_InitialConfigurationWizardLink

Ok, so back to the wizard? Good. This wizard walks through a complete configuration of the UnityVSA, including changing passwords, licensing, network setup (DNS, NTP), creating pools, the e-mail server, support credentials, your customer information, ESRS, and iSCSI and NAS interfaces. You can choose to skip some steps, but if you complete all portions, your UnityVSA will be ready to provision storage across block and file, with LUNs, shares or VVOLs.

Before we get going, let’s cover some aspects of the VSA to plan for.

First, there is a single SP (service processor in Unity-speak; a 'brain'). The physical Unity (and the VNX and CLARiiON before it, if you are familiar with the line) has two SPs. These provide redundancy should an SP fail; given that, for all intents and purposes, an SP is an x86 server, it's susceptible to common failures. The VSA instead relies on vSphere HA to provide redundancy should a host fail, and on vSphere DRS/vMotion to move the workload preemptively for planned maintenance. This is germane because, on the VSA, you won't be planning for balancing LUNs between SPs, nor creating multiple NAS servers to span SPs, nor even iSCSI targets across SPs to allow for LUN trespass.

The second is size limitations. I'm using the community edition, a.k.a. Free and Frictionless, which is limited to 4TB of space. I do not get any support, and as such ESRS will not work. If you're planning a production deployment, that 4TB limit can be increased up to 50TB (based, of course, on how much you purchase from EMC), and you'll have a fully supported Unity array. Along with the overall size limit, you are also limited to 16 virtual disks behind the VSA (meaning 16 VMDK files assigned to the VM), so plan accordingly. In my initial deployment I provided a 250GB VMDK; if I add 15 more (16 total), I hit my 4TB max.
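
Those two limits interact, so it's worth doing the math before assigning VMDKs. Here's a quick back-of-the-napkin sketch in Python, using the Community edition limits described above:

```python
# Back-of-the-napkin UnityVSA sizing check against the Community edition limits.
MAX_CAPACITY_GB = 4096  # 4TB overall limit
MAX_VDISKS = 16         # max virtual disks behind the VSA

def vsa_plan(vdisk_sizes_gb):
    """Validate a list of proposed VMDK sizes against the VSA limits."""
    if len(vdisk_sizes_gb) > MAX_VDISKS:
        raise ValueError(f"{len(vdisk_sizes_gb)} disks exceeds the {MAX_VDISKS}-disk limit")
    total = sum(vdisk_sizes_gb)
    if total > MAX_CAPACITY_GB:
        raise ValueError(f"{total} GB exceeds the {MAX_CAPACITY_GB} GB limit")
    return total

# 16 disks of 250GB lands at 4000GB, right at the 4TB ceiling
print(vsa_plan([250] * 16))  # 4000
```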

Third is simply the key differences from a physical array to prepare for.

  • Max pool LUN size: 16TB
  • Max pool LUNs per system: 64
  • Max pools per system: 16
  • No fibre channel support
  • No write caching (to ensure data integrity)
  • No synchronous replication
  • No RAID – this one is crucial to understand. In a physical Unity, raw disks are put into RAID groups to protect against data loss during a drive failure. UnityVSA relies on the storage you provide to be protected already. Up to the limits mentioned, you can present VMDKs from multiple vSphere datastores, even from separate tiers of storage, leveraging FAST VP inside UnityVSA.

Fourth and finally, a few vSphere-specific notes. UnityVSA comes installed with VMware Tools, but you can't modify them, so don't attempt automatic VMware Tools upgrades. The cores and memory are hard coded, so don't try adding more to increase performance; rather, split the workload onto multiple UnityVSA appliances. You cannot resize virtual disks once they are in a UnityVSA pool; again, add more instead, but pay attention to the limits. EMC doesn't support vSphere FT, VM-level backup/replication, nor VM-level snapshots.

 

UnityVSA_ICW__1

Got all that? Now let's set up this array! I'm going to step through the options in detail this time, because the wizard is packed full of steps; the video is also at the bottom.
 

UnityVSA_ICW__26

First off, change your passwords. If you missed it earlier, here are the defaults again:

  • admin/Password123#
  • service/service

The “admin” account is the main administrator of Unisphere, while the “service” account is used to log into the console of the VSA, where you can perform limited admin steps, such as changing IP settings should you need to move networks, as well as collect logs.

 

UnityVSA_ICW__25

Next up, licensing. Every UnityVSA needs a license, even if you're using the Community edition. If you purchased the array, you should go through all the normal channels: LAC, support, or your rep. For the Community edition, copy the System UUID and follow the link: Get License Online
 

UnityVSA_ICW__24

The link to get a license will take you to EMC.com; again, you'll need to log on with the same account you used to download the OVA. Then simply paste in that System UUID and select “Unity VSA” from the drop-down. Your browser will download a file that is the license key; from there you can import it into Unisphere.

I will say, I've heard complaints that needing to register to download and create a license is 'friction', but it was incredibly easy. I don't take issue with the registration; EMC has made it simple, and no one from sales is calling me to follow up. I'm sure EMC is tracking who's using it, but that's good data: downloading the OVA vs. actually creating a license. I don't find this invalidates the 'Free and Frictionless' motto.

UnityVSA_ICW__23

Did you get this nice message? If so, good job; if not… well, try again.
 

UnityVSA_ICW__22

The virtual array needs DNS servers to talk to the outside world: communicate through ESRS, get training videos, etc. Make sure to provide DNS servers that can resolve internet addresses.
UnityVSA_ICW__21

Synchronizing time is always important; if you are providing file shares, it's extra important that you're in sync with your Active Directory. Should a client and server be more than 5 minutes apart, Kerberos will deny access. So use your AD server itself, or an NTP source that's in sync with AD.
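
If you want to sanity-check the skew before pointing the array at a time source, here's a small sketch using the third-party ntplib package (pip install ntplib); the server name is a placeholder for your AD or NTP source.

```python
# Check clock offset against an NTP source before configuring the array.
# Requires the third-party ntplib package (pip install ntplib).
import ntplib

KERBEROS_MAX_SKEW = 300  # seconds; Kerberos denies access beyond 5 minutes

client = ntplib.NTPClient()
resp = client.request('dc01.lab.vernex.io', version=3)  # placeholder: your AD/NTP server
print(f"offset from NTP source: {resp.offset:+.3f}s")
if abs(resp.offset) > KERBEROS_MAX_SKEW:
    print("WARNING: skew exceeds the Kerberos tolerance; fix time sync first")
```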
 

UnityVSA_ICW__20

Now we're getting into some storage. In Unity, pools are groupings of storage which are carved up to present to hosts. In the VSA, a pool must have at least one virtual disk; if you provide more, data will be balanced across the virtual disks.

A pool is the foundation of all storage; without one you cannot set up anything else on the virtual array.
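
As an aside, Unisphere isn't the only way to look at what you build here; Unity also exposes a REST API. Below is a minimal sketch querying pools with Python's requests library, assuming the documented /api/types/pool/instances endpoint; the hostname and credentials are placeholders.

```python
# Query UnityVSA pools over the Unisphere REST API.
# Hostname/credentials are placeholders; endpoint per EMC's Unity REST API docs.
import requests
import urllib3
urllib3.disable_warnings()  # lab only: the VSA presents a self-signed certificate

UNITY = 'https://unity.lab.vernex.io'
session = requests.Session()
session.auth = ('admin', 'Password123#')
session.headers.update({'X-EMC-REST-CLIENT': 'true'})  # required on every call
session.verify = False

r = session.get(f'{UNITY}/api/types/pool/instances',
                params={'fields': 'name,sizeTotal,sizeFree'})
r.raise_for_status()
for entry in r.json()['entries']:
    pool = entry['content']
    print(f"{pool['name']}: total={pool['sizeTotal'] // 2**30} GiB, "
          f"free={pool['sizeFree'] // 2**30} GiB")
```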

 

UnityVSA_ICW__18

When creating a Pool, you’ll want to give it a name and description. Given this is a virtual array, consider including the cluster and datastore names in the description so you know where the storage is coming from.

For my purposes of testing, I named the Pool Pool_250 simply to know it was on my 250GB virtual disk.

 

UnityVSA_ICW__19

Once named, you'll need to give the pool some virtual disks. Recall we created a 250GB virtual disk; here we are adding it into the pool.

When adding disks you need to choose a tier, typically:

  • Flash = Extreme Performance
  • FC/SAS = Performance
  • NL-SAS/SATA = Capacity

The virtual disk I provided the VSA actually sits on an all-flash VSAN, but I'm selecting Capacity here simply so I can explore FAST VP down the road.

 

UnityVSA_ICW__17

Next up, I need to give the Pool a Capability Profile. This profile is for VMware VVOL support. We'll cover VVOLs in more depth another time, but essentially it allows you to connect vSphere to Unity and assign storage at a VM level. This is done through vSphere Storage Policies that map through to the Capability Profile.

So what is the Capability Profile used for? It encompasses all the attributes of the pool, allowing you to create a vSphere Storage Policy. There is one capability profile per pool; it includes the Service Tier (based on your storage tier and FAST VP), the space efficiency options and any extra tags you might want to add.

You can skip this by not checking the Create VMware Capability Profile for the Pool box at this point; but you can also modify/delete the profile later.

I went ahead and made a simple profile called CP_250_VSAN

 

UnityVSA_ICW__16

Here are the constraints, or attributes, I mentioned above. Some are set for you, then you can add tags. Tags can be shared across pools and capability profiles. I tagged this ‘VSAN’, but you could tag for applications you want on this pool, or application tiers (web, database, etc), or the location of the array.

This finishes out creating the pool itself.

 

UnityVSA_ICW__14

The Unity array will e-mail you when alerts occur; configure this to e-mail your team when there are issues.
 

UnityVSA_ICW__13

Remember me mentioning reserving a handful of IP addresses? Here we start using them. The UnityVSA has two main ways to attach storage: iSCSI and NAS (a third, fibre channel, is available on the physical array).

If you want to connect via iSCSI, you’ll need to create iSCSI Network Interfaces here.

 

UnityVSA_ICW__12

Creating an iSCSI interface is easy: you'll pick between your four ethernet ports and provide an IP address, subnet mask and gateway. You can assign multiple IP addresses to the same ethernet port; you can also run both iSCSI and NAS on the same ethernet port (you can't share an IP address, though).

How you leverage these ports is an end-user design decision. Keep in mind, the UnityVSA itself is a single VM and thus a single point of failure. You could, though, put the virtual network cards on separate virtual switches to provide some network redundancy, or put them into separate VLANs to provide storage to different network segments.

 

 

UnityVSA_ICW__11

I created two iSCSI interfaces on the same network card, so that I can simulate multi-pathing at the client side down the road.
 

UnityVSA_ICW__9

Next up is creating a NAS Server; this provides file services from a particular pool. Provide a name for the NAS server, then the pool to bind it to, and the service processor for it to run on (only SPA on the VSA).
 

UnityVSA_ICW__8

With the NAS Server created, it will need an ethernet port and IP information. Again, this can be the same port as iSCSI or a different one; your choice; but you CANNOT share IP addresses. I chose to use a different port here.

UnityVSA_ICW__7

Unity supports both Windows and *nix file shares, as well as multiple options for securing the authentication and authorization of those shares. Both the protocol support and the associated security settings are per NAS Server. Remember, we can create multiple NAS Servers; this is how you provide access across different sets of clients.

For example, if you have two Active Directory forests you want to provide shares for, one NAS Server cannot span them, but you can simply create a second NAS Server for the other forest.

Or if you want to provide isolation between Windows and *nix shares, simply use two NAS servers, each with single protocol support.

One pool may have multiple NAS servers, but one NAS server can NOT span multiple pools.

This is again where the multiple NICs might come into play on UnityVSA. I could create a NAS Server on a virtual NIC that is configured in vSphere for NFS access, while another NAS Server is bound to a separate virtual NIC that my Windows clients can see for SMB shares.

For my initial NAS Server, I'm going to use multi-protocol and leverage Active Directory; this will create a computer account for the server in my AD. I'm also going to enable VVOLs and NFSv4 and configure secure NFS.

 

 

UnityVSA_ICW__6

This is the secure NFS option page; by having NFS authenticate via Kerberos against Active Directory, I can use AD as my LDAP source as well.
 

UnityVSA_ICW__5

With secure NFS enabled, my Unix Directory Service is going to target AD, so I simply need to provide the IP address(es) of my domain controller(s).
 

UnityVSA_ICW__4

Given I'm using Active Directory, the NAS server needs DNS servers; these can be different from the DNS servers you entered earlier if you have separate DNS per zone.
 

UnityVSA_ICW__3

I do not have another Unity box at this point to configure replication with; that's something I'll explore down the road, so I'm leaving this unchecked.
 

UnityVSA_ICW__2

At this point all my selections are ready; clicking Finish will apply all the configuration options.

At this point, my UnityVSA is connected to the network, ready to carve the pool into LUNs, VVOLs or shares. Everything I accomplished in this wizard can also be done manually inside the GUI; the Initial Configuration Wizard just streamlines the tasks involved in bringing up a new Unity array. If you have a complete Unity configuration mapped out, you can see how this wizard would greatly reduce the time to value. In the next few posts I'll explore the provisioning process, like how to leverage the new VVOL support with vSphere.

Here is the silent video, so you can see the steps I skipped and the general pacing of this wizard.


UnityVSA – Part 1: HTML5 FTW!

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

In my last post, I shared my pet project tracking the Twitter activity of the EMC Elect at EMC World 2016. One of the report elements was the ever-popular word cloud showing the popularity of words used in the tweets. One of the more popular words (behind the obvious, like EMC, EMCWorld, Dell, Session and Vegas) was “Unity”. That's not surprising, given EMC took the wraps off a new mid-range storage product called Unity.

Unity is the evolutionary replacement of the VNX line, which in turn replaced the CLARiiON; I've lost track of how many of these arrays I've had over the years, but I'd guess it's north of 50. The overall product line heritage is one of the most widely deployed storage arrays out there. Of all the enhancements in Unity over its predecessor, none excites me more than the new HTML5 interface. Not just because it's a better interface for the Unity product, but because it (hopefully) signals a new wave of management tools from EMC that will all be HTML5.

CLARiiON-FC4700.png

I've been around long enough that I can remember when the CLARiiON didn't have a web-based GUI. Think early 2000s (2002, I believe), when Navisphere was a thick client installed on your Windows workstation. Installing the client seemed so constraining then; in fact, we just built a dedicated (physical) server to run the Navisphere client and remote into it. I remember my excitement talking to our EMC Sales Engineers about the forthcoming FLARE upgrade that would provide a web-based Navisphere. So much, in fact, that I signed up for pre-release testing (I was running development labs at the time, so we weren't 'production'). I recall that FLARE upgrade to this day because it was incredibly painful and incurred some outages (again, pre-release code).

In the 15 years since that upgrade, struggling with the Java JRE client configurations necessary for the web-based Navisphere and then Unisphere (not to mention the constant performance frustrations), I've frequently regretted being a voice asking for a web GUI. Especially given my team, and every other team I talked to, simply set up a dedicated virtual server to act as the Unisphere client. So here I am again, excited over the concept of an HTML5 interface that requires no client, is cross-platform compatible and maybe even works on mobile devices.

Unlike the past 15 years, I don’t have to wait until it’s time to purchase a new storage array to explore the new GUI. EMC has provided multiple avenues to explore the HTML5 interface. The first is the Unity Simulator, which provides an environment to play with the new HTML5 Unity Unisphere. Take note, this only appears to work on Windows, and it’s not going to let you provision storage; but it certainly is a quick way to play with the UI. Download it here (registration required), and here are some screenshots.


If you just want to take a peek, or do not have a Windows machine handy, there are some great videos online, like the EMC Unity – Unisphere Overview. Want more? Search YouTube for EMC Unity.

However, if you're like me and want to not only play with the UI but also provision storage, assign it to hosts, explore VVOL support and create NAS shares, you're in luck! Because with the release of Unity comes “UnityVSA” (Unity Virtual Storage Appliance), a virtualized version of the Unity array that runs on vSphere. Under EMC's Free and Frictionless initiative, the UnityVSA is available in a community edition free of charge with no time bombs. You can also purchase UnityVSA as a fully supported, production-grade, software-defined version of Unity. So as part of my ongoing series on Free and Frictionless, I'm going to dive into the UnityVSA, record my deployment and testing, and share my thoughts and opinions.

Next post, we’ll deploy the OVA for UnityVSA.


IsilonSD – Part 6: Final Thoughts

Now that I've spent some time with IsilonSD, how does it compare to my experience with its physical big brother? Where does the software-defined version fit?

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

Overview (TL;DR version)

I'm excited by the entrance of this product into the virtualization space. Isilon is a robust product that can be tuned for multiple use cases and workloads. Even though Isilon has years of product development behind it and is currently on its eighth major software version, the virtual product is technically v1. As with any first version, there are some areas to work on. From my limited time with IsilonSD, I believe this is everything its physical big brother is, in a smaller, virtual package. However, it also brings along some of the limitations of its physical past. Limitations to be aware of, but also limitations I believe EMC will be working to remove in vNext of IsilonSD.

If you ran across this blog because of interest in IsilonSD, I hope you can test the product, either with physical nodes or with the test platform I’ve put together; only with customer testing and feedback can the product mature into what it’s capable of becoming.

Deep Dive (long version)

From running Isilons in multiple use cases and companies, I always wanted the ability to get smaller Isilon models for my branch offices. I've worked in environments where we had hundreds of physical locations of varying sizes. In many of these we wanted file solutions in the spokes replicating back to a hub: a universal solution that applied to the varying-size locations. The startup cost for a new Isilon cluster was prohibitive for a smaller site, leading us to leverage Windows File Servers (an excellent file server solution, but that's a different post) for those situations. That bifurcated our file services stack, which increased complexity in management, not just of the file storage itself, but of ancillary needs like monitoring and backups.

Given I've been running a virtualized Isilon simulator for as long as I've been purchasing and implementing the Isilon solution, leveraging a virtualized Isilon for these branch office scenarios was always on my wish list. When I heard the rumors that an actual virtual product was in the works (vIMSOCOOL), I expected the solution to target this desire. When IsilonSD Edge was released, as I read the documentation, I continued with this expectation. I watched YouTube videos that said this was the exact use case.

It took actually setting up the product in a lab to understand that IsilonSD Edge is not the product I expected it to be. Personally, though the solution is by its nature 'software defined', as it includes no hardware, it doesn't quite fit the definition I've come to believe SD stands for. This is less a virtual Isilon, or software-defined Isilon, than it is 'bring your own hardware': IsilonBYOH, if you will.

IsilonBYOH is, on its own merits, an exciting product and highlights what makes Isilon successful: a great piece of software sitting on commodity servers. This approach is what's allowed Isilon to become the product it is, supporting a plethora of node types as well as hard drive technologies. You can configure a super-fast, flash-based NFS NAS to be an ultra-reliable storage solution behind web servers, where you store the data once and all nodes have shared access. You can leverage the multi-tenancy options to provide mixed storage in a heterogeneous environment (NFS to service servers and CIFS to end users, talking to both LDAP and Active Directory), tiering between node types to maximize performance for newer files and cost for older ones. You can create a NAS for high-capacity video editing needs, where the current data is on SSD for screaming-fast performance, then moves to HDD when the project is complete. You can even create archive-class storage with cloud-competitive pricing to store aged data, knowing you can easily scale, support multiple host types and, if ever needed, incorporate faster nodes to increase performance.

With this new product, you can now start even smaller, purchasing your own hardware and running on your own network, while still leveraging the same management and monitoring tools, even the same remote support. Plus you can replicate it just the same, including to traditional Isilon appliances.

However, to me, leveraging IsilonSD Edge does call for purchasing hardware, not simply adding this to your existing vSphere cluster and capturing unused storage. IsilonSD Edge, while running on vSphere, requires locally attached, independent hard drives. This excludes leveraging VSAN, which means no VxRail (and all the competitive HCIAs); it also means no ROBO hardware such as the Dell VRTX (and all the similar competitive offerings); in fact, just having RAID excludes you from using IsilonSD. These hardware requirements, specifically the dedicated disks, turn into limitations. Unless you're in a position to dedicate three servers, which you'll likely need to buy new to meet the requirements, you're probably not putting this out in your remote/branch offices, even though that's the goal of the 'Edge' part of the name.

When you buy those new nodes, you'd probably go ahead and leverage solid state drives; the cost of locally attached SATA SSDs is quickly pulling even with traditional hard drives. But understand, IsilonSD Edge will not take advantage of those faster drives like its physical incarnation does: there is no metadata caching with the SD version. Nor can the SD version provide any tiering through SmartPools (you can still control the data protection scheme with SmartPools, and obviously you'll get a speed boost with SSD).

Given all this, the use cases for IsilonSD Edge get very narrow. With the inability to put IsilonSD Edge on top of ROBO designs, the likelihood of needing to buy new hardware, coupled with the 36TB overall limit of the software-defined version of Isilon, I struggle to identify a production scenario that is a good fit. The best-case scenario in my mind is purchasing hardware with enough drives to run both IsilonSD and VSAN, side by side, on separate drives; this would require at least nine drives per server (more, really), so you're talking some larger machines, and again, a narrow fit.

To me, this product is less about today and more about tomorrow: release one sets the foundation for the future opportunity of a virtual Isilon.

What is that opportunity?

For starters, running IsilonSD Edge on VxRail, even deploying it directly through the VxRail marketplace; and by this I mean running the IsilonSD Edge VMDK files on the VSAN datastore.

Before you say the Isilon protection scheme would double up storage needs on the VSAN model, keep in mind you can configure per-VM policies in VSAN. Setting Failures To Tolerate (FTT) to 0 is not recommended, but this is why it exists: let Isilon provide the data protection while playing in the VSAN sandbox. Leverage DRS groups and rules to configure anti-affinity for the Isilon virtual nodes, keeping them on separate hosts. Would VSAN introduce latency compared to physical disk? Quite probably… though in the typical ROBO scenario that's not the largest concern. I was able to push 120Mbps onto my IsilonSD Edge cluster, and that was with nested ESXi all running on one host.

All of this doesn't just apply to VxRail, but to its competitors in the hyper-converged appliance space, as well as a wide range of products targeted at small installations. To expand on the small-installation scenario: if IsilonSD had lower data protection options, as VSAN does, removing the need for six disks per node, or even for three nodes, it could fit in smaller situations. Why not trust the RAID protection beneath the VM and still leverage Isilon for the robust NAS features it provides? Meaning, run a single-node Isilon; after all, those remote offices are likely providing file services with Windows or Linux VMs, relying on vSphere HA/DRS for availability and server RAID (or VSAN) for data loss prevention. Isilon has a rich feature set outside of just data protection across nodes. Even a single-node Isilon with SyncIQ back to a mothership has compelling use cases.

On the other side of the spectrum, putting IsilonSD in a public cloud provider, where you don't control the hardware and storage, has quite a few use cases. Yes, Isilon has CloudPools technology, which extends an Isilon into public cloud models that provide object storage. But a virtual Isilon running in, say, vCloud Air or Virtustream, with SyncIQ to your on-premises Isilon, opens quite a few doors, such as for those looking at public-cloud disaster-recovery-as-a-service solutions. Or moving to the cloud while still having a bunker on-premises for securing your data.

Outside of the need for independent drives, this is an Isilon, running on vSphere. That's… awesome! As I mentioned before, this opens some big opportunities should EMC continue down this path. Plus, it's Free and Frictionless, meaning you can do the exact same testing I've done. If you are an Isilon customer today, GO GET THIS. It's a great way to test out changes, upgrades, command-line scripts, etc.

If you are running the Free and Frictionless version, outside of the 36TB and six-node limits, you also do NOT get licenses for SyncIQ, SmartLock or CloudPools.

I'll say, given I went down this road from my excitement about Free and Frictionless, these missing licenses are a little disappointing. I've run SyncIQ and SmartLock, two great features, and was looking forward to testing them and having them handy to help answer questions I get when talking about Isilon.

CloudPools, while I have not run it, is something I'd been incredibly excited about for years leading up to its release, so I'll admit I wish it were in the Free and Frictionless version, if only with a small amount of storage to play with.

Wrapping up: there are countless IT organizations out there, and I've never met one that wasn't unique. Even with some areas I'd like to see improved in this product, undoubtedly IsilonSD Edge will apply to quite a few shops. In fact, I've heard some customers were asking for a BYOH Isilon approach, so maybe this is really for them (and if so, the 36TB limit seems constraining). If you're looking at IsilonSD Edge, I'd love to hear why; maybe I missed something (certainly I have). Reach out, or use the comments.

If you are looking into IsilonSD Edge, outside of the drive/node requirements, here are some things to be aware of that caught my eye.

While the FAQs state you can run other virtual machines on the same hosts, I would advise against it. If you had enough physical drives to split them between IsilonSD and VSAN, it could be done. You could also use NFS, iSCSI or Fibre Channel for datastores; but this is overly complex and, in all likelihood, more expensive than simply having dedicated hardware for IsilonSD Edge (or, really, just buying the physical Isilon product). And given the datastores used by the IsilonSD Edge nodes are unprotected, putting a VM on them means you are just asking for the drive to fail and to lose that VM.

Because you are dedicating physical drives to a virtual machine, you cannot vMotion the IsilonSD virtual nodes. This means you cannot leverage DRS (Distributed Resource Scheduler), which in turn means you cannot leverage vSphere Update Manager to automatically patch the hosts (as it relies on moving workloads around during maintenance).

The IsilonSD virtual nodes do NOT have VMware Tools. This means you cannot shut down the virtual machines from inside vSphere (for patching or otherwise); rather, you'll need to enter the OneFS administrator CLI, shut down the Isilon node, then go perform the ESX host maintenance. If you have reporting in place to ensure your virtual machines have VMware Tools installed, running and at the supported version (something I highly recommend), you'll need to adjust it. Other systems that leverage VMware Tools, such as Infrastructure Navigator, will not work either.

I might be overlooking something (I hope so), but I cannot find a way to expand the storage on an existing node. In my testing scenario, I built the minimal configuration of six data drives of a measly 64GB each. I could not figure out how to increase this space, which is something we're all accustomed to doing on vSphere (in fact, quickly growing a VM's resources is a cornerstone of virtualization). I can increase the overall capacity by adding nodes, but this requires additional ESX hosts. If this is true, again, the idea of using 'unclaimed capacity' for IsilonSD Edge is marginalized.

IsilonSD wants the nodes in a pool to be configured the same, specifically with the same size and number of drives. This is understandable, as it spreads data across all the drives in the pool equally. However, it lessens the value of 'capturing unused capacity'. Aside from the unprotected-storage point, if you were to have free storage on drives, your ability to deploy IsilonSD will be constrained by the volume with the least free space, as all the VMDK files (virtual drives) have to be the same. Even if you had twenty-one independent disks across three nodes, if just one of them was smaller than the rest, its free space dictates the disk size you can configure.

Even though I'm not quite sure where this new product fits or what problem it solves, that's true of many products when they first release. It's quite possible this will open new doors no one knew were closed, and if nothing else, I'm ecstatic EMC is pursuing a virtual version of the product. After all, this is just version 1… what would you want in version 2? Respond in the comments!


IsilonSD – Part 4: Adding & Removing Nodes

One of the core functions of the Isilon product is scaling. In its physical incarnation you can scale up to 144 nodes with over 50 petabytes in a single namespace. Those limits are a function of hardware; as drives and switches get bigger, so can Isilon. Even so, you can still start with only three nodes. When nodes are added to the cluster, storage and performance increase; existing data is re-balanced across all the nodes after the addition. Likewise, you can remove a node, proactively migrating the data off the departing node without sacrificing data protection: an excellent way to lifecycle-replace your hardware. This tenet of Isilon, coupled with non-disruptive software upgrades, means there is no pre-set lifespan for an Isilon cluster. With SmartPools' ability to tier storage by node type, you can leverage older nodes for less frequently accessed data, maximizing your investment.

IsilonSD Edge has that same ability, though slightly muted given you're limited to six nodes and 36TB (for now, hopefully). I wanted to walk through the exercise to see how node changes are accomplished in the virtual version, which differs from the physical version.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

 

Adding a Node

Adding a node to the IsilonSD Edge cluster is very easy, as long as you have an ESX host ready that meets all the criteria. If you recall from building our test platform, we built a fourth node for this very purpose.

*Note: there is no sound; the video is to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. Click the + button
  5. The Management server will again search for candidates; if any are found, it will allow you to select them.
  6. Again select the disks and their roles, and then proceed; all the cluster and networking information already exists.

 

Just like creating the cluster, the OVA will be deployed, data stores created (if you used raw disks) and the IsilonSD node brought online. This time the node will be added into the cluster, which will start a rebalance effort to re-stripe the data across all the nodes, including the new one.

Keep in mind, IsilonSD Edge can scale up to six nodes, so if you start with three you can double your space.

Removing a Node

Removing a node is just as straightforward as adding one, as long as you have four or more nodes. This action can take a very long time, depending on how much data you have, as all the data must be re-striped before the node can be removed from the cluster.

*Note: there is no sound; the video is to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. Select the node to evict (in our case, node 4)
  5. Click the – (minus) button.
  6. Double check the node and select Yes
  7. Wait until the node Status turns to StopFail

 

During the smartfail operation, should you log onto the IsilonSD Edge administrator GUI, you'll notice in the Cluster Overview that the node you are removing has a warning light next to it. This is also a good summary screen for gauging the progress of the smartfail, by comparing the % column of the node you're evicting to the other nodes. In the picture below, the node we chose to remove is now <1% used, while the other 3 nodes are at 4% or 5%, meaning we're almost there.

IsilonSD_RemoveNodeClusterOverview

 

 

Drilling into that node is the best way to understand why it has a warning; there you will see the message that the node is being smartfailed.

IsilonSD_RemoveNodeSmartFailMessage
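
If you'd rather watch the smartfail from a terminal than from the GUI, here's a hedged sketch that polls the OneFS CLI command isi status over SSH, using the third-party paramiko package (pip install paramiko); the host and credentials are stand-ins for my lab's.

```python
# Poll OneFS's 'isi status' over SSH to watch smartfail progress.
# Requires the third-party paramiko package (pip install paramiko).
# Host and credentials are stand-ins for my lab setup.
import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab only
client.connect('Isilon01.lab.vernex.io', username='root', password='password')

try:
    for _ in range(10):  # check every minute for ten minutes
        _, stdout, _ = client.exec_command('isi status')
        print(stdout.read().decode())
        time.sleep(60)
finally:
    client.close()
```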

When the smartfail is complete, you still have some cleanup activities.

*Note: there is no sound; the video is to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. The node you previously set to evict should be red in the Status
  5. Select the node, then click on the trash icon.
  6. This will delete the virtual machine and associated VMDKs

 

If you provided IsilonSD unformatted disks, the datastores the wizard created will still exist, and you might want to clean them up. If you want to re-add the node, you'll need to wait a while, or restart the vCenter Inventory Service, as it takes a bit to update.


IsilonSD – Part 3: Deploy a Cluster (Successfully)

With proper planning and setup of the prerequisites (see Part 2), the actual deployment of the IsilonSD Edge cluster is fairly straightforward. If you experience issues during this section, it's very likely because you don't have the proper configuration (see Part 1 for how that goes), so revisit the previous steps. That said, let's dive in and make a cluster.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

At a high level, you're going to do a few things:

  1. Deploy the IsilonSD Edge Management Server
  2. Setup IsilonSD Edge Management Server Password
  3. Complete IsilonSD Edge Management Server Boot-up
  4. Configure Management Server vSphere Link & Upload Isilon Node Template
  5. Open the IsilonSD Edge vSphere Web Client Plug-in
  6. Deploy the IsilonSD Edge Cluster

Here’s the detail.

Deploy the IsilonSD Edge Management Server

*Note: there is no sound; the video is to follow along with the steps.
This is your standard OVA deployment; as long as you're using the “EMC_IsilonSD_Edge_MS_x.x.x.ova” file from the download and providing an IP address accessible to vCenter, you can deploy this just about anywhere.

Follow along in the video on the left if you’re not familiar with the OVA process.

Once the OVA deployment launches, ensure you find the deployment task in the vSphere Task Pane and keep an eye on the progress.

Setup IsilonSD Edge Management Server password

IsilonSD_ManagementBootPasswordChange

Once the OVA deployment is complete and the virtual machine is booting up, you'll need to open the console and watch the initialization process. Generally I recommend this with any OVA deployment, as you'll see if there are any errors as the first-boot configuration occurs. For the IsilonSD Edge Management Appliance, it's required: this is where you set the administrator password.

Complete IsilonSD Edge Management Server Boot-up

IsilonSD_ManagementConsoleBlueScreen

After entering your password, the server will continue its first-boot process and configuration. When you reach this screen (what I call the blue screen of start), you're ready to proceed. Open a browser and navigate to the URL shown on the blue screen next to IsilonSD Management Server.

 

Configure Management Server vSphere Link & Upload Isilon Node Template

When you navigate to the URL provided by the blue screen, after accepting the unauthorized certificate, you'll be prompted for logon credentials. This is NOT the password you provided during boot-up. I failed to read the documentation and assumed it was, resulting in much frustration.

Logon using:
username: admin
password: sunshine

After successful logon, and accepting the EULA, you have just a couple steps, which you can follow along in the video on the right:

  1. Adjust the admin password
  2. Register your vCenter
  3. Upload the Isilon Virtual Node template
    1. This is “EMC_IsilonSD_Edge_x.x.x.ova” in your download

*Note: there is no sound; the video is to follow along with the steps.

 

Open the IsilonSD Edge vSphere Web Client Plug-in

Wait for the OVA template to upload; this may take up to ten minutes depending on your environment. Once complete, you'll be ready to move on to actually creating the IsilonSD cluster through the vSphere Web Client plug-in that was installed by the Management Server when you registered vCenter. Ensure you close out all the browser windows and open a new session to your vSphere Web Client.

IsilonSD_vCenterDatacenter

Select the datacenter where you deployed the management server (not the cluster; again, where I lost some time).

 

IsilonSD_ManageTab

In the right-hand pane of vSphere, under the Manage tab, you should now see two new sub-tabs, Create IsilonSD Cluster and Manage IsilonSD Cluster.

IsilonSD_vCenterIsilonTabs

 

Deploy the IsilonSD Edge Cluster

*Note: there is no sound; the video is to follow along with the steps.

Follow along in the video above:

  1. Check the box next to your license
  2. Adjust your node resources
    1. For my deployment, I started with 3 nodes, adjusting the Cluster Capacity from the default 2 TB down to the minimum 1152 GB (64GB per data disk × 6 data disks × 3 nodes; see the capacity sketch after this list)
  3. Clicking Next on the Requirements tab will search the virtual datacenter in your vCenter for ESX hosts that can satisfy the requirements you provided, including having those independent drives that meet the capacity requirement
    1. Should the process fail to find the necessary hosts, you'll see a message like this. Don't get discouraged; look over the requirements again to ensure everything is in order, and try restarting the Inventory Service too.
    2. IsilonSD_NoQualifiedHosts
  4. When the search for hosts is successful, you’ll see a list of hosts available to select, such as
    1. IsilonSD_HostSelection
  5. Next, select all the hosts you wish to add to the cluster (if you prepared more than 3, consider selecting just 3 now, as the next post will walk through adding an additional node).
  6. For each host, you’ll need to select the disks and their associated role (Data Disk, Journal, Boot Disk or Journal & Boot Disk).
    1. Remember, you need at LEAST 6 data disks; you won't get this far without them, but you also won't get farther if you don't select them.
    2. In our scenario, we select six 64GB data disks, and a final 28GB disk as Journal & Boot Disk
    3. You’ll also need to select the External Network Port Group and Internal Network Port Group
    4. IsilonSD_HostDriveSelection
  7. After setting up all the hosts with the exact same configuration, you'll move on to the Cluster Identity
    1. IsilonSD_ClusterIdentity
    2. Cluster Name (this is used in the management interface to name the cluster)
    3. Root Password
    4. Admin Password
    5. Encoding (I’d leave this alone)
    6. Timezone
    7. ESRS Information (only populate this if you have a production license)
  8. Next will be your network settings.
    1. IsilonSD_ClusterNetworking
    2. External Network
    3. Internal Network
    4. SmartConnect
  9. You have a final screen to verify all your settings; look them over, then click Next. The full deployment will take a while.
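
On the capacity math from step 2, here's the arithmetic spelled out (64GB being the per-data-disk VMDK size in my build):

```python
# IsilonSD Edge minimum-capacity math for my lab build.
DATA_DISK_GB = 64        # per-data-disk VMDK size in my build
DATA_DISKS_PER_NODE = 6  # the journal/boot disk does not count toward capacity
NODES = 3

cluster_gb = DATA_DISK_GB * DATA_DISKS_PER_NODE * NODES
print(cluster_gb)  # 1152 -- the minimum Cluster Capacity entered in the wizard
```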

At this point, patience is key; do not interrupt it. An OVA will be deployed for every node, then all of those unformatted disks will be turned into datastores, then VMDK files will be put on each datastore; finally, all the nodes will boot and configure themselves. If everything goes as planned, your reward will look like this:

IsilonSD_ClusterCreationSuccess

To verify everything, point your browser at your SmartConnect address, in our case https://Isilon01.lab.vernex.io:8080; if you get a OneFS logon prompt, you should be in business!

IsilonSD_OneFSLogonPrompt

 

You should also be able to navigate in Windows to your SmartConnect address (recall ours is \\Isilon01.lab.vernex.io\) and see the IFS share. This is the initial administrator share that you'd disable in a production environment. Likewise, in *nix you can NFS-mount //Isilon01.lab.vernex.io:/IFS
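
If you'd rather script that verification, here's a standard-library-only sketch that probes the OneFS web administration port and the SMB port (the hostname is from my lab):

```python
# Quick reachability probe for the new cluster (Python standard library only).
import socket

CLUSTER = 'Isilon01.lab.vernex.io'
for port, label in [(8080, 'OneFS web administration'), (445, 'SMB/CIFS')]:
    try:
        with socket.create_connection((CLUSTER, port), timeout=5):
            print(f"{CLUSTER}:{port} ({label}) is reachable")
    except OSError as err:
        print(f"{CLUSTER}:{port} ({label}) failed: {err}")
```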


IsilonSD – Part 2: Test Platform

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

So… last post I figured out IsilonSD Edge needs (at least) three separate ESX hosts, each with (at least) 7 independent, local disks. So I cannot deploy this on the nested ESXi VSAN I made, but I really want to get IsilonSD up and running.

Since my previous attempt to wing it didn't go so well, I'm also going to do a little more planning. I need to create ESX hosts that meet the criteria, plus plan out things like the cluster name, network settings, IP addresses and DNS zones.

For the ESX hosts, my solution is to run nested ESXi (a virtual machine running ESX on top of a physical machine running ESX). This allows me to provide the independent disks, as well as multiple ESX hosts, without all the hardware. It will also help facilitate the networking needs, through making virtual switches to provide the internal and external networks.

To build this test platform, we’ll cover 4 main areas:

  • ESX Hosts
  • External Network
  • Internal Network
  • SmartConnect

ESX Hosts

For testing IsilonSD Edge, I'm going to make four virtual machines and configure them as ESX hosts. Each of these will need four NICs (two for ESX purposes, two for IsilonSD) and nine hard drives (2 for ESX again, and 7 for IsilonSD). I'm hosting all the hard drives in a single datastore; it happens to be SSD. For physical networking, my host only has a single network card connected, so I've leveraged virtual switches without a network card to simulate a private management network.

A snapshot of my single VM configuration is below:

IsilonSD_NestedESXDiagram1

With the first virtual machine created, I now simply clone it three times, so I have four exact replicas. Why four? It will allow a three-node cluster to start; then I can test adding (and removing) a node without data loss, the same as we would with a physical Isilon.

Note the ‘guest OS’ for the virtual machines is VMware ESXi 6.x. This is a nice feature of vSphere to help you keep track of your nested ESXi VMs. Keep in mind, though, that nesting vSphere is NOT supported by VMware; you cannot call and ask for help. That's not a concern here, given I can't call EMC for Isilon either since I'm using the Free and Frictionless downloads. This is not a production-grade configuration by any stretch.
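
If you'd rather script those clones than click through the Web Client, here's a hedged pyVmomi sketch; the VM names are from my lab, and I'm assuming a plain clone with no customization.

```python
# Clone the template nested-ESXi VM three times with pyVmomi.
# Names are from my lab; connection boilerplate as in any pyVmomi script.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host='vcenter.lab.vernex.io', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the inventory for the first object of the given type and name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

source = find_by_name(vim.VirtualMachine, 'NestedESX-01')  # the VM built above
folder = source.parent
spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(), powerOn=False)

tasks = [source.Clone(folder=folder, name=f'NestedESX-0{i}', spec=spec)
         for i in range(2, 5)]  # three clones: -02, -03, -04
print(f"{len(tasks)} clone tasks submitted; watch progress in the vSphere client")
Disconnect(si)
```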

IsilonSD_NestedESXDiagramx4

Once all four virtual machines existed on my physical ESX host, installing ESX was just an ISO attach away.

After installing ESX on all my virtual hosts, I then added them to my existing vCenter as hosts. vCenter doesn't know these are virtual machines and treats them the same as physical ESX hosts.

I've placed these virtual hosts into a vCenter cluster. However, this is only for aesthetic purposes, to keep them organized. I won't enable normal cluster features such as HA and DRS, given Isilon cannot leverage them, nor does it need them. Plus, given there is no shared storage between these hosts, you cannot do standard vMotion (enhanced vMotion is always possible, but that's another blog).

Here you can see those virtual machines with ESX installed masquerading as vSphere hosts:

IsilonSD_NestedESXCluster

I'll leverage the Cluster A you see in the screenshots for the Management Server and the InsightIQ server. Cluster A is another cluster of nested ESXi VMs I used for testing VSAN; it also has the Virtual Infrastructure port group available, so all the IsilonSD Edge VMs can be on the same logical segment.

External Network

The Isilon external network is what faces the NAS clients. In my environment, I have a vSphere Distributed Virtual Switch port group called 'Virtual Infrastructure' where I place my core systems. This is also where vCenter and the ESX hosts sit, and it's what I'll use for Isilon, as there is no firewall/router between the Isilon and what I'll connect to it.

The Virtual Infrastructure network space is 10.0.0.0/24; I've set aside a range of IP addresses for Isilon in this network.
.50 is the management server
.151-.158 for nodes
.159 for SmartConnect
.149 for InsightIQ
You MUST have a contiguous range for your nodes, but all other IP addresses are personal preference.

For use in the deployment steps:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.0.151
-High IP Range: 10.0.0.158
-MTU: 1500
-Gateway: 10.0.0.1
-DNS Servers: 10.0.0.11
-Search Domains: lab.vernex.io
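
Since the node range must be contiguous, here's a quick sanity check of the plan above using Python's standard ipaddress module:

```python
# Sanity-check the external network plan with the stdlib ipaddress module.
import ipaddress

network = ipaddress.ip_network('10.0.0.0/24')
low, high = ipaddress.ip_address('10.0.0.151'), ipaddress.ip_address('10.0.0.158')

# Contiguous range: every address from low to high, inclusive
node_ips = [ipaddress.ip_address(i) for i in range(int(low), int(high) + 1)]
assert all(ip in network for ip in node_ips), "node range falls outside the subnet"
print(f"{len(node_ips)} node addresses reserved: {node_ips[0]} - {node_ips[-1]}")
```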

Internal Network

The physical Isilon appliances use dedicated InfiniBand switches to interconnect the nodes. This non-blocking, low-latency, high-bandwidth network allows the nodes to communicate with each other to stripe data across nodes, providing hardware resiliency. For IsilonSD Edge, Ethernet over a virtual network is used for this same purpose. If you were deploying this on physical nodes, you could bind the Isilon internal network to anything all the hosts have access to: the same network as vMotion, or a dedicated network if you prefer. Obviously, 10Gb is preferable, and I would recommend diversifying your physical connections using failover or LACP at the vSwitch/VDS level.

For my lab, I have a vSphere DVS for private management traffic; this is bound to the virtual switch on my host that has no actual NIC associated with it. It's a private network on the physical host underneath my nested ESXi instances. I use this DVS for VSAN traffic already, so I merely created an additional port group for Isilon, named PG_Isilon.

Because this is essentially a dedicated, non-routable network, the IP addresses do not matter. But to keep things clean, I use a range set aside for private traffic (10.0.101.0/24) and use the same last octet as on the external network.

For use in the deployment:
-Netmask: 255.255.255.0
-Low IP Range: 10.0.101.151
-High IP Range: 10.0.101.158

SmartConnect

For those not familiar with Isilon, SmartConnect is the technique used for load balancing clients across the multiple nodes in the cluster. Isilon protects the data across nodes using custom code, but to interoperate with the vast variety of clients, standard protocols such as SMB, CIFS and NFS are used. For these, there still is no industry-standard method for spreading load across multiple servers (NFS does have the ability for transparent failover, which Isilon supports). SmartConnect's approach is a beautiful blend of powerful and simplistic: by delegating a zone in your enterprise DNS for the Isilon cluster to manage, SmartConnect will hand out IP addresses to clients based on different load-balancing options appropriate for your workloads, such as round robin (the default) or others like least connections.
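
Once the cluster is up, you can watch that round robin from any client. Here's a minimal sketch using only the Python standard library (the zone name is my lab's):

```python
# Observe SmartConnect round robin: repeated lookups of the delegated zone
# should rotate through the node IPs (stdlib only; zone name is my lab's).
import socket

for _ in range(4):
    print(socket.gethostbyname('Isilon01.lab.vernex.io'))
# Expect successive answers to cycle through 10.0.0.151-158 under the default
# round-robin policy (your resolver's caching can mask the rotation).
```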

To prepare for deploying an IsilonSD Edge cluster, we're going to modify the DNS to extend this delegation to Isilon. Configuring the DNS ahead of time makes the deployment simple. If you're running Windows DNS, here are the quick steps (if you're using BIND or something similar, this is a delegation and should be very similar in your config files).

Launch the Windows/Active Directory DNS Administration Tool

IsilonSD_NewDNSDelgation

Locate the parent zone you wish to extend, here I use lab.vernex.io.

Right click on the parent zone and select New Delegation

Enter the name of the delegated zone; ideally this will be your cluster name. For my deployment: Isilon01

IsilonSD_Delgation1

Enter the IP address you intend to assign to Isilon SmartConnect

IsilonSD_Delgation2

That's it. When the Isilon cluster is deployed and SmartConnect is running, you'll be able to navigate to a CIFS share like \\Isilon01.lab.vernex.io; your DNS will pass the request to the Isilon DNS server, which will reply with the IP address of a node that can accept your workload. This same DNS delegation works for managing the Isilon cluster as well.

Quick tip: you can CNAME anything else to Isilon01.lab.vernex.io, so I could make File.lab.vernex.io a CNAME pointed at SmartConnect. This is an excellent way to replace multiple file servers with a single Isilon.

For use in the deployment:
-Zone Name: Isilon01.lab.vernex.io
-SmartConnect Service IP: 10.0.0.159

 


IsilonSD – Part 1: Quick Deploy (or how I failed to RTM)

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

As I mentioned in my previous post, EMC recently released the “software defined” version of Isilon, their leading enterprise scale-out NAS solution. If you're familiar with Isilon, you might already know there has been a virtual Isilon (called the Isilon Simulator) for years now. The virtual Isilon would run on a laptop with VMware Workstation/Fusion, or on vSphere in your datacenter. I've purchased and installed Isilon in multiple organizations, for several different use cases; the Isilon Simulator was a great solution for testing changes pre-production, as well as for familiarizing engineers with the interface. The Isilon Simulator is not supported, and up until recently you had to know the right people to even get ahold of it.

With the introduction of IsilonSD Edge, we now have a virtualized Isilon that is fully supported, available for download and purchase through your favorite EMC sales rep. It runs the same codebase as the physical appliances, with some adjustments for the virtual world. As we discussed, there is a “free” version for use in non-production as part of EMC's Free and Frictionless movement. I've run the Isilon Simulator personally for years, so I want to leverage this latest release of IsilonSD Edge as the new test Isilon in my home lab.

A quick stop at the IsilonSD Edge download page and I'm quickly pulling down the bits. While waiting a few moments for the 2GB download, I review some of the links: there is a YouTube video on Aggregating Unused Storage, another that covers FAQs on IsilonSD Edge and one more that talks about Expanding the Data Lake to the Edge. These all cover what I assumed: you'll want at least three ESX hosts to provide physical diversity, you can run other workloads on these hosts, and the ultimate goal of this software is to extend Isilon into edge scenarios, such as branch offices.

Opening the downloaded ZIP file, I find a couple of OVA files, plus the installation instructions. I reviewed a couple of the FAQs linked from the product page, though I didn't spend much time on the installation guide; nor did I watch the YouTube Demo: Install and Configure IsilonSD Edge. I like to figure some things out on my own; that's half the fun of a lab, right? I did see, under the system requirements, that it mentions support for VSAN-compatible hardware, referencing the VMware HCL for VSAN. I just recently set up VSAN in my home lab, so that, coupled with the fact I've run the Isilon Simulator, had me figuring I was good to go.

Fast forward through a couple of failed installations, re-reading the FAQs, more failed installations, then reading the actual manual… here's the catch.

You have to have local disks on each ESX node.

More specifically, you need to have directly attached storage… 
        without hardware RAID protection or VSAN.

Plus, you need at least 7 of these directly attached unprotected disks, per node.

While this wasn't incredibly clear to me in the documentation, once you know it, you'll see it's stated; but given IsilonSD is running on VMDK files, I glossed over the parts of the documentation that (vaguely) spelled this out. If you've deployed the Isilon Simulator, or any OVA for that matter, you're used to selecting where the virtual machines are deployed; I assumed this would be the same for IsilonSD and that I could choose the storage location.

However, IsilonSD comes with a vCenter plug-in that deploys the cluster; as part of that deployment, it scans for hosts and disks that meet these specific requirements. Moreover, during the deployment IsilonSD leverages a little-used feature in vSphere to create virtual serial port connections over the network, which the Isilon nodes use to communicate with the Management Server; this is how the cluster gets configured. So deploying IsilonSD nodes by hand isn't an option (you can still use the Isilon Simulator, which you can deploy more manually).

I'm going to stop here and touch on some thoughts; I'll elaborate more in a later post, once I actually have IsilonSD Edge working.

I do not know any IT shop that has ESX hosts with locally attached, independent disks (again, not in a RAID group or under any type of data protection). We've worked hard as VM engineers to always build shared storage so we can use things like vMotion.

The marketing talks about capturing unused storage, about running IsilonSD on the same hosts as other workloads (in fact, on the same storage as other VMs), but I'm not sure who has unused capacity that's also on independent disks.

I certainly wouldn't recommend running virtual machines on storage without any type of RAID-like protection. Maybe some folks have a server with some disks they never put into play, but 7 disks? And at least three servers with 7 disks each?

I know there are organizations with lean budgets where this might be the best they can afford, but are shops like that licensing and running vCenter ($)? Are they looking at a virtual Isilon ($)?

Call me perplexed, but I'm going to put off thinking about this, as I still want to get it running in my lab. Since I don't have three servers and 21 disks lying around at home, I'll need to figure out a way to create a test platform for IsilonSD to run on.

Be back soon…

 
