
UnityVSA – Part 4: VVOLs

In part 1 of this series I shared that the move to HTML5 was the most exciting part of Unity. Frankly, this was partly because of the increased compatibility and performance of Unisphere for Unity, but more so because it signals EMC shifting to HTML5 across the board (I hope).

If there is one storage feature inside Unity that excites me the most, it has to be VVOL support. So in this post, I’m going to dive into VVOLs on the UnityVSA we set up previously. Because VVOLs is a framework, every implementation differs slightly; the switch to VVOLs in general, and the Unity flavor in particular, will require adjustments to the architectural and management practices we’ve adopted over the years.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

First, for those not familiar, a little on VVOLs themselves. VVOL stands for Virtual Volumes; in essence, it simplifies the layers between the virtual machine and the storage array while allowing vSphere and the array to have a deeper understanding of each other. A VVOL directly correlates to a virtual disk attached to a virtual machine, along with the configuration file and swap file every VM has. Underpinning VVOLs is VASA (vSphere Storage APIs for Storage Awareness), with which the array describes the attributes of the storage presented to vSphere. These two core tenets of VVOLs allow the vSphere layer to see deeper into the storage, while the storage layer can see granular virtual machine and virtual disk usage.

In practice, this provides a better management framework than the approach most in the vSphere realm have adopted: creating larger datastores with a naming convention that denotes the storage features (flash, tiering, snapshots, etc.). Where vSphere admins previously needed to learn those conventions to decide where to place VMs, with VVOLs this is abstracted into Storage Policies, and the vSphere admin simply selects the appropriate policy at creation time.

So new terms and concepts to become familiar with:

  • Storage Provider
    • Configured within vCenter, this is the link to the VASA provider, which in turn shares the storage system details with vSphere.
    • For Unity, the VASA provider is built in and requires no extra configuration on the Unity side.
  • Protocol EndPoint
    • This is the storage-side access point that vSphere communicates with; protocol endpoints work across protocols and replace LUNs and mount points.
    • On Unity, Protocol Endpoints are created automatically through the VVOL provisioning process.
  • Storage Container
    • This essentially replaces the LUN, though a storage container is much more than a LUN ever was: it can contain multiple types of storage on the array, which effectively means it can hold what would have been multiple LUNs.
    • In vSphere a storage container maps to a VVOL Datastore (shown in the normal datastore section of vSphere).
    • Unity has mirrored this naming in Unisphere, calling the storage container ‘Datastore’.
    • In Unity a Datastore can contain multiple Capability Profiles (which, if you remember, map one-to-one to Pools in Unity).

To fully explore and demonstrate the VVOL functionality in Unity, we’re going to perform several sets of actions; I’m going to share these as video walkthroughs (with sound), since there are multiple steps.

  1. Create additional pools and capability profiles on the UnityVSA, then configure vSphere and Unity with the appropriate connections for VVOLs
  2. Provision a VVOL Datastore with multiple capability profiles and provision a test virtual machine on the new VVOL Datastore
  3. Create a vSphere Storage Policy and relocate the VM data
  4. Create advanced vSphere Storage Policies, extending the VM to simulate a production database server

 

First some prep work and connecting vSphere and Unity:

  • Add 4 new virtual disks to the UnityVSA VM
  • Create two new Unity pools
    • 1 with a single 75GB virtual disk as one tier
    • 1 with 15GB, 25GB and 55GB virtual disks as multi-tier with FAST VP
  • Link Unisphere/Unity to our vCenter
  • Create a Storage Provider link in vSphere to the Unity VASA Provider

 

Next, let’s provision the VVOL Datastore or “Storage Container”:

  • Create a Unity Datastore (aka “Storage Container”) with three Capability Profiles (as such, three pools)
  • Create a vSphere VVOL Datastore
  • Investigate VVOL Datastore attributes
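If you prefer to poke at the result from a script rather than the Web Client, here is a minimal pyVmomi sketch that lists the VVol datastores vCenter sees, with capacity and free space. The vCenter hostname and credentials are placeholders for your own lab, and certificate checking is disabled because the lab uses self-signed certificates.

```python
# Minimal sketch: list VVol datastores via pyVmomi (hostname/credentials are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())  # lab only
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        if s.type == "VVOL":  # VVol datastores report their type as "VVOL"
            print(f"{s.name}: {s.capacity / 2**30:.0f} GiB capacity, "
                  f"{s.freeSpace / 2**30:.0f} GiB free")
finally:
    Disconnect(si)
```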

 

Provisioning a virtual machine on the new storage looks the same as with traditional datastores, but there is more than meets the eye (see the sketch after this list):

  • Create a new virtual machine on the VVOL Datastore
  • Investigate where the VM files are placed
  • See the VM details inside Unisphere
  • Create a simple Storage Policy in vSphere
  • Adjust the virtual machine storage policy and watch the storage allocation adjustment
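To see the “more than meets the eye” part from the vSphere side, a quick pyVmomi sketch like the one below prints where the VM’s configuration file and each virtual disk actually live; on a VVOL datastore, each of those maps to its own virtual volume on the array. The VM name, vCenter address, and credentials are placeholders.

```python
# Minimal sketch: show where a VM's config file and disks live (names are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "vvol-test-vm")  # placeholder VM name

    print("Config file:", vm.config.files.vmPathName)  # the VM's configuration files
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            # Each virtual disk corresponds to its own data VVol on the array
            print(dev.deviceInfo.label, "->", dev.backing.fileName)
finally:
    Disconnect(si)
```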

 

Now let’s consider more advanced usage of VVOLs. With the ability to create custom tags in Unisphere Capability Profiles, we have an open-ended mechanism to describe the storage in our own words. You could use these tags to create application-specific pools, and thus vSphere Storage Policies for admins to target VMs related to an application. You could also use tags for tiers (Web, App, DB), or, as in the example below, create vSphere Storage Policies and Unity capability tags to partition a database server into Boot, Data and Backup storage types.

  • Modify our three Capability Profiles to add tags: Boot, DB and Backup.
  • Create vSphere Storage Policies for each of these tags.
  • Adjust the boot files of our test VM to leverage the Boot Storage Policy
  • Add additional drives to our test VM, leveraging our DB and Backup Storage Policies; investigate where these files were placed

 

Hopefully you now not only have a better understanding of how to set up and configure VVOLs on EMC Unity storage, but also a deeper understanding of the VVOL technology in general. This framework opens brand new doors in your management practices; imagine a large Unity array with multiple pools and capabilities all being provisioned through one Storage Container and VVOL Datastore, with Storage Policies managing your data placement rather than carving up numerous LUNs.

With the flexibility of Storage Policies, you can further inform the administrators creating and managing virtual servers about what storage characteristics are available. If you have multiple arrays that support VVOLs and/or VSAN, your policies can work across arrays and even vendors. This abstraction allows further consistency inside vSphere, streamlining management and operations tasks.

You can see how, over time, this technology has advantages over the traditional methods we’ve been using for virtual storage provisioning. However, before you start making plans to buy a new Unity array and replace all your vSphere storage with VVOLs, know that, as with any new technology, there are still some limitations. Features like array-based replication, snapshots, even quiescing VMs are all lagging a bit behind the VVOL release, and all are highly dependent on your environment and usage patterns. I expect quick enhancements in this area, so research the current state and talk with your VMware and EMC reps/partners.


UnityVSA – Part 3: Initial Configuration Wizard

When last we met, we had deployed a UnityVSA virtual appliance; it was up and running, sitting at the prompt for the Initial Configuration Wizard.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

If you’re following along and took a break, your browser session to Unisphere likely expired, which means you also lost the prompt for the Initial Configuration Wizard. Don’t fret; you can get it back. In the upper left corner of Unisphere, click the Preferences button. On the main preferences page, there is a link to launch the Initial Configuration Wizard again.

UnityVSA_PreferencesButton

UnityVSA_InitialConfigurationWizardLink

Ok, so back to the wizard? Good. This wizard walks through a complete configuration of the UnityVSA, including changing passwords, licensing, network setup (DNS, NTP), creating pools, the e-mail server, support credentials, your customer information, ESRS, and iSCSI and NAS interfaces. You can choose to skip some steps, but if you complete all portions, your UnityVSA will be ready to provision storage across block and file, with LUNs, shares or VVOLs.

Before we get going, let’s cover some aspects of the VSA to plan for.

First, there is a single SP (service processor in Unity-speak, or a ‘brain’). The physical Unity (and the VNX and CLARiiON before it, if you are familiar with the line) has two SPs. These provide redundancy should an SP fail; given that, for all intents and purposes, an SP is an x86 server, it’s susceptible to common failures. The VSA instead relies on vSphere HA to provide redundancy should a host fail, and on vSphere DRS/vMotion to move the workload preemptively for planned maintenance. This is germane because, for the VSA, you won’t be planning for balancing LUNs between SPs, nor creating multiple NAS servers to span SPs, nor even iSCSI targets across SPs to allow for LUN trespass.

The second is size limitations. I’m using the community edition, a.k.a. Free and Frictionless, which is limited to 4TB of space. I do not get any support; as such, ESRS will not work. If you’re planning a production deployment, that 4TB limit can be increased up to 50TB (based, of course, on how much you purchased from EMC), and you then have a fully supported Unity array. Along with the overall size limit, you are also limited to 16 virtual disks behind the VSA (meaning 16 VMDK files assigned to the VM), so plan accordingly. In my initial deployment I provided a 250GB VMDK, so if I add 15 more (16 total), I hit my 4TB max.

Third are the key differences from a physical array to prepare for:

  • Max pool LUN size: 16TB
  • Max pool LUNs per system: 64
  • Max pools per system: 16
  • No fibre channel support
  • No write caching (to ensure data integrity)
  • No synchronous replication
  • No RAID – this one is crucial to understand. In a physical Unity, raw disks are put into RAID groups to protect against data loss during a drive failure. UnityVSA relies on the storage you provide to be protected already. Up to the limits mentioned, you can present VMDKs from multiple vSphere datastores, even from separate tiers of storage, leveraging FAST VP inside UnityVSA.

Fourth and finally, a few vSphere-specific notes. UnityVSA comes with VMware Tools installed, but you can’t modify it, so don’t attempt automatic VMware Tools upgrades. The cores and memory are hard-coded, so don’t try adding more to increase performance; rather, split the workload onto multiple UnityVSA appliances. You cannot resize virtual disks once they are in a UnityVSA pool; again, add more instead, but pay attention to the limits. EMC doesn’t support vSphere FT, VM-level backup/replication or VM-level snapshots.

 

UnityVSA_ICW__1

Got all that? Now let’s set up this array! I’m going to step through the options in detail this time, because the wizard is packed full of steps; the video is also at the bottom.
 

UnityVSA_ICW__26

First off, change your passwords. If you missed it earlier, here are the defaults again:

  • admin/Password123#
  • service/service

The “admin” account is the main administrator of Unisphere, while the “service” account is used to log into the console of the VSA, where you can perform limited admin steps such as changing IP settings should you need to move networks, as well as collect logs.

 

UnityVSA_ICW__25

Next up, licensing. Every UnityVSA needs a license, even if you’re using the Community edition. If you purchased the array, you should go through all the normal channels, LAC, support, or your rep. For community edition, copy the System UUID and follow the link: Get License Online
 

UnityVSA_ICW__24

The link to get a license will take you to EMC.com; again, you’ll need to log on with the same account you used to download the OVA. Then simply paste in that System UUID and select “Unity VSA” from the drop-down. Your browser will download a file that is the license key; from there you can import it into Unisphere.

I will say, I’ve heard complaints that needing to register to download and create a license is ‘friction’, but it was incredibly easy. I don’t take any issue with registration; EMC has made it simple, and no one from sales is calling me to follow up. I’m sure EMC is tracking who’s using it, but that’s good data: who downloads the OVA vs. who actually creates a license. I don’t find this invalidates the ‘Free and Frictionless’ motto.

UnityVSA_ICW__23

Did you get this nice message? If so, good job; if not… well, try again.
 

UnityVSA_ICW__22

The virtual array needs DNS servers to talk to the outside world, communicate through ESRS, get training videos, etc., so make sure to provide DNS servers that can resolve internet addresses.
UnityVSA_ICW__21

Synchronizing time is always important; if you are providing file shares, it’s extra important that you’re in sync with your Active Directory. Should a client and server be more than 5 minutes apart, access will be denied. So use your AD server itself, or an NTP server synchronized to the same time source as AD.
 

UnityVSA_ICW__20

Now we’re getting into some storage. In Unity, pools are groupings of storage that are carved up to present to hosts. In the VSA, a pool must have at least one virtual disk; if you provide more, data will be balanced across the virtual disks.

A pool is the foundation of all storage, without one you cannot setup anything else on the virtual array.

 

UnityVSA_ICW__18

When creating a Pool, you’ll want to give it a name and description. Given this is a virtual array, consider including the cluster and datastore names in the description so you know where the storage is coming from.

For my purposes of testing, I named the Pool Pool_250 simply to know it was on my 250GB virtual disk.

 

UnityVSA_ICW__19

Once named, you’ll need to give the pool some virtual disks. Recall we created a 250GB virtual disk, here we are adding that into the pool.

When adding disks you need to choose a tier, typically:

  • Flash = Extreme Performance
  • FC/SAS = Performance
  • NL-SAS/SATA = Capacity

The virtual disk I provided the VSA is actually backed by an all-flash VSAN datastore, but I’m selecting Capacity here simply to explore FAST VP down the road.

 

UnityVSA_ICW__17

Next up, I need to give the Pool a Capability Profile. This profile is for VMware VVOL support. We’ll cover VVOLs in more depth another time, but essentially it allows you to connect vSphere to Unity and assign storage at a VM level. This is done through vSphere Storage Policies that map to the Capability Profile.

So what is the Capability Profile used for? It encompasses all the attributes of the pool, allowing you to create a vSphere Storage Policy. There is one capability profile per pool; it includes the Service Tier (based on your storage tier and FAST VP), space efficiency options, and any extra tags you might want to add.

You can skip this by not checking the Create VMware Capability Profile for the Pool box at this point; but you can also modify/delete the profile later.

I went ahead and made a simple profile called CP_250_VSAN

 

UnityVSA_ICW__16

Here are the constraints, or attributes, I mentioned above. Some are set for you, then you can add tags. Tags can be shared across pools and capability profiles. I tagged this ‘VSAN’, but you could tag for applications you want on this pool, or application tiers (web, database, etc), or the location of the array.

This finishes out creating the pool itself.
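Once the wizard finishes, you can sanity-check the pool outside of Unisphere as well. Unity exposes a REST API; the sketch below is a hedged example querying the pool collection. The management address and credentials are placeholders, and the endpoint and field names are assumptions to verify against the API reference for your Unity release.

```python
# Hedged sketch: query Unity pools over the REST API (address, credentials and
# field names are assumptions to check against your Unity's API reference).
import requests

UNITY = "https://unity.lab.local"   # placeholder management address
session = requests.Session()
session.auth = ("admin", "********")
session.headers.update({"X-EMC-REST-CLIENT": "true"})
session.verify = False              # lab only; the VSA uses a self-signed certificate

resp = session.get(f"{UNITY}/api/types/pool/instances",
                   params={"fields": "name,sizeTotal,sizeFree"})
resp.raise_for_status()
for entry in resp.json().get("entries", []):
    pool = entry["content"]
    print(f'{pool["name"]}: {pool["sizeTotal"] / 2**30:.0f} GiB total, '
          f'{pool["sizeFree"] / 2**30:.0f} GiB free')
```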

 

UnityVSA_ICW__14

The Unity array will e-mail you when alerts occur; configure this to notify your team when there are issues.
 

UnityVSA_ICW__13

Remember me mentioning reserving a handful of IP addresses? Here we start using them. The UnityVSA has two main ways to present storage: iSCSI and NAS (a third, Fibre Channel, is available only on the physical array).

If you want to connect via iSCSI, you’ll need to create iSCSI Network Interfaces here.

 

UnityVSA_ICW__12

Creating an iSCSI interface is easy: pick one of your four Ethernet ports and provide an IP address, subnet mask and gateway. You can assign multiple IP addresses to the same Ethernet port; you can also run both iSCSI and NAS on the same Ethernet port (you can’t share an IP address, though).

How you leverage these ports is up to your design. Keep in mind, the UnityVSA itself is a single VM and thus a single point of failure. You could put the virtual network cards on separate virtual switches to provide some network redundancy, or place them in separate VLANs to provide storage to different network segments.

 

 

UnityVSA_ICW__11

I created two iSCSI interfaces on the same network card, so that I can simulate multi-pathing at the client side down the road.
 

UnityVSA_ICW__9

Next up is creating a NAS Server, which provides file services for a particular pool. Provide a name for the NAS Server, the pool to bind it to, and the service processor for it to run on (only SPA on the VSA).
 

UnityVSA_ICW__8

With the NAS Server created, it will need an ethernet port and IP information. Again, this can be the same port as iSCSI or a different one; your choice, but you CANNOT share IP addresses. I chose to use a different port here.
UnityVSA_ICW__7

Unity supports both Windows and *nix file shares, as well as multiple options for securing the authentication and authorization of those shares. Both the protocol support and the associated security settings are per NAS Server. Remember, we can create multiple NAS Servers; this is how you provide access across different sets of clients.

For example, if you have two Active Directory forests you want to provide shares for, one NAS Server cannot span them; you can simply create a second NAS Server for the other forest.

Or if you want to provide isolation between Windows and *nix shares, simply use two NAS servers, each with single protocol support.

One pool may have multiple NAS servers, but one NAS server can NOT have multiple pools.

This is again where the multiple NICs might come into play on UnityVSA. I could create a NAS Server on a virtual NIC that is configured in vSphere for NFS access, while another NAS Server is bound to a separate virtual NIC that my Windows clients can see for SMB shares.

For my initial NAS Server, I’m going to use multi-protocol and leverage Active Directory. This will create a server entry in my AD. I’m also going to enable VVOL, NFSv4 and configure secure NFS.

 

 

UnityVSA_ICW__6

This is the secure NFS option page; by having NFS authenticate via Kerberos against Active Directory, I can use AD as my LDAP source as well.
 

UnityVSA_ICW__5

With secure NFS enabled, my Unix Directory Service is going to target AD, so I simply need to provide the IP address(es) of my domain controller(s).
 

UnityVSA_ICW__4

Given I’m using Active Directory, the NAS Server needs DNS servers; these can be different from the DNS servers you entered earlier if you separate DNS by zone.
 

UnityVSA_ICW__3

I do not have another Unity box at this point to configure replication (something I’ll explore down the road), so I’m leaving this unchecked.
 

UnityVSA_ICW__2

At this point all my selections are ready; clicking Finish will apply all the configuration options.

With that, my UnityVSA is connected to the network, ready to carve up the pool into LUNs, VVOLs or shares. Everything I accomplished in this wizard can be done manually inside the GUI; the Initial Configuration Wizard just streamlines all the tasks involved in bringing up a new Unity array. If you have a complete Unity configuration mapped out, you can see how this wizard would greatly reduce the time to value. In the next few posts I’ll explore the provisioning process, like how to leverage the new VVOL support with vSphere.

Here is the silent video, to see the steps I skipped and the general pacing of this wizard.


IsilonSD – Part 6: Final Thoughts

Now that I’ve spent some time with IsilonSD, how does it compare to my experience with its physical big brother? Where does the software-defined version fit?

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

Overview (TL;DR version)

I’m excited by the entrance of this product into the virtualization space. Isilon is a robust product that can be tuned for multiple use cases and workloads. Even though Isilon has years of product development behind it and is currently on its eighth major software version, the virtual product is technically v1. As with any first version, there are some areas to work on; from my limited time with IsilonSD, I believe this is everything its physical big brother is, in a smaller, virtual package. However, it also brings along some of the limitations of its physical past: limitations to be aware of, but also limitations I believe EMC will be working to remove in the next version of IsilonSD.

If you ran across this blog because of interest in IsilonSD, I hope you can test the product, either with physical nodes or with the test platform I’ve put together; only with customer testing and feedback can the product mature into what it’s capable of becoming.

Deep Dive (long version)

From running Isilons across multiple use cases and companies, I always wanted the ability to get smaller Isilon models for my branch offices. I’ve worked in environments where we had hundreds of physical locations of varying sizes. In many of these, we wanted file solutions in the spokes replicating back to a hub, and we wanted a universal solution that applied to locations of varying size. The startup cost for a new Isilon cluster was prohibitive for a smaller site. This led us to leverage Windows file servers (an excellent file server solution, but that’s a different post) for those situations, bifurcating our file services stack, which increased management complexity, not just of the file storage itself but of ancillary needs like monitoring and backups.

Given that I’ve been running a virtualized Isilon simulator for as long as I’ve been purchasing and implementing the Isilon solution, leveraging a virtualized Isilon for these branch office scenarios was always on my wish list. When I heard the rumors that an actual virtual product was in the works (vIMSOCOOL), I expected the solution to target this desire. When IsilonSD Edge was released, as I read the documentation, I continued with this expectation. I watched YouTube videos that said this was the exact use case.

It took actually setting up the product in a lab to understand that IsilonSD Edge is not the product I expected it to be. Personally, though the solution is by its nature ‘software defined’, as it includes no hardware, it doesn’t quite fit the definition I’ve come to believe SD stands for. This is less a virtual Isilon, or software-defined Isilon, than it is “bring your own hardware”: IsilonBYOH, if you will.

IsilonBYOH is, on its merits, an exciting product and highlights what makes Isilon successful: a great piece of software sitting on commodity servers. This approach is what has allowed Isilon to become the product it is, supporting a plethora of node types as well as hard drive technologies. You can configure a super-fast, flash-based NFS NAS solution as ultra-reliable storage behind web servers, where you store the data once and all nodes have shared access. You can leverage the multi-tenancy options to provide mixed storage in a heterogeneous environment, NFS to service servers and CIFS to end users, talking to both LDAP and Active Directory, tiering between node types to maximize performance for newer files and cost for older ones. You can create a NAS for high-capacity video editing needs, where current data sits on SSD for screaming-fast performance, then moves to HDD when the project is complete. You can even create an archive-class storage array with cloud-competitive pricing to store aged data, knowing you can easily scale, support multiple host types and, if ever needed, incorporate faster nodes to increase performance.

With this new product, you can now start even smaller, purchasing your own hardware, running on your own network, and still leverage the same management and monitoring tools, even the same remote support. Plus you can replicate it just the same, including to traditional Isilon appliances.

However, to me, leveraging IsilonSD Edge does call for purchasing hardware, not simply adding this to your existing vSphere cluster and capturing unused storage. IsilonSD Edge, while running on vSphere, requires locally attached, independent hard drives. This excludes leveraging VSAN, which means no VxRail (and all the competitive HCIAs); it also means no ROBO hardware such as Dell VRTX (and all the similar competitive offerings); in fact, just having RAID excludes you from using IsilonSD. These hardware requirements, specifically the dedicated disks, turn into limitations. Unless you’re in a position to dedicate three servers, which you’ll likely need to buy new to meet the requirements, you’re probably not putting this out in your remote/branch offices, even though that’s the point of the ‘Edge’ in the name.

When you buy those new nodes, you’d probably go ahead and leverage solid-state drives; the cost of locally attached SATA SSDs is quickly reaching parity with traditional hard drives. But understand, IsilonSD Edge will not take advantage of those faster drives like its physical incarnation does: no metadata caching with the SD version. Nor can the SD version provide any tiering through SmartPools (you can still control the data protection scheme with SmartPools, and obviously you’ll get a speed boost with SSD).

Given all this, the use cases for IsilonSD Edge get very narrow. With the inability to put IsilonSD Edge on top of ROBO designs, the likelihood of needing to buy new hardware, and the 36TB overall limit of the software-defined version of Isilon, I struggle to identify a production scenario that is a good fit. The best-case scenario in my mind is purchasing hardware with enough drives to run both IsilonSD and VSAN side-by-side on separate drives; this would require at least nine drives per server (more, really), so you’re talking some larger machines, and again, a narrow fit.

To me, this product is less about today and more about tomorrow, with release one setting the foundation for the future opportunity of a virtual Isilon.

What is that opportunity?

For starters, running IsilonSD Edge on VxRail, even deploying it directly through the VxRail marketplace; by this I mean running the IsilonSD Edge VMDK files on the VSAN datastore.

Before you say the Isilon protection scheme would double up storage needs on top of the VSAN model, keep in mind you can configure per-VM policies in VSAN. Setting Failures To Tolerate (FTT) to 0 is not generally recommended, but this is why it exists: let Isilon provide data protection while playing in the VSAN sandbox. Leverage DRS groups and rules to configure anti-affinity for the Isilon virtual nodes, keeping them on separate hosts (see the sketch below). Would VSAN introduce latency compared to physical disk? Quite probably, though in the typical ROBO scenario that’s not the largest concern. I was able to push 120Mbps onto my IsilonSD Edge cluster, and that was with nested ESXi all running on one host.
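As a hedged illustration of that last point, the pyVmomi sketch below creates a DRS anti-affinity rule keeping three hypothetically named IsilonSD node VMs on separate hosts. The cluster name, VM names, vCenter address and credentials are all placeholders for your own environment.

```python
# Hedged sketch: DRS anti-affinity rule for the IsilonSD node VMs
# (cluster/VM names, vCenter address and credentials are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next((obj for obj in view.view if obj.name == name), None)

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    cluster = find_by_name(content, vim.ClusterComputeResource, "Lab-Cluster")
    nodes = [find_by_name(content, vim.VirtualMachine, n)
             for n in ("isilonsd-node1", "isilonsd-node2", "isilonsd-node3")]

    # Keep the virtual Isilon nodes on separate hosts so one host failure
    # only takes out one node of the cluster.
    rule = vim.cluster.AntiAffinityRuleSpec(
        name="isilonsd-separate-hosts", enabled=True, mandatory=True, vm=nodes)
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
finally:
    Disconnect(si)
```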

All of this doesn’t just apply to VxRail, but to its competitors in the hyper-converged appliance space, as well as a wide range of products targeted at small installations. To expand on the small-installation scenario: if IsilonSD had lower data protection options, as VSAN does, to remove the need for six disks per node, or even for three nodes, it could fit smaller situations. Why not trust the RAID protection beneath the VM and still leverage Isilon for the robust NAS features it provides? Meaning, run a single-node Isilon; after all, those remote offices are likely providing file services with Windows or Linux VMs, relying on vSphere HA/DRS for availability and server RAID (or VSAN) for data-loss prevention. Isilon has a rich feature set beyond just data protection across nodes. Even a single-node Isilon with SyncIQ back to a mothership has compelling use cases.

On the other side of the spectrum, putting IsilonSD in a public cloud provider, where you don’t control the hardware and storage, has quite a few use cases. Yes, Isilon has CloudPools technology, which extends an Isilon into public clouds that provide object storage. But a virtual Isilon running in, say, vCloud Air or Virtustream, with SyncIQ to your on-premises Isilon, opens quite a few doors, such as for those looking at public cloud disaster-recovery-as-a-service solutions, or moving to the cloud while still having an on-premises bunker for securing your data.

Outside of the need for independent drives, this is an Isilon running on vSphere. That’s… awesome! As I mentioned before, this opens some big opportunities should EMC continue down this path. Plus, it’s Free and Frictionless, meaning you can do the exact same testing I’ve done. If you are an Isilon customer today, GO GET THIS. It’s a great way to test out changes, upgrades, command-line scripts, etc.

If you are running the Free and Frictionless version, outside of the 36TB and six-node limits, you also do NOT get licenses for SyncIQ, SmartLock or CloudPools.

I’ll say, given I went down this road because of my excitement about Free and Frictionless, these missing licenses are a little disappointing. I’ve run SyncIQ and SmartLock, two great features, and was looking forward to testing them and having them handy to help answer questions I get when talking about Isilon.

CloudPools, while I have not run it, is something I’d been incredibly excited about for years leading up to its release, so I’ll admit I wish it were in the Free and Frictionless version, if only with a small amount of storage to play with.

Wrapping up: there are countless IT organizations out there, and I’ve never met one that wasn’t unique. Even with some areas I’d like to see improved in this product, IsilonSD Edge will undoubtedly apply to quite a few shops. In fact, I’ve heard some customers were asking for a BYOH Isilon approach, so maybe this is really for them (if so, the 36TB limit seems constraining). If you’re looking at IsilonSD Edge, I’d love to hear why; maybe I missed something (certainly I have). Reach out, or use the comments.

If you are looking into IsilonSD Edge, outside of the drive/node requirements, here are some things that caught my eye.

While the FAQs state you can run other virtual machines on the same hosts, I would advise against it. If you had enough physical drives to split them between IsilonSD and VSAN, it could be done. You could also use NFS, iSCSI or Fibre Channel for datastores, but this is overly complex and, in all likelihood, more expensive than simply having dedicated hardware for IsilonSD Edge (or really, just buying the physical Isilon product). And given the datastores used by the IsilonSD Edge nodes are unprotected, putting a VM on them means you are just asking for the drive to fail and to lose that VM.

Because you are dedicating physical drives to a virtual machine, you cannot vMotion the IsilonSD virtual nodes. This means you cannot leverage DRS (Distributed Resource Scheduler), which in turn means you cannot leverage vSphere Update Manager to automatically patch the hosts (as it relies on moving workloads around during maintenance).

The IsilonSD virtual nodes do NOT have VMware Tools. That means you cannot shut down the virtual machines from inside vSphere (for patching or otherwise); rather, you’ll need to enter the OneFS administrator CLI, shut down the Isilon node, then go perform ESX host maintenance. If you have reporting in place to ensure your virtual machines have VMware Tools installed, running and at the supported version (something I highly recommend), you’ll need to adjust it. Other systems that leverage VMware Tools, such as Infrastructure Navigator, will not work either.

I might be overlooking something (I hope so), but I cannot find a way to expand the storage on an existing node. In my testing scenario, I built the minimal configuration of six data drives of a measly 64GB each. I could not figure out how to increase this space, which is something we’re all accustomed to doing on vSphere (in fact, quickly growing a VM’s resources is a cornerstone of virtualization). I can increase overall capacity by adding nodes, but that requires additional ESX hosts. If this is true, the idea of using ‘unclaimed capacity’ for IsilonSD Edge is again marginalized.

IsilonSD wants nodes in a pool to be configured the same, specifically with the same size and number of drives. This is understandable, as it spreads data across all the drives in the pool equally. However, it lessens the value of ‘capturing unused capacity’. Aside from the unprotected-storage point, if you were to have free storage on drives, your ability to deploy IsilonSD will be constrained by the volume with the least free space, as all the VMDK files (virtual drives) have to be the same size. Even if you had twenty-one independent disks across three nodes, if just one of them was smaller than the rest, that free space dictates the disk size you can configure.

Even though I’m not quite sure where this new product fits or what problem it solves, that’s true of many products when they first release. It’s quite possible this will open new doors no one knew were closed, and if nothing else, I’m ecstatic EMC is pursuing a virtual version of the product. After all, this is just version 1… what would you want in version 2? Respond in the comments!


IsilonSD – Part 5: Monitoring Activity

For my deployment of IsilonSD Edge, I want to keep it running in my lab; installing systems is often far easier than operating them (especially troubleshooting issues). However, an idle system isn’t really a good way to get exposure, so I need to put a little activity on this cluster and monitor it.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

This is just my lab, so here is my approach to doing more with IsilonSD than simply deploying it:

  • Deploy InsightIQ (EMC’s dedicated Isilon monitoring suite)
  • Move InsightIQ Data to IsilonSD Edge Cluster
  • Synchronize Software Repository
  • Mount Isilon01 as vSphere Datastore
  • Load Test

Deploy InsightIQ

InsightIQ is EMC’s custom-built monitoring application for Isilon. Personally, this was one of the top reasons I selected Isilon years ago when evaluating NAS solutions. I’m a firm believer that the ability to monitor a solution should be a key deciding factor in product selection.

Without going too deep into InsightIQ itself (that’s another blog), it provides the ability to monitor the performance of the Isilon, including the client’s perspective of that performance. You can drill into the latency of operations by IP address; in fact, when I first purchased an Isilon array, it was because the incumbent solution was having numerous performance problems and the lack of visibility into why was causing a severe customer-satisfaction issue.

InsightIQ monitors the nodes, cluster communication, and even does file analysis to help administrators understand where their space is consumed and by what type of files.

Deploying InsightIQ is a typical OVA process; we’ve collected the necessary information in previous posts, so I’ll be brief. In fact, you can probably wing it on this one if you want.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In the vSphere Web Client, deploy an OVA
  2. Provide the networking information and datastore for the InsightIQ appliance
  3. After the OVA deploy is complete, open the console to the VM, where you’ll need to enter the root password
  4. Navigate your browser to the IP address you entered, logging in as root, with the password you created in the console
  5. Add the Isilon cluster to InsightIQ and wait while it discovers all the nodes.

 

Move InsightIQ Data to IsilonSD Edge Cluster

You can imagine that collecting performance data and file statistics will consume quite a bit of storage. By default, InsightIQ stores all this data on the virtual machine, so I moved the InsightIQ datastore onto the Isilon cluster itself. While this is a little circular, InsightIQ will generate some load writing the monitoring data, which in turn gives it something to monitor; for our lab purposes this provides some activity.

Simply log into InsightIQ and, under Settings -> Datastore, change the location to NFS Mounted Datastore. By default, Isilon exports /ifs; in production this should ALWAYS be changed, but for a lab we’ll leverage that export path.

IsilonSD_InsightIQDSMove

If you do this immediately after deploying InsightIQ, it will be very quick. If, however, you’ve been collecting data, you’ll be presented with information about the progress of the migration; refreshing the browser will provide updates.

IsilonSD_InsightIQDSMoveProgress

Synchronize Software Repository

I have all my ISO files, keys, OVAs and software installers on a physical NAS; this makes it very easy to mount via NFS as a datastore on all my hosts, physical and nested, for quickly installing software in my lab. Because of this, I use this repository daily; so, to ensure I’m actually utilizing IsilonSD and continuing to learn about it post-setup, I’m going to use IsilonSD to keep a copy of this software repository and mount all my nested ESXi hosts to it.

I still need my physical NAS for my physical hosts; should I lose the IsilonSD cluster, I don’t want to lose all my software and be unable to reinstall. I also want the physical NAS and IsilonSD to stay in sync. My simple solution is to leverage robocopy to mirror the two file systems; the added benefit is that I also get regular load on IsilonSD.

Delving into robocopy is a whole different post, but here is my incredibly simple batch routine. It mirrors my primary NAS software repository to the Isilon. This runs nightly now.

:START
REM Mirror the primary NAS software share to the Isilon export.
REM /MIR mirrors (including deletions), /MT:64 uses 64 copy threads,
REM /R:0 /W:0 skips retries/waits on locked files, /ZB uses restartable mode with backup-mode fallback.
robocopy \\nas\software\ \\isilon01\ifs\software\ /MIR /MT:64 /R:0 /W:0 /ZB
GOTO START

Upon first execution, I can see the traffic hitting IsilonSD in InsightIQ. Even though this is nested ESXi, with the virtual Isilon nodes sharing compute, network, memory and disk, I see a fairly healthy external throughput rate, peaking around 100Mb/s.

IsilonSD_InsightIQRobocopyThroughput

 

When the copy process is complete, looking in the OneFS administrator console will show the data has been spread across the nodes (under HDD Used).

IsilonSD_HDDLoaded

Mount Isilon01 as vSphere Datastore

Generally speaking, I would not recommend Isilon for VMware storage. Isilon is built for file services, and its specialty is sequential access workloads. For small workloads, if you have an Isilon for file services already, an Isilon datastore will work; but there are better solutions for vSphere data stores in my opinion.

For my uses in the lab though, with my software repository replicated onto Isilon, mounting an Isilon NFS export as a datastore will not only let me access those ISO files but also open multiple concurrent connections to monitor.

*Note: there is no sound; the video is just to follow along with the steps.
Mounting an NFS datastore from Isilon is exactly the same as from any other NFS NAS.

You MUST use the FQDN to allow SmartConnect to balance the connections.
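If you’d rather script the mount than click through each host, here is a minimal pyVmomi sketch that mounts the export on every host by FQDN, so SmartConnect can spread the connections. The vCenter address, credentials, export path and datastore name are placeholders.

```python
# Minimal sketch: mount the Isilon NFS export on every host by FQDN
# (vCenter address, credentials, export path and datastore name are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    spec = vim.host.NasVolume.Specification(
        remoteHost="isilon01.lab.local",   # FQDN, not an individual node IP
        remotePath="/ifs/software",
        localPath="isilon01_software",     # datastore name as it appears in vSphere
        accessMode="readWrite",
        type="NFS")

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        try:
            host.configManager.datastoreSystem.CreateNasDatastore(spec)
        except vim.fault.DuplicateName:
            pass  # already mounted on this host
finally:
    Disconnect(si)
```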

 

With the datastore mounted, if you go back into the OneFS administrator console, you can see the connections were spread across the nodes.

IsilonSD_ConnectionSpread

Now I have a reason to regularly use my IsilonSD Edge cluster, keeping it around for upgrade testing, referencing while talking to others, etc. Again, with the EMC Free and Frictionless license, I’m not going to run out of time; I can keep using this.

Load Test

Even though I have an ongoing use for IsilonSD, I want it to do a little more than just serve as a software share, to ensure it’s really working well. So I’ll use IOMeter to put a little load on it.
I’m running IsilonSD Edge on 4 nested ESXi virtual machines, which in turn are all running on one physical host. So IsilonSD is sharing compute, memory, disk and network across the 4 IsilonSD nodes (plus I have dozens of other servers running on this host). Needless to say, this is not going to handle a high amount of load, nor provide the lowest latency. So, while I’m going to use IOMeter to put some load on my new IsilonSD Edge cluster, and typically I would record all the details of a performance test, this time I’m not, especially given I’m generating load from virtual machines on the same host.

Given Isilon runs on x86 servers, it would be incredibly interesting to see a scientific comparison between a physical Isilon and IsilonSD Edge on like-for-like hardware. In my personal experience with virtualization there is negligible overhead, but I have to wonder what difference InfiniBand makes.

In this case, my point of load testing is not to ascertain the latency or IOPS, but merely to put the storage device under some stress for a couple of hours to ensure it’s stable. So I created a little load, peaking around 80Mbps and 150 IOPS, but running for about 17 hours (overnight).

Below are some excerpts from InsightIQ; happily, the next morning the cluster was running fine, even under the load. During the test, the latency fluctuated widely (as you’d expect, due to the level of contention my nested environment creates). From an end-user perspective, it was still usable.

IsilonSD_LoadTest1IsilonSD_LoadTest2IsilonSD_LoadTest3IsilonSD_LoadTest4

In my next post I’m going to wrap this up and share my thoughts on IsilonSD Edge.


IsilonSD – Part 4: Adding & Removing Nodes

One of the core functions of the Isilon product is scaling. In its physical incarnation you can scale up to 144 nodes with over 50 petabytes in a single namespace. Those limits are driven by hardware; as drives and switches get bigger, so can Isilon. Even so, you can still start with only three nodes. When nodes are added to the cluster, storage and performance increase; existing data is rebalanced across all the nodes after the addition. Likewise, you can remove a node, proactively migrating the data off the departing node without sacrificing data protection, an excellent way to lifecycle-replace your hardware. This tenet of Isilon, coupled with non-disruptive software upgrades, means there is no pre-set lifespan for an Isilon cluster. With SmartPools’ ability to tier storage by node type, you can leverage older nodes for less frequently accessed data, maximizing your investment.

IsilonSD Edge has that same ability, though slightly muted given you’re limited to six nodes and 36TB (for now hopefully). I wanted to walk through the exercise, to see how node changes are accomplished in the virtual version, which is different from the physical version.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

 

Adding a Node

Adding a node to the IsilonSD Edge cluster is very easy, as long as you have an ESX host that meets all the criteria ready. If you recall from building our test platform, we built a fourth node for this very purpose.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. Click the + button
  5. The Management Server will again search for candidates; if any are found, it will allow you to select them.
  6. Again select the disks and their roles, and then proceed; all the cluster and networking information already exists.

 

Just like creating the cluster, the OVA will be deployed, data stores created (if you used raw disks) and the IsilonSD node brought online. This time the node will be added into the cluster, which will start a rebalance effort to re-stripe the data across all the nodes, including the new one.

Keep in mind, IsilonSD Edge can scale up to six nodes, so if you start with three you can double your space.

Removing a Node

Removing a node is just as straightforward as adding one, as long as you have four or more nodes. This action can take a very long time, depending on how much data you have, as all the data must be re-striped before the node can be removed from the cluster.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. Select the node to evict (in our case, node 4)
  5. Click the – (minus) button.
  6. Double check the node and select Yes
  7. Wait until the node Status turns to StopFail

 

During the smartfail operation, should you log onto the IsilonSD Edge administrator GUI, you’ll notice in the Cluster Overview that the node you are removing has a warning light next to it. This is also a good summary screen for gauging the progress of the smartfail, by comparing the % column of the node you’re evicting to the other nodes. In the picture below, the node we chose to remove is now <1% used, while the other three nodes are at 4% or 5%, meaning we’re almost there.

IsilonSD_RemoveNodeClusterOverview

 

 

Drilling into that node is the best way to understand why it has a warning; there you will see the message that the node is being smartfailed.

IsilonSD_RemoveNodeSmartFailMessage

When the smartfail is complete, you still have some cleanup activities.

*Note: there is no sound; the video is just to follow along with the steps.
  1. In the vSphere Web Client, return to the Manage IsilonSD Cluster tab
  2. Select the cluster (in our case, Isilon01)
  3. Switch to the Nodes tab
  4. The node you previously set to evict should be red in the Status
  5. Select the node, then click on the trash icon.
  6. This will delete the virtual machine and associated VMDKs

 

If you provided IsilonSD unformatted disks, the datastores the wizard created will still exist, and you might want to clean them up. If you want to re-add the node, you’ll need to wait a while, or restart the vCenter Inventory Service, as it takes a bit for the inventory to update.
