mdschneider

About mdschneider

Strategic Tactician | Architect | Geek | EMC Elect |

Founder & Principal @ http://vernex.io

New Chapter – Same Book

Today is my first day in a new role; I’m joining EMC as an Advisory Systems Engineer on the Enterprise team. If you looked at my LinkedIn profile, this might seem like a big change from what I’ve been doing for the past ten years; personally, though, it’s just the next chapter in the same book.

DirkGentlyBookCover01

What book, you may ask? Well, if I had to pick my favorite author, it would be Douglas Adams: the unique blend of dry, witty humor, science fiction, technology, and, frankly, just the bizarre anti-climactic twists. So if I picked a favorite book, you’d think it would be his seminal work, The Hitchhiker’s Guide to the Galaxy. While I adore those books to the extent that I reference them almost daily, it actually would be Dirk Gently’s Holistic Detective Agency.

I believe I was around 10 when I first read the book; full of questions about myself and, well, life, the universe, and everything. While Hitchhiker simply pokes fun at asking such audacious questions, Dirk tackles them through understanding and exploiting the fundamental interconnectedness of all things in his seemingly erratic approach to detecting. If you haven’t read the book (go do it now), I won’t ruin it for you. But Dirk believes that everything is connected, that only by looking at everything, even the seemingly unrelated, can you solve the whole problem. So much so, in fact, that Dirk cracked his greatest case by asking a child, whom he felt was free of the filters that hide the whole solution.

The book and Dirk’s approach stuck with me as I started working in IT. I began to notice how much it applied to technology. Not that I started believing that ghosts or time travel explained issues (though it often does seem more likely); but rather, like on the 10Base2 network I first managed, the problem was often somewhere down the wire, where someone simply kicking back their feet could take down the whole system. The proverbial butterfly flapping its wings is an everyday truth in IT.

As I took on more complex roles, even sometimes seemingly unsolvable issues and projects, I always imagined myself as Dirk. I tried to look with an open mind, not scared of retreading previous tracks, asking seemingly stupid questions or even posing ideas that appeared unorthodox (ironically, in technology I’ve learned that today’s unorthodox is tomorrow’s status quo).

As my roles grew from administrative and troubleshooting in nature toward designing and developing solutions, I continued this approach, always wanting to ensure my portion of the design was well connected with the whole, and that the whole worked together elegantly. I quickly realized that to be successful in this approach, I needed broad experience across all the elements that are interconnected in IT.

So I made it my goal to shift my perspective continually, to take new opportunities outside of my comfort zone, bringing my knowledge with me while learning how the different aspects of IT impact each other.

Taking stock of this goal, I count the blessings of opportunities over the years. I’m so lucky to have personal experience across dimensions of IT such as:

  • Domains like programming, quality assurance, infrastructure, operations, performance/capacity, database management, and even more as a leader.
  • Technologies across network, storage and servers; Windows, Linux, Unix and mainframe; coding languages of all flavors.
  • Industries traversing insurance, healthcare, travel, retail, real estate and finance.
  • Organizations that were cloud computing companies, software vendors, and brick-and-mortar enterprises.
  • Companies with employee counts in the hundreds, thousands and hundreds of thousands.
  • Businesses ranging from young start-ups to more than 100 years old.
  • Nameless brands that nobody knew, to seeing daily commercials for my employer on TV.

Through those roles, I’ve been everything from an individual contributor, to a team leader, to a vice president with 130+ team members. Not just progressing up a traditional career ladder, but frequently leaving management roles to jump back into an individual contributor role.

I’m fortunate that, through all these facets, I’ve learned more about IT, business and myself. I hope that this diversity has improved my solution designs, made me a better employee, a better leader, and maybe even a better person. I’ve strived in all of these to see how things were interconnected, and in doing so learned that all those different facets are not just connected, but important in and of themselves. Being a VP was no more valuable than being an administrator. It’s pointless to build a datacenter if there aren’t programmers to build applications to run in it; though a programmer won’t get anything done without a laptop built by an admin. None of IT matters at all without a business to contribute to. Because all these elements are so connected, each is a critical portion of making the whole successful, and every aspect deserves respect.

Over the years, I often felt that building this experience was leading me to a specific role. At times I thought that destination was solution architecture, and having this broad experience did indeed help my designs be more holistic. Later I felt it was preparing me for larger leadership roles, and I hope it did, as I could relate to and mentor everyone on my team. Though, continually, I’m reminded of that first goal; that it’s not about the destination, but the journey itself. When I start feeling like I have enough diversity, that’s when I need to switch it up and find something new.

So, what’s missing from above, and why the new role and company? Well, I’ve never been on the sales side. So today I’m adding that to the list, learning a new aspect of IT. As well, while I’ve worked for vendors, they were software-only vendors. Now with EMC (and soon Dell EMC) I’ll learn more about being a full solution vendor (hardware, software & services).

I look forward to learning a new facet, to bringing my experience to the role, and to further learning how we all interconnect. I’m excited that this opportunity also allows more insight into more IT shops and business models; every one is different, has its own challenges, and offers opportunities to learn from. I look forward to trying to explain this new adventure to my children, as they always live up to Dirk’s belief and ask terrific questions with unknowing clarity.

I’d encourage you to look to the holistic side of your own role; how does what you do relate to those around you? If you’re comfortable, maybe you shouldn’t be. If you think you’re an expert in your field, maybe it’s time to change fields a bit. When was the last time you asked a peer in a different department for a perspective on how what you do is connected to them, or even asked a child for their perspective?

By | June 20th, 2016|EMC, Opinions|2 Comments

UnityVSA – Part 4: VVOLs

In part 1 of this series I shared that the move to HTML5 was the most exciting part of Unity. Frankly, this was because of the increased compatibility and performance of Unisphere for Unity, but even more because it signals (I hope) EMC shifting to HTML5 across the board.

If there is one storage feature inside Unity that excites me the most, it has to be VVOL support. So in this post, I’m going to dive into VVOLs on the UnityVSA we set up previously. As VVOLs are a framework, every implementation is going to differ slightly. As such, the switch to VVOLs in general, and the Unity flavor in particular, is going to require an adjustment to the architectural and management practices we’ve adopted over the years.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

First, for those not familiar, a little on VVOLs themselves. VVOL stands for Virtual Volumes; in essence, it simplifies the layers between the virtual machine and the storage array while allowing both vSphere and the storage array to have a deeper understanding of each other. A VVOL itself directly correlates to a virtual disk attached to a virtual machine, including the configuration file and swap file every VM has. Enabling VVOLs is VASA (vSphere Storage APIs for Storage Awareness), with which the array can describe the attributes of the storage presented to vSphere. These two core tenets of VVOLs allow the vSphere layer to see deeper into the storage, while the storage layer can see the more granular virtual machine and virtual disk usage.

In practice, this provides a better management framework for the move most in the vSphere realm have been making: creating larger datastores with a naming convention that denotes the storage features (flash, tiering, snapshots, etc.). Where previously vSphere admins would need to learn these conventions to determine where to place VMs, with VVOLs this can be abstracted into Storage Policies, with the vSphere admin simply selecting the appropriate policy during creation.

So new terms and concepts to become familiar with:

  • Storage Provider
    • Configured within vCenter, this is a link to the VASA provider, which in turn shares the storage system details with vSphere.
    • For Unity, the VASA provider is built in and requires no extra configuration on the Unity side.
  • Protocol Endpoint
    • This is the storage-side access point that vSphere communicates with; they work across protocols and replace LUNs and mount points.
    • On Unity, Protocol Endpoints are created automatically through the VVOL provisioning process.
  • Storage Container
    • This essentially replaces the LUN, though a storage container is much more than a LUN ever was, as it can contain multiple types of storage on the array, which effectively means it can have multiple LUNs.
    • In vSphere, a storage container maps to a VVOL Datastore (shown in the normal datastore section of vSphere).
    • Unity has mirrored this naming in Unisphere, calling the storage container a ‘Datastore’.
    • In Unity, a Datastore can contain multiple Capability Profiles (which, if you remember, in Unity is synonymous with a Pool).
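
As an aside, everything in this list is also visible through Unity’s REST API, which is a handy way to confirm how pools, capability profiles and datastores line up as you work through the rest of this post. Here is a rough Python sketch using the requests library; the hostname is a placeholder and the resource type names are from memory, so treat them as assumptions and verify them against the Unisphere Management REST API reference for your code level:

    # Rough sketch: list Unity pools and capability profiles over the REST API.
    # Assumes `pip install requests`; the hostname is a placeholder and the resource
    # type names (pool, capabilityProfile) should be verified against the API reference.
    import requests

    UNITY = "https://unity.lab.local"      # hypothetical management address
    session = requests.Session()
    session.auth = ("admin", "Password123#")
    session.headers.update({"X-EMC-REST-CLIENT": "true"})  # required by the Unity REST API
    session.verify = False                                  # lab only; use real certificates in production

    for rest_type in ("pool", "capabilityProfile"):
        resp = session.get(f"{UNITY}/api/types/{rest_type}/instances",
                           params={"fields": "name,id"})
        resp.raise_for_status()
        names = [entry["content"]["name"] for entry in resp.json()["entries"]]
        print(f"{rest_type}: {names}")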

To fully explore and demonstrate the VVOL functionality in Unity, we’re going to perform several sets of actions. I’m going to share these in video walkthroughs (with sound), as there are multiple steps.

  1. Create additional pools and capability profiles on the UnityVSA then configure vSphere and Unity with appropriate connection for VVOL
  2. Provision a VVOL Datastore with multiple capability profiles and provision a test virtual machine on the new VVOL Datastore
  3. Create a vSphere Storage Policy and relocate the VM data
  4. Create advanced vSphere Storage Policies, extending the VM to simulate a production database server

 

First some prep work and connecting vSphere and Unity:

  • Add 4 new virtual disks to the UnityVSA VM
  • Create two new Unity pools
    • 1 with 75GB as single tier
    • 1 with 15GB, 25GB and 55GB as multi-tier with FastVP
  • Link Unisphere/Unity to our vCenter
  • Create a Storage Provider link in vSphere to the Unity VASA Provider

 

Next, let’s provision the VVOL Datastore or “Storage Container”:

  • Create a Unity Datastore (aka “Storage Container”) with three Capability Profiles (as such, three pools)
  • Create a vSphere VVOL Datastore
  • Investigate VVOL Datastore attributes

 

Provisioning a virtual machine on the new storage looks the same as traditional datastores, but there is more than meets the eye:

  • Create a new virtual machine on the VVOL Datastore
  • Investigate where the VM files are placed
  • See the VM details inside Unisphere
  • Create a simple Storage Policy in vSphere
  • Adjust the virtual machine storage policy and watch the storage allocation adjustment

 

Now let’s consider more advanced usage of VVOLs. With the ability to create custom tags in Unisphere Capability Profiles, we have an unlimited mechanism to describe the storage in our own words. You could use these tags to create application specific pools, and thus vSphere Storage Policies for admins to target VMs related to an application. You could also use tags for tiers (Web, App, DB), or in the example below, we’re going to create vSphere Storage Policies and Unity capability tags to partition a database server into Boot, Data and Backup storage types.

  • Modify our three Capability Profiles to add tags: Boot, DB and Backup.
  • Create vSphere Storage Policies for each of these tags.
  • Adjust the boot files of our test VM to leverage the Boot Storage Policy
  • Add additional drives to our test VM, leveraging our DB and Backup Storage Policies; investigate where these files were placed

 

Hopefully you now have not only a better understanding of how to set up and configure VVOLs on EMC Unity storage, but also a deeper understanding of the VVOL technology in general. This framework opens brand new doors in your management practices; imagine a large Unity array with multiple pools and capabilities all being provisioned through one Storage Container and VVOL Datastore, leveraging Storage Policies to manage your data placement rather than carving up numerous LUNs.

With the flexibility of Storage Policies, you can further inform the administrators creating and managing virtual servers about what storage characteristics are available. If you have multiple arrays that support VVOLs and/or VSAN, your policies can work across arrays and even vendors. This abstraction allows further consistency inside vSphere, streamlining management and operations tasks.

You can see how, over time, this technology has advantages over the traditional methods we’ve been using for virtual storage provisioning. However, before you start making plans to buy a new Unity array and replace all your vSphere storage with VVOLs, know that, as with any new technology, there are still some limitations. Features like array-based replication, snapshots, even quiescing VMs are all lagging a bit behind the VVOL release, and all are highly dependent on your environment and usage patterns. I expect quick enhancements in this area, so research the current state and talk with your VMware and EMC reps/partners.

By | May 27th, 2016|EMC, Home Lab, Storage, Train Yourself|2 Comments

UnityVSA – Part 3: Initial Configuration Wizard

When last we met, we had deployed a UnityVSA virtual appliance; it was up and running, sitting at the prompt for the Initial Configuration Wizard.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

If you’re following along and took a break, your browser session to Unisphere likely expired, which means you also lost the prompt for the Initial Configuration Wizard. Don’t fret; you can get it back. In the upper left corner of Unisphere, click on the Preferences button. On the main preferences page, there is a link to launch the Initial Configuration Wizard again.

UnityVSA_PreferencesButton

UnityVSA_InitialConfigurationWizardLink

Ok, so back to the wizard? Good. This wizard is going to walk through a complete configuration of the UnityVSA, including changing passwords, licensing, network setup (DNS, NTP), creating pools, the e-mail server, support credentials, your customer information, ESRS, and iSCSI and NAS interfaces. You can choose to skip some steps, but if you complete all portions, your UnityVSA will be ready to provision storage across block and file, with LUNs, shares or VVOLs.

Before we get going, let’s cover some aspects of the VSA to plan for.

First, there is a single SP (service processor in Unity-speak; or a ‘brain’). The physical Unity (and the VNX and CLARiiON before it, if you are familiar with the line) has two SPs. These provide redundancy should an SP fail; given that, for all intents and purposes, an SP is an x86 server, it’s susceptible to common failures. The VSA instead relies on vSphere HA to provide redundancy should a host have a failure, and vSphere DRS/vMotion to move the workload preemptively for planned maintenance. This is germane because, for the VSA, you won’t be planning for balancing LUNs between the SPs, nor creating multiple NAS servers to span SPs, nor even iSCSI targets across SPs to allow for LUN trespass.

The second is size limitations. I’m using the community edition, a.k.a. Free and Frictionless, which is limited to 4TB of space. I do not get any support, and as such ESRS will not work. If you’re planning on a production deployment, that 4TB limit will be increased, up to 50TB (based, of course, on how much you purchased from EMC), and you’ll then have a fully supported Unity array. Along with the overall size limit, you are also limited to 16 virtual disks behind the VSA (meaning 16 VMDK files you assign the VM), so plan accordingly. In my initial deployment I provided a 250GB VMDK, so if I add 15 more (16 total), I hit my 4TB max.

Third are the key differences from a physical array to prepare for:

  • Max pool LUN size: 16TB
  • Max pool LUNs per system: 64
  • Max pools per system: 16
  • No fibre channel support
  • No write caching (to ensure data integrity)
  • No synchronous replication
  • No RAID – this one is crucial to understand. In a physical Unity, raw disks are put into RAID groups to protect against data loss during a drive failure. UnityVSA relies on the storage you provide to be protected already. Up to the limits mentioned, you can present VMDKs from multiple vSphere datastores, even from separate tiers of storage, leveraging FAST VP inside UnityVSA.

Fourth and finally, a few vSphere-specific notes. UnityVSA comes installed with VMware Tools, but you can’t modify them, so don’t attempt automatic VMware Tools upgrades. The cores and memory are hard-coded, so don’t try adding more to increase performance; rather, split the workload onto multiple UnityVSA appliances. You cannot resize virtual disks once they are in a UnityVSA pool; again, add more instead, but pay attention to the limits. EMC doesn’t support vSphere FT, VM-level backup/replication or VM-level snapshots.

 

UnityVSA_ICW__1

Got all that? Now let’s set up this array! I’m going to step through the options in detail this time, because the wizard is packed full of steps; the video is also at the bottom.

UnityVSA_ICW__26

First off, change your passwords. If you missed it earlier, here are the defaults again:

  • admin/Password123#
  • service/service

The “admin” account is the main administrator of Unisphere, while the “service” account is used to log into the console of the VSA, where you can perform limited admin steps, such as changing IP settings should you need to move networks, as well as collect logs.

 

UnityVSA_ICW__25

Next up, licensing. Every UnityVSA needs a license, even if you’re using the Community edition. If you purchased the array, you should go through all the normal channels: LAC, support, or your rep. For the community edition, copy the System UUID and follow the link: Get License Online
 

UnityVSA_ICW__24

The link to get a license will take you to EMC.com; again, you’ll need to log on with the same account you used to download the OVA. Then simply paste in that System UUID and select “Unity VSA” from the drop-down. Your browser will download a file that is the license key; from there you can import it into Unisphere.

I will say, I’ve heard complaints that needing to register to download and create a license is ‘friction’, but it was incredibly easy. I don’t take any issue with registration; EMC has made it simple, and no one from sales is calling me to follow up. I’m sure EMC is tracking who’s using it, but that’s good data: downloading the OVA vs. actually creating a license. I don’t find this invalidates the ‘Free and Frictionless’ motto.

UnityVSA_ICW__23

Did you get this nice message? If so, good job; if not… well, try again.

UnityVSA_ICW__22

The virtual array needs DNS servers to talk to the outside world: communicate through ESRS, fetch training videos, etc. Make sure to provide DNS servers that can resolve internet addresses.

UnityVSA_ICW__21

Synchronizing time is always important; if you are providing file shares, it’s extra important that you’re in sync with your Active Directory. Should a client and server be more than 5 minutes apart, access will be denied. So use your AD server itself, or an NTP server that is in sync with AD.
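
If you want a quick, scripted sanity check of that drift, a few lines of Python will do it; this is a minimal sketch assuming the third-party ntplib package and a placeholder NTP source:

    # Minimal sketch: report local clock offset from an NTP server (assumes `pip install ntplib`).
    import ntplib

    NTP_SERVER = "dc01.lab.local"  # hypothetical domain controller / NTP source

    response = ntplib.NTPClient().request(NTP_SERVER, version=3)
    print(f"Offset from {NTP_SERVER}: {response.offset:+.3f} seconds")
    if abs(response.offset) > 300:
        print("WARNING: more than 5 minutes of drift; Kerberos authentication will fail.")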
 

UnityVSA_ICW__20

Now we’re getting into some storage. In Unity, pools are groupings of storage which are carved up to present to hosts. In the VSA, a pool must have at least one virtual disk; if you have more, data will be balanced across the virtual disks.

A pool is the foundation of all storage, without one you cannot setup anything else on the virtual array.

 

UnityVSA_ICW__18

When creating a Pool, you’ll want to give it a name and description. Given this is a virtual array, consider including the cluster and datastore names in the description so you know where the storage is coming from.

For my purposes of testing, I named the Pool Pool_250 simply to know it was on my 250GB virtual disk.

 

UnityVSA_ICW__19

Once named, you’ll need to give the pool some virtual disks. Recall we created a 250GB virtual disk; here we are adding that into the pool.

When adding disks you need to choose a tier, typically:

  • Flash = Extreme Performance
  • FC/SAS = Performance
  • NL-SAS/SATA = Capacity

The virtual disk I provided the VSA is actually on VSAN running all-flash, but I’m selecting Capacity here simply to explore FAST VP down the road.

 

UnityVSA_ICW__17

Next up, I need to give the Pool a Capability Profile. This profile is for VMware VVOL support. We’ll cover VVOLs in more depth another time, but essentially they allow you to connect vSphere to Unity and assign storage at a VM level. This is done through vSphere Storage Policies that map to the Capability Profile.

So what is the Capability Profile used for? It encompasses all the attributes of the pool, allowing you to create a vSphere Storage Policy. There is one capability profile per pool; it includes the Service Tier (based on your storage tier and FAST VP), space efficiency options and any extra tags you might want to add.

You can skip this by not checking the Create VMware Capability Profile for the Pool box at this point; but you can also modify/delete the profile later.

I went ahead and made a simple profile called CP_250_VSAN

 

UnityVSA_ICW__16

Here are the constraints, or attributes, I mentioned above. Some are set for you, then you can add tags. Tags can be shared across pools and capability profiles. I tagged this ‘VSAN’, but you could tag for applications you want on this pool, or application tiers (web, database, etc), or the location of the array.

This finishes out creating the pool itself.

 

UnityVSA_ICW__14

The Unity array will e-mail you when alerts occur; configure this to e-mail your team when there are issues.
 

UnityVSA_ICW__13

Remember me mentioning reserving a handful of IP addresses? Here we start using them. The UnityVSA has two main ways to attach storage; iSCSI and NAS (the third, fibre channel, is available on the physical array).

If you want to connect via iSCSI, you’ll need to create iSCSI Network Interfaces here.

 

UnityVSA_ICW__12

Creating an iSCSI interface is easy: you’ll pick between your four ethernet ports and provide an IP address, subnet mask and gateway. You can assign multiple IP addresses to the same ethernet port; you can also run both iSCSI and NAS on the same ethernet port (you can’t share an IP address, though).

How you leverage these ports is up to your design. Keep in mind, the UnityVSA itself is a single VM and thus a single point of failure. You could, though, put the virtual network cards on separate virtual switches to provide some network redundancy, or put them into separate VLANs to provide storage to different network segments.

 

 

UnityVSA_ICW__11

I created two iSCSI interfaces on the same network card, so that I can simulate multi-pathing at the client side down the road.
 

UnityVSA_ICW__9

Next up is creating a NAS Server; this provides file services for a particular pool. Provide a name for the NAS server, the pool to bind it to, and the service processor for it to run on (only SPA for the VSA).
 

UnityVSA_ICW__8

With the NAS Server created, it will need an ethernet port and IP information. Again, this can be the same port as iSCSI, or a different one; your choice; but you CANNOT share IP addresses. I chose to use a different port here.

UnityVSA_ICW__7

Unity supports both Windows and *nix file shares, as well as multiple options for how to secure the authentication and authorization of those shares. Both the protocol support and the associated security settings are per NAS Server. Remember, we can create multiple NAS Servers; this is how you provide access across clients.

For example, if you have two Active Directory forests you want to provide shares for, one NAS Server cannot span them, though you can simply create a second NAS Server for the other forest.

Or if you want to provide isolation between Windows and *nix shares, simply use two NAS servers, each with single protocol support.

One pool may have multiple NAS servers, but one NAS server can NOT have multiple pools.

This is again where the multiple NICs might come into play on UnityVSA. I could create a NAS Server on a virtual NIC that is configured in vSphere for NFS access, while another NAS Server is bound to a separate virtual NIC that my Windows clients can see for SMB shares.

For my initial NAS Server, I’m going to use multi-protocol and leverage Active Directory. This will create a server entry in my AD. I’m also going to enable VVOL, NFSv4 and configure secure NFS.

 

 

UnityVSA_ICW__6

This is the secure NFS option page; by having NFS authenticate via Kerberos against Active Directory, I can use AD as my LDAP as well.
 

UnityVSA_ICW__5

With secure NFS enabled, my Unix Directory Service is going to target AD, so I simply need to provide the IP address(es) of my domain controller(s).
 

UnityVSA_ICW__4

Given I’m using Active Directory, the NAS server needs DNS servers; these can be different from the DNS servers you entered earlier if you separate DNS by zone.
 

UnityVSA_ICW__3

I do not have another Unity box to configure replication against at this point; it’s something I’ll explore down the road, so I’m leaving this unchecked.
 

UnityVSA_ICW__2

At this point all my selections are ready; clicking Finish will apply all the configuration options.

At this point, my UnityVSA is connected to the network, ready to carve up the pool into LUNs, VVOLs or shares. Everything I accomplished in this wizard can be done manually inside the GUI; the Initial Configuration Wizard just streamlines all the tasks involved in bringing up a new Unity array. If you have a complete Unity configuration mapped out, you can see how this wizard would greatly reduce the time to value. In the next few posts I’ll explore the provisioning process, like how to leverage the new VVOL support with vSphere.

Here is the silent video, so you can see the steps I skipped and the general pacing of this wizard.

By | May 25th, 2016|EMC, Home Lab, Storage|2 Comments

UnityVSA – Part 2: Deploy

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

So, to start this off, let’s go to the EMC Software Download page and navigate to UnityVSA to get the download. You’ll receive a login prompt; if you have not registered for an EMC account previously, you’ll need one to access the download. Once logged in, you’ll start downloading the OVA file for UnityVSA.

Yes, the last time I went straight to downloading OVAs without reading the manuals, my installation failed miserably. But I’m going to try an installation unassisted every time; maybe I’m a glutton for punishment, but I choose to be optimistic that installations should be intuitive, so I prefer trying unaided first.

That’s just me; you might want to check out the Installation Guide and FAQs. Here are the quick details, though… in order to run the VSA, you’ll need to be able to run a vSphere virtual machine with 2 cores (2GHz+ recommended), 12GB of RAM, and 85GB of hard drive space, plus however much extra space you want to give the VSA to be presented as storage. The VSA does not provide any RAID protection inside the virtual array, so if you want to avoid data loss from hardware failure, ensure the storage underneath the VSA is protected (RAID, VSAN, or a SAN itself). You’re also going to want at least two IP addresses set aside; I’d recommend about five for a full configuration. Depending on your network environment, you might want to have multiple networks available in vSphere. The UnityVSA has multiple virtual network cards to provide ethernet redundancy. One of those virtual NICs is for management traffic, which you could place on a separate virtual switch (and thus VLAN) if your topology supports this. My lab is basically flat, so I’ll be binding all virtual NICs to one virtual network.
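
If you want to sanity-check that a cluster has that kind of headroom before deploying, a few lines of pyVmomi will report it. This is a rough sketch where the vCenter address and credentials are placeholders; the same numbers are of course visible in the vSphere client:

    # Rough sketch: report per-cluster CPU/memory available to VMs (assumes `pip install pyvmomi`).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.ClusterComputeResource], True)
    for cluster in view.view:
        s = cluster.summary
        print(f"{cluster.name}: {s.numCpuCores} cores, "
              f"{s.effectiveCpu / 1000:.1f} GHz and {s.effectiveMemory / 1024:.0f} GB available to VMs")
    view.Destroy()
    Disconnect(si)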

With the downloaded OVA in hand, the initial deployment is indeed incredibly easy. If you’re familiar with OVA deployments, this won’t be very exciting. If you’re not familiar with deploying an OVA, you can follow along in the video below. I’m going to be deploying UnityVSA on my nested ESXi cluster running VSAN 6.2. You could install directly on an ESXi host, or even VMware Workstation or Fusion; but that’s another post.

*Note: there is no sound; this is to follow along with the steps.
  1. In vCenter, right-click the cluster and select “Deploy OVF Template”
  2. Enter the location you saved the UnityVSA OVA file
  3. Select the check box “Accept extra configuration details”; this OVA contains resource reservations, which we’ll touch on below.
  4. Provide a name; this will be the virtual machine name, not the actual name of the Unity array
  5. Select the datastore to house the UnityVSA; in my case I’m putting this on VMware Virtual SAN.
  6. Select which networks you wish the UnityVSA bound to.
  7. Configure the “System Name”; this is the actual name of the VSA, and I’d recommend matching the virtual machine name you entered above.
  8. Configure the networking; I’m a fan of leveraging DHCP for virtual appliance deployments when possible, as it eliminates some troubleshooting.
  9. I do NOT recommend checking the box “Power on after deployment”; the OVA deployment does not include any storage for provisioning pools, and for the initial wizard to work, you’ll want to add some additional virtual hard drives.

 

UnityVSA_ResourceReservations

 

A couple notes on the virtual machine itself to be aware of. EMC’s intent with UnityVSA is to provide a production-grade virtual storage array. To ensure adequate performance, when the OVA is deployed the resulting virtual machine has reservations in place on both CPU and memory. These reservations will ensure the ESX host/cluster underneath the VSA can deliver the appropriate amount of resources for the virtual array to function. If you are planning on running UnityVSA in a supported, production environment, I recommend leaving these reservations. Even if you have enough resources to make the reservations unnecessary, I’m sure EMC Support will take issue with you removing them should you need to open a case. If this is just for testing and you are tight on resources, you can remove these reservations.

 

 

 

 

 

Next, let’s add that extra hard drive I talked about and power this up.

 

*Note: there is no sound; this is to follow along with the steps.

UnityVSA-InitialConfigurationWizard

  1. In vCenter, right-click the UnityVSA and select Edit Settings
  2. At the bottom, under New Device, use the drop-down box to select “New Hard Disk” and press “Add”
  3. Enter the size; I’m making a 250GB VMDK. If you use the arrow to expand the hard drive settings, you can place it on a separate datastore; if you want to use FAST VP to tier between datastores, this is where you’ll want to customize the hard drive. I’m putting this new hard drive with the VM on VSAN. (If you’d rather script this step, there’s a rough sketch after this list.)
  4. Once the reconfigure task is complete, right click your VM again and select “Power On”
  5. I highly recommend opening the console to the VM to watch the boot up, especially on first boot as the UnityVSA will configure itself.
  6. When the console shows a Linux prompt, walk away. Seriously, walk away; get a coffee, take a smoke break, or work on something else. More is happening behind the scenes at this point and the VSA is not ready. You’ll only get frustrated, and this watched pot will not seem to boil.
  7. Ok, are you back? Close the console and go back to the virtual machine; ensure VMware Tools is running and the IP address shows up. If you chose DHCP as I did, you’ll need to note the assigned IP address. If you don’t see the IP address, the VSA is still not ready (see, I told you to walk away)
  8. Once the IP appears in vCenter, you’re safe to open a browser over SSL to that address.
  9. You should receive a logon prompt; the default login is:
    1. user: admin
    2. pw: Password123#
  10. Did you log in? Do you see an “Initial Configuration” wizard? Congratulations then, the VSA is up.
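
If you’d rather script the disk-add than click through Edit Settings (handy when you later add several VMDKs for pools), here’s a rough pyVmomi sketch. The vCenter address, credentials, VM name, size and thin provisioning are all placeholders/assumptions, not anything UnityVSA requires:

    # Rough sketch: add a thin-provisioned virtual disk to the UnityVSA VM (assumes `pip install pyvmomi`).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_vm(content, name):
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
        try:
            return next(vm for vm in view.view if vm.name == name)
        finally:
            view.Destroy()

    def add_disk(vm, size_gb):
        # Use the VM's existing SCSI controller and the next free unit number (unit 7 is reserved).
        controller = next(d for d in vm.config.hardware.device
                          if isinstance(d, vim.vm.device.VirtualSCSIController))
        used = [d.unitNumber for d in vm.config.hardware.device
                if getattr(d, "controllerKey", None) == controller.key]
        unit = max(used, default=-1) + 1
        if unit == 7:
            unit += 1

        disk = vim.vm.device.VirtualDisk()
        disk.controllerKey = controller.key
        disk.unitNumber = unit
        disk.capacityInKB = size_gb * 1024 * 1024
        backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
        backing.diskMode = "persistent"
        backing.thinProvisioned = True
        disk.backing = backing

        change = vim.vm.device.VirtualDeviceSpec()
        change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
        change.device = disk
        return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

    ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)
    task = add_disk(find_vm(si.RetrieveContent(), "UnityVSA-01"), size_gb=250)
    print("Reconfigure task submitted:", task.info.key)
    Disconnect(si)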

In the next post in this series, we’ll leverage this wizard to fast-track configuring the UnityVSA; but if you want, you can cancel the wizard at this point and configure the array manually, or restart the wizard later through Preferences.

For now, as evidenced by this post, the deployment of UnityVSA was straightforward and exactly what we’ve all come to expect of virtual appliances. From download to up and running, user interaction time was less than 2 minutes, with overall deploy time around 30 minutes.

 

 

 

By | May 24th, 2016|EMC, Home Lab, Storage|3 Comments

UnityVSA – Part 1: HTML5 FTW!

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

In my last post, I shared my pet project tracking the Twitter activity of the EMC Elect at EMC World 2016. One of the report elements was the ever-popular word cloud showing the popularity of words used in the tweets. One of the more popular words (behind the obvious like EMC, EMCWorld, Dell, Session and Vegas) was “Unity”, which is not surprising given EMC took the wraps off a new mid-range storage product called Unity.

Unity is the evolutionary replacement of the VNX line, which in turn replaced the CLARiiON; I’ve lost track of how many of these arrays I’ve had over the years, but I’d guess it’s north of 50. The overall product line heritage is one of the most widely deployed storage arrays out there. Of all the enhancements in Unity over its predecessor, none excites me more than the new HTML5 interface. Not just because it’s a better interface for the Unity product, but because it (hopefully) signals a new wave of management tools from EMC that will all be HTML5.

CLARiiON-FC4700.png

I’ve been around long enough that I can remember when the CLARiiON didn’t have a web-based GUI. Think early 2000s (2002, I believe), when Navisphere was a thick client installed on your Windows workstation. Installing the client seemed so constraining then; in fact, we just built a dedicated (physical) server to run the Navisphere client to remote into. I remember my excitement talking to our EMC Sales Engineers about the forthcoming FLARE upgrade that would provide a web-based Navisphere. So much, in fact, that I signed up for pre-release testing (I was running development labs at the time, so we weren’t ‘production’). I recall that FLARE upgrade to this day because it was incredibly painful and incurred some outages (again, pre-release code).

In the 15 years since that upgrade, struggling with the Java JRE client configurations necessary for the web-based Navisphere, and then Unisphere, not to mention the constant performance frustrations, I’ve frequently regretted being a voice asking for a web GUI. Especially given my team, and every other team I talked to, simply set up a dedicated virtual server to act as the Unisphere client. So here I am again, excited over the concept of an HTML5 interface that requires no client, is cross-platform compatible and maybe even works on mobile devices.

Unlike the past 15 years, I don’t have to wait until it’s time to purchase a new storage array to explore the new GUI. EMC has provided multiple avenues to explore the HTML5 interface. The first is the Unity Simulator, which provides an environment to play with the new HTML5 Unity Unisphere. Take note, this only appears to work on Windows, and it’s not going to let you provision storage; but it certainly is a quick way to play with the UI. Download it here (registration required), and here are some screenshots.


If you just want to take a peek, or do not have a Windows machine handy, there are some great videos online, like the EMC Unity – Unisphere Overview; want more? Search YouTube for EMC Unity.

However, if you’re like me, and want to not only play with the UI but also be able to provision storage, assign it to hosts, explore VVOL support and create NAS shares, you’re in luck! Because with the release of Unity comes “UnityVSA” (Unity Virtual Storage Array), a virtualized version of the Unity array that runs on vSphere. Under EMC’s Free and Frictionless initiative, the UnityVSA is available in a community edition free of charge with no time bombs. You can also purchase UnityVSA, as it’s a fully supported, production-grade, software-defined version of Unity. So as part of my ongoing series on Free and Frictionless, I’m going to dive into the UnityVSA, record my deployment and testing, and share my thoughts and opinions.

Next post, we’ll deploy the OVA for UnityVSA.

By | May 23rd, 2016|EMC, Home Lab, Storage|3 Comments

Tracking EMC Elect Tweets @ EMC World

Staying abreast of technology is simultaneously a challenging and rewarding part of my career. Now and then I like to dive deep into an area to get my hands dirty. Recently I’ve had the itch to explore the latest offerings from Microsoft: SQL 2016, PowerBI and .Net. I’ve also wanted to get a little more hands-on experience with public APIs. All topics I’m familiar with, but sitting down and writing code, designing a database, calling APIs and building reports is a little different than simply understanding how it works.

With EMC World right around the corner, I figured I’d have a little bit of fun with the project, and track and report on the Twitter usage of my fellow EMC Elect during the event.

Down the road I’ll try to blog more about the details, but here is the gist of what’s behind the report. Leveraging Twitter, I created a list with all the EMC Elect Twitter accounts; you can subscribe to it here. Then, leveraging .Net and Twitter’s public API, I programmed a routine that continually monitors that list, collecting tweet information and storing it in SQL 2016. With PowerBI, I built a report that shows interesting tidbits on the Twitter usage collected. Recently Microsoft released a new feature in PowerBI that allows sharing reports with the internet without requiring authentication, which has enabled me to share the report. To keep the report up to date, I’m using the Personal Gateway for PowerBI, which allows me to connect my on-prem SQL 2016 database with the cloud-based reporting tool.
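
The collector itself is .Net, but purely as an illustration of the shape of that polling loop, here is a rough Python equivalent using tweepy and pyodbc. The list ID, table schema, server name and credentials are all hypothetical placeholders, and the real implementation differs:

    # Illustrative sketch only: poll a Twitter list and store tweets in SQL Server.
    # Assumes `pip install tweepy pyodbc`, valid Twitter API keys, and an existing dbo.Tweets table.
    import time
    import tweepy
    import pyodbc

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    db = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                        "SERVER=sql2016.lab.local;DATABASE=ElectTweets;Trusted_Connection=yes")
    cursor = db.cursor()

    since_id = None
    while True:
        # list_timeline maps to GET lists/statuses; the list id is a placeholder
        for t in api.list_timeline(list_id=1234567890, count=200, since_id=since_id):
            cursor.execute(
                "INSERT INTO dbo.Tweets (TweetId, ScreenName, CreatedAt, TweetText, Retweets, Favorites) "
                "VALUES (?, ?, ?, ?, ?, ?)",
                t.id, t.user.screen_name, t.created_at, t.text, t.retweet_count, t.favorite_count)
            since_id = max(since_id or 0, t.id)
        db.commit()
        time.sleep(60)  # stay well inside the Twitter rate limits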

I chose this stack and components in part because all of these are available free of charge now, a shift Microsoft has been making much like EMC’s Free and Frictionless movement. PowerBI allows a personal account (with limited data and options). Microsoft recently made SQL Developer Edition free, which essentially is all SQL Enterprise features, just for you as a single user. The .Net coding language has free Visual Studio options, with Nuget I can pull free libraries into my code from the web quickly, and of course, Twitter makes accessing the API free with an account.

I also hooked this up to Azure’s Machine Learning cloud to perform sentiment analysis on the keywords, which also has a free tier. Though given the volume of tweets, I’m not sure I’ll stay in the free tier band, so I’m still working on that aspect.

So, here are the EMC Elect Twitter statistics for the week of EMC World, May 1st-5th. I’ve embedded the report on my blog below; scroll past it for some information on what the charts mean, as well as links to get directly to the report and to data for the previous week to compare. If you are already a PowerBI user, have the mobile application and would like to watch on your phone, drop me a note… hopefully, Microsoft will allow sharing the mobile reports publicly down the road.

I’d also love comments on your personal deciphering of what this means; as always, data presented often needs a human to make it into information. As well, there are countless ways to slice this data now that it’s all in a database; if you have some burning questions or a different way you’d like to see the data, let me know, and I’ll try to build it (or at least run the query to see). I’m personally interested in what words will show up: will we see the names of new releases in the word clouds? Will we see more tweets given the event, or fewer? Will many of the European EMC Elect coming state-side for the event shift the time of day we see tweeting? Or will the fact we’re all out late at night counter-balance?

 

Follow this link for the full page report.

If the report above is empty, it’s hopefully because you’re reading this post before Sunday the 1st; otherwise I broke something. If it’s not the week of EMC World yet, the data won’t start populating, but you can look at the previous week’s report to see an example, as well as compare the two weeks.

Follow this link for the full page report. 

Like I mentioned above, this is available through the PowerBI mobile app, but only for PowerBI users, not general use. Because PowerBI is a responsive design, the reports above are designed for desktop (or tablet) viewing and don’t work well on your phone.

Due to the current preview mode of the public web publishing, and the free Personal Gateway, the update frequency of PowerBI is limited to scheduled refreshes, up to 8 per day. You can do live queries from on-prem or SQL Azure; it just isn’t free. So while the data collection from Twitter is live, the reports might be an hour or so behind.

I hope the report is fairly self-evident. A good dashboard shouldn’t require much explanation. But if I didn’t make it intuitive enough, here are some details on the elements.

  • Timeframe
    •  In the upper right is the timeframe of the report. All data in the report is within that timeframe. With the exception of the timelines that have a legend for “Last Week” and “This Week” of which “This Week” is inside the timeframe, and “Last Week” is the previous week to show a comparison.
    • Also on time frame, everything is in Central time. PowerBI needs to enhance their time localization functions (by enhance I mean create, since there is none I could find).
  • Total EMC Elect
    • The total number of EMC Elect who have valid Twitter accounts; I’m missing a couple at the time of publishing.
  • Total Tweets
    • The sum total of original Tweets created by the EMC Elect (meaning I’m not counting when an EMC Elect retweets someone else’s original tweet)
  • Total Retweets
    • How many times original Tweets from the EMC Elect were retweeted
  • Total Favorites
    • How many times the original tweets from the EMC Elect have been ‘liked’
  • EMC Elect Active
    • Of the total EMC Elect members, during the week how many have tweeted at least once
  • EMC World Mentioned
    • From the original tweets, how many mentioned EMC World (in any facet, hashtag or words)
  • EMC Mentioned
    • From the original tweets, how many tweets mentioned EMC in any way
  • Tweets by Weekday
    • Of all those original tweets, what day did they occur on, compared to the same day last week.
  • Tweets by Hour of Day
    • When are all those tweets coming out: all tweets to date summed by hour of day, then compared to last week.
  • Who Tweeted The Most
    • Ordered descending and a running sum, who has created the most original tweets
  • Who was Retweeted the Most
    • Ordered descending and a running sum, whose tweets have been retweeted the most
  • Whose Tweets Received the Most Likes
    • Ordered descending and a running sum, who’s received the most ‘likes’
  • What Words were Tweeted
    • This is your standard word cloud of all the words used in the original tweets from the list. The bigger the word, the more it’s used. I’ve removed common words but didn’t do any other filtering, so if it’s profane, it came from Twitter.
  • What #Hashtags were Tweeted
    • Same as words, just the hashtags
  • Most Mentioned
    • Ordered descending and a running sum, who the EMC Elect are mentioning in their tweets
  • Everyone Mentioned
    • Again a word cloud, but of the other users mentioned in the tweets.
  • Where are EMC Elect Tweeting from
    • This is a little light on data because few people tag their location when tweeting. But Twitter does store it when you do, and I wanted to play with the geospatial features in SQL and PowerBI.

 

By | April 28th, 2016|Code, Home Lab, MyDW|2 Comments

“It’s not the…”

I saw this meme on my social media feed, and it reminded me of my first rule of troubleshooting.

Never, ever, try to prove it’s not your area.

If you’re in IT, you’re familiar with ‘critical’ issues. You might call them SevA, Sev1, TITSUP, outage, all-hands or something else. But we all have them, and we’ve all been involved in some way.

How many times in one of those situations did you hear:
“It’s not the network”
“It’s not the storage”
“It’s not VMware”
“It’s not my code”

What you’re really saying is: “It’s not MY fault”.

Stop thinking this way. Stop trying to prove it’s not your fault, or your area, or your systems. Instead, ask yourself how you can solve the problem. How can you make it better? Maybe your area did not create the issue at hand. But if all you’re trying to do is prove it’s not your responsibility, you’re not actually trying to solve the problem; rather, you’re only trying to get out of the situation. It reminds me of childhood neighborhood games, yelling “1-2-3 NOT IT”. If everyone is simply trying to be “not it”, then the problem will never get solved.

Rather than trying to prove it’s not you, I urge everyone to prove it IS. Why? Well, for starters, if I keep trying to prove it’s my area, and I work under the assumption it might be… I might find out it actually is. It’s very easy to overlook a detail about why our area is part of the problem if all we’re trying to do is prove it’s not.

More than that, it’s a mindset.

The goal should always be to restore the service at any cost; does it matter why it happened during the outage? If, during a critical issue, I can find a way to improve the area I’m responsible for enough to alleviate the pain, I can help de-escalate the situation enough to restore service and then get to true root cause.

Everything in IT is related, the components all work together. If I leave the situation after proving it’s not my area, I’m not present in the conversation to help answer questions. We see this result as waiting to get someone back on the phone or in the war room to answer a question, delaying the resolution.

Continuing to stay engaged, I will learn more about how my role fits into the larger ecosystem my area supports. With that knowledge, I improve my ability to contribute, not just to the issue at hand, but to future designs. Plus, if I wish for “my area” to grow (a.k.a. get promotions), the more I know about the other areas, the better suited I am for a wider set of responsibilities.

Digging deeper and deeper into the tools and metrics I have may help uncover the key to solving the problem. I might be able to find the nugget that helps my peer solve it. Tracing network packets can help developers; comparing IOPS volume from before the incident can point to workload changes; leveraging security tools might help find code issues. I’ve witnessed this over and over again.

I have a great real-world example of this I use when talking to teams about critical response practices.

Years ago, we were experiencing an issue where a database server would crash every night during major ETL loads. For days no one could figure out why. The database looked fine, the logs were clean, but the whole server would panic every night. I was not responsible for anything from the operating system up at the time, so I was not involved in the initial troubleshooting effort. But with the problem ongoing, the teams who were responsible started reaching out for help.

I offered to take a look. While initially I didn’t see anything of concern, I asked when the issues happened and if I could watch. It was the middle of the night, so I agreed to stay up late with them that night to watch in real time. While the other team looked at the OS and Database monitoring tools; I opened up mine, vCenter, storage, etc. Right before the crash happened, in real-time monitoring mode inside vCenter, I noticed an enormous spike in packets per second at the network layer. We repeated the workload, and the crash and the spike repeated as well.

Why and what was happening? The ETL load was causing a large influx of data over the network, increasing the packets per second. While the 10Gbps bandwidth was not a bottleneck, the virtual network card was an older model (E1000), which in turn was overwhelming the kernel processor usage, confirmed by the Linux admin after I asked him to look at each processor’s usage statistics individually. The solution was to adjust the virtual NIC (to VMXNET3), as well as enable Receive Side Scaling to spread the network processing workload across multiple cores, avoiding starving the kernel on core 0.
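
As an aside, if you ever need to eyeball per-core usage the way that Linux admin did, a few lines of Python will do it. This is a minimal sketch assuming the psutil package; mpstat or top with the per-CPU view works just as well:

    # Minimal sketch: print per-core CPU utilization to spot a single saturated core
    # (e.g. core 0 drowning in network interrupt processing). Assumes `pip install psutil`.
    import psutil

    per_core = psutil.cpu_percent(interval=1, percpu=True)
    for core, pct in enumerate(per_core):
        flag = "  <-- hot" if pct > 90 else ""
        print(f"core {core}: {pct:5.1f}%{flag}")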

By looking at the tools for my area, we were able to find data that led us down the path to the ultimate cause of the issue and solved it. It wasn’t the vSphere Hypervisor causing the issue, but the monitoring at that level could point to the issue. I could help solve the issue, even though it wasn’t my fault. Because I was trying to help, not just trying to prove it wasn’t my fault.

This story also demonstrates another important point… it often is not any one person’s or any one area’s fault, but the combination of them, which means no one team can solve it on their own.

My last point, and maybe the most important personally, is also the easiest to forget. This time, it might not be your area, but next time it might be. When it is, don’t you want your peers there to help you? Moreover, isn’t it better to solve it together and make it a team problem? It might not be your culture today, but it can be with your help.

These are all the reasons I’ve told my teams “Don’t try to prove it’s NOT your area, try to prove IT IS, because if it is, YOU can fix it, and I need it fixed”.

So if you find yourself saying: “It’s not the <my area>”. Try instead “How can <my area> help?”

By | April 27th, 2016|Opinions, Pet Peeve, Soapbox|0 Comments

The ‘Value Added’ in VAR

Eric Hagstrom and I sat down to discuss the value VAR/Partners bring to the table, why they exist, what you should expect from them, and some advice from behind the scenes.

Listen on his blog, or subscribe to the Remain Silent podcast with your favorite tool.

http://gotdedupe.com/2016/04/sell-me-a-var/

 

By | April 14th, 2016|Uncategorized|0 Comments

vSphere HTML5 Web Client – Fling

Last week, a small light appeared at the end of the dim tunnel called the vSphere Web Client. If you’re a vSphere engineer, or even just a user managing applications or servers, you know what I’m talking about.

The vSphere Web Client has been problematic since day one. Some days it feels like you need an incantation to get the web client working, with Adobe Flash, browser security, and custom plug-ins. Even when you do, the slow response time of the web client directly impacts your productivity. Then there has been the very slow migration from the C# client, with components such as vCenter Update Manager just recently being available in the web client. Plus, only with the recent fling activity around the ESXi Embedded Host Client can we see a world where we don’t need the C# client at all.

Speaking of flings: the vSphere HTML5 Web Client is currently a VMware Fling, a technology preview built by VMware engineers with the intent that the community explore and test it, providing feedback. Often flings make it into the product in a future release, though that is largely dependent on the feedback from the community. This one needs our feedback!

If you are running vSphere 6, I highly encourage you to install the vSphere HTML5 Web Client. Currently, the deployment is through a vApp with the web server hosting the interface on a separate virtual machine from any of your vCenter Servers. This means there is very low risk to your environment, as you aren’t modifying your existing vCenter, rather extending it through the SSO engine to a separate web server.

At first glance, the instructions for the fling appear a little involved, though trust me, they are not. All told, it took me about 10 minutes to set up the H5 Web Client. The instructions are very detailed, so I won’t repeat them in depth; though I captured my deployment if you are interested.

I have both VCSA and vCenter for Windows in my lab, the fling will work with either (and will support an existing enhanced link mode setup like mine). I went the Windows route as it’s my platform services controller and SSO server.

  1. Download the OVA and the batch file
  2. Execute the batch file, which generates three config files
  3. Deploy the vSphere HTML5 Web Client OVA
  4. Once the new vApp is online, follow the instructions to create three new directories.
  5. Using a tool such as WinSCP, copy the three files from your vCenter server to the vSphere HTML5 Web Client server
  6. Set the NTP Server
  7. Start the web server
  8. Browse vCenter in beautiful native HTML5: no plug-ins, no Flash, no fiddling with your browser security. (If you are not already logged in to vSphere, the H5 site will redirect you to SSO for authentication)

 

*Note: there is no sound; this is to follow along with the steps. The video is not sped up; this is the actual deployment time.

Again, this is the very first preview, and as such the functionality is limited. Here are just a few screenshots while you’re waiting on the download.


By | April 6th, 2016|Home Lab, VMWare|0 Comments

IsilonSD – Part 6: Final Thoughts

Now that I’ve spent some time with IsilonSD, how does it compare to my experience with its physical big brother? Where does the software-defined version fit?

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

Overview (TL;DR version)

I’m excited by the entrance of this product into the virtualization space. Isilon is a robust product that can be tuned for multiple use cases and workloads. Even though Isilon has years of product development behind it and is currently on its eighth major software version, the virtual product is technically v1. As with any first version, there are some areas to work on; from my limited time with IsilonSD, I believe this is everything its physical big brother is, in a smaller, virtual package. However, it also brings along some of the limitations of its physical past: limitations to be aware of, but also limitations I believe EMC will be working to remove in vNext of IsilonSD.

If you ran across this blog because of an interest in IsilonSD, I hope you can test the product, either with physical nodes or with the test platform I’ve put together; only with customer testing and feedback can the product mature into what it’s capable of becoming.

Deep Dive (long version)

From running Isilons in multiple use cases and companies, I always wanted the ability to get smaller Isilon models for my branch offices. I’ve worked in environments where we had hundreds of physical locations of varying sizes. In many of these, we wanted file solutions in the spokes replicating back to a hub, and we wanted a universal solution that applied to locations of varying size, allowing all the spokes to replicate back to the hub. The startup cost for a new Isilon cluster was prohibitive for a smaller site, leading us to leverage Windows File Servers (an excellent file server solution, but that’s a different post) for those situations, bifurcating our file services stack, which increased management complexity, not just for the file storage itself, but for ancillary needs like monitoring and backups.

Given I've been running a virtualized Isilon simulator for as long as I've been purchasing and implementing the Isilon solution, leveraging a virtualized Isilon for these branch-office scenarios was always on my wish list. When I heard the rumors that an actual virtual product was in the works (vIMSOCOOL), I expected the solution to target this desire. When IsilonSD Edge was released and I read the documentation, I continued with this expectation. I watched YouTube videos that said this was the exact use case.

It's taken actually setting up the product in a lab to understand that IsilonSD Edge is not the product I expected it to be. Personally, though the solution by its nature is 'software defined' since it includes no hardware, it doesn't quite fit the definition I've come to believe SD stands for. This is less a virtual Isilon, or software-defined Isilon, than it is "bring your own hardware", IsilonBYOH if you will.

IsilonBYOH is, on its merits, an exciting product and highlights what makes Isilon successful: a great piece of software sitting on commodity servers. This approach is what's allowed Isilon to become the product it is, supporting a plethora of node types as well as hard drive technologies. You can configure a super-fast, flash-based NFS NAS as ultra-reliable storage behind web servers, where you store the data once and every node has shared access. You can leverage the multi-tenancy options to provide mixed storage in a heterogeneous environment, NFS to service servers and CIFS to end users, talking to both LDAP and Active Directory, tiering between node types to maximize performance for newer files and cost for older. You can create a NAS for high-capacity video editing needs, where the current data sits on SSD for screaming-fast performance, then moves to HDD when the project is complete. You can even create an archive-class storage array with cloud-competitive pricing to store aged data, knowing you can easily scale, support multiple host types and, if ever needed, incorporate faster nodes to increase performance.

With this new product, you can now start even smaller, purchasing your own hardware, running on your own network, and still leverage the same management and monitoring tools, even the same remote support. Plus you can replicate it just the same, including to traditional Isilon appliances.

However, to me, leveraging IsilonSD Edge does call for purchasing hardware, not simply adding this to your existing vSphere cluster and capturing unused storage. IsilonSD Edge, while running on vSphere, requires locally attached, independent hard drives. This excludes leveraging VSAN, which means no VxRail (and all the competitive HCIAs); it also means no ROBO hardware such as the Dell VRTX (and all the similar competitive offerings). In fact, just having RAID excludes you from using IsilonSD. These hardware requirements, specifically the dedicated disks, turn into limitations. Unless you're in the position to dedicate three servers, which you'll likely need to buy new to meet the requirements, you're probably not putting this out in your remote/branch offices, even though that's the goal of the 'Edge' part of the name.

When you buy those new nodes, you'd probably go ahead and leverage solid-state drives; the cost of locally attached SATA SSDs is quickly approaching parity with traditional hard drives. But understand, IsilonSD Edge will not take advantage of those faster drives the way its physical incarnation does… no metadata caching with the SD version. Nor can the SD version provide any tiering through SmartPools (you can still control the data protection scheme with SmartPools, and obviously you'll get a speed boost from SSD).

Given all this, the use cases for IsilonSD Edge get very narrow. With the inability to put IsilonSD Edge on top of ROBO designs, the likelihood of needing to buy new hardware, coupled with the 36TB overall limit of the software-defined version of Isilon, I struggle to identify a production scenario that is a good fit. The best-case scenario in my mind is purchasing hardware with enough drives to run both IsilonSD and VSAN, side by side, on separate drives; this would require at least nine drives per server (more, really), so you're talking some larger machines, and again, a narrow fit.

To me, this product is less about today and more about tomorrow; release one setting the foundation for the future opportunity of virtual Isilon.

What is that opportunity?

For starters, running IsilonSD Edge on VxRail, even deploying it directly through the VxRail marketplace; and by this, I mean running the IsilonSD Edge VMDK files on the VSAN datastore.

Before you say the Isilon protection scheme would double up storage needs on the VSAN model, keep in mind you can configure per-VM policies in VSAN. Setting Failures To Tolerate (FTT) to 0 is not generally recommended, but this is why it exists: let Isilon provide data protection while playing in the VSAN sandbox. Leverage DRS groups and rules to configure anti-affinity of the Isilon virtual nodes, keeping them on separate hosts. Would VSAN introduce latency compared to physical disk? Quite probably… though in the typical ROBO scenario that's not the largest concern. I was able to push 120Mbps onto my IsilonSD Edge cluster, and that was with nested ESXi all running on one host.
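To make the anti-affinity idea concrete, here's a rough Python sketch using pyVmomi that adds a VM anti-affinity rule so three Isilon virtual nodes never share a host. The vCenter address, credentials, cluster name, and node VM names are hypothetical placeholders for your environment; treat this as a sketch under those assumptions, not a polished tool.

```python
# Sketch: create a DRS anti-affinity rule keeping the IsilonSD virtual nodes
# on separate ESXi hosts. All names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER, USER, PASSWORD = "vcenter.lab.local", "administrator@vsphere.local", "changeme"
CLUSTER_NAME = "ROBO-Cluster"                              # hypothetical cluster name
NODE_NAMES = ["isilonsd-1", "isilonsd-2", "isilonsd-3"]    # hypothetical node VM names

ctx = ssl._create_unverified_context()   # lab only; validate certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the inventory and return the first managed object with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, CLUSTER_NAME)
node_vms = [find_by_name(vim.VirtualMachine, n) for n in NODE_NAMES]

rule = vim.cluster.AntiAffinityRuleSpec(
    name="isilonsd-node-separation", enabled=True, mandatory=True, vm=node_vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
cluster.ReconfigureComputeResource_Task(spec, modify=True)  # returns a vCenter task

Disconnect(si)
```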

All of this doesn't just apply to VxRail, but to its competitors in the hyper-converged appliance space, as well as a wide range of products targeted at small installations. To expand on the small-installation scenario: if IsilonSD had lower data protection options, like VSAN does, removing the need for six disks per node, or even for three nodes, it could fit in smaller situations. Why not trust the RAID protection beneath the VM and still leverage Isilon for the robust NAS features it provides? Meaning run a single-node Isilon; after all, those remote offices are likely providing file services with Windows or Linux VMs, relying on vSphere HA/DRS for availability and server RAID (or VSAN) for data loss prevention. Isilon has a rich feature set beyond just data protection across nodes. Even a single-node Isilon with SyncIQ back to a mothership has compelling use cases.

On the other side of the spectrum, putting IsilonSD in a public cloud provider, where you don't control the hardware and storage, has quite a few use cases. Yes, Isilon has CloudPools technology, which extends an Isilon into public cloud models that provide object storage. But a virtual Isilon running in, say, vCloud Air or VirtuStream, with SyncIQ to your on-premises Isilon, opens quite a few doors, such as for those looking at public cloud disaster-recovery-as-a-service solutions. Or moving to the cloud while still having a bunker on-premises for securing your data.

Outside of the need for independent drives, this is, an Isilon, running on vSphere. That’s… awesome! As I mentioned before, this opens some big opportunities should EMC continue down this path. Plus, it’s Free and Frictionless, meaning you can do this exact same testing as I’ve done. If you are an Isilon customer today, GO GET THIS. It’s a great way to test out changes, upgrades, command line scripts, etc.

If you are running the Free and Frictionless version, outside of the 36TB and six-node limit, you also do NOT get licenses for SyncIQ, SmartLock or CloudPools.

I'll say, given I went down this road from my excitement about Free and Frictionless, these missing licenses are a little disappointing. I've run SyncIQ and SmartLock, two great features, and was looking forward to testing them and having them handy to help answer questions I get when talking about Isilon.

CloudPools, while I have not run it, is something I've been incredibly excited about for years leading up to its release, so I'll admit I wish it were in the Free and Frictionless version, if only with a small amount of storage to play with.

Wrapping up: there are countless IT organizations out there, and I've never met one that wasn't unique. Even with some areas I'd like to see improved with this product, undoubtedly IsilonSD Edge will apply to quite a few shops. In fact, I've heard some customers were asking for a BYOH Isilon approach, so maybe this is really for them (though if so, the 36TB limit seems constraining). If you're looking at IsilonSD Edge, I'd love to hear why; maybe I missed something (certainly I have). Reach out, or use the comments.

If you are looking into IsilonSD Edge, outside of the drive/node requirements, here are some things to be aware of that caught my eye.

While the FAQs state you can run other virtual machines on the same hosts, I would advise against it. If you had enough physical drives to split between IsilonSD and VSAN, it could be done. You could also use NFS, iSCSI or Fibre Channel for datastores; but this is overly complex and, in all likelihood, more expensive than simply having dedicated hardware for IsilonSD Edge (or really, just buying the physical Isilon product). But given the datastores used by the IsilonSD Edge nodes are unprotected, putting a VM on them means you are just asking for the drive to fail and to lose that VM.

Because you are dedicating physical drives to a virtual machine, you cannot vMotion the IsilonSD virtual nodes. This means you cannot leverage DRS (Distributed Resource Scheduler), which in turn means you cannot leverage vSphere Update Manager to automatically patch the hosts (as it relies on moving workloads around during maintenance).

The IsilonSD virtual nodes do NOT have VMware Tools. This means you cannot shut down the virtual machines from inside vSphere (for patching or otherwise); rather, you'll need to enter the OneFS administrator CLI, shut down the Isilon node, and then go perform ESX host maintenance. If you have reporting in place to ensure your virtual machines have VMware Tools installed, running and at the supported version (something I highly recommend), you'll need to adjust it to exempt these nodes. Other systems that leverage VMware Tools, such as Infrastructure Navigator, will not work either.
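As one way to picture that reporting adjustment, here's a small, hedged pyVmomi sketch that lists VMs whose VMware Tools status isn't healthy while skipping the Isilon node VMs. The vCenter details and the "isilonsd-" naming convention used to identify the nodes are assumptions for the example, not anything prescribed by the product.

```python
# Sketch: report VMs with missing/unhealthy VMware Tools, excluding IsilonSD nodes.
# vCenter details and the "isilonsd-" name prefix are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()             # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.name.lower().startswith("isilonsd-"):    # skip the Isilon virtual nodes
        continue
    status = vm.guest.toolsStatus                  # e.g. toolsOk, toolsOld, toolsNotInstalled
    if status != "toolsOk":
        print(f"{vm.name}: {status}")
view.Destroy()
Disconnect(si)
```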

I might be overlooking something (I hope so), but I cannot find a way to expand the storage on an existing node. In my testing scenario, I built the minimal configuration of six data drives of a measly 64GB each. I could not figure out how to increase this space, which is something we're all accustomed to doing on vSphere (in fact, quickly growing a VM's resources is a cornerstone of virtualization). I can increase the overall capacity by adding nodes, but this requires additional ESX hosts. If this is true, again the idea of using 'unclaimed capacity' for IsilonSD Edge is marginalized.

IsilonSD wants nodes in a pool to be configured the same, specifically with the same size and number of drives. This is understandable, as it spreads data across all the drives in the pool equally. However, this lessens the value of 'capturing unused capacity'. Aside from the unprotected-storage point, if you were to have free storage on drives, your ability to deploy IsilonSD will be constrained by the volume with the lowest free space, as all the VMDK files (virtual drives) have to be the same size. Even if you had twenty-one independent disks across three nodes, if just one of them had less free space than the rest, that free space dictates the drive size you can configure.
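A quick back-of-the-napkin illustration of that constraint, with made-up free-space numbers:

```python
# Toy numbers: free space (GB) on the seven independent disks of one node.
# One 64GB straggler caps the VMDK size for every disk in the pool.
free_space_gb = [120, 115, 130, 64, 128, 125, 118]

vmdk_size_gb = min(free_space_gb)          # every data VMDK must match the smallest
raw_per_node_gb = vmdk_size_gb * len(free_space_gb)

print(f"Per-drive VMDK size: {vmdk_size_gb} GB")
print(f"Raw capacity captured per node: {raw_per_node_gb} GB "
      f"(vs. {sum(free_space_gb)} GB of actual free space)")
```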

Even though I'm not quite sure where this new product fits or what problem it solves, that's true of many products when they first release. It's quite possible this will open new doors no one knew were closed, and if nothing else, I'm ecstatic EMC is pursuing a virtual version of the product. After all, this is just version 1… what would you want in version 2? Respond in the comments!

By | April 4th, 2016|EMC, Home Lab, Opinions, Storage|2 Comments