
UnityVSA – Part 4: VVOLs

In part 1 of this series I shared that the move to HTML5 was the most exciting part of Unity. Frankly, that was partly because of the increased compatibility and performance of Unisphere for Unity, but more so because it signals EMC shifting to HTML5 across the board (I hope).

If there is one storage feature inside Unity that excites me the most, it has to be VVOL support. So in this post, I'm going to dive into VVOLs on the UnityVSA we set up previously. Because VVOLs is a framework, every implementation is going to differ slightly; both the switch to VVOLs itself and the Unity flavor of it will require an adjustment to the architectural and management practices we've adopted over the years.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

First, for those not familiar, a little on VVOLs themselves. VVOL stands for Virtual Volumes; in essence, it simplifies the layers between the virtual machine and the storage array while allowing vSphere and the storage array to have a deeper understanding of each other. A VVOL itself directly correlates to a virtual disk attached to a virtual machine, as well as the configuration file and swap file every VM has. Underpinning VVOLs is VASA (vSphere APIs for Storage Awareness), through which the array can describe the attributes of the storage presented to vSphere. These two core tenets of VVOLs allow the vSphere layer to see deeper into the storage, while the storage layer can see the more granular virtual machine and virtual disk usage.

In practice, this provides a better management framework for the move most of us in the vSphere realm have been making: creating larger datastores with a naming convention that denotes the storage features (flash, tiering, snapshots, etc.). Where previously vSphere admins had to learn those conventions to determine where to place VMs, with VVOLs this is abstracted into Storage Policies, and the vSphere admin simply selects the appropriate policy at creation time.

So new terms and concepts to become familiar with:

  • Storage Provider
    • Configured within vCenter, this is the link to the VASA provider, which in turn shares the storage system details with vSphere.
    • For Unity, the VASA provider is built in and requires no extra configuration on the Unity side.
  • Protocol EndPoint
    • This is the storage-side access point that vSphere communicates with; Protocol Endpoints work across protocols and replace LUNs and mount points.
    • On Unity, Protocol Endpoints are created automatically through the VVOL provisioning process.
  • Storage Container
    • This essentially replaces the LUN, though a storage container is much more than a LUN ever was, as it can contain multiple types of storage on the array; effectively it can do the work of multiple LUNs.
    • In vSphere a storage container maps to a VVOL Datastore (shown in the normal datastore section of vSphere).
    • Unity has mirrored this naming in Unisphere, calling the storage container ‘Datastore’.
    • In Unity, a Datastore can contain multiple Capability Profiles (which, if you remember, map one-to-one to Pools in Unity).

To fully explore and demonstrate the VVOL functionality in Unity, we're going to perform several sets of actions. I'm going to share these in video walkthroughs (with sound), as there are multiple steps.

  1. Create additional pools and capability profiles on the UnityVSA, then configure vSphere and Unity with the appropriate connections for VVOLs
  2. Provision a VVOL Datastore with multiple capability profiles and provision a test virtual machine on the new VVOL Datastore
  3. Create a vSphere Storage Policy and relocate the VM data
  4. Create advanced vSphere Storage Policies, extending the VM to simulate a production database server

 

First, some prep work and connecting vSphere and Unity (a quick scripted check follows the list):

  • Add 4 new virtual disks to the UnityVSA VM
  • Create two new Unity pools
    • 1 with 75GB as single tier
    • 1 with 15GB, 25GB and 55GB as multi-tier with FastVP
  • Link Unisphere/Unity to our vCenter
  • Create a Storage Provider link in vSphere to the Unity VASA Provider
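
If you'd rather verify that prep from a script than by clicking through Unisphere, here's a minimal sketch against the Unity REST API. It assumes the documented 'pool' resource type, its name/size fields, and the required X-EMC-REST-CLIENT header; the hostname and credentials are placeholders from my lab.

```python
import requests

# Lab placeholders: substitute your own Unity management address and credentials.
UNITY = "https://unity01.lab.local"
AUTH = ("admin", "Password123#")
HEADERS = {"X-EMC-REST-CLIENT": "true"}  # required on every Unity REST request

# List the pools and a couple of size fields to confirm both new pools exist.
resp = requests.get(
    f"{UNITY}/api/types/pool/instances",
    params={"fields": "name,sizeTotal,sizeFree"},
    headers=HEADERS,
    auth=AUTH,
    verify=False,  # the lab appliance uses a self-signed certificate
)
resp.raise_for_status()

for entry in resp.json().get("entries", []):
    pool = entry["content"]
    print(f'{pool["name"]}: {pool["sizeTotal"] / 2**30:.0f} GiB total, '
          f'{pool["sizeFree"] / 2**30:.0f} GiB free')
```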

 

Next, let’s provision the VVOL Datastore or “Storage Container”:

  • Create a Unity Datastore (aka “Storage Container”) with three Capability Profiles (as such, three pools)
  • Create a vSphere VVOL Datastore
  • Investigate VVOL Datastore attributes
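
Once the VVOL Datastore is mounted, you can confirm what vSphere sees with a short pyVmomi sketch; VVOL datastores report a summary type of "VVOL". The vCenter address and credentials below are lab placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab placeholders: adjust for your own vCenter.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        if s.type == "VVOL":  # VVOL datastores report this summary type
            print(f"{s.name}: {s.capacity / 2**30:.0f} GiB total, "
                  f"{s.freeSpace / 2**30:.0f} GiB free, accessible={s.accessible}")
finally:
    Disconnect(si)
```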

 

Provisioning a virtual machine on the new storage looks the same as traditional datastores, but there is more than meets the eye:

  • Create a new virtual machine on the VVOL Datastore
  • Investigate where the VM files are placed
  • See the VM details inside Unisphere
  • Create a simple Storage Policy in vSphere
  • Adjust the virtual machine storage policy and watch the storage allocation adjustment
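
If you want to do that investigation from a script instead of the datastore browser, a small pyVmomi sketch can walk the VM's file layout and show where each object landed; the VM name ("vvol-test01") and vCenter details are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab placeholders: vCenter details and the name of the test VM created above.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "vvol-test01")

    # layoutEx.file lists every file backing the VM (config, swap, disks, logs)
    # along with its datastore path, so you can see what ended up on the VVOL datastore.
    for f in vm.layoutEx.file:
        print(f"{f.type:<16} {f.size / 2**20:9.1f} MiB  {f.name}")
finally:
    Disconnect(si)
```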

 

Now let’s consider more advanced usage of VVOLs. With the ability to create custom tags in Unisphere Capability Profiles, we have an open-ended mechanism to describe the storage in our own words. You could use these tags to create application-specific pools, and thus vSphere Storage Policies that let admins target VMs related to an application. You could also use tags for tiers (Web, App, DB), or, as in the example below, create vSphere Storage Policies and Unity capability tags to partition a database server into Boot, Data and Backup storage types.

  • Modify our three Capability Profiles to add tags: Boot, DB and Backup.
  • Create vSphere Storage Policies for each of these tags.
  • Adjust the boot files of our test VM to leverage the Boot Storage Policy
  • Add additional drives to our test VM, leveraging our DB and Backup Storage Policies; investigate where these files were placed

 

Hopefully you now have not only a better understanding of how to set up and configure VVOLs on EMC Unity storage, but also a deeper understanding of the VVOL technology in general. This framework opens brand new doors in your management practices: imagine a large Unity array with multiple pools and capabilities all being provisioned through one Storage Container and VVOL Datastore, with Storage Policies managing your data placement rather than carving up numerous LUNs.

With the flexibility of Storage Policies, you can better inform the administrators creating and managing virtual servers about what storage characteristics are available. If you have multiple arrays that support VVOLs and/or VSAN, your policies can work across arrays and even vendors. This abstraction allows further consistency inside vSphere, streamlining management and operations tasks.

You can see how, over time, this technology has advantages over the traditional methods we’ve been using for virtual storage provisioning. However, before you start making plans to buy a new Unity array and replace all your vSphere storage with VVOLs, know that, as with any new technology, there are still some limitations. Features like array-based replication, snapshots, and even quiescing VMs are all lagging a bit behind the VVOL release, and how much that matters is highly dependent on your environment and usage patterns. I expect quick enhancements in this area, so research the current state and talk with your VMware and EMC reps/partners.

By | May 27th, 2016|EMC, Home Lab, Storage, Train Yourself|2 Comments

UnityVSA – Part 3: Initial Configuration Wizard

When last we met, we had deployed a UnityVSA virtual appliance; it was up and running, sitting at the prompt for the Initial Configuration Wizard.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

If you’re following along and took a break, your browser session to Unisphere likely expired, which means you also lost the prompt for the Initial Configuration Wizard. Don’t fret; you can get it back. In the upper left corner of Unisphere, click on the Preferences button. On the main preferences page, there is a link to launch the Initial Configuration Wizard again.

UnityVSA_PreferencesButton

UnityVSA_InitialConfigurationWizardLink

Ok, so back to the wizard? Good. This wizard walks through a complete configuration of the UnityVSA: changing passwords, licensing, network setup (DNS, NTP), creating pools, the e-mail server, support credentials, your customer information, ESRS, and iSCSI and NAS interfaces. You can choose to skip some steps, but if you complete all portions, your UnityVSA will be ready to provision storage across block and file, with LUNs, shares or VVOLs.

Before we get going, let’s cover some aspects of the VSA to plan for.

First, there is a single SP (service processor in Unity-speak; a ‘brain’). The physical Unity (and the VNX and CLARiiON before it, if you are familiar with the line) has two SPs. These provide redundancy should an SP fail; given that an SP is, for all intents, an x86 server, it’s susceptible to common failures. The VSA instead relies on vSphere HA to provide redundancy should a host fail, and on vSphere DRS/vMotion to move the workload preemptively for planned maintenance. This is germane because, for the VSA, you won’t be planning to balance LUNs between SPs, create multiple NAS servers to span SPs, or spread iSCSI targets across SPs to allow for LUN trespass.

The second is size limitations. I’m using the community edition, a.k.a. Free and Frictionless, which is limited to 4TB of space. I do not get any support, and as such ESRS will not work. If you’re planning a production deployment, that 4TB limit can be increased to as much as 50TB (based, of course, on how much you purchased from EMC), and you’ll have a fully supported Unity array. Alongside the overall size limit, you are also limited to 16 virtual disks behind the VSA (meaning 16 VMDK files assigned to the VM). So plan accordingly. In my initial deployment I provided a 250GB VMDK, so if I add 15 more of the same size (16 total), I hit my 4TB max.
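
The planning math is simple enough to keep in a scratch script; here's a quick back-of-the-envelope helper for those Community Edition limits, seeded with my 250GB example.

```python
# Community Edition limits described above: at most 16 virtual disks and 4 TB behind the VSA.
MAX_DISKS = 16
MAX_CAPACITY_GB = 4 * 1024  # treating 4 TB as 4096 GB for rough planning

existing_disks_gb = [250]   # my initial 250 GB VMDK

disks_left = MAX_DISKS - len(existing_disks_gb)
capacity_left_gb = MAX_CAPACITY_GB - sum(existing_disks_gb)

print(f"{disks_left} disk slots and {capacity_left_gb} GB of capacity remaining")
if disks_left:
    print(f"average size if every remaining slot is used: {capacity_left_gb / disks_left:.0f} GB")
```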

Third is simply the key differences from a physical array to prepare for.

  • Max pool LUN size: 16TB
  • Max pool LUNs per system: 64
  • Max pools per system: 16
  • No fibre channel support
  • No write caching (to ensure data integrity)
  • No synchronous replication
  • No RAID – this one is crucial to understand. In a physical Unity, raw disks are put into RAID groups to protect against data loss during a drive failure; UnityVSA relies on the storage you provide being protected already. Up to the limits mentioned, you can present VMDKs from multiple vSphere datastores, even from separate tiers of storage, leveraging FAST VP inside UnityVSA.

Fourth and finally, a few vSphere-specific notes. UnityVSA comes with VMware Tools installed, but you can’t modify it, so don’t attempt automatic VMware Tools upgrades. The cores and memory are hard coded, so don’t try adding more to increase performance; rather, split the workload onto multiple UnityVSA appliances. You cannot resize virtual disks once they are in a UnityVSA pool; again, add more instead, but pay attention to the limits. EMC doesn’t support vSphere FT, VM-level backup/replication, or VM-level snapshots.

 

UnityVSA_ICW__1

Got all that? Now let’s set up this array! I’m going to step through the options in detail this time, because the wizard is packed full of steps; the video is also at the bottom.
 

UnityVSA_ICW__26

First off, change your passwords. If you missed it earlier, here are the defaults again:

  • admin/Password123#
  • service/service

The “admin” account is the main administrator of Unisphere, while the “service” account is used to log into the console of the VSA, where you can perform limited admin steps (such as changing IP settings should you need to move networks) and collect logs.

 

UnityVSA_ICW__25

Next up, licensing. Every UnityVSA needs a license, even if you’re using the Community edition. If you purchased the array, you should go through the normal channels: your LAC, support, or your rep. For the Community edition, copy the System UUID and follow the link: Get License Online
 

UnityVSA_ICW__24

The link to get a license will take you to EMC.com; again, you’ll need to log on with the same account you used to download the OVA. Then simply paste in that System UUID and select “Unity VSA” from the drop-down. Your browser will download a file that is the license key; from there you can import it into Unisphere.

I will say, I’ve heard complaints that needing to register to download and create a license is ‘friction’, but it was incredibly easy. I don’t take issue with registration; EMC has made it simple, and no one from sales is calling me to follow up. I’m sure EMC is tracking who’s using it, but that’s good data (downloading the OVA vs. actually creating a license). I don’t find this invalidates the ‘Free and Frictionless’ motto.

UnityVSA_ICW__23

Did you get this nice message? If so, good job; if not… well, try again.
 

UnityVSA_ICW__22

The virtual array needs DNS servers to talk to the outside world, communicate through ESRS, get training videos, etc., so make sure to provide DNS servers that can resolve internet addresses.
UnityVSA_ICW__21

Synchronizing time is always important, and if you are providing file shares it’s especially important that you’re in sync with your Active Directory. Should a client and server drift more than 5 minutes apart, access will be denied. So use your AD server itself, or an NTP source that is in sync with AD.
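
If you want to sanity-check that skew before pointing Unity at a time source, here's a tiny sketch using the ntplib package; the domain controller name is a placeholder for whatever NTP source your AD uses.

```python
import ntplib

NTP_SOURCE = "dc01.lab.local"  # placeholder: your AD domain controller or NTP server

# Ask the NTP source for its time and compare it to this machine's clock.
response = ntplib.NTPClient().request(NTP_SOURCE, version=3)
skew = abs(response.offset)  # seconds of difference

print(f"offset vs {NTP_SOURCE}: {skew:.2f} seconds")
if skew > 300:
    print("WARNING: more than 5 minutes of skew; Kerberos/SMB authentication will fail")
```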
 

UnityVSA_ICW__20

Now we’re getting into some storage. In Unity, a pool is a grouping of storage that is carved up and presented to hosts. In the VSA, a pool must have at least one virtual disk, though if you provide more, data will be balanced across the virtual disks.

A pool is the foundation of all storage, without one you cannot setup anything else on the virtual array.

 

UnityVSA_ICW__18

When creating a Pool, you’ll want to give it a name and description. Given this is a virtual array, consider including the cluster and datastore names in the description so you know where the storage is coming from.

For my purposes of testing, I named the Pool Pool_250 simply to know it was on my 250GB virtual disk.

 

UnityVSA_ICW__19

Once named, you’ll need to give the pool some virtual disks. Recall we created a 250GB virtual disk, here we are adding that into the pool.

When adding disks you need to choose a tier, typically:

  • Flash = Extreme Performance
  • FC/SAS = Performance
  • NL-SAS/SATA = Capacity

The virtual disk I provided to the VSA is actually on an All-Flash VSAN, but I’m selecting Capacity here simply so I can explore FAST VP down the road.

 

UnityVSA_ICW__17

Next up, I need to give the Pool a Capability Profile. This profile is for VMware VVOL support. We’ll cover VVOLs in more depth another time, but essentially they allow you to connect vSphere to Unity and assign storage at the VM level. This is done through vSphere Storage Policies that map to the Capability Profile.

So what is the Capability Profile used for? It encompasses all the attributes of the pool, allowing you to create a vSphere Storage Policy against them. There is one capability profile per pool; it includes the Service Tier (based on your storage tier and FAST VP), space efficiency options and any extra tags you might want to add.

You can skip this step by leaving the Create VMware Capability Profile for the Pool box unchecked; you can also modify or delete the profile later.

I went ahead and made a simple profile called CP_250_VSAN

 

UnityVSA_ICW__16

Here are the constraints, or attributes, I mentioned above. Some are set for you; then you can add tags. Tags can be shared across pools and capability profiles. I tagged this ‘VSAN’, but you could tag for applications you want on this pool, or application tiers (web, database, etc.), or the location of the array.
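
If you later want to pull these profiles programmatically, the Unity REST API should expose them as well. A hedged sketch follows; I'm assuming the resource type is named capabilityProfile (check the Unisphere REST API reference for your release), so it only requests the name field and prints whatever else comes back.

```python
import requests

# Lab placeholders for the Unity management address and credentials.
UNITY = "https://unity01.lab.local"
AUTH = ("admin", "Password123#")
HEADERS = {"X-EMC-REST-CLIENT": "true"}

# Assumption: capability profiles are exposed as the 'capabilityProfile' type;
# verify the type and field names against the Unisphere REST API reference.
resp = requests.get(
    f"{UNITY}/api/types/capabilityProfile/instances",
    params={"fields": "name"},
    headers=HEADERS,
    auth=AUTH,
    verify=False,  # self-signed certificate on a lab appliance
)
resp.raise_for_status()

for entry in resp.json().get("entries", []):
    print(entry["content"])
```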

This finishes out creating the pool itself.

 

UnityVSA_ICW__14

The Unity array will e-mail you when alerts occur; configure this so it e-mails your team when there are issues.
 

UnityVSA_ICW__13

Remember me mentioning reserving a handful of IP addresses? Here we start using them. The UnityVSA has two main ways to present storage: iSCSI and NAS (the third, fibre channel, is only available on the physical array).

If you want to connect via iSCSI, you’ll need to create iSCSI Network Interfaces here.

 

UnityVSA_ICW__12

Creating an iSCSI interface is easy: you pick one of the four ethernet ports and provide an IP address, subnet mask and gateway. You can assign multiple IP addresses to the same ethernet port; you can also run both iSCSI and NAS on the same ethernet port (you can’t share an IP address, though).

How you leverage these ports is up to your design. Keep in mind the UnityVSA itself is a single VM and thus a single point of failure. You could, though, put the virtual network cards on separate virtual switches to provide some network redundancy, or place them in separate VLANs to provide storage to different network segments.

 

 

UnityVSA_ICW__11

I created two iSCSI interfaces on the same network card, so that I can simulate multi-pathing at the client side down the road.
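
The matching step on the vSphere side is pointing each host's software iSCSI adapter at those two addresses as dynamic (send) targets. Here's a pyVmomi sketch; the vCenter, ESXi host name and the two interface IPs are placeholders from my lab.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab placeholders: vCenter, one ESXi host, and the two Unity iSCSI interface IPs.
UNITY_ISCSI_TARGETS = ["192.168.1.61", "192.168.1.62"]
ESX_HOST = "esx01.lab.local"

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == ESX_HOST)

    # Locate the software iSCSI adapter and add both Unity IPs as send targets.
    hba = next(a for a in host.config.storageDevice.hostBusAdapter
               if isinstance(a, vim.host.InternetScsiHba) and a.isSoftwareBased)
    targets = [vim.host.InternetScsiHba.SendTarget(address=ip, port=3260)
               for ip in UNITY_ISCSI_TARGETS]
    host.configManager.storageSystem.AddInternetScsiSendTargets(
        iScsiHbaDevice=hba.device, targets=targets)
    host.configManager.storageSystem.RescanAllHba()  # pick up the new paths
finally:
    Disconnect(si)
```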
 

UnityVSA_ICW__9

Next up is creating a NAS Server, which provides file services from a particular pool. Provide a name for the NAS server, the pool to bind it to, and the service processor for it to run on (only SPA on the VSA).
 

UnityVSA_ICW__8

With the NAS Server created, it will need an ethernet port and IP information. Again, this can be the same port as iSCSI or a different one; your choice; but you CANNOT share IP addresses. I chose to use a different port here.
UnityVSA_ICW__7

Unity supports both Windows and *nix file shares, as well as multiple options for securing the authentication and authorization of those shares. Both the protocol support and the associated security settings are per NAS Server. Remember, we can create multiple NAS Servers; that is how you provide access to different sets of clients.

For example, if you have two Active Directory forests you want to provide shares for, one NAS Server cannot span them, but you can simply create a second NAS server for the other forest.

Or if you want to provide isolation between Windows and *nix shares, simply use two NAS servers, each with single protocol support.

One pool may have multiple NAS servers, but one NAS server can NOT have multiple pools.

This is again where the multiple NICs might come into play on UnityVSA. I could create a NAS Server on a virtual NIC that is configured in vSphere for NFS access, while another NAS Server is bound to a separate virtual NIC that my Windows clients can see for SMB shares.

For my initial NAS Server, I’m going to use multi-protocol and leverage Active Directory. This will create a server entry in my AD. I’m also going to enable VVOLs and NFSv4, and configure secure NFS.

 

 

UnityVSA_ICW__6

This is the secure NFS option page; by having NFS authenticate via Kerberos against Active Directory, I can use AD as my LDAP source as well.
 

UnityVSA_ICW__5

With secure NFS enabled, my Unix Directory Service is going to target AD, so I simply need to provide the IP address(es) of my domain controller(s).
 

UnityVSA_ICW__4

Given I’m using Active Directory, the NAS server needs DNS servers; these can be different from the DNS servers you entered earlier if your DNS is separated by zone.
 

UnityVSA_ICW__3

I do not have another Unity box at this point to configure replication (something I’ll explore down the road), so I’m leaving this unchecked.
 

UnityVSA_ICW__2

At this point all my selections are ready; clicking Finish will apply all the configuration options.

At this point, my UnityVSA is connected to the network, ready to carve up the pool into LUNs, VVOLs or shares. Everything I accomplished in this wizard can be done manually inside the GUI; the Initial Configuration Wizard just streamlines all the tasks involved in bringing up a new Unity array. If you have a complete Unity configuration mapped out, you can see how this wizard would greatly reduce the time to value. In the next few posts I’ll explore the provisioning process, like how to leverage the new VVOL support with vSphere.

Here is the silent video, so you can see the steps I skipped and the general pacing of this wizard.

By | May 25th, 2016|EMC, Home Lab, Storage|2 Comments

UnityVSA – Part 2: Deploy

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

So, to start this off, let’s go to the EMC Software Download page and navigate to UnityVSA to get the download. You’ll receive a login prompt; if you have not registered for an EMC account previously, you’ll need one to access the download. Once logged in, you can download the OVA file for UnityVSA.

Yes, the last time I went immediately to downloading OVAs without reading the manuals, my installation failed miserably. But I’m going to try an installation unassisted every time; maybe I’m a glutton for punishment, but I choose to be optimistic that installations should be intuitive, so I prefer trying unaided first.

That’s just me; you might want to check out the Installation Guide and FAQs. Here are the quick details though… in order to run the VSA you’ll need to be able to run a vSphere virtual machine with 2 cores (2 GHz+ recommended), 12GB of RAM, and 85GB of hard drive space, plus however much extra space you want to give the VSA to present as storage. The VSA does not provide any RAID protection inside the virtual array, so if you want to avoid data loss from hardware failure, ensure the storage underneath the VSA is protected (RAID, VSAN, or a SAN itself). You’re also going to want at least two IP addresses set aside; I’d recommend about 5 for a full configuration. Depending on your network environment, you might want multiple networks available in vSphere. The UnityVSA has multiple virtual network cards to provide ethernet redundancy. One of those virtual NICs is for management traffic, which you could place on a separate virtual switch (and thus VLAN) if your topology supports this. My lab is basically flat, so I’ll be binding all virtual NICs to one virtual network.
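
If you'd like a rough look at whether your hosts have headroom for those figures before deploying, here's a small pyVmomi sketch; the vCenter details are placeholders, and the 12GB check simply mirrors the memory requirement above.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab placeholders: adjust for your own vCenter.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        hw, stats = host.summary.hardware, host.summary.quickStats
        free_mem_gb = hw.memorySize / 2**30 - stats.overallMemoryUsage / 1024
        verdict = "ok" if free_mem_gb >= 12 else "tight"
        print(f"{host.name}: {hw.numCpuCores} cores, "
              f"{free_mem_gb:.1f} GiB memory free ({verdict} for a 12 GB appliance)")
finally:
    Disconnect(si)
```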

With the downloaded OVA in hand, the initial deployment is indeed incredibly easy. If you’re familiar with OVA deployments, this won’t be very exciting. If you’re not familiar with deploying an OVA, you can follow along in the video below (there’s also a scripted ovftool sketch after the step list). I’m going to be deploying UnityVSA on my nested ESXi cluster running VSAN 6.2. You could install directly on an ESXi host, or even VMware Workstation or Fusion, but that’s another post.

*Note there is no sound, this is to follow along the steps.
  1. In vCenter, right-click the cluster and select “Deploy OVF Template”
  2. Enter the location you saved the UnityVSA OVA file
  3. Select the check box “Accept extra configuration details”; this OVA contains resource reservations, which we’ll touch on below.
  4. Provide a name; this will be the virtual machine name, not the actual name of the Unity array.
  5. Select the datastore to house the UnityVSA; in my case I’m putting this on VMware Virtual SAN.
  6. Select which networks you wish the UnityVSA bound to.
  7. Configure the “System Name”; this is the actual name of the VSA, and I’d recommend matching the virtual machine name you entered above.
  8. Configure the networking; I’m a fan of leveraging DHCP for virtual appliance deployments when possible, as it eliminates some troubleshooting.
  9. I do NOT recommend checking the box “Power on after deployment”; the OVA deployment does not include any storage for provisioning pools, and for the initial wizard to work you’ll want to add some additional virtual hard drives first.
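
If you'd rather script the deployment than click through it, the same OVA can be pushed with VMware's ovftool; a sketch below, with the vCenter inventory path, datastore, network and OVA filename all placeholders from my lab. The OVF properties behind steps 3 and 7-8 aren't shown here; running ovftool against just the OVA will list them so you can pass them with --prop: for a fully hands-off deploy.

```python
import subprocess

# Hands-off variant of the wizard steps above, driven through ovftool.
cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=UnityVSA01",           # virtual machine name (step 4)
    "--datastore=vsanDatastore",   # target datastore (step 5)
    "--network=LabNetwork",        # maps the OVA networks to one port group (step 6)
    "--diskMode=thin",
    # No --powerOn on purpose: add the extra virtual disks first (step 9).
    "UnityVSA.ova",                # path to the downloaded OVA
    "vi://administrator%40vsphere.local@vcenter.lab.local/HomeLab/host/Cluster01",
]
subprocess.run(cmd, check=True)
```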

 

UnityVSA_ResourceReservations

 

A couple of notes on the virtual machine itself. EMC’s intent with UnityVSA is to provide a production-grade virtual storage array, so when the OVA is deployed, the resulting virtual machine has reservations in place on both CPU and memory. These reservations ensure the ESX host/cluster underneath the VSA can deliver the appropriate amount of resources for the virtual array to function. If you are planning on running UnityVSA in a supported, production environment, I recommend leaving these reservations in place. Even if you have enough resources to make the reservations unnecessary, I’m sure EMC Support will take issue with you removing them should you need to open a case. If this is just for testing and you are tight on resources, you can remove them.

Next, let’s add that extra hard drive I talked about and power this up.

 

*Note there is no sound, this is to follow along the steps.

UnityVSA-InitialConfigurationWizard

  1. In vCenter, right click the UnityVSA and Edit Settings
  2. At the bottom under New Device, use the drop-down box to select “New Hard Disk” and press “Add”
  3. Enter the size; I’m making a VMDK with 250GB. If you use the arrow to expand the hard drive settings, you can place it on a separate datastore; if you want to use FAST VP to tier between datastores, this is where you’d customize the hard drive (a scripted version of these first steps follows the list). I’m putting this new hard drive with the VM on VSAN.
  4. Once the reconfigure task is complete, right click your VM again and select “Power On”
  5. I highly recommend opening the console to the VM to watch the boot up, especially on first boot as the UnityVSA will configure itself.
  6. When the console shows a Linux prompt, walk away. Seriously, walk away; get a coffee, take a smoke break, or work on something else. More is happening behind the scenes at this point and the VSA is not ready. You’ll only get frustrated, and this watched pot will not seem to boil.
  7. Ok, are you back? Close the console and go back to the virtual machine; ensure VMware Tools is running and an IP address shows up. If you chose DHCP as I did, you’ll need to note the assigned IP address. If you don’t see an IP address, the VSA is still not ready (see, I told you to walk away).
  8. Once the IP appears in vCenter, you’re safe to open a browser over SSL to that address.
  9. You should receive a logon prompt, the default login is:
    1. user: admin
    2. pw: Password123#
  10. Did you log in? Do you see an “Initial Configuration” wizard? Congratulations, the VSA is up.
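
And here's the scripted version of steps 1-3 mentioned above: a pyVmomi sketch that adds one 250GB thin disk to the appliance. The vCenter details and VM name are lab placeholders, and the disk is created alongside the VM (on VSAN in my case).

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab placeholders: adjust vCenter details and the appliance VM name.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "UnityVSA01")

    # Pick the first SCSI controller and the next free unit number (7 is reserved).
    ctrl = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualSCSIController))
    used = {d.unitNumber for d in vm.config.hardware.device
            if getattr(d, "controllerKey", None) == ctrl.key}
    unit = next(u for u in range(16) if u != 7 and u not in used)

    disk = vim.vm.device.VirtualDisk(
        controllerKey=ctrl.key,
        unitNumber=unit,
        capacityInKB=250 * 1024 * 1024,  # 250 GB
        backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            diskMode="persistent", thinProvisioned=True),
    )
    spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk,
    )])
    vm.ReconfigVM_Task(spec=spec)  # watch the reconfigure task, then power on
finally:
    Disconnect(si)
```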

In the next post in this series, we’ll leverage this wizard to fast-track configuring the UnityVSA; but if you want, you can cancel the wizard at this point and configure the array manually, or restart the wizard later through Preferences.

For now, as evidenced by this post, the deployment of UnityVSA was straightforward and exactly what we’ve all come to expect of virtual appliances. From download to up and running, user interaction time was less than 2 minutes, with overall deploy time around 30 minutes.

By | May 24th, 2016|EMC, Home Lab, Storage|2 Comments

IsilonSD – Part 3: Deploy a Cluster (Successfully)

With proper planning and setup of the prerequisites (see Part 2), the actual deployment of the IsilonSD Edge cluster is fairly straightforward. If you experience issues during this section, it’s very likely because you don’t have the proper configuration (see Part 1), so revisit the previous steps. That said, let’s dive in and make a cluster.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

High Level, you’re going to do a few things:

  1. Deploy the IsilonSD Edge Management Server
  2. Setup IsilonSD Edge Management Server Password
  3. Complete IsilonSD Edge Management Server Boot-up
  4. Configure Management Server vSphere Link & Upload Isilon Node Template
  5. Open the IsilonSD Edge vSphere Web Client Plug-in
  6. Deploy the IsilonSD Edge Cluster

Here’s the detail.

Deploy the IsilonSD Edge Management Server

*Note there is no sound, this is to follow along the steps.
This is your standard OVA deployment; as long as you’re using the “EMC_IsilonSD_Edge_MS_x.x.x.ova” file from the download and providing an IP address accessible to vCenter, you can deploy this just about anywhere.

Follow along in the video on the left if you’re not familiar with the OVA process.

Once the OVA deployment launches, ensure you find the deployment task in the vSphere Task Pane and keep an eye on the progress.

Setup IsilonSD Edge Management Server password

IsilonSD_ManagementBootPasswordChange

Once the OVA deployment is complete and the virtual machine is booting up, you’ll need to open the console and watch the initialization process. Generally I recommend this with any OVA deployment, as you’ll see whether any errors occur during the first-boot configuration. For the IsilonSD Edge Management Appliance, it’s required, because you have to set the administrator password here.

Complete IsilonSD Edge Management Server Boot-up

IsilonSD_ManagementConsoleBlueScreen

After entering your password, the server will continue its first-boot process and configuration. When you reach this screen (what I call the blue screen of start), you’re ready to proceed. Open a browser and navigate to the URL provided on the blue screen next to IsilonSD Management Server.

 

Configure Management Server vSphere Link & Upload Isilon Node Template

When you navigate to the URL provided by the blue screen, after accepting the untrusted certificate, you’ll be prompted for logon credentials. This is NOT the password you provided during boot-up. I failed to read the documentation and assumed it was, resulting in much frustration.

Logon using:
username: admin
password: sunshine

After successful logon, and accepting the EULA, you have just a couple steps, which you can follow along in the video on the right:

  1. Adjust the admin password
  2. Register your vCenter
  3. Upload the Isilon Virtual Node template
    1. This is “EMC_IsilonSD_Edge_x.x.x.ova” in your download

*Note there is no sound, this is to follow along the steps.

 

Open the IsilonSD Edge vSphere Web Client Plug-in

Wait for the OVA template to upload; this may take up to ten minutes depending on your environment. Once complete, you’ll be ready to move on to actually creating the IsilonSD cluster through the vSphere Web Client plug-in that was installed by the Management Server when you registered vCenter. Ensure you close all browser windows and open a new session to your vSphere Web Client.

IsilonSD_vCenterDatacenter

Select the datacenter where you deployed the management server (not the cluster object; another place where I lost some time).

 

IsilonSD_ManageTab

In the right-hand pane of vSphere, under the Manage tab, you should now see two new sub-tabs: Create IsilonSD Cluster and Manage IsilonSD Cluster.

IsilonSD_vCenterIsilonTabs

 

Deploy the IsilonSD Edge Cluster

*Note there is no sound, this is to follow along the steps.

Follow along in the video above:

  1. Check the box next to your license
  2. Adjust your node resources
    1. For my deployment, I started with 3 nodes, adjusting the Cluster Capacity from the default 2 TB down to the minimum 1152 GB (64 GB per data drive × 6 data drives × 3 nodes; see the quick math after this list)
  3. Clicking Next on the Requirements tab will search the virtual datacenter in your vCenter for ESX hosts that can satisfy the requirements you provided, including having those independent drives that meet the capacity requirement
    1. Should the process fail to find the necessary hosts, you’ll see a message like the one below. Don’t get discouraged; look over the requirements again to ensure everything is in order, and try restarting the Inventory Service too.
    2. IsilonSD_NoQualifiedHosts
  4. When the search for hosts is successful, you’ll see a list of hosts available to select, such as
    1. IsilonSD_HostSelection
  5. Next, select all the hosts you wish to add to the cluster (if you prepared more than 3, consider selecting just 3 now, as in the next post we’ll walk through adding an additional node).
  6. For each host, you’ll need to select the disks and their associated role (Data Disk, Journal, Boot Disk or Journal & Boot Disk).
    1. Remember, you need at LEAST 6 data disks per node; you won’t have gotten this far without them, but you won’t get any farther if you don’t select them.
    2. In our scenario, we select 6x 68GB data disks, and a final 28GB disk for Journal & Boot Disk
    3. You’ll also need to select the External Network Port Group and Internal Network Port Group
    4. IsilonSD_HostDriveSelection
  7. After setting up all hosts with the exact same configuration, you’ll move on to the Cluster Identity
    1. IsilonSD_ClusterIdentity
    2. Cluster Name (this is used in the management interface to name the cluster)
    3. Root Password
    4. Admin Password
    5. Encoding (I’d leave this alone)
    6. Timezone
    7. ESRS Information (only populate this if you have a production license)
  8. Next will be your network settings.
    1. IsilonSD_ClusterNetworking
    2. External Network
    3. Internal Network
    4. SmartConnect
  9. You have a final screen to verify all your settings; look them over (the full deployment will take a while), then click Next.
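
Here's the quick math behind the 1152 GB minimum from step 2, in case you want to play with other drive or node counts; it only counts the data disks, not the journal/boot disk.

```python
# Minimum data drive size, data drives per node, and node count from the numbers above.
DATA_DISK_GB = 64
DATA_DISKS_PER_NODE = 6
NODES = 3

minimum_gb = DATA_DISK_GB * DATA_DISKS_PER_NODE * NODES
print(f"minimum cluster capacity: {minimum_gb} GB")  # 1152 GB, matching the wizard minimum
```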

At this point, patience is key; do not interrupt the process. An OVA will be deployed for every node, then all of those unformatted disks will be turned into datastores and VMDK files placed on each one; finally, all the nodes will boot and configure themselves. If everything goes as planned, your reward will look like this:

IsilonSD_ClusterCreationSuccess

To verify everything, point your browser to your SmartConnect address, in our case https://Isilon01.lab.vernex.io:8080; if you get a OneFS logon prompt, you should be in business!
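
If you'd like to confirm the UI is listening without leaving the terminal, a trivial check against the same URL (the hostname is my lab's SmartConnect name; substitute your own):

```python
import requests
import urllib3

urllib3.disable_warnings()  # the cluster ships with a self-signed certificate

URL = "https://Isilon01.lab.vernex.io:8080"
resp = requests.get(URL, verify=False, timeout=10)
print(f"{URL} answered with HTTP {resp.status_code}; the OneFS web UI is listening")
```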

IsilonSD_OneFSLogonPrompt

 

You should also be able to navigate in Windows to your SmartConnect address (recall ours is \\Isilon01.lab.vernex.io\) and see the IFS share. This is the initial administrator share, which you’d disable in a production environment. Likewise, in *nix you can NFS mount //Isilon01.lab.vernex.io:/IFS

By | March 30th, 2016|EMC, Home Lab, Storage, VMWare|1 Comment

IsilonSD – Part 1: Quick Deploy (or how I failed to RTM)

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

As I mentioned in my previous post, EMC recently released the “software defined” version of Isilon, their leading enterprise scale-out NAS solution. If you’re familiar with Isilon, you might already know there has been a virtual Isilon (called the Isilon Simulator) for years now. The virtual Isilon would run on a laptop with VMware Workstation/Fusion, or on vSphere in your datacenter. I have purchased and installed Isilon in multiple organizations for several different use cases; the Isilon Simulator was a great solution for testing changes pre-production as well as for familiarizing engineers with the interface. The Isilon Simulator is not supported, and up until recently you had to know the right people to even get a hold of it.

With the introduction of IsilonSD Edge, we now have a virtualized Isilon that is fully supported, available for download and purchase through your favorite EMC sales rep. It runs the same codebase as the physical appliances, with some adjustments for the virtual world. As we discussed, there is a “free” version for non-production use as part of EMC’s Free and Frictionless movement. I’ve run the Isilon Simulator personally for years, so I want to leverage this latest release of IsilonSD Edge as my new test Isilon in my home lab.

A quick stop at the IsilonSD Edge download page and I’m quickly pulling down the bits. While waiting a few moments for the 2GB download, I review some of the links; there is a YouTube video on Aggregating Unused Storage, another that covers FAQs on IsilonSD Edge, and one more that talks about Expanding the Data Lake to the Edge. These all cover what I assumed: you’ll want at least three ESX hosts to provide physical diversity, you can run other workloads on those hosts, and the ultimate goal of this software is to extend Isilon into edge scenarios, such as branch offices.

Opening the downloaded ZIP file, I find a couple of OVA files plus the installation instructions. I reviewed a couple of the FAQs linked from the product page, though I didn’t spend much time on the installation guide; nor did I watch the YouTube Demo: Install and Configure IsilonSD Edge. I like to figure some things out on my own; that’s half the fun of a lab, right? I did see that the system requirements mention support for VSAN-compatible hardware, referencing the VMware HCL for VSAN. I just recently set up VSAN in my home lab, so that, coupled with the fact I’ve run the Isilon Simulator, means I’m good to go.

Fast forward through a couple of failed installations, re-reading the FAQs, more failed installations, then reading the actual manual… here’s the catch.

You have to have local disks on each ESX node.

More specifically, you need to have directly attached storage… 
        without hardware RAID protection or VSAN.

Plus, you need at least 7 of these directly attached unprotected disks, per node.

While this wasn’t incredibly clear to me in the documentation, once you know it, you will see it is stated; but given IsilonSD runs on VMDK files, I glossed over the parts of the documentation that (vaguely) spelled this out. If you’ve deployed the Isilon Simulator, or any OVA for that matter, you’re used to selecting where the virtual machines are deployed; I assumed this would be the same for IsilonSD and that I could choose the storage location.

However, IsilonSD comes with a vCenter plug-in that deploys the cluster, and as part of that deployment it scans for hosts and disks that meet this specific requirement. Moreover, during the deployment IsilonSD leverages a little-used feature in vSphere to create virtual serial port connections over the network so the Isilon nodes can communicate with the Management Server; this is how the cluster is configured, so deploying IsilonSD nodes by hand isn’t an option (you can still use the Isilon Simulator, which you can deploy more manually).

I’m going to stop here and touch on some thoughts; I’ll elaborate more on this in a later post once I actually have IsilonSD Edge working.

I do not know any IT shop that has ESX hosts with locally attached, independent disks (again, not in a RAID group or under any type of data protection). We’ve worked hard as VM engineers to always build on shared storage so we can use things like vMotion.

The marketing talks about capturing unused storage, about running IsilonSD on the same hosts as other workloads, in fact on the same storage as other VMs; but I’m not sure who has unused capacity that also sits on independent disks.

I certainly wouldn’t recommend running virtual machines on storage without any type of RAID-like protection. Maybe some folks have a server with some disks they never put into play, but 7 disks? And at least three servers with 7 disks each?

I know there are organizations with lean budgets where this might be the best they can afford, but are shops like that licensing and running vCenter ($), and are they looking at a virtual Isilon ($)?

Call me perplexed, but I’m going to put off thinking about this, as I still want to get this running in my lab. Since I don’t have three servers and 21 disks lying around at home, I’ll need to figure out a way to create a test platform for IsilonSD to run on.

Be back soon…

 

By | March 25th, 2016|EMC, Home Lab, Storage, Uncategorized|3 Comments