
UnityVSA – Part 3: Initial Configuration Wizard

When last we met, we had deployed a UnityVSA virtual appliance; it was up and running, sitting at the prompt for the Initial Configuration Wizard.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

If you’re following along and took a break, your browser session to Unisphere likely expired, which means you also lost the prompt for the Initial Configuration Wizard. Don’t fret; you can get it back. In the upper left corner of Unisphere, click on the Preferences button. On the main preferences page, there is a link to launch the Initial Configuration Wizard again.

UnityVSA_PreferencesButton

UnityVSA_InitialConfigurationWizardLink

Ok, so you’re back at the wizard? Good. This wizard walks through a complete configuration of the UnityVSA: changing passwords, licensing, network setup (DNS, NTP), creating pools, the e-mail server, support credentials, your customer information, ESRS, and iSCSI and NAS interfaces. You can choose to skip some steps, but if you complete all portions, your UnityVSA will be ready to provision storage across block and file, with LUNs, shares or VVols.

Before we get going, let’s cover some aspects of the VSA to plan for.

First, there is a single SP (service processor in Unity-speak; a ‘brain’). The physical Unity (and the VNX and CLARiiON before it, if you are familiar with the line) has two SPs. These provide redundancy should an SP fail; given that an SP is, for all intents, an x86 server, it’s susceptible to common failures. The VSA instead relies on vSphere HA to provide redundancy should a host fail, and on vSphere DRS/vMotion to move the workload preemptively for planned maintenance. This is germane because, for the VSA, you won’t be planning for balancing LUNs between SPs, creating multiple NAS servers to span SPs, or even iSCSI targets across SPs to allow for LUN trespass.

Second is size limitations. I’m using the Community Edition, a.k.a. Free and Frictionless, which is limited to 4TB of space. I do not get any support, and as such ESRS will not work. If you’re planning a production deployment, that 4TB limit can be increased up to 50TB (based, of course, on how much you purchased from EMC) and you’ll then have a fully supported Unity array. Along with the overall size limit, you are also limited to 16 virtual disks behind the VSA (meaning 16 VMDK files you assign to the VM), so plan accordingly. In my initial deployment I provided a 250GB VMDK, so if I add 15 more (16 total), I hit my 4TB max.
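If you want to sanity-check your own disk sizing against those limits, the arithmetic is simple. Here’s a minimal Python sketch; the 4TB and 16-disk figures are the Community Edition limits quoted above (treating 1TB as 1024GB).

# Community Edition limits quoted above
MAX_CAPACITY_GB = 4 * 1024   # 4TB of total presentable storage
MAX_VIRTUAL_DISKS = 16       # at most 16 VMDKs assigned to the VSA

def max_disks(disk_size_gb):
    """How many equally sized VMDKs fit under both limits."""
    return min(MAX_CAPACITY_GB // disk_size_gb, MAX_VIRTUAL_DISKS)

print(max_disks(250))   # 16 disks x 250GB = 4000GB, just under the 4TB cap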

Third, there are a few key differences from a physical array to prepare for.

  • Max pool LUN size: 16TB
  • Max pool LUNs per system: 64
  • Max pools per system: 16
  • No fibre channel support
  • No write caching (to ensure data integrity)
  • No synchronous replication
  • No RAID – this one is crucial to understand. In a physical Unity, raw disks are put into RAID groups to protect against data loss during a drive failure. UnityVSA relies on the storage you provide to be protected already. Up to the limits mentioned, you can present VMDKs from multiple vSphere datastores, even from separate tiers of storage, leveraging FAST VP inside UnityVSA.

Fourth and finally, a few vSphere-specific notes. UnityVSA comes with VMware Tools installed, but you can’t modify it, so don’t attempt automatic VMware Tools upgrades. The cores and memory are hard-coded, so don’t try adding more to increase performance; rather, split the workload onto multiple UnityVSA appliances. You cannot resize virtual disks once they are in a UnityVSA pool; again, add more, but pay attention to the limits. EMC doesn’t support vSphere FT, VM-level backup/replication, or VM-level snapshots.

 

UnityVSA_ICW__1

Got all that? Now let’s set up this array! I’m going to step through the options in detail this time, because the wizard is packed full of steps; the video is also at the bottom.
 

UnityVSA_ICW__26

First off, change your passwords. If you missed it earlier, here are the defaults again:

  • admin/Password123#
  • service/service

The “admin” account is the main administrator of Unisphere, while the “service” account is used to log into the console of the VSA, where you can perform limited admin steps such as changing IP settings should you need to move networks, as well as collect logs.

 

UnityVSA_ICW__25

Next up, licensing. Every UnityVSA needs a license, even the Community Edition. If you purchased the array, go through the normal channels: LAC, support, or your rep. For the Community Edition, copy the System UUID and follow the link: Get License Online.
 

UnityVSA_ICW__24

The link to get a license takes you to EMC.com; again, you’ll need to log on with the same account you used to download the OVA. Then simply paste in that System UUID and select “Unity VSA” from the drop-down. Your browser will download a file that is the license key; from there you can import it into Unisphere.

I will say, I’ve heard complaints that needing to register to download and create a license is ‘friction’, but it was incredibly easy. I don’t take issue with registration; EMC has made it simple, and no one from sales is calling me to follow up. I’m sure EMC is tracking who’s using it, but that’s good data: downloading the OVA vs. actually creating a license. I don’t find this invalidates the ‘Free and Frictionless’ motto.

UnityVSA_ICW__23

Did you get this nice message? If so, good job; if not… well, try again.
 

UnityVSA_ICW__22

The virtual array needs DNS servers to talk to the outside world, communicate through ESRS, get training videos, etc. Make sure to provide DNS servers that can resolve internet addresses.

UnityVSA_ICW__21

Synchronizing time is always important, and if you are providing file shares it’s extra important that you’re in sync with your Active Directory. Should a client and server be more than 5 minutes apart, access will be denied. So use your AD server itself, or an NTP server on the same time source as AD.
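If you want to check the skew before you point the VSA at a time source, here’s a minimal sketch using the third-party ntplib package; the server name is a placeholder for your domain controller, and the 5-minute figure is the standard Kerberos clock-skew tolerance.

import ntplib  # third-party package: pip install ntplib

MAX_SKEW_SECONDS = 300  # Kerberos default clock-skew tolerance (5 minutes)

# dc01.lab.local is a placeholder; point this at your AD/NTP time source
response = ntplib.NTPClient().request("dc01.lab.local", version=3)

print("Offset from time source: %.2f seconds" % response.offset)
if abs(response.offset) > MAX_SKEW_SECONDS:
    print("WARNING: outside the Kerberos tolerance; expect authentication failures")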
 

UnityVSA_ICW__20

Now we’re getting into some storage. In Unity, pools are groupings of storage which are carved up and presented to hosts. In the VSA, a pool must have at least one virtual disk; if you provide more, data will be balanced across the virtual disks.

A pool is the foundation of all storage, without one you cannot setup anything else on the virtual array.

 

UnityVSA_ICW__18

When creating a Pool, you’ll want to give it a name and description. Given this is a virtual array, consider including the cluster and datastore names in the description so you know where the storage is coming from.

For my testing purposes, I named the pool Pool_250 simply so I’d know it was on my 250GB virtual disk.

 

UnityVSA_ICW__19

Once named, you’ll need to give the pool some virtual disks. Recall we created a 250GB virtual disk; here we are adding it into the pool.

When adding disks you need to choose a tier, typically:

  • Flash = Extreme Performance
  • FC/SAS = Performance
  • NL-SAS/SATA = Capacity

The virtual disk I provided the VSA actually sits on an all-flash VSAN datastore, but I’m selecting Capacity here simply to explore FAST VP down the road.

 

UnityVSA_ICW__17

Next up, I need to give the pool a Capability Profile. This profile is for VMware VVol support. We’ll cover VVols in more depth another time, but essentially they allow you to connect vSphere to Unity and assign storage at a VM level. This is done through vSphere Storage Policies that map to the Capability Profile.

So what is the Capability Profile used for? It encompasses all the attributes of the pool, allowing you to create a vSphere Storage Policy. There is one capability profile per pool; it includes the Service Tier (based on your storage tier and FAST VP), space efficiency options, and any extra tags you might want to add.

You can skip this by not checking the “Create VMware Capability Profile for the Pool” box at this point; you can also modify or delete the profile later.

I went ahead and made a simple profile called CP_250_VSAN.

 

UnityVSA_ICW__16

Here are the constraints, or attributes, I mentioned above. Some are set for you; then you can add tags. Tags can be shared across pools and capability profiles. I tagged this ‘VSAN’, but you could tag for applications you want on this pool, application tiers (web, database, etc.), or the location of the array.

This finishes out creating the pool itself.

 

UnityVSA_ICW__14

The Unity array will e-mail you when alerts occur; configure this to e-mail your team when there are issues.
 

UnityVSA_ICW__13

Remember when I mentioned reserving a handful of IP addresses? Here we start using them. The UnityVSA has two main ways to attach storage: iSCSI and NAS (the third, Fibre Channel, is available on the physical array).

If you want to connect via iSCSI, you’ll need to create iSCSI Network Interfaces here.

 

UnityVSA_ICW__12

Creating an iSCSI interface is easy: you’ll pick between your four ethernet ports and provide an IP address, subnet mask and gateway. You can assign multiple IP addresses to the same ethernet port, and you can run both iSCSI and NAS on the same ethernet port (you can’t share an IP address, though).

How you leverage these ports is an end-user design decision. Keep in mind the UnityVSA itself is a single VM and thus a single point of failure. You could put the virtual network cards on separate virtual switches to provide some network redundancy, or you could put virtual network cards into separate VLANs to provide storage to different network segments.

 

 

UnityVSA_ICW__11

I created two iSCSI interfaces on the same network card, so that I can simulate multi-pathing at the client side down the road.
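For reference when that day comes, here’s a minimal sketch of the client side on a Linux host running open-iscsi, wrapped in Python to match the other snippets in this series; the portal IPs are placeholders for the two interfaces created above, and the iscsiadm discovery/login commands are the standard ones.

import subprocess

# Placeholder portal IPs for the two iSCSI interfaces created above
PORTALS = ["192.168.1.61", "192.168.1.62"]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for portal in PORTALS:
    # Discover the Unity iSCSI targets behind each portal
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal + ":3260"])
    # Log in to the discovered targets on that portal
    run(["iscsiadm", "-m", "node", "-p", portal + ":3260", "--login"])

# With both sessions up, dm-multipath should show two paths per LUN (multipath -ll)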
 

UnityVSA_ICW__9

Next up is creating a NAS Server; this provides file services from a particular pool. Provide a name for the NAS server, the pool to bind it to, and the service processor for it to run on (only SPA for the VSA).
 

UnityVSA_ICW__8

With the NAS Server created, it will need an ethernet port and IP information. Again, this can be the same port as iSCSI or a different one; your choice; but you CANNOT share IP addresses. I chose to use a different port here.

UnityVSA_ICW__7

Unity supports both Windows and *nix file shares, as well as multiple options for how to secure the authentication and authorization of those shares. Both the protocol support and the associated security settings are per NAS Server. Remember, we can create multiple NAS Servers; this is how you provide access across different sets of clients.

For example, if you have two Active Directory forests you want to have shares for, one NAS Server cannot span them, but you can simply create a second NAS Server for the other forest.

Or if you want to provide isolation between Windows and *nix shares, simply use two NAS servers, each with single protocol support.

One pool may have multiple NAS servers, but one NAS server can NOT have multiple pools.

This is again where the multiple NICs might come into play on UnityVSA. I could create a NAS Server on a virtual NIC that is configured in vSphere for NFS access, while another NAS Server is bound to a separate virtual NIC that my Windows clients can see for SMB shares.

For my initial NAS Server, I’m going to use multi-protocol and leverage Active Directory. This will create a server entry in my AD. I’m also going to enable VVOL, NFSv4 and configure secure NFS.

 

 

UnityVSA_ICW__6

This is the secure NFS option page; by having NFS authenticate via Kerberos against Active Directory, I can use AD as my LDAP as well.
 

UnityVSA_ICW__5

With secure NFS enabled, my Unix Directory Service is going to target AD, so I simply need to provide the IP address(es) of my domain controller(s).
 

UnityVSA_ICW__4

Given I’m using Active Directory, the NAS server needs DNS servers; these can be different from the DNS servers you entered earlier if you separate DNS per zone.
 

UnityVSA_ICW__3

I do not have another Unity box at this point to configure replication; that’s something I’ll explore down the road, so I’m leaving this unchecked.
 

UnityVSA_ICW__2

At this point all my selections are ready; clicking Finish will complete all the configuration steps.

At this point, my UnityVSA is connected to the network and ready to carve the pool up into LUNs, VVols or shares. Everything I accomplished in this wizard can be done manually inside the GUI; the Initial Configuration Wizard just streamlines the tasks of bringing up a new Unity array. If you have a complete Unity configuration mapped out, you can see how this wizard would greatly reduce the time to value. In the next few posts I’ll explore the provisioning process, like how to leverage the new VVol support with vSphere.
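It can also be scripted: Unity exposes a REST API (the same one Unisphere uses), so you can verify what the wizard built from code. Here’s a minimal, read-only sketch using Python and the requests library; the management IP and password are placeholders, and the type/field names are from my memory of the Unisphere Management REST API, so check the API reference before relying on them.

import requests
requests.packages.urllib3.disable_warnings()  # the VSA uses a self-signed certificate

UNITY = "https://192.168.1.60"                 # placeholder management IP
session = requests.Session()
session.auth = ("admin", "MyNewPassword1!")    # placeholder credentials
session.verify = False
session.headers.update({"X-EMC-REST-CLIENT": "true"})  # header the Unity REST API expects

# List the pool(s) the wizard created, with capacity figures
pools = session.get(UNITY + "/api/types/pool/instances",
                    params={"fields": "name,sizeTotal,sizeFree"}).json()
for entry in pools.get("entries", []):
    c = entry["content"]
    print("Pool:", c["name"], "total:", c["sizeTotal"], "free:", c["sizeFree"])

# List the NAS server(s) as well
nas = session.get(UNITY + "/api/types/nasServer/instances",
                  params={"fields": "name"}).json()
for entry in nas.get("entries", []):
    print("NAS Server:", entry["content"]["name"])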

Here is the silent video, so you can see the steps I skipped and the general pacing of this wizard.

By | May 25th, 2016 | EMC, Home Lab, Storage

UnityVSA – Part 2: Deploy

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

So, to start this off, let’s go to the EMC Software Download page, navigating to UnityVSA to get the download. You’ll receive a login prompt; if you have not registered for an EMC account previously, you’ll need one to access the download. Once logged in, you’ll start downloading the OVA file for UnityVSA.

Yes, the last time I went immediately to downloading OVAs without reading the manuals, my installation failed miserably. But I’m going to try an installation unassisted every time; maybe I’m a glutton for punishment, but I choose to be optimistic that installations should be intuitive, so I prefer trying unaided first.

That’s just me; you might want to check out the Installation Guide and FAQs. Here are the quick details though… in order to run the VSA you’ll need to be able to run a vSphere virtual machine with 2 cores (2GHz+ recommended), 12GB of RAM, and 85GB of hard drive space, plus however much extra space you want to give the VSA to present as storage. The VSA does not provide any RAID protection inside the virtual array, so if you want to avoid data loss from hardware failure, ensure the storage underneath the VSA is protected (RAID, VSAN, or a SAN itself). You’re also going to want at least two IP addresses set aside; I’d recommend about 5 for a full configuration. Depending on your network environment, you might want multiple networks available in vSphere. The UnityVSA has multiple virtual network cards to provide ethernet redundancy. One of those virtual NICs is for management traffic, which you could place on a separate virtual switch (and thus VLAN) if your topology supports this. My lab is basically flat, so I’ll be binding all virtual NICs to one virtual network.

With the downloaded OVA in hand, the initial deployment is indeed incredibly easy. If you’re familiar with OVA deployments, this won’t be very exciting. If you’re not familiar with deploying an OVA, you can follow along in the video below. I’m going to be deploying UnityVSA on my nested ESXi cluster, running VSAN 6.2. You could install directly on an ESXi host, or even VMware Workstation or Fusion; but that’s another post.

*Note: there is no sound; the video is just for following along with the steps.
  1. In vCenter, right-click the cluster and select “Deploy OVF Template”
  2. Enter the location where you saved the UnityVSA OVA file
  3. Select the check box “Accept extra configuration details”; this OVA contains resource reservations, which we’ll touch on below.
  4. Provide a name; this will be the virtual machine name, not the actual name of the Unity array
  5. Select the datastore to house the UnityVSA; in my case I’m putting this on VMware Virtual SAN.
  6. Select which networks you wish the UnityVSA bound to.
  7. Configure the “System Name”; this is the actual name of the VSA, and I’d recommend matching the virtual machine name you entered above.
  8. Configure the networking; I’m a fan of leveraging DHCP for virtual appliance deployments when possible, as it eliminates some troubleshooting.
  9. I do NOT recommend checking the box “Power on after deployment”; the OVA deployment does not include any storage for provisioning pools, so for the initial wizard to work you’ll want to add some additional virtual hard drives first. (If you’d rather script the whole deployment, see the sketch after this list.)
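If you’d rather drive the deployment from a script than click through the wizard, something like the following works. It’s a minimal sketch that shells out to VMware’s ovftool; the OVA filename, vCenter path, datastore and network names are placeholders for my lab, and the UnityVSA OVA also carries OVF properties (System Name, management networking) whose keys vary by version, so run ovftool against the OVA on its own to list them rather than trusting my guesses.

import subprocess

# All of these values are placeholders for my lab environment
OVA = "UnityVSA.ova"
TARGET = "vi://administrator%40vsphere.local@vcenter.lab.local/Lab/host/NestedCluster"

cmd = [
    "ovftool",
    "--noSSLVerify",
    "--acceptAllEulas",
    "--name=UnityVSA01",          # virtual machine name (step 4)
    "--datastore=vsanDatastore",  # step 5
    "--network=VM Network",       # step 6: map the source networks to one port group
    # Deliberately no --powerOn: we still need to attach data disks first (step 9)
    OVA,
    TARGET,
]

subprocess.run(cmd, check=True)   # prints ovftool progress; raises if the deployment fails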

 

UnityVSA_ResourceReservations

 

A couple notes on the virtual machine itself to be aware of. EMC’s intent with UnityVSA is to provide a production-grade virtual storage array. To ensure adequate performance, when the OVA is deployed the resulting virtual machine has reservations in place on both CPU and memory. These reservations will ensure the ESX host/cluster underneath the VSA can deliver the appropriate amount of resources for the virtual array to function. If you are planning on running UnityVSA in a supported, production environment, I recommend leaving these reservations. Even if you have enough resources to make the reservations unnecessary, I’m sure EMC Support will take issue with you removing them should you need to open a case. If this is just for testing and you are tight on resources, you can remove these reservations.


Next, let’s add that extra hard drive I talked about and power this up.

 

*Note: there is no sound; the video is just for following along with the steps.

UnityVSA-InitialConfigurationWizard

  1. In vCenter, right-click the UnityVSA and select Edit Settings
  2. At the bottom, under New Device, use the drop-down box to select “New Hard Disk” and press “Add”
  3. Enter the size; I’m making a 250GB VMDK. If you use the arrow to expand the hard drive settings, you can place it on a separate datastore; if you want to use FAST VP to tier between datastores, this is where you’ll want to customize the hard drive. I’m putting this new hard drive with the VM on VSAN. (Steps 1–4 can also be scripted; see the pyVmomi sketch after this list.)
  4. Once the reconfigure task is complete, right-click your VM again and select “Power On”
  5. I highly recommend opening the console to the VM to watch the boot-up, especially on first boot as the UnityVSA will configure itself.
  6. When the console shows a Linux prompt, walk away. Seriously, walk away; get a coffee, take a smoke break, or work on something else. More is happening behind the scenes at this point and the VSA is not ready. You’ll only get frustrated, and this watched pot will not seem to boil.
  7. Ok, are you back? Close the console and go back to the virtual machine; ensure VMware Tools are running and the IP address shows up. If you chose DHCP as I did, you’ll need to note the assigned IP address. If you don’t see the IP address, the VSA is still not ready (see, I told you to walk away).
  8. Once the IP appears in vCenter, you’re safe to open a browser over SSL to that address.
  9. You should receive a logon prompt; the default login is:
    1. user: admin
    2. pw: Password123#
  10. Did you log in? Do you see the “Initial Configuration” wizard? Congratulations, the VSA is up.
In the next post in this series, we’ll leverage this wizard to fast-track configuring the UnityVSA; but if you want, you can cancel the wizard at this point and configure the array manually, or restart the wizard later through Preferences.

For now, as evidenced by this post, the deployment of UnityVSA was straightforward and exactly what we’ve all come to expect of virtual appliances. From download to up and running, user interaction time was less than 2 minutes, with overall deploy time around 30 minutes.


By | May 24th, 2016 | EMC, Home Lab, Storage

IsilonSD – Part 5: Monitoring Activity

For my deployment of IsilonSD Edge, I want to keep it running in my lab; installing systems is often far easier than operating them (especially troubleshooting issues). However, an idle system isn’t really a good way to get exposure, so I need to put a little activity on this cluster, plus monitor it.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

This is just my lab, so here is my approach to doing more with IsilonSD than simply deploying it:

  • Deploy InsightIQ (EMC’s dedicated Isilon monitoring suite)
  • Move InsightIQ Data to IsilonSD Edge Cluster
  • Synchronize Software Repository
  • Mount Isilon01 as vSphere Datastore
  • Load Test

Deploy InsightIQ

InsightIQ is EMC’s custom-built monitoring application for Isilon. Personally, this was one of the top reasons I selected Isilon years ago when evaluating NAS solutions. I’m a firm believer that the ability to monitor a solution should be a key deciding factor in product selection.

Without going too deep into InsightIQ itself (that’s another blog), it provides the ability to monitor the performance of the Isilon, including the client perspective of that performance. You can drill into the latency of operations by IP address; in fact, when I first purchased an Isilon array, it was because the incumbent solution was having numerous performance problems and the lack of visibility into why was causing a severe customer satisfaction issue.

InsightIQ monitors the nodes, cluster communication, and even does file analysis to help administrators understand where their space is consumed and by what type of files.

Deploying InsightIQ is a typical OVA process; we’ve collected the necessary information in previous posts, so I’ll be brief. In fact, you can probably wing it on this one if you want.

*Note: there is no sound; the video is just for following along with the steps.
  1. In the vSphere Web Client, deploy an OVA
  2. Provide the networking information and datastore for the InsightIQ appliance
  3. After the OVA deploy is complete, open the console to the VM, where you’ll need to enter the root password
  4. Navigate your browser to the IP address you entered, logging in as root, with the password you created in the console
  5. Add the Isilon cluster to InsightIQ and wait while it discovers all the nodes.

 

Move InsightIQ Data to IsilonSD Edge Cluster

You can imagine that collecting performance data and file statistics will consume quite a bit of storage. By default InsightIQ stores all this data on the virtual machine, so I moved the InsightIQ datastore onto the Isilon cluster itself. While this is a little circular, InsightIQ will generate some load writing the monitoring data, which in turn gives it something to monitor; for our lab purposes this provides some activity.

Simply log into InsightIQ and, under Settings -> Datastore, change the location to NFS Mounted Datastore. By default Isilon exports /ifs; in production this should ALWAYS be changed, but for a lab we’ll leverage that export path.

IsilonSD_InsightIQDSMove

If you do this immediately after deploying InsightIQ, it will be very quick. If, however, you’ve been collecting data, you’ll be presented with information about the progress of the migration; refreshing the browser will provide updates.

IsilonSD_InsightIQDSMoveProgress

Synchronize Software Repository

I have all my ISO files, keys, OVAs and software installers on a physical NAS; this makes it very easy to mount via NFS as a datastore to all my hosts, physical and nested, for quickly installing software in my lab. Because of this, I use the repository daily; so to ensure I’m actually utilizing IsilonSD and continuing to learn about it post-setup, I’m going to use IsilonSD to keep a copy of this software repository, mounting all my nested ESXi hosts to it.

I still need my physical NAS for my physical hosts; in case I lose the IsilonSD cluster, I don’t want to lose all my software and be unable to reinstall. I want the physical NAS and IsilonSD to stay in sync too. My simple solution is to leverage robocopy to sync the two file systems; the added benefit is that I also get regular load on IsilonSD.

Delving into robocopy is a whole different post, but here is my incredibly simple batch routine. It mirrors my primary NAS software repository to the Isilon. This runs nightly now.

:START
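REM /MIR mirrors the tree (including deletions), /MT:64 copies with 64 threads,
REM /R:0 /W:0 skip retries and waits on locked files, /ZB uses restartable mode
REM and falls back to backup mode if access is denied.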
robocopy \\nas\software\ \\isilon01\ifs\software\ /MIR /MT:64 /R:0 /W:0 /ZB
GOTO START

Upon first execution, I can see the traffic onto IsilonSD in InsightIQ. Even though this is nested ESXi, with the virtual Isilon nodes sharing compute, network, memory and disk, I see a fairly healthy external throughput rate, peaking around 100Mb/s.

IsilonSD_InsightIQRobocopyThroughput

 

When the copy process is complete, looking in the OneFS administrator console will show the data has been spread across the nodes (under HDD Used).

IsilonSD_HDDLoaded

Mount Isilon01 as vSphere Datastore

Generally speaking, I would not recommend Isilon for VMware storage. Isilon is built for file services, and its specialty is sequential access workloads. For small workloads, if you have an Isilon for file services already, an Isilon datastore will work; but there are better solutions for vSphere datastores in my opinion.

For my lab uses though, with my software repository replicated onto Isilon, mounting an Isilon NFS export as a datastore will not only let me access those ISO files but also open multiple concurrent connections to monitor.

*Note: there is no sound; the video is just for following along with the steps.
Mounting an NFS datastore from Isilon is exactly the same as with any other NFS NAS.

You MUST use the FQDN to allow SmartConnect to balance the connections.
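If you’re mounting the export to several nested hosts, clicking through the wizard gets old; here’s a minimal pyVmomi sketch that does the same mount. The vCenter details, host-naming convention, export path and datastore label are placeholders for my lab, and the remote host is the SmartConnect FQDN for exactly the reason above.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

# NFS spec: use the SmartConnect FQDN so connections are balanced across the nodes
spec = vim.host.NasVolume.Specification(
    remoteHost="isilon01.lab.vernex.io",
    remotePath="/ifs/software",   # placeholder export path
    localPath="Isilon01",         # datastore name shown in vSphere
    accessMode="readWrite",
)

view = si.content.viewManager.CreateContainerView(si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    if host.name.startswith("nested-esx"):   # placeholder naming convention for my nested hosts
        host.configManager.datastoreSystem.CreateNasDatastore(spec)
        print("Mounted Isilon01 on", host.name)

Disconnect(si)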

 

With the datastore mounted, if you go back into the OneFS administrator console, you can see the connections were spread across the nodes.

IsilonSD_ConnectionSpread

Now I have a purpose to regularly use my IsilonSD Edge cluster, keeping it around for upgrade testing, referencing while talking to others, etc. Again, with the EMC Free and Frictionless license, I’m not going to run out of time; I can keep using it.

Load Test

Even though I have an ongoing use for IsilonSD, I want it to do a little more than just serve as a software share, just to ensure it’s really working well. So I’ll use IOMeter to put a little load on it.
I’m running IsilonSD Edge on 4 nested ESXi virtual machines, which in turn are all running on one physical host. So IsilonSD is sharing compute, memory, disk and network across the 4 IsilonSD nodes (plus I have dozens of other servers running on this host). Needless to say, this is not going to handle a high amount of load, nor provide the lowest latency. So, while I’m going to use IOMeter to put some load on my new IsilonSD Edge cluster, and typically I would record all the details of a performance test, this time I’m not, especially given I’m generating load from virtual machines on the same host.

Given Isilon runs on x86 servers, it would be incredibly interesting to see a scientific comparison between physical Isilon and IsilonSD Edge on like-for-like hardware. In my personal experience with virtualization there is negligible overhead, but I have to wonder what difference InfiniBand makes.

In this case, my point of load testing is not to ascertain the latency or IOPS, but merely to put the storage device under some stress for a couple of hours to ensure it’s stable. So I created a little load, peaking around 80Mbps and 150 IOPS, but running for about 17 hours (overnight).

Below are some excerpts from InsightIQ; happily, the next morning the cluster was running fine, even under the load. During the test, the latency fluctuated widely (as you’d expect due to the level of contention my nested environment creates). From an end user perspective, it was still usable.

IsilonSD_LoadTest1

IsilonSD_LoadTest2

IsilonSD_LoadTest3

IsilonSD_LoadTest4

In my next post I’m going to wrap this up and share my thoughts on IsilonSD Edge.

By | April 1st, 2016 | EMC, Home Lab, VMWare

IsilonSD – Part 3: Deploy a Cluster (Successfully)

With proper planning and setup of the prerequisites (see Part 2), the actual deployment of the IsilonSD Edge cluster is fairly straightforward. If you experience issues during this section (see Part 1), it’s very likely because you don’t have the proper configuration, so revisit the previous steps. That said, let’s dive in and make a cluster.

This post is part of a series covering the EMC Free and Frictionless software products.
Go to the first post for a table of contents.

At a high level, you’re going to do a few things:

  1. Deploy the IsilonSD Edge Management Server
  2. Setup IsilonSD Edge Management Server Password
  3. Complete IsilonSD Edge Management Server Boot-up
  4. Configure  Management Server vSphere Link & Upload Isilon Node Template
  5. Open the IsilonSD Edge vSphere Web Client Plug-in
  6. Deploy the IsilonSD Edge Cluster

Here’s the detail.

Deploy the IsilonSD Edge Management Server

*Note: there is no sound; the video is just for following along with the steps.
This is your standard OVA deployment; as long as you’re using the “EMC_IsilonSD_Edge_MS_x.x.x.ova” file from the download and providing an IP address accessible to vCenter, you can deploy it just about anywhere.

Follow along in the video on the left if you’re not familiar with the OVA process.

Once the OVA deployment launches, ensure you find the deployment task in the vSphere Task Pane and keep an eye on the progress.

Setup IsilonSD Edge Management Server password

IsilonSD_ManagementBootPasswordChange

Once the OVA deployment is complete and the virtual machine is booting up, you’ll need to open the console and watch the initialization process. Generally I recommend this with any OVA deployment, as you’ll see if there are any errors as the first-boot configuration occurs. For the IsilonSD Edge Management Appliance it’s required, since you have to set the administrator password here.

Complete IsilonSD Edge Management Server Boot-up

IsilonSD_ManagementConsoleBlueScreen

After entering your password, the server will continue its first-boot process and configuration. When you reach this screen (what I call the blue screen of start) you’re ready to proceed. Open a browser and navigate to the URL provided on the blue screen next to IsilonSD Management Server.

 

Configure Management Server vSphere Link & Upload Isilon Node Template

When you navigate to the URL provided by the blue screen, after accepting the untrusted certificate, you’ll be prompted for logon credentials. This is NOT the password you provided during boot-up. I failed to read the documentation and assumed it was, resulting in much frustration.

Logon using:
username: admin
password: sunshine

After successful logon, and accepting the EULA, you have just a couple of steps, which you can follow along with in the video on the right:

  1. Adjust the admin password
  2. Register your vCenter
  3. Upload the Isilon Virtual Node template
    1. This is “EMC_IsilonSD_Edge_x.x.x.ova” in your download

*Note: there is no sound; the video is just for following along with the steps.

 

Open the IsilonSD Edge vSphere Web Client Plug-in

Wait for the OVA template to upload; this may take up to ten minutes depending on your environment. Once complete, you’ll be ready to move on to actually creating the IsilonSD cluster through the vSphere Web Client plug-in that was installed by the Management Server when you registered vCenter. Ensure you close all the browser windows and open a new session to your vSphere Web Client.

IsilonSD_vCenterDatacenter

Select the datacenter where you deployed the management server (not the cluster; another place I lost some time).

 

IsilonSD_ManageTab

In the right-hand pane of vSphere, under the Manage tab, you should now see two new sub-tabs: Create IsilonSD Cluster and Manage IsilonSD Cluster.

IsilonSD_vCenterIsilonTabs

 

Deploy the IsilonSD Edge Cluster

*Note: there is no sound; the video is just for following along with the steps.

Follow along in the video above:

  1. Check the box next to your license
  2. Adjust your node resources
    1. For my deployment, I started with 3 nodes, adjusting the Cluster Capacity from the default 2 TB to the minimum 1152 GB (the 64GB minimum drive size × 6 data drives × 3 nodes)
  3. Clicking Next on the Requirements tab will search the virtual datacenter in your vCenter for ESX hosts that can satisfy the requirements you provided, including having those independent drives that meet the capacity requirement
    1. Should the process fail to find the necessary hosts, you’ll see a message like this. Don’t get discouraged; look over the requirements again to ensure everything is in order, and try restarting the Inventory Service too.
    2. IsilonSD_NoQualifiedHosts
  4. When the search for hosts is successful, you’ll see a list of hosts available to select, such as
    1. IsilonSD_HostSelection
  5. Next, select all the hosts you wish to add to the cluster (if you prepared more than 3, consider selecting 3 now, as in the next post we’ll walk through adding an additional node).
  6. For each host, you’ll need to select the disks and their associated role (Data Disk, Journal, Boot Disk or Journal & Boot Disk).
    1. Remember, you need at LEAST 6 data disks per node; you won’t have gotten this far without them, but you won’t get any farther if you don’t select them here.
    2. In our scenario, we select 6x 68GB data disks and a final 28GB disk as the Journal & Boot Disk
    3. You’ll also need to select the External Network Port Group and Internal Network Port Group
    4. IsilonSD_HostDriveSelection
  7. After setting up all hosts with the exact same configuration, you’ll move on to the Cluster Identity
    1. IsilonSD_ClusterIdentity
    2. Cluster Name (this is used in the management interface to name the cluster)
    3. Root Password
    4. Admin Password
    5. Encoding (I’d leave this alone)
    6. Timezone
    7. ESRS Information (only populate this if you have a production license)
  8. Next will be your network settings.
    1. IsilonSD_ClusterNetworking
    2. External Network
    3. Internal Network
    4. SmartConnect
  9. You have a final screen to verify all your settings; look them over (the full deployment will take a while) and click Next.

At this point, patience is key; do not interrupt it. An OVA will be deployed for every node, then all of those unformatted disks will be turned into datastores, then VMDK files will be put on each datastore; finally all the nodes will boot and configure themselves. If everything goes as planned, your reward will look like this:

IsilonSD_ClusterCreationSuccess

To verify everything, point your browser to your SmartConnect address, in our case https://Isilon01.lab.vernex.io:8080. If you get a OneFS logon prompt, you should be in business!

IsilonSD_OneFSLogonPrompt

 

You should also be able to navigate in Windows to your SmartConnect address (recall ours is \\Isilon01.lab.vernex.io\) and see the IFS share. This is the initial administrator share that you’d disable in a production environment. Likewise, in *nix you can NFS mount Isilon01.lab.vernex.io:/ifs.
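If you’d rather verify from code than a browser, here’s a minimal sketch that resolves the SmartConnect name a few times (with the default round-robin policy you should see different node IPs come back, assuming your resolver isn’t caching the answer) and then checks that the OneFS web UI answers on port 8080. The FQDN is my lab’s, and certificate verification is off because of the self-signed cert.

import socket
import requests
requests.packages.urllib3.disable_warnings()  # self-signed certificate on OneFS

FQDN = "Isilon01.lab.vernex.io"   # my lab's SmartConnect zone name

# Repeated lookups should rotate through the node IPs (round-robin policy)
for _ in range(6):
    print("Resolved to:", socket.gethostbyname(FQDN))

# The OneFS administration console listens on 8080
resp = requests.get("https://" + FQDN + ":8080", verify=False)
print("OneFS web UI answered with status", resp.status_code)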


By | March 30th, 2016 | EMC, Home Lab, Storage, VMWare

Free and Frictionless – A Series

One of the most common statements I’ve made to vendors over the years is “why should I pay to test your software?”. To this day I still don’t understand this: if I’m going to purchase software to run in my production environment, why should I have to pay to run that software for our development and testing needs? It seems counter-intuitive; in my mind, having easy access to software which IT can test and develop against increases the probability of choosing it for a project. Having software be free in non-production allows developers to ensure it is properly leveraged, encourages accurate testing, and helps operations ensure it’s ready to be run in production. In my experience, not only does this result in more use of the software in production (which means more sales for the vendor), but also more operational knowledge (which means less support needed from the vendor).

Companies offer different solutions to attempt to solve this. Microsoft does it well with TechNet and MSDN subscriptions, where for a small yearly fee you license your IT staff rather than the servers; you get some limited support and, recently, even cloud credits. Many companies will provide time-bombed versions of the software; this helps in the evaluation phase to test installation, but falls short for ongoing development needs, not to mention operations teams gain no experience. Some vendors will steeply discount non-production, though most of them only do this during the purchasing process, and I’ve seen a wide range of how well this gets negotiated (if at all).

There is no doubt in my mind that this challenge is a significant factor in the growth of open-source software. With the ability to easily download software, without time limits and without a sales discussion, the time to start being productive in developing a solution is dramatically reduced. I’ve made this very choice: downloading free software and beginning the project while things like budget are still not finalized. The software can be kept running in non-production, and when it moves into production, support contracts can begin. You don’t need to pay upfront, before prototyping, before a decision is made and before any business value is being derived.

This is why I’ve been ecstatic that EMC is moving towards a model that allows the free use of software in non-production, even for products that don’t use an open-source license model. They refer to this approach as ‘Free and Frictionless’. It doesn’t apply to all their software, but the list is growing; currently it includes products like ScaleIO, VNX, ECS and, recently added, Isilon. The Free and Frictionless products are available for download without support, but without time-bombs either. In most cases there are restrictions, such as the total amount of manageable storage. These limitations are easy to understand and work with, and they fully deliver on my age-old question, “why should I pay to test your software?”.

I’m going to spend a little time with these offerings. Many of them I’ve run in production, at scale, so I’m interested in how well they stack up in their virtual forms. I’ll also explore some products I haven’t run before.

By | March 24th, 2016 | EMC, Home Lab, Storage, VMWare