Nutanix Offering Per Desktop VDI Pricing with Guaranteed Performance

As the first infrastructure vendor in the industry to offer something like this, Nutanix is blazing yet another trail! Introducing Nutanix per desktop VDI pricing with Desktop Assurance.


The biggest problems that plague VDI deployments can be summed up by a few key things:

  • Price
  • Performance
  • Uncertainty

Let’s leave price alone for a second, because we will talk about that once we go through the per desktop VDI pricing model. Instead, let’s talk about performance and uncertainty, because even if you get a great price these two things can torpedo a well-meaning VDI deployment.


Performance covers so many aspects of a VDI deployment. This is not just what any given infrastructure can provide, but also what your users actually need in order to do their job with any level of efficiency. It should also include the speed at which the VDI administration team can react to user requirements, change, and overall management of that environment.

Understanding what resources each desktop will need usually requires some sort of assessment of the daily usage of a subset of the users who will be on the VDI desktops. If the customer has multiple use cases, then a subset of users from each group should be assessed, selected to provide a solid average of that group's usage profile. An assessment should run a minimum of 2 weeks, while 30 days is generally more than enough.

An IT department’s ability to meet changing demands is generally dictated by two factors: the policies and procedures to implement change, and how quickly those changes can actually be applied to the underlying infrastructure. It is the opinion of this author that traditional 3-tier architecture adds time and steps, along with potentially very costly upgrades that take even longer to implement, compared to a web-scale solution like the Nutanix Virtual Computing Platform, where you can add resource capacity dynamically as needed.


Like performance, uncertainty covers many aspects of a VDI deployment. There is uncertainty about what your users actually need, including the performance required on their VDI desktops, and uncertainty about what infrastructure will best provide the resources for those requirements. There is also uncertainty about how large a VDI deployment will ultimately become. A large portion of this depends on how well the VDI deployment is received by the end-users and whether more users or departments are requesting VDI. It also depends on the cost of scaling out the infrastructure of your current VDI environment, what that means from a datacenter perspective (environmental variables like power, cooling, space, etc.), and the management requirements (additional FTEs to keep up with management and deployment tasks, time to delivery, ease of management, time to value, etc.). The list goes on and on.

The most prepared customers will perform desktop assessments before trying to deploy a VDI environment (as stated above). This provides details on how your users use their desktops, what applications are in use, and what resources they consume in terms of CPU, memory, and disk IOPS. This data helps in appropriately sizing an environment. However, too often people size for the average and run into issues when the performance ebb and flow of their users trends up a little too far and performance starts to tank for all of the users sharing those resources. The hardest thing to size correctly for performance is shared storage, and this has traditionally been because of the storage architectures in use.

Traditional shared storage is network based and has one or two storage processors (also called storage controllers) to answer and deliver on storage requests. You can have any number of hypervisor hosts attached to this storage array over the Storage Area Network (SAN), and many VM workloads on each of those hosts. No matter how many hosts you add and how many VM workloads you run, the storage array has both a finite number of disk IOPS it can provide and a finite amount of throughput and processing capability in the storage processors providing access to those disks. Even the newer “software-defined” storage arrays have this limitation, although most of these vendors have tried to “right-size” the balance of disk IOPS to storage processor performance, as well as the network capabilities. But these solutions, like their larger SAN cousins, do not generally scale out and are like mini-islands of storage. When I mention scale out, I am mainly referring to clustered, distributed file systems that can grow both dynamically (zero downtime or disruption to running workloads) and exponentially (with no practical limit). With either model the customer has to make sizing guesses that may need to be forecast out 3-5 years in advance. Not only is that hardware CapEx spend depreciating before you can fully utilize it and reach your target ROI, but it’s practically impossible to hit your goal with that forecast method.

Most of what I’m talking about isn’t unique to VDI either… it applies to virtualization in general.


Finally, with those two described, we can understand the true price of VDI: the cost of the infrastructure, the cost of administering that infrastructure, and the cost of the agility needed to meet changing demands.

While I’m not going to talk about specific prices in this blog (that discussion should be had with a Nutanix Partner or Account Manager in your area), I am going to talk about some things you need to consider when calculating your per desktop VDI costs. With traditional architecture, like I mentioned earlier, you generally have to size for your end goal in advance. If you don’t, then you either face forklift upgrades as your environment grows, or you wind up with a disjointed and extremely complex infrastructure to manage that is essentially a bunch of different environments. This may be a slight exaggeration, but I hope you get my point.

If you size for the future, you are spending a lot of money up front for infrastructure that won’t be used, which means your cost per desktop is insanely high until you start filling it up. An example would be spending $500K on infrastructure and then deploying 100 VDI desktops to start. That would give you a cost of $5K per desktop while that is all you are running! As you deploy more desktops, the price per desktop starts to drop… but how fast are you going to reach the planned desktop capacity you originally forecast and ultimately used to figure out your per desktop cost? All the while, that infrastructure is depreciating and aging.
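To make the math concrete, here is a minimal sketch using the hypothetical $500K spend and desktop counts from the example above:

```python
def cost_per_desktop(infrastructure_cost: float, desktops_deployed: int) -> float:
    """Up-front infrastructure spend divided across the desktops actually running."""
    return infrastructure_cost / desktops_deployed

# Hypothetical numbers from the example above: $500K spent up front,
# with only 100 desktops deployed at first.
spend = 500_000
for deployed in (100, 500, 1000):
    print(f"{deployed:>4} desktops -> ${cost_per_desktop(spend, deployed):,.0f} per desktop")
# ->  100 desktops -> $5,000 per desktop
# ->  500 desktops -> $1,000 per desktop
# -> 1000 desktops -> $500 per desktop
```

The per desktop cost only reaches its forecast once the environment actually fills up; until then, you carry the full spend.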

If you size for different deployments and buy different infrastructure for each deployment based on a certain performance profile and density required for each deployment, you will get a solid price per desktop but your administration of all of those environments will become a nightmare.

And what do you do when things take a left turn in either situation? It only exacerbates the problems!!

What if I told you that you could start your infrastructure out small and grow capacity only as you need it, while maintaining the same administration and architecture methods at pretty much any size, and know that your performance will not diminish as you grow? But wait… there’s more! 😉 It’s even better than that, because you can do all of this dynamically, with zero downtime or disruption, in small cost increments that provide a known fixed price per desktop that never changes based on scale. And did I mention a single user interface to manage all of this infrastructure? How about the fact that you don’t need to overthink where to put the various parts of your VDI desktops (I’m referring to replicas, deltas, and user data), because data is tiered live based on actual usage at any given moment? Does installation & setup in about an hour for a brand new installation sound good? What about dynamically expanding your cluster in minutes when you need more resources?

Sound too good to be true? Well then you haven’t seriously checked out the Nutanix Virtual Computing Platform. The per desktop pricing with Desktop Assurance is simply icing on the cake. Sure, you could get all of these benefits, minus the guarantee, by just buying the Nutanix gear. But Desktop Assurance provides a guarantee that we will honor the promised performance or provide more hardware to make up the difference at no additional charge to the customer. You can’t get that anywhere else.

I have seen some pretty wild and far-fetched claims on density numbers from the competition that make the forecasted price per desktop look very low. However, I have never seen any of those promises delivered on, so the actual price per desktop isn’t realistic and always ends up higher than promised. Knowing that you can bank on a given price per desktop makes things very predictable, and finance and procurement people love that.

So, performance is something that we’ve already worked out the math on and are willing to guarantee or we’ll provide more hardware at no additional cost to meet the guarantee.

As for uncertainty, because you can add packs of desktops as you are ready to deploy more, which include the required hardware to run them, you don’t have to forecast what your end goal is. Buy what you need when you need it and pay as you’re ready to grow!

It really can be that uncompromisingly simple. #NutanixFTW

Sorry Amazon, you missed your opportunity with the Fire phone


Amazon’s new Fire Phone

Well Amazon, I think you missed your opportunity to enter the smartphone market.

While the phone itself could be nice, even great, the idea behind it is to sell more stuff through Amazon, and most of the features you touted in your keynote were about exactly that. The Firefly feature, while cool, is really just a way to make it easier for someone to buy something on Amazon.

All of this is well and good. I get it… you need to make money and these devices are a great way to help you do that. Make it easier to buy things, and people will generally buy more. Remove roadblocks… smart.

What I don’t get is why you didn’t take this opportunity to subsidize part of the price of the phone yourself to help lower the initial buy-in for your customers. $199 WITH a 2yr contract for a 32GB phone is not a great price. It is a better price than your competitors’ 32GB smartphones, but you can’t even claim compatibility with all the apps in the Google Play Store. So you are offering a somewhat crippled Android phone for the same entry price as your competition. I just don’t see people clamoring for this phone. I could be wrong.

Personally, I think a $99 price tag would get people to buy it, even knowing what it is designed for. Keeping the 32GB of storage, you could maybe get away with $149, but I still don’t think it’s compelling enough.

Here’s hoping I’m wrong and you sell millions of them.

See Shared Google Calendars in Native Apple Calendar Apps!

My company uses Google Apps for email, calendaring, etc. What bugged me was that I couldn’t figure out how to see other employees’ shared calendars in the native iOS/OS X Calendar applications. So I had to use Google’s web calendar app to see them, which always seemed like a PITA to me.

On iOS I found a couple of apps that let me see these shared calendars on my iOS devices, but I never found anything for the desktop (other than web-based apps). My favorites among those apps are Calendars 5 by Readdle and Sunrise. Sunrise actually has a desktop app via the web, but I’d rather have something native that I can use offline. Both apps were decent (I started to use Calendars 5 as my main/default calendar app), but they seem a little slow to sync, which slows you down when checking something or making changes.

Being frustrated, I happened to do another search today. Perhaps I used different keywords for my search, but I actually FOUND something that worked!!

The blog post is here, by Google App Tips from Refractiv, and outlines the steps required to set up a Google Apps account in both the iOS and OS X Calendar apps so you can see all of the shared calendars you want. The key step I was missing was Step 5, which links to a Google Apps sync setup page that allows you to choose which calendars you want to enable for sync. You have to browse to that link from a desktop web browser (I believe), but one of the commenters lists a link that works from a mobile device as well.


Also, I had to close my calendar apps (on the desktop and mobile) for the changes to be reflected. They may show up over time, but I didn’t feel like waiting. You can then pick and choose which calendars to display in your calendar app, so you can set up all calendars to sync but only display the ones you want to see at any given time. This gives you the most flexibility.

I know it may seem silly, but I am SO excited that I can now view these calendars in the native Calendar apps, which I like better than any other calendar app (so far). Enjoy! 😉


Update: Just in case the post goes offline in the future, I wanted to capture the URL from Step 5 for selecting the calendars to sync.

And the URL that commenter Michael Dweck listed (I never tested this one).

This mobile link doesn’t seem to work, however the first link was accessible from my iPhone, so just use that link.

May the 4th Be With You!


Well, it’s that time of year again… Star Wars Day!

I started my Star Wars movie marathon at 1pm today… at this rate, I will not finish all 7 of the movies (yes, I’m including the “Clone Wars” in this marathon) until around 3am! But so worth it. 😉

Hope every Star Wars fan is having fun doing something Star Wars related today.

I am COMPLETELY excited that the main character actors will be in Episode 7. You can find more news about it here. Cool pic of the cast!



Retina display goodness… ruined by low res images on the web

Retina Display


Retina, or high pixel density, displays are no longer new. They are on all the major and mainstream mobile devices, as well as Apple’s flagship MacBook Pro laptops. While Apple may have coined the term “retina” to differentiate from their lower resolution devices/displays, it is the term I will use when referring to displays with a pixel density of 200ppi or better (ppi stands for “pixels per inch”). This is mainly because people are familiar with the term “retina” and what it means, and I have a ton of Apple gear. 😉

I feel kind of redundant talking about the technical details of what the differences are between regular (non-retina) displays and retina displays because there are so many posts out there that already do a good job of this.

If you are looking for more details, check out this wikipedia page on PPI and this one on Retina Display, and there are countless others.

The quick and easy explanation, for Apple’s devices anyway, is that retina displays essentially have four (4) times as many pixels as their non-retina counterparts. One (1) regular pixel becomes four (4) pixels on a retina display.

They use these extra pixels not for creating more screen “real estate”, but rather for making things look sharper. They do this by scaling what is displayed. As an example, the 15″ retina MacBook Pro’s resolution is 2880×1800, but it is scaled (by default) to act like it is 1440×900.

So, for the iPhone, the original iPhone’s display has a resolution of 320×480 and the retina version in that display’s form factor (the iPhone 4 & 4S… keep in mind they changed the screen dimensions starting with the iPhone 5) is 640×960. This is each dimension’s pixel count multiplied by two, but keep in mind that means each pixel equates to four (4) pixels when this happens. This post by Smashing Magazine does a good job of explaining this.


image credit to Smashing Magazine
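The pixel arithmetic above can be sketched in a few lines (a toy illustration, using the resolutions quoted earlier):

```python
def retina_resolution(width: int, height: int, scale: int = 2):
    """Each dimension is multiplied by the scale factor, so the total
    pixel count grows by scale squared (4x for a 2x retina display)."""
    return width * scale, height * scale

# Original iPhone display vs. the retina version in the same form factor
w, h = retina_resolution(320, 480)   # -> (640, 960)
multiple = (w * h) // (320 * 480)    # -> 4

# The 15-inch retina MacBook Pro renders at 2880x1800 but is scaled
# to act like 1440x900 -- the same 2x-per-dimension relationship.
assert retina_resolution(1440, 900) == (2880, 1800)
```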

Why am I even writing this post? Well, I recently bought a 15″ MacBook Pro with a retina display. Before I bought it, I didn’t think it would be that big of a deal. I had seen them, my friends have had them, and I had checked them out… but it is not until it is yours, and you are doing all of the things you normally do on a computer, that you “get it”. Then, as you use various applications and visit different web sites, it becomes PAINFULLY clear when something is NOT retina capable. At first I thought the non-retina graphics looked worse than on my non-retina laptop… until I compared them side-by-side and realized they are not worse… it is just so much more dramatic when retina capable content and non-retina content sit within centimeters of each other on the same web page. As an easy example, text (unless it is an actual bitmap image) is pretty much always retina capable because fonts are vector based, not bitmaps. So if you visit a web page with bitmap-based images (this includes BMP, JPG, PNG, GIF, etc.) that are not configured as 2x images scaled down for retina displays, and those images sit next to the perfect looking text, it becomes very obvious.

This is also true on an iPad or iPhone with a retina display; however, it is not as painful because those devices already do some scaling of web pages and images, so you are not scrolling all over the place to see the content. Where it becomes very obvious is if you run a native iOS application whose graphics were not designed for a retina display. This is almost impossible to find anymore, but there are some out there.

So this brings up the other reason I started looking into this more, which is the fact that I created (with some serious help from ThinkUp, LLC) a native iOS application and noticed that some of the graphics are not retina capable, and it shows. Unfortunately I was not the one who created those graphics, so I don’t have the originals to scale as needed. I will be getting them from the correct person, though, so I should be able to fix those very quickly.

At any rate, since this has been bugging me on other web sites I visit with my retina MacBook Pro, I figured I better make darn sure my own web site was fully retina capable. So I visited a bunch of web posts that talk about how to do it and started experimenting on my own. I bought a full Adobe Creative Cloud subscription so I would have all of those applications to use in building both web and application assets. They are good (even great) tools, but I don’t necessarily think they are the “best” hands down. I had purchased other OS X native applications that were allowing me to get the job done, but since I am also collaborating with others and sharing assets, AND those other people always seem to have Adobe applications, it just made sense to switch over to Adobe. Of course, now I have to figure out how to do everything I need in those apps… but maybe that’s another post. 😉

This post by Daan Jobsis (you’ll need to translate the page into English) does an awesome job of showing the various ways you can solve the issue and, in some cases, wind up with images that are retina capable but even smaller than the non-retina images! I suppose part of the reason I brought up the Adobe Creative Cloud suite is that I have not been able to figure out how to apply varying levels of compression to PNG files. I have applied varying levels of compression to JPG files, but most of the images I am using require transparency, which means PNG files. So far I only see one (1) compression option in Photoshop, and it doesn’t seem to reduce the file size much at all. But in Daan’s post he mentions that you can use higher compression on the larger-dimension files because the compression artifacts are not as noticeable once you scale them down to their 72ppi dimensions. I am hopeful that I can use this method and maintain only one set of image assets for my web site… even if I may have to maintain both versions for my iOS application/s.

There is another method that intrigues me, but it still requires maintaining two sets of image files for each asset (or maybe only for those images that are significantly larger in the @2x size): retina.js. Retina.js is a JavaScript library that essentially parses through your HTML and, for any image it sees, looks for a file with the same name and “@2x” at the end of the name. This naming convention is the one Apple mandates for their application development, and it seems the industry is following suit. For example, if you had an image named “picture.png”, it would look for “picture@2x.png” in the same directory and use that. Aside from having to manage two image files per asset, this method also means the browser makes additional HTTP requests to pull the higher-res images. If you are on a mobile device using a cellular data plan, this can add kilobytes of additional data usage per page… which adds up. The nice thing about this method is that the script sits at the bottom of the page, so if the lower-res images are a lot smaller in size, images show up faster and then improve in quality. I am uncertain whether this script is smart enough to figure out if the @2x images are actually needed and only use them if the screen is >200ppi. It would be even cooler if it could determine the type of network connection and not pull the larger files over cellular, but that may be asking too much.
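The @2x naming rule is simple enough to sketch. This is a rough illustration of the convention, not retina.js’s actual code:

```python
from pathlib import PurePosixPath

def at2x_name(image_path: str) -> str:
    """Map an image filename to its high-res counterpart, retina.js-style:
    'picture.png' -> 'picture@2x.png' in the same directory."""
    p = PurePosixPath(image_path)
    return str(p.with_name(f"{p.stem}@2x{p.suffix}"))

print(at2x_name("picture.png"))    # picture@2x.png
print(at2x_name("img/photo.jpg"))  # img/photo@2x.jpg
```

A build-side helper like this could also verify that every asset actually has its @2x twin before you deploy.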

Other posts talk about using SVG (Scalable Vector Graphics) files where that works, so I will start looking into that as an option, since SVGs do NOT require maintaining two files. Browser support is more limited, but I think only IE8 and earlier are unsupported. I would be surprised if anyone visiting my site was using anything earlier than IE9 on a Windows PC, but I suppose only time (and stats) will tell. It looks like Michaël Chaize wrote a great article on exporting SVGs from Illustrator CC, so I will spend some time going through it and testing.

As always, my web design and my app design is “as time permits”. With a full-time job and 5 kids, time doesn’t permit often. 😉

I Couldn’t Agree More!

Not too long ago, Kelly Olivier created a blog post for the Nutanix web site about his experience first as a customer, Nutanix’s FIRST customer in fact, and then as an employee of the company. Kelly and I started around the same time and we were at new hire training together, so that’s when I got to know him. While I was never a direct Nutanix customer before coming to work for them, I still share many of his experiences and opinions.

It was January 2011 in Palo Alto, CA, and I was helping host a training-type event for VMware SEs. I was still fairly new as a VMware employee, so “helping” may be a bit of a stretch. At any rate, Nutanix was one of the very select vendors we brought in to present to everyone. Since my group was completely focused on VMware View (now VMware Horizon View), otherwise known as VDI (Virtual Desktop Infrastructure), and because Nutanix was causing some disruption in the traditional 3-tier infrastructure market, particularly around solving VDI workloads, it made sense that they made the short list.

I had not been introduced to them before this, so the presentation by Dheeraj Pandey and Mohit Aron was my first glimpse of their solution. I have to admit, when they talked about the type of resiliency they had even though they were using local storage, I called bullshit (to one of my co-workers, anyway). My co-worker actually had a Nutanix “block” in his home lab and confirmed that their claims were true. I had (and still have) loads of respect for this co-worker, so my interest was piqued and I started paying more attention, not only during that specific presentation but also beyond that day.

Fast forward to January 2013, when things at VMware got a little sketchy from an employment perspective. A great friend and previous co-worker (not the same person already mentioned) had already left VMware and started working for Nutanix. When I told him I was starting to look around, he let me know there was an opening for the Philadelphia area (where I live), which also meant I wouldn’t have to travel all around the globe like my job at VMware required. So, long story short, I interviewed and was offered a job at Nutanix in March of 2013, and I couldn’t be happier since that day.

The Nutanix Virtual Computing Platform stands up to not only its claims, but also the competition. And things are really starting to heat up on the competitive front, as Nutanix is no longer the small startup ankle-biter the big companies used to think we were. They are starting to feel the pain in their pocketbooks as they lose business to us and can’t wrangle it back. You see, almost 100% of the time a first-time Nutanix customer will buy more and more Nutanix as they realize the “sales pitch” is not just smoke and mirrors and they really can trust it to run their Tier 1 applications, etc.

It is wonderful to be back in sales again, but better yet is not ever having to lie about capabilities and how I feel about the product. Nutanix is not a silver-bullet that is the perfect solution for every need, and thankfully the executives of the company KNOW that! They don’t ever expect their employees to lie or make false statements to try and win business. I have always taken this approach in sales roles I have held in my working career, but my “conscience” was not always well received by all employers, and in some cases I had to find an exit because I was being asked to do things I didn’t agree with (none of my more recent companies, thank goodness).

Nutanix is having explosive growth not only in sales/revenue, but also in company size! I believe I was employee #163 or somewhere around that when I was hired, and we just went over 500 current employees! So Nutanix has more than doubled in size in just 1 year.

I can’t wait to see what the future holds!!