Recently a well-respected figure in the Citrix community published a blog post called “The Hidden Costs of VDI.”
Granted, some of those points are valid. However, I felt the piece was skewed enough from my own perception to warrant a response.
A quick note about cost models
It is true: caveat emptor. Good cost models let you challenge the assumptions and plug in your own numbers for everything. Make sure you find and use one of these, or work with a trusted IT consultant™ to develop one for your situation.
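To make that concrete, here’s a minimal sketch of the shape such a model takes, in Python. Every figure in it is a made-up placeholder, not a quote; the point is that each assumption is exposed where you can challenge it:

```python
# A toy per-seat cost model. Every number below is a placeholder assumption;
# the whole point is that you can see and challenge each one.
assumptions = {
    "seats": 500,
    "server_hw": 120_000,         # hosts, chassis, networking
    "storage": 80_000,            # shared storage for the desktop images
    "sw_licenses_per_seat": 150,  # broker + hypervisor + client licensing
    "thin_client_per_seat": 300,
    "admin_fte": 1.5,             # admins dedicated to the environment
    "fte_cost": 90_000,           # fully loaded, per year
    "years": 4,                   # amortization period
}

def cost_per_seat(a: dict) -> float:
    capex = a["server_hw"] + a["storage"] + a["seats"] * (
        a["sw_licenses_per_seat"] + a["thin_client_per_seat"])
    opex = a["admin_fte"] * a["fte_cost"] * a["years"]
    return (capex + opex) / a["seats"] / a["years"]

print(f"${cost_per_seat(assumptions):,.0f} per seat per year")
```

If a model won’t let you swap out any of those inputs, it isn’t a model; it’s a sales pitch.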
The hidden costs of server-based computing
Not being able to get rid of legacy systems
The author makes the point that unless you can replace everything in an infrastructure, migration becomes more expensive. He also asserts that you must retain the prior support infrastructure if you want to keep using old desktops as new client devices.
Many customers prefer phased approaches to alleviate some of this issue. Why not replace only the 25-30% of desktops that are due to retire this year? If you’re doing more than a couple hundred desktops a year, the numbers still make sense. Most businesses have user bases segmented enough that there are well-defined user groups to focus projects like this on.
If there are already desktop management products in place, it makes sense to leverage those to assist in the migration project. Say you have tossed a third of the old desktops in favor of VDI and thin clients. Now you’re ready to tackle the next 300 systems. But the first phase of your project took only four months, and it will be eight months before those systems are depreciated and ready to retire.
Leverage your tools and good design skills. Develop a locked-down desktop to serve as a display client. Install only the OS, client software, and multimedia support applications required for an optimal user experience. Set up the OS shell to run only the client application and keep the user out of the local OS.
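As one illustration of that shell lockdown on a repurposed Windows XP box, you can replace the Explorer shell with the display client itself so the machine boots straight into a session. Here’s a minimal sketch in Python; the client path is a placeholder for whatever client you actually deploy:

```python
import winreg  # Windows-only standard library module

# Replace the machine-wide default shell so users land straight in the
# remote session and never see the local desktop. The client path is a
# placeholder assumption; point it at the client binary you actually use.
CLIENT = r"C:\Program Files\YourVendor\DisplayClient\client.exe"

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "Shell", 0, winreg.REG_SZ, CLIENT)
winreg.CloseKey(key)
```

Pair that with your existing desktop management tooling and those not-yet-depreciated machines become perfectly serviceable thin clients in the interim.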
Changes to user paradigms
Changes that negatively impact the user can be problematic from a helpdesk perspective. Some changes are necessary to improve the users’ security or protect the company. Anyone remember when virus scanning wasn’t ubiquitous? Do you remember when you could get away without scanning all your files on the file servers? How did that change when a user was infected or there was a big outbreak?
Did the users complain when you rolled out SMS/SCCM or some other application that was perceived as Big Brother? Did they appreciate it when they realized that an application could be delivered via a phone call or a web page instead of a technician with a CD-ROM?
The same is true for the kind of granular access control SmartAccess provides. You forgot your laptop, but you have your cell phone. Is it a benefit that you can view your documents? You bet. Is it a hindrance that you can’t send the doc as an attachment? You bet. Does that protect the company? Indeed it can.
Thinking things will scale linearly
Here’s a case where the title is misleading. You don’t necessarily have a broken design; you have a broken cost model. Will you get a linear function when planning for user migrations? No way. The first few users are very expensive in most cost models. Once you hit a couple hundred you start to gain ground, and above that the cost models start to look great! But there is a hidden cost if you plan on scaling linearly: you run into questions of scale and hidden bottlenecks that the basic cost models never consider. What do you do?
Plan your design around a modular architecture that supports blocks and pods of users along with their personal files and desktop attributes. Find that sweet spot and leverage it. Rather than being surprised by what happens to the system when you hit 2,500 sessions, gain confidence the fifth time you add another block of 500 users.
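Here’s a toy Python sketch of why that matters. The dollar figures are invented for illustration, but the shape of the curve is the point: fixed costs dominate the first seats, and identical blocks keep the per-seat price predictable after that:

```python
# A toy illustration of why per-seat cost is not linear: fixed costs dominate
# early, then each identical 500-user block adds capacity at a known price.
# Every number here is a made-up placeholder; substitute your own.
BASE_INFRA = 150_000      # brokers, load balancers, core storage, design work
COST_PER_BLOCK = 90_000   # hosts + storage + licenses for one block
USERS_PER_BLOCK = 500

def cost_per_user(total_users: int) -> float:
    # Capacity comes in whole blocks, so round up.
    blocks = -(-total_users // USERS_PER_BLOCK)
    return (BASE_INFRA + blocks * COST_PER_BLOCK) / total_users

for n in (50, 250, 500, 1000, 2500, 5000):
    print(f"{n:>5} users: ${cost_per_user(n):,.0f} per seat")
```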
The hidden costs of VDI
While a simple TS farm may be a great deal cheaper than practically anything other than its Linux analog, I’d question how many enterprises use *exclusively* one simple TS server build for all their operations. If you say that TS coexists with the desktop environment, then you’re contradicting the earlier argument about not being able to get rid of legacy systems. Either you have only Terminal Servers and thin clients *or* you already have a mixture of fat desktops, thin clients, TS, and so on. That implies you either keep both desktop specialists and TS specialists on staff or you bring in services to get some of this work done. Either way, TS doesn’t obviate any of these issues, although it may be a bit cheaper once it’s running and the apps don’t change.
The facts alone should suffice here. Linked clones are available, and Citrix, VMware, NetApp, and IBM all have methods for thin provisioning. VMware claims 64 or more linked clones running off one master replica per LUN. That’s a pretty huge savings. Enhance that with the fact that NetApp arrays can keep that master image in cache, keeping the load off the spindles and storage processors.
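Some back-of-the-envelope math shows why. The image and delta sizes below are illustrative assumptions, not vendor numbers:

```python
# Back-of-the-envelope linked-clone math. All figures are illustrative
# assumptions: adjust to your own image size and clone counts.
GOLD_IMAGE_GB = 20        # size of the master replica
DELTA_PER_CLONE_GB = 2    # typical per-clone delta disk (assumption)
CLONES_PER_LUN = 64       # clones sharing one replica on a LUN

full_clones = GOLD_IMAGE_GB * CLONES_PER_LUN
linked = GOLD_IMAGE_GB + DELTA_PER_CLONE_GB * CLONES_PER_LUN

print(f"Full clones:   {full_clones:,} GB per LUN")
print(f"Linked clones: {linked:,} GB per LUN "
      f"({100 * (1 - linked / full_clones):.0f}% savings)")
```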
Additionally, the argument is made that some users may need different apps, so those users won’t benefit from VDI. Application virtualization is the solution, and it is part of any good application delivery architecture. Yes, you use the same disk. Yes, the users have different app stacks. Mix them and match them. App-V and ThinApp are the way to leverage this design.
Windows Licensing (VECD)
Microsoft is going to demand tribute until they have a competitive product, and then they’re going to give it away for a while. Should it become mainstream, they will demand a premium price once again. We saw this with Terminal Services when W2K CALs were “free.” When 2003 came out, you’d better have paid attention to the EA/SA and gotten them while they were free. I don’t think they’re free any more.
Hyper-V is at the beginning of that process. Come get your free product. Put it everywhere. Once it’s good enough there will be a cost associated with running it.
Customers with an EA may be able to absorb the minimal cost of VECD. Those who have to pay the higher fees may be able to negotiate their way through to minimize the issue.
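A quick sketch of that negotiating math. The per-device rates below are placeholders, not Microsoft’s price list; plug in whatever your agreement actually says:

```python
# Illustrative only: the per-device annual rates are placeholder assumptions,
# not published pricing. Substitute the figures from your own agreement.
DEVICES = 1000
RATE_WITH_SA = 25      # per device per year under an EA/SA (assumption)
RATE_WITHOUT_SA = 110  # per device per year otherwise (assumption)

print(f"With SA:    ${DEVICES * RATE_WITH_SA:,}/year")
print(f"Without SA: ${DEVICES * RATE_WITHOUT_SA:,}/year")
print(f"Delta:      ${DEVICES * (RATE_WITHOUT_SA - RATE_WITH_SA):,}/year "
      "of negotiating room")
```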
Complexity of the unknown
While it is true that VDI is a recent development, it is mature enough to be worth a look. A POC (Proof of Concept) project is a reasonable way to get a taste. Engaging with an expert can really speed things up: you can have a running POC up in a couple of days once the hardware, licenses, and other moving parts are nailed down. Quality consultants will give you hands-on time to build and configure the POC as it comes together. This is the tow-in approach.
Another way to kick the tires is a pilot project that actually produces a basic design for several user classes. Once those are nailed down and the services catalog for them is well known, a scaled-down farm of 10-50 users is constructed on demo gear or on excess existing capacity. These can take several weeks or more to put together, but once done they can be scaled and extended to meet the needs of the enterprise.
While planning for the unknown is a good idea, the need for it can be minimized by engaging with someone who has done this work before. This is the model for the future. No longer can a project team spend months figuring out a new technology and then another six months testing and tweaking to make sure everything is dialed in. The tow-in approach provides solid, proven designs, documentation, and experience.
Not thinking about incompatible apps
I’m going to have to lump this in with the complexity of the unknown. You can’t know what you don’t know ahead of time; I think that is at least one Dilbert comic strip, perhaps more. VMware publishes lists of applications that are known to work with VI3, and it’s just a matter of time before there is the same coverage for application virtualization products. For the time being it is a bit of a trial-and-error model, but some apps end up running better when virtualized. The upside is that you can deliver applications in ThinApp that wouldn’t work on the same machine without application virtualization.
Not knowing Windows XP well enough
This objection is unusual. Do I expect my admins to “know” their platforms? Sure I do. Has someone else already figured out what these tweaks need to be? Usually. I know it was big business to do the tweaking for Citrix Presentation Server years ago. Now there is a big enough community, with adequate print and web resources, to serve as decent reference material for a design and implementation. VDI is headed that way. I personally have been running XP on VMware since it was in beta.
The View architecture also supports quick responses to necessary changes and updates in the images. Unlike an enormous XenApp stack that takes careful balancing to fine-tune and craft, this can be a single desktop image that is updated quickly.
Vendor products that change too fast
Seriously? We just need to slow it down here? Change is just too radical, right? I’m sorry, but this is outside the realm of believability. The only constant is change, and the only safe bet is to adopt internal controls and QA methods to maintain reliability. Again, it helps to work with an implementer who spends a lot of time with a large number of clients and has complete information on how the current releases and future versions are going to stack up. A stable environment beats the latest and greatest unless the new features give you a big advantage and make financial sense.
Part of the environment now is the burgeoning management space. There are huge opportunities here for tools and systems that will make it easier to both upgrade and test these complex environments. Some of this will be provided by system vendors, some by OS and hypervisor vendors, some by storage vendors, and a great deal by the grass-roots scripting community. The third-party space continues to support great vendors like Vizioncore and PlateSpin, who exist at the pleasure of VMware’s enterprise customers.
Not knowing which vendor is going to win
The same could be said for the terminal server space. Will Microsoft leave enough scraps for Citrix to productize or will they just continue to pull 10-20% of the XenApp features into each major TS release? If I could predict the future I wouldn’t be writing this blog. I’d be on a nice beach or mountain somewhere instead of on WN 438 to San Jose working to keep the dream alive.
Seriously, even when the obvious vendor (Microsoft) delivers a less-than-adequate product (Vista), the market responds in the expected fashion. Even if VMware or Citrix has the obvious winner in two years, that doesn’t mean the game is over. Protect your future by doing adequate due diligence. Get advice from consultants who have worked on both platforms. Get roadmaps from vendors to see what features are coming and when. Make sure you have plans for a staged rollout, including a POC, a pilot, and a final phased rollout starting with the low-hanging fruit. Then execute on your model, reinvest the savings, and keep looking for the next great thing.
Now is the time to begin looking at VDI projects. There is demand in the form of security, operational agility, and headcount flexibility. The platforms, brokers, clients, and supporting tools are maturing. If you do nothing, you’ll end up with what you’ve already got. If you’re considering a VDI project, that must not be enough.