As a sometime lurker on this list (too many lists, too little time) I caught some of the conversation about virtualization. We have a small IT unit (5 staff including myself) and a very small VMware server environment (9 Windows servers, 1 Linux) which we've been running since 2008.

We're about to embark on desktop virtualization for our public computing areas (projected to be 54 workstations, currently 48 physical workstations). In some ways I'm going out on a limb here, as we won't know how well it will work until we do it. However, I hear nothing but good things about it from our sister library, which beat us to the punch and has been using Citrix for well over a year with more workstations than we'll have.

1) Server virtualization has been like a dream come true for us. First off, performance has been great. Downtime lessened from the day we went live, and since then we've gotten even better at using virtualization's advantages to reduce downtime during individual server and software upgrades. Development is so much easier, since we can clone production environments as well as quickly and easily build test environments. My personal belief is that thanks to our backups of virtual disk images, we are far better off in terms of disaster preparedness than we ever were with just traditional backups. Is it a bit more expensive than traditional servers in our situation? Probably. Is it worth whatever extra cost? For me, no question.

2) Desktop virtualization/thin client seems ideal for a public computing environment, where the big need is simply web browsing and office applications. We don't think it will be cheap to do, but it doesn't come off too badly: even with the cost of the storage and server back end, it will cost well under the budget of a 3- or 4-year replacement cycle of traditional workstations. (I know, lots of us can't afford to do that. But...) The per-unit licensing for the OS and virtual workstations is tricky, but for us we're looking at a 3-year license of VMware View, with no licensing cost beyond that for the Windows 7 OS. Thin or zero client devices have a much longer projected life than workstations. Lastly, energy savings can be significant in large environments, but even in smaller ones I would argue that buying fewer pieces of hardware - less plastic, and smaller/fewer motherboards - is significantly better for the environment.

On our current physical workstations, now out of warranty for a year, we use a lockdown solution like most libraries. We have workarounds and scheduled ways to patch the machines, and we work with campus IT and their LANDesk patching solution for after-hours scheduled patch windows. However, no matter what we do, we still seem to end up touching each individual machine far more often than we'd like, with less than ideal results, amounting to what I think is a less than ideal customer experience.

In virtualization we will have one image that we patch and correct as often as we want to, and it will always be exactly the same for our users no matter where they sit. Using either alternate VM profiles or ThinApp (we haven't decided), we can also license fewer copies of software (say, EndNote) and deliver just as much software as we need, right to wherever the customer chooses to sit.

Single point of failure? Sure - but for our library customers, if the Internet or the network is down, there's little to no computing they would want to do anyway.

So that's where we are. A year from now, perhaps I'll be singing a different tune, but we thought about this and tested quite a lot before taking the plunge.

Jeff Kuntzman
Head, Library IT
[log in to unmask]
University of Colorado, AMC Health Sciences Library