Think about this for just a second: conventional storage was created twenty years before virtualization. As that sinks in, let me ask you a question. Why has server compute made so many advances in lower cost, lower complexity, and higher performance, while storage has remained relatively unchanged? Even with SSD, the larger players in the storage market are only bolting it on as a cache tier, and those shipping all-flash arrays don't really address the real problems storage poses for virtualized environments. Raw performance by itself doesn't mean you have resolved those problems. "Am I alone here?" … "Can I get an Amen?" Just because a large traditional storage manufacturer buys a company specializing in conventional flash storage, do you think they have suddenly fixed how that product interacts with the virtual environment? The answer is no. Now, if they rewrote their code from the ground up to take specific advantage of that flash storage for virtual environments, then you might be onto something. This, my friends, is exactly what Tintri has done.
Although Tintri was purpose-built for virtualized environments, period, this post focuses specifically on VDI. I am fully certified on both VMware View and Citrix products, and my livelihood for the past few years has centered on VDI: performing assessments, plan-and-design work, and implementations. I have integrated great third-party products such as Trend Micro Deep Security, UniDesk, and Imprivata, and with those comes an increase in complexity from an architectural standpoint, and more specifically a storage standpoint. Let's look at how one traditional storage provider carves up storage to meet a specific 500-seat VDI demand. This comes straight from their best practices document and is available for anyone to see. On the left is how EMC will carve your storage into several RAID groups and then into LUNs, tiering storage with bolt-on SSD cache (which is expensive, BTW). Other storage vendors' solutions aren't much better. There are several things wrong with this… let me elaborate on a couple of points using the 500-seat comparison.
- What happens when you need to grow beyond 500 seats? (What happens to what you just architected? Back to the well for more spindles? More SSD? Do you have the budget for that?)
- What happens when you have more than one golden image or use case? (Hint: there is only room for one image in that small 100 GB space reserved for the golden image. In VMware View, a recompose writes the new replica before the original is deleted, so multiple images will run you out of space. With XenDesktop the layout doesn't even make sense.)
- When using a third-party product like UniDesk, the CachePoints become extremely important; they need enough I/O behind them to drive performance. In this design there is not enough room in SSD for the CachePoints in the majority of cases.
- Did the original design build in enough I/O to accommodate the virtual infrastructure for Citrix XenDesktop or VMware View and all of the VMs? How about the infrastructure needed for Trend Micro Deep Security? How about throughput and latency targets?
- How do you know for certain how many more VMs you can fit on your current storage before performance is impacted or you are simply out of room?
- With traditional block storage, are you getting any deduplication or compression advantages? (I can answer this one: no.)
- How about the maximum number of VMs per LUN when using block storage? Have you considered that?
I could go on and on, but here is one more really good question: "What if your storage was aware that VMs were running on it?" (See: VM-Aware)
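To put the I/O question above in perspective, here is a rough back-of-envelope sizing sketch. Every figure in it is an illustrative assumption (per-desktop IOPS, read/write mix, per-spindle IOPS), not a vendor number; a real assessment should use measured workload data. The point it demonstrates is how the RAID write penalty on parity-based RAID groups inflates the back-end spindle count for write-heavy VDI workloads.

```python
# Back-of-envelope VDI storage sizing. All inputs are illustrative
# assumptions -- a real design should be based on assessment data.

def backend_iops(seats, iops_per_desktop, write_ratio, raid_write_penalty):
    """Translate front-end desktop IOPS into back-end disk IOPS.

    On parity RAID, each write costs multiple back-end I/Os
    (write penalty of ~4 for RAID 5, ~6 for RAID 6, ~2 for RAID 10).
    """
    front_end = seats * iops_per_desktop
    reads = front_end * (1 - write_ratio)
    writes = front_end * write_ratio
    return reads + writes * raid_write_penalty

# 500 seats, ~10 steady-state IOPS each, ~80% writes (steady-state VDI
# is write-heavy), on RAID 5 (write penalty of 4) -- all assumptions.
needed = backend_iops(500, 10, 0.8, 4)
spindles = needed / 180  # ~180 IOPS per 15K SAS drive (assumption)
print(f"Back-end IOPS: {needed:.0f}, 15K spindles needed: {spindles:.1f}")
```

Note that this is steady state only; a boot, login, or AV storm can multiply the per-desktop figure several times over, which is why a design that only just meets steady-state I/O falls over on Monday morning.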
With the Tintri VMstore there are no RAID groups to worry about and no LUNs to carve up. Using NFS, you can see from the picture on the right how Tintri answers that best-practice design for VDI. Some people promise simple; Tintri really delivers it. There are no cost, complexity, or storage performance barriers for VDI anymore, which has allowed Tintri customers to realize real ROI when implementing virtual desktops, bringing VDI storage costs from ~60% of the project down to ~15-20%. Its hyper-density allows up to 1,000 VMs to be deployed on a single Tintri storage appliance (see the product specs). [In a server environment you can expect 250-300 server VMs on a single Tintri datastore.]
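The LUN-count difference is easy to illustrate with a little arithmetic. The per-datastore cap below is an assumed conservative rule of thumb for block storage (bounding SCSI reservation and queue-depth contention), not a hard platform limit:

```python
import math

def datastores_needed(total_vms, max_vms_per_datastore):
    """How many datastores (LUNs) you must carve, zone, and monitor."""
    return math.ceil(total_vms / max_vms_per_datastore)

# Traditional block storage: VMFS datastores are often capped at a
# conservative ~64 VMs each (an illustrative rule of thumb).
print(datastores_needed(500, 64))    # -> 8 LUNs to design and manage
# One large NFS datastore per appliance: a single namespace for all VMs.
print(datastores_needed(500, 1000))  # -> 1
```

Every one of those eight LUNs is a sizing decision, a queue, and a capacity silo; one datastore is none of those things.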
Tintri also gives you instant bottleneck visualization, interchangeable datastores, intuitive fuel gauges showing available capacity and performance headroom, VM trend-over-time statistics, VM auto-alignment, per-VM snapshots, and more. It wraps QoS around each VM to ensure performance, and it virtually eliminates the usual worries about the boot storms, AV storms, and login storms that come with VDI environments. So my point is this: if you can decrease CapEx and OpEx, reduce the complexity of storage, and increase storage performance (which VDI spotlights), then what are you waiting for? Give your VDI implementation over to a Tintri VMstore and rest easy that you made a great decision. Some of the best products are the ones you don't have to manage and that just flat out work (see Data Domain). Isn't it time you stopped the LUNacy?
Interesting VDI Video:
“Stay thirsty my friends.”
~ The most interesting man in the world.