4 Ways to Avoid Server Overload Caused by Virtualization

Posted on September 18, 2011

As virtualization goes deeper into corporate operations, including resource-intensive and mission-critical tasks, IT managers are learning that a high virtual-to-physical server ratio is simply unfeasible. Some virtualization vendors have touted the possibility of running more than one hundred virtual machines on a single physical server, but past experience shows that such a high ratio is risky in a demanding environment and may cause performance issues or even outages. For normal usage, some companies may set up about fifty virtual machines on a powerful physical server; when it comes to resource-intensive and mission-critical tasks, however, it is more sensible to use fewer than ten virtual machines.

Overestimating the ideal virtual-to-physical ratio for a corporate task may prove costly, as the company will unexpectedly need to add more physical servers, cooling capacity, rack space and electricity. If a company plans to use only ten physical servers for a project and actually requires fifteen, this can have a huge impact on system performance and productivity, which is not a good thing if the company is on a tight budget. So why is there so frequently a disconnect between reality and virtualization expectations? During normal operations, virtualization is used to serve low-I/O, low-use applications such as print servers, logging, development and test environments. These non-critical applications don't need many resources, so you can stack dozens of them on a single physical machine. Initially, virtualization was used for tasks that had less than a 10 percent utilization rate and that no one would mind if they went down.


Unfortunately, when IT teams use virtualization for resource-intensive and mission-critical tasks, they are often slow to explain this harsh reality to corporate executives and users.

Here are four things IT managers should do to avoid server overload when using virtualization:

1. Perform capacity analysis:
IT teams should reassess their methods and dial back users' expectations, and a capacity analysis is a good place to start. Rigorous experiments should be performed before a project starts to better predict the real requirement for physical servers, and peak CPU and memory utilization should be estimated accurately.
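As a rough illustration, the sketch below sizes physical servers from measured per-VM peak demand. The host specs, headroom factor and VM profiles are invented placeholders; substitute figures from your own pre-project experiments.

# Rough capacity sizing from measured per-VM peak demand.
# All numbers below are made-up examples for illustration only.
import math

# Hypothetical host specs
HOST_CORES = 32          # physical cores per server
HOST_MEMORY_GB = 256     # RAM per server
HEADROOM = 0.7           # plan to use at most 70% of each host at peak

# Hypothetical measured peak demand per VM (from pre-project experiments)
vm_profiles = [
    {"name": "db-primary", "peak_cores": 8, "peak_mem_gb": 64},
    {"name": "app-node-1", "peak_cores": 4, "peak_mem_gb": 16},
    {"name": "app-node-2", "peak_cores": 4, "peak_mem_gb": 16},
]

total_cores = sum(vm["peak_cores"] for vm in vm_profiles)
total_mem = sum(vm["peak_mem_gb"] for vm in vm_profiles)

# Hosts needed if CPU is the bottleneck vs. if memory is the bottleneck;
# the real requirement is whichever constraint bites first.
hosts_for_cpu = math.ceil(total_cores / (HOST_CORES * HEADROOM))
hosts_for_mem = math.ceil(total_mem / (HOST_MEMORY_GB * HEADROOM))
hosts_needed = max(hosts_for_cpu, hosts_for_mem)

print(f"Peak demand: {total_cores} cores, {total_mem} GB RAM")
print(f"Physical servers needed (with {HEADROOM:.0%} headroom): {hosts_needed}")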

2. Continuous performance monitoring:
When preparing for intensive use of virtualization, it is important to consider things beyond CPU and memory utilization; I/O can also significantly affect performance. The IT team should monitor disk space usage, read-write performance and other factors that may affect performance. The monitoring should run long enough to be reliable, and conclusions shouldn't be skewed by occasional spikes from a few VMs.
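A minimal host-level monitoring loop along these lines could look like the following, assuming the cross-platform psutil library is installed; the sample interval and smoothing window are arbitrary illustrative choices.

# Minimal host monitoring loop using psutil (pip install psutil).
# Averages samples over a window so a brief spike from one VM
# doesn't dominate the picture.
import time
from collections import deque
import psutil

INTERVAL_S = 5     # seconds between samples
WINDOW = 12        # keep the last 12 samples (~1 minute at 5 s intervals)

cpu_hist, io_hist = deque(maxlen=WINDOW), deque(maxlen=WINDOW)
psutil.cpu_percent()                      # prime the CPU counter
last_io = psutil.disk_io_counters()

while True:
    time.sleep(INTERVAL_S)

    cpu_hist.append(psutil.cpu_percent())          # CPU since last call
    mem = psutil.virtual_memory().percent          # RAM in use
    disk = psutil.disk_usage("/").percent          # disk space used

    io = psutil.disk_io_counters()                 # read/write throughput
    mb_per_s = ((io.read_bytes + io.write_bytes)
                - (last_io.read_bytes + last_io.write_bytes)) / INTERVAL_S / 1e6
    io_hist.append(mb_per_s)
    last_io = io

    # Report smoothed values rather than the latest spike.
    avg_cpu = sum(cpu_hist) / len(cpu_hist)
    avg_io = sum(io_hist) / len(io_hist)
    print(f"avg CPU {avg_cpu:.1f}%  mem {mem:.1f}%  "
          f"disk used {disk:.1f}%  disk I/O {avg_io:.1f} MB/s")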

3. Stability test:
Rigorous testing should be performed before the actual deployment; it is important to be sure that the server is stable enough in terms of network and memory bandwidth during a critical project. Tests help determine which tasks can be virtualized well. It is also important to ensure that VMs are not duplicated across servers, which wastes resources.
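As one small example of such a pre-deployment check, the sketch below scans a host-to-VM inventory for VMs that appear on more than one physical server. The inventory dict here is a hypothetical placeholder; in practice it would come from your hypervisor's management tooling or an exported report.

# Quick check that no VM name appears on more than one physical server.
from collections import defaultdict

# Placeholder inventory: host name -> list of VM names running on it.
inventory = {
    "host-01": ["web-01", "web-02", "db-01"],
    "host-02": ["web-03", "db-01"],      # db-01 duplicated: wasted resources
    "host-03": ["report-01"],
}

locations = defaultdict(list)
for host, vms in inventory.items():
    for vm in vms:
        locations[vm].append(host)

duplicates = {vm: hosts for vm, hosts in locations.items() if len(hosts) > 1}
if duplicates:
    for vm, hosts in duplicates.items():
        print(f"WARNING: {vm} is running on multiple hosts: {', '.join(hosts)}")
else:
    print("No duplicated VMs found.")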

4. Learn from others' experience:
IT teams should attend major annual virtualization conferences such as Synergy or VMworld to learn about the latest developments and meet people who have faced similar situations. Attendees are often glad to share their experiences.

About: This article was contributed by Raja. He is a web hosting industry watcher and writes regularly on Dedicated Hosting Reviews and Reseller Hosting Reviews.
