A History of Virtualization
The concept of virtualization is generally traced back to the mainframe days of the late 1960s and early 1970s, when IBM invested heavily in developing robust time-sharing solutions. Time-sharing refers to the shared usage of computer resources among a large group of users, with the aim of increasing the efficiency of both the users and the expensive computer resources they share. This model represented a major breakthrough in computer technology: the cost of providing computing capability dropped considerably, and it became possible for organizations, and even individuals, to use a computer without actually owning one.

Similar reasons drive virtualization for industry-standard computing today: the capacity of a single server is so large that most workloads cannot use it effectively. The best way to improve resource utilization, and at the same time simplify data center management, is through virtualization. Data centers today use virtualization techniques to abstract the physical hardware, create large aggregated pools of logical resources consisting of CPUs, memory, disks, file storage, applications, and networking, and offer those resources to users or customers in the form of agile, scalable, consolidated virtual machines. Even though the technology and use cases have evolved, the core meaning of virtualization remains the same: enabling a computing environment to run multiple independent systems at the same time.
What is Data Center Virtualization?
Data center virtualization is the process of designing, developing, and deploying a data center on virtualization and cloud computing technologies. It primarily means virtualizing the physical entities, i.e., servers, storage, networking, and other infrastructure devices and equipment, typically producing a virtualized, cloud, or colocated virtual/cloud data center. Data center virtualization implies the creation of multiple logical instances of hardware or software on a single physical hardware resource. This technique allows maximum utilization of hardware resources by each application instance running in the data center through dynamic resource allocation. Additionally, virtualization allows businesses to reduce the maintenance and energy costs associated with multiple physical servers. Data centers in developing countries such as India and China would do well to embrace virtualization now and get a head start by incorporating it into their plans.
The Benefits of Data Center Virtualization:
Reduced Hardware Vendor Lock-in
Server virtualization removes the dependency on any particular server model or vendor by abstracting away the underlying hardware and replacing it with virtual hardware. Virtual machines created this way don't require any specific hardware server assembly to obtain resources. Instead, they draw RAM, storage, and CPU resources from virtual hosts that are dynamically connected with each other. If one host fails, the machines request the resources from another host in the same virtual cluster. Data center administrators and owners thereby gain more flexibility and leverage when negotiating the price and build of server equipment with hardware vendors.
Reduction in Operating Expenses
Hardware is usually the costliest asset of a data center, so reducing the amount of hardware translates directly into reduced cost. The perks of virtualization go further still: virtual infrastructure is easier to maintain, consumes less electricity, and suffers fewer occasions of downtime. Because virtualization enables multiple OS deployments to be hosted on a single physical server, the amount of physical hardware decreases, and with it you achieve far lower costs for power consumption, cooling, real estate, and maintenance.
Improved Disaster Recoveries
Disaster recovery in a virtualized data center is fairly straightforward, thanks to updated snapshots of your virtual machines. Should disaster strike the data center itself, administrators can always migrate the resident virtual machines onto another virtual server instance or host. Virtualization offers three important benefits when planning a disaster recovery solution. Firstly, it provides hardware abstraction, so the disaster recovery site doesn't require identical hardware to match the production environment. Secondly, through server consolidation, fewer physical machines remain in the production environment, and organizations can easily create an affordable replication site. Lastly, virtualized server instances come with software that automates failover in case of disaster, thereby minimizing human intervention in disaster recovery.
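The automated-failover idea in that last point can be sketched as a simple health-check-and-relocate policy. The host names and the helper functions below are illustrative placeholders, not a real hypervisor API; a production setup would use the failover tooling of the actual virtualization platform.

```python
# Minimal failover sketch: if the primary host stops responding,
# place its VMs on a standby replica site instead. All names here
# are hypothetical examples, not a real hypervisor API.

def is_healthy(host, responding):
    # Stand-in for a real heartbeat/ping check against the host.
    return responding.get(host, False)

def fail_over(vms, primary, standby, responding):
    """Return a mapping of VM -> host after applying the failover policy."""
    target = primary if is_healthy(primary, responding) else standby
    return {vm: target for vm in vms}

# The primary site is down, so every VM lands on the standby site.
placement = fail_over(["web-01", "db-01"], "site-a", "site-b",
                      responding={"site-a": False, "site-b": True})
print(placement)  # {'web-01': 'site-b', 'db-01': 'site-b'}
```

Because the disaster recovery site doesn't need identical hardware, the "standby" here can be any host with enough pooled capacity, which is exactly the hardware-abstraction benefit described above.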
Backup at all times
You can keep a complete backup of your virtual servers alongside regular backups and snapshots of the individual virtual machines. Snapshots can be taken throughout the day, keeping the backed-up data current. Virtual machines can also be relocated from one server to another, and redeployed quickly and easily. Restoring a snapshot is faster than rebuilding a server, so in case of downtime there is hardly any difference for users to spot.
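The snapshot-and-restore workflow can be illustrated with a toy model. This is only a sketch of the concept: a real hypervisor captures disk and memory state, whereas the class below just copies a dictionary to show how a mid-day snapshot protects against later corruption.

```python
class VirtualMachine:
    """Toy model of snapshot-based backup: each snapshot saves a
    copy of the VM's state that can be restored later."""

    def __init__(self, name):
        self.name = name
        self.state = {}
        self.snapshots = []  # list of (label, saved copy of state)

    def snapshot(self, label):
        # Save a point-in-time copy of the current state.
        self.snapshots.append((label, dict(self.state)))

    def revert_latest(self):
        # Restoring the newest snapshot replaces rebuilding the VM.
        label, saved = self.snapshots[-1]
        self.state = dict(saved)
        return label

vm = VirtualMachine("app-01")
vm.state["orders"] = 10
vm.snapshot("09:00")
vm.state["orders"] = 25        # more work happens during the day...
vm.snapshot("12:00")
vm.state["orders"] = -999      # ...then something corrupts the data
ts = vm.revert_latest()
print(ts, vm.state)            # back to the 12:00 state
```

Taking snapshots throughout the day, as the text suggests, simply means calling `snapshot` on a schedule, so the most recent restore point is never far behind the live data.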
Smooth Migrations to Cloud
Data center virtualization enables a smooth transition to a cloud environment when needed, thanks to its ability to abstract away the underlying hardware. The first step would be to migrate a simple virtualized data center onto a private cloud. Subsequently, as the public cloud and the technology around it mature, you could proceed with migrating data onto a cloud hosting facility. The future of the IT industry is the cloud, and incorporating virtualization into your data center gives your organization a head start once migration to the cloud becomes a necessity. You can easily deploy virtual machines to and from your data center and enjoy a full-blown cloud-based infrastructure, which makes the eventual migration to the cloud a cakewalk.
Reduced Data Center Footprint
The ability of virtualization to abstract the underlying hardware helps an organization minimize its data center footprint. It results in fewer servers, less networking hardware, and a smaller number of rack assemblies, all of which boils down to less floor space required for the data center. For smaller business units that cannot invest large sums in procuring dedicated physical hardware for a data center, virtualization is the surest way to minimize costs while maximizing the efficiency of the data center.
Faster Provisioning
Server virtualization enables administrators to provision and deploy systems at short notice. This is of utmost value for business units with a stringent requirement for near instant-on functionality in a production environment. Virtualization allows system administrators to get a server instance up and running within a few minutes by cloning a virtual machine from a golden image, master template, or an existing virtual machine. By contrast, physical server instances require filling out purchase orders, waiting for shipping and receiving, and assembling the components, only to spend a few more hours installing the operating system and server-specific applications. Virtual machine snapshots can be enabled in a matter of a few clicks, usually so quickly that end users can hardly spot any difference in their experience. This also makes for a strong disaster recovery story: once a physical server dies, it is very hard to redeploy, whereas a virtual server can be redeployed in minutes.
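Cloning from a golden image, as described above, amounts to stamping out copies of a fixed template with a unique identity. The template fields and naming scheme below are made-up examples to show the shape of the operation, not any particular platform's image format.

```python
import copy
import itertools

# A hypothetical "golden image": the template every clone starts from.
GOLDEN_IMAGE = {
    "os": "ubuntu-22.04",
    "packages": ["nginx", "openssh"],
    "cpus": 2,
    "ram_gb": 4,
}

_serial = itertools.count(1)

def provision(name_prefix="web"):
    """Clone the template and stamp a unique name -- minutes of work
    for a hypervisor, versus weeks of procurement for physical gear."""
    vm = copy.deepcopy(GOLDEN_IMAGE)
    vm["name"] = f"{name_prefix}-{next(_serial):02d}"
    return vm

# Spinning up a small fleet is just repeated cloning.
fleet = [provision() for _ in range(3)]
print([vm["name"] for vm in fleet])  # ['web-01', 'web-02', 'web-03']
```

The deep copy matters: each clone gets its own independent configuration, so customizing one VM after provisioning never mutates the golden image itself.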
Better testing
What better testing environment is there than a virtual one? If you make a tragic mistake, all is not lost: just revert to a previous snapshot and move forward as if the mistake never happened. The data is still there, and you can carry on without any hassle. You can also isolate these testing environments from end users while still keeping them online, so users remain unaware that any testing is going on. When you've perfected your work, deploy it as live.
Performance
In a virtual environment, resources are allocated through a resource pool. A resource pool is a collection of hardware resources (CPU, memory, network interfaces, storage, SAN space), and those resources can be allocated to any virtual machine, so a VM is not limited to the resources of one specific piece of hardware. Resources can be allocated dynamically from the pool by setting resource limits, so that hardware resources are assigned to the VM automatically as required and released during idle time. This is called vertical scaling: scaling by adding more power (CPU, RAM) to an existing machine. Vertical scaling is limited by the capacity of a single machine.

Alternatively, based on the application load and VM resource utilization, a new VM can be launched automatically with the exact same configuration and joined to the application cluster, so that performance bottlenecks are avoided. For example, suppose an application runs with 8 GB of RAM and 4 CPUs. If at any point CPU or RAM utilization reaches 75% (the threshold limit can be customized), another VM is automatically created from the existing VM image, with the same hardware and software configuration, and joins the existing application cluster through the load balancer. This is called horizontal scaling: scaling by adding more machines to your pool of resources.
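The horizontal-scaling rule in the example above boils down to a simple threshold check. The function below is a minimal sketch of that decision logic, assuming a customizable 75% threshold and a hypothetical cap on cluster size; real autoscalers add cooldown periods and scale-down rules as well.

```python
def scale_decision(cpu_pct, ram_pct, current_vms, threshold=75, max_vms=10):
    """Horizontal-scaling rule from the example: when CPU or RAM
    utilization reaches the threshold, clone one more VM from the
    existing image and add it to the cluster behind the load balancer.
    Returns the new desired VM count."""
    if max(cpu_pct, ram_pct) >= threshold and current_vms < max_vms:
        return current_vms + 1
    return current_vms

# The 8 GB / 4-CPU VM from the example: CPU hits 80%, so one more
# identically configured VM joins the cluster.
print(scale_decision(cpu_pct=80, ram_pct=60, current_vms=1))  # 2

# Utilization well under the threshold: the cluster stays as it is.
print(scale_decision(cpu_pct=40, ram_pct=50, current_vms=2))  # 2
```

Vertical scaling, by contrast, would change the `cpus`/`ram_gb` of the existing VM within the pool's limits rather than increasing the VM count, which is why it runs out of headroom at the capacity of a single machine.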