Hypervisors are, at their core, an attempt to abstract away hardware by recreating its behavior in software. Because they exist only in software, virtual machines are infinitely more agile than their physical counterparts. They allow you to, for instance, take a snapshot of a system before running destructive tests, migrate a running system from one physical location to another, or more intelligently spread a pool of physical CPU and memory resources across many virtual machines.
So, how do you recreate hardware in software? Essentially, imagine an emulator that reads instructions one by one from the virtual machine and exactly mimics what the processor and I/O devices would do in a physical system. You've probably used an emulator before, maybe to run legacy programs on an incompatible processor architecture (like when Apple computers moved from PowerPC to Intel CPUs but some programs still only ran on PowerPC) or to play old Game Boy games on your laptop.

This approach is effective and gives unmatched flexibility: you can pause the emulator at arbitrarily precise moments, inspect minute bits of the processor's inner state, and even run programs written for a totally different instruction set than your physical processor understands. However, that flexibility comes at a cost: performance is awful, especially for CPU-heavy workloads. Because you're reading instructions into memory and then interpreting them in software, you end up executing many instructions on your physical hardware for every single instruction that runs in the emulated machine. When the machine you're emulating is inherently much slower than the physical machine running the emulator, like a 1998 Game Boy Color emulated on a 2015 laptop, that's fine, because the modern hardware easily absorbs the extra computation. But to run a complex modern OS with performance-critical code, you need to do better.
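To make the interpretive approach concrete, here is a minimal sketch of the fetch-decode-execute loop at the heart of a software emulator. The instruction set here is a made-up toy (a single accumulator and four opcodes), not any real CPU's; real emulators model registers, flags, memory maps, and I/O devices, but the overall shape, and the per-instruction overhead, is the same.

```python
def run(program):
    """Interpret a toy program: a list of (opcode, operand) pairs.

    Each guest instruction costs many host instructions to interpret,
    which is exactly the performance problem described above.
    """
    acc = 0   # a single accumulator register
    pc = 0    # program counter
    while pc < len(program):
        op, arg = program[pc]      # fetch the next guest instruction
        pc += 1
        if op == "LOAD":           # decode and execute in software
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return acc

# Compute (2 + 3) * 10 on the emulated machine.
result = run([("LOAD", 2), ("ADD", 3), ("MUL", 10), ("HALT", None)])
# result == 50
```

Notice that a single guest `ADD` triggers a tuple unpack, several comparisons, and a branch on the host, which is why interpretation is so much slower than running the same arithmetic natively, and why the flexibility (you could pause this loop between any two instructions and inspect `acc` and `pc`) comes so cheaply here.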