
Rethinking the PC: Why virtual machines should replace operating systems

Disclosure: Most of the vendors mentioned are clients of the author.

Technology generally develops linearly until something comes along that should change its progression. Take PC operating systems, which arrived in the 1980s. One of the big problems they brought with them was the need to keep the OS and applications from breaking every time Intel made a change to its chipset or firmware. The fix, eventually, was to create virtual machines – a virtual hardware layer that would remain constant, regardless of what happened to the underlying hardware.

Many of the problems we’ve had with deployments over the last couple of decades have revolved around the need for IT to keep the PC image static while the hardware changed. If we instead preloaded a virtual machine from either VMware or Microsoft – and then placed the image on that – we could ensure a level of compatibility you generally don’t get today.

Let’s explore rethinking PCs, virtual machines, and operating systems this week.

Rethinking PCs

When PCs were first created, the folks who built the OS and the folks who built the hardware were the same. Apple built both, and IBM licensed its operating system from Microsoft so it could effectively do the same thing. But on the Windows side, the operating system quickly became decoupled from the hardware. That allowed for a far more competitive market, but also one that was unusually plagued by incompatibilities and breakage, because the two halves of the PC weren’t developed together.

For a time in the early part of this century – when Intel and Microsoft weren’t even talking to each other very well – we got disasters like Windows Vista and Windows 8, platforms that even Microsoft would like to forget. Things eventually evened out, and most of those problems are history. But in some ways the problem has worsened, because AMD has risen to become a power and Qualcomm is now providing PC solutions. This hardware variety is forcing Intel to speed up its own development efforts – raising the possibility that maintaining OS reliability will get harder.

One way Microsoft is addressing this is with its Surface line: the company has started specifying processors for the Surface Pro X and the upcoming Surface Neo twin-screen laptop – from Qualcomm and Intel, respectively. Custom processors are an interesting idea, but were Dell, HP, and Lenovo to go down this path, the resulting hardware complexity – and the chance of OS breakage – would increase dramatically.

In this new world, the OS side of the solution – from Apple, Google, and Microsoft – needs to be free to advance as fast as those firms can move, and the hardware platforms from AMD, Intel, and Qualcomm need to be able to do the same, without any resulting breakage.

Enter the Virtual Machine

A virtual machine generally runs on top of a hypervisor, which can host multiple VM instances – each isolated from the others. The technology grew up on servers, where multiple users share the same hardware.
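
To make that isolation concrete, here’s a minimal sketch in Python – assuming a Linux host running KVM with the libvirt-python bindings installed, which is just one hypervisor stack among several – that connects to the local hypervisor and lists the VM instances it’s hosting:

    import libvirt  # pip install libvirt-python; assumes a local libvirt/KVM host

    # "qemu:///system" is the conventional URI for the local system hypervisor.
    conn = libvirt.open("qemu:///system")

    # Each domain is an isolated VM instance sharing the same physical hardware.
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        running = state == libvirt.VIR_DOMAIN_RUNNING
        print(f"{dom.name()}: {'running' if running else 'stopped'}")

    conn.close()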

On a PC, you could have distinct VM instances for work, school, and personal use, each with a differing level of user freedom. The company VM would be locked down so the firm is better protected from the other usage models; viruses often come into companies carried by employees who aren’t careful with their personal use of the firm’s PC. Today, you mostly see this kind of separation with developers, who need to keep their dev projects apart from their enterprise image.

Even with a three-image installation (work, school, personal), you’d be able to divide the load across all three support organizations: work IT handles the work image, school IT handles the school image, and the OEM helps with the personal image (which it could charge for). You’d get a higher level of security, because the two or three usage models would be isolated from each other – and you’d free the OS vendor and the platform vendor to advance their platforms faster, because they could target a set virtual machine configuration.
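
As a hedged sketch of how those per-persona images might be defined – the persona names, resource sizes, and minimal domain XML below are illustrative assumptions, not any vendor’s actual provisioning scheme – libvirt could register one isolated VM per usage model:

    import libvirt

    # Illustrative personas; the sizes are placeholders, not recommendations.
    PERSONAS = {
        "work":     {"memory_mib": 8192, "vcpus": 4},
        "school":   {"memory_mib": 4096, "vcpus": 2},
        "personal": {"memory_mib": 4096, "vcpus": 2},
    }

    # Minimal libvirt domain XML; a real image would also define disks, NICs, etc.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>{name}</name>
      <memory unit='MiB'>{memory}</memory>
      <vcpu>{vcpus}</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
    </domain>
    """

    conn = libvirt.open("qemu:///system")
    for name, spec in PERSONAS.items():
        # defineXML registers the VM without starting it; each persona gets
        # its own virtual hardware definition, isolated from the others.
        conn.defineXML(DOMAIN_XML.format(
            name=name, memory=spec["memory_mib"], vcpus=spec["vcpus"]))
    conn.close()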

The VM company, be it VMware or Microsoft, could then work with the hardware vendor to optimize for both flexibility and performance, and PCs would evolve to become better multi-host clients. Other options could include creating a VM for your kids on the family PC that could be automatically purged and rebuilt regularly, or OSs tuned for things like esports – you might be able to run a game natively in one VM while suspending the other VMs when you compete. And, of course, IT would get a virtual hardware image that would remain stable across hardware vendors and hardware versions.
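
The purge-and-rebuild idea in particular maps directly onto something hypervisors already do well: reverting to a known-good snapshot. A sketch, again using libvirt, with an illustrative VM named "kids" and a snapshot named "clean-install" (both names are assumptions):

    import libvirt

    # Run on a schedule (e.g. nightly from cron) to reset the kids' VM.
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("kids")

    # Reverting discards everything done in the VM since the snapshot was taken.
    snap = dom.snapshotLookupByName("clean-install")
    dom.revertToSnapshot(snap)

    conn.close()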

Wrapping up

I think it is time to begin rethinking the relationship between operating systems, hypervisors, and virtual machines to better secure our PCs. (Rootkits would generally become a thing of the past, thanks to the VM.) The result could be more flexible, more reliable, more secure, and better able to deal with our changing future than the platforms we build today.

I think the world is ready for a change; now it’s time for an OEM that’s willing to take the risk and try something new.

Original article: https://www.computerworld.com/article/3518849/rethinking-the-pc-why-virtual-machines-should-replace-operating-systems.html#tk.rss_all
