Sometimes, a single capability becomes a major technology differentiator. Although the emergence of cloud storage and computing relies on mobility and networking, the real enabling capability is virtualization. There are two threads to this story.
First comes the evolution of data centers. Once upon a time, enterprises actually bought lots of hardware servers and put them in special closets to serve their computing needs. Then some clever CFOs realized they could save money by leasing these computers instead of buying them. Not long after, the special closets went too, when whole buildings owned by third parties sprang up that would lease caged-off space to house your leased servers. At least at this stage, an IT manager could go to the building, point into a cage, and say “those are OUR servers”.
It didn’t take long before some clever soul realized that a whole lot of leased servers were sitting in a whole lot of cages, not really doing very much most of the time. Wouldn’t it be great, they said, if we just owned all the hardware and leased companies time on whatever server was least busy at the moment? After all, nobody ever actually comes to point at their servers, and nobody much cares as long as they get answers when they need them.
The second thread came from the techno-nerds. Wouldn’t it be totally awesome if we could make one type of hardware pretend to be another type of hardware, so it could run all the programs written for both machines? Apple was an early adopter, putting an “emulator” on its Macs that behaved just like the Microsoft operating system running on PC hardware, so that Apple users could run Microsoft Office (Microsoft released some proprietary changes shortly thereafter; not to be outdone, enterprising coders soon put Mac emulators on their boring office PCs). It worked, but not all that well…yet the concept of virtualization had seen the light of day.
These days, the “machines” that run software in big data centers are almost always virtual. A single hardware server can run several different virtual machines at the same time, and the supporting infrastructure has improved dramatically, so that a new virtual machine can be created on demand. And it turned out to fit perfectly with all those under-utilized servers sitting in their cages…The Cloud was born.
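To see why packing virtual machines onto shared hardware pays off, here is a toy sketch (in Python, with made-up numbers and names) of the consolidation idea: a first-fit packer that places lightly loaded VMs onto as few physical hosts as possible.

```python
# Toy model of server consolidation: pack virtual machines onto as few
# physical hosts as possible, first-fit. All names and numbers are
# illustrative, not any real provider's scheduler.

def pack_vms(vm_cpu_demands, host_capacity):
    """Assign each VM (by CPU demand) to the first host with room,
    powering on a new host only when none fits.
    Returns the committed CPU load on each host."""
    hosts = []  # CPU already committed on each powered-on host
    for demand in vm_cpu_demands:
        for i, load in enumerate(hosts):
            if load + demand <= host_capacity:
                hosts[i] += demand
                break
        else:
            hosts.append(demand)  # no existing host fits: open a new one
    return hosts

# Ten lightly loaded VMs that once each occupied a dedicated server...
demands = [2, 1, 3, 2, 1, 2, 4, 1, 2, 2]   # CPUs each VM actually needs
loads = pack_vms(demands, host_capacity=8)  # 8-CPU physical hosts
print(f"{len(demands)} VMs fit on {len(loads)} hosts: {loads}")
# → 10 VMs fit on 3 hosts: [8, 8, 4]
```

Ten servers’ worth of work lands on three machines: the same consolidation logic, at vastly larger scale, is what made those half-idle cages profitable.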
Providers such as Google and Amazon Web Services (AWS) can lease you a server, available immediately, at prices of cents per CPU-hour…a virtualized machine that can be created as you need it and disappear when you are done. Of course the data centers do buy racks and racks of actual physical machines, which now consume a significant fraction of the world’s electricity to run and cool. Companies have not abandoned their own data centers either…the trend now is the “hybrid cloud”: on-demand public cloud resources for surge capacity, reliability, and so forth, combined with company-owned “on-premises” virtual servers for protection of data assets, backup, and similar reasons.
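To put “cents per CPU-hour” in perspective, a quick back-of-the-envelope calculation (the rate here is invented for illustration, not any provider’s actual price):

```python
# Back-of-the-envelope cloud cost. The default rate is illustrative only.

def vm_cost(cpus, hours, rate_per_cpu_hour=0.05):
    """Dollar cost of running a VM with `cpus` vCPUs for `hours` hours."""
    return cpus * hours * rate_per_cpu_hour

# A 4-vCPU VM spun up for a one-day surge, then destroyed:
print(f"${vm_cost(cpus=4, hours=24):.2f}")  # 4 * 24 * $0.05 → $4.80
```

A few dollars for a day of surge capacity, versus thousands of dollars and weeks of lead time to buy and rack a physical server: that arithmetic is the economic engine behind the hybrid cloud.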
The transition to the cloud has turned the IT world on its head. No longer is it possible to draw a boundary around your system when much of your data is “out there”. And since the data can now be reached from anywhere with an internet connection, why can’t people use their own devices for work? The cloud has also introduced a whole new set of cybersecurity questions: Where does your data live? Who has access to it? How do you provide security controls to protect it? What new kinds of attack are available? Check out the next Daveknology CSAM Back to Basics post for a discussion of these issues.