The details, though, aren’t as crisp. What you get and when you get it has evolved over the years, shifting as the market begins to understand what people want and what they really need. In the beginning, you got a machine and a root password -- that’s about it. Everything else was up to you. Now the tools and techniques for building out infrastructure are getting better. The stock machines, after all, are commodities, so the companies are competing by adding bells and whistles that make your life easier.
We get more, but using it isn’t always as simple as it could be. Sure, you still end up as root on some box that’s probably running Linux, but getting the right performance out of that machine is more complex. You now have more options than ever for storing your data, and it’s not always obvious which is best. Are you going to run a database that does plenty of interaction with a persistent disk? You’ll want to do something different than if you’re simply running a Web service that can cache all of the important data in RAM.
But the real fun comes when you try to figure out how to pay for your planned cloud deployment, because there are more options than ever. If you’re willing to be flexible with your compute time, the providers will cut you a break. And if you’re willing to test your app on many machines, you’ll probably be surprised to learn how different performance can be, even on machines that seem to have similar stats. In some cases, the cost engineering can be more complex than the software engineering.
Here’s a list of 13 ways the cloud has morphed or scudded into something new of late. The field was created by engineers who wanted to make it easier to share computing resources, and that’s truer than ever.
In the beginning, the cloud business was simple. You typed in your credit card info and paid for every hour (or minute) you used your server instance. Every second had the same price.
The model was simple and intuitive, but it ignored an important part of reality. Demand for computing power in the cloud is not uniform. E-commerce companies found that people shopped during lunch. Streaming video companies watched demand skyrocket when the kids came home, then leap again when adults settled down in the evening looking for entertainment. Demand ebbed and soared as people used or ignored the Web.
The natural solution is to charge different prices at different times based on demand, and the cloud companies are beginning to offer this option. Amazon now runs auctions for its machines, a process that allows prices for its instances to shift up and down with demand. If you’re able to run your jobs at off-hours and get out of the way when demand surges, you can save dramatically. If you need computing power when demand surges, you’re going to pay more.
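To make this concrete, here’s a rough sketch of what bidding on Amazon’s spot market looks like from code, using the boto3 library. The price cap, AMI ID, and instance type are placeholders, not recommendations, and your account needs the usual credentials configured.

    import boto3

    # Assumes AWS credentials are already set up locally.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Ask for one spot instance and state the most you're willing to pay per hour.
    # If the market price climbs above this cap, the instance can be reclaimed.
    response = ec2.request_spot_instances(
        SpotPrice="0.05",  # hypothetical ceiling, in dollars per hour
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
            "InstanceType": "m4.large",
        },
    )
    print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])

The request sits in the market until the going price dips below your cap; the savings come from being willing to wait.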
Many cloud providers treat their renters as if they were owners. When you start up an instance, it’s yours until you release it. Unless there’s a terrible catastrophe or a strange violation of the terms of service, like spamming, your machine will run and run until you decide to shut it down or your credit card bounces.
Google looked at the challenge of variable demand and decided to solve it by offering a lower price for machines that can be taken away on short notice. Your machine is your machine until some algorithm at Google decides that someone else will pay more. When demand is slack, you can pay much less -- perhaps as little as 30 percent of the standard price -- but when demand soars, you’ll be the first one they push out of the lifeboat. When demand ebbs again, they’ll let you back in.
It’s a great option for anyone who doesn’t need guarantees. The only challenge is writing your code so that it can survive crashes. But you’re probably doing that already, like the good programmer you are.
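In practice, “surviving crashes” usually means checkpointing. Here’s a minimal, provider-agnostic sketch of the pattern; the file name and work loop are purely illustrative, and in real use you’d push the checkpoint to durable storage such as an object store rather than the local disk.

    import json
    import os

    CHECKPOINT = "progress.json"  # hypothetical checkpoint file

    def load_checkpoint():
        """Resume from the last saved position, or start from zero."""
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)["next_item"]
        return 0

    def save_checkpoint(next_item):
        with open(CHECKPOINT, "w") as f:
            json.dump({"next_item": next_item}, f)

    def process(item):
        pass  # placeholder for the real work

    start = load_checkpoint()
    for i in range(start, 1_000_000):
        process(i)
        if i % 1000 == 0:  # persist progress so a preempted machine can pick up here
            save_checkpoint(i + 1)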
The first cloud instances were pretty much empty machines. If they came with any software at all, it was a stock distribution of a standard, open source operating system. They were a blank slate, and it was your job to fill it.
Some of the new offerings invert this model. Microsoft’s Azure, for instance, bundles machine learning and data analysis tools as services. You can store the data in Microsoft’s cloud, then fire up its software to crunch the numbers. The price of the hardware is bundled into the software. The Data Lake Analytics tool, for instance, bills by the minute and by the completed job. You concentrate on writing queries in Microsoft’s U-SQL language, and it sends you a bill after the queries are finished.
Microsoft’s Azure has more than a half-dozen services offering answers, not merely time on a machine.
One of the challenges for the cloud company is predicting how much demand will really show up. The bean counters can watch Kevin Costner in “Field of Dreams” and say, “If you build it, they will come.” But that’s no guarantee.
Amazon avoids some of this risk by letting customers purchase “reserved” instances, an option that marries a guarantee of service with a commitment to pay.
In its simplest form, you write one check and Amazon will keep your machine running for a term of one or three years. You’ll be billed whether or not your machine does anything. In return for the commitment, Amazon offers discounts that range from about 30 percent to 50 percent.
Google takes a different approach to rewarding long-term customers by offering similar discounts without the firm commitment. It starts offering discounts for “sustained use” that kick in once your machine has been running for at least 25 percent of the month. These increase until it offers a 60 percent discount for the last minutes of the month. When all of the discounting is averaged out, you’ll save 30 percent if your machine runs continuously throughout the month.
The key difference is that your machine doesn’t need to run continuously for the entire month. Google bills and computes the discount by the minute. You’ll save money even if you run your machine sporadically (as long as your aggregate use pushes into one of Google’s discount tiers). This reduces the chance that some instance will sit there unused.
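The arithmetic behind that 30 percent figure is easier to see in code. Here’s a rough sketch assuming four equal tiers whose rates match the description above -- full price for the first quarter of the month, then 20, 40, and 60 percent off -- though the exact boundaries and rates are Google’s to change.

    # Each tier: (fraction of the month, multiplier applied to the base hourly rate).
    # Illustrative figures matching the discounts described above.
    TIERS = [(0.25, 1.0), (0.25, 0.8), (0.25, 0.6), (0.25, 0.4)]

    def effective_cost(base_hourly_rate, hours_used, hours_in_month=730):
        """Cost of a VM under tiered sustained-use discounts."""
        cost, remaining = 0.0, hours_used
        for fraction, multiplier in TIERS:
            hours_in_tier = min(remaining, fraction * hours_in_month)
            cost += hours_in_tier * base_hourly_rate * multiplier
            remaining -= hours_in_tier
            if remaining <= 0:
                break
        return cost

    print(effective_cost(0.10, 730))  # about 51.10 for a machine that runs all month
    print(0.10 * 730)                 # 73.00 undiscounted -- roughly a 30 percent saving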
Amazon offers an additional volume discount on top of its Reserved Instances. If you lock in more than $500,000 in instances in one of its regions, your discount starts at 5 percent. If you spend more than $4 million, the discount rises to 10 percent.
Once upon a time, it was your job to get your data into the cloud. Now, cloud providers recognize that some data sources can be shared. Amazon, for instance, is storing weather data. If you need access to the NEXRAD data from the U.S. National Oceanic and Atmospheric Administration, Amazon has already signed a contract and loaded the information into its S3 store. It’s available in real time, and the archives go back to June 1991.
There are several dozen sources gathered from big public science projects like the Human Microbiome Project and open source efforts like Wikipedia. Access to these is free -- although you’ll probably want to rent an instance in Amazon’s cloud to run your software.
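Getting at one of these shared data sets is a few lines of code. Here’s a sketch that browses the NEXRAD archive anonymously with boto3; the bucket name and key layout below follow Amazon’s published structure for the data set, but treat them as assumptions to verify before you build on them.

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # Public data sets can be read without an AWS account.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    # Assumed layout for NOAA NEXRAD Level II data: year/month/day/station.
    resp = s3.list_objects_v2(
        Bucket="noaa-nexrad-level2",
        Prefix="2015/05/15/KTLX/",
        MaxKeys=5,
    )
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])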
The Web’s persistent challenge is scaling. It’s one thing to do something well; it’s another to do it equally well for everyone in the world who happens to hit your website when your awesomeness goes viral.
The newer software layers offered by cloud vendors handle scaling for you. Google was one of the pioneers: its App Engine takes your thin layer of code and autoscales it, deciding exactly how much compute power is needed to handle the load that’s coming your way, and you get billed by the request, not by the machine. Amazon has a more basic option, Elastic Beanstalk, which dispatches generic EC2 instances to handle the load so that you don’t have to do it yourself.
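That “thin layer of code” really is thin. Here’s a minimal sketch of the sort of handler you hand to a platform like App Engine, using Flask purely as an illustrative framework; how many copies run, and when, is entirely the platform’s decision.

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        # The handler only worries about one request; the platform worries about scale.
        return "Hello from an autoscaled instance"

    if __name__ == "__main__":
        # Run locally for testing; in production the platform starts workers as needed.
        app.run(host="127.0.0.1", port=8080)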
GPU chips may be common in the video cards on your desktop, but cloud machines aren’t desktops. They don’t have USB ports, CD/DVD drives, or video cards because they communicate with the world only via the network. They don’t need to run games or even display streaming video.
This isn’t an issue for anyone who’s running a standard Web server that does little more than concatenate strings, but it is a problem if you want to do the heavy-duty parallel computation for which GPUs are ideal. Now that more and more scientists and others are discovering the power of running parallel algorithms on GPUs, more of them are asking for GPUs in their cloud machines.
You won’t find them as an option with a standard instance, but IBM’s SoftLayer will install one in its bare-metal servers. It’s not as simple as spinning up an instance in seconds, but you can have the power of a GPU in the same box as a CPU. Amazon also has two types of machines that come with GPUs ready to run.
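The kind of work that justifies a GPU instance looks nothing like string concatenation. Here’s a small sketch using the CuPy library -- assuming it’s installed on a machine with a CUDA-capable GPU -- which mirrors NumPy’s interface but executes on the graphics chip.

    import cupy as cp  # NumPy-like arrays that live and compute on the GPU

    # Allocate two large matrices directly in GPU memory.
    a = cp.random.random((4096, 4096))
    b = cp.random.random((4096, 4096))

    # The multiplication runs as thousands of parallel threads on the GPU.
    c = a @ b

    # Pull a summary statistic back to the CPU to inspect the result.
    print(float(c.mean()))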
The early cloud machines came with a meter, and at the end of the month you got a bill. If you wanted more details, you had to log into your machine and install your own analytics package. Today, it’s easier to get data about how your machine is running.
Google’s dashboard offers live graphics that plot the load on your machines. Microsoft's dashboard includes maps and graphs for monitoring the performance of your systems. Then there are enhanced services from a handful of companies such as LogicMonitor or New Relic, which offer even richer data and graphs because they specialize in analytics. The major clouds now have a number of satellite companies orbiting around them to help you get a better sense of what your cloud machines are weathering, to mix a few celestial metaphors.
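When a dashboard isn’t enough, the same numbers can usually be pulled over an API and fed into your own tooling. Here’s a sketch using Amazon’s CloudWatch as one example, via boto3; the instance ID is a placeholder.

    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Average CPU utilization for one instance over the past hour, in 5-minute buckets.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"])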
One of the biggest challenges is choosing a machine. You might think it would be easy because they all run Linux or Windows, but it’s getting harder than ever. Amazon has about nine different types of machines, each of which can have different configurations of RAM. And that’s only the current generation. If you want to stick with older machines, Amazon has at least nine of those too.
The same is true for the other companies. Rackspace has newer machines that are optimized for intense computation, fast I/O, or large memory. You’ll want to stick your databases on the I/O-optimized instances because they keep reading and writing from the disk. Large data sets like search indices need as much memory as you can afford. There are many decisions to make, and there are sure to be more.
The original cloud machines weren’t single machines at all, but time slices of large machines running virtual machines. You had root, but it was root on a virtual machine running on a huge box. The virtualization software may make it easy to adjust the amount of RAM or keep several different machines running consistently, but it adds overhead to the system. The virtualization layer is always acting like a traffic cop, directing signals to the different virtual machines and slowing everything down.
More and more companies are selling “bare metal,” which is to say servers that aren’t virtual. You get a box and an operating system, and there’s nothing between your OS and the hardware, except perhaps some kind of BIOS. The reads and writes to the disk go faster. The exchanges with the network cards are zippier. Everything is simpler without virtualization in the way.
IBM and Rackspace are two of the more prominent companies renting bare-metal machines by the hour. Rackspace has a collection of standard machines and is about to launch its second generation this month. IBM has some stock machines but will build custom machines if you want them.
Docker is sweeping through the cloud like a storm. It makes deploying software much easier for everyone, and it’s only natural that people want to make it simple to deploy Docker containers to cloud machines.
In its simplest form, the cloud will spin up a new instance with a Docker-ready version of the OS at the bottom. Then it installs the container and sets it running. Google also offers cluster management tools that automate much of this using Kubernetes.
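Once a Docker-ready host is up, starting the container itself is only a couple of lines. Here’s a sketch using the Docker SDK for Python against whatever daemon your provider hands you; the nginx image and port mapping are just examples.

    import docker

    # Connect to the Docker daemon on the freshly provisioned host.
    client = docker.from_env()

    # Pull and start a container in the background, mapping port 80 to the host's 8080.
    container = client.containers.run(
        "nginx:latest",
        detach=True,
        ports={"80/tcp": 8080},
    )
    print(container.id, container.status)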
One of the most interesting options may be Joyent’s bare-metal hosting of Docker containers using Triton. It hacked a version of Solaris/SmartOS to support Linux-based Docker containers running directly on the base OS. That strips away a big maze of virtualization, and it makes starting and stopping much faster.
The cloud services are largely following the path of Lego toys. The earliest machines had as much variety as the early brick sets. There were a few basic options, and it was up to you to create what you needed from the basics. Now there’s a proliferation of exotic options, all offering anything you need with the extra phrase “as a service.”
The most exotic for now might be Microsoft’s Blockchain as a Service, an option that lets you add all of the trust-enhancing power of the bitcoin blockchain to your company’s IT department. It’s not only for illicit and anonymous deals: the shared ledger can help simplify accounting, compliance, and other regulatory challenges, all with an immutable database.