How to secure Amazon Web Services like a boss
More and more sensitive data is heading to the cloud. Genomic informatics company GenomeNext, for example, feeds raw genome sequencing data into high-speed computational algorithms running entirely on AWS. Pharmaceutical giant Bristol-Myers Squibb reduced the duration of its clinical trials by using AWS. Electronic exchange Nasdaq OMX developed FinQloud on AWS to provide clients with tools for storing and managing financial data.
Amazon, like most cloud providers, takes care of security for its physical data centers and the server hardware the virtual machines run on, but leaves it up to the individual customer to protect its own infrastructure. Amazon provides a plethora of security services and tools to secure practically any workload, but the administrator has to actually implement the necessary defenses.
The following are expert tips that go beyond the basics for securing your AWS account and keeping the business up and running.
Most developers want to be secure, but they don’t want to be slowed down. They are under tremendous pressure to build new features and ship code. The cloud is supposed to help them work faster, so security has to be incorporated in a way that lets them keep doing what they do best.
Cloud usage in most organizations tends to be primarily developer-driven, as developers spin up new instances whenever they need more storage or power. When developers lead cloud usage, it's easy to wind up with a sprawling environment with varying levels of security, said Rich Sutton, vice president of engineering at Nexgate, a division of security company Proofpoint. All the ports may be left open, or all the user accounts on a given server may have administrator rights. Another common mistake is to reuse the root password across instances.
Create cloud images with basic security policies already applied and security tools configured. Developers can deploy new instances off the secure images, making it easy to use self-service to get up and running without introducing friction.
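One way to enforce the secure-image rule is to let deployments go only through an approved mapping of server roles to hardened images. The sketch below assumes this convention; the AMI IDs are hypothetical placeholders, and the resulting parameters would be handed to boto3's `ec2.run_instances` in a real deployment.

```python
# Sketch: deploy new instances only from pre-hardened AMIs, so every
# server starts with the baseline security policies already applied.
# The AMI IDs below are made-up examples, not real images.

HARDENED_AMIS = {
    "web": "ami-0aaaaaaaaaaaaaaaa",   # hardened base image for web servers
    "db":  "ami-0bbbbbbbbbbbbbbbb",   # database image with ports locked down
}

def build_launch_params(role, instance_type="t3.micro"):
    """Return run_instances-style parameters for a given server role,
    refusing any role that has no approved, hardened image."""
    if role not in HARDENED_AMIS:
        raise ValueError(f"no hardened image approved for role {role!r}")
    return {
        "ImageId": HARDENED_AMIS[role],
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }

params = build_launch_params("web")
# With boto3, this dict would be passed to ec2.run_instances(**params).
```

Because the mapping is the only path to a new instance, a developer gets self-service speed while an unapproved image simply cannot launch.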
A typical development scenario has developers working in different environments for development, testing, and production. The foolproof way to set up the cloud counterpart is to have completely separate AWS accounts for each environment. Thus, each environment is isolated from the others, so an attacker who gains access to a development server can’t easily hop onto a production system. It also prevents accidents, such as a developer or administrator dropping a database in production instead of in testing.
“Developers are trying to get things done as fast as possible,” Sutton says. “It’s a built-in guarantee that developers can’t make mistakes.”
If separate accounts are not possible, each environment should use a different key to prevent cross-connectivity. Development keys should never wind up in production code, and vice versa.
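One lightweight way to enforce that separation is a startup guard that refuses to run if a key tagged for another environment shows up. The key-prefix naming convention below is an invented example (AWS does not tag keys this way natively); the point is simply to fail fast on cross-environment credentials.

```python
# Sketch: refuse to start if a credential tagged for one environment
# appears in another. The prefix convention is a made-up team policy,
# not an AWS feature.

KEY_PREFIXES = {"dev": "DEV-", "test": "TEST-", "prod": "PROD-"}

def check_key(environment, access_key):
    """Raise if the key belongs to a different environment."""
    for env, prefix in KEY_PREFIXES.items():
        if access_key.startswith(prefix) and env != environment:
            raise RuntimeError(
                f"{env} key used in {environment} -- keys must not cross environments"
            )
    if not access_key.startswith(KEY_PREFIXES[environment]):
        raise RuntimeError(f"key is not tagged for {environment}")
    return True

# A dev key in the dev environment passes; the same key in prod aborts.
check_key("dev", "DEV-abc123")
```

A guard like this turns a silent key leak into a loud deployment failure, which is the point of keeping environments apart.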
User accounts are the Achilles’ heel of information security, because attackers can take over the entire environment by stealing account credentials. So make it easy -- avoid user accounts wherever you can. Amazon offers various APIs to handle provisioning and scaling; choose them when working with instances instead of creating new accounts to manage them.
Have applications use specially created service accounts with low privileges to access systems. An example is to create a specific account for an application to use when it makes database calls, rather than going with a normal database user account.
Service accounts typically are restricted in what they can do. With a database service account, for example, privileges might be limited to the ability to select and possibly update certain tables. If an attacker steals the credentials and logs in, the potential for damage is much lower because the attacker can't view other tables or objects, let alone make any changes. And if the logs show a login attempt using the service account, that is a surefire sign someone is trying to break in.
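The idea can be modeled as an explicit allowlist of (action, table) grants per service account, checked on every request. The account and table names below are illustrative, not from the article.

```python
# Sketch: a low-privilege service account expressed as an explicit
# allowlist of (action, table) pairs. Anything not granted is denied.
# Account and table names are hypothetical.

SERVICE_GRANTS = {
    "orders_app": {
        ("SELECT", "orders"),
        ("UPDATE", "orders"),
        ("SELECT", "products"),
    },
}

def is_allowed(account, action, table):
    """True only if this service account was explicitly granted the action."""
    return (action.upper(), table) in SERVICE_GRANTS.get(account, set())
```

With this default-deny shape, a stolen `orders_app` credential can read and update orders but cannot delete rows or touch a users table, which matches the limited blast radius the article describes.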
“Security with AWS is all about being proactive and reducing the attack surface by limiting the damage an attacker could cause in case of an eventual breach,” says Liviu Arsene, senior e-threat analyst at BitDefender.
For user accounts that have already been created or need to exist for specific purposes, deleting them can cause more problems than it solves. Perhaps the team is not even sure whether anyone is using an account. Instead of deleting those accounts, assign the lowest set of privileges possible. If the account is legitimately in use, someone will complain. As with service accounts, if someone tries logging in with one of these accounts, that will show up in the records. Administrators will then have a starting point for investigation to determine whether the attempt is legitimate.
“If you see [user accounts] surface, chances are someone has compromised your cloud infrastructure,” says Misha Govshteyn, chief strategy officer and founder of AlertLogic.
Amazon offers several security services, including certificate management, encryption tools, Hardware Security Modules for storing private keys, and Web application firewalls. Take advantage of these built-in tools -- or one of the many offerings in Amazon Marketplace.
Security Groups let administrators split instances by service types and assign them to specific groups. A set of security policies could then be applied to all the hosts assigned to the group. The database should be in its own group, separate from the load balancer and the Web application firewall, for example. By restricting ports and defining access rules, administrators can prevent lateral movement across the network, where attackers get a foothold on the Web server and try to move onto the database.
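A rule such as "only the web tier may reach the database port" can be written in the `IpPermissions` structure that boto3 uses for security groups. The CIDR block and port below are example values; in practice the dict would be passed to `ec2.authorize_security_group_ingress(GroupId=..., IpPermissions=[rule])`.

```python
# Sketch: an ingress rule for the database security group that admits
# only the web tier's subnet, blocking lateral movement from anywhere
# else. CIDR and port are illustrative.

def db_ingress_rule(web_tier_cidr="10.0.1.0/24", db_port=5432):
    return {
        "IpProtocol": "tcp",
        "FromPort": db_port,
        "ToPort": db_port,
        "IpRanges": [{
            "CidrIp": web_tier_cidr,
            "Description": "web tier only -- never 0.0.0.0/0",
        }],
    }

rule = db_ingress_rule()
```

Keeping the database in its own group with a rule like this means a foothold on a web server does not automatically translate into network reach to the database.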
Regularly inspect the security group settings via the AWS console to make sure nothing has changed unexpectedly. If you whitelist IP addresses -- a very good practice to restrict access to certain systems -- check the list to make sure nothing has changed without your knowledge.
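That periodic inspection is easy to script. The sketch below assumes input shaped like the `IpPermissions` lists returned by boto3's `describe_security_groups`; the sample rules are invented for illustration.

```python
# Sketch of an audit pass over security-group rules, flagging anything
# open to the entire internet. The sample data mimics boto3's
# describe_security_groups response shape.

def find_open_rules(ip_permissions):
    """Return (from_port, to_port, cidr) tuples that allow 0.0.0.0/0."""
    findings = []
    for perm in ip_permissions:
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                findings.append(
                    (perm.get("FromPort"), perm.get("ToPort"), ip_range["CidrIp"])
                )
    return findings

sample = [
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},        # public HTTPS: expected
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},   # SSH allowlisted: fine
]
open_rules = find_open_rules(sample)
```

Run on a schedule, a check like this surfaces rules that drifted open without anyone noticing, which is exactly the unexpected change the article warns about.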
While Security Groups and Network Access Control Lists don’t compare to full-fledged firewalls, they're still effective for limiting specific network access to applications. More important, they help stop an attacker who compromises one group from reaching another. Craft inbound and outbound rules to filter out unnecessary traffic and allow only necessary network communications.
“You need to consider whether you really need to allow 0.0.0.0/0 network traffic or only accept specific connections,” says Arsene.
Native AWS tools such as Elastic Load Balancers (ELBs) can be used -- somewhat -- to mitigate DoS or DDoS attacks, says Arsene. ELBs make applications resilient when faced with a high traffic load by directing traffic to multiple EC2 instances running the same application. In the case of a DoS or DDoS attack, the application remains up and available because the ELB scales up to multiple instances.
The genomics company GenomeNext takes full advantage of the cloud's fluid nature. The company randomly moves instances around in the region so that IP addresses are constantly in flux. This tactic forces potential attackers into a game of hide-and-seek, trying to find the servers long enough to launch an attack.
“We take advantage of everything Amazon offers for security, but you still have to architect your environment. You still have to plan for failure,” says James Hirmas, co-founder and CEO of GenomeNext.
Just as software undergoes extensive testing before going to production, cloud instances should be tested thoroughly. If a cloud instance in production has a critical vulnerability or is missing the appropriate security controls, then it should be treated as an outage, with the issue escalated so that it is addressed right away.
If a cloud instance has a vulnerability discovered before it’s deployed into production, it should be treated with the same priority as a critical software defect and the release should be halted.
“Software already goes through QA before it’s shipped. Why shouldn't security work this way too?” asks Govshteyn.
Regularly back up your data so that recovery is possible, even in the case of an attack or a ransomware infection.
Code Spaces, which provided support for devops application management, offered a sobering lesson in how much damage a dedicated perpetrator can inflict on a company’s cloud environment. In this case, the attacker launched a DDoS attack and demanded a ransom. When Code Spaces officials logged into the AWS account to try to stop the attack, the attacker deleted data from the servers. The destruction was extensive enough that Code Spaces ceased operations.
Amazon lets customers back up data across regions or even move data out of S3 into Amazon Glacier for data archiving. A lifecycle rule can move Amazon S3 object versions to the lower-cost Glacier storage class and automatically delete them from Glacier after the data expiration date. An offline archive may feel like going backward compared to backing up to another cloud instance, but it ensures there is a copy of essential business data that attackers can't readily reach.
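Such a rule can be expressed as an S3 lifecycle configuration. The day counts below are example values; with boto3 the dict would be passed to `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=lifecycle)` on a versioned backup bucket.

```python
# Sketch of an S3 lifecycle configuration: move noncurrent object
# versions to the lower-cost Glacier class after 30 days, then expire
# them after a year. Day counts are illustrative, not a recommendation.

lifecycle = {
    "Rules": [{
        "ID": "archive-then-expire",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},   # apply to every object in the bucket
        "NoncurrentVersionTransitions": [
            {"NoncurrentDays": 30, "StorageClass": "GLACIER"},
        ],
        "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
    }]
}
```

Because the transition applies to noncurrent versions, an attacker who overwrites or deletes live objects still leaves archived copies behind in Glacier.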
Amazon offers all the tools to take care of the security basics. Don’t ignore them. Multifactor authentication on AWS accounts is a must. Create separate accounts for developers so that no one is sharing passwords. Make sure no one is using the root account and that developer accounts have only the necessary privileges.
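These basics are checkable from IAM's credential report (retrievable with boto3's `iam.get_credential_report`). The sketch below inspects a few of the report's columns for the two red flags just mentioned: console users without MFA and any use of the root account. The sample rows are invented.

```python
# Sketch: scan rows of an IAM credential report for basic hygiene
# failures. Only a few report columns are used; sample rows are made up.

def hygiene_findings(rows):
    findings = []
    for row in rows:
        if (row["user"] == "<root_account>"
                and row.get("password_last_used") not in ("no_information", "N/A", None)):
            findings.append("root account has been used -- investigate")
        elif row.get("password_enabled") == "true" and row.get("mfa_active") != "true":
            findings.append(f"{row['user']} can log in without MFA")
    return findings

sample = [
    {"user": "<root_account>", "password_last_used": "2016-01-15T09:00:00+00:00"},
    {"user": "alice", "password_enabled": "true", "mfa_active": "true"},
    {"user": "bob", "password_enabled": "true", "mfa_active": "false"},
]
findings = hygiene_findings(sample)
```

Running a report scan like this regularly keeps the "must-do" basics from quietly eroding as accounts accumulate.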
Use Amazon’s tools to manage private keys and make sure they are stored securely. Monitor AWS usage for suspicious activity, such as unexpected API calls and unusual account logins.
Being secure on AWS requires a different mind-set from how organizations traditionally approached security. As Govshteyn says, “You have to believe Amazon is doing the job it needs to secure its environment, but people also have to change how they architect their infrastructure.”