| 8 min read
| Dec 17, 2019
| by Andrew Pritchett

Why the cloud is probably more secure than your on-prem environment


Cloud this, cloud that … the cloud has sure become the buzzword in IT, DevOps and cybersecurity, hasn’t it? According to Gartner, “by 2022, up to 60% of organizations will use an external service provider’s cloud-managed service offering, which is double the percentage of organizations from 2018.”

However, there are still plenty of cloud skeptics out there, wondering whether all of those who have gone before them are further on the path to demise … from a security standpoint, that is.

We believe the skeptics can rest easy. Cloud service providers (CSPs) know that their profitability and reputation depend on their ability to maintain security for customer data. Therefore, security is a focus for all the CSPs, and they’ve each made significant investments in physical security and hiring security expertise.

“But my data is safer in the server room next door …”

One reason that some of us struggle with putting data in the cloud is that we have a warm and fuzzy feeling about having our data physically close. We believe that if the data is in our own data centers — at the end of the hallway — then it’s somehow more secure.

The reality is that the physical location of the data really has little to do with its security. What affects security most is access and control. If we’re honest with ourselves, how many times have we walked past the data center to find the door propped open with a box fan? How many times have we seen an unescorted visitor roaming that same hallway looking for the restroom?

We have a human tendency to become comfortable in our own surroundings — but when we become comfortable we become complacent. This is why incident responders find abandoned vendor access points in on-prem data centers on the regular.

Why the cloud probably offers better security

Here are five reasons why your data might just be safer in the cloud.

Reason #1: Physical access

Unlike most of the environments we’ve all worked in, CSPs have incredible standards for physical access controls. If you don’t believe me, check out some of the data center tours that are posted to YouTube.

CSPs exercise defense in depth, starting with tightly restricted access to the places where customers’ data is stored. Authorized employees must pass through security gates and fences, past security guards and surveillance cameras. The buildings are designed with mantraps and limited ingress and egress points, and are equipped with biometric scanners. Additionally, any time an employee performs maintenance within the data center, the work is rigorously audited. Employees even need proprietary hardware and chips in their badges or other devices in order to be authenticated and allowed inside the data center.

If somehow a bad actor were to thwart all of these controls and enter the data center — which is pretty unlikely — your data is still protected by additional layers of security. CSPs protect your data with anonymity, encryption and replication. In addition to using several layers of encryption for data at rest (either AES-256 or AES-128), CSPs also distribute each customer’s data across multiple computers. Here is a snippet from Google’s website that explains in more detail how they protect your data within their data centers:

“Rather than storing each user’s data on a single machine or set of machines, we distribute all data — including our own — across many computers in different locations. We then chunk and replicate the data over multiple systems to avoid a single point of failure. We name these data chunks randomly, as an extra measure of security, making them unreadable to the human eye.”
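The chunk-and-rename idea Google describes can be sketched in a few lines. This is illustrative only: real CSPs use far larger chunks, encrypt each chunk (with AES-256 or AES-128, as noted above) and replicate chunks across machines in different locations, none of which is shown here.

```python
import secrets

CHUNK_SIZE = 4  # tiny, for illustration; production systems use megabyte-scale chunks


def chunk_and_store(data: bytes, store: dict) -> list:
    """Split data into chunks and store each under a random, meaningless name.

    Returns the manifest (the ordered list of chunk names) needed to reassemble.
    Without the manifest, the randomly named chunks reveal nothing about the
    original object — "unreadable to the human eye," as Google puts it.
    """
    manifest = []
    for i in range(0, len(data), CHUNK_SIZE):
        name = secrets.token_hex(8)  # random name, no relation to the content
        store[name] = data[i:i + CHUNK_SIZE]
        manifest.append(name)
    return manifest


def reassemble(manifest: list, store: dict) -> bytes:
    """Rebuild the original object from its manifest."""
    return b"".join(store[name] for name in manifest)
```

The security property here is that the storage layer by itself holds only anonymous fragments; only the (separately protected) manifest ties them back together.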

The TL;DR: Most businesses couldn’t achieve this level of physical security on their own, given the sheer amount of resources you’d need to do it, like real estate, personnel and technology.

Reason #2: Resiliency

An important aspect of physical data security that’s often neglected is resiliency. By resiliency, I mean that when you store data somewhere, you expect it to still be there, exactly as you left it, when you come back for it. CSPs know that business data is often mission-critical, so they invest resources to offer their customers consistent reliability.

CSPs store objects redundantly on multiple devices across multiple facilities, with no interaction from the customer required. For example, Amazon Web Services (AWS) states that Amazon S3 is designed to sustain the concurrent loss of data in two facilities. What reliability does this represent? To put it in perspective, in the last month, according to CloudHarmony, Amazon S3 had 100 percent availability across all 18 regions globally with zero minutes of recorded downtime. Google Cloud Storage reported a total of 3.88 minutes of downtime from two of its 26 regions, and Microsoft Azure Cloud Storage reported a total of 48.13 minutes from only one of its 36 regions. Because none of the CSPs reported multiple concurrent data center outages, most users wouldn’t have noticed there was an outage.
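A back-of-the-envelope calculation shows why replicating across independent facilities pays off so dramatically. The failure probabilities below are made up for illustration; they are not any CSP’s published figures.

```python
def p_total_loss(p_facility: float, n_replicas: int) -> float:
    """Probability that every replica of an object is lost, assuming each
    facility independently loses the object with probability p_facility
    over some period. Illustrative model only — real durability math also
    accounts for correlated failures and repair/re-replication rates."""
    return p_facility ** n_replicas


# Even a pessimistic 1-in-1,000 chance of losing data at any one facility
# drops to roughly 1-in-a-billion once the object lives in three facilities.
print(p_total_loss(0.001, 1))  # single copy
print(p_total_loss(0.001, 3))  # three independent replicas
```

This exponential improvement is why a design that survives the concurrent loss of two facilities implies durability figures far beyond what a single server room can offer.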

Can your IT department guarantee that you will have nearly 100 percent availability and reliability? I didn’t think so.

Most companies probably have some level of redundancy at maybe one other site and perhaps a set of tapes stored elsewhere that they could access if things really went sideways. But the reality is that backups and archives take time to put back into production, and you’ll probably experience some data loss in the delta between when the backup was last written and when it is put back into production.

The redundancy and data arrays offered by CSPs allow real-time, seamless continuity. Additionally, customers have the ability to automate additional redundancy across other regions and countries to account for regional catastrophic events, such as hurricanes, earthquakes or other natural disasters. Unless your company is already a global operation with offices around the world, your IT department likely can’t achieve this level of redundancy.
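As one concrete example of automating that extra redundancy, here is a sketch of an S3 cross-region replication configuration in the dictionary shape accepted by boto3’s `put_bucket_replication`. The bucket names and IAM role ARN are placeholders, and the actual API call (which needs real AWS credentials) is shown in a comment rather than executed.

```python
# Placeholder values — substitute your own buckets and replication role.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-to-second-region",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter: replicate every object in the bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::my-backup-bucket-eu"},
        }
    ],
}

# With credentials configured, you would apply it like so:
# import boto3
# boto3.client("s3").put_bucket_replication(
#     Bucket="my-primary-bucket",
#     ReplicationConfiguration=replication_config,
# )
```

Once applied, new objects written to the primary bucket are copied to the destination bucket in the other region automatically, with no further customer interaction.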

Reason #3: Significant investment in security expertise

In determining the security of data, we often evaluate two things: physical access and virtual access. I mentioned a few reasons why CSPs can provide better physical access controls, but there are some ways that CSPs can offer better virtual access controls, too.

According to Microsoft, the company has a “team of more than 3,500 global cybersecurity experts that work together to help safeguard your business assets and data in Azure.” Their cybersecurity team alone is larger than the entire workforce of most businesses in the United States. It’s a luxury for most companies to have two or three people on their staff who focus on cybersecurity. The reality is that most companies hire a bunch of developers and engineers for production, a small staff for IT and the help desk, perhaps an information security officer — and maybe someone on the IT team gets some extra security or incident response training (I know, I know … it’s not just you who feels this way!).

With all of the cybersecurity expertise at their disposal, CSPs can make sure that advanced security features are built into every product and service to keep data protected at every layer. These cybersecurity teams include security engineers, security architects, security analysts and incident responders, data scientists, penetration testers, vulnerability engineers, code reviewers, quality assurance and compliance auditors and specialized feature development teams — and their single focus is on providing and improving security.

Reason #4: Development of best-in-class access control systems

Because of the vast security expertise they have on staff, CSPs have the ability to develop best-in-class authentication and access control systems. By now you’ve probably seen the “Log in with Google” button on some of your favorite websites and thought, “That’s odd … this isn’t even a Google website.” Or perhaps you’ve seen “Log in with Facebook” or “Log in with GitHub.” Sure, the site you’re on might not be owned by Google or the others, but many companies have come to realize that it’s difficult to continually stay updated on the latest attacks against authentication systems. Storing passwords is difficult and potentially risky. Keeping up with the latest multi-factor services is a constant sprint, and striking a balance between easy password reset functionality and not giving the wrong person access to protected data is difficult to get right. CSPs have the expertise and the resources to stay on top of all of these concerns and deliver best-in-class control systems.
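To give a feel for what “keeping up with multi-factor services” involves, here is the core of the time-based one-time password (TOTP) algorithm from RFC 6238, built on the HOTP construction from RFC 4226, which is what most authenticator apps implement. This sketch deliberately omits everything that makes MFA hard in practice: secret provisioning and storage, clock-skew windows, rate limiting and replay protection.

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a big-endian counter, dynamically
    truncated to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP where the counter is the current 30-second
    time step, so codes roll over twice a minute."""
    return hotp(secret, int(time.time()) // step, digits)
```

The algorithm itself fits on a page; the operational burden around it is exactly the kind of work CSPs can absorb at scale.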

These control systems include the secure management of passwords and keypairs, multi-factor authentication services, mitigating controls assigned to password resets, protection from brute force and malicious login attempts, key vaults, conditional access policies (geo location, trusted devices/clients, trusted countries/regions, IP ranges), role-based access control, automated DDoS defenses, firewalls/VPC controls, secure VPN protocols, audit logging and alerting. All of these systems are closely integrated, tested and audited by CSPs on a continual basis.
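As a taste of one control from that list, a conditional access policy based on trusted IP ranges ultimately boils down to a membership check like the one below. The ranges shown are reserved documentation networks, not real corporate addresses, and a production policy engine would of course combine this signal with device trust, geolocation and role-based rules.

```python
import ipaddress

# Hypothetical "trusted network" policy — documentation ranges only.
TRUSTED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]


def ip_allowed(addr: str) -> bool:
    """Return True if the client address falls inside any trusted range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in TRUSTED_RANGES)
```

The check itself is trivial; the value a CSP adds is integrating dozens of such signals, testing them continuously and auditing the whole pipeline.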

It would take a large team of developers and security engineers to even begin to replicate these control systems on-prem, and that doesn’t even take into account the additional maintenance and testing required to support and validate these systems.

That said, just because CSPs do a great job protecting their own infrastructure doesn’t mean that once you put your data in the cloud, you can wipe your hands clean of all things security. CSPs are responsible for protecting the global infrastructures that run all of the cloud services — the hardware, software, networking and facilities that run all of the cloud platform services offered by the provider. As the customer, you’re responsible for the security of your data and the resources you create in the cloud. That includes protecting the confidentiality, integrity and availability of your data and maintaining any compliance requirements for your workloads, whether you use the controls provided by your provider or you bring your own.

Reason #5: Vulnerability and patch management

CSPs have entire teams of people solely devoted to detecting vulnerabilities and conducting patch management. These teams scan for software vulnerabilities using a combination of commercially available and purpose-built tools. They also conduct intensive automated and manual penetration testing, software security reviews and external audits. These teams are dedicated to finding vulnerabilities before the attackers do.
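At its simplest, the scanning side of this work is comparing an inventory of installed software against an advisory feed. The toy sketch below uses entirely hypothetical package names and versions; real scanners work from live advisory databases and do far more than a version comparison.

```python
# Hypothetical advisory feed: package -> first version that contains the fix.
ADVISORIES = {
    "examplelib": (1, 4, 2),
}

# Hypothetical inventory of what is installed, as (major, minor, patch) tuples.
INSTALLED = {
    "examplelib": (1, 3, 0),  # older than the fixed version -> vulnerable
    "otherlib": (2, 0, 0),    # no advisory on file
}


def vulnerable(installed: dict) -> list:
    """Return the packages whose installed version predates the fix."""
    return [
        pkg
        for pkg, version in installed.items()
        if pkg in ADVISORIES and version < ADVISORIES[pkg]
    ]
```

Trivial as it looks, keeping the inventory accurate and the advisory feed current is exactly the relentless, easy-to-neglect work that dedicated CSP teams do continuously.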

For the average company, the IT manager is the vulnerability scanner and auditor, and she or he plays that role on top of all their other duties and responsibilities. The IT manager may get lucky and have some funding at the end of the year to put toward a third-party assessment or penetration test. It’s then the responsibility of the IT manager to make sure that all of the system owners follow up on the remediation and patching recommendations made by those third parties. The reality is that this kind of work can be exhausting for small teams, especially when the IT team is already wearing many hats. That’s why it often falls through the cracks. Think about how many orgs fell victim in May 2017 when WannaCry ransomware used the EternalBlue vulnerability to spread itself. Microsoft announced the vulnerability on March 14, 2017, in security bulletin MS17-010; yet two months later, millions of systems remained unpatched.

Security: Still a shared responsibility

Notice that I’ve been talking about why the cloud “probably” offers better security, not why the cloud simply “is” more secure.

Sure, CSPs have a great culture of security. They’ve built many features and services to make it possible for you to experience data security — but you’ve got to take the initiative to enable the security controls they’re offering. If you don’t take the time to learn about the security features and controls at your disposal and you don’t turn them on, they won’t do you any good. For example, multi-factor authentication and conditional access policies are great features but they aren’t automatically configured or enforced — you’ve got to do a little bit of the legwork here.

Most cloud providers offer security best practice documents or security checklists. These are a helpful starting point for learning about the security features and controls available to you. Remember, you aren’t their first customer: the CSPs know their own services better than anyone, and they know what other customers have experienced when they haven’t followed security best practices. Which is exactly why those CSPs will guide you on what to do.

So please … take the time to learn about those features and controls that are available. And use them.

