Capital One Data Breach: Non-Technical Tips To Not Be A Headline

This post was originally published here by CloudPassage.

In the last 48 hours, we've received dozens of messages asking about the recent Capital One data breach, detailed in this New York Times article. A similar data breach happened in April involving half a billion Facebook records in another cloud-related headline hack, and we had a similar influx from concerned AWS and Azure cloud users.

CloudPassage protects many of the largest deployments in AWS, Azure, and other IaaS providers. Those who know us appreciate the perspective we've developed over the last decade. So when the latest headline news broke that the Capital One data breach was related to an AWS environment, the email deluge began. The news that this security breach also hit other major companies made people even more concerned.

Questions, comments and considerations flowed in. Two topics stood out:

  1. How does a security breach like this happen?
  2. Could my company be facing its own data breach?

People shared varying perspectives and commentary on the Capital One data breach itself. Many have asked if public cloud services like AWS or Azure make compromise more likely (short answer: public cloud infrastructure typically improves overall security posture). And as always when a compromise like this security breach occurs, people ask about considerations for preventing a headline moment of their own.

The discourse will unfold as most others do. The technical aspects of the Capital One data breach will be dissected and studied for months to come. The technical conclusions will likely revolve around failure to implement some specific best practices for AWS configuration. The discussion will turn to operational practices and insider threats. The shared security responsibility model implemented by AWS, Azure, and other cloud infrastructure providers will be debated. The talk track is almost always the same.

However, the conversation around the Capital One data breach took an interesting turn. Instead of discussions around technical post-mortem, the security of AWS itself, or the latest attack automation employed, the discussion quickly shifted gears to the less obvious human and organizational factors.

Based on our work at CloudPassage helping prevent events like this for hundreds of enterprises, here's some perspective on non-technical factors that contribute to such headline events.

Blind Trust Is A Security Exposure

Blind trust is probably the biggest non-technical mistake that leads to headline compromises like the Capital One data breach. Blind trust can stem from bad assumptions like these, which we hear with unfortunate frequency as enterprises are figuring out the brave new world of cloud and DevOps:

"AWS, Azure, Google, or [insert cloud provider name here] takes care of all the security."

"The DevOps team isn't bringing up security, so no news is good news."

"These are experimental projects, so they must not be using critical data."

It seems so obvious that verification is important, so how does blind trust happen?

Blind trust might stem from a DevOps team that gets frustrated with a traditional security team that's slow and non-automated, so they simply shut that team out. In the early days of public cloud infrastructure adoption, this was a disturbingly common situation associated with shadow IT.

In other cases, the security team may be asked to engage on an AWS, Azure, or other cloud project, but turn a blind eye because they haven't been given additional resources to support these projects, or because the projects are unofficial. We observe this with some frequency when enterprises are just beginning to adopt AWS, Azure, GCP, and other IaaS services.

Sometimes there's simply a lack of common knowledge. The most common version of this problem is when legacy security technologies, practices, and policies collide with the realities of cloud technologies and DevOps. A great example is legacy vulnerability scanning tools, which are simply too slow to keep up with the wildly increased rate of change seen in modern application delivery environments. DevOps executes beautifully on fully automated continuous delivery, and then old security tech shows up like an anchor. This often leads to the frustration problem mentioned earlier.

Cloud and DevOps go hand-in-hand, and DevOps teams are quite autonomous by nature. Because development and operations are now decentralized, DevOps teams assume more responsibility for implementing security controls. This is good for security teams, and building trust in your DevOps team is critical.

But as the old saying goes, trust but verify. Someone or something (like AWS configuration assessment automation) must verify. Otherwise the trust is implicit but blind, whether intended or not. Blind trust creates security exposures in any environment (AWS, Azure, data center, anywhere) since eventually something will get missed. The rate of change created by DevOps, a good thing in almost all other regards, only increases the probability of something being missed.
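To make "someone or something must verify" concrete, here is a minimal sketch of the kind of automated check an assessment tool might run, written in Python with boto3. It assumes boto3 is installed and read-only AWS credentials are available in the environment; a real platform would cover hundreds of rules across many services, not just S3 Block Public Access.

```python
"""
Minimal "trust but verify" sketch: flag S3 buckets in an AWS account that
do not have all four Block Public Access settings enabled. Illustrative
only; assumes boto3 and read-only credentials are already configured.
"""
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def bucket_is_locked_down(bucket_name: str) -> bool:
    """Return True only if every Block Public Access setting is enabled."""
    try:
        config = s3.get_public_access_block(Bucket=bucket_name)
        settings = config["PublicAccessBlockConfiguration"]
        return all(settings.values())
    except ClientError as err:
        # A bucket with no Block Public Access configuration is a finding.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return False
        raise


def main() -> None:
    findings = [
        bucket["Name"]
        for bucket in s3.list_buckets()["Buckets"]
        if not bucket_is_locked_down(bucket["Name"])
    ]
    if findings:
        print("Buckets without full Block Public Access:")
        for name in findings:
            print(f"  - {name}")
    else:
        print("All buckets have Block Public Access fully enabled.")


if __name__ == "__main__":
    main()
```

Run on a schedule, or on every infrastructure change, even a narrow check like this turns blind trust into verified trust for one small slice of an AWS environment.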

When things get missed for any reason, cloud infrastructure weaknesses will be present from the outset or will develop over time. This means an escalating probability of a disaster like the Capital One data breach.

Build-It-Yourself Security Visibility

Building a security solution in-house is a siren song. There are so many dazzling open source tools out there just waiting to be stitched together. The price can't be beaten. And in concept it's so simple: just grab some configuration data and compare it to a rule. Super easy to build, right?

Wrong.

The hard reality is that building and maintaining security visibility tools that are consistent, robust, and accurate enough to avoid a compromise at the level of the Capital One data breach is anything but simple. These tools are easy to conceptualize, easy to design, even easy to prototype. But beyond that point, things get much more serious. It's incredibly complex to build a production-quality solution that can operate at the speed and scale required to keep up with cloud infrastructure and DevOps.
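To show just how deceptively easy the prototype stage is, here is the kind of naive rule check a DIY effort usually starts with. The configuration snapshot and rule names below are invented purely for illustration.

```python
# A deliberately naive DIY prototype: compare a snapshot of configuration
# data against a handful of hard-coded rules. The structure and rule names
# are hypothetical and exist only to illustrate the pattern.

# Example configuration snapshot, as a DIY tool might collect it.
config_snapshot = {
    "s3_bucket_public": False,
    "root_mfa_enabled": True,
    "ssh_open_to_world": True,
}

# The "policy" is just a dict of expected values.
rules = {
    "s3_bucket_public": False,   # buckets must not be public
    "root_mfa_enabled": True,    # root account must use MFA
    "ssh_open_to_world": False,  # port 22 must not be open to 0.0.0.0/0
}


def evaluate(snapshot: dict, policy: dict) -> list[str]:
    """Return the rule names whose observed value violates the policy."""
    return [name for name, expected in policy.items()
            if snapshot.get(name) != expected]


violations = evaluate(config_snapshot, rules)
print("Violations:", violations)  # -> Violations: ['ssh_open_to_world']
```

The prototype works, and that is the trap: it says nothing about collecting the snapshot reliably at scale, keeping the rule set current as providers ship new services, proving accuracy to auditors, or running continuously, which is where the real engineering cost lives.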

Here are some factors to account for if considering a DIY security solution:

  • New Services: IaaS providers like AWS, Azure, and Google all build new services constantly, and any visibility tooling will need to keep up with them.
  • Quality Assurance: There must be an ongoing quality assurance investment, because you have to constantly verify that the system is operating correctly; the auditors and your internal customers will demand it.
  • Rules & Policies: Somebody has to write and maintain the policies and signatures needed to run assessment and monitoring systems (e.g., AWS configuration best practices), and they must be continually updated.
  • Platform Maintenance: If it were as easy as building it once, everyone would do it. But it isn't that easy: the underlying platform, software, integrations, data feed formats... all of it has to be kept current and running to prevent a security breach.

Yes, it's simple to conceptualize such tools. Yes, a prototype is simple to build. And yes, your team is smart enough to do it.

But when it actually goes into production and there are bugs, user enhancement requests, new environments that need to be supported, updates to the underlying system... pretty soon the security organization is in the software business, not the security business. Focus and energy shift from security to software development. Bad things happen, like the recent high-profile Facebook and Capital One data breaches.

And then the person who wrote the tooling gets frustrated and quits. The documentation turns out to be sub-par. As fate would have it, the developers might not have really been developers, and you're left with a monster made of duct tape and baling wire that nobody understands, setting you up for a very ugly failure.

We've seen this happen numerous times. We're always there to help clean up the mess when it fails, but it's always disappointing to watch companies have to deal with the aftermath. Yes, we are a vendor, which gives you the right to summarily dismiss these issues as "vendor fear-mongering". But before getting too enamored with the idea of building your own cloud security solution, take a close look around at some of the recent compromises. You'll see that many of them tried to build their own security tools.

If these companies can't get it right, it's obviously not that easy. Think hard about what it takes to build and maintain a solution throughout its full lifecycle. Remember that it has to be production-quality, consistent, and effective, because if it's not done well, the next headline data breach could have your name in it instead of Capital One's.

Imbalance Between Threat Prevention and Detection

An effective security and compliance program needs a balance of both prevention and detection at all levels of the stack. We often see companies get over-focused on one or the other, and the imbalance results in either too many successful attacks for response to be effective, or overconfidence in preventative measures paired with weak incident detection and response.

When working with fast-moving cloud infrastructure environments like AWS and Azure, the importance of this balance is amplified. DevOps teams can make sweeping changes to these massive environments almost instantly, and the raw scale of most cloud infrastructure environments makes for an ever-expanding attack surface for attackers to probe, leaving it vulnerable to a hack like the Capital One data breach.
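To give a sense of scale, even a single narrow question like "what is reachable from the internet right now?" has to be asked continuously as DevOps teams change the environment. The sketch below (Python with boto3; credentials, region, and single-account scope are assumptions) answers that question for one region's EC2 security groups, and that is only one sliver of the surface area that has to be watched.

```python
"""
Sketch of one small attack-surface check: list EC2 security groups in the
current region that allow inbound traffic from anywhere (0.0.0.0/0).
Assumes boto3 and read-only credentials; a real program would sweep every
region and account, and feed results into detection and response workflows.
"""
import boto3

ec2 = boto3.client("ec2")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for permission in group.get("IpPermissions", []):
        open_ranges = [r for r in permission.get("IpRanges", [])
                       if r.get("CidrIp") == "0.0.0.0/0"]
        if open_ranges:
            # FromPort is absent when the rule covers all traffic.
            port = permission.get("FromPort", "all")
            print(f"{group['GroupId']} ({group['GroupName']}) "
                  f"allows 0.0.0.0/0 on port {port}")
```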

Too Much Focus On Detection & Response

Strictly focusing on detection and response is another siren song, because "catching the bad guys" seems sexy and exciting. Giant monitors with people barking orders like a NASA mission control scene. Threat hunting, intrusion detection, containment, forensics... even the words seem really important and cool. But the reality is that you can't invest 100% of your time and effort in detection and response. This would be like piling your money up in the street, surrounding it with cameras and guards on Segways, and chasing around people who walk by and snatch up some cash. It's absurd. Without preventative controls to ensure the environment can at least stop the majority of lower-level attacks, the volume of detection and response work would be overwhelming.

The flip side is that you can't just invest time and energy in prevention. Omitting detection and response from a security program would have to be based on the absurd notion that the protective measures were perfect. They won't be. They can't be.

It doesn't matter how amazing you believe your security architecture might be, how solidly your AWS environment is operated, how many audits you've passed... no organization in the world has ever implemented perfect security. Whether it's an attack from a trusted insider, an honest mistake made by a privileged user, or a piece of devilishly clever malware that infected a DBA's workstation, there will be compromises. And it doesn't matter whether you're in a data center, a colocation facility, or a cloud provider like AWS or Google; all of these environments require both detection and prevention to avoid an event like the Capital One data breach.

It's about balance. Failing to achieve a reasonable balance leads to operational struggles, inconsistency in achieving control objectives, and corners being cut due to inefficiencies draining resources. All of these conditions set the stage for compromise.

Summary: Avoiding a Capital One Data Breach

There are many factors that contribute to compromises so massive they make headlines. The technical issues get all the air time, so here are the three non-technical factors that we most often see eroding security efficacy. Keep them in mind, and we hope they can help you avoid a headline data security breach like the ones suffered by Capital One, Facebook, and others.

  • Don't succumb to blind trust. It's too easy to assume that security is being done right, someone else is taking care of it, or experimental/unsanctioned projects don't warrant focus. Leverage an independent source of risk visibility to help identify areas of potential trouble, and for fast, dynamic environments like AWS, consider the importance of automation.
  • Don't take the DIY route lightly. That "neato" feeling of building your own tools can distract you from your core mission of security. It's a slippery slope, and it's not as simple or cheap as it looks. The costs are high, especially ongoing, and mistakes are exceptionally costly. And hey, if you'd rather be in the business of building security tools, we're hiring.
  • Strike a balance between prevention and detection. Be it your AWS environment, your credit rating, or your personal health, successful risk management requires a balance of both prevention and detection. Don't fool yourself into thinking your protective controls are perfect, and don't get deluded into thinking you can catch and stop every attack. Preventing events like the Capital One data breach is about balance.

We hope this post provided some useful considerations as you focus on protecting your own enterprise cloud infrastructure environment. Stay tuned for a post by Amol Sarwate, Senior Director of Threat Research for CloudPassage, on how our Halo platform detects exposures that enable events like the Facebook and Capital One data breaches.

We also invite you to learn more about how CloudPassage makes cloud infrastructure security fast, integrated, and automated so you can have continuous security assurance in your own AWS, Azure, and other IaaS environments.

If you are particularly anxious about your exposure, sign up for our free vulnerability assessment of your environment. You'll get results in 30 minutes.

