Capital One Has Multi-Billion Dollar Breach. It Didn't Have to Happen

We were among the first to break the news of the Capital One breach on Twitter. It is one of the largest banking breaches ever, and it is a huge deal.

This is a particularly alarming breach because it was discovered not by the company itself or by law enforcement, but because the hacker, Paige Thompson, who is pictured at the top of this post, bragged about it online. Thompson, who goes by Erratic on Twitter, is prolific, tweeting many times a day. We learned by reading the feed, for example, that her cat recently died.

Capital One has disclosed that a March 22-23 breach affected 100 million people in the US and a further 6 million in Canada. About 140,000 Social Security numbers and 80,000 bank account numbers were stolen.

The data leaked potentially includes “names, addresses, ZIP codes/postal codes, phone numbers, email addresses, dates of birth, and self-reported income” of those who applied, as well as information like “credit scores, credit limits, balances, payment history, contact information.”

Transaction data for "a total of 23 days" spread across 2016, 2017 and 2018 was also obtained.

A complaint filed Monday in Seattle identified the suspect as a former software engineer for Amazon Web Services.

Capital One CEO Richard D. Fairbank said: “While I am grateful that the perpetrator has been caught, I am deeply sorry for what has happened. I sincerely apologize for the understandable worry this incident must be causing those affected and I am committed to making it right.”

From the DOJ:

According to the criminal complaint, THOMPSON posted on the information sharing site GitHub about her theft of information from the servers storing Capital One data. The intrusion occurred through a misconfigured web application firewall that enabled access to the data.  On July 17, 2019, a GitHub user who saw the post alerted Capital One to the possibility it had suffered a data theft.  After determining on July 19, 2019, that there had been an intrusion into its data, Capital One contacted the FBI.  Cyber investigators were able to identify THOMPSON as the person who was posting about the data theft.  This morning agents executed a search warrant at THOMPSON’s residence and seized electronic storage devices containing a copy of the data. 

"Capital One quickly alerted law enforcement to the data theft — allowing the FBI to trace the intrusion," said U.S. Attorney Moran. "I commend our law enforcement partners who are doing all they can to determine the status of the data and secure it."

Computer fraud and abuse is punishable by up to five years in prison and a $250,000 fine. The charges contained in the complaint are only allegations. A person is presumed innocent unless and until he or she is proven guilty beyond a reasonable doubt in a court of law.

The case is being investigated by the FBI. The case is being prosecuted by Assistant United States Attorneys Steven Masada and Andrew Friedman.

U.S. Magistrate Judge Mary Alice Theiler ordered Thompson to be held. A bail hearing is set for Aug 1. We reached out to a number of companies in the cybersecurity space to get their comments on how such a breach could have been prevented.

This is a pretty good example of why putting 100 million credit card numbers in one database is not a good idea. If this had been a distributed database, the economics for the hacker would have been very different.

Brett Shockley, CEO

The circumstances around the Capital One breach highlight the need for increased scrutiny of hosted security applications. As enterprises and networks become more distributed and network resources – including security applications – are allocated to the cloud, the security applications themselves, whether commercially available or custom designed, must be regularly tested and monitored to ensure they are secure and free of misconfigurations that could be exploited.

Tom DeSot, EVP and Chief Information Officer at Digital Defense, Inc.

We reached out to Apex Technology Services, an MSP/MSSP providing IT and cybersecurity services to a variety of customers, from the Fortune 200 on down. The company has a great deal of hands-on experience. (Disclosure: Yours Truly, Rich Tehrani, is the CEO.) A senior engineer for the company responded to our questions as follows:

Any idea how this breach happened?

It appears to be the result of an overlooked or misunderstood configuration on the web application firewall they are using, which allowed access to a server over network ports that should not have been open. Either incorrect ports were used, or the ports were open to a server that should not have been accessible over those ports from the outside. Something as simple as an incorrect subnet mask defined on an object in the firewall can allow access to a number of devices when the intent was that only one device be accessible over a specific port.
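The subnet-mask mistake described above can be illustrated with Python's standard `ipaddress` module. The addresses here are hypothetical documentation ranges, not anything from the actual incident:

```python
import ipaddress

# Intended firewall address object: expose exactly one server (a /32 host).
intended = ipaddress.ip_network("203.0.113.10/32")

# The same address saved with a mistyped mask (/24) silently becomes a
# network object covering 256 addresses.
misconfigured = ipaddress.ip_network("203.0.113.10/24", strict=False)

print(intended.num_addresses)       # 1
print(misconfigured.num_addresses)  # 256

# Any rule written against the misconfigured object now also matches
# internal hosts that were never meant to be reachable on that port.
internal_host = ipaddress.ip_address("203.0.113.57")
print(internal_host in misconfigured)  # True
```

One mistyped digit in the mask turns a single-host rule into a rule covering an entire subnet, which is exactly the class of error the engineer describes.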

How can this happen in such a large company with so many resources?

Based on my experience, the size of the company and the number of resources don't necessarily translate into better security. Even in the largest corporations, limiting expenses and cost-cutting is one of the greatest focuses and is ingrained in the management culture. Bonuses and compensation are often tied to how well you manage your budget, and staffing is a huge expense in that budget. If you ask most IT professionals, they will agree that they have more work than they can keep up with, even in the largest corporate organizations. When operating in an organization with this culture, techs are rushed to complete one thing and catch up on the next, and this is how things get overlooked. The misconfiguration should have been caught by multiple groups (engineering, QA, IT security, etc.) but wasn't. It seems more likely that it was not a lack of expertise but rather a lack of focus, which occurs when IT groups are not adequately staffed and are rushing between projects or tasks to keep up with their workload.

How can it be prevented in the future?

If they didn't previously, Capital One should have third-party security analysts checking every node on their network that can be accessed from the outside, on at least a quarterly basis. This will come with a large price tag, but corporations should weigh the minimal savings gained from being cost-conscious with IT funding against the far larger impact of a breach of this magnitude or even larger.

What do companies need to watch for in terms of firewall configuration?

Any application should be simulated in a test environment using the exact same security that will be used in production, and extensive pen testing should be done against it there. This allows any holes to be caught and plugged in that environment rather than in production. In this case it was their web application firewall, but the same basic rules apply: restrict any management access from the outside; close all ports that are not absolutely necessary; for ports required by the application, redirect them to ports that are not well known; and strictly limit them to only the IP addresses the application needs in order to function.
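A minimal sketch of that kind of pre-production rule audit might look like the following. The rule format, port numbers and addresses are invented for illustration; they are not taken from any real firewall API:

```python
# Approved, non-well-known ports the application is redirected to, and the
# only source addresses that should be able to reach it (both hypothetical).
ALLOWED_PORTS = {8443, 9443}
ALLOWED_SOURCES = {"198.51.100.7", "198.51.100.8"}

rules = [
    {"port": 8443, "source": "198.51.100.7"},  # OK: approved port and source
    {"port": 22,   "source": "0.0.0.0/0"},     # management access from outside
    {"port": 9443, "source": "0.0.0.0/0"},     # right port, unrestricted scope
]

def audit(rules):
    """Return (rule, reason) pairs for every rule violating the policy."""
    violations = []
    for r in rules:
        if r["port"] not in ALLOWED_PORTS:
            violations.append((r, "port not approved"))
        elif r["source"] not in ALLOWED_SOURCES:
            violations.append((r, "source not restricted"))
    return violations

for rule, reason in audit(rules):
    print(rule, "->", reason)
```

Run against every proposed change in the test environment, a check like this catches the open management port and the unrestricted source before they ever reach production.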

Are any firewalls more secure than others?

Some firewalls are more robust than others in terms of capacity and their ability to handle DDoS-type attacks, but as far as configuration and the ability to make one more secure than another, they all perform the same function. All of them can be locked down completely to allow nothing through except applications that require ports to be opened in order to function. What ports you open, and where they are open to, is really what makes the difference, and that is why configuration is so important. Every major firewall in the industry needs to be updated regularly. All of them have had a number of exploits reported over the past year or so, and they will continue to, so updating them regularly is important. Again, this becomes a drain on staffing resources and is often not prioritized properly.

What are some other cybersecurity areas companies need to watch for?

While in this case the breach appears to have occurred from the outside, the greatest threat facing all organizations today is the internal threat of employees already inside the network. In most cases this is not malicious behavior by an employee, but a targeted attack via phishing or another mechanism to compromise their device and then use it to gain access, or to slowly work toward the intended target once inside the network. Employee cybersecurity training should be a top priority for all organizations, and it should take place at least once a year for all employees, regardless of whether they have attended before. The threat is continually evolving, and what they were taught to look for six months ago may not be the most important thing to look for today. Cybercriminals are well aware that it is much easier to gain access via social engineering against someone already inside than to find holes in the company's external perimeter.

Next up we spoke with Tim Woods, VP Technology Alliances, at security automation vendor FireMon.

Tim Woods VP Technology Alliances, FireMon.

Any idea how this breach happened?

The individual responsible (Paige Thompson) reportedly had first-hand knowledge of where the Capital One data resided. She discovered that access to the data was essentially unchallenged as a result of misconfigured security application controls. Companies take cloud configuration too lightly and frequently assume that storing data in the cloud inherently makes it secure. This particular bad actor was not very adept at covering her tracks, and while it remains early, the hope is that exposure may be limited. Others have not been so lucky. Hackers routinely use automated methods to crawl the web searching for public data exposure due to misconfigurations.

How can this happen in such a large company with so many resources? 

Unfortunately, the problem is not as uncommon as one might think. Companies are increasingly moving their workloads and data stores to the cloud, and the larger and more distributed those data stores become, the greater the potential for exploitation. Even though a larger company may have many resources, if those resources are not properly trained or equipped, the probability of misconfigurations goes up. It is very important to understand that the security configuration of a cloud deployment is a shared responsibility between the cloud provider and the consumer of the cloud service. One analyst has predicted that by 2020, 95% of cloud data exposures will be the result of consumer misconfiguration, not the cloud provider's fault.

How can it be prevented in the future?

In network security (cloud or on-premise) there are no silver bullets or 100% guarantees. But technology is available that can assess the effectiveness of a given security policy, monitor for change, and analyze change when it occurs. Automating the behavioral analysis of security policies is a great first step. A better strategy is to build repeatable security configuration standards and technically enforceable compliance guidelines that become integrated components of the application deployment process.

What do companies need to watch for in terms of firewall configuration?

Companies should always monitor and assess changes to firewall configurations as they happen or, if possible, proactively assess proposed changes prior to implementation. Practicing good firewall hygiene includes removing technical mistakes (redundant or shadowed rules), removing unused rules as they are identified, and ensuring that overly permissive rules are eradicated. Risk assessment of the access a firewall policy provides is paramount, especially when correlated with vulnerability scan data.
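Two of the hygiene checks named here, overly permissive rules and shadowed rules, can be sketched in a few lines of Python. The rule format below is made up for illustration; commercial policy-analysis products perform far more sophisticated versions of this:

```python
import ipaddress

# Hypothetical rule base, evaluated top-down, first match wins.
rules = [
    {"action": "allow", "src": "0.0.0.0/0",       "port": 443},   # overly permissive
    {"action": "deny",  "src": "203.0.113.0/24",  "port": 443},   # shadowed: never hit
    {"action": "allow", "src": "198.51.100.0/24", "port": 8443},
]

def overly_permissive(rule, threshold=2**16):
    """Flag rules whose source scope covers more addresses than the threshold."""
    return ipaddress.ip_network(rule["src"]).num_addresses > threshold

def shadowed(rules):
    """Return indexes of rules fully covered by an earlier rule on the same port."""
    hits = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            same_port = later["port"] == earlier["port"]
            net_later = ipaddress.ip_network(later["src"])
            net_earlier = ipaddress.ip_network(earlier["src"])
            if same_port and net_later.subnet_of(net_earlier):
                hits.append(i)  # earlier rule always matches first
                break
    return hits

print(overly_permissive(rules[0]))  # True
print(shadowed(rules))              # [1]
```

The shadowed deny rule is exactly the kind of technical mistake that lingers in a policy: it looks like a safeguard but can never fire, because the broad allow rule above it always matches first.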

Are any firewalls more secure than others?

No, you can have the absolute best, most feature-rich product available, but if the technology is not properly managed, companies will fail to realize the return on the security investment.

What are some other cybersecurity areas companies need to watch for?

Real-time network discovery always comes to the top of my list. I like to say it is very difficult, if not impossible, to manage what you can't see, and even harder to secure what you don't know about. Given the dynamic nature of the cloud, risk posture can change very rapidly, and security teams cannot afford reduced visibility into the infrastructure they are charged with protecting.

Tell us more about your company?

FireMon is a security software development company. We offer enterprise security management solutions that provide comprehensive visibility across your entire network. Our flagship product, Security Manager, delivers real-time visibility and control over your complex hybrid network infrastructure, policies and risk. Key security performance indicators are presented in a single pane of glass for the entire security estate.

What will your company look like in the future? New products/services, etc.

FireMon has over 15 years of security platform experience and is trusted by the most prominent enterprise companies across virtually every market sector. At FireMon we are constantly engaged in conversations with our customers in an effort to solve not only their current challenges but also those looming on the horizon. Security must be present and keep parity with the speed of business. It's our goal to give our customers the freedom they need to move at the speed they require.

Let’s face it – this is likely a multi-billion dollar breach. It’s about as bad as it gets. Companies need to wake up to the threats while they still can. We will be updating this post with more responses as they come back from the industry.

