The cloud can be more secure – but it’s not automatic
The recent Capital One breach is an interesting case study for a variety of reasons: it happened recently; there is an unusually large amount of publicly available detail, thanks to court filings and an attacker who was very intentionally sharing information; it involves AWS cloud services and isn’t just another “admin made an S3 bucket public” story; and of course it impacts a large number of people. In this blog post we’re going to share some quick thoughts on lessons learned from this attack.
What we know
The attack was performed by a single actor, going under the alias ‘erratic’, who had been sharing a lot of detail about her methods for attacking AWS environments via a Slack channel and on Twitter. Krebs has provided a good write-up after connecting to her Slack channel, indicating there are likely other organizations that were compromised along the way.
According to the FBI filings, the attack, discovery and notification took the following steps:
1. A misconfigured firewall allowed access to a protected host, which was then compromised
2. IAM credentials for a “WAF” user on that host were compromised, which allowed enumeration of and access to all the S3 buckets in the environment
3. The S3 buckets (which were not public) were then accessed, and customer data (which was encrypted) was extracted and exfiltrated
4. The attacker posted screenshots of the dump to her Slack channel
5. Another user on the Slack channel saw the dump and followed the Capital One disclosure policy to inform them of the breach
6. Capital One remediated the path of attack, informed the FBI and the attacker was arrested
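The credential theft in step 2 is widely reported to have gone through the EC2 instance metadata service: a host that can be induced to make requests on an attacker’s behalf will return the temporary credentials of its attached IAM role from `http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>`. A minimal sketch of what that response looks like and why it is immediately usable with any AWS SDK (the key values below are fabricated for illustration):

```python
import json

# Example of the JSON the EC2 metadata service returns for an attached
# IAM role -- all values here are fabricated for illustration.
imds_response = """{
  "Code": "Success",
  "AccessKeyId": "ASIAEXAMPLEKEYID",
  "SecretAccessKey": "examplesecretkey",
  "Token": "examplesessiontoken",
  "Expiration": "2019-08-01T00:00:00Z"
}"""

def parse_imds_credentials(raw: str) -> dict:
    """Extract the three fields any AWS SDK (or an attacker) needs."""
    doc = json.loads(raw)
    return {
        "aws_access_key_id": doc["AccessKeyId"],
        "aws_secret_access_key": doc["SecretAccessKey"],
        "aws_session_token": doc["Token"],
    }

creds = parse_imds_credentials(imds_response)
print(creds["aws_access_key_id"])  # ASIAEXAMPLEKEYID
```

Once extracted, these three values can be dropped straight into environment variables or an SDK client, which is why a single reachable misconfigured host can hand over the full permissions of its role.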
This all happened fairly quickly, and there are a few key lessons here:
1. Although the cloud CAN be more secure, a lot still depends on the operating organization properly configuring its environment and making it resistant to attackers who know how to navigate and move laterally through these systems. The attacker had solid AWS skills, likely acquired during her time working at AWS.
2. IAM in particular, along with credential storage, is tricky and complex for administrators to understand and configure properly.
3. The majority of attacks are still discovered by third parties. A clear disclosure policy on your website makes it easier for those parties to reach you, reducing the time your data is in public hands before you learn of it and take action.
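On the IAM point: per the attack steps above, the compromised “WAF” user could enumerate and read every S3 bucket in the environment. A hedged sketch of what a least-privilege alternative might look like, as an IAM policy fragment (the bucket name and statement ID are hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WafLogsOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-waf-logs/*"
    }
  ]
}
```

Compare that with a blanket `s3:*` action on a `Resource` of `"*"` – the shape of policy that turns one compromised host into read access to every bucket in the account.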
Since our focus at Kobalt is on cyber security monitoring, here are a few observations that stand out for our team:
1. Enumeration activity across an AWS environment is a strong indicator of potential lateral movement, and worth monitoring for
2. Monitoring for large-volume data exfiltration (anomalies in traffic patterns) likely would have caught this attack as well
3. Policy reviews of firewalls and IAM credentials, including monitoring for items like credential rotation, will reduce risk
4. Host-based monitoring – which requires more effort, and in some cases additional software, than simple CloudTrail logging – may have helped detect the compromise and subsequent activity
5. Access to that host likely came from a “rare IP” which could have triggered anomaly detection within the environment
If you’d like to talk to us about a firewall or AWS IAM audit, host-based monitoring or other enhancements to your current monitoring capabilities, please get in touch with me at email@example.com.