Key Points
When our Product Lead, Andy Hornegold, was asked to talk about vulnerability management at DTX Europe, he used the Optus breach in Australia to show how a vulnerability scanner might have prevented such a dramatic and damaging breach.
“I’ve got 15,000 vulnerabilities...”
Said a CSO to me recently. My response? You can’t fix them all – and if you’re trying to, you’re chasing the wrong metrics. In the time it takes to fix those 15,000 vulnerabilities, another 15,000 will take their place. It’s a never-ending story.
But that CSO is not the only security professional losing sleep over the numbers. 14,114 vulnerabilities have been publicly disclosed this year (as of 10th October), 2,544 of them critical and 5,686 rated high. These include some nasty headliners like ProxyNotShell and the lingering ghost of Log4Shell. And CISA’s Known Exploited Vulnerabilities Catalogue currently lists 837 vulnerabilities.
You could try to fix them all. And in an ideal world perhaps you could. But we don’t live in an ideal world, which is why metrics like the Exploit Prediction Scoring System (EPSS) have come about. EPSS is an open, data-driven effort to estimate the probability that a software vulnerability will be exploited in the wild, designed to help security teams prioritise their remediation efforts. But this still masks the main problem – that vulnerability management is too often boiled down to a numbers game.
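To make this concrete, EPSS scores can be pulled from FIRST’s public API and used to rank a backlog before anyone touches a ticket. Here’s a minimal sketch in Python; the endpoint URL matches FIRST’s published EPSS API, but treat the exact response handling as an assumption to verify against the live service.

```python
import json
import urllib.request

EPSS_API = "https://api.first.org/data/v1/epss"  # FIRST's public EPSS API

def fetch_epss(cve_id):
    """Fetch the EPSS score (probability of exploitation) for a single CVE."""
    with urllib.request.urlopen(f"{EPSS_API}?cve={cve_id}") as resp:
        data = json.load(resp)["data"]
    # The API returns scores as strings; an empty list means no score is published yet.
    return float(data[0]["epss"]) if data else None

def prioritise(vulns):
    """Sort (cve_id, epss_score) pairs so the most likely to be exploited come first."""
    return sorted(vulns, key=lambda v: v[1], reverse=True)

if __name__ == "__main__":
    cves = ["CVE-2021-44228", "CVE-2022-41040"]  # Log4Shell, ProxyNotShell
    scored = [(cve, fetch_epss(cve)) for cve in cves]
    for cve, score in prioritise(scored):
        print(f"{cve}: {score:.3f}")
```

Ranking by EPSS alone isn’t the whole answer, as the next section argues, but it beats working through 15,000 findings in CVSS order.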
It’s more than just a numbers game
Vulnerabilities are a data scientist’s dream, but vulnerability management is so much more than a numbers game. We need to move beyond the numbers and see the bigger picture, focusing on the vulnerabilities that pose a direct, immediate risk to the organisation. And not just on the tangible things we can see – we need to know as much about the environment and attack surface as possible.
Because it’s what you don’t know that poses the biggest risk to your business. In almost every Red Team engagement I’ve worked on, we managed to gain access to an environment, network, asset or system that the customer didn’t even know existed...
What you don’t know is your biggest risk
Vulnerability management can be boiled down to four key phases:
- Detect: all the vulnerabilities
- Prioritise: the vulnerabilities
- Control: fix, mitigate or accept the vulnerabilities
- Report: vulnerabilities to stakeholders
Here we’ll focus on the detect phase. Detection is so tightly aligned with asset discovery that vulnerability management is fundamentally flawed if you don’t have visibility across your whole environment, or don’t detect vulnerabilities quickly.
Without good asset management you won’t know which assets are under your control, you can’t scan them all for vulnerabilities, and you can’t know which vulnerabilities pose a potential risk to your organisation.
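As a sketch of what feeding asset discovery into scanning looks like, the snippet below pulls the internet-facing instances from an AWS account. It assumes AWS and the boto3 SDK purely for illustration; other clouds have equivalent inventory APIs.

```python
# Sketch: enumerate internet-facing EC2 instances to feed a vulnerability scanner.
# Assumes an AWS account; boto3 is only imported where credentials are needed.

def public_ips(response):
    """Pull the public IPs out of an EC2 describe_instances response."""
    ips = []
    for reservation in response.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            ip = instance.get("PublicIpAddress")
            if ip:  # instances with no public IP aren't internet-facing
                ips.append(ip)
    return ips

if __name__ == "__main__":
    import boto3  # requires AWS credentials to be configured
    ec2 = boto3.client("ec2")
    targets = public_ips(ec2.describe_instances())
    print(f"{len(targets)} internet-facing instances to scan: {targets}")
```

Run on a schedule (or triggered by infrastructure changes), a list like this is what closes the gap between “an asset exists” and “an asset is being scanned”.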
Cloud servers are now the #1 method of entry
So where does the danger lie? According to the Hiscox 2022 Cyber Readiness Report, cloud servers are now the number one method of entry, and small-to-medium sized businesses the fastest growing sector being targeted.
With cloud adoption continuing to increase, almost every company now relies on cloud services to some extent – and responsibility for cloud assets is increasingly devolved to developers and end users. With greater flexibility and permission to expose new services to the internet without visibility from security teams or those managing risk, there will increasingly be assets the organisation isn’t aware of. It’s no wonder cloud assets are the #1 attack vector. Let’s take a real-life example.
Optus breached
Last month a post hit Breach Forums, a site used to sell data or access. The poster claimed to have breached Optus – an Australian telco giant – and accessed the customer information of 11.2 million people, including:
- Full name
- Date of birth
- Mobile number
- Email address
- Physical address
- Identification documents (passports, driving licences)
It was later confirmed that the data also included Medicare numbers (Medicare being the Australian equivalent of the NHS). 11.2 million users – that’s over 40% of the Australian population!
Of course, a breach of this size made the mainstream news – and with good reason. When you hear about this kind of breach you think: HOW?! How did an attacker get the data of 11.2 million users from the second largest telecommunications provider in Australia? They must be pretty sophisticated attackers, throwing zero-day exploits around, right?
The Optus response
And that’s certainly the message put out by Optus. CEO Kelly Bayer Rosmarin said: “Optus has very strong cyber defences. Cyber security has a lot of focus and investment here. So, this should serve as a warning call to all organisations. There are sophisticated criminals out there, and we need all organisations to be on alert.”
But investigative journalist Jeremy Kirk wasn’t convinced, and contacted the person selling the data to validate the hack. When asked how they got access, the attacker pointed to an API endpoint and said it had an “access control bug”. Interesting – but what did the attacker mean by “access control bug”? When Kirk asked, the response came back: “No authenticate needed”.
Kirk then asked how the attacker pulled so many records from a single API endpoint; the record identifiers were reportedly enumerable, so scraping them was just a loop. That doesn’t sound particularly sophisticated... indeed, you could do it yourself with off-the-shelf tooling.
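To see why “no auth plus sequential IDs” means game over, here’s an illustration. The endpoint and parameter names are entirely hypothetical – this is not Optus’s actual API – but the pattern is the same.

```python
# Illustration only: why an unauthenticated API with sequential IDs leaks everything.
# The base URL and path here are hypothetical, invented for this sketch.

def enumeration_urls(base_url, start_id, count):
    """Build the sequence of request URLs an attacker would iterate through."""
    return [f"{base_url}/customers/{i}" for i in range(start_id, start_id + count)]

# With no authentication, each URL returns one customer record, so scraping
# millions of records is just a loop over this list – trivially scripted with
# curl or any off-the-shelf tool.
urls = enumeration_urls("https://api.example.com", 1_000_000, 3)
```

The fix is equally unglamorous: require authentication, and use non-guessable identifiers so one leaked record doesn’t imply all of them.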
Not so sophisticated after all?
While no one is blaming Optus – they’re not the bad guys here – Clare O’Neil, Australia’s Cyber Security Minister, didn’t think it was a sophisticated attack either. Speaking to ABC News, she said: “What is of concern is quite a basic hack was undertaken. We should not have a telecommunications provider in this country which has effectively left the window open for data of this nature to be stolen.” It suddenly became very political.
The API wasn’t hosted on some forgotten corner of a legacy environment either: according to a reliable source, Kirk discovered it was hosted in Google Cloud/Apigee. Perhaps Optus had been focusing on the wrong things? That vulnerability could have been in place for weeks before it was discovered.
When the clock’s ticking...
In situations like this, when time is running out, you could carry out dark web monitoring, which would alert you once a compromise has happened and buy you a few hours. You could monitor the egress of your APIs and wait for a spike in the amount of data leaving your endpoints.
OR you could carry out live detection and vulnerability scanning of those endpoints the second they hit the internet – and buy yourself a lot more time.
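The egress-monitoring option above can be sketched very simply: compare an endpoint’s latest outbound data volume against its recent baseline and flag a sudden multiple. This is a toy detector, not production anomaly detection – the window and threshold are arbitrary assumptions.

```python
# Toy sketch: flag an API endpoint whose outbound data volume suddenly spikes
# against its trailing baseline. Window and threshold values are illustrative.

def egress_spike(byte_counts, window=24, threshold=5.0):
    """Return True if the latest reading exceeds `threshold` x the trailing mean."""
    if len(byte_counts) <= window:
        return False  # not enough history to establish a baseline
    baseline = sum(byte_counts[-window - 1:-1]) / window  # mean of prior readings
    return byte_counts[-1] > threshold * baseline
```

Note what this buys you: an alert *while* the data is leaving. Detecting the exposed endpoint before anyone finds it is strictly better.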
Because we no longer have the luxury of time
Attackers are compromising systems faster and faster: at Intruder we have seen one threat group automatically compromising systems the second a certificate is registered for a domain.
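Attackers pull this off by watching Certificate Transparency logs, and defenders can watch the same feed. A minimal sketch, assuming crt.sh’s public JSON interface (the `name_value` field carrying newline-separated certificate names is an assumption to verify against its actual output):

```python
# Sketch: watch Certificate Transparency logs the same way attackers do, so you
# learn about a new host the moment its certificate is issued.
import json
import urllib.request

def hostnames(ct_entries):
    """Extract the unique hostnames from a list of crt.sh JSON entries."""
    names = set()
    for entry in ct_entries:
        # name_value is assumed to hold one or more names separated by newlines
        names.update(entry.get("name_value", "").splitlines())
    return sorted(names)

if __name__ == "__main__":
    domain = "example.com"  # hypothetical domain to monitor
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url) as resp:
        print(hostnames(json.load(resp)))
```

Diff the output against yesterday’s list and any new name is a candidate for immediate scanning, before the threat group’s automation gets there.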
The move to continuous deployment means that irregular pen testing or scanning isn’t enough to catch the latest vulnerabilities. Technology continues to speed up. We’re working in a world that wants to increase the rate of innovation, facilitate lean processes, and reach success sooner.
And we’re doing that by decreasing the overhead to deploy with CI/CD, asking developers to manage their own infrastructure, and releasing prototypes and MVPs more rapidly.
To maintain security, we need detection and visibility to increase at the same rate. Everything is evolving so fast that you simply can't wait for penetration tests to complete. You need to see what’s hitting the internet, and you need to make sure it’s secure.
It’s become so important that CISA has introduced a binding operational directive for federal agencies which mandates that, by the end of Q2 2023, they have asset discovery in place and build a vulnerability detection process on top of it.
Simplify the chaos and streamline management
Vulnerability management doesn’t need to be difficult. Ask yourself whether your current solutions:
- Automatically detect new systems that are added to your cloud account
- Identify vulnerabilities in your systems the second they’re changed or brought online
- Identify vulnerabilities as soon as a check is available
- Prioritise detected vulnerabilities so that you know what to fix and when
- Allow you to add team members so that you don’t have to fix everything yourself
- Integrate with other solutions so that you can track vulnerabilities
You should also be tracking metrics including:
- Time from a system coming online to being scanned for a vulnerability
- Time to fix vulnerabilities, broken down by severity
- Time between a vulnerability or a check being released and a scan being completed of all your known assets
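Once you log when each vulnerability was detected and fixed, the time-to-fix metric above is a small aggregation. A sketch, assuming simple (severity, detected, fixed) records with ISO dates:

```python
# Sketch: median time-to-fix per severity, from (severity, detected, fixed) records.
from collections import defaultdict
from datetime import datetime
from statistics import median

def time_to_fix_by_severity(records):
    """Map each severity to the median number of days taken to fix."""
    durations = defaultdict(list)
    for severity, detected, fixed in records:
        delta = datetime.fromisoformat(fixed) - datetime.fromisoformat(detected)
        durations[severity].append(delta.days)
    return {severity: median(days) for severity, days in durations.items()}
```

Trend these numbers over time: a critical that takes 30 days to fix tells you more about your real exposure than the raw count of open findings ever will.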
We make vulnerability management effortless
We’re actively helping customers deal with these kinds of problems at Intruder. Our vulnerability scanner helps you react faster to emerging threats by proactively scanning as soon as checks are available. We reduce your attack surface by identifying services that are internet facing and shouldn’t be – like remote desktop or direct database access. You can then focus on what’s important by filtering out the non-actionable noise from your scan results.
Sign up for a free trial to get started.