According to this year’s “State of Cyber Resilience” report from Accenture, executives from both Banking & Capital Markets and Insurance are quite confident about their cybersecurity capabilities. For example, across financial services:

  • 82% are confident or extremely confident they can monitor for breaches
  • 82% say they can identify the cause of a breach
  • 83% are confident or extremely confident they can restore normal activity after a breach
  • 78% say they can minimize disruption from a cybersecurity event

But pair those findings with another: Across financial services, 42% of firms say that it takes more than a week for a successful breach to be detected. (Nine percent require more than a month.)

So, all in all, is this confidence or overconfidence?

Small mistakes can mean big consequences

Let’s be clear: There has been significant progress. If we look specifically at banking, for example, last year our survey on cyber resilience found that 81% of breaches took weeks or months to detect. But overconfidence is dangerous precisely because the threat landscape is so broad and mistakes are so easy to make. Something as small as a misconfiguration of your external infrastructure can lead to big problems. And, as I mentioned in my last blog, if just 5% of banking customers have less than honest motives, they might cause 95% of the damage.
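To make that concrete, here is a minimal sketch of the kind of check that catches one common external misconfiguration: an unexpectedly exposed service port. It is an illustration only, using nothing beyond Python’s standard library; the host name and the “expected” port inventory are hypothetical placeholders for your own asset list.

```python
# Minimal sketch: flag unexpectedly open TCP ports on external hosts.
# The host name and EXPECTED_OPEN inventory are hypothetical placeholders.
import socket

EXPECTED_OPEN = {"www.example-bank.com": {443}}   # hypothetical asset list
PORTS_TO_CHECK = [22, 80, 443, 3389, 5432]        # common service ports


def open_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` that accept a TCP connection."""
    found = set()
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.add(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found


for host, expected in EXPECTED_OPEN.items():
    unexpected = open_ports(host, PORTS_TO_CHECK) - expected
    if unexpected:
        print(f"{host}: unexpected open ports {sorted(unexpected)}")
```

Run on a schedule, even a simple check like this turns a quiet misconfiguration into a visible finding before an attacker turns it into a breach.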

No bugs in your software?

So let’s try an experiment. Let’s think about cybersecurity confidence in the context of software development. If we read that most companies are very confident that there are no bugs in their software, we would all laugh, right?

Well, bugs in cybersecurity aren’t all that different. So why would you be confident that there are no issues with your security? It’s very difficult to be completely confident unless you thoroughly test your cybersecurity defenses, applying the same standards to security that you would apply when testing any other big piece of infrastructure. Overconfidence among financial services companies is likely an outgrowth of not doing the testing that would scare them.
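What might that look like in practice? Here is a minimal, hypothetical sketch that treats one slice of security posture, an endpoint’s TLS configuration, as an ordinary pytest-style regression suite. The endpoint name is a placeholder, and the checks use only Python’s standard library.

```python
# A sketch of treating security posture like any other test suite.
# Uses pytest conventions and Python's standard ssl/socket modules.
# The endpoint below is a hypothetical placeholder, not a real system.
import socket
import ssl

HOST = "www.example-bank.com"  # hypothetical external endpoint


def test_certificate_chain_is_valid():
    """The default context verifies the chain; a bad cert raises SSLError."""
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST):
            pass  # reaching this line means verification succeeded


def test_endpoint_negotiates_modern_tls():
    """Anything below TLS 1.2 is treated as a failing build."""
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            assert tls.version() in ("TLSv1.2", "TLSv1.3")
```

The point is not these particular checks; it is that security assertions can fail a build in exactly the way a functional bug does.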

The importance of testing

Regulations are starting to catch up here. Consider TIBER-EU, the European Union’s “Threat Intelligence-based Ethical Red Teaming” framework. TIBER-EU is voluntary; it helps businesses that operate critical infrastructure test the effectiveness of their cybersecurity defenses against attacks that mimic real threat actors. The approach uses red teams to attack live systems and processes so that a business can better understand its detection and remediation capabilities in the case of an actual attack. Financial services companies are expected to be among those adopting the framework.

Red teams are already standard practice among major software makers. Microsoft Corporation, for example, uses an elite hacker team to keep Windows® PCs safe.1 The team’s leader says the company had previously taken a defensive, reactive position, responding to known issues, but he wanted to “go on offense” instead. This sort of approach sounds like a big step, but it is not. It is the equivalent of user acceptance testing for your security apparatus: the “user” is cast as the bad guy, and the test is whether your defenses catch them. If we don’t test those defenses in a live situation, we don’t know whether they work.
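Here is a minimal sketch of that “user acceptance test for security,” with the attacker as the user: generate a benign canary action that should look malicious to your monitoring, then assert that an alert actually fires. The two helper functions and the rule name are hypothetical stand-ins for whatever harness and SIEM you run.

```python
# Sketch: verify that detection fires when attacker-like behavior occurs.
# emit_canary_login_failures and alerts_since are hypothetical stubs;
# wire them to your own test harness and your SIEM's query API.
import time


def emit_canary_login_failures(user, attempts):
    """Hypothetical: replay `attempts` failed logins for a test account."""
    raise NotImplementedError("wire this to your own test harness")


def alerts_since(timestamp):
    """Hypothetical: return alerts your SIEM raised after `timestamp`."""
    raise NotImplementedError("wire this to your SIEM's query API")


def test_brute_force_attempt_is_detected():
    start = time.time()
    emit_canary_login_failures(user="canary-svc", attempts=25)
    time.sleep(60)  # give the pipeline time to correlate events
    fired = [a.get("rule") for a in alerts_since(start)]
    assert "brute-force-login" in fired, "no alert fired for the canary"
```

If a test like this cannot pass against your production monitoring, that is worth knowing before a real attacker tells you.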

Admittedly, large software developers spend significant sums on testing, sums your average bank or insurance company can’t match. Yet some form of realistic cybersecurity testing is essential to merit executives’ high levels of confidence.

Reference:

  1. “The Elite Microsoft Hacker Team That Keeps Windows PCs Safe,” Wired, June 10, 2018. Access at: https://www.wired.com/story/microsoft-windows-red-team/
