Cybersecurity is an incurable disease: it’s time we thought of it that way

A couple of weeks ago I met with the email management and security vendor Retarus. While I was unfamiliar with the company (and, at first glance, it had a reasonably standard portfolio), my interest was piqued because it is German, and Germany is very particular about such questions as personal data, privacy and so on.

As our conversation turned towards what the organisation calls ‘patient zero detection’ — referring to how the vendor’s software reacts to security attacks that have already taken place — I found myself on fundamentally more interesting and potentially valuable ground. It’s difficult to explain why I think this is so important, so please bear with me and I shall try.

IT security has had a chequered history. Back in the day when I used to manage UNIX systems, workstations and servers tended to be delivered with all technological ‘doors’ left open (front and back), so that any person with a reasonable grasp of the operating system could gain access to whatever they wanted.

Some systems were better than others — indeed, good old mainframes had an almost-militaristic attitude to their own protection, the principles of which were adopted over time by the open systems movement and then the PC wave of computing (cf. Microsoft’s late-to-party-but-still-laudable Trusted Computing initiative, kicked off in 1999).

(As an aside, pretty much any time I have pushed back against such efforts being promoted by IT vendors, my discomfort has been driven by the presentation of well-established, accepted and required truth as something new.)

As computing moved into the mainstream, security best practices came into alignment with the broader world of contextual risk, itself well known in military and safety-critical circles. This world had taken a different path, moving from risk (and litigation) avoidance to the (more affordable) risk management philosophy we still follow today.

Having bottomed out the best practices required for managing and mitigating the probability and impact of risk, attention has turned to resolving any issues when they arise. The parallel fields of business continuity and disaster recovery are testament to these efforts, their principles later applied to IT security not least in terms of how to deal with zero-day exploits.

Here’s the ‘however’: while this philosophical path from risk mitigation to breach resolution remains a constant, it is based on assumptions that are difficult to maintain where IT is involved. The first is that decisions, once made, can be stuck with; the second is that by protecting the tangible assets (be they physical, electronic or software-based), the stuff those assets deal with is also protected.

In the case of IT, the ‘stuff’ is called data. When the Jericho Forum got together to discuss the changing nature of IT security, they did so because they saw the protect-the-border approach to security as being a recipe for disaster. Their focus moved to identity management as a result, proposing models now espoused by Google’s zero-trust, end-to-end encrypted BeyondCorp initiative.
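The shift the Jericho Forum argued for can be caricatured in a few lines of code. Under a perimeter model, access hinges on where a request comes from; under an identity-centric, zero-trust model, it hinges on who (and what device) is asking, regardless of network location. A minimal illustrative sketch, with entirely hypothetical names (this is not any real BeyondCorp API):

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool   # strong user identity proven (e.g. SSO plus MFA)
    device_trusted: bool       # device posture checked and found healthy
    source_ip: str             # network location of the request

def perimeter_allows(req: Request) -> bool:
    # Traditional model: trust anything inside the corporate network.
    return req.source_ip.startswith("10.")

def zero_trust_allows(req: Request) -> bool:
    # Identity-centric model: network location is irrelevant; every
    # request must prove user identity and device health on its own.
    return req.user_authenticated and req.device_trusted

# A compromised machine inside the perimeter sails through the old
# check but fails the new one.
inside_attacker = Request(user_authenticated=False, device_trusted=False,
                          source_ip="10.1.2.3")
print(perimeter_allows(inside_attacker))   # True: the recipe for disaster
print(zero_trust_allows(inside_attacker))  # False
```

The caricature also runs the other way: a fully authenticated user on a healthy laptop in a coffee shop is refused by the perimeter check but passes the zero-trust one, which is precisely the working pattern BeyondCorp set out to enable.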

And so, in this day and age, traditional asset-based security runs uneasily alongside the school of thought that says data, not devices, needs to be protected. I was faced with this dichotomy myself when I released my (more asset-centric) book on Security Architecture to pointed criticism from luminaries of the latter camp.

The truth, however, is that neither perspective is completely right — and indeed, both start from the wrong point. Specifically, neither considers what to do when things don’t work out, when (as all too often happens) a breach or data leak takes place. The prevailing view among security professionals is “well, you didn’t listen”, which, however accurate, is not the most helpful response in a time of crisis.

The mindset of all parties, that we are trying to prevent things from going wrong to the best of our abilities, is fundamentally flawed. The core notion (which goes back to the origins of both IT security and broader risk management) is that if we did everything right, we would pretty much ensure bad things didn’t happen.

This notion is false. The point is not simply that bad things are going to happen anyway, in the same way that a vehicle crash might happen even with all the right protections in place. That may be true, but if we do have a car accident we are typically distraught, in the knowledge that we were, figuratively and statistically, one of the unlucky ones.

A far better framing of the nature of IT security is that of disease. Of course we can look to avoid illness, but when we succumb, we recognise it as part of the tapestry of life. Prevention will never fully work; recovery is a necessary and well-understood set of steps.

Indeed, so it is with human weakness, in that sometimes we succumb to our less positive traits. Far from being just another analogy, this is a fundamental input to our understanding of how digital technology is as likely to be misused as used for positive reasons.

The overall consequence is that we should accept such weakness as the norm, not treat any incidents as exceptions. We should also think about risk management and mitigation in the same way as we think of hygiene when dealing with germs — as necessary as it is imperfect and, sometimes, counterproductive.

Thinking more broadly, even as I write this we are getting far better at using data to understand the spread of disease. It makes absolute sense that we should be investing in tools that look at how computer attacks spread virally, and how they can potentially be contained and their wider impact minimised.
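The epidemiological framing can be made quite literal. As a purely illustrative sketch (not Retarus’ method, and with made-up parameters), a crude SIR-style model shows why early ‘patient zero’ detection matters: raising the rate at which infected hosts are found and remediated dramatically flattens the outbreak’s peak:

```python
def simulate_outbreak(hosts=1000, infected=1, infect_rate=0.3,
                      remediate_rate=0.1, steps=100):
    """Crude discrete SIR-style model of malware spreading across hosts.

    infect_rate:    expected new infections per infected host per step
    remediate_rate: fraction of infected hosts cleaned up per step
    Returns the peak number of simultaneously infected hosts.
    """
    s, i, r = hosts - infected, infected, 0   # susceptible, infected, remediated
    peak = i
    for _ in range(steps):
        new_infections = infect_rate * i * s / hosts
        remediated = remediate_rate * i
        s -= new_infections
        i += new_infections - remediated
        r += remediated
        peak = max(peak, i)
    return peak

# Slow detection vs fast 'patient zero' containment, all else equal.
slow = simulate_outbreak(remediate_rate=0.05)
fast = simulate_outbreak(remediate_rate=0.25)
print(round(slow), round(fast))
```

Running the comparison shows the fast-remediation scenario peaking at a small fraction of the slow one: the same intuition that drives contact tracing in public health.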

It’s not for me to say whether Retarus’ product is any better or worse than any other (as I haven’t tested it), but the company’s philosophy was decidedly refreshing. Thinking in terms of ‘patient zero’ outbreak detection and mitigation is a good start for any security vendor and, more importantly, it should be part of the mindset adopted by any organisation wanting to define its attitude to IT security in this increasingly digital world.


4 Comments

Andy

Isn’t this why the system security managers need to be AI? That, and the premise that the heart and soul of the web is anonymity is sheer illusory thinking. It enables the criminals and the mal-intent. The whole architecture of the web needs to be rethought; to leave it as it is is to leave it as a chronically incurable disease. I’ve always envisioned the web universe as a bio-community with all of the interactions that come within a biome, its pathogens and their associated “hardware”. Personally I tire of the discussion circling around such small things. This is a refreshing look at it. Thank you Jon.

Reply
AirShou

I totally agree, Jon; it’s safe to say that cybersecurity is now an incurable disease.

Reply
Scott H

I think we can safely say that it is an incurable disease. Command and control over the perimeter is problematic but the user centric identity management approach isn’t a holistic solution either. “I told you so” only applies because security will eventually be right about something but that doesn’t mean that they are right about everything. Patient zero sounds like another iteration of root cause analysis.

That approach makes sense and it could improve security in select areas but from my experience, post mortem will normally show that it was a process failure. In other words, it was easier to establish a process that was completely devoid of common sense than it was to do it the right way. This is where, “I told you so” never comes up because no one, outside of those using it, knew about it.

However, there are times when this occurs because of command and control policies that restrict their ability to function. In other cases, a poor design decision may limit the efficacy of a system. Sometimes, it is both. Once the project team is done, they move on and it takes a miracle to get approval for changes thereafter. It is like reversing the flow of a river.

Not that long ago, I ran into an issue like this at a client. A workstation imaging solution was designed without domain join capabilities for select offices. Desktop personnel did not have the permissions required to join machines to the domain because of policy restrictions. When the issue was raised, the system management team distributed a script that they could run to do this. The script was distributed to techs through email.

For some reason, ops personnel rarely encrypt or obfuscate credentials in their scripts. Knowing this, I saw a red flag. I asked the client lead to open the script and look for domain credentials in clear text. Sure enough, he found the domain account name and password. I asked him if the account had elevated permissions. He confirmed this via the group membership.
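That manual check is easy to automate. As a purely illustrative sketch (the patterns and the sample script content are hypothetical, and a real secrets scanner covers far more cases, including entropy-based detection):

```python
import re

# A few illustrative regexes for cleartext credentials in ops scripts.
CREDENTIAL_PATTERNS = [
    re.compile(r'(?i)\bpassword\s*[:=]\s*["\']?(\S+)'),
    re.compile(r'(?i)\b(?:user(?:name)?|account)\s*[:=]\s*["\']?([\w\\@.-]+)'),
    re.compile(r'(?i)net\s+use\s+\S+\s+(\S+)\s+/user:(\S+)'),
]

def find_cleartext_credentials(script_text: str):
    """Return (line_number, line) pairs that look like embedded credentials."""
    hits = []
    for n, line in enumerate(script_text.splitlines(), start=1):
        if any(p.search(line) for p in CREDENTIAL_PATTERNS):
            hits.append((n, line.strip()))
    return hits

# A hypothetical domain-join script of the kind described above:
script = """\
$user = "CORP\\svc-domainjoin"
$password = "Summer2015!"
Add-Computer -DomainName corp.example -Credential $cred
"""
for n, line in find_cleartext_credentials(script):
    print(f"line {n}: {line}")
```

Run against everything distributed by email or dropped on a file share, even this blunt an instrument would have flagged the script before a tech ever opened it.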

So, in this case, patient zero could have been the service account. It was a carrier that could have compromised the forest, domain and any resources that the account could authenticate to. It could have also been used to obtain even more elevated permissions to a devastating effect. The vulnerability was due to cascading design and policy decisions. Inadvertently, they created a petri dish with a dormant strain of Ebola.

Security used command and control to restrict the creation of new directory objects. This aligns with least privilege but the restrictions assumed that a managed service account would perform this function. Unfortunately, they were right. They just didn’t expect it to show up in clear text in a script. Design decisions further complicated matters because of forest and domain trusts. Business unit requirements dictated the design with some of it being the result of poor integration for new acquisitions. The provisioning system wasn’t designed to scale and lastly, there were inadequate feedback mechanisms in place to remedy flaws.

I’ve worked as a consultant with many IT shops and organizations over the years. It is rarely a technological problem. More often than not, it is people and process or the lack thereof. Patient zero is more like a philosophical discussion but the 80/20 rule suggests that the problem is much more pragmatic and prevalent than anyone would like to admit. Unfortunately, I see petri dishes everywhere I go.

I rarely see a high level of engagement between the business, ops, dev and security. They function in isolation despite themselves. Security is relegated to paper trails and compliance. In fact, many of them will use compliance reporting to mitigate liability even if they know, or suspect, that the reports are inaccurate. The difference between compliance and non-compliance is a report parameter filter with a boatload of assumptions about environment health and agent penetration.

I would like to say that philosophical discussions around security are meaningful but from my experience, we are nowhere near the point where it will make a difference. If an organization even has a security team, they are normally isolated and disconnected. They hide behind compliance reports out of willful ignorance or because they literally don’t know any better. It is like a Greek tragedy that runs in a continuous loop. The low hanging fruit is everywhere. It isn’t a nuanced problem that needs an elegant solution. It is fundamentally related to culture, people and process. Common sense and communication would be a good start.

Reply