Making Sense of WannaCry


Whenever a calamity strikes, it’s only natural for people to try to rationalise it and identify the problem.

That is now happening with the WannaCry ransomware outbreak, which hit the UK’s NHS and organisations in over 100 countries. People are discussing what should have been done to prevent it.

On one hand, there’s an ongoing debate about responsible disclosure practices: should the NSA have “sat on” the vulnerabilities for so long? By the time the Shadow Brokers released the details, enterprises had only a small window in which to update their systems.

On the other hand, there are several so-called “simple” steps the NHS and similar organisations could have taken to protect themselves, including:

  1. Upgrading systems
  2. Patching systems
  3. Maintaining support contracts for out-of-date operating systems
  4. Architecting infrastructure to be more secure
  5. Acquiring and implementing additional security tools

The reality is that while any of these defensive measures could have prevented or minimised the attack, none of these are easy for many enterprises to implement.

Also, none of these are new discussions or challenges. Most security professionals have seen the same issues for many years, albeit not on such a wide scale.

Sometimes the infrastructure or endpoint devices aren’t all controlled by IT. And patching or updating a system can break dependent applications: for example, the operating system can’t be updated until another vendor updates their software, which in turn can’t be updated until an in-house custom application is updated.

There are many other technical nuances, but it boils down to risk management. Often, if systems are working as desired with no issues, they will be kept running as they are, especially where the cost of upgrading is a taxpayer expense.

That’s not to say security measures shouldn’t be implemented. In an ideal world there would be no legacy systems, patching would be regular, and infrastructure would be securely architected. Unfortunately, that is the exception for most companies, not the rule. So while it’s easy to say the government should have put more money into systems, it’s really a case of senior decision-makers and purse-string holders weighing risks – understanding the exposure they have, the pros and cons, and the potential impact.

Only then can decisions be made that result in meaningful change, including addressing the root causes of the WannaCry outbreak and other threats. Copycats are inevitable, since it is trivial to reuse the transport mechanism (the SMB worm) with a new payload (a different ransomware variant).

But more could be done. Australia is notable for its success in enforcing higher-than-average security across government: departments are mandated to implement four technical controls. An attack like WannaCry would have been limited by the first two of these – application whitelisting and regular patching. Enforcing such controls on legacy systems, however, requires a significant investment in personnel.
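To make the first of those controls concrete, here is a minimal sketch of the core idea behind application whitelisting: execution is permitted only for binaries whose cryptographic hash appears on a pre-approved list. This is an illustration only, not how any particular product works; the `ALLOWED_HASHES` set and the function names are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list: SHA-256 digests of approved executables.
# (The digest below is the well-known hash of an empty file, used
# here purely as a placeholder entry.)
ALLOWED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_allowed(path: Path) -> bool:
    """Permit execution only if the binary's hash is on the allow-list."""
    return sha256_of(path) in ALLOWED_HASHES
```

The point of the hash-based approach is that an unknown binary – such as a freshly dropped ransomware payload – fails the check by default, regardless of its filename or location, which is why whitelisting blunts novel malware in a way signature-based blocking does not.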

That’s not to say stricter legislation is the answer. But blaming companies for not patching or for running legacy systems, or asking intelligence agencies to cease cyber activities, is not going to fix the underlying issues.
