Supply chain attacks demand a third-party risk rethink


By Tomislav Pericin, Chief Software Architect at ReversingLabs

Looked at from one angle, the recent attack on JumpCloud, a cloud-based identity and access management provider, was unsurprising. The incident, which JumpCloud disclosed in early July, involved a North Korean state-sponsored actor known as Lazarus Group hacking into accounts associated with JumpCloud customers in the cryptocurrency business. That’s a long-established pattern for Lazarus Group, and for North Korean hacking groups generally. CISA warned in 2022 about North Korean forays into cryptocurrency and blockchain companies and infrastructure. The JumpCloud hack was also just the latest sophisticated supply chain attack to target the customers of cloud infrastructure providers – with potentially devastating consequences for private sector firms, government agencies and more.

As JumpCloud and its customers pick up the pieces from the compromise, however, it is worth asking why attacks on third-party providers such as JumpCloud, CircleCI and 3CX manage to trip up even sophisticated firms and what it would take to put supply chain hackers like Lazarus Group on the back foot. In this blog post, I’ll review what we know about the recent JumpCloud incident and talk about the bigger implications of the attack on firms as they assess the risk posed by supply chain compromises of third-party software providers they rely on.

JumpCloud: part of a pattern

The attack on JumpCloud, which dates to June 2023, forced the company to rotate API keys for many of its estimated 180,000 customers to prevent further attacker access to customer accounts and data. That recalled the aftermath of the January 2023 attack on CircleCI, a popular application development platform, which resulted in that company calling on its customers to rotate API tokens and other environment variables. The attack also echoed other recent supply chain incidents: the compromise of Voice over IP provider 3CX, the breaches of SolarWinds and Codecov, and more. It’s all part of a trend of attacks on software suppliers. In fact, the European Union Agency for Cybersecurity (ENISA) has predicted that supply chain compromises targeting software dependencies will be the biggest emerging threat by 2030.

Tool talk: supply chain hacks escape notice

The increasing frequency of successful attacks underscores the challenges that even sophisticated firms face in managing cyber risk from their software supply chains. Part of the problem is a gap in the tooling and processes needed to manage software supply chain threats. Established application security technologies like static and dynamic application security testing (SAST/DAST) and software composition analysis (SCA) are well suited to identifying flaws in raw application code or vulnerable software dependencies. But they fall short when presented with challenges like software tampering or the inadvertent use of malicious open source and proprietary software modules during application development. SAST, for example, requires access to raw, uncompiled application code for analysis. That makes it of little use to downstream consumers, who are unlikely to be given access to that code. Similarly, SCA products are useful for spotting out-of-date or vulnerable open source libraries, but they have little visibility into the vast population of proprietary, closed-source third-party libraries and software components.

Software makers get it. We sponsored a survey of more than 300 IT and security professionals working for development organizations, and it found that 74 percent considered tools such as SAST, DAST and SCA inadequate to fully protect their organizations from software supply chain threats. In the same survey, more than half of respondents (55 percent) cited secrets leaked through source code as a serious business risk; 52 percent labeled malicious code a serious risk, and 46 percent said the same of “suspicious code.”

I’m from the government, and I’m here to help!

Given the growing awareness of the risks posed by vulnerable software supply chains, what is needed now is a way to break down the barriers within organizations to implementing effective software supply chain risk management policies, tools and practices.

Some of that is already happening. Platform providers like Google and Amazon, as well as Microsoft (owner of GitHub and npm), are implementing features that raise the bar for would-be supply chain hackers, while streamlining basic security functions like monitoring code for vulnerabilities or leaked development secrets.
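To make that concrete, here is a minimal, illustrative Python sketch of the kind of secret scanning those platforms automate. The handful of patterns and file types below are simplifications of my own; production scanners rely on much larger, provider-maintained rule sets plus validity checks against the issuing service.

```python
import re
import sys
from pathlib import Path

# Illustrative rules only: a few well-known credential formats.
# Real secret scanners maintain hundreds of provider-specific patterns.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
}

def scan_file(path: Path):
    """Return (line_number, rule_name) hits for one source file."""
    hits = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return hits
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

if __name__ == "__main__":
    repo_root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    interesting = {".py", ".js", ".ts", ".yaml", ".yml", ".json"}
    for file in repo_root.rglob("*"):
        if file.is_file() and (file.suffix in interesting or file.name == ".env"):
            for lineno, rule in scan_file(file):
                print(f"{file}:{lineno}: possible {rule}")
```

Even a crude check like this, run as a pre-commit hook or CI step, catches the most common way development secrets leak: being pasted into source and configuration files.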

Government and industry regulators are also taking up the mantle of software supply chain security. The Biden Administration, for example, issued Executive Order 14028 in 2021, calling on companies that sell software and services to the federal government to attest to the security of their software and the components it contains. The Administration recently issued a memo (PDF) to the heads of Executive Branch agencies on how to comply with those attestation requirements, while CISA and the NSA have issued guidance on securing CI/CD pipelines.

In the meantime, the federal PATCH Act, passed in late 2022, requires makers of medical devices to present to the Food and Drug Administration a wide range of security measures prior to getting FDA approval. Those include “plans to monitor, identify, and address…cybersecurity vulnerabilities and exploits;” “secure device design, development and maintenance processes;” as well as a software bill of materials (SBOM) documenting commercial, open source and off-the-shelf software components used in the device.

Regulations like these shift the ground under software development organizations. Over time, however, they promise to raise the level of application security, while giving customers (or one customer, anyway: Uncle Sam) assurances and actionable information to address threats spilling out of extensive software development supply chains.

Wanted: a final exam for developed code

In the meantime, both development organizations and their customers have to navigate increasingly treacherous terrain, with the risk of damaging supply chain compromises growing. To do that, application security teams need to increase their scrutiny of the entire development pipeline. That requires organizations to stay on top of traditional risks such as insecure developer and administrator accounts, not to mention exploitable software vulnerabilities and risky open source libraries.

Beyond those basic measures, organizations need to embrace new processes and methods. The deployment of software bills of materials (SBOMs), for example, requires more than just generating an ingredients list for compiled applications. Development organizations and downstream customers also need a way to monitor and act on the information contained in SBOMs to limit exposure to newly discovered exploits (Log4Shell, for example).
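As a rough illustration of what acting on an SBOM can look like, the following Python sketch reads a CycloneDX-style SBOM (JSON) and flags components that match a known advisory. The hard-coded ADVISORIES table and its affected-version list are stand-ins of my own; in practice this lookup would be driven by a live vulnerability feed such as OSV or the NVD.

```python
import json
import sys

# Stand-in advisory data keyed by (group, name); a real pipeline would
# query a vulnerability feed rather than hard-code entries like this.
ADVISORIES = {
    ("org.apache.logging.log4j", "log4j-core"): {
        "id": "CVE-2021-44228 (Log4Shell)",
        "affected_versions": {"2.14.0", "2.14.1", "2.15.0"},  # illustrative subset
    },
}

def check_sbom(path: str):
    """Flag SBOM components that match a known advisory."""
    with open(path) as fh:
        sbom = json.load(fh)  # assumes a CycloneDX JSON document
    findings = []
    for component in sbom.get("components", []):
        key = (component.get("group", ""), component.get("name", ""))
        advisory = ADVISORIES.get(key)
        if advisory and component.get("version") in advisory["affected_versions"]:
            findings.append(
                f"{component['name']} {component['version']}: {advisory['id']}"
            )
    return findings

if __name__ == "__main__":
    for finding in check_sbom(sys.argv[1]):
        print("VULNERABLE:", finding)
```

The point is less the specific code than the workflow: an SBOM only reduces exposure to the next Log4Shell if someone (or something) continuously re-checks it as new advisories land.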

And both application security teams and customers need tools that can assess the security of developed code both before it is shipped and after it has been received to spot evidence of more sophisticated attacks – like SUNBURST style code tampering – that may slip past traditional application security technologies.

A great approach is to present developed and compiled code with something like a “final exam” that must be passed before binaries are released to customers. It presumes that development teams follow all of the best practices, but adds an integrity check of the compiled software package (post-compilation, pre-deployment) to look for characteristics or behaviors that are known to be malicious or suspicious. Those might include red flags like unexplained communications with external infrastructure, or suspicious dependencies that aren’t explained by the design and function of the application.
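One narrow check such a “final exam” might include is sketched below in Python: extracting network endpoints embedded in a compiled artifact and flagging any the application’s design doesn’t account for. The EXPECTED_HOSTS allowlist is hypothetical, and a real release gate would inspect far more than embedded URLs – imports, behaviors, signatures and dependency provenance among them.

```python
import re
import sys
from pathlib import Path

# Hypothetical allowlist: the endpoints this application is designed to contact.
EXPECTED_HOSTS = {"api.example.com", "updates.example.com"}

# Pull host names out of any http(s) URLs embedded in the binary's bytes.
URL_PATTERN = re.compile(rb"https?://([A-Za-z0-9.-]+)")

def unexpected_hosts(binary_path: Path):
    """Return embedded host names that the allowlist does not explain."""
    data = binary_path.read_bytes()
    hosts = {m.group(1).decode("ascii", "ignore") for m in URL_PATTERN.finditer(data)}
    return {h for h in hosts if h not in EXPECTED_HOSTS}

if __name__ == "__main__":
    artifact = Path(sys.argv[1])
    suspicious = unexpected_hosts(artifact)
    if suspicious:
        print("Release check FAILED; unexplained external endpoints found:")
        for host in sorted(suspicious):
            print("  -", host)
        sys.exit(1)
    print("Release check passed: no unexpected network indicators.")
```

Wired into a release pipeline as a blocking step, a check like this fails the build rather than shipping a binary whose behavior nobody on the team can explain.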

The goal is to detect the kinds of compromises that bedeviled SolarWinds and 3CX, which were clearly observable after the fact, but escaped notice by application security teams within those organizations looking for the ‘usual suspects.’

The objective is for software makers to have a reliable means of manufacturing reproducible software builds that give both them and their customers confidence that malicious code or functionality is not lurking in application code and updates shipped to end users. That’s a goal we should all be able to get behind.
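For illustration, the verification step that reproducible builds enable can be as simple as the Python sketch below, which assumes the customer (or an independent auditor) can rebuild the release from the published source and compare digests with the vendor-shipped artifact. The file names in the usage comment are hypothetical.

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a build artifact, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical usage: python verify_build.py vendor_release.bin rebuilt_from_source.bin
    official, rebuilt = sys.argv[1], sys.argv[2]
    a, b = sha256_of(official), sha256_of(rebuilt)
    print("official :", a)
    print("rebuilt  :", b)
    if a == b:
        print("Builds match: the shipped binary corresponds to the audited source.")
    else:
        print("MISMATCH: the shipped binary was not reproduced from the expected source.")
        sys.exit(1)
```

The comparison itself is trivial; the hard part, and the real promise of reproducible builds, is engineering the build process so that two independent parties can arrive at bit-identical output in the first place.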
