Don’t Shut Off ChatGPT, Implement a Managed Allowance Instead

By James Robinson, Deputy CISO, Netskope

Over the past 30 days, the most pressing question facing CIOs and CISOs has been "how much?" How much access to ChatGPT do we actually give our employees? Top security leaders are left to decide whether to ban ChatGPT outright in their organizations or embrace its use. So which option should they pick?

A simple answer is to implement a managed allowance. However, this only works if your organization is doing all the right things with sensitive data protection and with the responsible use of AI/ML in your own platforms and products. Your organization must also clearly convey where and how it uses AI to customers, prospects, partners, and third- and fourth-party suppliers in order to build successful, securely enabled, governance-driven programs.

Organizations that simply "shut off" access to ChatGPT may initially feel more secure, but they are also denying themselves its many productive uses and potentially putting themselves, and their entire teams, behind the innovation curve. To avoid falling behind, organizations should consider prioritizing a managed allowance of ChatGPT and other generative AI tools.
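
To make the idea concrete, here is a minimal sketch of what a managed-allowance decision might look like. The app identifiers, user groups, and actions are hypothetical placeholders; this illustrates the concept, not any vendor's product logic:

```python
# Illustrative only: a minimal "managed allowance" decision, not any
# vendor's product logic. App names, groups, and actions are hypothetical.

GENAI_APPS = {"chatgpt", "bard", "claude"}  # assumed generative AI app identifiers

# Hypothetical per-group allowance: who may use generative AI, and how.
ALLOWANCE = {
    "engineering": "allow_with_coaching",  # allowed, with a reminder about data handling
    "marketing":   "allow_with_coaching",
    "contractors": "block",                # no allowance for third-party staff
}

def evaluate(user_group: str, app: str) -> str:
    """Return the access decision for a user group and destination app."""
    if app not in GENAI_APPS:
        return "allow"  # non-AI traffic falls through to normal policy
    return ALLOWANCE.get(user_group, "block")  # default-deny for unmapped groups

if __name__ == "__main__":
    print(evaluate("engineering", "chatgpt"))  # allow_with_coaching
    print(evaluate("contractors", "chatgpt"))  # block
```

The "coaching" action is the point of a managed allowance: users keep access, but are reminded of the data-handling rules each time they reach for the tool, rather than being blocked outright.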

Governing ChatGPT within your organization

Netskope has been deeply focused on the productive use of AI and ML since our founding in 2012. Like everyone, we've just observed an inflection point for generative AI. Unless you were a data scientist, you likely weren't doing much with generative AI before November 2022, and as a security practitioner, developer, application builder, or technology enthusiast, your exposure was focused on using these features, not developing them. But since the public release of ChatGPT, everyone can access these services and technologies without any prior knowledge of the tools. Anyone with a browser can, right now, go in and understand what ChatGPT can and can't do.

When something becomes the dominant topic of conversation in business and technology this quickly, as ChatGPT definitely has, leaders have essentially two choices:

  • Prohibit or severely limit its use
  • Create a culture where people are allowed to understand and embrace the technology without putting the business at risk

For those on your team who are granted access to ChatGPT, that access must be enabled responsibly. Here at the dawn of mainstream generative AI adoption, we're going to see at least as much disruptive behavior as we did at the dawn of the online search engine decades ago, when new kinds of threats emerged and a lot of data was made publicly available that arguably should not have been.

Managing third- and fourth-party risk

As organizations implement the productive business use of generative AI by the appropriate users, we will also see the rise of AI copilots. This will make security teams responsible for obtaining critical information from their third- and fourth-party suppliers regarding AI-associated tools. These questions can help guide the assessment (one way to record the answers is sketched after the list):

  • How much of a supplier’s code is written by AI?
  • Can your organization review the AI-written code?
  • Who owns the AI technology your suppliers are using?
  • Who owns the content they produce?
  • Is copyleft licensing involved, and is that a problem?
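
As one illustration, the answers could be captured in a simple structure so they can be tracked and compared across suppliers. The field names below are hypothetical, not part of any standard questionnaire:

```python
# Illustrative only: one way to record a supplier's answers to the
# assessment questions above. Field names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SupplierAIAssessment:
    supplier: str
    pct_code_ai_written: Optional[float] = None  # how much of their code is AI-written?
    ai_code_reviewable: Optional[bool] = None    # can we review the AI-written code?
    ai_tech_owner: Optional[str] = None          # who owns the AI technology they use?
    output_owner: Optional[str] = None           # who owns the content it produces?
    copyleft_exposure: Optional[bool] = None     # is copyleft licensing involved?
    notes: List[str] = field(default_factory=list)

    def open_items(self) -> List[str]:
        """Unanswered questions, i.e., follow-ups for the next supplier review."""
        return [name for name, value in vars(self).items() if value is None]

if __name__ == "__main__":
    acme = SupplierAIAssessment(supplier="Acme SaaS", ai_code_reviewable=False)
    print(acme.open_items())
    # ['pct_code_ai_written', 'ai_tech_owner', 'output_owner', 'copyleft_exposure']
```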

AI is here to stay. With the right cultural orientation, users within organizations are better able to understand and use the technology without compromising the company's security posture. However, this needs to be combined with the right technology orientation: modern data loss prevention (DLP) controls that prevent the misuse and exfiltration of data, and that are part of an infrastructure enabling teams to respond quickly when data is misused.
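
As a rough illustration of that technology orientation, the sketch below shows the kind of pattern-based check a gateway might apply to outbound generative AI prompts. Real DLP engines use far richer classifiers; the patterns and the block/allow actions here are assumptions made for the example:

```python
# Illustrative only: a minimal, regex-based sketch of a DLP check on
# outbound generative AI prompts. Real DLP engines use far richer
# classifiers; these patterns and actions are hypothetical.
import re

# Assumed detectors for two common sensitive-data types.
PATTERNS = {
    "ssn":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US Social Security number
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),   # AWS access key ID format
}

def screen_prompt(prompt: str) -> tuple:
    """Return ('block' or 'allow', list of matched data types) for a prompt."""
    hits = [name for name, rx in PATTERNS.items() if rx.search(prompt)]
    return ("block" if hits else "allow"), hits

if __name__ == "__main__":
    decision, hits = screen_prompt("Summarize this record: SSN 123-45-6789")
    print(decision, hits)  # block ['ssn']
```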
