The human-AI partnership: a guide towards secure coding

By Pieter Danhieux

Pieter Danhieux, Co-Founder and CEO, Secure Code Warrior

The doomsayers are, so far, losing the argument. The panic around AI replacing humans has been countered with a new narrative: “Let AI redefine your job rather than replace it.” According to a recent survey from Stack Overflow, 44% of developers are either using or planning to use AI tools—even though just 3% “highly trust” the accuracy of the results. Twice as many (6%) say they highly mistrust AI due to security concerns and inaccuracy.

There remains at least some debate among developers over whether to embrace these tools, even as many businesses test them wherever they can. The UK government’s stance has been laissez-faire, with no “rush to regulate,” encouraging businesses to explore AI’s benefits. And many developers report good results, with some already finding that these tools increase their productivity and reduce time spent on repetitive tasks.

AI’s role in supporting developers will grow over time, but it cannot come at the expense of secure coding practices. Its eagerness to please and its propensity to “hallucinate” are significant concerns, making it impossible to fully trust. Until this is resolved, if it can be resolved, we will need skilled developers who keep security front of mind and who check AI-generated code for potential vulnerabilities.

GenAI: a journey companion

Beyond streamlining time-consuming and monotonous tasks, AI tools can proactively propose fresh lines of code, provide fast answers to technical questions, offer valuable research support, demystify complex processes and make a once very difficult job more accessible. GitHub surveyed developers about how managers should consider productivity, collaboration, and AI coding tools: over 80% of respondents anticipate that AI coding tools will promote greater collaboration within their team, and 70% believe that AI coding tools will give them a competitive edge in their professional roles, with benefits to code quality, speed, and incident resolution.

However, these tools also introduce a new security challenge: it is no longer enough to check your own code for vulnerabilities; you must also check your AI helper’s. Maintaining a strong focus on secure coding practices in software development is already crucial. Recent research from the Department of Homeland Security estimates that 90% of software vulnerabilities can be traced back to defects in design and software coding.
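To make that concrete, below is a minimal sketch, in Python against SQLite, of the kind of defect a reviewer has to catch; the function names and schema are illustrative, not taken from any particular AI tool. The first lookup builds SQL by string formatting, a classic injection flaw and plausibly the sort of “working” code an assistant will happily suggest; the second uses a parameterized query, so the driver treats user input strictly as data.

```python
import sqlite3

# Hypothetical AI-suggested lookup: builds SQL with string formatting,
# so crafted input can rewrite the query (SQL injection).
def find_user_unsafe(conn, username):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# What a security-aware reviewer should insist on: a parameterized query,
# where the driver treats the input purely as data.
def find_user_safe(conn, username):
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    payload = "x' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # leaks every row
    print(find_user_safe(conn, payload))    # returns nothing
```

The two functions differ by a single line, which is precisely why such defects slip through: the unsafe version works perfectly in a demo and only fails under hostile input.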

So while AI offers significant productivity gains, it is fallible, and it demands vigilance so those gains don’t come at the expense of new security issues.

Developers as security sheriffs

Blindly relying on AI output without verification is like relying on Wikipedia: a good place to start, but not something you can be certain is reliable. We all still use Wikipedia; we just need to be aware of the risks and have the right processes in place to catch any potential problems.

The UK has already shown some initiative, starting with the AI Safety Summit, a gathering aimed at establishing a global consensus on AI and driving international efforts to enhance safety. Any rules that emerge will be critical in shaping the future of AI security. Still, we cannot wait for governments to draft them; developers must act now to ensure new technologies are used responsibly, or risk an AI-generated nightmare of insecure software.

Developers should be empowered to act as security sheriffs within their organisation, driving secure strategies while producing secure code. This can be done through:

  • Human oversight and expertise: While certain AI tools will flag potential vulnerabilities and inconsistencies, humans must still oversee the process. Generated code can only be as accurate as the prompts that produced it, and the developer needs to understand how AI recommendations apply in the wider context of the project.
  • Attention to complexity and overall strategy: In software production, developers can take on the role of a quality control team, trained to review AI-generated code and ensure it meets the project’s standards; one way to make such a review check repeatable is sketched after this list. AI is not yet capable of independently handling complex components or generating innovative solutions for DevOps challenges.
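One way a reviewer can make that quality control stick is to codify it as a regression test, so any later AI-suggested rewrite of the lookup is checked automatically. The sketch below reuses the hypothetical find_user_safe from the earlier example; in a real project the function would be imported from the application code rather than repeated inline.

```python
import sqlite3
import unittest

# In a real project this would be imported from the application code;
# it is repeated here so the sketch is self-contained.
def find_user_safe(conn, username):
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

class InjectionRegressionTest(unittest.TestCase):
    """Reviewer-added regression test: whatever an AI assistant later
    suggests for find_user_safe must keep treating input as data."""

    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
        self.conn.executemany(
            "INSERT INTO users (username) VALUES (?)",
            [("alice",), ("bob",)])

    def test_injection_payload_returns_no_rows(self):
        # A classic injection payload must not match any real user.
        rows = find_user_safe(self.conn, "x' OR '1'='1")
        self.assertEqual(rows, [])

if __name__ == "__main__":
    unittest.main()
```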

Why “sheriffs”? Today’s AI frontier is the Wild West, with little regulation and real potential for danger. Organisations cannot wait for robust regulation; they need to build a culture of security today, one that extends across the entire business.
