Responsible AI Boosts Software Security – CASA Software

While the prevalence of high-severity security flaws in applications has dropped slightly in recent years, the risks posed by software vulnerabilities remain high, and remediating these vulnerabilities can stand in the way of new application development. Responsible AI offers a solution to the challenge of balancing risk mitigation and software development.

This is according to John Smith, Chief Technology Officer for EMEA at Veracode, who was speaking ahead of the 2024 ITWeb Security Summit in Johannesburg.

Veracode’s State of Software Security 2024 report finds that the prevalence of high-severity security flaws in applications is half of what it was in 2016; however, the situation is far from ideal. Around 63% of applications have flaws in first-party code and 70% contain flaws in third-party code. Worryingly, these flaws can take 7 to 11 months to fix, and 46% of organisations have persistent, high-severity flaws that constitute critical security debt.

Smith says South Africa’s software security environment is no different from the situation in the rest of the world. “We find the same challenge everywhere, in that in any programming problem you attempt to solve, there are many ways that will introduce weakness. Mistakes will happen unless you put security at the heart of development. The only way to mitigate this is by testing early and often, and prioritising remediation,” he says. “However, prioritising is difficult. Only around 10% of organisations can efficiently prioritise risk.”

He says there is an inevitable trade-off between spending developer time creating new features and spending it fixing weaknesses in software, and investing in remediation in case the business is hacked.

AI offers significant opportunities to support prioritisation and remediation, but Smith cautions against placing too much faith in generative AI at this stage. Generative AI, sourcing its data from the internet, may draw on inaccurate or biased data. He notes that organisations may trust its answers too implicitly and fail to put the proper checks in place.

Smith says the key to effective use of AI to mitigate risks lies in the data it uses. “The approach we have taken with Veracode Fix is to narrow it down to focus on fixing vulnerabilities in code. Instead of using a whole mass of data from outside, we focus on patches designed by our security researchers – using human knowledge and encoding that into the AI. This gets past the challenge of generative AI generating everything. Applying a human-generated patch is a more responsible approach and removes poor-quality data and AI hallucinations. It also means we have control over the IP, eliminating the risk of the model reproducing code it sourced on the internet that was published under licence.”

Sagaran Naidoo, Sales Director of CASA – a premium partner of Veracode in South Africa – says: “This type of responsible AI is crucial, and Veracode is doing this exceptionally well in improving developer productivity. We have seen several examples recently where generative AI with inaccurate and untrained data has caused concern. Google’s recent apology for ‘missing the mark’ with its historical image generation depictions is a case in point.”

Naidoo says many South African organisations are still grappling with finding the balance between remediating code vulnerabilities and rolling out new software features and functions to support business growth. “Developers, in particular, are under constant pressure to deliver at speed,” he says. “At CASA, we believe Veracode offers the most complete solution to this dilemma by using human-generated best practice and machine learning to generate fixes developers can review and implement significantly faster than the traditional way.”

Smith says: “We don’t have the reach to have people on the ground in South Africa to offer in-person support to customers. Through premier partners like CASA, who are trusted by local customers, we can help South African organisations build and maintain secure software.”