featured-image

82% percent of C-suite executives believe secure and trustworthy AI is critical. However, only 24% of them build security into their initiatives related to GenAI. That’s according to new research from IBM and Amazon Web Services, joint research announced at RSA Conference 2024 and which detailed results of a survey focused on the views of C-suite executives regarding how to drive safe use cases for AI, with a ton of focus on generative AI. IBM reports that only 24% of respondents considered a security dimension in their generative AI initiatives, whereas nearly 70% say innovation takes precedence over security.

Innovation vs. security tradeoff

While most executives worry unpredictable risks will impact gen AI initiatives, they are not prioritizing security. Even though a fully realized AI threat landscape has not yet occurred, some use cases have involved using ChatGPT or similar to generate phishing email scripts and deepfake audio. IBM X-Force security researchers have warned that, as with more mature technologies, AI systems can increasingly be expected to be targeted on a larger scale.

While a consolidated AI threat surface is only starting to form, IBM X-Force researchers anticipate that once the industry landscape matures around common technologies and enablement models, threat actors will begin to target these AI systems more broadly, the report read. Indeed, that convergence is underway as the market matures rapidly, and leading providers are already emerging across hardware, software, and services.

More immediately concerning, said the IBM report, are those companies that don’t properly secure the AI models they build and use as part of their businesses. Poor application of GenAI tools may lead to mishandling or leaking of sensitive data. According to the report, “shadow AI” is increasing in its usage within organizations as employees use GenAI tools that have not been approved and secured by enterprise security teams.

To this end, IBM announced a framework for securing GenAI in January. Its basic tenets are centralizing data, which is used to train AI models, securing models using a combination of scanning development pipelines for vulnerabilities, enforcing policy and access control for AI use, and securing its usage against live AI model-based attacks.

Securing the AI pipeline

On Monday, IBM’s X-Force Red offensive security team rolled out an artificial intelligence testing service that seeks to evaluate AI applications, AI models, and MLSecOps pipelines through red teaming. On Wednesday at the RSA Conference 2024, IBM will present “Innovate Now, Secure Later? Decisions, Decision…” on securing and establishing governance for the AI pipeline.

The second speaker was Ryan Dougherty, the program director for security technology at IBM Security. Speaking to TechTarget Editorial, he said that securing AI right out of the box is one of the prime concerns for IBM Security in the technology space. He said, That’s what makes this area so critical, particularly for the case of generative AI: it’s getting deeply embedded into business applications and processes. Integrating the business fabric raises it above and beyond the potential risks and threats.

According to him, from a business perspective, AI must have a grounding in its real nature.

“Generative AI is trained and operationalized on vast troves of sensitive business data, and we need to secure those new crown jewels because that’s where the competitive advantage comes in. It’s around the data these organizations have and the insights they’re getting by using the generative AI and surfacing it within their applications to improve their businesses,”

Dougherty added that they’re paying a lot of money for the models, which are very expensive. A lot of IP and investment is going into operationalizing these generative AI applications, and businesses simply can’t afford to secure them.