Chief Information Security Officers are grappling with dual pressures: executive enthusiasm for generative artificial intelligence on one side and the security risks it poses on the other. A recent NTT Data survey found that while 89 percent of C-suite executives are deeply concerned about the security risks of generative AI, they also believe the potential benefits and return on investment outweigh those risks. That tension leaves CISOs stretched thin, with nearly half expressing negative sentiment toward generative AI because of the pressure they face. Experts warn that the risks are unprecedented, with data leakage and malicious code injection among the most prominent concerns.
So it’s timely that the Cybersecurity and Infrastructure Security Agency (CISA), along with leading U.S. technology companies, has introduced a new plan for reporting and sharing information about security threats to artificial intelligence models. The initiative, revealed today, underscores the importance of addressing security flaws that could endanger not just model creators but any company that uses AI applications. The new playbook, crafted by the agency’s Joint Cyber Defense Collaborative, guides companies on how to report active cyber threats and system vulnerabilities. It includes checklists for reporting incidents and newly discovered vulnerabilities, and it draws on feedback from two AI security tabletop exercises conducted last year. As the incoming Trump administration prepares to take office, CISA’s future remains uncertain, yet industry leaders such as Alex Levinson of Scale AI say they are committed to continuing to share intelligence with partners regardless of political changes. The overarching goal is to foster trust in AI technologies while keeping security a top priority for developers and users alike.
Why do we care?
As companies rush to adopt these technologies, the need for effective risk management has never been more urgent, especially as bad actors look to exploit vulnerabilities in these new systems.
The AI playbook is an opening to offer targeted services in AI security consulting, governance, and compliance. Positioning yourself as an expert in implementing secure AI practices will help win long-term client relationships.
The tension between innovation and security is not unique to AI, but the risks of generative AI are notable in their scale and complexity. IT providers that invest in understanding AI-specific threats will gain a competitive edge as these technologies proliferate.

