Guardrails and Attack Vectors: Securing the Generative AI Frontier
04 November 2025

Guardrails and Attack Vectors: Securing the Generative AI Frontier

CISO Insights: Voices in Cybersecurity

About

This episode dissects critical risks specific to Large Language Models (LLMs), focusing on vulnerabilities such as Prompt Injection and the potential for Sensitive Information Disclosure. It explores how CISOs must establish internal AI security standards and adopt a programmatic, offensive security approach using established governance frameworks like the NIST AI RMF and MITRE ATLAS. We discuss the essential role of robust governance, including mechanisms for establishing content provenance and maintaining information integrity against threats like Confabulation (Hallucinations) and data poisoning.


 


Sponsor:


www.cisomarketplace.services