Enterprise AI Safety in the Real World: Guardrails, Policies, and “Good Enough” Risk (Jonne Tuomela from Netskope)
26 February 2026



About

Can your organization safely use ChatGPT-style tools—and trust what comes back? Recorded at NEVERHACK Estonia’s Client Day 2026 in Tallinn, Louis Zezeran sits down with Jonne Tuomela (Senior Solutions Engineer, Netskope) to unpack the real risks of large language models: prompt injection and jailbreaks, hallucinations and misinformation, poisoned training data, and why “perfect” safety is unrealistic.

They discuss how AI red teaming works at scale (thousands of test prompts), how guardrails can inspect both prompts and responses, and why smart policies (like allowing prompts but blocking file uploads) can protect sensitive data without wrecking user experience. Plus: why coaching and employee education still beat buying “one more tool.”

🎧 Listen now, follow NEVERHACK Estonia Cybercast, and subscribe for more real-world security conversations.