Danger in Dialogue: The Security Risks of Large Language Models
2025TL; DR
Discover the hidden security risks of Large Language Models and learn practical strategies to protect your organization. This session covers threat vectors, real-world examples, and actionable frameworks for secure LLM implementation.
Session Details
In this comprehensive session, we'll explore the critical security challenges posed by Large Language Models (LLMs) as they become increasingly integrated into business operations and daily life.
From prompt injection attacks to data privacy concerns, we'll examine real-world examples of LLM vulnerabilities and their potential impact on organizations. Participants will learn what strategies exist for identifying, assessing, and mitigating these risks while maintaining the benefits of LLM adoption.
Through demonstrations and case studies, attendees will gain hands-on experience in recognizing common attack vectors and implementing effective security measures.
From prompt injection attacks to data privacy concerns, we'll examine real-world examples of LLM vulnerabilities and their potential impact on organizations. Participants will learn what strategies exist for identifying, assessing, and mitigating these risks while maintaining the benefits of LLM adoption.
Through demonstrations and case studies, attendees will gain hands-on experience in recognizing common attack vectors and implementing effective security measures.
3 things you'll get out of this session
Understand the major categories of LLM security threats and their potential impact on organizations
Learn to identify and analyze common attack vectors in LLM implementations
Create an effective risk assessment framework for LLM deployment
Speakers
Scott Bell's other proposed sessions for 2026
AI for Developers, Not End Users: Master Agentic BI Development Workflows - 2026
AI for Developers, Not End Users: Master Agentic BI Development Workflows - 2026
AI for Developers, Not End Users: Master Agentic BI Development Workflows - 2026
AI Security in Practice: How to run Threat Modeling Workshops - 2026
Building Context, Not Vibes Pratical AI Augmented Data Engineering - 2026
Building Context, Not Vibes Pratical AI Augmented Data Engineering - Part 1 - 2026
Building Context, Not Vibes Pratical AI Augmented Data Engineering (Part 2) - 2026
Danger in Delegation: When “Helpful” Becomes Harmful - 2026
Optimizing Your Delta Lake: Beyond the Defaults - 2026
Scott Bell's previous sessions
Navigating Data Governance in the Age of Generative AI
In the rapidly evolving world of data analytics, the emergence of Large Language Models (LLMs) has sparked a debate: Are LLMs signaling the end of traditional data analytics? This session delves into the heart of this question, exploring the fundamental workings of LLMs and their transformative impact on the analytics landscape. Attendees will gain insights into the advantages and potential pitfalls of integrating LLMs into their data strategies. We'll discuss the innovative use cases LLMs unlock and emphasize the paramount importance of governance and lineage in harnessing their full potential. Whether you're intrigued by the brilliance of LLMs or wary of their implications, this session will equip you with a balanced perspective to navigate the future of data analytics.
Cosmos 101
Find out everything you need to know to get started with Azure Cosmos DB in 20 minutes or less.
Is HTAP the future?
Hybrid Transactional Analytical Processing solve the age old problem of integrating operational processes with analytical capabilities within a single system. Find out what they're and how they deliver value