Training Delivery & Duration

  • Live Online / On-Site / Private Team Training
  • 1 Day

Secure Coding for Large Language Model Applications

As LLMs power chatbots, developer tools, and customer platforms, they bring new security risks like prompt injection and data leakage. This full-day workshop trains engineers and AppSec professionals to design, build, and test secure LLM applications

Heading

 |

Date : TBC

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse varius enim in eros elementum tristique. Duis cursus, mi quis viverra ornare, eros dolor interdum nulla, ut commodo diam libero vitae erat. Aenean faucibus nibh et justo cursus id rutrum lorem imperdiet. Nunc ut sem vitae risus tristique posuere.

Do you have 5 or more attendees?

Contact Us about Team Training >>
bkg-decorativelines-big-white

About this course

About this course

Course Overview

As large language models (LLMs) become embedded in everything from customer service to code generation, security professionals, developers, and architects must shift their mindset. Traditional security models aren’t enough—LLMs introduce new, often misunderstood risks like prompt injection, model theft, and excessive autonomy.

This course will equip you to:

Understand the unique vulnerabilities of LLM-based systems

Apply real-world mitigation techniques aligned with OWASP’s Top 10 for LLMs

Design and deploy secure, reliable, and trustworthy LLM applications

Why Take this Course?

This full-day workshop teaches engineers and AppSec professionals how to design, build, and test LLM applications with security in mind. Participants will gain a practical understanding of secure coding principles tailored for LLM-driven architectures, drawing from real-world case studies, OWASP guidance, and hands-on lab scenarios.

It is recommended that participants on the Secure Coding for Large Language Model Applications have completed the Web Application Security Training course. Please see Related Training at the end of this page.

Learning Objectives

Top 3 takeaways:

1. How to Recognize and Mitigate the Unique Security Risks of LLMs - Students will gain a clear understanding of the OWASP Top 10 for LLMs, including threats like prompt injection, insecure output handling, and model theft. They’ll learn how these risks differ from traditional application security issues and how to defend against them.

2. How to Design and Deploy LLM Applications with Secure Defaults - Participants will be equipped with practical techniques for securing LLMs throughout the lifecycle—covering input/output validation, plugin security, data provenance, and safe autonomy boundaries—enabling them to implement LLMs with confidence in real-world systems.

3. Why Critical Oversight and Responsible Use Are Essential in LLM-Driven Systems - Students will understand the human and operational risks of overreliance on LLM outputs and excessive model agency. They’ll learn how to integrate human-in-the-loop controls, policy safeguards, and monitoring to maintain accountability and trust.

Who Should Attend this Course?

This course is designed for anyone building, integrating, or securing applications that use large language models (LLMs)

To fully benefit from this course, students should have:

A basic understanding of software development and web technologies

Familiarity with Python and JavaScript—you don’t need to be an expert, but you should be comfortable reading and modifying simple code snippets

A general grasp of application security concepts (e.g., input validation, injection attacks, authentication)

Interest in AI and LLM systems—no prior experience with machine learning is required

Benefits

Top 3 takeaways

1. How to Recognize and Mitigate the Unique Security Risks of LLMs - Students will gain a clear understanding of the OWASP Top 10 for LLMs, including threats like prompt injection, insecure output handling, and model theft. They’ll learn how these risks differ from traditional application security issues and how to defend against them.

2. How to Design and Deploy LLM Applications with Secure Defaults - Participants will be equipped with practical techniques for securing LLMs throughout the lifecycle—covering input/output validation, plugin security, data provenance, and safe autonomy boundaries—enabling them to implement LLMs with confidence in real-world systems.

3. Why Critical Oversight and Responsible Use Are Essential in LLM-Driven Systems - Students will understand the human and operational risks of overreliance on LLM outputs and excessive model agency. They’ll learn how to integrate human-in-the-loop controls, policy safeguards, and monitoring to maintain accountability and trust.

Attendee Testimonials

Course Outline

Module 1: Introduction

Overview of the OWASP Top 10 for LLMs

Threat landscape for LLM applications

Why traditional security paradigms fall short

Mapping to existing risk frameworks (e.g., NIST, ISO, OWASP AppSec)

Module 2: Prompt Injection

Definition and impact

Direct vs Indirect prompt injection

Real-world examples (e.g., attacks via RAG, plugins, and tools)

Module 3: Insecure Output Handling

Output injection (HTML, code, SQL, etc.)

Over-reliance on hallucinated or unverified content

Risks to downstream consumers (e.g., agents, APIs, UIs)

Module 4:  Training Data Poisoning

Risks in dataset curation and ingestion pipelines

Threats from 3rd-party or open-source data

Intentional vs unintentional poisoning

Module 5: Model Denial of Service

Token flooding, infinite loops, adversarial prompts

Cost/resource exhaustion attacks

Module 6: Supply Chain Vulnerabilities

Risks in third-party models, plugins, libraries, and datasets

Trust boundaries and integrity of the ML pipeline

Module 7: Sensitive Information Disclosure

Memorization of secrets (e.g., API keys, PII)

Prompt leaking and output probing

Module 8: Insecure Plugin Design

Input validation and sanitization failures

Over-permissive capabilities and scopes

Module 9:  Excessive Agency

Risk of autonomous decision-making and execution

Module 10: Overreliance

Psychological trust in LLMs (automation bias)

Failures in oversight or review

Module 11: Model Theft

Reverse engineering, output inference, and model extraction

Threats to IP, compliance, and model confidentiality

Duration: 1 day (8 hours)

Format

The course combines theory and hands-on practical exercises.

What is included?

Levels

  • Printed materials
  • Virtual image containing all tools used
  • Certificate of Participation (CPE Points)

Team Training with Cycubix

Team Training with Cycubix

Instructors

The minds behind the course

The minds behind the course

Fabio Cerullo

Senior Official ISC2 Authorised Instructor for CISSP, CCSP, CSSLP and SSCP

Fabio Cerullo is the Managing Director of Cycubix. He has extensive experience in understanding and addressing the challenges of cybersecurity from over two decades working in and with organisations across a diverse range of industries – from financial services to government departments, technology and manufacturing.

Fabio Cerullo is a Senior Authorised Instructor for ISC2,the global leader in information security education and certification. Fabio has delivered training to thousands of IT and security professionals world wide in cyber, cloud, and application security. As a member of ISC2 and OWASP organisations, Fabio helps individuals and organisations strengthen their application security posture and build fruitful relationships with governments, industry and educational institutions.

Fabio is a regular speaker and delivers training at events organised by leading Cybersecurity associations including OWASP and ISC2. He holds a Msc in Computer Engineering from UCA and the SSCP, CISSP, CSSLP & CCSP certifications from ISC2.

Show (Instructors)

The minds behind the course

The minds behind the course

Fabio Cerullo

Fabio Cerullo is the Managing Director of Cycubix. He has extensive experience in understanding and addressing the challenges of cybersecurity from over two decades working in and with organisations across a diverse range of industries – from financial services to government departments, technology and manufacturing.

Fabio Cerullo is a Senior Authorised Instructor for ISC2,the global leader in information security education and certification. Fabio has delivered training to thousands of IT and security professionals world wide in cyber, cloud, and application security. As a member of ISC2 and OWASP organisations, Fabio helps individuals and organisations strengthen their application security posture and build fruitful relationships with governments, industry and educational institutions.

Fabio is a regular speaker and delivers training at events organised by leading Cybersecurity associations including OWASP and ISC2. He holds a Msc in Computer Engineering from UCA and the SSCP, CISSP, CSSLP & CCSP certifications from ISC2.