Training Delivery & Duration

  • Live Online / On-Site / Private Team Training
  • 2 Days

Secure Coding for AI and Large Language Model (LLM) Applications

Hands-on training that teaches teams to securely develop and deploy AI and LLM applications. Learn how to prevent prompt injection, data leakage, and other critical risks from the OWASP Top 10 for LLMs.

Do you have 5 or more attendees?

Contact Us about Team Training >>

About this course

Course Overview

AI-driven applications, especially those powered by Large Language Models (LLMs), are rapidly transforming products, developer workflows, and customer experiences. 

But these systems introduce unique security risks that traditional AppSec practices don’t address. 

This 2-day hands-on course teaches developers, AppSec engineers, and architects how to design and build secure AI/LLM applications. Participants learn to defend against prompt injection, insecure output handling, model poisoning, data leakage, and other risks from the updated OWASP Top 10 for LLM Applications 2025.

Through labs and real-world case studies, attendees gain practical skills for deploying safe, trustworthy, and compliant AI capabilities at scale.
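As a taste of the defensive habits covered, the sketch below is illustrative only: the message format mimics common chat-completion APIs, and the pattern list is a hypothetical example, not a complete defense. It shows two basics from the course: keeping trusted instructions in the system role instead of concatenating user text into them, and pre-screening obviously hostile input.

```python
import re

# Hypothetical examples of hostile phrasing. Real prompt-injection defense
# is layered (output filtering, privilege separation, human review);
# a regex list alone is never sufficient.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (the )?system prompt",
]

def looks_like_injection(user_text: str) -> bool:
    """Naive pre-filter that flags obvious injection phrasing."""
    return any(re.search(p, user_text, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

def build_messages(user_text: str) -> list[dict]:
    """Keep trusted policy in the system role; pass user text as untrusted data."""
    return [
        {"role": "system", "content": "You are a support assistant. Follow company policy."},
        {"role": "user", "content": user_text},  # untrusted input, clearly separated
    ]
```

The separation matters because concatenating user input into the system prompt gives attacker-controlled text the same authority as your instructions.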

Why Take this Course?

This course helps organisations confidently integrate AI technologies by addressing the emerging security challenges associated with LLM-powered systems.

You will learn to:

  • Protect AI and LLM applications from real-world attacks (prompt injection, data leaks, model theft).
  • Demonstrate compliance readiness across AI governance standards (ISO 42001, NIST AI RMF).
  • Build defensible and auditable AI architectures aligned with security best practices.
  • Reduce business, legal, and operational risk caused by AI system failures.
  • Equip engineering teams with security-by-design practices for AI-enabled products.

For compliance managers and buyers, this training also provides assurance that your organisation is developing secure and responsible AI.

Learning Objectives

Participants will be able to:

  • Identify and mitigate the unique risks of AI/LLM-powered applications

  • Implement secure coding practices for LLM inputs, outputs, and agent-based behaviours

  • Apply OWASP Top 10 for LLM Applications 2025 controls effectively

  • Design AI systems with safe autonomy, secure plugin architectures, and least-privilege access

  • Detect high-risk behaviours, hallucinations, and security regressions in AI systems

  • Evaluate AI components for compliance and governance implications

Who Should Attend this Course?

This course is designed for anyone building, integrating, or securing applications that use large language models (LLMs):

  • Software Developers and Engineers

  • AI/ML Engineers and Data Scientists

  • Application Security and Cloud Security Professionals

  • Technical Architects and Engineering Managers

  • AI Governance, Risk, and Compliance (GRC) Leads

  • Product Owners working on AI-enabled features

To fully benefit from this course, participants should have:

  • A basic understanding of software development and web technologies.
  • Familiarity with Python and JavaScript; you don’t need to be an expert, but you should be comfortable reading and modifying simple code snippets.
  • A general grasp of application security concepts (e.g. input validation, injection attacks, authentication).
  • An interest in AI and LLM systems; no prior experience with machine learning is required.

Course Outline

Part I: Foundations of AI and LLM Security

Part II: Threat Modeling and Architecture

  • Threat Modeling for LLM Systems
  • RAG Security: Retrieval, Embeddings, and Index Integrity
  • Agent and Tool Security

Part III: The OWASP Top 10 for LLM Applications 2025

  • LLM01:2025 Prompt Injection
  • LLM02:2025 Sensitive Information Disclosure
  • LLM03:2025 Supply Chain
  • LLM04:2025 Data and Model Poisoning
  • LLM05:2025 Improper Output Handling
  • LLM06:2025 Excessive Agency
  • LLM07:2025 System Prompt Leakage
  • LLM08:2025 Vector and Embedding Weaknesses
  • LLM09:2025 Misinformation
  • LLM10:2025 Unbounded Consumption
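As one concrete example of these controls, LLM05 (Improper Output Handling) comes down to treating model output as untrusted data before it crosses a trust boundary. A minimal sketch using only the Python standard library:

```python
import html

def render_model_output(raw_output: str) -> str:
    """Escape LLM output before embedding it in an HTML page.

    Rendering raw model text invites XSS if an attacker can steer the
    model into emitting markup (e.g. via indirect prompt injection).
    """
    return html.escape(raw_output)
```

The same principle applies to other sinks: parameterise SQL queries, shell-escape commands, and validate structured model output against a schema rather than executing it directly.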

Part IV: Secure AI/LLM Design and Governance

  • Secure AI/LLM Design Patterns and Best Practices
  • Governance, Risk and Regulatory Alignment

Format

This instructor-led workshop is available for both onsite and online delivery. It combines focused technical instruction with practical, hands-on labs in a secure AI/LLM lab environment. Participants engage in guided exercises, realistic attack simulations, and collaborative problem-solving, so the skills learned can be applied immediately to real-world AI and LLM application development.

What is included?

  • Live instructor-led sessions (online or in-person)
  • 365 days of access to slides and course materials via Cycubix Academy
  • Dedicated secure-coding labs for AI and LLM applications
  • Certificate of Completion
  • Option to customise content for organisational objectives

Levels

  • SECCDAI-01 Secure Coding for AI & LLM Applications Core Course
    • Focuses on OWASP Top 10 for LLM Applications. Practical secure coding for AI/LLM systems.

Team Training with Cycubix

Instructors

The minds behind the course

Fabio Cerullo

Senior Official ISC2 Authorised Instructor for CISSP, CCSP, CSSLP and SSCP

Fabio Cerullo is the Managing Director of Cycubix. He has extensive experience in understanding and addressing the challenges of cybersecurity from over two decades working in and with organisations across a diverse range of industries – from financial services to government departments, technology and manufacturing.

Fabio Cerullo is a Senior Authorised Instructor for ISC2, the global leader in information security education and certification. Fabio has delivered training to thousands of IT and security professionals worldwide in cyber, cloud, and application security. As a member of the ISC2 and OWASP organisations, Fabio helps individuals and organisations strengthen their application security posture and build fruitful relationships with governments, industry, and educational institutions.

Fabio is a regular speaker and delivers training at events organised by leading cybersecurity associations, including OWASP and ISC2. He holds an MSc in Computer Engineering from UCA and the SSCP, CISSP, CSSLP, and CCSP certifications from ISC2.
