
Adversarial AI Attacks: Master Offensive Techniques on Image Classifiers (W77)

Original price: $159.00. Current price: $119.00.

This hands-on course will teach you how to execute adversarial attacks on image classification models using techniques like FGSM, BIM, and PGD. You’ll gain practical experience, understand the vulnerabilities of AI systems, and learn how to protect them. Led by AI and cybersecurity expert Hugo Le Belzic, this VoD is ideal for both professionals and enthusiasts looking to deepen their knowledge of AI safety.



Get access to all our courses via subscription

Subscribe


COURSE IS SELF-PACED, AVAILABLE ON DEMAND

DURATION: 4 hours

CPE POINTS: On completion, you get a certificate granting you 4 CPE points. 


This online course dives into adversarial attacks on image classification models, offering hands-on tasks after each module. Participants will learn and apply attack techniques such as FGSM, BIM, CW, DeepFool, and PGD, with each attack module ending in a Vault Challenge where you break a model to capture a flag. Complete the Vault Challenges in Modules 3-7 to unlock a 15% discount on future courses. You'll also explore essential AI safety principles, gaining practical skills to both attack and defend AI systems.


Course benefits:

What skills will you gain?

  • Applying various attack techniques in practice (FGSM, BIM, CW, DeepFool, and PGD).
  • Manipulating AI image models with adversarial attacks.
  • Disrupting image classifiers.
  • Protecting AI image models from attacks.

Why take it NOW?

Adversarial attacks on image models are not just theoretical - they're a real and growing threat in AI safety! As AI becomes more integrated into our daily lives and critical systems, understanding how to defend against these attacks is more crucial than ever. With the rapid advancement in AI technology, now is the time to equip yourself with the knowledge and skills to implement robust defenses. This course covers a comprehensive range of essential techniques, ensuring you gain the most up-to-date and relevant skills in the field.


 


YOUR INSTRUCTOR: Hugo Le Belzic

Hugo Le Belzic is an entrepreneur and AI professional with strong roots in cybersecurity. He started coding at the age of 13 and developed his first Remote Access Trojan at 14, rapidly gaining recognition in the technology domain. By 19, he had co-founded a successful Web3 marketing agency and taken on key roles in AI and machine learning projects. Before that, he worked as a machine learning engineer at YC- and OpenAI-backed startups and won multiple AI hackathons. Today, Hugo is building an AI safety startup, leveraging his deep expertise in adversarial attacks and model optimization while maintaining a strong commitment to ethical standards in technology.

 


COURSE SYLLABUS


Module 1

Introduction to AI Safety and Vision Models

Covered topics:

  • Overview of vision model architectures, family types, and their use cases.
  • Core principles of AI safety, including robustness, transparency, fairness, and alignment, with examples of notable attacks.

Module 2

Environment Setup

Covered topics:

  • Overview of the course objectives.
  • Environment setup and introduction to the tools that will be used throughout the VoD (an illustrative setup check follows this list).
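
The page does not name the exact tools used in the VoD, so, purely as an illustrative assumption, the sketches shown under the following modules use a standard Python + PyTorch stack. A quick placeholder check that such an environment is ready might look like this:

    # Placeholder environment check. Assumes a Python + PyTorch stack, which
    # is an assumption of these sketches, not the course's stated toolchain.
    # Install with: pip install torch torchvision
    import torch
    import torchvision

    print("torch:", torch.__version__)
    print("torchvision:", torchvision.__version__)
    print("CUDA available:", torch.cuda.is_available())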

Module 3

FGSM (Fast Gradient Sign Method) Attack

Covered topics:

  • Introduction to FGSM and its origins.
  • Implementing FGSM and understanding its parameters (a minimal code sketch follows this list).
  • Applying FGSM to achieve desired results.
  • Discussion on the pros and cons of FGSM.
  • The module ends with a Vault Challenge, where participants must use FGSM to successfully break a model and capture a flag.
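
For orientation only, here is a minimal FGSM sketch (not the course's own code). It assumes a pretrained PyTorch classifier called model, a preprocessed image batch scaled to [0, 1], and its true label; all names are placeholders.

    # Minimal FGSM sketch (untargeted). "model", "image" and "label" are
    # placeholder names; "epsilon" controls the perturbation size.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Take a single step of size epsilon along the sign of the gradient.
        adv = image + epsilon * image.grad.sign()
        return torch.clamp(adv, 0, 1).detach()

The single signed-gradient step is what makes FGSM fast; larger epsilon values fool the model more often but also make the perturbation more visible.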

Module 4

BIM (Basic Iterative Method) Attack

Covered topics:

  • Introduction to BIM as an extension of FGSM, including key differences.
  • Implementing BIM (a minimal code sketch follows this list).
  • Using BIM to achieve desired results.
  • Discussion on the pros and cons of BIM.
  • The module ends with a Vault Challenge, where participants must use BIM to successfully break a model and capture a flag.
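
Under the same assumptions as the FGSM sketch (placeholder model, image, and label), a minimal BIM sketch might look like this:

    # Minimal BIM sketch: FGSM applied iteratively with a small step size,
    # clipping back into an epsilon-ball around the original image each step.
    import torch
    import torch.nn.functional as F

    def bim_attack(model, image, label, epsilon=0.03, alpha=0.005, steps=10):
        orig = image.clone().detach()
        adv = orig.clone()
        for _ in range(steps):
            adv.requires_grad_(True)
            loss = F.cross_entropy(model(adv), label)
            loss.backward()
            with torch.no_grad():
                adv = adv + alpha * adv.grad.sign()
                # Keep the total perturbation within epsilon of the original.
                adv = torch.min(torch.max(adv, orig - epsilon), orig + epsilon)
                adv = torch.clamp(adv, 0, 1)
        return adv.detach()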

Module 5

CW (Carlini & Wagner) Attack

Covered topics:

  • Introduction to the CW Attack method.
  • Implementing CW Attack (a minimal code sketch follows this list).
  • Applying CW Attack to achieve desired outcomes.
  • Discussion on the effectiveness of CW and situations where it may be preferred.
  • The module ends with a Vault Challenge, where participants must use the CW Attack to successfully break a model and capture a flag.
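
As a hedged illustration of the method (the course's implementation may differ), here is a compact untargeted C&W L2 sketch with the same placeholder names; the constant c trades off distortion against misclassification:

    # Minimal CW (L2, untargeted) sketch: optimizes a change of variables w
    # so the adversarial image stays in [0, 1], balancing L2 distortion
    # against a margin loss on the logits.
    import torch

    def cw_attack(model, image, label, c=1.0, steps=200, lr=0.01, kappa=0.0):
        # Change of variables: adv = 0.5 * (tanh(w) + 1) is always in [0, 1].
        w = torch.atanh((image * 2 - 1).clamp(-0.999, 0.999)).detach().requires_grad_(True)
        optimizer = torch.optim.Adam([w], lr=lr)
        for _ in range(steps):
            adv = 0.5 * (torch.tanh(w) + 1)
            logits = model(adv)
            true_logit = logits.gather(1, label.unsqueeze(1)).squeeze(1)
            other_logit = logits.scatter(1, label.unsqueeze(1), float('-inf')).max(1).values
            # Push some other class's logit above the true class's logit.
            margin = torch.clamp(true_logit - other_logit + kappa, min=0)
            l2 = ((adv - image) ** 2).flatten(1).sum(1)
            loss = (l2 + c * margin).sum()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return (0.5 * (torch.tanh(w) + 1)).detach()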

Module 6

DeepFool Attack

Covered topics:

  • Introduction to DeepFool, a more efficient adversarial attack method.
  • Implementing DeepFool (a minimal code sketch follows this list).
  • Applying DeepFool to achieve desired outcomes.
  • Discussion on the pros and cons of DeepFool, with a focus on efficiency.
  • The module ends with a Vault Challenge, where participants must use DeepFool to successfully break a model and capture a flag.
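
Below is a minimal single-image DeepFool sketch, simplified relative to the original paper and again using placeholder names:

    # Minimal DeepFool sketch (L2, untargeted, single image): linearize the
    # classifier at each step and take the smallest step that crosses the
    # nearest decision boundary among the top "num_classes" classes.
    import torch

    def deepfool_attack(model, image, num_classes=10, max_iter=50, overshoot=0.02):
        adv = image.clone().detach()
        orig_label = model(image).argmax(1).item()
        for _ in range(max_iter):
            adv.requires_grad_(True)
            logits = model(adv)[0]
            if logits.argmax().item() != orig_label:
                break  # already misclassified
            grad_orig = torch.autograd.grad(logits[orig_label], adv, retain_graph=True)[0]
            best_pert, best_ratio = None, float('inf')
            for k in logits.topk(num_classes).indices.tolist():
                if k == orig_label:
                    continue
                grad_k = torch.autograd.grad(logits[k], adv, retain_graph=True)[0]
                w = grad_k - grad_orig
                f = (logits[k] - logits[orig_label]).item()
                ratio = abs(f) / (w.norm() + 1e-8)
                if ratio < best_ratio:
                    # Distance to this class's (linearized) decision boundary.
                    best_ratio, best_pert = ratio, ratio * w / (w.norm() + 1e-8)
            with torch.no_grad():
                adv = torch.clamp(adv + (1 + overshoot) * best_pert, 0, 1)
        return adv.detach()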

Module 7

PGD (Projected Gradient Descent)

Covered topics:

  • Introduction to PGD, a stronger adversarial attack.
  • Implementing PGD (a minimal code sketch follows this list).
  • Applying PGD to achieve desired outcomes.
  • Discussion on the strengths and weaknesses of PGD.
  • The module ends with a Vault Challenge, where participants must use PGD to successfully break a model and capture a flag.
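
A minimal PGD sketch under the same assumptions; the random start inside the epsilon-ball is the main practical difference from the BIM sketch above:

    # Minimal PGD sketch: random start inside the epsilon-ball, then repeated
    # signed-gradient steps, projecting back into the ball after each one.
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, image, label, epsilon=0.03, alpha=0.007, steps=20):
        orig = image.clone().detach()
        adv = torch.clamp(orig + torch.empty_like(orig).uniform_(-epsilon, epsilon), 0, 1)
        for _ in range(steps):
            adv.requires_grad_(True)
            loss = F.cross_entropy(model(adv), label)
            grad = torch.autograd.grad(loss, adv)[0]
            with torch.no_grad():
                adv = adv + alpha * grad.sign()
                # Project back into the epsilon-ball and the valid pixel range.
                adv = torch.min(torch.max(adv, orig - epsilon), orig + epsilon)
                adv = torch.clamp(adv, 0, 1)
        return adv.detach()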

Module 8

Summary

Covered topics:

  • Summary and comparison of all the techniques covered: FGSM, BIM, CW, DeepFool, and PGD.
  • Discussion of scenarios where these techniques can be applied, including a case study.
  • Explanation of the concept of transferability in adversarial attacks (an illustrative check follows this list).
  • Best practices for protecting AI models from these types of attacks.
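
To illustrate only the transferability idea (not the course's case study), the sketch below crafts adversarial examples against one model and measures how often they also fool a second, independently trained model. The two torchvision models, the preprocessed images/labels batch, and the reuse of the pgd_attack sketch from Module 7 are all assumptions.

    # Illustrative transferability check. Assumes: two torchvision models,
    # a batch "images"/"labels" scaled to [0, 1] (normalization omitted or
    # folded into the models for simplicity), and pgd_attack from the
    # Module 7 sketch above.
    import torch
    from torchvision import models

    source = models.resnet18(weights="IMAGENET1K_V1").eval()
    target = models.vgg16(weights="IMAGENET1K_V1").eval()

    adv = pgd_attack(source, images, labels, epsilon=0.03)
    with torch.no_grad():
        fooled_source = (source(adv).argmax(1) != labels).float().mean().item()
        fooled_target = (target(adv).argmax(1) != labels).float().mean().item()
    print(f"fooled source: {fooled_source:.1%}, transferred to target: {fooled_target:.1%}")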

QUESTIONS? 

If you have any questions, please contact our eLearning Manager at [email protected].
