COURSE IS SELF-PACED, AVAILABLE ON DEMAND

DURATION: 4 hours

CPE POINTS: On completion, you get a certificate granting you 4 CPE points. 



This online course provides a deep dive into adversarial attacks on image classification models and is aimed at professionals and hobbyists who want to understand the weaknesses of AI systems. Participants get hands-on practice with various adversarial attack methods and insight into their effects when injected into AI models. The VoD also highlights the importance of AI safety principles such as robustness, transparency, fairness, and alignment.


Course benefits:

What skills will you gain?

  • Applying various attack techniques in practice (FGSM, BIM, CW, DeepFool, and PGD).
  • Manipulating AI Image Models with adversarial attacks.
  • Disrupting image classifiers.
  • Protecting AI Image Models from attacks.

Why take it NOW?

Adversarial attacks on image models are not just theoretical—they're a real and growing threat in AI safety! As AI becomes more integrated into our daily lives and critical systems, understanding how to defend against these attacks is more crucial than ever. With the rapid advancement in AI technology, now is the time to equip yourself with the knowledge and skills to implement robust defenses. This course covers a comprehensive range of essential techniques, ensuring you gain the most up-to-date and relevant skills in the field.


 


YOUR INSTRUCTOR: Hugo Le Belzic

Hugo Le Belzic is an entrepreneur and AI professional with strong roots in cybersecurity. He started coding at the age of 13 and developed his first Remote Access Trojan at 14, rapidly gaining recognition in the technology domain. By 19, he had already co-founded a successful Web3 marketing agency and taken on key roles in AI and machine learning projects. Prior to that, he worked as a machine learning engineer at YC- and OpenAI-backed startups and won multiple AI hackathons. Today, Hugo is building an AI safety startup, leveraging his deep expertise in adversarial attacks and model optimization while maintaining a strong commitment to ethical standards in technology.

 


COURSE SYLLABUS


Module 1

Introduction to AI Safety and Vision Models

Covered topics:

  • Overview of vision model architectures, family types, and their use cases.
  • Core principles of AI safety, including robustness, transparency, fairness, and alignment, with examples of notable attacks.

Module 2

Environment Setup

Covered topics:

  • Overview of the course objectives.
  • Environment setup and introduction to the tools that will be used throughout the VoD (a sample environment check follows this list).
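
The exact toolchain is introduced in the module itself; assuming a typical PyTorch/torchvision setup for this kind of material (e.g. installed with pip install torch torchvision), a quick environment sanity check might look like this:

    # Illustrative environment check, assuming a PyTorch/torchvision stack.
    import torch
    import torchvision

    print("torch:", torch.__version__)
    print("torchvision:", torchvision.__version__)
    print("CUDA available:", torch.cuda.is_available())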

Module 3

FGSM (Fast Gradient Sign Method) Attack

Covered topics:

  • Introduction to FGSM and its origins.
  • Implementing FGSM and understanding its parameters (a minimal sketch follows this list).
  • Applying FGSM to achieve desired results.
  • Discussion on the pros and cons of FGSM.
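
As a rough illustration of what this module builds, here is a minimal FGSM sketch assuming a PyTorch/torchvision stack; the course's actual framework and settings may differ, and the ResNet-18 model, epsilon value, and stand-in input below are illustrative assumptions:

    # Minimal FGSM sketch, assuming PyTorch/torchvision (illustrative only).
    import torch
    import torch.nn.functional as F
    from torchvision import models

    def fgsm_attack(model, image, label, epsilon=0.03):
        """Single gradient step: move the image in the direction of the sign of the loss gradient."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adv = image + epsilon * image.grad.sign()   # x_adv = x + eps * sign(grad_x L)
        return adv.clamp(0, 1).detach()

    # Illustrative usage with a pretrained classifier and a stand-in input.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    x = torch.rand(1, 3, 224, 224)   # placeholder for a preprocessed image in [0, 1]
    y = torch.tensor([207])          # placeholder class index
    x_adv = fgsm_attack(model, x, y)
    print(model(x).argmax(1).item(), "->", model(x_adv).argmax(1).item())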

Module 4

BIM (Basic Iterative Method) Attack

Covered topics:

  • Introduction to BIM as an extension of FGSM, including key differences.
  • Implementing BIM (see the sketch after this list).
  • Using BIM to achieve desired results.
  • Discussion on the pros and cons of BIM.
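
For flavour, a minimal BIM sketch under the same PyTorch assumption: it iterates small FGSM-style steps and keeps the result inside an epsilon-ball around the original image (epsilon, step size alpha, and step count below are illustrative assumptions):

    # Minimal BIM sketch: iterated FGSM steps with clipping (illustrative only).
    import torch
    import torch.nn.functional as F

    def bim_attack(model, image, label, epsilon=0.03, alpha=0.005, steps=10):
        x_orig = image.clone().detach()
        x_adv = x_orig.clone()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), label)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # One small FGSM step, then clip back into the epsilon-ball and valid pixel range.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x_orig - epsilon), x_orig + epsilon).clamp(0, 1)
        return x_adv.detach()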

Module 5

CW (Carlini & Wagner) Attack

Covered topics:

  • Introduction to the CW Attack method.
  • Implementing the CW Attack (see the sketch after this list).
  • Applying CW Attack to achieve desired outcomes.
  • Discussion on the effectiveness of CW and situations where it may be preferred.
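
The full CW attack adds binary search over the trade-off constant and other refinements covered in the module; the sketch below is a heavily simplified CW-style L2 optimisation, again assuming PyTorch, with the constant c, step count, and learning rate chosen arbitrarily for illustration:

    # Heavily simplified CW-style L2 sketch (illustrative only).
    import torch

    def cw_l2_attack(model, image, label, c=1.0, steps=100, lr=0.01):
        # Optimise in tanh space so pixel values stay in [0, 1] without clipping.
        w = torch.atanh((image * 2 - 1).clamp(-0.999, 0.999)).detach().requires_grad_(True)
        optimizer = torch.optim.Adam([w], lr=lr)
        for _ in range(steps):
            x_adv = (torch.tanh(w) + 1) / 2
            logits = model(x_adv)
            true_logit = logits.gather(1, label.unsqueeze(1)).squeeze(1)
            other_logit = logits.scatter(1, label.unsqueeze(1), float("-inf")).max(1).values
            margin = torch.clamp(true_logit - other_logit, min=0)  # > 0 while still correctly classified
            # Trade off L2 distortion against the misclassification margin.
            loss = ((x_adv - image) ** 2).sum() + c * margin.sum()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return ((torch.tanh(w) + 1) / 2).detach()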

Module 6

DeepFool Attack

Covered topics:

  • Introduction to DeepFool, a more efficient adversarial attack method.
  • Implementing DeepFool (see the sketch after this list).
  • Applying DeepFool to achieve desired outcomes.
  • Discussion on the pros and cons of DeepFool, with a focus on efficiency.
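
As a hedged sketch of the core idea (assuming PyTorch and a single-image batch): at each step DeepFool estimates the nearest linearised decision boundary among a few candidate classes and steps just past it. The candidate count, iteration limit, and overshoot below are illustrative assumptions:

    # Simplified DeepFool-style sketch (illustrative only; assumes a batch of one image).
    import torch

    def deepfool_attack(model, image, num_candidates=10, max_iter=50, overshoot=0.02):
        x_adv = image.clone().detach()
        orig_class = model(x_adv).argmax(1).item()
        for _ in range(max_iter):
            x_adv.requires_grad_(True)
            logits = model(x_adv)[0]
            if logits.argmax().item() != orig_class:
                break  # already misclassified
            grad_orig = torch.autograd.grad(logits[orig_class], x_adv, retain_graph=True)[0]
            best_pert, best_dist = None, float("inf")
            for k in logits.topk(num_candidates).indices.tolist():
                if k == orig_class:
                    continue
                grad_k = torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
                w = grad_k - grad_orig
                f = (logits[k] - logits[orig_class]).item()
                dist = abs(f) / (w.norm() + 1e-8)   # distance to the linearised boundary for class k
                if dist < best_dist:
                    best_dist, best_pert = dist, dist * w / (w.norm() + 1e-8)
            # Step slightly past the nearest boundary.
            x_adv = (x_adv + (1 + overshoot) * best_pert).clamp(0, 1).detach()
        return x_adv.detach()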

Module 7

PGD (Projected Gradient Descent)

Covered topics:

  • Introduction to PGD, a stronger adversarial attack.
  • Implementing PGD (see the sketch after this list).
  • Applying PGD to achieve desired outcomes.
  • Discussion on the strengths and weaknesses of PGD.
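
A minimal PGD sketch under the same PyTorch assumption is shown below; compared with the BIM sketch it adds a random start inside the epsilon-ball, which is what makes the attack stronger in practice (epsilon, alpha, and step count are illustrative):

    # Minimal PGD sketch: random start plus projected iterative steps (illustrative only).
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, image, label, epsilon=0.03, alpha=0.007, steps=20):
        x_orig = image.clone().detach()
        # Random initialisation inside the L-infinity epsilon-ball.
        x_adv = (x_orig + torch.empty_like(x_orig).uniform_(-epsilon, epsilon)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), label)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Project back into the epsilon-ball and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x_orig - epsilon), x_orig + epsilon).clamp(0, 1)
        return x_adv.detach()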

Module 8

Summary

Covered topics:

  • Summary and comparison of all the techniques covered: FGSM, BIM, CW, DeepFool, and PGD.
  • Discussion of scenarios where these techniques can be applied, including a case study.
  • Explanation of the concept of transferability in adversarial attacks.
  • Best practices for protecting AI models from these types of attacks (a brief defence sketch follows this list).
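
As one example of such a best practice, the sketch below shows a single adversarial-training step, assuming PyTorch: each batch is trained on a mix of clean and FGSM-perturbed inputs. The 50/50 weighting and epsilon are illustrative assumptions, not a course recommendation:

    # Sketch of one adversarial-training step (illustrative only).
    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
        # Generate FGSM adversarial examples on the fly for this batch.
        images_req = images.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(images_req), labels), images_req)[0]
        adv = (images + epsilon * grad.sign()).clamp(0, 1).detach()

        # Train on a 50/50 mix of clean and adversarial inputs.
        optimizer.zero_grad()
        loss = 0.5 * F.cross_entropy(model(images), labels) + 0.5 * F.cross_entropy(model(adv), labels)
        loss.backward()
        optimizer.step()
        return loss.item()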

QUESTIONS? 

If you have any questions, please contact our eLearning Manager at [email protected].

