Recently, I was exploring GitHub for AI-related repositories and came across a fascinating profile: HugoLB0. Curious, I decided to learn more about the author, and what I found was impressive. Hugo has an extensive background in AI security, coding, and freelancing. Not only does he contribute to notable AI projects, but he also runs his own startup, Ntropy, which offers AI-driven tools for analyzing and transforming transactional data.
There’s an interesting backstory about how he began his coding journey, but I’ll let him share that during the live session—it’s quite a unique one!
And if that’s not impressive enough, he’s only 19 years old...
I invite you to join the interview with your instructor, Hugo Le Belzic, as part of our course: Adversarial AI Attacks: Breach, Defend & Fortify Image Classifiers.
Jakub from Hakin9: What was your most interesting discovery related to breaking AI model security that changed your approach to technology?
Hugo Le Belzic: One of the most fascinating discoveries for me was realizing how easily an image model's output can be manipulated. This revelation opened up a myriad of possibilities for where these kinds of attacks could be performed and what could be achieved with them. However, it's crucial to always consider the ethical implications of such actions. The ease of manipulation is intriguing, but it also requires a responsible approach to ensure that these techniques are used appropriately.
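To make that point concrete for readers, here is a minimal illustrative sketch of the kind of manipulation Hugo describes, using the well-known Fast Gradient Sign Method (FGSM) in PyTorch. This is not Hugo's own technique or code from the workshop; the model choice (ResNet-18), the image path, and the epsilon value are assumptions made purely for demonstration.

```python
# Illustrative FGSM sketch: a tiny, targeted perturbation can change an
# image classifier's prediction. Requires a recent torchvision; the image
# path "example.jpg" and epsilon value are hypothetical.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Standard pretrained classifier in evaluation mode.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Basic preprocessing (ImageNet normalization omitted for brevity).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
x = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

# Original prediction.
logits = model(x)
label = logits.argmax(dim=1)

# FGSM: one step in the sign of the gradient that increases the loss
# for the current label, bounded by a small perturbation budget epsilon.
loss = F.cross_entropy(logits, label)
loss.backward()
epsilon = 0.03  # small enough to be nearly invisible to a human
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)

# The perturbed image often receives a different label despite looking
# essentially identical to the original.
with torch.no_grad():
    adv_label = model(x_adv).argmax(dim=1)
print("original:", label.item(), "adversarial:", adv_label.item())
```

Even this single-step attack, one of the simplest in the literature, is often enough to flip a prediction, which is exactly why the ease of manipulation Hugo mentions demands an ethical, responsible approach.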
Jakub from Hakin9: Are there any lesser-known techniques or approaches to AI jailbreaking...
Author
- Entrepreneur and AI professional. He is building an AI safety startup, leveraging his deep expertise in adversarial attacks and model optimization while maintaining a strong commitment to ethical standards in technology. He runs the 40-step online live workshop "PIXEL TRICKERY: Outsmarting AI in image classification Models".