
Igal Leikin

Igal is a Senior Software Engineer at Aporia.

Igal’s articles

GenAI Academy · Risk & Compliance

What Are LLM Jailbreak Attacks?

LLM Jailbreaks involve creating specific prompts designed to exploit loopholes or weaknesses in the language models’ operational guidelines, bypassing internal...

Igal Leikin
6 min read