As AI technology advances and its adoption among businesses grows, certain issues have come to light, one of them being the GenAI Chasm. Let's explore what exactly the GenAI Chasm is and how businesses investing in AI can cross it confidently, without relying on prompt engineering.
Liran Hason, CEO of Aporia, coined this term after more than two years of speaking with potential customers about their GenAI products. He noticed that nearly every business trying to implement AI apps struggles to get past the pilot phase; that gap between pilot and production is the chasm.
Consider, for example, a business XYZ trying to cross the GenAI chasm.
Hallucinations, prompt injection risks, compliance issues, and unintended behavior are some of the main reasons that only a small percentage of apps can actually go live. Releasing an app with these issues risks damaging brand reputation, exposing sensitive information, and losing customer trust.
GenAI is an incredible tool that businesses can use to enhance their productivity and engagement with customers. However, when an app produces hallucinations and behaves incorrectly, it will most likely never go live. Crossing this chasm to get AI apps into production is a difficulty almost every business investing in AI struggles with, but there is a solution.
One proven way to help businesses cross the chasm and release more AI apps with confidence is to implement guardrails that sit between the LLM and the user. Every prompt coming in from the user and every response going out from the LLM passes through these guardrails, ensuring that hallucinations are intercepted, prompt injections are blocked, and the app behaves as it should.
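To make the pattern concrete, here is a minimal sketch of a guardrail layer in Python. The function names, keyword rules, and the generic `llm_complete` callable are illustrative assumptions, not Aporia's implementation; production guardrails use dedicated detection models rather than keyword matching.

```python
import re

# Naive patterns that hint at prompt injection; real guardrails use trained detectors.
BLOCKED_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]

def check_prompt(user_prompt: str) -> bool:
    """Return True if the incoming user prompt passes the guardrail policies."""
    lowered = user_prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

def check_response(response: str) -> bool:
    """Stand-in for hallucination and policy checks on the LLM output."""
    # Placeholder rule: block responses that leak text marked confidential.
    return "confidential" not in response.lower()

def guarded_completion(user_prompt: str, llm_complete) -> str:
    """Sit between the user and the LLM: vet the prompt, call the model, vet the response."""
    if not check_prompt(user_prompt):
        return "Sorry, I can't help with that request."    # block prompt injection
    response = llm_complete(user_prompt)                    # call the underlying LLM
    if not check_response(response):
        return "I don't have a reliable answer for that."   # intercept a bad response
    return response
```

The wrapping code is the easy part; the value of a guardrails product lies in the quality of the detectors that sit behind these checks.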
While prompt engineering is currently the most common way of mitigating hallucinations, it is not a long-term solution. Studies have shown that adding more words to the system prompt decreases accuracy and makes the model more susceptible to hallucinations, so piling on prompt instructions to catch inappropriate behavior and incorrect results only worsens the problem.
Chart: app accuracy decreases as more tokens are added to the prompt.
Aporia Guardrails is the preferred way to cross the GenAI chasm. Its out-of-the-box policies intercept, block, and rephrase hallucinations and inappropriate LLM behavior. Simply integrate Aporia Guardrails and safeguard your app in a few minutes.
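To picture what that integration might look like, here is a purely hypothetical sketch of routing an existing OpenAI-style chat call through a guardrails proxy by swapping the base URL. The URL, header name, and project ID are placeholders, not Aporia's actual endpoints or SDK; consult Aporia's documentation for the real integration steps.

```python
# Hypothetical sketch only: the base URL, header name, and project ID below are
# placeholders, not Aporia's real endpoints. The pattern shown is simply routing
# existing LLM calls through a guardrails proxy instead of calling the LLM directly.
from openai import OpenAI

client = OpenAI(
    base_url="https://guardrails.example.com/v1",                # placeholder proxy URL
    default_headers={"X-Guardrails-Project": "my-project-id"},   # placeholder project header
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
)
print(response.choices[0].message.content)
```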
Want to see how it works in real time? Sign up now >