Over the last few months, there’s been a lot of hype surrounding all the new text-to-image models, such as DALL-E 2, Imagen, and Make-A-Scene. These are exciting times, as we’re seeing vast advancements in next-gen technologies and imagining how they will impact our lives in the years to come.
However, alongside this excitement, when this advanced technology is not used responsibly, or the right guardrails aren’t in place, we see major issues surface that impact business continuity and society at large. This was exactly what happened in the Zillow and Unity Software cases, which saw massive financial losses, extended layoffs, and temporary loss of trust, all due to a technological snafu that went unnoticed. The most recent issue I want to shine a light on comes from Equifax, which recently reported that it issued lenders wrong credit scores on millions of US consumers.
Equifax is one of the largest consumer credit reporting agencies, forming the “Big Three” together with Experian and TransUnion. In other words, Equifax collects account information from various creditors and sells its customers reports on the financial history of different companies and individuals.
According to Freddie Mac’s June 1st alert to its clients, Equifax informed them that about 12% of credit scores released between March 17th and April 6th were incorrect. Equifax cited a “coding issue” introduced while making changes to its servers as the culprit behind the credit score errors, which led to a nearly 5% drop in its share price.
Due to this technological snafu, the real world felt the impact: in some cases, the erroneous reports were off by at least 25 points, affecting nearly 300K consumers. As a result, many would-be borrowers were wrongfully denied loans or issued loans at a much higher rate than their true credit score would warrant.
To extract economic value, players in the financial market ingest massive amounts of data to support a decision-making process that is robust, efficient, and, most importantly, fast. This is why the financial industry has been one of the biggest and earliest adopters of machine learning models for its core business.
As one of the leading consumer credit reporting agencies, Equifax supplies data to many organizations that leverage AI/ML. This latest issue may have fed malformed input data into their pipelines, with real-world implications: businesses that ingested the faulty Equifax data could be looking at skewed results and, subsequently, bad output.
That is a big no-no for any business looking to make a profit and create a better world by utilizing these game-changing technologies. This example, like many others, highlights the need for reliable monitoring solutions to ensure these impactful models deliver their intended value.
This is not the first time we’ve seen an example showcasing the essential need for businesses to monitor their model performance. Again, I highlight the Zillow and Unity Software cases mentioned earlier, which experienced losses of nearly $500 million and extensive layoffs.
The Equifax case also shows the importance of another vital aspect of validating model behavior: the need to monitor the model’s inputs for change. In a perfect world, we could assume that the inputs we use to generate the features fed to our models are correct and that their behavior remains stable over time. While this would be ideal, data scientists and ML engineers know it is far from reality. When a model is deployed to production, continuous tracking and monitoring are key to detecting errors and drift in its behavior. If your business trains its own ML models or relies on vulnerable external data – just as in this case with Equifax’s incorrect credit scores – it is imperative to use model monitoring so that any fault in your ML pipeline is surfaced to the relevant stakeholders and dealt with before it harms your business or users.
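As a rough illustration of what monitoring inputs for change can look like, the sketch below compares a recent production sample of one numeric input (think: a credit score) against a reference sample from training time, using a two-sample Kolmogorov–Smirnov test. The data, names, and threshold here are illustrative assumptions for the sake of the example, not any vendor’s implementation.

```python
# Minimal input-drift check, assuming we keep a reference (training-time)
# sample and a recent production sample for one numeric feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Reference distribution: credit scores seen at training time (simulated).
reference_scores = rng.normal(loc=700, scale=50, size=5_000)

# Production distribution: a systematic error has shifted scores down ~25 points.
production_scores = rng.normal(loc=675, scale=50, size=5_000)

def detect_drift(reference, production, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
    production inputs no longer match the reference distribution."""
    statistic, p_value = ks_2samp(reference, production)
    return p_value < alpha, statistic

drifted, stat = detect_drift(reference_scores, production_scores)
print(f"drift detected: {drifted} (KS statistic = {stat:.3f})")
```

A check like this, run on a schedule against each critical input, would have flagged a systematic 25-point shift long before downstream lending decisions piled up.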
As mentioned above, this issue is just one of many that can cause drift in the inputs from which a model’s features are derived.
There are two main approaches to detecting these kinds of cases: monitoring the distribution of the model’s inputs for drift, and monitoring the model’s outputs for shifts in prediction behavior.

In this specific case, monitoring the rate of positive predictions could have raised an alert on the change in the model’s behavior, which was potentially denying legitimate loan requests. In addition to these two approaches, it is always best practice to monitor the behavior of the individual features fed into the model as well.
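On the output side, a rolling check on the approval rate can be sketched as below. The expected rate, tolerance band, and window size are illustrative assumptions; a real deployment would calibrate these against historical behavior.

```python
# Hedged sketch of output monitoring: track the share of positive ("approve")
# predictions over a rolling window and alert when it leaves an expected band.
from collections import deque

class ApprovalRateMonitor:
    def __init__(self, expected_rate, tolerance=0.05, window=1_000):
        self.expected_rate = expected_rate
        self.tolerance = tolerance
        self.predictions = deque(maxlen=window)

    def record(self, approved: bool) -> bool:
        """Record one prediction; return True if the rolling approval
        rate has drifted outside the expected band."""
        self.predictions.append(1 if approved else 0)
        if len(self.predictions) < self.predictions.maxlen:
            return False  # not enough data yet to judge
        rate = sum(self.predictions) / len(self.predictions)
        return abs(rate - self.expected_rate) > self.tolerance

monitor = ApprovalRateMonitor(expected_rate=0.70)
# Simulate a period where faulty input scores drive approvals down to ~55%.
alerts = [monitor.record(i % 100 < 55) for i in range(2_000)]
print("alert raised:", any(alerts))
```

A drop like the simulated one – approvals falling from 70% to 55% – is exactly the kind of behavioral shift that wrongly lowered credit scores would produce.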
Aporia’s customizable ML monitoring solution is used by organizations around the world to ensure their models are working as intended while offering data science and ML teams full customization to fit their specific use cases. Book a demo with our team and start monitoring in minutes.