Functions, Users, and Comparative Analysis
Let’s say that for a given problem we have a large, stable model trained on a lot of historical data – we’ll call it model A. We also build a second, more lightweight model trained on a smaller, more recent slice of the data – it can be of the same model type. We’ll call it model B.
The idea: find the time windows where model B outperforms model A. Since model A is stable and has seen more data than model B, we would normally expect it to perform better. If model B outperforms model A on some window, that may suggest that a concept drift has occurred.
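A minimal sketch of this idea in Python, assuming scikit-learn-style models with a `predict()` method, a timestamp-indexed pandas DataFrame, and accuracy as the comparison metric. The function name `detect_drift_windows` and the `window` and `margin` parameters are illustrative choices, not part of any specific library:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def detect_drift_windows(df, model_a, model_b, feature_cols, label_col,
                         window="7D", margin=0.02):
    """Flag time windows where the recent model (B) beats the stable model (A).

    df must have a DatetimeIndex. `margin` is a small tolerance so that
    noise-level differences between the two models are not flagged.
    """
    flagged = []
    for window_start, chunk in df.groupby(pd.Grouper(freq=window)):
        if chunk.empty:
            continue
        X, y = chunk[feature_cols], chunk[label_col]
        acc_a = accuracy_score(y, model_a.predict(X))
        acc_b = accuracy_score(y, model_b.predict(X))
        # Model A is expected to win; if B wins by more than the margin,
        # this window is a candidate for concept drift.
        if acc_b - acc_a > margin:
            flagged.append((window_start, acc_a, acc_b))
    return flagged
```

In practice you would swap in whatever metric fits the task (AUC, F1, MAE, etc.) and tune the window size and margin to the rate at which labels arrive.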
To learn more about these concepts, read our articles Concept Drift in Machine Learning 101 and 8 Concept Drift Detection Methods.