What Are AI Hallucinations and How to Prevent Them
While some people find them amusing, AI hallucinations can be dangerous. This is a big reason why prevention should be a priority.
The data set used to train an ML model is often labeled by humans, who may introduce bias during the tagging process. For example, in a system that predicts the success rate of job candidates, if the labeling was done by a person who was biased (intentionally or unintentionally), the model will learn the bias present in the labeled data it receives, as the sketch below illustrates.
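A minimal sketch of this effect, assuming scikit-learn and NumPy are available; the feature names, the hiring scenario, and the labeling rule are hypothetical and exist only to show how a biased labeler's decisions become the model's predictions.

```python
# Hypothetical example: two equally skilled groups of candidates, but the
# human labeler who assigned the "success" labels penalized one group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

group = rng.integers(0, 2, size=n)               # proxy attribute: 0 or 1
skill = rng.normal(loc=0.0, scale=1.0, size=n)   # identical skill distribution

# Biased labeler: subtracts a penalty for group 1 before deciding "success".
biased_score = skill - 0.8 * group
label = (biased_score + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# At identical skill, the model now predicts a lower success probability
# for group 1: it has learned the labeler's bias, not candidate ability.
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])
```

Nothing about the candidates' actual ability differs between the two groups; the gap in predicted probabilities comes entirely from the labels, which is why auditing the labeling process is as important as auditing the model itself.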