Technology, Artificial Intelligence | August 8, 202

New AI model reduces bias and enhances trust in decision-making, revolutionizing healthcare outcomes

Researchers at the University of Waterloo have developed a new explainable artificial intelligence (AI) model, called Pattern Discovery and Disentanglement (PDD), to address the prevalent issue of bias and improve trust and accuracy in machine-learning-generated decision-making and knowledge organization.

Traditional machine learning models often yield biased results, favoring larger population groups or being influenced by unknown factors, which can lead to misdiagnoses and inequitable healthcare outcomes for specific patient groups. The PDD model aims to eliminate these barriers by disentangling complex patterns in data and relating them to specific underlying causes unaffected by anomalies and mislabeled instances. In doing so, it not only strengthens trust and reliability in explainable AI (XAI) but also enables the discovery of new and rare patterns in datasets, helping researchers and practitioners detect mislabeled instances and anomalies in machine learning.

The PDD model has already demonstrated its potential by predicting patients' medical outcomes from their clinical records, providing healthcare professionals with more reliable diagnoses supported by rigorous statistics and explainable patterns, which in turn support better treatment recommendations across diseases and stages.

Recognizing its industrial significance, PDD has received an NSERC Idea-to-Innovation Grant of $124,648 and is being commercialized through the Waterloo Commercialization Office.
Source: University of Waterloo