IBM has just published a new patent that describes a way to measure AI trustworthiness…
Insights:
– IBM focuses on UX to help users understand and trust AI predictions.
– Their method emphasizes transparency and accountability for user confidence.
– Through continuous evaluation and improvement of the UX, IBM aims to make AI systems more trustworthy and more widely accepted.
IBM claims their method will:
– Identify the UX elements that present trustworthy-AI information, evaluate them, and substitute alternatives where needed to improve the UX.
– Check the AI model’s accuracy, with a focus on trust, so users can rely on its predictions.
– Ensure transparency in the UX, so users understand how predictions are made.
– Analyze the AI’s code to better interpret the terms shown in the UX, improving user trust.
– Rate multiple AI trust factors such as accuracy, explainability, transparency, and fairness to show overall trustworthiness in the UX (a rough aggregation sketch follows below).
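The patent does not spell out how the individual trust factors are combined, so here is a minimal, hypothetical sketch of one plausible approach: normalize each factor to [0, 1] and take a weighted average. The factor names mirror the list above; the weights and the `TrustFactors`/`overall_trust_score` names are illustrative assumptions, not IBM's method.

```python
from dataclasses import dataclass

@dataclass
class TrustFactors:
    """Hypothetical per-model trust scores, each normalized to [0, 1]."""
    accuracy: float
    explainability: float
    transparency: float
    fairness: float

def overall_trust_score(factors: TrustFactors,
                        weights: dict[str, float] | None = None) -> float:
    """Combine individual trust factors into one score via a weighted average.

    The weights here are purely illustrative; the patent does not specify
    how (or whether) the factors are weighted.
    """
    weights = weights or {"accuracy": 0.4, "explainability": 0.2,
                          "transparency": 0.2, "fairness": 0.2}
    total_weight = sum(weights.values())
    score = sum(getattr(factors, name) * w for name, w in weights.items())
    return score / total_weight

# Example: a model that is accurate but only partially explainable.
print(overall_trust_score(TrustFactors(accuracy=0.92, explainability=0.6,
                                       transparency=0.7, fairness=0.8)))
```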
Example:
For instance, if the AI predicts the winner of a tennis match, the system analyzes the terms shown in the UX to explain why, such as recent wins, past Grand Slam performances, or success rates on specific court surfaces. This justification analysis helps users understand the basis for a prediction, using techniques like BERT (Bidirectional Encoder Representations from Transformers) for accurate interpretation.
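To make the BERT step concrete, below is a minimal sketch (not IBM's actual pipeline) of how justification terms surfaced in the UX could be scored for relevance against the prediction's context using off-the-shelf BERT embeddings. The model name (`bert-base-uncased`), the mean-pooling choice, and the example terms are all assumptions made for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hypothetical justification terms shown in the UX for a tennis prediction.
terms = ["recent match wins",
         "past Grand Slam performance",
         "win rate on clay courts"]
prediction_context = ("Player A is predicted to win based on "
                      "recent form and surface record.")

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Mean-pool BERT's last hidden states into a single sentence vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

# Rank each candidate justification by cosine similarity to the prediction context.
context_vec = embed(prediction_context)
for term in terms:
    score = torch.cosine_similarity(embed(term), context_vec, dim=0).item()
    print(f"{term}: relevance ~ {score:.2f}")
```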
#IBM #AI #Transparency #patent
The views expressed in this post are my own. The views within any of my posts or articles are not those of my employer or the employers of any contributing experts.