Researchers at Binghamton University, the University of Hawaii, and Clemson University have developed a software tool to automate the evaluation of trust in human/AI conversations.
Currently, evaluating the extent to which artificial intelligence (AI) gains a user's trust is done manually through surveys, interviews, and focus groups.
Despite the billions of lines of code executed by a large language model (LLM) to win a human's trust, "trust" on its own remains a blunt instrument for evaluation.
The new software tool sharpens and automates the identification of trust-building turns in a human/AI conversation, where a conversation is a series of turns: queries from the human and responses from the AI.
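The article does not describe the tool's internals. As a minimal sketch of the concepts involved, the snippet below models a conversation as a list of turns and applies a simple keyword heuristic to flag AI responses that may build trust; the `Turn` dataclass, the cue list, and the `find_trust_building_turns` function are illustrative assumptions, not the researchers' implementation.

```python
# Minimal sketch (assumptions, not the authors' tool): a conversation is a
# list of turns; a naive keyword heuristic flags AI turns that may build trust.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str  # "human" or "ai"
    text: str

# Hypothetical cues; a real tool would likely use a trained classifier, not keywords.
TRUST_CUES = ("i understand", "to be transparent", "here is a source", "you can verify")

def find_trust_building_turns(conversation):
    """Return indices of AI turns whose text contains a trust cue."""
    flagged = []
    for i, turn in enumerate(conversation):
        if turn.speaker == "ai" and any(cue in turn.text.lower() for cue in TRUST_CUES):
            flagged.append(i)
    return flagged

if __name__ == "__main__":
    convo = [
        Turn("human", "Can you explain how you got that answer?"),
        Turn("ai", "To be transparent, here is a source for each step of my reasoning."),
    ]
    print(find_trust_building_turns(convo))  # -> [1]
```

In practice, automating this judgment is what replaces the manual surveys, interviews, and focus groups mentioned above.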
Author's summary: A new software tool automates the evaluation of trust in human/AI conversations.