I hope we don't do to trust what advertising has done to love
TL;DR
This paper explores how to define and measure trust in AI, proposing "trust pillars" to keep the term from being trivialized the way advertising has trivialized the word "love."
Key contributions
- Examines the risk of trivializing "trust" in AI, similar to how advertising uses "love."
- Proposes "trust pillars" to enable actionable and measurable discussions about AI trust.
- Suggests that the explicit interfaces of agentic AI systems can be turned into "trust vectors" for greater transparency.
- Aims to foster a cross-disciplinary conversation on AI trust, involving civil society.
Why it matters
This paper addresses the need for a robust framework for discussing and measuring trust in AI, a need made more pressing by the rise of agentic systems. By proposing "trust pillars" and "trust vectors," it aims to keep the concept of trust from being diluted and to foster actionable dialogue across computing, other disciplines, and civil society.
Original Abstract
Advertising uses love to sell stuff, like nylons. It also uses the word "love" in trivialising ways -- do you "love" your oven? When I hear about trust in the context of AI, especially agentic, I hope we don't do to trust what advertising has done to love. But what is trust? Can we discuss it in actionable and measurable ways in the context of AI? Thus I suggest a number of "trust pillars", hoping to start a communal conversation, across computing and beyond, to civil society. I also suggest that agentic systems may be a blessing in disguise, as we may be able to turn their explicit interfaces into "trust vectors".