AI is an everyday part of our world, used for everything from online research to companionship. With an estimated 378 million users in 2025, the number of people plugged into these tools has grown exponentially since the launch of OpenAI’s GPT-3.5 model in November 2022. As our reliance on AI grows, we put increasing stock in the results these tools provide – making important decisions, dissecting our inner struggles, and learning new skills.
With higher stakes come higher risks, leading users and regulators alike to ask a foundational question: can AI be trusted? 75% of Canadians remain unsure, suggesting that “AI tools lack the emotion and empathy required to make good decisions”. Far from a simple inquiry, answering this question requires us to examine the very nature of trust itself.
Putting our trust in someone can leave us exposed
Let’s start with the fundamentals – what is trust? According to Psychology Today, trust is “the belief that someone or something can be relied on to do what they say they will”. A simple example breaks it down: you might trust your friend to arrive on time for lunch. You know they’re capable (physically) of getting there, they’re incentivized (socially) to arrive on time, and they could be held accountable (morally) if they failed to show up.
Trust is what we build relationships on, giving us the ability to work together.
Putting our trust in someone can also leave us exposed. When our trust is broken, we can be left in a bad situation, lacking resources, support, and a sense of safety.
We need to see AI for what it is
So can AI bear the burden of trust? In our earlier example, we highlighted that our friend was physically capable of, socially incentivized to, and morally accountable for meeting our expectations. A human friend knows what it means to do right by us or leave us hanging. People are “moral agents” – they have the capacity to understand right from wrong and the free will to choose between the two.
Even a highly sophisticated AI lacks the capacity to understand the difference between moral right and wrong. A machine’s decisions are based firmly on the instructions handed down to it by the humans who designed it.
We need to see AI for what it is – a machine controlled by people. When AI produces incorrect answers or delivers unsatisfactory results, it’s the people who create and use these tools who must answer for those shortcomings.
Teams need to understand that it’s their job to check the results given to them by AI tools
Teams need to understand that it’s their job to check the results given to them by AI tools. Verifying information and setting realistic expectations are key to integrating this technology into our workflows without opening ourselves up to unnecessary risk.
Building AI policy for your organization? Get more insight into what your team needs to know about AI tools by exploring membership today.