BC Tribunal Confirms Companies Remain Liable for AI Chatbot-Created Information

Torkin Manes LegalPoint

On February 14, 2024, the British Columbia Civil Resolution Tribunal (the “Tribunal”) found Air Canada liable for misinformation given to a consumer by the airline’s artificial intelligence chatbot (“AI chatbot”).

The decision, Moffatt v. Air Canada, generated international headlines, with reports spanning from the Washington Post in the United States to the BBC in the United Kingdom.[1] While AI brings economic and functional benefits, the decision makes clear that companies remain liable when inaccurate information is provided to consumers through AI tools.

Background

AI chatbots are automated programs that use AI and related technologies, such as natural language processing, to simulate conversation and provide information in response to a person’s prompts. Common virtual assistants like Alexa and Siri are well-known examples.[2]

Increasingly, AI chatbots are used in commerce. According to a 2024 report from AIMultiple Research,[3] AI chatbots have saved organizations around US$0.70 per interaction, and chatbot industry revenue is estimated to reach around US$1.3 billion by 2025. Today, around half of all large companies are considering investing in these tools. Air Canada’s AI chatbot is one example of their use in a commercial setting. However, as the Tribunal’s decision shows, these tools do not come without risks.

The Tribunal’s Decision in Moffatt

The Tribunal’s decision arose from a complaint by Mr. Moffatt (“Moffatt”), who wanted to purchase an Air Canada plane ticket to fly to Ontario, where his grandmother had recently passed away. On the airline’s website, Moffatt engaged with an AI chatbot, which responded that reduced bereavement fares were available and that a passenger could obtain the lower bereavement rate retroactively by submitting a claim through an online form within 90 days of the ticket’s issuance.[4]

Unfortunately, the AI chatbot’s answer was incorrect. The reference to “bereavement fares” was hyperlinked to a separate Air Canada webpage titled “Bereavement travel”, which contained additional information on Air Canada’s bereavement policy, including that the policy does not apply to requests for bereavement consideration made after travel has been completed. Accordingly, when Moffatt applied for a partial refund of his fare, Air Canada refused. After a series of interactions, Air Canada admitted that the AI chatbot had provided “misleading words.” The representative pointed to the AI chatbot’s link to the bereavement travel webpage and said Air Canada had noted the issue so it could update the chatbot.

Moffatt then sued Air Canada for the losses he incurred by relying on its AI chatbot, a claim the Tribunal characterized as an allegation of negligent misrepresentation. Air Canada argued that the correct information could have been found elsewhere on its website and that it could not be liable for the AI chatbot’s responses.[5] Strangely, Air Canada even argued that the AI chatbot was a separate legal entity responsible for its own actions.

The Tribunal ultimately found in favour of Moffatt. While the AI chatbot had an interactive component, the Tribunal found that the program was simply part of Air Canada’s website, and Air Canada remained responsible for all the information on its website, whether it came from a static page or a chatbot. As a service provider, Air Canada owed Moffatt a duty of care, which it breached through the misrepresentation. Air Canada could not separate itself from an AI chatbot integrated into its own website. Air Canada was negligent because it did not take reasonable care to ensure that its AI chatbot provided accurate information, and it did not matter that the correct information existed elsewhere: a consumer cannot reasonably be expected to double-check information found on one part of a website against another.[6]

The Tribunal awarded Moffatt approximately $650 in damages, plus pre-judgment interest and Tribunal filing fees.

Takeaways

While the Tribunal is not a court, its decision in Moffatt serves as an important reminder that companies remain liable for the actions of their AI tools. In addition to training AI systems to deliver accurate results, companies that intend to use AI tools should establish and implement adequate internal policies that protect consumer privacy and clearly warn consumers of any limitations.

For more information, please contact Lisa R. Lifshitz and Roland Hung of Torkin Manes’ Technology and Privacy & Data Management Groups.

The authors would like to acknowledge Torkin Manes Articling Student Herman Wong for his assistance in drafting this bulletin.


[1] Kyle Melnick, “Air Canada chatbot promised a discount. Now the airline has to pay it.” (18 February 2024), online: <https://www.washingtonpost.com/travel/2024/02/18/air-canada-airline-chatbot-ruling/>; Maria Yagoda, “Airline held liable for its chatbot giving passenger bad advice – what this means for travellers” (23 February 2024), online: <https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know>.
[2] IBM, “What is a chatbot?”, online: <https://www.ibm.com/topics/chatbots>.
[3] AIMultiple, “90+ Chatbot/Conversational AI Statistics in 2024” (5 February 2024), online: <https://research.aimultiple.com/chatbot-stats/>.
[4] Moffatt v. Air Canada, 2024 BCCRT 149 at paras. 13-16 [Moffatt].
[5] Ibid. at paras. 18-25.
[6] Ibid. at paras. 26-32.