Canadians lack AI privacy awareness

Canadians are embracing AI tools at work without understanding the privacy risks, with 95 per cent unaware of what to consider when using systems such as ChatGPT and Copilot, according to new findings from NordVPN.
The results, drawn from the National Privacy Test conducted throughout 2025, suggest that millions of workers may be exposing sensitive personal and company data as generative AI becomes part of daily routines.
“The rapid adoption of AI in the workplace has outpaced our understanding of its risks. People are typing confidential information into AI tools without realizing where that data goes, how it’s stored, or who might have access to it,” said Marijus Briedis, chief technology officer at NordVPN. “Unlike a conversation with a colleague, interactions with AI tools can be logged, analyzed, and potentially used to train future models. When employees share client details, internal strategies, or personal information with AI assistants, they may be creating privacy vulnerabilities they never intended.”
The company also flagged rising exposure to AI-powered cybercrime. About a quarter of Canadians, 26 per cent, cannot correctly identify common AI-enabled scams, including deepfakes and voice cloning. As synthetic media extends from cloned voices to fabricated videos with realistic movement, these scams are becoming harder to spot. NordVPN pointed to earlier research showing that 33 per cent of respondents had suffered online scams, with 49 per cent of those victims losing money.
Briedis said AI has lowered the barrier to entry for bad actors, making it easier to produce convincing phishing emails, clone voices or build fake retail sites that mimic legitimate brands. The firm expects AI-driven attacks to be among the key cybersecurity risks in 2026 as criminals adopt more sophisticated methods.
NordVPN urged basic safeguards for employees using AI at work, starting with a simple rule: Do not input confidential company data, client information or personal details into AI assistants. The company advises workers to assume conversations with AI can be logged and retained, to confirm their employer’s AI usage policies before applying generative tools to tasks and to understand that training data practices may differ across providers.
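One practical way to follow the "don't paste confidential data" rule is to scrub obvious personal identifiers from text before it ever reaches an AI tool. The sketch below is a hypothetical illustration, not part of NordVPN's advice: the regex patterns are simplified examples (email, North American phone number, Canadian SIN format) and would not catch every kind of sensitive data.

```python
import re

# Simplified, illustrative patterns -- a real PII filter would need far
# broader coverage (names, addresses, account numbers, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\d{3}[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SIN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Canadian SIN format
}

def redact(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Example: strip contact details from a note before pasting it into a chatbot.
print(redact("Email jane.doe@example.com or call 416-555-0199."))
```

Running text through a filter like this before submitting a prompt reduces, but does not eliminate, the risk of leaking identifiers into logs or training data.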
For consumers facing AI-enabled scams, NordVPN advised caution with unexpected calls or messages, even those that sound familiar, and suggested families adopt a code word for emergencies. Requests for money or sensitive information should be verified through a separate, trusted channel. The company also warned that convincing fake videos and images are now possible, so visual evidence alone may not be reliable. Keeping software updated and using security tools correctly remains part of the baseline defence.
“Training influences how employees respond to situations they face at work,” Briedis said. “When compliance training reflects real workplace scenarios, it helps people recognize misconduct, understand what steps they can take, and feel more comfortable speaking up. That kind of practical training builds stronger trust across the organization.”
Image credit: Depositphotos.com