Young people face increasing risks in a future fuelled by artificial intelligence, and the rapid rise of tools like generative AI exposes them to serious new threats, according to recent Canadian reports and surveys examining AI and the growing number of young people who encounter or use it.
Young users, aged 16 to 19, have the most to gain and the most to lose in the AI-driven future, say coordinators of a survey and report called Safeguarding Tomorrow’s Data Landscape. Young “digital natives” are constantly connected – and constantly at risk.
Another research report, called (Gen)eration AI, warns that new generative AI (genAI) capabilities pose unprecedented privacy risks and can increase young people’s chances of mental illness, addiction, and other harms.

New reports on the use of artificial intelligence propose a new digital and algorithmic literacy among young Canadians as a way to reduce risks associated with AI. AI-generated image.
The reports can be applauded for proposing a new digital and algorithmic literacy among young Canadians, for raising awareness about privacy rights and data governance in the AI age, and for empowering people of all ages to take control of their personal information and act responsibly on a complicated – and sometimes controversial – data landscape.
Educators, parents, AI program developers, academic researchers, policy analysts, and young digital citizens between 16 and 19 participated in the research initiative, Safeguarding Tomorrow’s Data Landscape: Young Digital Citizens’ Perspectives on Privacy within AI Systems.
By collaborating so widely, wrote Dr. Ajay Shrestha, the initiative’s Principal Investigator and computer science professor at Vancouver Island University (VIU), “we can ensure that AI technologies empower young digital citizens without compromising their right to privacy and agency in the digital age.”
The resulting report and an accompanying handbook were released last month; the project was funded by the Office of the Privacy Commissioner of Canada.

Young digital citizens are often the first to adopt the latest AI-driven apps, so they can have the most to gain – and the most to lose. AI-generated image.
The documents lay out a proposed series of best practices and step-by-step guidance to protect that privacy and agency, starting with an understanding of what personal data is: much more than a name or address. Daily digital activities like emails, web browsing, online shopping, social media posting and more all generate valuable personal data – value that is worth protecting for the owner, and worth harvesting for an AI company.
Informed consent means fully understanding how and why our data has value, with a clear explanation of why it is collected and how it will be used. User agreements and privacy policies covering online businesses and third-party data collection must be more transparent, and their terms more accessible.
Likewise, to reduce the risks of unnecessary data sharing, companies and AI tools should collect only the data needed to achieve a specific purpose.
That purpose – and the data that’s collected, used, analyzed and algorithmically processed – must be clearly defined and disclosed if young people are to be safe and secure online.
“For young digital citizens, who are often the first to adopt the latest AI-driven apps, privacy has never felt more personal,” said Dr. Shrestha.
In remarks not directly tied to the project, the professor has also said it’s important to talk with young people about AI and ethical issues, and to address the misuse of AI to create disinformation or misrepresentation – fake images, fraudulent text, online deception and false narratives meant to deceive or manipulate.
Another academic research paper looking into artificial intelligence developments says young people are at risk because there is not enough information about how AI companies and platforms process the data they collect from youth, and because there is no specific legislation in place to protect minors’ data on genAI platforms (genAI being described as a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music).
According to researchers at the Dais at Toronto Metropolitan University (TMU), realistic, human-like interactions with generative AI (genAI) tools can lead youth to trust the tools and share even more personal information about their lives, behaviour and relationships – putting them and their data at even greater risk of manipulation and exploitation.
In (Gen)eration AI: Safeguarding youth privacy in the age of generative artificial intelligence, findings show that human-like relationships with AI chatbots can increase feelings of depression and poor mental health, and that AI-fuelled bullying can increase in frequency and emotional impact through peer-generated content.
These findings echo those from an international study into the effects of AI on young people that differentiates between technological, procedural impacts and personal, emotional ones:
“While much attention has been given to concerns about privacy and security, a deeper risk lies in the algorithmic shaping of human identity and consciousness among the younger generation…[A] subtle yet profound transformation is occurring in the way children and young people develop emotionally, socially, and cognitively,” wrote Tecnológico de Monterrey (Mexico) Assistant Professor Roberto Tobais.
Even before the current explosion in AI applications, the Canadian media literacy advocacy group MediaSmarts conducted focus groups with youth ages 13 to 17 to gain insight into how young Canadians understand the relationships between artificial intelligence (AI), algorithms, privacy, and data protection.
Young people said they want the tools and the ability to better understand how algorithms work, and how artificial intelligence and machine learning impact their lives. Transparency about how personal data is collected and used is important, they said, but greater protection online – especially from future unintended consequences of AI and of data sharing or selling – was even more important.
Youth in the focus groups asked for better and stronger reporting features to hold platforms accountable for taking corrective action when harmful or discriminatory content is found online.
Another Canadian initiative, The Algorithmic Literacy Project, anticipated the AI concerns detailed in the more recent VIU and TMU reports by stating simply, “AI comes with baggage.”
Project authors go on to explain and explore AI issues similar to those raised in the other academic reports: bias, consent, training data sources, authenticity and fake content.
Unfortunately, fake content and AI risks can cut both ways.
Students and young people need to be wary of genAI being used to manipulate, deceive or trick them, sure. But over-reliance on AI programs to complete school assignments shows that AI can also be used by young people to trick the rest of us (including teachers, professors and educators).
At the University of Waterloo, for example, a recent Canadian Computing Competition (CCC) was cancelled due to widespread cheating. Reports indicate many students were suspected of using AI-generated content to complete their submissions. It’s one of the latest in a series of reports about AI and academic cheating.
So there’s another risk to using AI – getting caught!
# # #

Hoping to make AI technology more inclusive and beneficial to society, students participate in an AI awareness program at Princeton University. The program is sponsored by the AI4ALL nonprofit, with contributions from other academic departments and offices. Organizers want to see “the benefits of AI go to all instead of the few” and they say that starts with education.
-30-