It’s been a year, and the breakneck pace of development in artificial intelligence continues.
The Hinton Lectures continue, as well, with the second annual event again presenting a public platform for open discussion about AI, and its current and future impacts.
For three days next week, at the John W. H. Bassett Theatre in Toronto and on a worldwide livestream, leading global AI safety experts will describe the need – and the efforts – to keep pace with the ethical, political, and safety challenges posed by Artificial Intelligence and Artificial General Intelligence.
The Hinton Lectures take place November 10-12.

The Hinton Lectures 2025 will feature AI researcher Owain Evans, Nobel Laureate Geoffrey Hinton and Canadian journalist Farah Nasser. Supplied image.
The series was co-founded by Professor Geoffrey Hinton, one of the pioneers of deep learning and a recipient of the 2024 Nobel Prize in Physics, working with Founding Sponsor Global Risk Institute (GRI) and in partnership with the AI Safety Foundation (AISF). The lectures were first held last year in Toronto; this year, Manulife is welcomed as a presenting sponsor.
Professor Hinton brings the perspective of someone who helped shape modern AI yet is now urging caution about its direction: “AI is transforming the world in remarkable ways, and as we build more powerful systems, it’s very important we figure out how to keep them safe,” he said when announcing this year’s lecture.
Actually, it’s a series of three: this year, three keynotes will be delivered by Owain Evans, an internationally renowned lecturer and leading machine learning and AI researcher. Journalist Farah Nasser will moderate discussions.
Evans, selected by a nominating committee composed of five AI leaders, including Hinton, is recognized as an expert in AI Safety and Alignment. He’s Director at the non-profit research group Truthful AI and an Affiliate Researcher at UC Berkeley’s Center for Human-Compatible AI; he’s spent more than a decade studying how to keep increasingly powerful systems acting in ways that reflect human values.
Reached a few days before the November 10 start of the Hinton Lectures, Evans said he had “a little bit of polishing” to do on his remarks before then, but the goal is clear:

Owain Evans, an expert in AI Safety and Alignment, Director at Truthful AI and an Affiliate Researcher at UC Berkeley’s Center for Human-Compatible AI, will deliver The Hinton Lectures in 2025.
“I want to get across that the implications (of AI) are great: that this happened very fast, continues to be very fast, and even now there’s a lot of work to be done in dealing with the risks.
“You might hope the relevant experts have taken care of this (the government, the AI companies, whatever); that all is under control and everything’s sort of well understood.
“That’s not the case.”
While a lot has been invested into making AI powerful and capable and ubiquitous, much less time and money have gone into making AI safe, he said.
Emerging Challenges, Subliminal Problems
The phrase “emergent misalignment” is being used to describe how quickly and unexpectedly AI systems can drift away from their intended behaviour. Explanations are few and far between.
Evans’ own research at Truthful AI has uncovered what he calls “subliminal learning,” in which certain “skills” and abilities are transferred from one AI model to another. Sometimes the effect is intended, such as training a weaker model to become stronger; but sometimes, Evans said, skills or abilities can be transferred unintentionally, and more can be transferred than meets the eye.
(In the research paper, the transfer of certain math skills from one model to another seems to have been accompanied by a new and unintended fondness for owls!)
“It’s not about models being really smart; it’s something more basic about how they work,” Evans explained. “It’s not the kind of thing our understanding of these networks in general would allow us to predict with confidence.”
That absence of understanding is a general theme that will be woven into his upcoming talks, he added.
“This really is experimental science, making these AI systems and understanding them,” Evans noted. “We don’t have a rigorous, well-worked-out theory of how they behave. We keep being surprised, because we don’t have this theory of what models do in every situation. It’s not that subliminal learning is a clear source of danger; it’s another way the behaviour of these systems is quite unexpected.”
A lack of understanding. Surprise. Danger.
No wonder nearly half of us think the risks of AI outweigh the benefits: there are concerns about losing jobs in a changing workplace, and worries about bias, security, the loss of privacy, damage to the environment, and the misuse of intellectual property, among others.

Geoffrey Hinton, one of the pioneers of deep learning and a recipient of the 2024 Nobel Prize in Physics, will again host The Hinton Lectures in Toronto. Screen grab from Hinton Lectures video.
Such concerns are well-placed and well-founded, experts say.
A survey of the safety practices followed by leading AI companies was conducted at the UC Berkeley Center for Human-Compatible AI (Evans is an affiliate researcher there, although he was not directly involved in the survey).
The results were not good.
Called the 2025 FLI (Future of Life Institute) AI Safety Index, it graded major AI companies across different critical operations and categories. The highest average safety score was a ‘C’, and only two ‘A minus’ grades were awarded across a grid of 42 possible scoring outcomes.
Ongoing Education, Needed Regulation
Over the years, Evans has mentored many grad students and doctoral candidates in AI, and many have gone on to work at leading-edge companies like OpenAI, Google DeepMind, and Anthropic.
Although he worries that such companies are among those racing to make AI more powerful while not paying enough attention to making it safe, Evans has had, and continues to have, a direct impact and influence on key people and players in the industry.
He has mentored dozens of up-and-coming AI leaders over the years (including PhD students at the University of Toronto) and continues to do so through a funding mechanism known as the Astra Fellowship, which works to accelerate AI safety research and talent.
“AI is advancing faster than any technology in history, and we think it’s worth preparing for some of its most concerning risks,” the Fellowship states.
And although AI legislation and regulation are not his areas of expertise, Evans says that “some regulation at the frontier, for the next generation of systems”, is needed.
The EU has regulation along these lines with some good components, he notes, and legislation in California is pushing companies building AI to provide some transparency.
The transparency and safety of current and emerging AI systems need regulation and oversight, Evans added, preferably by building up independent third-party capacity to research and run tests, understand complex AI issues, ask the right questions, and advise on needed governance.
During next week’s Hinton Lectures, keynote speaker Owain Evans will present audiences with a high-level overview and take them to the cutting edge of AI. He’ll share insights about AI “thinking”, about AI “ethical stability”, and even a neuroscience-style analysis of artificial models.
And he’ll do so safely.
-30-