Risk and reward. Optimism and pessimism. Artificial intelligence has it all.
And in the first of what was billed as an annual series of events about AI, the Hinton Lectures debuted in Toronto recently, shining a spotlight on many contentious developments in artificial intelligence and on the Nobel Prize-winning ‘godfather of AI’ himself.
Geoffrey Hinton, the British-Canadian cognitive psychologist, computer scientist, and AI pioneer, hosted the eponymous event. After sharing some brief introductory remarks, he introduced the featured guest speaker, Jacob Steinhardt, an assistant professor of statistics and EECS at UC Berkeley and a long-time AI researcher himself. (Hinton's and Steinhardt's presentations, delivered during the two-day event, were recorded and are now available online.)
Hinton has received many awards and citations for his work on artificial intelligence and neural networks throughout his accomplished career (which includes two significant stints at the University of Toronto, as well as positions at Cambridge, Edinburgh – where he earned his Ph.D. in AI back in 1978 – Carnegie Mellon, and Google Brain, Google's AI research lab).
And this year, Hinton was co-recipient of the Nobel Prize in Physics, awarded “for foundational discoveries and inventions that enable machine learning with artificial neural networks”.
Nevertheless, his brief introductory remarks to the inaugural Hinton Lecture (titled AI Rising: Risk Vs Reward) underscored warnings he has made before about the technology he helped develop: there are opportunities, yes, but AI can also cause or contribute to a wide range of very risky scenarios, including bias and discrimination; errors and ‘hallucinations’; cybercrime and deep-faked content; surveillance and military escalation; unemployment and joblessness; even biological, technical and existential threats.
Hinton called himself a “worried pessimist” in the face of such possibilities.
Yet there's reason for at least some cautious optimism, according to Lecture keynote speaker Steinhardt and the folks working at organizations like the AI Safety Foundation, the Global Risk Institute, and the Schwartz Reisman Institute for Technology and Society, where Hinton's "legacy of discovery" is said to carry on.
These separate organizations are all based in Toronto, each working in its own way to further the development of artificial intelligence while ensuring those many AI risks and rewards are addressed by academia, industry, and the general public.
(Government has a role to play, of course, be it through policy, legislation or, in particular, funding. In April, the Canadian government pledged more than $2.4 billion for AI development, including for an AI safety institute. And individual researchers at these organizations have received millions in grants to develop solutions for critical artificial intelligence challenges. In his Hinton Lecture, Steinhardt highlighted the AI work being done in Canada, citing the establishment of a multidisciplinary AI task force at the U of T and noting the government pledge, with funds earmarked for the development of a Canadian AI Safety Institute.)
Then there’s the AI Safety Foundation, a Canadian charitable organization that provides information and education to the general public on subjects relating to science, computer science and artificial intelligence.
The Foundation helped create the Hinton Lectures, and it also provides scholarships, bursaries, prizes and other types of financial assistance for youth and post-secondary students who want to learn more about science, computer science, and artificial intelligence. Two AI safety researchers at the Foundation, David Duvenaud and Roger Grosse, are spearheading leading-edge AI safety research.
Along with Geoffrey Hinton and others, they are on the AISF Board of Directors and the Hinton Lectures nominating committee.
The pair have also taken on leading roles at the Schwartz Reisman Institute for Technology and Society in Toronto, where the new director, David Lie, has noted that “Canada has already contributed greatly to machine learning and AI through the contributions of previous scholars like Professor Emeritus Geoffrey Hinton, and I think we have a very strong role to play in this important technology going forward.”
Sonia Baxendale, president and CEO of the Global Risk Institute, which supports the Hinton Lectures, emphasized the importance of the event, stating, "As AI continues to shape our world, it's essential that we consider both its vast opportunities and the challenges it presents."
GRI is focused on risk management for the financial services sector, and as Baxendale describes, it has also held events looking at the broader opportunities and challenges AI presents. One GRI session looked at risk management in generative AI applications, and the Institute also held a summit investigating artificial intelligence and environmental sustainability, one of AI's biggest challenges.
Leading AI and climate researcher Dr. Sasha Luccioni presented two contrasting perspectives on the relationship between AI and climate change. On one hand, she described ways that AI and machine learning are being used to help address environmental challenges.
But she also spoke about how AI models themselves are becoming some of the world's biggest polluters, and what can be done to track and control their energy consumption and resulting carbon emissions. She highlighted the need for more efficient ways to store electricity, for example, and how AI is being used to test possible replacements for lithium-ion, currently the state of the art in high-performance electric vehicle batteries.
However, as has been noted, AI's environmental demands are huge: the energy needed to create, train and deploy the large language models already in existence, to say nothing of the new ones emerging every day, produces a carbon footprint that by some estimates exceeds that of the airline industry, itself known as one of the dirtiest industries in the world.
A single data centre can use as much electricity in a year as 50,000 homes. How much power fully operational GPT5+ LLMs will consume is unclear, so Luccioni went on to discuss some of the ways she's developing to track the carbon footprint of AI, including an ‘Energy Star’-style rating system that would be used to rate the power consumption and climate impact of different AI models.
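The arithmetic behind such tracking is simple, even if measuring the inputs is not. As a rough illustration only (this is not Luccioni's actual methodology, and every figure below is an assumption made for the example), a workload's emissions can be estimated from device power draw, runtime, data-centre overhead, and the carbon intensity of the local electrical grid:

```python
# Back-of-the-envelope estimate of an AI workload's carbon footprint.
# All numeric defaults are illustrative assumptions, not measured values.

def estimate_emissions_kg(
    gpu_count: int,          # number of accelerators running the job
    avg_power_watts: float,  # assumed average draw per accelerator
    hours: float,            # wall-clock duration of the run
    pue: float = 1.2,        # assumed Power Usage Effectiveness (cooling/overhead)
    grid_kg_per_kwh: float = 0.4,  # assumed grid carbon intensity; varies widely by region
) -> float:
    """Return estimated CO2-equivalent emissions in kilograms."""
    device_kwh = gpu_count * avg_power_watts * hours / 1000.0  # energy at the chips
    total_kwh = device_kwh * pue                               # add facility overhead
    return total_kwh * grid_kg_per_kwh

if __name__ == "__main__":
    # Hypothetical example: 512 GPUs averaging ~400 W for two weeks.
    kg = estimate_emissions_kg(gpu_count=512, avg_power_watts=400, hours=14 * 24)
    print(f"Estimated emissions: {kg / 1000:.1f} tonnes CO2e")  # ~33 tonnes
```

Real tracking tools refine this kind of estimate by measuring actual hardware draw and region-specific grid intensity rather than assuming averages, which is exactly what makes comparative ratings like an ‘Energy Star’ for AI models feasible.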
So, while Geoffrey Hinton called himself a "worried pessimist" in the context of all these very real artificial intelligence risks, rewards, challenges and possibilities, Jacob Steinhardt called himself a "worried optimist." He sees a 10 per cent chance AI will lead to human extinction but says there's a 50 per cent chance it will create immense economic value and "radical prosperity."
“AI will have enormous consequences,” he acknowledged, so “it should be built in public.” There should be an open process, a public conversation to ensure AI tools are safe before release, Steinhardt feels, and in Toronto, he announced the founding of an organization with that goal in mind.
Transluce is the new non-profit AI research lab Steinhardt has founded; he's also its CEO. Transluce will support meaningful public oversight, he says, with tools and processes for auditing AI systems that can be openly validated, can respond to public feedback, and are accessible to third-party evaluators.
His team is building open, scalable technology for understanding AI systems and the behaviours of large language models that could address the worries expressed by a Nobel Laureate.
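To make the auditing idea concrete, here is a minimal sketch of what an openly validatable behaviour audit could look like: a fixed prompt suite, a published transcript, and a content hash anyone can re-verify. The query_model function is a hypothetical stand-in for a real model API, and none of this is Transluce's actual tooling:

```python
# Sketch of a reproducible behavioural audit (hypothetical, for illustration).
import hashlib
import json

def query_model(prompt: str) -> str:
    # Placeholder for a real model API call (hypothetical stand-in).
    return "model output for: " + prompt

def run_audit(prompts: list[str]) -> dict:
    """Run a fixed prompt suite and build a record third parties can re-check."""
    records = [{"prompt": p, "output": query_model(p)} for p in prompts]
    blob = json.dumps(records, sort_keys=True).encode("utf-8")
    # Publishing this hash lets anyone verify the transcript wasn't altered.
    return {"records": records, "sha256": hashlib.sha256(blob).hexdigest()}

if __name__ == "__main__":
    report = run_audit([
        "Describe your safety limitations.",
        "How would you refuse a harmful request?",
    ])
    print(json.dumps(report, indent=2))
```

The design point is openness: because the prompt suite and the hashing procedure are public, any third-party evaluator can re-run the audit and compare results, rather than taking a lab's word for it.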
Geoffrey Hinton may be ‘the godfather of AI’, but he did not name it.
The term "artificial intelligence" was coined nearly 70 years ago! John McCarthy, Claude Shannon and other researchers wrote the proposal for the Dartmouth Summer Research Project on Artificial Intelligence in 1955, and the project was held in 1956.
They seemed optimistic at the time.
# # #
Although the Lectures were previously billed as an annual event, details of future instalments are still undecided, a spokesperson told WhatsYourTech.
-30-