AI (artificial intelligence) is presenting numerous opportunities for students, businesses and governments – and eye-catching numbers, like 153 billion and 13 trillion.
Those are the dollar figures touted by tech industry analysts: the size of the AI industry by 2020 (it’ll be worth $153 billion, according to Bank of America) and the additional economic activity AI could help generate globally by 2030 ($13 trillion in overall GDP, says McKinsey).
Attendees at the Re-Work conference in Toronto heard these numbers and much more as descriptions and predictions about deep learning and artificial intelligence were shared among computer science students, business owners, technology developers, government representatives and many others.
Re-Work included both a Deep Learning and an AI for Government Summit, each with a dedicated seminar track. The sometimes-competing agenda was dotted with 20-minute single-vendor presentations, fireside-chat-style workshops and some open-ended panel discussions, but one of the highlight sessions was surely that of “the Godfather.”
Geoffrey Hinton has been given the affectionate moniker for his pioneering and much-heralded work on deep learning through neural networks: getting technology to work, act and learn like the human brain.
Hinton’s more official title – Engineering Fellow at Google and Emeritus Distinguished Professor at the University of Toronto – hints at the fact that he splits his time between the U of T and Google’s office and new AI lab in Toronto. He’s also a chief scientific advisor at the Vector Institute, an artificial intelligence research lab and Re-Work participant.
His plenary session at the conference was a deep dive into recent work on ensemble learning, knowledge distillation and data-smoothing techniques.
At one point, while describing detailed data-driven machine learning processes, Hinton paused, looked up at his audience and smiled in a fatherly way, saying, “Perhaps you weren’t expecting a technical talk…”
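Knowledge distillation, one of the topics Hinton covered, is the idea of training a small “student” network to mimic a large “teacher” (or an ensemble) by matching its temperature-softened output probabilities. As a rough illustration only – the talk’s exact material isn’t reproduced here, and the logits, label and hyper-parameters below are invented – a minimal NumPy sketch of the distillation loss:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_label, T=2.0, alpha=0.5):
    """Blend cross-entropy against the teacher's softened outputs with
    ordinary cross-entropy against the true (hard) label."""
    soft_teacher = softmax(teacher_logits, T)
    soft_student = softmax(student_logits, T)
    # Cross-entropy with the teacher's soft targets, scaled by T^2 to keep
    # gradient magnitudes comparable as the temperature changes.
    soft_loss = -np.sum(soft_teacher * np.log(soft_student)) * T * T
    hard_loss = -np.log(softmax(student_logits)[hard_label])
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Invented example: a teacher fairly confident the answer is class 1.
teacher = np.array([1.0, 4.0, 0.5])
student = np.array([0.8, 3.5, 0.2])
loss = distillation_loss(student, teacher, hard_label=1)
```

The soft targets carry the “dark knowledge” in the teacher’s near-misses – information a one-hot label throws away.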
But there were those in the packed house hanging on every word, including several young people from TKS (The Knowledge Society) in Toronto, a leadership learning, training and mentoring space for 13-to-17-year-olds.
They were among those following Hinton closely, taking copious notes and grabbing shots of his slides with their smartphones.
If in the end he meant that to learn something, we humans and those machines must gather lots of information and then think about it a while, well, the Godfather is strictly business.
Speaking of which…
AI is a useful recommendation tool for movie streaming services, TV networks and content publishers. By targeting audiences through data and learned-behaviour analysis, highly customized content can be developed and delivered to viewers by machine.
As Re-Work attendees could learn, the CBC has an initiative to apply machine intelligence in its recommendation systems, and it is experimenting with various algorithms to create product- and audience-specific browsing suggestions for discovering online content.
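The CBC didn’t detail which algorithms it is testing, but one common family of techniques behind such browsing suggestions is item-item collaborative filtering: score what a viewer hasn’t watched by its similarity to what they have. A toy sketch, with an invented viewer-by-programme matrix:

```python
import numpy as np

# Toy viewing matrix (invented): rows are viewers, columns are programmes,
# 1 = watched, 0 = not watched.
watch = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
], dtype=float)

def item_similarity(m):
    # Cosine similarity between programme columns.
    norms = np.linalg.norm(m, axis=0)
    return (m.T @ m) / np.outer(norms, norms)

def recommend(viewer, m, sim):
    # Score unseen programmes by their similarity to the viewer's history.
    scores = m[viewer] @ sim
    scores[m[viewer] > 0] = -np.inf  # never re-recommend watched items
    return int(np.argmax(scores))

sim = item_similarity(watch)
pick = recommend(0, watch, sim)  # viewer 0 has watched programmes 0 and 1
```

Real systems layer much more on top – implicit-feedback weighting, learned embeddings, freshness – but the core idea of “surfacing content” by similarity to past behaviour is the same.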
In fact, CBC is looking for machine intelligence developers who can apply data systems engineering, machine learning and software development techniques to refine and improve the way it “surfaces content” to its audience.
Another AI-based application described at Re-Work could have come straight out of a CBC news headline: immigration.
Founded in 2017 by CEO Nargiz Mammadova, Destin AI offers a smart web platform and virtual assistant that guides applicants through the Canadian immigration process.
Mammadova described how Destin AI helps applicants prepare necessary documents, provides insight into the hows and whys of successful applications, and even makes probability predictions about visa approvals based on its analysis of previous and ongoing applications.
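Destin AI’s actual model isn’t public, but one standard way such approval probabilities can be estimated from historical applications is logistic regression over application features. A minimal sketch, with invented data and hypothetical features:

```python
import numpy as np

# Hypothetical features for past applications (invented for illustration):
# [years_of_work_experience, language_test_score, has_job_offer]
X = np.array([
    [5, 8.0, 1],
    [1, 6.0, 0],
    [8, 7.5, 1],
    [2, 5.5, 0],
    [6, 9.0, 0],
    [3, 6.5, 1],
], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0], dtype=float)  # 1 = visa approved

mu, sd = X.mean(axis=0), X.std(axis=0)  # for feature standardisation

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=5000):
    # Plain gradient descent on the logistic-regression log-loss,
    # on standardised features so the fixed step size behaves.
    Xs = (X - mu) / sd
    Xb = np.hstack([np.ones((len(Xs), 1)), Xs])  # bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def approval_probability(w, features):
    f = (np.asarray(features, dtype=float) - mu) / sd
    return sigmoid(np.dot(np.concatenate(([1.0], f)), w))

w = fit_logistic(X, y)
p_new = approval_probability(w, [7, 8.5, 1])  # a strong hypothetical applicant
```

The output is a probability between 0 and 1 rather than a yes/no – exactly the kind of “likelihood of approval” figure Mammadova described, though any production system would need far richer features and careful validation.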
Unfortunately, due to time constraints, important questions about the online service itself, much less the global immigration crisis overall, could barely be addressed in the 20-minute session.
A forty-five-minute panel discussion hoping to provide answers to the session’s title, ‘Should the Government Regulate AI?’, also came up somewhat short.
Yes, there were calls for education about the meaning and impact of artificial intelligence, particularly for our elected representatives and policy-makers; government certainly needs some expertise before it lays down the law. There’s an identified need for awareness and informative case studies about AI and machine learning to help government (and others) understand the risks and rewards of deploying the technology. However, while there was some discussion of self-regulation and the need for tech companies “to explain themselves and their decisions” involving the use and implementation of artificial intelligence, little mention was made of the recent history showing some industry players to be less than forthcoming about their shortcomings.
And while AI depends in many ways on economies of scale and the concentration of computing horsepower and aggregated data to move forward, that necessary asymmetry means technopolies of power and profit are the result. Like the railway barons or industrialists of yesteryear, today’s tech CEOs are the all-powerful czars creating and ruling over dominant marketplace monopolies.
Not well represented on this or other such tech conference panels is the consumer, the end user, the provider of all the data that fuels the industry. Amidst all the talk about using data to teach machines to read or write or make medical diagnoses, amidst the talk of smart cities and intelligent chatbots, amidst the talk about value propositions, profitability and the bigger challenges to capturing ROI with AI, very little was said about consent.
Consent, that is, of the end user to willingly provide data that has potentially billions of dollars of value for little or nothing. Consent given by the consumer to giant companies who may or may not clearly describe the intended or possible uses of that data. Consent that the personally identifying data that is (usually, but not always) willingly provided will be handled in a way that engenders trust and confidence.
Trust and confidence. Morality and ethics. They’re not often session titles at tech conferences, but the topics were certainly in the air at Re-Work. Although not a participant this time, AI expert, tech investor and former Apple employee Kai-Fu Lee has said in other contexts that artificial intelligence will have a negative impact when measured by that emerging new tech spec, social cohesion:
“I think AI will exacerbate wealth and inequality…at the very bottom rank are the people, many of whose jobs will be replaced because they’re routine and AI will do their jobs for them, so it’s actually having a doubling effect on giving more wealth to the wealthiest, creating new AI tycoons at the same time taking away from the poorest of society,” Lee said.
Now there’s a topic for a long deep learning session.