Artificial intelligence technologies will impact every citizen, country, and business worldwide. As what’s been dubbed World AI Week ends today, it’s clear the societal impacts of AI are already being felt across education, biotech, media, manufacturing, and the very nature of work itself.
At industry events like World AI Week, now wrapping in Amsterdam, and at the upcoming World AI Summit in Montreal, renowned researchers, applied scientists, leading business executives and government officials from across the globe fervently discuss key challenges and economic opportunities surrounding the development and deployment of AI applications in fields such as healthcare, finance, arts and culture, supply chain management, retail, and the environment.
Describing the economic dividends anticipated from AI developments some six years ago, Canadian Prime Minister Justin Trudeau said he would make Canada a world leader in artificial intelligence. British PM Rishi Sunak voiced a similar desire just this month. The U.S. and China have long both proclaimed their leadership in global AI developments, among others.
But many governments are still trying to find a balanced approach that lets them rein in the potential negative consequences of AI without stifling the development and innovation industry wants so badly to pursue.
How good AI governance and conscientious regulation can help ensure balanced, responsible, and accountable access to AI (and protection from its impacts) is also a topic for discussion and action by industry and government alike.
In 2024, World Summit AI Americas will head to Montreal to again convene global leaders in artificial intelligence from enterprise, big tech, academia, law, government and investment as they continue to tackle the thorny issues that are part of an innovative global AI agenda.
One topic of discussion will be the voluntary AI code of conduct issued by the Canadian government, addressing how advanced generative artificial intelligence is used and developed. One of the goals of the code is to engender trust in AI products while more binding regulations work their way through Parliament; introduced as the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, those regulations are still being reviewed in the House of Commons before possible passage later next year.
But the code, now in effect, identifies several measures that companies are encouraged to apply to their operations when developing and managing AI systems to mitigate the potential risks such systems pose.
Voluntary and not legally binding, the code encourages the Canadian AI sector to adopt key principles: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness.
Risk management strategies are laid out, as are calls for frequent testing and assessment of AI systems for biases. Recommendations are included for sharing and publishing information about AI systems so that content generated by the large models they use can be identified.
As the code was being developed, government officials consulted with a range of Canada’s fastest-growing technology companies, many of them represented on a business council known as the Council of Canadian Innovators. Formed in 2015, the Council now includes more than 150 CEOs from high-growth companies headquartered in Canada. The Council is chaired by Jim Balsillie, former BlackBerry Co-CEO, and John Ruffolo, Founder & Managing Partner of Maverix Private Equity and Founder of OMERS Ventures.
CCI has been calling for Canada to take a leadership role on AI regulation, saying it should be done in the spirit of collaboration between government and industry.
“The AI Code of Conduct is a meaningful step in the right direction,” said Benjamin Bergen, CCI’s President. “[It] marks the beginning of an ongoing conversation about how to build a policy ecosystem for AI that fosters public trust and creates the conditions for success among Canadian companies.”
Among the membership of CCI is AI expert and tech CEO/founder Jason Cassidy of Shinydocs, a Kitchener-based AI-based information management company.
He’s well-versed in the importance of trust when it comes to AI. Organizations, he says, need to prepare themselves for AI to take advantage of the wave of opportunity, but also to ensure that the process is accountable – issues the code addresses, though only partially:
“If you ask me does the code meet our objectives, I’d say yes, it met the objectives as far as a framework for accountability. Some guidelines are obvious: you must have them if you are being responsible. Things like continuous monitoring data inputs to your AI resources. How you build the models; how you build the decision-making process. You must absolutely know where they came from; those controls must be in place.”
“But if you ask about the objectives of making Canada a world leader in AI, well, I think it’s ten per cent of what’s needed. There are other, bigger steps to take.”
First on his list: investment and financial support. “There is a massive disparity between Canada and other countries in terms of where we spend our investment dollars.” It’s one of the sharpest criticisms of the Canadian code: we need to focus on encouraging more innovation, not on regulating the sector.
As just one quick comparison, it is expected that the Canadian federal government will spend around $15 billion CDN on scientific and technological activities in 2023/24, a slight decrease from the previous year. Germany reportedly has plans to spend almost $22 billion on semiconductor production alone in the coming years.
A number of Canadian AI development companies have signed the code of conduct, including BlackBerry, OpenText, Cohere and Coveo, and some adjustments to the code have been made based on industry feedback. At industry events like World AI Week and the upcoming World AI Summit in Montreal, more comparisons will be made, discussions will continue, and feedback will be gathered. They are among the steps that remain to be taken as Canada and other countries seek that leadership position in the global AI race.