The growing use of artificial intelligence (AI) is leading to a corresponding growth in initiatives to ensure the technology is used in ethical, responsible and socially beneficial ways. What is needed now is greater detail about, and coordination among, these different ethical AI and tech for good initiatives, including those in Canada.
Ethical concerns raised by the implementation of artificial intelligence systems include, but are not limited to: data privacy and security; the need for greater transparency in how AI tools are programmed and how they operate; a lack of accountability and liability should unintended consequences result from AI decision-making; a concentration of power and influence among fewer and fewer – but larger and larger – tech companies; and the overall impact of AI systems on job security, employment levels and social cohesion.
The United States and China currently dominate developments in the interrelated fields of artificial intelligence, machine learning and deep learning, but many countries around the world are also involved in AI development and implementation.
Canada is a significant global developer of artificial intelligence technology, driven in part by announced plans and activities of private sector companies and various government agencies. Cities like Toronto, Montreal and Edmonton have become hubs for AI research and development as a result.
In terms of international investment in AI development and implementation, a recent report from the McKinsey Global Institute shows that the countries trailing the U.S. and China include Germany, South Korea, Sweden, France and Canada.
It’s the latter two countries that have now come together and announced plans for an international panel to research, discuss and ultimately propose policies to manage the social, economic and ethical impacts of artificial intelligence. Sensibly enough, it’s called the International Panel on Artificial Intelligence (IPAI).
Proponents of the AI ethics panel want it to create a set of common values, open standards and best practices for the technology’s development and use. The panel is also set to assess the dangers of the technology and formulate appropriate policies to mitigate such dangers among participating nations.
As announced by Canadian Prime Minister Justin Trudeau and French President Emmanuel Macron during a recent G7 summit, Canada and France want the IPAI to be a global point of reference for understanding and sharing research results on human-centric artificial intelligence.
“If Canada is to become a world leader in AI, we must also play a lead role in addressing some of the ethical concerns we will face in this area,” Trudeau said during the G7 meeting. The idea of an AI panel had been surfaced previously by the two leaders, but few details were provided.
For example, when Mounir Mahjoubi, the French secretary of state for digital affairs, described ideal agenda plans for the panel at the G7 nations' meeting in Montreal, he said they would include issues such as ethics in autonomous and AI-derived weaponry (noting, however, that the topic is already being discussed at international levels). The panel, he added, will include G7 and EU members; China will be invited to join.
So it is not exactly clear who will join the panel, questions remain about exactly what will be discussed, and it is possible that the world's leading AI developer will not participate.
No matter where the panel participants come from, they are to include members of the scientific community, industry, civil society and governments; they will produce reports and assessments, establish working groups or other mechanisms, and draw on the work being done on AI both domestically and internationally.
As far as the topics to be reviewed are concerned, the IPAI says only that its areas of interest and coverage "will be refined over time" and that "for illustrative purposes" those interests could include subjects like Trust in AI, Acceptance and Adoption of AI, and the Future of Work. Those are rather broad-brush illustrations; whether the panel can get much-needed detail and specificity into its work remains to be seen. Candidates worth considering include applying "open source" software concepts to deep learning algorithms; establishing quality control mechanisms for AI implementation; setting licensing and regulatory standards for software coding and AI development; and creating industry peer review mechanisms and even product recall provisions for AI-enabled products and services.
Another Canadian initiative to develop and implement AI best practices has been kick-started by CATA, the Canadian Advanced Technology Alliance. It’s a voluntary, opt-in group and it has put out a wide call for input and participation.
The group has tabled a list of 10 focus points about AI as a way to start the conversation, including some immensely practical (AI-driven machines or robots must not kill or harm) and some appropriately aspirational (AI must not lie, must not judge human behaviours, must help humans become better individuals).
Another Canadian initiative for the creation of safe, ethical and inclusive AI systems, based in Montreal, was formed very much as a bottom-up, grassroots group. Founded by Abhishek Gupta, a machine learning engineer at Microsoft and an AI ethics researcher at McGill University, the Montreal AI Ethics Institute has tabled its own statement of intent, a rather comprehensive declaration for responsible AI development.
Interestingly, Gupta was an invitee to the same G7 discussions about artificial intelligence from which the Canada-France ethical AI agreement came. He was also a participant at last year’s Re-Work conference on Deep Learning and Artificial Intelligence, held in Toronto, speaking on a panel that asked the question: Should the Government Regulate AI?
Noting the broad need for more education about artificial intelligence in the public and private sectors, he said a specific requirement for government is expertise not just in policy-making about AI, but in the evaluation and auditing of AI projects, applications and technologies.
That’s a big ask, but without a clear audit trail, evaluators run into the “black box syndrome” familiar from computer programming, software engineering and machine learning, and assessing the ethical behaviour of an AI system becomes much more difficult. Black box testing looks only at a program’s outputs, not at the program itself, so unintended (or otherwise) human biases or prejudgements can skew the final results without being detected. Attempts to get inside the box – to turn its colour from black to white – are underway, but even the most intelligent of researchers still say that the deep learning tools driving artificial intelligence work in mysterious ways.
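The limits of black box testing can be illustrated with a minimal sketch. The model, attribute names and threshold below are all hypothetical; the point is only that an auditor who can query a model but not read its code must probe it with carefully paired inputs to surface a hidden bias:

```python
# A minimal sketch of "black box" auditing: the auditor can only call the
# model, not inspect its internals. All names and values are hypothetical.

def opaque_model(applicant):
    # Stand-in for a model whose internals are hidden from the auditor.
    # It secretly penalizes a postal-code prefix that acts as a proxy
    # for a protected group -- the kind of bias code review would reveal.
    score = applicant["income"] / 1000
    if applicant["postal_code"].startswith("X"):
        score -= 5
    return score >= 50  # approve / reject

def black_box_audit(model, cases):
    """Probe with paired inputs that differ only in one attribute."""
    findings = []
    for base in cases:
        variant = dict(base, postal_code="X" + base["postal_code"][1:])
        if model(base) != model(variant):
            findings.append((base, variant))
    return findings

cases = [{"income": 52000, "postal_code": "A1B"},
         {"income": 80000, "postal_code": "B2C"}]
print(black_box_audit(opaque_model, cases))
```

Note that the audit only flags the first applicant, whose score sits near the decision threshold; the second pair gets identical decisions, showing how output-only testing can miss a bias that is plainly visible in the code.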
It’s hard to be ethical when you don’t know what you’re doing (or why or how you are doing it); that’s another reason why coordinating the efforts of tech for good movements and initiatives to develop ethical artificial intelligence would be a really smart move here in Canada and around the world.
# # #
Here is an excerpt from a tender request describing a project in which the Treasury Board Secretariat of Canada (TBS) is seeking to understand how the Government of Canada and its Departments can leverage the benefits of artificial intelligence (AI):
While the power that AI systems may bring to government could be significant, they must be deployed in a responsible and ethical manner. AI systems often require “training” using datasets that are reflective of the problem needing to be solved. If these data were collected or tabulated in a way that carries bias, then the outcome will be AI recommendations or decisions that are biased as well. Further, some AI systems currently operate as “black boxes,” meaning that the decisions they make are difficult to audit or fully comprehend. In light of these limitations, it is important to understand where it is appropriate to deploy different types of AI systems, balancing the potential for gains in efficiency and effectiveness of government with the risk of misuse. Finally, although AI will afford institutions with new capabilities, institutions will need to apply a strong ethical lens to whether the technology should be deployed at all in certain circumstances.
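The bias-propagation point in the excerpt can also be sketched in a few lines. The groups, labels and "training" rule below are entirely hypothetical; the sketch only shows that a system which learns from biased historical decisions will faithfully reproduce that bias:

```python
# Minimal sketch (hypothetical data): a model "trained" on biased historical
# decisions reproduces the bias. The training rule here is deliberately
# trivial -- memorize the majority outcome per (group, qualification).
from collections import Counter

# Historical records in which group "B" applicants were mostly rejected
# even when qualified -- the skewed data the TBS excerpt warns about.
training = ([("A", "qualified", "approve")] * 40 +
            [("A", "unqualified", "reject")] * 10 +
            [("B", "qualified", "reject")] * 35 +
            [("B", "qualified", "approve")] * 5 +
            [("B", "unqualified", "reject")] * 10)

def majority_rule(data):
    buckets = {}
    for group, qual, outcome in data:
        buckets.setdefault((group, qual), Counter())[outcome] += 1
    return {key: counts.most_common(1)[0][0] for key, counts in buckets.items()}

model = majority_rule(training)
# Two equally qualified applicants now receive different decisions:
print(model[("A", "qualified")])  # approve
print(model[("B", "qualified")])  # reject
```

No real learning algorithm is this simple, but more sophisticated models trained on the same records would pick up the same pattern, which is why the excerpt stresses auditing both the data and the deployed system.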