Among the innovative approaches to the thorny issue of immigration in Canada, female technology entrepreneurs are developing artificial intelligence programs that help make important decisions about who gets to come to this country.
Private-sector tech start-ups are not the only ones using AI and other advanced data analysis tools to categorize and process immigration claims: the Canadian federal ministry responsible for determining people’s immigration and refugee status has for some time now been using AI platforms to assess applications more efficiently.
More efficiently, because there will be more of them: Canada is on record as saying it will welcome 310,000 new permanent residents in 2018, 330,000 in 2019 and 340,000 in 2020. They will be admitted through various programs, in various categories and from different regions.
So the fact that AI is being used to supplement, if not replace, human decision-making capabilities raises alarming implications for the protection of those caught up in the techno-enabled system.
Decisions normally made by a human being about a wide range of complex issues (from the completion and accuracy of an immigration application form to the validity of a foreign marriage certificate to the assessment of potential “risk” or “value” posed by an individual applicant claiming hardship in their country of origin) are being made by computer programs and algorithms that may well have prejudicial biases baked into them.
But the replacement of a real person at the border crossing is not the goal, says Nargiz Mammadova, CEO and founder of a new AI-powered immigration service provider called Destin AI.
The Toronto-based company was founded in 2017 with its launch of an online chatbot that can help hopeful immigrants through the application process. “Technology and humans should work together,” Mammadova said at an industry event looking at artificial intelligence and machine learning applications. Her company works not just with immigrants, but with lawyers and immigration counsellors in different countries as well. Destin AI developed its toolset through focus group testing and what was described as an intensive development process with various stakeholder groups.
With at least 60 different immigration programs and application streams in Canada, Mammadova identified a clear need to streamline the process for administrators while making it more understandable to applicants.
Destin AI’s online tools include a self-assessment eligibility checker along with guidance and instruction on how best to prepare the necessary documents. The virtual assistant leads applicants through the process in a step-by-step online interface. The system is in English right now, but Mammadova said additional languages will be added.
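A self-assessment eligibility checker of the kind described above can be imagined as a set of rules matching an applicant’s profile against program requirements. The sketch below is purely illustrative: the stream names, fields and thresholds are invented for this example and are not Destin AI’s actual criteria.

```python
# Hypothetical sketch of a rule-based eligibility pre-check, loosely
# modelled on the kind of self-assessment tool described above.
# All program names and thresholds are illustrative inventions.

def check_eligibility(profile: dict) -> list:
    """Return the illustrative streams a profile might qualify for."""
    matches = []
    # Skilled-worker-style stream: illustrative age/language/education gates
    if (profile.get("age", 0) >= 18
            and profile.get("language_test_passed")
            and profile.get("education_years", 0) >= 12):
        matches.append("skilled-worker stream")
    # Study-permit-style stream: illustrative acceptance-letter check
    if profile.get("school_acceptance_letter"):
        matches.append("study stream")
    return matches

applicant = {"age": 29, "language_test_passed": True, "education_years": 16}
print(check_eligibility(applicant))  # → ['skilled-worker stream']
```

A real system would layer many more such rules, one per program stream, and walk the applicant through the document checklist for each match.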
What’s more, as immigration applications are processed by the system, it learns from experience who applied and with what results. The AI platform can predict who will be accepted as a successful applicant and why, Mammadova said.
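Learning from past results in this way can be sketched, in its simplest toy form, as estimating acceptance rates from historical decisions. The data and program names below are invented for illustration; an actual platform would use far richer features and a real statistical model.

```python
# Toy illustration of "learning from experience": estimate an
# acceptance probability per program stream from past outcomes.
# The history records here are fabricated examples.
from collections import defaultdict

history = [
    ("skilled-worker", True), ("skilled-worker", True),
    ("skilled-worker", False), ("study", True), ("study", False),
]

counts = defaultdict(lambda: [0, 0])  # program -> [accepted, total]
for program, accepted in history:
    counts[program][0] += int(accepted)
    counts[program][1] += 1

def predicted_acceptance_rate(program: str) -> float:
    """Fraction of past applications in this stream that were accepted."""
    accepted, total = counts[program]
    return accepted / total if total else 0.0

print(round(predicted_acceptance_rate("skilled-worker"), 2))  # → 0.67
```

Even this trivial version shows why such predictions are sensitive to the historical record: whatever biases shaped past decisions flow directly into the predicted rates.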
Being able to successfully predict the outcome of an immigration application or visa approval request is clearly a powerful and valuable capability.
RovBOT, another Canadian-launched online platform, does not predict outcomes, but it was developed to at least provide answers to applicants’ most common questions about the immigration process.
It, too, uses artificial intelligence algorithms to understand the needs of individuals manoeuvring through the Canadian visa process, algorithms developed by computer science grads at Dalhousie University.
Ruhi Madiwale, Dhivya Jayaraman and JeyaBalaji Samuthiravelu built an AI-powered chatbot that acts as a guide through the online visa application process.
“We trained RovBOT using publicly-available information from Citizenship and Immigration Canada,” explained Madiwale, “to both understand individual needs and keep track of application deadlines and documentation.”
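The general technique behind FAQ-style chatbots of this kind can be sketched as matching a user’s question against a bank of known question–answer pairs. The sketch below uses naive keyword overlap; the questions and answers are invented placeholders, not RovBOT’s model or Citizenship and Immigration Canada’s actual guidance.

```python
# Minimal sketch of FAQ-style intent matching, the general technique
# behind immigration chatbots. All entries below are invented
# placeholders, not official guidance.

def best_match(question: str, faq: dict) -> str:
    """Return the answer whose stored question shares the most words
    with the user's question (a naive bag-of-words match)."""
    words = set(question.lower().split())
    def overlap(stored_question: str) -> int:
        return len(words & set(stored_question.lower().split()))
    return faq[max(faq, key=overlap)]

faq = {
    "what documents do I need for a visitor visa":
        "Typically a passport, photos, and proof of funds (illustrative).",
    "how long does processing take":
        "Processing times vary by stream (illustrative).",
}
print(best_match("which documents are needed for my visa", faq))
```

Production chatbots replace the keyword overlap with trained intent classifiers, but the shape of the problem, mapping free-form questions to curated answers, is the same.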
While the app provides customized advice and guidance, the development team noted that it uses Facebook Messenger for RovBOT’s connectivity and communications.
That may not be what personal data protection and online privacy experts want to hear: the idea that a potentially vulnerable social media network is being integrated into a powerful immigration and refugee system opens up many concerns about the very personal and revealing information that could determine a person’s immigration or refugee status.
Another expressed concern is that of national jurisdiction over collected data. If a company provides its AI-powered services using the infrastructure of another company or country, privacy and protection can be at risk.
Founded in the U.S. by a former counsellor and lawyer husband-and-wife team, Talent Beyond Boundaries (TBB) has assisted refugees and immigrants with backgrounds in engineering, health care, IT and other professions to make successful applications for employment opportunities and approved immigration status in Canada through a pilot program.
Canada has been using automated decision-making technology in its immigration system for several years now, as mentioned, by automating certain activities done by immigration officials and supporting the evaluation of visitor application forms.
The federal government has made significant investments in artificial intelligence, and it describes efficiency and effectiveness as among the benefits to be gained by using AI technologies in the immigration process.
(That’s in spite of one such implementation that was shut down some ten minutes after launch, apparently due to concerns about excessive volume and potential discrimination.)
That potential for discrimination is what has many privacy advocates concerned about developments with AI and immigration.
“A.I. is by no means neutral. It can be just as biased, if not more biased, as a human being,” says Petra Molnar, a Toronto-based refugee lawyer and co-author of a new report called Bots at the Gate. There’s a bad track record concerning the use of computer algorithms and search engine biases, she adds, noting that cultural stereotypes can fuel problematic decisions, accidentally or not.
Without proper oversight, automated decisions can rely on discriminatory and stereotypical markers—such as appearance, religion, or travel patterns—as erroneous or misleading proxies for more relevant data, thus entrenching bias into a seemingly “neutral” tool, says the report.
Released by the International Human Rights Program at the University of Toronto Faculty of Law and The Citizen Lab at U of T, the report looks at how the Canadian government’s use of AI tools threatens to create a laboratory for high-risk experiments. The initiatives may place highly vulnerable individuals at risk of being subjected to unjust and unlawful processes in a way that threatens to violate Canada’s domestic and international human rights obligations, according to the report.
“Our legal system has many ways to address the frailties of human decision making,” Dr. Lisa Austin, professor at the University of Toronto’s Faculty of Law and an advisor on the report, said in a statement. “What this research reveals is the urgent need to create a framework for transparency and accountability to address bias and error in relation to forms of automated decision making. The old processes will not work in this new context and the consequences of getting it wrong are serious.”
Molnar echoes the concerns that the Canadian government should not be test-driving autonomous decision-making systems, and that (if it is not too late already) it should put in place publicly reviewed algorithmic impact assessments and a human rights-centered framework for immigration before implementation.
The report’s authors recommend that the federal government establish an independent, arms-length body with the power and expertise to engage in comprehensive oversight and review of all uses of automated decision systems by the federal government; publish all current and future uses of AI by the government; and create a task force that brings key government stakeholders alongside academia and civil society to better understand the current and prospective impacts of automated decision system technologies on human rights and the public interest more broadly.