In the wake of reports that the Chinese government is using artificial intelligence-based technology to track and detain some of its citizens, a Newfoundland and Labrador scientist is questioning how AI is being used and who should answer for its misuse — and he’s not alone.
“It seems to me that there’s not enough people actively fighting against what’s happening,” said David Churchill, an associate computer science professor at Memorial University in St. John’s.
Last month, Human Rights Watch published an investigation alleging officials in China’s Xinjiang province were using a mobile app to aggregate personal data and flag suspicious individuals — mostly Uighur Muslims — to authorities.
Other reports have shown the government uses a system of surveillance cameras backed up by facial recognition software to spot and track Uighurs. The UN estimates a million Uighurs are now being held by Chinese authorities in massive “re-education” camps.
People crack jokes about the sci-fi flick The Terminator, in which a cyborg assassin is sent from an army of machines to terrorize humans. But in reality, it’s not the technology that does harm, Churchill said.
“These sort of tracking systems … are the exact same technologies that are able to detect brain tumours in MRI images or to help doctors diagnose patients with certain diseases at a better rate than human doctors are able to,” he said.
“The real existential threat are the people who are willing to use AI, which was invented with the best intentions in mind, for their intentions which may not be the best.”
China is one of the world’s AI juggernauts — SenseTime, a company the New York Times and BuzzFeed have identified as tied to the software used by the Chinese government, is now the highest-valued AI company in the world — and Churchill worries his colleagues are staying quiet because they’re afraid of killing opportunities.
CBC News has found that researchers at at least one Canadian university have published papers on object-recognition AI with scientists from both China’s National University of Defense Technology and SenseTime. Those researchers did not respond to a request for comment.
In early June, the Alberta Machine Intelligence Institute (AMII) launched a partnership with the Hong Kong AI Lab, a non-profit funded in part by SenseTime.
An AMII spokesperson told CBC News the partnership is not about sharing research, but about sharing business knowledge and developing an AI ecosystem.
AI unlike any tech in history
The whiplash rate at which artificial intelligence technologies are developed sets AI apart from anything in history, says Jana Rosales, a professor in MUN’s engineering department who helps scientists think about the social consequences of their work.
That makes it prime territory for design regret — the remorse someone feels when their work is used for harm, she said.
“Our institutions have to find ways to keep up with the pace of change and … be nimble enough to make decisions about what responsible AI looks like.”
And they need to find ways to support researchers who want to slow down and be more thoughtful about their work — and researchers like Churchill, who speak up about its unintended consequences.
Abhishek Gupta agrees. He’s the founder of the Montreal AI Ethics Institute, a driving force behind the growing Canadian movement toward ethical AI.
“It’s been very recent that this [ethical AI] work has started to become mainstream,” he said. Right now, he said, the technology is still outpacing the movement.
Nothing will change without awareness, he said, and the situation in China is a major flashing light for scientists, institutions and government — all of which need to commit to better practices and policies.
The public, too, has a responsibility to learn about the technology they’re using and any potential to cause harm, he said.
Will declarations have teeth?
Both Rosales and Gupta have hope.
“I try to take comfort in the fact that people are actually saying, ‘Hang on, wait a minute — AI actually is qualitatively different from anything we’ve been working on,’ or at least they recognize how complex it is and that there is no putting the genie back in the bottle,” Rosales said.
She points to initiatives like the Montreal Declaration for Responsible Development of Artificial Intelligence, to which more than 2,000 scientists and institutions have signed their names.
“Are they going to have teeth? Who really knows?” she said.
Chinese scientists on board
Gupta said he is pleased with the Canadian government’s efforts, pointing to its release of guiding principles for ethical AI use.
And there are groups and individuals in China who are fighting for ethical AI practices, he said, pointing to the Beijing Academy of Artificial Intelligence’s release of the Beijing AI Principles in May.
The move was criticized as a smokescreen, but Gupta said after hosting a session with Chinese scientists, he sees their situation with more nuance.
“My biggest takeaway … is that we need to have these open dialogues and that we need to have people who have these different perspectives share their opinions and insights and really use that in making decisions rather than having a unilateral view on how someone is using certain technology.”