SAN FRANCISCO — In July, two of the world’s top artificial intelligence labs unveiled a system that could read lips.
Designed by researchers from Google Brain and DeepMind — the two big-name labs owned by Google’s parent company, Alphabet — the automated setup could at times outperform professional lip readers. When reading lips in videos gathered by the researchers, it identified the wrong word about 40 percent of the time, while the professionals missed about 86 percent of the time.
In a paper that explained the technology, the researchers described it as a way of helping people with speech impairments. In theory, they said, it could allow people to communicate just by moving their lips.
But the researchers did not discuss the other possibility: better surveillance.
A lip-reading system is what policymakers call a “dual-use technology,” and in that respect it resembles many new technologies emerging from top A.I. labs. Systems that automatically generate video could improve moviemaking — or feed the creation of fake news. A self-flying drone could capture video at a football game — or kill on the battlefield.
Now a group of 46 academics and other researchers, called the Future of Computing Academy, is urging the research community to rethink the way it shares new technology. When publishing new research, they say, scientists should explain how it could affect society in negative as well as positive ways.
“The computer industry can become like the oil and tobacco industries, where we are just building the next thing, doing what our bosses tell us to do, not thinking about the implications,” said Brent Hecht, a Northwestern University professor who leads the group. “Or we can be the generation that starts to think more broadly.”
When publishing new work, researchers rarely discuss the negative effects. This is partly because they want to put their work in a positive light — and partly because they are more concerned with building the technology than with how it will be used.
As many of the leading A.I. researchers move into corporate labs like Google Brain and DeepMind, lured by large salaries and stock options, they must also obey the demands of their employers. Public companies, particularly consumer giants like Google, rarely discuss the potential downsides of their work.
Mr. Hecht and his colleagues are calling on peer-reviewed journals to reject papers that do not explore those downsides. Even during this rare moment of self-reflection in the tech industry, the proposal may be a hard sell. Many researchers, worried that reviewers will reject papers simply because they acknowledge the downsides, balk at the idea.
Still, a growing number of researchers are trying to reveal the potential dangers of A.I. In February, a group of prominent researchers and policymakers from the United States and Britain published a paper dedicated to the malicious uses of A.I. Others are building technologies as a way of showing how A.I. can go wrong.
And, with more dangerous technologies, the A.I. community may have to reconsider its commitment to open research. Some things, the argument goes, are best kept behind closed doors.
Matt Groh, a researcher at the M.I.T. Media Lab, recently built a system called Deep Angel, which can remove people and objects from photos. A computer science experiment that doubles as a philosophical question, it is meant to spark conversation around the role of A.I. in the age of fake news. “We are well aware of how impactful fake news can be,” Mr. Groh said. “Now, the question is: How do we deal with that?”
If machines can generate believable photos and videos, we may have to change the way we view what winds up on the internet.
Can Google’s lip-reading system help with surveillance? Maybe not today. While “training” their system, the researchers used videos that captured faces head-on and close-up. Images from overhead street cameras “are in no way sufficient for lip-reading,” said Joon Son Chung, a researcher at the University of Oxford.
In a statement, a Google spokesman said much the same, before pointing out that the company’s “A.I. principles” stated that it would not design or share technology that could be used for surveillance “violating internationally accepted norms.”