Next time you see a surveillance camera following you down an alleyway, don't be too sure that there's a human watching.
Surveillance camera companies are increasingly relying on artificial intelligence (AI) to automatically detect problematic and criminal behavior as it happens, everything from public intoxication to trespassing.
An automated camera system called AIsight (pronounced eyesight), installed in Boston after the 2013 marathon bombing, monitors camera feeds in real time and alerts authorities if it spots unusual activity, according to Bloomberg.
AIsight cameras use a statistical method called machine learning to "learn what is normal for an area and then alerts on abnormal activity," according to its creator, Behavioral Recognition Systems (BRS Labs).
Slate reports that could mean picking up anything from "unusual loitering to activity occurring in restricted areas."
"We are recognizing a precursor pattern that may be associated with a crime that happens," Wesley Cobb, chief science officer at the company, told Bloomberg. "Casing the joint, poking around where he shouldn't be, going around looking at the back entrances to buildings."
And these systems aren't just looking for criminals. In early August, West Japan Railway installed 46 security cameras that can "automatically search for signs of intoxication" in passengers at the Kyobashi train station in Osaka, Japan, according to the Wall Street Journal.
The AI watches for people stumbling, napping on benches, or standing motionless on the platform for long periods before suddenly lurching into motion. The system can then alert human attendants if a person is in danger of falling onto the tracks or otherwise hurting themselves.
Drunken passengers frequently fall or stumble off the train platform. West Japan Railway conducted a study that found 60% of the 221 people hit by trains in Japan in 2013 were intoxicated, the Wall Street Journal reports.
Using AI in surveillance systems makes sense: AI can catch what humans miss, operates around the clock, and never gets tired or falls asleep on the job. But it raises concerns among privacy and civil liberties advocates because, as Slate puts it, it "treats everyone as a potential criminal and targets people for crimes they have not yet committed."
Stuart Russell, an AI researcher at the University of California, Berkeley, and co-author of the standard textbook "Artificial Intelligence: A Modern Approach," thinks intelligent "watching" programs will likely unsettle people more than a human monitor does, even though most people would reasonably expect to be watched when they encounter a surveillance camera.
"What if there's an AI system, which actually can understand everything that you're doing?" Russell told Tech Insider. "Would that feel different from a human watching directly? I expect people are going to feel differently about that once they're aware that AI systems can watch through a camera and can, in some sense, understand what it's seeing."
This is just one of the many security and privacy issues that courts will have to grapple with as AI improves in the coming years, such as the legality of AI that buys up tickets and then scalps them online.