How AI is changing investigations, policing and law enforcement
AI is having a significant impact on the ability of law enforcement to identify criminals and to detect and investigate crime. In the process, it is changing the face of policing.
Early AI in Anti-Money Laundering
AI in law enforcement is not new. Two areas where AI was used early on were border control and anti-money laundering in the US.
A predecessor agency of US Customs and Border Protection created a rule-based AI system to identify suspicious activity for immigration purposes in the mid-1980s.
In 1993, FinCEN developed the FinCEN Artificial Intelligence System (FAIS), which links and evaluates financial transactions for indicators of money laundering or terrorist financing. The system identifies unknown, high-value leads for investigation and, if warranted, prosecution. In its first two years, FAIS identified over US$1 billion in potentially laundered funds that human analysts alone could not have detected.
AI systems allow investigators to detect criminality in ways not previously possible by processing transactional data and linking it to identify patterns and connections. AI can process big data rapidly, reducing the time investigators would otherwise spend manually combing through large datasets for leads and patterns in financial crime. Financial crime investigations especially are often hampered by staffing shortages. Mining and processing data mitigates those shortages and accelerates pattern detection, surfacing anomalous behavior and criminal actors.
This is especially useful for investigating transnational criminal organizations, which typically engage in repetitive, patterned behavior in areas such as drug trafficking, extortion, cybercrime and money laundering. They also involve multiple offenders connected through relationships such as family, friendship or business ties; members of these organizations often travel and dine together. Learning and linking associations between members of criminal organizations and their business enterprises is a critical part of how anti-money laundering experts and law enforcement uncover criminal activities and networks. Combining AI with traditional link analysis is enabling both the public and private sectors to develop deeper intelligence.
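The link analysis described above can be sketched in a few lines: given records tying individuals to shared attributes (a business, a phone number), connect anyone who shares an attribute and walk the resulting graph to surface candidate networks. All names and records below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical records linking individuals to shared attributes
# (a business, a phone number, an address).
records = [
    ("alice", "Acme Ltd"),
    ("bob", "Acme Ltd"),
    ("bob", "555-0100"),
    ("carol", "555-0100"),
    ("dave", "Zenith LLC"),
]

def build_graph(records):
    """Connect any two individuals who share an attribute."""
    by_attr = defaultdict(set)
    for person, attr in records:
        by_attr[attr].add(person)
    graph = defaultdict(set)
    for members in by_attr.values():
        for person in members:
            graph[person] |= members - {person}
    return graph

def cluster(graph, start):
    """Return everyone reachable from `start` (a candidate network)."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node] - seen)
    return seen

graph = build_graph(records)
print(sorted(cluster(graph, "alice")))  # ['alice', 'bob', 'carol']
```

Real systems work over millions of records and many attribute types, but the core idea (treating shared attributes as edges and reading off connected clusters) is the same.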
At FinCEN, specialized money laundering and terrorist financing expertise is distributed among agents, so the system incorporates a wide range of shared knowledge. The design of the suspiciousness evaluation modules — with individual rule sets addressing specific money laundering indicators — facilitates the incorporation of additional indicators and improves accuracy. Using AI technology, an organization like FinCEN can identify multiple businesses linked to certain financial transactions to detect money laundering activities and criminal associations in support of enforcement.
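The modular design described above, with an individual rule set per money laundering indicator, can be illustrated schematically. The rules and thresholds below are invented for illustration and are not FinCEN's actual indicators; the point is that each indicator is a separate, pluggable check.

```python
# Each rule independently scores one hypothetical indicator; adding an
# indicator means adding a function to RULES without touching the others.

def rule_structuring(txns):
    """Several cash deposits just under a $10,000 reporting threshold."""
    near = sum(1 for t in txns
               if t["type"] == "cash" and 9000 <= t["amount"] < 10000)
    return 2.0 if near >= 3 else 0.0

def rule_high_risk_jurisdiction(txns):
    """Any transaction touching a placeholder high-risk jurisdiction."""
    risky = {"XX", "YY"}  # invented jurisdiction codes
    return 1.5 if any(t.get("country") in risky for t in txns) else 0.0

RULES = [rule_structuring, rule_high_risk_jurisdiction]

def suspiciousness(txns):
    """Sum the scores of all indicator rules for one subject."""
    return sum(rule(txns) for rule in RULES)

txns = [
    {"type": "cash", "amount": 9500, "country": "US"},
    {"type": "cash", "amount": 9800, "country": "US"},
    {"type": "cash", "amount": 9200, "country": "US"},
]
print(suspiciousness(txns))  # 2.0: structuring fires, jurisdiction rule does not
```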
Facial Recognition
Facial recognition is undergoing a renaissance with AI and is changing policing. The technology was first developed in the late 1980s by the Central Intelligence Agency, whose system combined image analysis with collateral information tied to an identification database. Facial recognition has played a role in law enforcement since the mid-1990s. For example, border agencies at airports in China and Japan have deployed facial recognition systems for years to control immigration and, in the process, have built two of the world’s largest facial recognition databases. China’s national facial recognition database is tied to identity cards and intelligence agencies.
Researchers at Shanghai Jiao Tong University have built a facial recognition system that, they report, identifies criminals with 89.5% accuracy, using machine vision algorithms trained on photographs of known criminals and non-criminals.
The FBI also operates facial recognition systems, with access to over 411 million photos in state and federal databases.
US Customs and Border Protection is developing drones with sensors, cameras and facial recognition capabilities to film persons near borders and check them against law enforcement databases, including the IDENT database, which holds more than 170 million facial images collected from foreign nationals as they enter the US.
The UAE has deployed police robots whose primary function is to scan faces using facial recognition programs for enforcement.
In the private sector, Google and Facebook apply facial recognition to photographs voluntarily uploaded by users on their platforms. They group photographs of people together, training their AI systems to associate images of the same person automatically. Such automatic linking, however, raises privacy law issues regarding the collection, retention and use of a person’s likeness, as well as questions of informed consent.
Also in the private sector, organizations such as casinos use facial recognition programs to capture images of the public and extrapolate information for compliance and enforcement purposes — for example, to detect whether a person is prohibited from gambling due to ties to organized crime.
Public CCTV systems operate similarly to casino facial recognition networks. Machine learning systems scan faces and inform law enforcement of suspicious activities, such as when the same people appear at the same locations more often than statistically probable.
For example, facial recognition and machine learning can detect that a person frequents the same street corner; the system might then predict that the person is selling illegal drugs. Similarly, if a person enters a high-end hotel at the same time each day, the system might predict involvement in prostitution.
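At its simplest, the frequency-based flagging described above reduces to counting how often the same person appears at the same location and flagging pairs above some threshold. The sighting log and threshold below are hypothetical.

```python
from collections import Counter

# Hypothetical CCTV sighting log: one (person_id, location) pair per detection.
sightings = [
    ("p1", "corner_5th_main"), ("p1", "corner_5th_main"),
    ("p1", "corner_5th_main"), ("p1", "corner_5th_main"),
    ("p2", "corner_5th_main"),
    ("p3", "hotel_lobby"), ("p3", "hotel_lobby"),
]

def flag_frequent(sightings, threshold=3):
    """Flag (person, location) pairs seen more than `threshold` times."""
    counts = Counter(sightings)
    return [pair for pair, n in counts.items() if n > threshold]

print(flag_frequent(sightings))  # [('p1', 'corner_5th_main')]
```

A deployed system would also model what counts as "statistically probable" for each location and time of day, which is exactly where the judgment-call problems discussed next arise.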
There are obvious concerns with such judgment calls. In the example above, the person who appears frequently on the street corner could be a Girl Guide selling cookies, not a drug trafficker.
For a response to be sufficient to justify reasonable grounds to suspect — and to justify a search, seizure or reporting — it must be accurate and based on an understanding of the law. Systems are only as good as their code, and if that code is used in criminal prosecutions without input from lawyers, it may fail constitutional thresholds.
Predictive AI
Another area where AI is changing investigations and law enforcement is predictive AI.
Predictive AI is expected to become embedded in policing to predict and stop crimes before they happen. In the future, it is highly probable that a machine will identify criminals on its own and alert law enforcement to how and where to locate a suspect, with the supporting evidence packaged by the system for arrest and prosecution.
Chicago is already evaluating predictive AI using public data — such as social media — and other sources to identify people likely to commit crimes before they do so. The research is controversial because it assumes criminality can be predicted.
Automating the process cuts down the time a human would take to identify the data and reach the same conclusion. The advantage of using data vacuumed from social media is that it can capture repeated information (such as the hashtag #drugs), correlations among posts, repeated locations, connections and references to other people that a human could not detect without years of analysis.
Moreover, using social media as collateral information allows financial crime investigators to detect, within seconds, things that are out of pattern. For example, if a person has geo-tagged frequent visits to expensive resorts or restaurants inconsistent with their salary, that may be indicative of possession of proceeds of crime.
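A crude version of the out-of-pattern check described above might compare the implied cost of geo-tagged visits against declared income. The venue costs and ratio threshold below are invented for illustration; real systems would draw on far richer data.

```python
# Hypothetical per-visit costs for venue categories (invented figures).
VENUE_COST = {"resort": 5000, "fine_dining": 400, "diner": 20}

def out_of_pattern(monthly_salary, visits, ratio=0.5):
    """Flag a subject whose implied monthly spend exceeds
    `ratio` times their declared monthly salary."""
    spend = sum(VENUE_COST.get(v, 0) for v in visits)
    return spend > ratio * monthly_salary

print(out_of_pattern(3000, ["resort", "fine_dining", "resort"]))  # True
print(out_of_pattern(3000, ["diner", "diner"]))                   # False
```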
Today, we can identify criminal actors in organized crime before there is sufficient evidence to prove criminal conduct — but that is markedly different from predicting the criminality of an individual. The former is based on the fact that members of criminal organizations and gangs are part of the same circles and networks. Statistically, they are likely to “infect” each other with criminal interests.
It sounds promising in theory that we can predict criminality, but there are risks. Machines are not infallible, and neither are humans. Humans often make bad judgment calls or lack the maturity, intellect or education to understand the consequences of what they post online and how it will be used. People unaware of data vacuuming may be harmed when their permanently stored social media activity is later used to predict criminality.
Borg Collective? A Hive Mind for Policing
Accessing big data, vacuuming it and applying machine learning may lead to a form of constructive knowledge — legally speaking — that allows law enforcement to rely on predictions of criminality without meeting individual reasonable suspicion requirements. One scholar has suggested this could turn police agencies into something like a “Hive Mind” that collects and processes data from millions of sources, CCTV cameras and drones — similar to the Borg Collective in Star Trek. In such a future, police agencies would rely on global, real-time, updated databases about individuals for law enforcement purposes.
Other AI in Law Enforcement
In other contexts, securities commissions — including the US Securities and Exchange Commission and the Australian Securities and Investments Commission — use AI to detect rogue market behavior among traders and brokers. Nasdaq is exploring AI software that uses machine intelligence to understand trader language and identify key indicators of fraud or criminal activity as it happens.
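As a rough sketch of flagging "key indicators" in trader language, one could scan chat messages for phrases associated with misconduct. Real surveillance systems use far more sophisticated language models; the phrase list below is purely illustrative.

```python
# Invented phrases loosely associated with collusion or front-running.
INDICATORS = ("move the price", "before the announcement", "keep this between us")

def flag_messages(messages):
    """Return messages containing any indicator phrase (case-insensitive)."""
    return [m for m in messages
            if any(phrase in m.lower() for phrase in INDICATORS)]

chat = [
    "Let's keep this between us until the order fills.",
    "Lunch at noon?",
]
print(flag_messages(chat))  # flags only the first message
```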
Autonomous boats equipped with sonar and AI capabilities are used to detect and report illegal fishing and other illegal activity, such as drug trafficking, in coastal waters.
AI is also being used to create safer cities. Students at the University of California, Berkeley developed an app that brings real-time crime incident information to users, using historical and location data to identify safe navigation paths and send alerts. The app features a dynamic crime map, notifications about nearby crimes and automated incident reporting. It draws on various data sources, including police dispatch data, crowd-sourced information and historical records.
The world is rapidly changing with AI — and policing is no different.
