AI must back the blue, not stymie them

A controversial facial recognition system in New Orleans has reignited the debate over the unchecked use of artificial intelligence by the government. The uproar has thrown a spotlight on AI surveillance networks with the power to scan, identify and flag individuals in public and private spaces. Such systems are already in operation, making decisions once reserved for human beings and raising hard questions about privacy and civil liberties.

For two years, New Orleans police secretly tapped into a privately run network of 200 cameras equipped with facial recognition software. This off-the-books experiment in AI surveillance was conducted by a non-profit organization called Project NOLA, which bypassed legal guardrails without the knowledge or consent of elected officials. Shortly before the Washington Post exposed the secret camera network, New Orleans Police Department officials pulled the plug.

Now, city officials want to revive it. A proposed ordinance would make New Orleans the first city in the country to formally authorize real-time facial recognition, with profound implications for civil liberties.

Privacy advocates say the issue isn’t that facial recognition is being used, but how it’s being used. Typically, law enforcement agencies employ software like Clearview AI after a crime has occurred, comparing a still image against a database of mugshots, driver’s license photos, and social media posts. In contrast, New Orleans police were secretly operating a live, real-time dragnet of the French Quarter and crime hotspots, alerting officers whenever it identified a person of interest. The untargeted nature of the surveillance — scanning everyone’s faces without a warrant or specific investigative purpose — is what makes New Orleans’ program uniquely controversial.

In addition to breaking new ground technologically, New Orleans is innovating on program structure, proposing novel data-sharing agreements to legitimize the Project NOLA model. Other cities, including Tulsa and Nashville, have also begun to integrate live feeds from privately owned cameras into official police surveillance systems. These shared public-private camera networks — incorporating everything from traffic cameras to doorbell cameras — give police 360-degree coverage of a neighborhood, or even an entire city. With the help of AI, these networks can operate continuously, day and night.

New Orleans has long been at the forefront of this technological revolution. In January 2020, 25-year-old Michael Celestine was smoking outside a friend’s house when NOPD officers chased, tased and arrested him. Celestine had been remotely flagged as “suspicious” by the city’s Real Time Crime Center; despite no evidence connecting him to a crime, he spent over a year in jail before the charges were dropped. The ACLU later sued the department for false arrest, and the city settled in Celestine’s favor.

While New Orleans should be commended for embracing new technology, the use of always-on surveillance gives police extraordinary new powers, fundamentally reshaping the relationship between citizens and the state. Fortunately, what separates legitimate public safety from government surveillance isn’t the tech itself but the people and policies behind it. In this case, an algorithm-driven dragnet with an imperfect accuracy record should not be used as the basis for an arrest. The kind of real-time system New Orleans is proposing should require at least reasonable suspicion, if not a warrant. During protests or other constitutionally protected activities, it shouldn’t be used at all.

AI must assist police officers, not replace them. One promising application is the review of police body camera footage, where AI is being used to analyze terabytes of video in a fraction of the time previously possible. These systems can identify key events, spotlight professional policing, and even draft preliminary reports. Given the vast amounts of data body cameras generate every day, automating video review has the potential to turn unmanageable archives into actionable insights.

The real question isn’t whether AI will be used in the criminal justice system; it’s where, when and how. If New Orleans wants to lead the nation in public safety innovation, it must also lead on protecting civil liberties. With commonsense regulations and a few simple safeguards, city officials can align technological innovation with constitutional principles to make communities safer.

In America, we don’t have to choose between security and freedom. With the proper safeguards, we can — and must — have both.

Logan Seacrest is a resident fellow for criminal justice and civil liberties at the R Street Institute. He wrote this for InsideSources.com.

