Someone Hid 40 Surveillance Cameras on California's Highways. Also: A Cop Used License Plate Readers to Stalk His Girlfriend, Pakistan Bombs Afghanistan, and ChatGPT Becomes a Snitch.
February 27, 2026
1. The Surveillance Network Nobody Was Supposed to Find
A San Diego man named James Cordero found an abandoned trailer on a remote border road. Inside: a hidden license plate reader feeding a vast federal surveillance network. He’s since found more than 40 of them, tucked inside trailers and construction barrels along highways between San Diego and Arizona.
The cameras started appearing after California quietly issued permits to Border Patrol and other federal agencies in the final months of the Biden administration. Every plate that passes gets logged: make, model, state, GPS coordinates, timestamp, and sometimes photos of passengers. Border Patrol won’t say where the cameras are or who can access the data. There’s “no transparency, that’s the worst part,” Cordero told CalMatters.
The EFF and 30 organizations sent Gov. Newsom a letter demanding the permits be revoked. California’s own 2016 law restricts ALPR data-sharing with out-of-state agencies, especially for immigration enforcement. The federal government’s workaround: put the cameras on state highways under state permits, then pipe the data to federal databases. Cordero leads volunteer water drops for migrants in the backcountry. He’s not worried about himself; he’s worried his volunteers are being tracked. Given that the first Trump administration prosecuted volunteers for leaving water in the Arizona desert, he should be.
Sources: CalMatters, EFF, KPBS
2. Flock Safety’s Very Bad Week
Milwaukee police officer Josue Ayala used Flock Safety’s license plate reader system to search for a person he was dating 124 times between March and May 2025, and for that person’s ex another 55 times. He’s now charged with attempted misconduct in public office. The ACLU of Wisconsin said the charges “exemplify just how easily Flock cameras can be turned against the very people the technology purports to protect.”
That was Tuesday. Also this week: Mountain View’s city council voted unanimously to terminate its Flock Safety contract after discovering that more than 250 outside agencies had run roughly 600,000 searches of the city’s plate data without authorization. Some were out of state. Some violated California’s law banning data-sharing for immigration enforcement. “We did not know this was going on,” Police Chief Mike Canfield admitted. Santa Clara County followed suit days later.
The pattern is always the same: “safeguards” exist on paper, nobody checks them, and the data flows wherever power wants it to go.
Sources: NYT, ACLU-WI, SJ Spotlight, Fox6 Milwaukee
3. Pakistan Declares “Open War” on Afghanistan
The Pakistani government bombed Kabul and Kandahar overnight, the first time it has directly attacked Taliban military facilities. Defence Minister Khawaja Asif: “Our cup of patience has overflowed. Now it is open war between us and you.”
The strikes hit weapons depots and military posts across three provinces. Thick black smoke rose over Afghanistan’s capital. Both sides claim heavy casualties; neither’s numbers are verifiable. The Taliban (now the Afghan government, if you’ve lost track) fired back at Pakistani military installations but says it wants to “resolve issues through dialogue.”
The Russian, Chinese, Turkish, and Saudi governments are scrambling to mediate.
Pakistan has 170+ nuclear warheads and a conventional military that vastly outguns the Taliban. But the Taliban spent 20 years bleeding the world’s most advanced military in exactly this terrain. Nobody wins a land war in Afghanistan. Nobody ever has.
Sources: Reuters, AP, Al Jazeera
4. ChatGPT Becomes a Snitch (Because It Wasn’t One Already)
OpenAI admitted it found a second ChatGPT account belonging to the Tumbler Ridge, B.C., mass shooter, who killed eight people, including six at a secondary school, on February 10. The original account had been banned in June 2025 for “potential warnings of committing real-world violence.” OpenAI chose not to tell police. The shooter simply made another account, which OpenAI discovered only after the shooter’s name went public.
Now, under pressure from the Canadian government, OpenAI is “overhauling safety protocols.” The new policy: OpenAI will notify authorities of “imminent and credible” threats even if the user hasn’t revealed “a target, means, and timing.” They’re establishing a direct pipeline to Canadian law enforcement.
Polymarket prices a US AI safety bill before 2027 at 39.5% ($49K volume), suggesting the political pressure may translate into legislation.
Once you build a reporting pipeline between AI companies and police, who decides what qualifies as a “credible threat”? Hopefully not the AI! It hallucinates enough as it is; imagine if those hallucinations translated into arrests.