
Foundations Donate Police Tech at Expense of Transparency

From Issue 1.57 - By Aaron Rieke

Private foundations provide police departments with technologies that receive less scrutiny than those purchased with public money, reported ProPublica this week. Large charities have donated controversial surveillance equipment, including “hundreds of thousands of dollars’ worth of license plate readers” and upgrades to “stingray” devices that vacuum up signals from mobile phones.

These donations pose yet another challenge to the accountable deployment of police technologies. The gifts are often hard to track, and they allow police to accumulate tools that outpace existing policies. (We’ve written extensively on both automated license plate readers, or ALPRs, and stingrays.)

Police foundations have been around for decades. The first was established in New York in 1971. Today, similar organizations have cropped up in “dozens of jurisdictions, from Atlanta, Georgia, to Oakland, California.” Some foundation activities are uncontroversial, like providing medical kits to treat gunshot wounds. But recently, the largest foundations have begun supporting “technology initiatives” that include surveillance tools. In Atlanta, for example, a police foundation “bankrolled the surveillance cameras that now blanket the city, as well as the center where police officers monitor live video feeds.”

The foundations’ gifts can be difficult to vet and monitor. “At least with public contracts and spending, there’s a facade of transparency and accountability,” said Ana Muniz, who has studied the LAPD’s gang policing efforts. “With private partnerships, with private technology, there’s nothing.” There is also concern about conflicts of interest: it’s not uncommon for companies to donate to the foundations that purchase their products, or to be contractors for associated police departments.


Online Payday Loans Often Worse than Those Offered by Storefronts, Says New Report

From Issue 1.57 - By Aaron Rieke

Online payday loans are often worse than those provided by storefront lenders, explains a new study by Pew, one of the first formal analyses of internet payday lending.

Today, about one-third of all payday loans originate on the web. Online lenders’ revenue has tripled since 2006, even though they incur high loss rates.

Online loans differ from their offline counterparts in several important ways. They often rely on direct, electronic access to borrowers’ bank accounts (whereas offline payday loans tend to rely on postdated checks). They may offer nontraditional repayment structures that are difficult for borrowers to understand. And they tend to come with higher fees — almost double those of offline loans, by some analysts’ estimates.

From a borrower’s perspective, online loans can have notable detrimental effects:

  • One-third of online loans use payment structures that do not reduce the principal, encouraging long-term indebtedness. (A worked example follows this list.)
  • Online borrowers reported abuse at rates significantly higher than offline borrowers. Online lenders often threatened “contacting family, friends, or employers, and arrest by the police.”
  • More than a third of online borrowers reported that their personal or financial data was sold without their consent. These disclosures are likely tied to the practices of “lead generators” — websites that collect applicants’ personal information to sell to lenders. Lead generators sell these lists widely.
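To make the first bullet concrete, here is a minimal sketch comparing a fee-only repayment structure against one that pays down principal. The numbers are hypothetical, chosen for illustration; they are not figures from Pew’s report.

```python
# Hypothetical comparison of two payday-loan repayment structures.
# All numbers are invented for illustration; none come from Pew's report.

PRINCIPAL = 500.00   # amount borrowed
FEE_RATE = 0.25      # fee per pay period, as a share of the outstanding balance
PERIODS = 10         # pay periods the loan stays open

def fee_only_total(principal: float, fee_rate: float, periods: int) -> float:
    """Each payment covers only the fee; the principal never shrinks."""
    fees = principal * fee_rate * periods
    return fees + principal  # the full principal is still owed at the end

def amortizing_total(principal: float, fee_rate: float, periods: int) -> float:
    """Each payment covers the fee plus an equal slice of principal."""
    balance, total = principal, 0.0
    slice_ = principal / periods
    for _ in range(periods):
        total += balance * fee_rate + slice_  # fee accrues on the remaining balance
        balance -= slice_
    return total

print(f"fee-only:   ${fee_only_total(PRINCIPAL, FEE_RATE, PERIODS):,.2f}")   # $1,750.00
print(f"amortizing: ${amortizing_total(PRINCIPAL, FEE_RATE, PERIODS):,.2f}") # $1,187.50
```

Under these assumptions, the fee-only borrower pays $1,250 in fees on a $500 loan and still owes the entire principal at the end, which is why such structures encourage long-term indebtedness.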

Pew’s report was not universal in its criticism, and noted that a handful of online lenders “are the subject of few complaints” and have “called for raising industry standards.” But on the whole, it’s clear that online lending still has a long way to go.


Also of Note: October 15, 2014

From Issue 1.57 - By Aaron Rieke
  • The government removed seven Americans from the no-fly list, complying with a federal judge’s previous ruling that processes to challenge placement on the list were “wholly ineffective.”

  • The Justice Department is reviewing an incident in which the DEA created a counterfeit Facebook profile of a woman without her consent.

  • In Los Angeles, a coalition of public and private groups has earmarked more than $200 million to improve computerized systems that connect the homeless population with services. “If you can imagine being at an airport, and one of your flights get canceled, and then you’re just running from gate to gate to gate,” said Chris Ko of United Way, “that’s the experience homeless people are going through.”

  • Some advocates claim little has changed regarding surveillance of Muslims in New York City. “If you weren’t following New York City politics and didn’t know we had an election last year, you wouldn’t see a difference in the response around surveillance, from the last administration to this administration,” said Linda Sarsour, executive director of the Arab-American Association of New York.


Third Circuit Gives Police Leeway on Surveillance Tech

From Issue 1.56 - By Aaron Rieke

Last week, the Third Circuit Court of Appeals ruled that police may be permitted to use evidence collected with technologies whose legality is not yet settled.

The case, U.S. v. Katzin, focused on the “exclusionary rule,” which seeks to deter law enforcement officers from violating the Fourth Amendment by making evidence derived from an illegal search or seizure unavailable at trial. The rule tries to balance the protection of civil liberties against the social cost of suppressing truthful evidence. Thus, when police act in good faith, the rule does not apply, even if they make a mistake.

Police are already treading on new ground with facial recognition, mesh networks, and drones. Use of these technologies, and many others, will outrun courts and state legislatures. Thus, the boundaries of the exclusionary rule — what qualifies as a “good guess” regarding the legality of cutting-edge surveillance techniques — will become increasingly important.

In the case, law enforcement tracked electrician Harry Katzin’s van with a GPS device and without a warrant. But the tracking took place in 2010, before the Supreme Court ruled that such activities violated the Fourth Amendment. Prosecutors sought to use the GPS evidence at trial, arguing that the good faith exception applied. The ACLU summarized:

[T]he Department of Justice is basically saying, “Ok, we won’t appeal the substance of your ruling — this warrantless GPS tracking violated the Fourth Amendment. But how could the police officers have known this at the time?”

The Third Circuit ruled that the police acted reasonably, even though they did not rely on binding judicial precedent. But a number of judges dissented, arguing that police should have stronger incentives to act conservatively:

The essence of the majority’s holding is that any time a course of conduct by the police, particularly regarding technological advancements, has not been tested or breaks new ground, law enforcement will be entitled to the good faith exception.

Although the judges disagreed about how much leeway to offer, even the more permissive judges urged caution. “[A]fter Jones, law enforcement should carefully consider that a warrant may be required when engaging in such installation and surveillance.”


As Online Companies “Research” Users, Boundaries Are Hazy

From Issue 1.56 - By Aaron Rieke

Facebook and OkCupid made headlines this summer for conducting experiments on their users. Facebook published a study explaining that users wrote sadder posts when shown fewer emotionally positive posts from others. And OkCupid told some of its users that they were good matches for each other, when in fact the company’s algorithm predicted that they would be bad matches.

Some have argued that these experiments are nothing new, pointing out that “marketers for a hundred years have methodically experimented with consumer emotions to sell products.” But others have called the companies’ actions unethical and even illegal.

Modern commercial research is fraught with legal and conceptual complexity. Private companies are likely to be largely (though perhaps not entirely) exempt from existing federal research laws. And legal definitions of “research” might cover only a small fraction of online experiments.

Federal law regulates research on human subjects. The Common Rule requires, among other things, that researchers acquire informed consent and approval from an “institutional review board” (or IRB). The law itself applies only to federally funded research. But legal scholar James Grimmelmann explains that the Rule reaches further than first meets the eye. For example, the state of Maryland requires Common Rule-style informed consent and IRB approval “regardless of who pays for the research.” Thus, Grimmelmann argues, Facebook and OkCupid have likely violated Maryland law. (Facebook contends that the Common Rule and Maryland law do not apply to its research project.)

But what counts as “research,” anyway? Online companies learn from user behavior all the time. The Common Rule defines “research” as “a systematic investigation … designed to develop or contribute to generalizable knowledge.” Facebook’s emotion study — which was published in a scientific journal as a contribution to academia — might well qualify. However, had Facebook instead kept its experiment confidential and profit-motivated, it is far less clear that this definition would apply, even though the impact on users would have been largely the same.

In short, there may be a large gap between our sense of what’s fair and ethical and what the law requires of private companies.

Last week, Facebook announced it was changing the way it does research, including “clearer guidelines” for its researchers and an “enhanced review” process. These are laudable steps, but they don’t do much to clarify where the boundaries really fall.


Also of Note: October 8, 2014

From Issue 1.56 - By Aaron Rieke
  • Facebook apologized to the LGBT community for targeting drag queens and other performers who did not use their legal names on its social network. The company is “working on technical solutions to make sure that nobody has their name changed unless they want it to be changed and to help better differentiate between fake profiles and authentic ones,” said San Francisco Supervisor David Campos.

  • Microsoft and a handful of other tech firms signed a Student Privacy Pledge, publicly committing themselves not to sell K-12 student data. The move comes shortly after California enacted a student privacy law.

  • New York City shut down a commercial project that placed hundreds of sensors in phone booths across Manhattan to send messages to a smartphone app, reports the Wall Street Journal. The city “hadn’t authorized the sensors to be used this way.”


Subprime Auto Lenders Track and Disable Vehicles at Will

From Issue 1.55 - By Aaron Rieke

Lenders are making more auto loans to borrowers with low credit scores. But these loans sometimes come with devices that allow lenders to track and disable borrowers’ vehicles on a whim. A recent New York Times article describes how these devices can be unfair, and sometimes even dangerous, to low-income borrowers.

“No middle-class person would ever be hounded for being a day late,” said Robert Swearingen, a lawyer with Legal Services of Eastern Missouri, in St. Louis. “But for poor people, there is a debt collector right there in the car with them.”

The devices, called “starter interrupts,” serve dual purposes: tracking and disabling vehicles. They are used in about 25 percent of subprime auto loans nationwide today.

[Image: A starter interrupt device. From YouTube user acgstereoman.]

They enable surveillance that goes far beyond locating cars for repossession. They allow lenders to peer into borrowers’ private lives. For example, the technology can notify a lender if a borrower is “no longer traveling to their regular place of employment — a development that could affect a person’s ability to repay the loan.” One vendor boasts that it provides indefinite histories of borrowers’ movements: “By glancing over this report, you’ll know where the vehicle was and more importantly, where it’s going to be.” And the Times describes collection agency computer monitors glowing with the real-time movements of hundreds of tracked cars.
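To give a sense of the kind of inference the Times describes, here is a minimal, hypothetical sketch of how a tracking system could flag a borrower who stops appearing near a known workplace. The function names, data format, and thresholds are all assumptions for illustration, not details of any actual vendor’s product.

```python
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

# Hypothetical sketch: count weekdays on which a tracked borrower never
# appeared near a known workplace. All names and thresholds are invented.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def missed_workdays(pings, workplace, radius_km=0.5):
    """pings: (datetime, lat, lon) tuples reported by the in-car device.
    workplace: (lat, lon) of the borrower's known place of employment.
    Returns the number of weekdays with no ping inside radius_km of work."""
    near_work_by_day = {}
    for ts, lat, lon in pings:
        if ts.weekday() < 5:  # Monday through Friday
            near = haversine_km(lat, lon, *workplace) <= radius_km
            day = ts.date()
            near_work_by_day[day] = near_work_by_day.get(day, False) or near
    return sum(1 for near in near_work_by_day.values() if not near)

# A lender's system could raise a collections alert on this count alone,
# which is exactly the kind of inference that alarms consumer advocates.
```

The sketch requires only routine geometry over the GPS pings the devices already report, which is part of what makes this surveillance so troubling: the inference is cheap and invisible to the borrower.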

In many cases, borrowers legally consent to this expansive private surveillance. It is thus subject to few legal limits.

The devices can also remotely disable vehicles, providing leverage for lenders to collect payments. Some lenders wait until a borrower is at home or work before flipping the switch. Others are more aggressive: people have reported being stranded at gas stations, shopping malls, and even on the freeway. There has not been much legal activity concerning this powerful capability.

Lenders insist that the technology has allowed them to lend to millions of people they would otherwise be unable to serve. It’s easy to see why: the devices make timely loan payments as “vital to driving a car as gasoline.”


A Scientist Explains the Discrimination Risks in Big Data

From Issue 1.55 - By David Robinson


Moritz Hardt is a computer scientist at IBM’s Almaden research center, where he works on refining the ways computers make decisions. In a recent essay he offers an accessible, crisp explanation of how these systems can accidentally promote unfairness. “An immediate observation is that a learning algorithm is designed to pick up statistical patterns,” he writes, so that if “data reflect existing social biases against a minority, the algorithm is likely to incorporate these biases.”

Even when the input data doesn’t reflect bias, group differences can still lead to an algorithm that makes many more mistakes for minorities than it does for the majority group. For example, if a social network wants to confirm that new users are signing up with their real names, it might end up regarding short, common names as likely to be “real,” a rule that holds true for most Americans and thus has a very high accuracy rate across the whole population. But in the Native American population, that same rule will tend to reject real names as fake, because Native American names tend to be longer and less common. In situations like these, no single rule may work well for everyone.
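A tiny simulation makes the point concrete. The rule and the data below are invented for illustration; they are not taken from Hardt’s essay. A “real name” check keyed to short, common names classifies the majority group perfectly while rejecting every real name in the minority group.

```python
# Invented illustration of a rule that is accurate for the majority group
# but systematically wrong for a minority group.

COMMON_NAMES = {"james", "mary", "john", "linda", "david", "susan"}

def looks_real(name: str) -> bool:
    """Toy rule: a name is 'real' if it is short and on a common-names list."""
    return len(name) <= 8 and name.lower() in COMMON_NAMES

# (name, actually_real) pairs for two hypothetical groups
majority = [("James", True), ("Mary", True), ("John", True),
            ("Linda", True), ("David", True), ("xXgamerXx", False)]
minority = [("Aiyana Whitehorse", True), ("Takoda Runningwater", True),
            ("Chayton Eagleheart", True)]

def error_rate(people):
    wrong = sum(looks_real(name) != actually_real for name, actually_real in people)
    return wrong / len(people)

print(f"majority error rate: {error_rate(majority):.0%}")  # 0%
print(f"minority error rate: {error_rate(minority):.0%}")  # 100%
```

No bias in the input data is needed to produce this gap; the rule simply encodes a pattern that holds for one group and not the other, which is why no single rule may work well for everyone.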

Scientists are just starting to explore new approaches to mitigate these risks.


Los Angeles Police to Widen Biometric Net

From Issue 1.55 - By Virginia Eubanks

Soon, Los Angeles County police officers won’t have to take you downtown to collect your fingerprints—or your retinal scan, palm print, photograph, or DNA. Writing for the Center for Investigative Reporting, Ali Winston reports that Los Angeles County law enforcement is building a database that will include an unprecedented array of biometric information. The new “multimodal biometric identification system” would be the largest repository of law enforcement data outside of the FBI, holding records on up to 15 million people. The expansion is taking place without community input, and “uncertainty lingers” about who has access to the data.

A request for proposals released by the Los Angeles County Information Systems Advisory Body in 2012 asks for bids from technology companies to “enhance or replace” the existing Automated Fingerprint Identification System with a more comprehensive system that includes “applications and workflows for fingerprinting and facial data capture, iris, voice, scar, marks and tattoos, facial recognition and DNA biometric toolsets.”

The project will expand the use of mobile biometric technologies. Mobile fingerprint readers are already widely used to confirm the immigration status of day laborers and track individuals suspected of gang activity. Under the new system, mobile devices will also be used to verify the identities of people “in the field” for offenses as minor as traffic violations, Winston writes. Biometric information will be collected only if a suspect is arrested and booked, says Sheriff-Lt. Josh Thai. But once in the system, it will be retained until suspects are 99 years old if they have a criminal record, and until age 75 if they do not.

Los Angeles is one of the first counties to confirm plans to make their systems fully interoperable with the FBI’s Next Generation Identification (NGI) System, which became operational in September. According to Jennifer Lynch of the Electronic Frontier Foundation (EFF), the FBI plans to have 52 million photos in the NGI facial recognition system by 2015.

The plan to collect, sort, and search such comprehensive data raises troubling questions about due process, reasonable suspicion, and community participation in law enforcement decision-making. The new biometric system is moving forward without a chance for Angelenos to respond. “Without hearings or public input,” Winston writes, “technology companies are already bidding to build the system … a multimillion-dollar undertaking.”


Also of Note: October 1, 2014

From Issue 1.55 - By Aaron Rieke
  • More police departments are adopting body cameras, but there is an unmet need for clear guidelines and policies about their use, reports the Wall Street Journal. “Officers will need to be trained on camera use, including when to activate the devices and how to manage the terabytes of data that come with recording,” said Tom Roberts, deputy chief of the Las Vegas Metropolitan Police Department.

  • California Governor Jerry Brown signed the Student Online Personal Information Protection Act into law on Monday. (We described the bill as “unusually comprehensive and well-considered.”) The new law “could serve as a framework for other state legislatures’ approach to this issue,” writes Alex Bradshaw of the Center for Democracy & Technology.

  • Facebook recently announced that it would begin using personal data gathered inside of Facebook to target advertisements outside of Facebook.