FBI Director Argues Need to Pierce Encryption

From Issue 1.58 - By Aaron Rieke

Law enforcement could be left “in the dark” as more people turn to encrypted, digital communications, warned the Director of the FBI, James B. Comey, in a speech last week. Catching kidnappers and child pornographers becomes harder, he argued, when law enforcement lacks the technical ability to conduct investigations despite legal authority to do so.

The FBI is asking the public to rely on policy, rather than technology, to ensure lawful surveillance. Google and Apple recently moved in the opposite direction, announcing that their mobile operating systems would encrypt users’ data by default (such that only a user’s passcode would allow access to data on a device). But Comey argued companies shouldn’t “throw away the key.” Instead, he suggests, they should preserve the ability to respond to law enforcement requests.
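To make the tradeoff concrete, here is a minimal sketch of passcode-based encryption, with hypothetical values throughout (this is illustrative only, not how iOS or Android actually implements device encryption). Because the key is derived from a passcode the vendor never stores, there is no key for the vendor to preserve or hand over:

```python
import base64
import hashlib

from cryptography.fernet import Fernet  # third-party: pip install cryptography


def derive_key(passcode: str, salt: bytes) -> bytes:
    # Stretch the passcode into a 32-byte key. Only someone who knows the
    # passcode can re-derive the key; the vendor stores neither one.
    raw = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64-encoded key


salt = b"per-device-random-salt"  # in practice: random, generated on the device
token = Fernet(derive_key("user-passcode", salt)).encrypt(b"contacts and photos")

# Decrypting requires re-deriving the key from the passcode. Without the
# passcode, this ciphertext is all the vendor could produce in response to
# a law enforcement request.
plaintext = Fernet(derive_key("user-passcode", salt)).decrypt(token)
assert plaintext == b"contacts and photos"
```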

This debate highlights a tension between investigating crimes and ensuring security and privacy. If a company can access a user’s data, that data is inevitably more vulnerable to hackers and spy agencies (both foreign and domestic). Thus, in the wake of Snowden’s revelations, companies are racing to provide better security, often in the form of encryption that they can’t feasibly reverse. “Just as people won’t put their money in a bank they won’t trust, people won’t use an Internet they won’t trust,” said Brad Smith, Microsoft’s general counsel.

The ACLU criticized Comey’s speech, claiming that he is “wrong in asserting that law enforcement cannot do its job while respecting Americans’ privacy rights.”

Comey is right in at least one respect: investigating crimes might be more difficult in a world where most consumer devices and communications are encrypted by default (a trend we are beginning to see). Unfortunately, most solutions seem to come at a steep cost to security.

“My goal today isn’t to tell people what to do,” Comey said. “My goal is to urge our fellow citizens to participate in a conversation as a country about where we are, and where we want to be, with respect to the authority of law enforcement.”


Personality Tests’ Fairness, Effectiveness Questioned

From Issue 1.58 - By Aaron Rieke

Employers are increasing their use of personality tests to hire for customer-service jobs, reports the Wall Street Journal. The tests — which might prompt applicants to say if they “experience mood changes” or “are always cheerful” — are said to help reduce attrition and predict performance. But some are questioning the tests’ fairness and effectiveness.

The Equal Employment Opportunity Commission (EEOC) is investigating whether personality tests might discriminate against people with mental illness, such as depression, even if they have the right skills for the job. “Employers are watching the investigation closely,” reports the Journal.

Employers have already scaled back the variables used in their tests in other contexts. For example, Xerox stopped considering applicants’ commuting time in its employment test because it worried that the data might put applicants from minority neighborhoods at a disadvantage. “There’s some knowledge that you gain that you should stay away from when making a hiring decision,” said Teri Morse, Xerox’s vice president of recruitment.

The overall effectiveness of personality tests is unclear. Some companies are pleased with the results. For example, Xerox claims that it is “shocked all the time” by tests’ accuracy. But others are more skeptical. According to Fred Morgeson, a management professor and organizational psychologist, the link between job performance and personality is “much lower than the field has led us to believe.”

Test vendors keep their formulas a secret. And at least one is actively resisting the EEOC’s efforts to acquire its validity tests.


ALPR Data Request Sent to California Court of Appeal

From Issue 1.58 - By Aaron Rieke

The Electronic Frontier Foundation (EFF) recently petitioned a California Court of Appeal to rule that the public has a right to know how Los Angeles police are using automatic license plate readers (ALPRs).

The petition — essentially an appeal of a trial judge’s refusal to compel disclosure of a week’s worth of ALPR data — argues that knowing how police technologies are used is a vital prerequisite to an informed public debate.

Los Angeles police are among the biggest gatherers of ALPR data. ALPRs are high-speed cameras that photograph vehicles. These images are automatically time-stamped, tagged with a location, analyzed for a license number, and stored in a database. Over time, ALPRs can create a detailed history of a person’s travels and habits.
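As a rough sketch of why such a database is revealing, consider the kind of record a single camera read might produce and how easily reads can be grouped into per-vehicle trails (the field names here are hypothetical, not drawn from any actual ALPR system):

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime


@dataclass
class PlateRead:
    # One ALPR hit: which plate a camera saw, and when and where it saw it.
    plate: str
    timestamp: datetime
    latitude: float
    longitude: float


def travel_histories(reads: list[PlateRead]) -> dict[str, list[PlateRead]]:
    # Group reads by plate, then order each group by time. The result is a
    # chronological location trail for every vehicle in the dataset.
    trails: dict[str, list[PlateRead]] = defaultdict(list)
    for read in reads:
        trails[read.plate].append(read)
    for trail in trails.values():
        trail.sort(key=lambda r: r.timestamp)
    return dict(trails)
```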

ALPRs can be used to scan and record vehicles at a lawful protest or house of worship; track all movement in and out of an area; gather information about certain neighborhoods or organizations; or place political activists on hot lists so that their movements trigger alerts.

The EFF highlighted the fact that public debates about ALPRs have been triggered in other cities, including Minneapolis and Boston, following news reports.

Without public access to information about how ALPR technology is being used — including the raw ALPR data from a limited time period — the very people whose whereabouts are being recorded cannot know whether their rights are being infringed, nor can they challenge policies that inadequately protect their privacy.

The petition also addresses some important legal issues, including whether indiscriminate surveillance data rightly counts as “records of investigation” that might be withheld under California’s Public Records Act, and whether disclosing anonymized ALPR records presents any meaningful risks to individuals or police activities.


Also of Note: October 22, 2014

From Issue 1.58 - By Aaron Rieke
  • 40 percent of adult Internet users have experienced harassment online, according to a new study by the Pew Research Center. Young women report experiencing “severe types of harassment” at “disproportionately high levels.”

  • Some Virginia police departments are “secretly and automatically sharing criminal suspects’ telephone metadata and compiling it into a large database,” reports Ars Technica.


Foundations Donate Police Tech at Expense of Transparency

From Issue 1.57 - By Aaron Rieke

Private foundations provide police departments with technologies that receive less scrutiny than those purchased with public money, reported ProPublica this week. Large charities have donated controversial surveillance equipment, including “hundreds of thousands of dollars’ worth of license plate readers” and upgrades to “stingray” devices that vacuum up signals from mobile phones.

These donations may pose yet another challenge to the accountable deployment of police technologies. The gifts are often hard to track and allow police to accumulate tools that are far ahead of the policies governing them. (We’ve written extensively on both ALPRs and stingrays.)

Police foundations have been around for decades. The first was established in New York in 1971. Today, similar organizations have cropped up in “dozens of jurisdictions, from Atlanta, Georgia, to Oakland, California.” Some foundation activities are uncontroversial, like providing medical kits to treat gunshot wounds. But recently, the largest foundations have begun supporting “technology initiatives” that include surveillance tools. In Atlanta, for example, a police foundation “bankrolled the surveillance cameras that now blanket the city, as well as the center where police officers monitor live video feeds.”

The foundations’ gifts can be difficult to vet and monitor. “At least with public contracts and spending, there’s a facade of transparency and accountability,” said Ana Muniz, who has studied the LAPD’s gang policing efforts. “With private partnerships, with private technology, there’s nothing.” There is also concern about conflicts of interest: it’s not uncommon for companies to donate to the foundations that purchase their products, or to be contractors for associated police departments.


Online Payday Loans Often Worse than Those Offered by Storefronts, Says New Report

From Issue 1.57 - By Aaron Rieke

Online payday loans are often worse than those provided by storefront lenders, explains a new study by Pew, one of the first formal analyses of internet payday lending.

Today, about one-third of all payday loans originate on the web. Online lenders’ revenue has tripled since 2006, even though they incur high loss rates.

Online loans differ from their offline counterparts in several important ways. They often rely on direct, electronic access to borrowers’ bank accounts (whereas offline payday loans tend to rely on postdated checks). They may offer nontraditional repayment structures that are difficult for borrowers to understand. And they tend to come with higher fees — almost double those of offline loans, by some analysts’ estimates.

From a borrower’s perspective, online loans can have notable detrimental effects:

  • One-third of online loans use payment structures that do not reduce their principal. This encourages long-term indebtedness (a worked example follows this list).
  • Online borrowers reported abuse at rates significantly higher than offline borrowers. Online lenders often threatened “contacting family, friends, or employers, and arrest by the police.”
  • More than a third of online borrowers reported that their personal or financial data was sold without their consent. These disclosures are likely tied to the practices of “lead generators” — websites that collect applicants’ personal information to sell to lenders. Lead generators sell these lists widely.
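To see how a fee-only payment structure traps borrowers, consider a quick worked example (the dollar figures are illustrative, not taken from Pew’s report):

```python
# A hypothetical $500 loan with a $75 finance fee due every two-week pay
# period, where each scheduled payment covers only the fee.
principal = 500
fee_per_period = 75
periods = 10  # about five months of pay periods

total_fees = fee_per_period * periods

# Every payment satisfies the fee and nothing else, so the loan rolls
# over and the borrower still owes the entire principal.
print(f"fees paid: ${total_fees}, principal still owed: ${principal}")
# -> fees paid: $750, principal still owed: $500
```

After ten such payments, the borrower has paid more in fees than was originally borrowed, with the debt fully intact.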

Pew’s report was not universal in its criticism, and noted that a handful of online lenders “are the subject of few complaints” and have “called for raising industry standards.” But on the whole, it’s clear that online lending still has a long way to go.


Also of Note: October 15, 2014

From Issue 1.57 - By Aaron Rieke
  • The government removed seven Americans from the no-fly list, complying with a federal judge’s previous ruling that processes to challenge placement on the list were “wholly ineffective.”

  • The Justice Department is reviewing an incident in which the DEA created a counterfeit Facebook profile of a woman without her consent.

  • In Los Angeles, a coalition of public and private groups has earmarked more than $200 million to improve computerized systems that connect the homeless population with services. “If you can imagine being at an airport, and one of your flights gets canceled, and then you’re just running from gate to gate to gate,” said Chris Ko of United Way, “that’s the experience homeless people are going through.”

  • Some advocates claim little has changed regarding surveillance of Muslims in New York City. “If you weren’t following New York City politics and didn’t know we had an election last year, you wouldn’t see a difference in the response around surveillance, from the last administration to this administration,” said Linda Sarsour, executive director of the Arab-American Association of New York.


Third Circuit Gives Police Leeway on Surveillance Tech

From Issue 1.56 - By Aaron Rieke

Last week, the Third Circuit Court of Appeals ruled that police may be allowed to use evidence collected with technologies whose legality is not yet settled.

The case, U.S. v. Katzin, focused on the “exclusionary rule,” which seeks to deter law enforcement officers from violating the Fourth Amendment by making evidence derived from an illegal search or seizure unavailable at trial. The rule tries to balance the protection of civil liberties against the social cost of suppressing truthful evidence. Thus, when police act in good faith, the rule does not apply, even if they make a mistake.

Police are already treading on new ground with facial recognition, mesh networks, and drones. Use of these technologies, and many others, will outrun courts and state legislatures. Thus, the boundaries of the exclusionary rule — what qualifies as a “good guess” about the legality of cutting-edge surveillance techniques — will become increasingly important.

In the case, law enforcement tracked electrician Harry Katzin’s van with a GPS device and without a warrant. But the tracking took place in 2010, before the Supreme Court ruled, in United States v. Jones, that such tracking violated the Fourth Amendment. Prosecutors sought to use the GPS evidence at trial, arguing that the good faith exception applied. The ACLU summarized:

[T]he Department of Justice is basically saying, “Ok, we won’t appeal the substance of your ruling — this warrantless GPS tracking violated the Fourth Amendment. But how could the police officers have known this at the time?”

The Third Circuit ruled that the police acted reasonably, even though they did not rely on binding judicial precedent. But a number of judges dissented, arguing that police should have stronger incentives to act conservatively. As one dissent put it:

The essence of the majority’s holding is that any time a course of conduct by the police, particularly regarding technological advancements, has not been tested or breaks new ground, law enforcement will be entitled to the good faith exception.

Although the judges disagreed about how much leeway to offer, even the more permissive judges urged caution. “[A]fter Jones, law enforcement should carefully consider that a warrant may be required when engaging in such installation and surveillance.”


As Online Companies “Research” Users, Boundaries Are Hazy

From Issue 1.56 - By Aaron Rieke

Facebook and OkCupid made headlines this summer for conducting experiments on their users. Facebook published a study explaining that users’ posts became sadder when they were shown fewer emotionally positive posts from others. And OkCupid told some of its users that they were good matches for each other, when in fact the company’s algorithm predicted that they would be bad matches.

Some have argued that these experiments are nothing new, pointing out that “marketers for a hundred years have methodically experimented with consumer emotions to sell products.” But others have called the companies’ actions unethical and even illegal.

Modern commercial research is fraught with legal and conceptual complexity. Private companies are likely to be largely (though perhaps not entirely) exempt from existing federal research laws. And legal definitions of “research” might cover only a small fraction of online experiments.

Federal law regulates research on people. The Common Rule requires, among other things, that researchers acquire informed consent and approval from an “institutional review board” (or IRB). The law itself applies only to federally funded research. But legal scholar James Grimmelmann explains that the Rule reaches further than first meets the eye. For example, the state of Maryland requires Common Rule-style informed consent and IRB approval “regardless of who pays for the research.” Thus, Grimmelmann argues, Facebook and OkCupid have likely violated Maryland law. (Facebook contends that the Common Rule and Maryland law do not apply to its research project.)

But what counts as “research,” anyway? Online companies learn from user behavior all the time. The Common Rule defines “research” as “a systematic investigation … designed to develop or contribute to generalizable knowledge.” Facebook’s emotion study — which was published in a scientific journal as a contribution to academia — might well qualify. However, had Facebook instead kept its experiment confidential and profit-motivated, it is far less clear that this definition would apply, even though the impact on users would have been largely the same.

In short, there may be a large gap between our sense of what’s fair and ethical and what the law requires of private companies.

Last week, Facebook announced it was changing the way it does research, including “clearer guidelines” for its researchers and an “enhanced review” process. These are laudable steps, but they do little to clarify where the boundaries really fall.


Also of Note: October 8, 2014

From Issue 1.56 - By Aaron Rieke
  • Facebook apologized to the LGBT community for targeting drag queens and other performers who did not use their legal names on its social network. The company is “working on technical solutions to make sure that nobody has their name changed unless they want it to be changed and to help better differentiate between fake profiles and authentic ones,” said San Francisco Supervisor David Campos.

  • Microsoft and a handful of other tech firms signed a Student Privacy Pledge, publicly committing themselves not to sell K-12 student data. The move comes shortly after California enacted a student privacy law.

  • New York City shut down a commercial project that placed hundreds of sensors in phone booths across Manhattan to send messages to a smartphone app, reports the Wall Street Journal. The city “hadn’t authorized the sensors to be used this way.”