Earlier today, the Supreme Court heard oral argument on the question of whether a powerful civil rights doctrine known as “disparate impact” may be used in housing cases. Disparate impact allows for civil rights claims against policies that, even if well intentioned, have a disproportionate adverse impact on minorities.
In today’s case, Texas Department of Housing and Community Affairs v. Inclusive Communities Project, the Court is considering whether disparate impact claims should be allowed under the Fair Housing Act (FHA), which prohibits discrimination in home sales and rentals. The law prohibits discrimination “because of” race, and the two sides in the case disagree about whether a neutrally-motivated rule that systematically disadvantages racial minorities (i.e., one that has a disparate impact) should count as a violation of the law. “The perception is the Supreme Court has taken this case because it feels there is no disparate-impact theory of liability, and is prepared to rule to that effect,” says John Culhane, a law firm partner watching the case.
Some key civil rights laws — such as Title VII, the employment portion of the Civil Rights Act — expressly allow for disparate impact claims, but the FHA and other key civil rights statutes are silent as to whether such claims are allowed. This is largely because the doctrine of disparate impact emerged from case law, after the FHA and other landmark civil rights statutes had been enacted. (Title VII is an exception: its disparate impact language was added in 1991.) Civil rights groups see disparate impact doctrine as a vital tool, in housing as in other core areas of civil rights law. “Experience shows that the intentional discrimination standard alone is insufficient and the [FHA]’s goal can only be achieved if decision-makers are required to give up policies or practices that have avoidable disparate impacts,” argue Joe Rich and Thomas Silverstein on SCOTUSblog.
Technological trends make this an especially important case to watch. New technologies will be used to micro-target mortgage loans and rental listings, leveraging new sources of data and more sophisticated machine learning techniques. As housing choices move online, future disparities may flow from computers rather than from human decisionmakers. This will make the availability of disparate impact claims ever more important for ensuring that minority groups have truly equal housing choices.
New online lenders argue that today’s credit scores will soon be obsolete. These innovators exaggerate the pace of change — traditional credit scores will be important for years to come. But in the long term, they might be right to project that people’s credit opportunities will be shaped by a wider range of factors.
Today, credit scores rely on a person’s credit history. By contrast, lending start-ups assemble data from “diverse sources, including household buying habits, bill-paying records and social network connections.” They use this mosaic to try and guess whether a person is likely to repay his or her debts. For example, according to a recent New York Times report, big data lenders have discovered:
- Those who use proper capitalization on online forms tend to be more reliable in repaying their loans.
- Those who are employed as firemen, police officers, and teachers appear to be among the most reliable payers, even if their salaries are lower.
- A single bankruptcy, which can be disastrous to traditional credit scores, may not strongly predict that an individual will default on future loans.
These kinds of correlations might be consistently predictive for broad populations over time, or they might not: their usefulness is mostly unproven. Traditional credit history scores, like those provided by Fair Isaac (FICO), are well studied by companies and regulators. But new scoring methods, which may rely heavily on Facebook or LinkedIn data, have yet to show that they can provide the same predictive punch. They are most commonly used in subprime markets, such as payday lending. (For more analysis, take a look at our recent report, Knowing the Score.)
However, as the volume of data about us and the sophistication of computing grows, these new models could take off. “The potential is there to save millions of people billions of dollars,” argues Rajeev V. Date, a venture investor and former banker.
Fair lending laws, which prohibit discrimination against a loan applicant on the basis of race, sex, and other factors, were designed for the world of FICO, and it’s not yet clear how they will apply to new scores. When a computer is set loose to learn from a large, diverse pile of data, it may discriminate against a protected class, even if its designers didn’t mean for it to do so. (To better understand why, see this piece on how data mining can have a disparate impact.) Even “enthusiasts” of big data underwriting acknowledge this antidiscrimination pitfall, writes Steve Lohr of the New York Times.
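The mechanism is easy to demonstrate. Below is a minimal Python sketch using entirely synthetic data and a hypothetical “proper capitalization” feature (echoing the correlations reported above): a lending rule that never looks at a protected class can still approve two groups at very different rates. The four-fifths rule of thumb, borrowed from employment discrimination practice, flags an adverse-impact ratio below 0.8.

```python
# Synthetic illustration: a race-blind rule can still have a disparate
# impact when a "neutral" input correlates with group membership.
# All numbers here are invented for the example.

applicants = [
    # (group, uses_proper_capitalization)
    *[("A", True)] * 80, *[("A", False)] * 20,
    *[("B", True)] * 40, *[("B", False)] * 60,
]

def approve(applicant):
    _, proper_caps = applicant
    return proper_caps  # the rule only ever sees the "neutral" feature

def approval_rate(group):
    members = [a for a in applicants if a[0] == group]
    return sum(approve(a) for a in members) / len(members)

ratio = approval_rate("B") / approval_rate("A")
print(f"Group A: {approval_rate('A'):.0%}, Group B: {approval_rate('B'):.0%}")
print(f"Adverse-impact ratio: {ratio:.2f}")  # 0.50, well below the 0.8 threshold
```

In real big data underwriting, the correlated feature is learned automatically from thousands of inputs rather than hand-picked, which makes the resulting disparity harder to notice and harder to explain.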
The Obama Administration recently encouraged the Federal Communications Commission (FCC) to “address barriers” preventing underserved markets from obtaining broadband access. Those “barriers” are state laws intended to make it difficult for municipalities to provide broadband. It is likely that the FCC will soon try to preempt some of those laws, instigating another battle over the scope of its authority.
According to a report from the White House, only half of rural Americans have access to optimal internet speeds. In many rural areas, rollout of private broadband access is slow or nonexistent because providers’ per-household costs are higher.
Many municipalities want to bridge this gap by providing their own broadband. Municipalities that already provide electric service can share its infrastructure and overhead costs with broadband, allowing them to offer faster service for less. Moreover, this local competition usually prompts existing providers to lower their rates and increase their speeds.
However, 19 states have passed laws, ranging from prohibitions to tax hurdles, that make it difficult for municipalities to provide their own broadband. The Administration argues that these laws are written by providers seeking to stifle competition. The laws’ proponents insist that private companies will provide better access and protect taxpayers from paying for failed ventures.
A legal fight seems imminent. Next month, the FCC plans to vote on petitions by Chattanooga, Tennessee and Wilson, North Carolina (both of which offer municipal broadband) asking the FCC to overrule state laws that impede their broadband offerings. Chairman Wheeler made clear his plans to preempt state law, writing in a blog post, “I believe that it is in the best interests of consumers and competition that the FCC exercises its power to preempt state laws that ban or restrict competition from community broadband.”
The legal question is ultimately whether the Telecommunications Act, which authorizes the FCC to encourage the deployment of broadband and remove barriers to investment, also authorizes the FCC to preempt state laws. The FCC lost a similar case in 2004, but municipal broadband advocates argue that the case is distinguishable.
“Are we going to allow a means of communications which it simply isn’t possible to read?” said British Prime Minister David Cameron in a speech last week, referring to U.S. technology companies that offer encrypted communications that they cannot unscramble. “No. We must not.” President Obama seemed to agree, further foreshadowing a policy battle over the government’s ability to pierce encryption.
Fitness tracking devices are most commonly purchased by young, affluent users, writes Olga Khazan in the Atlantic. “In other words, they’re not likely to be the people who need the most help to lose weight.”
New mobile applications, like the ACLU of Missouri’s Mobile Justice app, are trying to empower individuals to hold law enforcement agents accountable. The ACLU’s app allows its users to record video, alert nearby users of an incident, and report details directly to the ACLU. See techPresident’s writeup of these new apps here.
On Monday, the White House announced a laundry list of privacy proposals, many of which you are likely to hear about in next week’s State of the Union address. The proposals include new bills, voluntary industry agreements, and federal agency initiatives.
These efforts are laudable. Just don’t expect any fast or drastic changes. Congress’ enthusiasm for the new legislation is likely to range from lukewarm to cold. Nonetheless, this announcement will frame many technology policy debates in the year ahead.
Read on for our short guide to the White House’s most recent push for privacy.
A Focus on Student Data Privacy
The President announced three proposals related to student privacy.
The White House will soon release the Digital Student Privacy Act, a new bill that would “prevent companies from selling student data to third parties for purposes unrelated to the educational mission and from engaging in targeted advertising to students based on data collected in school.” The bill is modeled on California’s recently-enacted statute, which we described as “unusually comprehensive and well-considered.”
The President encouraged companies to sign the Student Privacy Pledge, a voluntary commitment to, among other things, refrain from selling student information or targeting students with behavioral ads. Although 75 companies have signed the pledge, some major companies, including Google and Amazon, have not.
Third, the Department of Education will prepare model terms of service and will provide teacher training assistance in order to help schools ensure that student data is “used appropriately and in accordance with the educational mission.”
These efforts are timely: Schools and their contractors are collecting and using lots of data — including grades, disciplinary records, and even the cadence of students’ keystrokes in educational apps — in many new ways. Moreover, a key federal educational privacy law, the Family Educational Rights and Privacy Act (FERPA), is showing its age. Its provisions do not cover all educational technology companies.
For some parents, addressing basic privacy isn’t enough. They worry that new educational technologies will lead to some students being treated unfairly. The New York Times reports that
> the president’s comments did nothing to alleviate the unease of some parents concerned about potential civil rights issues raised by the increasing use of ed tech in schools, including the possibility that some programs and products might automatically channel or categorize students in ways that could ultimately be discriminatory or detrimental to their education.
Easier Access to Credit Scores (For Some)
The President announced that JPMorgan Chase and Bank of America will make FICO credit scores available for free to their consumer card customers.
While Americans are entitled, by law, to a free copy of their credit reports, most still have to pay to see their credit scores (especially FICO scores, the most popular brand of credit score). There’s no good reason that we should have to pay to see such an important number.
A Big Privacy Bill Is Coming, but Is Unlikely to Pass This Year
The President also announced that a major new privacy bill, derived from the White House’s 2012 Consumer Privacy Bill of Rights, is on the way. The bill will have wide-ranging implications. However, it is unlikely to gain traction with this Congress and will encounter some stiff resistance from technology and advertising firms. But it may be a model for the future.
Notably Missing: Government Surveillance Reforms
The announcement was silent about government surveillance. There was no mention of the Electronic Communications Privacy Act (a sorely outdated 1986 law that specifies standards for law enforcement access to electronic communications and associated data), future attempts at NSA reform, or the FBI’s request that mobile technology companies cease using strong digital encryption.
Police officers should not be able to view footage captured by body cameras before writing incident reports, argues the ACLU. “[S]urprisingly, this has become an increasingly common policy question as first dashcams, and now police body cameras, are deployed around the country.”
“[N]o technological fix can remedy the inequalities that underly police violence against young black men,” write Melissa Gregg and Jason Wilson in an Atlantic piece entitled The Myth of Neutral Technology. “We take it for granted that sensor metrics are capable of mitigating unruly human behavior—sloth, procrastination, discrimination, and violence.”
The White House said it wants to “end laws that harm broadband service competition” in a recent report on community-based broadband.
Facebook recently announced a partnership with the National Center for Missing and Exploited Children to begin placing AMBER Alerts (which notify people about abducted children) into its users’ News Feeds.
Increasingly, abusive spouses are using surveillance apps to monitor their partners, a practice that some domestic violence groups are saying has reached “epidemic proportions.” Unfortunately, these apps are hard to police. Many location tracking apps are sold for non-stalking purposes, making it difficult to ban them outright. And broader legislative efforts to address location privacy are opposed by technology firms.
There is no precise count of GPS stalking victims, but the number is likely large. According to a Department of Justice report using data from 2006, “more than 25,000” U.S. adults are the victims of GPS stalking every year. Since then, the number of people carrying location-aware smartphones has exploded, so the number of victims is likely significantly higher.
Women are the most frequent targets. The UK-based domestic violence charity Women’s Aid found that 41 percent of the domestic violence victims that it helped had been tracked or harassed using electronic devices. “We increasingly hear stories of abusers adding tracking software to phones, placing spyware on personal computers and using the internet to gather information about their partner,” said Polly Neate, the group’s chief executive. Location tracking isn’t the only issue: stalking apps can also provide unauthorized access to a victim’s text messages, calls, and email messages. This kind of surveillance has contributed to incidences of physical assault and homicide.
It will be difficult to enact an effective legislative solution. Senator Al Franken has had his eye on the problem for several years, but his proposed bill has yet to pass. Last year, he reintroduced the Location Privacy Protection Act, which he originally introduced in 2012. The bill would ban the development and sale of stalking apps. However, although some mobile apps are marketed explicitly for stalking purposes, just as many are marketed to employers or parents.
More broadly, the bill would require that companies get individuals’ permission before collecting location data and would require apps to be more transparent about who they are sharing users’ data with. Mobile advertising groups have opposed the bill on these grounds.
Eliminating the most conspicuous bad apps would be a positive first step. However, there’s no avoiding the fact that smartphones are powerful, location-aware computers. Victims need all the help they can get, including education, well-prepared law enforcement officers, and technical solutions.
Medical centers, technology companies, insurers, and suicide prevention groups are all beginning to use social media and mobile device data to try and spot mental-health issues.
Many of these efforts are years away from widespread adoption. However, it’s never too early “to think very carefully about who gets access to these tools and what the boundaries are for technology used to make judgments about individuals,” says Munmun De Choudhur, a professor at Georgia Tech.
In a study published last year, Microsoft recruited several hundred Twitter volunteers to take a standard screening test for depression. Microsoft’s researchers were also granted one-time access to the volunteers’ Twitter accounts. Using this data, the researchers developed an algorithm to predict whether a person was vulnerable to depression. The algorithm was about 70 percent accurate in predicting depression based on tweets.
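At a high level, a predictor like this scores language features in a user’s posts and applies a threshold. The toy Python sketch below illustrates only that general shape, not the researchers’ actual model: the lexicon, weights, and threshold are all invented for the example.

```python
# Toy illustration of lexicon-based text scoring (invented weights, not
# any real clinical model): sum word weights across tweets, then flag
# accounts whose score crosses a threshold.

RISK_WORDS = {"alone": 2.0, "tired": 1.5, "hopeless": 3.0, "cant": 1.0}

def risk_score(tweets):
    words = " ".join(tweets).lower().split()
    return sum(RISK_WORDS.get(w, 0.0) for w in words)

def flag(tweets, threshold=3.0):
    return risk_score(tweets) >= threshold

sample = ["so tired of feeling alone", "cant sleep again"]
print(flag(sample))  # True: score 1.5 + 2.0 + 1.0 = 4.5
```

A real system would learn its weights from labeled training data (here, the volunteers’ screening-test results) rather than from a hand-written word list, and would use many more signals than word counts.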
More than 30 medical centers, including Kaiser Permanente, are encouraging patients to use a mobile app that tries to detect symptoms of postpartum depression based on weekly surveys and analysis of “behavioral patterns like decreased mobility on weekends and longer phone calls.”
The National Institutes of Health has given a $2.42 million grant to the Harvard School of Public Health to research whether the times at which patients lock and unlock their phones can predict sleep patterns in people with psychiatric disorders.
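One plausible way to turn lock/unlock logs into a sleep estimate (an assumption for illustration, not necessarily the study’s method) is to treat the longest gap between unlock events in a day as the nightly rest window:

```python
# Hypothetical one-day log of phone-unlock times, in hours since midnight.
unlock_times_hours = [7.5, 9.0, 12.25, 15.0, 18.5, 22.75]

def longest_gap(times):
    # Sort the day's unlocks, then include the wrap-around gap from the
    # last unlock to the first unlock the next day, so the overnight
    # stretch (22.75 -> 7.5 here) is counted.
    ts = sorted(times)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    gaps.append(24.0 - ts[-1] + ts[0])
    return max(gaps)

print(f"Estimated rest window: {longest_gap(unlock_times_hours):.2f} hours")
```

Even a crude proxy like this shows why such passively collected data is sensitive: an 8.75-hour overnight gap in this invented log reveals roughly when its owner sleeps.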
Researchers at the University of Michigan are investigating whether a patient’s vocal patterns, recorded on a telephone call, might predict depression or mania.
Police officers are increasingly turning to social media to aid their investigations. But can an officer lie about their identity to gain access to a suspect’s personal information? According to one federal judge, the answer might be yes.
In 2013, New Jersey police officers investigated Daniel Gatson for receiving and transporting stolen property. They created an Instagram account under a fake persona and sent Gatson a friend request. When Gatson accepted the request, the police were able to access all of Gatson’s otherwise private photos, including photos of stolen goods.
At trial, Gatson challenged the cops’ tactics. A federal judge dismissed Gatson’s argument, ruling that police could use “undercover” social media accounts to gain access to suspects’ social media data without a warrant. The judge wrote that “no search warrant is required for the consensual sharing of this type of information,” effectively deciding the issue for the entire state of New Jersey in one sentence.
Defendants aren’t the only ones feeling duped. A woman named Sondra Prince is suing a Drug Enforcement Administration agent who created a fake Facebook profile in her name to investigate an alleged New York drug ring. In 2010, the agent seized Prince’s phone when she was arrested on drug-related charges, then used pictures from the phone, including scantily-clad photos, to build the fake account and try to nab other suspects in the drug ring.
Daniel Gatson and Sondra Prince’s stories are probably not unique: other police officers have admitted to using covert accounts during investigations. “It reeks of misrepresentation, fraud and invasion of privacy,” said Anita L. Allen, a University of Pennsylvania law professor. Nevertheless, with a LexisNexis survey reporting that 83% of cops believe creating a fake profile for an investigation is ethical, courts are likely to see this issue again. Hopefully, judges will give the issue more attention in the future. After all, what if you friended a cop believing it was a close friend or relative? Is that really a fair and constitutional way for police to gain access to otherwise private data?
Iowa is trying to become the first state in the nation to let its residents carry a virtual version of their driver’s license on their phone, reports The Des Moines Register. But some worry about how the app would be used in practice. “I see the greater potential for harm coming from purely accidental circumstances, where the police officer has to open the license and by mistake comes across something they shouldn’t come across,” said Nicholas Sarcone, a Des Moines defense attorney.
And speaking of driving, California’s Department of Motor Vehicles (DMV) recently announced it will be late in establishing public regulations for self-driving cars. According to the DMV, there are currently no federal safety standards for self-driving vehicles.
2014 was a big year for “privacy profiteers,” writes Violet Blue on ZDNet. An overwhelming number of flawed devices and software programs were marketed by “unscrupulous charlatans eager to capitalize on a frightened public.”
Protecting civil rights requires that secure technologies be “baked in” to the communication tools of the future, argues ACLU technologist Daniel Kahn Gillmor in an Atlantic piece entitled Where Design Choices and Civil Rights Overlap.
The FCC is expected to vote on new net neutrality rules in February.