Tomorrow, representatives from Google, Microsoft, and Yahoo will meet in Brussels to discuss the European Union’s implementation of a new “right to be forgotten” rule intended to protect online privacy. In May, the EU’s Court of Justice ruled that European citizens have the right to ask search engines to remove links to information about them that is “inaccurate, inadequate, irrelevant or excessive.”
Here in the United States the “right to be forgotten” ruling has largely been met with puzzlement or panic. The notion that accurate, relevant information might need to be removed from search results because it is “excessive” is without parallel in American law.
But consider the case of a site like TheDirty.com, where disgruntled ex-boyfriends, neighbors, and total strangers post the names and pictures of girls they think are “slutty,” with descriptions like this: “[Name redacted] is the definition of white trash. Loves the painkillers…[and] starving animals to death. … She will saddle up for any guy for a ride.” (That’s an actual comment, and it comes up as the eleventh hit when anyone Googles the target’s name.)
Or the case of Jill Filipovic, who, home on break from law school and recovering from having her wisdom teeth extracted, discovered a busy thread on a law school discussion board that explored the specific, horribly detailed ways that anonymous commenters (people she would pass in the halls and sit next to in lecture when she returned to school) believed that she should be raped and killed. Eight years later, “Jill Filipovic RAPE THREAD” still comes up in web searches on her name.
The benefits of a right to be forgotten may be particularly strong for women. Today, the harassment of women online—and 75% of online abuse is targeted at women—is forever. The expectation of free speech and our human rights to live without fear of violence, daily exposure to hate speech, and “virtual” threats to our bodily integrity are in legitimate conflict here, and we need solutions for the real world.
The debate currently underway between the European Commission and American civil liberties advocates is a bellwether, sharpening the contrast between an American approach that puts freedom of speech above all else and a European view that holds that limits on the retention and sharing of information are important to human rights and dignity.
Until about twenty years ago, your correspondence, government records, or the poorly executed erotic short story you wrote in your early twenties had a naturally limited shelf life. They were on paper, they took up space, and eventually people threw them away or stored them in a deep, dark warehouse. There are compelling reasons for the digital record of our lives to have a finite expiration date, too.
On the other hand, as Jonathan Zittrain argues in a recent New York Times Op-Ed, the EU rule would likely be unconstitutional in the United States, in that it allows individuals to effectively censor “facts about themselves found in public documents.” And, with the sheer volume of takedown requests—Google alone has received 70,000 so far—it is likely that search engines will accede to the majority of requests to remove personal information, without carefully considering each case.
Seeing the right to be forgotten as a woman’s issue underscores the potential group harms of unregulated data. To update the old feminist motto: The personal (data) is political.
On July 24th, Verizon Wireless will begin offering a customer loyalty program called Smart Rewards. Hidden in the fine print, though, is a requirement to enroll in a separate program called Verizon Selects. By joining Selects, a user agrees to allow Verizon to track her browsing behavior and movements, so marketers can better target her. Once enrolled in Smart Rewards, a user can disable Verizon Selects – if she’s willing to lose the estimated $5/month reward.
Corporate tracking is commonplace online, but tracking at the network level—before a user even arrives at a website—is relatively novel. Verizon is in a powerful position to collect individuals’ data, including location, phone call metadata, demographic characteristics, and mobile web browsing habits. When advertising company NebuAd attempted to partner with Internet Service Providers in 2008 to collect data in a similar manner (albeit using an opt-out model), they were stopped by Congress.
Verizon is treading carefully and requiring users to opt in to the program. But for some, the incentives may be hard to refuse. Verizon estimates that each Smart Rewards point is worth $0.01, and Verizon Selects pays 500 points per month, or $5 in rewards. When the cheapest individual MORE Everything plan is $45, that’s a discount of over 10%.
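The back-of-the-envelope arithmetic behind that figure can be sketched as follows, using the estimated values cited above:

```python
# Assumed inputs, taken from Verizon's published estimates as reported above.
POINT_VALUE_USD = 0.01     # estimated value of one Smart Rewards point
POINTS_PER_MONTH = 500     # monthly points for Verizon Selects enrollment
CHEAPEST_PLAN_USD = 45     # cheapest individual MORE Everything plan

monthly_reward = POINT_VALUE_USD * POINTS_PER_MONTH        # $5.00/month
discount_pct = 100 * monthly_reward / CHEAPEST_PLAN_USD    # ~11.1%

print(f"Monthly reward: ${monthly_reward:.2f}")
print(f"Effective discount: {discount_pct:.1f}%")
```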
Verizon claims that they won’t share data that “identifies you personally” with non-Verizon companies, but they plan to collect and use location data, which is not generally considered personally-identifying information. Researchers have shown that such location data can easily be used to re-identify individuals, so this assurance holds only so much value.
It’s concerning that getting the best deal on a mobile Internet plan—which one third of Americans rely on for their primary access to the net—requires an individual to reveal so much.
“[A] growing number of lenders are using new technologies that can remotely disable the ignition of a car within minutes of the borrower missing a payment,” notes the New York Times, reporting on the rapid growth of subprime auto loans.
In Oakland, California, where 32% of residents don’t have internet access, a group called Sudo Mesh is striving to build mesh networks in poor neighborhoods. “We’ve reached a point where broadband is no longer a convenience but a necessity. You can’t get ahead in society, or pull even, without broadband access,” a representative said.
Political analytics firms are trying to “translate everything they know about voters into votes”, with huge sums of money being spent on “constructing new weapons and experimenting with increasingly personalized targeting.”
The San Francisco Chronicle performed an external audit of Airbnb’s impact, finding that “160 entire homes or apartments seem to be rented full time, giving weight to arguments that the service is allowing landlords to flout strict rental laws.”
Explosive allegations that the National Security Agency and FBI targeted Muslim-American activists, politicians, and educators for secret surveillance programs, published last week in The Intercept, have prompted a robust response from a coalition of civil and human rights groups.
Glenn Greenwald and Murtaza Hussain’s report, based on a spreadsheet of nearly 8,000 email addresses leaked by Edward Snowden and three months of investigative reporting, contends that at least five of the surveillance targets led “highly public, outwardly exemplary lives,” but were scrutinized by the federal intelligence agencies based on their ethnic backgrounds and Muslim faith. Among those targeted were Faisal Gill, a member of the Bush administration once employed by the Department of Homeland Security with top security clearance; Asim Ghafoor, a civil rights attorney who defended the Saudi charity the Al Haramain Islamic Foundation; and Agha Saeed, who founded the American Muslim Alliance and concentrated much of his activism on registering Muslim Americans to vote.
The leaked spreadsheet that inspired the story is titled “FISA recap,” which suggests that the surveillance was cleared by the Foreign Intelligence Surveillance Act (FISA) court. The FISA court is a secret tribunal that vets government requests to tap phones, search emails, collect data, and otherwise monitor suspected terrorists and spies. Given that FISA determinations are classified, we may never know the rationale for, or the extent of, the surveillance of these men. Greenwald and Hussain write,
Whatever the specific reasons and methods used to monitor the five men’s emails, the surveillance against them took place during the chaos and fear that enveloped the national security community in the years after 9/11. … One former law enforcement official said that, while the FBI was diligent in trying to hew to the law, there may have been ‘some missteps’ along the way. Those missteps have landed heavily on Americans of Muslim heritage.
In a response sent July 9, fifty-two civil and human rights groups, including the ACLU, American Muslim Alliance, Human Rights Campaign, NAACP Legal Defense Fund, and the National Immigration Law Center, called on President Obama to re-affirm law enforcement’s commitment to “serve and protect America’s diverse population equally.” They wrote,
The [Intercept] report is troubling because it arises in [a] broader context of abuse. … Under the guise of community outreach, the FBI targeted mosques and Muslim community organizations for intelligence gathering. It has pressured law-abiding American Muslims to become informants against their own communities, often in coercive circumstances. … [T]he government’s domestic counterterrorism policies treat entire minority communities as suspect, and American Muslims have borne the brunt of government suspicion, stigma and abuse.
Arguing that these practices “strike at the bedrock of democracy,” the co-signers called on the White House to strengthen the Department of Justice’s Guidance Regarding the Use of Race by Federal Law Enforcement Agencies, issued in June 2003, and to expand these guidelines to explicitly ban profiling on the basis of religion, sexual orientation, gender identity and national origin.
In response, the Office of the Director of National Intelligence and the Department of Justice insisted that FISA court orders to monitor U.S. citizens are issued only if there is credible evidence that the target of surveillance is “an agent of a foreign power, a terrorist, a spy, or someone who takes orders from a foreign power.” In a joint statement, the agencies wrote,
It is entirely false that U.S. intelligence agencies conduct electronic surveillance of political, religious or activist figures solely because they disagree with public policies or criticize the government, or for exercising constitutional rights. Unlike some other nations, the United States does not monitor anyone’s communications in order to suppress criticism or to put people at a disadvantage based on their ethnicity, race, gender, sexual orientation or religion.
However, according to the Electronic Privacy Information Center, of the more than 35,000 FISA surveillance orders reviewed by the court in the last 35 years, only twelve—a mere 0.03% of them—were denied.
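The denial rate implied by EPIC’s figures can be checked directly (totals as cited above):

```python
# FISA order totals as cited by the Electronic Privacy Information Center.
total_orders = 35_000   # approximate surveillance orders reviewed by the court
denied = 12             # orders the court denied over the same period

denial_rate_pct = 100 * denied / total_orders
print(f"Denial rate: {denial_rate_pct:.2f}%")   # rounds to 0.03%
```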
The Federal Communications Commission (FCC) voted last Friday to spend $1 billion annually to help schools build Wi-Fi networks. The vote changes how money from the federal E-Rate program, which is designed to bring phone and Internet access to rural and low-income Americans, is spent.
Wi-Fi networks ultimately rely upon schools’ external connection to the Internet. Even so, they can provide powerful benefits. For example, a 2009 ethnographic study found that limited terminal time was a major hurdle for students, especially those without Internet access at home. One student noted, “I can’t even really concentrate on what I am doing because I am so stressed that I will run out of time.” Wi-Fi can ease this bottleneck, allowing a greater number of students to utilize connectivity that already exists.
However, the financial reshuffle has prompted some concern from both sides of the political spectrum. Republican FCC Commissioner Ajit Pai alluded to a future increase in Americans’ phone bills to cover the cost. And while many Democrats support the move, some worry that funding for Wi-Fi will compete with dollars currently being spent on broadband access.
The overwhelming majority of federal and state wiretaps reported in 2013 were focused on suspected drug deals. “‘Narcotics’ constituted a whopping 3,115 of the 3,576 total wiretaps, followed by ‘other major offenses’ (including smuggling and money laundering), homicide, and kidnapping, which was the subject of one wiretap,” reports Motherboard.
Colleges are turning to predictive analytics to help them identify students who are likely to drop out, reports Vox. However, acting on these predictions might prove challenging. “The idea of knowing what’s going on is really important. But knowing what you can do to address that is probably even more important,” says education technology expert Ellen Wagner.
Google+ recently retracted its policy of requiring that its users publicly display their real name. “We know that our names policy has been unclear, and this has led to some unnecessarily difficult experiences for some of our users. For this we apologize, and we hope that today’s change is a step toward making Google+ the welcoming and inclusive place that we want it to be,” the company wrote.
Even when targeting foreign nationals, the NSA vacuums up a lot of data about Americans. According to a unique analysis by the Washington Post, almost half the data collected by the NSA (when targeting non-Americans) might contain details about Americans. This so-called “incidental collection” occurs, for example, when those in the U.S. communicate with those abroad.
The Post’s story is a rare glimpse into the NSA’s online data collection practices. Post reporters analyzed a large cache of surveillance data, supplied by former NSA contractor Edward Snowden, that included more than 160,000 email and instant message conversations. To date, no government oversight body has analyzed a sample of the same magnitude.
The Post observed that:
The intercepted messages contained “discoveries of considerable intelligence value.” These included “fresh revelations about a secret overseas nuclear project” and “double-dealing by an ostensible ally.”
The NSA tried, with varying degrees of success, to protect the identities of Americans. NSA analysts routinely mask references to and details about U.S. citizens or residents (more than 65,000 instances of such masking appeared in the cache of 160,000 messages). However, almost a thousand email addresses could be “strongly linked to U.S. citizens or U.S. residents.”
Intimate messages described as “useless” by NSA analysts were nonetheless retained. According to the Post, “stories of love and heartbreak, illicit sexual liaisons, mental-health crises, political and religious conversions, financial anxieties and disappointed hopes,” were cataloged and archived.
These observations will fuel the continuing NSA debate. The story comes on the heels of the federal Privacy and Civil Liberties Oversight Board’s conclusion that the NSA’s foreign data collection program is largely constitutional, though incidental collection might “push the program close to the line….”
More immediately, the story reminds us that surveillance can implicate those not targeted, and that judiciously executing surveillance activities is hard. The NSA is a highly sophisticated actor with abundant resources. If it struggles to appropriately scope its data collection and protect individuals’ privacy, then so will smaller-scale efforts such as those housed at local police departments.
New York City is pursuing a plan to transform its payphone network into a series of Wi-Fi access points and terminals. But it wants to unite its city-wide network under a single contract, despite the fact that it is reexamining its franchise agreements with Verizon Fios and Time Warner Cable.
Maya Wiley, counsel to Mayor de Blasio, takes the position that having a single company run the network will have the benefits of “consistent design, avoiding sidewalk clutter and a network scale.” But Dana Spiegel, executive director of NYCwireless, suggests breaking the city into geographic regions and awarding contracts limited to particular sections to encourage competition and make the entire project more resilient to the failure of any individual contractor.
A single, city-wide network also presents privacy and data security concerns. It’s much easier to track or surveil individuals’ data across a single network. (And we’ve already seen city Wi-Fi networks used for surveillance in other contexts.)
As Spiegel concludes, “the idea has a lot of merit, the implementation needs some work.”
“[E]very venture-backed startup chasing advertising revenue is going after just 0.6% of the [American] economy,” notes Christopher Mims at the Wall Street Journal. “Whether or not we’re in a bubble isn’t the issue. What matters is what we do with all the money that flows into tech when times are good.”
Technology CEOs — including Facebook’s Mark Zuckerberg — signed a letter encouraging FCC commissioners to provide increased funding for Wi-Fi in public schools (a proposal we wrote about last month). “By responsibly investing $2 billion of unused funds and providing predictable ongoing support for Wi-Fi, the plan will make dramatic progress in bringing high-speed connectivity to our classrooms,” the CEOs said.
Increasing access to education for disadvantaged students requires “more than creating sophisticated educational content and building high-end online learning platforms,” argues Mimi Ito. “When you’re a kid whose main point of access to the net is your mom’s smartphone, and your only broadband is at your school or library, it’s tough to make it through a series of Khan Academy videos or a Udacity course on your own to become an awesome coder.”
Facebook’s facial recognition systems have a major advantage over those of federal law enforcement agencies due to the size of its network. “[T]he nation’s most powerful law enforcement agency is getting outgunned by a social network,” reports the Verge.
For one week in January 2012, a Facebook data scientist (collaborating with a psychology professor) strategically altered the Facebook News Feed content of 689,003 users. A computer automatically evaluated the emotional tone of each post users saw. Some users saw fewer positive emotional posts than they otherwise might, while other users saw fewer negative emotional posts.
The researchers found, as they explain in the Proceedings of the National Academy of Sciences, that seeing fewer positive News Feed items led users to post more negative status messages themselves, while seeing fewer negative items led them to post more positive statuses, an effect that the researchers themselves call “emotional contagion.” (The effect was very small on average, but consistent across many users.) Ironically, the researchers were actually interested in investigating the opposite possibility — the theory that seeing too many happy posts about our friends “may somehow affect us negatively, for example via social comparison” to people who seem happier than ourselves.
The study involved some collaboration with researchers at Cornell, but Cornell’s Institutional Review Board (which exists to protect human subjects that may be harmed in experiments) concluded that their approval was not required because the Cornell researcher, who did not control the users’ Facebook feeds, “was not directly engaged in human research.” If the experiment had faced such review, it would likely not have met the rigorous standards for informed consent set out by a federal policy called the Common Rule.
On the other hand, experiments like these are standard practice for consumer-facing online companies, who are not subject to the standard rules of academic research ethics. Facebook’s study is a reminder of the powerful influence that companies have on our daily lives. As Shoshana Zuboff points out, “Facebook, like Google, represents a new kind of business entity whose power is simultaneously ubiquitous, hidden, and unaccountable.” What Facebook does is engineering, not science, and our online actions are the product being engineered. While we call Facebook’s employees “data scientists,” they don’t have the strict adherence to ethics, rigor, and reproducibility that we associate with science.
The study points out that “the well-documented connection between emotions and physical well-being suggests the importance of these findings for public health.” It’s easy to imagine a News Feed where you might never see people’s sad posts, and never offer consolation, if Facebook were to decide that a purely happy feed were better for your health, or would draw more of your attention to the site. No matter how they choose to shape our online experience, Facebook’s decisions will carry benefits and harms for some — benefits and harms that can quickly add up, given how widely the site is used.
Facebook’s power could even extend to a form of digital gerrymandering. Last election, Facebook drove more people out to vote, partly by telling them which of their friends had already voted earlier in the day. (A study in Nature found the effort drove about 60,000 more voters to the polls.) What if Facebook were to use that knowledge during the next election, and selectively give such feedback to users who matched the company’s political ideals? It’s within their technological power to do so, and within their legal power too, given today’s lack of oversight. Only the company’s self-restraint stops it from going further.
The bottom line is that if not for this public study, we would have no insight at all into these manipulations. For the most part, we still have little idea how Facebook, Google, Twitter and other major online firms may use their algorithms to shape our lives. And given that outside scrutiny is infeasible, it may be time to start thinking about what other options may make sense, to help constrain how this new power gets exercised. Or else we may be looking toward a world where manipulation like this is effectively not constrained.