Twenty states have laws on the books prohibiting municipalities from offering broadband internet services. The FCC recently received two petitions—one from a community-owned electric utility in Chattanooga, Tennessee, and one from the city of Wilson, North Carolina—asking it to overturn such state-level prohibitions. The Chattanooga petition argues that Tennessee’s law “frustrates the Congressional goal that all Americans should have access to broadband[.]”
The FCC is considering its options, and recently opened comments for both petitions.
Municipal broadband efforts are likely to increase access. “Municipalities typically have lower costs than private entities and do not seek the high short-term profits that shareholders and investors expect of private entities,” writes James Baller, a legal advocate for municipal broadband. “As a result, municipalities can sometimes serve areas that private entities shun and can often provide more robust and less expensive services than private entities are willing to offer.” Rural communities are particularly likely to be underserved by corporate broadband providers.
The FCC thinks it might have the authority to overturn state broadband restrictions. But it’s an unsettled legal question. On one hand, D.C. Circuit Judge Laurence Silberman, in his dissent to this year’s Verizon opinion, argued that the FCC has statutory authority (under Section 706) to “promote competition in the local telecommunications market or other regulating methods that remove barriers to infrastructure investment.” He called state laws restricting municipalities “paradigmatic” examples of such barriers (although municipal broadband was not the subject of the ruling). On the other hand, the Supreme Court ruled in 2004 that states were not prohibited from limiting separately-classified “telecommunications” services. But that ruling may not apply to the municipal broadband services in question.
Over at Freedom to Tinker, I argue that we may be rushing to judgment against the hidden power of algorithms:
These opaque, data-driven predictions of what news you’ll want from your network of friends, or who you might like to date, are scary in part because they have an element of self-fulfilling prophecy, a quality they share with more consequential “big data” predictions. In a civil rights context, automatic predictions can be particularly concerning because they may tend to reinforce existing disparities. For example, if historic arrest statistics are used to target law enforcement efforts, the history of over-enforcement in communities of color (which is reflected in these numbers) may lead to a system predicting more crime in those communities, bringing them under greater law enforcement scrutiny. Over time, minor crimes that occur in these communities may be prosecuted while the same crimes occurring elsewhere go unrecorded — leading to an exaggerated “objective” record of the targeted neighborhood’s higher crime rate.
But just because predictions are opaque — and just because they are self-fulfilling prophecies — does not necessarily mean that they’ll turn out to be bad for people, or harmful to civil rights. Computerized, self-fulfilling prophecies of positive social justice outcomes could be a key tool for civil rights.
Read the full version here.
In an analysis of 4 million traffic light violations occurring since 2007, the Chicago Tribune found evidence that thousands of drivers were erroneously fined. The charges stemmed from automated cameras:
Cameras that for years generated just a few tickets daily suddenly caught dozens of drivers a day. One camera near the United Center rocketed from generating one ticket per day to 56 per day for a two-week period last summer before mysteriously dropping back to normal…
Many of the spikes were marked by periods immediately before or after when no tickets were issued—downtimes suggesting human intervention that should have been documented. City officials said they cannot explain the absence of such records.
Fail-safes, such as auditing of the videos and documenting changes to the system, were ineffectual or overlooked. Whatever the problem, human or technical, Chicago’s adoption of an automated system enabled minor issues to have major effects: At least 13,000 questionable $100 tickets were issued, and a vast majority of such tickets were not appealed.
This is an example of how technology can supercharge the impact of misguided decisions or incomplete oversight. Luckily, in this case, transparency led to a solution. Because infraction records were available to the press, the harms came to light. Unfortunately, for many technologies, records are in short supply.
Comcast’s $10-per-month Internet Essentials program, focused on families eligible for the National School Lunch Program, is too hard to sign up for, argue advocates. While Comcast contends that the process takes under two weeks, the California Emerging Technology Fund reports that “[t]he application process often takes 2-3 months.”
Continuing the diversity report trend, Twitter and Pinterest have now released their numbers. While Twitter has some of the worst numbers we’ve seen so far, Pinterest is actually exceeding CS graduate diversity rates.
To be placed on a government watchlist, effectively forever, “concrete facts are not necessary,” reveals a leaked “Watchlisting Guidance” manual. As the Center for Constitutional Rights puts it, “‘watchlisting is not an exact science’ is a gross understatement.”
A working paper from the National Bureau of Economic Research found that there are fewer midwage jobs such as “factory work, sales and bookkeeping,” while highly skilled jobs and service jobs are taking up more of the economy. Those most impacted by these changes are “the young, the less educated and men.”
Tomorrow, representatives from Google, Microsoft, and Yahoo will meet in Brussels to discuss the European Union’s implementation of a new “right to be forgotten” rule intended to protect online privacy. In May, the EU’s Court of Justice ruled that European citizens have the right to ask search engines to remove links to information about them that is “inaccurate, inadequate, irrelevant or excessive.”
Here in the United States the “right to be forgotten” ruling has largely been met with puzzlement or panic. The notion that accurate, relevant information might need to be removed from search results because it is “excessive” is without parallel in American law.
But consider the case of a site like TheDirty.com, where disgruntled ex-boyfriends, neighbors, and total strangers post the names and pictures of girls they think are “slutty,” with descriptions like this: “[Name redacted] is the definition of white trash. Loves the painkillers…[and] starving animals to death. … She will saddle up for any guy for a ride.” (That’s an actual comment, and it comes up as the eleventh hit when anyone Googles the target’s name.)
Or the case of Jill Filipovic, who on break from law school, recovering from having wisdom teeth extracted, discovered a busy thread on a law school discussion board that explored the specific, horribly detailed ways that anonymous commenters–people she would pass in the halls and sit next to in lecture when she returned to school–believed that she should be raped and killed. Eight years later “Jill Filipovic RAPE THREAD” still comes up in web searches on her name.
The benefits of a right to be forgotten may be particularly strong for women. Today, the harassment of women online—and 75% of online abuse is targeted at women—is forever. The expectation of free speech and our human rights to live without fear of violence, daily exposure to hate speech, and “virtual” threats to our bodily integrity are in legitimate conflict here, and we need solutions for the real world.
The debate currently underway between the European Commission and American civil liberties advocates is a bellwether, sharpening the contrast between an American approach that puts freedom of speech above all else and a European view that holds that limits on the retention and sharing of information are important to human rights and dignity.
Until about twenty years ago, your correspondence, government records, or the poorly executed erotic short story you wrote in your early twenties had a naturally limited shelf life. They were on paper, they took up space, and eventually people threw them away or stored them in a deep, dark warehouse. There are compelling reasons for the digital record of our lives to have a finite expiration date, too.
On the other hand, as Jonathan Zittrain argues in a recent New York Times Op-Ed, the EU rule would likely be unconstitutional in the United States, in that it allows individuals to effectively censor “facts about themselves found in public documents.” And, with the sheer volume of takedown requests—Google alone has received 70,000 so far—it is likely that search engines will accede to the majority of requests to remove personal information, without carefully considering each case.
Seeing the right to be forgotten as a woman’s issue underscores the potential group harms of unregulated data. To update the old feminist motto: The personal (data) is political.
Starting July 24th, Verizon Wireless will begin offering a customer loyalty program called Smart Rewards. Hidden in the fine print, though, is a requirement to enroll in a separate program called Verizon Selects. By joining Selects, a user agrees to allow Verizon to track their browsing behavior and movements, so marketers can better target them. Once enrolled in Smart Rewards, a user can disable Verizon Selects – if she’s willing to lose the estimated $5/month reward.
Corporate tracking is commonplace online, but tracking at the network level—before a user even arrives at a website—is relatively novel. Verizon is in a powerful position to collect individuals’ data, including location, phone call metadata, demographic characteristics, and mobile web browsing habits. When advertising company NebuAd attempted to partner with Internet Service Providers in 2008 to collect data in a similar manner (albeit using an opt-out model), they were stopped by Congress.
Verizon is treading carefully and requiring users to opt-in to the program. But for some, the incentives may be hard to refuse. Verizon estimates that each Smart Rewards point is worth $0.01 and enrollment in Verizon Selects pays 500 points per month, so enrollment pays $5/month. When the cheapest individual MORE Everything plan is $45, that’s a discount of over 10%.
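The discount math works out as follows—a quick sketch using only the figures reported above (the point value, monthly point award, and plan price are Verizon’s estimates as cited, not additional facts):

```python
# Verizon Selects incentive, using the figures cited above.
POINT_VALUE = 0.01       # estimated dollar value per Smart Rewards point
POINTS_PER_MONTH = 500   # points awarded monthly for Verizon Selects enrollment
PLAN_PRICE = 45.00       # cheapest individual MORE Everything plan, in dollars

monthly_reward = POINT_VALUE * POINTS_PER_MONTH  # $5.00 per month
discount = monthly_reward / PLAN_PRICE           # fraction of the plan price

print(f"${monthly_reward:.2f}/month reward, {discount:.1%} of the plan price")
```

That 11.1% figure is what makes “over 10%” an accurate characterization of the incentive.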
Verizon claims that they won’t share data that “identifies you personally” with non-Verizon companies, but they plan to collect and use location data that is not generally considered personally-identifying information. Researchers have shown that such data can easily be used to re-identify individuals, so this assurance holds only so much value.
It’s concerning that getting the best deal on a mobile Internet plan—which one third of Americans rely on for their primary access to the net—requires an individual to reveal so much.
“[A] growing number of lenders are using new technologies that can remotely disable the ignition of a car within minutes of the borrower missing a payment,” notes the New York Times, reporting on the rapid growth of subprime auto loans.
In Oakland, California, where 32% of residents don’t have internet access, a group called Sudo Mesh is striving to build mesh networks in poor neighborhoods. “We’ve reached a point where broadband is no longer a convenience but a necessity. You can’t get ahead in society, or pull even, without broadband access,” a representative said.
Political analytics firms are trying to “translate everything they know about voters into votes”, with huge sums of money being spent on “constructing new weapons and experimenting with increasingly personalized targeting.”
The San Francisco Chronicle performed an external audit of Airbnb’s impact, finding that “160 entire homes or apartments seem to be rented full time, giving weight to arguments that the service is allowing landlords to flout strict rental laws.”
Explosive allegations that the National Security Agency and FBI targeted Muslim-American activists, politicians, and educators for secret surveillance programs, published last week in The Intercept, have prompted a robust response from a coalition of civil and human rights groups.
Glenn Greenwald and Murtaza Hussain’s report, based on a spreadsheet of nearly 8,000 email addresses leaked by Edward Snowden and three months of investigative reporting, contends that at least five of the surveillance targets led “highly public, outwardly exemplary lives,” but were scrutinized by the federal intelligence agencies based on their ethnic backgrounds and Muslim faith. Among those targeted were Faisal Gill, a member of the Bush administration once employed by the Department of Homeland Security with top security clearance; Asim Ghafoor, a civil rights attorney who defended the Saudi charity the Al Haramain Islamic Foundation; and Agha Saeed, who founded the American Muslim Alliance and concentrated much of his activism on registering Muslim Americans to vote.
The leaked spreadsheet that inspired the story is titled “FISA recap,” which suggests that the surveillance was cleared by the Foreign Intelligence Surveillance Act (FISA) court. The FISA court is a secret tribunal that vets government requests to tap phones, search emails, collect data, and otherwise monitor suspected terrorists and spies. Given that FISA determinations are classified, we may never know the rationale for, or the extent of, the surveillance of these men. Greenwald and Hussain write,
Whatever the specific reasons and methods used to monitor the five men’s emails, the surveillance against them took place during the chaos and fear that enveloped the national security community in the years after 9/11. … One former law enforcement official said that, while the FBI was diligent in trying to hew to the law, there may have been ‘some missteps’ along the way. Those missteps have landed heavily on Americans of Muslim heritage.
In a response sent July 9, fifty-two civil and human rights groups, including the ACLU, American Muslim Alliance, Human Rights Campaign, NAACP Legal Defense Fund, and the National Immigration Law Center, called on President Obama to re-affirm law enforcement’s commitment to “serve and protect America’s diverse population equally.” They wrote,
The [Intercept] report is troubling because it arises in [a] broader context of abuse. … Under the guise of community outreach, the FBI targeted mosques and Muslim community organizations for intelligence gathering. It has pressured law-abiding American Muslims to become informants against their own communities, often in coercive circumstances. … [T]he government’s domestic counterterrorism policies treat entire minority communities as suspect, and American Muslims have borne the brunt of government suspicion, stigma and abuse.
Arguing that these practices “strike at the bedrock of democracy,” the co-signers called on the White House to strengthen the Department of Justice’s Guidance Regarding the Use of Race by Federal Law Enforcement Agencies, issued in June 2003, and to expand these guidelines to explicitly ban profiling on the basis of religion, sexual orientation, gender identity and national origin.
In response, the Office of the Director of National Intelligence and the Department of Justice insisted that FISA court orders to monitor U.S. citizens are issued only when there is credible evidence that the target of surveillance is “an agent of a foreign power, a terrorist, a spy, or someone who takes orders from a foreign power.” In a joint statement, the agencies wrote,
It is entirely false that U.S. intelligence agencies conduct electronic surveillance of political, religious or activist figures solely because they disagree with public policies or criticize the government, or for exercising constitutional rights. Unlike some other nations, the United States does not monitor anyone’s communications in order to suppress criticism or to put people at a disadvantage based on their ethnicity, race, gender, sexual orientation or religion.
However, according to the Electronic Privacy Information Center, of the more than 35,000 FISA surveillance orders reviewed by the court in the last 35 years, only twelve—a mere 0.03% of them—were denied.
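EPIC’s figure is easy to verify from the published totals—a sketch, taking 35,000 as the approximate order count cited above:

```python
# FISA court review record, per the EPIC figures cited above.
TOTAL_ORDERS = 35_000  # approximate surveillance orders reviewed over 35 years
DENIED = 12            # orders the court declined to approve

denial_rate = DENIED / TOTAL_ORDERS
print(f"{denial_rate:.2%} of orders denied")
```

The rate rounds to 0.03%, matching the figure above: a near-perfect approval record.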
The Federal Communications Commission (FCC) voted last Friday to spend $1 billion annually to help schools build Wi-Fi networks. The vote changes how money from the federal E-Rate program, which is designed to bring phone and Internet access to rural and low-income Americans, is spent.
Wi-Fi networks ultimately rely upon schools’ external connection to the Internet. Even so, they can provide powerful benefits. For example, a 2009 ethnographic study found that limited terminal time was a major hurdle for students, especially those without Internet access at home. One student noted, “I can’t even really concentrate on what I am doing because I am so stressed that I will run out of time.” Wi-Fi can ease this bottleneck, allowing a greater number of students to utilize connectivity that already exists.
However, the financial reshuffle has prompted some concern from both sides of the political spectrum. Republican FCC Commissioner Ajit Pai alluded to a future increase in Americans’ phone bills to cover the cost. And while many Democrats support the move, some worry that funding for Wi-Fi will compete with dollars currently being spent on broadband access.
The overwhelming majority of federal and state wiretaps reported in 2013 were focused on suspected drug deals. “‘Narcotics’ constituted a whopping 3,115 of the 3,576 total wiretaps, followed by ‘other major offenses’ (including smuggling and money laundering), homicide, and kidnapping, which was the subject of one wiretap,” reports Motherboard.
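Put in proportional terms—a sketch using only the Motherboard figures quoted above:

```python
# 2013 wiretap breakdown, per the Motherboard figures cited above.
NARCOTICS = 3115  # wiretaps targeting narcotics offenses
TOTAL = 3576      # total federal and state wiretaps reported in 2013

narcotics_share = NARCOTICS / TOTAL
print(f"{narcotics_share:.1%} of 2013 wiretaps targeted narcotics")
```

Roughly 87% of all reported wiretaps, which is what makes narcotics’ dominance so striking.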
Colleges are turning to predictive analytics to help them identify students who are likely to drop out, reports Vox. However, acting on these predictions might prove challenging. “The idea of knowing what’s going on is really important. But knowing what you can do to address that is probably even more important,” says education technology expert Ellen Wagner.
Google+ recently retracted its policy of requiring that its users publicly display their real name. “We know that our names policy has been unclear, and this has led to some unnecessarily difficult experiences for some of our users. For this we apologize, and we hope that today’s change is a step toward making Google+ the welcoming and inclusive place that we want it to be,” the company wrote.