Virtual education is gaining popularity. Five states—Michigan, Virginia, Florida, Alabama and Arkansas—require that students take at least one online course to graduate from high school. Three other states—Georgia, New Mexico and West Virginia—recommend that students take an online course, but do not require it.
Many tout the benefits of online education. For example, Michigan’s Department of Education claims that online courses help “prepare [students] for the demands they will encounter in higher education, the workplace and personal lifelong learning.” Moreover, online learning opportunities might allow students to take courses not offered at their schools, to work at their own pace, and to graduate on time when they have fallen behind, reports NPR.
Despite these potential benefits, the online schools through which many students complete their digital coursework have drawn criticism. A report from the National Education Policy Center found that, among full-time online schools that received “Adequate Yearly Progress” ratings, about two-thirds were rated “unacceptable,” and these schools’ “‘On-Time Graduation Rate’… was less than half the national average.” Part of the reason may be that many virtual schools are for-profit. They “spend a fraction of what districts spend on teacher salaries,” says Gary Miron, a co-author of the report. However, K12, a large provider of online education, says that its lower scores reflect the struggling students it serves.
Perhaps the biggest unanswered question is whether online education can live up to in-person instruction. In a 2009 report, the US Department of Education found that “on average, students in online learning conditions performed modestly better than those receiving face-to-face instruction.” However, the report was careful to note the surprising lack of “rigorous published studies contrasting online and face-to-face learning conditions for K-12 students,” and cautioned against generalizing the study’s findings to K-12 students.
With so many open questions and so much conflicting evidence, policymakers should proceed carefully.
AT&T is offering different levels of privacy at different prices to users of its GigaPower broadband service. The base price of GigaPower is $70 per month, but AT&T charges an additional $29 per month to customers who opt out of its Internet Preferences program. That program allows the company to use “individual Web browsing information, like the search terms you enter and the web pages you visit, to tailor ads and offers to your interests.”
AT&T recently introduced GigaPower in Kansas City, bringing its privacy practices back into headlines.
The program represents a new and powerful form of web tracking. Historically, internet service providers (ISPs)—companies that provide internet access, like AT&T and Comcast—generally did not surveil their customers’ online behaviors for profit. But this is changing. Because ISPs carry all of a customer’s internet traffic, they have a unique vantage point from which to observe that customer’s browsing behavior. For example, ISPs can observe unencrypted interactions with many different competing online retailers, allowing them to profile people based on their individual shopping and reading habits. (It’s worth noting that AT&T is unlikely to see customers’ interactions with websites that encrypt communications by default, like Google and Facebook.)
Targeted advertising and other types of analytics that depend on “segmenting” people can be predatory or even unintentionally exploitative. It’s troubling that individuals with lower incomes may not be able to afford what has traditionally been seen as “normal” levels of privacy for broadband internet services.
The Department of Justice (DOJ) is proposing a change to the federal rules governing criminal procedure that would allow judges to issue more sweeping search warrants.
The Federal Rules of Criminal Procedure (specifically, Rule 41) currently allow judges to approve warrants only within the geographic confines of their judicial district, with a few limited exceptions. The DOJ’s proposed amendment would allow judges to issue a warrant to remotely search or seize data on a computer outside of their district in certain cybersecurity-related investigations, or when the location of the data has been “concealed through technological means.”
The amendment has drawn criticism from both privacy advocates and industry. Privacy advocates are worried that computer security investigations—particularly those involving botnets (networks of computers compromised by malware or a virus)—might inadvertently pull in thousands of innocent users’ data. “Victims of botnets include journalists, dissidents, whistleblowers, members of the military, lawmakers and world leaders, or protected classes,” wrote Amie Stepanovich of Access. Google argued that the amendment might also open the door for U.S. authorities to “directly search computers and devices around the world.”
Compounding these concerns is the fact that no one seems to know what “remote access” means, or how much information it might entitle the government to see. The American Civil Liberties Union argued that “federal law enforcement agencies have used sophisticated surveillance software as part of criminal and national security investigations” in the past. Thus, it worried that the vague rules might permit new and covert surveillance attempts which “would undoubtedly end up searching the computers of innocent people who are not engaged in any crime.” Google also noted that this ambiguity could enable “government hacking of any facility wherever located.”
The DOJ says that it merely wants to realign search warrant protocol with modern technology. The “existing rules already allow the government to obtain and execute such warrants when the district of the targeted computer is known,” says Deputy Assistant Attorney General David Bitkower. The new rule, he argues, merely clears the path to serve warrants in more technologically complex cases.
Given the enormous power of warrants in the digital world, even small and seemingly wonky procedural rules can have big consequences.
The Supreme Court on Monday declined to hear Daoud v. United States, a case about a criminal defendant’s right to see secret surveillance applications approved under the Foreign Intelligence Surveillance Act (FISA). The Court’s decision effectively upholds the Seventh Circuit’s ruling that judges can evaluate classified surveillance orders without disclosing them.
Two Columbia University researchers have created a smartphone accessory that can accurately detect HIV in just 15 minutes and which costs only $34 to produce. Early detection can help prevent the spread of HIV. The device may help protect ethnic minorities who have a disproportionately high risk of infection.
According to a recent article in Salon, the technology sector, among others, is “getting filthy rich” from mass incarceration. Corporations, including Exmark (a Microsoft subcontractor) and Dell, have used prison laborers to shrinkwrap software and recycle desktops. Corporations can pay inmates as little as 35 cents an hour for their work.
In South Carolina, inmates face up to two years in solitary confinement for making just one post to Facebook. It’s one stark frontier in an ongoing battle over the pros and cons of cell phone and internet access in prisons.
Many inmates are prohibited from using cell phones or the internet. However, the California Department of Corrections has confiscated more than 30,000 cell phones since 2012, according to an in-depth investigation by Fusion (a news site focused on technology and social impact). The investigation also “turned up dozens of social media profiles of inmates currently serving time in several states.” Prison officials are worried that inmates can use cell phones to “have unmonitored conversations that could further criminal activity, such as selling drugs or harassing other individuals.” They offer stories of inmates who have used cell phones to order killings of eyewitnesses and corrections officers. Internet access, whether on a smartphone or a computer, could be used to achieve the same ends.
Given these risks, some correctional facilities have gone to great lengths to prevent inmates from communicating with others through cell phones or the internet. Some correctional facilities in New York use chair-shaped magnetic scanners to detect metal objects, such as cell phones, inside body cavities. California, Georgia, Maryland, Mississippi, and Texas have used “managed access systems,” which intercept calls from nearby phones before they are sent to carriers’ towers. Furthermore, the South Carolina Department of Corrections (SCDC) has made it a separate Level 1 offense (the same level as homicide or sexual assault) for each day that an inmate posts on a social network site, writes Dave Maass of the Electronic Frontier Foundation (EFF). Under this policy, inmates can receive up to 720 days of solitary confinement per offense. One inmate was sentenced to 37.5 years in solitary confinement for making just 38 posts on Facebook. “In other words,” Maass writes, “if a South Carolina inmate caused a riot, took three hostages, murdered them, stole their clothes, and then escaped, he could still wind up with fewer Level 1 offenses than an inmate who updated Facebook every day for two weeks.”
Despite significant efforts to keep inmates from accessing cell phones and the internet, some argue that inmates’ ability to connect with the outside world isn’t really that harmful. Many inmates use cell phones for purposes that do not threaten anyone’s safety, like communicating with friends and family, posting videos of themselves dancing, or organizing hunger strikes. Some research, including a study from the Minnesota Department of Corrections, shows that human interaction decreases recidivism: that study found that felons who were visited in prison were 13% less likely to be reconvicted than those who weren’t visited. “For every one [prisoner] who is a problem, nine more just really want to connect with society,” said Michael Santos (author, teacher, and former prisoner) to Fusion.
“Balancing the rights of inmates with public safety is a tricky task, but prisons… must consider proportionality and fairness for justice to be truly served,” writes Dave Maass of the EFF.
The National Association for the Deaf (NAD) has sued Harvard and MIT (both complaints are online), arguing that as recipients of federal funds, these universities are legally obliged to provide closed captioning for video lectures they’ve made freely available to the public. NAD hopes that its suit will move other universities to caption their content as well.
The suit concerns massive open online courses that Harvard and MIT make freely available to anyone with an internet connection, as well as podcasts and public lectures. In making the content available online, Harvard aims to “create effective, accessible avenues for people who desire to learn but who may not have an opportunity to obtain a Harvard education.” However, the NAD alleges that deaf and hard of hearing Americans are largely denied access to this content because it “is either not captioned or inaccurately or unintelligibly captioned.” For example, a caption of Liberian President Ellen Johnson Sirleaf read “the square governmental Oct ago,” when her real statement was “…these were either the government or partners of the government.” According to the lawsuit, “just as buildings without ramps bar people who use wheelchairs, online content without captions excludes individuals who are deaf or hard of hearing.”
Thus far, neither university has commented on the case’s merits. Nevertheless, any potential discrimination may not be intentional. Both schools say they want to make their content accessible; however, Harvard has indicated that it doesn’t know how. Jeff Neal, a spokesman for Harvard, stated that the University is eagerly awaiting “much needed guidance” from the US Department of Justice (DOJ). Once the DOJ adopts rules, Harvard plans to follow them. “Expanding access to knowledge and making online learning content accessible is of vital importance to Harvard and to educational institutions across the country,” said Neal. MIT said it plans to caption all new online content.
Captioning is costly, but not particularly difficult. It can cost about $200 an hour to caption content, and Harvard alone has thousands of video and audio tracks available online. “Disability law compliance at universities is very much a work in progress… it requires making changes in bureaucratic routines, and in big institutions, there’s resistance to deviating from the routines,” said Sam Bagenstos, a University of Michigan law professor and former number-two at the Department of Justice Civil Rights Division, to the New York Times.
A pair of articles in The Wall Street Journal reported that a new search engine, Memex, is being used to catch sex traffickers who largely operate “in the shadows” online. Memex makes diagrams that reveal common elements connecting online ads, such as two ads that use the same phone number. Memex helped reveal a sex trafficking ring that brought North Korean women to the United States.
In a recent speech, F.B.I. director James Comey spoke about implicit racial bias. Rather than fault people for their unconscious tendencies, he said, law enforcement needs “to design systems and processes to overcome that very human part of us all.”
Courts are starting to weigh the meaning of “emoji,” those little smiley faces and other icons that can be added to online messages. Wired explains that “as social media becomes increasingly important evidence for law enforcement, so too do emoji. When the digital symbol for a gun, a smile, or a face with stuck-out tongue comes up in court, they aren’t being derided or ignored.”
Facebook is now giving users the option to select a “legacy contact” who can manage some aspects of their profile once they die. Specifically, after submitting a memorialization request, the legacy contact can update profile pictures, respond to friend requests, and display a post at the top of a user’s page announcing things like memorial services. The contact cannot post as the user or read their private messages.
In more Facebook news, the Electronic Frontier Foundation writes of Native Americans having difficulty with Facebook’s name policy, which asks users to “use the name they go by in real life.” The website suspended Dana Lone Hill’s account until it could verify her “everyday name.” This story echoes one we wrote discussing how Google’s name policy can burden those in cultures that use unconventional names.
Differential pricing means charging different people different prices for the same product. It’s an old practice that might soon take on new forms thanks to big data.
This was the message from a recent White House report entitled Big Data and Differential Pricing. The report didn’t break new ground and didn’t expose new business practices. However, it did explain the wide-ranging territory of differential pricing—a topic that is bigger than it seems at first glance. It also made clear that advocates must become increasingly conversant in the topic, especially as businesses collect more and more data about consumers.
Here are some of the most important takeaways from the report:
There are at least two major types of differential pricing: risk-based pricing and value-based pricing. They raise different questions and tend to benefit different consumers.
Risk-based pricing occurs when a business prices a product based on the cost of selling it to different groups of buyers. The practice is common in the insurance and credit markets. For example, recent developments in risk-based pricing include devices that monitor driving behavior to price auto insurance, and algorithms that use social media data to price credit. Risk-based pricing can benefit a business’s least costly customers at the expense of others. It can thus raise “serious fairness concerns, especially when major risk factors are outside of an individual consumer’s control.”
Value-based pricing, on the other hand, occurs when a business prices a product based on buyers’ willingness to pay. For example, online storefronts might use people’s browsing history and location to vary the offers and products displayed. In a competitive environment, value-based pricing can help ensure that less price-sensitive customers (who are often wealthier consumers) pay a higher price, to the benefit of more price-sensitive customers.
Big data is poised to change both types of differential pricing by giving businesses a finer-grained understanding of their customers.
Differential pricing is not fundamentally unfair or wrong. Sometimes, it provides clear benefits.
Many forms of differential pricing are obvious and uncontroversial. For example, movie theaters charge less for matinees, grocery stores reduce the prices of nearly-expired produce, and colleges award scholarships to students who lack financial means. “Economic reasoning suggests that differential pricing, whether online or offline, can benefit both buyers and sellers,” concludes the report. In other words, price differentiation is just a tool: it can be used for good or for ill. Accordingly, the White House declined to recommend that the practice be directly regulated.
As differential pricing relies on more data, businesses must tread carefully to avoid unfairly impacting historically disadvantaged groups.
The report observed that “big data could lead to disparate impacts by providing sellers with more variables to choose from, some of which will be correlated with membership in a protected class.” We discussed these concerns in an earlier article, A Guide to “Big Data’s Disparate Impact.”
There is scant evidence that online retailers customize their prices on an individual basis today. The report claims that the “relative scarcity of personalized pricing examples suggests that companies are moving slowly or remaining quiet, perhaps due to fears that consumers will respond negatively, but also because the methods are still being developed.”
However, businesses are using big data in other, more subtle ways. For example, businesses regularly test different price points and promotional patterns at different times, and might randomly assign different consumers different prices. They are also making increased use of loyalty programs (and similar programs) to more precisely target coupons and other offers.
Companies also engage in “steering”: the practice of showing different products to people in different demographic groups. Steering is “a common practice across web sites, but for a relatively limited set of products,” observed the report. For example, Orbitz showed users of Apple devices more expensive hotel options.
Unfortunately, businesses’ pricing practices are often opaque to users, and will likely remain so in the future. A recent research paper on online differential pricing openly admits that “today, [researchers] lack the tools and techniques necessary to be able to detect such behavior.”
A California state Senator will soon introduce a bill that would require California state agencies to obtain a warrant before searching electronic devices or compelling service providers to hand over electronic communications. The bill protects more information from warrantless government access than would some reform efforts at the federal level.
The California bill, called the Electronic Communications Privacy Act (CalECPA), shares a name with the federal Electronic Communications Privacy Act (ECPA). Passed in 1986, ECPA sets standards for law enforcement access to electronic communications and associated data. Civil rights and privacy advocates have been trying to update it for years.
Today, ECPA allows government agencies to compel service providers to disclose electronic communications that are more than 180 days old with just a subpoena, as opposed to a warrant (which is significantly harder to obtain). It also provides little protection for location information. “If law enforcement sought those same messages in the physical world, a warrant would be required. This difference is not only wrong, but also inconsistent with the Fourth Amendment,” wrote the Electronic Frontier Foundation, summarizing popular criticisms.
“An outdated ECPA is a particular burden on minority communities,” argue Christopher Calabrese and Sandra Fulton on the Leadership Conference’s blog. “Despite the efforts of civil rights groups, the practice of racial profiling by members of law enforcement at the federal, state, and local levels remains a widespread and pervasive problem affecting African-American, Muslim, Latino, and other communities.”
CalECPA, proposed by California Senator Mark Leno, falls more in line with modern expectations of privacy. It would essentially prohibit California state government agencies from demanding access to or accessing emails, private messages (like those on Facebook and Twitter), text messages, location data, and certain other data stored in the cloud without a warrant or wiretap order. It protects more information from warrantless access than an ECPA reform bill supported by a majority of the House. (That bill would require the government to obtain a warrant before compelling service providers to disclose electronic communications, but does not appear to protect location information.)
“Legal experts say that CalECPA, if it passes, would not be the first such digital protection bill at the state level, but it would be the most comprehensive,” reports Ars Technica.
Would you share private data for the good of city planning? This is the question asked by Henry Grabar of Next City, who explains that individuals’ travel data is immensely valuable to cities. It can help cities “decide which streets to plow first after a snowstorm, improve evacuation procedures, understand ambulance response, implement effective congestion pricing, identify popular late-night districts, measure the impact of new development and ascertain the viability of certain commercial strips.”
Millions of Facebook users in Indonesia and Nigeria don’t recognize that they are using the internet, reports Quartz. But does that matter? “This is more than a matter of semantics. The expectations and behaviors of the next billion people to come online will have profound effects on how the internet evolves.”
The federal government continues to collect more data than ever before. The most recent metric: Twitter recently announced that the United States made 1,622 requests for user data during the second half of 2014. That figure represents a 29% increase since the last reporting period.