Stronger Locks, Better Security
Written by Administrator   
Wednesday, 25 November 2015 04:59

What if, in response to the terrorist attacks in Paris, or cybersecurity attacks on companies and government agencies, the FBI had come to the American people and said: In order to keep you safe, we need you to remove all the locks on your doors and windows and replace them with weaker ones. That's because, if you were a terrorist and we needed to get into your house, your locks might slow us down or block us entirely. So Americans, remove your locks! And American companies: stop making good locks!

We'd all reject this as a bad idea. We'd see that it would make us all vulnerable, not just to terrorists but to ordinary thieves and bad guys. We'd reject undermining our daily security in favor of the vague possibility that, in some cases, law enforcement would be guaranteed quick, easy access to our homes. We'd say to the FBI: Stop right there. We need more security in the wake of these attacks, not less.

Yet a similar tradeoff is being asked of us in the attacks on strong encryption. The FBI isn't technically asking for no locks—it's asking for weakened ones, so that it can guarantee it can break any lock that we buy or use—but the end result is the same: we're made more vulnerable. As with the locks on our doors, digital locks can't be made to allow access to all the good guys and none of the bad guys. The lock can't tell the difference, and building complicated processes for storing digital keys creates even more vulnerabilities, as demonstrated by a recent MIT report and an open letter to David Cameron by Harvard Professor (and EFF Board member) Jonathan Zittrain.

Right now the FBI's strategy is focused on putting pressure on companies like Apple, Microsoft, and Google to prevent us from ever getting access to good locks in the first place. Yet if the FBI were publicly calling for home builders and locksmiths to stop offering you the strongest possible home or office security systems, we'd see the folly of the strategy outright.

EFF and many others have long demonstrated that limiting our access to strong encryption is a bad idea. But somehow, maybe because the way these locks work is more hidden from users in the context of digital networks and tools, the argument continues to be raised by an FBI that should know better, and by politicians who should know better, too, like Hillary Clinton.

The response to insecure networks and digital technologies must be to make them stronger. And yet this basic message is not only lost on those who call for encryption controls; it has also been undermined by the cybersecurity approach of CISA, which, instead of encouraging better security by those who store our information, pushes companies to increase the risks we already face by "sharing" more of our data with the government. Of course, the lapses in government security are already well documented. The same wrongheaded approach is on display when our Congress fails to reform the Computer Fraud and Abuse Act to protect the security researchers whose work results in better protections for us all—and instead pushes for a worse version of the law, with a still broader scope and harsher penalties.

Unlocking everyone's doors isn't the answer to global crime or terrorism. Building and supporting stronger security is.

Free Router Software Not In The Crosshairs, FCC Clarifies
Written by Administrator   
Wednesday, 25 November 2015 04:32

The FCC will not seek to ban free software from wireless routers, according to a clarification it made earlier this month in a rulemaking related to radio devices. An earlier draft of the official proposal included a specific reference to device manufacturers restricting installation of the open-source project DD-WRT.

That line, in the context of the larger proposal, created confusion in a community of router hackers that already operates in an often unwelcoming environment. Router makers rarely provide much in the way of support or documentation to people developing new software, and have a bad record of delivering software updates to end users. Against this background, the idea that regulators might require or urge those manufacturers to take proactive steps against third-party developers was cause for alarm.

This is especially true considering the valuable innovations and developments that have come out of the third-party router software community—innovations like advancing the state of the art in mesh networking and combatting slowdowns that come from "bufferbloat." Beyond that, free router software is frequently more secure than the manufacturer option, because it continues to receive patches and critical updates through community support.

EFF was far from alone in its concern about the possibility of a regulatory crackdown on free router software. Working together with the Save Wifi coalition, we re-launched our "Dear FCC" platform, originally developed to help the public provide comment on the net neutrality rulemaking earlier this year. More than 1000 concerned individuals used the platform to leave comments on this more recent rulemaking, making it one of the most active open FCC dockets.

To its credit, the FCC seemed to get the message loud and clear. In a blog post earlier this month titled "Clearing the Air on Wi-Fi Software Updates," the chief of the agency's Office of Engineering & Technology explained the situation:

[T]here is concern that our proposed rules could have the unintended consequence of causing manufacturers to “lock down” their devices and prevent all software modifications, including those impacting security vulnerabilities and other changes on which users rely. Eliciting this kind of feedback is the very reason that we sought comment in an NPRM and we are pleased to have received the feedback that will inform our decision-making on this matter.

In my last post I recognized the need to work with stakeholders – particularly the user community – to address these concerns in a way that still enables the Commission to execute its mandate to protect users from harmful interference. I’m happy to say that the OET staff and I have spoken directly with some of these stakeholders in the last few weeks.

One immediate outcome of this ongoing dialogue is a step we’ve taken to clarify our guidance on rules the Commission adopted last year in the U-NII proceeding. Our original lab guidance document released pursuant to that Order asked manufacturers to explain “how [its] device is protected from ‘flashing’ and the installation of third-party firmware such as DD-WRT”. This particular question prompted a fair bit of confusion – were we mandating wholesale blocking of Open Source firmware modifications?

We were not, but we agree that the guidance we provide to manufacturers must be crystal-clear to avoid confusion. So, today we released a revision to that guidance to clarify that our instructions were narrowly-focused on modifications that would take a device out of compliance.

That revision is a welcome one. We'll continue to monitor the progress of this proposed rule to ensure it can't be used to jeopardize the important role that free third-party software continues to play in the router ecosystem.

Superfish 2.0: Now Dell is Breaking HTTPS
Written by Administrator   
Wednesday, 25 November 2015 03:56

Earlier this year it was revealed that Lenovo was shipping computers preloaded with software called Superfish, which installed its own HTTPS root certificate on affected computers. That in and of itself wouldn't be so bad, except Superfish's certificates all used the same private key. That meant all the affected computers were vulnerable to a “man in the middle” attack in which an attacker could use that private key to eavesdrop on users' encrypted connections to websites, and even impersonate other websites.

Now it appears that Dell has done the same thing [PDF], shipping laptops pre-installed with an HTTPS root certificate issued by Dell, known as eDellRoot. The certificate could allow malicious software or an attacker to impersonate Google, your bank, or any other website. It could also allow an attacker to install malicious code that has a valid signature, bypassing Windows security controls. The security team for the Chrome browser appears to have already revoked the certificate. People can test whether their computer is affected by the bogus certificate by following this link.

Ars Technica is reporting that at least two models of Dell laptop have been confirmed to contain the rogue certificate, but the actual number is possibly much higher.

The same certificate appears to be installed on every affected Dell machine, which would enable an attacker to compromise every affected Dell user if only they had the private key that Dell used to create the certificate. Unfortunately for Dell's customers (and fortunately for attackers), Dell included that key on all the affected laptops as well. The result is that anyone with an affected Dell laptop could use it to create a valid HTTPS certificate for any other affected Dell laptop owner. One security researcher created this test site, signed with the Dell certificate, to prove that the attack was possible. During the test, the researcher confirmed that Firefox, Chrome, and Internet Explorer all established an encrypted connection to the site with no warnings at all on an affected Dell laptop. Notably, the Dell root certificate was also discovered on at least one SCADA system (the type of computer system used to control industrial equipment, including in power plants, water treatment centers, and factories).
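To see why shipping the same signing key on every machine is so dangerous, here is a deliberately simplified toy model. This is not real X.509 certificate validation (we stand in for signatures with an HMAC, and all names are hypothetical); it only illustrates the trust logic: a machine accepts any credential produced by a key in its root store, so a root key that every attacker can extract lets anyone mint a "valid" certificate for any site.

```python
# Toy trust model (NOT real X.509): illustrates why a shared,
# extractable root key breaks HTTPS for every machine that trusts it.
import hashlib
import hmac

ROOT_KEY = b"same-secret-on-every-laptop"  # shipped to every affected machine

def sign(domain: bytes) -> bytes:
    # Anyone holding ROOT_KEY can mint a credential for any domain.
    return hmac.new(ROOT_KEY, domain, hashlib.sha256).digest()

def browser_accepts(domain: bytes, signature: bytes, trusted_roots) -> bool:
    # A browser checks only that *some* trusted root vouches for the name.
    return any(
        hmac.compare_digest(hmac.new(k, domain, hashlib.sha256).digest(), signature)
        for k in trusted_roots
    )

trusted = [ROOT_KEY]                       # the rogue root, preinstalled
attacker_cert = sign(b"bank.example.com")  # attacker extracted ROOT_KEY from any one laptop
print(browser_accepts(b"bank.example.com", attacker_cert, trusted))  # True: no warning
```

In the real attack, the extracted eDellRoot private key plays the role of `ROOT_KEY`: one laptop's key is every laptop's key.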

Less than 24 hours after Ars Technica published the story, Dell issued an apology stating:

Customer security and privacy is a top concern and priority for Dell; we deeply regret that this has happened and are taking steps to address it.

The certificate is not malware or adware. Rather, it was intended to provide the system service tag to Dell online support allowing us to quickly identify the computer model, making it easier and faster to service our customers. This certificate is not being used to collect personal customer information. It’s also important to note that the certificate will not reinstall itself once it is properly removed using the recommended Dell process.

Dell has also released an application to uninstall the certificate [exe] and instructions for how to remove the root certificate manually.

While we applaud Dell for responding to this fiasco so quickly, the fact remains that it never should have happened in the first place. The rogue eDellRoot certificate is dated two months after the Superfish debacle. Furthermore, Dell used the Superfish debacle to its advantage, promoting the security of its own products. Since Dell clearly knew that installing a root certificate—à la Superfish—was a bad idea, why did it make the exact same blunder?

We hope that other computer manufacturers will learn from this fiasco, if they didn't already learn from Lenovo and Superfish. Hardware manufacturers need to realize that installing their own root certificates on consumer machines is dangerous and irresponsible, since it compromises the security of the entire web. If they don't, they're guaranteed to keep facing embarrassment and losing the trust of their customers.

Stupid Patent of the Month: Infamous Prison Telco Patents Asking Third-Parties for Money
Written by Administrator   
Tuesday, 24 November 2015 05:15

Plenty of businesses rely on third-party payers: parents often pay for college; insurance companies pay most health care bills. Reaching out to potential third-party payers is hardly a new or revolutionary business practice. But someone should tell the Patent Office. Earlier this year, it issued US Patent No. 9,026,468 to Securus Technologies, a company that provides telephone services to prisoners. The patent covers a method of “proactively establishing a third-party payment account.” In other words, Securus patented the idea of finding someone to pay a bill.

It’s been an interesting few weeks for Securus. First, the FCC announced that, in response to price gouging by the industry, it would impose per-minute price caps on prison calls. Then The Intercept reported on a massive hack of recorded Securus calls: 70 million recordings, including many calls made under attorney-client privilege, were leaked through SecureDrop. Now we’d like to add one more item to the list: November’s Stupid Patent of the Month award.

Securus’ patent has a single independent claim with three steps. These steps are: 1) identifying a “prospective third-party payer”; 2) detecting a “campaign triggering event” (this can be something like an inmate being booked into a facility); and 3) “initiating a campaign to proactively contact” the prospective third-party payer using an “interactive voice response system.” In other words, when an inmate gets booked into the local jail, Securus robocalls a family member to ask if they are willing to set up a pre-paid phone account.
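To underline how mundane the claimed method is, the three steps can be captured in a few lines of ordinary code. This is our own hypothetical sketch (all names, fields, and the sample phone number are ours, not the patent's); it exists only to show that there is nothing technically novel to claim here.

```python
# Hypothetical sketch of the patent's three claim steps — deliberately
# trivial, to show the absence of any technical innovation.

def identify_prospective_payer(inmate):
    # Step 1: identify a "prospective third-party payer".
    return inmate["emergency_contact"]

def on_campaign_trigger(event, ivr_dial):
    # Step 2: detect a "campaign triggering event" (e.g., a booking).
    if event["type"] == "booking":
        payer = identify_prospective_payer(event["inmate"])
        # Step 3: "initiate a campaign to proactively contact" the payer
        # via an interactive voice response (IVR) system — i.e., robocall.
        ivr_dial(payer["phone"])

calls = []  # stand-in for the IVR system: record the numbers it would dial
on_campaign_trigger(
    {"type": "booking",
     "inmate": {"emergency_contact": {"phone": "555-0100"}}},
    calls.append,
)
print(calls)  # ['555-0100']
```

That is the entire claimed invention: on a triggering event, look up a likely payer and place a call.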

There are two serious problems with this patent. First, the claims are directed to a mind-numbingly mundane business practice and should have been rejected as obvious. Obvious uses or combinations of existing technology are not patentable. Second, the claims are ineligible for patent protection under the Supreme Court’s 2014 decision in Alice v. CLS Bank, which holds that an abstract idea (like contacting potential third-party payers) doesn’t become eligible for a patent simply because it is implemented using generic technology. That the system failed to register either of these defects shows deep dysfunction.

In a sane world, a patent examiner would apply common sense and reject Securus’ application out of hand. It includes no technological innovation (it notes that all of the relevant phone technology already exists); instead, it simply describes a basic set of steps for contacting potential third-party payers. Unfortunately, the Federal Circuit has essentially beaten common sense out of the patent system. For example, it recently overruled an examiner who had relied on common sense for the basic fact that electrical plugs have prongs. This repudiation of common sense is how we get patents on filming a yoga class or white background photography.

To the patent examiner’s credit, he did originally reject all of Securus’ claims as obvious based on a combination of earlier publications regarding third-party payment accounts. Securus appealed that rejection to the Patent Trial and Appeal Board (PTAB). The PTAB overruled (PDF) the examiner. In a victory of hyper-formalism over rationality, the PTAB said that the examiner had not given sufficiently explicit reasons for combining the teachings of prior publications.

What about Alice v. CLS Bank? Securus’ claims are just an abstract idea implemented on generic technology. They should therefore have been found ineligible under the Supreme Court’s new standard. In fact, when it overruled the examiner on obviousness grounds, the PTAB explicitly noted (in a footnote) that the examiner “may wish to review the claims for compliance under 35 U.S.C. § 101” in light of the Alice decision. Yet the Patent Office ignored this suggestion and rubber stamped the claims.

We have repeatedly urged the Patent Office (here, here, and here) to do a better job applying Alice to pending applications. It is very disappointing to see patents like this one being granted months after the Supreme Court’s ruling. Even invalid patents are very expensive to defeat in court after they’ve issued. Securus already has a captive market for its services. It does not need the monopoly power of a stupid patent as well.

The Sorry Tale of PECB, Pakistan's Terrible Electronic Crime Bill
Written by Administrator   
Monday, 23 November 2015 11:09

It is a truth universally acknowledged that a government, in the wake of a national security crisis—or hostage to the perceived threat of one—will pursue and in many cases enact legislation that is claimed to protect its citizens from danger, actual or otherwise. These security laws often include wide-ranging provisions that do anything but protect citizens' rights or their safety. We have seen this happen time and time again, from America's PATRIOT Act to Canada's C-51. The latest wave of statements by politicians after the Paris attacks implies we will see more of the same very soon.

Not keen to be left out, Pakistan has now joined the ranks of countries using “cybercrime” and terrorism to rewrite the protections for their nationals' privacy and right to free expression. In January 2015 the Government of Pakistan drafted the Prevention of Electronic Crimes Bill (PECB). Ostensibly the PECB was written to address new digital issues, such as cyberstalking, forgery, and online harassment. The reality is that the PECB contains such broad legal provisions that it would criminalize everyday acts of expression while undermining the right to privacy of Pakistani citizens.

The PECB was introduced in the same period in which the government of Pakistan established its National Action Plan (NAP), a comprehensive state-level project to combat terrorism after armed men linked to the Taliban attacked an Army-run school in the city of Peshawar, killing 145 people, 132 of whom were children. The PECB became part of the NAP: a political product intended to make control of political expression an official role of the government.

Much like its international counterparts, the PECB skews in favor of national security—loosely defined—while ignoring civil liberties. Section 34 of the PECB, for example, gives the Pakistan Telecommunication Authority (PTA) the power to block objectionable content and websites, with only a vague, unclear idea of what constitutes ‘objectionable’. If the PTA determines that blocking content is “necessary in the interest of the glory of Islam or the integrity, security or defence of Pakistan or any part thereof, friendly relations with foreign states, public order, decency or morality,” then the authorities can censor it.

Do you pass messages via Facebook, Twitter, and other social media platforms? Under the PECB, if those messages are “obscene” or “immoral”, you may be committing a criminal offence—and again, there is no clear definition of what constitutes either “obscene” or “immoral.” Even if one does manage to think clean thoughts, sending an email or a message without the recipient's permission is a criminal offence under Section 21. The lack of clear definitions and explanations gives sweeping power to investigating agencies, which could implicate, fine, and imprison anyone for sending a single email without prior consent.

These provisions and others in the drafted bill have led to condemnation from Pakistani rights organizations, international groups including Article 19, Human Rights Watch and Privacy International, and from Pakistan's legal and media communities.

My own organization and many others have been pushing Pakistan's government to retract the drafted PECB and to include amendments that address civil liberties concerns. The political atmosphere has made the government generally reluctant to open up the drafting process to civil society. Organizations, activists, and members of Pakistan's nascent tech industry spent most of 2015 calling on the Pakistan National Assembly's Standing Committee on Information Technology and Telecommunication to withdraw the drafted PECB for further study and amendment.

On September 17th, however, the Standing Committee decided to approve the draft and send it on its way to the National Assembly. To be more precise: the drafters never even gave copies of the draft to the other committee members. When those members objected, and stressed that the drafted bill could not be approved without review, they were overruled by the committee chair, who said that since he had seen the draft, that would be sufficient to pass it on to the National Assembly.

Anusha Rehman, the Minister of State for IT & Telecommunications, has defended the PECB, asserting that “safeguards have been ensured against any expected misuse.” But as it is currently written, the PECB contains little in the way of safeguards. Suggestions by civil society and lawyers have been consistently ignored.

What Pakistan needs is a cybercrime bill that progressively and effectively balances security and civil liberties. The current PECB text, badly drafted and politically compromised, is so far from that goal that it needs a complete overhaul.

Pakistan's lawmakers need to know how broken the PECB is. EFF and Digital Rights Foundation have created a tool that lets you send a message to key Senators and Members of the National Assembly via Twitter. Take action now, and stop the PECB from undermining Pakistan's online future.

Nighat Dad is the founder and executive director of Pakistan's Digital Rights Foundation.

Consumer Review Freedom Act Ready for Senate—Still a Good Law with a Few Problems
Written by Administrator   
Monday, 23 November 2015 05:31

We wrote earlier this month about the Consumer Review Freedom Act (S. 2044, H.R. 2110), a bill that would prohibit businesses from using form contracts to prevent their customers from sharing negative reviews of their products and services online, or using bogus copyright claims to censor reviews they don’t like. We also joined a group of peer organizations in signing a letter in support of the bill.

Last week, the Senate Commerce Committee approved an amended version of the bill, readying it for debate on the Senate floor. We applaud the committee for making customers’ freedom of speech a priority. Chairman John Thune (R-SD) is the bill’s primary sponsor in the Senate.

Last time we wrote about the bill, we noted two changes we’d like to see to it before it passes.

First, we were worried about a carveout that would allow a business to use a contract to assign the copyright for a customer’s speech to the business itself when that speech is not “lawful.” We think that this loophole could allow businesses to bypass the traditional protections in place for allegedly unlawful speech. For example, it’s easier and faster to send a copyright takedown notice to the offending user than to convince a judge to order the user to remove content for defamation. Unfortunately, the version the Commerce Committee approved still includes this loophole.

Second, we were concerned that the law could be used to prosecute not only the businesses that offer these unfair contracts, but also the customers who enter into them. The committee did edit the definition of “form contract” to make it clear that the contracts the law addresses are those that establish a business-customer relationship, but we still fear that the law could be used against customers.

The new version also clarifies that nonexclusive licenses to use content aren’t covered under the ban on contracts that transfer the customer’s copyright. That’s a good move: nearly all online forums and social media platforms require users to grant the platform a nonexclusive license to use their content. Calling those agreements into question clearly isn’t the intention of this bill.

Once again, we’re glad that the Senate is moving forward with the CRFA. It’s not a perfect solution, but it’s great to see Congress addressing the problem of unfair, lopsided form contracts.

EFF Joins Broad Coalition of Groups to Protest the TPP in Washington D.C.
Written by Administrator   
Friday, 20 November 2015 05:19

We were out on the streets this week to march against the Trans-Pacific Partnership (TPP) agreement in the U.S. capital. We were there to mark the beginning of a unified movement of diverse organizations calling on officials to review and reject the deal based on its substance, which we can finally read and dissect now that the final text has been officially released.

Image of the final, officially-released version of the TPP agreement printed double-sided, taken at the Public Citizen Access to Medicines office. This photo by Maira Sutton can be reused under CC-BY 4.0

Contained within the 6,000-plus pages of the completed TPP text are a series of provisions that empower multinational corporations and private interest groups at the expense of the public interest. Civil society groups represent diverse concerns, so while we may differ on our specific objections to the TPP, we commonly recognize that this is a toxic, undemocratic deal that must be stopped at all costs.

Our TPP protest signs, slogans based on suggestions from Twitter users @ronmexicolives and @GabeNicholas. This photo by Maira Sutton can be reused under CC-BY 4.0

So on Monday, we kicked off the new phase of TPP campaigning, calling on members of the U.S. Congress to reject the entire deal in the ratification vote expected in the coming months.

Beginning of the rally in front of the Chamber of Commerce in downtown Washington D.C. This photo by Maira Sutton can be reused under CC-BY 4.0

Roughly two hundred people came out to meet in front of the Chamber of Commerce. Some organizers and leading activists gave speeches about the impacts of the TPP on our local and global communities. Maira Sutton, EFF's Global Policy Analyst, spoke about the effects of the TPP's restrictive digital policy provisions, which empower Hollywood and other corporations while doing little to nothing to safeguard the rights of the public on the Internet or over our digital devices. Other speakers discussed how the TPP would weaken environmental protections and raise the cost of life-saving medicines and treatments.

We then started the march, with large banners and people carrying dozens of toilet paper-shaped lanterns with the words "flush the TPP" written across them.

This photo by Maira Sutton can be reused under CC-BY 4.0

The rally picked up many more people as we snaked around the downtown area and marched towards the Ronald Reagan International Trade Center:

This photo by Maira Sutton can be reused under CC-BY 4.0

Another rally was held on Tuesday morning, when we marched to each of the TPP country embassies to demonstrate our support for those who have been protesting the deal in other regions of the world. Protesters carried a 10-foot-tall figure of Mr. Monopoly, which puppeteered the flags of the 12 participating TPP countries. Others carried flags with "stop TPP" in all the languages of the TPP countries, and a gigantic globe of the earth on their shoulders to signify our common responsibility for the rights and interests of people and environments worldwide:

This photo by Maira Sutton can be reused under CC-BY 4.0

This photo by Maira Sutton can be reused under CC-BY 4.0

People from all over the United States came to attend these events in DC this week. We met people from Texas, Alabama, Florida, North Carolina, Michigan, and Washington state. They all traveled hundreds or thousands of miles to voice their opposition to the TPP, as well as to the other secretive trade deals that harm our digital rights and actively erode transparent, public-interest-driven policymaking.

While we had a pretty good turnout of several hundred people at these events in the capital, a recent poll showed that 60% of people in the United States have no opinion on the TPP. Clearly, we still have a lot of work to do to make more people in the United States aware of this deal and actively working to stop it before it goes to Congress.

Stay tuned as we develop more materials and resources to spread the word about the TPP's impacts on your digital rights. For now, you can start by taking action to urge your lawmakers to call a hearing on the contents of the TPP that will impact your digital rights and, more importantly, to vote this deal down when it comes to them for ratification.

Unintended Consequences, European-Style: How the New EU Data Protection Regulation will be Misused to Censor Speech
Written by Administrator   
Friday, 20 November 2015 05:05

Europe is very close to the finish line of an extraordinary project: the adoption of the new General Data Protection Regulation (GDPR), a single, comprehensive replacement for the 28 different laws that implement Europe's existing 1995 Data Protection Directive. More than any other instrument, the original Directive has created a high global standard for personal data protection, and led many other countries to follow Europe's approach. Over the years, Europe has grown ever more committed to the idea of data protection as a core value. The Union's Charter of Fundamental Rights, legally binding on all EU states since 2009, lists the “right to the protection of personal data” as a right separate from, and equal to, the right to privacy. The GDPR is intended to update and maintain that high standard of protection, while modernising and streamlining its enforcement.

The battle over the details of the GDPR has so far mostly been a debate between advocates pushing to better defend data protection and companies and other interests that find consumer privacy laws a hindrance to their business models. Most of the compromises between these two groups have now been struck.

But lost in that extended negotiation has been another aspect of the public interest. By concentrating on privacy, pro or con, the GDPR as it stands omits sufficient safeguards for another fundamental right: the right to freedom of expression, “to hold opinions and to receive and impart information... regardless of frontiers”.

This seems not to have been a deliberate omission. In their determination to protect the personal information of users online, the drafters of the GDPR introduced provisions that streamline the erasure of such information from online platforms—while neglecting the people who published that information, who were exercising their own human right of free expression in doing so, and their audiences, who have the right to receive it. Almost all digital rights advocates missed the implications, and corporate lobbyists didn't much care about the ramifications.

The result is a ticking time-bomb that will be bad for online speech, and bad for the future reputation of the GDPR and data protection in general.

Europe's data protection principles include a right of erasure, which has traditionally been about the right to delete data that a company holds on you, but has been extended over time to include a right to delete public statements that contain information about individuals that is “inadequate, irrelevant or excessive”. The first widely-noticed sign of how this might pose a problem for free speech online came from the 2014 judgment of the European Court of Justice, Google Spain v. Mario Costeja González—the so-called Right to Be Forgotten case.

We expressed our concern at the time that this decision created a new and ambiguous responsibility upon search engines to censor the Web, extending even to truthful information that has been lawfully published.

The current draft of the GDPR doubles down on Google Spain, and raises new problems. (The draft currently under negotiation is not publicly available, but July 2015 versions of the provisions that we refer to can be found in this comparative table of proposals and counter-proposals by the European institutions [PDF]. Article numbers referenced here, which will likely change in the final text, are to the proposal from the Council of the EU.)

First, it requires an Internet intermediary (which is not limited to a search engine, though the exact scope of the obligation remains vague) to respond to a request by a person for the removal of their personal information by immediately restricting the content, without notice to the user who uploaded that content (Articles 4(3a), 17, 17a, and 19a). Compare this with DMCA takedown notices, which include a notification requirement, or even the current Right to Be Forgotten process, which gives search engines some time to consider the legitimacy of the request. In the new GDPR regime, the default is to delete.

Then, after reviewing the (also vague) criteria that balance the privacy claim with other legitimate interests and public interest considerations such as freedom of expression (Articles 6.1(f), 17a(3) and 17.3(a)), and possibly consulting with the user who uploaded the content if doubt remains, the intermediary either permanently erases the content (which, for search engines, means removing their link to it), or reinstates it (Articles 17.1 and 17a(3)). If it does erase the information, it is not required to notify the uploading user of having done so, but is required to notify any downstream publishers or recipients of the same content (Articles 13 and 17.2), and must apparently also disclose any information that it has about the uploading user to the person who requested its removal (Articles 14a(g) and 15(1)(g)).

Think about that for a moment. You place a comment on a website which mentions a few (truthful) facts about another person. Under the GDPR, that person can now demand the instant removal of your comment from the host of the website, while that host determines whether it might be okay to still publish it. If the host's decision goes against you (and you won't always be notified, so good luck spotting the pre-emptive deletion in time to plead your case to Google or Facebook or your ISP), your comment will be erased. If that comment was syndicated, by RSS or some other mechanism, your deleting host is now obliged to let anyone else know that they should also remove the content.

Finally, according to the existing language, while the host is dissuaded from telling you about any of this procedure, it is compelled to hand over personal information about you to the original complainant. So this part of the EU's data protection law would actually release personal information!

What are the incentives for the intermediary to stand by the author and keep the material online? If the host fails to remove content that a data protection authority later determines it should have removed, it may become liable to astronomical penalties of €100 million or up to 5% of its global turnover, whichever is higher (Article 79).

That means there is enormous pressure on the intermediary to take information down if there is even a remote possibility that the information has indeed become “irrelevant”, and that countervailing public interest considerations do not apply.

These procedures are deficient in many important respects, a few of which are mentioned here:

  • Contrary to principle 2 of the Manila Principles on Intermediary Liability, they impose an obligation on an intermediary to remove content prior to any order by an independent and impartial judicial authority. Indeed, the initial obligation to restrict content comes even before the intermediary themselves has had an opportunity to substantively consider the removal request.
  • Contrary to principle 3 of the Manila Principles, the GDPR does not set out any detailed minimum requirements for requests for erasure of content, such as the details of the applicant, the exact location of the content, and the presumed legal basis for the request for erasure, which could help the intermediary to quickly identify baseless requests.
  • Contrary to principle 5, there is an utter lack of due process for the user who uploaded the content, either at the stage of initial restriction or before final erasure. This makes the regime even more likely to result in mistaken over-blocking than the DMCA, or its European equivalent the E-Commerce Directive, which do allow for such a counter-notice procedure.
  • Contrary to principle 6, there is precious little transparency or accountability built into this process. The intermediary is not, generally, allowed to publish a notice identifying the restriction of particular content to the public at large, or even to notify the user who uploaded the content (except in difficult cases).

More details of these problems, and more importantly some possible textual solutions, have been identified in a series of posts by Daphne Keller, Director of Intermediary Liability at The Center for Internet and Society (CIS) of Stanford Law School. However, at this late stage of the negotiations over the GDPR in a process of “trialogue” between the European Union institutions, it will be quite a challenge to effect the necessary changes.

Even so, it is not too late yet: proposed amendments to the GDPR are still being considered. We have written a joint letter with ARTICLE 19 to European policymakers, drawing their attention to the problem and explaining what needs to be done. We contend that the problems identified can be overcome by relatively simple amendments to the GDPR, which will help to secure European users' freedom of expression, without detracting from the strong protection that the regime affords to their personal data.

Without fixing the problem, the current draft risks sullying the entire GDPR project. Just like the DMCA takedown process, these GDPR removals won't be used only for the limited purpose they were intended for. Instead, they will be abused to censor authors and invade the privacy of speakers. A GDPR without fixes will damage the reputation of data protection law as surely as the DMCA has tarnished the reputation of copyright law.

