Cyber Civil Rights Conference


Re: Cyber Civil Rights Conference

Postby admin » Mon Jun 29, 2015 8:31 pm

WHO TO SUE?: A BRIEF COMMENT ON THE CYBER CIVIL RIGHTS AGENDA

VIVA R. MOFFAT†

Danielle Citron’s groundbreaking work on cyber civil rights raises a whole variety of interesting possibilities and difficult issues. [1] In thinking about the development of the cyber civil rights agenda, one substantial set of concerns revolves around a regulatory question: what sorts of claims ought to be brought, and against whom? The spectrum of options runs from pursuing currently existing legal claims against individual wrongdoers, to developing new legal theories and claims, to pursuing either existing or new claims against third parties. I suggest here—very briefly—that for a variety of reasons the cyber civil rights agenda ought to be pursued in an incremental manner and that, in particular, we ought to be quite skeptical about imposing secondary liability for cyber civil rights claims.

Citron has argued very persuasively that online harassment, particularly of women, is a serious and widespread problem. I will not describe or expand upon her claims and evidence here, but for the purposes of this brief essay, I assume that online harassment is a problem. Determining what, if anything, to do about this problem is another matter. There are a variety of existing legal options for addressing online harassment. Victims of the harassment might bring civil claims for defamation or intentional infliction of emotional distress. [2] Prosecutors might, under appropriate circumstances, indict harassers for threats or stalking or, perhaps, conspiracy. [3] These options are not entirely satisfactory: because of IP address masking, wireless networks, and other technological hurdles, individual wrongdoers can be difficult, if not impossible, for plaintiffs and prosecutors to find. Even if found, individual wrongdoers might be judgment-proof. Even if found and able to pay a judgment, individual wrongdoers may not be in a position to take down the offending material, and they are certainly not in a position to monitor or prevent similar bad behavior in the future.

Thus there are reasons to pursue secondary liability—against ISPs, website operators, or other online entities. Current law, however, affords those entities general and broad immunity for the speech of others. Section 230 of the Communications Decency Act provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” [4] This provision has been interpreted broadly such that ISPs, website operators, and others are not indirectly liable for claims such as defamation or intentional infliction of emotional distress. [5] The statute provides a few exceptions, for intellectual property claims, for example. [6] The proponents of the cyber civil rights agenda have proposed that additional exceptions be adopted. For example, Mary Anne Franks has analogized online harassment to workplace harassment and suggested that Section 230 immunity ought to be eliminated for website operators hosting harassing content. [7]

Notwithstanding the force of the arguments about the extent of the problem of online harassment and the reasons for imposing third party liability, I suggest that claims for indirect liability ought to be treated with skepticism for a variety of reasons. [8]

First, it is unclear whether the imposition of third party liability is likely to be effective at reducing or eliminating the individual bad behavior that is problematic. Secondary liability would presumably entail some proof of, for example, the third party’s ability to control the wrongful behavior or the place in which that behavior occurred, the third party’s knowledge of the bad behavior, or the third party’s inducement of the harassment (or some other indicia of responsibility of the third party). If this is so, it is easy to imagine that third parties—ISPs, website operators, and so on—who wish to avoid imposition of secondary liability or who wish to encourage or permit the “Wild West” behavior online will take measures to avoid findings of ability to control, of knowledge, or of inducement. Website operators might, for example, employ terms of use that strongly condemn online harassment and that require that users indemnify the website operators. ISPs might adopt strategies that effectively reduce or eliminate any “knowledge” the entity might have of what occurs on the site. The third parties might design their operations such that they cannot control user-created content, much as the filesharing services and peer-to-peer networks did in the wake of the RIAA’s pursuit of secondary liability claims.

Having just postulated that indirect liability may be ineffective, I recognize that my second concern may seem contradictory: indirect liability may also be overbroad. The collateral consequences of imposing secondary liability for user-generated content are enormous. As many have pointed out, third party liability may very well have substantial chilling effects on speech. Even if individual wrongdoers are willing to put their views out in the world, website operators and ISPs are likely to implement terms of use, commenting policies, and takedown procedures that are vastly overbroad. This is not to say that there are no collateral consequences, such as chilling effects on speech, from the imposition of direct liability, but only to speculate that such effects are potentially greater as a result of third party liability.

Third, to the extent that the cyber civil rights agenda entails (and perhaps emphasizes) a norm-changing enterprise, it seems at least possible that claims of indirect liability are less likely to be effective in that regard. Revealing individual bad behavior and pursuing that wrongdoer through the legal system represents a straightforward example of the expressive value of the law at work: public condemnation of wrongful behavior. Claims for indirect liability are less likely to allow for such a straightforward story. Many (though not all) website operators and ISPs are engaged in very little behavior that is easily categorized as wrongful. Instead, third party liability of those entities is justified on other grounds, such as the entity’s ability to control the online behavior, the receipt of benefits from the bad behavior, or knowledge of the harassment. Attempts to hold these entities liable may not serve the expressive value of changing the norms of online behavior because, in the vast majority of instances, people are less likely to be convinced that the behavior by the third party was, in fact, wrongful. [9] In short, the argument that the imposition of third-party liability will change norms about individual online behavior strikes me as speculative.

Finally, a number of the reasons that victims might pursue claims against third parties simply are not sufficient to justify imposition of such liability. One might seek third party liability because individual wrongdoers cannot be found or because those individual wrongdoers are judgment-proof. Neither reason, though understandable, is sufficient. As a descriptive matter, third party liability in general is rarely or never imposed solely for one of those reasons. As a fairness matter, that is the right result: it would be inequitable to hold a third party liable solely because the wrongdoer cannot be found or cannot pay.

Each of the concerns sketched out above applies either to a lesser extent or not at all to the pursuit of direct liability claims, civil or criminal. While there are other problems with efforts to seek redress against individual wrongdoers, that is the more fruitful path for the development of the cyber civil rights agenda.

________________

Notes:

† Assistant Professor, University of Denver Sturm College of Law. J.D., University of Virginia Law School; M.A., University of Virginia; A.B., Stanford University.

1. Danielle Keats Citron, Cyber Civil Rights, 89 B.U. L. REV. 61 (2009); Danielle Keats Citron, Law’s Expressive Value in Combating Cyber Gender Harassment, 108 MICH. L. REV. 373 (2010).

2. See Citron, Cyber Civil Rights, supra note 1, at 86–89.

3. Id. See, for example, the indictment of Lori Drew for conspiracy and violation of the Computer Fraud and Abuse Act, 18 U.S.C. § 1030. (The indictment is available online at http://www.scribd.com/doc/23406509/Indictment.) Drew was eventually acquitted by the judge in the case. See Rebecca Cathcart, Judge Throws Out Conviction in Cyberbullying Case, N.Y. TIMES, July 2, 2009, available at http://www.nytimes.com/2009/07/03/us/03 ... &scp=4&sq= lori%20drew&st=cse.

4. 47 U.S.C. § 230(c)(1) (2006).

5. For a summary of the development of the CDA’s immunity provisions, see H. Brian Holland, In Defense of Online Intermediary Immunity: Facilitating Communities of Modified Exceptionalism, 56 U. KAN. L. REV. 369, 374–75 (2008) (“[C]ourts have consistently extended the reach of § 230 immunity along three lines . . . .”).

6. The statute provides exceptions for intellectual property claims, federal criminal enforcement, and a few others. 47 U.S.C. § 230(e) (2006). Third party liability for intellectual property claims is also regulated and partly immunized. See 17 U.S.C. § 512 (2006).

7. Mary Anne Franks, Unwilling Avatars: Idealism and Discrimination in Cyberspace, COLUM. J. GENDER & L. (forthcoming Feb. 2010), available at http://papers.ssrn.com/sol3/papers.cfm? ... id=1374533.

8. On the other hand, I have much less concern about the vigorous pursuit of claims against individual wrongdoers.

9. In the course of representing a student sued by the RIAA for uploading digital music files in violation of the Copyright Act, I asked her if she had heard of the Napster opinion (A&M Records, Inc. v. Napster, Inc., 239 F.3d 1004 (9th Cir. 2001) (notably, for purposes of this anecdote, an indirect liability case)). She said, “Yes, but I used Gnutella.” The suit for indirect liability obviously didn’t have the expressive value for that student that the recording industry might have hoped.

Re: Cyber Civil Rights Conference

Postby admin » Mon Jun 29, 2015 8:33 pm

UNREGULATING ONLINE HARASSMENT

BY ERIC GOLDMAN†

INTRODUCTION

I learned a lot from Danielle Keats Citron’s articles Cyber Civil Rights [1] and Law's Expressive Value in Combating Cyber Gender Harassment. [2] I realized that women are experiencing serious harms online that men—including me—may be unfairly trivializing. I was also convinced that, just like the 1970s battles over workplace harassment doctrines, we will not adequately redress online harassment until we first acknowledge the problem.

However, finding consensus on online harassment’s normative implications is trickier. Online harassment raises cyberspace’s standard regulatory challenges, including:

• Defining online harassment, which may range from a coordinated group attack by an “online mob” to a single individual sending a single improper message.
• Dealing with anonymous or difficult-to-identify online harassers.
• Determining how online harassment differs from offline harassment (if at all) [3] and any associated regulatory implications.
• Deciding if it makes more sense to regulate early or late in the technological evolution cycle (or never).
• Allocating legal responsibility to intermediaries.

PROTECTING SECTION 230

In 1996, Congress addressed the latter issue in the Communications Decency Act, 47 U.S.C. § 230, which provides a powerful immunity for websites (and other online actors) from liability for third-party content or actions. Empowered by this immunity, some websites handle user-generated content (“UGC”) in ways that may facilitate online harassment, such as tolerating harassing behavior by users or deleting server logs of user activity that could help identify wrongdoers. As frustrating as these design choices might be, they are not surprising, nor are they a drafting mistake; instead, they are the logical implications of Congress conferring broad and flexible immunity on an industry.

Though we might question Congress’ understanding of UGC in 1996, it turns out Congress made a great (non)regulatory decision. Congress’ enactment of § 230 correlates with the beginning of the dot com boom—one of the most exciting entrepreneurial periods ever. Further, the United States remains a global leader in UGC entrepreneurial activity and innovation; note that many of the most important new UGC sites founded in the past decade (such as Facebook and YouTube) were developed in the United States. Although I cannot prove causation, I strongly believe that § 230 plays a major role in both outcomes.

Frequently, § 230’s critics do not attack the immunization generally, but instead advocate a new limited exception for their pet concern. As tempting as minor tweaks to § 230 may sound, however, we should be reluctant to entertain these proposals. Section 230 derives significant strength from its simplicity. Section 230’s rule is actually quite clear: except for three statutorily enumerated exceptions (intellectual property, federal crimes, and the Electronic Communications Privacy Act), websites are not liable for third-party content or actions—period. Creative and sympathetic plaintiffs have made countless attempts to get around § 230’s immunity, but without any meaningful success. [4] Given the immunity’s simplicity, judges have interpreted § 230 nearly uniformly to shut down these attempted workarounds. Increasingly, I notice that plaintiffs omit UGC websites as defendants, knowing that § 230 would moot any such claim.

Operationally, § 230 gives “in the field” certainty to UGC websites. Sites can confidently ignore meritless demand letters and nastygrams regarding UGC. Section 230 also emboldens UGC websites and entrepreneurs to try innovative new UGC management techniques without fear of increased liability.

Any new exceptions to § 230, even if relatively narrow, would undercut these benefits for several reasons. First, new exceptions would reduce the clarity of § 230’s rule to judges. Second, service providers will be less confident in their immunity, leading them to remove content more frequently and to experiment with alternative techniques less. Third, plaintiffs’ lawyers will try to exploit any new exception and push it beyond its intent. We saw this phenomenon in response to some plaintiff-favorable language in the Ninth Circuit’s Fair Housing Council of San Fernando Valley v. Roommates.com en banc ruling. [5] Judges have fairly consistently rejected plaintiffs’ expansive interpretations of Roommates.com, [6] but only at significant additional defense costs.

CONCLUSION: EDUCATION AND “NETIQUETTE”

While the debate about regulating intermediaries’ role in online harassment continues, education may provide a complementary—or possibly substitutive—method of curbing online harassment. On that front, we have much progress to make. For example, most current Internet users started using the Internet without any training about bullying, online or offline. Not surprisingly, some untrained users do not make good choices.

However, future generations of Internet users will have the benefit of education about bullying. For example, my seven-year-old son is learning about bullying in school. The program [7] teaches kids—even first graders—not to bully each other or tolerate being bullied. It even shows kids how to deal with bullies proactively. Anti-bullying programs like this may not succeed, but they provide a reason to hope that online harassment will abate naturally as better trained Internet users come online.

______________

Notes:

† Associate Professor and Director, High Tech Law Institute, Santa Clara University School of Law. Website: http://www.ericgoldman.org. Email: egoldman@gmail.com.

1. Danielle Keats Citron, Cyber Civil Rights, 89 B.U. L. REV. 61 (2009).

2. Danielle Keats Citron, Law's Expressive Value in Combating Cyber Gender Harassment, 108 MICH. L. REV. 373 (2009).

3. Compare Noah v. AOL Time Warner Inc., 261 F. Supp. 2d 532 (E.D. Va. 2003) (holding that Title II discrimination claims do not apply to virtual spaces such as AOL chatrooms), aff’d No. 03-1770, 2004 WL 602711 (4th Cir. Mar. 24, 2004) (per curiam), with Nat’l Fed. of the Blind v. Target Corp., 452 F. Supp. 2d 946 (N.D. Cal. 2006) (allowing an ADA claim against a non-ADA compliant retailer’s website based on the interaction between the retailer’s physical store and its online activity).

4. See, e.g., Nemet Chevrolet, Ltd. v. Consumeraffairs.com, Inc., 591 F.3d 250, 258 (4th Cir. 2009) (holding that § 230 shielded Consumeraffairs.com because the plaintiff failed to establish that Consumeraffairs.com constituted an information content provider by exceeding its “traditional editorial function”).

5. Fair Hous. Council of San Fernando Valley v. Roommates.com, 521 F.3d 1157 (9th Cir. 2008) (en banc).

6. As of December 31, 2009, I have tracked 13 cases citing Roommates.com, 11 of which have done so while ruling for the defense. See Posting of Eric Goldman to Technology & Marketing Law Blog, Consumer Review Website Wins 230 Dismissal in Fourth Circuit—Nemet Chevrolet v. ConsumerAffairs.com, http://blog.ericgoldman.org/archives/20 ... view_1.htm. (Dec. 29, 2009, 14:53 PST).

7. See Project Cornerstone, http://www.projectcornerstone.org.
