Sunday, May 01, 2016

FESC: Regulation by Internet Intermediaries

Regulation by Internet Intermediaries
Moderator: Jack Balkin
Emma Llanso & Rita Cant, “Internet Referral Units”: Co-Option of Private Content Moderation Systems for Extralegal Government Censorship
Kate Klonick, discussant: Most UGC platforms have content standards to which users agree.  Impermissible nudity, hate speech, violent content, harassment, etc. can be taken down, and this is protected by §230.  Platforms have become good at moderating ToS-violating content; they’re much better at it than the gov’t, and unlike the gov’t they’re allowed to do it.  The UK created an internet referral unit in 2010, dedicated to flagging “terrorist” content on social media sites.  IRUs are becoming increasingly popular for countering terrorist propaganda in Europe, and the US is starting to talk about them. They blur public/private lines.
 
Authors argue that flagging, and the subsequent takedown, constitutes gov’t action—the takedown is directly causally related to the flagging. They also argue that IRUs raise the specter of state action: private action can be held to const’l standards where the state affirmatively encourages the actor. But §1983 suits would likely be dismissed b/c of §230 immunity.
 
UK definition of extremism: vocal or active opposition to fundamental British values, calls for the death of members of the armed forces—this would be overbroad/unconst’l in the US. There is also a lack of procedural safeguards: internet cos. generally don’t provide adequate notice or a right to appeal.
 
Klonick suggests that Google, FB, and Twitter aren’t very vulnerable to gov’t pressures and are easily able to push back.  IRUs couldn’t do their job w/o social media—the platforms thus have a lot of leverage, since they can deny the gov’t access to their ToS-flagging channels.  [Though the platforms also trade off interests and may sacrifice some for leverage on others, like stopping DMCA reform that would make them more liable.] Also, content moderation at the big three at least offers an extraordinary amount of notice—what more would you want beyond notice of the ToS, notice of the takedown and the reason for it, and notice of the existence of an appeals process, all of which are provided?  Appeals processes may not be available on every platform, but often they are.
 
Transparency reports are a good idea: listing gov’t requests by month would let the public know about the relationship b/t the platform and the gov’t.
 
Llanso: we’ve seen some high level examples of Google, FB, Yahoo taking affirmative stances against overreach.  Concerned that there’s no guarantee they’ll always be able to take that kind of stand, or that other/smaller intermediaries will be able to make that stand.  Pressure on credit card companies as an example: Backpage v. Sheriff Dart.
 
Molly Land, Human Rights and Private Governance of the Internet
Rita Cant, presenter: A reminder that the 1A isn’t as universally applicable as we in the US sometimes assume.  In the European Union, even immediate removal of defamatory content upon notification can still lead to liability.  Institutional views drive courts’ understanding of the roles of intermediaries.  European bodies aren’t dismissing the concept of intermediary protection for speech, but they hold intermediaries liable when they have the capacity to police users and fail to do so.  The view that big hosts have the power to remove content and therefore the responsibility to do so is, according to Land, just wrong.  Rather than liability based on size, human rights law prescribes a different principle: an intermediary that participates in the creation of culpable speech is different from one that merely serves as a conduit.  Regulating the former as colluders in/contributors to a human rights violation is not regulating them as intermediaries, and this approach prevents over-takedowns.
 
But is a platform’s takedown of legitimate speech a human rights violation in the way that facilitating murder or illegal mining is?  Those seem very different.  Wrongful takedown of expression: is it even a violation of human rights at all if it’s done according to a company’s own standards?  Co-regulation that allows the gov’t to affirmatively protect those rights may be quite difficult—generally, gov’t actively enforcing human rights online has undermined those rights.
 
Land: Would like a bright-line rule, but jurisdictions vary; such a rule would not be seen as legitimate in many jurisdictions that give more weight to dignity, protection against discrimination, etc.  Easily convinced that co-regulation can be the worst of both worlds, with a lack of transparency.
 
Kate Klonick, From Constitution to Click-Worker: The Creation, Policy, and Process of Online Content Moderation
Molly Land, presenter: The paper deals with actual empirical evidence about how this works, interviewing the executives and click-workers who enforce the policies at FB, Twitter, and Google.  A moderator may have just 2 seconds to look at each piece of content.  The transborder nature of the platforms affects these policies too.
 
For the future: a closer connection between the normative questions and the empirical research.  Right now there are a lot of possible normative questions you could ask; maybe empirical research helps us understand the nature of the problem: who’s doing this, what the content problems are, etc.  If these policies are in response to user pressure v. gov’t pressure/avoiding regulation, we might have different reactions.  Promoting v. defeating user preferences may look different from a regulatory perspective.  Also, why these companies and not others?
 
Klonick: FB, Twitter, and Google have been operating continuously for a while, and they are primarily UGC companies (for Google, specifically YouTube).  History matters: their policies were created mostly by a small group of lawyers who were committed to the 1A and wanted to take down as little as they could while retaining an engaged userbase—some of the same lawyers moved from company to company.
 
Balkin: intermediary liability rules are state action rules.  The only Q is whether the free speech principle you use prohibits what’s being done.  If we tried an IRU in the US, though the ISP is permitted to have a ToS, the gov’t is probably not able to say “please enforce your ToS against this class of content.”  The line could be drawn differently—it could be Grokster-style inducement for everything, not just IP.  That leaves a wide swath open.  An internet company could decide to be a passive conduit for something, but could also engage in curation and hosting. We don’t want to tell a company what kind of business model it can adopt; drawing a line like Grokster’s offers more opportunities for innovation.
 
Whenever the gov’t shapes the innovation space and the permissible rules about when a private party we rely on for speech will be held liable, the gov’t is always already involved in that decision. Human rights laws are thus always implicated; the only question is the substantive one: what do those laws require?
 
Q: Gov’ts across the globe are resorting to self-help w/data localization and content regulation, often affirmatively objecting to the US approach. Art. 20 of one treaty (the ICCPR) outlaws hate speech: advocacy of hatred that incites hostility, discrimination, or violence.  Microsoft’s response: commit to obey local laws where we do business, informed by int’l law.  We have to distinguish social media from search engines.  Mapping all the info on the Web is a critical part of research/advancing knowledge; search relies on notice & takedown rather than looking for affirmatively offensive content. Nobody elected us to make these decisions, and we couldn’t hire the right people across the globe to make nuanced decisions.  So we use notice & takedown, and we publish our standards.
 
Land: if we just went w/users, it’d be all porn, so it makes sense for companies to have the freedom to shape their own communities. That also provides a signal to the gov’t about where they are going too far.
 
Balkin: consider Southern gov’ts cooperating indirectly or directly w/private entities to enforce private segregation—also intermediary issues.
 
Abrams: imagine a terrible terrorist attack; the gov’t learns that the perpetrator had just watched a particularly explosive and incendiary work touting jihad.  The President calls in Microsoft, Google, etc. and provides a list of things they ought to do to screen out bad content: they don’t have to do it, but for the safety of the country they should.  And the President tells the public that she has called for this action on their part.  Is that a problem?  [Note that this already happens in less fraught circumstances—consider the gov’t’s organization of best practices for DMCA notices.]
 
Q: when the NYT decides not to publish an article b/c the gov’t pleads with it to hold off on national security grounds, is that state action?
 
Balkin: that’s the Backpage case. 
 
Llanso: reminds her of Innocence of Muslims.  Yes, that’s improper for the gov’t to do, even in emergency circumstances. There are options for more formal procedures.  Telling the country about the request starts feeling like coercion.
 
Klonick: Innocence of Muslims was a big deal for her interviewees—often called the cause of the Benghazi attacks; people took it as incitement (even if that wasn’t really the cause). Even w/pressure from White House, they uniformly decided not to take it down.
 
Balkin: the fact that people violate the constitution isn’t an argument, it’s just a fact.
 
Lyrissa Lidsky: where does gov’t speech come into this, and the gov’t’s right to express its opinion?  If Obama wrote an op-ed saying ISPs shouldn’t publish Innocence of Muslims, is that gov’t speech, and is it ok or not ok?
 
Llanso: it’s fact-intensive: expression of opinion versus suggestion of consequences/coercion.  If they start talking about modifying §230 if website owners aren’t more responsible, that might be coercive.  [But if they talk about amending §230 at a time when there hasn’t just been an attack, that’s ok?]
 
Klonick: YouTube took down anti-Thai-monarchy videos; the claim was harm to the Thai people. In many cases, though, the platforms are exporting 1A standards.
 
Q: 4.5% of world population is covered by the 1A.
 
Klonick: the click-workers are from countries that don’t have an easy context for the n-word, so to evaluate a report they have to look at the person’s whole page.
 
Llanso: when content is illegal, such as child porn, transparency about it and its locations can be difficult, as can avoiding the disclosure of personal data.
 
RT: Not a hypo about gov’t pressure: at the §512 hearings, Katherine Oyama of Google was directly told: do more for copyright owners or we’ll have to change this.  Did that violate the First Amendment?
 
Balkin: Not a threat if they have a right to do it. Congress has the right to change the rules of liability, unless the reason is viewpoint based.  That’s the fact question to be resolved: whether the reason is viewpoint based. 
 
Q: don’t assume private corporations are benign compared to the gov’t.
 
Klonick: real-name policies are designed to make sure people know who’s attacking them; they’re a way to control libel, etc.  But they differ from platform to platform.
 
Llanso: that’s a controversial policy; it also generates worse outcomes for people with traditional Native American names that aren’t recognized as “real,” or for people at risk of stalking, harassment, or abuse.  She still sees gov’t efforts to restrict publicly available speech as more dangerous than individual companies’ decisions, which can have big impacts, but not as big.
