Media Law Resource Center

Serving the Media Law Community Since 1980


Curriculum – 2018 Legal Frontiers in Digital Media



The conference, a joint production of the Media Law Resource Center and the Berkeley Center for Law & Technology, explores emerging legal issues surrounding digital content in today's multi-platform world. The 2018 Digital Conference will be held May 17–18, 2018, at the Mission Bay Conference Center in San Francisco, CA. Sessions will run from 1:00 p.m. on May 17, with an early evening reception, through 12:30 p.m. on May 18.

Conference co-chairs:

  • Kelly Wong Craven, Facebook
  • Aaron Schur, Yelp Inc.
  • Brian Willen, Wilson Sonsini Goodrich & Rosati


UC Berkeley School of Law certifies that this activity has been approved by the State Bar of California for 7.5 hours of Continuing Legal Education credit (6.25 General Hours, and 1.25 Hours in Recognition and Elimination of Bias). If you are seeking credit for another jurisdiction, please check with your state bar to determine if California CLE credits are recognized, through reciprocity, in your jurisdiction.


Thursday, May 17, 2018

Under Pressure: Hosting and Unhosting Objectionable Content
(1:10 p.m. to 2:25 p.m.)

Increasingly, platforms have come under pressure on a number of fronts to take down, moderate, and/or stop hosting objectionable groups and content, such as content originating from white supremacists, alleged sex traffickers, terrorist groups, and the like. The pressure is coming from political forces seeking legal reforms, such as the recently passed Section 230 exception for sex trafficking (FOSTA) and EU regulations demanding accelerated removals, as well as from social and public-relations pressures, e.g., public outrage over neo-Nazi groups online after the violence in Charlottesville. As a result, platforms are shifting to a more hands-on approach to editorial control, attempting to refine their own values and community standards.

• What role do hosting services like Cloudflare, social media sites, and other platforms have in excluding hate groups, and is there a danger in private companies becoming upstream speech intermediaries?

• If you do allow controversial speech, and can identify participants, do you also allow people to advertise to those participants?

• What are the consequences for platforms of FOSTA's enactment, and is the rest of Section 230 under increasing threat?

• What are the jurisdictional boundaries of removal orders from foreign countries, and what standards should platforms follow in deciding whether to obey them?

• What pressures are coming from Europe to accelerate the removal of content, and what effect does that have in the U.S.?

Ari Holtzblatt, Counsel, WilmerHale (Moderator)
Michael Bloom, Director of Federal Government Affairs, Internet Association
Evan Engstrom, Executive Director, Engine
Sarah Jeong, Senior Writer, The Verge
Corynne McSherry, Legal Director, Electronic Frontier Foundation


Combatting Internet Disinformation Campaigns
(2:40 p.m. to 3:55 p.m.)

Whether by foreign governments like Russia, or by fraudsters and other individuals wishing to influence opinion and actions on the internet for their own ends, disinformation campaigns have become an acute problem that social media sites are facing calls to address. Beginning with a tech tutorial on how fake news and other misinformation are created and distributed in an artificially viral way, and how bots and fake users are employed to manipulate people, this session will cover the roles of platforms in identifying and combatting disinformation campaigns. The discussion will include:

• Virality online can be good or bad, but how do we distinguish between benign content and content that malevolently manipulates or undermines democracy?

• How should platforms fight back against disinformation campaigns, including future campaigns to influence our elections?

• As a platform, when do you have a legal obligation to act or to report actors manipulating your site?

• What obstacles – legal, practical, and otherwise – do platforms face in addressing these issues?

• A certain segment of users don't care that they're receiving fake news, and won't necessarily rely on verification tools even if they're made available. What's the proper response from platforms to this challenge?

• What is the appropriate role of governments in facing these issues and how can platforms better work with governments on this challenge?

Samir Jain, Partner, Jones Day (Moderator)
Dipayan Ghosh, Fellow, Public Interest Technology, New America 
Nicole Wong, Principal, Nwong Strategies
Samuel Woolley, Director of the Digital Intelligence Lab, Institute for the Future

Women in Tech: Is Climate Change Coming?
(4:10 p.m. to 5:25 p.m.)

It has been approximately a year since the Uber scandal uncovered a culture of sexual harassment and gender bias in the tech community. Silicon Valley still faces a dearth of female founders and women are still underrepresented at executive levels in tech companies and law firms. But is the outlook showing signs of improvement? What steps are tech companies taking to reduce gender bias and discrimination? How can the legal community contribute to increased diversity? This session will examine the current climate and highlight the strides the tech community is making to improve the future of women in tech.

Regina Thomas, Associate General Counsel, Oath Inc. (Moderator)
Lora Blum, General Counsel, SurveyMonkey
Connie Loizos, Silicon Valley Editor, TechCrunch
Nikki Stitt Sokol, Associate General Counsel - Litigation, Facebook

Keynote by Kara Swisher
(5:30 p.m. to 6:00 p.m.)

Kara Swisher, influential technology journalist and co-founder of Recode, will give a keynote speech on the current social and political climate for digital companies. She will tackle a theme that runs throughout our sessions this year: a shift in the attitudes of the public and public officials, who are increasingly expressing a desire that platforms take on more responsibility and serve as a filter to police objectionable content, propaganda, and illegal activity. Are digital platforms' responses meeting the challenges?


Friday, May 18, 2018

Face-Swapping Technology: Dignity, Privacy & the First Amendment
(9:00 a.m. to 9:40 a.m.)

New machine-learning technology is allowing even amateur video editors to conjure videos that convincingly replace people's faces with those of others – frequently unwitting celebrities – to both creative and destructive ends. This digital face-swapping tech has been used for satirical internet videos and, perhaps most famously, to recreate a younger Princess Leia in the Star Wars film Rogue One. In their most provocative form, these so-called "deepfake" AI tools have been used to create X-rated content featuring the faces of popular Hollywood actresses grafted onto porn stars' bodies. The videos have already engendered a firestorm that has led to bans on even freewheeling platforms like Reddit and Pornhub. This short presentation will explore whether the law can keep up with this controversial form of speech, and whether a balance can be struck to protect the reputational and privacy interests of unwitting subjects while upholding First Amendment principles.

• Do existing laws governing defamation, privacy, right of publicity, copyright, or intentional infliction of emotional distress – or anti-revenge-porn laws – protect the unwitting subjects of "deepfake" videos?

• How does the legal analysis change when fake videos are passed off as real? When celebrities are involved?

• Will this technology make it harder to verify audiovisual content, and easier to generate fake news?

Jim Rosenfeld, Partner, Davis Wright Tremaine LLP

How Algorithms & Machine Learning Work
(9:45 a.m. – 11:00 a.m.)

This session will begin with a tutorial on how algorithms and machine learning work in order to provide lawyers with a better understanding of how these technologies apply to solving real world problems. For example: how does machine learning help a review site spot fake reviews, a social media platform identify misinformation campaigns, or sites identify a banned user trying to rejoin the site under a new identity? Our tutorial will explore the limits of what algorithms and machine learning can and cannot do. The demonstration will be followed by a broader policy discussion, which will explore some of the practical, legal and ethical challenges of using algorithms:

• Since it's almost impossible to run a large network with millions of users without algorithms, how do you strike the right balance between machine learning and human moderators for legal compliance and/or takedowns under company policies, e.g., on copyright, pornography, and hate speech?

• Does more reliance on machines to make decisions create new problems like unfair takedowns and lack of transparency?

• Under what circumstances does legal liability for machine-made decisions attach?

• What happens when a government agency (such as under the new GDPR "right to an explanation") requires platforms to disclose an explanation of algorithmic decision-making, when not only is the algorithm proprietary, but the complexity of machine learning may make it impossible even for the platform to know precisely why a particular choice was made, e.g., why certain content was delivered?

Jim Dempsey, Executive Director, Berkeley Center for Law & Technology
Travis Brooks, Group Product Manager - Data Science and Data Product, Yelp (Tutorial)
Glynna Christian, Partner, Orrick
Cass Matthews, Senior Counsel, Jigsaw

Scraping By with the Computer Fraud & Abuse Act
(11:10 a.m. – 12:25 p.m.)

The Computer Fraud & Abuse Act was enacted by Congress in 1986, primarily as a tool to criminally prosecute hackers, in an era before the web and online publishing, when the internet was mostly used by a small universe of academics, government, and military staff. Although the CFAA has been updated by Congress several times, courts have struggled to determine what it means to access a computer "without authorization" in the modern age of universal internet access and porous digital borders. This panel will attempt to make sense of the various, often contradictory, judicial rulings in this area, and debate a better way forward that balances platforms' private property rights in their data with the right of public access to online information. The session will consider:

• Can platforms deny other companies the right to access and process otherwise public information on their sites, and what are the rights of aggregators to gather news and process information by scraping other sites?

• What technical measures, such as password protection, are sufficient to enjoy the protections of the CFAA?

• Reconciling CFAA decisions in Power Ventures and Craigslist v. 3Taps with contradictory rulings in hiQ and others.

• Is there a kind of "public forum doctrine" emerging on private social media, in light of notions of mandatory access and free speech protections arguably extended to privately owned social media platforms in other cases?

• What should a modern update or replacement of the CFAA look like?

Brian Willen, Partner, Wilson Sonsini (Moderator)
Jonathan Blavin, Partner, Munger Tolles
Stacey Brandenburg, Partner, ZwillGen
Jamie Williams, Staff Attorney, Electronic Frontier Foundation
