Media Law Resource Center

Serving the Media Law Community Since 1980

 

2018 Sponsors

Reception Sponsor: Google
Breakfast Sponsor: Microsoft

Additional sponsors: AXIS, Ballard Spahr, CNA, Covington, Davis Wright Tremaine, KT, Sheppard Mullin, WilmerHale, ZwillGen, and Munger Tolles

Curriculum – 2018 Legal Frontiers in Digital Media

 


The conference, a joint production of the Media Law Resource Center and the Berkeley Center for Law & Technology, explores emerging legal issues surrounding digital content in today’s multi-platform world. Our 2018 Digital Conference will be held May 17th & 18th, 2018 at the Mission Bay Conference Center in San Francisco, CA. Sessions will run from 1:00 p.m. on May 17th, with an early evening reception, through 12:30 p.m. on May 18th.

Conference co-chairs:

  • Kelly Craven, Facebook
  • Aaron Schur, Yelp Inc.
  • Brian Willen, Wilson Sonsini Goodrich & Rosati

 

UC Berkeley School of Law certifies that this activity has been approved by the State Bar of California for 7.5 hours of Continuing Legal Education credit (6.25 General Hours, and 1.25 Hours in Recognition and Elimination of Bias). If you are seeking credit for another jurisdiction, please check with your state bar to determine if California CLE credits are recognized, through reciprocity, in your jurisdiction.

 


Thursday, May 17, 2018

Under Pressure: Hosting and Unhosting Objectionable Content
(1:10 p.m. to 2:25 p.m.)

Increasingly, platforms have been under pressure on a number of fronts to take down, moderate and/or stop hosting objectionable groups and content, such as content originating from white supremacists, alleged sex traffickers, terrorist groups and the like. The pressure is coming from political forces seeking legal reforms, such as the recently passed Section 230 exception for sex trafficking (FOSTA) and EU regulations demanding accelerated removals, as well as social and public-relations pressures, e.g., public outrage over Neo-Nazi groups online after the violence in Charlottesville. As a result, platforms are shifting to a more hands-on approach to editorial control, attempting to refine their own values and community standards.

• What role do hosting services like Cloudflare, social media sites, and other platforms have in excluding hate groups, and is there a danger in private companies becoming upstream speech intermediaries?

• If you do allow controversial speech, and can identify participants, do you also allow people to advertise to those participants?

• What are the consequences for platforms of FOSTA's enactment, and is the rest of Section 230 under increasing threat?

• What are the jurisdictional boundaries of removal orders from foreign countries, and what standards should platforms follow in deciding whether to comply with them?

• What pressures are coming from Europe to accelerate the removal of content, and what effect does that have in the U.S.?

Panelists:  
Ari Holtzblatt, Counsel, WilmerHale (Moderator)
Evan Engstrom, Executive Director, Engine
Corynne McSherry, Legal Director, Electronic Frontier Foundation

More speakers TBA


Combatting Internet Disinformation Campaigns
(2:40 p.m. to 3:55 p.m.)

Beginning with a tech tutorial on how fake news is created and distributed in an artificially viral way, this session will cover how bots and fake users are employed to manipulate people, and how advertising tools are employed to target particular users. Whether by foreign governments like Russia, or by fraudsters and other individuals wishing to influence opinion and actions on the internet for their own ends, misinformation campaigns have become an acute problem that social media sites are facing calls to address. Virality online can be good or bad, but how do we distinguish between the good and something that malevolently manipulates and undermines democracy? And how do we respond to bad virality? The discussion will include:

• How can platforms fight back against future campaigns to influence our elections, particularly in light of the limited help from the current administration?

• How fake accounts and bots are used by state-sponsored trolls engaging in public influence campaigns.

• Can oligarchs, foreign and domestic, or even ordinary citizens, use the same techniques perfected by the Russians?

• As a platform, when do you have a legal obligation to act or to report foreign actors manipulating your site?

• Some users don't care that they're receiving fake news, and won't necessarily rely on verification tools even when they're made available. What's the proper response from platforms to this challenge?

Panelists:    
Samir Jain, Partner, Jones Day
Dipayan Ghosh, Fellow, Public Interest Technology, New America 
Nicole Wong, Former Deputy US Chief Technology Officer


Women in Tech: Is Climate Change Coming?
(4:10 p.m. to 5:25 p.m.)

It has been approximately a year since the Uber scandal uncovered a culture of sexual harassment in the tech community. While it has become clear through the #MeToo movement that Silicon Valley is not alone, the tech community also faces a dearth of female founders and executives, which may be contributing to the climate of sexual harassment. Tech lawyers are not immune from harassment and discrimination, but have they also contributed to the problem by negotiating NDAs to silence victims? At the same time, is there a danger of an overreaction to allegations that fails to allow the legal process to run its course? This session will examine the current climate faced by women in tech, and will discuss how the law, and tech lawyers, may fit into this puzzle and help shape the future of women in tech.

Panelists:
Regina Thomas, Associate General Counsel, Oath Inc. (Moderator)
Laura Blum, General Counsel, SurveyMonkey
Ara Katz, Co-Founder & Co-CEO, Seed, Inc.
Connie Loizos, Silicon Valley Editor, TechCrunch


Keynote by Kara Swisher
(5:30 p.m. to 6:00 p.m.)

Kara Swisher, influential technology journalist and co-founder of Recode, will give a keynote speech on the current social and political climate for digital companies. She will tackle a theme that runs throughout our sessions this year: a shift in the attitudes of the public and public officials, who are increasingly expressing a desire that platforms take on more responsibility and serve as a filter to police objectionable content, propaganda, and illegal activity. Are digital platforms' responses meeting the challenges?


Friday, May 18, 2018

Face-Swapping Technology: Dignity, Privacy & the First Amendment
(9:00 a.m. to 9:40 a.m.)

New machine-learning technology is allowing even amateur video editors to conjure videos that convincingly replace people's faces with those of others – frequently unwitting celebrities – to both creative and destructive ends. This digital face-swapping tech has been used for satirical internet videos and, perhaps most famously, to recreate a younger Princess Leia in the Star Wars film Rogue One. In their most provocative form, these AI-generated videos, so-called "deepfakes," have been used to create X-rated content featuring the faces of popular Hollywood actresses grafted onto porn stars' bodies. The videos have already engendered a firestorm that has led to bans on even freewheeling platforms like Reddit and Pornhub. This short presentation will explore whether the law can keep up with this controversial form of speech, and whether a balance can be struck to protect the reputational and privacy interests of unwitting subjects while upholding First Amendment principles.

• Do existing laws governing defamation, privacy, right of publicity, copyright, or the intentional infliction of emotional distress, or anti-revenge porn laws, protect the unwitting subjects of "deepfakes" videos?

• How does the legal analysis change when fake videos are passed off as real? When celebrities are involved?

• Will this technology make it harder to verify audiovisual content, and easier to generate fake news?

Presenter:   
Jim Rosenfeld, Partner, Davis Wright Tremaine LLP


How Algorithms & Machine Learning Work
(9:45 a.m. – 11:00 a.m.)

This session will begin with a tutorial on how algorithms and machine learning work in order to provide lawyers with a better understanding of how these technologies apply to solving real world problems. For example: how does machine learning help a review site spot fake reviews, a social media platform identify misinformation campaigns, or sites identify a banned user trying to rejoin the site under a new identity? Our tutorial will explore the limits of what algorithms and machine learning can and cannot do. The demonstration will be followed by a broader policy discussion, which will explore some of the practical, legal and ethical challenges of using algorithms:

• Since it's almost impossible to run a large network with millions of users without algorithms, how do you strike the right balance between machine learning and human moderators for legal compliance and/or takedowns under company policies, e.g., copyright, pornography, hate speech?

• Does more reliance on machines to make decisions create new problems like unfair takedowns and lack of transparency?

• Under what circumstances does legal liability for machine-made decisions attach?

• What happens when the law (such as the new GDPR "right to an explanation") requires platforms to explain algorithmic decision making when not only is the algorithm proprietary, but the complexity of machine learning may make it impossible for even the platform to know precisely why a particular choice was made, e.g., why certain content was delivered?

Panelists:
Jim Dempsey, Executive Director, Berkeley Center for Law & Technology
Travis Brooks, Group Product Manager - Data Science and Data Product, Yelp (Tutorial)
Cass Matthews, Senior Counsel, Jigsaw
Sara Solow, Senior Associate, Hogan Lovells




Scraping By with the Computer Fraud & Abuse Act
(11:10 a.m. – 12:25 p.m.)

The Computer Fraud & Abuse Act was enacted by Congress in 1986, primarily as a tool to criminally prosecute hackers, in an era before the web and online publishing, when the internet was mostly used by a small universe of academics, government and military staff. Although the CFAA has been updated by Congress several times, courts have struggled to determine what it means to access a computer "without authorization" in the modern age of universal internet access and porous digital borders. This panel will attempt to make sense of the various, often contradictory, judicial rulings in this area, and debate a better way forward that balances platforms' private property rights in their data with the public's right of access to online information. The session will consider:

• Can platforms deny other companies the right to access and process otherwise public information on their sites, and what are the rights of aggregators to gather news and process information by scraping other sites?

• What technical measures, such as password protection, are sufficient to enjoy the protections of the CFAA?

• Reconciling CFAA decisions in Power Ventures and Craigslist v. 3Taps with contradictory rulings in hiQ and others.

• Is there a kind of "public forum doctrine" emerging on private social media, in light of notions of mandatory access and free speech protections arguably extended to privately-owned social media platforms in other cases?

• What should a modern update or replacement of the CFAA look like?

Panelists:   
Brian Willen, Partner, Wilson Sonsini (Moderator)
Jonathan Blavin, Partner, Munger Tolles
Stacey Brandenburg, Partner, ZwillGen
Jamie Williams, Staff Attorney, Electronic Frontier Foundation
