
Thrilled to Bits: MLRC Digital Conference A Resounding Success

By George Freeman
PUBLISHED IN: MediaLawLetter May 2023
Legal Frontiers in Digital Media was held May 18 at the Mission Bay Conference Center in San Francisco.

I am flying home from our Digital Conference in San Francisco and decided to write about it because it was such an awesome success. We had about 150 attendees — roughly a pre-pandemic crowd — and everyone I spoke to felt it was truly engaging and worthwhile.

Three broad takeaways: First, the MLRC seems to have an uncanny knack for scheduling events on days of big news. At last fall’s London Conference, for example, we had to totally rejigger our program because the first day of the Conference fell on the day of Queen Elizabeth’s funeral, and our Conference site, like all of London, was closed. Here, just hours before the start of our Conference, the Supreme Court handed down three decisions in cases very relevant to our audience and our program. The online content liability cases involving Twitter and Google were already scheduled to be discussed during our first panel, but with the decisions coming down that morning, our speakers had only a couple of hours to read the new opinions and give their hot takes to the audience. The moderator, Jennifer Dukarski, in her opening remarks, analogized these circumstances to our 2016 Forum event before our annual dinner, which had been scheduled for the day after the presidential election (and at which a panel was expected to provide media criticism in the context of a very different candidate winning the election). She recalled someone throwing out a stack of pre-planned notes and saying, “OK, we’re going to change things up a little bit.” And that’s what our panel proceeded to do, with the biggest takeaway being that the Court’s decisions were basically a punt on Section 230, resting instead on a narrower question under a federal anti-terrorism statute.

Content Moderation: Free Speech Principles and the Law

The Andy Warhol case was not originally scheduled to be discussed, so we pivoted and added a 10-minute session at the end of the day, in which Jim Rosenfeld summarized and analyzed it. Good thing we were in the Pacific time zone, giving our speakers extra time to read and digest the Court’s opinions.

Second, this is really a very high-level conference for lawyers who grapple with these issues every day. In MLRC nomenclature, if our Zoom calls are 101s and 201s, this conference is a 401, aimed at true experts on legal issues impacting digital companies. And the speakers communicated at that level as well. Kudos to Michael Norwick and Jeff Hermes, who invited experienced practitioners at the top of their field, all of whom were good and provocative presenters as well. Not one weak link among them.

Legal Issues with Generative AI Models

Finally, this was the first time that this conference was held in just one day. In the past, it had been scheduled over two half-days (usually, a Thursday afternoon and a Friday morning). This year it went from 10:00 a.m. to 6:30 p.m. (including a wine and beer reception, sponsored by Google), all on Thursday. While it was a bit exhausting, attendance was solid through all the sessions, and most attendees seemed to favor this new condensed format. They also seemed to prefer our site in the City to our former venue in Silicon Valley.

As previously discussed, the Conference’s first session, “Big Tech at the Supreme Court,” on the Gonzalez v. Google and Twitter v. Taamneh cases, gave the panel a unique opportunity to offer first impressions on decisions released just hours earlier (rather than the more speculative analysis of themes raised during oral argument that had originally been planned). The panel also engaged in an interesting discussion of the two conflicting NetChoice cases out of the 5th and 11th Circuits, which SCOTUS might well take up in the next few weeks. Hard to believe the Court would rule that states can constitutionally require private social media platforms to carry user content they object to, but these days, who knows?

Digital Sovereignty and Global Impact

The second program was on content moderation. Julie Owono, of Stanford, Internet Sans Frontières, and the Meta Oversight Board, discussed the Lex Platformia, an emergent body of principles on content moderation reliant upon a multi-stakeholder approach. Attorney Cathy Gellis argued for giving platform decisions the full benefit of the First Amendment, asserting that it was hard to draw a line between platforms and traditional media outlets. Prof. Eugene Volokh, Zooming in (with some technical difficulty), put social media platforms somewhere between unregulated newspapers and magazines, on the one hand, and phone companies and shopping malls, on the other, where some government oversight has been found permissible. He also argued that not all functions and decisions of platforms are necessarily the same from a constitutional perspective. He didn’t go so far as to say digital platforms were common carriers, but he didn’t foreclose that argument either. And Prof. Alan Rozenshtein argued that it was necessary to step back from traditional First Amendment modes of analysis, because it was not clear that those approaches were suited to addressing the interests of speakers and listeners on social media.

The lunch break provided attendees with the opportunity to try out some AI tools, but the delicious hot meal, sponsored by Microsoft, and the glee with which members schmoozed seemed to take priority. In addition, we provided large round tables for Next Generation members to commune, and those tables filled up rapidly with our more junior members.

It’s not Cricket: George Freeman, far left, explaining some of the finer points of American culture to Jens van den Brink (to his right) and two of his Dutch colleagues at a Giants’ game at beautiful Oracle Park the day before the Conference.

After lunch, we started with a program on emerging legal issues arising from AI. Like the other panels, this one had an all-star lineup: the General Counsel of Midjourney, the Deputy GC of OpenAI, and Krishna Sood, Assistant GC of Microsoft (who is also one of our 2023 conference co-chairs), all of whom spoke very cogently about the legal problems likely to arise from AI content down the road, particularly with respect to copyright law. Issues discussed included whether the process of “training” AI models, or the AI creations themselves, violates copyright. Another issue is who owns the copyright to creations generated solely by AI programs. At this early date, there are no answers, but it was a provocative discussion as to what the future holds.

Next was our alphabet soup program, a summary of all the new EU tech laws, as well as some new regulatory initiatives that haven’t yet come to fruition. Ably taking us through this maze of legislation, including the Digital Services Act, the Digital Markets Act, and the European Media Freedom Act, was Remy Chavannes from Amsterdam.

After a well-deserved break on a long day came an excellent program on the challenges digital platforms face in operating and moderating content in countries around the world, both in the EU and the third world, which (unlike the U.S.) don’t have a First Amendment. This too was a sophisticated discussion, moderated by Steve Crown, a VP at Microsoft, and featuring David Kaye, the former UN Special Rapporteur on freedom of opinion and expression, Daphne Keller of the Stanford Cyber Policy Center, Chloe Poynton, founder of Article One, and our good friend and ubiquitous colleague Mark Stephens. Among the thorniest questions discussed were who is a legitimate source of authority in a foreign country that can “order” the removal of content, whether a country has a truly independent judiciary in which such orders can be challenged, and what the consequences are of complying or not complying with a removal order, or even of exiting the country entirely. Even where removal orders appear to be unlawful, the panel noted that local presence laws in certain countries can subject employees to arrest and detention unless the orders of government officials are followed.

Our last (scheduled) session was on tracking technologies that expose online publishers to class action privacy litigation based upon arguably outdated laws, like the Video Privacy Protection Act, enacted in the late 1980s in reaction to the publication of the video rental history of Supreme Court nominee Robert Bork. But before the audience could relax with wine and beer, we added a short additional session on the day’s Andy Warhol decision by the Supreme Court. Putting aside my wonderment at the simple (but, I would submit, correct) proposition that transforming a portrait photograph into a Warhol is almost by definition transformative, reviewing this controversial decision was a great way to end a spectacular Conference.

The author would like to thank Michael Norwick and Jeff Hermes for making significant additions to this column. The opinions expressed are those of the author and not the MLRC. We welcome responses at gfreeman@medialaw.org; they may be printed in next month’s MediaLawLetter.

George Freeman is MLRC’s executive director.