San Francisco Attorney Magazine

Fall 2024

Art vs. AI


This article was written by a human.
Kathleen Guthrie Woods is a long-time contributor to San Francisco Attorney magazine. Previous articles include “For the Kids: Arguments in Favor of Right to Counsel” (Summer 2019) and “Understanding the Crisis in Our Immigration Courts” (Spring 2015).

Artificial intelligence (AI) is here to stay, and a new sort of spectator sport has emerged as we observe and anticipate the developments in this chapter in the legal canon. Lawsuits, many of which are filed in the Bay Area, are challenging what we think we know about copyrights and trademarks. Journalists and publishers from San Francisco to New York, as well as the Authors Guild, have sued to stop companies from training AI models with their copyrighted content, launching debates about what qualifies as “public data.” Actors have gone on strike in an effort to protect their images and likenesses from being used without authorization, and Scarlett Johansson spoke out when a chatbot was given a voice that sounded eerily similar to her own.

In what sometimes feels like David v. Goliath confrontations, creatives are fighting tech innovators in attempts to protect their original creations, copyrights, images, and livelihoods. At the same time, AI has the potential to do good—a lot of good. It already is streamlining processes, disseminating information, and providing the general public with access to knowledge and inspiration.

As in the past, when new technology is released—as it morphs, is perfected, and expands—the law has to scramble to catch up. In some cases that means interpreting existing laws or developing new ones to ensure ethics, protections, and fairness while encouraging and supporting innovation and growth.

This is a dance we’ve seen before with emerging technologies, and Matthew Dedon, Law Office of Matthew Dedon, offers a historical perspective. “Way back when automobiles were first invented, the United States was pretty horse-focused. When there was a conflict, the law favored the horse.” Laws were enacted to address muzzle-to-hood confrontations, he says. Initially, the car had to move to allow the horse to pass. If that option didn’t work, the car had to be camouflaged so the horse would feel comfortable enough to pass. And if that didn’t work, the driver had to dismantle the car. “The laws were ridiculous. We didn’t know what we were doing,” Dedon says. “But we found balance, and the laws evolved. It’s like this now, but moving faster.”

Like trying-to-jump-on-a-moving-bullet-train faster.

To get a baseline understanding of AI’s impact, we narrowed the scope somewhat to the visual arts and reached out to members of related BASF sections for their thoughts on what’s happening and what we might anticipate. “We’ve got to adapt and find a way that’s fair,” says Dedon. For all parties.

Who owns the rights to creative work?

This is the central question in many of the lawsuits currently in litigation. “Copyright/trademark hasn’t changed, but there’s interest when it’s this new thing, this new tech,” says Lauren Curry*, Chair of BASF’s Art Law Section and Deputy City Attorney, Office of the City Attorney, San Francisco. “How are our existing laws going to handle it? What are other people doing? It’s challenging because there’s nothing to look at,” she says. “We have to pave the way.”

Per the US Copyright Office, “Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression.” Paintings, photographs, illustrations, musical compositions, sound recordings, computer programs, and architectural works are listed as examples. “Original” is further defined as works that are “independently created by a human author and have a minimal degree of creativity…[something] you create…yourself, without copying.”

Only recently, in July of this year, the Copyright Office published the first installment of a report on the impact of AI on copyright law. Part 1 focused on digital replicas, defined as “the use of digital technology to realistically replicate an individual’s voice or appearance,” a concern for actors depicted in films and games. (Check the Copyright Office’s website for additional parts of the report as they are released.)

“The big question is,” says Chris Young, a Partner at Joseph Saveri Law Firm (JSLF), “if what they [defendants] did constitutes Fair Use.” There are four factors to consider:

  1. the purpose and character of the use,
  2. the nature of the copyrighted work,
  3. the amount and substantiality of the portion used, and
  4. the effect of the use upon the potential market for or value of the copyrighted work.

The fourth factor, Young explains, hinges on whether the use was “intended to replace artists in the market.” This debate comes up in cases before the courts today as it relates to the competition between original art and AI-generated art. “Defendants have said their use of training data scraped from the internet qualifies as Fair Use under US copyright law,” says Dedon, who serves as Secretary of BASF’s Art Law Section. “Right now, in the US, [the iteration] is automatically in the public domain if generated in AI. It’s not copyright protectable.”

“Scraping” means taking existing content—text, images, and/or data—and using it to train AI to create something “new.” To find examples in recent litigation, Dedon refers to Doe v. GitHub, Inc. (software coding) and Getty Images (US), Inc. v. Stability AI, Inc. (photographs).

It can be argued that students have done this for all time, as in “a young artist is influenced by great works and ‘stands on the shoulders of giants,’” says Dedon. But for him, scraping “doesn’t pass the smell test.” Today, an artist might charge $500 for an original work of art, but an AI program that has trained on that art might offer something similar—along with access to many other artists’ works—for a $30 subscription fee. “This directly competes with the artist,” he says.

“There was a huge case last year with the Andy Warhol estate (Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith et al.), which [court watchers] thought would decide Fair Use,” says Javier Bastidas, a Senior Associate whose practice focuses mainly on copyright and trademark law at Leland, Parachini, Steinberg, Matzger & Melnick LLP. For one of his iconic silk-screened images, Warhol used a photograph taken by Lynn Goldsmith. Was it copyright infringement, or had Warhol created a new incarnation of the image? “The Supreme Court held in favor of the photographer,” says Bastidas, deeming it a case of “sampling” an existing artwork.

“Fair Use is so messy, cloudy, gray,” says Bastidas, who is the Immediate Past Chair of BASF’s Intellectual Property Section and currently serves on the section’s executive committee. He applauds attorneys for their “creative legal arguments,” including citing the Lanham Act, the 1946 law that protects trade dress. A trade dress infringement happens when the design or packaging of a product is so similar to another that consumers are confused. Young uses Kentucky Fried Chicken’s (KFC) red and white striped design and distinctive font as an example, for it is instantly recognizable. Can the style of an artist be protected? What—if any—ethical lines are crossed when an AI program claims its model can make a layperson’s work look like that of a professional artist?

There are no straightforward answers, and this is part of what makes this evolving area of law so fascinating.

All eyes are on Andersen v. Stability AI

There’s one case concerning “generative AI” that everyone we spoke to says they are watching. In simple terms, generative AI is when a computer “learns” from the data it’s been given and shares what it has learned. Initially, algorithms made it possible for users to find and download data, then the technology evolved into allowing users to create something using the input, and now it’s possible for the model to create works that mimic and compete with human-made works.

In Andersen v. Stability AI Ltd. (a class-action lawsuit filed in January 2023 by JSLF, a BASF member), illustrators and artists Sarah Andersen, Kelly McKernan, and Karla Ortiz claim companies stole their original work to train their AI models and generate remarkably similar and competing art. This was done “without the three Cs: credit, consent, and compensation,” says Young. After the models were tested and output was determined to closely resemble the artists’ original work, Ortiz stated in a July 2023 hearing before the US Senate, “I have never been asked…never been credited…never been compensated.” The plaintiffs are fighting to protect not only their copyrights, but their careers and livelihoods.

In August, the case proceeded to discovery. Court watchers anticipate the decisions in this case will set precedents for future cases and legislation.

Other AI-related cases to watch

If you’re intrigued by the different forms AI takes and how it impacts our lives, here are some potentially precedent-setting cases you might follow:

In Zhang v. Google LLC and Alphabet Inc., another class-action suit filed by JSLF, visual artists claim a tool has been trained to copy “enormous amounts” of their copyrighted work, without authorization. A hearing is scheduled in December in the US District Court, Northern District of California (San Jose).

Young is watching the “music case,” in which members of the Recording Industry Association of America (RIAA) accuse companies of copying and exploiting copyrighted music and sound recordings to train their AI to “spit out” work that is similar—and for sale. But are “musical elements” (rhythm, melody) protected? (See the 2022 decision in Gray v. Hudson.) And if the resulting works are considered satire or “new musical ideas,” do they pass the Fair Use test?

Deepfakes are AI-created pictures or videos in which an identifiable person’s face is digitally attached to unrelated imagery; in the cases at issue here, a woman’s or girl’s face is attached to a nude image. (We’ve also seen deepfakes employed in political campaigns to spread false information and/or smear a person’s reputation.) San Francisco City Attorney David Chiu recently sued sixteen websites to take down deepfakes, block sites from posting the images, and seek civil penalties.

Thomson Reuters claims an AI company copied content from Westlaw, its legal research platform, to train its own platform. Thomson Reuters v. ROSS was filed in May 2020, which Young says makes it one of the first AI cases. The Fair Use factors will be employed to determine whether or not any copyright has been infringed.

Related and pending legislation

Concurrently, legislators are scrambling to anticipate and head off problems as the technology develops, while at the same time not hindering or stifling innovation. It’s an ongoing process, one that began with the Digital Millennium Copyright Act of 1998 (DMCA), Congress’s first effort to update copyright laws for the digital age.

California has long been at the forefront of creating innovative legislation as new technology emerges and evolves, and there are several bills worth following. AB886, the California Journalism Preservation Act (CJPA), if passed (it was amended in the State Senate on July 3), would require online platforms to pay a “journalism usage fee” to publishers. AB2839, Elections: deceptive media in advertisements, would prohibit AI-generated and other digitally manipulated campaign ads—including deepfakes—before and after an election. Watermarks would be required on some AI-generated content if SB942 passes.

Among the most hotly debated is SB1047. Supporters claim the bill’s call for required testing ahead of an AI model’s public release, the creation of a state agency to run those tests, the application of a “kill switch,” and liabilities for “significant injuries” are needed to ensure public safety. Opponents warn the restrictions would negatively impact the growth of “little tech” companies and impose “unnecessary bureaucracy.” The bill passed in the California Assembly and Senate in late August. Governor Newsom has until September 30 to veto or sign it into law.

Leading from the hub for tech—and legal—innovation

Bastidas has been playing with some tools. “It’s pretty amazing what AI can do! And it will get better,” he says. “For some things, AI can be great,” says Curry. She sees AI’s potential for collaborating with humans, for inspiration, for generating “different ideas for exhibits and educational programs, trends, and [information] about audience.”

“Yes, things are scary, as artists are being affected,” says Dedon, yet “balance will be found. I believe the law will adjust to take technology and artists into account.”

As settlements are reached, as legislation is approved, as the technology continues to evolve and expand, the world will be watching. “A lot of eyes are on the US,” says Dedon, “and other countries will follow us.”

A lot of eyes are on the San Francisco Bay Area’s legal community, to see how we will navigate and lead from this hub for tech and legal innovation. How exciting that we get front-row seats.

*The views Ms. Curry expresses here are her own.


In case you missed it

Phil Omorogbe spells out “3 Things Lawyers Should Know About Using AI” in the Summer 2024 issue of San Francisco Attorney.