The rules governing who owns a photograph, who can train an AI on it, and where you can fly a drone to capture it are all being rewritten simultaneously. In courtrooms and regulatory dockets, five separate legal confrontations are converging on a question that matters to every working photographer: in an age of generative AI and autonomous aircraft, who actually controls the value of an image?

What follows is a photographer-focused breakdown of the cases and regulations most likely to change how you shoot, edit, license, and protect your work this year and next. 

The information in this article is provided for general informational purposes only and does not constitute legal advice. Consult a qualified attorney for guidance on how these developments may affect your specific situation.

The Artists Take Their Case to Court

The lawsuit most likely to produce a major early test of AI training and copyright is Andersen v. Stability AI (Case No. 3:23-cv-00201-WHO), filed in January 2023 in the Northern District of California by three visual artists: Sarah Andersen, Karla Ortiz, and Kelly McKernan. The case has since expanded to include additional plaintiffs and now names four defendants: Stability AI, Midjourney, DeviantArt, and Runway AI. The core allegation is that all four companies copied billions of copyrighted images scraped into the LAION-5B dataset to train their image generators without permission.

Judge William H. Orrick’s August 2024 ruling on motions to dismiss the First Amended Complaint allowed the most important claims to proceed. Direct copyright infringement survived on two theories. The first is the “training theory,” which holds that copying images into a dataset is itself an act of reproduction. The second, and more technically complex, is the “model theory,” which argues that the AI models contain compressed copies of the copyrighted works they ingested. The court was persuaded in part by Stability AI CEO Emad Mostaque’s own description of the technology: that the company had compressed 100,000 gigabytes of images into a two-gigabyte file that could recreate any of them. Induced infringement claims also survived, meaning the companies could be liable not just for their own copying but for enabling users to generate infringing outputs. The DMCA metadata claims and DeviantArt’s breach of contract claim were dismissed with prejudice, meaning they cannot be refiled. As the case moves forward, substantial similarity between the outputs and specific training images is emerging as a key contested issue, particularly for the model theory.
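The scale of that compression claim is easy to check with back-of-envelope arithmetic. The sketch below uses the figures Mostaque quoted, plus LAION-5B’s commonly cited scale of roughly five billion image-text pairs (background context, not a figure from the ruling), to show why the “compressed copies” framing is so hotly contested:

```python
# Figures from Mostaque's description of the technology, as quoted in the ruling.
training_data_gb = 100_000   # ~100,000 GB of training images
model_size_gb = 2            # ~2 GB model file

# The claimed compression ratio: roughly 50,000 to 1.
print(training_data_gb / model_size_gb)          # 50000.0

# Assuming LAION-5B's ~5 billion image-text pairs (an assumption for
# illustration, not a figure from the opinion), the per-image "budget"
# inside the model would be well under one byte.
images = 5_000_000_000
print(model_size_gb * 1_000_000_000 / images)    # 0.4 bytes per image
```

Whether anything recognizable as a copy of a specific photograph can survive compression at that ratio is precisely what the substantial-similarity fight over the model theory will turn on.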

Since then, the case has moved deep into discovery. Magistrate Judge Lisa J. Cisneros resolved disputes over electronic discovery protocols in March 2025, and by October 2025 the parties reported substantial completion of initial document production. Fights over access to source code and training data beyond LAION remain ongoing. No class has been certified yet, and the proposed class, all U.S. copyright holders whose works were used to train any version of the defendants’ tools, is ambitious. In early February 2026, Judge Orrick granted a request to push back the case schedule by roughly three months. The summary judgment hearing is now set for February 17, 2027, which means the case has slipped to the back of the line among major AI copyright suits heading toward a fair use ruling. Concord Music v. Anthropic and other cases may reach summary judgment first.

For photographers, the stakes are significant but contingent. If the court ultimately finds that copying images into AI training datasets is not protected by fair use, it would establish an important precedent. But whether individual photographers whose work appeared in LAION-5B could pursue claims would depend on class certification, proof of ownership and copying, copyright registration status, and whatever defenses survive. This case is a bellwether, not a guarantee.

Getty Won the Trademark Battle, Then Lost the UK War

Getty Images brought what should have been the photography industry’s strongest case against AI scraping, and the result was a near-total defeat. The UK trial of Getty Images v. Stability AI began on June 9, 2025, before Mrs. Justice Joanna Smith. Getty alleged that Stability AI scraped roughly 12.3 million of its visual assets through the LAION-5B dataset. The November 4, 2025 judgment, spanning over 200 pages, was a decisive loss for Getty.

The case fell apart in stages. Getty abandoned its primary copyright infringement claim mid-trial after failing to show that the actual model training happened on servers within UK jurisdiction. It also dropped its database rights claim and its claim that the model’s outputs infringed copyright. That left only secondary infringement under Sections 22 and 23 of the Copyright, Designs and Patents Act 1988 (CDPA), the argument that making the Stable Diffusion model available for download amounted to distributing an infringing copy. Justice Smith rejected this theory, finding that the AI model’s weights encode learned mathematical patterns, not reproductions of specific copyrighted images, and therefore the model is not an “infringing copy” for these purposes. It is worth noting what the court did not decide: because Getty abandoned its primary infringement claim, the judgment contains no merits holding on whether AI training itself constitutes infringement under UK law. That question remains open.

Getty did win on trademark infringement. Early versions of Stable Diffusion occasionally generated outputs containing Getty’s watermark. But the court found this was extremely limited and historic, affecting just 0.15% of the prompts analyzed, and the problem did not persist in newer versions. Getty faced a substantial adverse costs order, and in December 2025, Justice Smith granted permission to appeal the secondary copyright ruling, calling it a novel and important question with potentially far-reaching ramifications. That appeal is expected to be heard in late 2026 or early 2027.

Meanwhile, Getty refiled in the United States in August 2025, voluntarily dismissing its stalled Delaware case and starting fresh in the Northern District of California (Case No. 3:25-cv-06891). The new complaint lists 7,216 copyrighted images and advances theories including that mass AI-generated content “hollows out” the value of the original licensed work. In the earlier proceedings, Getty had sought roughly $1.7 billion in damages. Stability AI filed a motion to dismiss in October 2025 that remains pending. The US case, where fair use will be the central legal battleground, remains the more promising path for photographers hoping for a favorable precedent.

Hollywood Shows the Courts What AI Infringement Looks Like

Where the artist-led lawsuits require complex technical arguments about training data and compressed representations, the entertainment industry’s case against Midjourney makes a much simpler argument with much more visceral evidence. Disney, Universal, Lucasfilm, Marvel, DreamWorks, and Twentieth Century Fox filed suit on June 11, 2025 in the Central District of California (Case No. 2:25-cv-05275). Warner Bros., DC Comics, Hanna-Barbera, and Cartoon Network followed with a related action in September 2025, and the two cases were consolidated in November.

The studios’ 110-page complaint paints Midjourney as a virtual vending machine for copyrighted characters. The exhibits are devastating: type “Yoda” and you get Yoda. Type “animated toys” and you get what are unmistakably Pixar characters. The complaint catalogs over 150 copyrighted characters that Midjourney reproduces from simple prompts, and as one Georgetown Law analysis noted, the reproductions make the question of infringement practically answer itself. This differs fundamentally from asking a judge to understand what a latent diffusion model does to training data. This is side-by-side comparison evidence that any juror can evaluate at a glance.

Midjourney chose not to file a motion to dismiss, instead going straight to a 43-page answer on August 6, 2025. The answer leans heavily on fair use but also invokes the First Amendment, arguing that “the limited monopoly granted by copyright must give way to fair use, which safeguards countervailing public interests in the free flow of ideas and information” and framing the platform as an instrument for user expression whose suppression would chill lawful speech. Another provocative defense is “unclean hands,” the argument that Disney and Universal themselves use generative AI internally, so they cannot credibly accuse Midjourney of wrongdoing for the same practices. Legal commentators have been skeptical, noting that a studio using AI tools internally is legally distinct from Midjourney selling a product that reproduces copyrighted characters to millions of subscribers. The case is in early discovery, with court-ordered mediation due by August 2026 and no trial date set. For photographers who license character-adjacent or editorial content, the outcome will shape whether AI companies can freely ingest copyrighted visual material or must negotiate access.

The Supreme Court Weighs Whether AI Can Be an Author

While the training and output cases argue over what AI companies did with existing photographs, Thaler v. Perlmutter asks a more fundamental question: can AI itself be a copyright author? The answer matters to every photographer who uses AI tools, because it defines where the line falls between a tool that helps you create and a machine that creates on its own.

Stephen Thaler sought to register a copyright for a visual artwork he says was created autonomously by his AI system, listing the AI as the author. The Copyright Office denied registration. The D.C. District Court upheld that denial in August 2023, and the D.C. Circuit affirmed unanimously on March 18, 2025, with Judge Patricia Millett writing that the Copyright Act’s provisions on duration, inheritance, and nationality all presuppose a human author. But the court deliberately left the door open, writing that the human authorship requirement does not prohibit copyrighting work made by or with the assistance of artificial intelligence.

Thaler petitioned the Supreme Court in October 2025 (Petition No. 25-449). The Court requested a government response, and the DOJ filed a brief in January 2026 recommending denial, arguing the case is a poor vehicle because Thaler deliberately disclaimed any human creative involvement. The petition has been distributed for conference, meaning we should know by early March 2026 whether the Court will take the case. Most observers expect it to decline. But the case has already accomplished something significant: it forced the Copyright Office to articulate exactly where AI assistance ends and AI authorship begins. The Office’s January 2025 Copyrightability Report confirms that AI-assisted work remains copyrightable when a human exercises creative judgment in selecting, arranging, or modifying the output. Your Lightroom AI masking, your camera’s AI autofocus, your Photoshop generative fill with substantial human editing on top: all of that is squarely within the zone of protectable work.

The real gray zone will be resolved not by Thaler but by Allen v. Perlmutter in Colorado, where Jason Allen used over 600 Midjourney prompts plus extensive Photoshop editing to create the award-winning “Théâtre D’opéra Spatial.” Allen argues those prompts constituted specific creative instructions analogous to a director’s instructions to a camera crew, a more refined legal theory than simply claiming prompt engineering is authorship. The Copyright Office denied his registration and argues that prompts are ideas, not authorship. Cross-motions for summary judgment were filed in January 2026. That ruling, when it comes, will draw the practical line that determines whether AI-assisted photography retains full copyright protection.

New Drone Rules Could Transform Aerial Photography, or Ground Your Fleet

The fifth legal battle is not a lawsuit but a regulation, and for drone photographers it may be the most immediately consequential of all. The FAA’s proposed Part 108 rule would replace the current waiver-by-waiver system for Beyond Visual Line of Sight operations with a standardized framework that could unlock extended-range landscape surveys, autonomous mapping missions, and large-area real estate coverage. The Notice of Proposed Rulemaking was published on August 7, 2025, following a presidential executive order directing the FAA to finalize the rule within 240 days.

The original comment period drew thousands of submissions, and the FAA reopened a narrow 14-day comment window in late January 2026 focused on electronic conspicuity requirements. No final rule has been published, and the original 240-day timeline has slipped. The executive order’s February 1, 2026 target was effectively frozen during the 43-day government shutdown (October 1 through November 12, 2025), during which rulemaking was not considered essential activity. Adding those lost days pushes the target to March 16, 2026, though meeting even that date remains uncertain given the volume of comments the FAA must review. Photography and videography would fall under Part 108’s “aerial surveying” category, and photographers holding an Operating Permit could fly BVLOS missions under authorizations valid for 24 months. But the details of the final operational framework, including fleet size limits and population-density categories, will depend on the final rule text.
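The shutdown-adjusted deadline is simple date arithmetic; a quick sketch (using the dates described above) confirms the March 16 figure:

```python
from datetime import date, timedelta

# The executive order's original target, as described in the proposal's context.
original_target = date(2026, 2, 1)

# Government shutdown froze rulemaking: Oct 1 through Nov 12, 2025.
shutdown_start = date(2025, 10, 1)
shutdown_end = date(2025, 11, 12)
frozen_days = (shutdown_end - shutdown_start).days + 1  # inclusive of both endpoints

adjusted_target = original_target + timedelta(days=frozen_days)
print(frozen_days)      # 43
print(adjusted_target)  # 2026-03-16
```

Of course, nothing obliges the FAA to hit the adjusted date either; the calculation only shows where the frozen clock lands.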

The larger concern for most working drone photographers is Section 108.700 of the proposed rule, which limits airworthiness acceptance to drones manufactured in the US or countries with bilateral UAS airworthiness agreements. No such agreements currently exist for unmanned aircraft. That effectively bars DJI, Autel, and every other foreign-manufactured drone from Part 108 operations. This compounds the FCC’s December 2025 update to its Covered List, which added foreign-manufactured UAS and UAS critical components. That action complicates FCC equipment authorization for new covered models and components, with downstream effects on availability and import pipelines. Existing DJI drones remain legal to fly under Part 107, and previously authorized models can still be purchased from remaining inventory. But the pipeline of new products faces serious uncertainty, and long-term support for firmware updates, repair parts, and Remote ID compliance patches is no longer guaranteed.

Standard Part 107 work continues unaffected with your current gear. But the BVLOS future that Part 108 envisions appears to be designed around a domestic drone manufacturing ecosystem that does not yet exist at the scale photographers need.

What to Do Right Now

These five fronts are converging on a compressed timeline. The Andersen summary judgment hearing in February 2027 will test whether AI training on copyrighted images is protected by fair use. The Getty UK appeal and the US refiling will determine whether that question gets a different answer in different jurisdictions. Disney’s case will show whether output-focused infringement claims can succeed where training-focused arguments face technical hurdles. The Thaler petition and Allen ruling will define the boundary between AI-assisted and AI-generated work. And Part 108’s final rule will determine who gets to fly BVLOS and with what equipment.

Three things you can do today. First, document your creative process when using AI editing tools, because that record of human judgment is what preserves your copyright claim. Second, stock up on spare parts and batteries for DJI equipment before supply chains constrict further. Third, keep an eye on Allen v. Perlmutter in Colorado, because that ruling will affect your rights and your livelihood more directly than anything else on this list.