
AI’s Legal Wake-Up Call: Lessons from 2025, Actions for 2026

If 2023 and 2024 were the years AI captured global imagination, 2025 was the year legal accountability took centre stage. Regulators, courts and enforcement bodies across the world moved from debate to action, introducing new obligations, launching enforcement sweeps and issuing landmark rulings. Founders discovered–often abruptly–that AI was no longer simply a product and growth lever; it was a legal and commercial risk category in its own right.

By the end of 2025, one thing was clear: building AI without a legal strategy is no longer optional. The companies that weathered the year most effectively were those that treated legal readiness as a competitive advantage, not a brake on innovation.

Below is a recap of the biggest shifts of 2025, what they mean in practice, and what founders should take into 2026. We’ll discuss:

  • Key Legal Developments Shaping AI 
  • Lessons for Building AI That Scales
  • The AI Playbook for 2026

Key Legal Developments Shaping AI 

In 2025, global regulators began to converge on meaningful oversight of AI, turning years of debate into concrete obligations.

Europe: The EU AI Act Goes Live

Europe provided the most structured expression of this shift when the EU AI Act began its phased rollout in 2025. Bans on “unacceptable-risk” systems took effect in February, and the first obligations for general-purpose AI (GPAI) models followed in August. For many startups, this meant compliance moved from a future concern to a product requirement, with pressure from customers, investors and regulators to prepare governance, documentation and post-market monitoring ahead of 2026.

The Act forced founders operating in or serving Europe to confront compliance questions far earlier in the product development lifecycle. Choices that once felt purely technical, such as data provenance, model explainability and risk testing, became legal and regulatory questions. Product roadmaps now had to account for governance structures, human oversight and documentation from day one, not bolted on after launch.

United States: The Enforcement Era Arrives

Although the US still lacks a comprehensive federal AI law, 2025 proved that regulators could act without one. The Federal Trade Commission intensified its focus on AI, treating inflated claims and deceptive practices as high-priority enforcement issues. State privacy regimes and employment law guidance added another layer, making it clear that US compliance would be multi-agency rather than a single statute.

Companies discovered that promises about accuracy, fairness or automation were no longer marketing fluff; they were representations regulators would scrutinise like product claims in any other regulated market. The FTC’s enforcement sweeps under Operation AI Comply signalled that “AI-washing” had become a clear regulatory trigger.

US regulators also began asking more pointed questions about training data: where it came from, whether companies had rights to use it, and how fine-tuning or reinforcement learning was conducted. Founders deploying AI into enterprise environments increasingly needed to articulate a clear, defensible data lineage.

United Kingdom: Regulation Through Enforcement, Not Legislation

While the UK continued to position itself as pro-innovation, 2025 showed that this did not mean hands-off. Rather than passing a single AI statute, the UK doubled down on its sector-led regulatory model, empowering existing regulators to police AI under current laws. That model proved increasingly assertive in practice.

The Competition and Markets Authority (CMA) led the charge. Its investigations into foundation models became some of the year’s most closely watched regulatory activity, scrutinising market concentration, safety testing, transparency and the competitive dynamics of model access. 

Alongside the CMA, the Information Commissioner’s Office (ICO) confirmed that AI systems involving personal data fall within its jurisdiction. In 2025, it opened multiple investigations into unlawful data scraping, insufficient lawful basis for training models, and AI-driven profiling without proper assessments. The message was simple: an AI Act is not required to regulate AI – GDPR already applies.

The courts also played a pivotal role. In Getty Images v Stability AI, the High Court dismissed most copyright arguments related to model training but allowed trademark claims to proceed. The mixed outcome signalled the UK’s emerging judicial stance: not every challenge to AI training will succeed, but brand and consumer protection claims remain viable.

In practice, founders learned that the UK’s flexible, regulator-driven model could be just as demanding as the EU’s rulebook, just enforced differently.

Australia: Privacy Reform and Early Enforcement Signals

Australia took a pragmatic approach in 2025. Rather than adopting an EU-style AI Act, the government focused on targeted regulation under existing consumer, privacy and competition laws, supplemented by updated guidance and ongoing Privacy Act reform.

A key development was the government’s work on a National AI Safety Framework, which began to shape expectations for organisations deploying high-impact AI. Although not legally binding, it set concrete expectations around testing, transparency, oversight and accountability for systems used in employment decisions, credit assessments, biometric verification and government service delivery. Founders operating in these domains discovered that this guidance quickly became the de facto standard for public-sector procurement.

Building on these expectations, Privacy Act reform accelerated in 2025. Throughout the year, the government advanced proposals to bring automated decision-making within the Privacy Act, strengthen transparency requirements, and expand individuals’ rights to understand and challenge AI-driven outcomes. For founders, the message was clear: in Australia, AI governance is being enforced primarily through privacy and consumer protection regimes.

The ACCC increased scrutiny of deceptive or misleading AI claims and emerging competition issues in the foundation model market. Companies discovered that overstated marketing claims could trigger the same scrutiny as any other misleading product claim under consumer law. Enthusiasm without evidence became a consumer-law risk, with regulators expecting claims to be backed by testing and real-world performance.

2025 confirmed that Australia’s approach favours practical enforcement over sweeping legislation, but that does not make it light-touch. For founders expanding into APAC, Australia is a jurisdiction where regulators expect you to show your work, from data practices and testing to the evidence behind marketing claims.

Global Courts: Training Data in the Spotlight 

Courts across multiple jurisdictions became a focal point in 2025, reshaping expectations about how AI systems are trained and commercialised. The year did not deliver a universal answer on training AI on copyrighted works, but it produced rulings that materially increased legal and commercial risk for developers.

A Munich regional court held in November 2025 that training an AI model on copyrighted song lyrics without permission infringed copyright and awarded damages. In the UK, Getty Images v Stability AI produced a more nuanced outcome, rejecting most copyright-based arguments but permitting trademark claims. Meanwhile, new suits continue to emerge in the US, with filings extending into 2026.

Alongside litigation, industry behaviour has shifted. Universal Music Group’s 2025 licensing deal with Udio signalled a growing willingness to replace lawsuits with structured, compensated training-data arrangements. As a result, many now see the market shifting toward licensed training data, rather than unrestricted scraping.

For founders, the direction of travel is clear: demonstrable provenance of training data is becoming both a legal safeguard and a commercial differentiator.

Lessons for Building AI That Scales

If there was one takeaway from 2025, it was this: AI only scales when teams can demonstrate technical, legal and operational discipline.

Training Data Became a Legal Story, Not a Technical One

Throughout 2025, investors and enterprise customers began asking more direct questions about training data. Founders learned that if they couldn’t explain what their models were trained on–or demonstrate rights to the data–they risked losing deals and inviting litigation. Teams with even a simple data-transparency record were often far ahead of competitors.

Using Third-Party Models Did Not Absolve Responsibility

Many founders believed that if they used OpenAI, Anthropic or Google models, the legal risks sat with the vendor. 2025 showed that assumption was unsafe. Regulators and customers cared about how the model was deployed: what data was fed in, what safeguards were implemented, how users were informed, and how organisations responded to harmful outputs. Deployment choices became as scrutinised as model choices.

Marketing Claims Became Compliance Risks

2025 was the year companies learned that bold AI claims could become legal liabilities. The safest companies were those that described what their AI could realistically do while acknowledging limitations with clarity.

High-Risk Use Cases Pulled Legal Into Product Design

AI systems used in hiring, credit scoring, insurance, health assessment or identity verification drew heightened scrutiny. Founders discovered that involving legal at the design stage could save significant pain later. A small number of early design decisions often determined whether a product could be sold into regulated sectors.

Enterprise Buyers Raised the Bar

Perhaps the most under-appreciated shift of 2025 was how enterprise procurement evolved. Clients increasingly added AI-governance questions to their due diligence process, asking about training data, safeguards, oversight and explainability. Founders who could not answer these questions faced delays or lost deals, even when the product itself was strong.

In many sectors, particularly heavily regulated industries such as healthcare, AI legal readiness became a go-to-market advantage rather than a compliance burden.

The AI Playbook for 2026

Startups don’t need heavy processes–but they do need structure. The teams that thrived in 2025 succeeded because they put simple, repeatable disciplines in place; those same moves will define who scales in 2026.

Own Your Data Lineage Before Anyone Asks

2025 was the year that vague answers about training data stopped being acceptable. Regulators in the EU and UK demanded clarity about training sources, the FTC questioned companies on data rights during enforcement sweeps, and enterprise clients began adding provenance questions to RFPs. 

The teams that succeeded were not the ones producing exhaustive binders, but those able to clearly explain where their training data came from, why they had the rights to use it, and how fine-tuning was handled. In 2026, a short, credible data narrative will be a baseline expectation for buyers, partners, and regulators.

Map AI Use Across Your Products Now

Another key shift in 2025 was the rise of AI-specific due diligence questionnaires from enterprise buyers. These questionnaires asked companies to identify exactly where AI was used, what decisions it influenced, what data it touched, and what could go wrong. 

In 2026, speed in AI diligence will be a competitive advantage. A simple internal inventory–use case, data inputs, risk level and safeguards–often proves more useful than a stack of generic policies.

Substantiate Every Marketing Claim

In 2025, AI marketing moved squarely into consumer-protection law. Regulators began applying existing false-advertising and deceptive-practice rules to AI claims, particularly those related to accuracy, autonomy and fairness. The UK’s CMA and Australia’s ACCC, along with US regulators, signalled that unsupported capability statements could trigger the same scrutiny as any other product claim. Companies that grounded claims in testing and real-world performance closed deals faster and avoided regulator attention.

In 2026, the safest approach is simple: describe what your AI actually does, acknowledge its limits, and retain evidence to substantiate the claim.

Make Governance Lightweight and Visible

Governance became a competitive advantage rather than a compliance burden in 2025. Effective teams implemented simple mechanisms: a clear approval step before launching AI features, lightweight testing documentation, and a named owner responsible for AI risk. These measures satisfied regulatory expectations without slowing development and signalled operational maturity to customers. Governance began to function less like bureaucracy and more like credibility infrastructure.

In 2026, the goal is not perfect governance, but governance buyers and regulators can understand in five minutes.

Modernise Contracts Before the Deal Forces You

Founders discovered that traditional commercial agreements were not drafted for AI realities: rights to training data, responsibilities for outputs, IP ownership of fine-tuned models, and safeguards around misuse. These issues now arise across customer terms, vendor agreements, and partnership and data-sharing arrangements, often late in the process, when leverage is lowest.

In 2026, contracts that still assume traditional software will slow deals; contracts designed for AI will accelerate them. 

Have Your AI Answers Ready for Due Diligence 

AI due diligence is now standard in enterprise sales, fundraising rounds and partnership negotiations. Investors and customers increasingly ask about model oversight, safety testing, training data provenance and governance structures. Teams that wait for these questions end up in reactive, last-minute scrambles that slow deals and complicate funding rounds. The companies moving fastest already have clear answers: a short governance summary, a defensible data lineage explanation, a map of AI usage across their products, and a set of contracts that address AI risk and responsibilities.

In 2026, a well-organised AI diligence pack becomes a go-to-market asset, not a back-office exercise. Preparing early shortens evaluation timelines and builds credibility before legal review begins.

Conclusion

2025 reshaped not only how AI systems are built, but how organisations that rely on AI are evaluated. Legal readiness moved from a defensive exercise to a commercial requirement. The organisations best positioned for 2026 will not be those with the most advanced models, but those that can clearly explain their training data, justify design choices, and support safeguards with evidence.

AI is now embedded in real-world decision-making, and scrutiny will only increase. The organisations that scale successfully will be those that treat legal, technical and operational readiness as part of the product–not an afterthought. For teams building, deploying or integrating AI, the practical question is no longer whether these issues will arise, but when. Preparing early allows organisations to address them deliberately rather than under deal pressure. In short, early preparation turns legal review from a blocker into an accelerator.

At Biztech Lawyers, we advise organisations building, deploying and integrating AI to align legal strategy with product and operational realities. If these issues are relevant to your team, we’re happy to discuss how they apply in your specific context. 

Sharon Taraska
