
Legal AI Talk: Latest Large Language Model Trends

Legal AI Talk 22.01.2025
AI and Large Language Models (LLMs) are revolutionizing the legal industry, streamlining contract analysis, document automation, and research with greater precision. However, challenges like transparency, bias, and ethics must be addressed to ensure responsible and effective integration into legal practice. 
 

In the Legal AI Talk on the latest large language model trends, held on January 22, 2025, the experts - Dr. Maria Börner, Head of the AI Competence Center at Westernacher Solutions; Dr. Nils Feuerhelm, Legal Engineer & Legal AI Consultant; Gordian Berger, CTO of Legartis; and David Alain Bloch, CEO & Co-Founder of Legartis - discussed the development and trends of Large Language Models (LLMs) and their implications for the legal industry.

What are the latest developments and trends in LLMs?

Gordian: In 2024, AI base models improved incrementally, but there were no major breakthroughs compared to previous years. While users noticed some enhancements in models like ChatGPT, the most significant change was the ability to achieve the same benchmark results with fewer computational resources. This trend led to reduced costs and more viable business applications. Although AI models have become more efficient, a contradictory trend emerged in which highly resource-intensive models, such as OpenAI's o3, demonstrated significantly better performance. Due to their high costs, these models are not yet practical for widespread use, but they indicate the potential for future advancements.

The European sector has begun developing its own AI models, such as OpenGPT-X. With increasing concerns over data sovereignty, the demand for European alternatives to dominant US-based AI systems is growing. While base models have not drastically changed, solution providers have learned how to optimize their usage, leading to more practical applications. In 2024, AI adoption increased, with the emergence of the first truly profitable AI-driven business use cases.

One of the biggest trends in 2024—and a key focus for 2025—is AI agents. Unlike simple chatbots, AI agents can interact with APIs, retrieve data, access emails, perform searches, and even communicate with other AI models. This opens up new possibilities, particularly in legal tech, where AI can act as a legal counsel assistant. AI agents now have the capability for long-term memory, allowing them to remember user interactions and retain historical data. This enhances their ability to provide personalized responses and continuously improve through autonomous learning and corrections. The developments of 2024 have laid the groundwork for more sophisticated AI applications, making AI more efficient, accessible, and applicable across industries, particularly in the legal sector.


Maria: A significant trend is the increasing use of ChatGPT in personal settings. Friends and colleagues use it for tasks like creating recipes, writing letters, and even for small legal matters. While major legal issues still require real lawyers, people are turning to AI for minor legal tasks. Additionally, ChatGPT is being used for emotional connection, especially around Christmas time, when people interact with it as a chatbot friend. Some even use AI to recreate the voices and images of deceased loved ones from data on platforms like WhatsApp.

AI has become a game-changer in marketing. At a marketing conference, it was demonstrated how AI-created content is reshaping the industry: Adidas produced an entire marketing video using AI, showcasing the technology's potential. AI can also generate podcasts with AI-generated discussions that sound completely real, as well as create presentations and business pitches. The heavy use of large language models in marketing is delivering impressive results, making AI an essential tool for content creation.

In the legal field, AI is mainly used through ChatGPT-based use cases. Lawyers leverage AI for tasks such as drafting and reviewing contracts, extracting information, and client onboarding. AI also facilitates email communication and enhances client interactions. These AI-driven solutions streamline legal workflows, making processes more efficient.

Nils: ChatGPT has now reached almost every lawyer across different age groups, and everyone is experimenting with it. The technology is widely used in the legal sector, marking a significant shift in AI adoption. There has been a surge in investments in legal tech and legal startups; legal AI is no longer off the radar of VCs, which is accelerating innovation in the field. "Five to ten years ago, legal tech was not considered attractive, but with AI, that has changed."

Another emerging trend is the focus on domain-specific AI models. Legal AI providers in Germany and other jurisdictions are working on legal AI models tailored to regional legal frameworks. There is now less focus on raw technical development and more emphasis on benchmarks. OpenAI's o1 preview model was released but did not create a big hype, and its successor o3 is expected to be very expensive, leading the legal industry to prioritize real use cases over just new models. The focus is now on better reasoning capabilities, particularly for real legal use cases, with the goal of finding real value in legal tools and making AI more practical and beneficial in the sector.

The actual use cases in the legal space

  • Two main directions: repetitive tasks (boosting productivity with contract management tools) and mass claim scenarios.
  • First instance of vertical specialization combining LLMs with legal expertise at a high level.
  • Traditional legal tech required manual programming, but legal AI now assists with smaller claims and client mandates.
  • New digital regulations like the DSA (Digital Services Act) and DORA create AI-driven compliance opportunities.
  • Many legal tech providers integrating AI with deterministic technologies.
  • Document management systems (DMS) now include AI features, whether necessary or not.
  • AI adoption is driven by market trends, with only the most effective solutions likely to remain.
  • Automation of repetitive tasks and AI-driven compliance solutions are key trends shaping legal AI.


What are the challenges of LLM trends?

Maria: AI models perform exceptionally well in English, but their performance in languages like French, Spanish, and other European languages remains a significant challenge. This is particularly relevant in Europe, where multilingual capabilities are crucial. A key development in this area is OpenGPT-X, a European AI model trained on only 50% English data, compared to Meta's LLaMA, which was trained on 89% English data. OpenGPT-X incorporates a higher percentage of German, French, and Spanish data, making it more suitable for multilingual applications and legal contexts across Europe.

Another major concern is AI regulation and data privacy. Many users rely on ChatGPT without fully considering the implications of data usage. When using the open version, data may be used for further AI training, which raises risks, especially for businesses and legal professionals handling sensitive information. To mitigate this, it is essential to use enterprise versions of Legal AI tools and be mindful of the type of data entered. This is particularly relevant for lawyers who, in some cases, have unknowingly entered business secrets or confidential legal information into AI systems, potentially exposing sensitive data.

In the legal industry, AI is increasingly used as a knowledge base. However, AI-generated legal references are not always accurate, and there have been cases where lawyers unknowingly cited fabricated or incorrect legal decisions in official documents. Despite awareness of this issue, such incidents continue to occur, highlighting the need for verification through external sources like Google or legal databases. Education and awareness about AI’s limitations are crucial to ensure its responsible use in legal settings.

Courts and judges have been slow to adopt AI, primarily due to the significant computational resources required. Most AI systems rely on cloud services like Azure or AWS, but due to the U.S. Cloud Act, data stored on these platforms could potentially be accessed by U.S. authorities. This makes such cloud solutions unsuitable for judicial applications, as courts require highly secure and local data storage. As a result, there is growing interest in developing smaller AI models that can run on local cloud providers, reducing reliance on large-scale, foreign-owned cloud infrastructures.

The shift toward smaller AI models is not only beneficial for security and cost-efficiency but also for sustainability. Smaller models require fewer resources, consume less energy, and are more environmentally friendly. As AI continues to evolve, the trend is moving toward more efficient and specialized models that balance performance, security, and environmental impact.

David: "A smaller model doesn't just mean fewer or worse results; it can definitely mean even better results for exactly what you actually need."

How can the economic situation regarding AI in Europe be defined?

Gordian: Many companies, including corporates and law firms, are increasingly developing their own proof of concepts (POCs) and AI solutions. While this reflects a growing interest in AI, it comes with both advantages and challenges. On the positive side, companies can quickly build simple AI tools using available frameworks, allowing them to experiment without relying on external vendors.

However, developing an AI solution independently also brings risks. Many organizations underestimate the investment required, leading to scope creep and unexpected costs. While solution providers dedicate extensive resources to AI development, companies building their own POCs must also commit to ongoing maintenance and benchmarking to ensure quality.

Additionally, the AI landscape is constantly evolving, meaning that solutions built today may become outdated within a few months if they are not regularly updated. Companies must factor in long-term maintenance, compliance with regulations like the AI Act, and overall adaptability to keep up with technological advancements. These challenges highlight the need for careful planning when integrating AI into corporate and legal environments.

What are the risks and opportunities for users of AI?

Risks for AI users

Maria: AI is becoming increasingly easy to use, with many people relying on tools like ChatGPT for various tasks. However, a major risk lies in data privacy, as users often upload photos, personal data, and even conversations without considering that this information can be used for AI training or even misused. In some countries, people readily share their data with cloud providers, often unaware of how their information is stored or processed.

In Europe, strict regulations such as the AI Act and the Data Act ensure a greater focus on data privacy by design. This means that AI solutions developed in Europe are designed to keep data secure from the start, ensuring that information stays on users' devices or is stored only within approved cloud providers. While this enhances user security, it also leads to challenges—some large AI companies are now avoiding the European market due to regulatory concerns. For instance, Meta decided not to release an updated version of LLaMA in the EU due to copyright issues and compliance fears.

Another critical issue is the environmental impact of AI. The carbon footprint of training GPT-3 is equivalent to the lifetime emissions of five fuel-powered cars, including their production. Daily AI usage generates even more emissions than the initial training, making AI consumption a significant environmental concern. A single ChatGPT query produces ten times the carbon footprint of a Google search, yet many people use ChatGPT for knowledge retrieval, even though Google provides more reliable information with a lower environmental impact.

Educating users about data security and AI’s ecological footprint is crucial. While AI offers many advantages, it comes at a cost, and awareness of these implications will be increasingly important in the future.

Opportunities for AI users

Maria: Regulations, particularly in data privacy, are not just restrictions but also drivers of innovation. One example is federated learning, which allows AI models to be trained directly on a user’s device without sharing raw data with the cloud. Instead, only the trained model is sent, ensuring greater data security while still enabling AI advancements. Rather than seeing European regulations as a barrier, they can be a source of technological progress. Europe's focus on secure AI solutions encourages the development of privacy-first innovations, allowing it to compete globally while maintaining high standards of data protection and AI efficiency.
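The federated learning idea described above can be sketched in a few lines. The following toy simulation is illustrative only — the function names and the simple linear model are assumptions, not taken from any specific framework — but it shows the core federated-averaging mechanic: each device trains on its own private data, and only the learned weights, never the raw data, are sent to the server and averaged.

```python
import random

def local_update(w, data, lr=0.3, epochs=5):
    """Train y = w * x on one device's private data (full-batch gradient descent)."""
    for _ in range(epochs):
        grad = sum((w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w  # only the updated weight leaves the device

def federated_average(global_w, devices):
    """One federation round: every device trains locally; the server averages."""
    updates = [local_update(global_w, data) for data in devices]
    return sum(updates) / len(updates)

def make_device(n=50):
    """Private samples of the same underlying rule y = 2x, kept on the device."""
    data = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        data.append((x, 2.0 * x))
    return data

random.seed(0)
devices = [make_device(), make_device()]

w = 0.0
for _ in range(20):  # 20 federation rounds
    w = federated_average(w, devices)
print(round(w, 2))  # converges to the true coefficient 2.0
```

In a real deployment the "weights" would be a full neural network's parameters and the averaging would be weighted by each device's data volume, but the privacy property is the same: the raw samples never leave the device.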

Nils: AI presents significant efficiency gains for legal professionals by reducing repetitive tasks such as document review and legal research. This allows lawyers and judges to focus on complex legal matters rather than spending time searching for information. Beyond efficiency, AI also enables new business models in the legal sector. By lowering costs and improving access to justice, AI-powered services can make legal assistance more affordable and accessible. A prime example is flight compensation services, which have streamlined the process of claiming refunds through automation. Similar AI-driven legal services could emerge in other areas, making legal advice less expensive and more widely available.

AI also enhances client services, with chatbots and automated legal education tools helping users navigate legal processes, such as tax filing assistance. This benefits both non-lawyers seeking guidance and lawyers who need support in their work.

A key challenge is how legal education will evolve with AI. Since AI can now draft contracts and generate legal documents instantly, legal professionals must shift their focus to reviewing AI-generated content rather than creating it from scratch. However, universities have yet to effectively integrate AI into their curricula, leaving uncertainty about how legal training will adapt in the future. 

How can AI solution providers balance flexibility, compliance, and automation in legal tech?

Gordian: AI solution providers must adapt to flexible models due to constantly improving AI technology and changing compliance regulations. If a model becomes restricted, providers must be prepared to switch models while maintaining the same quality. This flexibility extends to task-specific models, as different models perform better in identifying legal topics or rephrasing clauses based on custom requirements.
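One common way to keep that flexibility is a small routing layer between the product and the model backends. The sketch below is an illustration under assumed names, not Legartis's actual architecture: callers depend on one narrow interface, each task type is mapped to a backend, and a restricted or underperforming model can be swapped without touching the calling code.

```python
from typing import Callable, Dict

# A "model" here is just a function from prompt to completion; in practice it
# would wrap an API client (a hosted LLM, a local model, an on-premise one).
Model = Callable[[str], str]

def cloud_model(prompt: str) -> str:   # stand-in for a hosted LLM
    return f"[cloud] {prompt}"

def local_model(prompt: str) -> str:   # stand-in for a smaller on-prem model
    return f"[local] {prompt}"

class ModelRouter:
    """Routes each task type to the backend currently configured for it."""

    def __init__(self, routes: Dict[str, Model], fallback: Model):
        self.routes = routes
        self.fallback = fallback

    def run(self, task: str, prompt: str) -> str:
        return self.routes.get(task, self.fallback)(prompt)

    def swap(self, task: str, model: Model) -> None:
        """Replace the backend for one task, e.g. if a model becomes restricted."""
        self.routes[task] = model

router = ModelRouter(
    routes={"identify_topics": cloud_model, "rephrase_clause": cloud_model},
    fallback=local_model,
)
print(router.run("rephrase_clause", "limit liability to 12 months"))
router.swap("rephrase_clause", local_model)  # vendor change, same interface
print(router.run("rephrase_clause", "limit liability to 12 months"))
```

Because quality differs per task, the per-task mapping also captures the point that one model may be best at identifying legal topics while another is best at rephrasing clauses.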

Another key trend is the need for on-premise AI solutions, especially in the legal sector, where highly confidential contracts may not be shared with vendors. To address this, providers must support smaller models and offer vendor-agnostic platforms that allow customers to host AI on their own infrastructure. Fortunately, smaller AI models are improving in quality, making on-premise AI adoption more feasible. Transparency and traceability are becoming expected features in AI solutions. Users want to understand how AI arrives at decisions, reducing misunderstandings caused by misinterpreted prompts or missing context. Providing clear explanations within AI tools helps users refine their inputs and achieve better results.

A major shift in 2025 will be automation through AI agents. Traditionally, users needed to manually enter prompts to receive AI-generated outputs. With AI agents, the system itself can generate prompts, reducing the need for prompt engineers and making AI more self-sufficient with minimal expert supervision. This trend will drive greater automation and efficiency for AI solution providers.

What LLM trends can we expect in 2025?

Nils: AI agents are expected to be the next major trend, driving automation in workflows, complex tasks, and multi-step processes. Unlike simple custom GPTs, real AI agents will integrate multiple LLMs to handle advanced legal tasks, such as contract drafting and claims processing. Legal processes involve research, collecting client information, and iterative communication, making them too complex for a single AI prompt. Instead, task automation and AI-driven process redesign will be key challenges and developments this year. Another important trend is combining legal expertise with AI tool providers, as seen in the DORA example, where law firms collaborate with AI developers to create high-quality, specialized AI solutions.

Maria: A major positive trend for the coming year is the combination of generative AI with a knowledge base, allowing users to verify sources and trust AI-generated information. This focus on explainable AI will be crucial, ensuring transparency in AI decision-making, not only for text generation but also for areas like image recognition in legal contexts. However, a significant negative trend is the rise of deepfakes, especially during election periods. AI-generated content is becoming so advanced that it is difficult to distinguish between real and fake. EU regulations require labeling of deepfakes, but this does not prevent criminal misuse, making it a growing concern.
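The "generative AI plus knowledge base" pattern Maria describes is commonly implemented as retrieval-augmented generation. The toy sketch below (the knowledge base, the overlap-based scoring, and the `fake_llm` stand-in are all illustrative assumptions) shows the essential flow: retrieve the most relevant passages, ground the prompt in them, and return the source identifiers alongside the answer so the user can verify it.

```python
def retrieve(query, knowledge_base, k=2):
    """Score each document by word overlap with the query; return the top k."""
    words = set(query.lower().split())
    scored = sorted(
        knowledge_base.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_sources(query, knowledge_base, generate):
    """Build a grounded prompt and return the answer with its source IDs."""
    hits = retrieve(query, knowledge_base)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
    prompt = f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
    return {"answer": generate(prompt), "sources": [doc_id for doc_id, _ in hits]}

knowledge_base = {
    "gdpr-art-17": "GDPR Article 17 grants the right to erasure of personal data.",
    "ai-act-art-5": "The AI Act Article 5 lists prohibited AI practices.",
    "dsa-art-34": "The DSA Article 34 requires systemic risk assessments.",
}

# Stand-in for an LLM call; a real system would send the prompt to a model.
fake_llm = lambda prompt: "Answer grounded in the retrieved sources."

result = answer_with_sources("What does the AI Act prohibit?", knowledge_base, fake_llm)
print(result["sources"])  # the user can check the cited documents
```

A production system would use embedding-based search rather than word overlap, but the explainability benefit is the same: the answer arrives with a list of documents the user can open and check.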

Another key issue is AI model costs. While models are becoming smaller and cheaper, large providers like OpenAI and Microsoft have yet to recoup their investments. It remains uncertain whether AI services like ChatGPT and Copilot will see price increases in the future due to high resource and training costs. AI pricing will likely change as new models are introduced, with OpenAI continuously updating its models.

Another issue is the potential decline in AI model quality over time. As AI generates more text and images, future models will increasingly learn from AI-generated content rather than real-world data, potentially reducing quality.

Gordian: AI tool adoption is still in progress, with major tech companies integrating AI into their ecosystems. Google Workspace has enabled Gemini for all users, while Apple and Microsoft are also embedding AI into their platforms. The success of these integrations will depend on user satisfaction and profitability.

A major trend for 2025 is AI agents, but they come with security risks, particularly prompt injections. Autonomous agents with email access could be manipulated by external actors, potentially exposing sensitive data like passwords. This highlights the need for careful AI deployment and data security measures. Startups are already working on solutions, but awareness and risk management remain critical.

David: Thank you very much Nils, Maria and Gordian for your participation in today's Legal AI Talk. And a big thank you to our audience!

Q&A from the audience

What is the comparison between Gen AI and simple LLM? 

Gordian: Generative AI is the broader term for all of these large models that can generate content, while a large language model is a more specific term: as the name says, it relates to language, so it does not cover models that generate images and so on, which are specific to other tasks. In short, generative AI is more of a product term, and large language model is more specific to certain tasks.

Does anyone know of any trends regarding AI in criminal investigations or evidence presented on social media, for example?

Maria: To be honest, the trends here are more or less negative, because if you use AI in criminal investigations, it will be classified as high-risk under the European Union AI Act. I think courts will not invest that much money to really set up these kinds of AI systems, because high-risk AI systems must fulfill compliance requirements, which takes a lot of time and is quite expensive.

However, what I also see is that you can use AI to filter out negative comments or untrue claims on social media. But as we see with Meta, people don't want it, and the companies are not following up on this topic.

Nils: I see a positive trend. One big potential is that AI makes it much easier to analyze large data sets, also in criminal investigations and through social media. From what I know, at least in Germany, courts are working on this at the moment, because it is so much easier to find patterns this way.

Digital forensics will also advance with AI over the next years. Courts are a bit slower than the outside world, of course, but I think they are working on it, and it will come in the next years.

I hear a lot about improved efficiency and lower costs, but everyone offering legal AI services is charging premiums.
Where is the benefit? 

Gordian: I somewhat agree with that statement. There is a big difference between providers: a lot of existing platforms are just slapping AI on top and then charging for it, and I would also question the benefit compared to the value. This is, of course, something you have to discuss with the tool provider to get the business case right.

In our case, it's quite simple. For example, we reduce risk in a contract, which has a measurable value, and we speed up contract review, so you can easily calculate how much that is worth.

Nils: I would agree; it depends. But some big law firms are losing pitches because they are just too expensive, since they don't use AI, for example for data analysis and information extraction in big data rooms during due diligence. When you do that manually, you only have the billable hour, and then it is just too expensive, so you have to use tools.

Of course, the tool itself costs money, but the tool plus some hours to review its output are still cheaper than doing the work manually through lawyers. I think that is what is meant by more efficiency and lower costs, at least for law firms. And this will only increase; we will see more of it.
