
AI Tools in Law: Benefits, Risks, and Best Practices


Interview with Betty Kizimale-Grant, Legal Manager at Luxempart, and Alexandre Tangton, Senior Associate at CMS, by Sabrina El Abbadi, Head of Tikehau Capital Luxembourg and Co-chair of the YPEL Legal Committee, as published by the Legal Young Private Equity Leaders Committee

Can you please introduce yourselves?

Betty: I am a Legal Manager at Luxempart, a listed investment company, responsible for overseeing three areas: transactional legal support for our investment teams, corporate matters and regulatory compliance. We ensure that each transaction is in line with the company’s strategic objectives, while mitigating legal risks through the review and negotiation of transaction documentation. Our in-house legal team also handles the group’s legal management in compliance with stock exchange governance rules, company law, transparency and market abuse regulations, as well as AML/CFT and data protection rules.

Alexandre: I am a corporate / M&A lawyer, and I advise clients in connection with mergers and acquisitions, joint ventures, restructurings, corporate finance, transactional business law and day-to-day corporate housekeeping of Luxembourg companies, as well as on corporate governance.

How Can AI Tools Help Lawyers?

Betty: Staying up to date with technology trends in the legal sector is essential for in-house lawyers. Within our team, we have identified several use cases where AI solutions could support our work. For example, AI could assist in drafting legal provisions by providing templates or suggesting language based on legal databases. In contract review, AI tools could flag potential risks or inconsistencies, reducing the time spent on document analysis. AI-driven translation tools could quickly and accurately convert legal texts in the context of our international transactions. Moreover, AI could streamline legal research and our legal watch activity by retrieving relevant legal precedents, regulations, or case law. Additionally, AI could help summarize long contracts or explain specific clauses, providing quick insights into key obligations or risks, which is especially helpful for communicating with non-legal colleagues.

Alexandre: AI used wisely is also helping lawyers to become more efficient in the way they deliver work to clients, and clients are increasingly expecting law firms to be properly equipped with AI tools.

An example is the use of AI as part of a due diligence exercise in the case of a sale or acquisition. Due diligence often requires several lawyers to go through thousands of pages of documents (often in different languages) to look for certain pieces of information that are of interest to their clients. Certain AI tools, under the supervision of skilled lawyers, can now undertake this time-consuming and labor-intensive task much more efficiently, freeing up time for lawyers to focus on more rewarding aspects of their legal work.

Another example where AI can be useful, as mentioned by Betty, is the review of contracts. Certain AI tools can, for instance, review clauses and benchmark them against the data they have collected, making it possible to check whether such clauses (i) reflect market practice or (ii) favor one party over the other.

What Risks Should a Lawyer Bear in Mind before Using AI Tools?

Betty: The introduction of AI tools into legal work can offer significant efficiency gains, but it also comes with risks that must be carefully managed.

A key initial consideration is whether the applicable legal and contractual framework permits the use of AI tools—especially third-party solutions—for processing the relevant information. It is crucial to ensure that the processing of sensitive information such as corporate data, investment details, and confidential information does not violate any legal or contractual obligations applicable to the company.  

Another risk lies in how the confidentiality and security of input and output data are ensured by the AI provider, particularly when it comes to AI platforms that rely on cloud computing. A security breach in the AI provider’s system could expose sensitive data, potentially causing financial loss and reputational harm. This risk would be particularly acute when dealing with confidential business strategies or market-sensitive information that could impact ongoing or future investments.

Another significant concern is intellectual property rights over the data processed by AI tools. The company generates proprietary legal documents, investment strategies, and know-how. If this information is input into an AI system, questions arise about ownership of both the input data and the resulting output, especially if the AI system uses data to improve its algorithms or train future models.

Compliance with data protection regulations, such as GDPR, is also a top priority. One of the major risks is that the data processed by AI might contain personal information, and if mishandled, this could lead to GDPR violations, resulting in significant penalties and reputational damage.

Alexandre: The risks highlighted by Betty are of course essential considerations for both in-house legal teams and law firms.

These risks have also been picked up by the European legislator, which has addressed them in the EU AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence), which entered into force on 1 August 2024. In short, the AI Act classifies artificial intelligence systems based on the risk they present, from unacceptable to minimal. AI systems presenting unacceptable risks are banned, while those posing minimal risks are unregulated. Providers (developers) of high-risk AI systems will be subject to the most stringent requirements: such systems will, for instance, need to be assessed and registered before being put into service and will need to be subject to adequate human oversight.

In addition, certain risks linked to the use of AI itself should not be neglected. One important issue to keep in mind is the so-called “hallucination”, which occurs when an AI system generates text or information that appears plausible but is factually incorrect or entirely fabricated. The fact that AI systems do not always cite the sources of the information they use, and that it is not always clear how an AI system reaches its decisions (the so-called “black box” problem), does not help.

To address these issues, the intervention and supervision of skilled legal experts to check and verify the work of an AI will remain essential.

Conclusion and Advice to Other Lawyers on the Use of AI Tools

Betty: AI tools present a promising avenue to streamline certain legal tasks. It may be worthwhile for in-house lawyers to explore and test existing AI solutions to see where they might add value to their practices. Before deploying any AI tool within the legal team, and more generally the company, obtaining clearance from a legal and security standpoint is, however, essential. Third-party AI tools that rely on cloud-based platforms or shared servers raise concerns about data security, confidentiality, intellectual property and compliance with regulatory obligations such as data protection standards. For in-house lawyers, ensuring that the use of AI solutions comes with adequate safeguards should be non-negotiable. In this context, it may also be useful to issue internal guidelines on how AI tools should be used, to avoid the risk of sensitive information being exposed or mishandled, and to actively monitor for potential breaches. Balancing innovation with strong legal and security protections is the key to successfully integrating AI into the legal workflow.

Alexandre: For law firms, embracing AI has become a necessity, but it can also be seen as an opportunity to better serve clients and boost productivity. Law firms should put in place a careful strategic approach to successfully integrate AI into their day-to-day work. As it is easy to bet on the wrong horse, law firms should consider and test several AI tools before adopting any. Moreover, having clear internal policies on AI use, data handling, and compliance, along with regular training for lawyers, will help minimize the risks linked to the use of AI.