The legal profession in the age of AI

Legal firms are approaching Artificial Intelligence with a dual perspective. Not only are they using AI tools to benefit administrative and research processes within their businesses but, due to the nature of their work, they are also involved in the wider legal implications of the adoption of AI across society.

In automotive, self-driving cars are a classic example of this. Waymo, which began as Google’s driverless car programme, is now operating in two locations in the US and, as of October 2023, is available to book through Uber in Phoenix, Arizona. People can now book a driverless taxi to the airport just as easily as they can a car with a human driver. This is a remarkable technological advance driven by AI, but it is also a risk-bearing operation that could have a significant impact on legal cases globally.

If a self-driving car is faced with a split-second decision between hitting a human or another vehicle, how does the car make that decision and who takes the responsibility for having programmed the car to make the choices in the first place? If there is an accident and a subsequent legal process, who then is liable?

With examples like this in mind, lawyers need to look carefully at how they themselves adopt AI tools, knowing that the outcomes of such technology can be life-altering.

Within the day-to-day business operations of law, automated legal research processes have been in use for several years now, with systems such as Thomson Reuters’ Westlaw UK providing faster and easier access to legal data. Some of the more sophisticated tools have the capacity to suggest a range of outcomes, depending on the questions asked of the system. However, they are not yet – and may never be – at the stage where they will make the decisions themselves. For now at least, this must be performed by humans.

AI tools can also draft legal documents – letters, contracts, business agreements and the like – but these still need to be proofed and fact-checked by trained professionals to confirm their validity and accuracy. AI systems can produce inaccurate information in a definitive, ‘confident’ way that could lead the reader to accept it as truth. When asking a question of an AI system, you cannot be absolutely certain that the response will be 100% accurate. The dangers of this for a law firm are self-evident: an incorrect legal document could spell serious trouble for a client and damage a firm’s reputation.

An artificial intelligence system is neither creative nor capable of independent thought, meaning that where creative or persuasive legal arguments are required to win contentious cases, AI cannot be relied upon to solve the issue.

AI also has the potential to damage business for legal firms. Prospective clients may Google their legal problems before they approach a legal firm – a law version of the medical phenomenon of ‘Dr Google’. As with medical self-diagnosis, clients may feel they can find a solution without the need to consult an expert, though their research is likely to yield only basic legal information. What the legal profession can offer clients is experience, knowledge and gravitas, not to mention an understanding of the human condition.

Law is human. It is for and about the rights of the human and the practice of justice. Trust between client and legal representative is key, and empathy is essential. Therefore, for as long as AI lacks these qualities, humans will be needed in the legal profession.

As with all the professions we have examined in our blog series so far, AI has an important part to play in the legal industry by increasing efficiency, reducing overheads and assisting with mundane tasks. The law will be increasingly impacted as AI sweeps across the globe, raising challenging issues of accountability. This means the human skills of lawyers will be required more than ever to navigate this new landscape and establish legal precedents for the future.

About the author

The Ennis & Co Comms Team
