The advent of AI in legal circles: Should we all be looking for new jobs?
Something is changing in the world of legal finance.
Since the release of ChatGPT in late 2022, and of various other artificial intelligence large language models (LLMs) since, people have been successfully integrating AI into everyday life quickly and comfortably. It is no longer something that requires an eye-watering amount of money and time to build. In fact, it’s as easy as going to somewhere like ChatGPT and simply asking it a question.
With this advent of supercharged, and super-accessible, AI, lots of industries have sat up and started to pay attention. Teachers can now ask an LLM to come up with ten quadratic equations to solve by completing the square, along with answers and step-by-step solutions. A few seconds later they could have the content they would be using to teach that day. Software developers are using tools like GitHub Copilot, which has been “trained on billions of lines of code”, to speed up writing the boilerplate code that is needed to create the more elegant and difficult parts of software. From retail through to traffic management, AI is starting to make the simple jobs easier and the harder jobs quicker.
The legal world is typically very resistant to change of any type. Ask most legal departments what they run their number analysis, data storage, knowledge sharing, timelines, even sometimes calendars and goal setting on, and most of the time you’ll get Microsoft Excel as the standout answer. It makes sense: small details are very important when negotiating contracts, and changing systems can sometimes cause those small details to slip through the cracks.
But a system built on highly structured documents (once you’ve seen one ISDA master agreement, you’ve seen about 90% of all ISDA master agreements), with specific clauses to add and extract, is ripe and ready for AI to take control, wrest the contracts out of Excel’s grasp, and supercharge the industry of negotiation.
Picture the scene…
A legal firm receives a contract amendment for negotiation from a regular client. It has changes to clauses, plus updates and additions to consider. This is the third amendment to the original contract that was agreed upon.
The current process would involve some form of data gathering: finding the original, and each amendment so far, before building a picture of “how it looks now”. Once that is done, the team considers each piece of the new amendment to establish “how it will look moving forward”. The team would cross-reference this against suggested best practices for the jurisdiction, the considerations of the contract, and potential risks. Once this fairly lengthy process is finished, the actual work of negotiating and determining whether the amendment is acceptable needs to be done. After all that, the finished and finalised amendment needs to be stored somewhere in case another amendment comes through in the future, at which point the whole process starts again.
With the advent of AI, and more technologically refined solutions, this process could be sped up by orders of magnitude. An application could read the amendment, find the previous contracts in its database, collate the data, and give the legal team a “how it looks now” and a “how it will look moving forward” without any human interaction. In a futuristic world the AI could even negotiate the contract. We are a few years away from that, but right now it would be realistic for the application to provide the updates and the relevant best practices, so negotiation is as streamlined as possible, before storing the updated final amendment ready for the next step. Even from a quick hypothetical, it’s clear the potential for AI to revolutionise the legal world is massive.
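To make the hypothetical a little more concrete, the “gather, collate, compare” steps could be sketched as below. Every function, field name, and piece of data here is invented purely for illustration; no real product, database, or API is implied:

```python
def find_prior_versions(client_id, contract_id, database):
    """Hypothetical: fetch the original contract and all prior amendments."""
    return [doc for doc in database
            if doc["client"] == client_id and doc["contract"] == contract_id]

def collate(versions, new_amendment):
    """Hypothetical: replay each amendment in order to build two snapshots:
    the current agreed position, and the position if the new amendment lands."""
    history = sorted(versions, key=lambda doc: doc["version"])
    how_it_looks_now = {k: v for doc in history for k, v in doc["clauses"].items()}
    moving_forward = {**how_it_looks_now, **new_amendment["clauses"]}
    return how_it_looks_now, moving_forward

# Toy data standing in for a document store:
database = [
    {"client": "acme", "contract": "isda-1", "version": 1,
     "clauses": {"termination": "30 days notice"}},
    {"client": "acme", "contract": "isda-1", "version": 2,
     "clauses": {"termination": "60 days notice"}},
]
amendment = {"clauses": {"termination": "90 days notice"}}

now, forward = collate(find_prior_versions("acme", "isda-1", database), amendment)
print(now["termination"])      # "how it looks now": the latest agreed clause
print(forward["termination"])  # "moving forward": the clause if accepted
```

The point is not the code itself but the shape of the work: the retrieval and collation is mechanical and automatable, leaving the judgement calls to the legal team.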
So how does it work?
LLMs work, as all things tend to when you break them down to the bare bones, on statistics and numbers. OpenAI have led the charge with their LLMs, reportedly using around 570 gigabytes of data (roughly 300 billion words) to train the model behind ChatGPT, the current de facto industry standard. The actual “magic” behind these LLMs is that they use statistics and probability, fed by this enormous dataset of “training material”, to predict the most likely word that should come next in the conversation with the user. They are fine-tuned using something called Reinforcement Learning from Human Feedback (RLHF), which does what it says on the tin, but has obvious risks and limitations. If enough people asked ChatGPT “what is 5 + 5?” and then said “No, 10 is wrong, the answer is 11”, eventually ChatGPT would respond to that question with the wrong answer. So, one could argue the use of “intelligence” to describe these models is a bit of a misnomer. The model isn’t thinking freely so much as using an enormous training set to determine the most likely thing to say next, on a word-by-word basis.
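The “most likely next word” idea can be shown in miniature. This toy sketch (the phrases and counts are invented for illustration, and a real LLM is vastly more sophisticated) simply counts which word most often follows another in its “training set” and predicts that:

```python
from collections import Counter

# A toy "training set": phrases the model has "seen".
# Real LLMs learn from hundreds of billions of words, not four phrases.
corpus = [
    "the contract is binding",
    "the contract is signed",
    "the contract is binding",
    "the amendment is pending",
]

# Count which word follows each word (a simple bigram table).
next_word_counts = {}
for phrase in corpus:
    words = phrase.split()
    for i in range(len(words) - 1):
        next_word_counts.setdefault(words[i], Counter())[words[i + 1]] += 1

def predict_next(word):
    """Return the most frequently observed continuation of `word`."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("contract"))  # -> "is"
print(predict_next("is"))        # -> "binding" (seen twice, vs once each for the others)
```

Scale that idea up by many orders of magnitude, over whole contexts rather than single words, and you have the statistical engine described above.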
The models are also non-deterministic: because each response is sampled from a probability distribution rather than chosen by a fixed rule, if you ask the model to do the same thing twice and the task is complex or creative enough, you will get a different answer each time. Sometimes, if the prompt is highly complex, the model can “hallucinate”, coming up with made-up, incorrect statements or (as an example tied back to the legal profession) inventing case law that doesn’t exist.
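That variability comes from sampling. Instead of always taking the single most likely next word, the model draws one at random, weighted by probability, which is why repeated runs can diverge. A minimal sketch, with invented probabilities standing in for a real model’s output:

```python
import random

# Invented probabilities for the word following "the contract is":
next_word_probs = {"binding": 0.5, "signed": 0.3, "terminated": 0.2}

def sample_next(probs, rng):
    """Draw one word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random()
# Two runs of the "same prompt" can give different continuations:
print(sample_next(next_word_probs, rng))
print(sample_next(next_word_probs, rng))
```

An unlikely word is still a possible word, which is one intuition for why a plausible-sounding but wrong continuation, such as a fabricated case citation, can occasionally be produced.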
However, the reason this statistical approach is used is that, far more often than not, the model puts sentences together correctly, and you end up with something very powerful that can draw on, for example, the text of ISDA master agreements.
Fitting it into the legal world. Are lawyers about to lose their jobs?
LLMs are a fantastic new tool that can be used in near enough all areas of work, and the legal world is no exception. However, given the current limitations of the models, and arguably the very essence of how they are built, as enormous, complex but ultimately statistical engines, a tool is still only as good as the craftsman who wields it. Using AI as in the example above, with software tailor-made for the task, would streamline much of the work required by negotiators, but the output would still, by necessity, need the eye of a truly intelligent (in the human sense of the word) being to check it over and make sure nothing is slipping through via hallucinations in the system.
So, for the moment, no, lawyers aren’t about to be fired en masse, carving a path for robots to negotiate our contracts. And, based on the experimentation and research currently being done around the models, I don’t foresee a world where AI replaces legal teams. However, in the not-too-distant future I believe legal teams will be leveraging AI to do the boring grunt work, massively expediting contract negotiation time and effectiveness.
A final note for any legal professionals reading this:
I am very much a technical person living in a legal world at the moment, building applications. My legal knowledge is limited, but I can see very clearly areas in the industry that are crying out for modernisation and innovation, and AI is definitely one such area. With that in mind, the virtues of AI, and particularly LLMs, are front and centre of a lot of interesting, cutting-edge developments in tech, and the temptation might be to jump straight in. But before doing so, ensure that you aren’t putting private data into a publicly trained model. Case in point: don’t ever put sensitive information into ChatGPT unless you are expressly happy for that information to be used to train the model, and potentially to be extracted by someone else using the same software. The best bet is to use a paid software wrapper, which will (probably) have a private, secure model of AI for its own use, keeping the data safe.