> "I strongly believe that law students and junior lawyers need to understand these AI tools, and other technologies, that will help make them better lawyers and shape future legal practice," Buell told Mashable in an email.
I am a law student and that is not what is going through my head. What I see is a tougher job market, justifiably. Understanding, and more importantly developing, legal tech will help make me a better lawyer; but since we (as graduates) are not yet at the top of the food chain, we will be the ones eaten away.
Also: it is important to keep in mind that this is strictly contract law. Criminal law, and maybe to some extent tort law, will remain in the hands of real people for the foreseeable future, at least with regard to representation. Representation should not be seen in strictly legal terms. As a lawyer you are also a fellow human with emotions, who must weigh pros and cons specifically adapted to your client. And that, right now, is a human thing only.
Research and drafting documents, I believe, will also be dominated by AI, simply because of economies of scale and the cost-effectiveness ratio.
My wife was a junior associate in the banking law group of a major firm during the S&L crisis. She spent a lot of time on the due diligence work involved in merging sick S&Ls into each other. I suspect that if faced with a similar Augean stable, having auto-summarization and auto-triage would be hugely beneficial. True, there would be a need for fewer junior associates, since they would be looking only at exception documents and otherwise tuning the "queries", for lack of a better term. But I think it would help even out the workload, and the net impact would be a healthier organizational structure. (That experience made her rethink her career path; she ended up in corporate practice doing software licensing.)
In the area of patent law, claims have a very regular structure, and I think they would yield to automated analysis given the current state of machine learning. Given the stakes involved, it seems to me that automated claims checking and claims analysis would help everyone. Patent firms could produce better work product in less time. There is no great surplus of patent attorneys, and given the time and cost constraints, many companies pursuing patents tend to establish a budget and attack patents in priority order until the budget is gone. I don't think patent firms will end up billing any less in total; they will simply bill the same amount for more patents, each completed for less money. As a small-time inventor, this would be good for me, because if you only want one patent, you see your costs reduced.
This. So much this. One big problem with AI/ML as it is currently eating the world is that it's just a really fancy averaging engine. It memorizes (incredibly, beautifully, superhumanly) but it doesn't actually understand. There's a whole spate of "let's trick the AI" in image processing; I have to believe that there are easy ways to do the same with text.
That said, one counterargument could be that obfuscating against an AI will lead to more confusing contracts, which could actually end up making them harder to enforce. So perhaps in this case there's a counter-force.
I read a great, great paper about training systems against this. Obfuscation is currently easy against AI, but a human can often still detect obfuscation that fools an AI (purposeful perturbations). This can be pre-trained against by incorporating such perturbations programmatically during training.
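For anyone curious what "incorporating such perturbations programmatically" might look like, here's a minimal Python sketch of the augmentation step. The homoglyph table and training example are invented for illustration; a real pipeline would be far more thorough:

    import random

    # Character-level perturbations of the kind used to fool text models:
    # Cyrillic look-alike substitutions and adjacent-character transpositions.
    HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}

    def perturb(text, rate=0.1):
        chars = list(text)
        for i in range(len(chars) - 1):
            if random.random() < rate:
                low = chars[i].lower()
                if low in HOMOGLYPHS:
                    chars[i] = HOMOGLYPHS[low]  # swap in a look-alike glyph
                else:
                    chars[i], chars[i + 1] = chars[i + 1], chars[i]  # typo
        return "".join(chars)

    # Augment the training set with perturbed copies before fitting a model,
    # so the classifier sees obfuscated text during training.
    train = [("The Receiving Party shall not disclose the Information.", "nda")]
    train += [(perturb(text), label) for text, label in train]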
In a way, the way contracts and legal documents are written now is already an antagonistic optimization against humans' natural language ability. A typical contract is pretty difficult for a human to parse without experience. Imagine how much more tricky and complex legalese can get if lawyers start battling machines!
Yeah, as this kind of software progresses, fewer people will be required to do one of the main jobs that lawyers do well - reading through tons of legal documents.
Additionally, even mediocre attorneys and paralegals will be able to provide a good level of service by leveraging the software.
What will happen is that the industry will need to focus on higher-level tasks to find productive work.
Sorry to sound cynical, but unfortunately for the rest of us, idle lawyers usually do a great job of creating new work for other lawyers out of thin air.
Like I've always said: when a medical doctor runs out of patients, he can't just go around breaking people's legs to make more work. Lawyers, however, can cause a great deal of destruction when they want something to do. That destruction creates work for yet more lawyers. The ultimate display of that ability is when they go into politics. :)
The AI's effect is that some tasks get automated and require much less labor but in many cases a real human lawyer is still required to communicate with the client.
* Does that mean some lawyers or law firms which make good use of the technology might offer lower prices and gain market share?
* Is there latent demand that may be induced into the legal market once the costs are cheaper?
* Can lawyers retrain to specialize in other areas of law, and thus increase supply and lower costs for less automatable areas over time?
I know little about the legal market. So I'm curious if the above analysis makes sense for it.
Premise: I study in Austria, so this is the Austrian legal market.
> Does that mean some lawyers or law firms which make good use of the technology might offer lower prices and gain market share?
In law, lower prices do not necessarily translate into bigger market share. A lot of clients come from word of mouth or from publicly defended or publicly known cases. When firms make good use of such technology, they will most definitely gain market share by being able to accept more clients and increase "output", so to speak.
> Is there latent demand that may be induced into the legal market once the costs are cheaper?
That I unfortunately cannot answer, but interesting question!
> Can lawyers retrain to specialize in other areas of law, and thus increase supply and lower costs for less automatable areas over time?
In Austria, that is common practice for law firms. You generally offer a broad service, usually divided only into criminal law, tort/contract law, and maybe public law. But within those, you specialise (e.g. NDAs etc.). Lawyers tend not to retrain; rather, firms hire someone who is trained in the specified subject. So maybe yes, lawyers trained in "AI-hostile" areas will tend to be more sought after. Whether that is in criminal law or somewhere else, I cannot say.
As a lawyer, I'm extremely skeptical about this contract-reading software displacing any lawyers any time soon, as in the next 10 years. This isn't because I question the abilities of the tech (though I do); rather, it's for the simple reason that I see so much more low-hanging fruit for automation in the industry than reading contracts. This situation hasn't changed much in my 6+ years, so I guess lawyers are able to charge for a certain amount of inefficiency. That said, I believe there was a huge wave of eDiscovery adoption and subsequent eDiscovery-automation layoffs in the 2000s, so the jobs outlook is probably not positive; more like neutral at best.
> The AI's effect is that some tasks get automated and require much less labor but in many cases a real human lawyer is still required to communicate with the client.
For every one lawyer in front of a client or jury, there are dozens if not hundreds of lawyers doing grunt work. The vast majority of legal work is reading documents. That's what most lawyers do, and it pays well. If that goes, it's going to put significant pressure on the profession and on wages.
It's going to be great for the top lawyers at top firms, as they can charge more, take on more clients, and see their overhead drop significantly.
I suspect that learning even the basic programming principles (e.g. if/then/else, loops, OR/XOR) may be more helpful to a lawyer than understanding "AI tools".
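For example (a toy, entirely hypothetical clause): a contract condition is basically an if/then/else, and writing it out that way exposes ambiguities that prose hides:

    def termination_effective(days_notice, in_writing, material_breach):
        # Hypothetical clause: termination requires 30 days' written notice,
        # unless there has been a material breach.
        if material_breach:
            return True
        return in_writing and days_notice >= 30

    print(termination_effective(45, True, False))   # True
    print(termination_effective(10, True, False))   # False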
"The AI also completed the task in 26 minutes, while the human lawyers took 92 minutes on average", should be 26 _seconds_, according to the infographic on the source page: https://www.lawgeex.com/AIvsLawyer/
The AI was clearly faster, which is no surprise. But since the "correct" answers for the test were assigned by one group of humans, and the correct answers in real practice are also determined by humans (a different group, compared to either those running the test or those taking it), I'm not sure the result that the AI was more correct is in any sense meaningful.
That's exactly how every human-vs-AI evaluation is measured. For example, if you have an AI detect whether an image contains a cat, your first step is to have a person manually check every image. Of course it is possible for the test/evaluation labels to be incorrect. Nothing about this case is special, at least nothing you have pointed out.
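In sketch form (Python, with made-up documents and labels), the protocol is simply: build a human-annotated answer key, then score every contestant, human or machine, against it:

    answer_key = {"doc1": "issue", "doc2": "ok", "doc3": "issue"}  # human-made
    ai         = {"doc1": "issue", "doc2": "ok", "doc3": "ok"}
    lawyer     = {"doc1": "issue", "doc2": "issue", "doc3": "issue"}

    def accuracy(answers, key):
        # Fraction of documents where the answer matches the annotated key.
        return sum(answers[d] == key[d] for d in key) / len(key)

    print(f"AI:     {accuracy(ai, answer_key):.0%}")      # 67%
    print(f"Lawyer: {accuracy(lawyer, answer_key):.0%}")  # 67%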
> That's exactly how every human-vs-AI evaluation is measured.
No, it's not. For instance, there are comparisons on games with fixed, definite rules.
And even if it was, the fact that a methodological weakness is universal in a field doesn't make it not a weakness, it just makes the field problematic as a whole.
> For example, if you have an AI detect whether an image contains a cat, your first step is to have a person manually check every image.
Sure, but the thing is, if you are comparing humans to AI at finding cats in pictures, you are usually testing against lay humans with no special expertise, so the assumption on which the usefulness of the experiment rests is: “an academic expert evaluating pictures for the presence of cats approximates perfect accuracy near enough that it is unlikely that the score of either lay humans or AI against that expert significantly misstates their accuracy.”
With the legal case, the assumption is something like: “an academic expert evaluating the legal effect that a court would find in a set of contracts approximates perfect accuracy near enough that it is unlikely that the score of either experienced lawyers practicing in the field or AI against that panel significantly misstates their accuracy.”
The former is, of course, not certain, but I think most people would accept it to be more likely than not to be true.
The latter is less believable (unless, for example, the expert used as the oracle is actually the Supreme Court of the jurisdiction whose law is to be applied in evaluating the contracts.)
You would assume that the test/answer creation involved a lot more eyes and time to double-check everything. Hell, they might even have used an AI to make sure that they didn't miss anything.
We are evaluating similar software for use during due diligence review in investment and M&A transactions at my firm. We have found that the software is good for identifying when contracts have certain troublesome clauses such as restrictions on assignments in an asset sale. These tools certainly save time and make document review faster and more accurate.
The biggest issue we have found is that this type of review does not identify when contracts are missing key terms. For that, we still need someone with experience in the relevant field thinking about the business context of the agreement and the potential risks.
TLDR: AI replaces lowest-cost portion of legal practice, AKA "document review."
For those of you familiar with the legal field, document review involves reading and summarizing documents. This task is left to new graduates because they don't have the skills or experience to actually do anything useful.
Looking at LawGeex's website, it appears that all the AI does is heuristically match common terms, phrases, and cases to guess which legal issues are raised in the document under review. It certainly makes the process of document review more efficient, but it's not revolutionary.
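If that reading is right, the core of it could be sketched in a few lines of Python. The patterns below are invented, since LawGeex hasn't published its method:

    import re

    ISSUE_PATTERNS = {                       # hypothetical issue heuristics
        "indemnification": r"\bindemnif(?:y|ies|ication)\b",
        "non_compete":     r"\bshall not\b.{0,40}\bcompete\b",
        "auto_renewal":    r"\bautomatically renews?\b",
    }

    def spot_issues(contract_text):
        # Return every issue whose pattern appears in the contract.
        text = contract_text.lower()
        return [issue for issue, pattern in ISSUE_PATTERNS.items()
                if re.search(pattern, text)]

    print(spot_issues("Each party shall indemnify the other against claims."))
    # ['indemnification']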
I wish such a technology could be in the hands of ordinary people, so that one wouldn't need an expensive lawyer to spot a contract that is unfair to them.
I was in an accelerator with a company called beagle.ai that did exactly this (among other things). Their site appears to be down this morning, I don’t know if that is a temporary or permanent state.
Seriously, over time I've come to the conclusion that a lot of the most egregious things put in contracts are around open-ended liabilities and indemnifications. It seems like it would be straightforward to create a tool that simply scanned for which party has liabilities, and what limits there are on those liabilities. It is not everything, but it would be a good first pass.
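A first pass really could be that simple. A minimal Python sketch, with made-up sample clauses and patterns:

    import re

    clauses = [
        "Vendor's aggregate liability shall not exceed $50,000.",
        "Customer shall indemnify Vendor against all claims without limit.",
    ]

    for clause in clauses:
        # Which party carries a liability or indemnity in this clause?
        hit = re.search(r"(Vendor|Customer).{0,60}?(liab\w+|indemnif\w+)",
                        clause, re.IGNORECASE)
        if hit:
            # Is the exposure capped anywhere in the same clause?
            cap = re.search(r"not exceed \$[\d,]+", clause, re.IGNORECASE)
            print(f"{hit.group(1)}: {hit.group(2)}"
                  f" ({cap.group(0) if cap else 'no cap found'})")
    # Vendor: liability (not exceed $50,000)
    # Customer: indemnify (no cap found)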
Sure. grep could take the file. I knew that when I wrote the above line. We could have a nice debate about readability and maintainability, compositional design over compact expression, etc. But that's kind of overkill for a joke, eh?
This will likely make lawyers much cheaper to hire since it will only require maybe an hour of their time, of which 95% will be spent talking to you and the other 5% spent running an AI app on the contract.
I have to be honest, I'm a bit suspicious considering the study was conducted by the company which developed the AI and that the academic auditors are all law professors and not AI professors.
I'm not surprised at all. I just joined a company that has a similar product, and everyone we demo it for is amazed at how accurate and quick it is. AI and law is a huge untapped market that's just starting to get explored.
Consider: legal language is frequently very similar; it's where coding adopted the phrase "boilerplate" from. Is it that surprising that AI would be very, very applicable to this kind of problem?
As long as they did a reasonable job of making sure the company running the AI didn't have the questions long enough to just go cheat, I think the law professors should have much more to say about the output than an AI professor would.
The AI could really help in due diligence, which is usually left to the youngest associates/trainees. By the nature of the work, you have to review and summarize thousands of documents. And as this process takes a long time, you cannot charge the standard rates to the client. Instead, you work overtime without charging anything, thus receiving no extra compensation.
I remember doing due diligence work during one of my summer internships, where my only job was to scan for any non-Spanish documents. If the document was in Spanish and contained no suspicious names, I closed it. If not, I printed it out. I had to do this for hours and hours into the night.
> So does this spell the end of humanity? Not at all. On the contrary, the use of AI can actually help lawyers expedite their work, and free them up to focus on tasks that still require a human brain.
What that does not point out, though, is that it will still cost jobs, just not "all" the jobs. It will soon take fewer people to do the same work, and unless demand grows, or demand currently vastly outpaces supply, companies will need to downsize.
There is a best case. It could be that no one loses their job, and doctors, lawyers, and engineers can start working 40-hour weeks and have a work-life balance.
The best case is that lawyers feel the pinch, because that’s maybe the only way we’re ever going to see real legislation to protect people in this area. As long as automation is seen to largely hurt people on the bottom of the totem pole, there will be hand-waving galore. The moment it starts to bite people who see themselves as indispensable and important, especially the people who are most of the lobbyists and politicians, that will end.
> The best case is that lawyers feel the pinch, because that’s maybe the only way we’re ever going to see real legislation to protect people in this area.
Probably not; while lots of legislators (and lobbyists) were trained as lawyers, protecting the jobs of people who actually practice law shows no evidence of being a priority for that class; in fact, a large majority of them spend much of their time railing against practicing lawyers as a class.
Unless the political and electoral calculus forces them to adopt protections (to which the threat to practicing lawyers is pretty much irrelevant), don't expect any action.
Discovery has been automated for many years (and FWIW this was how HP got interested in Autonomy, which turned into a scam). This is moving up the stack a bit, with some "AI" clickbait added.
With the increasing use of AI in law, there will be an incentive to make legal texts more tractable for AI. Perhaps this will lead to changes in legal language, eliminating remaining ambiguities.
A "new study" AKA a marketing gimmick dutifully reported as if it were the truth from God's lips by press release dumping ground websites. One important point: are the bots liable for malpractice, or no? That's one of the things that you pay for when you pay for a lawyer: you pay them to put their necks on the line, also.
What about false positives? False negatives? What about when different legal language is used? How does punctuation affect this? What about differing laws per jurisdiction?
The article is great for an instant hit of wonder about how great AIs are, but an attorney still needs to review the output of the AI.
This is one of the reasons I think Agrello will be a successful project, as it aims to have an AI engine process legal contracts into smart contracts. Granted, it currently still requires a lawyer from a particular region to create an interpretive template.
AIs are terrible at open-ended questions, and interpretation of law is usually open-ended. In this experiment there was a pretty well-defined structure and set of evaluation criteria, which sets the AI up to be successful, whereas in real life there is rarely such definition.
I'm gonna go full populist here and say that instead of making AI really smart so that they can read a contract, we should make AI as smart as an average human and discard the contracts it can't understand.
I'd be happy if as a first pass the AIs went through and cleaned up our mess of legal frameworks so they made logical sense to a being of any intelligence.
> This technology will never fully replace a human lawyer, but it can certainly speed up their work by highlighting the most important sections of a story.
That's optimistic thinking, but AI can and probably will replace lawyers and doctors at some point, or at least a large majority of them. All legal consulting could easily be replaced; courtroom lawyering will probably be the last to go, because a lot of that is emotional work to sway jurors one way or another.
It's already happening, it's just not visible because the lawyers being replaced are the young, entry-level people that get stuck doing research early in their career.
I think many people are expecting this to democratize the legal system. My fear is that it will consolidate power into the hands of a few top firms and raise the barrier to entry dramatically.
If your job is taking data and transforming it into different data, or taking data and interpreting it to produce new data, then a computer is going to be better at it than you very soon. If the developers of that software can persuade your boss (and any relevant government authorities) that this is the case, then you'll need a new job soon. This is true in all professional services.
If your job includes any creative work that isn't simply derived from previous work, then you're safe for a while. But probably not forever...
There is a fundamental role that lawyers play that is an intangible: to pursue justice for the client.
Much of why law is expensive and has been slow to automate is because of intentional and unintentional barriers.
Unintentional because it has developed in evolutionary form over hundreds of years, and during that time has imperfectly used human language as its protocol. Contracts and statutes are difficult to make perfect using the English language. Lawyers thus argue over what the parties intended and understood when they fixed their promises into words.
Intentional because of, for example, regulatory capture. Law is an industry that has taken actions in the past to make itself necessary.
But even if these barriers are fixed and much of law is automated, there remains a notion of what is fair and right, or ought to be fair and right. This is a human concept.
Take for example blockchain and its contracts, which remove some of these barriers. Currency is transferred from one address to another because that is what the code specified; the code is law, and the transaction is immutable. But what if the other side fraudulently misrepresented who they were? Do you not want a human advocate and arbitrator there to determine what is right in this situation?