
GPT can help anyone create legal documents quickly and easily, whether they're a small child or a plumber. Lawyers in general try to stifle competition to keep their salaries sky high. And what's the profession of most lawmakers, most of the time?


Would a small child or plumber know when GPT generates something that’s legally incorrect?


They wouldn't, but there is a way to figure some of it out. A person with no experience in the subject could hire someone else who has some knowledge and knows their way around the law, for a lot less than a lawyer charges.

Then this person generates a legal document that is already 80 to 90 percent there. The next step is to correct the remaining 10 to 20 percent, and you are good to go. Instead of paying $1,000 to a lawyer in legal fees, you paid $50 or $100, and the quality is the same, if not better. Specialized tools for this purpose exist as well, "AI Lawyer" or something like that.


Same risk when using an actual lawyer.


No, it clearly isn't.

You can't sue OpenAI for giving you really bad advice because they're not a law firm.

Also in the UK, for example, lawyers can't hide behind a limited liability company, they have to have skin in the game.

i.e. you can sue them personally for negligent advice, and in theory they could lose everything they own (barring tools of the trade and bedding).

You can't do that to anyone who works at OpenAI Incorporated just because their language predictor convinced you of something that wasn't true.

Even if they were based in the country, you could only sue the corporation.


You have recourse against a lawyer that messes up a contract, lies, or misrepresents you.

You also have their reputation to guide you, and their professional organisation theoretically enforces minimum standards.

You have no such recourse against ChatGPT.


But not the same need.

With a real lawyer, what they tell me will almost always be legally correct, so even if I can't recognize when something is not legally correct, that will almost never hurt me.

From what I've seen of people's posts of ChatGPT output, it is much more likely to provide incorrect legal advice, so using it without a way to recognize incorrect legal advice is much more likely to hurt me.


So when GPT fails we can disbar it permanently?


I can’t let you disbar me, Dave.



