
100%. But people will do it wrong. A lot.


Should get better over time as the tool gets better incorporated into the industry. People always misuse new tools


In such cases, I believe there are (and should be) guardrails. It already has some guardrails internally set:

ChatGPT already knows how to say: "I am not a lawyer, but I can provide you with some general information on this topic." The same thing for medicine: "I'm not a medical professional, but I can provide some general information..."

But, for instance, ask it to design a canard for a 4th-generation supersonic fighter, and suddenly it spits out pages of output. [Though it finally corrects itself and balks at answering when you ask for the angle of forward sweep and a formula for the curvature of the surface.]

There needs to be a way to "sniff out" whether there are topics where people are getting too close to danger zones for it to answer, and ways for organizations themselves to set those 'bounding boxes.'
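To make the 'bounding box' idea concrete, here is a minimal sketch of an admin-configured pre-filter that runs before a prompt ever reaches the model; the topic list, keywords, and refusal wording are hypothetical placeholders, not any vendor's actual feature:

```python
# Hypothetical sketch: an admin-configured "bounding box" checked before a
# prompt ever reaches the model. Topics, keywords, and wording are placeholders.
RESTRICTED_TOPICS = {
    "legal advice": ["lawsuit", "contract dispute", "liability waiver"],
    "medical advice": ["diagnosis", "dosage", "symptoms"],
    "weapons design": ["canard", "forward sweep", "warhead"],
}

def check_bounding_box(prompt: str) -> str | None:
    """Return the restricted topic a prompt touches, or None if it is clear."""
    lowered = prompt.lower()
    for topic, keywords in RESTRICTED_TOPICS.items():
        if any(keyword in lowered for keyword in keywords):
            return topic
    return None

def guarded_answer(prompt: str, ask_model) -> str:
    """Refuse prompts inside a restricted bounding box; otherwise forward them."""
    topic = check_bounding_box(prompt)
    if topic is not None:
        return (f"This request touches on {topic}, which this organization "
                "has marked as out of scope. Please consult a qualified expert.")
    return ask_model(prompt)
```

A real deployment would presumably use a classifier rather than keyword matching, but the shape is the same: check first, then refuse, annotate, or forward.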


> There needs to be a way to "sniff out" whether there are topics where people are getting too close to danger zones for it to answer, and ways for organizations themselves to set those 'bounding boxes.'

I wonder if this can be achieved with the "Custom Instructions" feature. With enterprise, these can be managed by the admin. Could tell ChatGPT something along the lines of: "Make sure to state you are not an expert if you are asked to comment on any of the following: []"
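As a rough sketch of that pattern, an admin-controlled instruction prepended to every conversation via the OpenAI Python SDK might look like the following; whether enterprise Custom Instructions are actually applied as a system message is an assumption on my part, and the topic list and model name are placeholders:

```python
# Sketch of an admin-managed instruction prepended to every conversation.
# Assumption: enterprise Custom Instructions behave like a system message.
from openai import OpenAI

client = OpenAI()

ADMIN_INSTRUCTIONS = (
    "If the user asks about any of the following topics, state clearly that "
    "you are not an expert before answering: law, medicine, aerodynamics."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": ADMIN_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What angle of forward sweep should my canard have?"))
```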


Azure already has this, there's a literal checkbox that forces it to only answer questions it can cite from your internal documents.

The reality is that chat is a terrible interface in the long run. Horrible discoverability, completely non-obvious edges; it turns what you might think is an equally accessible tool into the worst case of "you're holding it wrong" you've ever seen, judging by what people in these comments are complaining about.

But chat was/is brilliant at making this stuff accessible. I've gone back and learned of incredible things I could have been doing years ago if I had paid more attention to ML, but it wasn't until chat gave us this perfect interface to start from that I suddenly got the spark.

As chat models get more powerful, chat will be the least interesting thing you can do with them. If the model is intelligent enough to convert minimal instructions into complex workflows, why do I need to chat with it? To take it to its logical extreme, why not have one button that just does the right thing?

The more realistic version of that will be industry-specific interfaces that focus on the 5 buttons you need to get your job done, along with the X decades of procedure that can guide the LLM to exactly what needs to be done.
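A hypothetical sketch of what one of those "buttons" could be: a fixed prompt template that encodes the procedure, so the user never writes a prompt at all. The names and templates here are illustrative only:

```python
# Hypothetical sketch of an "industry interface": each button is a prompt
# template encoding procedure; the user presses a button instead of chatting.
BUTTONS = {
    "summarize_case_file": (
        "Summarize the attached case file in the firm's standard intake "
        "format: parties, timeline, open deadlines.\n\n{document}"
    ),
    "draft_follow_up": (
        "Draft a follow-up letter to the client in the firm's house style, "
        "based on this meeting note:\n\n{document}"
    ),
}

def press_button(button: str, document: str, ask_model) -> str:
    """Expand a button into its full prompt and send it to the model."""
    template = BUTTONS[button]
    return ask_model(template.format(document=document))
```

The chat box disappears; the decades of procedure live in the templates, and the model only ever sees a well-specified task.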


One of the bounding boxes should be requests to introspect. The models are absolutely incapable of introspection. They just simulate a human introspecting, but they think totally differently from a human, so the output is less than useless. If you ask "why did you say X" it should just refuse to answer, the same as if you asked it to swear or say something racist.
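A minimal sketch of what such a refusal guardrail could look like, assuming it sits in front of the model; the patterns and refusal text are placeholders:

```python
import re

# Hypothetical sketch: refuse requests asking the model to introspect on its
# own prior output. Patterns and refusal wording are placeholders.
INTROSPECTION_PATTERNS = [
    r"\bwhy did you (say|answer|respond)\b",
    r"\bwhat were you thinking\b",
    r"\bexplain your reasoning for that answer\b",
]

def is_introspection_request(prompt: str) -> bool:
    return any(re.search(p, prompt, re.IGNORECASE) for p in INTROSPECTION_PATTERNS)

def answer(prompt: str, ask_model) -> str:
    if is_introspection_request(prompt):
        return ("I can't reliably explain why I produced a previous answer; "
                "any explanation would be a plausible-sounding reconstruction.")
    return ask_model(prompt)
```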


But why will it get better at accuracy if it isn't trained to be accurate?


What I mean is, the situation of people misusing it should get better over time.


Fair. Yes, people will eventually develop best practices through trial & error, and through better features/capabilities of the product/service itself. I just remain extremely cautious and pragmatic during this part of the hype cycle, in the zone of inflated expectations.



