I think this is true. It can supercharge some bad takes.
But I've had the opposite experience. The average person is never going to read a scientific study, nor invest the time to find out the real details of any topic they're opinionated about, beyond typing a YouTube search and finding a video that is:
- Entertaining
- Made by someone who shares their biases
- Presenting the information in a short, consumable way that doesn't require much investment
In comparison to this dynamic, LLMs are wonderful. They can reference scientific data, and I've noticed that they do push back on bad takes (very gently) and steer people towards truth.
It's not that I think LLMs are perfect. They are not. But they are infinitely better than the average human at discovering truth.
You realize that they will gladly hallucinate science...
You should check the papers it claims to reference and see if the claims it makes are actually backed up.
In my experience, it can completely mischaracterize scientific literature. For example, I asked it whether a codebase was a faithful implementation of an algorithm described in a CS paper, and it said "no" and then proceeded to list a dozen small deviations. Every single one was incorrect. The codebase was in fact a completely faithful implementation.