They usually post it on arXiv before they submit the paper for peer review. Industry scientists might just post it on arXiv without peer review, because they might not care about putting it on their resume.
Lay people trust the peer review process way too much anyway. For a typical conference, it's usually just a grad student who goes through the paper in an hour or two and makes some comments. I've done peer review on a few papers for a prestigious conference in an area where I'm not even a subject matter expert, just close enough in expertise to kind of make sense of the paper in an hour or two.
> Lay people trust the peer review process way too much anyway. For a typical conference, it's usually just a grad student who goes through the paper in an hour or two and makes some comments.
Yep. I'm that grad student right now. From observing people across universities and countries do peer review, it seems the main thing that gets checked is flagrant inconsistencies in any data presented, since those have recently brought the field a lot of bad reputation. Otherwise it's exactly as you described.
And reputation plays a huge, huge role. One person our lab works with (who is well known in the field) was telling a story about a time they sent a paper to a conference or a journal as a draft (for review, I suppose) and pointed out that there were still various mistakes that needed to be ironed out. Despite this, the venue simply took the initial draft and published it. It also got the best paper award.
Another interesting thing I noticed was researchers "forgiving" each other's B.S. It might surprise some, but most published research papers are just there to increase the count of published papers. Even the most prolific researchers send out a few papers of this type every year to keep their metrics up. To another researcher in the same field it's usually patently obvious that it's a "filler" paper, and so long as it doesn't contain anything egregious, it's let through. In AI/ML this is usually restating existing algorithms/theorems in an exotic setting to make them sound novel, or adding a KL divergence term to some loss function in an existing setup. Since h-indices are a thing, these papers typically cite each other within groups of "friends" to help everyone. All in all, it is very hard today to separate the signal from the noise, especially for an outsider.
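To make the KL-term trope concrete, here's a made-up sketch of what such a "contribution" often boils down to (the function name, the prior, and the beta weight are all hypothetical, just for illustration):

```python
# Hypothetical "filler paper" recipe: take an existing supervised loss
# and bolt on a KL divergence regularizer against some prior.
import torch.nn.functional as F

def loss_with_kl(logits, targets, prior_logits, beta=0.1):
    # The existing setup: plain cross-entropy on the task labels.
    task_loss = F.cross_entropy(logits, targets)
    # The "novel" addition: KL divergence between the model's predictive
    # distribution and a fixed prior distribution, weighted by beta.
    kl = F.kl_div(
        F.log_softmax(logits, dim=-1),
        F.softmax(prior_logits, dim=-1),
        reduction="batchmean",
    )
    return task_loss + beta * kl
```

Swap in a different divergence or a different prior and you have next year's paper.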