The article shows scans of the research reports listing only Carlsen as their author; you could have just linked to one in the first paragraph of the Wikipedia page to support his sole inventorship, right?
Per the scans, the patent isn't for TIFF and the articles misspell his name. I think that the Paul Brainerd (Aldus cofounder) interview with the Computer History Museum, when he names Stephen, will be sufficient for Wiki.
Focus and determination can grant you the power of the queen on the chessboard.
But when you become blind to what happens around you, you become the pawn in someone else’s plan. A messenger is an authority’s favorite tool.
Someone would like to starve people and you are a part of their plan. If you feel the tug of appeal, it is because you understand something isn’t right here. If you don’t investigate, your mind is not your own.
Legal will be the face of it, but engineers often handle the actual underlying request.
Across a couple of large public companies, I've had to react to a court ruling and stop an account's actions, work with the CA FTB on document requests, provide account activity as evidence in a case, things like that.
I thought it would be limited when the first truly awful thing inspired by an LLM happened, but we’ve already seen quite a bit of that… I am not sure what it will take.
I choose to believe that too. I think more people are interested than we’d initially believe. Money restrains many of our true wants.
Sidebar — I do sympathize with the problem being thrust upon them, but it is now theirs to either solve or refuse.
A chatbot like this is all you've said and also dangerous, because it plays a middle ground: presenting as though a machine can evaluate your personal situation and reason about it, when in actuality you're getting third-party therapy about someone else's situation from /r/relationshipadvice.
We are not ourselves when we have fallen down. It is difficult to parse what is reasonable advice and what is not. I think it can help most people, but it can equally lead to disaster… It is difficult to weigh.
It's worse than parroting advice that's not applicable. It tells you what you told it to tell you. It's very easy to get it to reinforce your negative feelings. That's how the psychosis stuff happens, it amplifies what you put into it.
This makes no sense at all to me. You can choose to gather evidence and evaluate that evidence, you can choose to think about it, and based on that process a belief will follow quite naturally. If you then choose to believe something different, it's just self-deception.
LLMs lack the human nuance that a good Wikipedia article requires. Weighing quality sources and digesting them in the most useful way that a human would want and expect is very difficult for both humans and machines, and it is why Wikipedia as a whole is such a treasure: because a community of editors takes the time to tweak the articles and aim for perfection.
There are guidelines across all Wikipedia articles that make a good experience for the reader. We can’t even get the world’s greatest LLMs to follow a set of rules in a small conversation.
In my opinion, simply using a dataset of high-quality books and highly rated academic journals is enough to surpass current Wikipedia quality.
In my experience, when using LLMs as a replacement for Wikipedia (learning about history), they are often of higher quality on niche topics and far less biased in politically contentious areas.
For me Wikipedia is only good for introductions and exploration. You don't have time to read a dense tome but also don't have enough experience in reading research papers in that area? Wikipedia it is then.
Wikipedia is the tabloid equivalent for scientific topics.
LLMs tend to be much more useful for niche topics, because they've most likely been trained directly on the source itself.