The person you’re responding to dodged (with a very fair argument), but I’ll bite the bullet here: not really, and I’m curious to hear what you think the damage could be.
I mean SOME criticism always has a basis, but that seemed to be a large part of the reason they published this technical demo: to get feedback and spark scientific discussion on the state of the art. They did publish it with prominent warnings to not trust the output as necessarily true, after all.
If the worry isn’t with primary users but with people using it to intentionally generate propaganda/falsehoods for others to consume… idk it seems like we’ve long passed that point with GPT-3.
So their goal was to gather feedback (read: criticism), but they took it down after 3 days? Barring some sort of coercion (and idk how you’d coerce Meta), it seems like they weren’t all that interested in feedback and discussion.
The fact that people responded negatively to a bad model (where “bad” can vary from unethical to dangerous to useless depending on your vantage) has little to do with the anti-AI cottage industry.
Portraying criticisms as necessarily stemming from bad-faith actors is exactly the opposite of fostering feedback and improvement.
Not sure how you made the leap from "There is cottage anti-AI industry from people who couldn't do real AI and hoping the next best thing to do is label it as 'racist'" to "You can't see any basis to criticize a "scientific tool" released into the wild that spits out convincing falsehoods?"
Well I'm pretty sure we're on a thread about an AI tool that was released as a scientific tool and spits out convincing falsehoods. So maybe the cottage industry comment was truly just a non sequitur, or maybe it was in reference to the topic of this entire HN post?
I'll give you another option: There are valid concerns with the type of output that large-model AI generates, and there are experts working in the field who are trying to improve the state of the art by researching and implementing solutions to those valid concerns. There is also a subset of academics whose one-trick pony is just "veto, veto, veto": they provide no valid solutions, or worse, refuse a good-faith reading of "yes, this may not be perfect yet, but that doesn't mean we have to shut the whole thing down."
I'm not as familiar with how this culture works in the AI field, but I absolutely have seen it in the world of open source: people with little to no programming skill who do nothing but grep repos for instances of "whitelist" and "blacklist", pretend they are doing God's greatest work by changing these terms, and then stir up the typical faux-outrage storms on Twitter when their PRs are met with eyerolls.
Like the GP I was replying to, you sound like you’re mostly looking to air grievances here rather than discuss the topic at hand. Thank you for your work on OSS in any case; I imagine that’s a very frustrating experience.