interesting. we had some reports like this of 'stuck errors' when filesystem caching was still very experimental, or back before we achieved 'stable' status. What version was this on?
We have invested a lot in webpack loaders, which unblocked many ecosystem plugins. We don't particularly plan on creating a Rust plugin API; when we do provide one, it will be JS-oriented. (Some people are interested in a wasm plugin layer, but we would always support JS first.)
I'm not sure I understand your second comment. What got positioned as 'something good'?
It is true that our plugin story is not fully fleshed out. We have good support for webpack loaders and have observed that this solves many (though not all) use cases. This, of course, is one of the reasons we are still supporting webpack.
Well, I was thinking of it as a particularly good example of how often exciting "new" solutions to general multithreading problems turn out to be instances of a classic, well-studied algorithm.
Which is why the maxim "don't hold locks while calling untrusted code" exists. Holding a lock while invoking a callback is extremely dangerous. Some situations demand doing this (but not that often), in which case you need to document it like crazy.
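To make the failure mode concrete, here is a minimal TypeScript sketch (not tied to any particular library; the Mutex, cache, and callback names are all made up for illustration) of a callback being invoked while an async lock is held. If the untrusted callback re-enters an API that needs the same lock, everything deadlocks.

```typescript
// A tiny promise-based mutex: lock() resolves to a release function.
class Mutex {
  private tail: Promise<void> = Promise.resolve();

  async lock(): Promise<() => void> {
    let release!: () => void;
    const next = new Promise<void>((resolve) => { release = resolve; });
    const prev = this.tail;
    this.tail = next;
    await prev;        // wait for whoever currently holds the lock
    return release;    // caller releases by invoking this
  }
}

const cacheLock = new Mutex();
const cache = new Map<string, string>();

// BAD: the user-supplied callback runs while we still hold cacheLock.
async function updateWithCallback(
  key: string,
  untrusted: (old: string | undefined) => Promise<string>
): Promise<void> {
  const release = await cacheLock.lock();
  try {
    const next = await untrusted(cache.get(key)); // untrusted code under the lock
    cache.set(key, next);
  } finally {
    release();
  }
}

// Innocent-looking helper that also takes the lock.
async function read(key: string): Promise<string | undefined> {
  const release = await cacheLock.lock();
  try {
    return cache.get(key);
  } finally {
    release();
  }
}

// Deadlock: the callback re-enters the lock that updateWithCallback still holds,
// so read() waits on updateWithCallback, which waits on the callback, forever.
void updateWithCallback("a", async () => {
  const other = await read("b");
  return `derived from ${other}`;
});
```

The fix is the usual one: copy whatever state the callback needs, release the lock, and only then invoke the callback.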
This is most likely due to a script screwing up. A lot of these companies retain firms that specialize in filing these requests (see www.google.com/transparencyreport/removals/copyright/faq/ for more details).
My limited understanding is that these companies just use Google search APIs to try to find search results matching keywords, then file requests for every matching URL. This is how obviously wrong requests show up.
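As a rough illustration of that workflow, here is a hypothetical TypeScript sketch; searchWeb() and fileTakedown() are made-up stand-ins, not real Google or takedown-filing APIs. The point is that nothing in the loop checks whether a matched URL actually infringes anything, which is how obviously wrong requests get filed.

```typescript
type SearchResult = { url: string };

// Dummy stand-ins so the sketch type-checks and runs; a real vendor would
// call an actual search API and submit a real takedown form here.
async function searchWeb(query: string): Promise<SearchResult[]> {
  return [{ url: `https://example.com/results-for-${encodeURIComponent(query)}` }];
}

async function fileTakedown(url: string, work: string): Promise<void> {
  console.log(`filing takedown for ${url} (work: ${work})`);
}

// File a request for every search hit that merely matches the title keyword.
async function autoFile(protectedWorks: string[]): Promise<void> {
  for (const work of protectedWorks) {
    const hits = await searchWeb(work);   // keyword match only
    for (const { url } of hits) {
      await fileTakedown(url, work);      // no human review of the page
    }
  }
}

void autoFile(["Some Protected Title"]);
```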
There are also cases where requests appear to be malicious, but there are really no consequences: you (IIRC) have to prove bad faith, which is next to impossible, and since everything is adjudicated via third parties, there is really no incentive to pursue it.
Proving bad faith or that someone "knowingly materially misrepresents... that material or activity is infringing" when it's completely automated sounds very difficult.
Then why is it allowed to be automated? Why don't we require a few sentences of explanation for each submitted URL, explaining how it infringes copyright?
That page shows all the DMCA requests to Google that target github.com. Wicked Pictures shows up in a number of requests, but they are by no means the only copyright holder issuing requests.
You can click through to the request pages and get links to Chilling Effects, and it will also tell you which requested URLs were _not_ taken down. Chilling Effects just reports on the requests, not the actions.
If companies didn't remove the targets of takedown requests immediately and without scrutiny, then user content on the internet would not be able to exist.