1) What really bothered me personally about GPT-2 is that they made it look scientific by putting out a paper formatted like other scientific papers -- but then they undermined a key aspect of science: reproducibility/verifiability.
I struggle to believe 'science' that cannot be verified/replicated.
2) In addition to this, they stand on the shoulders of giants and profit from a long tradition of researchers and even companies making their data and tools available. But "open"AI chose to go down a different path.
3) Which makes me wonder what they are trying to add to the discussion. The discussion about the dangers of AI is already fully underway. By not releasing background info, OpenAI is also not contributing to how dangerous AI should be approached. OpenAI might or might not have a model that is close to some worrisome threshold, but we don't know for sure. So in my view, what OpenAI primarily brought to the discussion are some vague fears of technological progress -- which doesn't help anyone.
Re 1: GPT-2 is no different from most work by DeepMind. DeepMind, in general, does not release code, data, or models. Yet DeepMind does not seem to get complaints about reproducibility, that supposedly "key aspect of science".