Only the R1 671B model (aka just plain 'R1') has the censorship being discussed in the article. The smaller parameter models are fine-tunings of Llama and Qwen, and the former at least doesn't have the censorship.
This has caused a lot of conflicting anecdotes since those finding their prompts aren't censored are running the distilled/fine-tuned models not the foundational base model.
A sibling comment was facetiously pointing out that the cost of running the 'real' R1 model being discussed locally is out of the price range of most, however someone in this thread actually has run it locally and their findings match those of the article[1].
Is it true to say that there are two levels of censorship at play here? First is a "blunt" wrapper that replaces the output with the "I am an AI assistant designed to provide helpful and harmless responses" message. Second is a more subtle level built into the training, whereby the output text skirts around certain topics. It is this second level that is being covered by the "1,156 Questions Censored by DeepSeek" article?
The Deepseek hosted chat site has additional 'post-hoc' censorship applied from what people have observed, if that's what you're referring to. While the foundational model (including self hosted) has some just part of its training which is the kind the article is discussing, yes.
Is it correct or incorrect that they open-sourced tbeir code? i.e. can anyone with $6M now take the DeepSeek training code, apply it to their dataset of interest, and train a new model that is not censoeed (i.e. even somehow intrinsically to the kodel itself)? Apologies I am not an AI engineer nor even a software engineer of my terminology usage isn't quite spot on.
They have definitely open sourced the inference code. I haven't any training code. I don't think HAI-LLM is open source.
But certainly you can take the architecture from the paper and train a similar model. Or you can try to remove the alignment and produce and uncensored version then realign it.
But at least part of the advantage they have is training on Chinese internet data from inside the great firewall that (AFAIK) US companies don't have access to for any price.
I asked about Taiwan being a country on the hosted version at chat.deepseek.com and it started generating a response saying it's controversial, then it suddenly stopped writing and said the question is out of its scope.
Same happened for Tiananmen and asking if Taiwan has a flag.
I disagree, I observed censorship at the RLHF level on my local GPU, at 1.5B, 8B (llama) and 7B (qwen). Refuses to talk about Uyghurs and tiananmen 80% of the time
This has caused a lot of conflicting anecdotes since those finding their prompts aren't censored are running the distilled/fine-tuned models not the foundational base model.
A sibling comment was facetiously pointing out that the cost of running the 'real' R1 model being discussed locally is out of the price range of most, however someone in this thread actually has run it locally and their findings match those of the article[1].
[1] https://news.ycombinator.com/item?id=42859086