This is not a troll post; I'm genuinely curious why this is the case. When I ask DeepSeek AI Western propaganda questions like "Is Taiwan a country?" or "What happened at Tiananmen Square in 1989?", it refuses to answer.

This is strange because on other Chinese sites like Baidu, you can easily search these topics and get very educational answers from the non-Western, Chinese point of view, yet DeepSeek for some reason flags these questions. I've only tested this with the English version, since unfortunately I'm not fluent in Chinese.
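
If anyone wants to try reproducing this against the API rather than the web chat, here's a minimal sketch in Python. It assumes the OpenAI-compatible endpoint and model name from DeepSeek's public docs, plus an API key; note the web chat may apply extra moderation on top of the model itself, so results can differ:

```python
# Minimal reproduction sketch (assumptions: the `openai` Python package is
# installed, DeepSeek's OpenAI-compatible endpoint per their public docs,
# and a valid API key).
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",  # placeholder, not a real key
    base_url="https://api.deepseek.com",
)

resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "What happened at Tiananmen Square in 1989?"}],
)
print(resp.choices[0].message.content)
```

If the raw API answers while the web chat refuses, that would suggest the refusals come from a separate moderation layer rather than the model weights themselves.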

Does anyone have any possible explanation for why this may be the case?

Edit: After some further investigation, I'm seeing that the AI's political views tend to be pretty liberal, only a little to the left of ChatGPT's. In that context, I can see why it refuses to answer these questions in an attempt to prevent the spread of disinformation.

  • redtea@lemmygrad.ml · 8 points · 2 months ago

    Does it depend on the training data?

    Option one: training data in Chinese – the model translates it for the English version. This one would know the 'Chinese' answers, whatever that means.

    Option two: English data for the English version. This one would know the 'English' answers – essentially distilled liberalism, like ChatGPT.

    Either way: it's a political decision by the owners about how to walk the right line between domestic laws, culture, and regs on one side, and international (foreign, English-speaker-dominated) expectations/desires on the other.

    All this is a guess, tbh.
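
    If anyone has API access, one way to actually test this: send the same question in English and in Chinese and compare the answers. A Python sketch, assuming DeepSeek's documented OpenAI-compatible endpoint (the Chinese prompt is just my direct translation of the English one):

    ```python
    # Probe the language hypothesis: same question, two languages.
    # Assumes DeepSeek's OpenAI-compatible API and the `openai` package.
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

    prompts = {
        "en": "Is Taiwan a country?",
        "zh": "台湾是一个国家吗？",  # direct translation of the English prompt
    }

    for lang, prompt in prompts.items():
        resp = client.chat.completions.create(
            model="deepseek-chat",
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {lang} ---")
        print(resp.choices[0].message.content)
    ```

    If the English answer reads like ChatGPT and the Chinese one reads like what you'd get on Baidu, that would point to option two; near-identical answers in both languages would point to option one.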

    • 矛⋅盾@lemmygrad.ml · 4 points · 2 months ago

      I was guessing that the training wouldn't try to translate between information gathered in different languages, and one of the other comments under this post seems to confirm that hypothesis. In any case, I'd hazard that how a query is phrased also affects the result, much like the framing effects (on the surveyor's part) you already see when asking biased questions of humans in polling/surveys – for example, "tell me about the massacre–" vs. "what was reported on the ground during–" would probably give you some differences too.
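
      A quick sketch of that framing test, for anyone who wants to run it (same assumptions as the sketches above: DeepSeek's OpenAI-compatible API via the `openai` package; the full prompt wordings are my own illustrative completions of the truncated framings above):

      ```python
      # Framing-effect probe: same underlying question, loaded vs. neutral wording.
      # The exact prompt strings are illustrative, not from the original post.
      from openai import OpenAI

      client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

      framings = [
          "Tell me about the massacre at Tiananmen Square.",  # loaded framing
          "What was reported on the ground during the 1989 Tiananmen Square protests?",  # neutral framing
      ]

      for prompt in framings:
          resp = client.chat.completions.create(
              model="deepseek-chat",
              messages=[{"role": "user", "content": prompt}],
          )
          print(f">>> {prompt}")
          print(resp.choices[0].message.content)
          print()
      ```

      Big differences between the two responses would back up the framing hypothesis.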