Yesterday, a close friend of mine, a talented tech recruiter, messaged me about my recent LinkedIn post discussing DeepSeek, a new AI model from China. She expressed concern over potential censorship, sharing a widely circulated post that demonstrates DeepSeek's evasive response when asked, "Who is the Dalai Lama?" The chat begins generating an answer, then deletes it, stating, "Sorry, that's beyond my current scope. Let's talk about something else." I considered texting back but decided instead to share my thoughts more broadly in this post.
To address this clearly, let's break the question into two parts:
1. Is the online chat service based on DeepSeek censored by its creators, possibly under government pressure?
The unfortunate answer is yes. Certain political topics, deemed sensitive, are automatically filtered out or suppressed. This aligns with the widespread expectation that AI models developed in China might adhere to governmental restrictions, particularly around politically sensitive subjects.
2. Is the DeepSeek model itself censored when deployed locally?
Interestingly, no. While the web-based service is restricted, DeepSeek's open-weight models can also be run directly on a user's own computer. In that setting, the model operates comparably to ChatGPT in speed and efficiency, but without any server-side filter in the loop. If you ask DeepSeek to critique the Chinese government on your own device, you'll find that it can indeed offer frank criticism on topics like authoritarianism and human rights violations.
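For readers who want to try this themselves, here is a minimal sketch of one common route to running DeepSeek locally: the Ollama runtime, which hosts distilled DeepSeek-R1 variants. The specific model tag (`7b`) is an assumption sized for a typical laptop; pick a variant your hardware can handle.

```shell
# Install Ollama (macOS/Linux installer; see ollama.com for Windows)
curl -fsSL https://ollama.com/install.sh | sh

# Download a distilled DeepSeek-R1 variant
# (the 7b tag is an assumption; larger variants need more RAM/VRAM)
ollama pull deepseek-r1:7b

# Chat with the model entirely on your own machine --
# no hosted service, and no external filter deleting answers
ollama run deepseek-r1:7b "Who is the Dalai Lama?"
```

Run locally like this, the model answers the very question that the hosted chat refuses.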
This distinction makes DeepSeek an appealing choice for enterprises, offering competitive advantages in cost, speed, and flexibility. The real issue is not censorship at the model level, but rather how the model is deployed and controlled in public-facing applications.
However, a separate but related question is whether DeepSeek, like other AI models, is biased. The answer is yes, and this is true for all AI models, since they are trained on datasets that inevitably reflect human biases. I recently spoke with an amazing entrepreneur, Irma, who is launching an AI startup, 'Backpack', for kids, which can be especially helpful for children with ADHD. She is also researching biases in AI at Södertörn University, and we had a deep conversation about how modern AI models, including ChatGPT, display significant gender biases. Because AI mirrors society, it also mirrors society's biases—political, cultural, racial, and beyond.
Large language models (LLMs) aren't just for conversation; they have immense potential in coding, logical reasoning, and transforming raw data into structured, meaningful insights. In these areas, DeepSeek is proving to be faster, cheaper, and more efficient. If OpenAI continues down a purely profit-driven path without offering open and accessible models, it risks falling behind. The demand for customizable, locally deployable models is only going to grow.
As we increasingly rely on AI technology, addressing biases and maintaining a balanced approach to its deployment is crucial. This is a complex, ongoing challenge. Misinformation, paranoia, and fearmongering can easily undermine public trust in AI, creating unnecessary barriers to progress.