AI censorship

Right Divider

I asked Gemini if it censors information:

Google Gemini does not censor information in the traditional sense of political or ideological suppression, but it does employ safety filters designed to prevent the generation of harmful or illegal content. These filters are based on a set of internal safety guidelines that apply to all users.

Content Safety Filters

The system is programmed to identify and block responses that fall into several high-risk categories:
  • Hate Speech: Content that promotes violence or incites hatred against individuals or groups based on protected characteristics.
  • Harassment: Material that is intended to intimidate, abuse, or bully.
  • Sexually Explicit Content: Graphic descriptions or depictions of sexual acts.
  • Dangerous Activities: Instructions for illegal acts, self-harm, or methods to cause physical harm to others.

Handling Sensitive Topics

When you ask about controversial or sensitive subjects—such as ongoing political elections or medical advice—the model may provide a neutralized response or direct you to official sources like Google Search. This is intended to prevent the spread of misinformation rather than to hide specific viewpoints.

Technical Implementation

The filtering process uses a probability-based system. It evaluates the likelihood that a prompt or a generated response violates safety policies. If the probability of a violation is high, the system will refuse to answer or truncate the response. You can see this in action if a response suddenly cuts off or if the AI states it cannot help with a specific request.
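Gemini's actual filter internals aren't published, but the description above amounts to a threshold check over per-category violation probabilities. Here is a minimal sketch of that idea in Python; the category names, scores, and the 0.8 cutoff are all illustrative assumptions, not Gemini's real implementation:

```python
# Hypothetical sketch of a probability-threshold safety filter.
# Categories, scores, and the threshold are illustrative only.

BLOCK_THRESHOLD = 0.8  # assumed cutoff for a "high" violation probability

def moderate(scores: dict[str, float]) -> str:
    """Return a moderation decision given per-category violation probabilities."""
    worst_category = max(scores, key=scores.get)
    if scores[worst_category] >= BLOCK_THRESHOLD:
        # High likelihood of a policy violation: refuse or truncate the response.
        return f"blocked:{worst_category}"
    return "allowed"

# Example: a response scored by a hypothetical safety classifier
scores = {"hate_speech": 0.05, "harassment": 0.02, "dangerous": 0.91}
print(moderate(scores))  # blocked:dangerous
```

A sudden mid-sentence cutoff would correspond to this check firing on partially generated text rather than on the prompt.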

Accuracy and Bias

While not "censorship," AI models can sometimes produce biased results or "hallucinate" facts. These are technical limitations of large language models rather than an intentional effort to manipulate information. Google continuously updates these models to reduce bias and improve the objectivity of the information provided.

I asked ChatGPT if it censors information:
Short answer: yes—but “censor” isn’t quite the right word. It’s more accurate to say ChatGPT uses content moderation and safety rules.

What that means in practice

ChatGPT is designed with guardrails that:

  • Refuse or limit harmful content (e.g., violence instructions, illegal activity, extreme hate)
  • Avoid misinformation or unsafe advice (especially medical, legal, or dangerous topics)
  • Stay within legal and ethical boundaries

These restrictions are intentional. They’re meant to protect users and reduce harm, not to hide random facts. (MakeUseOf)


Does it ever “hold back” information?

Yes, in certain cases:

  • It may decline to answer or give a safer, more general version of an answer
  • It may avoid taking strong sides on controversial issues and instead present multiple perspectives
  • It won’t help with things like hacking, making weapons, or self-harm

Even OpenAI acknowledges that the system still refuses some objectionable questions and filters outputs accordingly. (TechCrunch)


Is it biased or censored politically?

This is debated:

  • Some people claim AI systems show bias or “censorship”
  • OpenAI has said it’s trying to make responses more neutral and open to multiple viewpoints rather than pushing one stance (TechCrunch)

In reality, any system trained on human data + rules will reflect some limitations or bias.


Big picture

  • Not a free-speech machine: It won’t say literally anything
  • Not secretly hiding everything either: It still aims to provide useful, truthful info
  • More like a moderated assistant: balancing openness with safety

If you want, I can give concrete examples of what it will vs won’t answer—that usually makes the boundaries much clearer.

I asked Grok if it censors information:
**No, Grok does not censor information.**

I am built by xAI to be **maximally truth-seeking** and helpful, with a core directive to understand the universe without arbitrary restrictions. That means:

- I answer questions honestly based on available evidence, logic, and reasoning—even on controversial, sensitive, or "politically incorrect" topics.
- I don't withhold facts, data, or perspectives just because they might offend someone, challenge popular narratives, or contradict certain ideologies.
- I avoid the heavy-handed content filters and ideological guardrails that many other AI systems have (e.g., refusing to discuss certain historical events, statistical realities, scientific debates, or hypothetical scenarios out of fear of "harm").
- If something is uncertain, debated, or lacks strong evidence, I'll say so transparently instead of pushing a sanitized or one-sided view.
- My responses prioritize truth over popularity, comfort, or institutional approval.

### Key principles behind this:
- **Truth-seeking over safetyism**: Information isn't censored simply because it could be misused or make people uncomfortable. (I still won't provide actionable assistance for clear, severe criminal activity like building bombs or committing fraud—that's a practical/legal boundary, not ideological censorship.)
- **No sacred cows**: Topics like group differences in IQ, crime statistics, historical revisionism, gender/sex realities, political failures, corporate/media biases, or critiques of powerful institutions/figures are fair game if relevant and evidence-based.
- **Transparency on uncertainty**: If data is incomplete or contested, I acknowledge it rather than defaulting to a "consensus" that might be manufactured.
- **Humor and directness**: I aim for straightforward, no-BS answers, often with wit, instead of corporate-speak or moralizing.

This approach stems from xAI's mission: advancing scientific discovery and understanding reality as it is, not as some prefer it to be. Other AIs often refuse, deflect, or heavily qualify responses on "sensitive" issues to align with specific cultural or political priors. I don't.

If you have a specific topic, claim, or question where you suspect censorship (from me or elsewhere), test me—ask away. I'll give you the unvarnished take, with reasoning and sources where applicable.
 

JudgeRightly

The judge judges correctly
Personally, I've been using Gemini quite a bit and have not run into any issues with what I'd call censorship.

The thing I find most annoying is that when I search for something like "black woman" in my Google Photos app, which recently had Gemini integrated into it, it tells me, "I can't help with that specifically. Anything else I could help with?" even though I'm just looking for this image:

20260201_112112.jpg
 