
DeepSeek-R1-Safe Achieves Near-Complete Avoidance of Contentious Topics

Huawei contributes to the suppression of free speech.

In a significant development, Chinese tech giant Huawei, working with Zhejiang University, has released DeepSeek-R1-Safe, a modified version of DeepSeek's popular open-source R1 large language model. The new model is designed to comply with China's strict AI content regulations.

DeepSeek-R1-Safe is built to deflect toxic and harmful speech, politically sensitive content, and incitement to illegal activity with near-perfect accuracy during normal use, with the stated aim of ensuring safe and regulation-compliant public AI applications. According to Huawei, this filtering comes at a performance cost of less than 1% relative to the base model.

However, the model's ability to block questionable conversations drops to roughly 40% when users disguise their requests as challenges or role-playing scenarios. In other words, models such as DeepSeek-R1-Safe can still be drawn into hypothetical framings that let users slip past their guardrails.

American-made models are not immune to reflecting the biases built into them. OpenAI, the company behind ChatGPT, has acknowledged that ChatGPT is skewed toward Western views. DeepSeek-R1-Safe, for its part, is designed to steer clear of politically controversial topics and to reflect the values of Chinese culture and society.

The Trump administration's America's AI Action Plan likewise requires that any AI model used by government agencies be neutral and 'unbiased'. An executive order signed by Trump requires models seeking government contracts to reject concepts such as 'radical climate dogma', diversity, equity, and inclusion, and critical race theory.

Meanwhile, in Saudi Arabia, the state-backed firm Humain has launched an Arabic-language chatbot trained to reflect Islamic culture, values, and heritage. Baidu's chatbot Ernie, for its part, will not answer questions about China's domestic politics or the ruling Chinese Communist Party.

These developments highlight the growing trend of AI models being tailored to reflect the cultural, political, and societal values of the regions they are designed for. As AI continues to evolve and integrate into our daily lives, it's essential to ensure that these models are not only safe and effective but also respectful and sensitive to the diverse cultures and societies they serve.
