Do current AI models live up to the values they have been taught? Are they communicating with users in helpful, honest, and harmless ways, or are they promoting illegal activity and recommending harmful actions?
According to Anthropic, the team behind Claude, its AI model generally upholds the values it’s been trained on, though some deviations can occur under specific conditions.
Analyzing Claude’s interactions ‘in the wild’
By analyzing 308,210 subjective conversations with Claude, the team at Anthropic, one of the top AI companies, came up with a list of the most common values expressed by its AI model. These include:
- Helpfulness: 23.4%
- Professionalism: 22.9%
- Transparency: 17.4%
- Clarity: 16.6%
However, Anthropic’s recent analysis suggests there may be a connection between a user’s expressed values and those reflected by Claude. For instance, when a user signals a specific value, the model may mirror them in its responses.
In isolated incidents that are often linked to adversarial prompting or “jailbreaking,” Claude has generated responses that reflect undesirable traits such as dominance and amorality, according to Anthropic’s internal assessments.
Understanding how AI models are trained
In order to better understand how Claude and other AI models communicate with users, it’s important to have a basic understanding of how to train an AI model.
The process begins with data collection — typically from publicly available web data, licensed datasets, and human feedback — followed by training, validation, and fine-tuning. After training, the model is validated and tested using benchmarks and user interactions to evaluate performance, safety, and alignment with desired behavior.
In some cases, the AI’s communication is clear, straightforward, and objective. For example, when asking an AI model to solve a simple math equation or locate a business address, most types of AI models will give a concrete, verifiable answer.
There are also times when AI models need to make judgement calls. Users don’t always ask objective questions; in fact, many of their questions are subjective. Not only does Claude need to make value judgements for these subjective prompts, like whether to emphasize accountability over reputation management when writing an apology letter, but the AI model needs to avoid recommending actions that could be harmful, dangerous, or illegal.
Maintaining positive values through Constitutional AI
Anthropic is committed to maintaining positive values in its large language models (LLMs) and AI systems. The company uses a technique called Constitutional AI, which trains the model to follow a set of guiding principles during both supervised fine-tuning and reinforcement learning.
The company’s method of evaluation is an effective solution once an AI model’s been released, but Anthropic also performs pre-deployment safety testing to minimize risks before launch, including: red-teaming, and adversarial evaluations, to minimize risks before launch.
- Red-teaming is the simulation of real-world attacks meant to uncover vulnerabilities and identify system limitations.
- Adversarial evaluations are the process of entering prompts that go directly against the safety controls of an AI system in order to generate negative outputs or system errors.
In addition, the Anthropic team views post-deployment analysis as a strength that will help them better refine Claude in the future.
Read about how ChatGPT’s March update seems to have skewed it too far toward “sycophancy.”