Anthropic Dials Back AI Safety Commitments
AI company Anthropic is overhauling its Responsible Scaling Policy (RSP), notably dropping its pledge never to train an AI system without first guaranteeing safety measures. The pivot, prompted by competitive pressure and the rapid pace of AI progress, moves Anthropic from a highly cautious stance to one focused on matching or surpassing competitors' safety efforts and being more transparent about risks. The company, which recently raised $30 billion and is valued at $380 billion, argues the change is a pragmatic response to evolving political and scientific realities rather than a capitulation to market incentives. The new policy commits Anthropic to transparency, to matching competitors' safety efforts, and to delaying development only if it leads the AI race and the risks are significant. While some experts read the move as a bearish signal for the prospects of averting AI catastrophes, Anthropic believes it can preserve incentives for safety research through new initiatives such as "Frontier Safety Roadmaps" and "Risk Reports."
