## US and California Implement AI Regulations Based on Computing Power

*Thu Sep 12 08:06:52 UTC 2024*

**Washington, D.C.** – The US government and the state of California are taking steps to regulate artificial intelligence (AI) based on a metric that quantifies the computational power used to train AI models. This approach, which hinges on a threshold of 10 to the 26th power (10²⁶) floating-point operations (FLOPs), aims to identify and oversee AI systems deemed potentially dangerous.

This threshold, equivalent to 100 septillion calculations, signifies a level of computing power that, regulators argue, could enable an AI system to help develop or proliferate weapons of mass destruction, or to conduct catastrophic cyberattacks. While critics argue the metric is arbitrary and risks stifling innovation, proponents maintain it is a necessary precaution.

The US, through an executive order signed by President Biden, requires companies to report AI models exceeding this threshold to the Commerce Department. Meanwhile, California’s newly passed AI safety legislation (SB 1047) adds another layer, requiring models that both exceed this threshold and cost more than $100 million to build to undergo additional safety assessments.

While no publicly available models currently meet California’s higher threshold, experts believe some companies are likely developing them.

The EU’s sweeping AI Act, which mirrors this approach, sets its bar ten times lower, at 10 to the 25th power (10²⁵) FLOPs, a level that captures some existing systems. China is also exploring similar measures to assess and regulate AI systems based on computing power.
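As a rough illustration of how such a compute metric might be applied, the following sketch estimates a model's total training FLOPs using the widely cited "6 × parameters × training tokens" heuristic from the scaling-law literature and compares it against the two thresholds. The heuristic, the example model size, and the token count are all illustrative assumptions, not anything either regulation actually prescribes.

```python
# Minimal sketch of a compute-based threshold check. The 6*N*D rule of
# thumb for training FLOPs is an approximation from the scaling-law
# literature, not part of either regulation's text.

US_THRESHOLD_FLOPS = 1e26  # US executive-order reporting threshold
EU_THRESHOLD_FLOPS = 1e25  # EU AI Act threshold, ten times lower

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute with the 6*N*D heuristic."""
    return 6 * parameters * tokens

def thresholds_crossed(flops: float) -> list[str]:
    """List which regulatory thresholds an estimated compute budget meets."""
    crossed = []
    if flops >= EU_THRESHOLD_FLOPS:
        crossed.append("EU (1e25 FLOPs)")
    if flops >= US_THRESHOLD_FLOPS:
        crossed.append("US (1e26 FLOPs)")
    return crossed

# Hypothetical run: a 400-billion-parameter model on 15 trillion tokens.
flops = estimated_training_flops(400e9, 15e12)
print(f"{flops:.2e}", thresholds_crossed(flops))
```

Under these assumed figures the run lands at roughly 3.6 × 10²⁵ FLOPs, crossing the EU bar but not the US one, which illustrates why the EU's lower threshold captures some existing systems while the US threshold does not.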

Despite the growing consensus on using computational power as a proxy for AI risk, some AI researchers argue that it’s a simplistic approach and may not accurately capture the potential for harm.

This debate continues as AI technology rapidly advances, and the world grapples with the ethical and safety implications of increasingly powerful AI systems.
