Rules can’t solve every potential problem, and the demand for perfect safety has dangers of its own
By Jason Furman. Excerpts:
"the European Union’s heavy-handed AI rules have impeded progress"
"balance benefits and risks."
"Cost-benefit analysis requires regulators to think not only about the risks of AI but also the risks from slower AI development, such as more cancer deaths because of delayed drug discovery"
"compare AI with humans, not to the Almighty"
"autonomous cars crash—but how do they compare with human drivers?"
"address how existing regulations are hindering progress. The most obvious are permitting and other obstacles to the expansion of data centers and the power sources they will need. A bigger threat over time is the dozens of state laws regulating AI that have already been passed"
"where new regulation is warranted, AI should be overseen by existing domain-specific regulators rather than a new superregulator."
"Existing regulators should focus on outputs and consequences in their domains, not on inputs and methods."
"regulation must not become a moat protecting incumbents."
"well-intentioned rules can entrench existing powers, from medieval guilds to hospital certificate-of-need laws."
"A superregulator could be captured by big companies. When tech giants enthusiastically promote regulation, it should raise red flags."
"not every problem caused by AI can be solved by regulating AI."