See: https://www.courtlistener.com/docket/72379655/24/1/anthropic-pbc-v-us-department-of-war/
The legal vacuum in which these contractual terms exist makes them only more important. The United States currently has no comprehensive federal law governing the use of AI by military or intelligence agencies in domestic contexts. There is no statutory framework requiring transparency, judicial oversight, or meaningful accountability for AI-driven surveillance at scale. There is no enforceable legal standard governing when an autonomous weapons system may select and engage a target. In the absence of public law, the contractual and technological requirements that AI developers impose on the use of their systems represent a vital safeguard against their catastrophic misuse.
That is the key point they are making: because there are no laws governing AI, the guardrails that AI developers impose are necessary to prevent its misuse.
The solution to this conundrum is to create such a legal framework governing the use of AI, not to leave AI developers to arbitrarily impose their own views.