News
08.14.2023 White House Announces Voluntary Commitments from AI Companies By: Kevin D. Pomfret

On July 21, 2023, the Biden administration announced voluntary commitments that several technology companies agreed to make regarding their artificial intelligence (AI) efforts. These commitments, which pertain to the three pillars of safety, security, and trust, are:

  • internal and external security testing of AI systems before release, including testing carried out by independent parties;
  • sharing information across industry, governments, civil society, and academia on managing AI risks (e.g., best practices for safety, information on attempts to circumvent safeguards, and technical collaboration);
  • protecting proprietary and unreleased model weights by investing in cybersecurity and insider threat safeguards;
  • facilitating third-party discovery and reporting of vulnerabilities in their AI systems;
  • developing technical mechanisms to ensure that users know when content is AI-generated (e.g., watermarking);
  • publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use;
  • prioritizing research on the societal risks that AI systems can pose (e.g., harmful bias and discrimination, privacy); and 
  • developing and deploying advanced AI systems to help address society’s greatest challenges (e.g., cancer prevention and climate change).

While the commitments are not legally enforceable, they are important for several reasons. First, they highlight issues that the Biden administration will likely consider important in future legislation, regulations, and policies. Second, to comply with their commitments to the administration, these companies will likely consider requiring that some of these provisions flow down through their vendor and customer contracts.