Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.

Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems' threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.

Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.

Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic, and to set an example for the private sector and governments around the world.

Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration's ongoing AI Cyber Challenge. Together, these efforts will harness AI's potentially game-changing cyber capabilities to make software and networks more secure.

Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries' military use of AI.

AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so, because companies use data to train AI systems. Without safeguards, AI can put Americans' privacy further at risk. To better protect Americans' privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids, and directs the following actions:

- Protect Americans' privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques, including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.
- Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals' privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.
- Evaluate how agencies collect and use commercially available information, including information they procure from data brokers, and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data.