Biden’s AI Executive Order: A Wake-Up Call for Healthcare Industry

Articles & Alerts
January 2024

By David S. Givens 

President Biden issued a landmark Executive Order (EO) on October 30, 2023, laying out an expansive blueprint for the federal government’s oversight of the development and use of AI technologies. Healthcare organizations should be particularly interested in the EO’s numerous healthcare-specific directives (EN 1). These directives should influence how every healthcare organization assesses the risks and rewards of using AI technology and how they shape their organizational hierarchy, infrastructure, and workforce to employ and manage current and emerging AI capabilities.

Sound governance frameworks are necessary for the safe, secure, and trustworthy development, acquisition, and use of AI in healthcare. Healthcare organizations must prepare their AI strategies proactively, and those strategies should align with the guiding principles of the EO as well as the responsible AI standards the EO clearly anticipates. Careful planning regarding the role AI technologies will play within the organization will significantly influence its success in the age of transformational AI.

Perhaps more significant than any of its particulars, the EO should be viewed as a wake-up call to formalize AI-related oversight and decision-making protocols. As part of responsible oversight, organizations should consider promulgating an “AI vision statement” or similar document that addresses the organization’s intent to responsibly employ AI technologies consistent with the regulatory and policy changes stemming from the EO’s directives. Organizations should also consider expanding the AI-related responsibilities of the chief legal officer and the chief compliance officer and implementing robust reporting systems on AI-related risks and opportunities. Healthcare organizations should evaluate their AI competency and address any shortcomings now. They should also consider opportunities to shape AI policy and standards by joining like-minded stakeholders and engaging with one or more of the new task forces established through the EO.

In short, President Biden’s landmark EO should serve to remind organizational leaders of the need to shape AI policies and practices proactively. AI is a transformational and rapidly evolving technology. Healthcare organizations must understand AI's promise and perils to structure their relationship with it properly. This means defining the organization’s AI ambitions by identifying potential uses for current AI technologies and the future use opportunities presented by emerging AI capabilities and making conscious decisions to incorporate or reject those uses.  

How will advanced AI technologies affect your organization, and does your organization have the knowledge, resources, and foresight to succeed in the age of AI? What is your risk-reward appetite for embracing AI, and what are your organization’s AI capabilities? To develop an advantageous relationship with AI, healthcare organizations must ask these and other critical AI-related questions and truly understand the answers. Being AI-ready means looking forward: understanding and constantly monitoring emerging AI capabilities, industry standards, and government regulations related to AI use, and actively developing and updating AI policies and practices in response. Sound AI principles and practices will be essential to survive and succeed in the new era of advanced AI.

EN 1. For example, the EO includes a directive to the Department of Health and Human Services (HHS) to establish an AI Task Force within 90 days to develop a strategic plan for the responsible deployment of AI-enabled technologies within a year. The strategic plan will focus on developing, maintaining, and using predictive and generative AI technologies in health care delivery and financing; long-term safety and real-world performance monitoring of AI technologies; incorporating equity principles into AI; incorporating safety, privacy, and security standards into software development; and creating documentation to govern efforts to determine appropriate and safe uses for AI in healthcare.

The EO also requires HHS to develop a strategy, within 180 days, to "determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality." This includes developing an AI assurance policy and taking steps to advance compliance with federal nondiscrimination laws by "health and human service providers" that use AI and receive federal financial assistance. HHS must also establish an AI safety program within 365 days, in partnership with Patient Safety Organizations and in consultation with the Department of Defense and the Secretary of Veterans Affairs. The program must promulgate a "common framework” for identifying clinical errors resulting from AI deployed in healthcare settings and provide “specifications for a central tracking repository for associated incidents that cause harm, including through bias or discrimination, to patients, caregivers, or other parties."

These and other HHS-specific directives are just a few of the Executive Order's numerous and wide-ranging instructions for various federal agencies and other executive officials. Future steps to advance the Executive Order's directives will come directly from federal agencies, such as HHS.
