LLM Personal Medical Data Privacy and Security
When an organization possesses patient medical information, the data privacy and security oversight that applies can come from federal, regional, and/or state authorities.
Laws and regulations restrict the disclosure of protected health information (PHI) held by organizations operating within their jurisdictions. For example, the US Health Insurance Portability and Accountability Act of 1996 (HIPAA) defines covered entities, establishes safeguards against improper PHI disclosure, and sets standards for PHI sharing (HHS.gov, 2008). Other examples include the General Data Protection Regulation (GDPR) adopted in the EU and the UK Data Protection Act (GDPR.eu, 2018). These regulations provide guardrails similar to HIPAA's, and violations carry penalties.
In the US, several states have their own consumer data privacy regulations that may also extend to health information sharing. These regulations impose transparency requirements on organizations and mandate certain opt-out mechanisms for consumers. States including California, Virginia, and Colorado have statutes currently in force, while several others have enacted laws that will take effect in the near future (US State Privacy Legislation Tracker). As part of a Large Language Model (LLM) adoption strategy, organizations should systematically address the risks of medical information sharing, privacy, and security. They should also evaluate LLM usage to prevent disclosures that could violate these laws and incur penalties.
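As one concrete mitigation, organizations can scrub direct identifiers from text before it reaches a third-party LLM. The following is a minimal, illustrative sketch: the `redact_phi` function and its patterns are hypothetical examples, and regex rules alone do not satisfy HIPAA's Safe Harbor de-identification standard, which enumerates 18 identifier categories.

```python
import re

# Hypothetical, minimal PHI scrubber covering a few direct identifiers.
# Real de-identification (e.g., HIPAA Safe Harbor) needs far broader
# coverage, NER-based tooling, and human review.
PHI_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace matched identifiers with typed placeholders before the
    text is sent to an external LLM endpoint."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize: John Doe, MRN: 00123456, seen 03/14/2023, call (555) 123-4567."
print(redact_phi(prompt))
# -> "Summarize: John Doe, [MRN REDACTED], seen [DATE REDACTED], call [PHONE REDACTED]."
# Note that the patient name still leaks, which is why pattern matching
# alone is insufficient for compliance.
```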
AI Guidelines and Proposals Impacting LLMs
Regulatory oversight of AI in healthcare technology has recently attracted significant interest and activity, including updates from the White House, FDA, HHS/ONC, AMA, the European Parliament, and the WHO in October through December 2023.
There are many examples of proposed legislation, guidelines, and best practices that can be applied to AI-enabled and LLM healthcare products; they commonly focus on transparency, data and information validation, safety and bias reduction, privacy, data protection, and cybersecurity.
A few notable examples include:
- The Ethics Guidelines for Trustworthy AI (EU), published by the European Commission, which set out requirements for developing trustworthy AI systems (European Commission, 2019), along with frameworks such as the WHO's Regulatory Considerations on Artificial Intelligence for Health and the AMA's Principles for Augmented Intelligence Development, Deployment and Use.
- The European Parliament is expected to take up AI legislation known as the AI Act, per a deal struck with EU regulators. It proposes rules for AI deployments in high-risk applications, including healthcare uses of AI and LLM systems. The law aims to limit potential harm, protect fundamental rights, ensure transparency and explainability, maintain human control, and shape global practice. It will come into effect after a two-year transition period, and its implementation and effectiveness remain to be seen. Overall, the EU AI Act is a significant step forward in regulating AI development and deployment: it prioritizes safety, ethics, and human rights, paving the way for responsible AI use. (E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence - The New York Times (nytimes.com))
- Lawmakers in Congress are showing keen interest in overseeing healthcare AI. Senator Mark Warner (D-VA), chairman of the Senate Select Committee on Intelligence, has been advocating for AI regulation, emphasizing its importance leading up to the 2024 elections. Concurrently, another Senate committee has invited public input on potential modifications to HIPAA's framework to address AI data considerations more effectively.
- An Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence was issued on October 30, 2023. It aims to establish a framework for ethical and responsible AI development and deployment in the United States.
Goals:
- Promote AI that is safe, secure, and trustworthy. This includes setting new standards for safety and security evaluations, developing policies to mitigate risks, and promoting transparency.
- Protect Americans' privacy and civil rights. The order addresses concerns about bias and discrimination in AI algorithms, requiring developers to consider fairness and equity throughout the development process.
- Advance American leadership in AI. The government will invest in research and development, attract talent, and foster a competitive AI ecosystem.
- Support workers and ensure equitable benefits of AI. The order recognizes the potential impact of AI on jobs and calls for policies to support workers and communities.
Key elements:
- Eight guiding principles: These principles outline the overall vision for ethical AI, encompassing aspects like fairness, non-discrimination, accountability, and safety.
- Standardized evaluations: The order requires robust and reliable assessments of AI systems before deployment, focusing on safety, bias, and security vulnerabilities (an illustrative bias check appears after this list).
- Public transparency: Developers and government agencies will be required to provide information about AI systems used in high-risk applications.
- Algorithmic discrimination: The order reinforces existing protections against discrimination in AI, including in areas like employment, criminal justice, and healthcare.
- Investing in education and workforce development: The government will support programs to equip Americans with the skills needed for the AI era.
- International collaboration: The order emphasizes the importance of international cooperation on AI development and governance.
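As one illustration of what a standardized bias assessment could measure, the sketch below computes a demographic parity gap over hypothetical model decisions; the metric choice, threshold, and data are all assumptions for illustration, not requirements of the order.

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """records: iterable of (group_label, positive_outcome) pairs.
    Returns the spread between the highest and lowest positive-outcome
    rates across groups, plus the per-group rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in records:
        totals[group] += 1
        positives[group] += int(positive)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Made-up decisions from a hypothetical model, tagged by patient group.
gap, rates = demographic_parity_gap([
    ("A", True), ("A", True), ("A", False),   # group A: 2/3 positive
    ("B", True), ("B", False), ("B", False),  # group B: 1/3 positive
])
print(f"rates={rates}, parity gap={gap:.2f}")  # gap of 0.33 would flag review
```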
This executive order represents a significant step in the direction of responsible AI development and deployment in the United States. However, its implementation and effectiveness remain to be seen.
It remains to be seen when and how existing AI guidelines and future legislation or proposals will be applied specifically to healthcare LLM implementations; the result may be an operational landscape for LLM adoption similar to that for general AI deployments.
The Supplemental Reference section below summarizes this and other important guidelines and proposal information.
Diagnostic, Clinical Decision Support and Therapeutic LLMs
Commercialized LLM products used for healthcare purposes may be regulated by government agencies such as the FDA in the US, the MHRA in the UK, and the competent authorities in the EU. Although LLMs are differentiated from general AI-enabled systems by the nature of their large training datasets, in operational situations where an LLM is intended for predictive patient diagnosis, clinical decision support, or delivering patient treatment recommendations, it could be regulated as Software as a Medical Device (SaMD) in some jurisdictions.
In general, regulatory agencies consider moderate to high-risk AI enabled software used in these healthcare use-cases to be medical devices subject to regulatory oversight. These regulations are likely to extend to LLM deployments, with corresponding control of marketing clearance or approval of SaMD products within the agency jurisdictions (Meskó & Topol, 2023).
To date, the FDA has authorized over 690 AI-enabled medical devices, the majority (77%) cleared for radiology use cases. As of October 19, 2023, the FDA had not authorized any medical device that uses generative AI or LLMs: most approved devices rely on "locked algorithms" (preapproved models), and a small number of SaMDs have FDA-authorized predetermined change control plans (PCCPs) that allow controlled changes. The biggest challenge in approving generative AI and LLM medical devices is providing responsible regulatory oversight of a model that continuously learns and updates, without stifling innovation. Specifically, the FDA does not yet have an approach to regulating adaptive algorithms (those that adjust their behaviors and parameters) or autodidactic/unsupervised learning.
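To make the "locked algorithm" concept concrete, the sketch below shows one way a deployment pipeline might enforce it: refuse to serve any model artifact that is not byte-identical to the cleared version. This is a hypothetical illustration of version pinning, not an FDA-prescribed mechanism; the path and approved digest are placeholders.

```python
import hashlib
from pathlib import Path

# Placeholder values; in practice these would come from controlled
# release records tied to the cleared device version.
APPROVED_SHA256 = "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"
MODEL_PATH = Path("/models/clinical_llm_v1.bin")

def artifact_digest(path: Path) -> str:
    """Stream the model file through SHA-256 without loading it whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def load_locked_model(path: Path = MODEL_PATH) -> Path:
    """Gate deployment on an exact match with the cleared artifact."""
    digest = artifact_digest(path)
    if digest != APPROVED_SHA256:
        raise RuntimeError(
            f"{path} (sha256={digest[:12]}...) does not match the cleared "
            "version; blocking deployment."
        )
    return path  # hand the verified artifact to the actual model loader
```

An adaptive or continuously learning model would fail this check on every update, which is precisely the property that makes such systems hard to fit into today's clearance model.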
It can be said that this is the "Wild, Wild West of AI," because regulatory oversight has not kept pace with innovation. LLMs and generative AI as a whole are being deployed and updated far faster than past innovations in healthcare. A new approach to balancing public and healthcare-industry safety regulation needs to be established quickly to keep pace with the current speed of innovation and the needs of the healthcare workforce.
Since 2019, the FDA and ONC have focused on regulating AI, including its subsets ML and LLMs, for patient safety. Patient safety concerns led the FDA to create a new Digital Health Advisory Committee to further explore issues related to health IT, including AI. Furthermore, in December 2023 the ONC finalized a rule creating new technical transparency and risk-management requirements for a wide range of certified health IT solutions, including LLMs.
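While the rule's specific requirements are beyond this sketch, the kind of transparency and risk-management plumbing it points toward can be illustrated with a simple audit wrapper that records model identity, prompt, and output for every LLM call; all field names here are assumptions, not drawn from the ONC rule.

```python
import json
import time
import uuid

def audited_completion(llm_call, model_id: str, model_version: str, prompt: str) -> str:
    """Run any prompt->completion callable and append a traceable audit
    record. Field names are illustrative; a production system would use
    an append-only, access-controlled store rather than a local file."""
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,
        "model_version": model_version,
        "prompt": prompt,
    }
    record["output"] = llm_call(prompt)
    with open("llm_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["output"]

# Usage with a stand-in model:
reply = audited_completion(lambda p: "stubbed completion", "clinical-llm", "1.0", "triage note ...")
```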
Developers of medical device software have regulatory responsibilities in addition to those under data privacy and security regulations. As part of a comprehensive strategy, developers must determine the regulatory jurisdictions and the medical device regulations applicable to the specific intended use of a commercialized LLM.
References
https://www.warner.senate.gov/public/index.cfm/2023/8/warner-presses-ai-companies-to-stop-promoting-eating-disorders
https://www.modernhealthcare.com/digital-health/ai-healthcare-federal-regulation-fda-onc (Modern Healthcare is a subscription-only reference.)
Supplemental Reference Information
- Ethics Guidelines for Trustworthy AI (EU) promote transparency, non-discrimination, diversity, fairness, accountability, and human oversight in AI development and deployment (European Commission, 2019).
- For US non-defense government agencies, an Executive Order - Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government was signed emphasizing these concepts (The White House, 2020).
- The Algorithmic Accountability Act (US) was proposed in the United States Congress and re-introduced in 2022. The bill addresses potential bias and discrimination in AI systems by requiring organizations to assess and mitigate risks associated with automated decision-making using AI outputs (Rep. Clarke, 2022).
- The US Federal Trade Commission (FTC) has issued guidelines outlining principles for businesses using AI and automated decision-making systems. These guidelines focus on fairness, transparency, and accountability (Chopra et al., n.d.).
- The Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare published by the Coalition For Health AI (a MITRE project) strives to identify and propose solutions to issues that must be addressed in order to enable trustworthy AI in healthcare (Coalition For Health AI).
- The NIST AI Risk Management Framework discusses how organizations can evaluate the risks related to AI. Additionally, it outlines categories of risk management that can be applied at multiple stages of the AI lifecycle (NIST).
- A recently signed Executive Order by the US President, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, promotes policies and actions for US agencies to address AI (US Executive Order 14110).
- The Office of Science and Technology Policy has promoted five principles and associated practices in its Blueprint for an AI Bill of Rights to form an overlapping set of backstops against potential harm (The White House).
- Framework of the EU AI Act: "Artificial Intelligence Act: Council and Parliament Strike a Deal on the First Rules for AI in the World" - Consilium (europa.eu)
- FDA “Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices” is a resource for medical devices approved by the FDA that incorporate AI.
Citations
- Meskó, Bertalan, and Eric J. Topol. “The Imperative for Regulatory Oversight of Large Language Models (or Generative AI) in Healthcare.” NPJ Digital Medicine 6, no. 1 (July 6, 2023): 1–6. https://doi.org/10.1038/s41746-023-00873-0
- Office for Civil Rights (OCR), "Summary of the HIPAA Privacy Rule." HHS.gov, May 7, 2008. https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html
- GDPR.eu, "What Is GDPR, the EU’s New Data Protection Law?" November 7, 2018. https://gdpr.eu/what-is-gdpr/
- GOV.UK, "Data Protection." https://www.gov.uk/data-protection
- "US State Privacy Legislation Tracker." https://iapp.org/resources/article/us-state-privacy-legislation-tracker/
- European Commission. "Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future," April 8, 2019. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
- The White House. "Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government." https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-promoting-use-trustworthy-artificial-intelligence-federal-government/.
- Rep. Clarke, Yvette D. [D-NY-9]. "Text - H.R.6580 - 117th Congress (2021-2022): Algorithmic Accountability Act of 2022." Legislation, February 4, 2022. http://www.congress.gov/bill/117th-congress/house-bill/6580/text.
- Chopra, Rohit, Kristen Clarke, Charlotte A. Burrows, and Lina M. Khan. "Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems," n.d. EEOC-CRT-FTC-CFPB-AI-Joint-Statement(final).pdf
- Coalition For Health AI. Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare. blueprint-for-trustworthy-ai_V1.0.pdf (coalitionforhealthai.org)
- Artificial Intelligence Risk Management Framework (AI RMF 1.0) (AI Risk Management Framework | NIST) (nist.gov)
- The White House. Blueprint for an AI Bill of Rights | OSTP.
- US Executive Order 14110, October 30, 2023. Federal Register: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
- E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence - The New York Times (nytimes.com)
The views and opinions expressed in this content or by commenters are those of the author and do not necessarily reflect the official policy or position of HIMSS or its affiliates.