Impact on the Health Care and Life Science Industries
Examining How AI Influences the Delivery of Health Care
Generative and predictive AI, as well as machine learning applications, are transforming how diseases and conditions are predicted and managed, and how patients are diagnosed and treated. AI drives dramatic advances in robotic surgery, remote telehealth services, and smart hospital rooms. It also revolutionizes the administrative side of health care, particularly the billing and coding of claims to governmental and private payors.
Progress, however, is not immune to problems. The use of AI in health care prompts a host of legal, regulatory, and liability issues that providers, health care systems, and payors must address.
Diagnosis
Many questions must be considered when AI is deployed to support health-related diagnoses. For example:
- What biases in the data impact a diagnosis?
- How do I know if my machine learning model works well enough—both in accuracy and reliability—for a given health care use?
- How does our organization apply clinical judgment on top of an AI-generated diagnosis?
- Since laws and regulations governing AI are constantly evolving, how can we keep up with current requirements?
It is critical for any entity to understand how an AI system operates before using it to make diagnoses. It is also important to implement best practices surrounding transparency, explainability, and cybersecurity.
Epstein Becker Green works with clients to determine which machine learning models to build, as well as the tools and techniques for post-model explanation. The firm also provides guidance and consulting on accuracy and robustness testing to confirm that algorithms perform well enough to be safe and effective for a particular use.
Claims and Reimbursement
As claims-payment schemes are adapted to AI-based services, Epstein Becker Green helps clients navigate the new complexities of reimbursement from public and private payors.
Providers and hospitals that use AI to bill or code claims must understand the fraud and abuse risks involved. Creating a compliance program tailored to the use of AI in billing and coding is critical.
Organizations should also establish AI and data governance structures and develop a systematic approach to ensure ongoing compliance. A data governance structure typically involves several core components:
- Formation of an AI governance/compliance committee
- Creation of AI evaluation guides for reviewing and approving AI solutions
- Preparation and auditing of policies and procedures governing the use and development of AI
FDA Approval of AI Products
As a thought leader in the health care industry for more than a half-century, Epstein Becker Green helps organizations develop AI-based products and steer these products through the approval processes of the FDA and other federal and state agencies.
In collaboration with other stakeholders, such as the Federal Trade Commission (FTC) and the U.S. Department of Health and Human Services (HHS), the FDA continues to develop a wide range of regulatory guidelines and discussion papers on the safety and ethics of AI systems known as Software as a Medical Device ("SaMD").
As the laws evolve, organizations should prepare for changes to the FDA approval process and consider these questions:
- What uses of AI does the FDA regulate, even when developed by a health care provider?
- What are the FDA requirements for any algorithm that influences patient care?
- What is an organization's liability under the Affordable Care Act (ACA) for algorithms that discriminate in providing care?
- What is an organization's liability under the False Claims Act (FCA) for algorithms that provide incorrect information in a claim?