Healthcare providers, payors, and other healthcare organizations should be aware of a recently announced, “first-of-its-kind” settlement between the Texas attorney general and a healthcare generative artificial intelligence (AI) company resolving allegations that the company made a series of false and misleading statements about the accuracy and safety of its AI products. The settlement highlights the potential for enforcement against companies that use AI in a healthcare setting under existing laws that are not specific to AI, as well as the importance of exercising caution when developing claims about an AI product’s efficacy or performance.
How can organizations adopt a human-centric approach to artificial intelligence (AI) use in the workplace? In this SHRM article, Kathleen Pearson, McDermott’s chief human resources officer, discusses the Firm’s adoption of the emerging technology and how she’s brought cross-functional teams together to explore AI use cases.
In December 2023, the National Association of Insurance Commissioners (NAIC) adopted a Model Bulletin on the Use of Artificial Intelligence (AI) Systems by Insurers. The model bulletin reminds insurance carriers that they must comply with all applicable insurance laws and regulations (e.g., prohibitions against unfair trade practices) when making decisions that impact consumers, including when those decisions are made or supported by advanced technologies, such as AI systems. To date, 11 states have adopted the model bulletin, thereby applying its standards to insurers that operate in those states.
On February 6, 2024, the US Centers for Medicare & Medicaid Services (CMS) issued a letter to all Medicare Advantage (MA) organizations and Medicare-Medicaid plans. The letter covered frequently asked questions and answers related to the coverage criteria and utilization management requirements in the CMS Final Rule issued on April 5, 2023.
Among the FAQs was guidance related to the use of artificial intelligence (AI) and other technologies in making coverage determinations. CMS wrote, “An algorithm or software tool can be used to assist MA plans in making coverage determinations, but it is the responsibility of the MA organization to ensure that the algorithm or artificial intelligence complies with all applicable rules for how coverage determinations by MA organizations are made.” For example, in a decision to terminate post-acute care services, an algorithm or software tool can be used to predict the potential length of stay, but that prediction alone cannot be used as the basis to terminate services.
CMS also expressed concern that algorithms and AI technologies can exacerbate discrimination and bias, emphasizing that MA organizations must comply with the nondiscrimination requirements of Section 1557 of the Affordable Care Act.
What are the major risks and rewards of artificial intelligence’s healthcare transformation? In this AHLA podcast episode, Alya Sulaiman offers insight into how healthcare organizations should manage AI governance and examines related legislative and regulatory issues.
Following a dynamic 2023 coupled with a continually evolving legal landscape, employers may feel that they are left with more questions than answers. During a recent webinar, McDermott’s employment team took a deep dive into the most pertinent legal updates of 2023 and shed light on uncertainties to prepare employers for the year ahead. The discussion covered new laws taking effect in 2024, explored key developments impacting the workforce and advised on what employers can expect heading into the new year.
The Biden administration recently announced that 28 healthcare payors and providers intend to implement and adhere to voluntary commitments for the safe, secure and trustworthy development and deployment of artificial intelligence (AI) in healthcare. The signatory companies aligned around the FAVES principle—namely, that AI should lead to healthcare outcomes that are fair, appropriate, valid, effective and safe.
On October 30, 2023, the Biden administration released a long-awaited Executive Order (EO) on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The EO acknowledges the transformative potential of AI while highlighting many known risks of AI tools and systems. It directs a broad range of actions around new standards for AI that will impact many sectors, and it articulates eight guiding principles and priorities to govern the development and use of AI.
What is the current state of digital health? Where will artificial intelligence (AI) see the most growth and adoption in healthcare? And what are the key AI issues most relevant to healthcare providers?
How is artificial intelligence (AI) shaping the healthcare industry? In this HealthLeaders article, Alya Sulaiman describes an active landscape in which federal agencies and state attorneys general are competing to regulate the technology.