Integris Assurance Company

Artificial intelligence (AI) is no longer an emerging concept in medicine; it is rapidly becoming a foundational component of modern clinical practice. Over the past year, the capabilities of generative AI have accelerated at unprecedented speed, fueled by extraordinary growth in computational power and the maturity of large language models (LLMs). As a result, clinicians now face a dual reality: AI offers tremendous promise for efficiency and insight, yet it also brings new risks, ethical considerations, and potential liability concerns.

This article summarizes current AI capabilities, outlines risks related to generative systems, and provides practical guidance for safe and responsible use in clinical settings.

Where We Are: The Rise of Generative AI in Medicine

The convergence of Moore’s Law and the exponential scaling of graphics processing unit (GPU) technology, described informally as “Huang’s Law,” has enabled the creation of massive AI models trained on enormous datasets. These foundation models now power chatbots, virtual assistants, and analytic tools capable of drafting letters, summarizing charts, explaining diagnoses, and emulating natural human conversation.

Their abilities can be impressive: they can translate between languages, generate patient-friendly explanations, assist with documentation, and provide comparative reviews of treatment options. However, their outputs are driven by statistical prediction, not true understanding, reasoning, or clinical judgment.

Recent research underscores these limitations. A 2024 American Academy of Orthopaedic Surgeons (AAOS) analysis found that ChatGPT’s responses for musculoskeletal conditions became increasingly inconsistent as queries grew more complex. While useful for general information, AI systems are not a substitute for clinical assessment, particularly when medical nuance or individualized decision-making is required.

What Can Go Wrong? Understanding AI Risks

1. Hallucinations and Reasoning Errors
AI models sometimes produce answers that sound authoritative but are factually incorrect, internally inconsistent, or entirely fabricated. These “hallucinations” can be subtle in tone but significant in clinical impact.

2. Hidden Bias and Data Limitations
AI systems inherit the characteristics of the data on which they are trained. If historical data reflects disparities or incomplete representation, the model may replicate or amplify those biases.

3. Jailbreaks and Security Vulnerabilities
AI systems can be manipulated through prompts designed to bypass safety features—a phenomenon known as “jailbreaking.”

4. The “Human in the Middle” Problem
Without proper oversight, clinicians risk over-relying on model suggestions or failing to recognize when information is inaccurate.

5. Liability Exposure
AI cannot hold a medical license. Thus, the physician remains legally and ethically accountable. Whether the AI is correct or incorrect, the clinician’s adherence to, or departure from, the recommendation is central to any evaluation of negligence.

AI for Clinical Support: Best Use Cases Today

Despite the risks, generative AI offers significant value when used appropriately:

1. Chart summarization and synthesis
2. Administrative support
3. Comparative analysis
4. Education and mentoring

Five Rules for Safe Physician Use of Generative AI

1. Use AI for support tasks, not clinical decision-making.
2. Review everything.
3. Maintain human oversight.
4. Do not rely on AI for the latest clinical evidence.
5. Follow the principle of “Do no harm.”

Five Safety Rules for Medical Chatbots

1. Use encrypted channels.
2. Rely on evidence-based datasets.
3. Ensure human oversight.
4. Monitor and update performance.
5. Clearly disclose chatbot limitations.

Looking Ahead: Governance and Ethical Responsibility

AI performance degrades without ongoing evaluation, and developers must provide monitoring tools, transparency, and processes to identify aberrant behavior. Meanwhile, clinicians must report unexpected outputs so models can be corrected.

With thoughtful policies, strong oversight, and responsible implementation, AI can improve efficiency, reduce administrative burden, and enhance patient engagement.

*This article is a summary of one of the many Webinars offered by Integris Group for its policyholders.*