How to Improve Your AI Treatment Plan

At the end of the day, things are changing rapidly, and the medical industry has already been affected. I wrote a blog post called “Googling with Wisdom” in September of 2024, and I wanted to update that conversation in light of the recent surge in large language models (LLMs, e.g. ChatGPT, Claude, Gemini, Grok, etc.).

On a general note, I believe we are in sore need of tech integrations and systems to improve 1) our capacity, 2) patient autonomy and education, and 3) accessibility. As with the Digital Revolution and the mass accessibility of the internet, I believe there’s a way for us to learn artificial intelligence (AI) as a human-informed tool. It has already had massive climate and energy implications, and ethically we need to work out the cost-benefit analysis of this technological “advancement.”

As with social media, these advancements don’t wait for legislation, research, or risk analyses. We need to remain flexible enough to pivot as data continues to come out, but we’re already here.

More and more often, patients are coming in with their AI-informed diagnosis and treatment plan. I’ve joked about integrating a consultation service for ChatGPT plans. But it’s really not a joke.

What do I love about it? In this Age of Information, patients are finding that AI platforms are creating digestible and organized information they can understand. And it’s free!

The problem is that these AI models, even health-specific ones, are trained on text rather than clinical experience. The quality of the results depends entirely on the information you provide the platform, and many patients don’t know what they don’t know about their symptoms. And, of course, these LLMs cannot physically examine or evaluate you, which is a huge component of clinical evaluation.

Shaming patients for using this access is archaic and irresponsible. The days of purely prescriptive medicine with providers on pedestals are over; we are (and should be) consultants to our patients in their health journeys.

In this role, summaries provided by LLMs and their AI chatbots can facilitate our work.

What does AI do really well when it comes to health concerns?

  • Help you organize and articulate your symptoms for a real provider

  • Explain medical terminology you've encountered

  • Generate questions to ask your doctor

  • Provide general education about body systems

  • Help you understand whether something is likely urgent or can wait for an appointment*

  • Offer frameworks for tracking symptoms

*Again, dependent on the quality of the information you offer it!

Often, our work with a patient starts with teaching them how to organize their symptoms and patterns. A dialogue with an AI chatbot may prompt many of these initial questions and proactively help a patient notice more of the essential information to bring to their appointment.

As providers, we benefit from self-reflective, educated, and informed patients who understand the basics about the bodies they inhabit. Used well, these discussions can accelerate care in many ways.

A cheat sheet to get more out of your chatbot diagnosis:

Be specific:

  • Bad: "What's wrong with my knee?"

  • Better: "I have sharp pain on the inside of my left knee when I go down stairs. What structures could be involved and what questions should I ask my PT?"

Provide context:

  • Your age, activity level, relevant medical history

  • Timeline and pattern of symptoms

  • What makes it better/worse

  • What you've already tried and how it affected your symptoms

Ask for frameworks, rather than diagnoses:

  • "What are possible categories of conditions that cause this symptom pattern?"

  • "What would differentiate between these possibilities?"

  • "What red flags would indicate I need urgent care vs. scheduling an appointment?"

Request educational content:

  • "Explain how the rotator cuff works in simple terms"

  • "What's the difference between a strain and a sprain?"

  • "How does inflammation work in acute injuries?"

What to bring to your provider:

If you’re having persistent symptoms, see a provider. You can ask your physiotherapist or doctor for more autonomy and an organized plan with fewer sessions, and if you have access, double-checking your AI treatment plan with them is a wise choice.

Consider it a preparation tool! After going back and forth with your AI model, you can ask it for a summary of symptoms, patterns, and initial differential diagnosis.

Know that healthcare requires context and nuance, so be prepared to ask your provider for their opinion. You’re consulting them for help with your pain and initial research, not delivering verdicts and conclusions.

The benefit is in the organization of the information, but clinical expertise (at least at this point) cannot be wholly replaced by a text exchange.

Technology isn’t going away, and learning to use it well matters, especially for us young clinicians. The goal isn’t a perfect self-diagnosis but rather informed patients, increased access, and better-distributed resources.

No matter what healthcare-specific AI model comes out, your body is more complex than any algorithm can account for. Rather than replacing skilled clinical assessment, use AI to get more involved in your care and to advocate for your concerns and questions.

At ROOTS, we believe the best outcomes happen when informed patients work with skilled providers, and we encourage you to get involved. And if you do want us to review your ChatGPT treatment plan, sign up for a 30-minute virtual consultation and we’ll set you on the right path.
