Before You Use That AI Scribe: 5 Hidden Risks for Healthcare Practitioners

- Saga Arthursson
- Dec 19, 2025
- 5 min read

The AI Revolution in Healthcare is Here, But So Are the Risks
Artificial intelligence is rapidly transforming healthcare, offering powerful tools that promise to streamline workflows, enhance decision-making, and improve patient care. For mental health practitioners in particular, AI-powered scribes, chatbots, and analytical tools seem poised to revolutionise clinical practice. The excitement is palpable and, in many ways, justified.
But beneath the surface of this innovation lies a landscape of hidden legal, ethical, and professional traps. Adopting these powerful new tools without a clear understanding of the accompanying risks can expose you and your practice to significant liability. Drawing on a recent discussion with MinterEllison's legal and consulting experts on AI, Sam Barrett and Chelsea Gordon, this article uncovers five critical risks every practitioner must understand before integrating AI into their workflow.
1. The Buck Stops With You: AI is a Tool, Not a Colleague
The core principle is simple: practitioners retain ultimate professional and legal responsibility for any outcomes related to the AI tools they use. This accountability applies to everything the AI touches. If an AI scribe generates inaccurate patient notes, you are responsible for the error. If a clinical decision support tool provides a flawed recommendation that you follow, the duty of care remains with you. There are already growing reports of negligence stemming from unchecked AI-generated notes, underscoring the real-world consequences.

This can be a counter-intuitive point for practitioners who might view AI as a way to offload work or delegate tasks. However, your professional duty of care cannot be handed over to an algorithm. The AI is an instrument in your hands, and you are accountable for how it is used and for the results it produces. In Australia, the position is unambiguous: clinicians remain responsible for how these systems are used and accountable for every output they generate.
2. Your Note-Taking App Might Be an Unregulated Medical Device
It may be surprising, but some AI systems, including common scribe tools, can be legally classified as "medical devices." When a tool falls into this category, it is subject to regulatory oversight by the Therapeutic Goods Administration (TGA) and must be listed on the Australian Register of Therapeutic Goods (ARTG). It is the practitioner's responsibility to verify an AI tool's status before procurement.

There have been recent instances where AI scribe systems operating as medical devices were found not to be properly registered, and using an unapproved tool exposes your practice to significant regulatory risk. This is a critical and often overlooked step, yet the diligence required is simple: search the public ARTG register to confirm the system's approval status before you commit.

Even if a tool is not classified as a medical device, a risk-based approach still applies. Systems that interact directly with patients are considered higher risk than those whose outputs remain under practitioner oversight. Understanding this distinction provides a more sophisticated framework for evaluating any new tool you consider adopting.
3. The Hidden Clause: Your Patient Data Could Be Training the AI
One of the most critical risks is buried in the fine print of service agreements. An AI provider might contractually reserve the right to use your practice's data, including highly sensitive patient information, to train its own commercial AI models. It is vital to ensure your contract explicitly forbids this practice; if the contract permits your data to be used for the provider's model training, seek legal advice. Allowing this can lead to severe consequences, including breaches of patient privacy, contraventions of your professional obligations, and unlawful cross-border transfers of data to jurisdictions whose privacy laws fall short of Australian standards. You could be unknowingly contributing your sensitive, proprietary patient data to a third party's product development, free of charge.
When reviewing contracts, seek specific warranties confirming the AI system is fit for its intended purpose, has been trained on appropriate health data, and stores information within Australia.
4. The $50 Million Mistake: Using Public AI with Private Data
The financial risk of inputting personal or sensitive health information into public AI systems like ChatGPT is immense. Under Australian law, this action can be considered a serious data breach, with potential penalties of up to $50 million or 30% of turnover.

This danger is amplified by the rise of "shadow AI": the unauthorised use of public tools by staff who are simply trying to be more efficient. An employee pasting patient details into a public AI to summarise a report creates a massive data breach risk for the entire organisation.

The rule is absolute: only fully de-identified data should ever be used with public AI platforms, and only after your practice has conducted a thorough privacy impact assessment. For handling actual patient information, the solution lies in dedicated, secure platforms. Enterprise AI solutions within secure environments pose fewer risks, provided appropriate technical and contractual protections are in place.
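To make the "shadow AI" risk concrete, below is a minimal, illustrative Python sketch of the kind of automated pre-submission screen a practice's IT support might place in front of any public AI tool. The pattern names and regular expressions are assumptions chosen for demonstration only; a simple screen like this is not a substitute for genuine de-identification, staff training, or a privacy impact assessment.

```python
import re

# Illustrative screen for obvious personal identifiers before text is sent
# to a public AI tool. The patterns below are assumptions for demonstration;
# pattern matching alone is NOT proper de-identification and does not
# replace a privacy impact assessment.
IDENTIFIER_PATTERNS = {
    "medicare_number": re.compile(r"\b\d{4}[ -]?\d{5}[ -]?\d\b"),
    "date_of_birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone_number": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
}

def screen_for_identifiers(text: str) -> list[str]:
    """Return the names of any identifier patterns found in the text."""
    return [name for name, pattern in IDENTIFIER_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    note = "Patient DOB 03/07/1985, Medicare 2123 45670 1, review in 2 weeks."
    hits = screen_for_identifiers(note)
    if hits:
        print(f"Blocked: possible identifiers detected ({', '.join(hits)}).")
    else:
        print("No obvious identifiers found; human review is still required.")
```

A check like this can catch careless copy-and-paste, but it sits alongside policy, training, and contractual controls rather than replacing them.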
5. The Indemnity Trap: You Could Be Held Liable for the AI's Failures
A dangerous contractual pitfall is emerging where AI providers insert clauses that cap their own liability at a very low amount while simultaneously requiring the practitioner to indemnify them. In simple terms, "indemnification" means that if the AI tool fails or is misused and causes damages, you would be legally and financially responsible for covering those damages, not the AI vendor. For example, if a patient-facing chatbot provides misleading information that leads to harm, this clause could make you liable for the outcome.
This is a shocking reversal of expectation. You are paying for a service, yet you could be left holding the bag for its failures. Before signing any service agreement, it is absolutely critical to have it reviewed by legal counsel. This review should also confirm that the AI provider has adequate insurance coverage and clarify whether you need to notify your own insurer about the use of the new technology.
Conclusion: Navigating the Future with Informed Caution
AI holds immense promise for healthcare, but its adoption cannot be a blind leap of faith. It requires diligence, education, and a practical framework for risk management. Proactive steps, from conducting a formal Privacy Impact Assessment (PIA) before introducing any new tool to meticulous contract review, are non-negotiable for harnessing AI's benefits safely and responsibly.

The path forward lies in ongoing education and active dialogue within the healthcare community. Practitioners should look to guidance from professional associations and specialist colleges, such as the Australian Psychological Society or the Royal College of Pathologists of Australasia, which are developing sector-specific advice. By collaborating and staying informed, we can ensure that as these powerful tools become integrated into our daily workflows, our professional judgment, backed by robust governance, remains the final, indispensable arbiter of patient care.