Before we begin

📃 Can you use ChatGPT for resident documentation?

This was the first question we tackled in last week's AI in Practice Q&A, and the one I received most often before the session. George Margelis and Peter Kokinakos gave answers that are worth hearing in full.

We'll be releasing one clip per question over the coming weeks through this newsletter. Thank you to panellists Amanda Birkin, Dr George Margelis, Emmanoel Katris, and Peter Kokinakos for a sharp and practical discussion.

And if the session raised questions about where to start in your own organisation, Innovation Philosophy is offering a free AI Readiness Assessment for a limited number of providers — details at the end of this issue.

This week's focus

We don't need another warning

To paraphrase Tina Turner's 1985 song "We Don't Need Another Hero", from Mad Max Beyond Thunderdome: we don't need another warning about AI in aged care. What we need is action.

Anyone who has worked in the sector long enough knows the list of challenges by heart, so I won't rehash them here. These problems are real. The vast majority of providers and workers are doing good work under difficult conditions, but the structural pressures on the sector are undeniable.

So when AI and new technologies began offering practical tools to assist with admin and care delivery, you'd expect cautious optimism. Instead, I keep coming across articles, opinion pieces, and academic papers that treat technology in aged care as primarily a threat: a vehicle for dehumanisation, surveillance overreach, and the erosion of relational care. And I hear these arguments echoed in boardrooms and policy discussions, often by people who seem to be looking for reasons to delay adoption rather than engage with it. I won't link to specific articles; search Google News for "AI in aged care" and you'll find several recent examples.

"We Don't Need Another Hero" was the lead single from the soundtrack of Mad Max Beyond Thunderdome (1985), where Tina Turner (pictured) also starred as the villain Aunty Entity.

The concerns raised in articles like these aren't baseless. But a pattern has emerged where the critique stops at warning and never gets to building. Where promotional language on a company website is treated as evidence of what a technology will do in practice. Where the standard applied to a proposed solution is perfection — a standard the sector has never met without technology either.

The workforce pipeline will not fix itself overnight. Political will to fund aged care properly has been inconsistent for decades. While we debate whether every new tool meets every possible requirement, people are receiving less care than they need.

Imperfect action beats indefinite debate

During my recent trip to China, I learned about a local government project that installed fixed personal alarm systems for older people living alone in remote parts of a particular district. Quick, low-cost, extended coverage.

Of course you could pick it apart. The alarm is fixed to one spot, so someone who falls in another room may not reach it. It doesn't account for cognitive impairment. And so on.

All true. But every older person living alone in that area now has a personal alarm. The system can be improved over time — but only because it exists.

Technology in aged care doesn't have to be perfect to be better than what's currently available. A fall detection sensor that catches 80% of incidents is still an improvement over a facility where night staff can't physically check every room. Automated documentation that saves nurses 30 minutes per shift is 30 minutes that didn't exist before. These gains are concrete and measurable, even if the tools that deliver them are imperfect.

Criticism should build, not just warn

The aged care sector needs critical voices. It needs people asking hard questions — and in the case of technology, asking whether it's being deployed in the interests of older people or just in the interests of cost reduction.

But that criticism is only useful if it engages with implementation: what safeguards should look like, how override mechanisms should work, what good adoption looks like in practice. Criticism that begins and ends with "this could go wrong" gives policymakers and providers permission to do nothing. And doing nothing, in a sector where people are already underserved, has its own cost.

We've had enough warnings. It's time we gave the same rigorous attention to making the solutions work.

What’s coming up

Sessions and events

Invox Support at Home National Conference

21–22 April | Marvel Stadium, Melbourne & online

Over two days, the sessions cover what the first phase of the Support at Home rollout has taught us — what's working, what needs to change, and where the gaps in funding, guidance, systems, workforce, and accountability are showing up.

For senior staff, it's a chance to step above the day-to-day noise and discuss strategic priorities: what to fund, what to stop, and how to lead teams through the next phase. For operational managers and frontline leaders, it's about practical planning: processes that work, clearer decisions, and tested ideas already delivering results in services.

I'll be part of a panel on technology and systems, one of the five conference themes, alongside workforce, financial sustainability, and compliance.

AI — What's Next for You and Your Organisation

23 April | 12–1pm AEST | Online | Free | Hosted by Anglicare Sydney

I'm presenting this session aimed at both individuals figuring out how AI fits into their own work and organisations working through how to adopt it responsibly. I'll cover where AI is now, where it's heading, and how to take practical next steps, with an honest conversation about what works, what doesn't, and how to move forward with confidence. There'll be time for questions.

This week’s picks

Three links worth your time

1 — AI glasses that help people with dementia identify household objects

AI-powered smart glasses have won the £1 million Longitude Prize on dementia. The glasses use an embedded AI assistant that guides wearers through daily tasks using verbal cues and text that floats in front of their eyes. It can also ask questions, hold light conversation, and support reminiscence.

What makes this notable is the result. In testing with 23 pairs of people living with dementia and their carers, participants went from correctly naming 46% of household objects without the glasses to 82% with them. An hour after taking the glasses off, recognition was still at 78%. That persistence suggests the prompts are reinforcing cognition, not just compensating for it in the moment.

2 — AI fluency may become a professional divide


Anthropic published its latest Economic Index this month, analysing over one million conversations on its Claude platform. Experienced AI users get measurably better results than newcomers (a 10% higher success rate after six months of use), and the gap isn't explained by task type or geography. It's a skills gap.

The Axios write-up frames this sharply: a two-tier workforce is forming in real time, and the (US) government doesn’t have a plan for the people on the wrong side of it.

3 — Pricing transparency is the most common compliance issue

The Aged Care Quality and Safety Commission's latest bulletin flagged pricing transparency as one of the most common compliance issues found during prudential reviews of Support at Home providers. The new Aged Care Act introduced stronger consumer rights and registration requirements, and pricing transparency sits at the centre of how families compare providers. If your pricing isn't clear, comparable, and consistent with what's being delivered, that's now a compliance risk (as well as a reputational one).

Working with tech

Loopy: a free tool for thinking in systems

This week's recommendation comes via Professor Rick Watson, who introduced me to this powerful little tool.

Loopy is a free, open-source tool that lets you build interactive system simulations by drawing circles and arrows. You sketch the components of a system, connect them with causal relationships (this goes up, that goes down), and press play to watch the dynamics unfold.

It takes about ten minutes to learn. The interface is deliberately minimal — you draw, you label, you connect. The simulations run in real time. You can share and remix other people's models.

It's free, in the public domain, and runs in a browser.
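If you're curious what "press play" is actually doing, the core idea is simple enough to sketch in a few lines of code. This is my own minimal illustration of a causal-loop model, not Loopy's actual engine: nodes hold values, and each signed link nudges its target up or down every tick. The variable names ("workload", "burnout", "automation") are a hypothetical example I've chosen for this newsletter's context.

```python
def step(values, links, rate=0.1):
    """Apply each signed link once: src pushes dst up (+1) or down (-1)."""
    deltas = {name: 0.0 for name in values}
    for src, dst, sign in links:
        deltas[dst] += sign * values[src] * rate
    return {name: values[name] + deltas[name] for name in values}

# A reinforcing loop (workload feeds burnout, burnout feeds workload)
# with one balancing influence (automation reduces workload).
values = {"workload": 1.0, "burnout": 0.0, "automation": 0.5}
links = [("workload", "burnout", +1),
         ("burnout", "workload", +1),
         ("automation", "workload", -1)]

for _ in range(10):
    values = step(values, links)
print({k: round(v, 2) for k, v in values.items()})
```

Loopy lets you build exactly this kind of model by drawing instead of coding, which is why it's such an effective way to show a board or a team how a feedback loop behaves before anyone commits to a spreadsheet.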

From the Network

Special offer to webinar attendees, by Innovation Philosophy

Innovation Philosophy is offering an AI Readiness Assessment at no cost for a limited number of providers. This is a short, working session where they review your current systems, workflows, and priorities to identify where AI can be introduced safely and effectively, without creating compliance risk.

You'll walk away with a clear view of where the real opportunities are, what to avoid, and a simple, low-risk starting point tailored to your environment. If there are specific processes or concerns you'd like them to focus on, they can incorporate those into the session.

PULSE — the digital health podcast worth subscribing to

Dr George Margelis co-hosts a podcast with Dr Louise Schaper called PULSE. If you found George's perspective on AI governance useful in the Q&A last week, this is more of that — an informed, opinionated look at global digital health trends and current debates, covering aged care, health IT, and the policy conversations around both. Louise and George have fifty years in digital health between them, and the episodes are accessible without a technical background.

Available on all major podcast platforms.

Thanks for reading

Each week, I review developments in ageing and aged care and what they mean in practice. If this was useful, forward it to someone in the sector who'd appreciate it.

George Gouzounis

Looking for a speaker?
