šŸ‘» Did your AI just hallucinate? Yep, it happens.

Good morning, future-focused leaders.

For those celebrating, I hope you had a joyful and peaceful Easter.

This week we’re unpacking one of AI’s biggest quirks: hallucinations. These are the confident-sounding errors your AI might make—and knowing how to prevent them is a crucial step toward using these tools more safely and effectively.

What else we cover this week:

  • WordPress builds websites from a single prompt

  • Tech use may protect against cognitive decline

  • Claude AI now integrates with Google Workspace

  • New neuroprosthesis restores natural speech

  • And more...

LATEST DEVELOPMENTS

EXPLAINER

šŸ‘» AI hallucinations: when your AI assistant starts imagining things

In brief: Sometimes, AI tools like ChatGPT or Claude give you answers that sound confident but are completely wrong. These mistakes are called ā€œhallucinationsā€ā€”and understanding why they happen is the first step to using AI more effectively.

What is a hallucination in AI?
A hallucination is when an AI makes up information. It doesn’t mean the system is malfunctioning. It means the AI is producing something that sounds plausible—but isn’t true.

Examples:

  • Saying a person wrote a book they didn’t.

  • Describing a historical fact that didn’t happen.

  • Quoting a source that doesn’t exist.

The problem is that these answers sound right, and the AI delivers them confidently—so it’s easy to be misled.

Why do hallucinations happen?
AI tools don’t think like humans. They don’t ā€œknowā€ facts or ā€œunderstandā€ what they’re saying. Here’s what’s actually going on:

  • AI predicts words, not facts: ChatGPT and similar tools are trained to guess the next most likely word or phrase, based on patterns in huge amounts of text.

  • There’s no built-in fact-checker: If you ask something unusual or very specific, and the AI doesn’t know the answer, it may just fill in the blanks with something that sounds right.

  • It tries to be helpful—even when it shouldn’t: If you phrase your question in a way that assumes something is true, the AI will usually go along with it, even if it’s not.

How common are hallucinations?
In one recent example, OpenAI’s newest model (GPT-4.5) was tested on extremely tricky trivia questions—so obscure they were designed to stump the system. In that case, it got about 37% wrong.

But in everyday use—like summarising a document or drafting an email—hallucinations are much less common. Experts say they usually happen in just a small percentage of interactions, especially if you use the right approach.

How can you reduce hallucinations?
You don’t need to be a tech expert to avoid most AI mistakes. Here are four things anyone can do:

1. Understand that AI will always try to comply rather than correct you
For example, if you ask: ā€œWhat year did Nelson Mandela hike the Blue Mountains?ā€

There’s no evidence he ever did—but the AI might still give you a confident-sounding answer, because your question assumes it happened.
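A safer way to ask is to drop the built-in assumption, for example: ā€œDid Nelson Mandela ever hike the Blue Mountains? If there’s no reliable evidence, please say so.ā€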

2. Encourage the AI to say ā€œI don’t knowā€
AI is not good at admitting uncertainty, but you can prompt it to be more honest. Adding something like ā€œrespond with ā€˜I don’t know’ if you’re unsureā€ to your question will lead to fewer overconfident answers.
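For example, you could end a prompt with something like: ā€œOnly include details you are confident about. If you are unsure about anything, respond with ā€˜I don’t know’ rather than guessing.ā€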

3. Break big tasks into smaller parts
If you ask too much at once, the AI may guess to fill in the gaps. For example, if you are producing a report, don’t ask the AI to generate the full report in one go. Instead, ask it to write the introduction first, then to describe the problem, and so on. Breaking the task into smaller components also makes it easier to spot where things go wrong.
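For example, instead of asking for the whole report at once, you might start with ā€œDraft a short introduction for this reportā€, review it, and then ask for the next section, checking each part before moving on.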

4. Give it trusted information
If you want accurate results, feed the AI the right data: upload documents and data, and share links. If you are writing policy based on the new Aged Care Standards, upload the relevant documents rather than relying on the AI to search the web. This keeps it grounded in the information that is important to you and reduces the risk of made-up content.
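For example, once you’ve uploaded the documents, you could ask: ā€œUsing only the attached Aged Care Standards documents, draft a one-page policy outline. If something isn’t covered in the documents, flag it rather than filling in the gap yourself.ā€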

The bottom line:
AI is a powerful assistant, but it doesn’t know when it’s wrong. That means we need to work with it carefully—especially when accuracy matters. The good news is that with a few simple habits, you can use AI confidently and reduce the risk of hallucinations.

READY TO USE TODAY

šŸ” Wordpress’ AI interface builds entire websites from plain text prompts

In brief: I know I only recently spoke about AI being used to create full websites, but this takes it to the next level, and at a very affordable price. WordPress has launched a free-to-try AI website builder that turns a prompt into a fully designed site — and I tested it to create a website for (fictional) ā€œLife Care Centreā€ at West Beach, SA.

The details:

  • You simply describe the type of website you want and the AI designs the layout, text, and imagery.

  • The result is a clean, professional site with placeholder visuals and service descriptions.

  • You can refine the content with follow-up prompts or jump into WordPress’s editor for more detailed changes.

  • Currently, the tool only works for new sites — no support for existing WordPress websites yet.

  • It doesn’t support online stores or complex features (yet), and publishing requires a hosting plan ($160 AUD/year with free domain for the first year).

Why it matters: For small providers and community groups without tech teams, building a website can be a costly and time-consuming process. This tool makes it very easy to do it yourself (especially if you have a basic understanding of WordPress).

By the way, this is the prompt I used: 

Create a professional, welcoming, and user-friendly website design for "Life Care Centre," a nursing home dedicated to compassionate aged care services in West Beach, SA. Emphasise ease of navigation, clarity, and accessibility. The homepage should showcase the facility's services, care philosophy, and a brief introduction, along with intuitive menus directing visitors to detailed information on accommodation options, care services, resident activities, visitor information, and contact details. Use soft, calming colours that reflect warmth and professionalism, complemented by high-quality images of the facilities and interactions between carers and residents. Include clear calls to action, such as booking tours and requesting information. Use Australian English spelling.

QUICK HITS

🧠 Tech use slows cognitive decline in older adults: Older adults who regularly use digital technology may experience slower cognitive decline, according to a large meta-analysis in Nature Human Behaviour. The study found that tech users had a 58% lower risk of developing cognitive impairment and a 26% slower rate of decline, challenging fears around ā€œdigital dementiaā€ and suggesting potential cognitive benefits from staying digitally engaged.

šŸ“‚ Claude now connects with Google Workspace: Good news if you’re using Claude AI, as it can now integrate with Gmail, Google Calendar, and Docs (on Pro plans), allowing it to access your emails, schedule, and documents for faster, more personalised support. Practically, Claude can access your Gmail directly, meaning you don’t have to download and re-upload files or repeat email context.

šŸš€ A workshop reminder: AI in Action: If you are based in Adelaide, this is a hands-on workshop for aged care professionals, focused on building practical AI skills using tools like ChatGPT and Claude. Designed to be jargon-free, the session covers real-world applications in reporting, compliance, and communication — all in just two hours. It takes place next Tuesday 29 April at CO.AS.IT. SA, 215 Port Road, Hindmarsh. You can register here.

šŸ—£ļø AI translates brain signals into speech: A new neuroprosthesis (a brain-connected device that restores lost functions) developed by UC Berkeley and UCSF converts brain activity into speech in real time—recreating the user's natural voice even for untrained words. This innovation offers new hope for people with paralysis who have lost the ability to speak.

COMMUNITY

ā˜• Meet the robot helping visitors and residents get around

During a recent visit to St Anna's Residential and Home Care, I met with Athin Christou, Manager of Wellness and Technology, to explore how the facility is integrating new technologies, and the exciting innovations in the works. A standout of my visit was the robot that guided me from the reception straight to the cafƩ. Its touchscreen offers destinations like the gym and nurse stations, making it easy for visitors and residents to find their way independently.

Hi, I'm George, the editor of this newsletter. I hope you found it interesting! I'd love to hear your thoughts—feel free to connect with me on LinkedIn or check out my website to learn more about my work.
See you next week!