10 Ways to Reduce AI Hallucinations in Your Small Business

It is Monday morning. You are drafting an email to a client, pulling together a proposal, or trying to summarise last week’s meeting notes before the next call. Increasingly, small businesses turn to artificial intelligence as a trusted assistant in moments exactly like these. Tools like ChatGPT, Microsoft Copilot, Google Gemini and Claude are now used daily for writing content, responding to customers, drafting proposals, doing research and handling admin.

Most of the time, they are impressively helpful. Occasionally, they are confidently wrong. These confident errors are known as AI hallucinations.

What is an ‘AI hallucination’?

An AI hallucination is when an AI tool generates information that sounds plausible but is inaccurate, incomplete or entirely made up. The key risk is not that the answer is wrong, but that it sounds right.

In practice, this might look like a convincing statistic that does not exist, a quotation attributed to the wrong person, a law or regulation that is out of date or fictional, a source or URL that leads nowhere, or a detailed explanation built on a false assumption.

These issues can appear in any modern AI system, and they are a normal part of working with AI rather than a sign that you are using it incorrectly. They are not a bug in one product; they are a limitation of how large language models work.

Why hallucinations happen

AI systems do not understand facts in the human sense; they predict the most likely sequence of words based on patterns in their training data.

When the model has strong information to work from, results are usually solid. When information is missing, vague or ambiguous, the model may attempt to fill the gaps rather than admit uncertainty, partly because these tools are designed to be helpful and provide an answer rather than leave a question unresolved.

Hallucinations are more likely when the question is broad or poorly defined, when multiple complex tasks are bundled into one prompt, when the topic involves regulation, law or niche technical detail, or when the user expects a single definitive answer where none exists.

A useful mental model is to treat AI like a very capable intern: fast, articulate and enthusiastic, but not the person who signs things off or carries legal or reputational responsibility. Smart, but sometimes too confident for its own good.

Why this matters for small businesses

Small businesses increasingly rely on AI for everyday work: blogs, emails and social posts, proposal drafts and tenders, research into markets or competitors, plus help with internal processes and workflows.

If an AI-generated output contains errors, the consequences can be real. That might mean reputational damage, poor business decisions, misleading marketing, or compliance problems in regulated areas.

A recent public example illustrates the point. West Midlands Police faced criticism after Maccabi Tel Aviv supporters were reportedly banned from attending a fixture, partly due to inaccurate information generated by Microsoft Copilot that incorrectly linked them to disorder at a West Ham match. In reality, Maccabi Tel Aviv had never played West Ham. This was not a small-business situation, but it shows how a confident AI error can influence real-world decisions when its output is not properly verified.

AI is a powerful assistant, but it is not the final decision-maker. Responsibility always stays with the business.

Common warning signs to watch for

There are some consistent red flags that suggest an answer may need closer scrutiny:

  - Very precise numbers with no clear source
  - Quotes from experts that are hard to verify
  - Confident legal or medical claims
  - Links or citations that do not resolve
  - Links to AI-generated content presented as authoritative sources
  - Overly technical explanations for simple questions
  - Answers that feel suspiciously neat or absolute

When something triggers a gut check, that instinct is usually worth listening to.

How to reduce hallucinations with any AI tool

The good news is that most hallucinations can be reduced significantly with better ways of working. This is where good practice makes the difference, regardless of whether you are using ChatGPT, Copilot, Gemini, Claude or another platform.

  1. Use a reasoning model when it is available. In ChatGPT, for example, choosing ‘Thinking’ mode can improve accuracy on complex tasks because the tool takes more time to reason through the answer rather than rushing to the first plausible response.
  2. Ask the AI to show its working. Simple instructions such as asking it to list sources, assumptions and limitations immediately discourage invention. If you are using a Deep Research feature or a ‘thinking’ mode, you may see elements of its reasoning steps surfaced automatically, which can further help you assess reliability.
  3. Use evidence tables for anything factual. Asking the AI to separate claims from sources and add a confidence rating turns it from a storyteller into a research assistant (see the first example prompt after this list). For consistency, add this requirement into your Custom Instructions so evidence tables become your default rather than something you remember to request occasionally.
  4. Break complex requests into smaller steps. AI performs far better when tasks are clear and focused rather than bundled together.
  5. Double-check anything high-stakes. Speed is AI’s strength; verification is still a human job, especially for legal, financial, health or compliance content.
  6. Encourage the AI tool to be honest about uncertainty. Prompts that invite the tool to admit doubt or highlight weak areas consistently improve accuracy. You can also add a note in your Custom Instructions requiring the AI to state confidence levels or highlight areas of uncertainty by default.
  7. Put a clear approval and sign-off process in place, especially for external communications such as press releases, public statements or website claims created with AI. No externally published content should go live without human review and accountability.
  8. Ask the AI to review its own answer, and where appropriate, use a different AI tool to fact check and verify key claims. A second pass, especially from another model, can expose weak reasoning, unsupported statistics or invented sources.
  9. Ground the AI in your own material wherever possible. Upload documents, paste reference text or link to authoritative sources. The more context you provide, the less guessing it needs to do. In a licensed version of Microsoft Copilot, your work information such as emails, files and meetings is automatically used to ground responses within your organisation’s permissions, which further reduces guesswork for internal queries.
  10. Use a consistent prompting framework. In my own Pollinger’s Prompting Pillars, this means being clear on the Goal, Context, Persona, Tone and Style, Specifics, and Examples, which leads to more predictable and reliable outputs (a sketch using these pillars follows the list below).
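
To make tips 2, 3 and 6 concrete, here is an illustrative prompt you might adapt to your own task. The topic and wording are invented for the example, not a fixed template:

“Summarise the current UK rules on holiday pay for part-time staff. Present every factual claim in an evidence table with three columns: the claim, the source you are relying on, and your confidence in it (high, medium or low). List any assumptions you have made, and tell me plainly where you are uncertain or where the rules may have changed recently.”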
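
And here is a sketch of how a prompt built on the Prompting Pillars might look. The scenario is invented purely to illustrate the structure:

Goal: draft a 400-word blog post announcing our new opening hours.
Context: we are a family-run bakery; the new hours and start date are in the notes pasted below.
Persona: write as the owner speaking to regular customers.
Tone and Style: warm, plain English, no jargon.
Specifics: include the new hours and the start date, and do not invent any other details.
Examples: match the voice of the two previous posts pasted below.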

How different AI tools approach reliability

Most modern AI tools now include features designed to reduce hallucinations, but none remove the risk entirely. ChatGPT, Gemini and Claude work best when prompts are well structured and supported by clear reference material. They are excellent for drafting, summarising and exploring ideas, but they still require fact checking.

Microsoft Copilot (licensed version) adds an extra layer by grounding responses in your organisation’s data, such as documents, emails and meetings you already have access to. This makes it particularly reliable for internal questions and business-specific tasks, although outputs still need review.

The important point is that no tool should be treated as an unquestionable authority. 

When to trust AI and when not to

AI is well suited to drafting, brainstorming, summarising, rewriting, formatting and creating templates. It is also excellent for pointing you in the right direction during research.

It should not be relied on alone for legal advice, financial calculations, health or safety guidance, regulatory compliance or HR policies. In these areas, AI can assist, but not decide.

A practical rule of thumb is that AI can often get you most of the way there quickly, but the final judgement still belongs to a human.

Final thoughts

If you do just one thing to reduce hallucinations, update your custom instructions so the AI is always asked to present factual information in an evidence table, showing claims, sources and confidence levels.
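
As a starting point, wording along these lines works in most tools’ custom instruction settings (adjust it to suit your business):

“Whenever you state facts, present them in an evidence table with three columns: claim, source, and confidence (high, medium or low). Flag any claim you cannot source, and say clearly when you are uncertain.”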

AI hallucinations are not a reason to avoid AI. They are a reason to use it well. Small businesses that learn how to prompt clearly, verify intelligently and integrate AI into real workflows will save time, improve quality and avoid costly mistakes. Those that treat AI output as unquestionable risk learning the hard way.

Used properly, AI becomes a dependable expert assistant rather than a liability.

Mike Knight