4iT IT Support Sydney | Your Reliable Sydney IT Support Partner


Shadow AI in Australian SMEs: what to do about staff using ChatGPT at work

Shadow AI refers to employees using AI tools (ChatGPT, Claude, Gemini, Perplexity, Copilot personal accounts, and so on) for work without IT approval, sanctioned licensing, or policy guardrails. Recent industry surveys put shadow AI usage at over 70% of knowledge workers in Australian SMEs, with many pasting confidential client data, financial figures, or unreleased information into consumer AI tools. The right response isn't a blanket ban (which doesn't work) but visibility, policy, and a sanctioned alternative employees actually want to use.


Key facts

  • An estimated 70-80% of knowledge workers use AI tools at work, with most usage happening outside IT's sanctioned tools.
  • Consumer ChatGPT, Gemini, and Claude free tiers historically retained user inputs for model training; paid and enterprise tiers do not.
  • Australia's Privacy Act APP 1.7 (commencing 10 December 2026) requires disclosure of automated decision-making in privacy policies, including AI tools.
  • Shadow AI traffic typically reaches AI services through a web browser, whether on the corporate network, a personal device, or home internet, which makes URL filtering the primary detection point on managed networks.
  • Microsoft Purview and similar DLP tools can detect and warn on sensitive data sent to AI URLs without blocking outright.
  • Sanctioned AI alternatives (M365 Copilot, ChatGPT Enterprise, Claude for Work) range from AU$25-45 per user per month ex GST.

What is shadow AI and why is it everywhere?

Shadow AI is the AI version of shadow IT: tools brought in by individual employees to do their jobs better, without going through procurement, IT, or any formal approval. The defining characteristic is that the organisation typically has no visibility into who's using which tools, with which data, for which purposes.

The reason shadow AI is so widespread is structural. Generative AI delivers immediate productivity gains for most knowledge work tasks: drafting emails, summarising documents, transforming spreadsheets, analysing reports, generating code. Employees discover these benefits within minutes and adopt the tools the same day. Meanwhile, the organisation's formal AI strategy is still being written, and any sanctioned alternative is months away from rollout.

In our experience working with Sydney SMEs through 2025 and into 2026, the gap between when employees start using AI and when the organisation has a sanctioned position is typically 6-18 months. During that window, shadow AI fills the vacuum.

What are the actual risks of shadow AI for an SME?

Three risks matter for most Australian SMEs, in roughly this order of priority.

Data leakage to consumer AI services. Free-tier ChatGPT, Gemini, and Claude have variously retained user inputs for model training over the past three years. Even with current opt-out toggles, employees pasting client lists, financial figures, draft contracts, or confidential strategy documents into consumer AI tools means that information leaves your control. Once it's in someone else's training pipeline or logging system, you can't get it back.

Privacy Act exposure. Most shadow AI usage involves personal information (employee details, customer information, project staffing decisions). The amended Privacy Act's APP 11 requires "reasonable steps" including "technical and organisational measures" to protect personal information. Letting staff paste customer data into unmonitored consumer tools is increasingly hard to defend as reasonable steps. From 10 December 2026, automated decision-making transparency rules add a further obligation.

Quality and accuracy in client-facing output. AI-generated content with hallucinations or errors going to clients is a brand and legal risk. Unsanctioned AI usage often happens without policy guidance on which tasks AI is appropriate for, and which require human verification. The first time an SME accountant submits AI-generated tax advice to a client without checking the calculations is the day the lawyer gets called.

Why doesn't banning AI work?

Bans don't work for three reasons. First, AI tools are usable from any device with a web browser, including personal phones on mobile data, which makes a corporate network ban trivially easy to bypass. Second, employees who find AI useful for their work won't stop using it because of a policy; they'll just stop telling anyone they use it. Third, banning AI puts the organisation at a competitive disadvantage against businesses that have figured out how to use it productively.

The pragmatic position is governed AI: visibility into what's being used, policy that defines acceptable use, and a sanctioned tool that's good enough that employees prefer it to the consumer alternatives.

How do you get visibility into shadow AI usage?

There are three layers of visibility, ordered from least to most invasive.

Network-level visibility via firewall logs or DNS-based filtering (Sophos, Cisco Umbrella, Cloudflare Gateway). You can see which AI services are being accessed, by whom, and how often. You can't see what's being sent to them, but you get a picture of usage patterns and shadow AI prevalence.
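As a rough illustration of what this layer gives you, here is a minimal sketch that tallies AI-service lookups per user from an exported DNS or firewall log. The `user,domain` log format and the domain list are assumptions for the example, not the export format of any specific product; adapt both to whatever your firewall or DNS filter actually produces.

```python
# Illustrative sketch: tally AI-service lookups per user from a DNS/firewall
# log export. Log format ("user,domain" per line) and domain list are
# assumptions -- adjust to your product's actual export.
from collections import Counter

AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "www.perplexity.ai",
}

def tally_ai_usage(log_lines):
    """Count AI-service lookups per user from 'user,domain' records."""
    counts = Counter()
    for line in log_lines:
        user, _, domain = line.strip().partition(",")
        if domain in AI_DOMAINS:
            counts[user] += 1
    return counts

log = [
    "alice,chatgpt.com",
    "alice,claude.ai",
    "bob,intranet.example.com",
    "alice,chatgpt.com",
]
print(tally_ai_usage(log))  # alice: 3 AI lookups, bob: none
```

Even this crude a tally answers the first governance question (who is using what, how often) without inspecting any message content.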

Endpoint visibility via Microsoft Defender for Cloud Apps, Sophos data loss prevention, or similar. These tools detect when sensitive data patterns (credit card numbers, customer record formats, document classifications) are being sent to AI service URLs. The good ones can warn the user without blocking, which catches the unintentional cases without breaking workflows that have legitimate use.
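To make the "warn, don't block" idea concrete, here is a minimal sketch of the kind of pattern check these tools run: spot likely payment card numbers (validated with the standard Luhn checksum) in outgoing text and raise a warning for the user rather than a hard block. Commercial DLP products like Purview and Sophos use far richer classifiers; this only shows the shape of the technique.

```python
# Illustrative "warn, don't block" DLP check: flag likely card numbers in
# text before it is pasted into an AI tool. Real DLP engines use many more
# sensitive-data classifiers; this shows only the basic pattern + checksum.
import re

# 13-16 digits, optionally separated by spaces or hyphens
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum used to validate card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def dlp_warnings(text: str) -> list[str]:
    """Return warnings for likely card numbers; caller decides warn vs block."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        raw = match.group()
        if luhn_ok(raw):
            hits.append(f"Possible card number ending {raw[-4:]}")
    return hits

print(dlp_warnings("Invoice paid with 4111 1111 1111 1111 yesterday"))
# -> ['Possible card number ending 1111']
```

The design choice worth copying is the return value: the function reports, and the policy layer decides whether that means a pop-up warning, a log entry, or a block.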

Microsoft Purview data classification ties content sensitivity to enforcement. If a document is classified as "Confidential" via Purview labels, you can prevent that document's content from being copied into a browser session targeting an AI URL, or warn the user that they're about to do something policy-risky. This is the cleanest implementation but requires existing M365 E5 or similar licensing tier.

What should an SME's AI policy actually say?

Avoid the 30-page policy document no one reads. The useful AI policy for an SME is one to two pages and covers six things plainly.

Sanctioned tools. Name the AI tools the organisation has approved (typically M365 Copilot, plus one external option for tasks Copilot doesn't cover well). State that other tools require explicit approval before use with company data.

Data classification rules. Plain English on what categories of data can be put into which AI tools. Customer personal information goes into sanctioned tools only. Public marketing copy can go anywhere. Financial figures and unreleased strategy go into sanctioned tools only.

Output review obligations. AI-generated content that goes to clients, regulators, or external parties must be reviewed by a human with the relevant expertise. AI-generated content for internal drafts doesn't need the same scrutiny.

Disclosure rules. Where AI is used in customer-affecting decisions, the organisation discloses that in line with APP 1.7 obligations from December 2026.

Training and prohibited uses. Specific tasks AI must not be used for (legal advice, regulated medical or financial guidance, anything where a hallucination has irreversible consequences), plus a short induction so staff know the policy exists and why it matters.

Reporting path. What to do when an employee accidentally exposes sensitive data via AI, and how to ask permission for new AI tools without going through a six-month procurement process.

What's the right sanctioned AI to offer staff?

For most Australian SMEs already on Microsoft 365, Microsoft 365 Copilot is the natural sanctioned default. Pricing sits around AU$45 per user per month (annual commitment) and the tool covers the bulk of knowledge worker tasks (Word drafting, Excel analysis, Outlook triage, Teams meeting summaries) with enterprise data residency and no training on customer prompts.

For SMEs not on M365, or where Copilot is genuinely too expensive for the use case, ChatGPT Enterprise, Claude for Work, or Gemini for Workspace each offer business-tier alternatives with similar data protection guarantees. Pricing varies but mostly sits in the AU$25-45 per user per month range.

The discipline is matching tools to roles. Not every employee needs a Copilot license. Across the Sydney SMEs we work with, 30-60% of staff actively use sanctioned AI when it's offered; the rest don't need it. Targeted licensing for the people who actually use AI is much cheaper than blanket adoption, and easier to govern.

Frequently asked questions

What is shadow AI?

Shadow AI refers to employees using AI tools like ChatGPT, Claude, or Gemini for work tasks without IT approval, sanctioned licensing, or organisational policy guidance. Industry surveys suggest 70-80% of knowledge workers use AI at work, with most usage happening outside formal IT awareness or governance.

Is it illegal for employees to use ChatGPT at work in Australia?

Not inherently illegal, but it can create regulatory exposure under the Privacy Act if personal information is involved. APP 11 requires "reasonable steps" to protect personal information, and pasting customer data into unmonitored consumer AI tools is increasingly difficult to defend as reasonable steps. From December 2026, APP 1.7 also requires disclosure of automated decision-making, which can include AI tool usage in customer-affecting workflows.

Does the paid version of ChatGPT keep my data private?

ChatGPT Team, ChatGPT Enterprise, and the API by default do not use customer prompts for model training. Free ChatGPT historically did, with opt-out toggles available. The same general pattern applies to Claude (paid plans don't train on user data; free plans have variable defaults), Gemini, and Copilot. Always verify current terms before assuming, as policies have changed multiple times since 2023.

How can I stop staff from using ChatGPT without banning it?

The pragmatic path is governed AI rather than banning. Provide a sanctioned alternative (M365 Copilot or similar) that staff actually want to use, write a short clear AI policy that defines acceptable use, deploy data classification and DLP tools to detect risky usage, and accept that some shadow AI will continue. Banning drives usage underground and creates more risk, not less.

Do I need an AI policy if I'm a small business?

Yes. AI policy isn't size-dependent. A 10-person professional services firm has the same Privacy Act obligations as a 100-person firm when handling client information through AI tools. The good news is the policy itself can be short (1-2 pages) for an SME, focused on sanctioned tools, data classification, output review, and prohibited uses.

If you'd like a hand getting visibility into shadow AI usage in your business, drafting a workable AI policy, or rolling out M365 Copilot or another sanctioned alternative, we can run an AI governance review tailored to where your SME sits today and what your staff are actually doing with AI.

Brett Muscio

About the author

Brett Muscio is the Director of 4iT Support Pty Ltd, a managed services provider based in Castle Hill, NSW. He works with SME clients across Sydney, Melbourne, and Brisbane on AI governance, Microsoft 365 environments, Purview data protection, conditional access, and IT advisory, with on-site support across the Sydney metro area and remote delivery nationally. Connect on LinkedIn.
