The Hidden Risks of Using AI

Artificial Intelligence (AI) tools like ChatGPT, Claude, and other generative AI systems have become mainstream in workplaces across Singapore. While these tools offer productivity and automation benefits, using AI without proper governance, security controls, or organisational policies can expose businesses to data leaks, compliance violations, and operational risks.


Below are the key risks, along with real-world cases that show why businesses must adopt AI safely and strategically.


1. Employees Are Using AI Tools Without Approval


According to Salesforce’s 2024 State of Data and AI Report, 68% of employees globally have used generative AI tools without their employer's approval. In high‑adoption markets like Singapore, unauthorised AI use is accelerating.


Risks:

- Sensitive information may be pasted into public AI systems.

- No audit trail exists for AI-generated decisions.

- Greater exposure to data leakage and non-compliance.


2. Confidential Data May Be Unknowingly Leaked


In 2023, Samsung employees accidentally leaked confidential source code after pasting it into ChatGPT.


Cyberhaven’s 2024 Data Security Report found:

- 11% of data employees paste into ChatGPT is confidential.

- 4% contains regulated or sensitive personal data.


Any information entered into a public AI model may be stored, logged, or used for model training unless enterprise safeguards are in place.


3. AI Hallucinations Lead to Real Business Risk


A Cornell University study found that AI systems hallucinate 15–20% of the time depending on query type.


This can cause:

- Incorrect financial summaries

- Misleading operational recommendations

- Faulty legal or compliance interpretations

- Code errors that create vulnerabilities


Without guardrails, AI misinformation may appear convincing but be dangerously wrong.


4. Regulatory Exposure Under PDPA, MAS, and IMDA


Singapore’s regulatory bodies are tightening expectations around AI usage.


PDPA:

- Sharing personal data with public AI tools without consent may result in financial penalties of up to S$1 million, or 10% of annual local turnover for larger organisations, whichever is higher.


MAS:

- Financial institutions must ensure explainability, traceability, and AI governance.


IMDA:

- The Model AI Governance Framework sets out expectations for responsible AI deployment across organisations.


Using unmonitored AI tools exposes companies to compliance and legal risks.


5. “Shadow AI” Is Becoming a New Cybersecurity Threat


IBM’s 2024 Threat Intelligence Index notes that AI misuse is now a top emerging cyber threat.


Risks include:

- AI-generated phishing attacks

- Fake AI tools used to extract data

- Employees unknowingly interacting with malicious AI agents


Without proper controls, companies face attack vectors they have never had to defend against before.


6. General AI Models Are Not Optimised for Your Business


Studies show AI accuracy improves by 40–60% when models are customised with company-specific knowledge.


Without adaptation, AI responses may be:

- Generic

- Misaligned with internal policies

- Incorrect for your industry

- Potentially risky for compliance


7. Lack of AI Governance Causes Misuse and Reputational Damage


Gartner reports that:

- 70% of companies have no AI governance framework.

- 48% do not track AI output quality.

- Only 20% have guidelines for employee AI use.


Without governance, AI decisions may be biased, unethical, or dangerous.


Summary


AI offers transformational benefits, but only when deployed responsibly.

Without security, governance, and proper setup, companies risk data leaks, regulatory breaches, cyber threats, and operational errors.


Working with a trusted AI partner ensures your organisation adopts AI safely, securely, and strategically.


Keen to find out more about using AI safely?

Let's chat to see how AISI can help improve your use of AI.



References & Source Links:

- Salesforce 2024 State of Data & AI Report

- Cyberhaven 2024 Data Security Report

- Samsung ChatGPT Incident (Bloomberg)

- Cornell AI Hallucination Study

- IBM 2024 Threat Intelligence Index

- Gartner AI Governance Statistics
