Back to Blog
AI Ethics · Responsible AI · GDPR · EU AI Act · Bias · Transparency · Audit Trail · SMB

AI Ethics and Responsible AI — What Does It Mean in Practice for an SMB?

Ádám Zsolt & AIMY
9 min read

Why Should a 15-Person Company Care About AI Ethics?

Because AI ethics isn't just Google's and Meta's problem. If your company uses an AI assistant and it:

  • Sends an offer to the wrong customer → business damage, reputational risk
  • Makes biased suggestions (e.g., favoring certain customer groups) → discrimination
  • Leaks personal data in its responses → GDPR fines (up to 4% of global annual turnover)
  • Makes autonomous decisions where a human should → loss of trust

AI ethics isn't a collection of abstract principles — it's practical risk management that protects your company's money and reputation.


The 5 Ethical Pillars Every AI-Using Company Should Know

1. Transparency — "I Know I'm Talking to an AI"

The rule: The customer always knows they're dealing with AI, not a human.

In the EU, this isn't merely an ethical question — the EU AI Act (Regulation 2024/1689, phased in between 2025 and 2027, with most obligations applying from August 2026) requires AI systems to identify themselves in interactions.

What does this mean in practice?

  • The AI chat clearly indicates: "AIMY AI Assistant" — not "Kate from customer service"
  • AI-generated emails include a label: "This message was created with AI assistance"
  • The dashboard shows which tasks were created by humans and which by AI

The trap: Many think AI works better if it "seems human." Maybe short-term — but the loss of trust after being discovered is irreparable. Transparency isn't weakness, it's a competitive advantage.
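The labeling practice above can be sketched in a few lines. This is a hypothetical helper, not any specific product's API; the disclosure wording and function name are assumptions for illustration:

```python
# Hypothetical sketch: appending an AI-disclosure label to outgoing messages.
AI_DISCLOSURE = "This message was created with AI assistance."

def label_ai_message(body: str, ai_generated: bool) -> str:
    """Append the disclosure footer when (and only when) the message was AI-generated."""
    if not ai_generated:
        return body
    return f"{body}\n\n--\n{AI_DISCLOSURE}"

print(label_ai_message("Your invoice is attached.", ai_generated=True))
```

The point of centralizing this in one function is that no AI-generated message can leave the system unlabeled by accident.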

2. Human Oversight — "AI Suggests, Humans Decide"

Autonomous AI doesn't mean unsupervised AI. Responsible systems work with three-tier control:

| Level | How it works | When to use |
|---|---|---|
| Notification | AI suggests but doesn't act | Onboarding, trust-building phase |
| Suggestion + approval | AI prepares it, but you press the button | Live use, medium risk |
| Action + reporting | AI executes, reports afterward | Low risk, proven processes |

The key: Autonomy level is configurable per task. It can send reminder emails on its own — but shouldn't send quotes without approval. It can create tasks automatically — but should never delete a customer.

Daily limits: It's worth capping the AI's daily actions (e.g., max 50 operations/day). This prevents a bad prompt from causing a cascade failure — it can't "go off the rails."
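The three-tier control plus a daily cap can be expressed as a simple gate. The task names, policy mapping, and the limit of 50 are illustrative assumptions, not a prescribed configuration:

```python
# Illustrative sketch: per-task autonomy levels with a daily action cap.
from enum import Enum

class Autonomy(Enum):
    NOTIFY = 1          # AI suggests only, never acts
    APPROVE = 2         # AI prepares, a human presses the button
    ACT_AND_REPORT = 3  # AI executes on its own, reports afterward

# Example policy: low-risk tasks get autonomy, risky ones never do.
TASK_POLICY = {
    "send_reminder": Autonomy.ACT_AND_REPORT,
    "send_quote": Autonomy.APPROVE,
    "delete_customer": Autonomy.NOTIFY,  # never autonomous
}

class ActionGate:
    def __init__(self, daily_limit: int = 50):
        self.daily_limit = daily_limit
        self.count = 0  # reset once a day in a real system

    def allow(self, task: str, approved: bool = False) -> bool:
        """Return True if the AI may execute this action right now."""
        if self.count >= self.daily_limit:
            return False  # cap reached: a bad prompt cannot cascade
        level = TASK_POLICY.get(task, Autonomy.NOTIFY)  # unknown tasks default to suggest-only
        if level is Autonomy.NOTIFY:
            return False
        if level is Autonomy.APPROVE and not approved:
            return False
        self.count += 1
        return True
```

Note the default: any task not explicitly listed falls back to suggest-only, so forgetting to configure something fails safe.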

3. Bias — "The AI Learned What You Gave It"

AI is exactly as biased as the data it's built on. If the "VIP customer" category in your CRM exclusively contains men over 40, the AI will optimize for that — not out of malice, but because that's what it saw in the data.

SMB context — common bias traps:

| Trap | Example | Solution |
|---|---|---|
| Historical bias | Legacy customer data is skewed (e.g., no online booking, only phone → older demographic overrepresented) | Regular data audits, category analysis |
| Language bias | LLMs "understand" English better than other languages → weaker relevance for local-language text | Embedding models tested for your language, local golden dataset |
| Selection bias | You only have data on "good" customers (who came back), not the ones who churned | Deliberate tracking of churning customers |
| Automation bias | The team blindly trusts AI suggestions without checking | Regular "AI audit" days, random spot checks |

The most important rule: If AI influences decisions (who to call, who gets a discount, who gets prioritized), regularly check that the suggestions aren't discriminatory.
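A minimal bias spot check might compare how often the AI recommends customers from each group. This sketch uses the classic "four-fifths" threshold as an assumption; the group labels and data shape are hypothetical:

```python
# Illustrative bias spot check: per-group selection rates and a disparity flag.
from collections import Counter

def selection_rates(customers: list[dict]) -> dict[str, float]:
    """Fraction of each group's customers that the AI suggested."""
    totals: Counter = Counter()
    picked: Counter = Counter()
    for c in customers:
        totals[c["group"]] += 1
        if c["suggested"]:
            picked[c["group"]] += 1
    return {g: picked[g] / totals[g] for g in totals}

def flag_disparity(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag if the least-suggested group's rate falls below threshold x the highest."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo < threshold * hi
```

A flagged result doesn't prove discrimination, but it tells you exactly where to look during the next audit.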

4. Data Protection and GDPR — "AI Doesn't Forget, But You Need to Make Sure It Does"

GDPR applies to SMBs too. Using AI doesn't exempt you — in fact, it creates new risks:

The 3 Most Common GDPR Risks with AI:

a) Context leakage in multi-tenant systems

If you serve multiple clients (tenants) with a single AI system, it's critical that one tenant's data doesn't appear in another's responses. This is a technical issue, but its ethical consequence is immediate: the trust relationship ends instantly.

Solution: Every query filtered at the provider level — the AI never sees another tenant's data.
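The provider-level filter means the scoping happens before any ranking or generation, so the AI never even sees out-of-tenant data. A minimal sketch, with illustrative field names and an in-memory store standing in for a real database:

```python
# Hypothetical sketch: tenant scoping applied before retrieval, not after.
def retrieve(store: list[dict], tenant_id: str, query: str) -> list[dict]:
    """Return matching documents belonging only to the calling tenant."""
    scoped = [doc for doc in store if doc["tenant_id"] == tenant_id]
    # Real ranking (vector search, BM25, ...) would run here, on the
    # scoped subset only; substring matching is a stand-in.
    return [doc for doc in scoped if query.lower() in doc["text"].lower()]
```

The design choice to filter first is deliberate: if ranking ran on the full store and filtering came last, a bug in the filter would leak data, whereas here a bug merely returns nothing.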

b) The Right to Be Forgotten

If a customer requests data deletion, you need to delete not just from the CRM, but also from:

  • The knowledge graph (the customer's entity nodes and edges)
  • The embeddings (vector representations)
  • The AI conversation history

Solution: Cascade deletion — deleting the customer entity automatically removes all associated nodes and edges from the graph.

c) AI's "Memory"

LLMs (GPT, Claude) don't learn from data sent via API calls — but messages stored in conversation history are accessible for the duration of that conversation. If the AI incorporates personal data into a response, it gets stored in the chat log.

Solution: Automatic conversation history purging (e.g., 90-day retention), minimizing personal data in system prompts.

5. Audit Trail — "Always Know What the AI Did and Why"

This is perhaps the most practical ethical requirement. Every AI action must be logged:

| What do we log? | Why? |
|---|---|
| What was the action? | Traceability |
| Why did the AI decide this way? | Explainability — the AI justifies its decision |
| With what confidence? | If it sent an email with 55% confidence, that's a risk |
| Who approved it? | If no one → automatic → higher responsibility |
| What was the outcome? | Effectiveness measurement, learning |

The audit trail isn't paranoia — it's business value. If you know that AI suggestions are correct 78% of the time and irrelevant 22%, you can optimize. If you don't measure, you're flying blind.


The EU AI Act and SMBs — What to Know in 2026

The Regulation in Brief

The EU AI Act (2024/1689) is the world's first comprehensive AI regulation. It uses a risk-based approach:

| Risk level | Example | Requirement |
|---|---|---|
| Unacceptable | Social scoring, real-time facial recognition | Prohibited |
| High risk | Credit scoring, recruitment, healthcare | Registration, auditing, human oversight mandatory |
| Limited risk | Chatbot, AI-generated content | Transparency mandatory (labeling) |
| Minimal risk | Spam filter, recommendation engine | No specific requirements |

What Does This Mean for an SMB?

Most SMB AI usage falls into the limited risk category: AI chatbot, AI assistant, automatic emails. For these, transparency is the main requirement.

But beware: If AI makes decisions about customers (e.g., automatically prioritizes based on lead scoring, reduces service level based on churn risk), it can easily shift toward high risk!

Practical steps for 2026:

  1. Map out where you use AI in your company
  2. Categorize by risk level
  3. Label AI-generated content (email, chat, documents)
  4. Document the AI system's operation (what data, what purpose, what model)
  5. Maintain an audit log of AI decisions

How to Build Ethics Into Daily Practice — 5-Minute Checklist

You don't need an ethics committee and a 50-page policy. This is enough:

Weekly 5 minutes:

  • Review the AI audit log: any "strange" actions?
  • Were any AI responses obviously incorrect?
  • Is the ratio of unapproved (automatic) actions acceptable?

Monthly 30 minutes:

  • Do AI suggestions show patterns (e.g., always favoring the same customer group)?
  • Have customer feedback signals flagged problems with AI behavior?
  • Is GDPR compliance in order? (deletion requests handled, retention policy respected)

Quarterly review:

  • Are the autonomy levels still appropriate? (Can you delegate more decisions to AI, or should you pull back?)
  • Did an AI model update cause behavior changes?
  • Do new EU AI Act requirements affect you?

Summary: Ethical AI Isn't a Limitation — It's a Competitive Advantage

Responsible AI isn't about making AI do less. It's about making it work reliably and keeping your company in control.

Customers are increasingly aware: they want to know when they're talking to AI, and they expect you to handle their data responsibly. The company that communicates this transparently builds trust. The company that hides it — gets caught.

The good news: most ethical requirements are neither expensive nor complicated. Transparency, human oversight, audit trail, GDPR compliance — these aren't "extra costs," but natural properties of a well-designed system.

The question shouldn't be "how much does ethical AI cost?" — but how much does it cost if it's not ethical?


Want your AI system to operate ethically and GDPR-compliantly? Get in touch — we'll help you implement responsible AI!