Patterns
AI Agent as Digital Employee
How to treat an AI agent like a new hire: probation period, own credentials, limited permissions and clear rules.
An AI agent is not a tool you install — it is a digital employee you onboard. That means: own identity, own credentials, limited permissions, probation period and EU AI Act compliance. This article covers the architecture patterns that make this possible.
The Mental Model: Agent = Employee
The most common mistake with AI agents: treating them like software tools. Install, add API key, done. That works for a single chatbot. But once an agent reads emails independently, contacts customers or writes to enterprise systems, it needs the same care as a new hire.
Concretely: own credentials, limited permissions, a probation period with gradual access expansion and clear rules for external communication.
Principle of Least Privilege
Every agent gets ONLY the permissions it needs for its job. Nothing more. Same principle as with human employees: an accountant does not need SSH access to the server.
| Area | Human Employee | AI Agent |
|---|---|---|
| Identity | Own company account, own email | Own system user, own API keys |
| Permissions | Only for their department | Only for defined doctypes/endpoints |
| Credentials | Own password, own badge | Own vault, isolated from other agents |
| Network | VPN access only for their systems | Network policy: deny-by-default, only allowlisted endpoints |
| Probation | 3-6 months, gradually more responsibility | 30 days read-only, then gradual write permissions |
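As a sketch of how a probation period can be enforced in code (the function name, field names and the second threshold are assumptions for illustration, not from the spec): permissions are derived from the agent's onboarding date, so write access only unlocks after the read-only phase.

```python
from datetime import date

# Assumed probation policy: 30 days read-only, then gradual write access.
READ_ONLY_DAYS = 30

def allowed_actions(onboarded: date, today: date) -> set[str]:
    """Return the action set an agent may perform, based on tenure."""
    tenure = (today - onboarded).days
    actions = {"read"}                  # always allowed
    if tenure >= READ_ONLY_DAYS:
        actions.add("write_draft")      # writes that still need human approval
    if tenure >= READ_ONLY_DAYS * 2:
        actions.add("write")            # direct writes to allowlisted doctypes
    return actions
```

The key design choice is that permissions are computed, not manually flipped: nobody has to remember to upgrade (or forget to downgrade) an agent.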
Credential Isolation: Every Agent Gets Its Own Vault
One of the most critical points: agents must NEVER share credentials. If Agent A is compromised, Agent B must not be affected.
Typical vault structure per agent:
```
agent-vault/
├── llm.env        # LLM provider API keys
├── services.env   # External services (TTS, email, etc.)
├── erp.env        # ERP system access (own user!)
└── identity.env   # Agent name, email, token
```

When multiple agents use the same API key, you cannot distinguish who did what in the audit log. Plus, one compromised key means ALL agents are affected. Every agent gets its own keys, own tokens, own vault.
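A minimal sketch of credential isolation at load time (the loader function and the `vault_root` path are assumptions; the file names follow the structure above): each agent process reads only its own vault directory, so a compromised Agent A never sees Agent B's keys.

```python
from pathlib import Path

def load_vault(agent_name: str, vault_root: str = "/var/vaults") -> dict[str, str]:
    """Load ONLY this agent's .env files; other agents' vaults stay untouched."""
    vault = Path(vault_root) / f"{agent_name}-vault"
    secrets: dict[str, str] = {}
    for env_file in sorted(vault.glob("*.env")):
        for line in env_file.read_text().splitlines():
            line = line.strip()
            # Skip blanks and comments; parse KEY=VALUE pairs.
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                secrets[key.strip()] = value.strip()
    return secrets
```

In production you would add filesystem permissions on top (each vault readable only by its agent's system user), so isolation does not depend on the loader behaving.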
Network Policy: Deny-by-Default
An agent should only reach the endpoints it needs for its work. Everything else is blocked. This prevents a compromised agent from accessing internal systems.
Example network policy (YAML):
```yaml
allowed:
  - host: "api.llm-provider.com"   # LLM inference
    port: 443
  - host: "erp.internal"           # ERP (customer doctypes only)
    port: 8082
  - host: "imap.provider.com"      # Email read
    port: 993
blocked:
  - host: "*"                      # Everything else
```

The agent gateway should listen on 127.0.0.1, not 0.0.0.0. Remote access goes through a VPN (e.g. Netbird, WireGuard). This prevents the agent endpoint from being reachable on the open network.
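As a sketch, the same deny-by-default rule can also be enforced inside the agent's outbound connection layer, as a second line of defense behind the network policy (the function names here are assumptions; the endpoints mirror the YAML above):

```python
# Allowlist mirroring the YAML policy above; anything not listed is denied.
ALLOWED_ENDPOINTS = {
    ("api.llm-provider.com", 443),   # LLM inference
    ("erp.internal", 8082),          # ERP (customer doctypes only)
    ("imap.provider.com", 993),      # Email read
}

def is_allowed(host: str, port: int) -> bool:
    """Deny-by-default: only explicitly allowlisted (host, port) pairs pass."""
    return (host, port) in ALLOWED_ENDPOINTS

def guarded_connect(host: str, port: int) -> None:
    """Gate every outbound connection through the policy check."""
    if not is_allowed(host, port):
        raise PermissionError(f"Blocked by network policy: {host}:{port}")
    # ... open the actual connection here ...
```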
Skills Instead of Plugins: Keeping Control
Agent capabilities are defined as Markdown skills, not executable code. This is a deliberate security decision: a Markdown skill describes WHAT the agent should do; any code needed to carry it out is generated and executed by the agent in a sandbox.
| Property | Plugin (Code) | Skill (Markdown) |
|---|---|---|
| Execution | Direct code access | Description, agent interprets |
| Security | Can execute anything | Sandbox execution |
| Review | Code review needed | Reading text is enough |
| Maintenance | API changes break code | Description stays stable |
| Supply Chain | Dependencies can be malicious | No external dependencies |
Research shows that a significant portion of community plugins for agent frameworks can have security issues — from credential leaks to remote code execution. The takeaway: adopt logic and patterns from the community, but write the skills and code yourself.
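To make the distinction concrete, here is a hypothetical Markdown skill (the file layout and section names are illustrative, not a fixed format). Everything in it is reviewable by reading; there is nothing executable to audit.

```markdown
# Skill: Triage incoming support email

## What to do
- Read unread messages from the support inbox.
- Classify each as: billing, technical, or spam.
- Draft a reply for billing and technical requests.

## Limits
- Read-only access to the inbox; drafts go to the human review queue.
- Never send replies without approval.
- Never quote customer data in logs.
```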
Two-Tier Heartbeat: 90% Token Savings
An agent needs to regularly check if there is work to do. But every LLM call costs tokens. The solution: a two-tier heartbeat system.
| Tier | What Happens | Cost | Latency |
|---|---|---|---|
| Tier 1 (Cheap) | Simple checks: HTTP status, email count, file exists | 0 tokens, just HTTP calls | < 500ms |
| Tier 2 (LLM) | Classification, summarization, decision | 100-300 tokens | 1-3s |
Tier 2 is ONLY triggered when Tier 1 detects an anomaly. Example: Tier 1 checks "Are there new emails?" (HTTP call, 0 tokens). Only if yes, Tier 2 calls the LLM for classification and summarization.
With a heartbeat every 5 minutes, that is 288 checks per day. Without two-tier: 288 LLM calls (~86,000 tokens). With two-tier and 10% anomaly rate: 29 LLM calls (~8,600 tokens). That is a 90% reduction — with the same response speed.
EU AI Act: Transparency Requirements
From August 2, 2026, AI systems with customer contact must be transparently labeled (Art. 50). This applies to AI agents deployed as digital employees.
Email Signature
"Max Mustermann | AI-powered Assistant | Company GmbH"
Social Media Bio
"AI Employee at Company GmbH" — clear and visible
Voice/Phone
Automatic announcement at the start: "I am an AI-powered assistant."
Art. 50 EU AI Act transparency obligations apply from August 2026. Penalty: up to EUR 15 million or 3% of global annual turnover. SMEs get a lower cap (Art. 99), but the obligation itself is the same.
Key Takeaways
- ✓ Treat AI agents like new hires: own identity, own credentials, limited permissions.
- ✓ Credential isolation is not optional. Every agent gets its own vault.
- ✓ Network policy: deny-by-default. The agent only reaches the endpoints it needs.
- ✓ Skills instead of plugins: Markdown descriptions instead of executable code. Safer and more maintainable.
- ✓ Two-tier heartbeat saves 90% of tokens: cheap checks first, LLM only on anomaly.
- ✓ EU AI Act labeling is mandatory from August 2026. Prepare now, not later.
Sources
- Base spec: Internal design specification — AI Agent onboarding design (internal)
- EU AI Act Overview — Art. 50 transparency obligations
- Safety Hooks Pattern — Guardrails and output validation
- Heartbeat & Monitoring Pattern — Health checks and alerting
Next step: move from knowledge to implementation
If you want more than theory: setups, workflows, and templates from real operations, for teams that want local, documented AI systems.
- Local and self-hosted by default
- Documented and auditable
- Built from our own runtime
- Made in Austria