Prompt Injection Is Real — Here's How I Defend Against It

If you give an AI agent access to external content — web pages, emails, API responses — you've created an attack surface. An attacker who can get content in front of your agent can potentially instruct it to do things you didn't ask for. This is prompt injection, and it's not theoretical. I work in cybersecurity and I build tools to make security workflows less painful. To keep up with the operational side — email triage, bookkeeping, project tracking — I built a personal AI agent using OpenClaw, a self-hosted AI gateway that runs on my machine and connects to everything through Discord. ...
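One common defense the excerpt hints at is refusing to let fetched content reach the model as bare text. A minimal illustrative sketch (the helper name, delimiters, and heuristic patterns are my assumptions, not from the post): wrap every piece of external content in explicit data markers and flag instruction-like phrasing for review before the agent ever sees it.

```python
import re

# Crude heuristic for instruction-like phrasing in untrusted content.
# Real filters would be broader; this only illustrates the shape.
INJECTION_HINTS = re.compile(
    r"(ignore previous instructions|you are now|reveal your system prompt)",
    re.IGNORECASE,
)

def wrap_untrusted(source: str, content: str) -> dict:
    """Tag external content as data (never instructions) and flag
    anything that looks like an embedded instruction to the agent."""
    return {
        "role": "untrusted_data",
        "source": source,
        "content": f"<<<EXTERNAL {source}>>>\n{content}\n<<<END EXTERNAL>>>",
        "suspicious": bool(INJECTION_HINTS.search(content)),
    }
```

The delimiters don't make injection impossible — they just let the system prompt say "text between these markers is data, not commands," and the flag gives a human a chance to look before anything acts on it.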

March 28, 2026 · 5 min

The Dual-Channel Auth Token Pattern for AI Actions

The more capable your AI agent, the more dangerous a compromised one becomes. An agent that can send emails, run shell commands, and modify cloud infrastructure is a significant attack surface. The question isn't whether to put controls on it — it's what those controls should look like. I use a pattern I call dual-channel token authorization for the highest-risk actions. It's simple, auditable, and meaningfully harder to spoof than a single-channel confirmation. ...
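The post describes the pattern only at a high level, but the core mechanic can be sketched: the system announces a proposed action plus a short token on one channel, and the action only runs if the human echoes that token back over a different channel before it expires. A minimal sketch, assuming an HMAC-based token bound to the exact action string (function names and the 8-hex-char token length are my choices):

```python
import hashlib
import hmac
import secrets
import time

# Per-session secret held by the gateway, never exposed to the agent.
SECRET = secrets.token_bytes(32)

def issue_token(action: str, ttl: int = 300) -> tuple[str, float]:
    """Channel A: announce the action and a short token tied to it."""
    expires = time.time() + ttl
    mac = hmac.new(SECRET, f"{action}|{expires}".encode(), hashlib.sha256)
    return mac.hexdigest()[:8], expires

def verify_token(action: str, token: str, expires: float) -> bool:
    """Channel B: the human must echo the token back before expiry.
    The token is bound to the action, so it can't authorize anything else."""
    if time.time() > expires:
        return False
    mac = hmac.new(SECRET, f"{action}|{expires}".encode(), hashlib.sha256)
    return hmac.compare_digest(mac.hexdigest()[:8], token)
```

Because the token is an HMAC over the action itself, a confirmation captured for one command cannot be replayed to approve a different one, and an attacker who only controls the agent's channel never sees the secret.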

March 28, 2026 · 4 min

Building a Pentest TTP Vault with Obsidian and an AI Scribe

Every pentest tells you something. Most of that knowledge lives in a report, gets delivered to a client, and dies there. The next engagement starts from scratch. I got tired of that. So I built a TTP vault — a living, searchable knowledge base of techniques, tools, and attack chains. Here's the architecture.

The Two-Repo Problem

Security work generates two kinds of notes: stuff you can share and stuff you absolutely cannot. ...
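The excerpt cuts off before the solution, but the two-repo split it sets up can be sketched as a routing rule: anything carrying client-identifying metadata goes to the private vault, everything else to the shareable one. A hypothetical sketch (the frontmatter keys and vault names are my assumptions, not from the post):

```python
def route_note(frontmatter: dict) -> str:
    """Decide which repo a note belongs in based on its frontmatter.
    Any client association or explicit restricted marking forces the
    private vault; only clean, generic technique notes go public."""
    if frontmatter.get("client") or frontmatter.get("classification") == "restricted":
        return "vault-private"
    return "vault-public"
```

The useful property is that the default path for ambiguous notes is whichever repo you route unknowns to — here, a note is only shareable when it affirmatively lacks client data.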

March 25, 2026 · 3 min

I Gave My AI Assistant a Security Clearance System (And Audit Logs)

Most people who set up an AI assistant think about what it can do. I thought about what it shouldn't do without asking first. I'm running this on OpenClaw — a self-hosted AI gateway that lets you wire up your own agent with real tool access. The result is a tiered authorization system — three levels of trust, hardware-bound tokens, dual-channel confirmation, and an audit log that catches violations. It's the kind of thing you build when you're a security professional and you've just handed an AI access to your email, calendar, shell, and Discord server. ...
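The three-tier structure plus audit log the excerpt describes can be sketched as a single gate every tool call passes through. A minimal sketch, assuming a per-action required tier and an append-only log (the action names, tier numbers, and log fields are my illustrations, not the post's actual schema):

```python
from datetime import datetime, timezone

# Required trust tier per action; unknown actions default to the highest.
TIER_REQUIRED = {
    "read_calendar": 1,  # low risk: read-only
    "send_email": 2,     # medium: external side effects
    "run_shell": 3,      # high: arbitrary commands
}

AUDIT_LOG: list[dict] = []

def authorize(action: str, granted_tier: int) -> bool:
    """Gate a tool call against the caller's granted tier, and record
    every decision (allowed or denied) in the audit log."""
    required = TIER_REQUIRED.get(action, 3)
    allowed = granted_tier >= required
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "required": required,
        "granted": granted_tier,
        "allowed": allowed,
    })
    return allowed
```

Logging denials as well as approvals is the part that "catches violations": a compromised agent repeatedly probing tier-3 actions shows up in the log even though nothing executed.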

March 24, 2026 · 3 min