By Stephen Bennett
Build the habit first. Scale the tooling second. Verify before you trust.
I have ADHD. I was formally diagnosed as an adult — long after I’d spent thirty years building a career that required me to hold context across dozens of concurrent workstreams, never forget a commitment, and synthesise complex information under pressure. I managed. But I managed by working twice as hard as I needed to, by keeping everything in my head because I didn’t trust any system to hold it for me, and by losing things — insights, decisions, patterns — that I can never recover.
The Second Brain is not a productivity system I stumbled across. It is cognitive infrastructure I built because my brain, without it, is a liability in the role I hold. If you’ve ever felt like you’re running your entire professional life on a stack that’s one context-switch away from collapse — this is for you.
— Stephen Bennett
A CISO’s job is uniquely hostile to deep thinking. On any given day you might move from a vulnerability review to a board presentation to a vendor risk assessment to an incident triage call to a budget negotiation. Every context switch degrades the quality of your thinking on each topic.
Your biological brain is optimised for real-time decision-making, pattern recognition, and leadership judgment. It is fundamentally unsuited for maintaining deep context across dozens of concurrent workstreams over months or years.
| Function | Your Brain | The Second Brain |
|---|---|---|
| Processing | Real-time analysis, intuition, leadership judgment | Pattern recognition across large datasets and long histories |
| Storage | Volatile — subject to memory decay and context loss | Persistent, indexed, semantically searchable |
| Concurrency | Limited — prone to context-switching fatigue | Unlimited parallel tracking of initiatives |
| Durability | Finite — insights degrade over time | Open-format Markdown with multi-decade utility |
The Second Brain externalises storage and retrieval, freeing your biological brain for the things only it can do: synthesis, judgment, and leadership.
Cognitive psychology distinguishes two types of intelligence that behave very differently as we age:
Fluid intelligence (Gf) is the capacity to reason about novel problems, recognise patterns in unfamiliar information, and think abstractly — without relying on prior knowledge. It is raw cognitive horsepower: working memory, processing speed, logical reasoning with new material. It peaks in the early-to-mid twenties and declines gradually with age.
Crystallised intelligence (Gc) is accumulated knowledge, domain expertise, and experiential wisdom built through years of practice. It is everything you know and know how to do because you have done it before. Unlike fluid intelligence, it continues building throughout a career and often remains strong well into later decades.
Here is why this matters for AI augmentation: AI is extraordinarily capable at fluid intelligence tasks — rapidly processing large volumes of new information, identifying patterns across disparate datasets, making connections at scale that would take a human analyst days. What it does not have is genuine crystallised intelligence: thirty years of knowing how boards actually think, which risks matter in context, how to read an organisation’s culture, what “good enough” looks like in a real security programme.
The CISO brings the crystallised intelligence. The AI handles the fluid intelligence tasks. The Second Brain is the architecture that makes the combination work — persisting your accumulated knowledge in a form the AI can read, so neither half of the partnership has to operate blind.
For neurotypical people this framing is a productivity argument. For those of us with ADHD — where the fluid intelligence component is further complicated by a brain that drops context as a neurological pattern rather than a bad habit — it becomes something more structural. The ADHD brain hyperfocuses brilliantly on what is directly in front of it and releases everything else. That is an asset for threat modelling and crisis response. It is a liability for the administrative continuity a CISO role demands. The Second Brain does not fight that neurology. It compensates for it by design.
If you’re using AI assistants — Claude, Gemini, ChatGPT — you’ve already hit these walls:

- Context windows fill up, and long conversations lose the thread.
- Nothing persists: every new session starts from zero.
- Insights developed with one model are invisible to the others.
- Your thinking lives in chat histories the provider owns, under terms the provider sets.
The Second Brain solves all four by making Markdown files — not chat histories — the canonical store of knowledge. Any model can read your vault. No conversation needs to exceed its context window because important outputs are persisted externally. You own your knowledge permanently, in plain text, readable by any tool in any decade.
This is not a note-taking system. It is a personal operating system for a security leadership role — a combination of:
The system evolved from a simple collection of Git repositories with AI-readable context files into an integrated vault with agentic AI capabilities — what started as a “CISO Desktop” inside VS Code became a full Second Brain with persistent memory, automated workflows, and multi-device access.
This document is a snapshot, not a blueprint. I develop this system every day. New skills get built, new integrations get added, things that worked six months ago get replaced by something better. That is not instability — it is the appropriate response to a technology landscape that is moving faster than anything I have seen in thirty years in this industry. The architecture described here is current as of publication, but by the time you read it some of the tooling will have moved on. The principles — local-first, vault-grounded, session-based AI, data classified and stored appropriately — those stay constant. The implementation details evolve continuously. Build the habit and the principles first. The tooling will keep up.
A note on eating your own dogfood. I’m a CISO. Before deploying anything in this system I ask the same questions I’d ask about any production tool: Who owns the data? What’s the threat model? What runs persistently? What has network access? This document answers all of those questions. The short version: local-first storage, self-hosted version control, session-based AI (no persistent daemons), encrypted at rest and in transit, and local models for sensitive content. If I wouldn’t approve it for a production workload, it doesn’t run on my personal knowledge system.
Every CISO reading this has the same first objection: “We already pay for Confluence. Or Notion. Or SharePoint. Why build something new?”
These are capable tools, and for many purposes they are the right answer. For a CISO’s personal strategic knowledge, each has a specific limitation worth understanding before you default to the thing you already pay for.
| Tool | The durable concern |
|---|---|
| Confluence / SharePoint | Your content lives in a proprietary database, not in files you own. Export is possible but lossy. AI features send content to cloud APIs under the vendor’s terms — terms that can change. For content you want to still own and read in twenty years, that is a trust assumption worth examining. |
| Notion | Proprietary block-based data model. Your notes are not portable plain text — they are rows in someone else’s database, exported as best-effort Markdown that loses structure. If Notion changes its pricing, closes, or pivots, migration is painful and data-fidelity is not guaranteed. |
| OneNote | Deep Microsoft ecosystem lock-in. Hierarchical notebook format is difficult to export cleanly to anything else. Your knowledge graph is trapped inside a proprietary container even if the individual pages are technically retrievable. |
| ChatGPT / Claude web | Conversations are not a knowledge system. They are ephemeral sessions whose history belongs to the provider. Useful for thinking in the moment; not a home for the content you want to still reference next year. |
| Email and Teams | Communication systems, not knowledge systems. Retrievability is shallow and structure is accidental. Nothing accumulates into context the next conversation can build on. |
None of this means “do not use these tools.” It means: for the content you want to own permanently, in a format that will outlive any single vendor’s product decisions, plain-text files in a directory you control are a categorically different choice. The rest of this document describes how to make that choice practical for a CISO’s workflow.
Markdown has been stable for twenty years. It is readable by every text editor, every operating system, every current AI model, and every developer tool built in that time. There is no guarantee about 2044, but plain text formats have historically outlasted the proprietary formats they competed with — and a Markdown note today is trivially convertible to whatever comes next, which is not true of a note trapped in a proprietary database.
For a CISO thinking about institutional memory, succession planning, and regulatory record-keeping, this matters. The decisions you document in this system — risk acceptances, board commitments, control exceptions — are durable evidence in a format that will never become unreadable.
Notion’s proprietary format may not survive the next decade. Markdown will.
The system is organised into six layers, each with a clear responsibility:
| Layer | Component | Technology | Purpose |
|---|---|---|---|
| 1. Knowledge | Vault | Obsidian + Markdown | Single source of truth for all notes, research, and structured knowledge |
| 2. Sync | Multi-device access | Obsidian Sync + Git backup | Real-time sync across all devices with versioned backup |
| 3. Access | File access for AI | Filesystem MCP Server | Standardised file-level access for any MCP-compatible AI tool |
| 4. Agentic | AI interaction | Claude Code + Gemini CLI | Session-based AI for analysis, synthesis, and task execution |
| 5. Memory | Context persistence | CLAUDE.md + auto-memory | Persistent context across sessions and models |
| 6. Governance | Data protection | Classification + encryption | Data classification, encryption at rest, access control |
The architecture evolved through three distinct phases:
How this actually happened — a personal account
I have been using AI in anger since ChatGPT launched. Before the term existed, I was vibe coding — using AI to generate, iterate, and deploy without writing much code myself, because it let me move faster and solve problems that would otherwise require additional resources.
From the start, the problem was context. When you experiment across multiple models — Claude, ChatGPT, Gemini — you end up with multiple versions of the same thinking, each developed in a separate session with a model that had no awareness of the others. I had documents duplicated across providers, with no way to know which was current or which model had reasoned its way to which conclusion. The context was trapped in chat histories that couldn’t talk to each other.
The solution I landed on was GitHub repositories — not as a development tool, but as a context layer. Markdown files, structured by domain, that any model could read before the conversation started. The model got a briefing. That solved the cold start problem. That was the beginning of what I started calling the CISO Desktop.
The desktop grew into eight repositories covering everything I needed to manage a security programme: company context, risk management, compliance, security operations, policies, vendor assessments, awareness, business continuity. Structured plain text that any AI could reason over. And it worked. When I asked the right questions, the AI had enough context to surface things I would have missed — connections across domains that aren’t visible when you’re working within a single document or conversation.
I want to be clear about timing. This was already running, already useful, already part of how I worked — before the wave of Second Brain articles, before Andrej Karpathy published his thinking on personal AI systems. I built something, watched the conversation catch up to it, and then integrated others’ ideas where they improved what I already had. This document is that system, written up.
The personal layer came next — driven by the same problem that had followed me my entire career.
I have ADHD. Formally diagnosed as an adult, though I had been managing the reality of it for thirty years without the framework. One of the things ADHD does, particularly in a role as context-heavy as CISO, is make it genuinely difficult to track where you are across multiple concurrent workstreams. You’re busy. You’re reactive. You’ve moved between twelve things in a day. But have you moved anything that matters? I’d tried every productivity tool available over the years — task managers, to-do lists, Notion, things I can’t remember now. None of it stuck, because none of it could make sense of what was in my head. What I needed was AI to be the glue.
I was already moving everything to Markdown. Obsidian was the natural home for it. Claude Code was the AI I trusted to work across it. The Second Brain connected the personal working layer to the professional reference layer, with AI reading both simultaneously and a growing set of automated skills making the whole thing progressively more capable.
That is what is documented here.
Phase 1: The CISO Desktop
A VS Code workspace with multiple Git repositories containing structured context — risk registers, compliance data, security controls, business context. AI assistants (GitHub Copilot, Claude) could query across all repositories simultaneously. This solved the “cold start” problem: every AI conversation began with rich organisational context rather than a blank slate.
Phase 2: Hub & Spoke
A central Git server (on a Proxmox LXC container) with automatic commits via gitwatch, syncing to multiple devices. Obsidian provided the human-friendly interface. This added persistence and multi-device access, but the AI was still limited to reading files — it couldn’t act on them.
Phase 3: Agentic (Current)
Three things changed. The AI moved out of the VS Code chat panel into terminal-based agents (Claude Code, Gemini CLI) with full filesystem access — able to read, write, search, and execute rather than just suggest. The Model Context Protocol (MCP) turned the file collection into a queryable knowledge layer with structured access beyond plain filename matching. And a layered memory system — CLAUDE.md, auto-memory, explicit preserve commands — made context survive across sessions, so the AI does not need re-briefing on every conversation.
This architecture deliberately excludes always-on AI agents, persistent daemons, and internet-facing agentic services.
Think about what a persistent AI agent actually is from a security perspective: a process running continuously with read/write filesystem access, outbound internet connectivity, and often messaging integrations — sitting on a box that also holds your most sensitive professional knowledge. I would not approve that architecture for a production system. I am not running it on my personal one.
All AI interaction in this system is session-based. You start Claude Code, do the work, and it stops. The process has access only while you are actively using it. Automation uses scheduled scripts (cron jobs), not always-on services. This is the same principle as least-privilege access — the AI has access when needed and none when not.
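The same pattern in miniature: a scheduled job does its work, commits, and exits, with nothing resident in between. A sketch under stated assumptions — the temp directory stands in for the real vault, and all paths and names are illustrative:

```shell
# Session-based automation sketch: do the work, then exit. No daemon, no open port.
# The temp directory stands in for ~/SecondBrain; paths here are illustrative.
VAULT="$(mktemp -d)"
git -C "$VAULT" init -q
mkdir -p "$VAULT/40-Daily"
echo "- [ ] review vendor report" > "$VAULT/40-Daily/demo.md"
git -C "$VAULT" add -A
# Commit only when something actually changed; then the process is gone.
git -C "$VAULT" diff --cached --quiet || \
  git -C "$VAULT" -c user.email=vault@local -c user.name=vault \
    commit -qm "auto: snapshot"
COMMITS=$(git -C "$VAULT" rev-list --count HEAD)
echo "$COMMITS"
```

The real version of such a script would be installed with a crontab entry (e.g. `0 2 * * * ~/bin/vault-snapshot.sh`); between runs there is simply nothing to attack.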
The cost is minor: you open a terminal instead of having an agent surface things proactively. The benefit is a dramatically reduced attack surface on a system that holds your risk register, board communications, and investigation notes.
Obsidian stores everything as local Markdown files. No proprietary database, no cloud dependency, no vendor lock-in. Your notes are plain text files you own permanently — readable by any text editor, any AI model, any tool. If Obsidian disappeared tomorrow, you’d lose a nice interface but none of your data.
For a CISO handling sensitive strategic information, local-first storage with optional encrypted sync is the right architecture. Your risk register, board presentations, and incident reports never need to touch a cloud service unless you choose to sync them.
Create the vault as a plain local folder (e.g. `~/SecondBrain`). Start with the PARA method (Projects, Areas, Resources, Archive), using Johnny Decimal numbering as a lightweight prefix convention:
```
SecondBrain/
├── 00-System/        # Vault config, templates, MOCs, CLAUDE.md
│   ├── Templates/    # Note templates (daily, project, meeting, risk)
│   └── skills/       # AI workflow definitions
├── 10-Projects/      # Active initiatives with end dates
├── 20-Areas/         # Ongoing responsibilities (no end date)
├── 30-Resources/     # Reference material, standards, research
├── 40-Daily/         # Daily notes and session summaries
│   └── Weekly/       # Weekly review notes
├── 50-Archive/       # Completed projects, exported chats
└── 90-Personal/      # Personal notes (optional)
```
Do not over-organise. Resist creating 20 subfolders before you have 20 notes. Obsidian’s internal links ([[wikilinks]]) update automatically when you move files. Let structure emerge from usage, then formalise it later.
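The whole layout above is a handful of `mkdir` calls — a one-time, idempotent sketch (the temp directory stands in for the real vault path):

```shell
# Scaffold the PARA layout. Idempotent: safe to re-run at any time.
VAULT="$(mktemp -d)"   # use ~/SecondBrain in real use
for d in 00-System/Templates 00-System/skills 10-Projects 20-Areas \
         30-Resources 40-Daily/Weekly 50-Archive 90-Personal; do
  mkdir -p "$VAULT/$d"
done
ls "$VAULT"
```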
Install these in phases — don’t front-load everything:
| Plugin | When | Purpose |
|---|---|---|
| Obsidian Git | Day 1 | Automated Git sync for backup and version control. Configure 10-minute auto-commit. |
| Templater | Day 1 | Dynamic templates with date variables, auto-generated IDs, conditional fields. |
| Calendar | Day 1 | Visual calendar view of daily notes. Optional but useful for navigation. |
| Dataview | Month 2+ | SQL-like queries on vault frontmatter. Powers dashboards and live views. |
| Tasks | Month 2+ | Task tracking with due dates, filters, and queries across the vault. |
Security note on plugins: Community plugins run arbitrary code with access to your vault. Treat them like any software install — review the GitHub repo before installing, avoid plugins that make outbound network calls unless you’ve verified what they send, and regularly audit what’s installed.
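Auditing installed plugins is straightforward because community plugins live as folders under the vault’s `.obsidian/plugins/` directory, one folder per plugin ID. A quick inventory sketch against a throwaway vault (the plugin IDs here are illustrative):

```shell
# List installed community plugins — each folder under .obsidian/plugins is one plugin.
VAULT="$(mktemp -d)"
mkdir -p "$VAULT/.obsidian/plugins/obsidian-git" \
         "$VAULT/.obsidian/plugins/templater-obsidian"
ls -1 "$VAULT/.obsidian/plugins"
COUNT=$(ls -1 "$VAULT/.obsidian/plugins" | wc -l | tr -d ' ')
```

Run this periodically and ask of each entry: do I still use it, and do I still trust it?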
Build templates based on what you actually need. A CISO vault benefits from these core templates:
Daily Note Template:
```
---
date:
type: daily
---
#

## Priority Focus
- [ ]

## Meetings & Conversations
###

## Decisions Made

## Open Questions

## Sparks

## End of Day Review
What moved forward today?
```
Project Card Template:
```
---
type: project
title: ""
context: work
status: active
priority: medium
started:
due:
owner:
area:
tags:
  - project
---
#

## Summary
_One paragraph: what this is, why it matters, what done looks like._

## Current Status
**As of:**

## Objectives
-

## Key Decisions
| Date | Decision | Rationale |
|------|----------|-----------|

## Open Items
- [ ]

## Progress Log
_Append entries here. Do not delete old entries._
###
-

## Related Notes
-

## Context
_Background, constraints, stakeholders — anything an AI assistant needs
to give useful answers about this project._
```
Other useful templates: Risk Assessment (with impact/likelihood/treatment fields and NIST CSF mapping), Meeting Note (attendees, decisions, action items), Incident Report (severity, root cause, lessons learned), Vendor Assessment (risk tier, compliance status, review date).
Use frontmatter fields consistently — they become queryable with Dataview, turning your vault into a live risk register, project dashboard, or compliance tracker.
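Consistency pays off even outside Obsidian: because frontmatter is plain text, ordinary shell tools can query it too. A sketch with two hypothetical project cards, then a one-line “which projects are still active?” query:

```shell
# Query consistent frontmatter with plain shell tools (files are illustrative).
VAULT="$(mktemp -d)"; mkdir -p "$VAULT/10-Projects"
cat > "$VAULT/10-Projects/zero-trust-rollout.md" <<'EOF'
---
type: project
status: active
priority: high
---
EOF
cat > "$VAULT/10-Projects/2023-audit.md" <<'EOF'
---
type: project
status: complete
priority: low
---
EOF
# grep -rl prints the names of files whose frontmatter matches.
grep -rl '^status: active' "$VAULT/10-Projects"
ACTIVE=$(grep -rl '^status: active' "$VAULT/10-Projects" | wc -l | tr -d ' ')
```

Dataview does the same thing with richer queries inside Obsidian; the point is that the data is queryable by anything, not just by one plugin.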
For a multi-device setup that includes mobile (iOS/Android), use two complementary layers:
| Layer | Technology | Purpose |
|---|---|---|
| Real-time sync | Obsidian Sync (~$8/month personal, ~$16/month commercial) | Keeps all devices in sync, E2E encrypted, handles conflict resolution |
| Versioned backup | Git (on primary workstation) | Complete commit history, disaster recovery, audit trail |
These coexist without conflict. Obsidian Sync handles real-time multi-device sync; Git provides versioned backup and an independent recovery path.
Git on mobile is unreliable. The Obsidian Git plugin on iOS uses a JavaScript reimplementation that the plugin developer himself describes as “very unstable.” Working Copy costs $20 and requires manual pull/push via iOS Shortcuts before and after every session — friction that kills the capture habit.
With multiple devices, merge conflicts are inevitable. On iOS, they’re extremely difficult to resolve. Obsidian Sync handles conflicts automatically with a last-write-wins strategy and 30 days of version history.
Not every device needs the same setup:
| Device | Role | Sync | Plugins |
|---|---|---|---|
| Primary workstation | AI integration, deep work, Git backup | Obsidian Sync + Git | All |
| Work laptop | Primary work machine | Obsidian Sync | All except Git (optional) |
| Mobile (iOS/Android) | Quick capture only | Obsidian Sync | Templater + Daily Notes only |
Mobile is a capture device, not an analysis platform. Open the daily note, type quick bullets, add [[wikilinks]] to anything relevant. Don’t try to reorganise notes or run dashboards on your phone. The note syncs to your workstation within seconds.
Git hosting decisions should be driven by data classification, not by default or convenience.
For the personal vault — daily notes, project tracking, reflections — a self-hosted Forgejo instance is a good fit. Forgejo (forgejo.org) is a community-owned, open-source Git hosting platform you run on your own server (a lightweight LXC container is sufficient — see Section 9). Your data never leaves your infrastructure. No corporate dependency. No vendor that can change pricing, deprecate APIs, or be acquired.
The /mirror skill handles keeping the vault in sync with Forgejo:
```
/mirror        # mirrors current repo to Forgejo
/mirror --all  # mirrors all repos under ~/ciso/repos/
```
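Under the hood, a mirror skill reduces to ordinary Git. A sketch using a local bare repository to stand in for the Forgejo remote (the remote name and all paths are illustrative):

```shell
# Mirror sketch: push everything (all branches and tags) to a second remote.
SRC="$(mktemp -d)"; MIRROR="$(mktemp -d)/forgejo.git"
git -C "$SRC" init -q
echo "note" > "$SRC/note.md"
git -C "$SRC" add -A
git -C "$SRC" -c user.email=vault@local -c user.name=vault commit -qm "init"
git init -q --bare "$MIRROR"               # stands in for the Forgejo URL
git -C "$SRC" remote add forgejo "$MIRROR"
git -C "$SRC" push -q --mirror forgejo     # what a mirror operation amounts to
MIRRORED=$(git -C "$MIRROR" rev-list --count --all)
```

`push --mirror` keeps the remote an exact replica, deletions included — appropriate for a backup remote, not for a shared collaboration remote.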
For the CISO Desktop — risk registers, control frameworks, compliance evidence, security programme data — self-hosted is not the right answer. This data requires enterprise-grade controls: SSO and MFA enforcement, audit logging, access management, and governance that a personal Forgejo instance cannot provide. GitHub Enterprise (or an equivalent enterprise Git platform) is the appropriate home for sensitive organisational security data.
The decision framework is the same one you apply professionally: classify the data, identify the controls required for that classification, choose infrastructure that provides those controls. Your personal vault and your security programme data are different classifications. They should not live in the same place or be managed the same way.
Claude Code is a terminal-based AI assistant that runs in your shell with full filesystem access to your vault. Unlike chat-based AI, it can read, write, search, and execute commands.
Install it:

```
# macOS / Linux / WSL
curl -fsSL https://claude.ai/install.sh | bash

# Windows (PowerShell)
irm https://claude.ai/install.ps1 | iex
```

Then launch it from the vault root:

```
cd ~/SecondBrain
claude
```

From this point, Claude Code can read any file in your vault, search for content, create and edit notes, and execute commands — all while understanding the structure and context of your knowledge base.
This is the AI’s persistent instruction set. It lives at the vault root and is read automatically at the start of every Claude Code session.
```
# CLAUDE.md - CISO Second Brain Context

## About Me
- Role: CISO at [Company]

## Tools Available
- **Obsidian CLI** is installed (`obsidian-headless` — the official sync client
  also ships a CLI for vault operations). Use `obsidian search`, `obsidian read`,
  `obsidian create`, `obsidian daily`. See Section 9 for setup.
  Always prefer CLI over raw file reads for vault content.
- **Filesystem MCP** covers ~/Documents/ for non-vault repos.

## Vault Structure
- 00-System/: Templates, config, MOCs
- 10-Projects/: Active work with deadlines
- 20-Areas/: Ongoing responsibilities
- 30-Resources/: Reference material
- 40-Daily/: Daily notes, session summaries
- 50-Archive/: Completed work, exported chats

## Standing Instructions
- Search the vault before answering questions about past work.
- Always reference specific vault files when making claims.
- Never fabricate information about my organisation.
- When I say /preserve [topic], append the insight to
  Persistent Insights below.
- **Write from the vault, not from conversation.** Before
  creating or updating any note, search the vault for existing
  content on the topic. The vault is the source of truth —
  never write from conversation context alone when richer
  detail may already exist. Search first, write second.

## Persistent Insights
[Durable decisions and conclusions added via /preserve]

## Archive Threshold
When this file exceeds 250 lines, move oldest Persistent
Insights to 00-System/CLAUDE-Archive.md.
```
Keep it lean. A common mistake is loading CLAUDE.md with placeholder sections you never fill in — “Current Active Projects”, “Key priorities this quarter”, “Writing style preference.” If the information lives better somewhere else (project cards, daily notes), don’t duplicate it here. Stale placeholders are noise that dilutes the instructions that matter.
This file is the single most important piece of the AI integration. It transforms Claude Code from a generic assistant into a context-aware thinking partner that understands your role, vault structure, and standing preferences.
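The archive threshold is also mechanically checkable, which keeps the file honest. A sketch — the generated file stands in for a real CLAUDE.md:

```shell
# Check the 250-line archive threshold (the file here is a stand-in for CLAUDE.md).
F="$(mktemp)"
seq 1 260 | sed 's/^/insight /' > "$F"   # fake a 260-line context file
LINES=$(wc -l < "$F" | tr -d ' ')
if [ "$LINES" -gt 250 ]; then
  echo "over threshold: archive oldest Persistent Insights"
fi
```

A check like this fits naturally alongside the other scheduled scripts: run it weekly and let it nag you when the file bloats.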
The difference between a generic AI assistant and this system is what happens at the start of a conversation. Without the vault, every session begins with you re-explaining who you are, what your organisation does, what you’re trying to solve, and what decisions have already been made. You are perpetually the stranger at your own desk.
With the CLAUDE.md loaded and the vault in place, the AI already knows. It knows your risk taxonomy, your project history, your standing decisions, and your preferences. You can ask “where did we land on the cloud migration risk?” and get a sourced answer referencing the specific note and date, rather than a generic response about cloud migration risks in general.
The shift feels small until you’ve experienced the alternative enough times to have the comparison. Then it feels like the difference between working with someone who knows your business and working with someone who has just walked in.
This is the right question to ask. The honest answer to “can you trust AI-generated answers?” is: only if the AI is grounded in a verified source.
With a generic AI assistant — ChatGPT, Claude web — every answer is drawn from training data and the conversation context you provide. The model can hallucinate confidently and you have no way to verify the source.
This system is different. The AI’s standing instructions (in CLAUDE.md) are explicit:
- Search the vault before answering questions about past work.
- Always reference specific vault files when making claims.
- Never fabricate information about my organisation.
- Write from the vault, not from conversation.
When you ask “what did we decide about the cloud migration risk last quarter?”, the AI searches your notes, finds the relevant file, and cites it by name. If the information isn’t in the vault, it says so rather than inventing an answer.
This is called vault-grounded responses — every claim should trace back to a file you wrote. It is not foolproof (the AI can still misread a note) but it is meaningfully more reliable than asking a model to reason from training data alone, and the reader can verify any claim by opening the cited file. The vault is your source of truth; the AI reads from it rather than improvising.
The practical discipline: when the AI gives you an answer about your work, ask it which file it found that in. If it can’t cite one, treat the answer with scepticism.
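The check is scriptable too: given the path the AI cited, confirm the file actually exists before trusting the claim. A sketch with illustrative paths:

```shell
# "Which file did you find that in?" — verify the cited file exists.
VAULT="$(mktemp -d)"; mkdir -p "$VAULT/10-Projects"
echo "Decision: accept residual risk, revisit next quarter" \
  > "$VAULT/10-Projects/cloud-migration-risk.md"
CITED="10-Projects/cloud-migration-risk.md"   # path the AI claimed to cite
if [ -f "$VAULT/$CITED" ]; then VERDICT="checks out"; else VERDICT="sceptical"; fi
echo "$VERDICT"
```

An existing file does not prove the AI read it correctly, but a non-existent citation is an immediate red flag.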
Claude Code supports a persistent memory directory that survives across conversations. Use it to store:
This layer sits between CLAUDE.md (which you curate manually) and conversation context (which is ephemeral). It allows the AI to learn and retain operational knowledge without cluttering your CLAUDE.md. Correct the AI when it gets something wrong — it should update its memory immediately so the same mistake doesn’t repeat.
Claude Code has a finite context window. Every file read, search result, and tool output consumes tokens. In long sessions — especially when building workflows, processing vault content, or running multi-step tasks — the context fills up, earlier information gets compressed, and response quality degrades.
The most effective mitigation is subagents. The Agent tool spawns an independent Claude instance with its own context window. The subagent does its work (reading files, searching, analysing), then returns a single summary message. Only that summary enters your main conversation context.
Without a subagent: you read 15 files → 15 files of content land in your context.
With a subagent: the subagent reads 15 files → you get a 20-line summary.
When to use subagents:
- Vault-wide searches, research tasks, or reads that span five or more files.
- Multi-step workflows whose tasks are independent and can run in parallel.

When NOT to use subagents:
- Trivial single-file reads or quick searches — the dispatch overhead costs more context than it saves.
Add standing instructions to your CLAUDE.md to make this automatic:
```
## Context Window Management
- For vault-wide searches or reads spanning 5+ files, use the Agent
  tool (subagents) to keep bulky results out of the main context.
  Return only a structured summary.
- When executing multi-step workflows with independent tasks,
  dispatch them as parallel subagents.
- Do NOT use subagents for trivial single-file reads or quick
  searches — the overhead isn't worth it.
```
The Claude Code ecosystem has a growing library of plugins and MCP servers. Most are designed for software developers, not knowledge workers. Before installing anything, apply this filter:
Does it fill a gap I actually have, or does it duplicate something I already do?
Common recommendations that are not useful for a Second Brain:
| Tool | What It Does | Why You Don’t Need It |
|---|---|---|
| Sequential Thinking MCP | Forces step-by-step reasoning | Claude’s built-in extended thinking already does this — and doesn’t consume your context window to do it. This was useful for older models before extended thinking existed. |
| Context7 | Fetches up-to-date library documentation | Solves a developer problem (stale API docs). Your vault uses Markdown, bash, and stable CLIs — not fast-moving frameworks. |
| Superpowers plugin | Enforces brainstorm → spec → plan → implement workflow | Designed for multi-file code projects. The mandatory ceremony (checklists, spec docs, review loops) consumes more context than it saves for knowledge management work. The subagent dispatch it provides can be done directly with the Agent tool. |
| Memory MCP servers (Basic Memory, Graph Memory, etc.) | Persistent AI memory via knowledge graphs | Your vault IS your memory system. Adding a parallel store creates fragmentation and maintenance overhead. CLAUDE.md + auto-memory already handle cross-session context. |
One earlier entry in this list has been re-evaluated: the mcpvault MCP is now recommended — see Section 5.6.1 below. Unlike earlier vault MCP servers, mcpvault provides structured operations (frontmatter queries, tag management, bulk reads, patch edits) that the Obsidian CLI does not offer.
What might actually help (evaluate when the need arises):
The general principle: your vault, the Obsidian CLI, CLAUDE.md, auto-memory, and direct MCP integrations (Gmail, Calendar, Drive, Atlassian) cover the vast majority of CISO workflows. Add tools when you hit a specific friction point, not because an article recommended them.
What it does: Smart Connections uses AI embeddings to surface semantic relationships between your notes. Instead of keyword matching (“data governance” finds notes containing those words), it finds conceptually related content regardless of wording (“accountability without authority” finds notes about “responsibility with no mandate”).
Why it matters for CISOs: Your vault will accumulate the same concepts expressed differently across project notes, daily reflections, meeting preps, strategy documents, and contact notes. A governance positioning discussion in a daily note is conceptually linked to a stakeholder strategy document and a board paper — but no keyword connects them. Smart Connections does.
How it works:
Privacy assessment: The default configuration is fully local. No vault content leaves your device. Only enable cloud APIs if your data classification policy permits it — for most CISO vaults, local-only is the right choice.
Key features:
| Feature | Value |
|---|---|
| Connections Pane | Passively shows related notes as you navigate — surfaces links you wouldn’t have searched for |
| Smart Lookup | Semantic search across the vault — finds notes by meaning, not just keywords |
| Smart Chat | Chat with your notes using RAG (now a separate plugin) |
| MCP Integration | Multiple community MCP servers expose Smart Connections embeddings to Claude Code, enabling semantic search from the AI assistant |
The MCP integration is the real prize. With a Smart Connections MCP server configured, your AI assistant can perform semantic queries across the vault — “what has been written about being excluded from governance programs” — instead of relying on exact keyword matches. This is the difference between the AI finding 3 notes and finding 15.
Pricing: Free core plugin includes semantic search, connections pane, and local embeddings. Pro tier (~$10/month) adds inline connections, graph view, and advanced ranking. The free tier is sufficient for most vaults.
Watch-outs:
- If the plugin misbehaves, fall back to keyword search (`obsidian search`) through Phase 3 — you will miss conceptual matches but the system remains fully functional.

Recommended setup:

- Exclude `50-Archive/`, attachments, and `.obsidian/` from indexing in plugin settings

MCP server setup (Claude Code integration):
Add the Smart Connections MCP server to your project .mcp.json. The command differs by operating system:
macOS / Linux:
```json
{
  "mcpServers": {
    "smart-connections": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@yejianye/smart-connections-mcp"],
      "env": {
        "OBSIDIAN_VAULT": "/path/to/SecondBrain"
      }
    }
  }
}
```
Windows:
```json
{
  "mcpServers": {
    "smart-connections": {
      "type": "stdio",
      "command": "cmd",
      "args": ["/c", "npx", "-y", "@yejianye/smart-connections-mcp"],
      "env": {
        "OBSIDIAN_VAULT": "C:\\path\\to\\SecondBrain"
      }
    }
  }
}
```
This exposes two key tools to Claude Code:
- `lookup` — Semantic text search across the vault using pre-computed embeddings. Natural language queries find notes by meaning, not just keywords.
- `connection` — Find semantically related notes for a given file. Surfaces connections that keyword search and manual linking would miss.

Prerequisites: Obsidian must have run Smart Connections at least once to generate embeddings (stored in `.smart-env/`). The MCP server reads pre-computed embeddings — it does not run its own model.
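Because the server depends on those pre-computed embeddings, a quick pre-flight check avoids wiring up a silently broken MCP tool. This is a sketch — the `.smart-env/` location is the plugin's default, and the vault path is illustrative:

```shell
# Pre-flight check (sketch): the MCP server reads embeddings from .smart-env/,
# so confirm Smart Connections has indexed the vault at least once.
check_embeddings() {
  # returns 0 when an embeddings directory exists under the given vault path
  [ -d "$1/.smart-env" ]
}

VAULT="${OBSIDIAN_VAULT:-$HOME/SecondBrain}"
if check_embeddings "$VAULT"; then
  echo "embeddings present — semantic lookup available"
else
  echo "no .smart-env — open the vault in Obsidian and let Smart Connections index first" >&2
fi
```

Run it before adding the server to `.mcp.json`; if the check fails, the MCP tools will have nothing to search.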
Resilience pattern — semantic-first, keyword-fallback:
Smart Connections is a community plugin with occasional stability issues (see Watch-outs above). Your AI workflows must not depend on it being available. The correct pattern is:
1. Try semantic search first via the Smart Connections MCP tools (`lookup`/`connection`)
2. If they are unavailable, fall back to `obsidian search` keyword matching

This pattern is built into the /ask-vault, /linker, and /inbox skills. If Smart Connections breaks, all vault workflows continue to function — they just lose semantic matching until the plugin is restarted. No single plugin failure should break the Second Brain.
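The pattern reduces to a few lines of shell. In this sketch, `semantic_lookup` is a stand-in for the MCP call, and plain `grep` stands in for `obsidian search` so the example is self-contained:

```shell
# Semantic-first, keyword-fallback search (sketch).
# semantic_lookup is a hypothetical stand-in for the Smart Connections MCP tool.
search_vault() {
  # 1) semantic path — succeeds only when the plugin/MCP server is healthy
  if semantic_lookup "$1" "$2" 2>/dev/null; then
    return 0
  fi
  # 2) keyword fallback — always available; grep stands in for `obsidian search`
  grep -rl -i "$1" "$2"
}
```

The skill-level implementations follow the same shape: attempt the semantic tool, and degrade to keyword matching rather than failing the whole workflow.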
Gate criterion for installing: Keyword search (obsidian search) is missing conceptually related notes. You’re finding yourself thinking “I know I wrote about this somewhere” but can’t locate it with exact terms. This typically happens around 500+ notes or when the same themes span multiple project areas.
The Model Context Protocol (MCP) gives any compatible AI client standardised read/write access to your vault directory:
```json
{
  "mcpServers": {
    "vault": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem",
               "/path/to/SecondBrain"]
    }
  }
}
```
MCP provides file-level access: read, write, search by filename and path, list directories. It does NOT provide semantic search — if your notes mention “Log4j vulnerability CVE-2021-44228” and you search for “Log4j,” MCP will find it. If you search for “that critical Java logging issue from 2021,” it won’t.
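You can see the keyword-only behaviour with nothing more than `grep` — the temp vault and note content here are illustrative:

```shell
# Keyword matching finds literal strings; conceptual phrasings find nothing.
vault=$(mktemp -d)
printf 'Log4j vulnerability CVE-2021-44228 — remediation plan\n' > "$vault/incident.md"

grep -rl 'CVE-2021-44228' "$vault"                       # literal string: matched
grep -rl 'critical logging issue' "$vault" || echo "no conceptual match"
```

This is the gap the Smart Connections embeddings (Section 5.5.1) exist to close.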
The mcpvault MCP server provides structured, programmatic access to the Obsidian vault that goes beyond what the Filesystem MCP or Obsidian CLI offer. Where the Filesystem MCP gives raw file read/write and the CLI gives search and basic CRUD, mcpvault adds vault-aware operations designed for knowledge management.
Key capabilities:
| Tool | What It Does | Why It Matters |
|---|---|---|
| `search_notes` | Content and frontmatter search with match counts and excerpts | Faster, more structured results than CLI search — returns match counts per file |
| `read_note` / `read_multiple_notes` | Read notes with frontmatter parsed separately from content | Structured access — frontmatter comes as a parsed object, not raw YAML in a string |
| `write_note` / `patch_note` | Create, overwrite, append, prepend, or surgically patch notes | `patch_note` replaces a specific string without rewriting the entire file — ideal for AI edits |
| `get_frontmatter` / `update_frontmatter` | Query and modify YAML frontmatter independently of content | Update a project’s status or priority without touching the note body |
| `manage_tags` | Add, remove, or list tags across notes | Bulk tag operations without manually editing frontmatter |
| `list_directory` / `get_notes_info` / `get_vault_stats` | Directory listing, note metadata, vault-level statistics | Useful for audits, dashboards, and vault health checks |
MCP server setup:
Add to your project .mcp.json:
```json
{
  "mcpServers": {
    "mcpvault": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "mcpvault"],
      "env": {
        "VAULT_PATH": "/path/to/SecondBrain"
      }
    }
  }
}
```
When to use mcpvault vs Obsidian CLI vs Filesystem MCP:
| Task | Best Tool |
|---|---|
| Quick keyword search | Obsidian CLI (obsidian search) |
| Read a single note | Either mcpvault or Obsidian CLI |
| Bulk read multiple notes | mcpvault (read_multiple_notes) |
| Query or update frontmatter | mcpvault (get_frontmatter, update_frontmatter) |
| Patch part of a note | mcpvault (patch_note) |
| Tag management | mcpvault (manage_tags) |
| Access non-vault files (CSV, PDF, DOCX) | Filesystem MCP |
| Semantic search | Smart Connections MCP (Section 5.5.1) |
mcpvault complements the existing tools — it does not replace them. The Obsidian CLI remains the fastest path for quick searches; the Filesystem MCP handles non-vault files. mcpvault fills the gap for structured vault operations that the other tools handle clumsily or not at all.
Because Markdown files are the source of truth — not any model’s conversation history — switching between AI models is low-friction. The vault is portable; the CLAUDE.md file is plain text; auto-memory is a directory of Markdown. Slash commands are Claude Code-specific and would need to be rebuilt for another agent, but the underlying knowledge survives the move.
| Task | Recommended Model | Why |
|---|---|---|
| Strategic synthesis | Claude Code | Strong at nuanced reasoning, connecting disparate concepts |
| Compliance mapping | Claude Code | Effective at regulatory text cross-referencing |
| Data processing | Gemini CLI | Strong at structured data analysis |
| Quick vault queries | Either | Both perform equivalently for simple lookups |
Both Claude Code and Gemini CLI read the same Markdown files. Your knowledge stays in your vault, not in any provider’s chat history. The day you want to move, the data is portable — the tooling is what you would rebuild.
The CISO Desktop is the second half of the system — and the part that makes it genuinely powerful for security leaders rather than just knowledge workers.
Two layers, one brain:
| Layer | What It Is | What Lives Here |
|---|---|---|
| Vault | Your thinking space | Daily notes, project tracking, meeting captures, reflections, decisions, personal research |
| CISO Desktop | Your reference data | Risk register, control framework, compliance status, remediation tracker, policy library, vendor assessments |
These are distinct but connected. The vault is fluid and narrative — it’s where you think. The CISO Desktop is structured and authoritative — it’s what you report from. AI reads both simultaneously.
Repository structure:
A separate Git workspace outside the vault, organised into one repository per security programme domain. Typical domains include:
Each repository is plain text — Markdown, CSV, YAML — with no proprietary database. Alongside these, keep lightweight tools/, exports/, and scripts/ directories for the glue that automates routine work across the data set.
The skills that make it agentic:
The CISO Desktop becomes genuinely powerful when you build domain-specific AI skills on top of it:
| Skill | What It Does |
|---|---|
| `/internal-audit` | Structured management risk review — reads your risk register and control data, verifies control status against evidence tiers (assertion → documentation → system-level), produces a scored finding list |
| `/risk-triage` | Rapid prioritisation using a 6-factor model (staleness, criticality, control gap, deadline pressure, recurrence, cascade potential) — turns a risk register into an action list |
| `/risk-trend` | Trend analysis from historical review data — compares current posture to previous periods, identifies improving and deteriorating areas |
| `/remediation` | Manages the remediation tracker lifecycle — view open findings, update status, close resolved items, create new findings from audit output |
These are not generic AI prompts. They are structured workflows that know your data schema, your organisation’s risk taxonomy, and your reporting format. The AI knows where your risk register lives, what the columns mean, and how you classify control gaps.
The bridge — /push:
The most useful pattern is the one that keeps the two layers in sync. During a working session, you’ll learn things that belong in the structured data layer:
```
/push Vulnerability scanning now confirmed deployed in all regions
/push Offboarding process updated — new leavers checklist signed off today
/push Cloud security benchmark review completed for platform team
```
The AI identifies which file in which repository the information belongs in, reads the existing format, makes a targeted update, logs it in your daily note, and reports back. You stay in flow. The reference data stays current.
The question that showed me what I’d actually built was this one:
“Using our full risk and business context, give me three novel ways an attacker could take down the core of the business.”
The CISO Desktop had visibility across everything — business context, architecture, operational dependencies, the risk register, control status, vendor relationships. A human analyst working across the same material would need days, and would likely miss things because the connections aren’t visible at the scale required.
The AI came back with three answers. The first was something we’d already identified and were working on. The second was non-obvious — significant work to address, but once named, the path was clear. The third I hadn’t considered at all. It’s the kind of finding I’d rather have surfaced privately than missed entirely — and it’s the reason I now trust this system to think alongside me.
That’s not a failure of the system. That’s the system doing exactly what it’s supposed to — surfacing what’s genuinely hard, not just confirming what you already knew. No analyst I could have assigned to that question would have connected the dots across all of those repositories in the same way. The AI didn’t know more than my team. It had visibility my team didn’t have.
Where this lives — enterprise controls for enterprise data:
Before anything else in this section: the CISO Desktop described here lives on enterprise infrastructure with enterprise controls, not on my personal kit. Section 8 covers the threat model in full, but the short version belongs here so nobody reads the previous paragraph and wonders. This is where data classification applies to your own infrastructure, not just your organisation’s.
The personal vault — daily notes, project tracking, reflections — lives in a self-hosted Forgejo instance. Personal data, personal infrastructure, appropriate controls for that sensitivity level.
The CISO Desktop repositories are a different classification entirely. Risk registers, control frameworks, compliance evidence, security programme details — this is sensitive organisational data. It lives in GitHub Enterprise, not a personal self-hosted instance. GitHub Enterprise provides the controls that data requires: SSO and MFA enforcement, audit logging, access management, DLP integrations, and the governance overhead that enterprise security data should have.
The principle is not “always self-host” or “always use enterprise SaaS.” The principle is classify your data and apply controls appropriate to the classification. Which is exactly what you would tell your organisation to do. If you wouldn’t store your risk register in a personally-managed system at work, you should not store it in a personally-managed system at home.
Every change across both tiers is committed with a meaningful message, creating an auditable history of your security program’s evolution. The /gitsync skill handles committing and pushing across all CISO Desktop repositories in one command; /mirror handles keeping the personal vault in sync with Forgejo.
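What a `/gitsync`-style pass boils down to can be sketched in shell. The directory layout, function name, and commit message here are illustrative — the actual skill writes a meaningful message per change:

```shell
# Sketch: stage everything in each domain repo, commit only where something
# changed. Assumes one git repo per directory under the given root.
gitsync() {
  for repo in "$1"/*/; do
    git -C "$repo" add -A
    if ! git -C "$repo" diff --cached --quiet; then
      git -C "$repo" commit -q -m "desktop sync $(date +%F)"
      # git -C "$repo" push   # enable once remotes are configured
    fi
  done
}
```

Repos with no changes are skipped, so the history stays meaningful rather than accumulating empty commits.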
Cross-cutting AI queries:
With AI access to both the vault and the CISO Desktop simultaneously:
An AI assistant with access to generic industry knowledge can tell you what most organisations do. An AI assistant with access to your own programme data can tell you what you have already decided and where your gaps are. Those are categorically different conversations.
Coming soon: The CISO Desktop — public release
The CISO Desktop is the next release. Beyond the repository structure and skill framework described here, I am actively developing agentic applications on top of this architecture: automated risk review workflows, control assurance tooling, AI-assisted audit preparation, and programme dashboards that run against your own structured data rather than generic benchmarks.
The full CISO Desktop repository — including sanitised skill definitions, data schemas, and the agentic tooling built on top of them — will be released as a separate public repository.
Follow on GitHub or LinkedIn to be notified when it drops. If this manual was useful, the Desktop release will be the operational layer on top of it.
This is the most important section in this manual. The tooling is irrelevant if you do not build consistent capture. A missed note is a lost decision, a forgotten commitment, or a pattern you’ll never see.
| When | Duration | Action |
|---|---|---|
| Morning | 5 min | Create today’s daily note from template. Review yesterday’s open items. Set priority focus. |
| After meetings | 2-3 min each | Capture key decisions, action items, and open questions. Raw is better than absent. |
| End of day | 10-15 min | Run the end-of-day reflection (see below). |
| Weekly | 20-30 min | Run the weekly review. Update project statuses. Identify patterns. |
The capture habit is everything. Do not advance to any other part of this system until you have used your daily note consistently for at least 7 days and your vault is syncing successfully. If you cannot sustain this minimal habit, adding AI tooling will not help.
A note for ADHD readers: The capture habit is simultaneously the most important part of this system and the hardest one for an ADHD brain to sustain. Raw and imperfect beats complete and absent — always. A two-word bullet is better than nothing. The AI can synthesise rough notes; it cannot work from memories you never captured. Lower the bar for what counts as “done” until the habit is automatic.
The end-of-day workflow is the engine that keeps the Second Brain alive. Without it, the vault becomes a write-only data store — notes go in, nothing comes out.
This workflow is defined as a Claude Code “skill” — a structured prompt that the AI follows step by step. When triggered, it:
The reflection tone matters. Too soft (“Great job today!”) is useless. Too harsh (“You wasted the day”) is unconstructive. The right tone is specific, balanced, and actionable — a trusted advisor who references evidence from your actual notes.
The end-of-day reflection has told me things I didn’t want to hear and couldn’t disagree with.
The most consistent pattern it surfaces is the one that CISO roles reliably produce: days that feel busy and accomplish nothing strategic. The reflection doesn’t soften this. It will tell you directly that today was reactive, that nothing moved forward that you’re actually accountable for, that several items you handled should have been handled by someone on your team.
It has also, more than once, told me I’ve been avoiding a difficult conversation. My instinct is to push back on that — I don’t think of myself as someone who avoids hard things. But the vault knows what I’ve been writing about across the week. It can see when the same person or the same unresolved issue has appeared in my notes multiple times without movement. It names the pattern before I’ve consciously acknowledged it. And it’s been accurate.
The trick — and this matters — is keeping the vault current. The reflection is only as honest as what you put in. Surface-level notes produce surface-level reflection. Feed it the real version of your day, including the frustration, the doubt, the thing you didn’t do, and it gives you something genuinely useful back. The system learns from what you give it. It cannot learn from what you protect it from.
In a complex organisation the AI will benefit from structured context about the people you work with most often. A short note per person — role, organisational context, how you typically work together — lets the AI engage at the right level rather than needing you to explain who everyone is every time their name comes up.
Think of it as the same thing you’d write in any professional contact record: who they are, what they do, relevant working history. Not a psychological profile. Not a dossier. The scope is deliberately narrow — what goes in is anything you would write in a work document that might be read by the person themselves or by your manager. Anything you would not write in that context does not belong in the vault either.
The boundary matters and it is worth stating directly. Structured context for an AI is a useful pattern. Accumulating running observations about named individuals is a different kind of workflow, and one I am deliberately not building. The deeper dynamics of any particular working relationship — the things you are actively working through, the judgement calls, the things you would only say to a trusted peer — those stay in your own head. The card gives context. It does not replace judgement, and it is not a substitute for the human work of managing relationships.
The weekly review synthesises — it doesn’t copy-paste from daily reflections. It covers:
The weekly review is saved to 40-Daily/Weekly/YYYY-Www.md and becomes part of the vault’s permanent record. Over time, these accumulate into a powerful dataset for identifying patterns in how you spend your time, where you get stuck, and what actually moves the needle.
These commands manage context across sessions. Skills (marked with /) are installed workflow definitions. /preserve is a vault convention — a natural-language instruction that Claude recognises from your CLAUDE.md rather than an installed skill.
| Command | What It Does | When |
|---|---|---|
| `/preserve [topic]` | A CLAUDE.md convention: tell Claude to append a specific insight or decision to the Persistent Insights section of CLAUDE.md for permanent retention | When you reach a conclusion that should permanently inform future work |
| `/endofday` | Triggers the full end-of-day reflection workflow | End of every working day |
| `/endofweek` | Triggers the weekly review workflow | End of every working week |
| `/remind [person] [what] [when]` | Creates a tracked reminder assigned to a person with optional due date | When you tell someone to do something and want to track it was done |
| `/capture` | Sweeps Gmail and Google Calendar for actionable items and pulls them into the vault | Start of day or when catching up on external inputs |
| `/inbox` | Processes quick-capture notes from the Inbox folder — classifies, files, and links each one | When the Inbox has accumulated unprocessed notes |
| `/linker` | Scans recent notes for unlinked mentions, missing wikilinks, and semantic connections | Periodically, to strengthen the knowledge graph |
| `/ask-vault [question]` | Natural language query across the entire vault — searches, reads, and synthesises an answer | When you need to find or cross-reference information across the vault |
| `/push [information]` | Pushes information discovered during a vault session to the relevant structured reference repo outside the vault | When you learn something that should update your canonical reference data (risk registers, compliance trackers, security tooling inventories, etc.) |
A recurring frustration for any leader: you tell someone to do something and have to carry the mental overhead of tracking whether it got done. The /remind skill offloads this to the vault.
How it works:
Type /remind followed by natural language:
```
/remind Alex needs to manage the absences with Johan. Reminder sent today.
/remind Sam to send cost savings figures by Friday
/remind myself follow up on board meeting prep by 2026-04-07
```
The AI parses the input and creates a structured task in 20-Areas/Team Management/Reminders.md:
```markdown
- [ ] [[Alex]] — Needs to manage the absences with Johan. Reminder sent 23/3/26 #reminder [sent:: 2026-03-23] [chase:: 2026-03-26] 📅 2026-03-26
```
Key design choices:
- The 📅 emoji date ensures the reminder surfaces in your “Tasks Due Today” query on the chase date. You don’t need to remember to check — it comes to you.
- `Waiting For.md` tracks delegated deliverables (someone owes you a piece of work). Reminders track nudges and follow-ups (you told someone to do something and want to verify it happened). Some reminders may graduate to Waiting For items if the work is significant.

The Reminders dashboard uses Dataview queries to show active reminders sorted by chase date, grouped by person, and a history of completed items. Over time, this builds a record of who needs chasing and who doesn’t — useful data for the delegation patterns surfaced in end-of-day reflections.
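A minimal Dataview sketch of such a dashboard — the file path and `chase` field follow the reminder format shown earlier, but treat the exact query as illustrative rather than prescribed:

```dataview
TASK
FROM "20-Areas/Team Management/Reminders"
WHERE !completed AND chase
SORT chase ASC
```

Grouping by person and a completed-items history are variations on the same query (swap the `WHERE` clause and add a `GROUP BY`).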
A common pattern in knowledge work: you learn something during a conversation or daily session that belongs in a structured reference repository outside the vault. A risk status changes. A security tool is confirmed deployed in a new region. A compliance milestone is hit. If you don’t update the reference data immediately, it drifts out of date.
The /push skill bridges the gap between your working vault and your canonical reference repos.
How it works:
Type /push followed by natural language describing the information:
```
/push Vulnerability scanning confirmed deployed in all regions
/push Offboarding procedures updated and current as of today
/push Awareness training platform is now available in all business units
```
The AI:

- Identifies which repository and file the information belongs in
- Reads the existing format and makes a targeted update
- Logs the change in your daily note under a CISO Desktop Updates heading for audit trail

Key design choices:

- Natural language in, targeted update out — the same interaction model as `/remind`: execute and report.

This is particularly valuable when your vault and reference repos serve different purposes — the vault is your thinking space, but structured repos (risk registers, compliance trackers, security tooling inventories) are your authoritative data. `/push` keeps them in sync without breaking your flow.
Every significant initiative gets a project card — a Markdown file with structured frontmatter and a standardised format. Project cards live in 10-Projects/, organised by topic folder.
The project card is the single source of truth for a project’s status, decisions, and history. Daily notes capture activity; the project card captures state.
Create a project card for anything that spans multiple days, generates follow-up actions, or involves multiple people.
Do not create cards for single tasks, one-off meetings, or things resolved same-day.
```yaml
---
type: project
title: "Descriptive Project Name"
context: work        # or home
status: active       # active / paused / blocked / complete
priority: medium     # high / medium / low
started: 2026-03-01
due:                 # optional — many CISO projects don't have hard deadlines
owner: Steve
area: Risk Management  # domain area
tags:
  - project
  - relevant-tag
---
```
The context field (work/home) replaces folder-based separation. All projects live under 10-Projects/ in topic folders — the frontmatter drives filtering.
The end-of-day workflow automatically detects project signals in your daily notes. When it identifies something that looks like a project (mentioned multiple times, has follow-up actions, involves multiple people), it checks whether a card already exists and proposes creating one if it doesn’t.
This means you don’t need to remember to create project cards. The system surfaces them for you — you just approve or reject.
With Dataview, your project cards become a live dashboard. The query:
```dataview
TABLE status, priority, started, due, area
FROM "10-Projects"
WHERE type = "project" AND status = "active"
SORT priority ASC
```
Would produce output like:
| File | status | priority | started | due | area |
|---|---|---|---|---|---|
| Zero Trust Network Rollout | active | high | 2026-01-15 | 2026-06-30 | Network Security |
| Cloud Security Posture Review | active | high | 2026-02-01 | 2026-04-15 | Cloud Security |
| Vendor Risk Assessment Program | active | medium | 2026-01-20 | | Third Party Risk |
| Security Awareness Refresh | active | medium | 2026-02-10 | 2026-03-31 | Awareness |
| SOC Automation Phase 2 | active | medium | 2025-11-01 | 2026-05-01 | Security Operations |
| Incident Response Playbook Update | active | low | 2026-03-01 | | Incident Response |
Similar queries work for risk registers, policy review dates, vendor assessments — anything with structured frontmatter becomes queryable.
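For example, a policy review-date query might look like this — `type`, `owner`, and `review_date` are assumed frontmatter fields for illustration, not a schema this manual prescribes:

```dataview
TABLE owner, review_date
FROM "20-Areas"
WHERE type = "policy" AND review_date <= date(today) + dur(90 days)
SORT review_date ASC
```

The same shape works for vendor reassessment dates or risk review cadences: filter on a date field, sort by urgency.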
A CISO’s Second Brain contains sensitive strategic information — risk assessments, incident details, board presentations, personnel observations, investigation notes. The security model must match the sensitivity.
| Threat | Mitigation |
|---|---|
| Data at rest on work device | Encrypted volume (VeraCrypt on Windows, LUKS on Linux). Vault is an opaque blob when dismounted. |
| Data at rest on mobile | iOS/Android device encryption (enabled by default with passcode). |
| Data in transit (sync) | Obsidian Sync uses E2E encryption (AES-256). Encryption password never leaves your devices. |
| Data in transit (Git) | SSH-encrypted transport to self-hosted Forgejo instance (or private GitHub repository). |
| Colleague browsing files | Encrypted volume auto-dismounts on screen lock, inactivity, and logoff. |
| Device seizure / forensics | Protected when volume is dismounted. Main gap: hibernation can write encryption keys to disk — disable it. |
| Real-time access while mounted | Not protected against OS admins or endpoint monitoring while volume is mounted. A mounted volume is an open volume. |
| AI processing of sensitive data | Use local models (Ollama) for classified content. Cloud AI (enterprise tiers only) for non-sensitive analysis. |
For work devices, run Obsidian from a VeraCrypt encrypted container:
- Keep the vault inside the container and mount it as a dedicated drive letter (e.g., `V:`).

Daily workflow: Mount volume → enter passphrase → Obsidian opens → work → lock screen → auto-dismount. The vault is an encrypted blob whenever you’re not actively using it.
Critical: Disable Windows hibernation (powercfg /h off) — it can write encryption keys to disk. Consider clearing the pagefile on shutdown via Group Policy.
| Layer | Component | Protection | Controls |
|---|---|---|---|
| 1. Vault | Local Markdown files | Full-disk or volume encryption | OS encryption, locked screensaver, device policy |
| 2. Sync | Obsidian Sync / Forgejo (self-hosted Git) | E2E encryption (Sync) / SSH keys (Git) | No PATs in vault, access audit, no third-party cloud for structured data |
| 3. Local AI | Ollama, local models | Data never leaves machine | Network isolation for sensitive queries |
| 4. Cloud AI | Claude/Gemini enterprise tiers | Provider data privacy guarantees | Enterprise tier only; verify no-training policies |
Tag any file containing classified information with `classification: sensitive` in frontmatter, and build automation around that tag — checks that catch sensitive files before they reach sync remotes or cloud AI processing.
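A first building block for that automation — a sketch that enumerates tagged files so a sync or AI pipeline can exclude them (the frontmatter key matches the convention above; the function name is illustrative):

```shell
# List vault files whose frontmatter declares classification: sensitive,
# so downstream automation can exclude them from cloud processing.
find_sensitive() {
  grep -rl --include='*.md' '^classification: sensitive' "$1"
}
```

Pipe the output into whatever gate you need — a pre-commit hook, a sync exclude list, or a guard in an AI skill.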
Community plugins run arbitrary code with full vault access. Treat each one as a software install, with the same scrutiny you would apply to any third-party code.
For CISOs who want an always-available AI-augmented vault accessible from any device — including a phone — a lightweight headless Linux server provides a persistent home for the vault without needing a GUI.
A Debian/Ubuntu LXC container on Proxmox (or any virtualisation platform) running:
- Obsidian Headless (`obsidian-headless`) — official CLI sync client that keeps the vault synced via Obsidian Sync without a GUI
- Claude Code CLI — run in tmux sessions directly against the vault files

No VNC, no desktop environment, no AppImage. The vault is plain markdown on disk, synced continuously. You SSH in from any device (phone, tablet, laptop) and run Claude Code directly against the vault files.
A note on the persistent daemon tension. Section 2.3 argues against always-on AI agents. The server node does run a persistent process — but it is the Obsidian Sync client, not an AI agent. It has one job: keep the vault in sync. It does not hold an AI context, does not read your files for any purpose beyond sync, and does not make outbound calls to anything except the Obsidian Sync service. Claude Code on this node remains session-based — it runs only when you are in a tmux session actively using it. The rule is still no persistent AI agents; sync is a different class of workload with different risks.
```
Phone (Termius/SSH) → Tailscale/VPN → Server LXC
  ├── Claude Code CLI (in tmux)
  ├── obsidian-headless (continuous sync)
  └── Obsidian vault (plain markdown)
```
Prerequisites: Node.js 22+, an active Obsidian Sync subscription, a Claude account (Max plan or API key).
```bash
# Install Node.js 22
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs tmux git

# Install Obsidian Headless
npm install -g obsidian-headless

# Login and configure sync
ob login
ob sync-list-remote
ob sync-setup --vault <VAULT_ID> --path /root/vault --device-name server
ob sync --path /root/vault   # test one-time sync

# Install Claude Code
curl -fsSL https://claude.ai/install.sh | bash
export PATH="$HOME/.local/bin:$PATH"
```
Run `ob sync --continuous` as a systemd service for always-on sync. Run `claude` inside a tmux session so it persists across SSH disconnections.
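One way to wire the sync up — a minimal systemd unit sketch. The unit name, `ob` invocation path, and vault path are assumptions to adjust for your install:

```ini
# /etc/systemd/system/obsidian-sync.service (sketch — paths are assumptions)
[Unit]
Description=Obsidian headless continuous vault sync
After=network-online.target
Wants=network-online.target

[Service]
# Assumes `ob` is on the system PATH and the vault lives at /root/vault
ExecStart=/usr/bin/env ob sync --continuous --path /root/vault
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable with `systemctl enable --now obsidian-sync`, then check `systemctl status obsidian-sync` after a reboot to confirm it came back up.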
What works: Claude Code reads and writes all vault markdown directly. Skills (capture, endofday, weekly review, etc.) work normally. MCP integrations (Gmail, Calendar, Jira) work via OAuth. All changes sync bidirectionally to your other devices.
What doesn’t execute server-side: Dataview queries, Tasks plugin queries, and Templater commands exist as raw text in the files. They render correctly when you open the vault on a device running the full Obsidian app. For an AI workflow this rarely matters — Claude Code works with the underlying markdown, not the rendered output.
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 1 core | 2 cores |
| Memory | 2 GB | 4 GB |
| Disk | 16 GB | 20-32 GB |
The headless approach is significantly lighter than running the full Obsidian desktop app — no Electron, no GUI rendering, no display server.
- Don’t run `shutdown -r` inside an LXC — it shuts down and does not restart. Use `reboot` or restart from the hypervisor.

| Anti-Pattern | Why It Fails | What to Do Instead |
|---|---|---|
| Cathedral before the first brick | Months on infrastructure, no capture habit. Technically perfect, practically unused. | Start with a simple vault and daily notes. Build the habit for a week before adding anything. |
| Perfecting instead of capturing | Formatting friction prevents real-time capture. Insights evaporate. | Capture raw first. Refine later. A messy note is infinitely better than no note. |
| Archive instead of workspace | Notes go in, never come out. No synthesis, no re-reading. Write-only data store. | Use daily reviews. Build the end-of-day habit. Force yourself to revisit and connect. |
| Tools before friction | RAG, semantic search, and automation installed before enough notes justify them. | Follow phase gates. Each tool solves a problem you have actually experienced. |
| Persistent daemons | Always-on agents with filesystem + messaging access = unacceptable attack surface for a CISO. | Session-based tools (Claude Code) and scheduled scripts (cron). No daemons. |
| Storing secrets in vault | API keys in Markdown files get synced to Git remotes or processed by AI. | Environment variables or secrets manager. Never in vault files. |
| Cloud AI for classified data | Sensitive risk assessments sent to cloud APIs without data classification. | Classify in frontmatter. Use local LLMs for sensitive analysis. Cloud for non-sensitive only. |
| Trusting beta as production | Building critical workflows on days-old software with known bugs. | Evaluate betas in test environments. Specify fallbacks. Migrate after proven stability. |
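The "no secrets in the vault" rule is easy to check mechanically before a commit. The sketch below greps vault Markdown for obvious key-like strings; the function name and pattern are my own illustration, and a dedicated scanner such as gitleaks covers far more cases.

```shell
# Rough pre-commit check: flag key-like strings in vault markdown.
# Illustrative only; a real secret scanner covers far more patterns.
scan_vault_for_secrets() {
  grep -rnE '(api[_-]?key|secret|token)[[:space:]]*[:=][[:space:]]*[A-Za-z0-9_-]{16,}' \
    --include='*.md' "$1" || true
}
```

Run it as `scan_vault_for_secrets ~/vault`; any output is a file that needs cleaning before it syncs anywhere.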
Do not try to build everything at once. Each phase has a clear gate criterion — do not advance until the previous phase is delivering value.
| Phase | Focus | Timeline | Gate |
|---|---|---|---|
| 0 | Foundation & Habit | Week 1 | 7+ days of consistent daily notes, vault syncing |
| 1 | AI Integration | Weeks 2-4 | AI adds measurable value — you can name a specific insight it helped you reach |
| 2 | Structure | Month 2+ | Keyword search feels insufficient (typically around 500 notes) |
| 3 | Memory & Retrieval | Quarter 2+ | Manual memory management becomes a bottleneck |
| 4 | Automation | Quarter 3+ | Concrete automation need justified, security controls validated |
Phase 0 is the most important. If you cannot sustain a 5-minute morning note and 2-minute post-meeting capture for one week, no amount of AI tooling will help. The system is a behavioural change project with technology support, not the other way around.
| Component | Maturity | Maintainer | Risk | Fallback |
|---|---|---|---|---|
| Obsidian | Stable | Obsidian (company) | Low | N/A (foundational) |
| Obsidian Sync | Stable | Obsidian (company) | Low | Git sync |
| Obsidian Git plugin | Stable | Community | Low-Med | Manual git commits |
| Claude Code | Production | Anthropic | Low | Gemini CLI |
| Gemini CLI | Production | Google | Low | Claude Code |
| Filesystem MCP | Stable | Anthropic (official) | Low | Direct file access |
| Dataview plugin | Stable | Community | Low | Manual queries |
| Templater plugin | Stable | Community | Low | Manual templates |
| mcpvault MCP | Active | Community (npm) | Low | Obsidian CLI + Filesystem MCP — functional but lacks structured frontmatter access, bulk reads, and patch edits. |
| Smart Connections | Active (Phase 3) | Community | Medium | Keyword search (obsidian search) — functional but misses semantic links. License changed to proprietary in v4. |
| Component | Cost | Notes |
|---|---|---|
| Obsidian | Free | Free for personal use |
| Obsidian Sync | ~$8/month (personal) | Optional if you only use one device; needed for iOS/Android |
| Claude Code | ~$100/month (Max plan) | Or pay-per-token via API — cheaper for lighter use |
| Gemini CLI | Free tier available | Useful as a Claude fallback or for specific tasks |
| Forgejo (self-hosted) | Free | Requires a server; runs on minimal hardware |
| Server node (optional) | Varies | Can run on a $5-10/month VPS, repurposed hardware, or a Proxmox LXC on existing infrastructure |
Minimum viable setup: ~$8/month. Obsidian Sync only — no server node, Claude Code on a free trial or limited API budget. Get the habit right before spending on tooling.
Full stack: ~$110-120/month. Obsidian Sync + Claude Max plan. Everything else (Forgejo, server node, Gemini CLI) runs on infrastructure you may already have.
You do not need everything in this document. The system works at any level of sophistication. Here is the honest minimum that delivers real value:
Week 1 only — no AI, no plugins:
If you can’t sustain this for seven days, stop. No amount of AI tooling will fix a capture habit that doesn’t exist.
Week 2-4 — add AI:
At this point you have 80% of the value of the full system. Everything else — server nodes, encrypted volumes, CISO Desktop repos, semantic search — is incremental improvement, not the foundation.
Later, when you feel a specific friction point:
Don’t build the full architecture before you have the habit. The system is a behavioural change project with technology support, not the other way around.
Everything else comes later, when you need it, not before.
Start today. Open Obsidian. Create your first daily note. Everything else follows.
A final note on iteration.
The version of this system I am running today is not the version I was running three months ago, and it will not be the version I am running three months from now. New models arrive. New integrations become possible. Skills get rebuilt as the underlying tools improve. Things I thought were solved turn out to have better solutions.
This is not a problem. This is the point. The AI landscape is moving faster than any other technology shift I have experienced in thirty years in this industry, and the correct response is to build something that can move with it — not something that assumes the current state is the final state.
What stays constant is the vault. The Markdown files you write today will be readable by whatever AI exists in ten years. The decisions you document, the patterns you surface, the knowledge you accumulate — that compounds regardless of which model you are running or which tools you are using. The infrastructure evolves. The knowledge endures.
Build the habit. The rest follows.
If this was useful, you can buy me a coffee. If you build something with it, I’d genuinely like to hear about it.
The workflows referenced throughout this manual (/endofday, /fromchat, /push, etc.) are implemented as Claude Code slash commands — Markdown files stored in .claude/commands/ at the root of the vault. When Claude Code starts, it reads this directory and exposes each file as a slash command.
This appendix documents the skills that power the system. The versions here are generalised and sanitised — they remove specific names, employer paths, and personal idiosyncrasies from the skills I actually run, so you can adapt them to your own vault structure and role. Treat them as starting points, not canonical versions. Each one is a Markdown file with a heading, a description, and a numbered workflow that Claude Code follows step by step.
Domain-specific skills — internal audit workflows, risk scoring models, remediation trackers, control assurance tiers — are built on the same framework but contain organisation-specific data schemas and are not published here. The full CISO Desktop skill library, including these tools and the agentic applications I am building on top of them, will be released as a separate public repository. Watch GitHub or follow on LinkedIn for the release.
How to install a skill:
1. Create `.claude/commands/` at the root of your vault if it does not exist.
2. Add a `.md` file with the name matching the command (`endofday.md` becomes `/endofday`).
3. Restart Claude Code; it reads the directory at startup and exposes the new command.

A note on skill evolution. The skills below are snapshots. In practice mine change every few weeks as I discover new edge cases, as models improve at certain tasks, or as new MCP servers become available. The specific steps are less important than the pattern: give the AI explicit context gathering, explicit workflow steps, explicit tone guidance, and explicit rules. Vague prompts produce vague output.
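In shell terms, installing a skill is just creating a file. The sketch below installs a hypothetical `/hello` command; the name and body are illustrative, run it from the vault root.

```shell
# Install a hypothetical /hello skill (run from the vault root).
mkdir -p .claude/commands
cat > .claude/commands/hello.md <<'EOF'
# /hello
Greet the user and state today's date.
EOF
```

Restart Claude Code and `/hello` appears alongside the built-in commands.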
### /endofday — CISO Daily Reflection

The most important skill in the system. This is what turns a note-taking vault into a thinking system. Without a daily reflection habit, the vault becomes a write-only data store — notes go in and nothing comes out.
Skill file: .claude/commands/endofday.md
# End of Day — Daily Reflection
You are closing out the working day. The goal is to help capture decisions,
detect emerging projects, surface open questions, and deliver an honest,
evidence-based reflection on how the day went. This is a thinking partner
exercise, not a reporting one — the reflection should challenge as much as
it supports.
## Workflow
### Step 1: Gather context
Run these in parallel to save time:
1. Read today's daily note using `obsidian daily`
2. Read any "Waiting For" tracker in the Areas folder
3. List all active project cards in `10-Projects/` (filter by status = active)
### Step 2: Follow the threads
Scan the daily note for `[[wikilinks]]` and referenced notes. Read any that
look relevant to understanding what happened today — project notes, meeting
notes, decision logs. Use judgement: a link to a template is not worth reading,
a link to a meeting note probably is.
### Step 3: Detect project signals
Scan the daily note and identify things that might be projects. Signals:
- Mentioned multiple times in the note
- Has a follow-up action beyond today
- Involves multiple people or systems
- Described as being "worked on", "led", "tracked", or "kicked off"
- Resembles an initiative, review, implementation, or investigation
For each signal, run `obsidian search "<topic>"` to check whether it already
has vault history or an existing project card.
### Step 4: Present findings for approval
Present a structured summary. Do NOT write anything to the vault yet. Format:
**Today's summary** — 2-3 sentences on the day's key activity.
**New project signals** — table of potential new projects with no existing
card. For each: topic, why it qualifies, does it exist elsewhere in the vault?
Ask the user to confirm each one as Yes / No / Maybe before creating cards.
**Existing project updates** — activity to append to existing project cards,
grouped by project.
**Decisions made** — explicit and implicit decisions from the day.
**Open questions** — new questions, evolved questions, and suspiciously
long-standing questions.
**End-of-day review** — what moved forward, what didn't, and why.
### Step 5: The CISO reflection
This is the most important output. Honest, evidence-based assessment covering:
- **Productivity** — volume and quality of output. Busy vs productive.
Five hours of meetings with one producing a clear outcome is a problem
worth naming.
- **Stress signals** — read from the tone of the notes. Unresolved people
issues, emotional weight, context-switching overload.
- **Delegation check** — work that was below the role level and should have
been handled by the team. Also what WAS at the right level.
- **Pattern to watch** — recurring behaviours. Avoiding a hard conversation,
spending time on low-value tasks, saying yes to everything.
- **Score: X/10** — overall day rating with justification. 5 is average.
7+ means real strategic progress. Below 5 means the day got away from you.
### Step 6: Wait for approval, then write
Once the user approves the content, update the daily note with each section
under appropriate headings. Use the Edit tool — do not rewrite the file.
### Step 7: Create new project cards
For each signal confirmed as Yes, create a new card in `10-Projects/<area>/`
using the project template at `00-System/Templates/project-template.md`.
Fill in: title, started date, area, owner, and the summary and context
sections based on today's notes.
### Step 8: Update existing project cards
For each existing project with activity today:
1. Read the existing card in full
2. Append a new Progress Log entry with today's date (2-4 bullets max)
3. Update the `status` field if it has changed
4. Check off completed Open Items, add new ones
5. Add to the Key Decisions table if a decision was made today
Do not rewrite existing content. Only append and update.
### Step 9: Report completion
Confirm:
- How many project cards were created
- How many were updated
- Open items carrying to tomorrow
## Tone Guide
The reflection tone matters. Here is the spectrum:
- **Too soft:** "Great job today! You got a lot done." — Useless.
- **Too harsh:** "You wasted the entire day in meetings." — Unconstructive.
- **Right tone:** "Five hours in meetings with one producing a clear outcome.
The 1:1 was the right use of your time — that is a strategic relationship.
The status meeting could have been an async update." — Specific, balanced,
actionable.
Reference specific activities from the day as evidence for every claim.
Never make vague assertions.
## Rules
- Project cards live in `10-Projects/` organised by topic (Strategy,
Compliance, Infrastructure, etc). Not by work / home.
- Use `context: work` or `context: home` in frontmatter — not folder
separation.
- Do not create cards for single tasks, one-off meetings, or things resolved
same-day.
- When in doubt, ask rather than guess.
- The vault is the source of truth. Do not invent activity that is not in
the notes.
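Step 2's link-following is simple to mechanise outside the skill as well. A sketch of extracting the unique wikilink targets from a note; the helper name is my illustration, and it handles the `[[Note|alias]]` form by keeping only the target.

```shell
# Pull the unique [[wikilink]] targets out of a note.
wikilinks() {
  grep -oE '\[\[[^]|]+(\|[^]]+)?\]\]' "$1" \
    | sed -E 's/^\[\[//; s/\]\]$//; s/\|.*$//' \
    | sort -u
}
```

Useful for deciding which linked notes are worth reading in full during the reflection.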
### /fromchat — Process Exported Conversations

Claude Code and the Claude web app are separate products. A useful conversation in the web app stays there unless you actively move it into the vault. This skill processes an exported Markdown conversation and extracts the useful vault content — decisions, project signals, action items, reference knowledge — rather than leaving the insights trapped in a chat history.
Skill file: .claude/commands/fromchat.md
# /fromchat
Process an exported GUI conversation and extract actionable vault content.
## Usage
Run `/fromchat` with an optional path:
- `/fromchat 00-System/Inbox/2026-04-12-topic.md` — process a specific file
- `/fromchat` — check `00-System/Inbox/` for unprocessed files
## Step 1: Find the file
If a path was provided, read that file directly.
If not, list all files in `00-System/Inbox/` with no `processed: true` in
their frontmatter. If multiple unprocessed files exist, ask which one to
process.
## Step 2: Read and parse
Read the full file. Identify:
- Who said what (Human vs Assistant turns)
- The overall topic and purpose
- When it took place (from filename or content)
Then extract:
**Decisions made** — look for phrases like "we decided", "the approach is",
"go with", "confirmed", "agreed".
**Project signals** — any multi-week work discussed that might need a
project card. Cross-check against existing `10-Projects/` cards.
**Action items** — anything the user was going to do, build, set up, or
follow up on.
**Reference knowledge** — factual content, explanations, or recommendations
worth keeping as a permanent vault note.
**CLAUDE.md updates** — anything that should update the vault's CLAUDE.md:
new tools, changed vault structure, new standing instructions.
**Open questions** — anything unresolved.
## Step 3: Present findings
Present a structured summary for review. Do NOT write anything yet.
## Step 4: Wait for approval, then write
Once approved, create the relevant notes:
- Decisions → append to today's daily note under a "Decisions" heading
- Project cards → create in `10-Projects/<area>/` using the template
- Reference notes → create in `30-Resources/` with an appropriate filename
- CLAUDE.md updates → propose the specific diff, get confirmation, apply
## Step 5: Mark file as processed
Add frontmatter to the imported file:
```yaml
---
processed: true
processed-date: <today>
items-created: <count>
---
```
Move the file from `00-System/Inbox/` to `50-Archive/Chat-Exports/`.
## Step 6: Report
Summarise what was created, updated, and archived.
## Rules
- Never delete or overwrite existing vault content — only append and create.
- If a reference note already exists on the same topic, append rather than
duplicate. Check with `obsidian search` first.
- Keep reference notes concise — summarise the key insight, do not dump the
full conversation.
- The inbox file is the source of truth. Do not invent items that are not
in it.
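Step 1's "find unprocessed files" check is mechanical enough to sketch, assuming the `processed: true` frontmatter convention above. The function name is my own.

```shell
# List inbox notes that have not yet been marked processed.
unprocessed_inbox() {
  for f in "$1"/*.md; do
    [ -e "$f" ] || continue
    grep -q '^processed: true' "$f" || printf '%s\n' "$f"
  done
}
```

Point it at `00-System/Inbox/` to see what is still waiting for a `/fromchat` run.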
### /endofweek — Weekly Review

The weekly review synthesises rather than summarises. It is the point at which patterns that span more than one day become visible.
Skill file: .claude/commands/endofweek.md
# End of Week — Weekly Review
Generate the weekly synthesis. The goal is not to copy-paste from daily
reflections but to identify patterns across the week that are not visible
from any single day.
## Step 1: Gather context
Read all daily notes from the current week (Monday through Friday).
Read the weekly note template if one exists. Read any existing weekly
review file at `40-Daily/Weekly/<YYYY-Www>.md`.
Also pull in:
- All project cards with `updated` this week
- The current Waiting For tracker
- Any reminders due or overdue
## Step 2: Score notes by significance
Not every daily note carries equal weight. Score each note by:
- Length (longer notes usually captured more significant days)
- Number of backlinks from other notes (referenced = important)
- Presence of decisions, project signals, or open questions
Use this to decide what to foreground and what to treat as background.
## Step 3: Synthesise the week
Produce a weekly review covering:
**Week summary** — what was the shape of the week? Strategic, operational,
reactive? Which mode dominated?
**What moved forward** — grouped by project. Only real progress, not just
activity.
**What did not move** — specific stalled items with reasons. Blocked?
Deprioritised? Neglected?
**Decisions made** — aggregated decision log from daily notes.
**Waiting for** — status check on delegated items. Flag anything older
than two weeks.
**Stakeholder touchpoints** — who was engaged (direct reports, upward,
cross-functional, external). Flag absent key relationships.
**Delegation scorecard** — rough ratio of strategic vs operational work.
**Weekly reflection** — strategic progress, time allocation, energy,
patterns. One specific thing to change next week.
**Week score: X/10** with evidence.
## Step 4: Handle existing summaries
If a weekly review already exists for this week, do not append. Re-synthesise
the whole thing, incorporating the existing content and anything new from
the vault since it was last written. The output should read coherently, not
as a diff.
## Step 5: Save and report
Save to `40-Daily/Weekly/<YYYY-Www>.md`. Report what was produced and
flag anything that needs attention before next week.
## Rules
- Synthesis over summary. The point is to see patterns, not recap events.
- Reference specific notes as evidence for claims.
- Preserve any manual observations already in the existing weekly file.
- Be specific about what to change next week. One concrete action beats
three vague intentions.
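Step 2's scoring heuristic can be sketched in shell. The weights (line count plus ten points per backlink) are arbitrary illustrations, not a calibrated model.

```shell
# Rough significance score for a note: its line count plus 10 per backlink.
note_score() {
  note="$1"; vault="$2"
  name=$(basename "$note" .md)
  lines=$(wc -l < "$note")
  # Count literal [[name]] occurrences anywhere in the vault.
  links=$(grep -rFo "[[$name]]" --include='*.md' "$vault" 2>/dev/null | wc -l)
  echo $((lines + 10 * links))
}
```

Longer, more-referenced daily notes score higher and get foregrounded in the synthesis.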
### /push — Update Reference Repos

During a working session you will learn things that belong in a structured reference repository outside the vault. A risk status changes. A security tool is confirmed deployed. A compliance milestone is hit. If you do not update the reference data immediately, it drifts out of date.
Skill file: .claude/commands/push.md
# /push
Push information discovered during a vault session to the appropriate
structured reference repository outside the vault.
## Usage
`/push <natural language description of the information>`
Examples:
- `/push Vulnerability scanning confirmed deployed in all regions`
- `/push Offboarding process updated and current as of today`
- `/push Awareness training platform live in all business units`
## Step 1: Identify the target
Search across configured reference repositories for where this information
belongs. Use Filesystem MCP to scan the repo directories. Match on:
- File names and paths
- Existing content on the same topic
- Frontmatter or headings indicating the right table, row, or section
If multiple candidates match, pick the most specific one and mention the
alternatives in the report.
If nothing matches, report that and ask whether to create a new entry or
cancel.
## Step 2: Read and understand the existing format
Before editing, read the target file in full. Understand:
- Is it a CSV, YAML, Markdown table, prose, or mix?
- What is the existing entry style?
- What fields need updating vs leaving alone?
## Step 3: Make a targeted edit
Match the existing format exactly. CSV rows stay as CSV rows. YAML
indentation preserved. Markdown tables get new rows, not reformatted
tables. Never reformat surrounding content.
## Step 4: Log the update in the daily note
Append a bullet under a `CISO Desktop Updates` heading in today's daily
note. Format:
```
- [file path]: [what changed] — [reason / source]
```
This creates the audit trail that end-of-day and end-of-week reflections
can surface.
## Step 5: Report back
Summarise exactly what changed, where, and why. Do NOT commit the change.
The user reviews and commits when ready.
## Rules
- No auto-commit. The user commits explicitly.
- If the new information contradicts existing data, flag it and ask before
overwriting. Never silently overwrite.
- Speed matters. No confirmation prompt for the basic edit path — execute
and report.
- Every push is logged in the daily note for audit.
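Step 4's audit-trail entry is a one-line append. A sketch using the line format specified above; the helper name and the `##` heading level are my assumptions.

```shell
# Append an audit entry under the CISO Desktop Updates heading of a daily note.
log_push() {
  note="$1"; path="$2"; change="$3"; reason="$4"
  # Add the heading once, the first time anything is pushed today.
  grep -q '^## CISO Desktop Updates' "$note" 2>/dev/null \
    || printf '\n## CISO Desktop Updates\n' >> "$note"
  printf '%s\n' "- $path: $change — $reason" >> "$note"
}
```

Each entry then surfaces naturally in the end-of-day and end-of-week reflections.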
### /remind — Tracked Reminders

The offload for "you told someone to do something and want to verify it got done" — a recurring frustration for any leader.
Skill file: .claude/commands/remind.md
# /remind
Create a tracked reminder in the vault.
## Usage
`/remind <natural language>`
Examples:
- `/remind <person> needs to handle <topic>, reminder sent today`
- `/remind <person> to send numbers by Friday`
- `/remind myself to follow up on <thing> by <date>`
## Step 1: Parse
Extract:
- Who (person name or "myself")
- What (the specific thing)
- When sent (default: today)
- When to chase (default: 3 business days from sent date, unless specified)
## Step 2: Append to the reminders file
Open `20-Areas/Team Management/Reminders.md`. Append a task in this format:
```
- [ ] [[<person>]] — <what>. Reminder sent <DD/MM/YY> #reminder [sent:: <YYYY-MM-DD>] [chase:: <YYYY-MM-DD>] 📅 <YYYY-MM-DD>
```
The 📅 emoji with the chase date ensures the reminder surfaces in any
"Tasks Due Today" query on that date.
## Step 3: Report back
One line: confirm the reminder was created, with who, what, and chase date.
## Rules
- No confirmation prompt. Quick-capture tool — parse, create, report.
- If the reminder is about a delegated deliverable (someone owes you
substantive work), note it but recommend adding to the Waiting For
tracker as well.
- If no chase date is specified, default to 3 business days (not calendar).
- Always wikilink the person's name — this builds the relationship graph
over time.
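The "3 business days" default in Step 1 is easy to get wrong by hand. A sketch of the date arithmetic, assuming GNU `date` (macOS `date` uses `-v` flags instead):

```shell
# N business days after a start date (defaults: 3 days, from today). GNU date.
chase_date() {
  days=${1:-3}
  d=${2:-$(date +%Y-%m-%d)}
  while [ "$days" -gt 0 ]; do
    d=$(date -d "$d + 1 day" +%Y-%m-%d)
    # %u is the ISO weekday, 1=Monday .. 7=Sunday; only count Mon-Fri.
    [ "$(date -d "$d" +%u)" -le 5 ] && days=$((days - 1))
  done
  printf '%s\n' "$d"
}
```

So a reminder sent on a Friday gets a chase date the following Wednesday, not over the weekend.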
These are shorter skills — each one is 30-60 lines rather than 200. The full versions are omitted here to keep the appendix readable; the pattern is the same as above.
/capture — sweep Gmail and Google Calendar (via MCP) for actionable items, present them for triage, and pull the confirmed ones into today’s daily note or the inbox folder. Uses the Gmail and Calendar MCP servers.
/inbox — process quick-capture notes from 00-System/Inbox/. Classify each by type (reference, project signal, action item, decision), file into the appropriate location, and link back to related existing notes.
/linker — scan recent notes for unlinked mentions. If a note mentions “cloud migration” and there is a [[Cloud Migration]] project card, add the link. Uses keyword search first, semantic search (Smart Connections MCP) second if available.
/ask-vault — natural-language query across the whole vault. Semantic search first (if Smart Connections is available), keyword fallback, synthesise an answer with citations to specific files.
/gitsync — commit and push across all reference repositories in one command. Useful for the reference data tier (the CISO Desktop) rather than the vault.
/mirror — push the current repository (or all repositories) to the self-hosted Forgejo mirror.
/preserve <topic> — this is not a skill file. It is a convention documented in CLAUDE.md telling Claude Code to append a specific insight or decision to the Persistent Insights section of CLAUDE.md for permanent retention. Recognised in conversation rather than executed as a slash command.
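For concreteness, here is what a `/preserve` append might produce in CLAUDE.md. The section heading comes from the convention above; the example insight is invented.

```markdown
## Persistent Insights
- 2026-04-12: vendor risk reviews go faster when the questionnaire is
  pre-filled from last year's answers already in the vault.
```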
The pattern across every skill above is the same: explicit context gathering, explicit workflow steps, explicit tone guidance, and explicit rules.
Claude Code’s /init command can generate a starter CLAUDE.md based on the directory it is run in. Start there, then add your own skills as you identify workflows that you run more than twice. The second time you do something manually and notice the pattern, build the skill.
A skill is not an attempt to replace your judgement. It is an attempt to make the boring parts of applying your judgement consistent, so that the interesting parts get your attention.
For readers who are more comfortable in boardrooms than terminals — plain-English definitions of the technical terms used in this manual.
| Term | What It Means |
|---|---|
| Claude Code | A terminal-based AI assistant made by Anthropic. Unlike the Claude website, it runs in your command line and can read, write, and search files on your computer directly. |
| Gemini CLI | Google’s equivalent terminal-based AI assistant. Works on the same files as Claude Code. Useful as a backup or for specific tasks where it performs better. |
| MCP / Model Context Protocol | A standard that lets AI assistants connect to external tools and data sources — like a plugin system. An MCP server for Gmail lets your AI read your email; one for your vault lets it query your notes structurally. |
| Git / version control | A system that tracks every change ever made to a file, with a complete history you can roll back. Like “Track Changes” for your entire file system. Essential for backup and audit trail. |
| Forgejo | A community-owned, open-source platform for hosting Git repositories on your own server. Used here for the personal vault — no corporate dependency, data stays on your infrastructure. Not appropriate for sensitive organisational security data. |
| GitHub Enterprise | Microsoft’s enterprise Git hosting service with SSO, MFA enforcement, audit logging, and access management. Used here for CISO Desktop repositories containing sensitive security programme data — the controls the data classification requires. |
| GitHub (personal) | Microsoft’s free/personal Git hosting service. Fine for open source and personal projects; evaluate carefully before storing sensitive professional data. |
| LXC container | A lightweight alternative to a virtual machine. It shares the host's kernel, so it uses far fewer resources than a full VM while still isolating the workload. Used here to run the server node without needing dedicated hardware. |
| Proxmox | An open-source platform for running virtual machines and containers on a server. Like a VMware home lab, but free. Used here to host the server node. |
| tmux | A terminal multiplexer — keeps your command-line sessions running after you disconnect. Like minimising a window rather than closing it. Essential for running Claude Code on a remote server and reconnecting later. |
| SSH | Secure Shell — an encrypted connection for remotely accessing another computer’s command line. How you connect from your phone or laptop to the server node. |
| Tailscale | A VPN service that connects your devices privately and securely without exposing any ports to the internet. Makes your home server accessible from anywhere as if it were on your local network. |
| Markdown | A plain-text formatting standard — `**bold**`, `# Heading`, `- bullet`. Everything in Obsidian is a Markdown file. Readable by any text editor, any AI, any tool. |
| Frontmatter / YAML | Structured metadata at the top of a Markdown file, between `---` markers. Contains fields like title, status, date, tags. Makes notes queryable — like the label on a file folder. |
| Dataview | An Obsidian plugin that queries your notes like a database. Write `WHERE type = "project" AND status = "active"` and get a live table. Renders in Obsidian but not on GitHub. |
| Wikilinks | Internal links between notes using [[double brackets]]. Click a name and go directly to that note. Obsidian updates them automatically when you move files. |
| PARA | Projects, Areas, Resources, Archive — a filing method for organising notes. The organising principle of the vault’s folder structure. |
| RAG | Retrieval Augmented Generation. Before answering, the AI retrieves relevant documents from your vault rather than reasoning from training data alone. Answers are grounded in your actual files. |
| Embeddings / semantic search | AI converts text into numbers that capture meaning. A search for “accountability without authority” finds notes about “responsibility with no mandate” — it understands meaning, not just matching words. |
| Context window | The amount of text an AI can hold in working memory at once. Older messages get dropped when the limit is hit. The vault-grounded approach minimises this problem — important information lives in files, not conversation history. |
| Systemd service | A background process on Linux that starts automatically on boot and restarts if it crashes. Used here to run the Obsidian Sync client continuously on the server node. |
| npm / npx | Node.js package managers used to install and run software tools — most MCP servers are installed this way. You don’t need to understand JavaScript to use them. npx <tool> downloads and runs a tool in one command. |
| VeraCrypt | Free, open-source disk encryption software. Creates an encrypted container that looks like an opaque file when locked — your vault becomes unreadable without the passphrase. |
| LUKS | Linux Unified Key Setup — the standard encryption layer for Linux disk encryption. The Linux equivalent of VeraCrypt for protecting data at rest. |
| Cron / cron job | A Linux scheduler that runs commands at set times. Used here for scheduled automation (e.g., automatic Git commits) without needing an always-on daemon. |
If this manual helped you, you can buy me a coffee — entirely optional, never expected. If you build something with it, I’d genuinely like to hear about it.