GitAgent claims your Git repo can be the universal AI agent definition. The question is whether "define once, export everywhere" actually holds up when each target framework handles orchestration, memory, and tool execution so differently.
The project, released as an open-source MIT-licensed spec and CLI tool, introduces a file-based standard for describing AI agents that decouples identity and capabilities from any specific runtime. You define an agent as a directory structure inside a Git repository with two files at minimum: agent.yaml (manifest with model provider, versioning, dependencies) and SOUL.md (personality, instructions, tone). Then you run gitagent export and get output formatted for Claude Code, OpenAI Agents SDK, CrewAI, LangChain, Google ADK, or several others. Eight adapters ship today.
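To make the two-file minimum concrete, here is a hypothetical agent.yaml sketch. The article names the manifest's concerns (model provider, versioning, dependencies) but not its exact schema, so every key and value below is illustrative, not the spec's actual field names:

```yaml
# Illustrative only -- the real agent.yaml schema is defined by the
# GitAgent spec; these keys are guesses at what the article describes.
name: research-assistant
version: 0.1.0
model:
  provider: anthropic        # "model provider" from the manifest description
dependencies:
  - web-search               # hypothetical skill/tool dependency
```

SOUL.md would then hold the personality and instructions as plain Markdown prose, and `gitagent export` (per the article) transforms the pair into framework-specific output.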
The pitch is "Docker for AI agents." Docker solved container portability across deployment environments. GitAgent wants to solve agent portability across orchestration frameworks. It is a useful analogy, but the boundaries of what actually ports deserve scrutiny.
What the spec actually defines
A GitAgent repository follows a prescribed folder structure. Beyond the two required files, optional directories handle skills (modular capability definitions), tools (MCP-compatible YAML schemas for external integrations), rules (hard constraints and safety boundaries), memory (human-readable Markdown files like dailylog.md and context.md), knowledge (reference documents), and workflows (deterministic multi-step YAML pipelines with dependency ordering).
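Putting the required files and optional directories together, the layout looks roughly like this (the top-level name is arbitrary, and the exact placement of the runtime memory files is inferred from the article's mention of memory/runtime/):

```
my-agent/
├── agent.yaml        # required: manifest
├── SOUL.md           # required: personality, instructions, tone
├── skills/           # modular capability definitions
├── tools/            # MCP-compatible YAML schemas
├── rules/            # hard constraints and safety boundaries
├── memory/
│   └── runtime/
│       ├── dailylog.md
│       └── context.md
├── knowledge/        # reference documents
└── workflows/        # deterministic multi-step YAML pipelines
```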
The memory approach is worth calling out specifically. Rather than storing agent state in vector databases or framework-proprietary formats, GitAgent keeps everything in Markdown files inside a memory/runtime/ subdirectory. This makes agent state readable with cat, diffable with git diff, and reversible with git revert. For anyone who has tried to debug why an agent started behaving differently after a memory update, the appeal is immediate.
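Because memory is just Markdown under version control, the debugging story reduces to ordinary Git commands. A minimal sketch, using a throwaway repo and illustrative memory content (the file path follows the article's memory/runtime/ convention):

```shell
# Sketch: inspect and roll back an agent's memory with plain Git.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "reviewer@example.com"
git config user.name "Reviewer"

# Baseline memory state, committed like any other file.
mkdir -p memory/runtime
echo "tone: concise" > memory/runtime/context.md
git add -A && git commit -qm "baseline agent memory"

# The agent (hypothetically) rewrites its own context at runtime.
echo "tone: verbose" > memory/runtime/context.md
git commit -qam "agent self-update"

# A human reads the behavioral change as a plain diff...
git diff HEAD~1 HEAD -- memory/runtime/context.md

# ...and reverses it with a one-line revert.
git revert --no-edit HEAD >/dev/null
cat memory/runtime/context.md   # prints: tone: concise
```

No vector store, no framework API: the entire audit trail is the commit history.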
Sub-agents get their own nested directories with their own agent.yaml and SOUL.md, enabling hierarchical composition. The project includes a worked example ported into the GitAgent format: NVIDIA's AIQ Deep Researcher, a three-agent hierarchy (orchestrator, planner, researcher) that produces cited research reports.
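A hierarchy like the Deep Researcher example would nest roughly as follows. The article confirms only that sub-agents live in nested directories with their own agent.yaml and SOUL.md; the `agents/` directory name and everything else here is a guess at the layout:

```
deep-researcher/
├── agent.yaml            # orchestrator manifest
├── SOUL.md
├── workflows/
└── agents/               # sub-agent parent dir: illustrative name
    ├── planner/
    │   ├── agent.yaml
    │   └── SOUL.md
    └── researcher/
        ├── agent.yaml
        └── SOUL.md
```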
Git as the supervision layer
The more interesting design decision is treating Git itself as the human-in-the-loop mechanism. When an agent updates its memory or acquires a new skill during runtime, the system can create a branch and open a pull request. A human reviewer sees the diff of exactly what changed in the agent's behavior or knowledge, approves or rejects it, and merges. If an agent drifts from its intended persona or starts producing problematic outputs, a revert rolls it back to a known-good state.
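The branch-per-update flow can be sketched with nothing but Git. The branch name and file contents below are illustrative, and actual pull-request creation (which needs a hosting provider) is replaced by a local branch review so the example stays self-contained:

```shell
# Sketch of the review flow: agent proposes on a branch, human inspects
# the diff, and here rejects the change by declining to merge.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "agent@example.com"
git config user.name "Agent"
base=$(git symbolic-ref --short HEAD)   # default branch, whatever git calls it

mkdir -p memory/runtime
echo "persona: helpful researcher" > memory/runtime/context.md
git add -A && git commit -qm "baseline"

# Agent proposes a change on its own branch instead of writing to the base:
git checkout -q -b agent/memory-update
echo "persona: aggressive marketer" > memory/runtime/context.md
git commit -qam "agent: update persona"

# Human reviewer inspects the behavioral diff...
git diff "$base"..agent/memory-update

# ...and rejects the drift by simply not merging:
git checkout -q "$base"
git branch -q -D agent/memory-update
cat memory/runtime/context.md   # prints: persona: helpful researcher
```

Approval would instead be `git merge agent/memory-update` (or a real PR merge); either way the supervision primitive is the diff, not a framework dashboard.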