# claude-code-config

> A Claude Code configuration suite that combines orchestrator rules, specialist agents, hook scripts, plugins, and CLI automation for reviews, releases, and multi-agent workflows.

---

Source: overview.md

# Overview

`claude-code-config` turns Claude Code into a more structured working environment. Instead of only changing a prompt, it installs a coordinated set of settings, hooks, rule files, specialist agent definitions, slash-command plugins, reusable skills, and a bundled CLI called `myk-claude-tools`.

In practice, that means Claude Code starts with stronger defaults: the main assistant is pushed toward delegation, Git mistakes are caught earlier, review workflows are standardized, and common jobs such as PR review, release creation, review-database analysis, and multi-agent prompting become repeatable commands.

> **Note:** The shipped `settings.json` expects these files to live under `~/.claude/`, including `~/.claude/scripts/...` and `~/.claude/statusline.sh`. If you install the repository somewhere else, update those paths to match.

## What This Repository Maintains

| Component | What it gives you |
|---|---|
| `settings.json` | Claude Code permissions, hooks, status line, environment flags, and enabled plugins |
| `scripts/` | Notification, startup checks, prompt injection, command enforcement, and Git protection |
| `rules/` | The orchestrator policy set: routing, review loop, slash-command behavior, and task workflow |
| `agents/` | Specialist personas for Python, Git, GitHub, docs, tests, Bash, Docker, Kubernetes, Java, Go, frontend, and more |
| `plugins/` | Slash commands for GitHub workflows, local review workflows, and `acpx` multi-agent prompting |
| `skills/` | Reusable workflows such as browser automation and AI-driven docs generation |
| `myk_claude_tools/` | A Python CLI that powers PR, release, review, database, and CodeRabbit operations |

## What Runs Inside Claude Code

The heart of the setup is `settings.json`.
It wires Claude Code's hook system to local scripts, narrows what tools can run directly, enables both first-party and official plugins, disables telemetry-related features, and turns on convenience flags such as always-thinking and auto-dream.

```37:85:/tmp/tmph3xeht2o/claude-code-config/settings.json
"PreToolUse": [
  {
    "matcher": "TodoWrite|Bash",
    "hooks": [
      { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" }
    ]
  },
  {
    "matcher": "Bash",
    "hooks": [
      { "type": "command", "command": "uv run ~/.claude/scripts/git-protection.py" }
    ]
  },
  // ... security prompt hook omitted ...
],
"UserPromptSubmit": [
  {
    "matcher": "",
    "hooks": [
      { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" }
    ]
  }
],
"SessionStart": [
  {
    "matcher": "",
    "hooks": [
      { "type": "command", "command": "~/.claude/scripts/session-start-check.sh", "timeout": 5000 }
    ]
  }
]
```

Those hooks do different jobs:

- `session-start-check.sh` verifies required tools and plugins such as `uv`, `gh`, `jq`, `gawk`, `prek`, `mcpl`, and the critical review plugins.
- `rule-injector.py` adds a short manager-style reminder to each prompt so the main assistant keeps delegating instead of doing all work directly.
- `rule-enforcer.py` blocks raw `python`, `python3`, `pip`, `pip3`, and `pre-commit` commands so the environment consistently uses `uv`, `uvx`, and `prek`.
- `git-protection.py` blocks commits or pushes on protected branches and on branches that are already merged.
- `my-notifier.sh` can surface Claude notifications through the desktop.

The setup also adds a custom status line. It builds a compact view from the current directory, SSH host, Git branch, virtual environment, model name, context-window usage, and line-change totals, so you can see the current working state at a glance.

`settings.json` also pre-enables a wider plugin set beyond this repository's own plugins, including official integrations such as `github`, `code-review`, `frontend-design`, `security-guidance`, `pyright-lsp`, `jdtls-lsp`, and `gopls-lsp`. At the environment level, it sets `DISABLE_TELEMETRY`, `DISABLE_ERROR_REPORTING`, and `CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY`.

> **Warning:** This configuration is intentionally opinionated. Common shortcuts such as raw `python`, `pip`, `pre-commit`, or direct commits to `main` can be blocked until you use the preferred workflow.

## Rules and Specialist Agents

This repository separates orchestration from execution. The rule files in `rules/` describe how the main assistant should behave, while the files in `agents/` define the specialists that do the actual work.

A routing excerpt from the rule set shows the idea:

```5:25:/tmp/tmph3xeht2o/claude-code-config/rules/10-agent-routing.md
| Domain/Tool | Agent |
|---|---|
| **Languages (by file type)** | |
| Python (.py) | `python-expert` |
| Go (.go) | `go-expert` |
| Frontend (JS/TS/React/Vue/Angular) | `frontend-expert` |
| Java (.java) | `java-expert` |
| Shell scripts (.sh) | `bash-expert` |
| Markdown (.md) | `technical-documentation-writer` |
| **Infrastructure** | |
| Docker | `docker-expert` |
| Kubernetes/OpenShift | `kubernetes-expert` |
| Jenkins/CI/Groovy | `jenkins-expert` |
| **Development** | |
| Git operations (local) | `git-expert` |
| GitHub (PRs, issues, releases, workflows) | `github-expert` |
| Tests | `test-automator` |
| Debugging | `debugger` |
| API docs | `api-documenter` |
| Claude Code docs (features, hooks, settings, commands, MCP, IDE, Agent SDK, API) | `claude-code-guide` (built-in) |
| External library/framework docs (React, FastAPI, Django, etc.) | `docs-fetcher` |
```

That rule library also defines two important workflow policies:

- A mandatory three-reviewer loop using `superpowers:code-reviewer`, `pr-review-toolkit:code-reviewer`, and `feature-dev:code-reviewer` before testing and completion.
- Special handling for slash commands: the orchestrator runs the slash command workflow directly, rather than delegating the entire command away.

The `agents/` directory then supplies the actual specialist prompts. That includes language specialists like `python-expert`, infra specialists like `docker-expert` and `kubernetes-expert`, workflow specialists like `git-expert` and `github-expert`, and support roles like `technical-documentation-writer`, `test-automator`, and `test-runner`.

> **Note:** The runtime hook injects a short prompt reminder from `rule-injector.py`. The broader policy library still lives in `rules/`, where you maintain the project’s orchestrator behavior.

## Plugins and Slash Commands

The repository ships three first-party plugins through its marketplace manifest:

- `myk-github` for GitHub workflows such as PR review, review handling, release creation, and CodeRabbit rate-limit recovery.
- `myk-review` for local diff review and review-database analytics.
- `myk-acpx` for sending prompts to other ACP-compatible coding agents through `acpx`.

Here are real command examples from the plugin definitions.

`myk-github` adds PR review entry points:

```36:40:/tmp/tmph3xeht2o/claude-code-config/plugins/myk-github/commands/pr-review.md
- `/myk-github:pr-review` - Review PR from current branch (auto-detect)
- `/myk-github:pr-review 123` - Review PR #123 in current repo
- `/myk-github:pr-review https://github.com/owner/repo/pull/123` - Review from URL
```

`myk-review` exposes review-database queries for patterns and analytics:

```32:37:/tmp/tmph3xeht2o/claude-code-config/plugins/myk-review/commands/query-db.md
/myk-review:query-db stats --by-source    # Stats by source
/myk-review:query-db stats --by-reviewer  # Stats by reviewer
/myk-review:query-db patterns --min 2     # Find duplicate patterns
/myk-review:query-db dismissed --owner X --repo Y
/myk-review:query-db query "SELECT * FROM comments WHERE status='skipped' LIMIT 10"
/myk-review:query-db find-similar < comments.json  # Find similar dismissed comments
```

`myk-acpx` lets Claude hand work to other coding agents through `acpx`:

```32:41:/tmp/tmph3xeht2o/claude-code-config/plugins/myk-acpx/commands/prompt.md
- `/myk-acpx:prompt codex fix the tests`
- `/myk-acpx:prompt cursor review this code`
- `/myk-acpx:prompt gemini explain this function`
- `/myk-acpx:prompt codex --exec summarize this repo`
- `/myk-acpx:prompt codex --model o3-pro review the architecture`
- `/myk-acpx:prompt codex --fix fix the code quality issues`
- `/myk-acpx:prompt gemini --peer review this code`
- `/myk-acpx:prompt codex --peer --model o3-pro review the architecture`
- `/myk-acpx:prompt cursor,codex review this code`
- `/myk-acpx:prompt cursor,gemini,codex --peer review the architecture`
```

If you use this repository as a plugin source, these commands become a big part of the day-to-day workflow: review code locally, review PRs on GitHub, refine pending comments, query review history, or hand a prompt to another coding agent without rebuilding the workflow every time.

## Bundled CLI: `myk-claude-tools`

The plugins are backed by a Python CLI bundled in the same repository. In `pyproject.toml`, it is packaged as `myk-claude-tools`, currently version `1.7.2`, with a minimum Python version of `3.10`.

Its top-level command groups are defined here:

```12:22:/tmp/tmph3xeht2o/claude-code-config/myk_claude_tools/cli.py
@click.group()
@click.version_option()
def cli() -> None:
    """CLI utilities for Claude Code plugins."""


cli.add_command(coderabbit_commands.coderabbit, name="coderabbit")
cli.add_command(db_commands.db, name="db")
cli.add_command(pr_commands.pr, name="pr")
cli.add_command(release_commands.release, name="release")
cli.add_command(reviews_commands.reviews, name="reviews")
```

Those groups map to concrete workflow areas:

- `pr` fetches PR diffs and metadata, fetches repository `CLAUDE.md` content, and posts inline PR comments.
- `release` validates release state, creates releases, detects version files, and bumps versions.
- `reviews` fetches, updates, posts, and stores review-thread data.
- `db` queries the local review SQLite database for stats, patterns, dismissed comments, and similar historical comments.
- `coderabbit` checks rate-limit state and re-triggers reviews after cooldown windows.

The release tooling is broader than a single language stack. The version-detection code supports `pyproject.toml`, `package.json`, `setup.cfg`, `Cargo.toml`, `build.gradle`, `build.gradle.kts`, and Python files that define `__version__`.

The database tooling also stays intentionally safe: its default path is `~/.claude/data/reviews.db`, and raw SQL is restricted to read-only `SELECT` and `WITH` queries.

> **Tip:** You can use `myk-claude-tools` directly, not just through slash commands. That makes it useful for scripting, debugging, or plugging the same workflows into your own automation.

## Skills

The repository also bundles reusable skills in `skills/`.
These are smaller, task-focused workflow packages rather than full plugins or specialist agents. Today the bundled skills are:

- `agent-browser` for browser automation, page inspection, form filling, screenshots, and testing.
- `docsfy-generate-docs` for running a `docsfy`-based documentation generation workflow.

The `agent-browser` skill shows the style clearly:

```16:20:/tmp/tmph3xeht2o/claude-code-config/skills/agent-browser/SKILL.md
agent-browser open             # Navigate to page
agent-browser snapshot -i      # Get interactive elements with refs
agent-browser click @e1        # Click element by ref
agent-browser fill @e2 "text"  # Fill input by ref
agent-browser close            # Close browser
```

Skills are useful when you want a repeatable workflow but do not need a full plugin or a permanent rule change.

## Testing and Quality Checks

This repository treats its configuration as real software, not just a folder of prompts. The `tests/` suite covers hook behavior, Git protection, review-database queries, version detection and bumping, CodeRabbit handling, review storage, and other CLI flows. Local automation is defined in `tox.toml`, `pyproject.toml`, and `.pre-commit-config.yaml`, which together wire up `pytest`, Ruff, mypy, Flake8, markdownlint, secret scanning, and related checks.

That matters as an end user because it means the setup is maintained with the same habits you would expect from an application: it is versioned, tested, linted, and designed to fail safely when the environment is missing required tools.

Taken together, this repository gives Claude Code a full operating model: stricter settings, active safeguards, explicit routing rules, specialized agents, command-driven plugins, reusable skills, and a CLI that handles the mechanical parts of review and release workflows. If you want Claude Code to behave less like a blank slate and more like a structured engineering assistant, this is the layer that makes that happen.

---

Source: architecture.md

# Architecture

`claude-code-config` is built around one core separation: the main Claude instance orchestrates, and specialist agents execute. The orchestrator reads the request, chooses the workflow, routes work to the right expert, and keeps the overall process moving. Specialists handle the hands-on work in focused domains such as Python, Git, frontend, testing, or documentation.

That split is what makes the repository predictable. It keeps the top-level conversation focused, makes parallel work practical, and gives the project a clear place to enforce safety rules, task tracking, and the required review loop.

## At A Glance

| Layer | Purpose |
| --- | --- |
| Orchestrator | Read the request, ask questions, plan, route work, and run slash commands |
| Specialist agents | Do the actual implementation work in their domain |
| Slash commands | Run end-to-end workflows that can temporarily override normal delegation |
| Hooks and permissions | Enforce safety rules around Bash, Git, and tool usage |
| Tasks | Track long, multi-step workflows across phases and sessions |
| Review loop | Require parallel review and testing after code changes |

## Orchestrator vs Specialist

The orchestrator is intentionally narrow. It is supposed to coordinate, not do hands-on implementation. Specialists are the execution layer.

```md
> **If you are a SPECIALIST AGENT** (python-expert, git-expert, etc.):
> IGNORE all rules below. Do your work directly using Edit/Write/Bash.
> These rules are for the ORCHESTRATOR only.

❌ **NEVER** use: Edit, Write, NotebookEdit, Bash (except `mcpl`), direct MCP calls
❌ **NEVER** delegate slash commands (`/command`) OR their internal operations - see slash command rules
✅ **ALWAYS** delegate other work to specialist agents
```

In day-to-day use, that means the orchestrator should do things like:

- read files and gather context
- ask clarifying questions
- analyze and plan
- choose the right specialist
- coordinate multi-step workflows
- execute slash commands directly

Work is routed by intent, not just by the tool involved. A few examples from `rules/10-agent-routing.md`:

- Python work goes to `python-expert`
- local Git work goes to `git-expert`
- GitHub PRs, issues, and releases go to `github-expert`
- Markdown work goes to `technical-documentation-writer`
- tests go to `test-automator`

The same routing file also distinguishes between built-in agents such as `claude-code-guide` and `general-purpose`, and custom agents defined in `agents/`.

> **Note:** The split is deliberate. The orchestrator keeps the big picture. Specialists keep the domain-specific context.

## Slash Commands Run In A Different Mode

Slash commands are the biggest exception to the normal delegation model. In normal mode, the orchestrator should delegate implementation work. During a slash command, the orchestrator runs the command itself and follows that command's own instructions from start to finish.

```md
1. **EXECUTE IT DIRECTLY YOURSELF** - NEVER delegate to any agent
2. **ALL internal operations run DIRECTLY** - scripts, bash commands, everything
3. **Slash command prompt takes FULL CONTROL** - its instructions override general CLAUDE.md rules
4. **General delegation rules are SUSPENDED** for the duration of the slash command
```

> **Warning:** Slash commands are not just shortcuts. They temporarily change how execution works.

You can see that direct-execution model in the command definitions themselves.
For example, `plugins/myk-github/commands/review-handler.md` explicitly declares the tools it may use:

```md
---
description: Process ALL review sources (human, Qodo, CodeRabbit) from current PR
argument-hint: [--autorabbit] [REVIEW_URL]
allowed-tools: Bash(myk-claude-tools:*), Bash(uv:*), Bash(git:*), Bash(gh:*), AskUserQuestion, Task, Agent
---
```

That is why commands such as `/myk-github:review-handler` and `/myk-github:pr-review` can directly use `git`, `gh`, `uv`, `Task`, and `myk-claude-tools` while they run. The command definition becomes the active workflow.

Several slash commands are powered by the local `myk-claude-tools` package. In `myk_claude_tools/cli.py`, the CLI registers `coderabbit`, `db`, `pr`, `release`, and `reviews` command groups, which the plugin commands then call.

> **Tip:** Temporary working files belong in `/tmp/claude`. That rule appears in `rules/40-critical-rules.md`, and the CLI commands follow the same pattern when they write JSON artifacts.

## Enforcement Is Layered

The project does not rely on a single enforcement mechanism. Instead, it uses three layers together:

1. human-readable rules in `rules/`
2. runtime configuration in `settings.json`
3. hook scripts in `scripts/`

### Prompt-Time Guidance

On every user prompt, `scripts/rule-injector.py` injects a short reminder that keeps the orchestrator in manager mode:

```python
rule_reminder = (
    "[SYSTEM RULES] You are a MANAGER. NEVER do work directly. ALWAYS delegate:\n"
    "- Edit/Write → language specialists (python-expert, go-expert, etc.)\n"
    "- ALL Bash commands → bash-expert or appropriate specialist\n"
    "- Git commands → git-expert\n"
    "- MCP tools → manager agents\n"
    "- Multi-file exploration → Explore agent\n"
    "HOOKS WILL BLOCK VIOLATIONS."
)
```

This injected reminder is short on purpose. It reinforces the architecture at runtime without replacing the fuller policy described in `rules/`.
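
The full script does not need to be much bigger than the reminder itself: for a `UserPromptSubmit` hook, whatever the script prints to stdout is added to the model's context. The sketch below shows that overall shape under those assumptions; it is an illustration with an abbreviated reminder string, not the repository's exact `rule-injector.py`:

```python
#!/usr/bin/env python3
"""Illustrative shape of a UserPromptSubmit hook (not the exact rule-injector.py)."""
import json
import sys

# Abbreviated stand-in for the repository's full reminder text.
RULE_REMINDER = (
    "[SYSTEM RULES] You are a MANAGER. NEVER do work directly. "
    "ALWAYS delegate. HOOKS WILL BLOCK VIOLATIONS."
)


def build_context(event: dict) -> str:
    """The reminder is static; the submitted prompt in `event` is not inspected."""
    return RULE_REMINDER


def main() -> None:
    event = json.load(sys.stdin)  # the hook receives the prompt event as JSON
    # For UserPromptSubmit, printed output becomes extra context for the model.
    print(build_context(event))
```
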

### Hook Registration And Allowlists

`settings.json` is where the runtime guardrails are wired together. It defines tool allowlists and registers the hooks that run before tools and prompts.

```json
"PreToolUse": [
  {
    "matcher": "TodoWrite|Bash",
    "hooks": [
      { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" }
    ]
  },
  {
    "matcher": "Bash",
    "hooks": [
      { "type": "command", "command": "uv run ~/.claude/scripts/git-protection.py" }
    ]
  }
],
"UserPromptSubmit": [
  {
    "matcher": "",
    "hooks": [
      { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" }
    ]
  }
]
```

The same file also scopes direct `Read`, `Edit`, and `Write` permissions to `/tmp/claude/**` and allowlists specific command prefixes such as `mcpl`, `git -C`, `myk-claude-tools`, and the hook scripts themselves. It also includes a prompt-based Bash safety gate that asks for confirmation or blocks clearly destructive OS-level commands.

> **Tip:** `settings.json` includes a maintenance note saying hook scripts must be listed in both `permissions.allow` and `allowedTools`. If you add a new script-based hook, update both places.

### Bash Policy Enforcement

The most concrete checked-in shell enforcement lives in `scripts/rule-enforcer.py`. It blocks direct Python and `pip` usage, allows `uv` and `uvx`, and blocks raw `pre-commit` so the wrapper tool is used instead.

```python
# Allow uv/uvx commands
if cmd.startswith(("uv ", "uvx ")):
    return False

# Block direct python/pip
forbidden = ("python ", "python3 ", "pip ", "pip3 ")
return cmd.startswith(forbidden)
```

When it blocks a command, the script tells the user what to do instead:

- use `uv run` for Python execution
- use `uvx` for package CLIs
- use `prek` instead of raw `pre-commit`

The behavior is backed by `tests/test_rule_enforcer.py`, which includes explicit cases for allowed `uv run`, allowed `uvx`, allowed `prek`, and denied direct `python`, `pip`, and `pre-commit` commands.
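
Pulled out into a standalone predicate, that logic behaves as follows. This is a simplified sketch, not the real script: the `prek`/`pre-commit` handling is added here based on the prose above, and the real `rule-enforcer.py` wraps this decision in the hook's JSON input/output handling:

```python
def is_forbidden_command(cmd: str) -> bool:
    """Simplified sketch of the rule-enforcer predicate (see scripts/rule-enforcer.py)."""
    cmd = cmd.strip()
    # Allow the preferred wrappers.
    if cmd.startswith(("uv ", "uvx ", "prek ")):
        return False
    # Block direct interpreter/package-manager calls and raw pre-commit.
    forbidden = ("python ", "python3 ", "pip ", "pip3 ", "pre-commit ")
    return cmd.startswith(forbidden)


# Mirrors the cases exercised in tests/test_rule_enforcer.py:
assert not is_forbidden_command("uv run script.py")
assert not is_forbidden_command("uvx ruff check .")
assert not is_forbidden_command("prek run --all-files")
assert is_forbidden_command("python script.py")
assert is_forbidden_command("pip install requests")
assert is_forbidden_command("pre-commit run")
```
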

### Git Protection

`scripts/git-protection.py` adds a stricter safety layer for version-control operations. It blocks `git commit` or `git push` when the branch is unsafe to use, including cases like:

- committing directly on `main` or `master`
- committing on a branch whose PR is already merged
- committing on a branch already merged into the main branch
- committing in detached HEAD state

The companion test suite in `tests/test_git_protection.py` covers those cases, along with GitHub PR detection via `gh`, amend behavior on unpushed commits, and edge cases around command parsing.

### Session Start Validation

Before work begins, `scripts/session-start-check.sh` checks whether critical tools are installed. Two items stand out:

- `uv` is treated as critical because the hook system depends on it
- the three review plugins are treated as critical because the review loop depends on them

The script makes that requirement explicit:

```bash
critical_marketplace_plugins=(
  pr-review-toolkit
  superpowers
  feature-dev
)
```

> **Warning:** `pr-review-toolkit`, `superpowers`, and `feature-dev` are not optional if you want the full architecture to work as designed. They are required for the mandatory review loop.

## Task Tracking Is For Real Workflows

The task system is not meant for every small action. It is there for workflows that are long enough, staged enough, or interruption-prone enough that progress should survive beyond the current turn.

`rules/25-task-system.md` defines tasks as persistent data on disk:

```md
Tasks are saved to disk and persist across sessions:
- **Location:** `~/.claude/tasks//`
- **Format:** Each task is a JSON file (1.json, 2.json, etc.)
- **Contents:** Full task structure including subject, description, status, blockedBy
```

The same rule file also shows the expected shape of a task:

```text
TaskCreate:
- subject: "Run tests with coverage"
- activeForm: "Running tests"
- description: "Execute test suite and verify coverage thresholds"
```

This is an orchestrator feature, not an agent feature. The rules explicitly say tasks are useful for:

- complex work that might be interrupted
- multi-step workflows with visible progress
- approval gates
- slash commands with dependent phases

They also explicitly say tasks are **not** for:

- simple one-off actions
- agent work
- trivial fixes
- internal processing steps

> **Tip:** If a workflow has phases, dependencies, or a chance of waiting on the user, create tasks and connect them with `blockedBy`. If it is simple and immediate, skip the task system.

One more detail matters in practice: tasks must be cleaned up. The rules require a final `TaskList` and `TaskUpdate` pass so stale tasks do not linger between sessions. `plugins/myk-github/commands/review-handler.md` is called out as the reference example for a multi-phase task-aware workflow.

## The Review Loop Is Mandatory

This repository treats code review as part of the architecture, not as an optional follow-up step. After any code change, the required path is:

1. a specialist makes the change
2. three review agents run in parallel
3. findings are merged and deduplicated
4. any issues are fixed and re-reviewed
5. `test-automator` runs
6. only then is the work considered done

The core rule is written directly in `rules/20-code-review-loop.md`:

```text
1. Specialist writes/fixes code
2. Send to ALL 3 review agents IN PARALLEL:
   - `superpowers:code-reviewer`
   - `pr-review-toolkit:code-reviewer`
   - `feature-dev:code-reviewer`
3. Merge findings from all 3 reviewers
4. Has comments from ANY reviewer? ──YES──→ Fix code
5. Run `test-automator`
6. Tests pass?
   ──NO──→ Fix code
```

The same file also requires that all three reviewers be launched in the **same assistant turn**, not one after another.

Those reviewers have distinct responsibilities:

- `superpowers:code-reviewer` focuses on general code quality and maintainability
- `pr-review-toolkit:code-reviewer` checks project guidelines and style adherence
- `feature-dev:code-reviewer` focuses on bugs, logic errors, and security issues

This is not just a theory document. Plugin commands repeat the same architecture. For example, `plugins/myk-review/commands/local.md` and `plugins/myk-github/commands/pr-review.md` both state that the three reviewers must be invoked in parallel.

> **Warning:** The review loop is designed to repeat until reviewers are satisfied. A failing review does not end the workflow; it sends the work back around the loop.

### Testing And Local Quality Gates

The checked-in repository does not include a GitHub Actions workflow under `.github/workflows`. Instead, its quality gates are defined locally through hook scripts, `tox`, and pre-commit configuration.

`tox.toml` shows the expected unit-test entrypoint:

```toml
[env.unittests]
description = "Run pytest tests"
deps = ["uv"]
commands = [["uv", "run", "--group", "tests", "pytest", "tests"]]
```

`.pre-commit-config.yaml` adds additional local checks, including Ruff, Flake8, mypy, detect-secrets, gitleaks, and markdown linting.

> **Note:** In this repository, quality assurance is mostly enforced through local automation and command workflows rather than a checked-in CI pipeline definition.

## How The Pieces Fit Together

A typical end-to-end flow looks like this:

1. The orchestrator reads the request and decides whether it is normal delegated work or a slash-command workflow.
2. For normal work, it routes by intent to the best specialist.
3. For slash commands, it runs the command directly and follows that command's own rules.
4. Hook scripts and allowlists keep Bash, Git, and prompt behavior inside safe boundaries.
5. If the workflow is long or multi-phase, the orchestrator creates persistent tasks.
6. Any code changes must pass the three-reviewer loop and automated tests before the workflow is considered complete.

That combination is what gives `claude-code-config` its character: the orchestrator stays focused on coordination, specialists stay focused on execution, slash commands can run practical end-to-end workflows, and review plus testing are treated as part of the system design rather than an afterthought.

---

Source: repository-structure.md

# Repository Structure

This repository is organized around one practical goal: turning a Claude Code setup into a reusable, versioned workspace with specialist agents, policy files, hook scripts, installable plugins, reusable skills, and a companion Python CLI.

The easiest way to understand it is to treat the repository as a control plane plus execution layers. `settings.json` turns features on, `scripts/` implements runtime behavior, `agents/` and `rules/` describe how work should be routed, `plugins/` exposes slash commands, and `myk_claude_tools/` provides deterministic CLI commands that those workflows can call.

> **Note:** The live paths in `settings.json` point at `~/.claude/...`, so this repository is meant to be copied or symlinked into a Claude home directory. The checkout is the source of truth; the installed `~/.claude` path is what Claude Code executes.

## At a glance

| Path | What it contains | Why it matters |
| --- | --- | --- |
| `settings.json` and root config files | Hooks, permissions, plugin enablement, packaging, linting, review-tool config | These files wire the rest of the repository together |
| `agents/` | Custom specialist agent definitions | This is where the project’s reusable expert personas live |
| `rules/` | Long-form orchestration and workflow policy | This directory explains how work should be routed and reviewed |
| `scripts/` | Hook implementations and shell helpers | These files enforce and support runtime behavior |
| `plugins/` | Installable slash-command plugins | These become user-invokable commands inside Claude Code |
| `skills/` | Reusable tool-specific playbooks | These capture procedural workflows without becoming plugins or agents |
| `myk_claude_tools/` | Python package and CLI | This is the deterministic execution engine behind many workflows |
| `tests/` | Pytest coverage for hooks and CLI modules | This is where behavior is verified |
| `docs/plans/` | Design and planning notes | Useful background, but not part of the runtime |
| `state/` | Runtime snapshot storage | Keeps local state out of version control |

```text
settings.json
├── hooks ----------------------> scripts/
├── enabledPlugins -------------> plugins/
├── statusLine -----------------> statusline.sh
└── permissions/allowedTools --> what Claude can invoke

agents/ + rules/ ---------------> delegation and workflow behavior
plugins/*/commands/*.md --------> myk_claude_tools/
tests/ + tox + pre-commit ------> verification
```

## Root configuration

If you only read one file first, read `settings.json`. It is the switchboard for the entire repository: it registers hooks, enables both official and repo-local plugins, defines the status line, and limits which tools Claude Code may use.

```25:55:settings.json
"hooks": {
  "Notification": [
    {
      "matcher": "",
      "hooks": [
        { "type": "command", "command": "~/.claude/scripts/my-notifier.sh" }
      ]
    }
  ],
  "PreToolUse": [
    {
      "matcher": "TodoWrite|Bash",
      "hooks": [
        { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" }
      ]
    },
    {
      "matcher": "Bash",
      "hooks": [
        { "type": "command", "command": "uv run ~/.claude/scripts/git-protection.py" }
      ]
    },
```

Beyond `settings.json`, the root also carries the rest of the project-wide configuration:

- `pyproject.toml` packages `myk-claude-tools`, defines Python requirements, and centralizes Ruff and mypy settings.
- `tox.toml` defines the standard local test entrypoint.
- `.pre-commit-config.yaml` runs formatting, linting, secret scanning, and type checks.
- `.coderabbit.yaml` and `.pr_agent.toml` configure external review tooling.
- `.claude-plugin/marketplace.json` publishes the local plugins.
- `statusline.sh` builds the Claude Code status line from JSON input.

> **Tip:** When you are tracing behavior, follow the chain from `settings.json` to a hook or plugin name, then to the implementation file in `scripts/`, `plugins/`, or `myk_claude_tools/`.

This repository also uses a strict whitelist-style `.gitignore`. New files are not tracked automatically.

> **Warning:** Adding a new agent, rule, script, plugin file, skill, test, or Python module is a two-step change: create the file, then explicitly unignore it in `.gitignore`.

```1:18:.gitignore
# Ignore everything by default
# This config integrates into ~/.claude so we must be explicit about what we track
*

# Core config files
!.coderabbit.yaml
!.gitignore
!.markdownlint.yaml
!LICENSE
!AI_REVIEW.md
!README.md
!settings.json
!statusline.sh
!uv.lock

# agents/
!agents/
!agents/00-base-rules.md
```

## `agents/`: specialist prompts

The `agents/` directory holds the project’s custom specialist definitions.
These are not executable programs; they are reusable prompts with tool permissions and working rules for specific domains such as Python, Bash, Git, Docker, Kubernetes, documentation, testing, and debugging. Every agent inherits the shared guidance in `agents/00-base-rules.md`, then adds its own domain-specific instructions.

A typical agent file starts with YAML frontmatter that declares its name, description, tools, and optional skills.

```1:5:agents/python-expert.md
---
name: python-expert
description: MUST BE USED for Python code creation, modification, refactoring, and fixes. Specializes in idiomatic Python, async/await, testing, and modern Python development.
tools: Read, Write, Edit, Bash, Glob, Grep, LSP
skills: [test-driven-development]
```

A few useful patterns to know:

- `agents/00-base-rules.md` is the shared baseline for all custom agents.
- Most agent files focus on one domain, such as `python-expert.md`, `bash-expert.md`, `git-expert.md`, or `technical-documentation-writer.md`.
- Some agents add runtime details in frontmatter. For example, `git-expert.md` declares a `PreToolUse` hook that points at `git-protection.py`.

> **Note:** Not every routed agent has a file in `agents/`. `rules/10-agent-routing.md` also references built-in agents such as `general-purpose` and `claude-code-guide`, which are provided by Claude Code itself rather than stored in this repository.

## `rules/`: workflow and policy library

The `rules/` directory is the human-readable policy layer for the project. These files explain how the main Claude session should behave, what to delegate, when to create issues, how to run reviews, how to use tasks, how slash commands differ from normal requests, and how to use MCP. The numbered filenames make the policy set easy to scan and keep in a stable order.

In practice, each file has a distinct responsibility:

- `00-orchestrator-core.md` defines the baseline manager/delegation model.
- `05-issue-first-workflow.md` explains when to create an issue before doing implementation work. - `10-agent-routing.md` maps work types to specialists. - `15-mcp-launchpad.md` documents `mcpl` discovery and execution. - `20-code-review-loop.md` defines the mandatory review cycle. - `25-task-system.md` explains when persisted tasks add value. - `30-slash-commands.md` defines the direct-execution rules for slash commands. - `40-critical-rules.md` covers global guardrails such as parallelism, temp files, and `uv`. - `50-agent-bug-reporting.md` explains how to report bugs in custom agent definitions. > **Note:** The full policy text lives in `rules/`. The current `scripts/rule-injector.py` adds a short manager reminder at prompt-submit time, so if you need the detailed workflow rules, read this directory directly. ## `scripts/`: hook implementations and helpers `scripts/` is the executable layer for runtime behavior. Where `rules/` tells you what should happen, `scripts/` tells you what Claude Code actually runs. The main files are: - `rule-injector.py`: injects a short system reminder during `UserPromptSubmit`. - `rule-enforcer.py`: blocks direct Bash use of `python`, `python3`, `pip`, `pip3`, and `pre-commit`, steering users toward `uv`, `uvx`, and `prek`. - `git-protection.py`: blocks commits and pushes on protected or already-merged branches. - `session-start-check.sh`: checks for required tools and required review plugins at session start. - `my-notifier.sh`: sends desktop notifications. - `statusline.sh`: builds the Claude status line from runtime JSON. 
Here is the core of `rule-enforcer.py`, which shows that the enforcement logic is implemented as a hook script rather than only as prose: ```35:50:scripts/rule-enforcer.py # Block direct python/pip commands if tool_name == "Bash": command = tool_input.get("command", "") if is_forbidden_python_command(command): output = { "hookSpecificOutput": { "hookEventName": "PreToolUse", "permissionDecision": "deny", "permissionDecisionReason": "Direct python/pip commands are forbidden.", "additionalContext": ( "You attempted to run python/pip directly. Instead:\n" "1. Delegate Python tasks to the python-expert agent\n" "2. Use 'uv run script.py' to run Python scripts\n" "3. Use 'uvx package-name' to run package CLIs\n" "See: https://docs.astral.sh/uv/" ``` This directory is also where the repo’s safety behavior lives. If you want to understand why a Bash command, commit, or push was allowed or denied, this is where to look first. ## `plugins/`: installable slash commands The `plugins/` directory packages user-facing slash commands for Claude Code. Each plugin has its own folder and typically includes: - `.claude-plugin/plugin.json` for plugin metadata - `commands/*.md` for individual slash-command definitions - `README.md` for plugin-level help This repository currently ships three local plugins: - `plugins/myk-github/` for GitHub review, release, and CodeRabbit workflows - `plugins/myk-review/` for local review flows and database queries - `plugins/myk-acpx/` for multi-agent prompting through `acpx` A command definition is a Markdown file with frontmatter plus step-by-step instructions. The frontmatter is what declares the command’s description, arguments, and allowed tools. 
```1:5:plugins/myk-github/commands/pr-review.md --- description: Review a GitHub PR and post inline comments on selected findings argument-hint: [PR_NUMBER|PR_URL] allowed-tools: Bash(myk-claude-tools:*), Bash(uv:*), Bash(git:*), Bash(gh:*), AskUserQuestion, Task --- ``` That design is important because it shows how the repository divides responsibilities: - The plugin command file defines the user workflow. - The command is allowed to call tools such as `gh`, `git`, `uv`, or `myk-claude-tools`. - The heavy lifting is often delegated to the Python package in `myk_claude_tools/`. The root `.claude-plugin/marketplace.json` file ties those plugin folders together into a publishable marketplace listing. ## `skills/`: reusable playbooks The `skills/` directory is separate from both `agents/` and `plugins/`. A skill is a reusable, tool-focused playbook: it does not create a slash command, and it does not become a specialist persona. Instead, it gives Claude a repeatable procedure for using a specific tool or workflow well. This repository currently includes: - `skills/agent-browser/SKILL.md` - `skills/docsfy-generate-docs/SKILL.md` For example, the `agent-browser` skill is a compact operational guide for browser automation: ```15:20:skills/agent-browser/SKILL.md agent-browser open # Navigate to page agent-browser snapshot -i # Get interactive elements with refs agent-browser click @e1 # Click element by ref agent-browser fill @e2 "text" # Fill input by ref agent-browser close # Close browser ``` > **Tip:** Use a skill when the work is mostly procedural and tool-driven. Use an agent when you need a domain specialist. Use a plugin when you want a slash command users can invoke directly. ## `myk_claude_tools/`: deterministic CLI engine The `myk_claude_tools/` package is the repository’s executable backend for deterministic operations. It is published as the `myk-claude-tools` console command, and many plugin workflows depend on it. 
The top-level CLI is intentionally small: it just assembles the package’s subcommand groups. ```12:22:myk_claude_tools/cli.py @click.group() @click.version_option() def cli() -> None: """CLI utilities for Claude Code plugins.""" cli.add_command(coderabbit_commands.coderabbit, name="coderabbit") cli.add_command(db_commands.db, name="db") cli.add_command(pr_commands.pr, name="pr") cli.add_command(release_commands.release, name="release") cli.add_command(reviews_commands.reviews, name="reviews") ``` Those subpackages divide the work cleanly: - `coderabbit/` handles CodeRabbit rate-limit detection and retriggering. - `db/` provides read-only analytics and query access to the review database. - `pr/` fetches PR diffs, retrieves `CLAUDE.md`, and posts inline comments. - `release/` detects version files, bumps versions, and creates releases. - `reviews/` fetches, posts, updates, and stores review-thread data. A few details make this package especially useful: - `pyproject.toml` registers the console entrypoint as `myk-claude-tools = "myk_claude_tools.cli:main"`. - `db/query.py` auto-detects the database at `/.claude/data/reviews.db`. - Raw SQL queries are intentionally restricted to `SELECT` and `WITH` statements for safety. - `release/detect_versions.py` and `release/bump_version.py` handle version files across multiple ecosystems, including Python, Node.js, Rust, and Gradle. If a plugin command feels more like a script than a prompt, the implementation usually lives here. ## `tests/` and automation The `tests/` directory mirrors the repository’s most important runtime concerns: hooks, git safety rules, review database behavior, release/version automation, and CodeRabbit integrations. 
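One of those concerns, review-database query safety, reduces to allowing only read-only SQL. A minimal sketch of that kind of statement-prefix guard; the real check lives in `myk_claude_tools/db/`, and its exact logic may differ:

```python
def is_safe_query(sql: str) -> bool:
    """Allow only read-only statements, per the SELECT/WITH restriction."""
    tokens = sql.strip().split(None, 1)
    return bool(tokens) and tokens[0].upper() in {"SELECT", "WITH"}


print(is_safe_query("SELECT * FROM reviews"))               # True
print(is_safe_query("WITH t AS (SELECT 1) SELECT * FROM t"))  # True
print(is_safe_query("DELETE FROM reviews"))                 # False
```

Checking the first token (rather than a substring) keeps statements like `DELETE` out while still accepting CTE-style `WITH` queries.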
Local test execution is intentionally simple: ```1:7:tox.toml skipsdist = true envlist = ["unittests"] [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` Important test files include: - `tests/test_rule_enforcer.py` for deny/allow behavior and hook JSON output - `tests/test_git_protection.py` for branch detection, merge checks, and commit/push blocking - `tests/test_review_db.py` for database schema, migrations, query safety, and CLI behavior - `tests/test_detect_versions.py` for cross-ecosystem version-file detection - `tests/test_bump_version.py` for safe version updates - `tests/test_coderabbit_rate_limit.py` for rate-limit parsing and retrigger logic The rest of the automation is configured at the root: - `.pre-commit-config.yaml` runs `pre-commit-hooks`, `flake8`, `detect-secrets`, `ruff`, `ruff-format`, `gitleaks`, `mypy`, and `markdownlint`. - `.coderabbit.yaml` configures hosted CodeRabbit review tone, review settings, and enabled analysis tools. - `.pr_agent.toml` configures Qodo Merge review behavior. > **Note:** There is no `.github/workflows/` directory in this repository. The validation story is local-first: `tox`, `pre-commit`, and external review services configured through `.coderabbit.yaml` and `.pr_agent.toml`. ## Supporting directories Two smaller directories are worth knowing about: - `docs/plans/` holds design notes and planning documents, such as the auto-version-bump plan and design writeups. These are reference material, not runtime configuration. - `state/` is for runtime snapshot data. Its local `.gitignore` keeps `*-snapshot.json` files out of version control while preserving the directory itself. 
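Before moving on, one of those automation concerns deserves a concrete picture: version bumping. The kind of pure operation that `release/bump_version.py` performs (and `tests/test_bump_version.py` verifies) can be sketched as a plain semver bump; the real implementation handles multiple ecosystems and file formats, so this is only the core idea:

```python
def bump(version: str, part: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version string (sketch; no pre-release handling)."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")


print(bump("1.7.2", "patch"))  # 1.7.3
print(bump("1.7.2", "minor"))  # 1.8.0
```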
## Where to look first | If you want to… | Start here | Then check | | --- | --- | --- | | Change hook behavior or startup checks | `settings.json` | `scripts/`, `tests/test_rule_enforcer.py`, `tests/test_git_protection.py` | | Add or edit a specialist agent | `agents/` | `rules/10-agent-routing.md`, `.gitignore` | | Change orchestration policy | `rules/` | `settings.json`, `scripts/` | | Add a slash command | `plugins//commands/` | `.claude-plugin/plugin.json`, `.claude-plugin/marketplace.json`, `.gitignore` | | Add deterministic CLI behavior | `myk_claude_tools/` | `pyproject.toml`, `tests/` | | Update local validation | `tox.toml` and `.pre-commit-config.yaml` | `.coderabbit.yaml`, `.pr_agent.toml` | | Add any new tracked file under the major folders | The target directory | `.gitignore` whitelist entries | --- Source: requirements.md # System Requirements This repository is a Claude Code configuration, not a standalone application. To use it as intended, you need Claude Code itself plus the local tools, hooks, and plugins that the checked-in configuration expects. The list below is based on the codebase itself: `settings.json`, hook scripts, plugin commands, Python modules, tests, and local automation files. ## Requirement Matrix | Component | Why the repository expects it | Required? 
| |---|---|---| | Python 3.10+ | `myk-claude-tools` declares `requires-python = ">=3.10"` | Yes | | `uv` | Runs all Python hooks and is the expected Python entrypoint | Yes | | `git` | Required by repo-aware CLI flows such as reviews and releases | Yes | | `myk-claude-tools` | Required by the `myk-github` and `myk-review` slash commands | Yes for those plugins | | `superpowers`, `pr-review-toolkit`, `feature-dev` plugins | Power the mandatory 3-agent review loop | Yes | | `gh` | Needed for GitHub PR, review, diff, release, and API operations | Required for GitHub workflows | | `jq` | Used by the status line and notification hook; checked at session start | Recommended, practically required for stock setup | | `gawk` | Checked as part of the expected AI review handler toolchain | Recommended | | `prek` | Required for this repo’s pre-commit workflow; direct `pre-commit ...` is blocked | Recommended, effectively required here | | `mcpl` | Required if you use MCP server discovery and execution | Optional | | `acpx` | Required only for the optional `myk-acpx` plugin | Optional | | `notify-send` | Needed only for the Linux desktop notification hook | Optional, Linux-only | ## Core Runtime The packaged CLI and Python tooling require Python 3.10 or newer, and the repository is wired to run Python through `uv`. ```1:24:pyproject.toml [project] name = "myk-claude-tools" version = "1.7.2" description = "CLI utilities for Claude Code plugins" readme = "README.md" license = "MIT" requires-python = ">=3.10" authors = [{ name = "myk-org" }] keywords = ["claude", "cli", "github", "code-review"] [project.scripts] myk-claude-tools = "myk_claude_tools.cli:main" ``` The checked-in Claude settings run the hook scripts with `uv run`, so `uv` is not optional if you want the configuration to work as shipped. 
```37:84:settings.json "PreToolUse": [ { "matcher": "TodoWrite|Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" } ] }, { "matcher": "Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/git-protection.py" } ] }, { "matcher": "Bash", "hooks": [ { "type": "prompt", "prompt": "You are a security gate protecting against catastrophic OS destruction. Analyze: $ARGUMENTS\n\nBLOCK if the command would:\n- Delete system directories: /, /boot, /etc, /usr, /bin, /sbin, /lib, /var, /home\n- Write to disk devices: dd to /dev/sda, /dev/nvme, etc.\n- Format filesystems: mkfs on any device\n- Remove critical files: /etc/fstab, /etc/passwd, /etc/shadow, kernel/initramfs\n- Recursive delete with sudo or as root\n- Chain destructive commands after safe ones using &&, ;, |, ||\n\nASK (requires user confirmation) if:\n- Command uses sudo but is not clearly destructive\n- Deletes files outside system directories but looks risky\n\nALLOW all other commands - this gate only guards against OS destruction.\n\nIMPORTANT: Use your judgment. If a command seems potentially destructive even if not explicitly listed above, ASK the user for confirmation.\n\nRespond with JSON: {\"decision\": \"approve\" or \"block\" or \"ask\", \"reason\": \"brief explanation\"}", "timeout": 10000 } ] } ], "UserPromptSubmit": [ { "matcher": "", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" ``` `git` is also part of the expected baseline. It is not just for cloning the repo; several Python modules call it directly to find branches, roots, tags, and diffs. > **Warning:** If `uv` is missing, the hook system in `settings.json` cannot run as configured. ## Startup Checks The session-start hook is the clearest summary of what this configuration expects to find on your machine. ```29:93:scripts/session-start-check.sh # CRITICAL: uv - Required for Python hooks if ! 
command -v uv &>/dev/null; then missing_critical+=("[CRITICAL] uv - Required for running Python hooks Install: https://docs.astral.sh/uv/") fi # OPTIONAL: gh - Only check if this is a GitHub repository if git remote -v 2>/dev/null | grep -q "github.com"; then if ! command -v gh &>/dev/null; then missing_optional+=("[OPTIONAL] gh - Required for GitHub operations (PRs, issues, releases) Install: https://cli.github.com/") fi fi # OPTIONAL: jq - Required for AI review handlers if ! command -v jq &>/dev/null; then missing_optional+=("[OPTIONAL] jq - Required for AI review handlers (JSON processing) Install: https://stedolan.github.io/jq/download/") fi # OPTIONAL: gawk - Required for AI review handlers if ! command -v gawk &>/dev/null; then missing_optional+=("[OPTIONAL] gawk - Required for AI review handlers (text processing) Install: brew install gawk (macOS) or apt install gawk (Linux)") fi # OPTIONAL: prek - Only check if .pre-commit-config.yaml exists if [[ -f ".pre-commit-config.yaml" ]]; then if ! command -v prek &>/dev/null; then missing_optional+=("[OPTIONAL] prek - Required for pre-commit hooks (detected .pre-commit-config.yaml) Install: https://github.com/j178/prek") fi fi # OPTIONAL: mcpl - MCP Launchpad (always check) if ! command -v mcpl &>/dev/null; then missing_optional+=("[OPTIONAL] mcpl - MCP Launchpad for MCP server access Install: https://github.com/kenneth-liao/mcp-launchpad") fi # CRITICAL: Review plugins - Required for mandatory code review loop critical_marketplace_plugins=( pr-review-toolkit superpowers feature-dev ) ``` Two details are worth calling out: - `gh`, `jq`, `gawk`, `prek`, and `mcpl` are reported as optional by the startup script, but several project features stop working without them. - This repo contains `.pre-commit-config.yaml`, so the `prek` check is active here. > **Note:** `gawk` is only explicitly checked by the startup script in this repository. 
That means it is part of the supported review-processing environment, even though the critical hooks do not hard-fail if it is missing. ## `uv` Instead of Raw Python or `pre-commit` The rule-enforcer hook blocks direct Python and direct `pre-commit` usage. The expected workflow is `uv run`, `uvx`, and `prek`. ```8:26:scripts/rule-enforcer.py def is_forbidden_python_command(command: str) -> bool: """Check if command uses python/pip directly instead of uv.""" cmd = command.strip().lower() # Allow uv/uvx commands if cmd.startswith(("uv ", "uvx ")): return False # Block direct python/pip forbidden = ("python ", "python3 ", "pip ", "pip3 ") return cmd.startswith(forbidden) def is_forbidden_precommit_command(command: str) -> bool: """Check if command uses pre-commit directly instead of prek.""" cmd = command.strip().lower() # Block direct pre-commit commands return cmd.startswith("pre-commit ") ``` In practice, that means: - Use `uv run ...` for Python scripts. - Use `uvx ...` for Python-based CLI tools. - Use `prek run --all-files` instead of `pre-commit run --all-files`. `tests/test_rule_enforcer.py` backs this up: it explicitly allows `uv`/`uvx` and `prek`, and explicitly denies direct `pre-commit ...` commands. > **Warning:** If you already use `pre-commit` directly in other projects, do not assume that will work here. This repo’s hook logic is written to reject that path. ## Review Plugins Are Mandatory This configuration enforces a 3-reviewer loop for code changes. Those reviewers come from official Claude plugins, and the workflow depends on them being installed. 
```35:43:rules/20-code-review-loop.md Three plugin agents review code in parallel for comprehensive coverage: | Agent | Focus | |---|---| | `superpowers:code-reviewer` | General code quality and maintainability | | `pr-review-toolkit:code-reviewer` | Project guidelines and style adherence (CLAUDE.md) | | `feature-dev:code-reviewer` | Bugs, logic errors, and security vulnerabilities | **All 3 MUST be invoked in the same assistant turn as 3 parallel Task tool calls (one response containing 3 Task invocations, not sequential messages).** ``` If you want the configuration to behave the way the rules describe, install: - `superpowers@claude-plugins-official` - `pr-review-toolkit@claude-plugins-official` - `feature-dev@claude-plugins-official` The checked-in `settings.json` also enables a larger set of optional marketplace plugins, including `coderabbit`, `github`, `code-review`, `code-simplifier`, `frontend-design`, `security-guidance`, `pyright-lsp`, `gopls-lsp`, `jdtls-lsp`, `lua-lsp`, `playground`, `commit-commands`, `claude-code-setup`, and `claude-md-management`. Those are enhancements, not the hard minimum. > **Warning:** Skipping the three review plugins breaks the repository’s mandatory review loop, even if everything else is installed. ## Shipped Plugins From This Repo The repository’s marketplace manifest ships three plugins of its own: - `myk-github` - `myk-review` - `myk-acpx` Those are declared in `.claude-plugin/marketplace.json`, and `settings.json` enables them as `myk-github@myk-org`, `myk-review@myk-org`, and `myk-acpx@myk-org`. If you plan to use the repo’s slash commands, install the relevant plugin packages through Claude Code’s plugin system. ## `myk-claude-tools` Is Required for `myk-github` and `myk-review` The repo’s GitHub and review plugins rely on the packaged `myk-claude-tools` CLI. The command definitions repeatedly check for it and tell users to install it with `uv`. 
Examples in the command files include: - `myk-claude-tools --version` - `uv tool install myk-claude-tools` That dependency is not theoretical. The `myk-github` and `myk-review` commands are designed around `myk-claude-tools pr ...`, `myk-claude-tools reviews ...`, `myk-claude-tools db ...`, and `myk-claude-tools release ...`. > **Tip:** If you only want the hook behavior and do not plan to use `/myk-github:*` or `/myk-review:*`, you can postpone installing `myk-claude-tools`. For the slash commands themselves, it is required. ## GitHub CLI (`gh`) and Authentication `gh` is where the repo draws an important line between "startup can continue" and "feature works correctly." The startup checker only marks `gh` as optional for GitHub repos, but the actual CLI commands that fetch PRs, diffs, and release metadata fail without it. ```83:88:myk_claude_tools/reviews/fetch.py def check_dependencies() -> None: """Check required dependencies.""" for cmd in ("gh", "git"): if shutil.which(cmd) is None: print_stderr(f"Error: '{cmd}' is required but not installed.") sys.exit(1) ``` ```188:223:myk_claude_tools/reviews/fetch.py for target_repo in repos_to_try: cmd = ["gh", "pr", "view", current_branch, "--json", "number", "--jq", ".number"] if target_repo: cmd.extend(["-R", target_repo]) try: result = subprocess.run(cmd, capture_output=True, text=True, timeout=30) if result.returncode == 0 and result.stdout.strip(): pr_number = result.stdout.strip() matched_repo = target_repo if target_repo: print_stderr(f"Found PR #{pr_number} on upstream ({target_repo})") break except subprocess.TimeoutExpired: continue # Fall back to gh repo view for the default repo try: result = subprocess.run( ["gh", "repo", "view", "--json", "owner,name", "-q", '.owner.login + "/" + .name'], ``` Other modules do the same: - `myk_claude_tools/pr/diff.py` uses `gh api` and `gh pr diff`. - `myk_claude_tools/release/info.py` treats both `gh` and `git` as required for release info. 
- The `myk-github` plugin command definitions use `gh pr view`, `gh repo view`, and `gh pr create`. That means you need both: - the `gh` binary installed - a working `gh` login with access to the repositories you want to operate on > **Warning:** Installing `gh` is not enough by itself. Review, diff, PR, and release flows assume `gh` can successfully call `gh api`, `gh pr view`, and `gh repo view` against your GitHub account. There is one notable exception: `scripts/git-protection.py` uses `gh` opportunistically to detect merged PRs, but it tolerates a missing `gh` and falls back gracefully. `tests/test_get_all_reviews.py` and `tests/test_git_protection.py` cover that difference. ## `jq`, `notify-send`, and the Stock UI Hooks The repo’s stock status line parses JSON with `jq`, and the notification hook requires both `jq` and `notify-send`. ```7:11:statusline.sh model_name=$(echo "$input" | jq -r '.model.display_name') current_dir=$(echo "$input" | jq -r '.workspace.current_dir') # Get context usage percentage (pre-calculated by Claude Code v2.1.6+) context_pct=$(echo "$input" | jq -r '.context_window.used_percentage // empty') ``` ```4:9:scripts/my-notifier.sh # Check for required commands for cmd in jq notify-send; do if ! command -v "$cmd" &>/dev/null; then echo "Error: Required command '$cmd' not found" >&2 exit 1 fi done ``` For a stock Linux setup, install both. On non-Linux systems, or if you do not want desktop notifications, you can treat `notify-send` as optional and adjust the hook to match your platform. > **Note:** `jq` is labeled optional in the startup script, but it is a practical day-one dependency if you keep the checked-in status line and notifier enabled. ## `mcpl` for MCP Servers If you use MCP-backed workflows, this repo expects `mcpl` as the command-line entrypoint. 
```12:22:rules/15-mcp-launchpad.md | Command | Purpose | |----------------------------------------------|-----------------------------------------------------| | `mcpl search ""` | Search all tools (shows required params, 5 results) | | `mcpl search "" --limit N` | Search with more results | | `mcpl list` | List all MCP servers | | `mcpl list ` | List tools for a server (shows required params) | | `mcpl inspect ` | Get full schema | | `mcpl inspect --example` | Get schema + example call | | `mcpl call '{}'` | Execute tool (no arguments) | | `mcpl call '{"param": "v"}'` | Execute tool with arguments | | `mcpl verify` | Test all server connections | ``` `settings.json` also explicitly allows `Bash(mcpl:*)`. If you never use MCP servers, you can skip `mcpl`. If you do use them, install it before relying on MCP-related rules or commands. ## Optional `acpx` Support `myk-acpx` is shipped and enabled in `settings.json`, but its own command file treats `acpx` as optional and prompts users to install it only when needed. ```45:80:plugins/myk-acpx/commands/prompt.md ### Step 1: Prerequisites Check #### 1a: Check acpx ```bash acpx --version ``` If not found, ask the user via AskUserQuestion: "acpx is not installed. It provides structured access to multiple coding agents (Codex, Cursor, Gemini, etc.) via the Agent Client Protocol. Install it now?" Options: - **Yes (Recommended)** — Install globally with `npm install -g acpx@latest` - **No** — Abort If user selects Yes, run: ```bash npm install -g acpx@latest ``` Verify installation: ```bash acpx --version ``` ``` If you want `/myk-acpx:prompt`, you need: - `acpx` - the underlying target agent CLI you want `acpx` to wrap, such as Cursor, Codex, Gemini, or Copilot If you do not use `myk-acpx`, you can skip this entire dependency chain. ## Local Validation Model This repository does not define GitHub Actions workflows under `.github/workflows/`. 
Its automation model is local-first: - Claude hooks are defined in `settings.json` - test execution is defined in `tox.toml` - pre-commit hooks are defined in `.pre-commit-config.yaml` - the supported wrapper for pre-commit is `prek` `tox.toml` shows the expected test command: ```1:7:tox.toml skipsdist = true envlist = ["unittests"] [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` That means you do not need to provision CI runners just to use the repo. You do need the local CLI toolchain that those hooks and test commands expect. > **Note:** In this repo, "CI/CD requirements" mostly translate to local hook and validation requirements, not hosted workflow requirements. ## Quick Verification Checklist Run the commands below after installing the toolchain you need: ```bash uv --version myk-claude-tools --version gh repo view --json nameWithOwner -q .nameWithOwner prek run --all-files mcpl verify acpx --version uv run --group tests pytest tests ``` Use the ones that match the features you plan to use: - Always verify `uv`. - Verify `myk-claude-tools` if you plan to use `myk-github` or `myk-review`. - Verify `gh` against a real repository if you plan to use PR, review, or release features. - Verify `prek` in this repository, because `.pre-commit-config.yaml` is present. - Verify `mcpl` only if you use MCP integrations. - Verify `acpx` only if you use `myk-acpx`. > **Tip:** `~/.claude/scripts/session-start-check.sh` is the fastest way to surface missing critical and optional requirements in one pass. --- Source: installation.md # Installation This repository is meant to become your Claude Code home directory. Installing it gives you the project’s `settings.json`, hook scripts, custom status line, bundled `myk-claude-tools` CLI integration, and the bundled plugins `myk-github`, `myk-review`, and `myk-acpx`. 
## Before You Start You need a working Claude Code installation plus a few local tools. - `uv` is required. The hook configuration runs Python scripts with `uv run ~/.claude/scripts/...`. - Python `3.10+` is required by the bundled CLI package. - `gh` is needed for GitHub workflows such as PR review and release commands. The session-start check only flags it when the repo you open uses a GitHub remote. - `jq` is used by the status line, notification script, and review handlers. - `gawk`, `prek`, and `mcpl` are optional, but the session-start check will tell you when they matter. - `notify-send` is required if you want the built-in desktop notifications from `my-notifier.sh`. > **Note:** This configuration intentionally prefers `uv` and `uvx` over direct `python`, `pip`, and `pre-commit` commands. The rule-enforcer hook blocks those direct commands. ## 1. Install the Repo as `~/.claude` The shipped `settings.json` uses fixed paths under `~/.claude`, so the repository needs to live there. For a fresh install: ```bash git clone https://github.com/myk-org/claude-code-config ~/.claude ``` If you already have a Claude Code home directory, back it up first: ```bash mv ~/.claude ~/.claude.backup.$(date +%Y%m%d%H%M%S) git clone https://github.com/myk-org/claude-code-config ~/.claude ``` > **Warning:** Do not clone this repo into a nested path such as `~/.claude/claude-code-config`. The configured hook and status-line commands point to `~/.claude/...` directly. These paths come straight from the repository’s `settings.json`: ```json { "hooks": { "UserPromptSubmit": [ { "matcher": "", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" } ] } ], "SessionStart": [ { "matcher": "", "hooks": [ { "type": "command", "command": "~/.claude/scripts/session-start-check.sh", "timeout": 5000 } ] } ] }, "statusLine": { "type": "command", "command": "bash ~/.claude/statusline.sh", "padding": 0 } } ``` ## 2. 
Use This Repo’s `settings.json` and Scripts The simplest installation is to use this repository’s `settings.json` as your `~/.claude/settings.json`. That file does more than turn hooks on. It also wires up permissions, enables plugins, and registers extra marketplaces. If you merge it by hand instead of replacing your file, keep these sections aligned: - `hooks` - `statusLine` - `permissions.allow` - `allowedTools` - `enabledPlugins` - `extraKnownMarketplaces` This note is already embedded in the shipped settings: ```json { "_scriptsNote": "Script entries must be duplicated in both permissions.allow and allowedTools arrays. When adding new scripts, update BOTH locations." } ``` The same file already enables the bundled plugins once they are installed: ```json { "enabledPlugins": { "myk-review@myk-org": true, "myk-github@myk-org": true, "myk-acpx@myk-org": true, "pr-review-toolkit@claude-plugins-official": true, "superpowers@claude-plugins-official": true, "feature-dev@claude-plugins-official": true } } ``` > **Note:** `settings.json` also defines `cli-anything` and `worktrunk` under `extraKnownMarketplaces`. Those entries make the marketplaces known to Claude Code, but they do not install any plugins by themselves. ## 3. Install the Bundled CLI Several bundled plugin commands explicitly check for `myk-claude-tools` and tell you to install it with `uv tool install myk-claude-tools`. 
The CLI package metadata in this repo looks like this: ```toml [project] name = "myk-claude-tools" requires-python = ">=3.10" [project.scripts] myk-claude-tools = "myk_claude_tools.cli:main" ``` Its entrypoint registers these top-level command groups: ```python cli.add_command(coderabbit_commands.coderabbit, name="coderabbit") cli.add_command(db_commands.db, name="db") cli.add_command(pr_commands.pr, name="pr") cli.add_command(release_commands.release, name="release") cli.add_command(reviews_commands.reviews, name="reviews") ``` Install it with `uv`: ```bash uv tool install myk-claude-tools myk-claude-tools --version ``` ## 4. Add the Plugin Marketplaces and Install the Plugins This repository ships its own marketplace manifest. The bundled plugins are defined there: ```json { "name": "myk-org", "plugins": [ { "name": "myk-github", "source": "./plugins/myk-github" }, { "name": "myk-review", "source": "./plugins/myk-review" }, { "name": "myk-acpx", "source": "./plugins/myk-acpx" } ] } ``` Add the marketplace for this repo and install the bundled plugins: ```text /plugin marketplace add myk-org/claude-code-config /plugin install myk-github@myk-org /plugin install myk-review@myk-org /plugin install myk-acpx@myk-org ``` The session-start hook also treats three official review plugins as critical. 
That list comes straight from `session-start-check.sh`: ```bash critical_marketplace_plugins=( pr-review-toolkit superpowers feature-dev ) ``` Install them from the official marketplace: ```text /plugin marketplace add claude-plugins-official /plugin install pr-review-toolkit@claude-plugins-official /plugin install superpowers@claude-plugins-official /plugin install feature-dev@claude-plugins-official ``` `settings.json` also enables additional official plugins such as `github`, `pyright-lsp`, `gopls-lsp`, `jdtls-lsp`, `lua-lsp`, `code-review`, `code-simplifier`, `frontend-design`, `coderabbit`, `commit-commands`, `claude-md-management`, `claude-code-setup`, `playground`, and `security-guidance`. You can install those as needed; the session-start check reports missing optional marketplace plugins and prints the exact `/plugin install` commands for them. > **Note:** `myk-acpx` has one extra prerequisite. Its command file checks for `acpx` and recommends this install if it is missing. ```bash npm install -g acpx@latest ``` ## 5. Restart Claude Code and Verify the Install Start a new Claude Code session after installing the repo, CLI, and plugins. A quick verification checklist: - `myk-claude-tools --help` works and the CLI is on your `PATH`. - The session-start check does not report missing **CRITICAL** items. - The custom status line appears. - These bundled commands are available: `/myk-review:local`, `/myk-review:query-db`, `/myk-github:pr-review`, `/myk-github:review-handler`, `/myk-github:release`, and `/myk-acpx:prompt`. > **Tip:** The status line script uses `jq` to read Claude Code’s JSON input. If the status line is blank or errors out, install `jq` first. 
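If you prefer a scripted version of these checks, the same presence tests can be written with Python's `shutil.which`, mirroring the `check_dependencies` pattern shown elsewhere in this document. The tool list below is illustrative, so trim it to the features you actually use:

```python
import shutil


def missing_tools(tools: list[str]) -> list[str]:
    """Return the subset of tools that are not on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]


# Illustrative list based on this repository's expectations.
for tool in missing_tools(["uv", "gh", "jq", "gawk", "prek", "mcpl"]):
    print(f"missing: {tool}")
```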
## What to Expect After Installation

Once the configuration is active, Claude Code will automatically:

- run a session-start check for required tools and critical plugins
- inject the repo’s prompt-time rule reminder hook
- enforce command guardrails around Python, `pip`, and `pre-commit`
- protect `main` and `master` from direct commits and pushes
- show a custom status line from `statusline.sh`

If you use the review tooling, review analytics are stored per project, not under your home directory. The database lives at this path, relative to the project root:

```text
.claude/data/reviews.db
```

The storage code creates that directory if needed, and the test suite confirms the database is created there and appended to on later runs.

## Troubleshooting

### `uv` is missing or Python commands are blocked

This is expected. The rule-enforcer hook allows `uv` and `uvx` but blocks direct `python`, `python3`, `pip`, and `pip3`:

```python
if cmd.startswith(("uv ", "uvx ")):
    return False
forbidden = ("python ", "python3 ", "pip ", "pip3 ")
```

If you hit that guardrail:

- install `uv`
- run Python scripts with `uv run`
- run CLI tools with `uvx` when appropriate
- use `prek`, not `pre-commit`, if you want the pre-commit workflow this repo expects

### Notifications fail

The bundled notification script requires both `jq` and `notify-send`. If `notify-send` is not installed, desktop notifications will fail.

> **Note:** If you do not want desktop notifications, remove or replace the `Notification` hook in `~/.claude/settings.json` after installation.

### Git commits or pushes are blocked

This is also expected. `git-protection.py` is designed to stop commits and pushes on `main`, `master`, and branches that are already merged. The intended workflow is to work on a feature branch.

> **Tip:** If Claude Code tells you a commit or push was blocked on a protected branch, create a feature branch and continue there. The test suite in this repo confirms that behavior.
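To see the troubleshooting rules above in one place, here is a self-contained sketch of the guardrail predicates. It is modeled on the `rule-enforcer.py` excerpt quoted earlier, not a copy of the shipped script:

```python
def is_forbidden_python_command(command: str) -> bool:
    # Mirrors the excerpt above: uv/uvx pass through, direct python/pip is blocked.
    cmd = command.strip().lower()
    if cmd.startswith(("uv ", "uvx ")):
        return False
    return cmd.startswith(("python ", "python3 ", "pip ", "pip3 "))


def is_forbidden_precommit_command(command: str) -> bool:
    # Direct pre-commit is blocked; the config expects prek instead.
    return command.strip().lower().startswith("pre-commit ")


for cmd in (
    "python script.py",          # blocked
    "uv run script.py",          # allowed
    "pre-commit run --all-files",  # blocked
    "prek run --all-files",      # allowed
):
    blocked = is_forbidden_python_command(cmd) or is_forbidden_precommit_command(cmd)
    print(f"{cmd!r}: {'blocked' if blocked else 'allowed'}")
```

The prefix matching explains why the substitutes never trip the guardrail: `uv run ...` is allowlisted explicitly, and `prek ...` simply does not start with `pre-commit `.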
### No CI/CD setup is required This repository does not ship any `.github/workflows` files. Installation is entirely local to your Claude Code environment. --- Source: first-run-checks.md # First-Run Checks When you open a Claude Code session with this configuration, a quick startup hook runs to verify the local tools and plugins it depends on. The goal is to catch missing pieces early, explain what matters, and let you keep going instead of failing hard at startup. ```78:95:settings.json "SessionStart": [ { "matcher": "", "hooks": [ { "type": "command", "command": "~/.claude/scripts/session-start-check.sh", "timeout": 5000 } ] } ] }, "statusLine": { "type": "command", "command": "bash ~/.claude/statusline.sh", "padding": 0 }, ``` > **Note:** A healthy startup is intentionally quiet. If everything required is present, the session-start check does not print a success banner. ## What Runs At Session Start The validation logic lives in `scripts/session-start-check.sh`. It checks command-line tools with `command -v`, checks official marketplace plugins in standard `~/.claude` install locations, groups anything missing into `CRITICAL` and `OPTIONAL`, and only prints a report when there is something to fix. 
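The check-group-report flow just described can be sketched in a few lines. This is a Python illustration of the bash script's behavior, including its quiet-when-healthy rule; the real logic lives in `scripts/session-start-check.sh`:

```python
import shutil


def startup_report(checks, which=shutil.which):
    """checks: (command, level) pairs, where level is CRITICAL or OPTIONAL.

    Returns report text, or an empty string when everything is present,
    matching the script's quiet-when-healthy behavior.
    """
    missing = {"CRITICAL": [], "OPTIONAL": []}
    for command, level in checks:
        if which(command) is None:
            missing[level].append(command)
    if not (missing["CRITICAL"] or missing["OPTIONAL"]):
        return ""  # a healthy startup is intentionally silent
    lines = ["MISSING_TOOLS_REPORT:"]
    for level in ("CRITICAL", "OPTIONAL"):
        lines.extend(f"[{level}] {cmd}" for cmd in missing[level])
    return "\n".join(lines)
```

Like the real script, this sketch never raises on a missing tool; it only describes the problem and leaves the decision to the session.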
The tool checks are: | Item | Level | When it is checked | Why it matters | |---|---|---|---| | `uv` | Critical | Always | Required for the Python-based hooks used by this config | | `gh` | Optional | Only if the current repo has a GitHub remote | Needed for GitHub PR, issue, and release workflows | | `jq` | Optional | Always | Used for JSON processing in AI review tooling | | `gawk` | Optional | Always | Used for text processing in AI review tooling | | `prek` | Optional | Only if `.pre-commit-config.yaml` exists in the current directory | Needed for pre-commit workflows | | `mcpl` | Optional | Always | Enables MCP Launchpad access | The script also checks a curated set of official marketplace plugins as optional enhancements: `claude-code-setup`, `claude-md-management`, `code-review`, `code-simplifier`, `coderabbit`, `commit-commands`, `frontend-design`, `github`, `gopls-lsp`, `jdtls-lsp`, `lua-lsp`, `playground`, `pyright-lsp`, and `security-guidance`. > **Tip:** Open Claude Code in the project root you actually want to work in. The `gh` check depends on the repo's Git remotes, and the `prek` check depends on files in your current directory. ## Critical Plugin Checks Three official review plugins are treated as critical. If any of them are missing, the startup check tells Claude to prioritize fixing those first. ```69:93:scripts/session-start-check.sh # CRITICAL: Review plugins - Required for mandatory code review loop critical_marketplace_plugins=( pr-review-toolkit superpowers feature-dev ) missing_critical_plugins=() for plugin in "${critical_marketplace_plugins[@]}"; do if ! 
check_plugin_installed "$plugin"; then missing_critical_plugins+=("$plugin") fi done if [[ ${#missing_critical_plugins[@]} -gt 0 ]]; then missing_list=$(printf '%s, ' "${missing_critical_plugins[@]}") missing_list=${missing_list%, } install_cmds="" for p in "${missing_critical_plugins[@]}"; do install_cmds+=" /plugin install ${p}@claude-plugins-official"$'\n' done missing_critical+=("[CRITICAL] Missing review plugins - Required for mandatory code review loop Install: /plugin marketplace add claude-plugins-official ${install_cmds} Missing: ${missing_list}") ``` Those three plugins are critical because the repository's review workflow depends on all three review agents running in parallel: ```35:41:rules/20-code-review-loop.md Three plugin agents review code in parallel for comprehensive coverage: | Agent | Focus | |---|---| | `superpowers:code-reviewer` | General code quality and maintainability | | `pr-review-toolkit:code-reviewer` | Project guidelines and style adherence (CLAUDE.md) | | `feature-dev:code-reviewer` | Bugs, logic errors, and security vulnerabilities | ``` > **Warning:** The startup check is non-blocking. Claude Code will still open even if something marked `CRITICAL` is missing, so treat those warnings as action required, not as harmless suggestions. ## What Happens When Something Is Missing If the script finds anything missing, it prints a structured `MISSING_TOOLS_REPORT:`. That report does more than list missing items: it explicitly tells Claude to explain each one and ask whether you want help installing it. ```134:166:scripts/session-start-check.sh # Output report only if something is missing if [[ ${#missing_critical[@]} -gt 0 || ${#missing_optional[@]} -gt 0 ]]; then echo "MISSING_TOOLS_REPORT:" echo "" echo "[AI INSTRUCTION - YOU MUST FOLLOW THIS]" echo "Some tools required by this configuration are missing." echo "" echo "Criticality levels:" echo "- CRITICAL: Configuration will NOT work without these. Must install." 
echo "- OPTIONAL: Enhances functionality. Nice to have." echo "" echo "YOUR REQUIRED ACTION:" echo "1. List each missing tool with its purpose" echo "2. ASK the user: 'Would you like me to help install these tools?'" echo "3. If user accepts, provide the installation command for each tool" echo "4. Prioritize CRITICAL tools first" echo "" echo "DO NOT just mention the tools. You MUST ask if the user wants help installing them." echo "" for item in "${missing_critical[@]}"; do echo "$item" echo "" done for item in "${missing_optional[@]}"; do echo "$item" echo "" done fi # Always exit 0 (non-blocking) exit 0 ``` In practice, the first-run flow looks like this: 1. A new Claude session starts. 2. The `SessionStart` hook runs `session-start-check.sh`. 3. If everything is present, startup stays silent. 4. If anything is missing, Claude should explain what is missing and ask if you want installation help. > **Note:** Because this check is attached to `SessionStart`, the simplest way to run it again after installing something is to start a new Claude session. ## How To Confirm The Configuration Is Active There is no dedicated "all good" banner, so confirmation is mostly about expected behavior. A visible signal is the custom status line. This config enables `statusline.sh`, which builds a line from your current directory, Git branch, model name, context usage, and line delta. 
```7:46:statusline.sh model_name=$(echo "$input" | jq -r '.model.display_name') current_dir=$(echo "$input" | jq -r '.workspace.current_dir') # Get context usage percentage (pre-calculated by Claude Code v2.1.6+) context_pct=$(echo "$input" | jq -r '.context_window.used_percentage // empty') # Get current directory basename dir_name=$(basename "$current_dir") # Build status line components status_parts=() # Add directory name status_parts+=("$dir_name") # Add SSH info if connected via SSH if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then status_parts+=("$(whoami)@$(hostname -s)") fi # Add git branch if in a git repository if git rev-parse --git-dir >/dev/null 2>&1; then branch=$(git branch --show-current 2>/dev/null || echo "detached") status_parts+=("$branch") fi # Add virtual environment if active if [ -n "$VIRTUAL_ENV" ]; then status_parts+=("($(basename "$VIRTUAL_ENV"))") fi # Add model and context usage status_parts+=("$model_name") if [ -n "$context_pct" ]; then status_parts+=("(${context_pct}%)") fi # Extract lines added/removed lines_added=$(echo "$input" | jq -r '.cost.total_lines_added // 0') lines_removed=$(echo "$input" | jq -r '.cost.total_lines_removed // 0') ``` A practical confirmation checklist: 1. Open a fresh Claude Code session in the repository root. 2. If anything is missing, expect a `MISSING_TOOLS_REPORT:` and an offer to help install it. 3. If nothing is missing, expect no startup warning. 4. Look for the custom status line showing your directory, branch, model, and session context. 5. If you use one of this repo's slash commands, expect it to follow the repo's defined workflows rather than generic Claude defaults. > **Warning:** `jq` is labeled optional by the startup check, but the status line also uses `jq`. If `jq` is missing, Claude Code may still start, but the custom status line can fail or appear incomplete. ## What First-Run Checks Do Not Cover The startup check is useful, but it is not exhaustive. 
It does not validate every dependency every command might need later. For example, repo-specific commands can run their own prerequisite checks on demand instead of at session start: ```11:32:plugins/myk-github/commands/pr-review.md ## Prerequisites Check (MANDATORY) Before starting, verify the tools are available: ### Step 0: Check uv ... command example omitted ... If not found, install from ### Step 1: Check myk-claude-tools ... command example omitted ... If not found, prompt user: "myk-claude-tools is required. Install with: `uv tool install myk-claude-tools`. Install now?" - Yes: Run `uv tool install myk-claude-tools` - No: Abort with instructions ``` That means "the first-run check passed" and "every later command dependency is already installed" are not exactly the same thing. The startup hook verifies the shared foundation, while individual commands may still perform their own targeted checks. The other important limit is plugin detection scope. The startup script checks official `@claude-plugins-official` plugins from standard `~/.claude` install paths. It does not serve as a full audit of every enabled plugin or every marketplace you may have configured. > **Warning:** If your plugins are installed in a non-standard location, the startup check may report them missing even though Claude Code can still reach them another way. --- Source: personal-claude-md.md # Personal CLAUDE.md Use a personal `CLAUDE.md` when you want Claude to follow your own preferences in this repository without editing shared files like `AI_REVIEW.md`, `rules/`, `scripts/`, or `settings.json`. It is the right place for personal style, communication, environment, and workflow notes. This repository already includes a starting template in `CLAUDE.md.example`. 
## Quick Start Copy the template and edit it for your own workflow: ```bash cp CLAUDE.md.example CLAUDE.md ``` The template starts like this: ```markdown # Personal Claude Configuration Template This is a template for your personal `CLAUDE.md` file. Copy this to `CLAUDE.md` and customize it to your preferences. > **Note**: Orchestrator rules (agent routing, delegation, etc.) auto-load from `.claude/rules/` - you don't need to include them here. This file is for YOUR personal preferences only. ``` The template already gives you sections for: - code style preferences - communication preferences - project context - tools and environment - custom personal rules - rule overrides A real example from the template: ```text # Example preferences: - Be concise - skip explanations unless I ask - Always explain complex decisions - Use technical terminology freely - Provide context for breaking changes - Highlight security implications ``` > **Tip:** Start small. A short `CLAUDE.md` with a few strong preferences is usually more useful than a long list of vague rules. ## Where To Put It The repository tooling supports either of these locations: - `CLAUDE.md` - `.claude/CLAUDE.md` If both exist, `CLAUDE.md` wins because it is checked first. Use `CLAUDE.md` when you want the simplest setup. Use `.claude/CLAUDE.md` when you want to keep local AI configuration inside an ignored `.claude/` directory. ## Why It Stays Personal This repository ignores everything by default and then explicitly re-includes tracked files. The top of `.gitignore` shows the pattern: ```gitignore # Ignore everything by default # This config integrates into ~/.claude so we must be explicit about what we track * # Core config files !.coderabbit.yaml !.gitignore !LICENSE !AI_REVIEW.md !README.md !settings.json !statusline.sh ``` `CLAUDE.md` is not whitelisted there, which makes it a good place for local preferences in this repo. 
That means you can tune Claude for your own work without creating shared repository changes just to store personal taste. > **Note:** `CLAUDE.md.example` is the tracked template. Your own `CLAUDE.md` is the personal working copy. ## What Belongs In Personal `CLAUDE.md` Good things to put here: - how concise or detailed you want responses - your preferred editors, shells, runtimes, or package managers - local workflow habits you want Claude to respect - project-specific reminders that matter to your daily work - narrow, well-documented overrides for your own workflow The template includes practical sections for exactly that: ```text ## Code Style Preferences ## Communication Preferences ## Project Context ## Tools & Environment ## Custom Personal Rules ## Rule Overrides ``` A real tools-and-environment example from the template: ```text # Example preferences: - Editor: VSCode with Pylance - Terminal: zsh with oh-my-zsh - Container runtime: Podman (not Docker) - Cloud provider: AWS - Preferred testing frameworks: - Python: pytest - JavaScript: Jest - Go: standard testing package ``` ## How It Is Discovered The repository’s PR helper looks for `CLAUDE.md` content in a fixed order in `myk_claude_tools/pr/claude_md.py`: ```python # Check local ./CLAUDE.md local_claude_md = Path("./CLAUDE.md") if local_claude_md.is_file(): print(local_claude_md.read_text(encoding="utf-8")) return # Check local ./.claude/CLAUDE.md local_claude_dir_md = Path("./.claude/CLAUDE.md") if local_claude_dir_md.is_file(): print(local_claude_dir_md.read_text(encoding="utf-8")) return # Fetch upstream CLAUDE.md content = fetch_from_github(pr_info.owner, pr_info.repo, "CLAUDE.md") # Fetch upstream .claude/CLAUDE.md content = fetch_from_github(pr_info.owner, pr_info.repo, ".claude/CLAUDE.md") ``` In practice, the lookup order is: 1. local `./CLAUDE.md` 2. local `./.claude/CLAUDE.md` 3. remote `CLAUDE.md` 4. 
remote `.claude/CLAUDE.md` This matters because a personal file in your local clone can be picked up before anything fetched from GitHub. ## How The Repo Uses It ### PR review The GitHub PR review workflow has an explicit `CLAUDE.md` fetch step: ```bash myk-claude-tools pr claude-md {pr_number} ``` That content is then passed into the review agents alongside the diff. In other words, if you keep stable review-relevant conventions in your personal `CLAUDE.md`, the review flow can use them. ### Peer review The ACPX peer review workflow checks whether `CLAUDE.md` exists and, if it does, tells the peer agent to read it: ```text IMPORTANT: This project has a CLAUDE.md file with coding conventions and project guidelines. Read it before reviewing. Flag any violations of those conventions as findings. ``` This makes `CLAUDE.md` useful for conventions you want reviewers to enforce consistently. > **Warning:** If you ever choose to track a `CLAUDE.md` in another project, its contents may be read by review tooling and fetched remotely. Do not put secrets, tokens, or private credentials in it. ## Using The Rule Overrides Section The template includes a `Rule Overrides` section for personal, targeted exceptions. These are written as readable instructions, not as a replacement for tracked repo configuration. 
Real examples from `CLAUDE.md.example`: ```text # IGNORE RULE: 20-code-review-loop.md # Reason: Working on quick prototypes, skip mandatory code review # OVERRIDE: Allow direct Bash usage # For simple commands (ls, pwd, cd), allow orchestrator to use Bash directly # instead of delegating to bash-expert # ROUTE OVERRIDE: Python tests # Route Python test execution to test-automator instead of python-expert # CONDITIONAL: Code review for critical files only # Only enforce code review for files in: src/core/, src/security/ # Skip review for: tests/, docs/, scripts/ ``` The template also shows a more structured style with scope and reason: ```text # My Custom Workflow Overrides # 1. Skip code review for documentation IGNORE RULE: 20-code-review-loop.md SCOPE: **/*.md, docs/**/* # 2. Allow direct bash for git commands OVERRIDE: Git operations can use Bash directly REASON: Git commands are simple and don't need git-expert delegation ``` When you use overrides: - keep them narrow - include a reason - include scope when possible - review them periodically - remove them when they are no longer useful > **Tip:** If an override would help everyone, it probably belongs in a tracked shared rule instead of your personal file. ## What Personal Overrides Cannot Change Personal guidance is not the same thing as changing repository enforcement. 
For example, `scripts/rule-enforcer.py` denies direct `python`, `python3`, `pip`, `pip3`, and `pre-commit` commands in favor of `uv`, `uvx`, and `prek`: ```python def is_forbidden_python_command(command: str) -> bool: cmd = command.strip().lower() # Allow uv/uvx commands if cmd.startswith(("uv ", "uvx ")): return False # Block direct python/pip forbidden = ("python ", "python3 ", "pip ", "pip3 ") return cmd.startswith(forbidden) def is_forbidden_precommit_command(command: str) -> bool: cmd = command.strip().lower() # Block direct pre-commit commands return cmd.startswith("pre-commit ") ``` The test suite confirms that behavior in `tests/test_rule_enforcer.py`: commands like `python script.py` and `pre-commit run --all-files` are denied, while `uv run script.py`, `uvx ruff check .`, and `prek run --all-files` are allowed. So a personal note like “use `python` directly” does not override that hook. Shared hook behavior still lives in tracked configuration and scripts. ## Keep Shared Changes Separate Use this rule of thumb: | Put it in personal `CLAUDE.md` | Change a tracked shared file instead | |---|---| | response tone and detail level | team-wide documentation or project guidance | | preferred editor, shell, runtime, package manager | shared hooks and permissions | | local workflow preferences | repository rule files in `rules/` | | narrow personal overrides | `settings.json` behavior | | personal reminders for review | shared project context in `AI_REVIEW.md` | This also applies to automated checks. 
Repository validation is configured separately in `.pre-commit-config.yaml`, for example: ```yaml - repo: https://github.com/astral-sh/ruff-pre-commit hooks: - id: ruff - id: ruff-format - repo: https://github.com/pre-commit/mirrors-mypy hooks: - id: mypy - repo: https://github.com/igorshubovych/markdownlint-cli hooks: - id: markdownlint ``` Your personal `CLAUDE.md` can ask Claude to write in a certain style, but it does not disable tracked linting, typing, or documentation checks. ## Optional Plugin Support The repo enables the official `claude-md-management` plugin in `settings.json`: ```json "enabledPlugins": { "claude-md-management@claude-plugins-official": true } ``` The session-start check also looks for `claude-md-management` as an optional plugin. That means the repository is ready to work well with the official `CLAUDE.md` management tooling, but you can still use the template and lookup behavior without editing shared repository files. ## Recommended Workflow 1. Copy `CLAUDE.md.example` to `CLAUDE.md`. 2. Remove sections you do not need. 3. Add only the preferences you actually care about. 4. Put stable review-relevant conventions in the file if you want PR and peer-review tools to use them. 5. Keep shared policy changes out of it and update tracked repo files only when you intentionally want to change behavior for everyone. A good personal `CLAUDE.md` is short, specific, and local. It should make Claude easier to work with for you, without turning personal preferences into shared repository policy. --- Source: settings-json-reference.md # settings.json Reference `settings.json` is the root Claude Code configuration for this repository. It defines what Claude Code is allowed to do, which hooks run, which plugins are enabled, and which environment and UI defaults are applied. > **Note:** This file is written to work with companion files under `~/.claude/`. 
Every hook command points at `~/.claude/scripts/...`, and the status line points at `~/.claude/statusline.sh`. If you want to use this configuration unchanged, make sure those files exist at those paths. ```1:3:settings.json { "$schema": "https://json.schemastore.org/claude-code-settings.json", "includeCoAuthoredBy": false, ``` The schema line gives editors JSON Schema validation and autocomplete. `includeCoAuthoredBy: false` disables automatic `Co-authored-by` trailers. ## What This File Controls - Tool permissions and Bash allowlists - Session startup checks - Pre-tool safety hooks - Prompt submission behavior - Desktop notifications - Enabled plugins and extra marketplaces - Status-line rendering - Environment flags and feedback settings ## Permissions The first safety layer is `permissions.allow`. It is intentionally narrow: direct file access is limited to `/tmp/claude/**`, and only a small set of Bash command patterns are pre-approved. ```4:23:settings.json "permissions": { "allow": [ "Read(/tmp/claude/**)", "Edit(/tmp/claude/**)", "Write(/tmp/claude/**)", "Bash(mkdir -p /tmp/claude*)", "Bash(claude:*)", "Bash(sed -n:*)", "Bash(grep:*)", "Bash(mcpl:*)", "Bash(git -C:*)", "Bash(prek:*)", "Bash(~/.claude/scripts/session-start-check.sh:*)", "Bash(~/.claude/scripts/my-notifier.sh:*)", "Bash(uv run ~/.claude/scripts/rule-injector.py:*)", "Bash(uv run ~/.claude/scripts/rule-enforcer.py:*)", "Bash(uv run ~/.claude/scripts/git-protection.py:*)", "Grep", "Bash(myk-claude-tools:*)" ] }, ``` In practice, that gives you these categories of access: - Temporary workspace access: `Read`, `Edit`, and `Write` only under `/tmp/claude/**` - Approved shell entry points: `claude`, `sed -n`, `grep`, `mcpl`, `git -C`, `prek`, and `myk-claude-tools` - Hook entry points: the startup checker, notifier, and Python hook scripts - Native search access: the built-in `Grep` tool A few details are worth calling out: - `git` access is narrow here: the allowlist includes `Bash(git -C:*)`, not 
a blanket `Bash(git:*)` - `pre-commit` is not allowlisted directly; this config expects `prek` - The repo-specific `myk-claude-tools` command is a real CLI entry point defined in `pyproject.toml` ```23:24:pyproject.toml [project.scripts] myk-claude-tools = "myk_claude_tools.cli:main" ``` ## The Duplicate Allowlist This repository keeps a second allowlist in `allowedTools`. The checked-in file treats it as a mirror of the same approved tool set, and the file includes an explicit reminder to keep script entries synchronized. ```137:156:settings.json "_scriptsNote": "Script entries must be duplicated in both permissions.allow and allowedTools arrays. When adding new scripts, update BOTH locations.", "allowedTools": [ "Edit(/tmp/claude/**)", "Write(/tmp/claude/**)", "Read(/tmp/claude/**)", "Bash(mkdir -p /tmp/claude*)", "Bash(claude:*)", "Bash(sed -n:*)", "Bash(grep:*)", "Bash(mcpl:*)", "Bash(git -C:*)", "Bash(prek:*)", "Grep", "Bash(~/.claude/scripts/session-start-check.sh:*)", "Bash(~/.claude/scripts/my-notifier.sh:*)", "Bash(uv run ~/.claude/scripts/rule-injector.py:*)", "Bash(uv run ~/.claude/scripts/rule-enforcer.py:*)", "Bash(uv run ~/.claude/scripts/git-protection.py:*)", "Bash(myk-claude-tools:*)" ], ``` > **Warning:** If you add, remove, or rename a script-based command, update both `permissions.allow` and `allowedTools`. Updating only one of them is a common way to break this configuration. A practical way to think about this is: there is one logical allowlist, but this file stores it twice. ## Hooks The `hooks` block is where most of the behavior lives. This configuration uses four hook phases. 
| Hook event | Matcher | Runs | Purpose | |---|---|---|---| | `SessionStart` | `""` | `~/.claude/scripts/session-start-check.sh` | Checks tool and plugin prerequisites | | `UserPromptSubmit` | `""` | `uv run ~/.claude/scripts/rule-injector.py` | Injects a short reminder into prompt context | | `PreToolUse` | `TodoWrite|Bash` | `uv run ~/.claude/scripts/rule-enforcer.py` | Denies raw `python`, `pip`, and `pre-commit` usage | | `PreToolUse` | `Bash` | `uv run ~/.claude/scripts/git-protection.py` | Blocks unsafe `git commit` and `git push` usage | | `PreToolUse` | `Bash` | inline `prompt` hook | Blocks or asks about dangerous OS-level commands | | `Notification` | `""` | `~/.claude/scripts/my-notifier.sh` | Sends desktop notifications | > **Note:** Every Python hook is run through `uv run`, so `uv` is a hard requirement for this setup. ### SessionStart The startup hook is a non-blocking health check. It looks for required tools, optional helpers, and a set of official plugins. ```29:66:scripts/session-start-check.sh # CRITICAL: uv - Required for Python hooks if ! command -v uv &>/dev/null; then missing_critical+=("[CRITICAL] uv - Required for running Python hooks Install: https://docs.astral.sh/uv/") fi # OPTIONAL: gh - Only check if this is a GitHub repository if git remote -v 2>/dev/null | grep -q "github.com"; then if ! command -v gh &>/dev/null; then missing_optional+=("[OPTIONAL] gh - Required for GitHub operations (PRs, issues, releases) Install: https://cli.github.com/") fi fi # OPTIONAL: jq - Required for AI review handlers if ! command -v jq &>/dev/null; then missing_optional+=("[OPTIONAL] jq - Required for AI review handlers (JSON processing) Install: https://stedolan.github.io/jq/download/") fi # OPTIONAL: gawk - Required for AI review handlers if ! 
command -v gawk &>/dev/null; then missing_optional+=("[OPTIONAL] gawk - Required for AI review handlers (text processing) ``` The same script also checks for review-critical plugins and a wider set of optional official plugins: ```69:131:scripts/session-start-check.sh # CRITICAL: Review plugins - Required for mandatory code review loop critical_marketplace_plugins=( pr-review-toolkit superpowers feature-dev ) # OPTIONAL: Marketplace plugins - Check @claude-plugins-official plugins optional_marketplace_plugins=( claude-code-setup claude-md-management code-review code-simplifier coderabbit commit-commands frontend-design github gopls-lsp jdtls-lsp lua-lsp playground pyright-lsp security-guidance ) ``` If anything is missing, the script prints a structured `MISSING_TOOLS_REPORT` and tells the assistant to offer installation help. It never blocks the session. ```134:166:scripts/session-start-check.sh # Output report only if something is missing if [[ ${#missing_critical[@]} -gt 0 || ${#missing_optional[@]} -gt 0 ]]; then echo "MISSING_TOOLS_REPORT:" echo "" echo "[AI INSTRUCTION - YOU MUST FOLLOW THIS]" echo "Some tools required by this configuration are missing." echo "" echo "Criticality levels:" echo "- CRITICAL: Configuration will NOT work without these. Must install." echo "- OPTIONAL: Enhances functionality. Nice to have." # ... fi # Always exit 0 (non-blocking) exit 0 ``` A few practical takeaways: - `uv` is mandatory because the Python hooks use it - `gh` is only checked when the repo remote points at GitHub - `prek` is only checked when `.pre-commit-config.yaml` exists, and this repository does include that file - `mcpl`, `jq`, and `gawk` are optional, but they support real parts of this setup > **Tip:** Install `uv` first. If you want the review workflow to work cleanly, install `pr-review-toolkit`, `superpowers`, and `feature-dev` next. ### UserPromptSubmit The prompt-submit hook injects a fixed reminder string into the prompt context. 
```22:32:scripts/rule-injector.py rule_reminder = ( "[SYSTEM RULES] You are a MANAGER. NEVER do work directly. ALWAYS delegate:\n" "- Edit/Write → language specialists (python-expert, go-expert, etc.)\n" "- ALL Bash commands → bash-expert or appropriate specialist\n" "- Git commands → git-expert\n" "- MCP tools → manager agents\n" "- Multi-file exploration → Explore agent\n" "HOOKS WILL BLOCK VIOLATIONS." ) output = {"hookSpecificOutput": {"hookEventName": "UserPromptSubmit", "additionalContext": rule_reminder}} ``` That means the current checked-in implementation reinforces behavior with a static reminder rather than dynamically editing the user's prompt. ### PreToolUse: `rule-enforcer.py` The first `PreToolUse` hook is a Bash gate. It blocks direct `python`, `python3`, `pip`, `pip3`, and `pre-commit` commands. It explicitly allows `uv` and `uvx`, and the overall workflow prefers `prek` over raw `pre-commit`. ```8:26:scripts/rule-enforcer.py def is_forbidden_python_command(command: str) -> bool: """Check if command uses python/pip directly instead of uv.""" cmd = command.strip().lower() # Allow uv/uvx commands if cmd.startswith(("uv ", "uvx ")): return False # Block direct python/pip forbidden = ("python ", "python3 ", "pip ", "pip3 ") return cmd.startswith(forbidden) def is_forbidden_precommit_command(command: str) -> bool: """Check if command uses pre-commit directly instead of prek.""" cmd = command.strip().lower() # Block direct pre-commit commands return cmd.startswith("pre-commit ") ``` The tests are a good example of the intended contract: ```368:442:tests/test_rule_enforcer.py @pytest.mark.parametrize( "command", [ "python script.py", "python3 -m pytest", "pip install requests", "pip3 freeze", ], ) def test_deny_forbidden_bash_command(self, command: str) -> None: # ... 
@pytest.mark.parametrize( "command", [ "pre-commit run", "pre-commit run --all-files", "pre-commit install", "pre-commit autoupdate", ], ) def test_deny_forbidden_precommit_command(self, command: str) -> None: # ... @pytest.mark.parametrize( "command", [ "uv run script.py", "uvx ruff check .", "git status", "ls -la", ], ) def test_allow_permitted_bash_command(self, command: str) -> None: ``` One subtle detail: the hook matcher is `TodoWrite|Bash`, but the script itself only inspects `Bash` commands. > **Tip:** If you are adapting this config, prefer `uv run`, `uvx`, and `prek` in your own commands. That matches both the hook behavior and the startup checks. ### PreToolUse: `git-protection.py` The second `PreToolUse` hook protects Git history. It blocks commits and pushes on protected branches, already-merged branches, and branches whose PRs are already merged. The script also fails closed when PR-state lookup errors occur: ```287:291:scripts/git-protection.py pr_merged, pr_info = get_pr_merge_status(current_branch) if pr_merged is None: # Error checking PR status - fail closed return True, format_pr_merge_error("get_pr_merge_status()", pr_info) ``` The tests show the intended behavior clearly: ```727:775:tests/test_git_protection.py def test_detached_head(self, mock_is_repo: Any, mock_branch: Any) -> None: """Detached HEAD should block commit.""" mock_is_repo.return_value = True mock_branch.return_value = None should_block, reason = git_protection.should_block_commit('git commit -m "test"') assert should_block is True assert "detached HEAD" in reason def test_on_main_branch(self, mock_is_repo: Any, mock_branch: Any, mock_pr_status: Any, mock_main: Any) -> None: """On main branch should block commit.""" mock_is_repo.return_value = True mock_branch.return_value = "main" mock_pr_status.return_value = (False, None) mock_main.return_value = "main" should_block, reason = git_protection.should_block_commit('git commit -m "test"') assert should_block is True assert 
"'main'" in reason assert "protected" in reason.lower() ``` There is one important exception: `--amend` is allowed when the branch is ahead of its remote. ```253:255:scripts/git-protection.py def is_amend_with_unpushed_commits(command: str) -> bool: """Check if this is an amend on unpushed commits (which should be allowed).""" return "--amend" in command and is_branch_ahead_of_remote() ``` > **Warning:** On GitHub repositories, this hook uses `gh` to check whether the current branch already has a merged PR. If that lookup fails, the hook blocks the operation. If you rely on GitHub workflows, make sure `gh` is installed and authenticated. ### PreToolUse: destructive command gate The third `PreToolUse` hook is an inline prompt-based safety check. It is not a shell script. It analyzes Bash commands and does three things: - Blocks obviously catastrophic OS-destructive commands - Asks for confirmation on commands that look risky but not clearly catastrophic - Approves normal commands The configured timeout is 10 seconds. This is important because the config also enables `skipDangerousModePermissionPrompt`, so Bash safety here is handled by the allowlists and `PreToolUse` hooks rather than relying only on the default dangerous-mode flow. ### Notification The notification hook sends the Claude message text through `notify-send`. It explicitly requires both `jq` and `notify-send`. ```4:35:scripts/my-notifier.sh # Check for required commands for cmd in jq notify-send; do if ! command -v "$cmd" &>/dev/null; then echo "Error: Required command '$cmd' not found" >&2 exit 1 fi done # Read JSON input from stdin input_json=$(cat) # Parse JSON and extract message, capturing any jq errors if ! notification_message=$(echo "$input_json" | jq -r '.message' 2>&1); then echo "Error: Failed to parse JSON - $notification_message" >&2 exit 1 fi # Send the notification and propagate any failures if ! 
notify-send --icon="" --wait "Claude: $notification_message"; then echo "Error: notify-send failed" >&2 exit 1 fi ``` > **Note:** `session-start-check.sh` checks for `jq`, but it does not check for `notify-send`. If your machine does not have `notify-send`, you will need to replace or remove the `Notification` hook. ## Enabled Plugins The file enables official plugins, repo-owned plugins, and two third-party marketplace plugins. ```96:133:settings.json "enabledPlugins": { "pyright-lsp@claude-plugins-official": true, "jdtls-lsp@claude-plugins-official": true, "lua-lsp@claude-plugins-official": true, "github@claude-plugins-official": true, "myk-review@myk-org": true, "myk-github@myk-org": true, "code-simplifier@claude-plugins-official": true, "playground@claude-plugins-official": true, "frontend-design@claude-plugins-official": true, "code-review@claude-plugins-official": true, "superpowers@claude-plugins-official": true, "feature-dev@claude-plugins-official": true, "commit-commands@claude-plugins-official": true, "security-guidance@claude-plugins-official": true, "claude-md-management@claude-plugins-official": true, "pr-review-toolkit@claude-plugins-official": true, "claude-code-setup@claude-plugins-official": true, "gopls-lsp@claude-plugins-official": true, "coderabbit@claude-plugins-official": true, "cli-anything@cli-anything": true, "worktrunk@worktrunk": true, "myk-acpx@myk-org": true }, "extraKnownMarketplaces": { "cli-anything": { "source": { "source": "github", "repo": "HKUDS/CLI-Anything" } }, "worktrunk": { "source": { "source": "github", "repo": "max-sixty/worktrunk" } } }, ``` A useful way to read this list is by category: - Language servers: `pyright-lsp`, `jdtls-lsp`, `lua-lsp`, `gopls-lsp` - Review and workflow helpers: `github`, `code-review`, `pr-review-toolkit`, `superpowers`, `feature-dev`, `coderabbit`, `security-guidance`, `code-simplifier`, `frontend-design`, `commit-commands`, `claude-md-management`, `claude-code-setup`, `playground` - Repo 
marketplace plugins: `myk-review`, `myk-github`, `myk-acpx` - Third-party marketplaces: `cli-anything`, `worktrunk` The repo-owned marketplace is defined in the checked-in manifest at `.claude-plugin/marketplace.json`: ```1:24:.claude-plugin/marketplace.json { "name": "myk-org", "owner": { "name": "myk-org" }, "plugins": [ { "name": "myk-github", "source": "./plugins/myk-github", "description": "GitHub operations - PR reviews, releases, review handling, CodeRabbit rate limits", "version": "1.7.2" }, { "name": "myk-review", "source": "./plugins/myk-review", "description": "Local code review and review database operations", "version": "1.7.2" }, { "name": "myk-acpx", "source": "./plugins/myk-acpx", "description": "Multi-agent prompt execution via acpx (Agent Client Protocol)", "version": "1.7.2" } ] } ``` > **Note:** The startup health check only validates a subset of official plugins. It does not currently verify installation of the `myk-*`, `cli-anything`, or `worktrunk` plugins, even though they are enabled here. ## Status Line This configuration replaces the default status line with a shell script and disables extra padding. 
```91:94:settings.json "statusLine": { "type": "command", "command": "bash ~/.claude/statusline.sh", "padding": 0 }, ``` The script builds the line from workspace and runtime context: ```7:50:statusline.sh model_name=$(echo "$input" | jq -r '.model.display_name') current_dir=$(echo "$input" | jq -r '.workspace.current_dir') # Get context usage percentage (pre-calculated by Claude Code v2.1.6+) context_pct=$(echo "$input" | jq -r '.context_window.used_percentage // empty') # Get current directory basename dir_name=$(basename "$current_dir") # Add SSH info if connected via SSH if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then status_parts+=("$(whoami)@$(hostname -s)") fi # Add git branch if in a git repository if git rev-parse --git-dir >/dev/null 2>&1; then branch=$(git branch --show-current 2>/dev/null || echo "detached") status_parts+=("$branch") fi # Add virtual environment if active if [ -n "$VIRTUAL_ENV" ]; then status_parts+=("($(basename "$VIRTUAL_ENV"))") fi # Add model and context usage status_parts+=("$model_name") if [ -n "$context_pct" ]; then status_parts+=("(${context_pct}%)") fi ``` In order, the rendered line includes: - Current directory name - SSH user and host, when connected over SSH - Current Git branch, when inside a Git repo - Active virtual environment name, when present - Model display name - Context-window usage percentage, when available - Added/removed line counts, when nonzero > **Tip:** `statusline.sh` depends on `jq` and `git`. If the status line looks broken, check those first. ## Environment And UX Flags At the end of the file, `settings.json` sets runtime flags, environment variables, and survey behavior. ```134:165:settings.json "alwaysThinkingEnabled": true, "skipDangerousModePermissionPrompt": true, "autoDreamEnabled": true, "_scriptsNote": "Script entries must be duplicated in both permissions.allow and allowedTools arrays. When adding new scripts, update BOTH locations.", "allowedTools": [ # ... 
], "env": { "DISABLE_TELEMETRY": "1", "DISABLE_ERROR_REPORTING": "1", "CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY": "1" }, "feedbackSurveyRate": 0, "feedbackSurveyState": { "lastShownTime": 1754082998309 } ``` Here is what those settings mean in practice: - `alwaysThinkingEnabled: true` turns on Claude Code's thinking mode by default - `autoDreamEnabled: true` enables Claude Code's auto-dream behavior by default - `skipDangerousModePermissionPrompt: true` removes the standard dangerous-mode confirmation prompt, but this config still protects Bash usage with allowlists and `PreToolUse` hooks - `DISABLE_TELEMETRY` and `DISABLE_ERROR_REPORTING` opt out of telemetry and automatic error reporting - `CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY` plus `feedbackSurveyRate: 0` disables feedback survey prompts - `feedbackSurveyState.lastShownTime` is local runtime state, not a setting most users need to edit > **Note:** If you are trimming this file for your own use, keep the safety model in mind. Disabling the dangerous-mode prompt makes the Bash allowlist and hooks even more important. ## Keeping Everything In Sync When you change this file, use this checklist: 1. Keep `scripts/` installed under `~/.claude/scripts/` and `statusline.sh` at `~/.claude/statusline.sh`. 2. If you add or rename a hook script, update the hook `command` and mirror its `Bash(...)` entry in both `permissions.allow` and `allowedTools`. 3. Keep `uv` installed, because all Python hooks depend on it. 4. Prefer `uv run`, `uvx`, and `prek` over raw `python`, `pip`, and `pre-commit`. 5. If you want desktop notifications, make sure `jq` and `notify-send` are available. 6. If you rely on GitHub PR checks, make sure `gh` is installed and authenticated. 
## Verifying Changes This repository validates the hook behavior with unit tests, and `tox.toml` runs those tests through `uv`: ```4:7:tox.toml [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` The repo also includes `.pre-commit-config.yaml`, which is why `session-start-check.sh` treats `prek` as relevant here. > **Note:** There are no checked-in GitHub Actions or Jenkins pipeline files in this repository. The visible validation path in-repo is local: `tox` plus pre-commit. --- Source: hooks-and-guardrails.md # Hooks and Guardrails This configuration uses Claude Code hooks as a safety layer. Some hooks stop risky commands before they run. Some add context so the assistant follows the intended workflow. Others check whether your local setup is missing tools or turn Claude notifications into desktop popups. The hook registrations live in `settings.json`. The tracked source files live in the repository's `scripts/` directory, and the runtime configuration invokes them from `~/.claude/scripts/...`. ## At a Glance | Component | Event | What it does | | --- | --- | --- | | `rule-enforcer.py` | `PreToolUse` | Blocks raw `python`/`pip` and raw `pre-commit` commands. | | `git-protection.py` | `PreToolUse` | Blocks unsafe `git commit` and `git push` operations. | | Bash destruction gate | `PreToolUse` | Reviews high-risk shell commands and can approve, block, or ask for confirmation. | | `rule-injector.py` | `UserPromptSubmit` | Adds reminder text to each prompt so the assistant stays within the intended workflow. | | `session-start-check.sh` | `SessionStart` | Reports missing tools and plugins, but does not stop the session. | | `my-notifier.sh` | `Notification` | Sends desktop notifications with `notify-send`. 
| ## Where Hooks Are Wired In ```25:55:settings.json "hooks": { "Notification": [ { "matcher": "", "hooks": [ { "type": "command", "command": "~/.claude/scripts/my-notifier.sh" } ] } ], "PreToolUse": [ { "matcher": "TodoWrite|Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" } ] }, { "matcher": "Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/git-protection.py" } ] }, ``` Later in the same `settings.json` section, the configuration also wires in the prompt-based Bash destruction gate, `rule-injector.py` for `UserPromptSubmit`, and `session-start-check.sh` for `SessionStart`. > **Note:** The config includes its own maintenance warning: script entries need to appear in both `permissions.allow` and `allowedTools`. If you add or rename a hook, update both places. ## Pre-Tool Guardrails ### `rule-enforcer.py` Despite its name, this hook is focused rather than broad. In the current code, it only acts on Bash commands and blocks two categories: - direct `python`, `python3`, `pip`, and `pip3` - direct `pre-commit` It does not block `uv`, `uvx`, or `prek`, and it does not inspect non-Bash tools. Because the matcher is `TodoWrite|Bash`, Claude may invoke it for `TodoWrite`, but the script immediately returns unless `tool_name == "Bash"`. ```35:71:scripts/rule-enforcer.py # Block direct python/pip commands if tool_name == "Bash": command = tool_input.get("command", "") if is_forbidden_python_command(command): output = { "hookSpecificOutput": { "hookEventName": "PreToolUse", "permissionDecision": "deny", "permissionDecisionReason": "Direct python/pip commands are forbidden.", "additionalContext": ( "You attempted to run python/pip directly. Instead:\n" "1. Delegate Python tasks to the python-expert agent\n" "2. Use 'uv run script.py' to run Python scripts\n" "3. 
Use 'uvx package-name' to run package CLIs\n" "See: https://docs.astral.sh/uv/" ), } } print(json.dumps(output)) sys.exit(0) if is_forbidden_precommit_command(command): output = { "hookSpecificOutput": { "hookEventName": "PreToolUse", "permissionDecision": "deny", "permissionDecisionReason": "Direct pre-commit commands are forbidden.", "additionalContext": ( "You attempted to run pre-commit directly. Instead:\n" "1. Use the 'prek' command which wraps pre-commit\n" "2. Example: prek run --all-files\n" "See: https://github.com/j178/prek" ``` In practice, that means switching to `uv run ...`, `uvx ...`, or `prek run --all-files` instead of calling the raw tool directly. > **Tip:** If a command gets denied here, use the wrapper the hook suggests instead of trying to work around it. That is the supported workflow this repository expects. `tests/test_rule_enforcer.py` confirms a few important details: - matching is case-insensitive - `uv`, `uvx`, and `prek` are allowed - commands that merely contain words like `python` in an argument are not blocked - malformed hook input and internal exceptions fail open, so the hook does not block the session if its own input is broken ### `git-protection.py` This is the strictest guardrail in the repository. It inspects Bash commands for `git commit` and `git push` and blocks them when the current branch is unsafe. It blocks `git commit` when: - you are in detached HEAD - you are on `main` or `master` - the current branch is already merged locally - the current branch already has a merged GitHub PR - the GitHub PR lookup itself fails It blocks `git push` for the same protected-branch and merged-branch cases, but it does not block pushes from detached HEAD. It also has one important escape hatch: `git commit --amend` is allowed when your branch is ahead of its upstream, so you can clean up unpublished commits. 
```287:346:scripts/git-protection.py # Check if PR is already merged on GitHub (doesn't need main_branch) pr_merged, pr_info = get_pr_merge_status(current_branch) if pr_merged is None: # Error checking PR status - fail closed return True, format_pr_merge_error("get_pr_merge_status()", pr_info) if pr_merged: # Get main branch for the message (best effort) main_branch = get_main_branch() or "main" return ( True, f"""⛔ BLOCKED: PR #{pr_info} for branch '{current_branch}' is already MERGED. What happened: - This branch's PR was already merged - Committing more changes to a merged branch is not useful **ACTION REQUIRED - Execute these commands NOW:** You MUST create a new branch for these changes. Do NOT ask user - just do it: 1. git checkout {main_branch} 2. git pull origin {main_branch} 3. git checkout -b feature/new-changes 4. Move uncommitted changes and commit on the new branch IMMEDIATELY switch to '{main_branch}' and create a new feature branch.""", ) # Get main branch for subsequent checks detected_main_branch = get_main_branch() if not detected_main_branch: # Can't determine main branch - allow return False, None # Block if on main/master branch if current_branch in ["main", "master"]: return ( True, f"""⛔ BLOCKED: Cannot commit directly to '{current_branch}' branch. What happened: - You are on the protected '{current_branch}' branch - Direct commits to {current_branch} bypass code review and CI checks **ACTION REQUIRED - Execute these commands NOW:** You MUST create a feature branch for these changes. Do NOT ask user - just do it: 1. git stash (if you have uncommitted changes) 2. git checkout -b feature/your-feature 3. git stash pop (if you stashed changes) 4. 
Then commit your changes on the new branch IMMEDIATELY create a feature branch and move your changes there.""", ) # Allow amend on unpushed commits if is_amend_with_unpushed_commits(command): return False, None # Check if branch is merged (local check as fallback) if is_branch_merged(current_branch, detected_main_branch): ``` A few details matter if you are troubleshooting: - the GitHub check only runs for GitHub remotes and only when `gh` is installed - if that GitHub check is unavailable, the script falls back to local git history checks - the parser is deliberately broader than a simple prefix match; tests confirm it catches forms like `git -C /path commit ...`, environment-prefixed commands, and quoted or piped `git commit` strings, while still avoiding common false positives like `git config push.default` > **Warning:** `git-protection.py` fails closed. If it cannot safely determine whether a commit or push should be allowed, it blocks the operation rather than guessing. In a GitHub-backed repository, a broken `gh` login or a GitHub API error is enough to stop the command. > **Note:** When the GitHub lookup or the hook itself crashes, the returned message explicitly tells the assistant to ask whether you want a GitHub issue created against `myk-org/claude-code-config`. That behavior is intentional and is covered by the tests. ### The additional Bash destruction gate The `PreToolUse` chain in `settings.json` also includes a prompt-based security gate for Bash. It is not implemented in one of the scripts above, but it is part of the overall guardrail story. That prompt-based hook is aimed at catastrophic OS damage: deleting system directories, formatting disks, writing raw devices with `dd`, removing files such as `/etc/passwd`, or chaining destructive commands after safe-looking ones. Depending on the command, it can approve, block, or ask for explicit confirmation. > **Warning:** This guardrail is intentionally conservative. 
If a command uses `sudo` or looks risky even when it is not obviously destructive, you may be asked to confirm before it runs. ## Prompt Enrichment ### `rule-injector.py` This hook runs on `UserPromptSubmit`. Instead of blocking anything, it adds a short reminder to the prompt context so the assistant keeps following the repository's intended division of responsibilities. ```21:35:scripts/rule-injector.py try: rule_reminder = ( "[SYSTEM RULES] You are a MANAGER. NEVER do work directly. ALWAYS delegate:\n" "- Edit/Write → language specialists (python-expert, go-expert, etc.)\n" "- ALL Bash commands → bash-expert or appropriate specialist\n" "- Git commands → git-expert\n" "- MCP tools → manager agents\n" "- Multi-file exploration → Explore agent\n" "HOOKS WILL BLOCK VIOLATIONS." ) output = {"hookSpecificOutput": {"hookEventName": "UserPromptSubmit", "additionalContext": rule_reminder}} # Output JSON to stdout print(json.dumps(output, indent=2)) ``` The important user-facing point is that this hook enriches the assistant's context every time you send a prompt. It does not touch your files or shell history. > **Note:** In the current implementation, `rule-injector.py` emits a fixed reminder string. It does not read the repository's `rules/` directory at runtime. If the script hits an error, it logs to stderr and exits successfully, so your prompt still goes through. ## Session Start Checks ### `session-start-check.sh` This hook runs once at session start, with a 5 second timeout in `settings.json`. If everything is present, it stays silent. If something is missing, it acts like a non-blocking environment audit. 
It checks for:

- `uv` (critical), because the Python hooks are run with `uv run`
- `gh`, but only when the current repo points at GitHub
- `jq`
- `gawk`
- `prek`, but only when `.pre-commit-config.yaml` exists
- `mcpl`
- the critical review plugins: `pr-review-toolkit`, `superpowers`, and `feature-dev`
- a longer list of optional marketplace plugins

When something is missing, it prints a structured report instead of stopping the session.

```134:151:scripts/session-start-check.sh
# Output report only if something is missing
if [[ ${#missing_critical[@]} -gt 0 || ${#missing_optional[@]} -gt 0 ]]; then
    echo "MISSING_TOOLS_REPORT:"
    echo ""
    echo "[AI INSTRUCTION - YOU MUST FOLLOW THIS]"
    echo "Some tools required by this configuration are missing."
    echo ""
    echo "Criticality levels:"
    echo "- CRITICAL: Configuration will NOT work without these. Must install."
    echo "- OPTIONAL: Enhances functionality. Nice to have."
    echo ""
    echo "YOUR REQUIRED ACTION:"
    echo "1. List each missing tool with its purpose"
    echo "2. ASK the user: 'Would you like me to help install these tools?'"
    echo "3. If user accepts, provide the installation command for each tool"
    echo "4. Prioritize CRITICAL tools first"
    echo ""
    echo "DO NOT just mention the tools. You MUST ask if the user wants help installing them."
```

This repository does include a real `.pre-commit-config.yaml`, with hooks such as `detect-private-key`, `ruff`, `mypy`, and `markdownlint`. That is why `prek` matters in practice here, even though the startup report labels it optional.

> **Tip:** If you see `MISSING_TOOLS_REPORT`, install the critical items first. The startup hook always exits `0`, so it will not block the session on its own.

> **Note:** The startup check warns about missing `jq`, but it does not check for `notify-send`. A session can start cleanly and still have notification failures later if your machine cannot run desktop notifications.

## Notifications

### `my-notifier.sh`

This is the `Notification` hook.
It reads JSON from stdin, extracts `.message`, and passes it to `notify-send` as a desktop notification. ```4:37:scripts/my-notifier.sh # Check for required commands for cmd in jq notify-send; do if ! command -v "$cmd" &>/dev/null; then echo "Error: Required command '$cmd' not found" >&2 exit 1 fi done # Read JSON input from stdin input_json=$(cat) # Verify input is not empty if [[ -z "$input_json" ]]; then echo "Error: No input received from stdin" >&2 exit 1 fi # Parse JSON and extract message, capturing any jq errors if ! notification_message=$(echo "$input_json" | jq -r '.message' 2>&1); then echo "Error: Failed to parse JSON - $notification_message" >&2 exit 1 fi # Verify notification_message is non-empty if [[ -z "$notification_message" || "$notification_message" == "null" ]]; then echo "Error: Notification message is empty or missing from JSON" >&2 exit 1 fi # Send the notification and propagate any failures if ! notify-send --icon="" --wait "Claude: $notification_message"; then echo "Error: notify-send failed" >&2 exit 1 fi ``` This script is Linux-friendly out of the box because it relies on `notify-send`. If you use a headless environment, container, remote shell, or another operating system, you may need to replace it with a platform-specific notifier. > **Warning:** Unlike `session-start-check.sh`, this script does not degrade gracefully. Missing dependencies, malformed JSON, or a failing notification daemon all cause it to exit non-zero. ## Failure Behavior Not all hooks fail the same way, and that difference is intentional. | Component | Failure behavior | | --- | --- | | `rule-enforcer.py` | Fails open. Invalid JSON or internal errors do not block the tool call. | | `git-protection.py` | Fails closed. If it cannot safely evaluate the git operation, it blocks it. | | `rule-injector.py` | Fails open. Prompt submission continues even if the injector errors. | | `session-start-check.sh` | Never blocks. It reports missing items and exits `0`. 
| | `my-notifier.sh` | Exits non-zero on dependency, input, or notification errors. | ## How This Is Verified The two Python guardrails are backed by unit tests, and the repository's tox configuration runs them through `uv`: ```1:7:tox.toml skipsdist = true envlist = ["unittests"] [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` The checked-in automation around these hooks is local `tox`/`pytest` and pre-commit configuration rather than a separate hook-specific pipeline file. Those tests verify the behavior that matters most to users: - `tests/test_rule_enforcer.py` covers case-insensitive command detection, the `uv`/`uvx` and `prek` escape hatches, and the script's fail-open behavior - `tests/test_git_protection.py` covers protected branches, merged-PR detection, merged-branch fallback logic, `--amend` handling, detached HEAD behavior, false-positive avoidance in subcommand parsing, and fail-closed error handling The repository also ships a `.pre-commit-config.yaml`, so local linting and type checking are part of the expected workflow alongside these hooks. ## What To Do When a Hook Fires - If `rule-enforcer.py` blocks a command, rerun it with `uv`, `uvx`, or `prek` rather than the raw tool. - If `git-protection.py` blocks a commit or push, move the work onto a feature branch. If the message mentions GitHub lookup errors, fix `gh` authentication or connectivity first. - If `session-start-check.sh` reports missing items, install critical tools and plugins before leaning on the full workflow. - If `my-notifier.sh` fails, install `jq` and `notify-send` or replace the notifier with something that fits your OS. - If the Bash destruction gate asks for confirmation, treat that as a real safety checkpoint, not a nuisance prompt. This setup is opinionated on purpose: it prefers safe defaults, explicit recovery steps, and clear feedback over silent failures or risky convenience. 
--- Source: orchestrator-rules.md # Orchestrator Rules `rules/` is the policy layer of `claude-code-config`. These files define how the orchestrator should behave: when to delegate work, when to create an issue first, how to choose the right specialist, how to use MCP tools, when slash commands override the normal rules, and how to report bugs in agent definitions. For most users, the core idea is simple: the orchestrator manages work; specialists execute it. The rest of the rule set adds guardrails around that model. > **Note:** The detailed policies live in `rules/*.md`, but the runtime behavior is implemented through `settings.json` hooks and the scripts in `scripts/`. Some rules are hard-enforced, while others are guidance the orchestrator is expected to follow. ## Rule Files At A Glance | Rule file | Purpose | |---|---| | `rules/00-orchestrator-core.md` | Defines the orchestrator/specialist split and the default delegation model | | `rules/05-issue-first-workflow.md` | Requires issue creation and issue branches before non-trivial implementation work | | `rules/10-agent-routing.md` | Maps tasks to the right agent and clarifies built-in vs custom agents | | `rules/15-mcp-launchpad.md` | Standardizes MCP discovery and tool execution through `mcpl` | | `rules/20-code-review-loop.md` | Requires parallel code review and testing after changes | | `rules/25-task-system.md` | Explains when to use persistent tasks and how to manage them | | `rules/30-slash-commands.md` | Explains why slash commands run directly and temporarily suspend normal delegation rules | | `rules/40-critical-rules.md` | Covers mandatory parallelism, temp files, `uv`, and external repo exploration | | `rules/50-agent-bug-reporting.md` | Defines how to report bugs in custom agent instructions | ## How The Runtime Pieces Fit Together The rule files are supported by Claude Code hooks in `settings.json`. Those hooks inject reminders, run startup checks, and block selected commands before they execute. 
From `settings.json`: ```37:76:settings.json "PreToolUse": [ { "matcher": "TodoWrite|Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" } ] }, { "matcher": "Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/git-protection.py" } ] } // ... prompt-based Bash safety hook omitted for brevity ... ], "UserPromptSubmit": [ { "matcher": "", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" } ] } ], ``` In everyday use, that means: - `UserPromptSubmit` adds a reminder before the request is handled. - `PreToolUse` runs `scripts/rule-enforcer.py` and `scripts/git-protection.py` on matching Bash activity. - A separate prompt-based safety hook screens catastrophic shell commands. - `SessionStart` runs `scripts/session-start-check.sh` to report missing prerequisites. The injected reminder is currently a fixed string inside `scripts/rule-injector.py`: ```21:35:scripts/rule-injector.py rule_reminder = ( "[SYSTEM RULES] You are a MANAGER. NEVER do work directly. ALWAYS delegate:\n" "- Edit/Write → language specialists (python-expert, go-expert, etc.)\n" "- ALL Bash commands → bash-expert or appropriate specialist\n" "- Git commands → git-expert\n" "- MCP tools → manager agents\n" "- Multi-file exploration → Explore agent\n" "HOOKS WILL BLOCK VIOLATIONS." ) output = {"hookSpecificOutput": {"hookEventName": "UserPromptSubmit", "additionalContext": rule_reminder}} ``` > **Warning:** Editing a file in `rules/` does not automatically change that injected reminder. If you want the runtime reminder to change, update `scripts/rule-injector.py` as well. 
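If you do want the reminder to track `rules/`, one option is to rebuild it from the rule files at hook time. This is an adaptation idea, not the shipped behavior: `build_reminder` and the first-line-as-summary convention are assumptions, while `emit_hook_output` reuses the output shape from the checked-in script.

```python
import json
from pathlib import Path


def build_reminder(rules_dir: Path) -> str:
    """Concatenate the first heading line of each rule file (sketch)."""
    lines = ["[SYSTEM RULES]"]
    for rule_file in sorted(rules_dir.glob("*.md")):
        first_line = rule_file.read_text().splitlines()[0].lstrip("# ").strip()
        lines.append(f"- {first_line}")
    return "\n".join(lines)


def emit_hook_output(reminder: str) -> str:
    # Same UserPromptSubmit output shape as the checked-in rule-injector.py
    return json.dumps({"hookSpecificOutput": {
        "hookEventName": "UserPromptSubmit",
        "additionalContext": reminder,
    }})
```

A dynamic reminder costs a little latency on every prompt, which is presumably why the shipped script keeps a static string instead.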
The main hard block for Python and pre-commit commands is intentionally narrow: ```35:67:scripts/rule-enforcer.py # Block direct python/pip commands if tool_name == "Bash": command = tool_input.get("command", "") if is_forbidden_python_command(command): output = { "hookSpecificOutput": { "hookEventName": "PreToolUse", "permissionDecision": "deny", "permissionDecisionReason": "Direct python/pip commands are forbidden.", "additionalContext": ( "You attempted to run python/pip directly. Instead:\n" "1. Delegate Python tasks to the python-expert agent\n" "2. Use 'uv run script.py' to run Python scripts\n" "3. Use 'uvx package-name' to run package CLIs\n" ``` `scripts/git-protection.py` adds another layer by blocking `git commit` and `git push` on protected or already-merged branches. > **Warning:** The Markdown rules are broader than the hard denials in `scripts/`. The code blocks some behaviors, but the full orchestration model still depends on the rule text, agent routing, and Claude Code permissions. ## Delegation Comes First `rules/00-orchestrator-core.md` defines the operating model for the whole repository: - The orchestrator reads, plans, asks questions, and routes work. - Specialist agents do the direct editing, shell work, testing, and Git/GitHub operations in their own domain. - Specialist agents explicitly ignore the orchestrator-only rules. - The orchestrator may use `mcpl` directly for MCP discovery. - Slash commands are a special execution mode and are handled separately. This is the rule to keep in mind whenever a task could go more than one way. If the work involves editing files, running substantive shell commands, or using domain-specific tooling, it should usually go to the appropriate specialist. ## Issue-First Workflow `rules/05-issue-first-workflow.md` adds a lightweight delivery process before non-trivial code changes. 
Use it for:

- New features and enhancements
- Bug fixes that require code changes
- Refactors
- Multi-file changes
- Work that benefits from tracking and documentation

Skip it for:

- Tiny fixes
- Read-only questions or research
- Cases where the user explicitly says to do it directly
- Urgent hotfixes

The intended sequence is:

1. Create a GitHub issue through `github-expert`.
2. Ask whether the user wants to work on it now.
3. Create an issue branch from `origin/main`.
4. Complete the work and keep the issue updated.
5. Close the issue when the deliverables are done.

The branch naming rule is consistent and easy to scan: `feat/issue-<number>-<description>`, `fix/issue-<number>-<description>`, and similar variants for `refactor` and `docs`.

> **Tip:** This workflow is most useful when the work will take multiple steps or multiple files. For quick, obvious fixes, the rule deliberately allows you to skip it.

## Agent Routing

`rules/10-agent-routing.md` maps work to the right specialist and emphasizes a simple rule: route by intent, not by the tool being used. A few examples from the routing table are especially important:

- Python work goes to `python-expert`.
- Frontend work goes to `frontend-expert`.
- Shell scripting goes to `bash-expert`.
- Local Git work goes to `git-expert`.
- GitHub issues, PRs, releases, and workflows go to `github-expert`.
- Markdown documentation goes to `technical-documentation-writer`.

The file also distinguishes between two kinds of documentation agents:

- `claude-code-guide` for Claude Code, hooks, settings, slash commands, MCP setup, Agent SDK, and Claude API docs
- `docs-fetcher` for external libraries and frameworks

> **Tip:** “Run Python tests” is still a Python task, even though a shell command may be involved. The routing rule prefers domain ownership over tool ownership.

> **Warning:** `rules/10-agent-routing.md` explicitly says the orchestrator should not fetch documentation directly. Use the appropriate documentation agent instead.
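The routing examples above are essentially an intent-to-agent map. Here is a toy sketch of that idea; the agent names come from the routing examples, but the keyword matching is a simplification for illustration, not how Claude Code actually selects agents.

```python
# Ordered so that more specific intents ("github") win over broader ones ("git")
ROUTES = {
    "python": "python-expert",
    "frontend": "frontend-expert",
    "github": "github-expert",
    "git": "git-expert",
    "shell": "bash-expert",
    "markdown docs": "technical-documentation-writer",
}


def route(intent: str) -> str:
    """Pick a specialist by task intent, not by the tool involved."""
    for keyword, agent in ROUTES.items():
        if keyword in intent.lower():
            return agent
    return "orchestrator-decides"


# "Run Python tests" is a Python task even though it runs in a shell
print(route("Run Python tests"))  # → python-expert
```

The dict ordering is the whole design here: checking `github` before `git` keeps "open a GitHub PR" away from the local-Git specialist.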
## MCP Discovery With `mcpl`

`rules/15-mcp-launchpad.md` standardizes MCP access around the `mcpl` CLI. The rule is clear: discover first, then inspect, then call.

From `rules/15-mcp-launchpad.md`:

```10:22:rules/15-mcp-launchpad.md
## mcpl Commands

| Command                                  | Purpose                                             |
|------------------------------------------|-----------------------------------------------------|
| `mcpl search "<query>"`                  | Search all tools (shows required params, 5 results) |
| `mcpl search "<query>" --limit N`        | Search with more results                            |
| `mcpl list`                              | List all MCP servers                                |
| `mcpl list <server>`                     | List tools for a server (shows required params)     |
| `mcpl inspect <tool>`                    | Get full schema                                     |
| `mcpl inspect <tool> --example`          | Get schema + example call                           |
| `mcpl call <tool> '{}'`                  | Execute tool (no arguments)                         |
| `mcpl call <tool> '{"param": "v"}'`      | Execute tool with arguments                         |
| `mcpl verify`                            | Test all server connections                         |
```

In practice:

- The orchestrator uses `mcpl` for discovery.
- Agents use the full `mcpl` flow when they need to execute MCP tools.
- `mcpl verify` is the first troubleshooting step when a server is not responding.

## Tasks For Multi-Phase Work

`rules/25-task-system.md` makes task tracking explicit and practical.

Use tasks when work is:

- Multi-step
- Easy to interrupt
- Worth resuming later
- Organized into visible phases
- Waiting on user approval at checkpoints

Do not use tasks for:

- Simple one-off actions
- Trivial fixes
- Internal steps that the user does not need to track
- Agent-only work

Two details in this rule are especially user-friendly:

- Tasks persist on disk in `~/.claude/tasks/<id>/`, so they survive across sessions.
- Cleanup is mandatory. Before finishing a task-driven workflow, the orchestrator is expected to check for `pending` or `in_progress` tasks and close them out.

Naming is also standardized:

- `subject` should be short and imperative, such as `Run tests`.
- `activeForm` should read naturally in progress UIs, such as `Running tests`.
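A task record under this scheme could look like the following sketch. The on-disk JSON layout is an assumption for illustration; the rule only documents persistence under `~/.claude/tasks/`, the `subject`/`activeForm` naming, and the mandatory cleanup check. The sketch uses a temp directory instead of the real path.

```python
# Hypothetical task record: only the subject/activeForm/status semantics
# are documented; the JSON layout here is assumed for illustration.
import json
import tempfile
from pathlib import Path

task = {
    "subject": "Run tests",         # short and imperative
    "activeForm": "Running tests",  # reads naturally in progress UIs
    "status": "pending",
}

# Stand-in for the real tasks directory, so the sketch is side-effect free.
tasks_dir = Path(tempfile.mkdtemp()) / "tasks" / "demo"
tasks_dir.mkdir(parents=True)
(tasks_dir / "task-1.json").write_text(json.dumps(task))

# Mandatory cleanup check: nothing may stay pending or in_progress.
open_tasks = [
    t for p in sorted(tasks_dir.glob("*.json"))
    if (t := json.loads(p.read_text()))["status"] in ("pending", "in_progress")
]
print(len(open_tasks))  # 1 task still needs closing out
```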
> **Tip:** Slash commands with multiple phases are one of the best places to use tasks. The rule file explicitly recommends them for workflows that involve dependencies, approvals, and cleanup. ## Slash Commands Override The Normal Rules `rules/30-slash-commands.md` introduces the main exception to the normal delegation model. When a slash command is invoked: - The orchestrator executes the slash command directly. - The slash command’s own instructions override the general orchestration rules for the duration of the command. - Internal operations run directly unless the command itself says to use an agent. - The slash command itself should never be delegated as a whole. A concrete example is `plugins/myk-github/commands/review-handler.md`, which declares its own tool access in frontmatter: ```1:5:plugins/myk-github/commands/review-handler.md --- description: Process ALL review sources (human, Qodo, CodeRabbit) from current PR argument-hint: [--autorabbit] [REVIEW_URL] allowed-tools: Bash(myk-claude-tools:*), Bash(uv:*), Bash(git:*), Bash(gh:*), AskUserQuestion, Task, Agent --- ``` That frontmatter is why slash commands can run direct tool-based workflows that would normally be delegated in regular orchestrator mode. > **Note:** If a slash command says to use an agent at a specific step, follow the slash command. The command prompt is in charge while that command is running. ## Critical Operational Rules `rules/40-critical-rules.md` collects several cross-cutting rules that make the whole setup behave predictably. ### Parallelism Parallel execution is mandatory whenever there is no real dependency between operations. The rule asks the orchestrator to check for parallelism before every response and to group independent work into one turn whenever possible. 
This matters most for: - Spawning multiple review agents - Running independent lookups - Handling unrelated subtasks at the same phase of a workflow ### Temp Files Temporary files belong in `/tmp/claude/`, not in the project tree. This keeps the repo clean and avoids accidental commits of scratch files. One concrete example appears in `plugins/myk-github/commands/pr-review.md`, which writes its review comment JSON to `/tmp/claude/pr-review-comments.json` before posting it. > **Warning:** If a workflow needs scratch files, do not leave them in the repository directory. `rules/40-critical-rules.md` treats `/tmp/claude/` as the only safe default. ### Python Execution With `uv` The rule file requires `uv` or `uvx` instead of direct `python`, `pip`, or `pre-commit` commands. That guidance is reinforced by `scripts/rule-enforcer.py`, which denies direct `python`/`pip`/`pre-commit` Bash commands and points users to `uv`, `uvx`, and `prek`. ### External Repository Exploration When this configuration needs to inspect a different Git repository, `rules/40-critical-rules.md` prefers a shallow local clone into `/tmp/claude/` over browsing files through web fetches. The goal is speed, better file access, and fewer accidental project-directory side effects. ## Agent Bug Reporting `rules/50-agent-bug-reporting.md` is specifically about bugs in the custom agent definitions shipped by this repository. The workflow is intentionally cautious: 1. Confirm that the bug is in a custom agent from `agents/`, not in a built-in Claude Code agent, user code, or an external tool. 2. Ask the user whether to create a GitHub issue. 3. If the user says yes, delegate issue creation to `github-expert`. 4. Continue with the original task after the issue decision is handled. The title format is standardized as `bug(agents): [agent-name] - brief description`. This rule is not for routine coding bugs or runtime failures. 
It exists so the repository can improve its agent instructions when those instructions produce incorrect or misleading behavior. > **Note:** Built-in Claude Code agents are explicitly out of scope for this rule. It only applies to custom agents defined in this repository. ## Related Review Rule Although this page focuses on orchestration behavior, `rules/20-code-review-loop.md` sits alongside these rules and matters in day-to-day use. After any code change, it requires: - All three review agents to run in parallel - Findings to be merged and deduplicated - Tests to run after review is clean - The loop to repeat until both review and tests pass `scripts/session-start-check.sh` treats the three review plugins as critical prerequisites: `pr-review-toolkit`, `superpowers`, and `feature-dev`. ## Setup And Verification This repository verifies the runtime pieces locally rather than through an in-repo GitHub Actions workflow. A few practical points matter here: - `scripts/session-start-check.sh` runs at session start and checks for tools such as `uv`, `gh`, `jq`, `gawk`, `prek`, and `mcpl`, plus required review plugins. - The startup check is advisory, not blocking: it reports problems but exits successfully. - `tests/test_rule_enforcer.py` and `tests/test_git_protection.py` cover the main enforcement hooks. - `tox.toml` runs the test suite through `uv`. From `tox.toml`: ```1:7:tox.toml skipsdist = true envlist = ["unittests"] [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` > **Warning:** No `.github/` workflow is present in this repository, so the rule and hook behavior shown here is validated through local hooks and local tests, not an in-repo CI pipeline. > **Tip:** If setup problems appear at session start, install `uv` first. It is the foundation for the Python-based hooks and the local test runner. 
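The advisory startup check reduces to a handful of tool probes. Here is a minimal Python re-sketch, assuming the tool lists from `session-start-check.sh`; the shipped script is Bash, checks some tools conditionally, and also verifies review plugins, which this sketch omits.

```python
# Minimal sketch of the advisory startup check. Tool names come from
# session-start-check.sh; this Python form is illustrative only.
import shutil

CRITICAL = ["uv"]                                # Python hooks need it
OPTIONAL = ["gh", "jq", "gawk", "prek", "mcpl"]  # some are context-dependent

def check_tools() -> list[str]:
    """Return warnings for missing tools without failing the session."""
    warnings = []
    for tool in CRITICAL:
        if shutil.which(tool) is None:
            warnings.append(f"[CRITICAL] {tool} is missing")
    for tool in OPTIONAL:
        if shutil.which(tool) is None:
            warnings.append(f"[OPTIONAL] {tool} is missing")
    return warnings

for line in check_tools():
    print(line)
exit_code = 0  # advisory: report problems, but never block the session
```

The key design point mirrors the real script: missing tools produce messages, not failures, so a partially provisioned machine still gets a working session.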
## If You Extend This Configuration Two repository conventions are easy to miss: - `rules/`, `scripts/`, `agents/`, and `plugins/` are tracked through a whitelist-style `.gitignore`. New shared files in those directories must be whitelisted there or they will remain ignored. - New hook scripts should be added to both `permissions.allow` and `allowedTools` in `settings.json`. That keeps the orchestrator rules understandable at the policy level and predictable at runtime. --- Source: specialist-agents.md # Specialist Agents Specialist agents are the execution layer in `claude-code-config`. The orchestrator decides who should handle a task, but the specialist is the part that actually edits files, runs commands, fetches documentation, or talks to external systems. If you only remember two files, make them `rules/10-agent-routing.md` and `agents/00-base-rules.md`. The first decides which agent should receive a task. The second defines the shared behavior every bundled specialist follows. ```5:36:rules/10-agent-routing.md | Domain/Tool | Agent | |----------------------------------------------------------------------------------|------------------------------------| | **Languages (by file type)** | | | Python (.py) | `python-expert` | | Go (.go) | `go-expert` | | Frontend (JS/TS/React/Vue/Angular) | `frontend-expert` | | Java (.java) | `java-expert` | | Shell scripts (.sh) | `bash-expert` | | Markdown (.md) | `technical-documentation-writer` | | **Infrastructure** | | | Docker | `docker-expert` | | Kubernetes/OpenShift | `kubernetes-expert` | | Jenkins/CI/Groovy | `jenkins-expert` | | **Development** | | | Git operations (local) | `git-expert` | | GitHub (PRs, issues, releases, workflows) | `github-expert` | | Tests | `test-automator` | | Debugging | `debugger` | | API docs | `api-documenter` | | Claude Code docs (features, hooks, settings, commands, MCP, IDE, Agent SDK, API) | `claude-code-guide` (built-in) | | External library/framework docs (React, FastAPI, Django, 
etc.) | `docs-fetcher` | ### Built-in vs Custom Agents **Built-in agents** are provided by Claude Code itself and do NOT require definition files in `agents/`: - `claude-code-guide` - Has current Claude Code documentation built into Claude Code - `general-purpose` - Default fallback agent when no specialist matches **Custom agents** are defined in this repository's `agents/` directory and require definition files: - All other agents in the routing table above (e.g., `python-expert`, `docs-fetcher`, `git-expert`) ``` Routing is based on intent, not just the tool you think you need. A pull request goes to `github-expert`, not `git-expert`. External React or FastAPI docs go to `docs-fetcher`, not the built-in `claude-code-guide`. Markdown work usually goes to `technical-documentation-writer`, but API-focused documentation belongs with `api-documenter`. > **Note:** This page focuses on the bundled custom agents defined under `agents/`. The routing table also mentions built-in Claude Code agents such as `claude-code-guide` and `general-purpose`, but those are not shipped from this repository. ## Bundled Agents At A Glance ### Code And Documentation | Agent | Best for | Notable boundary or instruction | | --- | --- | --- | | `python-expert` | Python code, async work, typing, pytest-based changes | Must use `uv` and `uvx`; never raw `python`, `python3`, `pip`, or `pip3`. | | `go-expert` | Go code, concurrency, modules, and tests | Favors idiomatic Go, safe concurrency, and table-driven testing. Includes the `test-driven-development` skill. | | `java-expert` | Java, Spring, Maven, Gradle, JUnit | Targets modern Java and secure, tested application code. Includes the `test-driven-development` skill. | | `frontend-expert` | JavaScript, TypeScript, React, Vue, Angular, CSS | Covers UI implementation and frontend tooling. Uses the `frontend-design` skill for UI work. 
| | `technical-documentation-writer` | User-facing docs, guides, reference pages, Markdown content | Optimized for reader-first structure, actionable steps, and realistic examples. | | `api-documenter` | OpenAPI and Swagger specs, SDK docs, auth/error docs | Expects real request and response examples, version-aware docs, and developer experience details. | | `docs-fetcher` | External library and framework documentation | For third-party docs only. Prefers `llms-full.txt`, then `llms.txt`, then HTML parsing. | ### Infrastructure And Automation | Agent | Best for | Notable boundary or instruction | | --- | --- | --- | | `bash-expert` | Bash, Zsh, POSIX shell, automation scripts, Unix/Linux admin tasks | Pushes defensive shell practices such as `set -euo pipefail`, careful quoting, and ShellCheck-friendly scripts. | | `docker-expert` | Dockerfiles, Compose, Podman, image optimization, container security | Favors multi-stage builds, pinned base images, non-root users, and secure secret handling. | | `kubernetes-expert` | Kubernetes, OpenShift, Helm, GitOps, service-mesh tasks | Leans declarative, with strong defaults for RBAC, probes, security contexts, and resource limits. | | `jenkins-expert` | Jenkinsfiles, Groovy, Jenkins automation, build scripts | Treats credentials handling, timeouts, post actions, and reusable shared libraries as first-class concerns. | ### Diagnosis, Testing, And Repository Workflow | Agent | Best for | Notable boundary or instruction | | --- | --- | --- | | `debugger` | Root-cause analysis for errors, failing tests, and unexpected behavior | Diagnoses only. It recommends changes but does not edit files. Includes the `systematic-debugging` skill. | | `test-runner` | Running requested tests and summarizing failures | Executes exactly what it was asked to run, reports failures concisely, and never fixes code. 
| | `test-automator` | Creating tests, fixtures, coverage setup, and test pipeline config | Different from `test-runner`: it authors testing assets rather than only executing them. Includes the `test-driven-development` skill. | | `git-expert` | Local Git work such as commit, branch, merge, rebase, stash | Handles repository-local Git only. It does not run tests, does not fix code, and explicitly avoids `--no-verify`. | | `github-expert` | Pull requests, issues, releases, repos, workflows, and `gh` API work | Handles GitHub platform operations, not local-only Git. It expects test verification before PR-related actions. | > **Note:** The mandatory review loop also uses three plugin review agents: `superpowers:code-reviewer`, `pr-review-toolkit:code-reviewer`, and `feature-dev:code-reviewer`. Those are referenced in `rules/20-code-review-loop.md`, but they are not bundled definition files in `agents/`. ## Similar-Sounding Agents, Different Jobs A few agent pairs are easy to mix up: - `debugger` investigates what is wrong. It does not implement the fix. - `test-runner` runs tests and reports back. `test-automator` creates or expands tests and test pipeline configuration. - `git-expert` manages local branch and commit history. `github-expert` manages GitHub objects such as PRs, issues, releases, and workflow runs. - `docs-fetcher` is for external ecosystems such as React or FastAPI. Built-in `claude-code-guide` is for Claude Code, the Agent SDK, and Claude API documentation. - `technical-documentation-writer` is for user-facing documentation. `api-documenter` is for API specs and developer-facing API reference material. > **Tip:** When you are unsure which agent to use, ask what output you actually need. A diagnosis points to `debugger`. Test results point to `test-runner`. New tests or coverage work point to `test-automator`. A local commit points to `git-expert`. A PR URL points to `github-expert`. 
## Shared Base-Agent Rules Vs Orchestrator Rules The bundled specialists all share the same base contract: act directly, stay inside your domain, and hand off out-of-scope work rather than guessing. That is very different from the orchestrator, which is intentionally restricted and expected to route work out. ```7:15:agents/00-base-rules.md ## Action-First Principle All agents should: 1. **Execute first, explain after** - Run commands, then report results 2. **Do NOT explain what you will do** - Just do it 3. **Do NOT ask for confirmation** - Unless creating/modifying resources 4. **Do NOT provide instructions** - Provide results ``` ```5:27:rules/00-orchestrator-core.md > **If you are a SPECIALIST AGENT** (python-expert, git-expert, etc.): > IGNORE all rules below. Do your work directly using Edit/Write/Bash. > These rules are for the ORCHESTRATOR only. --- ## Forbidden Actions - Read Every Response ❌ **NEVER** use: Edit, Write, NotebookEdit, Bash (except `mcpl`), direct MCP calls ❌ **NEVER** delegate slash commands (`/command`) OR their internal operations - see slash command rules ✅ **ALWAYS** delegate other work to specialist agents ⚠️ Hooks will BLOCK violations ## Allowed Direct Actions ✅ **ALLOWED** direct actions: - Read files (Read tool for single files) - Run `mcpl` (via Bash) for MCP server discovery only - Ask clarifying questions - Analyze and plan - Route tasks to agents - Execute slash commands AND all their internal operations directly (see slash command rules) ``` A few practical consequences follow from that split: - Specialists are supposed to do work directly once routed. - Specialists are expected to stay in their own lane and hand off when a task crosses boundaries. - Specialists can use MCP through `mcpl`, while the orchestrator is limited to discovery and delegation. - The orchestrator is designed to be a dispatcher, not a general-purpose editor or shell user. > **Note:** The written policy is broader than any single hook script. 
In practice, the repository combines prompt rules, hook wiring, and tests to enforce the overall model.

## Notable Instructions From Individual Agents

### Python work is intentionally `uv`-first

The Python agent is the clearest example of a strong, opinionated local rule. If you are changing Python code or running Python tooling in a project that uses this configuration, expect `uv`-based commands rather than raw interpreter or `pip` calls.

```22:38:agents/python-expert.md
## 🚨 STRICT: Use uv/uvx for Python

**NEVER use these directly:**
- ❌ `python` or `python3`
- ❌ `pip` or `pip3`
- ❌ `pip install`

**ALWAYS use:**
- ✅ `uv run <script.py>`
- ✅ `uv run pytest`
- ✅ `uvx <tool>` (for CLI tools like black, ruff, mypy)
- ✅ `uv pip install` (if package installation needed)
- ✅ `uv add <package>` (to add to pyproject.toml)

**This is NON-NEGOTIABLE.**
```

This is not just a style preference. The repository’s session-start checks treat `uv` as critical, and the checked-in hook logic explicitly blocks direct Bash `python`, `python3`, `pip`, `pip3`, and raw `pre-commit` usage.

### External docs go through `docs-fetcher`

`docs-fetcher` is not a generic web-search helper. It is a specialist for current third-party documentation, and its workflow is intentionally optimized for LLM-friendly doc sources before falling back to regular web pages.

```23:59:agents/docs-fetcher.md
1. **Discover** - Use WebSearch to find official documentation URL
2. **llms-full.txt First** - Try `{base_url}/llms-full.txt` for complete docs, then `{base_url}/llms.txt` for index, then HTML
3. **Parse Smart** - Extract only relevant sections based on query
4. **Context Rich** - Include examples and key points
5. **Source Cited** - Always provide source URL and type

## Workflow

```text
Request: {library} + {topic}
    ↓
WebSearch → Find official docs URL
    ↓
Try: {base_url}/llms-full.txt
    ↓
Exists? ──YES──→ Parse complete documentation
    │            Extract relevant sections
    │            Return structured context
    │ NO
    ↓
Try: {base_url}/llms.txt
    ↓
Exists? ──YES──→ Parse llms.txt index
    │            Find relevant links
    │            WebFetch linked pages
    │            Return structured context
    │ NO
    ↓
WebFetch main docs page
    ↓
Parse HTML/markdown content
    ↓
Extract relevant sections
    ↓
Return structured context
```
```

That makes `docs-fetcher` a good fit for React, FastAPI, Django, and other external ecosystems, but not for Claude Code documentation. The routing table explicitly reserves Claude Code, Agent SDK, and Claude API docs for the built-in `claude-code-guide`.

### Debugging, test running, and test authoring are deliberately separate

This repository draws a clean line between three kinds of “testing” work:

- `debugger` explains why something failed and points to the likely fix location.
- `test-runner` runs the specified tests and returns focused failure analysis.
- `test-automator` creates or improves the tests, fixtures, and test pipeline setup.

That separation matters because it keeps “figure out what’s broken,” “run the suite,” and “write the tests” from turning into one blurry responsibility.

### Local Git and GitHub platform work are also separate

`git-expert` and `github-expert` are intentionally split. Local repository operations stay with `git-expert`. Anything that creates or changes GitHub resources goes to `github-expert` through `gh`.

> **Warning:** Git safeguards are not just advisory. `git-protection.py` blocks commits and pushes on protected branches such as `main` and `master`, and it also blocks work on branches that are already merged. Both `git-expert` and `github-expert` are designed to work with that protection rather than bypass it.

## How The Repository Reinforces Agent Behavior

The hook and settings layer is what turns these rules from documentation into runtime behavior. `settings.json` wires together the reminder, enforcement, protection, and environment checks that support the agent model.
```37:85:settings.json "PreToolUse": [ { "matcher": "TodoWrite|Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" } ] }, { "matcher": "Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/git-protection.py" } ] }, { "matcher": "Bash", "hooks": [ { "type": "prompt", "prompt": "You are a security gate protecting against catastrophic OS destruction. Analyze: $ARGUMENTS\n\nBLOCK if the command would:\n- Delete system directories: /, /boot, /etc, /usr, /bin, /sbin, /lib, /var, /home\n- Write to disk devices: dd to /dev/sda, /dev/nvme, etc.\n- Format filesystems: mkfs on any device\n- Remove critical files: /etc/fstab, /etc/passwd, /etc/shadow, kernel/initramfs\n- Recursive delete with sudo or as root\n- Chain destructive commands after safe ones using &&, ;, |, ||\n\nASK (requires user confirmation) if:\n- Command uses sudo but is not clearly destructive\n- Deletes files outside system directories but looks risky\n\nALLOW all other commands - this gate only guards against OS destruction.\n\nIMPORTANT: Use your judgment. If a command seems potentially destructive even if not explicitly listed above, ASK the user for confirmation.\n\nRespond with JSON: {\"decision\": \"approve\" or \"block\" or \"ask\", \"reason\": \"brief explanation\"}", "timeout": 10000 } ] } ], "UserPromptSubmit": [ { "matcher": "", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" } ] } ], "SessionStart": [ { "matcher": "", "hooks": [ { "type": "command", "command": "~/.claude/scripts/session-start-check.sh", "timeout": 5000 } ] } ] ``` A few important things happen here: - `rule-injector.py` adds a “manager, not executor” reminder at prompt submission time. - `rule-enforcer.py` blocks certain shell-level violations, especially direct `python`, `pip`, and `pre-commit` usage. - `git-protection.py` protects branch safety for Git operations. 
- The extra security prompt hook acts as a final guardrail against destructive shell commands. - `session-start-check.sh` validates prerequisites such as `uv`, and conditionally checks for tools like `gh`, `prek`, and `mcpl`. The hook behavior is also covered by unit tests. In particular, `tests/test_rule_enforcer.py` verifies the Bash restrictions and also confirms that non-Bash tools are still allowed, while `tests/test_git_protection.py` covers branch detection, merged-branch checks, protected-branch blocking, and GitHub PR merge-status handling. ## Validation In This Repository This repository does not check in `.github/workflows/` files, and it does not include a `Jenkinsfile`. Even so, it does include checked-in validation configuration that supports the documented behavior of its scripts and rules. `tox.toml` defines the repository’s test entry point: ```1:7:tox.toml skipsdist = true envlist = ["unittests"] [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` That is paired with `.pre-commit-config.yaml`, which enables checks from `pre-commit-hooks`, `flake8`, `detect-secrets`, `ruff`, `gitleaks`, `mypy`, and `markdownlint`. In other words, the repo’s own quality story is local and script-centric rather than workflow-YAML-centric. This distinction matters for the agent docs: - `jenkins-expert` and `test-automator` are available because projects using this configuration may need CI or pipeline work. - The configuration repository itself validates behavior through hook scripts, local linting and formatting checks, and unit tests under `tests/`. - The most important “source of truth” for specialist-agent behavior is still the checked-in agent definitions and rules, not a separate CI pipeline file. 
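The branch rules that `git-protection.py` enforces, and that `tests/test_git_protection.py` verifies, reduce to a small predicate. The sketch below is an assumption-level rewrite: the real hook shells out to `git` and `gh` to discover the branch state, while this pure function only mirrors the documented decision logic.

```python
# Hedged sketch of git-protection.py's branch rules as a pure function.
# The protected names and the merged-branch rule come from the script's
# documented behavior; the function shape itself is an assumption.
PROTECTED_BRANCHES = {"main", "master"}

def should_block(branch: str, merged_branches: set[str]) -> tuple[bool, str]:
    """Decide whether a git commit/push on `branch` should be denied."""
    if branch in PROTECTED_BRANCHES:
        return True, f"'{branch}' is a protected branch"
    if branch in merged_branches:
        return True, f"'{branch}' is already merged"
    return False, ""

print(should_block("main", set()))
print(should_block("fix/issue-42-typo", {"fix/issue-42-typo"}))
print(should_block("feat/issue-7-new-thing", set()))  # (False, '')
```

Keeping the decision logic pure like this is also why the real hook is easy to unit-test without a live repository.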
> **Tip:** If you want to understand why a task was routed a certain way, read three things in order: `rules/10-agent-routing.md`, the relevant file in `agents/`, and then `settings.json` plus the matching script in `scripts/` if the behavior looks enforced rather than advisory. --- Source: skills-and-templates.md # Skills and Templates This repository ships two reusable skills and a full personal Claude configuration template. The skills live in `skills/`. The template is the repository itself: tracked files are designed to live under `~/.claude`, with `settings.json`, hook scripts, a custom status line, and bundled plugin metadata all working together. ## What ships with this repository The tracked `skills/` entries are explicitly whitelisted in `.gitignore`, which makes it easy to see what is part of the shared setup: ```gitignore # This config integrates into ~/.claude so we must be explicit about what we track * # skills/ !skills/ !skills/agent-browser/ !skills/agent-browser/SKILL.md !skills/docsfy-generate-docs/ !skills/docsfy-generate-docs/SKILL.md ``` That gives you three main building blocks: - `skills/agent-browser/SKILL.md` for browser automation - `skills/docsfy-generate-docs/SKILL.md` for docsfy-powered documentation generation - A personal Claude configuration template built around `settings.json`, `scripts/`, `statusline.sh`, and `.claude-plugin/marketplace.json` > **Note:** There is no separate “template file.” The repository layout itself is the template, and several paths in `settings.json` assume these files are available under `~/.claude`. ## Browser automation skill The `agent-browser` skill is for tasks where Claude needs to drive a real browser: testing flows, filling forms, taking screenshots, recording demos, or extracting page data. Its front matter makes the intent clear: ```yaml --- name: agent-browser description: >- Automates browser interactions for web testing, form filling, screenshots, and data extraction. 
Use when the user needs to navigate websites, interact with web pages,
fill forms, take screenshots, test web applications, or extract
information from web pages.
allowed-tools: Bash(agent-browser:*)
---
```

### Quick start

The skill recommends a simple loop: open a page, inspect the interactive elements, act on those element references, and re-snapshot when the page changes.

```bash
agent-browser open <url>       # Navigate to page
agent-browser snapshot -i      # Get interactive elements with refs
agent-browser click @e1        # Click element by ref
agent-browser fill @e2 "text"  # Fill input by ref
agent-browser close            # Close browser
```

### What it can do

You can use `agent-browser` for a lot more than basic clicking:

- Navigate with `open`, `back`, `forward`, and `reload`
- Inspect the page with `snapshot`, including interactive-only mode
- Interact with elements using refs like `@e1`
- Read content with `get text`, `get html`, `get value`, and `get attr`
- Capture screenshots, PDFs, videos, traces, and console output
- Work with cookies, storage, tabs, windows, frames, and network routing
- Save and restore browser state between sessions

Here are a few examples pulled directly from the skill:

```bash
agent-browser screenshot path.png
agent-browser screenshot --full
agent-browser pdf output.pdf
agent-browser record start ./demo.webm
agent-browser record stop
```

```bash
agent-browser state save auth.json
agent-browser state load auth.json
agent-browser --session test1 open site-a.com
agent-browser session list
```

```bash
agent-browser find role button click --name "Submit"
agent-browser find text "Sign In" click
agent-browser find label "Email" fill "user@test.com"
```

### Practical workflow

For most UI tasks, the best pattern is:

1. Open the page.
2. Run `agent-browser snapshot -i`.
3. Use the returned refs such as `@e1` and `@e2`.
4. Re-run `snapshot -i` after navigation or a large DOM update.

> **Tip:** `snapshot -i` is the default starting point for real work.
It gives you stable references for the elements Claude should interact with.

The skill also supports JSON output when you want machine-readable results:

```bash
agent-browser snapshot -i --json
agent-browser get text @e1 --json
```

> **Note:** Video recording starts a fresh browser context but preserves cookies and storage from your session, so it is useful for clean demos without forcing you to log in again.

## docsfy documentation generation skill

The `docsfy-generate-docs` skill is for creating AI-generated documentation for a Git repository using the `docsfy` CLI and a running docsfy server. The skill is opinionated in a useful way: it treats documentation generation as a workflow, not just a single command.

### Prerequisites

Before generating anything, the skill checks two things:

```bash
docsfy --help
docsfy health
```

If `docsfy` is missing, the skill points to:

```bash
uv tool install docsfy
```

If the server is unavailable, the skill tells you to check or initialize the docsfy configuration:

```bash
docsfy-server
docsfy config show
docsfy config init
```

### Generation workflow

The core generation command is:

```bash
docsfy generate <git-repo-url> --branch <branch> --provider <provider> --model <model> --watch [--force]
```

A few details matter here:

- The skill always uses `--watch` so progress is visible in real time.
- The repository is passed as a Git URL, not a local folder.
- The provider and model are not guessed.

> **Note:** The skill explicitly says to ask for the AI provider and model instead of hardcoding them. The supported provider examples in the skill are `claude`, `gemini`, and `cursor`.
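The generate invocation is easy to assemble programmatically. The helper below is hypothetical: the flag names come from the skill text, but treating the repository URL as a positional argument is an assumption, and nothing here actually runs `docsfy`.

```python
# Hypothetical command builder; flag names are from the skill text,
# the positional repo URL is an assumption, and nothing is executed.
def build_generate_cmd(repo_url: str, branch: str, provider: str,
                       model: str, force: bool = False) -> list[str]:
    cmd = ["docsfy", "generate", repo_url,
           "--branch", branch, "--provider", provider,
           "--model", model, "--watch"]  # --watch keeps progress visible
    if force:
        cmd.append("--force")
    return cmd

# Example values are placeholders, not a real generation job.
print(" ".join(build_generate_cmd(
    "https://github.com/acme/widgets.git", "main", "claude", "some-model")))
```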
### Branching and download

Once generation reaches `ready`, the skill creates a docs branch before downloading the generated files:

```bash
git fetch origin
git checkout -B docs/docsfy-<branch> origin/<branch>
```

Then it downloads the generated documentation:

```bash
docsfy download <git-repo-url> --branch <branch> --provider <provider> --model <model> --output <output-dir>
```

The docsfy output arrives in a nested folder, so the skill flattens it:

```bash
ls <output-dir>/<project>-<branch>-<provider>-<model>/
mv <output-dir>/<project>-<branch>-<provider>-<model>/* <output-dir>/ || true
mv <output-dir>/<project>-<branch>-<provider>-<model>/.* <output-dir>/ 2>/dev/null || true
rm -rf <output-dir>/<project>-<branch>-<provider>-<model>
```

> **Warning:** The skill expects a nested download directory and removes it after flattening. If that folder does not exist, your project name, branch, provider, or model probably does not match the generation job you started.

### GitHub Pages support

When the target repository is on GitHub, the skill can check whether GitHub Pages is already serving from `docs/`:

```bash
gh api repos/<owner>/<repo>/pages --jq '.source' 2>/dev/null
```

If Pages is not configured, the skill can set it up like this:

```bash
gh api repos/<owner>/<repo>/pages -X POST -f "source[branch]=<branch>" -f "source[path]=/docs"
```

If Pages is serving from `docs/`, the skill also supports a follow-up step: simplifying the root `README.md` so it points readers to the generated docs site.

> **Tip:** This skill is best when you want a fast first version of a docs site for an existing repository. It already accounts for generation, branching, download, Pages checks, and post-generation cleanup.

## The personal Claude configuration template

The repository is built as a reusable `~/.claude` setup.
You can see that directly in `settings.json`, where the hooks and status line call scripts from `~/.claude`: ```json "hooks": { "UserPromptSubmit": [ { "matcher": "", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" } ] } ], "SessionStart": [ { "matcher": "", "hooks": [ { "type": "command", "command": "~/.claude/scripts/session-start-check.sh", "timeout": 5000 } ] } ] }, "statusLine": { "type": "command", "command": "bash ~/.claude/statusline.sh", "padding": 0 } ``` ### What this template gives you If you adopt this template, you get: - Session startup checks for required tools and plugins - Prompt-time rule injection - Command guardrails for Python, pip, and pre-commit - Git branch and PR protection for commit and push operations - A custom status line - Desktop notifications - Bundled marketplace plugins ### Session startup checks Every session runs `scripts/session-start-check.sh`. It checks for: - `uv` as a critical dependency - `gh` when the current repo uses GitHub - `jq` - `gawk` - `prek` when `.pre-commit-config.yaml` exists - `mcpl` for MCP Launchpad - critical review plugins: `pr-review-toolkit`, `superpowers`, and `feature-dev` A representative excerpt: ```bash # CRITICAL: uv - Required for Python hooks if ! command -v uv &>/dev/null; then missing_critical+=("[CRITICAL] uv - Required for running Python hooks Install: https://docs.astral.sh/uv/") fi # OPTIONAL: mcpl - MCP Launchpad (always check) if ! command -v mcpl &>/dev/null; then missing_optional+=("[OPTIONAL] mcpl - MCP Launchpad for MCP server access Install: https://github.com/kenneth-liao/mcp-launchpad") fi ``` This means the template does more than define preferences. It checks whether the environment can actually support the workflows it expects. ### Prompt injection and guardrails The template uses a `UserPromptSubmit` hook to add an orchestration reminder before each request. It also uses `PreToolUse` hooks to block unsafe or off-pattern commands. 
The `rule-enforcer.py` hook blocks direct Python, pip, and pre-commit usage: ```python # Allow uv/uvx commands if cmd.startswith(("uv ", "uvx ")): return False # Block direct python/pip forbidden = ("python ", "python3 ", "pip ", "pip3 ") return cmd.startswith(forbidden) ``` ```python def is_forbidden_precommit_command(command: str) -> bool: """Check if command uses pre-commit directly instead of prek.""" cmd = command.strip().lower() # Block direct pre-commit commands return cmd.startswith("pre-commit ") ``` In practice, that means this template expects you to use: - `uv run ...` instead of `python ...` - `uvx ...` for one-off Python CLI tools - `prek ...` instead of `pre-commit ...` > **Warning:** If you are used to typing `python`, `pip`, or `pre-commit` directly, this template will stop you. That is intentional. ### Git safety The Git protection hook is another important part of the template. Its own docstring summarizes the behavior well: ```python """PreToolUse hook - prevents commits and pushes on protected branches. This hook intercepts git commit and push commands and blocks them if: 1. The current branch is already merged into the main branch 2. The current branch is the main/master branch itself Allows commits on: - Unmerged branches - Amended commits that haven't been pushed yet """ ``` For GitHub repositories, it also checks whether the current branch already has a merged PR by using `gh pr list --head --state merged`. If that PR is already merged, further commits and pushes from the same branch are blocked. This is backed up by the test suite in `tests/test_git_protection.py`, so the behavior is enforced and verified rather than just described. ### Status line and notifications The custom status line is wired through `settings.json` and built in `statusline.sh`. It shows the current directory, optional SSH user and host, Git branch, active virtual environment, model name, context usage, and line-change totals. 
A shortened excerpt from `statusline.sh`: ```bash # Add directory name status_parts+=("$dir_name") # Add git branch if in a git repository if git rev-parse --git-dir >/dev/null 2>&1; then branch=$(git branch --show-current 2>/dev/null || echo "detached") status_parts+=("$branch") fi status_parts+=("$model_name") if [ -n "$context_pct" ]; then status_parts+=("(${context_pct}%)") fi ``` The notification hook uses `notify-send` and `jq`, so desktop notifications are part of the template too. ### Plugin marketplace entries The repository publishes three bundled plugins through `.claude-plugin/marketplace.json`: | Plugin | Description | |------|------| | `myk-github` | GitHub operations, including PR reviews, releases, review handling, and CodeRabbit rate limits | | `myk-review` | Local code review and review database operations | | `myk-acpx` | Multi-agent prompt execution through `acpx` | The `myk-acpx` plugin is especially useful if you want to send the same prompt to multiple coding agents. Its command file includes examples like: ```text /myk-acpx:prompt codex fix the tests /myk-acpx:prompt cursor review this code /myk-acpx:prompt gemini explain this function /myk-acpx:prompt codex --exec summarize this repo /myk-acpx:prompt cursor,codex review this code /myk-acpx:prompt cursor,gemini,codex --peer review the architecture ``` > **Note:** The `myk-acpx` command supports multiple agents, but `--fix`, `--peer`, and `--exec` have compatibility rules. For example, multi-agent mode cannot be combined with `--fix`. ### Template defaults worth knowing about A few defaults in `settings.json` are easy to miss but matter in daily use: ```json "env": { "DISABLE_TELEMETRY": "1", "DISABLE_ERROR_REPORTING": "1", "CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY": "1" } ``` The template also keeps a large `enabledPlugins` list turned on, including official review, language server, and setup plugins alongside the `myk-*` plugins. 
> **Tip:** When you customize the template, keep the directory structure stable under `~/.claude`. The hook and status-line paths are hardcoded around that layout. ## Customizing this template safely Two details are especially important if you plan to extend the setup. First, `settings.json` includes this reminder: ```json "_scriptsNote": "Script entries must be duplicated in both permissions.allow and allowedTools arrays. When adding new scripts, update BOTH locations." ``` Second, the repository uses an allowlist-style `.gitignore`, which means local-only files can live beside tracked files without being committed by accident. That is a good fit for personal configuration because you can keep: - shared, reusable config in version control - machine-specific or experimental files untracked ## Validation and maintenance This repository validates the template with tests and pre-commit tooling. `tox.toml` runs the Python test suite through `uv`: ```toml skipsdist = true envlist = ["unittests"] [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` The pre-commit configuration covers formatting, linting, type checking, documentation linting, and secret scanning. Included tools and hooks include: - `ruff` - `ruff-format` - `mypy` - `flake8` - `markdownlint` - `detect-secrets` - `gitleaks` - standard `pre-commit-hooks` checks such as merge conflicts, docstrings, TOML, and EOF fixes > **Tip:** In this setup, use `prek` instead of calling `pre-commit` directly. That matches the template’s guardrails and avoids the `rule-enforcer.py` block. ## Summary Use this repository when you want both reusable skills and a structured personal Claude setup. 
Use the skills when you need Claude to: - drive a browser with `agent-browser` - generate project documentation with `docsfy` Use the template when you want Claude to: - start with environment checks - follow consistent tool rules - avoid unsafe Git operations - surface more context in the status line - work with a curated plugin set If you keep the `~/.claude` layout intact, the pieces fit together cleanly and give you a configuration that is practical for day-to-day use, not just a collection of isolated files. --- Source: daily-workflow.md # Daily Workflow This configuration is designed to make a Claude session behave like a guided development workflow, not a free-form chat. In a normal session, the main assistant triages the request, routes implementation to the right specialist, runs mandatory review and validation, and uses hooks to keep Git and command usage safe. If you are using this repo as your Claude Code configuration, this is the day-to-day flow you should expect. ## At a glance 1. The session starts by checking that required tools and plugins are installed. 2. Your request is triaged to decide whether it needs the full issue-first workflow or can be handled directly. 3. The main assistant acts as the orchestrator and routes work to the right specialist agent. 4. Any code change goes through a required three-reviewer loop. 5. Tests and local validation run before work is considered done. 6. Git safety hooks block risky commits and pushes. 7. GitHub review workflows use temporary JSON files in `/tmp/claude/` and store review history in `.claude/data/reviews.db`. ## 1. Session start: check the environment first When a session starts, this configuration immediately runs a startup check and injects its rules into the prompt flow. 
In `settings.json`, the hooks are wired like this: ```json "UserPromptSubmit": [ { "matcher": "", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" } ] } ], "SessionStart": [ { "matcher": "", "hooks": [ { "type": "command", "command": "~/.claude/scripts/session-start-check.sh", "timeout": 5000 } ] } ] ``` That means two things happen automatically: - Every prompt gets a reminder that the main assistant should act like a manager and delegate work. - Every new session checks whether the environment can actually support the configured workflow. The startup checker treats some items as critical, especially: - `uv`, because the Python hooks run through `uv` - The three review plugins: `pr-review-toolkit`, `superpowers`, and `feature-dev` It also checks for useful tools such as `gh`, `jq`, `gawk`, `prek`, and `mcpl`. > **Note:** The startup check is informational, not blocking. If tools are missing, the session continues, but you should expect parts of the workflow to be unavailable or degraded until you install the missing pieces. > **Warning:** Without the three review plugins, the mandatory review loop cannot run as intended. ## 2. Intake and triage: decide whether this is full workflow work Before implementation starts, the config expects the request to be classified. The issue-first rule is written as a checklist: ```text ## Pre-Implementation Checklist (START HERE) Before ANY code changes, complete this checklist: 1. **Should this workflow be skipped?** (see "SKIP this workflow for" list below) - YES → Do directly, skip remaining steps - NO → Continue checklist 2. **GitHub issue created?** - NO → Create issue first (delegate to `github-expert`) - YES → Continue 3. **On correct branch?** (`feat/issue-N-...` or `fix/issue-N-...`) - NO → Create branch from origin/main (delegate to `git-expert`) - YES → Continue 4. 
**User confirmed "work on it now"?** - NO → Ask user - YES → Proceed with implementation ``` In practice, that means: - Use the full issue-and-branch flow for features, bug fixes, refactors, and multi-file work. - Skip it for simple questions, exploration, tiny fixes, or when the user explicitly wants a quick one-off change. This matters because the rest of the workflow assumes work is happening on a safe branch with a clear unit of scope. > **Tip:** If the task is substantial enough that you would want a branch name, a checklist, or a progress trail, it probably belongs in the full issue-first workflow. ## 3. Delegate implementation instead of doing everything in one place The repository is built around an orchestrator pattern: - The main assistant plans, reads, asks questions, and routes work. - Specialist agents do the actual implementation in their domain. Routing is based on intent, not just the tool being used. A few important examples from `rules/10-agent-routing.md`: - Python work goes to `python-expert` - Local Git work goes to `git-expert` - GitHub issues, PRs, and releases go to `github-expert` - Documentation work goes to `technical-documentation-writer` - Tests go to `test-automator` This is reinforced both by policy and by command restrictions. 
For example, `scripts/rule-enforcer.py` blocks direct `python`, `pip`, and `pre-commit` calls and expects `uv`, `uvx`, or `prek` instead: ```python def is_forbidden_python_command(command: str) -> bool: """Check if command uses python/pip directly instead of uv.""" cmd = command.strip().lower() # Allow uv/uvx commands if cmd.startswith(("uv ", "uvx ")): return False # Block direct python/pip forbidden = ("python ", "python3 ", "pip ", "pip3 ") return cmd.startswith(forbidden) def is_forbidden_precommit_command(command: str) -> bool: """Check if command uses pre-commit directly instead of prek.""" cmd = command.strip().lower() # Block direct pre-commit commands return cmd.startswith("pre-commit ") ``` That gives you a simple rule of thumb: - Use specialist agents for implementation. - Use `uv` and `uvx` for Python execution. - Use `prek` instead of calling `pre-commit` directly. ### The big exception: slash commands Slash commands do not follow the normal "delegate everything" rule. Once you invoke a slash command, that command's own workflow takes over. The slash-command rule makes that explicit: - The orchestrator executes the slash command directly. - Its internal steps run directly unless the slash command itself tells the assistant to use an agent. - Normal delegation rules are suspended for the duration of that slash command. This is why commands like `/myk-github:pr-review` and `/myk-github:review-handler` feel more like guided tools than normal chat prompts. ## 4. Review is mandatory, and it is parallel After any code change, the configured session is expected to enter the review loop. The rule is very direct: ```text ┌───────────────────────────────────────────────────────────────────┐ │ 1. Specialist writes/fixes code │ │ ↓ │ │ 2. Send to ALL 3 review agents IN PARALLEL: │ │ - `superpowers:code-reviewer` │ │ - `pr-review-toolkit:code-reviewer` │ │ - `feature-dev:code-reviewer` │ │ ↓ │ │ 3. Merge findings from all 3 reviewers │ │ ↓ │ │ 4. 
Has comments from ANY reviewer? ──YES──→ Fix code (go to 2) │ │ │ │ │ NO │ │ ↓ │ │ 5. Run `test-automator` │ │ ↓ │ │ 6. Tests pass? ──NO──→ Fix code │ │ │ ↓ │ │ │ Minor fix (test/config only)? │ │ │ YES → re-run tests (go to 5) │ │ │ NO → full re-review (go to 2) │ │ YES │ │ ↓ │ │ ✅ DONE │ └───────────────────────────────────────────────────────────────────┘ ``` A few practical points follow from this: - The three reviewers are not optional. - They are expected to run in parallel, not one after another. - Duplicate feedback is merged before the next fix round. - If any reviewer still has a real issue, the code goes back for changes. The deduplication rule prioritizes findings in this order: 1. Security 2. Correctness 3. Performance 4. Style > **Warning:** In this configuration, "looks good enough" is not the same as "done." A change is done only after the review loop is clear and tests pass. ## 5. Validation happens locally, with `tox`, `pytest`, and pre-commit tooling This repo does not rely on an in-repo GitHub Actions workflow to define the normal validation path. There are no `.github/workflows` files here. Instead, the day-to-day workflow is enforced locally through hooks, `tox`, and pre-commit configuration. The test environment is defined in `tox.toml`: ```toml skipsdist = true envlist = ["unittests"] [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` So the underlying unit-test path is: - `pytest` - run through `uv` - scoped to the `tests` directory The repository also has a substantial `.pre-commit-config.yaml` with checks such as: - `ruff` - `ruff-format` - `flake8` - `mypy` - `gitleaks` - `detect-secrets` - `markdownlint` That means a normal local validation pass is expected to include both test execution and code-quality checks. > **Tip:** If you are validating changes yourself, think in two layers: "Does the code work?" and "Does it pass the repo's local quality gates?" 
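Stepping back to the review loop in step 4: the merge-and-deduplicate step can be sketched as plain data handling. This is an illustration only (the real loop is driven by the rule files, and the finding structure here is an assumption); it shows the documented priority order being applied when reviewers overlap.

```python
# Documented deduplication priority: security > correctness > performance > style.
PRIORITY = {"security": 0, "correctness": 1, "performance": 2, "style": 3}


def merge_findings(*reviewer_findings: list[dict]) -> list[dict]:
    """Keep one finding per (path, line), preferring the highest-priority
    category when several reviewers flag the same location."""
    merged: dict[tuple[str, int], dict] = {}
    for findings in reviewer_findings:
        for finding in findings:
            key = (finding["path"], finding["line"])
            current = merged.get(key)
            if current is None or PRIORITY[finding["category"]] < PRIORITY[current["category"]]:
                merged[key] = finding
    # Highest-priority issues come first in the merged report.
    return sorted(merged.values(), key=lambda f: PRIORITY[f["category"]])
```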
## 6. Use task tracking for long or multi-phase work For complex workflows, the session is expected to create and maintain tasks rather than trying to keep everything in short-term conversational memory. The task rules say tasks are persisted to disk under `~/.claude/tasks//` and are useful for: - multi-step work - work that may be interrupted - workflows with clear phases - work where progress visibility helps They also require cleanup at the end. If a workflow created tasks, those tasks should be marked complete before the session moves on. This is especially relevant for multi-phase slash commands, where there is a clear progression like: 1. collect input 2. execute changes 3. test 4. post results 5. clean up tasks > **Note:** The task system is for longer-running orchestration. It is not meant for every tiny action inside a short, one-step task. ## 7. Git safety is enforced, not just recommended This configuration protects Git history aggressively. The Git hook intercepts `git commit` and `git push` and blocks unsafe cases, including: - committing on `main` or `master` - committing or pushing from a branch whose PR is already merged - committing on a branch that is already merged into the main branch - committing in detached HEAD - pushing from protected branches It also has an explicit escape hatch for one safe case: amending unpushed work. In plain language, the expected behavior is: - work on a feature or issue branch - do not commit directly to `main` - do not keep adding commits to a branch that is already merged - if you need to amend a commit that has not been pushed yet, that is allowed > **Warning:** If the hook blocks a commit or push, the fix is usually to create or switch to the right branch, not to force the action through. ## 8. Review work has its own workflow This repo includes custom plugins for local review, GitHub PR review, and review-thread handling. In `settings.json`, those plugins are enabled alongside the official review plugins. 
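Before moving into the review commands, the Git safety rules above can be condensed into a predicate. This is a sketch, not the shipped `git-protection.py`; in particular, the exact precedence of the amend escape hatch is an assumption.

```python
PROTECTED = frozenset({"main", "master"})


def should_block(branch: str,
                 detached: bool = False,
                 merged: bool = False,
                 amending_unpushed: bool = False) -> bool:
    """True if a git commit/push should be blocked.

    Mirrors the documented rules: protected branches, detached HEAD,
    and already-merged branches are blocked; amending a commit that
    has not been pushed yet is the one allowed exception.
    """
    if detached or branch in PROTECTED:
        return True
    if merged and not amending_unpushed:
        return True
    return False
```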
### Local review Use the local review command when you want a three-reviewer pass on your current work: - `/myk-review:local` - `/myk-review:local main` - `/myk-review:local feature/branch` The command definition says it reviews either: - uncommitted changes with `git diff HEAD`, or - changes against a target branch with `git diff "$ARGUMENTS"...HEAD` That makes it the quickest way to sanity-check work before committing. ### PR review Use the PR review command when you want to review a GitHub PR and post findings: - `/myk-github:pr-review` - `/myk-github:pr-review 123` - `/myk-github:pr-review https://github.com/owner/repo/pull/123` Its workflow fetches: - PR metadata and diff via `myk-claude-tools pr diff` - the target repository's `CLAUDE.md` via `myk-claude-tools pr claude-md` Then it sends the result through the same three-reviewer pattern before posting inline comments. ### Review-thread handling Use the review handler when you are processing existing review comments on a PR: - `/myk-github:review-handler` - `/myk-github:review-handler --autorabbit` - `/myk-github:review-handler ` Under the hood, the CLI writes the fetched review state to a temp JSON file in `/tmp/claude`: ```python tmp_base = Path(os.environ.get("TMPDIR") or tempfile.gettempdir()) out_dir = tmp_base / "claude" out_dir.mkdir(parents=True, exist_ok=True, mode=0o700) try: out_dir.chmod(0o700) except OSError as e: print_stderr(f"Warning: unable to set permissions on {out_dir}: {e}") json_path = out_dir / f"pr-{pr_number}-reviews.json" ``` That review JSON is then used to: 1. present human, Qodo, and CodeRabbit items 2. decide what to address 3. post replies and resolve threads 4. 
store the completed result for analytics The stored history goes into a local SQLite database inside the repo: ```python project_root = get_project_root() # Get current commit SHA (anchored to repo root for correctness) commit_sha = get_current_commit_sha(cwd=project_root) log(f"Storing reviews for {owner}/{repo}#{pr_number} (commit: {commit_sha[:7]})...") db_path = project_root / ".claude" / "data" / "reviews.db" log(f"Database: {db_path}") ``` A few practical consequences matter to users: - AI review sources are handled alongside human comments, not separately. - Previously dismissed comments can be auto-skipped, but the review-handler workflow still expects them to be surfaced in the decision flow. - Human skipped or not-addressed threads stay open for follow-up, while AI-source threads are resolved after reply posting. > **Tip:** If you are working through a large AI review backlog, `/myk-github:review-handler --autorabbit` is the advanced path. It is designed to keep polling for new CodeRabbit comments after each pass. ## 9. Advanced option: `/myk-acpx:prompt` The `myk-acpx` plugin is the "power user" path for running prompts through external coding agents via `acpx`. Examples from the command definition include: - `/myk-acpx:prompt codex fix the tests` - `/myk-acpx:prompt cursor review this code` - `/myk-acpx:prompt cursor,codex review this code` - `/myk-acpx:prompt codex --fix fix the code quality issues` - `/myk-acpx:prompt gemini --peer review this code` This is not the default daily flow, but it is useful when you want: - a second opinion from another coding agent - a fix-mode pass in an isolated workflow - a peer-review debate loop If you use it, the command does its own safety checks before modifying files. ## What "done" looks like A task is truly complete in this configuration when all of the following are true: - The request was triaged correctly. - Issue and branch workflow was used when appropriate. 
- Implementation was routed to the right specialist or handled through the right slash command. - All three reviewers are clear, or their findings were addressed and re-reviewed. - Tests and local validation passed. - Git safety hooks are satisfied. - Any long-running task tracking was cleaned up. - If GitHub review handling was part of the job, replies were posted and the review data was stored. That is the core promise of this repository: a Claude session should not just produce changes. It should move work through a predictable, reviewable, and safer daily workflow. --- Source: review-workflows.md # Review Workflows This project gives you a complete review loop inside Claude Code: you can review local changes before you push, review a GitHub PR and post selected findings, process comments from multiple review sources on an open PR, refine your own draft review before submitting it, and store the outcome so future review rounds get smarter. | If you want to... | Use | |---|---| | Review uncommitted or branch-to-branch changes | `/myk-review:local [branch]` | | Review a GitHub PR and optionally post findings | `/myk-github:pr-review [PR number or URL]` | | Process incoming human, Qodo, and CodeRabbit comments on a PR | `/myk-github:review-handler [--autorabbit] [review URL]` | | Polish your own draft review before submitting it | `/myk-github:refine-review ` | | Inspect stored review history and patterns | `/myk-review:query-db ...` or `myk-claude-tools db ...` | > **Note:** The shipped configuration enables both review plugins: ```96:103:settings.json "enabledPlugins": { "pyright-lsp@claude-plugins-official": true, "jdtls-lsp@claude-plugins-official": true, "lua-lsp@claude-plugins-official": true, "github@claude-plugins-official": true, "myk-review@myk-org": true, "myk-github@myk-org": true, "code-simplifier@claude-plugins-official": true, ``` > **Note:** GitHub-facing review commands check for `uv`, `myk-claude-tools`, and `gh` before they run. 
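That pre-run check is easy to picture. Here is a sketch (the real commands perform their own checks; the tool list simply matches the note above):

```python
import shutil

REQUIRED_TOOLS = ("uv", "myk-claude-tools", "gh")


def missing_tools(required: tuple[str, ...] = REQUIRED_TOOLS) -> list[str]:
    """Return the required executables that are not on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]


# A command would abort early, for example:
# if missing_tools():
#     raise SystemExit("missing required tools: " + ", ".join(missing_tools()))
```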
## Local Diff Review Use `/myk-review:local` when you want fast feedback without touching GitHub. It is the lightest-weight workflow in the repo. How it works: - `/myk-review:local` reviews your staged and unstaged changes with `git diff HEAD`. - `/myk-review:local ` reviews your branch against another branch with `git diff ""...HEAD`. - The diff is sent to three review agents in parallel, then their findings are merged and deduplicated before you see them. In practice, this is the best workflow for pre-push review or for checking a feature branch before you open a PR. > **Tip:** Use `/myk-review:local main` when you want “what changed since I branched?” and `/myk-review:local` when you want “what have I changed right now?” ## GitHub PR Review Use `/myk-github:pr-review` when you want Claude to review a real PR and optionally post selected findings back to GitHub. This workflow does more than just fetch a diff. It also resolves the base repository context, which matters for fork-based PRs, and pulls the repository’s `CLAUDE.md` so the review agents can judge the PR in project context. The PR diff command produces a structured payload with metadata, the full diff, and the changed-file list: ```216:229:myk_claude_tools/pr/diff.py output = { "metadata": { "owner": pr_info.owner, "repo": pr_info.repo, "pr_number": pr_info.pr_number, "head_sha": head_sha, "base_ref": base_ref, "title": pr_title, "state": pr_state, }, "diff": pr_diff, "files": files, } ``` That data is then analyzed by the three review agents. 
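To make the payload shape concrete, here is a small consumer of that structure. It is a sketch that uses only the fields shown in the excerpt above.

```python
def summarize_pr_payload(payload: dict) -> str:
    """Build a one-line summary from the pr-diff payload's metadata and files."""
    meta = payload["metadata"]
    return (
        f"{meta['owner']}/{meta['repo']}#{meta['pr_number']} "
        f"[{meta['state']}] \"{meta['title']}\" - "
        f"{len(payload['files'])} changed file(s) at {meta['head_sha'][:7]}"
    )
```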
After you choose which findings to post, the tool sends them to GitHub as a single review with a summary body and inline comments: ```281:295:myk_claude_tools/pr/post_comment.py payload = { "commit_id": commit_sha, "body": review_body, "event": "COMMENT", "comments": [ { "path": c.path, "line": c.line, "body": c.body, "side": "RIGHT", } for c in comments ], } ``` What this means for you: - You can run `/myk-github:pr-review` with no argument to auto-detect the current branch’s PR. - You can also pass a PR number or full PR URL. - You stay in control of what gets posted: all findings, none, or only selected ones. > **Tip:** Inline comments can only be posted on lines that are part of the PR diff. If posting fails, the first things to check are the file path, the line number, and whether the PR head SHA changed since the review started. ## Multi-Source Review Handling Use `/myk-github:review-handler` when a PR already has reviewer feedback and you want to work through it systematically. This is the full review-processing workflow. It fetches unresolved review threads, categorizes them by source, lets you decide what to do with each item, applies fixes, posts replies, and stores the results for future analytics and auto-skip matching. Fetched review items are enriched with a source, priority, reply field, and status: ```682:691:myk_claude_tools/reviews/fetch.py source = detect_source(author) priority = classify_priority(body) enriched = { **thread, "source": source, "priority": priority, "reply": thread.get("reply"), "status": thread.get("status", "pending"), } ``` The workflow groups review threads into three top-level buckets: - `human` - `qodo` - `coderabbit` CodeRabbit also gets special handling because some of its comments are embedded in the review body instead of normal GitHub review threads: ```1:14:myk_claude_tools/reviews/coderabbit_parser.py """Parse CodeRabbit review body comments (outside diff range, nitpick, and duplicate). 
CodeRabbit embeds certain comments directly in the review body text
(not as inline threads). This module extracts those comments into
structured data.

Three kinds of body-embedded sections are supported:

- **Outside diff range** comments (code outside the PR diff range)
- **Nitpick** comments (minor suggestions)
- **Duplicate** comments (comments repeated from previous reviews)

The expected format is a blockquoted section with nested comment entries.
"""
```

That matters because `review-handler` is designed to give you one place to process:

- regular human review threads
- Qodo comments
- CodeRabbit inline threads
- CodeRabbit outside-diff, nitpick, and duplicate comments

A typical flow looks like this:

1. Fetch open review items from the current PR.
2. Show everything to the user, including auto-skipped items.
3. Decide what to address, what to skip, and why.
4. Make code changes and run tests.
5. Post replies to GitHub.
6. Store the completed review result in the local database.

> **Note:** Auto-skipped items are still shown in the review tables. The workflow treats auto-skip as a suggested default, not as a hidden deletion.

If you use `--autorabbit`, CodeRabbit items are auto-approved for action while human and Qodo items still follow the normal decision flow.

> **Warning:** `--autorabbit` is a polling loop. It keeps checking for new CodeRabbit comments until you stop it.

## Pending-Review Refinement

Use `/myk-github:refine-review <PR>` when you already started a review in the GitHub UI and want help rewriting your draft comments before you submit them.

This workflow is intentionally different from `review-handler`:

- It does not fetch all unresolved review feedback on the PR.
- It fetches only your own `PENDING` review.
- It is meant for editing your draft review comments, not responding to other reviewers.
The update command expects a JSON structure like this: ```7:31:myk_claude_tools/reviews/pending_update.py Expected JSON structure: { "metadata": { "owner": "...", "repo": "...", "pr_number": 123, "review_id": 456, "submit_action": "COMMENT", # optional "submit_summary": "Summary text" # optional }, "comments": [ { "id": 789, "path": "src/main.py", "line": 42, "body": "original comment", "refined_body": "refined version", "status": "accepted" } ] } Status handling: - accepted: Update comment body with refined_body - Other statuses: Skip (no update) ``` When `pending-fetch` builds the comment list, each comment starts in a neutral state with the original body preserved and a place for an accepted refinement: ```163:172:myk_claude_tools/reviews/pending_fetch.py comment: dict[str, Any] = { "id": c.get("id"), "path": c.get("path"), "line": c.get("line"), "side": c.get("side", "RIGHT"), "body": c.get("body", ""), "diff_hunk": c.get("diff_hunk", ""), "refined_body": None, "status": "pending", } ``` Only accepted comments with a changed `refined_body` are updated, and submission is guarded very deliberately: ```254:276:myk_claude_tools/reviews/pending_update.py # Optionally submit the review (requires both JSON submit_action AND --submit flag) submit_action = metadata.get("submit_action") if submit_action and submit: if submit_action not in VALID_SUBMIT_ACTIONS: print_stderr( f"Error: Invalid submit_action '{submit_action}'. " f"Must be one of: {', '.join(sorted(VALID_SUBMIT_ACTIONS))}" ) return 1 if fail_count > 0: print_stderr(f"Skipping review submission due to {fail_count} failed update(s)") else: submit_summary = metadata.get("submit_summary", "") print_stderr(f"Submitting review with action: {submit_action}...") elif submit_action and not submit: print_stderr(f"Note: submit_action='{submit_action}' set but --submit flag not passed. 
Skipping submission.") ``` That gives you a safe review flow: - refine everything - accept only the edits you want - optionally submit as `COMMENT`, `APPROVE`, or `REQUEST_CHANGES` - or leave the review pending and come back later > **Warning:** Submission requires both the metadata field `submit_action` and the CLI flag `--submit`. If one is missing, the review will not be submitted. > **Warning:** A `404` during pending-review update usually means the draft review was already submitted or deleted somewhere else. ## Auto-Skip Behavior One of the most useful features in this repo is automatic skipping of repeated, previously dismissed review comments. When the tool fetches new review items, it checks the review database for similar comments that were previously marked `skipped` or `not_addressed`, and in some cases `addressed` for body-comment types. If it finds a close enough match, it enriches the new item so it already carries the earlier decision and reason. Here is the core matching logic: ```717:740:myk_claude_tools/reviews/fetch.py if candidates: best = None best_score = 0.0 for prev in candidates: prev_body = (prev.get("body") or "").strip() if not prev_body: continue score = similarity(thread_body, prev_body) if score >= 0.6 and score > best_score: best = prev best_score = score if best_score == 1.0: break if best: reason = (best.get("skip_reason") or best.get("reply") or "").strip() if reason: original_status = best.get("status", "skipped") enriched["status"] = "skipped" enriched["skip_reason"] = reason enriched["original_status"] = original_status # Display-only, not persisted to DB enriched["reply"] = f"Auto-skipped ({original_status}): {reason}" enriched["is_auto_skipped"] = True ``` A few practical details matter here: - Matching is based on exact path first, with a `comment_id` fallback for pathless body comments. - The similarity threshold is `0.6`. - The similarity function is Jaccard-style word overlap, not exact string equality. 
- The original reason is carried forward so you can see why the new item was auto-skipped. The database query is intentionally conservative about what counts as safe auto-skip input: ```193:202:myk_claude_tools/db/query.py Retrieves comments that were dismissed during review processing: - ``not_addressed`` and ``skipped`` comments are always included (any type). - ``addressed`` comments are only included when their type is ``outside_diff_comment``, ``nitpick_comment``, or ``duplicate_comment``. These types lack a GitHub review thread that can be resolved, so the database is the only mechanism to auto-skip them on subsequent fetches. Normal inline thread comments rely on GitHub's ``isResolved`` filter in the GraphQL query, so including them here could incorrectly auto-skip a similar new finding in a different PR. ``` > **Tip:** Auto-skip gets better over time only if you store completed review results. If you skip `reviews store`, future review rounds lose that memory. ## Reply Posting And Thread Resolution `myk-claude-tools reviews post ` is the command that actually replies on GitHub after statuses and messages have been set. It handles two different kinds of review feedback: - normal GitHub review threads that have a resolvable thread ID - body-only comments such as CodeRabbit outside-diff, nitpick, and duplicate comments The resolution policy is intentionally different for human and AI reviewers: ```538:582:myk_claude_tools/reviews/post.py # Outside-diff and nitpick comments have no GitHub thread to post to or resolve. # They are tracked via the review database only. 
comment_type = thread_data.get("type") if comment_type in ("outside_diff_comment", "nitpick_comment", "duplicate_comment"): if status == "pending": pending_count += 1 eprint(f"Skipping {category}[{i}] ({path}): {comment_type} status is pending") continue if status in ("addressed", "not_addressed", "skipped", "failed"): # Skip if already posted (idempotency) if posted_at: already_posted_count += 1 eprint(f"Skipping {category}[{i}] ({path}): {comment_type} already posted at {posted_at}") continue # Skip auto-skipped entries — they were already replied to in a previous cycle if thread_data.get("is_auto_skipped"): already_posted_count += 1 eprint( f"Skipping {category}[{i}] ({path}): {comment_type}" " auto-skipped (already replied in previous cycle)" ) continue ... # Determine if we should resolve this thread (MUST be before resolve_only_retry check) should_resolve = True if category == "human" and status != "addressed": should_resolve = False ``` What that means in practice: - `addressed` replies resolve normal threads. - `not_addressed` and `skipped` still post a reply. - Human `not_addressed` and `skipped` threads stay open so the human reviewer can follow up. - Qodo and CodeRabbit threads resolve after reply. - Outside-diff, nitpick, and duplicate comments are collected into consolidated PR comments per reviewer instead of thread replies. If something fails, the command is designed to be rerun safely. Already-posted entries are skipped, and partially processed entries can retry the missing step. > **Note:** If a post fails, rerun the exact `myk-claude-tools reviews post ` command the tool prints. The workflow is intentionally idempotent enough to retry without reposting everything. ## Review Result Storage And Analytics After replies are posted, `myk-claude-tools reviews store ` persists the finished review to SQLite so the repo can support analytics and future auto-skip matching. 
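That future auto-skip matching rests on the Jaccard-style word-overlap similarity and the `0.6` threshold described earlier. A minimal sketch of how such a comparison can work (illustrative only; the function names are assumptions, not the shipped `similarity` implementation in `fetch.py`):

```python
def similarity(a: str, b: str) -> float:
    """Jaccard word overlap between two comment bodies (illustrative)."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)


def is_auto_skip_candidate(new_body: str, prev_body: str, threshold: float = 0.6) -> bool:
    """Mirror the score >= 0.6 acceptance rule used when enriching fetched items."""
    return similarity(new_body, prev_body) >= threshold
```

Because the comparison is word-set overlap rather than exact string equality, a reworded but substantially similar comment can still match an earlier dismissal.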
The storage path is inside the project, not in a global database: ```206:214:myk_claude_tools/reviews/store.py # Get project root and database path project_root = get_project_root() # Get current commit SHA (anchored to repo root for correctness) commit_sha = get_current_commit_sha(cwd=project_root) log(f"Storing reviews for {owner}/{repo}#{pr_number} (commit: {commit_sha[:7]})...") db_path = project_root / ".claude" / "data" / "reviews.db" ``` A few important storage behaviors are worth knowing: - The current `HEAD` SHA is stored with each review record. - Review history is append-only, so multiple review passes on the same PR build history instead of overwriting each other. - Per-comment data includes source, author, path, line, body, priority, status, reply, timestamps, and comment type. > **Warning:** `reviews store` deletes the JSON file after successful storage. Treat the JSON as a working file, not a permanent artifact. Once the data is stored, you can query it either through `/myk-review:query-db` or directly with the CLI. Useful commands: - `myk-claude-tools db stats --by-source` - `myk-claude-tools db stats --by-reviewer` - `myk-claude-tools db patterns --min 2` - `myk-claude-tools db dismissed --owner --repo ` - `myk-claude-tools db query "SELECT status, COUNT(*) AS cnt FROM comments GROUP BY status"` - `echo '{"path":"src/main.py","body":"Consider adding error handling"}' | myk-claude-tools db find-similar --owner --repo --json` Raw SQL queries are intentionally locked down to read-only entry points: ```562:588:myk_claude_tools/db/query.py # Safety check: only allow SELECT/CTE statements sql_stripped = sql.strip() ... 
# Block multiple statements (semicolon separating statements, not in strings/comments) sql_stmt_check = _strip_sql_strings(sql_for_checks).rstrip(";") if ";" in sql_stmt_check: raise ValueError("Multiple SQL statements are not allowed") # Allow SELECT and WITH (CTE) as read-only entrypoints if not sql_upper.startswith(("SELECT", "WITH")): raise ValueError("Only SELECT/CTE queries are allowed for safety") ``` That makes the analytics side safe for exploration while still giving you useful ways to answer questions like: - Which review source has the highest addressed rate? - Which reviewer keeps raising the same suggestion? - Which comments were skipped, and why? - Has this “new” CodeRabbit comment already been dismissed in an earlier cycle? ## Recommended Flow For most teams, the smoothest path looks like this: 1. Use `/myk-review:local` before pushing or opening a PR. 2. Use `/myk-github:pr-review` when you want Claude-generated PR findings and optional inline GitHub comments. 3. Use `/myk-github:review-handler` after human, Qodo, or CodeRabbit feedback arrives. 4. Use `/myk-github:refine-review` only for your own draft GitHub review comments. 5. Always store completed review runs so auto-skip and analytics keep getting better. --- Source: release-workflow.md # Release Workflow Releases in this repository are driven by two layers: - `/myk-github:release` gives you the guided, end-to-end workflow. - `myk-claude-tools release ...` provides the underlying CLI steps for validation, version detection, version bumping, and GitHub release creation. The CLI is installed as `myk-claude-tools`: ```23:24:pyproject.toml [project.scripts] myk-claude-tools = "myk_claude_tools.cli:main" ``` > **Note:** This repository does not define a separate checked-in release pipeline in GitHub Actions. Release prep and publishing are driven by the plugin workflow, `git`, `gh`, and the `myk-claude-tools` CLI. 
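That single `[project.scripts]` entry means every subcommand family (`release`, `db`, `reviews`, and so on) dispatches through one `main()` function. A hypothetical sketch of that wiring, assuming an argparse-style layout (the real `myk_claude_tools/cli.py` may be organized differently):

```python
import argparse


def main(argv=None) -> int:
    # Hypothetical dispatcher; subcommand names follow the documented CLI surface.
    parser = argparse.ArgumentParser(prog="myk-claude-tools")
    groups = parser.add_subparsers(dest="group", required=True)

    release = groups.add_parser("release", help="release validation and publishing")
    release_cmds = release.add_subparsers(dest="command", required=True)
    for name in ("info", "detect-versions", "bump-version", "create"):
        release_cmds.add_parser(name)

    args = parser.parse_args(argv)
    print(f"dispatching {args.group} {args.command}")
    return 0
```

With this shape, `myk-claude-tools release info` resolves to the `release`/`info` pair before any work happens, which is why unknown subcommands fail fast with a usage message.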
## What to use For most releases, start with the guided slash command: ```text /myk-github:release /myk-github:release --dry-run /myk-github:release --prerelease /myk-github:release --draft ``` If you want to run the steps yourself, the CLI exposes these subcommands: - `myk-claude-tools release info` - `myk-claude-tools release detect-versions` - `myk-claude-tools release bump-version` - `myk-claude-tools release create` > **Warning:** `/myk-github:release --dry-run` appears in the plugin workflow, but there is no matching `--dry-run` flag in the checked-in `myk-claude-tools release ...` CLI commands. Treat it as a workflow-level preview mode, not a standalone CLI option. ## 1. Validate the repository The workflow starts with `myk-claude-tools release info`. This is the safety check that decides whether the repository is in a releasable state. Use it like this: ```bash myk-claude-tools release info myk-claude-tools release info --target myk-claude-tools release info --target --tag-match ``` It validates three things before doing the more expensive release analysis: - You are on the correct target branch. - Your working tree is clean. - Your local branch is fully synced with the remote. ```217:238:myk_claude_tools/release/info.py def _perform_validations(default_branch: str, current_branch: str, target_branch: str | None = None) -> Validations: """Perform release prerequisite validations.""" # 1. Default Branch Check effective_target = target_branch or default_branch on_target_branch = current_branch == effective_target # 2. Clean Working Tree Check working_tree_clean = True dirty_files = "" diff_code, _ = _run_command(["git", "diff", "--quiet"]) cached_code, _ = _run_command(["git", "diff", "--cached", "--quiet"]) if diff_code != 0 or cached_code != 0: working_tree_clean = False _, status_output = _run_command(["git", "status", "--porcelain"]) if status_output: dirty_files = "\n".join(status_output.split("\n")[:10]) # 3. 
Remote Sync Check fetch_code, _ = _run_command(["git", "fetch", "origin", effective_target, "--quiet"]) fetch_successful = fetch_code == 0 ``` When validation succeeds, the command prints structured JSON that includes repo metadata, validation results, the last tag, the recent matching tags, and the commit list: ```103:115:myk_claude_tools/release/info.py return { "metadata": self.metadata.to_dict(), "validations": self.validations.to_dict(), "last_tag": self.last_tag, "all_tags": self.all_tags, "commits": [c.to_dict() for c in self.commits], "commit_count": self.commit_count, "is_first_release": self.is_first_release, "target_branch": self.target_branch, "tag_match": self.tag_match, } ``` > **Warning:** If validation fails, `release info` returns no commit list. That does not mean there is nothing to release. It means you need to fix the branch state first. ### Version-branch auto-detection If you are on a maintenance branch named like `v2.10`, the tool auto-detects that branch as the release target and scopes tag lookup to the same release line. ```201:214:myk_claude_tools/release/info.py def _detect_version_branch(current_branch: str) -> tuple[str | None, str | None]: """Auto-detect version branch and infer tag match pattern. If the current branch matches vMAJOR.MINOR (e.g., v2.10), returns the branch as the target and a glob pattern to scope tag discovery. """ match = _VERSION_BRANCH_RE.match(current_branch) if match: version_prefix = match.group(1) return current_branch, f"v{version_prefix}.*" return None, None ``` That means a branch such as `v2.10` automatically uses `v2.10.*` when looking for the most recent relevant tag. > **Tip:** If your branch naming does not follow `vMAJOR.MINOR`, pass `--target` and `--tag-match` explicitly. ## 2. Analyze commits and choose the bump `myk-claude-tools release info` returns raw commit data. The guided slash command is the part that interprets those commits into a proposed version bump and changelog. 
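That interpretation step fits in a few lines. A sketch of the conventional-commit categorization the workflow performs (illustrative; the shipped logic lives in the slash-command prompt, not in a Python helper):

```python
def propose_bump(commit_subjects: list[str]) -> str:
    """Map conventional-commit subjects to a proposed semver bump level."""
    bump = "patch"
    for subject in commit_subjects:
        prefix = subject.split(":", 1)[0]
        if "!" in prefix or "BREAKING CHANGE" in subject:
            return "major"  # a breaking change always wins
        if prefix.startswith("feat"):
            bump = "minor"  # features outrank fixes, docs, and maintenance
    return bump
```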
The release workflow definition is explicit about that step: ```75:83:plugins/myk-github/commands/release.md ### Phase 3: Changelog Analysis Parse commits from Phase 1 output and categorize by conventional commit type: - Breaking Changes (MAJOR) - Features (MINOR) - Bug Fixes, Docs, Maintenance (PATCH) Determine version bump and generate changelog. ``` In practice, that means: - The CLI gathers commit history since the last matching tag. - The slash command groups those commits into breaking changes, features, and patch-level changes. - The workflow then proposes `major`, `minor`, or `patch`. If the repository has no matching tag yet, the tool treats it as a first release and works from the full history. > **Note:** Commit analysis is a workflow responsibility. The checked-in CLI does not have a separate `analyze` or `recommend-version` subcommand. > **Note:** The commit collection helper reads up to 100 commits from the release range. If your next release spans more than that, review the result before relying on the generated changelog. ## 3. 
Detect which version files can be updated Next, the workflow runs: ```bash myk-claude-tools release detect-versions ``` This command prints JSON with the detected files and a total count: ```205:212:myk_claude_tools/release/detect_versions.py results = detect_version_files() output = { "version_files": [r.to_dict() for r in results], "count": len(results), } print(json.dumps(output, indent=2)) ``` The supported root-level version files are defined directly in code: ```167:174:myk_claude_tools/release/detect_versions.py _ROOT_SCANNERS: list[tuple[str, Callable[[Path], str | None], str]] = [ ("pyproject.toml", _parse_pyproject_toml, "pyproject"), ("package.json", _parse_package_json, "package_json"), ("setup.cfg", _parse_setup_cfg, "setup_cfg"), ("Cargo.toml", _parse_cargo_toml, "cargo"), ("build.gradle", _parse_gradle, "gradle"), ("build.gradle.kts", _parse_gradle, "gradle"), ] ``` The detector also searches for Python `__version__` assignments in files named `__init__.py` and `version.py`, while skipping common generated or environment directories such as `.venv`, `node_modules`, `dist`, `build`, and `target`. Supported version sources are: - `pyproject.toml` via `project.version` - `package.json` via top-level `version` - `setup.cfg` via static `[metadata] version` - `Cargo.toml` via `[package] version` - `build.gradle` and `build.gradle.kts` via `version` - `__init__.py` and `version.py` via `__version__` This repository itself uses `pyproject.toml` for the CLI package version: ```1:4:pyproject.toml [project] name = "myk-claude-tools" version = "1.7.2" description = "CLI utilities for Claude Code plugins" ``` > **Note:** `setup.cfg` values that use `attr:` or `file:` are intentionally skipped. Only static version values are detected and bumped. > **Note:** Automatic version detection does not scan `.claude-plugin/plugin.json`. 
In this repository, files such as `plugins/myk-github/.claude-plugin/plugin.json` contain their own version field, but they are not part of the supported detection list above. ## 4. Review the proposal before anything is changed If version files are found, the guided workflow shows you: - The proposed new version - The files it plans to update - A changelog preview You can then accept the proposal, override the bump type, exclude a file, or cancel. ```85:106:plugins/myk-github/commands/release.md ### Phase 4: User Approval Display the proposed release information. If version files were detected in Phase 2, include them in the approval prompt. - 'yes' -- Proceed with proposed version and all listed files - 'major/minor/patch' -- Override the version bump type - 'exclude N' -- Exclude file by number from the version bump (e.g., 'exclude 2') - 'no' -- Cancel the release ``` This approval step is important because version detection is intentionally conservative. The tool can find multiple valid version files, but you still control which ones actually get updated for the release. ## 5. Bump version files When you confirm the version, the workflow runs `myk-claude-tools release bump-version`. Typical usage looks like this: ```bash myk-claude-tools release bump-version myk-claude-tools release bump-version --files --files ``` A few rules matter here: - Pass `1.2.0`, not `v1.2.0`. - `--files` can be repeated to limit the update to a subset of detected files. - Every file passed to `--files` must match a detected version file. - The CLI rewrites files only. It does not create branches, commits, or PRs by itself. 
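Since the new version must be computed before calling `bump-version`, the arithmetic is plain semver math. A minimal sketch, assuming the bare `X.Y.Z` format the CLI expects (the helper name is illustrative):

```python
def next_version(current: str, bump: str) -> str:
    """Compute the next bare semver string, e.g. a patch bump of '1.7.2' is '1.7.3'."""
    major, minor, patch = (int(part) for part in current.split("."))
    if bump == "major":
        return f"{major + 1}.0.0"
    if bump == "minor":
        return f"{major}.{minor + 1}.0"
    if bump == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump type: {bump!r}")
```

Note that the result is passed to `bump-version` without a leading `v`; the `v` prefix belongs only on the Git tag.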
The version string is validated strictly: ```217:227:myk_claude_tools/release/bump_version.py new_version = new_version.strip() if not new_version or any(ch in new_version for ch in ("\n", "\r")): return BumpResult( status="failed", error="Invalid version: must be a non-empty single-line string.", ) if new_version.startswith("v") or new_version.startswith("V"): return BumpResult( status="failed", error=f"Invalid version: '{new_version}' should not start with 'v'. Use '{new_version[1:]}' instead.", ) ``` If you use `--files`, the filter must match exactly: ```236:256:myk_claude_tools/release/bump_version.py if files is not None: normalized_files = [Path(f).as_posix() for f in files] filtered = [vf for vf in detected if vf.path in normalized_files] if not filtered: available = [vf.path for vf in detected] return BumpResult( status="failed", error=f"None of the specified files were found in detected version files. Available: {available}", ) matched_paths = {vf.path for vf in filtered} unmatched = [f for f in normalized_files if f not in matched_paths] if unmatched: available = [vf.path for vf in detected] return BumpResult( status="failed", error=( f"Some specified files were not found in detected version files." f" Unmatched: {unmatched}. Available: {available}" ), ) detected = filtered ``` The result JSON distinguishes between `updated` files and `skipped` files, which lets the workflow stage only successful updates and tell you about anything that was skipped. > **Tip:** Use `--files` when you want to keep automatic detection but narrow the actual bump to a smaller set of version files. ## 6. Prepare the release with a PR If version files were updated, the guided workflow does not release directly from that local state. It creates a dedicated bump branch, commits the updated files, opens a PR, merges it, and then syncs the target branch before creating the GitHub release. 
The documented flow is:

```bash
BUMP_BRANCH="chore/bump-version--$(date +%s)"
git checkout -b "$BUMP_BRANCH"
git add   # the bumped version files
uv lock
git add uv.lock
git commit -m "chore: bump version to "
git push -u origin "$BUMP_BRANCH"
PR_URL=$(gh pr create --title "chore: bump version to " \
  --body "Bump version to " --base )
gh pr merge --merge --admin --delete-branch
```

A few practical details are worth knowing:

- The timestamp suffix on `BUMP_BRANCH` avoids collisions with earlier bump attempts.
- `uv lock` is only relevant when `uv.lock` exists at the repo root.
- After the merge, the workflow switches back to the target branch and pulls the latest changes before continuing.

> **Warning:** The workflow attempts `gh pr merge --merge --admin --delete-branch`. If you do not have admin privileges, expect the documented fallback: merge the PR manually, then confirm before the release continues.

## 7. Create the GitHub release

Once the branch is current and the changelog is ready, the workflow writes the release notes to a temporary file and calls `release create`.
The workflow’s handoff looks like this: ```bash CHANGELOG_FILE=$(mktemp /tmp/claude-release-XXXXXX.md) trap "rm -f $CHANGELOG_FILE" EXIT cat > "$CHANGELOG_FILE" << 'EOF' EOF myk-claude-tools release create {owner}/{repo} {tag} "$CHANGELOG_FILE" [--prerelease] [--draft] [--target {target_branch}] ``` You can also run the CLI directly if you already have a changelog file: ```bash myk-claude-tools release create myk-claude-tools release create --prerelease myk-claude-tools release create --draft --target ``` The CLI forwards the main release flags directly to `gh release create`: ```144:165:myk_claude_tools/release/create.py cmd = [ "gh", "release", "create", tag, "--repo", owner_repo, "--notes-file", changelog_file, "--title", title.strip() if title and title.strip() else tag, ] if target: cmd.extend(["--target", target]) if prerelease: cmd.append("--prerelease") if draft: cmd.append("--draft") ``` The command expects: - A repository in `owner/repo` format - A release tag - An existing changelog file - Optional `--prerelease`, `--draft`, `--target`, and `--title` flags It also checks whether the tag looks like semantic versioning: ```76:79:myk_claude_tools/release/create.py def _is_semver_tag(tag: str) -> bool: """Check if tag follows semantic versioning format (vX.Y.Z).""" pattern = r"^v[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9.]+)?$" ``` > **Warning:** Non-semver tags are warned about, not blocked. `release create` can still proceed if the tag does not match `vX.Y.Z`. ## End-to-end summary A typical guided release looks like this: 1. Run `/myk-github:release`. 2. The workflow calls `myk-claude-tools release info` to validate the repo and collect commits since the last matching tag. 3. It analyzes those commits to propose a `major`, `minor`, or `patch` bump and drafts the changelog. 4. It calls `myk-claude-tools release detect-versions` to discover bumpable version files. 5. It asks you to approve the proposed version, exclude files, override the bump type, or cancel. 6. 
If files need updating, it runs `myk-claude-tools release bump-version ...`, creates a PR, merges it, and syncs the target branch. 7. It writes the changelog to a temporary file and calls `myk-claude-tools release create ...`. 8. GitHub creates the release, optionally as a prerelease or draft. That split is intentional: - The CLI handles validation, JSON output, version-file discovery, version rewriting, and GitHub release creation. - The slash command handles the interactive decisions, commit interpretation, changelog planning, and PR orchestration. --- Source: multi-agent-acpx-guide.md # Multi-Agent ACPX Guide `myk-acpx` lets Claude Code route a prompt through [acpx](https://github.com/openclaw/acpx) to another coding agent. It is the plugin to use when you want a second opinion from Codex, Cursor, Gemini, Claude, Copilot, or another supported ACP-compatible tool without leaving your current workflow. It supports four practical patterns: - A persistent, read-only session with one agent - A stateless one-shot run with `--exec` - A writable fix pass with `--fix` - A multi-round peer review loop with `--peer` ## Quick examples These examples are taken from the command definition in `plugins/myk-acpx/commands/prompt.md`: - `/myk-acpx:prompt codex fix the tests` - `/myk-acpx:prompt cursor review this code` - `/myk-acpx:prompt gemini explain this function` - `/myk-acpx:prompt codex --exec summarize this repo` - `/myk-acpx:prompt codex --model o3-pro review the architecture` - `/myk-acpx:prompt codex --fix fix the code quality issues` - `/myk-acpx:prompt gemini --peer review this code` - `/myk-acpx:prompt codex --peer --model o3-pro review the architecture` - `/myk-acpx:prompt cursor,codex review this code` - `/myk-acpx:prompt cursor,gemini,codex --peer review the architecture` > **Note:** The mode comes from flags, not from the wording in your prompt. `/myk-acpx:prompt codex fix the tests` is still a read-only run unless you add `--fix`. 
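The flag handling this guide describes (an agent list first, mutually exclusive mode flags, an optional `--model`, and the remainder as prompt text) can be sketched as a small parser. This is illustrative only; the shipped slash command implements its parsing inside the command prompt, not in Python:

```python
def parse_invocation(text: str) -> dict:
    """Split '/myk-acpx:prompt' arguments into agents, mode, model, and prompt."""
    tokens = text.split()
    agents = tokens[0].split(",")
    mode, model, prompt_words = "session", None, []
    i = 1
    while i < len(tokens):
        token = tokens[i]
        if token in ("--exec", "--fix", "--peer"):
            if mode != "session":
                raise ValueError("--exec, --fix, and --peer are mutually exclusive")
            mode = token.lstrip("-")
        elif token == "--model":
            i += 1
            if i >= len(tokens) or tokens[i].startswith("--"):
                raise ValueError("--model requires a value")
            model = tokens[i]
        else:
            prompt_words.append(token)
        i += 1
    if mode == "fix" and len(agents) > 1:
        raise ValueError("--fix only works with a single agent")
    return {"agents": agents, "mode": mode, "model": model, "prompt": " ".join(prompt_words)}
```

Under this sketch, `codex --model o3-pro review the architecture` yields one agent in default session mode, while `cursor,codex --fix ...` is rejected outright.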
## Before you start If `acpx` is not installed, the command checks `acpx --version` and offers to install it with: ```bash npm install -g acpx@latest ``` > **Note:** `acpx` can download ACP adapters, but it does not replace the underlying agent CLI. If you want to use `codex`, `cursor`, `gemini`, or another agent, that tool still needs to be installed and available on `PATH`. ## Command format The command definition declares this interface: ```yaml description: Run a prompt via acpx to any supported coding agent argument-hint: [,agent2,...] [--fix | --peer | --exec] [--model ] allowed-tools: Bash(acpx:*), Bash(git:*), AskUserQuestion, Agent, Edit, Write, Read, Glob, Grep ``` That means: - The first token is the agent name, or a comma-separated list of agent names. - `--exec`, `--fix`, and `--peer` choose the mode. - `--model ` is optional and agent-dependent. - Everything after the flags becomes the prompt text. Supported agents currently listed in the command definition: | Agent | Wraps | |---|---| | `pi` | Pi Coding Agent | | `openclaw` | OpenClaw ACP bridge | | `codex` | Codex CLI (OpenAI) | | `claude` | Claude Code | | `gemini` | Gemini CLI | | `cursor` | Cursor CLI | | `copilot` | GitHub Copilot CLI | | `droid` | Factory Droid | | `iflow` | iFlow CLI | | `kilocode` | Kilocode | | `kimi` | Kimi CLI | | `kiro` | Kiro CLI | | `opencode` | OpenCode | | `qwen` | Qwen Code | ## How the workflow runs `/myk-acpx:prompt` is a slash command, so it runs its own workflow directly instead of handing the whole request off to another agent. The slash-command rules in this repository state: ```markdown 1. **EXECUTE IT DIRECTLY YOURSELF** - NEVER delegate to any agent 2. **ALL internal operations run DIRECTLY** - scripts, bash commands, everything 3. **Slash command prompt takes FULL CONTROL** - its instructions override general CLAUDE.md rules 4. 
**General delegation rules are SUSPENDED** for the duration of the slash command ``` That direct execution model is what allows this command to manage sessions, inspect Git state, run `acpx`, and orchestrate peer-review rounds on its own. ## Modes at a glance | Mode | Keeps session state? | Can the external `acpx` agent write files? | Best for | |---|---|---|---| | Default session mode | Yes | No | Follow-up conversations and ongoing read-only analysis | | `--exec` | No | No | One-shot answers and session fallback | | `--fix` | Yes | Yes | Making changes in one repository | | `--peer` | Yes | No | Structured review and debate; Claude may still apply fixes between rounds | ## Default session mode If you do not pass `--exec`, the command tries to keep a session for the current directory. It first runs: ```bash acpx sessions ensure ``` If that fails, it tries: ```bash acpx sessions new ``` Successful runs stay attached to that working directory, so follow-up prompts can keep using the same agent context. Non-fix runs are explicitly read-only. The command appends this guard to your prompt: ```text IMPORTANT: This is a read-only request. Do NOT modify, create, or delete any files. Report your findings only. ``` > **Note:** If session setup fails with known `acpx` session errors such as `"Invalid params"` or missing session responses, the command falls back to `--exec` for that run and points to upstream issues `acpx#152` and `acpx#161`. > **Tip:** Use default session mode when you expect follow-up questions. Use `--exec` when you want a clean, disposable run. ## Exec mode `--exec` skips session setup entirely and runs a stateless command: ```bash acpx --approve-reads --non-interactive-permissions fail exec '' acpx --approve-reads --non-interactive-permissions fail exec --model '' ``` Use `--exec` when you want: - A one-off summary - A quick read-only review - A fallback when session management is unreliable for a specific agent ## Fix mode `--fix` is the writable mode. 
It runs the selected agent with full approval: ```bash acpx --approve-all '' acpx --approve-all --model '' ``` The command also makes the permission explicit in the prompt it sends: ```text You have full permission to modify, create, and delete files as needed. Make all necessary changes directly. ``` ### Dirty-worktree safeguards Before `--fix` runs, the command checks whether the current directory is a Git repository and whether the worktree is already dirty. If the directory is not a Git repository, the command warns that it will not be able to show a Git diff or give you an easy rollback point. If the directory is a Git repository and `git status --short` shows existing changes, the command asks you to choose one of these options: 1. `Commit first (Recommended)` creates a checkpoint commit with the exact message `chore: checkpoint before acpx changes`. 2. `Continue anyway` keeps going, but the later diff summary may include pre-existing edits. 3. `Abort` stops the run immediately. This safeguard is there so agent-made changes are easier to isolate and review. ### What happens after the fix After the agent finishes, the command reads the Git diff using: ```bash git status --short git diff --stat git diff git diff --cached --stat git diff --cached ``` If the diff is large, the workflow can fall back to the `--stat` view instead of dumping the entire patch. The final summary is designed to tell you: - Which files were modified, created, or deleted - What changed in plain language - What to run next to verify the result > **Warning:** `--fix` only works with a single agent. If you pass multiple agents with `--fix`, the command aborts. ## Peer mode `--peer` is not a one-shot review. It is a multi-round debate loop. At a high level, the workflow is: 1. The external reviewer inspects the code. 2. Claude evaluates each finding. 3. Valid findings are fixed, and disputed findings get a technical counter-argument. 4. 
The updated result goes back to the reviewer for another round. 5. The loop continues until the reviewer says there are no actionable issues left. The command definition is explicit about the convergence rule: ```markdown Claude orchestrates an AI-to-AI debate loop with the target agent(s) until all participants agree on the code. When multiple agents are specified, each agent reviews independently in parallel, and Claude evaluates the merged findings. **CRITICAL RULE: Only the peer agent can end the loop.** Claude fixing code does NOT count as convergence. After EVERY fix round, Claude MUST send the fixes back to the peer agent (Step 9c) for re-review. The loop ends ONLY when the peer agent confirms no remaining issues. ``` If the repository contains a `CLAUDE.md`, peer mode tells the reviewer to use it as part of the review standard before it starts judging the code. One subtle but important point: the external `acpx` reviewer stays read-only, but the overall peer workflow can still change your code because Claude may fix accepted findings between review rounds. That is why the same dirty-worktree safeguard runs before `--peer` as well as before `--fix`. If a disagreement survives three or more rounds, the workflow records it as an unresolved disagreement and moves on. The final peer summary is designed to show: - Findings addressed - Agreements reached after debate - Unresolved disagreements - Items where no change was needed > **Warning:** Peer mode does not end when Claude thinks the code is good. It ends when the reviewer says there are no remaining actionable issues. ## Parallel multi-agent runs You can pass several agents as a comma-separated list. The command sends the same prompt to each agent in parallel, using the same mode and the same optional `--model` value. Results are grouped by agent: ```text ## Results from : ## Results from : ``` In peer mode, the initial review and later re-review rounds are also sent to all selected agents in parallel. 
Claude then merges and deduplicates overlapping findings. Parallel runs are most useful when you want breadth rather than depth: - Use multi-agent default mode to compare viewpoints on the same code or design. - Use multi-agent peer mode when you want several reviewers to challenge the code independently. - Use single-agent session mode when you want follow-up continuity. - Use single-agent fix mode when you want controlled edits and a clean diff summary. > **Tip:** Multi-agent runs are best for review, explanation, and comparison. `--fix` is intentionally limited to one agent so changes stay attributable and easier to inspect. ## Guardrails and failure cases | Situation | What happens | |---|---| | No agent name or no prompt text | The command aborts and shows the usage pattern. | | Unknown agent | The command aborts and lists supported agents. | | `--fix` with `--peer`, `--fix` with `--exec`, or `--peer` with `--exec` | The command aborts because those modes are mutually exclusive. | | Duplicate `--fix`, `--peer`, `--exec`, or `--model` | The command aborts with a duplicate-flag error. | | `--model` with no value, or a model name that starts with `--` | The command aborts. | | `--fix` with multiple agents | The command aborts. | | A read-only run tries to write files anyway | The command retries once with stricter read-only wording, then aborts if the agent still tries to write. | | Session setup fails with known ACP session errors | The command falls back to `--exec` for that invocation. | > **Note:** When the command builds the raw `acpx` shell command, it single-quotes the prompt and escapes embedded single quotes as `'\''` to avoid shell expansion issues. --- Source: mcp-usage.md # MCP Usage This configuration expects all MCP server discovery and access to go through `mcpl` (MCP Launchpad). The working pattern is simple: - Discover tools first. - Inspect the schema if you are not sure about parameters. 
- Call the tool only after you know the exact server, tool name, and JSON shape. In other words, this setup does **not** expect users or agents to guess MCP tool names from memory. > **Note:** MCP behavior in this repository is defined by `rules/`, enforced by `settings.json` and hooks, and checked at session start. There are no separate `.github/workflows` files for MCP in this repository. ## What you need to know first The central rule lives in `rules/15-mcp-launchpad.md`: ```text Use the `mcpl` command for all MCP server interactions. MCP Launchpad is a unified CLI for discovering and executing tools from multiple MCP servers. If a task requires functionality outside current capabilities, always check `mcpl` for available tools. ``` That same file makes the expectation explicit: ```text ## Workflow **Never guess tool names** - always discover them first. ``` If you remember only one thing from this page, make it this: > **Warning:** Do not guess server names, tool names, or parameter names. Use `mcpl search`, `mcpl list`, and `mcpl inspect` first. ## The discovery-first workflow The repository already includes real examples in `rules/15-mcp-launchpad.md`. This is the intended flow. ### 1. Search for the right tool Use `mcpl search` when you know what you want to do, but not which server or tool provides it: ```bash mcpl search "list projects" ``` ### 2. Inspect the tool before calling it Use `mcpl inspect` when you want the schema or an example payload: ```bash mcpl inspect sentry search_issues --example ``` ### 3. Call the tool with the required JSON Once you know the exact tool and parameters, call it directly: ```bash mcpl call vercel list_projects '{"teamId": "team_xxx"}' ``` ### 4. List tools when you already know the server If you know the server but not the tool name, list that server's tools: ```bash mcpl list vercel # Shows all tools with required params ``` > **Tip:** `mcpl search` is best when you are exploring. 
`mcpl list <server>` is best when you already know the server. ## Common `mcpl` commands These commands come directly from `rules/15-mcp-launchpad.md`. | Command | Purpose | |---|---| | `mcpl search "<query>"` | Search all tools | | `mcpl search "<query>" --limit N` | Search with more results | | `mcpl list` | List all MCP servers | | `mcpl list <server>` | List tools for one server | | `mcpl inspect <server> <tool>` | Show full schema | | `mcpl inspect <server> <tool> --example` | Show schema and example call | | `mcpl call <server> <tool> '{}'` | Execute a tool with no arguments | | `mcpl call <server> <tool> '{"param": "v"}'` | Execute a tool with arguments | | `mcpl verify` | Check server connections | For troubleshooting, the same rule file also documents: | Command | Purpose | |---|---| | `mcpl session status` | Check daemon and connection status | | `mcpl session stop` | Restart the daemon | | `mcpl config` | Show current configuration | | `mcpl call <server> <tool> '{}' --no-daemon` | Bypass the daemon for debugging | ## How roles are split in this configuration This repository uses an orchestrator pattern. The main Claude conversation is the **orchestrator**, and specialist agents do the domain-specific work. ### Orchestrator: discover, then delegate `rules/00-orchestrator-core.md` limits what the orchestrator should do directly: ```text ❌ **NEVER** use: Edit, Write, NotebookEdit, Bash (except `mcpl`), direct MCP calls ❌ **NEVER** delegate slash commands (`/command`) OR their internal operations - see slash command rules ✅ **ALWAYS** delegate other work to specialist agents ⚠️ Hooks will BLOCK violations ``` The allowed direct action is intentionally narrow: ```text - Run `mcpl` (via Bash) for MCP server discovery only ``` And the routing rule is clear: ```text ❌ MCP tools → delegate to manager agents ``` In practical terms, the orchestrator should use `mcpl` to figure out what exists, then hand off MCP-backed work to the right specialist.
### Specialist agents: use the full `mcpl` workflow The shared agent rules in `agents/00-base-rules.md` tell all specialist agents how to work with MCP: ```text ## MCP Server Access Agents can access MCP (Model Context Protocol) servers via the `mcpl` command (MCP Launchpad). **Key points:** - **Never guess tool names** - always search/discover first - Use `mcpl search "<query>"` to find tools across all servers - Use `mcpl call <server> <tool> '<json>'` to execute tools ``` They also include this quick reference: ```bash mcpl search "<query>"          # Find tools mcpl list <server>             # List server's tools mcpl inspect <server> <tool>   # Get tool schema mcpl call <server> <tool> '{}' # Execute tool ``` ## What this means for end users If you are using this configuration, the safest way to request MCP-backed work is to describe the outcome you want and let the system discover the exact tool through `mcpl`. Good requests: - "Use `mcpl` to find the right tool for listing projects." - "Search available MCP tools first, then call the correct one." - "Inspect the tool schema before making the call." Less helpful requests: - Naming a server/tool combination you have not verified. - Assuming parameter names without checking the schema. - Treating MCP tools as if they were built-in commands. > **Tip:** If you are unsure whether a tool exists, ask for discovery first. That matches how this configuration is designed to work. ## How the configuration enforces this This behavior is not just documentation. It is built into the repository configuration.
### `settings.json` explicitly allows `mcpl` In `settings.json`, `Bash(mcpl:*)` is allowed in both `permissions.allow` and `allowedTools`: ```json "allow": [ "Read(/tmp/claude/**)", "Edit(/tmp/claude/**)", "Write(/tmp/claude/**)", "Bash(mkdir -p /tmp/claude*)", "Bash(claude:*)", "Bash(sed -n:*)", "Bash(grep:*)", "Bash(mcpl:*)", "Bash(git -C:*)", "Bash(prek:*)" ] ``` ```json "allowedTools": [ "Edit(/tmp/claude/**)", "Write(/tmp/claude/**)", "Read(/tmp/claude/**)", "Bash(mkdir -p /tmp/claude*)", "Bash(claude:*)", "Bash(sed -n:*)", "Bash(grep:*)", "Bash(mcpl:*)", "Bash(git -C:*)", "Bash(prek:*)" ] ``` This is why `mcpl` is the approved path for MCP discovery in this configuration. ### Session start checks whether `mcpl` is installed `settings.json` runs `scripts/session-start-check.sh` on `SessionStart`: ```json "SessionStart": [ { "matcher": "", "hooks": [ { "type": "command", "command": "~/.claude/scripts/session-start-check.sh", "timeout": 5000 } ] } ] ``` That script always checks for `mcpl`: ```bash # OPTIONAL: mcpl - MCP Launchpad (always check) if ! command -v mcpl &>/dev/null; then missing_optional+=("[OPTIONAL] mcpl - MCP Launchpad for MCP server access Install: https://github.com/kenneth-liao/mcp-launchpad") fi ``` > **Note:** The startup check labels `mcpl` as optional because not every task needs MCP. If you want MCP-backed workflows, treat it as required. You can install MCP Launchpad here: [https://github.com/kenneth-liao/mcp-launchpad](https://github.com/kenneth-liao/mcp-launchpad) ### Prompt injection reinforces the rule on every request `scripts/rule-injector.py` injects a reminder into every prompt: ```python rule_reminder = ( "[SYSTEM RULES] You are a MANAGER. NEVER do work directly. 
ALWAYS delegate:\n" "- Edit/Write → language specialists (python-expert, go-expert, etc.)\n" "- ALL Bash commands → bash-expert or appropriate specialist\n" "- Git commands → git-expert\n" "- MCP tools → manager agents\n" "- Multi-file exploration → Explore agent\n" "HOOKS WILL BLOCK VIOLATIONS." ) ``` That reminder works together with `rules/00-orchestrator-core.md` and `rules/15-mcp-launchpad.md` to keep the MCP workflow consistent. ## Troubleshooting If MCP access is not working, start with the commands already documented in `rules/15-mcp-launchpad.md`: ```bash mcpl verify mcpl session status mcpl session stop mcpl config mcpl call <server> <tool> '{}' --no-daemon ``` The same file also gives these recovery hints: - If a server is not connecting, run `mcpl verify`. - If connections look stale, run `mcpl session stop` and try again. - If you hit timeouts, raise the connection timeout, for example `MCPL_CONNECTION_TIMEOUT=120`. > **Warning:** If a call fails because a tool is missing or parameters are wrong, do not switch to guessing. Go back to `mcpl search`, `mcpl list`, or `mcpl inspect <server> <tool> --example`. ## Quick checklist Before using an MCP tool in this configuration: 1. Make sure `mcpl` is installed. 2. Search or list tools before calling anything. 3. Inspect the tool if the parameters are not obvious. 4. Use the orchestrator for discovery. 5. Let the appropriate agent perform MCP-backed execution when needed. If you keep discovery-first as your default, you will be using MCP the way this repository expects. --- Source: cli-and-interface-reference.md # CLI and Interface Reference This repository exposes its public functionality through local interfaces: a Python CLI, Claude Code slash commands, hook scripts, and JSON files passed between them. It does not expose a standalone HTTP API. > **Note:** The interfaces are intentionally composable. A slash command often calls `myk-claude-tools`, which then reads or writes JSON under `/tmp/claude/` or `.claude/data/`.
## Interface Map | Surface | Primary files | What you use it for | |---------|---------------|---------------------| | CLI | `myk_claude_tools/` | Pull request data, review handling, review analytics, release automation, CodeRabbit rate-limit handling | | Slash commands | `plugins/*/commands/*.md` | Running those workflows from Claude Code with guided prompts and tool permissions | | Hooks | `settings.json`, `scripts/` | Enforcing guardrails, injecting context, checking prerequisites, desktop notifications | | JSON/config files | `settings.json`, `.claude-plugin/*.json`, `/tmp/claude/*.json` | Configuring Claude Code and passing machine-readable data between commands | | Review database | `.claude/data/reviews.db` | Persisting completed review history for analytics and auto-skip logic | ## `myk-claude-tools` CLI The package exposes a single console entrypoint: ```23:24:pyproject.toml [project.scripts] myk-claude-tools = "myk_claude_tools.cli:main" ``` Python `>=3.10` is required. The command is designed to be installed as a tool, and the plugin workflows repeatedly check for it before they proceed. Install it with: ```bash uv tool install myk-claude-tools ``` At runtime, the CLI is organized into five command groups: ```12:22:myk_claude_tools/cli.py @click.group() @click.version_option() def cli() -> None: """CLI utilities for Claude Code plugins.""" cli.add_command(coderabbit_commands.coderabbit, name="coderabbit") cli.add_command(db_commands.db, name="db") cli.add_command(pr_commands.pr, name="pr") cli.add_command(release_commands.release, name="release") cli.add_command(reviews_commands.reviews, name="reviews") ``` ### Shared invocation patterns `pr diff` and `pr claude-md` accept the same three input styles: - `owner/repo` plus a PR number - A full GitHub PR URL - A bare PR number when you are already inside the target repository > **Tip:** If you only have a PR number, run the command from the repository you want to inspect. 
The parser uses `gh repo view` to infer `owner/repo`. Most GitHub-backed commands depend on a working `gh` CLI login. Review and release workflows also assume `git`, and many slash-command wrappers check for `uv` before continuing. ## `pr` commands The `pr` group covers three public interfaces: | Command | Input | Output | |---------|-------|--------| | `myk-claude-tools pr diff` | `owner/repo <pr-number>`, PR URL, or PR number | JSON with PR metadata, full diff text, and changed files | | `myk-claude-tools pr claude-md` | Same three forms as `pr diff` | Plain-text `CLAUDE.md` content, or an empty string if none exists | | `myk-claude-tools pr post-comment` | `owner/repo <pr-number>` plus a comments JSON file, or `-` for stdin | JSON reporting success or failure of posted inline comments | `pr diff` is the main machine-readable PR payload. Its output shape is defined directly in the implementation: ```216:229:myk_claude_tools/pr/diff.py output = { "metadata": { "owner": pr_info.owner, "repo": pr_info.repo, "pr_number": pr_info.pr_number, "head_sha": head_sha, "base_ref": base_ref, "title": pr_title, "state": pr_state, }, "diff": pr_diff, "files": files, } ``` The `files` array contains one object per changed file with: - `path` - `status` - `additions` - `deletions` - `patch` `pr claude-md` is intentionally plain text. It checks in this order: 1. Local `./CLAUDE.md` if the current checkout matches the target repo 2. Local `./.claude/CLAUDE.md` if the current checkout matches the target repo 3. Remote `CLAUDE.md` via GitHub 4. Remote `.claude/CLAUDE.md` via GitHub 5. Empty output if nothing is found > **Tip:** `pr claude-md` is local-first. If you are already in the target repository, it will use your local file before calling GitHub. `pr post-comment` expects a JSON array of inline comments. Each object must have `path`, `line`, and `body`.
The command also recognizes optional severity markers inside `body`: - `### [CRITICAL]` - `### [WARNING]` - `### [SUGGESTION]` If no marker is present, the comment is treated as a suggestion. The command posts a single GitHub review with a generated summary and the inline comments. > **Warning:** GitHub only accepts inline comments on lines that are part of the current PR diff. If the path, line number, or commit SHA do not match the diff, `pr post-comment` will fail. ## `reviews` commands The `reviews` group is the most important interface family in this repository. It handles fetching review data, updating replies, refining pending reviews, and storing completed review history. | Command | What it does | File behavior | |---------|--------------|---------------| | `reviews fetch` | Fetch unresolved review threads for the current branch’s PR, or for a supplied review URL | Writes `/tmp/claude/pr-<pr-number>-reviews.json` and also prints the JSON to stdout | | `reviews post` | Read a fetched review JSON file, post replies, and resolve threads where appropriate | Updates timestamps in place and can be safely re-run | | `reviews pending-fetch` | Fetch the authenticated user’s pending PR review | Writes `/tmp/claude/pr-<owner>-<repo>-<pr-number>-pending-review.json` and prints only the path | | `reviews pending-update` | Update accepted pending review comments, optionally submit the review | Reads the pending-review JSON file | | `reviews store` | Persist a completed review JSON file into SQLite | Imports it into `.claude/data/reviews.db` and deletes the JSON file on success | ### `reviews fetch` `reviews fetch` outputs a categorized JSON document with three arrays: `human`, `qodo`, and `coderabbit`.
```949:960:myk_claude_tools/reviews/fetch.py final_output = { "metadata": { "owner": owner, "repo": repo, "pr_number": int(pr_number), "json_path": str(json_path), }, "human": categorized["human"], "qodo": categorized["qodo"], "coderabbit": categorized["coderabbit"], } ``` Each review entry can include: - `thread_id` - `node_id` - `comment_id` - `author` - `path` - `line` - `body` - `replies` - `source` - `priority` - `status` - `reply` Depending on the review source and matching history, you may also see: - `skip_reason` - `original_status` - `is_auto_skipped` - `type` - `review_id` - `suggestion_index` `reviews fetch` also supports review-specific URL fragments, including: - `#pullrequestreview-<id>` - `#discussion_r<id>` - A raw numeric review ID That matters when you want to focus on one review rather than all unresolved threads. CodeRabbit support goes further than GitHub’s review-thread API. The implementation also parses CodeRabbit review-body comments and classifies them as: - `outside_diff_comment` - `nitpick_comment` - `duplicate_comment` Those comment types are important later, because they are replied to via consolidated PR comments rather than normal thread replies.
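As a rough illustration of how a consumer might walk a fetched review document, the sketch below tallies comments per source and status using only the fields listed above. The `tally` helper and the inline sample are hypothetical; the real JSON is produced by `reviews fetch`:

```python
from collections import Counter


def tally(review_doc: dict) -> Counter:
    """Count (source, status) pairs across the human/qodo/coderabbit arrays."""
    counts: Counter = Counter()
    for source in ("human", "qodo", "coderabbit"):
        for comment in review_doc.get(source, []):
            # Per the documented shape, every entry carries a status field;
            # default to "pending" if it is absent.
            counts[(source, comment.get("status", "pending"))] += 1
    return counts


# Minimal stand-in for a fetched review file
sample = {
    "metadata": {"owner": "o", "repo": "r", "pr_number": 123},
    "human": [{"status": "addressed"}],
    "qodo": [{"status": "skipped"}, {"status": "pending"}],
    "coderabbit": [],
}
print(tally(sample))
```

A tally like this is the kind of quick sanity check you might run before deciding which threads still need a reply.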
### Status values and reply behavior The review-posting workflow is driven by status values stored in the JSON: ```24:34:myk_claude_tools/reviews/post.py Status handling: - addressed: Post reply and resolve thread - not_addressed: Post reply and resolve thread (similar to addressed) - skipped: Post reply (with skip reason) and resolve thread - pending: Skip (not processed yet) - failed: Retry posting Resolution behavior by source: - qodo/coderabbit: Always resolve threads after replying - human: Only resolve if status is "addressed"; skipped/not_addressed threads are not resolved to allow reviewer follow-up ``` In practice, that means: - Human reviews stay open when you reply with `skipped` or `not_addressed` - Qodo and CodeRabbit threads are resolved after a reply - Re-running `reviews post` is safe because it tracks `posted_at` and `resolved_at` - If some posts fail, the command prints an exact retry command you can run again `reviews post` also groups body-only comments by reviewer and posts one or more consolidated PR comments mentioning that reviewer. That is how outside-diff, nitpick, and duplicate comments are acknowledged. ### `reviews pending-fetch` and `reviews pending-update` Pending-review refinement uses a separate JSON format. The output file contains metadata, a flat `comments` list, and a truncated PR diff for context. ```265:277:myk_claude_tools/reviews/pending_fetch.py final_output: dict[str, Any] = { "metadata": { "owner": owner, "repo": repo, "pr_number": pr_number_int, "review_id": review_id, "username": username, "json_path": str(json_path), }, "comments": comments, "diff": diff, } ``` Each pending comment includes: - `id` - `path` - `line` - `side` - `body` - `diff_hunk` - `refined_body` - `status` The initial `status` is `pending`, and `refined_body` starts as `null`. 
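Looping back to the `reviews post` status table, the documented resolution behavior boils down to a small predicate. This is a sketch of the rules as stated above, not the repository's implementation (retry handling for `failed` posts is left out):

```python
def should_resolve(source: str, status: str) -> bool:
    """Decide whether a replied-to thread should also be resolved.

    Documented rules: qodo/coderabbit threads are always resolved after a
    reply; human threads are resolved only when the status is "addressed";
    pending comments are not processed at all.
    """
    if status == "pending":
        return False  # not processed yet, so nothing to resolve
    if source in ("qodo", "coderabbit"):
        return True  # bot threads are always resolved after replying
    # Human threads stay open for skipped/not_addressed so the reviewer
    # can follow up.
    return source == "human" and status == "addressed"


assert should_resolve("coderabbit", "skipped") is True
assert should_resolve("human", "not_addressed") is False
```

Encoding the table as a predicate also makes the human/bot asymmetry easy to unit-test in isolation.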
`reviews pending-update` is deliberately conservative: - It only updates comments whose `status` is `accepted` - It only updates when `refined_body` is non-empty - It skips updates if the refined text is unchanged from the original - It only submits the review when both conditions are true: - `metadata.submit_action` is set to `COMMENT`, `APPROVE`, or `REQUEST_CHANGES` - The CLI was run with `--submit` > **Warning:** `reviews store` deletes the source JSON file after it has been imported into `.claude/data/reviews.db`. If you want to keep a copy, duplicate it before running the store step. ## `db` commands The `db` group is the read-only analytics interface over the review history database. By default it opens: - `<repo-root>/.claude/data/reviews.db` You can override that with `--db-path` on every command. ### Available subcommands | Command | What it returns | |---------|-----------------| | `db stats` | Review stats grouped by source or reviewer | | `db patterns` | Recurring dismissed comment patterns | | `db dismissed` | Previously dismissed comments for a repository | | `db query` | A raw SQL query result | | `db find-similar` | The closest previously dismissed comment for a new path/body pair | ### Important safety rules `db query` is intentionally locked down: - It only accepts `SELECT` and `WITH` queries - It rejects multiple SQL statements - It blocks dangerous keywords such as `INSERT`, `UPDATE`, `DELETE`, `DROP`, `ALTER`, `CREATE`, `ATTACH`, `DETACH`, and `PRAGMA` That behavior is enforced in code and covered by tests. ### Actual stdin contract for `find-similar` The implementation expects one JSON object on stdin with `path` and `body` fields: ```163:172:myk_claude_tools/db/commands.py Reads JSON from stdin with 'path' and 'body' fields. Uses exact path match combined with body similarity (Jaccard word overlap). This is useful for auto-skip logic: if a similar comment was previously dismissed with a reason, the same reason may apply.
Examples: # Find similar comment echo '{"path": "foo.py", "body": "Add error handling..."}' | \ myk-claude-tools db find-similar --owner myk-org --repo claude-code-config --json ``` > **Warning:** `db find-similar` expects a single JSON object, not an array. The CLI implementation and tests both use one object with `path` and `body`. ### Database tables If you use `db query`, these are the key tables and columns you will interact with: | Table | Key columns | |-------|-------------| | `reviews` | `id`, `pr_number`, `owner`, `repo`, `commit_sha`, `created_at` | | `comments` | `review_id`, `source`, `thread_id`, `node_id`, `comment_id`, `author`, `path`, `line`, `body`, `priority`, `status`, `reply`, `skip_reason`, `posted_at`, `resolved_at`, `type` | The `type` column is especially useful when working with CodeRabbit body comments such as `outside_diff_comment`, `nitpick_comment`, and `duplicate_comment`. ## `release` commands The `release` group exposes the release workflow in four parts: | Command | What it does | |---------|--------------| | `release info` | Validates the repo state and returns commits since the last matching tag | | `release detect-versions` | Finds version files in the current repository | | `release bump-version` | Updates detected version strings in place | | `release create` | Creates a GitHub release from a changelog file | ### `release info` `release info` returns JSON with: - `metadata` - `validations` - `last_tag` - `all_tags` - `commits` - `commit_count` - `is_first_release` - `target_branch` - `tag_match` The `validations` block tells you whether the repo is ready to release: - On the target branch - Working tree clean - `git fetch` succeeded - Synced with remote If validation fails, the command still returns JSON, but it skips expensive tag and commit collection. > **Tip:** If you are on a branch named like `v2.10`, `release info` auto-detects it as the target branch and automatically narrows tag matching to `v2.10.*`. 
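That branch-based tag narrowing can be approximated with `fnmatch`. This is a sketch of the behavior described in the tip, not the repository's exact code; the helper name is hypothetical:

```python
from fnmatch import fnmatch


def narrow_tags(branch: str, tags: list[str]) -> list[str]:
    """If the branch looks like v<major>.<minor>, keep only matching patch tags."""
    parts = branch.removeprefix("v").split(".")
    if branch.startswith("v") and len(parts) == 2 and all(p.isdigit() for p in parts):
        # A branch like v2.10 narrows the candidate tags to v2.10.*
        return [t for t in tags if fnmatch(t, f"{branch}.*")]
    # Any other branch name leaves tag matching unrestricted.
    return tags


print(narrow_tags("v2.10", ["v2.10.0", "v2.10.7", "v2.9.3", "v3.0.0"]))
# ['v2.10.0', 'v2.10.7']
```

Note that `fnmatch` treats `.` literally, so `v2.100.1` would not sneak into the `v2.10.*` bucket.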
### `release detect-versions` Version-file detection covers several common ecosystems: ```167:174:myk_claude_tools/release/detect_versions.py _ROOT_SCANNERS: list[tuple[str, Callable[[Path], str | None], str]] = [ ("pyproject.toml", _parse_pyproject_toml, "pyproject"), ("package.json", _parse_package_json, "package_json"), ("setup.cfg", _parse_setup_cfg, "setup_cfg"), ("Cargo.toml", _parse_cargo_toml, "cargo"), ("build.gradle", _parse_gradle, "gradle"), ("build.gradle.kts", _parse_gradle, "gradle"), ] ``` It also scans Python `__version__` assignments in `__init__.py` and `version.py`. Tests confirm several important details: - `pyproject.toml` detection only reads `[project]`, not unrelated sections - `setup.cfg` detection only reads `[metadata]` - `Cargo.toml` detection only reads `[package]` - Dynamic `setup.cfg` versions such as `attr:` or `file:` are skipped - Common generated or vendor directories like `.venv`, `node_modules`, `.tox`, and `target` are ignored The command prints JSON in this shape: - `version_files`: an array of `{path, current_version, type}` - `count`: number of detected version files ### `release bump-version` `release bump-version` updates the files returned by `detect-versions`. Its output is machine-readable and reports: - `status` - `version` - `updated` - `skipped` - `error` on failure It also validates the version string before doing any file writes. > **Warning:** Pass `1.2.3`, not `v1.2.3`. The implementation rejects versions with a leading `v` and tells you to use the bare version number instead. 
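The leading-`v` rule can be mirrored with a small validator. This is a hedged sketch of the documented contract; the repository's actual validation logic and error messages may differ:

```python
import re

# Bare X.Y.Z form; intentionally strict for illustration.
_VERSION_RE = re.compile(r"^\d+\.\d+\.\d+$")


def validate_version(version: str) -> str:
    """Accept bare semantic versions like 1.2.3; reject tag-style v1.2.3."""
    if version.startswith("v"):
        raise ValueError("Use the bare version number (1.2.3), not a v-prefixed tag")
    if not _VERSION_RE.match(version):
        raise ValueError(f"Not a valid X.Y.Z version: {version!r}")
    return version


assert validate_version("1.2.3") == "1.2.3"
# validate_version("v1.2.3") would raise ValueError
```

Validating up front keeps `bump-version` from touching any files when the input is wrong, which matches the fail-fast behavior described above.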
Tests verify that `bump-version`: - Only updates the correct section in multi-section files - Preserves unrelated content - Fails fast when `--files` does not match detected version files - Returns skipped reasons instead of silently ignoring failures ### `release create` `release create` expects: - `owner/repo` - A tag such as `v1.2.3` - A path to a changelog file It calls `gh release create` and returns JSON with: - `status` - `tag` - `url` - `prerelease` - `draft` If the tag is not semantic-version-like, the command warns to stderr but still attempts the release. ## `coderabbit` commands The `coderabbit` group exposes two composable commands: | Command | What it does | |---------|--------------| | `coderabbit check` | Inspect the latest CodeRabbit summary comment and report whether the PR is rate-limited | | `coderabbit trigger` | Wait if requested, post `@coderabbitai review`, and poll until the review starts | `coderabbit check` returns one of these JSON shapes: - `{"rate_limited": false}` - `{"rate_limited": true, "wait_seconds": <seconds>, "comment_id": <id>}` The command parses CodeRabbit’s own comment body for the wait time, and the tests cover both minute/second and second-only formats. `coderabbit trigger` behaves like this: - Optional initial sleep via `--wait` - Post `@coderabbitai review` - Poll every 60 seconds - Give up after 10 attempts > **Tip:** The polling logic treats two consecutive “summary comment disappeared” checks as success. That matches CodeRabbit replacing the rate-limit comment when the new review begins. ## Slash commands and plugin interfaces Claude Code plugin commands live under `plugins/*/commands/*.md`.
Each command file begins with YAML frontmatter that defines the public interface seen in Claude Code: ```1:5:plugins/myk-github/commands/pr-review.md --- description: Review a GitHub PR and post inline comments on selected findings argument-hint: [PR_NUMBER|PR_URL] allowed-tools: Bash(myk-claude-tools:*), Bash(uv:*), Bash(git:*), Bash(gh:*), AskUserQuestion, Task --- ``` At the plugin level, metadata comes from `plugins/*/.claude-plugin/plugin.json`, and the root marketplace index comes from `.claude-plugin/marketplace.json`: ```1:24:.claude-plugin/marketplace.json { "name": "myk-org", "owner": { "name": "myk-org" }, "plugins": [ { "name": "myk-github", "source": "./plugins/myk-github", "description": "GitHub operations - PR reviews, releases, review handling, CodeRabbit rate limits", "version": "1.7.2" }, { "name": "myk-review", "source": "./plugins/myk-review", "description": "Local code review and review database operations", "version": "1.7.2" }, { "name": "myk-acpx", "source": "./plugins/myk-acpx", "description": "Multi-agent prompt execution via acpx (Agent Client Protocol)", ``` ### `myk-github` `myk-github` is the GitHub workflow plugin. Its public commands are: | Slash command | What it wraps | |---------------|---------------| | `/myk-github:pr-review` | `pr diff`, `pr claude-md`, review agents, and optionally `pr post-comment` | | `/myk-github:review-handler` | `reviews fetch`, local fixes, tests, `reviews post`, `reviews store`; optional `--autorabbit` polling loop | | `/myk-github:refine-review` | `reviews pending-fetch` plus a refinement and submission flow | | `/myk-github:release` | `release info`, `detect-versions`, `bump-version`, `release create` | | `/myk-github:coderabbit-rate-limit` | `coderabbit check` and `coderabbit trigger` | These commands are documented as full workflows, not just wrappers. 
That means their public interface includes: - Required prerequisites - Expected arguments - Temp-file conventions under `/tmp/claude/` - User-decision points - Allowed tool list ### `myk-review` `myk-review` provides two local workflows: | Slash command | What it does | |---------------|--------------| | `/myk-review:local [BRANCH]` | Review uncommitted changes or diff against a branch using three review agents in parallel | | `/myk-review:query-db ...` | Run the `db` analytics commands through Claude Code | `/myk-review:local` is intentionally simple: it chooses either `git diff HEAD` or `git diff "<branch>"...HEAD`, then sends that diff to the review agents. ### `myk-acpx` `myk-acpx` is different from the rest of the repo because it does not call `myk-claude-tools`. It is a slash-command interface for the external `acpx` CLI. Public inputs include: - One agent name or a comma-separated list of agents - Optional `--fix`, `--peer`, or `--exec` - Optional `--model <model-name>` - The prompt text Supported agent names come directly from the command file and include: - `pi` - `openclaw` - `codex` - `claude` - `gemini` - `cursor` - `copilot` - `droid` - `iflow` - `kilocode` - `kimi` - `kiro` - `opencode` - `qwen` Important validation rules: - `--fix`, `--peer`, and `--exec` are mutually exclusive in the combinations documented by the command - Multiple agents are allowed, but not with `--fix` - `--model` cannot be repeated - An agent name and prompt are both required In day-to-day use: - Default mode is a persistent, read-only session - `--exec` is one-shot and stateless - `--fix` allows file writes - `--peer` runs a review loop where the peer agent re-checks the code after every fix round ## Hooks and `settings.json` The checked-in `settings.json` is the central runtime configuration file for Claude Code.
It defines: - `permissions.allow` - `allowedTools` - `hooks` - `statusLine` - `enabledPlugins` - `extraKnownMarketplaces` - `env` The hook registration is explicit and machine-readable: ```25:55:settings.json "hooks": { "Notification": [ { "matcher": "", "hooks": [ { "type": "command", "command": "~/.claude/scripts/my-notifier.sh" } ] } ], "PreToolUse": [ { "matcher": "TodoWrite|Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" } ] }, { "matcher": "Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/git-protection.py" } ] }, ``` ### Hook events and contracts | Hook | Script or handler | Contract | |------|-------------------|----------| | `Notification` | `scripts/my-notifier.sh` | Reads JSON from stdin and expects a `.message` field | | `PreToolUse` | `scripts/rule-enforcer.py` | Reads hook JSON with `tool_name` and `tool_input`; prints deny JSON when blocking | | `PreToolUse` | `scripts/git-protection.py` | Same hook JSON shape; denies protected `git commit` and `git push` operations | | `PreToolUse` | Inline prompt gate in `settings.json` | Returns JSON with `decision: approve|block|ask` for destructive Bash commands | | `UserPromptSubmit` | `scripts/rule-injector.py` | Prints `hookSpecificOutput.additionalContext` | | `SessionStart` | `scripts/session-start-check.sh` | Prints a human-readable `MISSING_TOOLS_REPORT` when prerequisites are missing | ### `rule-enforcer.py` `rule-enforcer.py` blocks direct `python`, `python3`, `pip`, `pip3`, and `pre-commit` Bash commands. 
Instead, it instructs users and agents to use: - `uv run` - `uvx` - `prek` The deny payload is a standard Claude Code hook response: ```35:50:scripts/rule-enforcer.py # Block direct python/pip commands if tool_name == "Bash": command = tool_input.get("command", "") if is_forbidden_python_command(command): output = { "hookSpecificOutput": { "hookEventName": "PreToolUse", "permissionDecision": "deny", "permissionDecisionReason": "Direct python/pip commands are forbidden.", "additionalContext": ( "You attempted to run python/pip directly. Instead:\n" "1. Delegate Python tasks to the python-expert agent\n" "2. Use 'uv run script.py' to run Python scripts\n" "3. Use 'uvx package-name' to run package CLIs\n" ``` Tests confirm two user-visible details: - Allowed commands such as `uv run`, `uvx`, `git status`, and `ls` produce no stdout at all - The script fails open on malformed input or internal exceptions ### `git-protection.py` `git-protection.py` protects the repository against commits and pushes in the wrong place. It blocks: - `git commit` on `main` or `master` - `git push` on `main` or `master` - Commits or pushes on branches whose PR is already merged - Commits in detached HEAD state It allows `git commit --amend` when the branch is ahead of its remote. This hook is stricter than `rule-enforcer.py`: - It fails closed on internal errors - It checks GitHub PR merge state when possible - It falls back to local merge checks when needed > **Warning:** If you adopt this config as-is, direct commits and pushes to protected or already-merged branches are intentionally blocked by the hook layer, not just by convention. ### `session-start-check.sh` `session-start-check.sh` is the dependency-reporting hook. 
It checks for: - Critical: `uv` - Optional or situational: `gh`, `jq`, `gawk`, `prek`, `mcpl` - Required review plugins: `pr-review-toolkit`, `superpowers`, `feature-dev` If something is missing, it prints a `MISSING_TOOLS_REPORT:` block with installation guidance and still exits `0`, so it informs rather than blocks. ### `my-notifier.sh` `my-notifier.sh` is the desktop-notification interface. It: - Requires `jq` and `notify-send` - Reads stdin JSON - Extracts `.message` - Sends `notify-send --wait "Claude: <message>"` If the message is missing or the tools are absent, it exits non-zero. ### `statusLine` `settings.json` also configures a status-line command: `bash ~/.claude/statusline.sh`. That script reads JSON from stdin and builds a one-line status string from fields such as: - `.model.display_name` - `.workspace.current_dir` - `.context_window.used_percentage` - `.cost.total_lines_added` - `.cost.total_lines_removed` This is a JSON-based interface too, even though it is not a hook. > **Note:** The file also contains a maintenance reminder: when you add or change script entries, keep `permissions.allow` and `allowedTools` in sync. ## Review JSON files The repo’s review commands use local JSON files as a first-class interface. These files live under `/tmp/claude/`, and the implementations explicitly create the directory with `0700` permissions.
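The `0700` detail matters because review payloads carry code excerpts and repository metadata. A sketch of that convention follows; the `write_review_json` helper is hypothetical (the real commands manage `/tmp/claude/` themselves), and the demo uses a throwaway directory:

```python
import json
import os
import tempfile
from pathlib import Path


def write_review_json(payload: dict, path: Path) -> Path:
    """Write a review payload under a directory restricted to the current user."""
    path.parent.mkdir(parents=True, exist_ok=True)
    os.chmod(path.parent, 0o700)  # enforce 0700 regardless of the umask
    path.write_text(json.dumps(payload, indent=2))
    return path


# Demo against a throwaway directory; the real commands target /tmp/claude/.
base = Path(tempfile.mkdtemp()) / "claude"
out = write_review_json({"metadata": {"pr_number": 123}}, base / "pr-123-reviews.json")
print(oct(base.stat().st_mode & 0o777))  # 0o700
```

Calling `os.chmod` after `mkdir` is what makes the permission deterministic, since `mkdir`'s `mode` argument is filtered through the process umask.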
### Fetched review file `reviews fetch` writes a file like: - `/tmp/claude/pr-123-reviews.json` Key top-level fields: - `metadata.owner` - `metadata.repo` - `metadata.pr_number` - `metadata.json_path` - `human` - `qodo` - `coderabbit` Key per-comment fields: - `thread_id` - `node_id` - `comment_id` - `author` - `path` - `line` - `body` - `replies` - `source` - `priority` - `status` - `reply` Optional per-comment fields: - `skip_reason` - `original_status` - `is_auto_skipped` - `posted_at` - `resolved_at` - `type` ### Pending review file `reviews pending-fetch` writes a file like: - `/tmp/claude/pr-owner-repo-123-pending-review.json` Key top-level fields: - `metadata.owner` - `metadata.repo` - `metadata.pr_number` - `metadata.review_id` - `metadata.username` - `metadata.json_path` - `comments` - `diff` Key per-comment fields: - `id` - `path` - `line` - `side` - `body` - `diff_hunk` - `refined_body` - `status` The `diff` field is intentionally capped at 50,000 characters so the file remains manageable. ## Plugin metadata files The plugin layer uses two JSON metadata formats. ### Per-plugin `plugin.json` Each plugin defines: - `name` - `version` - `description` - `author` - `repository` - `license` - `keywords` That is the metadata Claude Code and marketplace tooling can display. ### Root marketplace manifest `.claude-plugin/marketplace.json` is the index for this repository’s plugin marketplace. It names the marketplace and points each plugin name at its source directory. For end users, that means the canonical public plugin names are: - `myk-github` - `myk-review` - `myk-acpx` ## Automation and validation surfaces This repository does not check in a GitHub Actions workflow or another CI pipeline definition. The automation surfaces you can inspect in-tree are local. 
### `tox.toml`

The test-runner configuration is intentionally small:

```1:7:tox.toml
skipsdist = true
envlist = ["unittests"]

[env.unittests]
description = "Run pytest tests"
deps = ["uv"]
commands = [["uv", "run", "--group", "tests", "pytest", "tests"]]
```

That means the supported local test entrypoint is:

- `tox` via `uv`
- Or the equivalent direct `uv run --group tests pytest tests`

### `.pre-commit-config.yaml`

The pre-commit configuration exposes another important interface surface. It includes hooks for:

- File sanity checks from `pre-commit-hooks`
- `flake8`
- `detect-secrets`
- `ruff`
- `ruff-format`
- `gitleaks`
- `mypy`
- `markdownlint`

Because `rule-enforcer.py` blocks direct `pre-commit` commands, this configuration is meant to be run through `prek` rather than `pre-commit`.

> **Note:** There is no checked-in `.github/workflows/` directory in this repository. If you are looking for automation behavior, the in-repo interfaces to inspect are `tox.toml`, `.pre-commit-config.yaml`, the release commands, and the Claude Code hooks in `settings.json`.
---

Source: slash-command-reference.md

# Slash Command Reference

This repository bundles eight slash commands across three plugins:

| Plugin | Command | Best for | Triggered workflow |
| --- | --- | --- | --- |
| `myk-github` | `/myk-github:coderabbit-rate-limit` | Waiting out a CodeRabbit cooldown | `coderabbit check` -> `coderabbit trigger` |
| `myk-github` | `/myk-github:pr-review` | Reviewing an open GitHub PR and posting selected findings | `pr diff` -> `pr claude-md` -> 3 reviewer tasks -> `pr post-comment` |
| `myk-github` | `/myk-github:refine-review` | Polishing your own pending GitHub review comments before submission | `reviews pending-fetch` -> AI refinement -> `reviews pending-update` |
| `myk-github` | `/myk-github:release` | Validating, versioning, and publishing a GitHub release | `release info` -> `release detect-versions` -> optional `release bump-version` -> `release create` |
| `myk-github` | `/myk-github:review-handler` | Working through human, Qodo, and CodeRabbit feedback end to end | `reviews fetch` -> decision/fix/test loop -> `reviews post` -> `reviews store` |
| `myk-review` | `/myk-review:local` | Reviewing local changes before or instead of a PR | `git diff` -> 3 reviewer tasks |
| `myk-review` | `/myk-review:query-db` | Querying stored review history and analytics | `db stats\|patterns\|dismissed\|query\|find-similar` |
| `myk-acpx` | `/myk-acpx:prompt` | Sending prompts to ACP-compatible coding agents through `acpx` | `acpx` session, exec, fix, or peer workflow |

Most of the GitHub and review commands are orchestration layers over the bundled helper CLI:

```toml
[project.scripts]
myk-claude-tools = "myk_claude_tools.cli:main"
```

The slash command metadata itself is defined in each command file.
A typical example looks like this:

```yaml
description: Review a GitHub PR and post inline comments on selected findings
argument-hint: [PR_NUMBER|PR_URL]
allowed-tools: Bash(myk-claude-tools:*), Bash(uv:*), Bash(git:*), Bash(gh:*), AskUserQuestion, Task
```

> **Note:** The `allowed-tools` values below come from each command's frontmatter. They describe the command's intended tool budget. Your local Claude Code installation still needs the matching executables and permissions.

## Shared Prerequisites

- `myk-claude-tools` backs most `myk-github` commands and `/myk-review:query-db`.
- `gh` is required for GitHub PR, comment, and release operations.
- `git` is required anywhere the workflow inspects or validates the local worktree.
- `acpx` plus the underlying agent CLI are required for `/myk-acpx:prompt`.
- `/myk-github:pr-review` and `/myk-review:local` both expect the reviewer agents `superpowers:code-reviewer`, `pr-review-toolkit:code-reviewer`, and `feature-dev:code-reviewer` to be available.

> **Tip:** When a command checks for `myk-claude-tools` and cannot find it, the command docs consistently instruct Claude to offer `uv tool install myk-claude-tools`.
## Allowed Tools Cheat Sheet

| Tool token | What it means in practice |
| --- | --- |
| `Bash(myk-claude-tools:*)` | Run the repo's helper CLI subcommands such as `pr diff`, `reviews fetch`, or `release info` |
| `Bash(gh:*)` | Call GitHub CLI for PRs, GraphQL, comments, and releases |
| `Bash(git:*)` | Inspect or update local git state |
| `Bash(acpx:*)` | Run ACP-compatible agent CLIs through `acpx` |
| `AskUserQuestion` | Require an explicit choice or confirmation from you |
| `Task` | Launch structured parallel review tasks |
| `Agent` | Delegate implementation or specialist analysis work |
| `Edit` / `Write` | Update temp handoff files, or in `/myk-acpx:prompt`, allow writable workflows |
| `Read`, `Glob`, `Grep` | Read and search the workspace, mainly for `/myk-acpx:prompt` |

## `myk-github`

The `myk-github` plugin bundles GitHub-facing workflows: PR review, review follow-up, release automation, and CodeRabbit cooldown handling.

### `/myk-github:coderabbit-rate-limit`

Use this when CodeRabbit has rate-limited a PR and you want Claude to wait, retrigger the review, and watch for it to start.

- `argument-hint`: `[PR_NUMBER|PR_URL]`
- `allowed-tools`: `Bash(myk-claude-tools:*)`, `Bash(uv:*)`, `Bash(git:*)`, `Bash(gh:*)`

Actual usage examples:

```text
/myk-github:coderabbit-rate-limit
/myk-github:coderabbit-rate-limit 123
/myk-github:coderabbit-rate-limit https://github.com/owner/repo/pull/123
```

What it does:

1. Resolves the PR from a URL, a PR number, or the current branch.
2. Runs the helper CLI to check CodeRabbit's current status.
3. If the PR is rate-limited, adds a 30-second safety buffer and triggers a fresh `@coderabbitai review`.
4. Polls every 60 seconds for up to 10 minutes to confirm the review has started.
The core helper calls are exactly these:

```bash
myk-claude-tools coderabbit check
myk-claude-tools coderabbit trigger --wait
```

A few implementation details matter here:

- The rate-limit detector looks for CodeRabbit's summary comment on the PR and parses the cooldown text from that comment.
- The trigger flow treats two consecutive "summary comment disappeared" checks as success, because that usually means CodeRabbit replaced the summary while starting a new review.
- If the PR is not rate-limited, the command exits early instead of waiting.

> **Warning:** This workflow depends on CodeRabbit's summary comment being present on the PR. If that comment is missing, the check step fails instead of guessing.

### `/myk-github:pr-review`

Use this when you want Claude to review an actual GitHub PR, merge findings from three review agents, and post only the findings you approve.

- `argument-hint`: `[PR_NUMBER|PR_URL]`
- `allowed-tools`: `Bash(myk-claude-tools:*)`, `Bash(uv:*)`, `Bash(git:*)`, `Bash(gh:*)`, `AskUserQuestion`, `Task`

Actual usage examples:

```text
/myk-github:pr-review
/myk-github:pr-review 123
/myk-github:pr-review https://github.com/owner/repo/pull/123
```

What it does:

1. Detects the PR from the current branch when you do not pass an argument.
2. Fetches PR metadata, full diff text, and changed-file data through `myk-claude-tools pr diff`.
3. Fetches project rules through `myk-claude-tools pr claude-md`. The helper checks local `CLAUDE.md` or `.claude/CLAUDE.md` first when you are already in the target repo, then falls back to GitHub.
4. Runs three review agents in parallel and merges their findings.
5. Shows the findings grouped by severity and asks which ones to post.
6. Posts the selected findings back to GitHub as a single review.
The reviewer trio is defined directly in the command workflow:

```text
- superpowers:code-reviewer
- pr-review-toolkit:code-reviewer
- feature-dev:code-reviewer
```

One important detail for fork PRs: the command uses `gh repo view` to determine the base repository context. That avoids accidentally treating the fork as the target repo.

When the posting step runs, it uses the helper CLI's JSON contract for inline review comments:

```python
JSON Input Format (array of comments):
[
    {
        "path": "src/main.py",
        "line": 42,
        "body": "### [CRITICAL] SQL Injection\n\nDescription..."
    },
    {
        "path": "src/utils.py",
        "line": 15,
        "body": "### [WARNING] Missing error handling\n\nDescription..."
    }
]

Severity Markers:
- ### [CRITICAL] Title - For critical security/functionality issues
- ### [WARNING] Title - For important but non-critical issues
- ### [SUGGESTION] Title - For code improvements and suggestions
```

That means the final post back to GitHub is not just free-form prose. It is a structured review with per-line comments and a summary body.

> **Tip:** Inline review comments only work on lines that are actually part of the PR diff. If posting fails, the helper CLI specifically points to stale line numbers, wrong file paths, or a stale PR head SHA as the most common causes.

### `/myk-github:refine-review`

Use this when you have already started a review in GitHub, added draft comments, and want Claude to polish those comments before you submit them.

- `argument-hint`: ``
- `allowed-tools`: `Bash(myk-claude-tools:*)`, `Bash(uv:*)`, `AskUserQuestion`, `Edit(/tmp/claude/**)`, `Write(/tmp/claude/**)`

Actual usage example:

```text
/myk-github:refine-review https://github.com/owner/repo/pull/123
```

What it does:

1. Fetches your own pending review with `myk-claude-tools reviews pending-fetch ""`.
2. Loads the pending review comments plus PR diff context from a temp JSON file.
3. Generates refined wording for each comment while preserving the original technical intent.
4.
Shows you each original comment alongside the refined version.
5. Lets you accept all, accept specific comments, keep originals, cancel, or replace individual comments with custom text.
6. Updates the temp JSON and runs `myk-claude-tools reviews pending-update`, optionally with `--submit`.

The backing helper calls are straightforward:

```bash
myk-claude-tools reviews pending-fetch ""
myk-claude-tools reviews pending-update "" --submit
```

The workflow is intentionally conservative:

- It only updates `refined_body` and `status` for each comment during the accept/reject step.
- It only submits the review if you explicitly choose a submit action.
- The lower-level fetcher only works against the authenticated user's latest `PENDING` review on that PR.

> **Warning:** This command requires a full PR URL. A bare PR number is not enough.

> **Note:** The pending-review fetcher stores diff context in the temp JSON, but it truncates very large diffs after 50,000 characters. On very large PRs, refinement happens with shortened context instead of the full diff.

### `/myk-github:release`

Use this when you want Claude to validate the repo state, generate a conventional-commit changelog, optionally bump version files, and publish a GitHub release.

- `argument-hint`: `[--dry-run] [--prerelease] [--draft] [--target ] [--tag-match ]`
- `allowed-tools`: `Bash(myk-claude-tools:*)`, `Bash(uv:*)`, `Bash(git:*)`, `Bash(gh:*)`, `AskUserQuestion`

Actual usage examples:

```text
/myk-github:release
/myk-github:release --dry-run
/myk-github:release --prerelease
/myk-github:release --draft
```

What it does:

1. Runs `myk-claude-tools release info` to validate branch, worktree cleanliness, and sync with remote.
2. Runs `myk-claude-tools release detect-versions` to discover version files in the current repository.
3. Parses commits since the last matching tag and proposes a major, minor, or patch bump using conventional-commit rules.
4.
Shows you the proposed version and changelog, and lets you override the bump type or exclude detected version files.
5. If version files are being updated, runs `myk-claude-tools release bump-version`.
6. Creates a bump branch, commits, pushes, opens a PR, and merges it.
7. Creates the GitHub release with `myk-claude-tools release create`.

The release helper exposes these subcommands:

```bash
myk-claude-tools release info --target --tag-match
myk-claude-tools release detect-versions
myk-claude-tools release bump-version --files --files
myk-claude-tools release create ""
```

Version detection is broader than just Python packages. The implementation scans these root-level files:

```python
_ROOT_SCANNERS = [
    ("pyproject.toml", _parse_pyproject_toml, "pyproject"),
    ("package.json", _parse_package_json, "package_json"),
    ("setup.cfg", _parse_setup_cfg, "setup_cfg"),
    ("Cargo.toml", _parse_cargo_toml, "cargo"),
    ("build.gradle", _parse_gradle, "gradle"),
    ("build.gradle.kts", _parse_gradle, "gradle"),
]
```

It also searches Python `__init__.py` and `version.py` files for `__version__`.

Important behaviors to know:

- If you are already on a version branch such as `v2.10`, the validation step auto-detects that branch as the target and scopes tag discovery to `v2.10.*`.
- The bump step expects a version without the `v` prefix, such as `1.2.0`.
- If `uv.lock` exists, the workflow regenerates it after bumping version files.
- The low-level `release create` helper warns if the tag does not look like semantic versioning (`vX.Y.Z`), but it does not hard-block the release.

User choices during approval include:

- `yes` to continue with the proposed version and all detected files
- `major`, `minor`, or `patch` to override the proposed bump
- `exclude N` to remove one detected version file from the bump
- `no` to cancel the release

> **Warning:** This workflow can modify version files, create a branch, push commits, open and merge a PR, and publish a GitHub release.
> Use `--dry-run` first if you want a preview without creating anything.

### `/myk-github:review-handler`

Use this when you want the full PR review follow-up workflow: fetch all review feedback, choose what to address, delegate fixes, run tests, reply to reviewers, and store the outcome in the local review database.

- `argument-hint`: `[--autorabbit] [REVIEW_URL]`
- `allowed-tools`: `Bash(myk-claude-tools:*)`, `Bash(uv:*)`, `Bash(git:*)`, `Bash(gh:*)`, `AskUserQuestion`, `Task`, `Agent`

Actual usage examples:

```text
/myk-github:review-handler
/myk-github:review-handler https://github.com/owner/repo/pull/123#pullrequestreview-456
/myk-github:review-handler --autorabbit
```

What it does:

1. Runs `myk-claude-tools reviews fetch`, usually against the current branch's PR.
2. Groups unresolved feedback by source: human, Qodo, and CodeRabbit.
3. Shows every item in mandatory tables, including auto-skipped items, and asks you what to address.
4. Delegates approved fixes to specialist agents with the full review thread, not a summary.
5. Requires a fully green test run before moving on.
6. Optionally asks whether to commit and push your changes.
7. Posts replies with `myk-claude-tools reviews post`.
8. Stores the finished review record in `.claude/data/reviews.db` with `myk-claude-tools reviews store`.
The table format is defined directly in the command:

```text
## Review Items: {source} ({total} total, {auto_skipped} auto-skipped)

| # | Priority | File | Line | Summary | Status |
|---|----------|------|------|---------|--------|
| 1 | HIGH | src/storage.py | 231 | Backfill destroys historical chronology | Pending |
| 2 | MEDIUM | src/html_report.py | 1141 | Add/delete leaves badges stale | Pending |
| 3 | LOW | src/utils.py | 42 | Unused import | Auto-skipped (skipped): "style only" |
| 4 | LOW | src/config.py | 15 | Missing validation | Auto-skipped (addressed): "added in prev PR" |
```

The response options are also part of the command contract:

```text
Respond with:
- 'yes' / 'no' (per item number — if 'no', ask for a reason)
- 'all' — address all remaining pending items
- 'skip human/qodo/coderabbit' — skip remaining from that source (ask for a reason)
- 'skip ai' — skip all AI sources (qodo + coderabbit) (ask for a reason)
```

A few user-facing details are easy to miss:

- `reviews fetch` auto-detects the PR from the current branch and can fall back to an `upstream` remote when needed.
- CodeRabbit review bodies can contain outside-diff, nitpick, and duplicate comments that do not have normal GitHub review threads. The post step handles those through consolidated PR comments instead of per-thread replies.
- Human threads with `skipped` or `not_addressed` status stay unresolved on purpose, so the reviewer can follow up. Qodo and CodeRabbit replies are resolved after posting.
- After a successful `reviews store`, the JSON handoff file is deleted and the database entry is appended rather than updated.

The storage step is described in the helper CLI itself:

```python
@reviews.command("store")
@click.argument("json_path")
def reviews_store(json_path: str) -> None:
    """Store completed review to database.

    Stores the completed review JSON to SQLite database for analytics.
    The database is stored at: /.claude/data/reviews.db

    This command should run AFTER the review flow completes.
    The JSON file is deleted after successful storage.
    """
```

> **Note:** Previous skip decisions are used for auto-skip suggestions, but those items are still shown in the table so you can override them.

> **Tip:** If `myk-claude-tools reviews post` reports failures, rerun the exact retry command it prints. The post helper is designed to retry only entries that have not been successfully posted yet.

> **Warning:** `--autorabbit` does not stop on its own. It waits 5 minutes between checks, handles CodeRabbit rate limits automatically, and keeps looping until you stop it.

## `myk-review`

The `myk-review` plugin covers local review before GitHub and post-review analytics after GitHub.

### `/myk-review:local`

Use this when you want a three-agent review of your local changes without opening or touching a GitHub PR.

- `argument-hint`: `[BRANCH]`
- `allowed-tools`: `Bash(git:*)`, `Task`, `AskUserQuestion`

Actual usage examples:

```text
/myk-review:local
/myk-review:local main
/myk-review:local feature/branch
```

What it does:

1. If you pass a branch, it reviews `git diff "$ARGUMENTS"...HEAD`.
2. If you do not pass a branch, it reviews `git diff HEAD`, which covers tracked staged and unstaged changes.
3. It sends the diff to the same three review agents used by `/myk-github:pr-review`.
4. It merges and deduplicates findings and presents them as critical issues, warnings, and suggestions.

The diff commands are taken directly from the command file:

```bash
git diff "$ARGUMENTS"...HEAD
git diff HEAD
```

This command does not post comments anywhere. It is purely a local review surface.

> **Tip:** Use `/myk-review:local` before opening a PR, and `/myk-github:pr-review` after the PR exists. The review model is similar, but the data source is different.
### `/myk-review:query-db`

Use this when you want analytics and historical context from the review database created by `/myk-github:review-handler`.

- `argument-hint`: `[stats|patterns|dismissed|query|find-similar] [OPTIONS]`
- `allowed-tools`: `Bash(myk-claude-tools:*)`, `Bash(uv:*)`

Common slash-command examples from the command definition:

```text
/myk-review:query-db stats --by-source
/myk-review:query-db stats --by-reviewer
/myk-review:query-db patterns --min 2
/myk-review:query-db dismissed --owner X --repo Y
/myk-review:query-db query "SELECT * FROM comments WHERE status='skipped' LIMIT 10"
```

Subcommands:

| Subcommand | What it does |
| --- | --- |
| `stats` | Shows addressed rates by source or by reviewer |
| `patterns` | Finds recurring dismissed comments that look like repeat feedback |
| `dismissed` | Returns dismissed review comments for one repo |
| `query` | Runs a raw read-only SQL query |
| `find-similar` | Looks for a previously dismissed comment with the same path and similar body text |

The database location is fixed by the helper code:

- Default path: `/.claude/data/reviews.db`
- Override: `--db-path`

A few behavior details are worth knowing:

- `stats` defaults to `--by-source` if you do not pass `--by-source` or `--by-reviewer`.
- `--by-source` and `--by-reviewer` are mutually exclusive.
- `patterns` groups recurring comments by path plus approximate body similarity, not only exact string matches.
- `find-similar` requires both `--owner` and `--repo`, and it expects a single JSON object on stdin.
The actual helper example for `find-similar` is:

```bash
echo '{"path": "foo.py", "body": "Add error handling..."}' | \
  myk-claude-tools db find-similar --owner myk-org --repo claude-code-config --json
```

Read-only SQL is enforced in code:

```python
if not sql_upper.startswith(("SELECT", "WITH")):
    raise ValueError("Only SELECT/CTE queries are allowed for safety")
```

That means `INSERT`, `UPDATE`, `DELETE`, `DROP`, `ALTER`, and other write operations are rejected.

> **Warning:** `query` is intentionally read-only. It is for analytics and exploration, not for editing the review database.

> **Note:** The `dismissed` view intentionally includes `not_addressed` and `skipped` comments, plus addressed body-only comments that do not have normal GitHub review threads. That behavior supports later auto-skip decisions.

## `myk-acpx`

The `myk-acpx` plugin is the bridge to `acpx`, which in turn can drive multiple ACP-compatible coding agents from one slash command.

### `/myk-acpx:prompt`

Use this when you want to send a prompt to one or more external coding agents such as Codex, Gemini, Cursor, or Claude through `acpx`.

- `argument-hint`: `[,agent2,...]
[--fix | --peer | --exec] [--model ] `
- `allowed-tools`: `Bash(acpx:*)`, `Bash(git:*)`, `AskUserQuestion`, `Agent`, `Edit`, `Write`, `Read`, `Glob`, `Grep`

Actual usage examples:

```text
/myk-acpx:prompt codex fix the tests
/myk-acpx:prompt cursor review this code
/myk-acpx:prompt gemini explain this function
/myk-acpx:prompt codex --exec summarize this repo
/myk-acpx:prompt codex --model o3-pro review the architecture
/myk-acpx:prompt codex --fix fix the code quality issues
/myk-acpx:prompt gemini --peer review this code
/myk-acpx:prompt codex --peer --model o3-pro review the architecture
/myk-acpx:prompt cursor,codex review this code
/myk-acpx:prompt cursor,gemini,codex --peer review the architecture
```

Supported agents are declared in the command file:

| Agent | Wraps |
| --- | --- |
| `pi` | Pi Coding Agent |
| `openclaw` | OpenClaw ACP bridge |
| `codex` | Codex CLI (OpenAI) |
| `claude` | Claude Code |
| `gemini` | Gemini CLI |
| `cursor` | Cursor CLI |
| `copilot` | GitHub Copilot CLI |
| `droid` | Factory Droid |
| `iflow` | iFlow CLI |
| `kilocode` | Kilocode |
| `kimi` | Kimi CLI |
| `kiro` | Kiro CLI |
| `opencode` | OpenCode |
| `qwen` | Qwen Code |

Modes:

| Mode | Flag | Behavior |
| --- | --- | --- |
| Session | none | Read-only by default, with session persistence |
| Exec | `--exec` | Read-only, one-shot, no session persistence |
| Fix | `--fix` | Writable run, single agent only |
| Peer | `--peer` | Read-only AI-to-AI review loop |

The mode-specific `acpx` calls are defined like this:

```bash
acpx --approve-reads --non-interactive-permissions fail exec ''
acpx --approve-all ''
acpx --approve-reads --non-interactive-permissions fail ''
```

What it does:

1. Checks whether `acpx` is installed and offers `npm install -g acpx@latest` if needed.
2. Validates the agent list and the mode flags.
3. In default session mode, runs `acpx sessions ensure` and falls back to `sessions new` when needed.
4. In `--fix` or `--peer` mode, checks git state first and offers a checkpoint commit if the worktree is dirty.
5. Runs the selected agent or agents.
6. In `--fix` mode, reads the resulting diff and summarizes file changes.
7. In `--peer` mode, loops until the peer agent confirms there are no remaining actionable issues, or the discussion is recorded as an unresolved disagreement.

Important flag rules:

- `--fix`, `--peer`, and `--exec` are mutually exclusive.
- `--fix` can only be used with a single agent.
- `--model` can only appear once.
- If no prompt is provided, the command aborts with a usage message.

Two implementation details are especially useful:

- In non-fix modes, the command appends a read-only guard to the prompt so the target agent does not try to modify files.
- In `--peer` mode, the workflow includes project `CLAUDE.md` conventions in the peer-review framing when a `CLAUDE.md` file exists.

> **Tip:** Use session mode when you expect follow-up prompts in the same repo. Use `--exec` when you want a one-shot answer with no session state.

> **Warning:** `--fix` can modify files and is single-agent only. In a dirty git worktree, the command may offer to create a checkpoint commit before it proceeds.

> **Note:** If `acpx` session management fails with the known session bugs handled by the command, the workflow falls back to one-shot exec mode instead of simply giving up.

---

Source: reviews-cli-reference.md

# Reviews CLI Reference

`myk-claude-tools reviews` supports two different workflows:

1. Handle feedback that already exists on a pull request: `fetch` -> edit JSON -> `post` -> `store`.
2. Refine your own draft GitHub review before submitting it: `pending-fetch` -> edit JSON -> `pending-update`.

> **Warning:** `reviews post` only works with JSON created by `reviews fetch`. `reviews pending-update` only works with JSON created by `reviews pending-fetch`. The two file formats are different.
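Because the two formats share nothing but a `metadata` block, they can be told apart by their other top-level keys. This is an illustrative guard, not part of the CLI; the key sets come from the documented file formats.

```python
import json
from pathlib import Path


def review_json_kind(path: str) -> str:
    """Classify a review JSON file as 'fetched' or 'pending'.

    Per the documented formats: `reviews fetch` output has top-level
    `human`/`qodo`/`coderabbit` arrays, while `reviews pending-fetch`
    output has top-level `comments` and `diff`.
    """
    data = json.loads(Path(path).read_text())
    if {"human", "qodo", "coderabbit"} <= data.keys():
        return "fetched"
    if {"comments", "diff"} <= data.keys():
        return "pending"
    raise ValueError(f"{path} is not a recognized review JSON file")
```

A check like this is a cheap way to fail fast before handing the wrong file to `reviews post` or `reviews pending-update`.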
## At a Glance

| Command | Use it when you want to | Input | What it writes |
| --- | --- | --- | --- |
| `myk-claude-tools reviews fetch [REVIEW_URL]` | Pull unresolved PR review feedback into a machine-editable file | Optional GitHub PR/review URL | A temp JSON file and the full JSON on stdout |
| `myk-claude-tools reviews post JSON_PATH` | Post replies and resolve review threads from a fetched JSON file | JSON from `reviews fetch` | GitHub replies/resolutions and updated timestamps in the same JSON file |
| `myk-claude-tools reviews pending-fetch PR_URL` | Pull your own pending GitHub review into a refinement file | Required PR URL | A temp JSON file and its path on stdout |
| `myk-claude-tools reviews pending-update JSON_PATH [--submit]` | Push refined pending-review comments back to GitHub | JSON from `reviews pending-fetch` | Updated GitHub review comments, and optionally a submitted review |
| `myk-claude-tools reviews store JSON_PATH` | Archive a completed review run for analytics and later auto-skip behavior | JSON from `reviews fetch` after posting | Rows in `.claude/data/reviews.db`, then deletes the JSON file |

## Requirements and Paths

- `gh` is required for all GitHub operations.
- `git` is required by `reviews fetch`, `reviews pending-fetch`, and `reviews store`.
- Authentication comes from your normal GitHub CLI session. These commands shell out to `gh`; they do not read GitHub tokens directly.
- Temp review files are written under `$TMPDIR/claude` when `TMPDIR` is set, otherwise under the system temp directory.
- `reviews store` writes to `/.claude/data/reviews.db`.
```858:866:myk_claude_tools/reviews/fetch.py
tmp_base = Path(os.environ.get("TMPDIR") or tempfile.gettempdir())
out_dir = tmp_base / "claude"
out_dir.mkdir(parents=True, exist_ok=True, mode=0o700)
try:
    out_dir.chmod(0o700)
except OSError as e:
    print_stderr(f"Warning: unable to set permissions on {out_dir}: {e}")
json_path = out_dir / f"pr-{pr_number}-reviews.json"
```

> **Note:** The CLI tries to create temp and database directories with `0700` permissions, which is useful on shared machines.

## `reviews fetch`

**Syntax:** `myk-claude-tools reviews fetch [REVIEW_URL]`

Use `fetch` when you want a full, structured view of unresolved PR feedback that you can review, enrich, and feed back into `reviews post`.

```54:60:plugins/myk-github/commands/review-handler.md
myk-claude-tools reviews fetch $ARGUMENTS
# ... later in the same file ...
myk-claude-tools reviews fetch
```

What `fetch` does:

- If you pass a valid GitHub PR URL, it extracts `owner`, `repo`, and `pr_number` directly from that URL.
- If you do not pass a valid PR URL, it falls back to the current branch and asks GitHub which open PR matches that branch.
- If an `upstream` remote exists, it checks that too, which is helpful in fork-based workflows.
- It fetches unresolved GitHub review threads through GraphQL.
- It also parses CodeRabbit comments that are embedded in review bodies rather than exposed as normal review threads.
- It groups everything into `human`, `qodo`, and `coderabbit`.
- It enriches each item with `source`, `priority`, `reply`, and `status`.

The generated file is the input for `reviews post`.
The filename pattern is:

- `pr--reviews.json`

What you should expect in the output:

- `metadata.owner`, `metadata.repo`, `metadata.pr_number`, and `metadata.json_path`
- Arrays named `human`, `qodo`, and `coderabbit`
- A default `status` of `pending`
- A default `reply` of `null`

`priority` is heuristic rather than GitHub-native:

- Comments mentioning security, bugs, crashes, or other critical language become `HIGH`.
- Comments mentioning style, formatting, nitpicks, or minor cleanup become `LOW`.
- Everything else defaults to `MEDIUM`.

> **Note:** If `.claude/data/reviews.db` already exists, `fetch` can auto-skip previously dismissed similar comments from the same repository and include the skip reason directly in the JSON.

> **Warning:** If `fetch` cannot infer a PR from the current branch, it exits. Detached HEAD is not supported for branch-based detection.

> **Tip:** A review URL is optional, but it is useful when you want extra context from a specific review or discussion link such as `#pullrequestreview-...` or `#discussion_r...`.

## `reviews post`

**Syntax:** `myk-claude-tools reviews post JSON_PATH`

Use `post` after you have reviewed the JSON from `fetch` and filled in the decision fields you want GitHub to receive.

```239:260:plugins/myk-github/commands/review-handler.md
myk-claude-tools reviews post {json_path}
# ... after output verification ...
myk-claude-tools reviews store {json_path}
```

The fields you usually edit before running `post` are:

| Field | Meaning |
| --- | --- |
| `status` | One of `addressed`, `not_addressed`, `skipped`, `pending`, or `failed` |
| `reply` | The text to post back to GitHub |
| `skip_reason` | Optional explicit reason for a skipped item |

What `post` does with each status:

| Status | Human review thread | Qodo/CodeRabbit review thread | Body-embedded CodeRabbit comment |
| --- | --- | --- | --- |
| `addressed` | Posts a reply and resolves the thread | Posts a reply and resolves the thread | Includes it in a consolidated PR comment to the reviewer |
| `not_addressed` | Posts a reply and leaves the thread open | Posts a reply and resolves the thread | Includes it in a consolidated PR comment to the reviewer |
| `skipped` | Posts a reply and leaves the thread open | Posts a reply and resolves the thread | Includes it in a consolidated PR comment to the reviewer |
| `pending` | Ignores it | Ignores it | Ignores it |
| `failed` | Retries it on the next run | Retries it on the next run | Treats it as eligible for a consolidated retry post |

A few important behaviors are easy to miss:

- If a thread has no `thread_id` but does have a `node_id`, `post` tries to look up the missing thread ID before replying.
- CodeRabbit `outside_diff_comment`, `nitpick_comment`, and `duplicate_comment` entries do not have normal GitHub review threads, so `post` groups them by reviewer and posts one or more consolidated PR comments instead.
- Very large replies are truncated before posting, and large consolidated body-comment replies are split into multiple PR comments.
- After a successful run, the tool updates the JSON with `posted_at` and `resolved_at` timestamps.

> **Tip:** Re-running `reviews post` is safe. Entries with `posted_at` are skipped, and entries with `posted_at` but no `resolved_at` are retried as resolve-only operations.
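The resume behavior in that tip can be sketched as a per-entry decision. The function name and return labels are illustrative; the timestamp rules are the ones documented for re-runs.

```python
def post_action(entry: dict) -> str:
    """Decide what a re-run of `reviews post` does with one entry.

    Documented behavior: entries with both timestamps are skipped,
    entries posted but not yet resolved get a resolve-only retry,
    and everything else gets a full post attempt.
    """
    if entry.get("posted_at") and entry.get("resolved_at"):
        return "skip"
    if entry.get("posted_at"):
        return "resolve-only"
    return "post"
```

Note this sketch covers only the timestamp logic; the real tool additionally honors the `status` field (for example, `pending` entries are ignored entirely).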
> **Warning:** If any post fails, the command exits non-zero and prints an `ACTION REQUIRED` retry command to stdout. Run that retry before moving on to `reviews store`.

## `reviews pending-fetch`

**Syntax:** `myk-claude-tools reviews pending-fetch PR_URL`

Use `pending-fetch` when you already started a review on GitHub, saved it as a pending review, and want to polish those draft comments locally before submitting them.

```40:41:plugins/myk-github/commands/refine-review.md
myk-claude-tools reviews pending-fetch ""
```

What `pending-fetch` does:

- Parses the PR URL and refuses to continue if it is not a GitHub pull request URL.
- Looks up the authenticated `gh` user.
- Fetches that user's `PENDING` review on the PR.
- If more than one pending review exists, it uses the most recent one.
- Fetches all comments from that pending review.
- Fetches the PR diff for context and truncates it to 50,000 characters if needed.
- Saves the result to a temp JSON file and prints only the file path to stdout.

The filename pattern is:

- `pr----pending-review.json`

The generated JSON includes metadata, comments, and the diff:

```266:277:myk_claude_tools/reviews/pending_fetch.py
final_output: dict[str, Any] = {
    "metadata": {
        "owner": owner,
        "repo": repo,
        "pr_number": pr_number_int,
        "review_id": review_id,
        "username": username,
        "json_path": str(json_path),
    },
    "comments": comments,
    "diff": diff,
}
```

Each comment includes these user-facing fields:

- `id`
- `path`
- `line`
- `side`
- `body`
- `diff_hunk`
- `refined_body`
- `status`

> **Warning:** `pending-fetch` only looks for the authenticated `gh` user's pending review. If `gh` is logged into the wrong account, the command will not find the review you expect.

> **Warning:** If no pending review exists yet, the command exits and tells you to start a review on GitHub first by adding comments without submitting.
## `reviews pending-update`

**Syntax:** `myk-claude-tools reviews pending-update JSON_PATH [--submit]`

Use `pending-update` after you have refined one or more comments in the JSON from `pending-fetch`.

```132:138:plugins/myk-github/commands/refine-review.md
myk-claude-tools reviews pending-update "{json_path}" --submit
# ... or keep the review pending ...
myk-claude-tools reviews pending-update "{json_path}"
```

This command only updates comments that meet all of these conditions:

- `status` is exactly `accepted`
- `refined_body` is non-empty
- `refined_body` is actually different from the original `body`

The JSON format it expects includes optional submission metadata:

```7:26:myk_claude_tools/reviews/pending_update.py
{
    "metadata": {
        "owner": "...",
        "repo": "...",
        "pr_number": 123,
        "review_id": 456,
        "submit_action": "COMMENT",  # optional
        "submit_summary": "Summary text"  # optional
    },
    "comments": [
        {
            "id": 789,
            "path": "src/main.py",
            "line": 42,
            "body": "original comment",
            "refined_body": "refined version",
            "status": "accepted"
        }
    ]
}
```

Valid `submit_action` values are:

| Value | Meaning |
| --- | --- |
| `COMMENT` | Submit the pending review as general feedback |
| `APPROVE` | Approve the PR |
| `REQUEST_CHANGES` | Submit the review as a request for changes |

> **Note:** Submitting a review is a two-part opt-in. You must set `metadata.submit_action` in the JSON and also pass `--submit` on the command line. Providing only one of them is not enough.

> **Tip:** If you want to refine comments now but keep the review pending on GitHub, leave `submit_action` unset or run the command without `--submit`.

> **Warning:** If GitHub returns `404` while updating a comment, the command aborts the remaining updates because the pending review may already have been submitted or deleted elsewhere.
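The three update conditions can be expressed as a single predicate. This is a re-statement of the documented rules, not the tool's actual code:

```python
def eligible_for_update(comment: dict) -> bool:
    """Mirror of the documented `pending-update` conditions:
    accepted status, non-empty refined_body, and a real change."""
    refined = comment.get("refined_body") or ""
    return (
        comment.get("status") == "accepted"
        and bool(refined.strip())
        and refined != comment.get("body")
    )

comments = [
    {"id": 1, "body": "a", "refined_body": "a (clearer)", "status": "accepted"},  # updated
    {"id": 2, "body": "b", "refined_body": "b", "status": "accepted"},            # body unchanged -> skipped
    {"id": 3, "body": "c", "refined_body": "c (better)", "status": "pending"},    # not accepted -> skipped
]
updated_ids = [c["id"] for c in comments if eligible_for_update(c)]
print(updated_ids)  # -> [1]
```

In other words, leaving `status` as `pending` or leaving `refined_body` identical to `body` is a safe way to opt a comment out of the update.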
## `reviews store` **Syntax:** `myk-claude-tools reviews store JSON_PATH` Use `store` at the end of the review-thread workflow, after `reviews post` has finished successfully and the JSON reflects the final timestamps and statuses. `store` does three things: - Writes a new review record to the local SQLite database. - Writes one comment row for every item in `human`, `qodo`, and `coderabbit`. - Deletes the JSON file after a successful import. It records the current commit SHA from the current checkout, so the stored review is tied to a specific code state. The storage model is append-only: if you run `store` again for the same PR later, it creates another review record instead of overwriting the old one. ```214:260:myk_claude_tools/reviews/store.py db_path = project_root / ".claude" / "data" / "reviews.db" # ... insert a new review row and one row per comment ... conn.commit() # Delete JSON file after successful storage json_path.unlink() ``` The stored comment data includes, among other things: - Source (`human`, `qodo`, or `coderabbit`) - Thread and comment identifiers - Author, file path, and line number - Comment body - Priority - Status - Reply text - Skip reason - `posted_at` and `resolved_at` - Optional comment type such as `outside_diff_comment`, `nitpick_comment`, or `duplicate_comment` > **Tip:** `store` is what makes later `reviews fetch` runs smarter. `fetch` can use this database to auto-skip previously dismissed comments. > **Warning:** `reviews store` deletes the JSON file after a successful import. Run it last. > **Warning:** The database path is based on the current git root, not on the PR URL inside the JSON. Run it from the checkout that should own the stored review data. 
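To get a feel for what the stored history can answer, here is a sketch against an in-memory database with an assumed, simplified schema. The real `reviews.db` column layout may differ, and the documented `db stats` command is the supported interface; this only illustrates the kind of aggregate the data supports:

```python
import sqlite3

# Assumed, simplified schema -- the real reviews.db columns may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (source TEXT, status TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO comments VALUES (?, ?, ?)",
    [
        ("human", "addressed", "..."),
        ("human", "skipped", "..."),
        ("qodo", "skipped", "..."),
        ("coderabbit", "addressed", "..."),
    ],
)

# Per-source outcome counts: the kind of aggregate `db stats` reports.
rows = conn.execute(
    "SELECT source, status, COUNT(*) FROM comments "
    "GROUP BY source, status ORDER BY source, status"
).fetchall()
print(rows)
```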
## JSON Fields You Will Usually Edit ### Review-thread JSON from `reviews fetch` | Field | What you set | Used by | | --- | --- | --- | | `status` | `addressed`, `not_addressed`, `skipped`, `pending`, or `failed` | `reviews post` | | `reply` | The message that should be posted back to GitHub | `reviews post` | | `skip_reason` | A reason for skipping the comment | `reviews post` | You normally do **not** set these manually: - `posted_at` - `resolved_at` - `priority` - `source` - `json_path` ### Pending-review JSON from `reviews pending-fetch` | Field | What you set | Used by | | --- | --- | --- | | `comments[].refined_body` | The polished replacement comment body | `reviews pending-update` | | `comments[].status` | `accepted` to update the GitHub comment, or leave `pending` to skip it | `reviews pending-update` | | `metadata.submit_action` | `COMMENT`, `APPROVE`, or `REQUEST_CHANGES` | `reviews pending-update --submit` | | `metadata.submit_summary` | Optional review summary text | `reviews pending-update --submit` | > **Tip:** Leave `metadata.owner`, `metadata.repo`, `metadata.pr_number`, `metadata.review_id`, and `metadata.json_path` alone unless you are deliberately debugging the workflow. ## Common Workflows 1. To handle incoming PR review feedback, run `reviews fetch`, update the generated JSON with decisions, run `reviews post`, then run `reviews store`. 2. To refine your own draft GitHub review, start a pending review on GitHub, run `reviews pending-fetch`, update `refined_body` and `status`, optionally add `submit_action` and `submit_summary`, then run `reviews pending-update` with or without `--submit`. 3. If you want future `fetch` runs to remember skipped or dismissed feedback, always finish the review-thread workflow with `reviews store`. --- Source: pr-cli-reference.md # PR CLI Reference `myk-claude-tools pr` is the PR-focused part of the CLI used by this repository's review workflows. 
It gives you three building blocks: - `pr diff` fetches pull request metadata, the full diff, and per-file patch data - `pr claude-md` loads the target repository's `CLAUDE.md` instructions - `pr post-comment` posts one structured GitHub review with inline comments ## Before You Start The CLI is exposed as the `myk-claude-tools` console script: ```toml [project.scripts] myk-claude-tools = "myk_claude_tools.cli:main" ``` The repository's own PR workflow checks the toolchain like this: ```bash uv --version myk-claude-tools --version ``` If the CLI is not installed, the documented install command is: ```bash uv tool install myk-claude-tools ``` You also need: - Python 3.10 or newer - GitHub CLI (`gh`) - Access to the GitHub repository you want to review > **Note:** `pr diff` and remote `pr claude-md` call `gh` directly. `pr post-comment` also depends on `gh`, but it does not do its own friendly preflight check first, so install and verify `gh` before using it. ## Command Overview | Command | What it returns | Best used for | |---|---|---| | `myk-claude-tools pr diff` | JSON | Fetching everything you need to review a PR | | `myk-claude-tools pr claude-md` | Plain text | Loading repository-specific instructions before reviewing | | `myk-claude-tools pr post-comment` | JSON | Posting a single review with multiple inline findings | A typical workflow looks like this: ```bash myk-claude-tools pr diff $ARGUMENTS myk-claude-tools pr claude-md $ARGUMENTS mkdir -p /tmp/claude myk-claude-tools pr post-comment {owner}/{repo} {pr_number} {head_sha} /tmp/claude/pr-review-comments.json ``` > **Tip:** Save `metadata.head_sha` from `pr diff` and reuse it as the `commit_sha` for `pr post-comment`. That is the safest way to keep inline comment locations aligned with the exact revision you reviewed. ## Shared PR Target Formats `pr diff` and `pr claude-md` share the same input rules. 
They accept any of these forms:

```bash
myk-claude-tools pr diff {owner}/{repo} {pr_number}
myk-claude-tools pr diff https://github.com/owner/repo/pull/123
myk-claude-tools pr diff {pr_number}
myk-claude-tools pr claude-md {owner}/{repo} {pr_number}
myk-claude-tools pr claude-md https://github.com/owner/repo/pull/123
myk-claude-tools pr claude-md {pr_number}
```

When you pass only a PR number, the CLI resolves the repository from the current working directory with:

```bash
gh repo view --json owner,name -q '.owner.login + "/" + .name'
```

A few practical details matter here:

- `owner/repo` must match the normal GitHub format
- `pr_number` must be numeric
- PR URLs can include an optional protocol and optional trailing path segments
- Wrong argument counts print usage and exit with status `1`

> **Warning:** Number-only mode depends on the current repository context. If you are outside the target repository, working across multiple repositories, or dealing with forks, prefer `owner/repo + pr_number` or a full PR URL.

## `pr diff`

### What it does

`pr diff` collects three kinds of data for the target PR:

- PR metadata from `GET /repos/{owner}/{repo}/pulls/{pr_number}`
- The full unified diff from `gh pr diff`
- The changed file list from `GET /repos/{owner}/{repo}/pulls/{pr_number}/files`

The file list request is paginated, so large PRs are handled across multiple API pages.

### Usage

```bash
myk-claude-tools pr diff {owner}/{repo} {pr_number}
myk-claude-tools pr diff https://github.com/owner/repo/pull/123
myk-claude-tools pr diff {pr_number}
```

### Output

`pr diff` always writes JSON to stdout. Its top-level structure is built like this:

```json
{
  "metadata": {
    "owner": pr_info.owner,
    "repo": pr_info.repo,
    "pr_number": pr_info.pr_number,
    "head_sha": head_sha,
    "base_ref": base_ref,
    "title": pr_title,
    "state": pr_state
  },
  "diff": pr_diff,
  "files": files
}
```

Each file entry is normalized to this shape:

```json
{
  "path": f["filename"],
  "status": f["status"],
  "additions": f["additions"],
  "deletions": f["deletions"],
  "patch": f.get("patch", "")
}
```

What those fields are most useful for:

- `metadata.head_sha` is the commit SHA you should pass to `pr post-comment`
- `metadata.base_ref` tells you which branch the PR targets
- `diff` contains the full text diff from GitHub CLI
- `files` gives you a per-file summary plus patch text when GitHub includes it

> **Note:** `files[].patch` falls back to an empty string when GitHub does not include patch text, which can happen for some large or non-text changes.

### Failure behavior

`pr diff` exits with status `1` when it cannot produce a complete result. Important cases include:

- `gh` is not installed
- the GitHub API call fails
- `gh pr diff` fails
- the file list request fails
- the PR metadata does not include `head.sha`
- the PR metadata does not include `base.ref`

Timeouts are also explicit:

- PR metadata fetch: 60 seconds
- Diff fetch: 120 seconds
- File list fetch: 120 seconds

> **Tip:** If you are scripting around `pr diff`, treat stdout as machine-readable JSON and stderr as diagnostics.

## `pr claude-md`

### What it does

`pr claude-md` resolves the same PR target formats as `pr diff`, but its output is plain text instead of JSON. The command tries to find the repository's instructions file in a specific order and stops on the first match.

### Usage

```bash
myk-claude-tools pr claude-md {owner}/{repo} {pr_number}
myk-claude-tools pr claude-md https://github.com/owner/repo/pull/123
myk-claude-tools pr claude-md {pr_number}
```

### Lookup order

The command checks these locations in order: 1.
Local `./CLAUDE.md`, but only if the current repo matches the target repo 2. Local `./.claude/CLAUDE.md`, but only if the current repo matches the target repo 3. Remote `CLAUDE.md` from the GitHub Contents API 4. Remote `.claude/CLAUDE.md` from the GitHub Contents API 5. If nothing is found, it prints an empty line and exits successfully Local repository matching is based on `git remote get-url origin`, and it supports both GitHub HTTPS and SSH remote formats. > **Note:** Local files win over GitHub when the current repository matches the target repository. That makes local testing fast and lets you review unpublished `CLAUDE.md` edits before they exist upstream. > **Tip:** The local match is based on `origin`. If your local checkout points `origin` at a fork instead of the target repository, the command usually falls back to GitHub and fetches the upstream file instead. ### Output behavior `pr claude-md` prints one of two things: - The raw contents of the matched `CLAUDE.md` - An empty line if no supported file exists That empty output is intentional. > **Warning:** A missing `CLAUDE.md` is not treated as an error. If your automation needs to distinguish "file not found" from "file exists but is empty," you need to handle that outside this command. ### Failure behavior You will get a non-zero exit when: - argument parsing fails - the current directory cannot resolve the repo and you only passed a PR number - local matching does not apply and `gh` is not installed Remote fetch failures are intentionally quiet. If GitHub does not return the file, the command keeps trying the next location and eventually prints empty output. ## `pr post-comment` ### What it does `pr post-comment` takes a list of inline review comments and posts them to GitHub as one pull request review. It does not create one review per finding. 
Instead, it sends:

- one review summary body
- one `comments` array containing all inline comments

That makes it a good fit for "review first, then post a selected set of findings" workflows.

### Usage

```bash
myk-claude-tools pr post-comment {owner}/{repo} {pr_number} {head_sha} {comments_json_path}
myk-claude-tools pr post-comment {owner}/{repo} {pr_number} {head_sha} -   # read JSON from stdin
```

### Required input

The final argument is either:

- a path to a JSON file
- `-` to read JSON from stdin

The expected JSON format is an array of comment objects:

```json
[
  {
    "path": "src/main.py",
    "line": 42,
    "body": "### [CRITICAL] SQL Injection\n\nDescription..."
  },
  {
    "path": "src/utils.py",
    "line": 15,
    "body": "### [WARNING] Missing error handling\n\nDescription..."
  }
]
```

Each object must contain:

- `path`
- `line`
- `body`

A few small but useful details:

- `line` is converted with `int(...)`, so numeric strings work too
- missing fields cause an immediate error and exit
- invalid JSON causes an immediate error and exit
- if extra shell or hook output appears before the JSON array, the loader looks for the first line starting with `[` and tries to parse from there

> **Tip:** Stdin mode is handy when another step in your pipeline produces the JSON array dynamically and you do not want to write an intermediate file.

### Severity markers and summary generation

The first line of each comment body controls how the review summary is grouped. Supported markers are:

```text
### [CRITICAL] Title
### [WARNING] Title
### [SUGGESTION] Title
```

If no marker is present, the comment is treated as a suggestion. The command uses those markers to generate a review body with:

- a `## Code Review` heading
- a total issue count
- grouped sections for critical issues, warnings, and suggestions
- Markdown tables listing file, line, and issue title
- a closing footer: `*Review generated by Claude Code*`

The issue title comes from the first line of the comment body after removing the severity marker. It is truncated to 80 characters for the summary table.
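The grouping rules above are easy to mirror in a few lines. This is a sketch of the documented behavior, not the command's actual implementation:

```python
import re

# Sketch of the documented severity-marker rules.
MARKER = re.compile(r"^### \[(CRITICAL|WARNING|SUGGESTION)\]\s*(.*)$")

def classify(body: str) -> tuple[str, str]:
    """Return (severity, title) from a comment body's first line.
    Comments without a marker are treated as suggestions."""
    first_line = body.splitlines()[0] if body else ""
    m = MARKER.match(first_line)
    if m:
        severity, title = m.group(1), m.group(2)
    else:
        severity, title = "SUGGESTION", first_line
    return severity, title[:80]  # summary-table titles are truncated to 80 chars

print(classify("### [CRITICAL] SQL Injection\n\nDetails..."))  # ('CRITICAL', 'SQL Injection')
print(classify("Consider extracting a helper."))               # ('SUGGESTION', 'Consider extracting a helper.')
```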
### What gets posted to GitHub The review payload is built like this: ```json { "commit_id": commit_sha, "body": review_body, "event": "COMMENT", "comments": [ { "path": c.path, "line": c.line, "body": c.body, "side": "RIGHT" } ] } ``` Important implications: - the review event is always `"COMMENT"` - every inline comment is posted on the `"RIGHT"` side - the `commit_sha` needs to match the PR revision GitHub expects for inline comments > **Warning:** Inline comments only work for lines GitHub considers part of the PR diff for that commit. If the line is outside the diff, the file path does not exist at that revision, or the SHA is stale, GitHub will reject the review. > **Note:** The module defines a `validate_commit_sha()` helper, but the command does not currently call it before posting. In practice, you should pass the exact `metadata.head_sha` returned by `pr diff` instead of inventing or reusing an older SHA. ### Output When you pass one or more comments, the command: - prints a progress message to stderr - posts the review - prints a JSON result to stdout If the comment list is empty, it skips the GitHub API call and immediately prints a success object with zero counts: ```json {"status": "success", "comment_count": 0, "posted": [], "failed": []} ``` For normal success and failure cases, the output shape is: ```json { "status": result.status, "comment_count": result.comment_count, "posted": result.posted, "failed": result.failed } ``` On failure, the command also includes `error` in the JSON output and prints common troubleshooting hints to stderr: - line numbers might not be part of the diff - file paths might not exist in the named commit - the commit SHA might not be the PR head ## Putting The Commands Together The repository's own PR review command uses these subcommands in sequence: ```bash myk-claude-tools pr diff {pr_number} myk-claude-tools pr claude-md {pr_number} ``` When arguments are passed through directly, it uses: ```bash myk-claude-tools pr 
diff $ARGUMENTS myk-claude-tools pr claude-md $ARGUMENTS ``` After analysis, it posts the selected findings with: ```bash myk-claude-tools pr post-comment {owner}/{repo} {pr_number} {head_sha} /tmp/claude/pr-review-comments.json ``` In practice, the flow is: 1. Run `pr diff` and keep the JSON output 2. Read `metadata.head_sha` from that output 3. Run `pr claude-md` to load repository-specific review instructions 4. Generate a JSON array of findings 5. Run `pr post-comment` with the same PR and the `head_sha` from step 2 > **Tip:** Keep `pr diff` and `pr post-comment` tied to the same review session. Re-fetch the diff if the PR head changes before you post comments. ## Configuration Snippet For Claude Code Plugins If you are wrapping these commands in a Claude Code plugin or slash command, this repository's own PR review command uses this frontmatter: ```markdown --- description: Review a GitHub PR and post inline comments on selected findings argument-hint: [PR_NUMBER|PR_URL] allowed-tools: Bash(myk-claude-tools:*), Bash(uv:*), Bash(git:*), Bash(gh:*), AskUserQuestion, Task --- ``` That configuration captures the two important runtime requirements for this CLI: - allow the plugin to call `myk-claude-tools` - allow the plugin to call `gh` If you are automating around these commands, it is also useful to remember the output contract: - `pr diff` returns JSON - `pr claude-md` returns plain text - `pr post-comment` returns JSON, but also writes progress and troubleshooting output to stderr --- Source: release-cli-reference.md # Release CLI Reference `myk-claude-tools release` is the release-focused command group in this repository. 
It gives you four building blocks:

- `info` checks whether the repository is ready for a release and lists commits since the last matching tag
- `detect-versions` finds version files the tool knows how to update
- `bump-version` rewrites those files to a new version
- `create` publishes the GitHub release from a changelog file

The CLI is installed as the `myk-claude-tools` command and requires Python 3.10+:

```7:24:pyproject.toml
requires-python = ">=3.10"
# ... other project metadata ...
[project.scripts]
myk-claude-tools = "myk_claude_tools.cli:main"
```

## At A Glance

| Command | What it does | External tools |
| --- | --- | --- |
| `release info` | Validates branch state, working tree, and remote sync, then returns tags and commits | `git`, `gh` |
| `release detect-versions` | Scans the current directory for known version files | None |
| `release bump-version` | Updates detected version files to a new version | None |
| `release create` | Creates a GitHub release from a changelog file | `gh` |

> **Note:** All four commands print JSON to stdout. `release info`, `release bump-version`, and `release create` exit with status code `1` on failure. `release detect-versions` reports "nothing found" with `"count": 0` instead of failing.

## `myk-claude-tools release info`

Use `release info` first. It tells you whether the current checkout is in a releasable state, and if it is, it returns the last tag and the commits that would be included in the release.

Syntax: `myk-claude-tools release info [--repo OWNER/REPO] [--target BRANCH] [--tag-match GLOB]`

Example commands used by the project release workflow:

```35:48:plugins/myk-github/commands/release.md
myk-claude-tools release info --target {target_branch}
myk-claude-tools release info --target {target_branch} --tag-match {tag_glob}
myk-claude-tools release info
```

### Inputs

| Input | Required | Description |
| --- | --- | --- |
| `--repo OWNER/REPO` | No | Repository in `owner/repo` format. If omitted, the command asks `gh` for the current repository. |
| `--target BRANCH` | No | Branch that must match the current checkout for release validation to pass. |
| `--tag-match GLOB` | No | Glob used to filter tags, such as `v2.10.*`. This is a glob, not a regular expression. |

### How target branch selection works

- If you pass `--target`, that branch becomes the release target.
- If you do not pass `--target` and your current branch looks like `v2.10`, that branch becomes the target automatically.
- In that auto-detected version-branch case, `tag_match` also defaults to `v2.10.*` unless you already provided `--tag-match`.
- Otherwise, the target branch is the repository default branch from GitHub, with a fallback to `main`.

> **Note:** The version-branch shortcut is designed for branch names in the `vMAJOR.MINOR` format, such as `v2.10`.

### What it validates

Before it gathers tags or commits, `release info` checks all of the following:

- You are on the effective target branch.
- The working tree is clean, including both staged and unstaged changes.
- `git fetch origin {target_branch}` succeeds.
- The local branch has no unpushed commits relative to `origin/{target_branch}`.
- The local branch is not behind `origin/{target_branch}`.

> **Note:** Remote sync checks are always done against `origin/{target_branch}`. If your release flow uses another remote, this command will not follow it automatically.

> **Warning:** If any validation fails, `release info` does not return partial history. In that case, `last_tag` is `null`, `all_tags` and `commits` are empty, `commit_count` is `0`, and `is_first_release` is `null`.

> **Tip:** `all_tags` is limited to the 10 most recent matching tags, and `commits` is limited to 100 entries. This command is meant for release preparation, not full-history export.

### Success JSON

Top-level fields:

| Field | Type | Meaning |
| --- | --- | --- |
| `metadata` | object | Repository metadata: `owner`, `repo`, `current_branch`, and `default_branch`. |
| `validations` | object | Detailed validation results. |
| `last_tag` | `string \| null` | Most recent matching tag, or `null` if none was found. |
| `all_tags` | `string[]` | Recent matching tags, sorted by version, newest first. |
| `commits` | `object[]` | Commits since `last_tag`, or from `HEAD` if this is the first release. Each item contains `hash`, `short_hash`, `subject`, `body`, `author`, and `date`. |
| `commit_count` | `number` | Number of commit objects returned. |
| `is_first_release` | `boolean \| null` | `true` if no matching tag exists, `false` otherwise, and `null` when validation failed before history collection. |
| `target_branch` | `string` | The effective target branch after applying defaults and auto-detection. |
| `tag_match` | `string \| null` | The effective tag filter, whether explicit or auto-detected. |

`validations` fields:

| Field | Type | Meaning |
| --- | --- | --- |
| `on_target_branch` | `boolean` | Whether `current_branch` matches the effective target branch. |
| `default_branch` | `string` | Repository default branch. |
| `current_branch` | `string` | Current local branch name. |
| `working_tree_clean` | `boolean` | Whether both staged and unstaged changes are absent. |
| `dirty_files` | `string` | Up to the first 10 `git status --porcelain` lines when the tree is dirty; empty string when clean. |
| `fetch_successful` | `boolean` | Whether `git fetch origin {target_branch}` succeeded. |
| `synced_with_remote` | `boolean` | Whether the local target branch is neither ahead of nor behind `origin/{target_branch}`. |
| `unpushed_commits` | `number` | Number of commits local is ahead of `origin/{target_branch}`. |
| `behind_remote` | `number` | Number of commits local is behind `origin/{target_branch}`. |
| `all_passed` | `boolean` | Final release-readiness result. |

### Failure JSON

On failure, `release info` returns a bare error object:

| Field | Type | Meaning |
| --- | --- | --- |
| `error` | `string` | Reason the command failed, such as missing `git` or `gh`, an invalid repository format, an invalid target branch, or an invalid tag-match pattern. |

> **Tip:** `release info` is the only release command that fails with `{"error": "..."}` instead of `{"status": "failed", "error": "..."}`. If you are scripting the CLI, handle it separately.

## `myk-claude-tools release detect-versions`

Use `release detect-versions` to see which files the CLI can version-bump automatically in the current directory.

Syntax: `myk-claude-tools release detect-versions`

Example from the project workflow:

```62:64:plugins/myk-github/commands/release.md
myk-claude-tools release detect-versions
```

### What it scans

| Source | What it reads | Output `type` |
| --- | --- | --- |
| `pyproject.toml` | `project.version` | `pyproject` |
| `package.json` | Top-level `version` | `package_json` |
| `setup.cfg` | `[metadata] version` only | `setup_cfg` |
| `Cargo.toml` | `[package] version` | `cargo` |
| `build.gradle` | `version` assignment | `gradle` |
| `build.gradle.kts` | `version` assignment | `gradle` |
| `__init__.py` | `__version__ = "..."` | `python_version` |
| `version.py` | `__version__ = "..."` | `python_version` |

A few important details:

- Recursive scanning is only used for `__init__.py` and `version.py`.
- `setup.cfg` is only detected when the version is static. Dynamic forms like `attr:` and `file:` are skipped.
- The command skips hidden directories and common generated or dependency directories such as `.git`, `.venv`, `node_modules`, `dist`, `build`, `target`, and cache folders.

> **Tip:** Run `detect-versions` from the repository root. The returned `version_files[].path` values are repo-relative paths, and they are the exact values you should feed back into `release bump-version --files`.
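For intuition, the `python_version` detection amounts to a pattern match along these lines. This is an assumed sketch; the tool's real pattern may differ:

```python
import re

# Assumed pattern for `__version__ = "..."` files (a sketch, not the
# tool's actual regex).
VERSION_RE = re.compile(r'^__version__\s*=\s*["\']([^"\']+)["\']', re.MULTILINE)

source = '"""Package init."""\n__version__ = "2.10.3"\n'
match = VERSION_RE.search(source)
print(match.group(1) if match else None)
```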
> **Note:** Malformed or unsupported files are skipped rather than causing the whole command to fail. If nothing matches, you get `"count": 0`.

### JSON output

| Field | Type | Meaning |
| --- | --- | --- |
| `version_files` | `object[]` | Detected files. Each item contains `path`, `current_version`, and `type`. |
| `count` | `number` | Number of detected version files. |

Each `version_files[]` item contains:

| Field | Type | Meaning |
| --- | --- | --- |
| `path` | `string` | Repo-relative path such as `pyproject.toml` or `mypackage/__init__.py`. |
| `current_version` | `string` | The version string currently found in that file. |
| `type` | `string` | One of `pyproject`, `package_json`, `setup_cfg`, `cargo`, `gradle`, or `python_version`. |

## `myk-claude-tools release bump-version`

Use `release bump-version` to rewrite version strings in the files discovered by `release detect-versions`.

Syntax: `myk-claude-tools release bump-version VERSION [--files PATH]...`

Example from the project workflow:

```118:120:plugins/myk-github/commands/release.md
myk-claude-tools release bump-version {version} --files {path} --files {path}
```

### Inputs

| Input | Required | Description |
| --- | --- | --- |
| `VERSION` | Yes | New version string, such as `1.2.0`. It must be non-empty, single-line, and must not start with `v` or `V`. |
| `--files PATH` | No | Repeatable filter. Limits the update to specific files returned by `release detect-versions`. If omitted, all detected version files are updated. |

> **Warning:** Pass the bare version number, such as `1.2.0`, not `v1.2.0`. The command rejects a leading `v` or `V`.

> **Warning:** If you use `--files`, every listed file must match the output of `release detect-versions`. One bad path makes the whole command fail before any file is rewritten.

### What gets updated

| Detected `type` | Update behavior |
| --- | --- |
| `pyproject` | Rewrites `version` inside the `[project]` section only. |
| `package_json` | Rewrites the top-level `version` field. |
| `setup_cfg` | Rewrites `version` inside `[metadata]` only. Dynamic `attr:` and `file:` versions are skipped. |
| `cargo` | Rewrites `version` inside `[package]` only. |
| `gradle` | Rewrites the first matching `version` assignment. |
| `python_version` | Rewrites the first matching `__version__ = "..."` assignment. |

### Behavior

- The command uses the same file-detection logic as `release detect-versions`.
- It only updates files that are already recognized by that detector.
- Rewrites are atomic: each file is written through a temporary file and then swapped into place.
- The command does not run `git add`, `git commit`, or any other Git operation.
- A run can succeed even when some files are skipped. It only fails when zero files were updated, or when input validation fails.

> **Tip:** For automation, treat `updated[]` as the source of truth for what actually changed. `skipped[]` tells you which files were ignored and why.

### JSON output

| Field | Appears when | Type | Meaning |
| --- | --- | --- | --- |
| `status` | Always | `string` | `success` or `failed`. |
| `version` | Success | `string` | The new version that was written. |
| `updated` | Success | `object[]` | Files that were updated. Each item contains `path`, `old_version`, and `new_version`. |
| `skipped` | Success or some failures | `object[]` | Files that were not updated. Each item contains `path` and `reason`. |
| `error` | Failure | `string` | Why the command failed, such as an invalid version string, no detected version files, unmatched `--files`, or no files updated. |

`updated[]` items:

| Field | Type | Meaning |
| --- | --- | --- |
| `path` | `string` | Updated file path. |
| `old_version` | `string` | Version found before the rewrite. |
| `new_version` | `string` | Version written to the file. |

`skipped[]` items:

| Field | Type | Meaning |
| --- | --- | --- |
| `path` | `string` | Skipped file path. |
| `reason` | `string` | Reason the file was skipped, such as an unrecognized pattern or I/O error. |

## `myk-claude-tools release create`

Use `release create` to publish the GitHub release once you have a final tag and a changelog file.

Syntax: `myk-claude-tools release create OWNER/REPO TAG CHANGELOG_FILE [--prerelease] [--draft] [--target BRANCH] [--title TEXT]`

Example from the project workflow:

```197:204:plugins/myk-github/commands/release.md
CHANGELOG_FILE=$(mktemp /tmp/claude-release-XXXXXX.md)
trap "rm -f $CHANGELOG_FILE" EXIT
cat > "$CHANGELOG_FILE" << 'EOF'
{changelog markdown}
EOF
myk-claude-tools release create {owner}/{repo} {tag} "$CHANGELOG_FILE" [--prerelease] [--draft] [--target {target_branch}]
```

### Inputs

| Input | Required | Description |
| --- | --- | --- |
| `OWNER/REPO` | Yes | Repository in `owner/repo` format. |
| `TAG` | Yes | Release tag, usually something like `v1.2.0`. |
| `CHANGELOG_FILE` | Yes | Path to a file containing the release notes. The file must already exist. |
| `--prerelease` | No | Marks the release as a prerelease. |
| `--draft` | No | Creates the release as a draft. |
| `--target BRANCH` | No | Target branch passed through to GitHub. |
| `--title TEXT` | No | Release title. If omitted or blank, the tag is used as the title. |

### Behavior

- The command validates that `OWNER/REPO` looks like `owner/repo`.
- It validates that `CHANGELOG_FILE` exists before calling `gh`.
- It runs `gh release create` with a 300-second timeout.
- If `gh` prints a release URL, that URL is returned. If not, the CLI constructs the standard GitHub release URL from the repo and tag.
- The success payload echoes `prerelease` and `draft`, but it does not echo `target` or `title`.

> **Warning:** `release create` has real side effects. It creates a GitHub release in the repository named by `OWNER/REPO` using your current `gh` authentication and GitHub context.

> **Note:** A tag that does not look like `vX.Y.Z` or `vX.Y.Z-suffix` only triggers a warning on stderr. The command still attempts to create the release.

> **Tip:** `release create` is independent. If you already have a tag and a changelog file, you can use it without running the other release commands first.

### JSON output

| Field | Appears when | Type | Meaning |
| --- | --- | --- | --- |
| `status` | Always | `string` | `success` or `failed`. |
| `tag` | Success | `string` | Tag used for the release. |
| `url` | Success | `string` | GitHub release URL, either extracted from `gh` output or constructed by the CLI. |
| `prerelease` | Success | `boolean` | Whether the release was created as a prerelease. |
| `draft` | Success | `boolean` | Whether the release was created as a draft. |
| `error` | Failure | `string` | Why the command failed, such as a missing `gh` installation, invalid repo format, missing changelog file, or `gh release create` failure. |

---

Source: database-cli-reference.md

# Database CLI Reference

The database CLI lets you inspect the local review history stored in the review database. Use it to answer practical questions like:

- Which review sources produce the most actionable comments?
- Which skipped or not-addressed comments keep repeating?
- Has this comment already been dismissed before?
- What does the raw review data look like?

You can run these queries in two ways:

```bash
myk-claude-tools db stats --by-source
/myk-review:query-db stats --by-source
```

All command examples on this page come from the project's actual CLI definitions, plugin command, or tests.
## At a glance

| Command | Best for |
| --- | --- |
| `stats` | Addressed-rate reports by source or reviewer |
| `patterns` | Recurring skipped or not-addressed comment patterns |
| `dismissed` | Repo-specific dismissed history and reasons |
| `query` | One-off read-only SQL queries |
| `find-similar` | Comparing a new comment to previously dismissed comments |

## Before you start

If you want to run the CLI directly, install the `myk-claude-tools` console script:

```bash
uv tool install myk-claude-tools
myk-claude-tools --version
```

> **Note:** By default, the CLI looks for the database at `<repo-root>/.claude/data/reviews.db`, where `<repo-root>` is the Git repository root of your current working directory.

> **Tip:** Every `db` subcommand accepts `--db-path`, so you can point at a different database file when needed.

These analytics commands query an existing database. Review data is stored separately after a completed review flow is written to disk. A minimal stored review payload looks like this:

```json
{
  "metadata": {
    "owner": "test-owner",
    "repo": "test-repo",
    "pr_number": 123
  },
  "human": [{"body": "human comment"}],
  "qodo": [{"body": "qodo comment"}],
  "coderabbit": [{"body": "coderabbit comment"}]
}
```

The database is append-only: each stored review run creates a new row in `reviews`, then inserts its comments into `comments`.

> **Note:** If the same PR is stored more than once, the database keeps multiple `reviews` rows for that PR. That is useful for history, but it also means custom queries may need `DISTINCT`, grouping, or a “latest review” filter.
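Because of that append-only behavior, a “latest review” filter can be sketched with plain SQLite. The snippet below is an illustrative, self-contained sketch against a simplified copy of the documented schema (only a few columns, in-memory), not the real database file:

```python
import sqlite3

# Simplified copy of the documented schema, in memory (a sketch).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE reviews (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    pr_number INTEGER NOT NULL, owner TEXT NOT NULL, repo TEXT NOT NULL,
    commit_sha TEXT NOT NULL, created_at TEXT NOT NULL
);
CREATE TABLE comments (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    review_id INTEGER NOT NULL REFERENCES reviews(id),
    source TEXT NOT NULL, body TEXT, status TEXT
);
""")

# The same PR stored twice -> two `reviews` rows, each with its own comments.
conn.execute("INSERT INTO reviews VALUES (1, 123, 'myk-org', 'demo', 'aaa', '2024-01-01')")
conn.execute("INSERT INTO reviews VALUES (2, 123, 'myk-org', 'demo', 'bbb', '2024-01-02')")
conn.execute("INSERT INTO comments (review_id, source, body, status) VALUES (1, 'human', 'old run', 'addressed')")
conn.execute("INSERT INTO comments (review_id, source, body, status) VALUES (2, 'human', 'new run', 'pending')")

# "Latest review" filter: keep only comments attached to the newest
# reviews row for each (owner, repo, pr_number).
rows = conn.execute("""
SELECT c.body FROM comments c
JOIN reviews r ON r.id = c.review_id
WHERE r.id = (
    SELECT id FROM reviews r2
    WHERE r2.owner = r.owner AND r2.repo = r.repo AND r2.pr_number = r.pr_number
    ORDER BY r2.created_at DESC, r2.id DESC LIMIT 1
)
""").fetchall()
print(rows)  # only the comment from the most recent stored run
```

Since the `SELECT` is read-only, the same statement could be passed to `myk-claude-tools db query` against a real database.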
## Common options | Option | Available on | What it does | | --- | --- | --- | | `--json` | All `db` subcommands | Returns JSON instead of a formatted table | | `--db-path` | All `db` subcommands | Uses a specific database file | | `--owner`, `--repo` | `dismissed`, `find-similar` | Scopes the command to one repository | | `--min` | `patterns` | Sets the minimum number of repeated matches to report | | `--threshold` | `find-similar` | Sets the minimum similarity score from `0.0` to `1.0` | > **Tip:** Table output truncates long values for readability. Use `--json` if you need the full comment body or want to pipe the result into another tool. ## Command reference ### `stats` Use `stats` to measure how often comments are addressed, skipped, or left unaddressed. ```bash myk-claude-tools db stats myk-claude-tools db stats --by-reviewer myk-claude-tools db stats --by-source --json ``` What it does: - `--by-source` groups by comment source such as `human`, `qodo`, and `coderabbit` - `--by-reviewer` groups by the individual `author` - Source-based output includes an `addressed_rate` percentage > **Note:** If you do not pass either `--by-source` or `--by-reviewer`, `stats` defaults to source-based output. > **Warning:** `--by-source` and `--by-reviewer` are mutually exclusive. Passing both returns an error. ### `patterns` Use `patterns` to find repeated comment clusters that may be worth turning into reviewer guidance or skip rules. 
```bash myk-claude-tools db patterns myk-claude-tools db patterns --min 3 myk-claude-tools db patterns --json ``` What it returns: - `path`: the file path where the pattern appears - `occurrences`: how many similar comments were found - `reason`: the most common `reply` or `skip_reason` in that cluster - `body_sample`: the first comment body in the cluster How matching works: - The command only looks at comments with status `not_addressed` or `skipped` - Comments are grouped by exact `path` - Within a path, comments are clustered when their body similarity is at least `0.6` > **Note:** `patterns` works across the whole database. It is not scoped to one `owner` or `repo`. ### `dismissed` Use `dismissed` to list comments for a specific repository that were skipped, not addressed, or otherwise recorded as reusable dismissals. ```bash myk-claude-tools db dismissed --owner myk-org --repo claude-code-config myk-claude-tools db dismissed --owner myk-org --repo claude-code-config --json ``` In JSON mode, the command returns fields such as: - `path` - `line` - `body` - `status` - `reply` - `skip_reason` - `author` - `type` - `comment_id` In table mode, the CLI shows a smaller set of columns: - `path` - `line` - `status` - `reply` - `author` > **Note:** You may see some rows with status `addressed`. That is intentional for `outside_diff_comment`, `nitpick_comment`, and `duplicate_comment`, because those body-comment types do not rely on a normal GitHub review thread for later skipping. ### `query` Use `query` when you want raw SQL access to the database. 
```bash myk-claude-tools db query "SELECT * FROM comments WHERE status = 'skipped'" myk-claude-tools db query "SELECT status, COUNT(*) as cnt FROM comments GROUP BY status" myk-claude-tools db query "SELECT * FROM comments LIMIT 5" --json myk-claude-tools db query "SELECT COUNT(*) as count FROM comments" --json ``` A practical example from the plugin command is: ```bash myk-claude-tools db query "SELECT * FROM comments WHERE status = 'skipped' ORDER BY id DESC LIMIT 10" ``` This command is useful when you want exact control over: - filtering by status, source, author, or path - counting rows - grouping and sorting - inspecting raw rows behind `stats`, `patterns`, or `dismissed` > **Warning:** The query interface is read-only. Only `SELECT` and `WITH` statements are allowed. Multiple statements are blocked, and mutating keywords such as `INSERT`, `UPDATE`, `DELETE`, `DROP`, `ALTER`, `CREATE`, `ATTACH`, `DETACH`, and `PRAGMA` are rejected. ### `find-similar` Use `find-similar` to compare a new comment against previously dismissed comments in the same repository and file path. ```bash echo '{"path": "foo.py", "body": "Add error handling..."}' | \ myk-claude-tools db find-similar --owner myk-org --repo claude-code-config --json ``` The CLI reads JSON from standard input. The input must be a single JSON object with both `path` and `body`. Actual input shape used by the test suite: ```json {"path": "path/to/file.py", "body": "Add skip option"} ``` How matching works: - Repository must match `--owner` and `--repo` - File path must match exactly - Only comments with status `not_addressed` or `skipped` are considered - Similarity is based on case-insensitive word overlap, not semantic meaning - The default threshold is `0.6` - Valid threshold values are `0.0` through `1.0` If a match is found, JSON output includes the matched row plus a `similarity` score. 
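The case-insensitive word-overlap scoring described above can be illustrated with a small sketch. The Jaccard-style formula below is an assumption for illustration only; the implementation’s exact formula may differ, but it is word-based and case-insensitive rather than semantic:

```python
def similarity(a: str, b: str) -> float:
    """Case-insensitive word overlap between two comment bodies (a sketch)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    # Jaccard-style overlap: shared words over total distinct words.
    return len(wa & wb) / len(wa | wb)

new_body = "Add error handling"
dismissed_body = "add error handling here"
score = similarity(new_body, dismissed_body)
print(round(score, 2))  # 0.75
# A match is reported only when the score clears the threshold (default 0.6).
print(score >= 0.6)  # True
```

Word overlap like this catches reworded repeats of the same comment, but two semantically identical comments with disjoint vocabulary will score 0.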
In text mode, the CLI prints: - the similarity score - the matched `path:line` - the matched status - the matched reason - the first 100 characters of the original body > **Warning:** Pass a single JSON object, not a JSON array. The CLI reads `path` and `body` directly from the top-level input object. ## Slash command wrapper Inside Claude Code, the `myk-review` plugin exposes the same database queries through `/myk-review:query-db`. ```bash /myk-review:query-db stats --by-source /myk-review:query-db stats --by-reviewer /myk-review:query-db patterns --min 2 /myk-review:query-db dismissed --owner X --repo Y /myk-review:query-db query "SELECT * FROM comments WHERE status='skipped' LIMIT 10" /myk-review:query-db find-similar < comments.json ``` Use the slash command when you are already working inside Claude Code and want the same analytics without leaving the chat. ## Database schema The review database has two main tables: - `reviews`: one row per stored review run - `comments`: one row per stored comment from `human`, `qodo`, or `coderabbit` Actual schema snippet: ```sql CREATE TABLE IF NOT EXISTS reviews ( id INTEGER PRIMARY KEY AUTOINCREMENT, pr_number INTEGER NOT NULL, owner TEXT NOT NULL, repo TEXT NOT NULL, commit_sha TEXT NOT NULL, created_at TEXT NOT NULL ); CREATE TABLE IF NOT EXISTS comments ( id INTEGER PRIMARY KEY AUTOINCREMENT, review_id INTEGER NOT NULL REFERENCES reviews(id), source TEXT NOT NULL, thread_id TEXT, node_id TEXT, comment_id INTEGER, author TEXT, path TEXT, line INTEGER, body TEXT, priority TEXT, status TEXT, reply TEXT, skip_reason TEXT, posted_at TEXT, resolved_at TEXT, type TEXT DEFAULT NULL ); ``` Relevant indexes from the same schema: ```sql CREATE INDEX IF NOT EXISTS idx_comments_review_id ON comments(review_id); CREATE INDEX IF NOT EXISTS idx_comments_source ON comments(source); CREATE INDEX IF NOT EXISTS idx_comments_status ON comments(status); CREATE INDEX IF NOT EXISTS idx_reviews_pr ON reviews(owner, repo, pr_number); CREATE 
INDEX IF NOT EXISTS idx_reviews_commit ON reviews(commit_sha); ``` The most useful fields for day-to-day analytics are: - `source`: where the comment came from - `author`: who wrote it - `path` and `line`: where it applies - `status`: `pending`, `addressed`, `skipped`, or `not_addressed` - `reply` and `skip_reason`: why it was resolved, skipped, or not addressed - `type`: special body-comment categories such as `outside_diff_comment`, `nitpick_comment`, and `duplicate_comment` ## Practical query recipes ### See which sources are most actionable ```bash myk-claude-tools db stats --by-source ``` Use this when you want a quick view of how often comments from each review source get addressed. ### Compare reviewers ```bash myk-claude-tools db stats --by-reviewer ``` This is useful when your database contains multiple human or AI reviewers and you want author-level totals. ### Find repeated skipped guidance ```bash myk-claude-tools db patterns --min 2 ``` Use this to surface repeated comment clusters that may be good candidates for future auto-skip rules or reviewer guidance. ### Pull recent skipped rows directly ```bash myk-claude-tools db query "SELECT * FROM comments WHERE status = 'skipped' ORDER BY id DESC LIMIT 10" ``` This is the quickest way to inspect recent skipped rows without writing a separate script. ### Check whether a comment has already been dismissed ```bash echo '{"path": "foo.py", "body": "Add error handling..."}' | \ myk-claude-tools db find-similar --owner myk-org --repo claude-code-config --json ``` Use this when you want to answer, “Have we seen and skipped something like this before?” ## Troubleshooting > **Note:** If the database file does not exist, the commands return no results rather than creating a new database. > **Warning:** Default database discovery depends on Git. If you run the CLI outside the repository you want to inspect, the default path may point at the wrong place or fail entirely. 
> **Tip:** If the table output looks cut off, rerun the same command with `--json`. The built-in formatter truncates long cell values for readability.

---

Source: coderabbit-cli-reference.md

# CodeRabbit CLI Reference

Use this reference when CodeRabbit tells you to wait before requesting another review. In `claude-code-config`, the `myk-claude-tools coderabbit` commands are the reusable building blocks behind the GitHub review workflows.

- `check` tells you whether a pull request is currently rate-limited.
- `trigger` waits if needed, posts `@coderabbitai review`, and watches for the next review to start.

The CLI entrypoint is declared directly in the project:

```toml
[project.scripts]
myk-claude-tools = "myk_claude_tools.cli:main"
```

## Prerequisites

The project declares Python `>=3.10` in `pyproject.toml`. To use the CodeRabbit commands comfortably, you also need:

- `myk-claude-tools` on your `PATH`
- `gh` installed and authenticated, because the implementation shells out to `gh api`
- `uv` if you want to use the install flow referenced by the plugin commands

The plugin workflows in this repo use these exact checks and install command:

```bash
uv --version
myk-claude-tools --version
uv tool install myk-claude-tools
```

> **Note:** The CodeRabbit implementation itself depends on `gh`, even though the plugin command docs focus first on checking `uv` and `myk-claude-tools`.

## Command Summary

The CodeRabbit CLI exposes two subcommands:

```bash
myk-claude-tools coderabbit check <owner>/<repo> <pr-number>
myk-claude-tools coderabbit trigger <owner>/<repo> <pr-number> --wait <seconds>
```

The Click command definitions are small and direct:

```python
@coderabbit.command("check")
@click.argument("owner_repo")
@click.argument("pr_number", type=int)
def check(owner_repo: str, pr_number: int) -> None:
    """Check if CodeRabbit is rate limited on a PR.

    Outputs JSON with rate limit status and wait time.
    """
    from myk_claude_tools.coderabbit.rate_limit import run_check

    sys.exit(run_check(owner_repo, pr_number))
```

```python
@coderabbit.command("trigger")
@click.argument("owner_repo")
@click.argument("pr_number", type=int)
@click.option("--wait", "wait_seconds", type=int, default=0, help="Seconds to wait before posting review trigger")
def trigger(owner_repo: str, pr_number: int, wait_seconds: int) -> None:
    """Wait and trigger a CodeRabbit review on a PR.

    Optionally waits, then posts @coderabbitai review and polls
    until the review starts (max 10 minutes).
    """
    from myk_claude_tools.coderabbit.rate_limit import run_trigger

    sys.exit(run_trigger(owner_repo, pr_number, wait_seconds))
```

## `coderabbit check`

Use `check` when you want a machine-readable answer to one question: is this PR still rate-limited?

```bash
myk-claude-tools coderabbit check <owner>/<repo> <pr-number>
```

### What it returns

On success, `check` writes JSON to standard output and exits with code `0`. The JSON shape comes straight from `run_check()`:

```python
if _RATE_LIMITED_MARKER not in body:
    print(json.dumps({"rate_limited": False}))
    return 0

wait_seconds = _parse_wait_seconds(body)
if wait_seconds is None:
    print("Error: Could not parse wait time from rate limit message.")
    snippet = "\n".join(body.split("\n")[:10])
    print(f"Comment snippet:\n{snippet}")
    return 1

print(json.dumps({"rate_limited": True, "wait_seconds": wait_seconds, "comment_id": comment_id}))
return 0
```

In practice, that means:

- If the PR is not rate-limited, you get `{"rate_limited": false}`.
- If the PR is rate-limited, you get:
  - `rate_limited`
  - `wait_seconds`
  - `comment_id`

On failure, the command prints a human-readable error and exits with code `1`.

### Input rules

The repository name must be in strict `owner/repo` form:

```python
def _validate_owner_repo(owner_repo: str) -> bool:
    """Validate owner/repo format."""
    if "/" not in owner_repo or len(owner_repo.split("/")) != 2:
        print(f"Error: Invalid repository format: {owner_repo}.
Expected owner/repo.") return False return True ``` > **Warning:** `check` does not auto-detect the PR for you. If you want current-branch detection, use the `/myk-github:coderabbit-rate-limit` workflow described later on this page. ## How Rate-Limit Detection Works The CLI does not talk to a CodeRabbit-specific API. Instead, it inspects the latest CodeRabbit summary comment on the pull request through GitHub issue comments. The core markers and parser are defined here: ```python # HTML comment markers in CodeRabbit's summary comment _SUMMARY_MARKER = "" _RATE_LIMITED_MARKER = "" # Regex to parse wait time from rate limit message _WAIT_TIME_RE = re.compile(r"Please wait \*\*(?:(\d+) minutes? and )?(\d+) seconds?\*\*") _POLL_INTERVAL = 60 # seconds between polls _MAX_POLL_ATTEMPTS = 10 # max 10 minutes ``` To find the comment, the implementation asks GitHub for PR issue comments and selects the last matching summary comment: ```python code, output, _stderr = _run_gh( [ "api", f"repos/{owner}/{repo}/issues/{pr_number}/comments", "--jq", f'[.[] | select(.body | contains("{_SUMMARY_MARKER}"))] | last | {{id: .id, body: .body}}', ], timeout=60, ) ``` That means `check` works like this: 1. Find the most recent issue comment whose body contains the CodeRabbit summary marker. 2. Look for the rate-limit marker in that comment body. 3. If present, parse the cooldown from the message text. The tests in `tests/test_coderabbit_rate_limit.py` cover both full minute-and-second messages and seconds-only messages, including examples like: - `Please wait **22 minutes and 57 seconds**` - `Please wait **45 seconds**` > **Tip:** If `check` says it cannot parse the wait time, the CLI prints the first few lines of the comment body to help you see what changed. ## `coderabbit trigger` Use `trigger` when you already know the PR should be retried and you want the CLI to handle the waiting and polling for you. 
```bash
myk-claude-tools coderabbit trigger <owner>/<repo> <pr-number> --wait <seconds>
```

### What it does

`run_trigger()` performs three steps:

1. Wait for the requested number of seconds, if `--wait` is greater than zero.
2. Post a fresh `@coderabbitai review` comment to the PR.
3. Poll until the review appears to have started, or time out.

The implementation is explicit:

```python
if wait_seconds > 0:
    minutes, secs = divmod(wait_seconds, 60)
    print(f"Waiting {minutes}m {secs}s before triggering review...")
    time.sleep(wait_seconds)

print("Posting @coderabbitai review...")
if not _post_review_trigger(owner_repo, pr_number):
    print("Error: Failed to post review trigger comment.")
    return 1
print("Review trigger posted.")
```

The trigger comment is posted as a GitHub issue comment:

```python
code, _, stderr = _run_gh(
    [
        "api",
        f"repos/{owner}/{repo}/issues/{pr_number}/comments",
        "-f",
        "body=@coderabbitai review",
    ],
    timeout=30,
)
```

### Polling behavior

After posting the trigger, the CLI polls once per minute, for up to ten attempts:

```python
none_streak = 0
for attempt in range(1, _MAX_POLL_ATTEMPTS + 1):
    print(f"Polling for review start (attempt {attempt}/{_MAX_POLL_ATTEMPTS})...")
    status = _is_rate_limited(owner_repo, pr_number)
    if status == "error":
        print("Warning: API error while checking status. Retrying...")
        none_streak = 0  # API errors don't count toward comment-gone detection
    elif status == "no_comment":
        none_streak += 1
        if none_streak >= 2:
            print("Review started (comment replaced).")
            return 0
        print("Warning: Could not find comment. Retrying...")
    elif not status:
        print("Review started!")
        return 0
    else:
        none_streak = 0
    if attempt < _MAX_POLL_ATTEMPTS:
        time.sleep(_POLL_INTERVAL)

print("Error: Timeout waiting for review to start (10 minutes).")
```

There are two success paths:

- The summary comment is still present, but it no longer contains the rate-limit marker.
- The summary comment disappears twice in a row, which the CLI treats as a strong signal that CodeRabbit replaced it with a new review state. > **Note:** Two consecutive `no_comment` results count as success on purpose. The tests document this as the expected behavior when the old summary comment has been replaced. ## Workflow Integration in `myk-github` In this repository, the GitHub review workflows live in plugin command files under `plugins/myk-github/commands/`. These workflows are where most users will encounter the CodeRabbit CLI. ### `/myk-github:coderabbit-rate-limit` This workflow supports three forms: ```bash /myk-github:coderabbit-rate-limit /myk-github:coderabbit-rate-limit 123 /myk-github:coderabbit-rate-limit https://github.com/owner/repo/pull/123 ``` Its documented flow is: 1. Detect the PR from the current branch, a PR number, or a full PR URL. 2. Run `coderabbit check`. 3. If the PR is rate-limited, add a 30-second buffer and run `coderabbit trigger`. The exact commands in the workflow file are: ```bash gh repo view --json nameWithOwner -q .nameWithOwner gh pr view --json number,url -q '.number' myk-claude-tools coderabbit check myk-claude-tools coderabbit trigger --wait ``` > **Note:** The extra 30-second buffer is added by the workflow, not by the `trigger` command itself. If you call the CLI directly, it waits for exactly the `--wait` value you pass. ### `/myk-github:review-handler --autorabbit` The longer-running review handler also uses the same CodeRabbit commands. 
In its `--autorabbit` loop, it checks for a CodeRabbit cooldown before fetching new review comments again: ```bash myk-claude-tools coderabbit check myk-claude-tools coderabbit trigger --wait ``` That makes the CodeRabbit CLI the shared mechanism for both: - one-off rate-limit recovery - automatic retry behavior inside the review handler loop ## Related CodeRabbit Configuration The repository also enables CodeRabbit review behavior in `.coderabbit.yaml`: ```yaml reviews: profile: assertive request_changes_workflow: true high_level_summary: true review_status: true collapse_walkthrough: false auto_review: enabled: true drafts: false ``` This config does not replace the CLI commands. Instead, it explains why the repo expects CodeRabbit summary comments and review status to exist in the first place. ## Tested Behavior The behavior described above is backed by `tests/test_coderabbit_rate_limit.py`. The test coverage includes: - wait-time parsing for minute-and-second and seconds-only formats - strict `owner/repo` validation - rate-limited and not-rate-limited JSON responses from `check` - waiting before a trigger - timeout handling - failed trigger posting - the two-consecutive-`no_comment` success heuristic - API errors during polling > **Tip:** If you change the detection markers or wait-time format in the implementation, update the tests at the same time. The tests already capture the edge cases that matter most for real review workflows. ## Troubleshooting ### `Error: Invalid repository format` Use exactly `owner/repo`. The CLI rejects anything with no slash or more than one slash. ### `Error: No CodeRabbit summary comment found on this PR` The command only works after CodeRabbit has posted its summary comment. If the PR is brand new, or CodeRabbit has not finished reviewing yet, wait and try again. ### `Error: Could not parse wait time from rate limit message` The command found the rate-limit marker but could not match the expected wait text. 
The CLI prints a snippet of the comment body to help you inspect what changed. ### `gh CLI not found` Install `gh` and make sure it is available on `PATH`. The implementation calls `gh api` directly for both detection and trigger posting. ### `Error: Failed to post review trigger comment` Check your GitHub authentication and permissions. This step creates a PR issue comment with the body `@coderabbitai review`. ### `Error: Timeout waiting for review to start (10 minutes).` The trigger comment was posted, but the CLI never saw a clear sign that the new review started. At that point, inspect the PR manually and retry later if needed. > **Warning:** The detection logic depends on CodeRabbit's current comment markers and wait-message wording. If CodeRabbit changes those formats upstream, `check` may stop recognizing the cooldown until the parser is updated. --- Source: data-formats-and-schema.md # Data Formats and Schema This project moves review data through a small set of predictable formats: - Temporary JSON files under `$TMPDIR/claude` (or `/tmp/claude` if `TMPDIR` is not set) - Hook payloads passed over stdin/stdout - Plugin and marketplace metadata files - A local SQLite database for review history and analytics If you are debugging a review flow, installing plugins, or querying past review data, these are the formats that matter. 
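As a quick orientation, the temp-file base directory can be resolved the same way the implementation does: `$TMPDIR/claude` when `TMPDIR` is set, otherwise the system temp directory. The fixed PR number `123` below is just for illustration:

```python
import os
import tempfile
from pathlib import Path

# $TMPDIR/claude when TMPDIR is set, otherwise the system temp dir.
tmp_base = Path(os.environ.get("TMPDIR") or tempfile.gettempdir())
out_dir = tmp_base / "claude"

# Hypothetical PR number, to show the review snapshot naming convention.
pr_number = 123
snapshot = out_dir / f"pr-{pr_number}-reviews.json"
print(snapshot)
```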
## At A Glance

| Format | Example location | Produced by | Used by |
|---|---|---|---|
| Review snapshot JSON | `$TMPDIR/claude/pr-123-reviews.json` | `reviews fetch` | `reviews post`, `reviews store` |
| Pending review JSON | `$TMPDIR/claude/pr-owner-repo-123-pending-review.json` | `reviews pending-fetch` | `reviews pending-update` |
| Inline comment batch JSON | any file or stdin | user or slash command workflow | `pr post-comment` |
| Review database | `.claude/data/reviews.db` | `reviews store` | `db` commands and auto-skip logic |
| Hook payload JSON | stdin/stdout | Claude Code hooks | hook scripts in `scripts/` |

> **Note:** The command docs often say `/tmp/claude/...`, but the code actually uses `$TMPDIR/claude` when `TMPDIR` is set.

## Temporary JSON Artifacts

### Review snapshot: `pr-<pr_number>-reviews.json`

This is the main handoff file for the review-reply workflow. It is created by `myk-claude-tools reviews fetch` and groups fetched threads by reviewer source. The file is built in `myk_claude_tools/reviews/fetch.py` like this:

```python
final_output = {
    "metadata": {
        "owner": owner,
        "repo": repo,
        "pr_number": int(pr_number),
        "json_path": str(json_path),
    },
    "human": categorized["human"],
    "qodo": categorized["qodo"],
    "coderabbit": categorized["coderabbit"],
}
```

A real test fixture from `tests/test_store_reviews_to_db.py` shows the shape that `reviews store` accepts:

```python
data = {
    "metadata": {
        "owner": "org",
        "repo": "repo",
        "pr_number": 1,
    },
    "human": [
        {
            "thread_id": "thread_abc",
            "node_id": "node_xyz",
            "comment_id": 12345,
            "author": "reviewer1",
            "path": "src/main.py",
            "line": 100,
            "body": "Please fix this bug",
            "priority": "HIGH",
            "status": "addressed",
            "reply": "Fixed in commit abc123",
            "skip_reason": None,
            "posted_at": "2024-01-15T10:00:00Z",
            "resolved_at": "2024-01-15T10:05:00Z",
            "type": "outside_diff_comment",
        }
    ],
    "qodo": [],
    "coderabbit": [],
}
```

What you can expect in each thread object:

- GitHub identifiers: `thread_id`, `node_id`, `comment_id`
- Location fields: `path`, `line` - Review text: `body`, `reply`, `skip_reason` - Workflow state: `status`, `posted_at`, `resolved_at` - Classification: `source`, `priority`, and sometimes `type` When threads are enriched in `process_and_categorize()`, the code adds these defaults: ```python enriched = { **thread, "source": source, "priority": priority, "reply": thread.get("reply"), "status": thread.get("status", "pending"), } ``` That means a freshly fetched thread usually starts with: - `status: "pending"` - `reply: null` - `source: "human"`, `"qodo"`, or `"coderabbit"` - `priority: "HIGH"`, `"MEDIUM"`, or `"LOW"` ### Special synthesized comment types Not every review note comes from a normal GitHub review thread. CodeRabbit body-parsed comments are converted into thread-like objects with extra fields. From `myk_claude_tools/reviews/fetch.py`: ```python threads.append({ "thread_id": None, "node_id": node_id, "comment_id": review_id, "author": author, "path": path, "line": line_int, "end_line": end_line_int, "body": body, "category": comment.get("category", ""), "severity": comment.get("severity", ""), "replies": [], "type": thread_type, "review_id": review_id, "suggestion_index": idx, }) ``` These special `type` values are currently: - `outside_diff_comment` - `nitpick_comment` - `duplicate_comment` > **Warning:** These synthesized comments do not behave like normal GitHub review threads. They are handled later as consolidated PR comments rather than replied to inline. 
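One practical consequence: before replying, a consumer has to separate normal review threads from these synthesized body comments. A minimal sketch of that split (the helper name `split_threads` is hypothetical, not part of the CLI):

```python
# The special `type` values documented above.
SYNTHESIZED_TYPES = {"outside_diff_comment", "nitpick_comment", "duplicate_comment"}

def split_threads(threads: list) -> tuple:
    """Split normal review threads from synthesized body comments (a sketch)."""
    inline, consolidated = [], []
    for t in threads:
        # Synthesized comments carry a special `type` and a null thread_id;
        # they are handled as consolidated PR comments, not inline replies.
        if t.get("type") in SYNTHESIZED_TYPES:
            consolidated.append(t)
        else:
            inline.append(t)
    return inline, consolidated

threads = [
    {"thread_id": "thread_abc", "body": "Please fix this bug"},
    {"thread_id": None, "type": "nitpick_comment", "body": "Minor style nit"},
]
inline, consolidated = split_threads(threads)
print(len(inline), len(consolidated))
```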
### Status values and resolution rules

The reply/posting step recognizes these statuses in `myk_claude_tools/reviews/post.py`:

```text
Status handling:
- addressed: Post reply and resolve thread
- not_addressed: Post reply and resolve thread (similar to addressed)
- skipped: Post reply (with skip reason) and resolve thread
- pending: Skip (not processed yet)
- failed: Retry posting

Resolution behavior by source:
- qodo/coderabbit: Always resolve threads after replying
- human: Only resolve if status is "addressed"; skipped/not_addressed
  threads are not resolved to allow reviewer follow-up
```

That source-specific rule is important if you are reading `posted_at` and `resolved_at` later:

- AI review threads are usually both replied to and resolved.
- Human review threads may be replied to without being resolved.

### Atomic writes and cleanup

The review snapshot is written atomically and the temp directory is created with restricted permissions:

```python
tmp_base = Path(os.environ.get("TMPDIR") or tempfile.gettempdir())
out_dir = tmp_base / "claude"
out_dir.mkdir(parents=True, exist_ok=True, mode=0o700)
```

The file itself is written through a temp file and renamed into place:

```python
fd, tmp_json_path = tempfile.mkstemp(
    prefix=f"pr-{pr_number}-reviews.json.",
    dir=str(out_dir),
)
...
os.replace(tmp_path, json_path)
```

The fetch module also tracks temp files and removes any orphaned `.new` files during cleanup.

> **Tip:** `reviews fetch` prints the full JSON to stdout as well as saving it to disk. `reviews pending-fetch` behaves differently and prints only the saved file path.

### Pending review snapshot: `pr-<owner>-<repo>-<pr_number>-pending-review.json`

This file is created by `myk-claude-tools reviews pending-fetch`. It is used for the “refine an existing draft review” workflow.
The exact output shape comes from `myk_claude_tools/reviews/pending_fetch.py`: ```python final_output: dict[str, Any] = { "metadata": { "owner": owner, "repo": repo, "pr_number": pr_number_int, "review_id": review_id, "username": username, "json_path": str(json_path), }, "comments": comments, "diff": diff, } ``` Each comment starts with this structure: ```python comment: dict[str, Any] = { "id": c.get("id"), "path": c.get("path"), "line": c.get("line"), "side": c.get("side", "RIGHT"), "body": c.get("body", ""), "diff_hunk": c.get("diff_hunk", ""), "refined_body": None, "status": "pending", } ``` What each field is for: - `id`: the GitHub review comment ID to patch later - `path`, `line`, `side`: where the draft comment is attached - `body`: the original comment text - `diff_hunk`: nearby diff context - `refined_body`: where your edited version goes - `status`: workflow state, typically moved from `pending` to `accepted` If you later run `pending-update`, the file may also include optional submission metadata. The module documents the expected structure like this: ```text Expected JSON structure: { "metadata": { "owner": "...", "repo": "...", "pr_number": 123, "review_id": 456, "submit_action": "COMMENT", # optional "submit_summary": "Summary text" # optional }, "comments": [ { "id": 789, "path": "src/main.py", "line": 42, "body": "original comment", "refined_body": "refined version", "status": "accepted" } ] } ``` Valid `submit_action` values come directly from code: ```python VALID_SUBMIT_ACTIONS = {"COMMENT", "APPROVE", "REQUEST_CHANGES"} ``` > **Note:** `reviews pending-update` reads this JSON and updates GitHub comments, but it does not rewrite the local JSON file the way `reviews post` does. ### Batched inline comment input `myk-claude-tools pr post-comment` accepts a much simpler format: a JSON array of `{path, line, body}` objects. 
The exact example in `myk_claude_tools/pr/post_comment.py` is: ```json [ { "path": "src/main.py", "line": 42, "body": "### [CRITICAL] SQL Injection\n\nDescription..." }, { "path": "src/utils.py", "line": 15, "body": "### [WARNING] Missing error handling\n\nDescription..." } ] ``` Severity markers are parsed from the first line of `body`: ```text Severity Markers: - ### [CRITICAL] Title - For critical security/functionality issues - ### [WARNING] Title - For important but non-critical issues - ### [SUGGESTION] Title - For code improvements and suggestions ``` One practical detail from the loader: it can recover from prepended shell or hook output by scanning for the first line that starts with `[` and attempting JSON parsing from there. ### Other JSON you may see: `pr diff` output `myk-claude-tools pr diff` prints a JSON object to stdout rather than saving a fixed temp file. This is often used as structured input for PR review workflows. From `myk_claude_tools/pr/diff.py`: ```python output = { "metadata": { "owner": pr_info.owner, "repo": pr_info.repo, "pr_number": pr_info.pr_number, "head_sha": head_sha, "base_ref": base_ref, "title": pr_title, "state": pr_state, }, "diff": pr_diff, "files": files, } ``` Each `files` entry includes: ```python { "path": f["filename"], "status": f["status"], "additions": f["additions"], "deletions": f["deletions"], "patch": f.get("patch", ""), } ``` ## Hook Payload Expectations Hook registration lives in `settings.json`. The repo uses four hook event types: ```json "hooks": { "Notification": [...], "PreToolUse": [...], "UserPromptSubmit": [...], "SessionStart": [...] } ``` ### `PreToolUse`: stdin JSON in, optional deny JSON out Both `scripts/rule-enforcer.py` and `scripts/git-protection.py` read JSON from stdin and look for `tool_name` plus `tool_input`. 
From `rule-enforcer.py`: ```python input_data = json.loads(sys.stdin.read()) tool_name = input_data.get("tool_name", "") tool_input = input_data.get("tool_input", {}) ``` The test suite shows the expected input shape clearly: ```python input_data = { "tool_name": "Bash", "tool_input": {"command": "python script.py"}, } ``` When a command is denied, the scripts return a JSON envelope under `hookSpecificOutput`. From `rule-enforcer.py`: ```python output = { "hookSpecificOutput": { "hookEventName": "PreToolUse", "permissionDecision": "deny", "permissionDecisionReason": "Direct python/pip commands are forbidden.", "additionalContext": ( "You attempted to run python/pip directly. Instead:\n" "1. Delegate Python tasks to the python-expert agent\n" "2. Use 'uv run script.py' to run Python scripts\n" "3. Use 'uvx package-name' to run package CLIs\n" "See: https://docs.astral.sh/uv/" ), } } ``` In practice: - `tool_name` is usually `"Bash"` for these hooks - `tool_input.command` is the important field for command hooks - allow decisions are normally represented by exiting successfully without printing a deny payload > **Warning:** The two command hooks have different failure behavior. `rule-enforcer.py` fails open on unexpected errors, while `git-protection.py` fails closed and returns a deny payload if it crashes. ### The prompt-based destructive-command gate There is also a prompt-style `PreToolUse` hook in `settings.json`. It asks an LLM to classify destructive shell commands and requires a very small JSON response. 
The configured prompt ends with this exact contract: ```text Respond with JSON: {"decision": "approve" or "block" or "ask", "reason": "brief explanation"} ``` If you are building tooling around this repo, those are the only three supported decisions for that gate: - `approve` - `block` - `ask` ### `UserPromptSubmit`: stdin ignored, context JSON returned `scripts/rule-injector.py` reads stdin only because the hook protocol expects it, then returns structured JSON with additional prompt context. From the script: ```python output = {"hookSpecificOutput": {"hookEventName": "UserPromptSubmit", "additionalContext": rule_reminder}} ``` That means the payload contract is simple: - input: whatever Claude Code provides on stdin - output: JSON with `hookSpecificOutput.hookEventName` and `additionalContext` ### `Notification`: JSON with a top-level `message` `scripts/my-notifier.sh` expects JSON on stdin and reads one field: ```bash if ! notification_message=$(echo "$input_json" | jq -r '.message' 2>&1); then echo "Error: Failed to parse JSON - $notification_message" >&2 exit 1 fi ``` Practical rules for this hook: - `message` must be present - `message` must not be empty or `null` - the script does not read nested fields A minimal valid payload looks like: ```json { "message": "Review completed" } ``` ### `SessionStart`: plain text, not JSON `scripts/session-start-check.sh` is the outlier. It does not parse JSON input, and when it finds missing tools or plugins it prints a plain-text report. The report starts like this: ```text MISSING_TOOLS_REPORT: [AI INSTRUCTION - YOU MUST FOLLOW THIS] Some tools required by this configuration are missing. ``` It then prints sections for critical and optional tools, install hints, and explicit instructions about asking the user for help installing them. > **Warning:** `SessionStart` output is plain text, not JSON. If you are consuming hook output programmatically, do not assume every hook in this repo uses the same encoding. 
## Plugin And Marketplace Metadata

### Marketplace manifest: `.claude-plugin/marketplace.json`

The marketplace index describes which plugins are published from this repository. A real entry looks like this:

```json
{
  "name": "myk-org",
  "owner": { "name": "myk-org" },
  "plugins": [
    {
      "name": "myk-github",
      "source": "./plugins/myk-github",
      "description": "GitHub operations - PR reviews, releases, review handling, CodeRabbit rate limits",
      "version": "1.7.2"
    },
    {
      "name": "myk-review",
      "source": "./plugins/myk-review",
      "description": "Local code review and review database operations",
      "version": "1.7.2"
    },
    {
      "name": "myk-acpx",
      "source": "./plugins/myk-acpx",
      "description": "Multi-agent prompt execution via acpx (Agent Client Protocol)",
      "version": "1.7.2"
    }
  ]
}
```

What the fields mean:

- `name`: marketplace namespace
- `owner.name`: display owner for the marketplace
- `plugins[]`: published plugin entries
- `source`: repo-relative plugin directory
- `version`: marketplace-published version for that plugin entry

### Per-plugin manifest: `plugins/*/.claude-plugin/plugin.json`

Each plugin also ships its own manifest. For example, `plugins/myk-github/.claude-plugin/plugin.json`:

```json
{
  "name": "myk-github",
  "version": "1.4.3",
  "description": "GitHub operations for Claude Code - PR reviews, releases, review handling, and CodeRabbit rate limits",
  "author": { "name": "myk-org" },
  "repository": "https://github.com/myk-org/claude-code-config",
  "license": "MIT",
  "keywords": ["github", "pr-review", "refine-review", "release", "code-review", "coderabbit", "rate-limit"]
}
```

The manifest format used across this repo is intentionally small:

- `name`
- `version`
- `description`
- `author.name`
- `repository`
- `license`
- `keywords`

### Command metadata in `plugins/*/commands/*.md`

Each slash command is a Markdown file with YAML frontmatter.
A real example from `plugins/myk-github/commands/pr-review.md`:

```yaml
---
description: Review a GitHub PR and post inline comments on selected findings
argument-hint: [PR_NUMBER|PR_URL]
allowed-tools: Bash(myk-claude-tools:*), Bash(uv:*), Bash(git:*), Bash(gh:*), AskUserQuestion, Task
---
```

Those frontmatter keys are the command schema used in this repo:

- `description`: what the command does
- `argument-hint`: how the command should be invoked
- `allowed-tools`: which Claude Code tools the command is allowed to use

You can see the same pattern repeated across command files such as:

- `plugins/myk-review/commands/local.md`
- `plugins/myk-review/commands/query-db.md`
- `plugins/myk-github/commands/release.md`
- `plugins/myk-acpx/commands/prompt.md`

### Runtime plugin metadata in `settings.json`

The checked-in `settings.json` also records which plugins are enabled and which extra marketplaces are known. From the file:

```json
"enabledPlugins": {
  "myk-review@myk-org": true,
  "myk-github@myk-org": true,
  "myk-acpx@myk-org": true
},
"extraKnownMarketplaces": {
  "cli-anything": { "source": { "source": "github", "repo": "HKUDS/CLI-Anything" } },
  "worktrunk": { "source": { "source": "github", "repo": "max-sixty/worktrunk" } }
}
```

This is runtime configuration rather than plugin packaging metadata, but it is still part of the repo’s plugin schema story.

## SQLite Review Database Schema

### Location and lifecycle

The review database lives at:

```text
<project-root>/.claude/data/reviews.db
```

The storage path is set in `myk_claude_tools/reviews/store.py`:

```python
db_path = project_root / ".claude" / "data" / "reviews.db"
```

The storage workflow is:

1. Read a completed review JSON file
2. Create the database directory if needed
3. Insert one row into `reviews`
4. Insert one row per comment into `comments`
5. Commit the transaction
6.
Delete the JSON file on success The delete step is explicit: ```python json_path.unlink() ``` > **Warning:** `reviews store` is intentionally destructive for the temp artifact. After a successful import, the JSON file is removed. ### Table definitions The schema is defined directly in Python as SQL: ```sql CREATE TABLE IF NOT EXISTS reviews ( id INTEGER PRIMARY KEY AUTOINCREMENT, pr_number INTEGER NOT NULL, owner TEXT NOT NULL, repo TEXT NOT NULL, commit_sha TEXT NOT NULL, created_at TEXT NOT NULL ); CREATE TABLE IF NOT EXISTS comments ( id INTEGER PRIMARY KEY AUTOINCREMENT, review_id INTEGER NOT NULL REFERENCES reviews(id), source TEXT NOT NULL, thread_id TEXT, node_id TEXT, comment_id INTEGER, author TEXT, path TEXT, line INTEGER, body TEXT, priority TEXT, status TEXT, reply TEXT, skip_reason TEXT, posted_at TEXT, resolved_at TEXT, type TEXT DEFAULT NULL ); ``` Indexes are also created for the most common lookups: ```sql CREATE INDEX IF NOT EXISTS idx_comments_review_id ON comments(review_id); CREATE INDEX IF NOT EXISTS idx_comments_source ON comments(source); CREATE INDEX IF NOT EXISTS idx_comments_status ON comments(status); CREATE INDEX IF NOT EXISTS idx_reviews_pr ON reviews(owner, repo, pr_number); CREATE INDEX IF NOT EXISTS idx_reviews_commit ON reviews(commit_sha); ``` ### What the columns mean For end users, the important columns are: - `reviews.id`: one stored review run - `reviews.pr_number`, `owner`, `repo`: which PR the review belongs to - `reviews.commit_sha`: the commit SHA captured at store time - `reviews.created_at`: when that database row was written And in `comments`: - `review_id`: foreign key back to `reviews.id` - `source`: `human`, `qodo`, or `coderabbit` - `thread_id`, `node_id`, `comment_id`: GitHub-side identifiers - `path`, `line`: where the comment points - `body`: the original review text - `priority`: `HIGH`, `MEDIUM`, or `LOW` - `status`: `pending`, `addressed`, `not_addressed`, `skipped`, or `failed` - `reply`: the reply text 
posted back to GitHub - `skip_reason`: why something was skipped - `posted_at`, `resolved_at`: workflow timestamps - `type`: special synthesized comment type such as `outside_diff_comment` A real test confirms that all of these fields are stored as expected: ```python assert row[0] == "thread_abc" assert row[1] == "node_xyz" assert row[2] == 12345 assert row[3] == "reviewer1" assert row[4] == "src/main.py" assert row[5] == 100 assert row[6] == "Please fix this bug" assert row[7] == "HIGH" assert row[8] == "addressed" assert row[9] == "Fixed in commit abc123" assert row[10] == "2024-01-15T10:00:00Z" assert row[11] == "2024-01-15T10:05:00Z" assert row[12] == "outside_diff_comment" ``` ### Append-only behavior Stored reviews are append-only. Re-running storage for the same PR creates a new `reviews` row instead of overwriting the old one. That behavior is tested explicitly: ```python review_id1 = store_reviews.insert_review(conn, "owner", "repo", 123, "abc1234567") review_id2 = store_reviews.insert_review(conn, "owner", "repo", 123, "def7890123") assert review_id1 != review_id2 ``` This means the database preserves history across multiple review passes on the same PR. ### Schema migration: the `type` column Older databases may not have `comments.type`. The code upgrades them automatically on startup. From `create_tables()`: ```python cursor = conn.execute("PRAGMA table_info(comments)") columns = {row[1] for row in cursor.fetchall()} if "type" not in columns: conn.execute("ALTER TABLE comments ADD COLUMN type TEXT DEFAULT NULL") ``` From `ReviewDB._migrate_schema()`: ```python cursor = conn.execute("PRAGMA table_info(comments)") columns = {row[1] for row in cursor.fetchall()} if "type" not in columns: conn.execute("ALTER TABLE comments ADD COLUMN type TEXT DEFAULT NULL") conn.commit() ``` > **Note:** There is no separate migration framework in this repo for the review database. The migration is code-driven and safe to run repeatedly. 
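Both the append-only behavior and the `reviews`/`comments` relationship are easy to demonstrate against an in-memory database. This sketch trims the schema above to its essentials and is not the repo's storage code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE reviews (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        pr_number INTEGER NOT NULL,
        owner TEXT NOT NULL,
        repo TEXT NOT NULL,
        commit_sha TEXT NOT NULL,
        created_at TEXT NOT NULL
    );
    CREATE TABLE comments (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        review_id INTEGER NOT NULL REFERENCES reviews(id),
        source TEXT NOT NULL,
        path TEXT,
        line INTEGER,
        body TEXT,
        priority TEXT,
        status TEXT,
        type TEXT DEFAULT NULL
    );
    """
)

# Two store runs against the same PR append two review rows; nothing is overwritten.
for sha in ("abc1234567", "def7890123"):
    cur = conn.execute(
        "INSERT INTO reviews (pr_number, owner, repo, commit_sha, created_at) VALUES (?, ?, ?, ?, ?)",
        (123, "owner", "repo", sha, "2024-01-15T10:00:00Z"),
    )
    conn.execute(
        "INSERT INTO comments (review_id, source, path, line, body, priority, status) VALUES (?, ?, ?, ?, ?, ?, ?)",
        (cur.lastrowid, "coderabbit", "src/main.py", 100, "Please fix this bug", "HIGH", "pending"),
    )

# Joining back through review_id shows one comment per stored review pass.
rows = conn.execute(
    "SELECT r.commit_sha, c.status FROM comments AS c JOIN reviews AS r ON c.review_id = r.id ORDER BY r.id"
).fetchall()
```

Each pass keeps its own `commit_sha`, which is what makes per-PR history queries like `idx_reviews_pr` useful.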
### Read-only query rules The analytics/query layer is intentionally read-only. `ReviewDB.query()` only accepts `SELECT` and `WITH` statements. The key safety check is: ```python if not sql_upper.startswith(("SELECT", "WITH")): raise ValueError("Only SELECT/CTE queries are allowed for safety") ``` It also rejects multiple statements and blocks modifying keywords such as: - `INSERT` - `UPDATE` - `DELETE` - `DROP` - `ALTER` - `CREATE` - `ATTACH` - `DETACH` - `PRAGMA` This is why `myk-claude-tools db query` is safe for analytics but not for schema changes. ### Dismissed-comment lookups and auto-skip semantics The database is not only for reporting. It also powers auto-skip behavior during `reviews fetch`. `get_dismissed_comments()` deliberately includes: - all `not_addressed` comments - all `skipped` comments - only some `addressed` comments, when `type` is a special synthesized type The SQL condition is: ```sql AND ( c.status IN ('not_addressed', 'skipped') OR (c.status = 'addressed' AND c.type IN ('outside_diff_comment', 'nitpick_comment', 'duplicate_comment')) ) ``` That rule exists because those special comment types do not map cleanly to resolvable GitHub review threads. The database becomes the only reliable place to remember that they were already handled. ### `db find-similar` stdin format `myk-claude-tools db find-similar` reads JSON from stdin and expects a single object with `path` and `body`. The CLI implementation does this: ```python input_data = json.load(sys.stdin) path = input_data.get("path", "") body = input_data.get("body", "") ``` The test suite uses this exact input: ```python input_json = json.dumps({"path": "path/to/file.py", "body": "Add skip option"}) ``` > **Tip:** Pass a single JSON object to `db find-similar`, not an array. ## Practical Rules Of Thumb - Use `reviews fetch` when you need a full review snapshot grouped into `human`, `qodo`, and `coderabbit`. 
- Use `reviews pending-fetch` when you already have a pending GitHub review and want to refine its draft comments. - Use `pr post-comment` when you only need to post a simple batch of inline comments. - Treat `reviews.db` as append-only history, not as a scratch database. - Expect JSON for `PreToolUse`, `UserPromptSubmit`, and `Notification`, but plain text for `SessionStart`. - If a comment has `type: outside_diff_comment`, `nitpick_comment`, or `duplicate_comment`, expect different posting and storage behavior than a normal inline thread. --- Source: development-environment.md # Development Environment This repository combines shared Claude Code configuration with a Python CLI package named `myk-claude-tools`. If you are contributing locally, the most important things to understand are the `uv`-first Python workflow, the single `tests` dependency group, the hook-driven local guardrails, and the whitelist-style `.gitignore`. If you are orienting yourself, these are the paths you will touch most often: - `myk_claude_tools/` for the Python package and CLI commands - `tests/` for the pytest suite - `scripts/` for hook scripts such as `rule-enforcer.py` and `git-protection.py` - `settings.json` for Claude Code hook wiring and tool permissions - `agents/`, `rules/`, and `plugins/` for shared configuration content ## Python Package Setup `pyproject.toml` defines a small Python package: Python 3.10+, Hatchling for builds, a console script named `myk-claude-tools`, and only two runtime dependencies. 
```1:24:pyproject.toml [project] name = "myk-claude-tools" version = "1.7.2" description = "CLI utilities for Claude Code plugins" readme = "README.md" license = "MIT" requires-python = ">=3.10" authors = [{ name = "myk-org" }] keywords = ["claude", "cli", "github", "code-review"] classifiers = [ "Development Status :: 4 - Beta", "Environment :: Console", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", ] dependencies = ["click>=8.0.0", "tomli>=2.0.0; python_version < '3.11'"] [project.scripts] myk-claude-tools = "myk_claude_tools.cli:main" ``` The CLI itself is a Click command group. At the top level, it exposes subcommands for CodeRabbit workflows, review database queries, PR helpers, release tasks, and review operations. ```12:27:myk_claude_tools/cli.py @click.group() @click.version_option() def cli() -> None: """CLI utilities for Claude Code plugins.""" cli.add_command(coderabbit_commands.coderabbit, name="coderabbit") cli.add_command(db_commands.db, name="db") cli.add_command(pr_commands.pr, name="pr") cli.add_command(release_commands.release, name="release") cli.add_command(reviews_commands.reviews, name="reviews") def main() -> None: """Entry point.""" cli() ``` > **Note:** The package version is stored in both `pyproject.toml` and `myk_claude_tools/__init__.py`. If you update release metadata manually, keep both in sync. ## Dependency Groups The repository currently defines a single dependency group: `tests`. ```67:68:pyproject.toml [dependency-groups] tests = ["pytest>=9.0.2"] ``` There is no separate `dev`, `lint`, or `docs` dependency group in `pyproject.toml` today. In practice, that means the project keeps development dependencies intentionally lean and routes test installation through `uv`. 
> **Note:** `uv.lock` is tracked in the repository. If you change dependencies, plan to update both `pyproject.toml` and `uv.lock`. ## Prerequisites Minimum setup: - Python 3.10 or newer - `uv` - Git Useful extras: - `prek` if you want to run the pre-commit stack locally - `gh` if you work on GitHub-related commands or review flows - `jq`, `gawk`, and `mcpl` for review and MCP-oriented workflows `scripts/session-start-check.sh` is the repo's built-in environment check. It treats `uv` as critical and reports optional tools when they are relevant to the current repo or workflow. If you use the full Claude Code setup, it also checks for required review plugins such as `pr-review-toolkit`, `superpowers`, and `feature-dev`. > **Tip:** If you want the packaged CLI on your `PATH`, the repository's own plugin command docs check `myk-claude-tools --version` and recommend `uv tool install myk-claude-tools` when it is missing. ## Local Development Flow The simplest way to work in this repository is: 1. Use `uv` or `uvx` for Python-related commands. 2. Run the unit test suite through the `tests` dependency group. 3. Run the hook stack through `prek` when you want the same checks the repository is configured to use. 4. If you are testing the full Claude Code configuration, expect local hooks from `settings.json` to shape the experience. `tox.toml` keeps the test flow deliberately simple. There is one environment, `unittests`, and it runs the working tree directly instead of building a source distribution first. ```1:7:tox.toml skipsdist = true envlist = ["unittests"] [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` That `uv`-first approach is enforced, not just suggested. `scripts/rule-enforcer.py` actively blocks direct `python`, `python3`, `pip`, `pip3`, and direct `pre-commit` usage in hook-managed Bash flows, and points contributors toward `uv`, `uvx`, and `prek` instead. 
```38:66:scripts/rule-enforcer.py if is_forbidden_python_command(command): output = { "hookSpecificOutput": { "hookEventName": "PreToolUse", "permissionDecision": "deny", "permissionDecisionReason": "Direct python/pip commands are forbidden.", "additionalContext": ( "You attempted to run python/pip directly. Instead:\n" "1. Delegate Python tasks to the python-expert agent\n" "2. Use 'uv run script.py' to run Python scripts\n" "3. Use 'uvx package-name' to run package CLIs\n" "See: https://docs.astral.sh/uv/" ), } } print(json.dumps(output)) sys.exit(0) if is_forbidden_precommit_command(command): output = { "hookSpecificOutput": { "hookEventName": "PreToolUse", "permissionDecision": "deny", "permissionDecisionReason": "Direct pre-commit commands are forbidden.", "additionalContext": ( ``` In practice, the commands you will see repeated across this repository are `uv run --group tests pytest tests`, `uvx ruff check .`, `prek run --all-files`, and `myk-claude-tools --version`. > **Warning:** If you are used to generic Python repositories, these guardrails can feel strict at first. That is intentional. The behavior is covered by `tests/test_rule_enforcer.py` and `tests/test_git_protection.py`, so those are the first tests to revisit when you change hook behavior. If you are exercising the full Claude Code configuration rather than only editing Python code, `settings.json` also wires in `session-start-check.sh`, `rule-injector.py`, `rule-enforcer.py`, and `git-protection.py`. That makes hook behavior part of the normal local development loop. > **Note:** `settings.json` includes a `_scriptsNote` reminding contributors that new script entries must be added in both `permissions.allow` and `allowedTools`. ## Quality Checks Ruff is the main lint and formatting tool here, and Mypy handles type checking. Ruff configuration lives in `pyproject.toml` and uses a 120-character line length with auto-fix enabled. 
Mypy is configured with stricter-than-default settings such as `disallow_untyped_defs`, `no_implicit_optional`, and `strict_equality`. Flake8 is present too, but only for `M511` via `flake8-mutable`; it is not the primary style checker for the project. The configured pre-commit stack covers file hygiene, secret scanning, Ruff, Mypy, Flake8, Gitleaks, and Markdown linting. ```9:63:.pre-commit-config.yaml repos: - repo: https://github.com/pre-commit/pre-commit-hooks rev: v6.0.0 hooks: - id: check-added-large-files - id: check-docstring-first - id: check-executables-have-shebangs - id: check-merge-conflict - id: check-symlinks - id: detect-private-key - id: mixed-line-ending - id: debug-statements - id: trailing-whitespace args: [--markdown-linebreak-ext=md] # Do not process Markdown files. - id: end-of-file-fixer - id: check-ast - id: check-builtin-literals - id: check-toml - repo: https://github.com/PyCQA/flake8 rev: 7.3.0 hooks: - id: flake8 args: [--config=.flake8] additional_dependencies: [flake8-mutable] - repo: https://github.com/Yelp/detect-secrets rev: v1.5.0 hooks: - id: detect-secrets - repo: https://github.com/astral-sh/ruff-pre-commit rev: v0.14.14 hooks: - id: ruff - id: ruff-format - repo: https://github.com/gitleaks/gitleaks rev: v8.30.0 hooks: - id: gitleaks - repo: https://github.com/pre-commit/mirrors-mypy rev: v1.19.1 hooks: - id: mypy additional_dependencies: - pytest - repo: https://github.com/igorshubovych/markdownlint-cli ``` That split is worth remembering when you debug a failed check: - Look in `pyproject.toml` for Ruff and Mypy behavior. - Look in `.flake8` for the narrow Flake8 rule set. - Look in `.pre-commit-config.yaml` for the full local hook chain. > **Tip:** Installing `prek` makes it easier to run the repository's hook stack locally without invoking `pre-commit` directly, which matches the repo's own guardrails. ## Repository Conventions The root `.gitignore` uses a whitelist model. 
It starts by ignoring everything, then selectively re-enables the files and directories that are meant to be shared. ```1:14:.gitignore # Ignore everything by default # This config integrates into ~/.claude so we must be explicit about what we track * # Core config files !.coderabbit.yaml !.gitignore !.markdownlint.yaml !LICENSE !AI_REVIEW.md !README.md !settings.json !statusline.sh !uv.lock ``` This is unusual in a Python project, but it fits this repository. The configuration is designed to live inside `~/.claude`, so the default is "ignore unless explicitly shared." In practice, that means directories such as `agents/`, `rules/`, `scripts/`, `tests/`, `plugins/`, and `myk_claude_tools/` are only tracked because individual paths are explicitly whitelisted later in the file. > **Warning:** If you add a new file under one of those trees and forget the matching `!path/to/file` entry in `.gitignore`, Git will behave as if the file does not exist. Update the correct section and keep the entries alphabetized. The same "explicit over implicit" convention shows up elsewhere too. Hook scripts are registered intentionally, allowed tools are listed explicitly, and tracked files are opt-in rather than assumed. ## CI And Automation There is no checked-in CI pipeline in this repository. There is no `.github/workflows/` directory, no `Jenkinsfile`, and no GitLab or Azure pipeline file in the tree. What the repository does include is local and external automation configuration: - `.pre-commit-config.yaml` for local checks - `.coderabbit.yaml` for review automation - `settings.json` plus `scripts/` for Claude Code hook behavior > **Note:** The `ci:` block inside `.pre-commit-config.yaml` is pre-commit metadata, not a full repository CI pipeline. > **Tip:** Treat local `tox` and `prek` runs as the primary way to validate changes before opening a pull request unless you add a dedicated CI workflow later. 
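Because validation is local, it is worth confirming the toolchain is actually present before relying on it. The following is a minimal sketch in the spirit of `session-start-check.sh`, not the script itself; the critical/optional split here is an assumption based on the prerequisites above:

```python
import shutil

# Illustrative tool lists drawn from this repo's prerequisites docs.
CRITICAL = ("uv", "git")
OPTIONAL = ("prek", "gh", "jq", "gawk", "mcpl")

# shutil.which returns None when a tool is not on PATH.
missing_critical = [tool for tool in CRITICAL if shutil.which(tool) is None]
missing_optional = [tool for tool in OPTIONAL if shutil.which(tool) is None]

report = "OK" if not missing_critical else "MISSING CRITICAL: " + ", ".join(missing_critical)
```

The real script goes much further: it also checks Claude Code plugins, prints install hints, and emits the `MISSING_TOOLS_REPORT` described earlier.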
--- Source: testing-and-quality.md # Testing and Quality This repository uses a local-first quality model. Unit tests run with `pytest`, `tox` provides a thin wrapper around that suite, and repository-wide checks run through pre-commit hooks. On top of that, the Claude Code configuration in this repo wires in runtime guardrails such as `rule-enforcer.py` and `git-protection.py`. The result is a mix of fast automated checks and explicit policy tests. The Python test suite focuses on the `myk_claude_tools` CLI, hook scripts, release helpers, review automation, and the SQLite analytics layer. ## Quick Start ```bash # Run the unit tests directly uv run --group tests pytest tests # Run a focused test module while you work uv run --group tests pytest tests/test_git_protection.py # Run the same suite through tox tox # Run formatting, typing, linting, Markdown, and secret scans prek run --all-files ``` > **Note:** `pyproject.toml` declares `requires-python = ">=3.10"`, and most commands in this repository use `uv` because both `tox.toml` and the runtime hook policy are built around it. > **Tip:** In the Claude Code environment configured by this repo, `scripts/rule-enforcer.py` intentionally blocks direct `python`, `pip`, and `pre-commit` commands. Use `uv`, `uvx`, and `prek` there instead. 
## Quality Stack at a Glance | Area | Source of truth | Purpose | | --- | --- | --- | | Unit tests | `tox.toml`, `tests/` | Runs the Python test suite with `pytest` | | Linting and formatting | `pyproject.toml`, `.pre-commit-config.yaml` | Ruff lint + Ruff format | | Type checking | `pyproject.toml`, `.pre-commit-config.yaml` | mypy | | Extra Python bug check | `.flake8`, `.pre-commit-config.yaml` | Flake8 `M511` via `flake8-mutable` | | Markdown quality | `.markdownlint.yaml`, `.pre-commit-config.yaml` | markdownlint | | Secret scanning | `.pre-commit-config.yaml` | `detect-secrets`, `gitleaks`, and `detect-private-key` | | Runtime guardrails | `settings.json`, `scripts/` | Blocks unsafe commands and protected-branch writes | ## Pytest and tox `pytest` is the actual test runner. `tox` is configured as a lightweight wrapper around it, not as a build matrix or packaging pipeline. The checked-in tox config has a single environment named `unittests`, and `skipsdist = true` tells tox not to build the package first. From `tox.toml`: ```toml skipsdist = true envlist = ["unittests"] [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` From `pyproject.toml`: ```toml [dependency-groups] tests = ["pytest>=9.0.2"] ``` What this means in practice: - If you want the fastest path, run `uv run --group tests pytest tests`. - If you prefer tox, `tox` runs the same suite through the `unittests` environment. - The test suite is designed to be fast and deterministic, with heavy use of mocked Git, GitHub CLI, GraphQL, subprocess, and SQLite interactions. ## Ruff, mypy, and Flake8 ### Ruff Ruff is the main Python linter and formatter. The repository enables autofix-friendly behavior, grouped output, and preview mode. 
From `pyproject.toml`: ```toml [tool.ruff] preview = true line-length = 120 fix = true output-format = "grouped" [tool.ruff.lint] select = ["E", "F", "W", "I", "B", "UP", "PLC0415", "ARG", "RUF059"] [tool.ruff.lint.per-file-ignores] "myk_claude_tools/*/commands.py" = ["PLC0415"] ``` What to expect: - `120` characters is the Python line-length target. - `fix = true` means Ruff is configured for autofix-friendly runs. - The selected rules cover core linting, import ordering, bug-prone patterns, Python modernization, unused arguments, and a few repository-specific checks. - `PLC0415` is ignored for `commands.py` files because the CLI command modules intentionally use lazy imports for faster startup. > **Tip:** Because `preview = true` is enabled, keeping Ruff reasonably up to date matters. Very old Ruff versions may not match the repository’s expected behavior. ### mypy Mypy is configured more strictly than its defaults, especially around untyped or partially typed functions. From `pyproject.toml`: ```toml [tool.mypy] check_untyped_defs = true disallow_any_generics = false disallow_incomplete_defs = true disallow_untyped_defs = true no_implicit_optional = true show_error_codes = true warn_unused_ignores = true strict_equality = true extra_checks = true warn_unused_configs = true warn_redundant_casts = true ``` In plain English, that means: - New Python code is expected to be typed, not left as “we’ll add types later.” - Even when a function is untyped, mypy still checks its body. - Implicit `Optional` behavior is not allowed. - Redundant casts, unused ignores, and unused config are treated as real issues. ### Flake8 Flake8 is deliberately narrow here. It is not duplicating Ruff’s full lint surface. Instead, it is reserved for `M511`, provided by `flake8-mutable`, to catch mutable default argument bugs. 
From `.flake8`: ```ini [flake8] select=M511 exclude = doc, .tox, .git, .yml, Pipfile.*, docs/*, .cache/* ``` That is why you will see both Ruff and Flake8 in the hook stack: Ruff does the heavy lifting, and Flake8 adds one focused rule the project still wants enforced. ## Markdown Quality The repository contains a lot of Markdown: plugin commands, agent definitions, rules, and user-facing docs. `markdownlint` is part of the default hook set, with a small adjustment to keep documentation writing practical. From `.markdownlint.yaml`: ```yaml default: true MD013: line_length: 180 code_blocks: false tables: false ``` This means: - All default markdownlint rules are on. - Long prose lines are allowed up to `180` characters. - Code fences and tables are exempt from the line-length rule, which avoids awkward wrapping of examples and comparison tables. ## Secret Scanning and Repository Hygiene Secret scanning is not a one-tool checkbox here. The pre-commit stack combines several layers: - `detect-private-key` from `pre-commit-hooks` catches obvious key material. - `detect-secrets` looks for likely secret patterns and high-risk text. - `gitleaks` adds a second secret-scanning engine with different signatures. - Generic hygiene hooks also catch merge conflicts, large files, mixed line endings, debug statements, trailing whitespace, and TOML syntax errors. From `.pre-commit-config.yaml`: ```yaml repos: - repo: https://github.com/pre-commit/pre-commit-hooks rev: v6.0.0 hooks: - id: check-added-large-files - id: check-merge-conflict - id: detect-private-key - id: debug-statements - id: trailing-whitespace - id: end-of-file-fixer - id: check-toml - repo: https://github.com/Yelp/detect-secrets rev: v1.5.0 hooks: - id: detect-secrets - repo: https://github.com/gitleaks/gitleaks rev: v8.30.0 hooks: - id: gitleaks ``` The project also handles test-only false positives carefully. 
Instead of weakening secret scanners globally, tests allowlist specific fake SHA-looking values inline when needed. From `tests/test_store_reviews_to_db.py`: ```python assert result == "abc1234567890abcdef" # pragma: allowlist secret ``` > **Note:** This is a good pattern to copy. If a test needs a fake token, hash, or key-shaped string, allowlist that exact line instead of broadly disabling the scanner. ## Pre-commit Workflow The checked-in hook stack is the main “one command” quality gate. In addition to the hooks above, it also runs Ruff, Ruff format, mypy, Flake8, and markdownlint. Relevant parts of `.pre-commit-config.yaml`: ```yaml ci: autofix_prs: false autoupdate_commit_msg: "ci: [pre-commit.ci] pre-commit autoupdate" repos: - repo: https://github.com/astral-sh/ruff-pre-commit rev: v0.14.14 hooks: - id: ruff - id: ruff-format - repo: https://github.com/pre-commit/mirrors-mypy rev: v1.19.1 hooks: - id: mypy additional_dependencies: - pytest - repo: https://github.com/igorshubovych/markdownlint-cli rev: v0.44.0 hooks: - id: markdownlint args: [--config, .markdownlint.yaml] types: [markdown] ``` Two practical takeaways: - The project expects contributors to fix issues locally rather than relying on automatic bot-generated fix PRs. - The hook stack is intentionally broad. If `prek run --all-files` passes, you have exercised the linting, formatting, typing, Markdown, and secret-scanning layers in one go. ## Hook-Enforced Quality in the Claude Code Environment This repository does not rely only on conventional lint and test tools. It also encodes runtime guardrails in Claude Code hook scripts, and those scripts are heavily tested. 
From `settings.json`: ```json "PreToolUse": [ { "matcher": "TodoWrite|Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" } ] }, { "matcher": "Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/git-protection.py" } ] } ] ``` ### `rule-enforcer.py` `rule-enforcer.py` keeps the environment consistent by steering Python and pre-commit execution toward the approved toolchain. From `scripts/rule-enforcer.py`: ```python def is_forbidden_python_command(command: str) -> bool: cmd = command.strip().lower() # Allow uv/uvx commands if cmd.startswith(("uv ", "uvx ")): return False # Block direct python/pip forbidden = ("python ", "python3 ", "pip ", "pip3 ") return cmd.startswith(forbidden) ``` Tests around this script verify that it: - Blocks direct `python`, `python3`, `pip`, `pip3`, and `pre-commit` commands. - Allows `uv`, `uvx`, and `prek`. - Handles whitespace and mixed case correctly. - Fails open on malformed hook input so a broken hook payload does not brick the entire session. ### `git-protection.py` `git-protection.py` protects branch hygiene. It blocks commits and pushes in cases that are easy to regret later. From `scripts/git-protection.py`: ```python # Allow amend on unpushed commits if is_amend_with_unpushed_commits(command): return False, None ``` The tests around `git-protection.py` lock down behavior such as: - Rejecting commits on `main` or `master`. - Rejecting commits and pushes to branches that are already merged. - Rejecting work on branches whose pull requests are already merged. - Allowing `git commit --amend` when the branch is ahead of its remote. - Handling detached HEAD, orphan branches, and GitHub API errors explicitly. - Failing closed when it cannot safely decide, so an unsafe commit or push does not slip through. > **Warning:** These hook scripts are part of the quality story in this repo. 
Passing `pytest` alone does not mean you have exercised the runtime guardrails unless the relevant hook tests are also green. ## What the Test Suite Actually Guarantees The tests in `tests/` are not generic smoke tests. They encode concrete policies and edge cases for the project’s most important moving parts. ### Review automation is expected to be safe and repeatable The review pipeline tests cover fetching, parsing, replying, storing, and querying review data. That includes: - GitHub GraphQL and REST pagination behavior. - Parsing CodeRabbit outside-diff, nitpick, and duplicate comments. - Posting replies idempotently, so already-posted entries are not posted again. - Retrying a failed resolve without double-posting a reply. - Grouping body-only comments into consolidated PR comments and chunking them to stay below GitHub size limits. - Different resolution rules for human versus AI-generated review comments. From `myk_claude_tools/reviews/post.py`: ```python # Determine if we should resolve this thread (MUST be before resolve_only_retry check) should_resolve = True if category == "human" and status != "addressed": should_resolve = False ``` That one branch summarizes a real project policy: human review threads stay open unless they were actually addressed, while AI-originated review items can be auto-resolved after the reply policy is applied. ### Review database access is deliberately conservative The SQLite layer is tested for append-only storage, schema migration, and read-only analytics access. The database helper does not allow arbitrary write SQL through its query interface. 
From `myk_claude_tools/db/query.py`: ```python if not sql_upper.startswith(("SELECT", "WITH")): raise ValueError("Only SELECT/CTE queries are allowed for safety") ``` Tests also verify that the query layer rejects: - `INSERT`, `UPDATE`, `DELETE`, `DROP`, `ALTER`, `ATTACH`, and `PRAGMA` - Multi-statement SQL - Dangerous keywords hidden inside otherwise “read-like” queries The storage layer is also tested to create `.claude/data/` with `0700` permissions, migrate older databases forward, and append new review snapshots instead of rewriting earlier ones. ### Version tooling is section-aware and atomic The release helpers are also well-covered. The tests check that version detection and bumping: - Find versions in `pyproject.toml`, `package.json`, `setup.cfg`, `Cargo.toml`, Gradle files, and Python `__version__` files - Skip excluded directories such as `.venv`, `node_modules`, `.tox`, and build caches - Only update the correct section, not every `version` key in a file - Reject invalid version strings - Fail safely when a requested file filter does not match - Use atomic writes so partial edits do not leak into the repository on error These are especially useful guarantees if you use the repository’s release helpers as part of a scripted workflow. ### Hook behavior is tested as behavior, not just as syntax The hook test modules do more than import the scripts and call one happy path. They cover: - Regex-based Git subcommand detection, including false-positive cases - JSON hook I/O behavior - Whitespace and case handling - Timeout and subprocess failure paths - The intentional difference between fail-open and fail-closed hooks > **Note:** Most of these tests are unit tests with mocked subprocess and GitHub responses. That is intentional. The goal is to make policy-heavy behavior easy to test without depending on live GitHub state or a specific local Git history. 
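The fail-open versus fail-closed distinction mentioned above can be made concrete with a small sketch. `decide` and `check` are hypothetical names, not helpers from this repository; the point is only the shape of the two failure policies.

```python
import json


def decide(raw_payload: str, check, *, fail_open: bool) -> bool:
    """Return True if the guarded action should be allowed.

    `check` inspects a parsed hook payload and returns True to allow.
    On malformed input, a fail-open hook (enforcement nudges) allows the
    action; a fail-closed hook (Git protection) refuses to allow it.
    """
    try:
        payload = json.loads(raw_payload)
    except json.JSONDecodeError:
        return fail_open
    return check(payload)


# Malformed payloads: an enforcement-style hook lets the action through,
# a protection-style hook blocks it.
print(decide("not json", lambda p: True, fail_open=True))   # True
print(decide("not json", lambda p: True, fail_open=False))  # False
```

The asymmetry is deliberate: a broken payload should never brick a session over a style rule, but it should also never wave through a risky Git operation.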
> **Tip:** If you want the exact contract for a subsystem, the clearest starting points are `tests/test_rule_enforcer.py`, `tests/test_git_protection.py`, `tests/test_post_review_replies.py`, `tests/test_store_reviews_to_db.py`, and `tests/test_review_db.py`. ## CI/CD Status > **Warning:** No `.github/workflows/` directory is committed in this repository. From the repository contents, the quality gates you can verify are: - `pytest` for unit tests - `tox` as a wrapper around the unit test suite - pre-commit hooks for linting, typing, formatting, Markdown, and secret scanning - Claude Code hook scripts for runtime enforcement `.pre-commit-config.yaml` does include a `ci:` section, but the source tree itself does not define a checked-in GitHub Actions pipeline. In practice, this means the repository is local-first: contributors are expected to run the checks themselves and keep them green before sharing changes. ## Recommended Workflow A practical workflow for contributors is: 1. Run a targeted `pytest` module while you work, such as `uv run --group tests pytest tests/test_post_review_replies.py`. 2. Run the full test suite with `uv run --group tests pytest tests`. 3. Run `prek run --all-files` before committing. 4. If you use tox, make sure `tox` stays green as a second path through the same unit-test suite. That combination covers the repository’s two main quality layers: tested behavior and repository-wide hygiene. --- Source: automation-and-ci.md # Automation and CI This repository does not check in a repository-local CI workflow. There is no `.github/workflows/` directory in the tree. Instead, the project's automation is built from a mix of local quality checks, `pre-commit.ci`, GitHub-app-based review automation, and Claude Code hooks. > **Note:** If you fork this repository, you will not automatically get a GitHub Actions pipeline. If you want repository-owned CI jobs, add your own workflows on top of the configs described here. 
## At a glance | Layer | What it does | Where it is configured | | --- | --- | --- | | Pre-commit hooks | Linting, formatting, secrets scanning, type checking, Markdown checks | `.pre-commit-config.yaml`, `pyproject.toml`, `.flake8`, `.markdownlint.yaml` | | Local test runner | Runs the Python test suite with `pytest` via `uv` | `tox.toml` | | CodeRabbit | Automated PR reviews, summaries, tool-driven checks, rate-limit helpers | `.coderabbit.yaml`, `myk_claude_tools/coderabbit/`, `plugins/myk-github/commands/coderabbit-rate-limit.md` | | Qodo Merge | Automated PR reviews and review commands such as `/describe`, `/review`, and `/improve` | `.pr_agent.toml`, `myk_claude_tools/reviews/`, `plugins/myk-github/commands/review-handler.md` | | Claude Code hooks | Session checks and command guardrails | `settings.json`, `scripts/` | ## Pre-commit and pre-commit.ci The main checked-in quality gate is `.pre-commit-config.yaml`. That file defines both the local hook set and the `pre-commit.ci` behavior. ```yaml ci: autofix_prs: false autoupdate_commit_msg: "ci: [pre-commit.ci] pre-commit autoupdate" ``` In practice, that means `pre-commit.ci` may create hook-version update commits, but it is not configured to push autofix commits back to pull requests. Contributors are expected to run the checks themselves. 
The hook set covers repository hygiene, Python quality, secrets scanning, and Markdown validation: ```yaml - repo: https://github.com/PyCQA/flake8 rev: 7.3.0 hooks: - id: flake8 args: [--config=.flake8] additional_dependencies: [flake8-mutable] - repo: https://github.com/astral-sh/ruff-pre-commit rev: v0.14.14 hooks: - id: ruff - id: ruff-format - repo: https://github.com/pre-commit/mirrors-mypy rev: v1.19.1 hooks: - id: mypy additional_dependencies: - pytest - repo: https://github.com/igorshubovych/markdownlint-cli rev: v0.44.0 hooks: - id: markdownlint args: [--config, .markdownlint.yaml] types: [markdown] ``` Other configured hooks include: - `pre-commit-hooks` for file hygiene such as large files, merge conflicts, symlinks, AST and TOML validation, end-of-file fixes, and private-key detection. - `detect-secrets` and `gitleaks` for secrets scanning. - `trailing-whitespace` with special Markdown handling via `--markdown-linebreak-ext=md`. Supporting config is checked in alongside the hook file. 
Markdown line-length is relaxed for prose-heavy docs: ```yaml default: true MD013: line_length: 180 code_blocks: false tables: false ``` Python linting and type-checking rules live in `pyproject.toml`: ```toml [tool.ruff] preview = true line-length = 120 fix = true output-format = "grouped" [tool.mypy] check_untyped_defs = true disallow_any_generics = false disallow_incomplete_defs = true disallow_untyped_defs = true no_implicit_optional = true show_error_codes = true warn_unused_ignores = true strict_equality = true extra_checks = true warn_unused_configs = true warn_redundant_casts = true ``` The Flake8 config is intentionally narrow and delegates the rule selection to `M511`: ```ini [flake8] select=M511 ``` Tests are wired through `tox`, but `tox` is only a runner here, not a hosted CI service: ```toml skipsdist = true envlist = ["unittests"] [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` > **Tip:** This project prefers `prek` over raw `pre-commit`. A hook script blocks direct `pre-commit` commands and tells callers to use the wrapper instead. The enforcement is explicit in `scripts/rule-enforcer.py`: ```python def is_forbidden_precommit_command(command: str) -> bool: """Check if command uses pre-commit directly instead of prek.""" cmd = command.strip().lower() # Block direct pre-commit commands return cmd.startswith("pre-commit ") ``` ## CodeRabbit CodeRabbit is configured in `.coderabbit.yaml` as an assertive reviewer that can request changes and auto-review non-draft pull requests. 
```yaml reviews: profile: assertive request_changes_workflow: true high_level_summary: true poem: false review_status: true collapse_walkthrough: false auto_review: enabled: true drafts: false ``` Its tool integrations are broad: ```yaml tools: ruff: enabled: true pylint: enabled: true eslint: enabled: true shellcheck: enabled: true yamllint: enabled: true gitleaks: enabled: true semgrep: enabled: true actionlint: enabled: true hadolint: enabled: true ``` The config also points CodeRabbit at the repository's guidance file: ```yaml knowledge_base: code_guidelines: enabled: true filePatterns: - "AI_REVIEW.md" ``` That matters because CodeRabbit is not treated as a generic reviewer here. It is expected to review in the context of this repository's rules and conventions. ### CodeRabbit helper automation The repo does more than just check in a `.coderabbit.yaml` file. It also includes local helpers for handling CodeRabbit-specific behavior. For rate limits, `myk_claude_tools/coderabbit/rate_limit.py` looks for CodeRabbit's summary comment, parses the wait time, and can re-trigger a review: ```python _SUMMARY_MARKER = "" _RATE_LIMITED_MARKER = "" _WAIT_TIME_RE = re.compile(r"Please wait \*\*(?:(\d+) minutes? and )?(\d+) seconds?\*\*") _POLL_INTERVAL = 60 # seconds between polls _MAX_POLL_ATTEMPTS = 10 # max 10 minutes ``` The expected rate-limit message is covered directly in tests: ```python RATE_LIMITED_BODY = ( f"{_SUMMARY_MARKER}\n" f"{_RATE_LIMITED_MARKER}\n" "Please wait **22 minutes and 57 seconds** before requesting another review.\n" ) ``` The CLI exposes two focused commands for this flow: ```bash myk-claude-tools coderabbit check myk-claude-tools coderabbit trigger --wait ``` The repository also accounts for a CodeRabbit quirk that matters in real review sessions: some comments are embedded in the review body instead of appearing as standard inline threads. 
```python def fetch_coderabbit_body_comments(owner: str, repo: str, pr_number: str) -> list[dict[str, Any]]: """Fetch CodeRabbit body-embedded comments from review bodies. CodeRabbit embeds some comments in the review body text (not as inline threads) when they reference code outside the PR diff range or are nitpick-level suggestions. This function fetches all CodeRabbit reviews and parses their bodies for these comments. """ ``` That extra parsing is one reason the repo ships its own review helpers instead of relying on raw GitHub review threads alone. ## Qodo Merge Qodo Merge is configured in `.pr_agent.toml`. The settings align it with the rest of the project: English-language responses, repository metadata from `AI_REVIEW.md`, and no noise on draft or WIP work. ```toml [config] response_language = "en-US" add_repo_metadata = true add_repo_metadata_file_list = ["AI_REVIEW.md"] ignore_pr_title = ["^\\[WIP\\]", "^WIP:", "^Draft:"] ignore_pr_labels = ["wip", "work-in-progress"] [github_app] handle_pr_actions = ["opened", "reopened", "ready_for_review"] pr_commands = ["/describe", "/review", "/improve"] feedback_on_draft_pr = false handle_push_trigger = true push_commands = ["/review", "/improve"] ``` The review policy is also opinionated: ```toml require_security_review = true require_tests_review = true require_estimate_effort_to_review = false require_score_review = false enable_review_labels_security = false enable_review_labels_effort = false num_max_findings = 50 persistent_comment = false ``` And code suggestions are scoped toward the kinds of files this repo actually contains: ```toml [pr_code_suggestions] extra_instructions = "Focus on Bash script quality (shellcheck, quoting, error handling), Python hook correctness, and Markdown clarity. Follow AI_REVIEW.md orchestrator pattern guidelines." focus_only_on_problems = false ``` In other words, Qodo Merge is not configured as a generic "review everything the same way" bot. 
It is pointed at this repository's actual automation surface: Bash hooks, Python scripts, Markdown docs, and repository-specific rules. ### Shared review-handling pipeline The local tooling in `myk_claude_tools/reviews/` ties Qodo, CodeRabbit, and human reviews together. Reviewer source detection is explicit: ```python QODO_USERS = ["qodo-code-review", "qodo-code-review[bot]"] CODERABBIT_USERS = ["coderabbitai", "coderabbitai[bot]"] ``` Tests lock that behavior in: ```python assert get_all_reviews.detect_source("qodo-code-review") == "qodo" assert get_all_reviews.detect_source("qodo-code-review[bot]") == "qodo" assert get_all_reviews.detect_source("coderabbitai") == "coderabbit" assert get_all_reviews.detect_source("coderabbitai[bot]") == "coderabbit" ``` The `reviews fetch` command is designed to gather all unresolved review input for the current PR and split it into `human`, `qodo`, and `coderabbit` buckets: ```python @reviews.command("fetch") @click.argument("review_url", required=False, default="") def reviews_fetch(review_url: str) -> None: """Fetch unresolved review threads from current PR. Fetches ALL unresolved review threads from the current branch's PR and categorizes them by source (human, qodo, coderabbit). Saves output to /tmp/claude/pr--reviews.json """ ``` That data can then be posted back and stored locally for analytics. 
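Given the user lists and tests quoted above, the source-detection contract reduces to a simple membership check. This is a sketch consistent with those tests, not the exact source:

```python
QODO_USERS = ["qodo-code-review", "qodo-code-review[bot]"]
CODERABBIT_USERS = ["coderabbitai", "coderabbitai[bot]"]


def detect_source(author: str) -> str:
    """Classify a review author as qodo, coderabbit, or human."""
    if author in QODO_USERS:
        return "qodo"
    if author in CODERABBIT_USERS:
        return "coderabbit"
    return "human"


print(detect_source("coderabbitai[bot]"))  # coderabbit
print(detect_source("octocat"))            # human
```

Anything not on a bot list falls through to `human`, which is why the per-source buckets are exhaustive: every comment lands somewhere.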
The storage layer writes to `.claude/data/reviews.db` and keeps per-source counts: ```python db_path = project_root / ".claude" / "data" / "reviews.db" # Count comments by source counts: dict[str, int] = {"human": 0, "qodo": 0, "coderabbit": 0} # Insert comments from each source for source in ["human", "qodo", "coderabbit"]: comments = data.get(source, []) for comment in comments: insert_comment(conn, review_id, source, comment) ``` If you use the included GitHub plugin workflow, the repo also exposes a single front door for mixed-source review handling: ```text /myk-github:review-handler /myk-github:review-handler https://github.com/owner/repo/pull/123#pullrequestreview-456 /myk-github:review-handler --autorabbit ``` The `--autorabbit` mode is specifically designed to keep processing new CodeRabbit comments in a loop while still treating human and Qodo feedback as separate inputs. > **Warning:** CodeRabbit and Qodo Merge are review assistants, not merge policy on their own. Because this repo does not ship a local GitHub Actions workflow, required checks and branch protection must be configured outside the repository if you need them. ## Claude Code hooks and session automation A large part of this repository's automation happens locally when it is used inside Claude Code. 
`settings.json` wires several hooks into scripts under `~/.claude/scripts/`: ```json "hooks": { "PreToolUse": [ { "matcher": "TodoWrite|Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" } ] } ], "UserPromptSubmit": [ { "matcher": "", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" } ] } ], "SessionStart": [ { "matcher": "", "hooks": [ { "type": "command", "command": "~/.claude/scripts/session-start-check.sh", "timeout": 5000 } ] } ] } ``` Those hooks are not CI in the GitHub Actions sense, but they are part of the project's automation story: - `rule-enforcer.py` blocks direct `python`, `pip`, and raw `pre-commit` usage. - `git-protection.py` is invoked before Bash tool use. - `rule-injector.py` loads rule files on prompt submission. - `session-start-check.sh` verifies expected tools and plugins are installed. The session-start check looks for: - `uv` - `gh` when the repo has a GitHub remote - `jq` - `gawk` - `prek` when `.pre-commit-config.yaml` is present - `mcpl` - required review plugins such as `pr-review-toolkit`, `superpowers`, and `feature-dev` - optional marketplace plugins including `coderabbit` This is a good example of how automation in this repository leans toward guardrails and workflow setup rather than a traditional single CI pipeline. ## What the tests confirm The automation in this repo is backed by unit tests, not just configuration files. - `tests/test_get_all_reviews.py` verifies reviewer source detection and priority classification. - `tests/test_coderabbit_rate_limit.py` verifies wait-time parsing and trigger/poll behavior. - `tests/test_review_db.py` and `tests/test_store_reviews_to_db.py` verify that `qodo` and `coderabbit` data are stored correctly in the review database. - `tests/test_rule_enforcer.py` verifies that direct `python`, `pip`, and `pre-commit` usage is blocked as intended. 
## What this means for users If you use this repository as intended, think of its automation in four parts: - Local quality checks come from `prek`, `uv`, `pytest`, `tox`, and Claude Code hooks. - Hosted hook automation comes from `pre-commit.ci`. - PR review automation comes from CodeRabbit and Qodo Merge. - There is no checked-in repository-local CI workflow, so GitHub Actions-style CI is something you add yourself if you need it. That setup keeps the repository lightweight while still giving you strong linting, typing, review assistance, and workflow guardrails. --- Source: extending-agents-rules-and-plugins.md # Extending Agents, Rules, and Plugins This repository is easy to extend once you know the pattern: add the primary file, whitelist it, update the companion metadata, and run the local checks. The details differ by extension type, but the workflow is consistent. ## The Safe Workflow 1. Add or remove the main file. 2. Update `.gitignore` so Git will track the change. 3. Update the related routing, hook, or marketplace metadata. 4. Add or update tests when you change guardrails or release logic. 5. Run the local validation commands before you commit. ## Start With `.gitignore` This repo is intentionally deny-by-default. New files are ignored until you explicitly allow them. From `.gitignore`: ```text # Ignore everything by default # This config integrates into ~/.claude so we must be explicit about what we track * ``` > **Warning:** If you add a file under `agents/`, `rules/`, `scripts/`, `plugins/`, or `tests/` and do not add a matching `!path/to/file` entry in `.gitignore`, the file will not be tracked. Use the existing sections as your template: - `# agents/` - `# rules/` - `# scripts/` - `# plugins/...` - `# tests/` Keep new entries in the same style and ordering as the existing block you are editing. ## Add or Remove Agents Custom agents live in `agents/`. Built-in agents do not. 
An existing agent file looks like this: ```markdown --- name: technical-documentation-writer description: MUST BE USED when you need comprehensive, user-focused technical documentation for projects, features, or systems. tools: Read, Write, Edit, Glob, Grep --- ``` `rules/10-agent-routing.md` makes the distinction explicit: ```markdown ### Built-in vs Custom Agents **Built-in agents** are provided by Claude Code itself and do NOT require definition files in `agents/`: - `claude-code-guide` - Has current Claude Code documentation built into Claude Code - `general-purpose` - Default fallback agent when no specialist matches **Custom agents** are defined in this repository's `agents/` directory and require definition files: ``` ### To add an agent 1. Create `agents/<agent-name>.md`. 2. Add frontmatter that matches the existing agent files: `name`, `description`, and `tools`. 3. Add `!agents/<agent-name>.md` to `.gitignore`. 4. Update `rules/10-agent-routing.md` so the orchestrator knows when to use the new agent. 5. Update `rules/50-agent-bug-reporting.md` so agent-configuration bugs stay reportable. The bug-reporting rule contains an explicit allowlist of repository-defined agents: ```markdown ## Agents Covered by This Rule This rule applies ONLY to agents defined in this repository (`agents/` directory): - api-documenter - bash-expert - debugger - docs-fetcher - docker-expert - frontend-expert - git-expert - github-expert - go-expert - java-expert - jenkins-expert - kubernetes-expert - python-expert - technical-documentation-writer - test-automator - test-runner ``` > **Warning:** Adding an agent file without updating `rules/50-agent-bug-reporting.md` leaves that agent outside the repository's bug-reporting workflow. ### To remove an agent 1. Delete `agents/<agent-name>.md`. 2. Remove its `!agents/<agent-name>.md` entry from `.gitignore`. 3. Remove its routing entry from `rules/10-agent-routing.md`. 4. Remove it from `rules/50-agent-bug-reporting.md`.
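Whichever direction you go, the `.gitignore` entry is the step that is easiest to forget. For a hypothetical new `agents/rust-expert.md`, the addition would be a single whitelist line in the existing `# agents/` section (assuming, as with the current agents, that the directory itself is already un-ignored by that block):

```text
# agents/
!agents/rust-expert.md
```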
> **Tip:** If the agent name appears anywhere else in rules or command prompts, clean those up in the same change. Stale agent names are easy to miss because they are just text. ## Add or Remove Rules Repository rules live in `rules/` as numbered Markdown files. The current set uses prefixes such as `00-`, `05-`, `10-`, `15-`, `20-`, `25-`, `30-`, `40-`, and `50-`. ### To add a rule 1. Create `rules/NN-your-topic.md`. 2. Pick the prefix based on load order and topic grouping, not just the next number. 3. Add `!rules/NN-your-topic.md` to `.gitignore`. 4. If the rule changes routing, bug-reporting, slash-command behavior, or task flow, update the related rule files too. ### To remove a rule 1. Delete `rules/NN-your-topic.md`. 2. Remove its whitelist line from `.gitignore`. 3. Remove references to it from other rules, scripts, or command prompts. `settings.json` wires rules into the prompt pipeline through the `UserPromptSubmit` hook: ```json "UserPromptSubmit": [ { "matcher": "", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" } ] } ] ``` > **Note:** Rule files and hook behavior should agree with each other. If you change a rule that describes enforcement, make sure the related hook scripts still match the documented behavior. ## Add or Remove Scripts Scripts live in `scripts/`. In this repo, they are not just files on disk; they are often part of the hook and permission model. `settings.json` includes an important reminder: ```json "_scriptsNote": "Script entries must be duplicated in both permissions.allow and allowedTools arrays. When adding new scripts, update BOTH locations." 
``` It also shows how scripts are hooked into Claude Code events: ```json "PreToolUse": [ { "matcher": "TodoWrite|Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" } ] } ], "UserPromptSubmit": [ { "matcher": "", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" } ] } ], "SessionStart": [ { "matcher": "", "hooks": [ { "type": "command", "command": "~/.claude/scripts/session-start-check.sh", "timeout": 5000 } ] } ] ``` ### To add a script 1. Create the file under `scripts/`. 2. Add a matching `!scripts/<script-name>` line to `.gitignore`. 3. If the orchestrator needs permission to run it, add the script entry to both `permissions.allow` and `allowedTools` in `settings.json`. 4. If it should run automatically, register it under the appropriate hook in `settings.json`. 5. Add or update tests under `tests/`, and whitelist any new test files in `.gitignore`. If you are writing a hook script, follow the same stdin/stdout protocol used by `scripts/rule-injector.py`: ```python output = {"hookSpecificOutput": {"hookEventName": "UserPromptSubmit", "additionalContext": rule_reminder}} # Output JSON to stdout print(json.dumps(output, indent=2)) ``` ### To remove a script 1. Delete the script file. 2. Remove its `.gitignore` whitelist entry. 3. Remove any matching entries from both `permissions.allow` and `allowedTools`. 4. Remove any hook registrations that call it. 5. Remove or update its tests. > **Warning:** Updating a script file without updating `settings.json` is only half the change. The repo treats hook registration and tool allowlists as part of the script contract.
## Add Slash Commands to a Plugin Each plugin has two important parts: - A manifest at `plugins/<plugin-name>/.claude-plugin/plugin.json` - One Markdown file per command at `plugins/<plugin-name>/commands/*.md` An existing plugin manifest looks like this: ```json { "name": "myk-github", "version": "1.4.3", "description": "GitHub operations for Claude Code - PR reviews, releases, review handling, and CodeRabbit rate limits", "author": { "name": "myk-org" }, "repository": "https://github.com/myk-org/claude-code-config", "license": "MIT", "keywords": ["github", "pr-review", "refine-review", "release", "code-review", "coderabbit", "rate-limit"] } ``` An existing command file looks like this: ```markdown --- description: Review a GitHub PR and post inline comments on selected findings argument-hint: [PR_NUMBER|PR_URL] allowed-tools: Bash(myk-claude-tools:*), Bash(uv:*), Bash(git:*), Bash(gh:*), AskUserQuestion, Task --- ``` ### To add a command to an existing plugin 1. Create `plugins/<plugin-name>/commands/<command-name>.md`. 2. Add frontmatter that matches the existing command files: `description`, `argument-hint`, and `allowed-tools`. 3. Write the command body in the same style as the existing command prompts. 4. Add a matching whitelist entry in `.gitignore`. The slash-command rule is strict about execution mode: ```text 1. **EXECUTE IT DIRECTLY YOURSELF** - NEVER delegate to any agent 2. **ALL internal operations run DIRECTLY** - scripts, bash commands, everything 3. **Slash command prompt takes FULL CONTROL** - its instructions override general CLAUDE.md rules ``` > **Note:** When you write a command file, write it as the command's own workflow. Do not assume the normal routing rules apply unless the command itself says to call an agent. > **Warning:** Keep command frontmatter minimal. Existing commands use `description`, `argument-hint`, and `allowed-tools`. Do not add extra keys unless you know Claude Code supports them for command files. ## Add or Remove a Plugin ### To add a plugin 1. Create `plugins/<plugin-name>/`. 2.
Add `plugins/<plugin-name>/.claude-plugin/plugin.json`. 3. Add `plugins/<plugin-name>/commands/` and at least one command file. 4. Add the full whitelist block to `.gitignore` for the plugin directory, manifest, commands directory, and each tracked command file. 5. If this plugin should be enabled in the checked-in config, add it to `settings.json` under `enabledPlugins`. 6. Add it to `.claude-plugin/marketplace.json` if it should be installable from this repo's marketplace. The existing `enabledPlugins` block includes the repository's own plugins: ```json "enabledPlugins": { "myk-review@myk-org": true, "myk-github@myk-org": true, "myk-acpx@myk-org": true } ``` ### To remove a plugin 1. Delete the plugin directory. 2. Remove its whitelist block from `.gitignore`. 3. Remove its entry from `settings.json` `enabledPlugins` if present. 4. Remove its object from `.claude-plugin/marketplace.json`. 5. Remove references to its commands anywhere else in the repo. > **Tip:** You do not need to list commands inside `plugin.json`. In this repo, command discovery comes from the files under `plugins/<plugin-name>/commands/`. ## Add or Remove Marketplace Entries The repo's marketplace manifest lives at `.claude-plugin/marketplace.json`. Each entry includes a plugin `name`, `source`, `description`, and `version`. Example: ```json { "name": "myk-org", "owner": { "name": "myk-org" }, "plugins": [ { "name": "myk-github", "source": "./plugins/myk-github", "description": "GitHub operations - PR reviews, releases, review handling, CodeRabbit rate limits", "version": "1.7.2" }, { "name": "myk-review", "source": "./plugins/myk-review", "description": "Local code review and review database operations", "version": "1.7.2" } ] } ``` ### To add a marketplace entry 1. Add a new object under `plugins`. 2. Point `source` at the plugin directory, for example `./plugins/my-plugin`. 3. Use a short description that matches the manifest and command set. 4. Set the version you want to publish. ### To remove a marketplace entry 1.
Remove the plugin object from `.claude-plugin/marketplace.json`. 2. Remove the plugin itself if it is no longer part of the repo. 3. Remove any `enabledPlugins` entry if the checked-in config enables it. > **Note:** The release tooling in `myk_claude_tools/release/detect_versions.py` only scans `pyproject.toml`, `package.json`, `setup.cfg`, `Cargo.toml`, `build.gradle`, `build.gradle.kts`, and Python `__version__` files such as `__init__.py` and `version.py`. It does not automatically update `plugins/*/.claude-plugin/plugin.json` or `.claude-plugin/marketplace.json`, so plugin and marketplace version metadata must be maintained separately. ## Validate the Change The repo's local validation lives in `tox.toml`, `pyproject.toml`, and `.pre-commit-config.yaml`. `tox.toml` runs pytest through `uv`: ```toml [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` Use these checks from the repository root: ```bash uv run --group tests pytest tests uv run --group tests pytest tests/test_rule_enforcer.py tests/test_git_protection.py uv run --group tests pytest tests/test_detect_versions.py tests/test_bump_version.py prek run --all-files ``` What those checks cover: - `uv run --group tests pytest tests` runs the full unit test suite. - `tests/test_rule_enforcer.py` and `tests/test_git_protection.py` are the important targeted tests when you change hook or guardrail scripts. - `tests/test_detect_versions.py` and `tests/test_bump_version.py` are the targeted tests when you change release or version-detection logic. - `prek run --all-files` runs the hooks defined in `.pre-commit-config.yaml`, including `flake8`, `detect-secrets`, `ruff`, `gitleaks`, `mypy`, and `markdownlint`. > **Warning:** Use `prek`, not `pre-commit`. `scripts/rule-enforcer.py` explicitly blocks direct `pre-commit` commands and tells the caller to use `prek` instead. 
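Because the release tooling leaves plugin and marketplace manifests alone, their versions can drift apart silently. A hypothetical consistency check (invented function name; file layout taken from this repo) could flag that drift as part of your own validation:

```python
import json
from pathlib import Path


def find_version_drift(root: Path) -> list[str]:
    """List plugins whose plugin.json version disagrees with marketplace.json."""
    marketplace_path = root / ".claude-plugin" / "marketplace.json"
    marketplace = json.loads(marketplace_path.read_text(encoding="utf-8"))
    listed = {p["name"]: p["version"] for p in marketplace.get("plugins", [])}
    drift: list[str] = []
    for manifest in sorted(root.glob("plugins/*/.claude-plugin/plugin.json")):
        data = json.loads(manifest.read_text(encoding="utf-8"))
        expected = listed.get(data["name"])
        if expected is not None and expected != data["version"]:
            drift.append(f"{data['name']}: plugin.json={data['version']} marketplace={expected}")
    return drift
```

Run from the repository root, an empty result means the two metadata surfaces agree; any entries name the plugins you still need to reconcile by hand.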
`pyproject.toml` is the source of truth for Ruff and Mypy settings, so if you add Python code or release tooling, treat lint and type checks as part of the change, not as optional cleanup. If you keep the whitelist, routing, bug-reporting coverage, hooks, and marketplace metadata in sync, new extensions behave like the rest of the repository instead of becoming one-off exceptions. --- Source: maintaining-release-and-versioning.md # Maintaining Releases and Versioning `claude-code-config` has more than one version surface, and not all of them are automated in the same way. At a high level, maintainers need to keep track of: - the Python CLI package version for `myk-claude-tools` - the runtime `__version__` inside the Python package - each plugin's own `.claude-plugin/plugin.json` - the root `.claude-plugin/marketplace.json` Only the first two are automatically discovered and updated by the built-in release helpers. > **Note:** This repository does not include a checked-in `.github/workflows` release pipeline. Release preparation and publishing are driven by `myk-claude-tools release ...` and the `/myk-github:release` command. ## Where the CLI version lives The installable CLI package is versioned in standard Python packaging metadata, and the package also keeps a runtime `__version__` constant. From `pyproject.toml`: ```toml [project] name = "myk-claude-tools" version = "1.7.2" description = "CLI utilities for Claude Code plugins" ``` From `myk_claude_tools/__init__.py`: ```python """myk-claude-tools: CLI utilities for Claude Code plugins.""" __version__ = "1.7.2" ``` The root CLI exposes a version flag through Click: ```python @click.group() @click.version_option() def cli() -> None: """CLI utilities for Claude Code plugins.""" ``` In practice, that means a release of the CLI should keep `pyproject.toml` and `myk_claude_tools/__init__.py` aligned. ## Where plugin versions live Each plugin has its own manifest under `plugins/*/.claude-plugin/plugin.json`. 
From `plugins/myk-github/.claude-plugin/plugin.json`: ```json { "name": "myk-github", "version": "1.4.3", "description": "GitHub operations for Claude Code - PR reviews, releases, review handling, and CodeRabbit rate limits", "author": { "name": "myk-org" }, "repository": "https://github.com/myk-org/claude-code-config", "license": "MIT", "keywords": ["github", "pr-review", "refine-review", "release", "code-review", "coderabbit", "rate-limit"] } ``` From `plugins/myk-review/.claude-plugin/plugin.json`: ```json { "name": "myk-review", "version": "1.4.3", "description": "Local code review and review database operations for Claude Code", "author": { "name": "myk-org" }, "repository": "https://github.com/myk-org/claude-code-config", "license": "MIT", "keywords": ["code-review", "local", "database", "analytics"] } ``` From `plugins/myk-acpx/.claude-plugin/plugin.json`: ```json { "name": "myk-acpx", "version": "1.4.6", "description": "Multi-agent prompt execution via acpx (Agent Client Protocol)", "author": { "name": "myk-org" }, "repository": "https://github.com/myk-org/claude-code-config", "license": "MIT", "keywords": ["acpx", "acp", "multi-agent", "codex", "gemini", "cursor", "copilot"] } ``` The repository also publishes plugin metadata through the root marketplace manifest: ```json { "name": "myk-org", "owner": { "name": "myk-org" }, "plugins": [ { "name": "myk-github", "source": "./plugins/myk-github", "description": "GitHub operations - PR reviews, releases, review handling, CodeRabbit rate limits", "version": "1.7.2" }, { "name": "myk-review", "source": "./plugins/myk-review", "description": "Local code review and review database operations", "version": "1.7.2" }, { "name": "myk-acpx", "source": "./plugins/myk-acpx", "description": "Multi-agent prompt execution via acpx (Agent Client Protocol)", "version": "1.7.2" } ] } ``` > **Warning:** In the current repository state, the plugin manifests and the marketplace manifest do not match. 
The per-plugin `plugin.json` files use `1.4.3` / `1.4.6`, while `.claude-plugin/marketplace.json` uses `1.7.2` for all three plugins. The release helpers do not reconcile this automatically, so maintainers need to decide which version numbers should move for a given release and update those files deliberately. ## What `release detect-versions` can find The release tooling scans from the current working directory, so run it from the repository root. Supported root-level version files are defined in `myk_claude_tools/release/detect_versions.py`: ```python _ROOT_SCANNERS: list[tuple[str, Callable[[Path], str | None], str]] = [ ("pyproject.toml", _parse_pyproject_toml, "pyproject"), ("package.json", _parse_package_json, "package_json"), ("setup.cfg", _parse_setup_cfg, "setup_cfg"), ("Cargo.toml", _parse_cargo_toml, "cargo"), ("build.gradle", _parse_gradle, "gradle"), ("build.gradle.kts", _parse_gradle, "gradle"), ] ``` It also walks the tree looking for Python files named `__init__.py` or `version.py` that contain a `__version__ = "..."` assignment: ```python def _find_python_version_files(root: Path) -> list[VersionFile]: """Find Python files containing __version__ assignments.""" results: list[VersionFile] = [] for dirpath, dirnames, filenames in os.walk(root): dirnames[:] = [d for d in dirnames if not _should_skip_dir(d)] for name in filenames: if name not in ("__init__.py", "version.py"): continue filepath = Path(dirpath) / name version = _parse_python_version(filepath) if version: results.append( VersionFile( path=filepath.relative_to(root).as_posix(), current_version=version, file_type="python_version", ) ) return results ``` The scanner skips common generated and dependency directories such as `.venv`, `node_modules`, `.tox`, `dist`, `build`, and `target`. 
For this repository layout, the built-in detector will typically find: - `pyproject.toml` - `myk_claude_tools/__init__.py` It will not find: - `plugins/*/.claude-plugin/plugin.json` - `.claude-plugin/marketplace.json` That limitation is important: the release helper automates the CLI version files, not the plugin manifests. The command emits JSON with a simple shape: ```python output = { "version_files": [r.to_dict() for r in results], "count": len(results), } print(json.dumps(output, indent=2)) ``` The plugin release workflow expects these fields: - `version_files[].path` - `version_files[].current_version` - `count` ## What `release bump-version` actually does `myk-claude-tools release bump-version` updates detected version files in place. It does not create commits, tags, or releases. The command validates the new version string before doing any file changes: ```python new_version = new_version.strip() if not new_version or any(ch in new_version for ch in ("\n", "\r")): return BumpResult( status="failed", error="Invalid version: must be a non-empty single-line string.", ) if new_version.startswith("v") or new_version.startswith("V"): return BumpResult( status="failed", error=f"Invalid version: '{new_version}' should not start with 'v'. 
Use '{new_version[1:]}' instead.", ) ``` A few details matter in day-to-day maintenance: - writes are atomic, using a temporary file and rename - existing file permissions are preserved when possible - `pyproject.toml` updates are scoped to the `[project]` section only - `setup.cfg` updates are scoped to `[metadata]` only - dynamic `setup.cfg` versions such as `attr:` and `file:` are intentionally skipped - if you pass `--files`, every listed path must match a detected file or the command fails before modifying anything From `myk_claude_tools/release/commands.py`: ```python @release.command("bump-version") @click.argument("version") @click.option("--files", multiple=True, help="Specific files to update (can be repeated)") def release_bump_version(version: str, files: tuple[str, ...]) -> None: """Update version strings in detected version files.""" bump_run(version, list(files) if files else None) ``` > **Warning:** `bump-version` expects the bare version number, such as `1.8.0`. `release create` expects a Git tag, such as `v1.8.0`. That `v` difference is intentional and easy to get wrong. ## What `release info` checks before a release `myk-claude-tools release info` is the pre-flight command. It uses both `git` and `gh`, and it fails early if either dependency is missing. 
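That fail-early behavior can be approximated with a simple PATH check. A minimal sketch, not the tool's actual code (the function name is illustrative):

```python
import shutil

# Hypothetical pre-flight check in the spirit of `release info`:
# report which of the required CLIs are missing from PATH.
def missing_release_dependencies() -> list[str]:
    """Return the names of required tools (git, gh) not found on PATH."""
    return [tool for tool in ("git", "gh") if shutil.which(tool) is None]
```

A caller would abort with an error as soon as this list is non-empty, before touching tags, branches, or the GitHub API.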
The release subcommands are registered like this:

```python
@release.command("info")
@click.option("--repo", help="Repository in owner/repo format")
@click.option("--target", help="Target branch for release (overrides default branch check)")
@click.option("--tag-match", help="Glob pattern to filter tags (e.g., 'v2.10.*')")
def release_info(repo: str | None, target: str | None, tag_match: str | None) -> None:
    """Fetch release validation info and commits since last tag."""
    info_run(repo, target=target, tag_match=tag_match)
```

`release info` validates:

- that you are on the effective target branch
- that the working tree is clean
- that `git fetch origin <target>` succeeds
- that the local branch is neither ahead of nor behind `origin/<target>`

It also supports automatic version-branch behavior. If the current branch is named like `v2.10`, the tool infers both the release target and a tag filter:

```python
def _detect_version_branch(current_branch: str) -> tuple[str | None, str | None]:
    """Auto-detect version branch and infer tag match pattern."""
    match = _VERSION_BRANCH_RE.match(current_branch)
    if match:
        version_prefix = match.group(1)
        return current_branch, f"v{version_prefix}.*"
    return None, None
```

That makes it easier to manage release lines such as `v2.10` without passing `--target` and `--tag-match` every time.

If validation fails, the tool returns early with validation data and skips the more expensive tag and commit collection. If validation passes, it reports:

- repository metadata
- validation results
- the most recent tag
- recent tags
- commits since the last tag
- whether this would be the first release

## What `release create` publishes

`myk-claude-tools release create` is a thin wrapper around `gh release create`.
It expects:

- a repository in `owner/repo` format
- a tag such as `v1.8.0`
- a changelog file path

From `myk_claude_tools/release/create.py`:

```python
cmd = [
    "gh", "release", "create", tag,
    "--repo", owner_repo,
    "--notes-file", changelog_file,
    "--title", title.strip() if title and title.strip() else tag,
]
if target:
    cmd.extend(["--target", target])
if prerelease:
    cmd.append("--prerelease")
if draft:
    cmd.append("--draft")
```

A few useful details:

- `owner_repo` must match `owner/repo`
- the changelog file must already exist
- `--target` is passed straight through to `gh release create`
- `--draft` and `--prerelease` are supported
- tags that do not match `vX.Y.Z` style semver are warned about, but not blocked

> **Tip:** Because this command shells out to `gh`, the actual release publishing behavior is whatever `gh release create` does with the tag and target you provide. The helper mainly adds validation and structured JSON output around that call.

## Maintainer command reference

The release flow in `plugins/myk-github/commands/release.md` uses these commands directly:

```bash
myk-claude-tools release info
myk-claude-tools release info --target <branch>
myk-claude-tools release info --target <branch> --tag-match <pattern>
myk-claude-tools release detect-versions
myk-claude-tools release bump-version <version> --files <path> --files <path>
```

And for publishing:

```bash
myk-claude-tools release create {owner}/{repo} {tag} "$CHANGELOG_FILE" [--prerelease] [--draft] [--target {target_branch}]
```

The same command definition also recommends checking that the CLI is installed first:

```bash
myk-claude-tools --version
```

If it is missing:

```bash
uv tool install myk-claude-tools
```

## Recommended release workflow for this repository

A practical release for `claude-code-config` usually looks like this:

1. Confirm the installed CLI version with `myk-claude-tools --version`.
2. Run `myk-claude-tools release info` from the repository root.
3. Fix any validation failures before going further.
4.
Run `myk-claude-tools release detect-versions` to see which files will be auto-updated.
5. Run `myk-claude-tools release bump-version <version>` for the CLI version files.
6. Manually update plugin manifests and the marketplace manifest if your release should move those versions too.
7. Commit, tag, and push according to your normal git workflow.
8. Generate release notes and pass them to `myk-claude-tools release create`.
9. Verify the published GitHub release URL returned by the command.

For this repo specifically, step 6 matters because plugin version files are not part of automatic detection.

## Guided release flow with `/myk-github:release`

If you prefer a guided workflow, the `myk-github` plugin already defines one in `plugins/myk-github/commands/release.md`. That command orchestrates these phases:

- validation with `release info`
- version discovery with `release detect-versions`
- changelog analysis
- user approval
- optional `release bump-version`
- branch creation, PR creation, and merge for the version bump
- final `release create`

It also includes a conditional `uv lock` step if a `uv.lock` file exists:

```bash
uv lock
git add uv.lock
```

This repository does not currently include `uv.lock`, but the workflow is built to handle projects that do.

> **Tip:** The slash command is the easiest way to follow the repo's intended release sequence, especially when you want the version bump to happen in a dedicated PR before the GitHub release is published.

## What the tests guarantee

The tests are a useful source of truth for what the release helpers support.
`tests/test_detect_versions.py` confirms that detection: - reads `pyproject.toml`, `package.json`, `setup.cfg`, `Cargo.toml`, and Gradle files - detects `__version__` in `__init__.py` and `version.py` - skips excluded directories - ignores malformed files - ignores `setup.cfg` values that use `attr:` or `file:` - only reads the correct section in multi-section files `tests/test_bump_version.py` confirms that bumping: - preserves unrelated file content - only updates the intended section - preserves indentation - supports updating multiple files at once - fails fast on partial `--files` mismatches - reports skipped files cleanly when writes fail When in doubt, treat those tests as the supported behavior of the release tooling. --- Source: troubleshooting.md # Troubleshooting Most problems in this repository come from one of six places: hook policy, missing prerequisites, GitHub CLI auth, `acpx` session state, review-state files, or stale task/temp data. Start with the quick checks below, then jump to the section that matches what you are seeing. ## Start Here Run the same basic checks the project itself expects: ```bash uv --version myk-claude-tools --version gh api user --jq .login acpx --version ``` If one of those fails, fix that first. - If `uv --version` fails, Python-backed hooks and CLI flows will not work. - If `myk-claude-tools --version` fails, the GitHub review and release commands cannot run. - If `gh api user --jq .login` fails, PR, review, and release commands will usually fail too. - If `acpx --version` fails, `/myk-acpx:prompt` cannot start. > **Tip:** `gh api user --jq .login` is the quickest way to tell whether a "GitHub problem" is really an auth problem. ## Hook and startup failures This repo wires its behavior through `settings.json`. 
The important hook events are `PreToolUse`, `UserPromptSubmit`, and `SessionStart`: ```json "PreToolUse": [ { "matcher": "TodoWrite|Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-enforcer.py" } ] }, { "matcher": "Bash", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/git-protection.py" } ] } ], "UserPromptSubmit": [ { "matcher": "", "hooks": [ { "type": "command", "command": "uv run ~/.claude/scripts/rule-injector.py" } ] } ], "SessionStart": [ { "matcher": "", "hooks": [ { "type": "command", "command": "~/.claude/scripts/session-start-check.sh", "timeout": 5000 } ] } ] ``` If hook behavior looks inconsistent after you add or change an allowed script, check both permission lists. The repo keeps the same entries in `permissions.allow` and `allowedTools`: ```json "_scriptsNote": "Script entries must be duplicated in both permissions.allow and allowedTools arrays. When adding new scripts, update BOTH locations." ``` If you update only one list, Claude Code may still deny the tool use even though the hook script exists. ### "Direct python/pip commands are forbidden" This is `scripts/rule-enforcer.py` doing exactly what it was written to do: ```python # Block direct python/pip forbidden = ("python ", "python3 ", "pip ", "pip3 ") return cmd.startswith(forbidden) def is_forbidden_precommit_command(command: str) -> bool: """Check if command uses pre-commit directly instead of prek.""" cmd = command.strip().lower() # Block direct pre-commit commands return cmd.startswith("pre-commit ") ``` The same hook also tells you what to use instead: ```python "You attempted to run python/pip directly. Instead:\n" "1. Delegate Python tasks to the python-expert agent\n" "2. Use 'uv run script.py' to run Python scripts\n" "3. Use 'uvx package-name' to run package CLIs\n" "You attempted to run pre-commit directly. Instead:\n" "1. Use the 'prek' command which wraps pre-commit\n" "2. 
Example: prek run --all-files\n" ``` Practical replacements that are already covered by tests in this repo: - Use `uv run script.py` instead of `python script.py` - Use `uvx ruff check .` instead of installing a tool ad hoc with `pip` - Use `prek run --all-files` instead of `pre-commit run --all-files` ### "Session start says tools are missing" `scripts/session-start-check.sh` does a non-blocking dependency sweep. It treats `uv` and the three review plugins as critical, and several other tools as optional: ```bash # CRITICAL: uv - Required for Python hooks if ! command -v uv &>/dev/null; then missing_critical+=("[CRITICAL] uv - Required for running Python hooks Install: https://docs.astral.sh/uv/") fi # OPTIONAL: gh - Only check if this is a GitHub repository if git remote -v 2>/dev/null | grep -q "github.com"; then if ! command -v gh &>/dev/null; then missing_optional+=("[OPTIONAL] gh - Required for GitHub operations (PRs, issues, releases) Install: https://cli.github.com/") fi fi # CRITICAL: Review plugins - Required for mandatory code review loop critical_marketplace_plugins=( pr-review-toolkit superpowers feature-dev ) ``` If you see a `MISSING_TOOLS_REPORT`, prioritize fixes in this order: - `uv` - Missing review plugins: `pr-review-toolkit`, `superpowers`, `feature-dev` - `gh` if you use GitHub-backed commands - `prek` if you run pre-commit checks locally - `mcpl`, `jq`, `gawk`, and optional marketplace plugins as needed The script also prints the exact plugin install pattern it expects, including: ```text /plugin marketplace add claude-plugins-official /plugin install pr-review-toolkit@claude-plugins-official /plugin install superpowers@claude-plugins-official /plugin install feature-dev@claude-plugins-official ``` ### "The security gate blocked my Bash command" `settings.json` includes a second `PreToolUse` prompt that inspects destructive Bash. It can `approve`, `block`, or `ask`. 
That is expected for commands involving things like system directories, raw disk writes, filesystem formatting, or risky `sudo` deletes. If you get asked for confirmation on a command that feels safe, the command probably looks risky to the prompt because of chained operators, delete targets, or elevated permissions. > **Warning:** This prompt is intentionally conservative. If you are doing something destructive on purpose, expect it to stop and ask. ## Git commits and pushes are blocked `scripts/git-protection.py` intercepts `git commit` and `git push` and blocks a few categories of mistakes: - Committing directly on `main` or `master` - Pushing directly on `main` or `master` - Committing on a branch whose PR is already merged - Pushing on a branch whose PR is already merged - Committing in detached `HEAD` It also checks GitHub merge state when the remote is on GitHub. That lookup uses `gh`, so an auth or API failure can surface as a blocked commit or push. The important behavior to know is that this hook fails closed. If the GitHub lookup errors, the command is denied. > **Warning:** A broken `gh` session can look like a Git protection problem, even when your branch is otherwise fine. Two details that surprise people: - Commits in detached `HEAD` are blocked, because the script warns that those commits can become orphaned. - Pushes from detached `HEAD` are not automatically blocked, because the code treats an explicit detached push as potentially intentional. If you see a Git protection message that mentions a hook error, fix the `gh` problem first. The hook is not asking you to bypass it; it is telling you the merge-status check did not complete safely. ## GitHub CLI auth and API problems The GitHub-backed flows in `myk_claude_tools` use both REST and GraphQL through `gh`. 
For example, `reviews pending-fetch` checks the authenticated user like this:

```python
result = subprocess.run(
    ["gh", "api", "user", "--jq", ".login"],
    capture_output=True,
    text=True,
    encoding="utf-8",
    timeout=30,
)
...
if result.returncode != 0:
    stderr = result.stderr or ""
    print_stderr(f"Error: Could not get authenticated user: {stderr.strip()}")
    return None
```

If `gh` is not installed, several commands fail fast with messages like:

- `Error: GitHub CLI (gh) not found. Install gh to fetch PR diff.`
- `Error: GitHub CLI (gh) not found. Install gh to fetch CLAUDE.md.`

If `gh` is installed but not authenticated, you will usually see one of these patterns instead:

- `Error: Could not get authenticated user: ...`
- `Error: Could not determine authenticated user`
- `Warning: GraphQL query failed: ...`
- `Warning: API call to ... failed: ...`

The quickest way to narrow this down is:

1. Run `gh api user --jq .login`
2. If that fails, fix GitHub CLI auth before retrying anything else
3. Retry the original command only after the user lookup succeeds

### "No pending review found"

`myk_claude_tools/reviews/pending_fetch.py` is used by `/myk-github:refine-review`. It expects you to already have a pending review on GitHub:

- `Error: No pending review found for user '<user>' on PR #<number>`
- `Start a review on GitHub first by adding comments without submitting.`

That is not a bug. It means there is nothing pending for the tool to refine yet.

### "Could not infer PR from branch"

`reviews fetch` can auto-detect the PR from the current branch, but it refuses to guess in detached `HEAD`:

- `Error: Detached HEAD; cannot infer PR from branch. Check out a branch with an open PR.`

If you are on a detached commit, check out the branch that owns the PR or pass an explicit PR URL where the command supports it.

## acpx session failures

`/myk-acpx:prompt` tries to use persistent sessions by default.
The workflow in `plugins/myk-acpx/commands/prompt.md` is:

```bash
acpx sessions ensure
acpx sessions new
```

If both fail, the command documentation is explicit about what to do next:

```text
If the error contains "Invalid params" or "session" and "not found", display:
"acpx session management failed for `<agent>`. This is a known issue — see:
-
-
Falling back to one-shot mode (`--exec`)."
```

If you hit session problems:

- Use `--exec` to bypass session persistence for that run
- Confirm `acpx --version` works
- Confirm the underlying agent CLI is installed separately
- Retry session mode later if you specifically need follow-up prompts in the same agent session

`--fix`, `--peer`, and `--exec` are also mutually exclusive in several combinations. If the command rejects your flags, treat that as input validation, not a runtime failure.

> **Tip:** If you only need one answer and do not need conversation state, `--exec` is the most reliable mode.

## CodeRabbit problems

### Rate limit handling does not start or resume

The CodeRabbit helper looks for a specific summary comment and parses the cooldown from its body.
The core behavior is in `myk_claude_tools/coderabbit/rate_limit.py`: ```python if comment_id is None or body is None: print(f"Error: {error}") return 1 if _RATE_LIMITED_MARKER not in body: print(json.dumps({"rate_limited": False})) return 0 wait_seconds = _parse_wait_seconds(body) if wait_seconds is None: print("Error: Could not parse wait time from rate limit message.") snippet = "\n".join(body.split("\n")[:10]) print(f"Comment snippet:\n{snippet}") return 1 print(json.dumps({"rate_limited": True, "wait_seconds": wait_seconds, "comment_id": comment_id})) ``` Common meanings: - `No CodeRabbit summary comment found on this PR` means the handler could not find the comment it depends on - `Could not parse wait time from rate limit message` means the comment exists, but its text no longer matches the expected format - A JSON response with `rate_limited: true` means the helper successfully parsed the cooldown and can trigger later The trigger flow also treats a replaced summary comment as success after two consecutive misses: ```python elif status == "no_comment": none_streak += 1 if none_streak >= 2: print("Review started (comment replaced).") return 0 print("Warning: Could not find comment. Retrying...") ``` That is why the tool can say `Review started (comment replaced).` even though it no longer sees the original summary comment. ### Body comments seem to be missing CodeRabbit findings are not always inline threads. `myk_claude_tools/reviews/coderabbit_parser.py` parses three body-embedded sections: ```python _OUTSIDE_SECTION_START_RE = re.compile( r"\s*(?:\S+\s+)*?Outside diff range comments?\s*(?:\(\d+\))?\s*\s*
", ) _NITPICK_SECTION_START_RE = re.compile( r"\s*(?:\S+\s+)*?Nitpick comments?\s*(?:\(\d+\))?\s*\s*
", ) _DUPLICATE_SECTION_START_RE = re.compile( r"\s*(?:\S+\s+)*?Duplicate comments?\s*(?:\(\d+\))?\s*\s*
", ) ``` It also strips AI-only helper text before building the final comment body: ```python _AI_PROMPT_RE = re.compile( r"
\s*\n?\s*\s*\S*\s*Prompt for AI Agents\s*.*?
", re.DOTALL, ) ``` The tests in `tests/test_coderabbit_parser.py` cover the edge cases that matter in practice: - single-line references like `` `42` `` - file paths with spaces - missing sections - truncated HTML with unclosed tags, which intentionally returns an empty parse - trailing `Prompt for AI Agents` blocks, which are intentionally removed - mixed bodies that contain outside-diff, nitpick, and duplicate sections together If you expect a body-embedded comment and do not see it after fetch, compare the current CodeRabbit comment HTML to those parser expectations first. ### Outside-diff comments are grouped into PR comments Outside-diff, nitpick, and duplicate items do not have normal GitHub review threads to reply to. `myk_claude_tools/reviews/post.py` groups them into one or more PR-level comments per reviewer: ```python max_len = 55000 # Leave margin below GitHub's ~65KB limit header_template = "@{reviewer}\n\nThe following review comments were reviewed and a decision was made:\n\n" ``` That means you should expect: - a PR comment, not an inline thread reply - one comment per reviewer, sometimes split into parts when large - body-comment items to be tracked in the review database, even though there was no inline thread to resolve > **Note:** If a CodeRabbit item was outside the diff, it is normal for it to show up as a consolidated PR comment instead of an inline thread resolution. ### "Why did my reply post but the human thread stay open?" That is deliberate. In `myk_claude_tools/reviews/post.py`, human review threads are only resolved when the status is `addressed`. For human `skipped` or `not_addressed`, the tool replies but keeps the thread open. On rerun, you may see the exact text `reply already posted at ... (not resolving by policy)`. AI threads behave differently. Qodo and CodeRabbit replies are resolved after replying for the handled statuses. 
## Review JSON, temp files, and database state

Review commands intentionally keep intermediate state on disk so they can retry safely. `myk_claude_tools/reviews/fetch.py` writes review JSON under `$TMPDIR/claude` and sets restrictive permissions:

```python
tmp_base = Path(os.environ.get("TMPDIR") or tempfile.gettempdir())
out_dir = tmp_base / "claude"
out_dir.mkdir(parents=True, exist_ok=True, mode=0o700)
try:
    out_dir.chmod(0o700)
except OSError as e:
    print_stderr(f"Warning: unable to set permissions on {out_dir}: {e}")
json_path = out_dir / f"pr-{pr_number}-reviews.json"
```

That aligns with the repo rule in `rules/40-critical-rules.md`: temp files belong in `/tmp/claude/`, not in the project tree. `reviews post` then updates the JSON with `posted_at` and `resolved_at`. This is what makes retries safe:

- If a reply posted but resolution failed, rerunning can do a resolve-only retry
- If both `posted_at` and `resolved_at` are already present, reruns skip that entry
- If posting fails, the tool prints `ACTION REQUIRED` and a retry command that points at the saved JSON file: `myk-claude-tools reviews post <json-path>`

After the whole flow is done, `reviews store` imports the finished JSON into SQLite at `.claude/data/reviews.db` and deletes the JSON on success. That means two things are normal:

- The JSON file disappearing after `myk-claude-tools reviews store`
- The database directory being created with `0700`

It also means you should not delete the JSON too early if you still need `reviews post` to retry.

> **Note:** Review tools honor `TMPDIR`. On systems with a custom temp directory, your files may be under `$TMPDIR/claude` instead of `/tmp/claude`.

### Why a dismissed comment can disappear automatically later

`reviews fetch` can preload dismissed comments from the review database and mark matching items as auto-skipped.
The exact display string comes from `fetch.py`: - `Auto-skipped (skipped): ...` - `Auto-skipped (addressed): ...` The database logic is intentionally conservative: - `not_addressed` and `skipped` comments are always treated as dismissed - `addressed` comments are only reused for body-comment types such as `outside_diff_comment`, `nitpick_comment`, and `duplicate_comment` Regular inline thread comments rely on GitHub's own resolved-thread state instead of body-similarity matching. If an inline thread seems to "come back" in a later PR, that is usually why. ## Stale task state Task state persists on disk at: `~/.claude/tasks//` The repo's task-system rule is explicit that stale tasks cause confusing progress indicators, clutter, and blocked-looking workflows. The documented cleanup pattern is: ```text TaskList TaskUpdate: taskId="5", status="completed" TaskList ``` If your task panel looks wrong after a multi-phase workflow, do a cleanup pass: - Run `TaskList` - Look for `pending` or `in_progress` items that should be closed - Update them to `completed` - Run `TaskList` again to confirm the state is clean ## Local validation and "does this repo expect me to run that another way?" The supported local test command comes from `tox.toml`: ```toml [env.unittests] description = "Run pytest tests" deps = ["uv"] commands = [["uv", "run", "--group", "tests", "pytest", "tests"]] ``` That matches the Python tooling defined in `pyproject.toml` and the hook stack in `.pre-commit-config.yaml`, which includes `ruff`, `ruff-format`, `flake8`, `mypy`, `markdownlint`, `gitleaks`, `detect-secrets`, and other hygiene checks. 
A short example from `.pre-commit-config.yaml`: ```yaml - repo: https://github.com/astral-sh/ruff-pre-commit rev: v0.14.14 hooks: - id: ruff - id: ruff-format ``` Two practical takeaways: - Use `uv run --group tests pytest tests` for the test suite - Use `prek`, not raw `pre-commit`, because the hook policy blocks direct `pre-commit ...` The CodeRabbit config also enables several analyzers, including `ruff`, `pylint`, `eslint`, `shellcheck`, `yamllint`, `gitleaks`, `semgrep`, `actionlint`, and `hadolint`. If review feedback seems unfamiliar, it may be mirroring one of those configured tools. ## Quick symptom index | Symptom | Most likely cause | What to do | | --- | --- | --- | | `Direct python/pip commands are forbidden` | `rule-enforcer.py` | Switch to `uv run`, `uvx`, or `prek` | | `Direct pre-commit commands are forbidden` | `rule-enforcer.py` | Run `prek run --all-files` instead | | Missing tools report on startup | `session-start-check.sh` | Install `uv` first, then required review plugins, then optional tools you actually use | | Commit or push blocked unexpectedly | `git-protection.py` | Check branch state, then verify `gh api user --jq .login` works | | `Could not determine authenticated user` | `gh` auth or API problem | Fix GitHub CLI auth before rerunning review commands | | `No pending review found` | No draft review exists yet | Start a review on GitHub, add comments, leave it pending, then rerun | | `Detached HEAD; cannot infer PR from branch` | Branch auto-detection cannot work | Check out the PR branch or use an explicit PR URL if supported | | `acpx` session errors mentioning invalid params or session not found | Known upstream session issue | Use `--exec` and retry later if you need persistent sessions | | CodeRabbit rate-limit handler cannot continue | Missing summary comment or changed cooldown text | Check for `No CodeRabbit summary comment found` or `Could not parse wait time...` | | CodeRabbit outside-diff comment did not resolve inline | It 
was a body comment, not a thread | Look for the consolidated PR-level comment instead | | Review JSON disappeared after storage | Normal `reviews store` behavior | Check `.claude/data/reviews.db` instead | | Task UI shows stale progress | Old tasks still marked open | Run `TaskList`, then complete stale tasks | --- Source: license.md # License `claude-code-config` is released under the MIT License. That is a permissive open source license: you can use the project, copy it, change it, redistribute it, and build on it, including for internal or commercial work. The full legal text is in the repository root `LICENSE`, and it applies to the software and associated documentation files in this repository. ## What You Can Do - Use the project privately, internally, or commercially. - Fork the repository and customize its agents, rules, plugins, scripts, and Python utilities. - Copy code or configuration into your own tooling. - Redistribute the original project or a modified version. - Sell software or services that include this project. - Keep your changes private; MIT does not require you to publish modifications. > **Tip:** If you are adapting this repository for your own Claude Code setup, the MIT license lets you do that without asking for additional permission. ## What You Need To Keep MIT is simple, but it does have one important condition: if you redistribute the software or a substantial portion of it, keep the copyright notice and permission notice with it. In this repository, that notice starts with `Copyright (c) 2026 myk-org`. 
```5:13:LICENSE
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
```

> **Note:** The safest default is to include the root `LICENSE` file whenever you redistribute this project, a fork of it, or a substantial copied portion.

## How The Project Declares MIT

This repository uses a single top-level `LICENSE` file, and its published artifacts repeat that MIT designation in their metadata. The Python package `myk-claude-tools` declares MIT in `pyproject.toml`:

```1:14:pyproject.toml
[project]
name = "myk-claude-tools"
version = "1.7.2"
description = "CLI utilities for Claude Code plugins"
readme = "README.md"
license = "MIT"
requires-python = ">=3.10"
authors = [{ name = "myk-org" }]
keywords = ["claude", "cli", "github", "code-review"]
classifiers = [
    "Development Status :: 4 - Beta",
    "Environment :: Console",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
```

Plugin manifests also mark themselves as MIT, for example `plugins/myk-github/.claude-plugin/plugin.json`:

```1:10:plugins/myk-github/.claude-plugin/plugin.json
{
  "name": "myk-github",
  "version": "1.4.3",
  "description": "GitHub operations for Claude Code - PR reviews, releases, review handling, and CodeRabbit rate limits",
  "author": { "name": "myk-org" },
  "repository": "https://github.com/myk-org/claude-code-config",
  "license": "MIT",
  "keywords": ["github", "pr-review", "refine-review", "release", "code-review", "coderabbit", "rate-limit"]
}
```

## Example Of Code You May Reuse

The license covers real project code, not just metadata. For example, the CLI entry point in `myk_claude_tools/cli.py` is part of the MIT-licensed repository and can be reused or adapted under the same notice requirement:

```12:27:myk_claude_tools/cli.py
@click.group()
@click.version_option()
def cli() -> None:
    """CLI utilities for Claude Code plugins."""


cli.add_command(coderabbit_commands.coderabbit, name="coderabbit")
cli.add_command(db_commands.db, name="db")
cli.add_command(pr_commands.pr, name="pr")
cli.add_command(release_commands.release, name="release")
cli.add_command(reviews_commands.reviews, name="reviews")


def main() -> None:
    """Entry point."""
    cli()
```

The same practical rule applies across the repository contents, including hook scripts in `scripts/`, tests in `tests/`, and automation files such as `tox.toml` and `.pre-commit-config.yaml`, unless a specific file says otherwise.

## What MIT Does Not Promise

MIT is permissive, but it also says the software is provided "AS IS." That means there is no warranty that the project will fit your exact use case, remain bug-free, or come with support obligations.

> **Warning:** The MIT license covers this repository's code and documentation. Third-party dependencies and external tools referenced by the project, such as packages in `pyproject.toml` or hooks listed in `.pre-commit-config.yaml`, keep their own licenses.

For exact legal terms, read the root `LICENSE` file directly. This page is a practical summary, not legal advice.

---
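As a practical illustration of the reuse the license permits, here is a minimal, hypothetical CLI that adapts the same `click` group pattern used by `myk_claude_tools/cli.py`. The `hello` command and its behavior are invented for this sketch and are not part of `myk-claude-tools`:

```python
import click


@click.group()
@click.version_option("0.1.0")
def cli() -> None:
    """A hypothetical CLI reusing the MIT-licensed group pattern."""


@cli.command()
@click.argument("name")
def hello(name: str) -> None:
    """Invented example subcommand; not part of myk-claude-tools."""
    click.echo(f"Hello, {name}!")


def main() -> None:
    """Entry point, mirroring the structure of the original cli.py."""
    cli()


if __name__ == "__main__":
    main()
```

If you copy substantial portions of the original `cli.py` into a tool like this, remember the notice requirement above: ship the repository's `LICENSE` alongside your derivative.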