Run with Docker

This repository provides both a Dockerfile and a docker-compose.yaml to run docsfy as a containerized service on port 8000.

Prerequisites and Environment

Create a local .env file from .env.example:

cp .env.example .env

The shipped example includes required and optional runtime variables:

# REQUIRED - Admin key for user management (minimum 16 characters)
ADMIN_KEY=your-secure-admin-key-here-min-16-chars

# AI Configuration
AI_PROVIDER=claude
# The [1m] suffix selects the 1-million-token context window; it is part of the model identifier
AI_MODEL=claude-opus-4-6[1m]
AI_CLI_TIMEOUT=60

# Claude - Option 1: API Key
# ANTHROPIC_API_KEY=

# Claude - Option 2: Vertex AI
# CLAUDE_CODE_USE_VERTEX=1
# CLOUD_ML_REGION=
# ANTHROPIC_VERTEX_PROJECT_ID=

# Gemini
# GEMINI_API_KEY=

# Cursor
# CURSOR_API_KEY=

# Logging
LOG_LEVEL=INFO

# Set to false for local HTTP development
# SECURE_COOKIES=false

Startup enforces ADMIN_KEY presence and minimum length:

@asynccontextmanager
async def lifespan(app: FastAPI) -> AsyncIterator[None]:
    settings = get_settings()
    if not settings.admin_key:
        logger.error("ADMIN_KEY environment variable is required")
        raise SystemExit(1)

    if len(settings.admin_key) < 16:
        logger.error("ADMIN_KEY must be at least 16 characters long")
        raise SystemExit(1)

    _generating.clear()
    await init_db(data_dir=settings.data_dir)
    await cleanup_expired_sessions()
    yield

Warning: If ADMIN_KEY is missing or shorter than 16 characters, the container exits during startup.
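Any high-entropy string of at least 16 characters satisfies the check. One way to generate a suitable value, using only the Python standard library:

```python
import secrets

# 16 random bytes encode to 32 hex characters, comfortably above the 16-char minimum
admin_key = secrets.token_hex(16)
print(f"ADMIN_KEY={admin_key}")
```

Paste the printed value into your .env file.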

Warning: SECURE_COOKIES defaults to true. For plain HTTP local development, set SECURE_COOKIES=false in .env or browser login cookies may not persist.


Repository compose file:

services:
  docsfy:
    build: .
    ports:
      - "8000:8000"
    env_file: .env
    volumes:
      - ./data:/data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

Run it:

mkdir -p data
docker compose up --build

Detached mode:

docker compose up -d --build

Stop and remove container/network:

docker compose down

Run directly from the Dockerfile

The image is a multi-stage build (builder + runtime), installs dependencies with uv, and runs as a non-root user (appuser):

FROM python:3.12-slim AS builder
WORKDIR /app
COPY --from=ghcr.io/astral-sh/uv:0.5.14 /uv /usr/local/bin/uv
RUN apt-get update && apt-get install -y --no-install-recommends \
    git \
    && rm -rf /var/lib/apt/lists/*
COPY pyproject.toml uv.lock ./
COPY src/ src/
RUN uv sync --frozen --no-dev

FROM python:3.12-slim
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends \
    bash \
    git \
    curl \
    nodejs \
    npm \
    && rm -rf /var/lib/apt/lists/*

Runtime data, health check, and entrypoint:

RUN useradd --create-home --shell /bin/bash -g 0 appuser \
    && mkdir -p /data \
    && chown appuser:0 /data \
    && chmod -R g+w /data

USER appuser
ENV PATH="/home/appuser/.local/bin:/home/appuser/.npm-global/bin:${PATH}"
ENV HOME="/home/appuser"

EXPOSE 8000

HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

ENTRYPOINT ["uv", "run", "--no-sync", "uvicorn", "docsfy.main:app", "--host", "0.0.0.0", "--port", "8000"]

Build and run:

docker build -t docsfy:local .
mkdir -p data
docker run --rm -p 8000:8000 --env-file .env -v "$(pwd)/data:/data" docsfy:local

Note: The container always listens on internal port 8000 (the ENTRYPOINT hard-codes --port 8000). To serve on a different host port, change only the host side of the mapping, e.g. -p 8080:8000.


Mounted Data Volume (/data)

Compose mounts host ./data into container /data:

volumes:
  - ./data:/data

Application defaults also target /data:

class Settings(BaseSettings):
    model_config = SettingsConfigDict(
        env_file=".env",
        env_file_encoding="utf-8",
        extra="ignore",
    )

    data_dir: str = "/data"

Storage paths are derived from DATA_DIR and initialized on startup:

DB_PATH = Path(os.getenv("DATA_DIR", "/data")) / "docsfy.db"
DATA_DIR = Path(os.getenv("DATA_DIR", "/data"))
PROJECTS_DIR = DATA_DIR / "projects"

async def init_db(data_dir: str = "") -> None:
    ...
    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    PROJECTS_DIR.mkdir(parents=True, exist_ok=True)

Project artifacts are organized under provider/model-specific subdirectories:

return PROJECTS_DIR / safe_owner / _validate_name(name) / ai_provider / ai_model
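The resulting on-disk layout can be sketched as follows. This is a minimal illustration only: the real `safe_owner` and `_validate_name` sanitization live in the application and may differ from the hypothetical `safe` helper used here.

```python
from pathlib import Path

PROJECTS_DIR = Path("/data") / "projects"

def project_dir(owner: str, name: str, ai_provider: str, ai_model: str) -> Path:
    # Hypothetical stand-in for the app's sanitization helpers
    def safe(segment: str) -> str:
        return segment.replace("/", "_")

    return PROJECTS_DIR / safe(owner) / safe(name) / ai_provider / ai_model

p = project_dir("alice", "my-docs", "claude", "claude-opus")
print(p)  # /data/projects/alice/my-docs/claude/claude-opus
```

So two generations of the same project with different providers or models never overwrite each other's artifacts.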

The repository intentionally ignores local data folders:

# Data
data/
.dev/data/

Tip: Back up ./data (especially docsfy.db and projects/) to preserve generated docs and metadata across container rebuilds.
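A minimal backup sketch using the standard library (assumes it is run from the repository root, with the container stopped or idle so docsfy.db is not mid-write):

```python
import shutil
import time
from pathlib import Path

data_dir = Path("data")        # host-side directory backing the /data mount
backup_dir = Path("backups")
backup_dir.mkdir(exist_ok=True)
data_dir.mkdir(exist_ok=True)  # no-op if the mount directory already exists

# Archive the whole data/ tree (docsfy.db plus projects/) into a timestamped tarball
stamp = time.strftime("%Y%m%d-%H%M%S")
archive = shutil.make_archive(
    str(backup_dir / f"docsfy-data-{stamp}"),
    "gztar",
    root_dir=".",
    base_dir="data",
)
print(archive)
```

Restoring is the reverse: stop the container, extract the tarball over ./data, and start it again.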


Health Checks

Container-level health checks call the app endpoint:

HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

Compose defines the same check:

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
  interval: 30s
  timeout: 10s
  retries: 3

App endpoint implementation:

# Paths that do not require authentication
_PUBLIC_PATHS = frozenset({"/login", "/login/", "/health"})

@app.get("/health")
async def health() -> dict[str, str]:
    return {"status": "ok"}

Behavior is covered in tests:

async def test_health_is_public(unauthed_client: AsyncClient) -> None:
    """The /health endpoint should be accessible without authentication."""
    response = await unauthed_client.get("/health")
    assert response.status_code == 200
    assert response.json()["status"] == "ok"

Quick checks:

curl -f http://localhost:8000/health
docker compose ps

Warning: /health currently reports only application liveness ({"status":"ok"}); it does not validate external AI credentials or downstream service readiness.
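For scripted deployments it can help to poll /health until the app is up before running smoke tests. Below is a minimal readiness-wait sketch; since the real endpoint requires a running docsfy container, it is exercised here against a stub server that mimics the {"status": "ok"} response.

```python
import json
import threading
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def wait_for_health(url: str, timeout: float = 60.0, interval: float = 0.5) -> bool:
    """Poll a /health endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        time.sleep(interval)
    return False

# Stub standing in for the containerized app; in practice point wait_for_health
# at http://localhost:8000/health instead.
class _Health(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), _Health)
threading.Thread(target=server.serve_forever, daemon=True).start()
ok = wait_for_health(f"http://127.0.0.1:{server.server_port}/health", timeout=10)
server.shutdown()
print(ok)
```

Note that, per the warning above, a 200 from /health confirms liveness only, not that AI credentials are valid.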


CI/CD Status for Docker

No CI/CD workflow files are present in this repository (no .github/workflows, GitLab, CircleCI, Jenkins, or Buildkite pipeline definitions), so Docker image build/run behavior documented here is currently local/manual.