Model Discovery and Defaults

docsfy builds its model picker suggestions from real, successful generations instead of a hardcoded model list.
That keeps suggestions aligned with what has actually worked in your deployment.

How a model becomes “known”

A model is considered known only when a project variant is stored with:

  • non-empty ai_provider
  • non-empty ai_model
  • status = 'ready'

async def get_known_models() -> dict[str, list[str]]:
    """Get distinct ai_model values per ai_provider from completed projects."""
    async with aiosqlite.connect(DB_PATH) as db:
        cursor = await db.execute(
            """
            SELECT DISTINCT ai_provider, ai_model
            FROM projects
            WHERE ai_provider != '' AND ai_model != '' AND status = 'ready'
            ORDER BY ai_provider, ai_model
            """
        )
        rows = await cursor.fetchall()
        models: dict[str, list[str]] = {}
        for provider, model in rows:
            if provider not in models:
                models[provider] = []
            if model not in models[provider]:
                models[provider].append(model)
        return models

Warning: get_known_models() is instance-wide. It does not filter by owner, so the suggestion catalog is shared across users in the same docsfy instance.
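
If per-user isolation matters in your deployment, the same query can be scoped by owner. The following is a sketch only: it uses the stdlib sqlite3 module rather than aiosqlite so it stays self-contained, and it assumes the projects table has an owner column (the lifecycle code below passes owner to update_project_status, but the actual schema is not shown here):

```python
import sqlite3


def get_known_models_for_owner(db_path: str, owner: str) -> dict[str, list[str]]:
    """Sketch: the discovery query scoped to a single owner.

    Uses synchronous sqlite3 for illustration; the real code uses aiosqlite.
    Assumes an `owner` column exists on projects (hypothetical).
    """
    with sqlite3.connect(db_path) as db:
        rows = db.execute(
            "SELECT DISTINCT ai_provider, ai_model FROM projects "
            "WHERE ai_provider != '' AND ai_model != '' "
            "AND status = 'ready' AND owner = ? "
            "ORDER BY ai_provider, ai_model",
            (owner,),
        ).fetchall()
    models: dict[str, list[str]] = {}
    for provider, model in rows:
        models.setdefault(provider, []).append(model)
    return models
```

The only structural change versus the shared version is the extra `AND owner = ?` predicate and the parameter tuple.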

When discovery happens in the generation lifecycle

Discovery is not a separate job: when a generation completes, the variant is marked ready, and the next get_known_models() call picks it up:

if old_sha == commit_sha:
    await update_project_status(
        project_name,
        ai_provider,
        ai_model,
        status="ready",
        owner=owner,
        current_stage="up_to_date",
    )
    return
await update_project_status(
    project_name,
    ai_provider,
    ai_model,
    status="ready",
    owner=owner,
    current_stage=None,
    last_commit_sha=commit_sha,
    page_count=page_count,
    plan_json=json.dumps(plan),
)

Tip: If you want picker suggestions pre-populated for a provider/model pair, run one successful generation with that pair first.

Default provider/model behavior

Defaults come from settings (.env or environment), with built-in fallbacks:

class Settings(BaseSettings):
    ...
    ai_provider: str = "claude"
    ai_model: str = "claude-opus-4-6[1m]"
    ai_cli_timeout: int = Field(default=60, gt=0)

# .env.example
AI_PROVIDER=claude
AI_MODEL=claude-opus-4-6[1m]
AI_CLI_TIMEOUT=60

If a generation request omits provider/model, API defaults are applied:

settings = get_settings()
ai_provider = gen_request.ai_provider or settings.ai_provider
ai_model = gen_request.ai_model or settings.ai_model
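
The `or` fallback means both None and an empty string count as "omitted", which matters because form submissions often send empty strings rather than nulls. A quick illustration:

```python
default_model = "claude-opus-4-6[1m]"  # from settings

# None and "" both fall through to the default...
assert (None or default_model) == default_model
assert ("" or default_model) == default_model

# ...while any non-empty request value wins.
assert ("gemini-2.5-pro" or default_model) == "gemini-2.5-pro"
```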

How dashboard pickers are populated

The dashboard route injects both defaults and discovered models:

known_models = await get_known_models()
...
html = template.render(
    grouped_projects=grouped,
    projects=projects,
    default_provider=settings.ai_provider,
    default_model=settings.ai_model,
    known_models=known_models,
    role=request.state.role,
    username=request.state.username,
)

The template uses those values for:

  • the top-level Generate form
  • each variant’s Regenerate controls

<select id="gen-provider" class="form-select">
    <option value="claude"{% if default_provider == 'claude' %} selected{% endif %}>claude</option>
    <option value="gemini"{% if default_provider == 'gemini' %} selected{% endif %}>gemini</option>
    <option value="cursor"{% if default_provider == 'cursor' %} selected{% endif %}>cursor</option>
</select>

<input type="text" class="form-input" id="gen-model" value="{{ default_model }}" placeholder="Model name" autocomplete="off">
<div class="model-dropdown" id="model-dropdown">
    {% for provider, models in known_models.items() %}
    {% for model in models %}
    <div class="model-option" data-provider="{{ provider }}" data-value="{{ model }}">
        <span class="model-option-name">{{ model }}</span>
        <span class="model-option-provider">{{ provider }}</span>
    </div>
    {% endfor %}
    {% endfor %}
</div>

Picker UX rules in the browser

The client receives known_models as JSON and enforces provider-aware filtering:

var knownModels = {{ known_models | tojson }};

providerSelect.addEventListener('change', function() {
    if (_restoring) return;
    var newProvider = this.value;
    var modelsForProvider = knownModels[newProvider] || [];

    // If current model is not valid for the new provider, auto-fill
    if (modelInput) {
        var currentModel = modelInput.value;
        if (modelsForProvider.length > 0 && modelsForProvider.indexOf(currentModel) === -1) {
            modelInput.value = modelsForProvider[0];
            saveFormState();
        } else if (modelsForProvider.length === 0) {
            modelInput.value = '';
            modelInput.placeholder = 'Enter model name';
            saveFormState();
        }
    }

    filterModelOptions(modelDropdown, modelInput ? modelInput.value : '', newProvider);
});

The same provider-switch/autofill logic is also applied to per-variant regenerate controls.

Note: Picker suggestions are assistive, not a strict backend whitelist. Users can type a model manually; backend validation only requires a valid provider and non-empty model string.
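
As a sketch of that validation contract (the names below are hypothetical, not the actual docsfy code): the provider must come from a known set, while the model only needs to be a non-empty string:

```python
VALID_PROVIDERS = {"claude", "gemini", "cursor"}  # hypothetical constant


def validate_generation_request(ai_provider: str, ai_model: str) -> None:
    """Reject unknown providers and empty model strings; anything else passes.

    Sketch only: the real endpoint's error handling may differ.
    """
    if ai_provider not in VALID_PROVIDERS:
        raise ValueError(f"unknown provider: {ai_provider!r}")
    if not ai_model.strip():
        raise ValueError("model must be a non-empty string")


# A hand-typed model absent from the suggestion catalog is still accepted.
validate_generation_request("claude", "my-custom-model")
```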

Live model discovery updates in running dashboards

/api/status includes known_models on every poll response:

@app.get("/api/status")
async def status(request: Request) -> dict[str, Any]:
    ...
    known_models = await get_known_models()
    return {"projects": projects, "known_models": known_models}

The dashboard polling loop updates model dropdowns without full refresh:

if (data.known_models) {
    knownModels = data.known_models;
    rebuildModelDropdownOptions();
}

This means newly successful generations surface their provider/model pairs to active dashboard sessions without a page reload.

Validation and quality signals (tests + CI entry points)

Model discovery and defaults are covered by tests:

# tests/test_storage.py
models = await get_known_models()
assert "claude" in models
assert "opus-4-6" in models["claude"]
assert "sonnet-4-6" in models["claude"]
assert "gemini" in models
assert "gemini-2.5-pro" in models["gemini"]

# tests/test_config.py
assert settings.ai_provider == "claude"
assert settings.ai_model == "claude-opus-4-6[1m]"
assert settings.ai_cli_timeout == 60

Pipeline entry points in this repo are defined via tox and pre-commit:

# tox.toml
[env.unittests]
deps = ["uv"]
commands = [["uv", "run", "--extra", "dev", "pytest", "-n", "auto", "tests"]]

# .pre-commit-config.yaml (excerpt)
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    hooks:
      - id: ruff
      - id: ruff-format

Note: No .github/workflows pipeline is committed in this repository; CI systems should invoke tox and pre-commit hooks directly.