Ziggy
AI-Native Development

AI-native coding: the way the world's going

Ziggy is a codebase that understands how modern teams build with AI. Everything is designed to help AI tools like Claude Code work optimally.

Easy on the AI

Pattern Consistency: predictable code, reliable AI

Consistent patterns across the entire codebase make AI-assisted development reliable.

  • Repository pattern for data access — consistent query interfaces AI can follow
  • Swagger and OpenAPI — auto-generated API documentation, always available to the client
  • Shared DTOs across backend and frontend — one set of types for requests and responses
  • Explicit module boundaries — AI tools can reason about dependencies without guessing
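To make the repository-plus-shared-DTO idea concrete, here is a minimal sketch. The names (`UserDto`, `UserRepository`) and the in-memory storage are illustrative, not Ziggy's actual code; the point is the uniform query surface that an AI tool can follow from module to module.

```typescript
import { randomUUID } from "crypto";

// Shared DTO: the same type definition is imported by server and client
interface UserDto {
  id: string;
  email: string;
  displayName: string;
}

// Repository pattern: every entity exposes the same query surface,
// so data access looks identical across modules
interface Repository<T> {
  findById(id: string): Promise<T | null>;
  findAll(): Promise<T[]>;
  create(data: Omit<T, "id">): Promise<T>;
}

// In-memory implementation for illustration; a real one would wrap an ORM
class UserRepository implements Repository<UserDto> {
  private users = new Map<string, UserDto>();

  async findById(id: string): Promise<UserDto | null> {
    return this.users.get(id) ?? null;
  }
  async findAll(): Promise<UserDto[]> {
    return Array.from(this.users.values());
  }
  async create(data: Omit<UserDto, "id">): Promise<UserDto> {
    const user: UserDto = { id: randomUUID(), ...data };
    this.users.set(user.id, user);
    return user;
  }
}
```

Because every repository implements the same interface, an AI assistant that has seen one module can generate correct data-access code for the next one without guessing.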
Claude

Architectural Context: CLAUDE.md at every level

CLAUDE.md files at the project root, /app, /app/server, /app/client, /admin, and /devops give Claude Code full architectural context. Module boundaries, naming conventions, database schema, and security patterns are documented where AI tools can read them automatically.

  • Six CLAUDE.md files spanning the entire monorepo — root, app, server, client, admin, devops
  • Each file describes module boundaries, creation rules, and naming conventions for its scope
  • Architecture maps document the core/app module distinction and dependency rules
  • Server CLAUDE.md covers NestJS patterns, database rules, and testing conventions
  • Client CLAUDE.md covers React patterns, Zustand stores, and API proxy configuration
  • Claude Code reads these automatically — no manual context-setting per session
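For a sense of what these files contain, here is a hypothetical excerpt of a server-level CLAUDE.md. The section names and rules are illustrative, drawn from the conventions described above rather than copied from the real file.

```markdown
# CLAUDE.md — /app/server

## Module boundaries
- Core modules may not import from app modules
- App modules may depend on core modules, never on each other directly

## Naming conventions
- Services: `<entity>.service.ts`; repositories: `<entity>.repository.ts`
- DTOs live in the shared package and are imported by server and client

## Database rules
- All queries go through repositories; no raw SQL in services
```

Because Claude Code picks these files up automatically, the rules act as standing instructions rather than context that has to be re-explained each session.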
Skills

Custom Claude Code Skills: scan, fingerprint, generate

The platform ships with built-in Claude Code skills that understand the codebase structure. Scan modules, fingerprint architecture changes over time, and generate SOC audit documentation — all from the command line.

  • architecture-scan: fingerprints all server modules and infrastructure files into a versioned manifest, so Claude Code knows when things have changed
  • soc-template: generates SOC audit documentation for specific trust services criteria (CC1-CC9, A1, C1, PI1, Privacy)
  • guard-scan: evaluates all API endpoints and their role guards from a SOC security perspective
  • soc-system: generates documentation for system architecture, infrastructure, authentication, MFA, RBAC, encryption, and logging
  • soc-capabilities: scans the entire codebase and produces a detailed feature catalog for all SOC sections
AI Providers

Provider Integration: Anthropic and OpenAI built in

Integrated AI provider management for Anthropic (Claude) and OpenAI (GPT/o-series) — API keys encrypted at rest, model configurations with per-model pricing, and automatic parameter handling for different model families.

  • Anthropic (Claude models) and OpenAI (GPT/o-series) provider support
  • API keys stored encrypted in Settings table using AES-256-GCM with PII encryption key
  • Model configurations with per-model pricing constants for cost calculation
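Encrypting a secret at rest with AES-256-GCM can be sketched as below using Node's built-in `crypto` module. The function names, storage format, and key handling are illustrative; Ziggy's actual Settings-table layout and key management are not shown here.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Encrypt an API key with AES-256-GCM. The 32-byte key would come from
// the PII encryption key, not be generated inline as in this sketch.
function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // standard 96-bit GCM nonce, unique per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // authentication tag: tampering makes decryption fail
  // Store nonce, tag, and ciphertext together, e.g. as a dot-joined string
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(".");
}

function decryptSecret(stored: string, key: Buffer): string {
  const [iv, tag, ciphertext] = stored.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

GCM is a good fit for this use case because it authenticates as well as encrypts: a stored key that has been modified in the database will fail decryption instead of silently yielding garbage.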
Audit

Token Tracking for cost experimentation and audit

Every AI API call is logged with provider, model, token counts, calculated cost, duration, and entity context. Filter by provider, model, or date range to understand usage patterns and control spending.

  • Cost calculation per completion based on input/output token counts and model-specific pricing
  • Usage logged to AiLog table with provider, model, token counts, cost, duration, and entity context
  • Log filtering by provider, model, and date range with distinct value queries for filter dropdowns
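The cost calculation above can be sketched in a few lines. The model names and per-million-token prices in this table are placeholder examples, not Ziggy's actual pricing constants.

```typescript
// Illustrative per-model pricing (USD per million tokens) — example values only
const PRICING: Record<string, { inputPerM: number; outputPerM: number }> = {
  "claude-sonnet": { inputPerM: 3, outputPerM: 15 },
  "gpt-4o": { inputPerM: 2.5, outputPerM: 10 },
};

// Shape of one AiLog row, simplified for illustration
interface AiLogEntry {
  provider: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
  durationMs: number;
}

// Cost = input tokens x input rate + output tokens x output rate
function calculateCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICING[model];
  if (!p) throw new Error(`No pricing configured for model: ${model}`);
  return (inputTokens * p.inputPerM + outputTokens * p.outputPerM) / 1_000_000;
}
```

Keeping pricing as per-model constants means a completion's cost can be computed and stored at log time, so later filtering by provider, model, or date range sums real dollar figures rather than raw token counts.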

Want to see it in action? Get in touch for a demo or trial.