Architecture · AI · Kubernetes

Building an AI Website Builder at Seaional

July 2025 · 12 min read

The Vision: AI-Generated Websites

At Seaional, we wanted to build something ambitious: give users a description of the website they want, and have AI generate it, complete with content, images, and a functional codebase ready for deployment. Not a template picker with AI-generated text, but genuine end-to-end generation.

Why Microservices Made Sense

When you're building an AI system that needs to generate content, images, code, and deploy the result, you quickly realize these are fundamentally different workloads:

  • **Content generation** is CPU-bound and latency-sensitive
  • **Image generation** requires GPU access and can take 30+ seconds
  • **Code generation** needs isolated execution environments
  • **Deployment** involves interacting with cloud providers and DNS

Cramming all this into a monolith would be a nightmare. We ended up with six services:

  • **Orchestrator** manages the agentic workflow
  • **Content Service** handles text generation via Azure OpenAI
  • **Image Service** generates visuals
  • **Code Service** produces and validates website code
  • **Preview Service** serves live previews
  • **Deploy Service** handles final deployment

The Agentic Workflow Pattern

Traditional request-response doesn't work when generation can take minutes. Instead, we implemented an agentic workflow where the orchestrator acts as a coordinator, spawning tasks and waiting for results.

The key insight: treat each generation step as an independent job that can succeed, fail, or timeout independently. The orchestrator maintains state and decides what to do next based on intermediate results.

If content generation produces something off-topic, the orchestrator can retry with different prompting. If image generation fails, we can fall back to stock images. This resilience is only possible with independent services.
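The retry-and-fallback logic above can be sketched as a small helper. This is a minimal illustration of the pattern, not Seaional's actual orchestrator; the function names and signatures are hypothetical.

```python
from typing import Callable, Optional


def run_step(step: Callable[[], str],
             is_acceptable: Callable[[str], bool],
             fallback: Optional[Callable[[], str]] = None,
             max_retries: int = 2) -> str:
    """Run one generation step: retry on failure or bad output, then fall back."""
    for _ in range(max_retries + 1):
        try:
            result = step()
        except Exception:
            continue  # transient failure (timeout, service error): just retry
        if is_acceptable(result):
            return result
        # off-topic or invalid output: loop and retry (a real orchestrator
        # would adjust the prompt here before the next attempt)
    if fallback is not None:
        return fallback()  # e.g. stock images when image generation keeps failing
    raise RuntimeError("step failed after retries and no fallback was given")
```

Because each step is wrapped this way, a single flaky service degrades one part of the result instead of failing the whole generation.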

Kubernetes Job Pods for Isolation

Code generation is inherently risky. You're executing AI-generated code that might do anything: infinite loops, memory exhaustion, malicious behavior. We needed strong isolation.

Kubernetes Job pods turned out to be perfect:

  1. Spin up an ephemeral pod with strict resource limits
  2. Execute the generation in a sandboxed environment
  3. Capture logs and artifacts
  4. Terminate the pod regardless of outcome

The beauty is that a misbehaving generation can't affect other users. The pod gets killed, we log the failure, and the orchestrator can retry with different parameters.
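The lifecycle above maps directly onto a Kubernetes Job spec. Here is a sketch of what such a manifest might look like, built as a plain dict; the names, limits, and timeouts are illustrative assumptions, not our production values.

```python
def generation_job_manifest(job_id: str, image: str) -> dict:
    """Build a Kubernetes Job manifest for one sandboxed generation run."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": f"codegen-{job_id}"},
        "spec": {
            "backoffLimit": 0,             # no automatic retries; the orchestrator decides
            "activeDeadlineSeconds": 300,  # hard kill for infinite loops
            "ttlSecondsAfterFinished": 60, # pod is garbage-collected regardless of outcome
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "codegen",
                        "image": image,
                        "resources": {
                            # strict limits so a runaway generation can't
                            # starve other tenants on the node
                            "limits": {"cpu": "1", "memory": "512Mi"},
                        },
                    }],
                },
            },
        },
    }
```

In practice you would submit this via the Kubernetes API (for example with the official Python client's `BatchV1Api.create_namespaced_job`), then watch the Job status and collect logs before the TTL cleans the pod up.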

Integrating Azure OpenAI

We chose Azure OpenAI over the direct OpenAI API for enterprise compliance reasons: data residency, SLAs, and integration with our existing Azure infrastructure.

Three integration patterns worked well for us: streaming responses for real-time feedback, function calling for structured output, and conversation context for iterative refinement.
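The streaming pattern boils down to forwarding text fragments to the UI as they arrive. A simplified sketch: in the real SDK the chunks come from a call like `client.chat.completions.create(model=deployment, messages=..., stream=True)` and nest content under `chunk.choices[0].delta.content`; here we model them with flat stand-in dataclasses to show just the accumulation loop.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, Optional


# Stand-ins for the chunk objects a streaming chat-completions call yields.
@dataclass
class Delta:
    content: Optional[str]  # None for chunks that carry no text (e.g. role-only)


@dataclass
class Chunk:
    delta: Delta


def stream_text(chunks: Iterable[Chunk]) -> Iterator[str]:
    """Yield text fragments as they arrive, so the UI can render incrementally."""
    for chunk in chunks:
        if chunk.delta.content:
            yield chunk.delta.content
```

The caller can push each fragment over a websocket as it is yielded, and join them afterwards to store the full response for conversation context.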

One gotcha: Azure's rate limits are different from OpenAI's. We had to implement token bucket rate limiting at the service level to avoid hitting limits during peak usage.
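A token bucket like the one described can be sketched in a few lines. This is an illustrative single-process version with an injectable clock for testing; our production limiter is an assumption away from this, since it also has to account for Azure's tokens-per-minute quota, not just request counts.

```python
import time


class TokenBucket:
    """Refill at `rate` tokens/sec up to `capacity`; acquire before each API call."""

    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity       # start full
        self.clock = clock           # injectable for deterministic tests
        self.last = clock()

    def try_acquire(self, tokens: float = 1.0) -> bool:
        """Return True and spend tokens if available, else False (caller backs off)."""
        now = self.clock()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= tokens:
            self.tokens -= tokens
            return True
        return False
```

On a `False` result the service sleeps and retries rather than sending the request, which keeps us under Azure's limits instead of burning quota on 429 responses.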

Lessons for AI System Design

Building this system taught me that AI applications are fundamentally different from traditional software:

  1. **Expect variability.** The same input won't always produce the same output. Design for non-determinism.
  2. **Human-in-the-loop is valuable.** Full automation is a goal, not a starting point. Let users guide and correct.
  3. **Observability is critical.** You need to understand what the AI is doing, not just whether it succeeded.
  4. **Cost scales with usage.** AI inference costs don't amortize well. Monitor and optimize aggressively.