Shipset is for engineers who want to actually ship AI.
Most AI learning material stops at the tutorial. The hard part — turning a half-working notebook into a thing that survives real users — is where careers and paychecks happen. Shipset is the practice ground for that part.
What we believe
Real briefs, not toy datasets
Every challenge starts from a fictional but realistic product brief. You decide the architecture, the model choice, the eval strategy. We grade on whether it works, not on whether you used the textbook answer.
Submit something that runs
A submission means a public deployment URL plus a GitHub repo someone else can clone. No theoretical answers, no screenshots — the artefact has to exist.
Engineering quality counts
AI work that ships looks like backend work that ships: typed boundaries, observable failures, reasonable cost, sensible eval. Shipset rewards the boring parts that make systems durable.
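As a concrete illustration of what a "typed boundary" around model output can look like, here is a minimal Python sketch. The names (`Verdict`, `parse_verdict`) and the schema are hypothetical, not Shipset's actual code; the point is that raw LLM text gets validated into a typed value at the edge, and bad output fails loudly instead of leaking downstream.

```python
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class Verdict:
    """Typed value that the rest of the system is allowed to see."""
    label: str          # one of "pass" / "fail"
    confidence: float   # 0.0 to 1.0


ALLOWED_LABELS = {"pass", "fail"}


def parse_verdict(raw: str) -> Verdict:
    """Validate raw model output at the boundary.

    Raises ValueError (or json.JSONDecodeError / KeyError) with a clear
    message instead of silently passing malformed output along.
    """
    data = json.loads(raw)
    label = data["label"]
    confidence = float(data["confidence"])
    if label not in ALLOWED_LABELS:
        raise ValueError(f"unexpected label: {label!r}")
    if not 0.0 <= confidence <= 1.0:
        raise ValueError(f"confidence out of range: {confidence}")
    return Verdict(label=label, confidence=confidence)
```

Everything past this function can rely on a well-formed `Verdict`; every failure mode surfaces at one observable choke point, which is exactly the kind of boring durability the paragraph above is describing.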
Who is building this
Shipset is built by Julio Daal, a software engineer based in Germany. Background in full-stack product work; current focus on AI engineering tooling, infrastructure, and developer-experience platforms. You can reach me at julio@shipset.dev.
The platform runs on a small, EU-region stack — Vercel for the app, Supabase for the DB, Sentry (DE) for telemetry, a Hetzner box for the heavier services. The codebase is a single Next.js app + a Python service, both versioned and CI-tested. We'll write more about the architecture on the blog as it stabilises.
Get involved
Pre-launch right now. The fastest way to help is to sign up, try a challenge, and tell me what felt wrong. There's a feedback link in the footer of every page once you're signed in.