Agentic AI Coding with Claude Code

Welcome to Claude Code LabCamp. In this hands-on session, you will learn Claude Code by building a live trivia quiz app together, then testing it with the whole room. Plan for about two hours for the core labs and final project (including setup and breaks).

Documentation: storm-ai-reply.github.io/ClaudeCode-Labcamp

Repository: github.com/Storm-AI-Reply/ClaudeCode-Labcamp

```shell
git clone https://github.com/Storm-AI-Reply/ClaudeCode-Labcamp.git
```

How it works

Each participant works on their own laptop with personal, temporary AWS Bedrock credentials (provided by the organizers). You will go through four short labs individually, each teaching a Claude Code feature. Then you form teams of 5 for the final project: collaborate, split the work, and ship the most beautiful quiz in the room.
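The temporary Bedrock credentials are usually wired up through environment variables before launching Claude Code. A minimal sketch, assuming the standard AWS credential variables and Claude Code's Bedrock switch; the exact values (and any extra steps) come from the organizers' handout, which takes precedence:

```shell
# Assumed setup: standard AWS credential variables plus Claude Code's Bedrock flag.
# Replace the placeholders with the temporary credentials handed out at the session.
export AWS_ACCESS_KEY_ID="<provided-by-organizers>"
export AWS_SECRET_ACCESS_KEY="<provided-by-organizers>"
export AWS_REGION="<provided-by-organizers>"
export CLAUDE_CODE_USE_BEDROCK=1   # route Claude Code through AWS Bedrock
claude                             # start Claude Code from your project folder
```

Because the credentials are session-scoped, exporting them in your shell (rather than writing them to a config file) means they disappear when you close the terminal.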

Getting started

Labs at a glance

Each lab is split into separate pages (overview → exercises → wrap-up) so you can focus on one step at a time. Use the Labs tab and sidebar for navigation; All labs (map) lists everything on one screen.

| # | Lab | ~Time |
| --- | --- | --- |
| 1 | The Execution Model | 30 min |
| 2 | Project Configuration | 25 min |
| 3 | Control & Connect | 30 min |
| 4 | Scale & Reuse | 25 min |
| — | Final Project | 30 min |

Work through them in order. Each lab has its own starter code. If you break something, run `git checkout -- .` to reset.
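The reset command only discards uncommitted changes, so the committed starter code is always safe to fall back on. A self-contained sketch of what it does, run in a throwaway repo (the file name is invented for the demo):

```shell
# Safe demo in a throwaway repo: "break" a tracked file, then reset it.
cd "$(mktemp -d)"
git init -q
echo "starter code" > lab.txt
git add lab.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "starter"

echo "oops, broken" > lab.txt   # simulate breaking a lab file
git checkout -- .               # discard ALL uncommitted changes in this directory
cat lab.txt                     # back to "starter code"
```

On Git 2.23 and newer, `git restore .` does the same thing with a clearer name. Note that neither command touches untracked files.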

The competition

At the end, each team hosts its quiz. The room joins via QR code and plays a round. Best quiz wins.

| Criterion | Points |
| --- | --- |
| Works end-to-end (join, answer, leaderboard) | 25 |
| Claude Code setup (CLAUDE.md, commands, hooks, skills, MCP, subagents) | 25 |
| Quiz content (creative, entertaining, on-theme) | 25 |
| Beautiful UI (visual craft, polish, identity) | 25 |