Write the Test First and Tell Claude to Make It Pass — TDD with AI
Tests are the most precise way to tell Claude what you want. Instead of describing a feature in English and hoping Claude interprets it correctly, write a test that defines the exact behavior — then let Claude figure out the implementation.
> I just wrote a failing test in tests/Feature/DiscountTest.php.
> Read it, then write the code to make it pass. Run the test to verify.
Claude reads your test, understands the expected inputs, outputs, edge cases, and error conditions, then writes exactly the code needed to satisfy those assertions — nothing more, nothing less.
This works because tests are unambiguous specifications. Instead of "add a discount feature," your test says:
```php
it('applies a percentage discount to the order total', function () {
    $order = Order::factory()->create(['subtotal' => 10000]);
    $discount = new PercentageDiscount(15);

    $result = $discount->apply($order);

    expect($result->total)->toBe(8500);
    expect($result->discount_amount)->toBe(1500);
});
```
Claude now knows exactly what class to create, what method signature to use, what the return type looks like, and what the math should be. No ambiguity.
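For illustration, here is one plain-PHP sketch of what Claude might produce to satisfy those assertions. The `DiscountResult` class and the integer-cents convention are assumptions read off the test, not part of the original tip; the real implementation would live in your Laravel app.

```php
<?php
// Hypothetical implementation satisfying the DiscountTest above.
// DiscountResult is an illustrative name; only PercentageDiscount::apply()
// is implied by the test itself. Amounts are integer cents, matching
// the test's 10000 => 8500 expectation.

final class DiscountResult
{
    public function __construct(
        public readonly int $total,
        public readonly int $discount_amount,
    ) {}
}

final class PercentageDiscount
{
    public function __construct(private readonly int $percentage) {}

    public function apply(object $order): DiscountResult
    {
        // Integer math avoids float rounding surprises with money.
        $discountAmount = intdiv($order->subtotal * $this->percentage, 100);

        return new DiscountResult(
            total: $order->subtotal - $discountAmount,
            discount_amount: $discountAmount,
        );
    }
}
```

Note how small the surface area is: the test demanded one class, one method, two result fields, so that is all the sketch contains.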
The TDD loop with Claude looks like this:
- You write a failing test that describes the next piece of behavior
- Claude reads the test and writes the minimum code to make it pass
- Claude runs the test to verify it passes
- You write the next test — repeat
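The "next test" in that loop is where you pin down edge cases the first test left open. A hypothetical follow-up (the validation rule here is illustrative, not implied by the original test) might look like:

```php
it('rejects a percentage outside 0-100', function () {
    expect(fn () => new PercentageDiscount(150))
        ->toThrow(InvalidArgumentException::class);
});
```

Claude sees the new red test, adds the guard clause, reruns the suite, and you move on.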
This approach gives you several advantages:
- Precision — the test is the spec, no room for misinterpretation
- Built-in verification — Claude proves its implementation works before moving on
- No over-engineering — Claude writes only what the test requires
- Instant test coverage — every feature ships with tests already written
Write the test, hand Claude the red bar, and let it turn it green.
via Claude Code
Set up Claude Code as an automated reviewer in your CI pipeline — on every pull request, it reads the diff, checks for bugs, security issues, missing tests, and convention violations, then posts its findings as a PR comment. Your human reviewers get a head start because the obvious issues are already flagged before they look.
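As a rough sketch of what that CI step could look like, here is a GitHub Actions job that pipes the PR diff into Claude Code's non-interactive print mode and posts the result with the GitHub CLI. Treat this as a starting point, not a drop-in config: the exact CLI flags, secret names, and permissions will depend on your setup.

```yaml
# Hypothetical workflow: Claude Code as an automated PR reviewer.
# Assumes the `claude` CLI and `gh` are installed and authenticated.
name: claude-review
on: pull_request
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Review the diff with Claude
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GH_TOKEN: ${{ github.token }}
        run: |
          gh pr diff ${{ github.event.pull_request.number }} > pr.diff
          claude -p "Review this diff for bugs, security issues, missing \
            tests, and convention violations: $(cat pr.diff)" > review.md
          gh pr comment ${{ github.event.pull_request.number }} \
            --body-file review.md
```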
Before deploying, tell Claude to read your project — migrations, environment variables, queue workers, scheduled tasks, caching, third-party integrations — and generate a deployment checklist that's specific to your app. Not a generic "did you run migrations?" list, but one that knows YOUR infrastructure and catches the things YOUR deploy can break.
Instead of writing a README from memory or copying a template, tell Claude to read your project and generate one that's actually accurate — real setup instructions from your config, real architecture from your directory structure, real API examples from your routes, and real prerequisites from your dependency files.