// 42 tips tagged "testing"
Got a module with zero tests? Point Claude at the code and it reads the implementation, identifies every behavior — the happy path, edge cases, error conditions, and boundary values — then writes a comprehensive test suite that documents what the code actually does and catches regressions when anything changes.
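A sketch of the kind of suite that workflow produces — `parse_price` and its behavior are hypothetical stand-ins for your untested module, not from any real codebase:

```python
# Hypothetical helper that previously had zero tests.
def parse_price(text: str) -> float:
    """Parse a price string like '$1,234.50' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price string")
    value = float(cleaned)
    if value < 0:
        raise ValueError("price cannot be negative")
    return value

# Happy path
def test_parses_formatted_price():
    assert parse_price("$1,234.50") == 1234.50

# Edge case: no currency symbol
def test_parses_bare_number():
    assert parse_price("99") == 99.0

# Boundary value
def test_zero_is_valid():
    assert parse_price("$0") == 0.0

# Error conditions
def test_empty_string_raises():
    try:
        parse_price("   ")
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_negative_price_raises():
    try:
        parse_price("-5")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Each test name doubles as documentation: reading the suite tells you what the function promises.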
Tell Claude to read your models, migrations, and relationships, then generate factory definitions that produce realistic, interconnected test data — proper names, valid emails, sensible dates, correct foreign keys, and state variations that cover your actual business logic.
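In a real project this usually lands as factory_boy or FactoryBot definitions; here is a dependency-free sketch of the same idea, where `User`, `Order`, and their fields are hypothetical stand-ins for your models:

```python
from dataclasses import dataclass
import itertools

@dataclass
class User:
    id: int
    name: str
    email: str

@dataclass
class Order:
    id: int
    user_id: int   # foreign key to User
    status: str

_user_ids = itertools.count(1)
_order_ids = itertools.count(1)

def make_user(**overrides) -> User:
    """Realistic defaults, overridable per test."""
    n = next(_user_ids)
    fields = {"id": n, "name": f"Test User {n}", "email": f"user{n}@example.com"}
    fields.update(overrides)
    return User(**fields)

def make_order(user=None, **overrides) -> Order:
    """Builds the parent User automatically so foreign keys stay valid."""
    user = user or make_user()
    fields = {"id": next(_order_ids), "user_id": user.id, "status": "pending"}
    fields.update(overrides)
    return Order(**fields)
```

State variations become keyword overrides: `make_order(status="shipped")` covers that branch of your business logic without a new factory.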
Flaky tests are maddening — they pass locally, fail in CI, pass again when you retry. Tell Claude to read the test, identify the source of non-determinism — timing issues, shared state, date dependencies, or order-dependent setup — and fix the root cause so the test is reliably green or reliably red.
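One of the most common root causes is a hidden dependency on the real clock. A minimal sketch of the fix, assuming a hypothetical `is_expired` helper: make "now" an injectable parameter so the test can pin it.

```python
from datetime import datetime, timedelta

def is_expired(created_at, now=None):
    """Deterministic: 'now' is injectable; production callers omit it."""
    now = now or datetime.now()
    return created_at < now - timedelta(hours=1)

def test_expiry_is_deterministic():
    fixed_now = datetime(2024, 1, 1, 12, 0, 0)
    fresh = datetime(2024, 1, 1, 11, 30, 0)  # 30 minutes old
    stale = datetime(2024, 1, 1, 10, 0, 0)   # 2 hours old
    assert not is_expired(fresh, now=fixed_now)
    assert is_expired(stale, now=fixed_now)
```

Before the fix, the same test would have computed `created_at` relative to `datetime.now()` and flaked whenever CI ran slowly or crossed a time boundary; now it passes or fails identically everywhere.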
Flip the usual workflow: write a failing test that describes exactly what you want, then tell Claude to make it pass. The test becomes your specification — Claude reads it, understands the expected behavior, and writes the implementation that satisfies it. Test-driven development, powered by AI.
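A tiny illustration of the flow — the `slugify` target and its rules are hypothetical, chosen only to show a test acting as a spec:

```python
import re

# Step 1: the spec, written by you before any implementation exists.
def test_slugify_spec():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced   out  ") == "spaced-out"
    assert slugify("already-slugged") == "already-slugged"

# Step 2: the implementation Claude writes to make the spec pass.
def slugify(text: str) -> str:
    return "-".join(re.findall(r"[a-z0-9]+", text.lower()))
```

The assertions pin down punctuation handling, whitespace collapsing, and idempotence — details a prose prompt would likely leave ambiguous.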
When your app throws an error, don't just Google the message — paste the full stack trace into Claude Code. It reads the trace, opens the referenced files in your codebase, follows the call chain, and pinpoints the actual root cause instead of just explaining the symptom.
Tell Claude to read your routes file and write a smoke test that visits every endpoint — just checking that nothing returns a 500 error. It's the cheapest possible safety net: one test file that catches the most obvious breakage across your entire app.
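Sketched framework-agnostically below — `ROUTES` stands in for your framework's real route table (Flask's `app.url_map`, Rails' routes), and the handlers are hypothetical:

```python
# Stand-in route table; in a real app Claude would enumerate this
# from your routes file and hit each path with a test client.
ROUTES = {
    "/": lambda: (200, "home"),
    "/about": lambda: (200, "about us"),
    "/health": lambda: (200, "ok"),
}

def test_no_endpoint_returns_500():
    failures = []
    for path, handler in ROUTES.items():
        status, _body = handler()
        if status >= 500:
            failures.append((path, status))
    assert not failures, f"endpoints returning 5xx: {failures}"
```

Collecting failures into a list instead of asserting inside the loop means one run reports every broken endpoint, not just the first.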
Add "run the tests after each change" to your prompt and Claude shifts into an edit-test-fix loop — make a change, run the suite, fix any failures immediately, then move to the next change. Regressions get caught at the smallest possible scope instead of piling up.
Tell Claude to read your code and your existing tests, find what's not covered — untested edge cases, missing error paths, uncovered branches, and entire functions with no tests at all — then write the tests that close the gaps.
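A small example of what a closed gap looks like — `shipping_cost` and its lone pre-existing test are hypothetical:

```python
def shipping_cost(weight_kg, express=False):
    base = 5.0 + 1.2 * weight_kg
    if express:          # this branch had no test at all
        base *= 2
    return round(base, 2)

# What already existed: only the default path.
def test_standard_shipping():
    assert shipping_cost(10) == 17.0

# The gap-closing test for the uncovered branch.
def test_express_branch():
    assert shipping_cost(10, express=True) == 34.0
```

Branch gaps like this are invisible in line-level coverage summaries, which is exactly why reading the code beats eyeballing a percentage.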
Tell Claude to read your models and database schema, then generate factory definitions and seed data that actually make sense — proper relationships, realistic values, domain-appropriate formats, and edge cases, not random strings and lorem ipsum.
Tell Claude what the code should do before it exists — it writes the failing test first, then implements the minimum code to make it pass, then refactors. True test-driven development where the test defines the contract and the implementation follows.
Tell Claude to read how your code calls external APIs, then generate mock responses that match the real shape — so you can develop and test without hitting rate limits, requiring API keys, or depending on services being online.
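A sketch using only the standard library's `unittest.mock` — the `fetch_weather` client, its URL, and the response shape are hypothetical stand-ins for your real integration:

```python
import io
import json
from urllib import request
from unittest import mock

def fetch_weather(city: str) -> dict:
    """Hypothetical client that normally hits a live HTTP API."""
    with request.urlopen(f"https://api.example.com/weather?q={city}") as resp:
        return json.load(resp)

def test_fetch_weather_offline():
    # Canned response matching the real API's shape:
    # no network, no rate limits, no API key.
    fake = io.BytesIO(json.dumps({"city": "Oslo", "temp_c": -3}).encode())
    with mock.patch("urllib.request.urlopen", return_value=fake):
        data = fetch_weather("Oslo")
    assert data == {"city": "Oslo", "temp_c": -3}
```

The key discipline is that the canned payload mirrors the real response's field names and types — Claude derives that shape from how your code consumes the result.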
Instead of describing what went wrong, paste the raw test output into Claude Code — it reads assertion errors, expected vs actual values, and stack traces, then navigates to the source and fixes the issue in one go.