Ask Claude to Generate Realistic Factory Definitions and Seed Data for Your Models
Good test data is hard to write by hand. You need realistic values, correct relationships between models, and state variations that exercise your business logic. Claude reads your actual schema and generates factories that produce data worth testing against.
> read the models in app/Models/ and their migrations, then create
> factory definitions for each one with realistic fake data and
> proper relationships
Claude checks your columns, types, constraints, and foreign keys, then generates factories that produce coherent data — not just random strings, but values that make sense together.
A factory Claude generates might look like:
// database/factories/OrderFactory.php — what Claude produces after
// reading the Order model and its migration
class OrderFactory extends Factory
{
    public function definition(): array
    {
        return [
            'user_id' => User::factory(),
            'status' => fake()->randomElement(['pending', 'processing', 'shipped']),
            'subtotal' => fake()->numberBetween(1000, 50000),
            'tax' => fn (array $attrs) => (int) ($attrs['subtotal'] * 0.2),
            'total' => fn (array $attrs) => $attrs['subtotal'] + $attrs['tax'],
            'shipped_at' => fn (array $attrs) => $attrs['status'] === 'shipped'
                ? fake()->dateTimeBetween('-30 days')
                : null,
        ];
    }
}
Notice how Claude makes the data internally consistent — tax is calculated from subtotal, total adds up correctly, and shipped_at is only set when the status is shipped.
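That consistency pays off in tests: assertions can rely on the derived fields instead of recomputing them. A minimal sketch, assuming an Order factory like the one above and a standard PHPUnit feature test (the class and field names are illustrative):

```php
// tests/Feature/OrderTotalsTest.php — sketch assuming the factory above
public function test_totals_are_internally_consistent(): void
{
    // Overrides passed to create() are merged before the closures
    // in the factory definition are resolved
    $order = Order::factory()->create(['subtotal' => 10000]);

    $this->assertSame(2000, $order->tax);    // 20% of subtotal
    $this->assertSame(12000, $order->total); // subtotal + tax
}
```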
You can also ask for specific scenarios:
> create a database seeder that generates a realistic demo environment —
> 50 users with varying activity levels, orders across different statuses,
> some with reviews, some with support tickets
> add factory states for common test scenarios — a "vip" user with
> 100+ orders, a "new" user with no activity, and a "suspended" user
> with a locked account
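The states Claude generates for a prompt like that might look something like this — a sketch assuming a conventional UserFactory, with the state names and a `suspended_at` column taken from the prompt rather than from any real schema:

```php
// database/factories/UserFactory.php — hypothetical state methods
public function vip(): static
{
    // Heavy activity is attached after the user row exists,
    // so the orders' foreign keys resolve
    return $this->afterCreating(function (User $user) {
        Order::factory()->count(100)->for($user)->create();
    });
}

public function suspended(): static
{
    return $this->state(fn (array $attrs) => [
        'suspended_at' => now(),
    ]);
}
```

In a test you would then write `User::factory()->vip()->create()` instead of hand-building a hundred orders.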
Claude generates the seeder with proper creation order — users first, then orders referencing those users, then reviews referencing both — so foreign keys are always valid.
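A sketch of that ordering, assuming the models mentioned above (counts and the Review model's columns are illustrative, not taken from a real schema):

```php
// database/seeders/DemoSeeder.php — parents before children,
// so every foreign key points at an existing row
class DemoSeeder extends Seeder
{
    public function run(): void
    {
        // 1. Users first — everything else references them
        $users = User::factory()->count(50)->create();

        foreach ($users as $user) {
            // 2. Orders next, tied to an existing user
            $orders = Order::factory()
                ->count(fake()->numberBetween(0, 8))
                ->for($user)
                ->create();

            // 3. Reviews last — they reference both a user and an order
            foreach ($orders->take(2) as $order) {
                Review::factory()->for($user)->for($order)->create();
            }
        }
    }
}
```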
Realistic test data catches bugs that random data misses — let Claude generate factories that produce data your app would actually see in production.
via Claude Code