Why Our 6-Week Estimate Took 3 Days

Last week we finished a major feature: full team collaboration for Ginko. User invites, permission management, real-time visibility into who's working on what, member-filtered insights, and per-seat Stripe billing.

Using traditional scrum estimation, we'd scoped it at 6-7 weeks. Four sprints, roughly two weeks each. That's what a small team of 2-3 developers would need—accounting for standups, sprint planning, code review cycles, QA, and the general overhead of coordinating humans.

We shipped it in 3 days.

This isn't a story about working nights and weekends or cutting corners. It's about what happens when Human+AI collaboration eliminates the overhead that traditional estimation treats as constants.

The Math That Doesn't Add Up

A 15x acceleration sounds impossible until you break down where development time actually goes.

Here's how a typical scrum team's week breaks down:

| Activity | Hours | % of Week |
| --- | --- | --- |
| Actual coding | 16-20 | 40-50% |
| Meetings (standups, planning, retros, ad-hoc syncs) | 8-12 | 20-30% |
| Code review (giving and receiving) | 4-6 | 10-15% |
| Context switching and interruptions | 4-6 | 10-15% |
| Documentation and admin | 2-4 | 5-10% |

Only 40-50% of a developer's time is actually writing code. The rest is coordination overhead—necessary overhead when multiple humans need to stay aligned, share knowledge, and maintain quality.

Now here's what a Human+AI collaboration day looks like:

| Activity | Hours | % of Day |
| --- | --- | --- |
| Active development | 6-8 | 75-100% |
| Planning and clarification | 0.5-1 | 6-12% |
| Testing and validation | 0.5-1 | 6-12% |

The overhead largely disappears. Not because we're skipping important work, but because the work happens differently.
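The ratio is easy to sanity-check. A minimal sketch using the midpoints of the ranges in the two tables above (the specific numbers are illustrative):

```typescript
// Rough check using midpoints of the ranges in the tables above.
const scrumWeek = { totalHours: 40, codingHours: 18 }; // coding midpoint of 16-20
const aiDay = { totalHours: 8, devHours: 7 };          // development midpoint of 6-8

const scrumShare = scrumWeek.codingHours / scrumWeek.totalHours; // 0.45
const aiShare = aiDay.devHours / aiDay.totalHours;               // 0.875

// Eliminating coordination overhead roughly doubles the productive share:
console.log((aiShare / scrumShare).toFixed(1)); // "1.9"
```

Overhead elimination alone buys roughly 2x. The rest of the gap comes from latency effects, covered next.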

Where the Time Goes (And Doesn't)

Traditional teams need coordination ceremonies because of how humans work:

Distributed knowledge. No single person holds the full context of a codebase. Meetings exist to synchronize understanding across brains.

Asynchronous communication. Code review happens in PR comments over hours or days. Decisions require scheduling discussions.

Context switching. Interrupting a developer costs 20-30 minutes of context recovery. Add that up across a day of Slack pings and meetings.

Ramp-up time. New subsystems require archaeology. Reading code, understanding patterns, building mental models—often 1-2 days before you can make changes confidently.

Human+AI collaboration changes the equation:

Unified context. The AI has instant access to the entire codebase. No synchronization needed—it can read every file in seconds.

Real-time decisions. Instead of scheduling a meeting to discuss architecture, the conversation happens inline as we work. Decision → implementation → test, all in one flow.

No context loss. The conversation preserves context. When we pause and resume, the AI remembers where we were. No re-explanation needed.

Instant comprehension. Moving to a new subsystem doesn't require archaeology. The AI reads the code, understands the patterns, and we're productive in minutes.

We're Not Doing Scrum Faster

This is the key insight: Human+AI collaboration isn't an acceleration of traditional development. It's a different mode of working that happens to produce the same artifacts.

Traditional project management assumes:

- Knowledge lives in many heads and must be synchronized in meetings
- Decisions travel asynchronously, over hours or days
- Context switching and ramp-up are fixed costs of doing business

Human+AI collaboration assumes:

- Context is unified and instantly accessible
- Decisions happen inline, in seconds, as part of the work itself
- Context persists across pauses, so switching and ramp-up costs collapse
When you eliminate 30-50% overhead and compress decision latency from days to seconds, a 15x acceleration becomes arithmetically reasonable, not magical.
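To make "arithmetically reasonable" concrete, here is one way the factors could multiply out. The individual values below are assumptions chosen to illustrate the shape of the math, not measured data:

```typescript
// One illustrative decomposition of ~15x. All three factor values are
// assumptions for the sketch, not measurements.
const overheadElimination = 2.0; // productive share of the day: ~45% -> ~90%
const decisionLatency = 3.0;     // inline decisions instead of scheduled syncs
const rampUpSpeed = 2.5;         // minutes, not days, to load a subsystem

console.log(overheadElimination * decisionLatency * rampUpSpeed); // 15
```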

What This Means for Estimation

If you're working with AI collaborators, traditional story points calibrated to team velocity are meaningless. A "5-point story" that takes a scrum team a week might take an afternoon.

Here's the framework we're now using:

Estimate Complexity, Not Time

| Complexity | Traditional Team | Human+AI |
| --- | --- | --- |
| Trivial | < 1 day | < 1 hour |
| Low | 1-3 days | 1-4 hours |
| Medium | 1-2 weeks | 4-8 hours |
| High | 2-4 weeks | 1-2 days |
| Very High | 1-2 months | 3-5 days |

Time falls out of complexity, not the other way around.
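In code, the framework reduces to a lookup keyed on complexity. A minimal sketch of the Human+AI column (the type and function names are ours, not part of any library; day rows assume an 8-hour day):

```typescript
type Complexity = "trivial" | "low" | "medium" | "high" | "very-high";

// Human+AI ranges from the table above, converted to hours.
const humanAiHours: Record<Complexity, [min: number, max: number]> = {
  trivial: [0, 1],
  low: [1, 4],
  medium: [4, 8],
  high: [8, 16],         // 1-2 days
  "very-high": [24, 40], // 3-5 days
};

function estimate(c: Complexity): string {
  const [min, max] = humanAiHours[c];
  return `${min}-${max} hours`;
}
```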

Track Your Acceleration Factor

After each project, record:

- the traditional estimate you would have given
- the actual delivery time
- the implied acceleration factor
- the complexity level and work mode
Over time, you'll build a reliable conversion factor. Ours is hovering around 15x, but it varies by project type.
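The factor itself is a single division. A sketch (field names are ours, mirroring the frontmatter we track):

```typescript
// One record per shipped project.
interface VelocityRecord {
  traditionalEstimateDays: number; // midpoint of the scrum-style estimate
  actualDeliveryDays: number;
  complexity: string;
}

const factor = (r: VelocityRecord) =>
  r.traditionalEstimateDays / r.actualDeliveryDays;

// e.g. a 45-working-day traditional estimate delivered in 3 days:
console.log(factor({
  traditionalEstimateDays: 45,
  actualDeliveryDays: 3,
  complexity: "high",
})); // 15
```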

Apply Adjustment Factors

Not everything accelerates equally:

| Factor | Multiplier | Why |
| --- | --- | --- |
| Greenfield (no existing patterns) | 0.5x | More exploration, less reuse |
| Waiting on third parties | 0.3-0.5x | Can't accelerate external dependencies |
| Legacy integration | 0.6x | Archaeology and compatibility constraints |
| Well-documented codebase | 1.2x | Faster context loading |
| Strong test coverage | 1.3x | Confidence to move fast |

Our team collaboration epic built on existing Stripe integration, Supabase auth, and Neo4j patterns. That foundation contributed to the acceleration—a true greenfield project might see 8-10x instead of 15x.
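Applied to the baseline factor, the adjustments compose multiplicatively. A sketch (the key names are ours; the 0.3-0.5x range uses its midpoint):

```typescript
// Multipliers from the table above; > 1 speeds you up, < 1 slows you down.
const adjustments: Record<string, number> = {
  greenfield: 0.5,
  thirdPartyWait: 0.4,
  legacyIntegration: 0.6,
  wellDocumented: 1.2,
  strongTests: 1.3,
};

// Adjusted factor = baseline x product of whichever multipliers apply.
function adjusted(base: number, applicable: string[]): number {
  return applicable.reduce((f, key) => f * (adjustments[key] ?? 1), base);
}

// A 15x baseline on a greenfield project with strong test coverage:
console.log(adjusted(15, ["greenfield", "strongTests"]).toFixed(1)); // "9.8"
```

A 15x baseline with the greenfield penalty and a strong-tests bonus lands at ~9.8x, consistent with the 8-10x range above.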

Keep Both Numbers

Stakeholders still think in traditional timelines. When asked "how long will this take?", provide both:

"A traditional team would estimate 6-8 weeks for this feature. With our Human+AI workflow, we expect 3-5 days of active development, assuming no external blockers."

The dual-track approach lets you plan realistically while communicating in terms stakeholders understand.
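If you template the answer, the dual track becomes a habit rather than an afterthought. A trivial sketch:

```typescript
// Emit both numbers so planning and stakeholder communication stay in sync.
function dualEstimate(traditional: string, humanAi: string): string {
  return (
    `A traditional team would estimate ${traditional} for this feature. ` +
    `With our Human+AI workflow, we expect ${humanAi} of active development, ` +
    `assuming no external blockers.`
  );
}

console.log(dualEstimate("6-8 weeks", "3-5 days"));
```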

The Uncomfortable Questions

This acceleration raises questions that don't have clean answers yet:

What happens to team collaboration? If one human+AI pair can outpace a small team, what's the role of the team? We're still figuring this out, but early signals suggest teams shift from implementation collaboration to strategic alignment and review.

How do you price this? If you can deliver six weeks of work in three days, do you charge for the days or the value? The consulting industry hasn't caught up to this reality.

What about knowledge distribution? A team of five has natural redundancy—multiple people understand each subsystem. A human+AI pair is a single point of failure. We mitigate this with aggressive session logging and knowledge capture, but it's a real concern.

Is this sustainable? We delivered a major feature in three days. Can we do that repeatedly without burning out? The intensity of Human+AI work is different from traditional development—more focused, less context-switching—but the jury's still out on long-term sustainability.

What We're Doing About It

We've started tracking velocity metrics on every project:

```yaml
# In our epic frontmatter
traditional_estimate: 6-7 weeks
actual_delivery: 3 days
acceleration_factor: 15
complexity: high
work_mode: think-and-build
```

After 10+ projects, we'll have enough data to build a predictive model. The goal is reliable estimation that accounts for Human+AI capabilities—not as a replacement for traditional metrics, but alongside them.
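Once a handful of those records exist, the conversion factor is a small aggregation. A sketch, assuming the YAML has already been parsed into objects (the interface is ours):

```typescript
interface EpicRecord {
  accelerationFactor: number;
  complexity: string;
}

// Median factor per complexity bucket; the median resists skew from a
// single outlier project better than a mean.
function conversionFactors(records: EpicRecord[]): Map<string, number> {
  const buckets = new Map<string, number[]>();
  for (const r of records) {
    const arr = buckets.get(r.complexity) ?? [];
    arr.push(r.accelerationFactor);
    buckets.set(r.complexity, arr);
  }
  const result = new Map<string, number>();
  for (const [complexity, factors] of buckets) {
    const sorted = [...factors].sort((a, b) => a - b);
    const mid = Math.floor(sorted.length / 2);
    result.set(
      complexity,
      sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2,
    );
  }
  return result;
}
```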

We've also formalized this in ADR-057, our internal architecture decision record. It's public if you want to adapt the framework for your own projects.

The Takeaway

If you're estimating software projects the same way you did before AI tools, you're likely either:

  1. Over-promising (assuming AI magic without accounting for its limitations)
  2. Under-promising (using traditional estimates when you could move much faster)

Neither is good for planning.

The fix isn't to abandon traditional estimation—it's to recognize that Human+AI collaboration is a fundamentally different mode of development with its own velocity characteristics. Track both. Learn the conversion factors. Plan accordingly.

Six weeks to three days isn't magic. It's what happens when you stop treating coordination overhead as a constant.


Ginko helps Human+AI teams work faster by making context persistent and knowledge shareable. If you're seeing similar acceleration factors—or want to—we'd love to hear about it. Get in touch.

Stop losing AI context. Start shipping faster.

Ginko keeps your development context in git. Resume any AI session in 30 seconds.

```bash
$ npm install -g @ginkoai/cli
```