© 2024 Felix Ng

AI Stopped Being a Tool. It Became My Co-Worker.
February 25, 2026 · Journal · 6 min read


There was a moment — I can pinpoint it exactly — when AI stopped being a tool I used and started being a colleague I worked with.

I was building a content automation pipeline. The kind of project that involves API integration, database schema design, image generation, and multi-platform deployment. In previous years, this would have been a two-week sprint with at least one existential crisis about scope.

Instead, I described what I wanted. The AI planned the architecture. Built the API endpoints. Generated cover images. Drafted content. Deployed to production. And when the first deployment failed because of a slug collision in the database, it diagnosed the problem, added a PATCH handler to the API, fixed the existing records, and updated the workflow documentation to prevent the issue from recurring.
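That slug fix is a good example of the kind of execution that gets delegated. A minimal sketch of the deduplication logic at the heart of it, assuming slugs are plain unique strings (the function name and approach are my own illustration, not the pipeline's actual code):

```python
def unique_slug(desired: str, existing: set[str]) -> str:
    """Return `desired` if it's free; otherwise append -2, -3, ...
    until the slug no longer collides with an existing one."""
    if desired not in existing:
        return desired
    n = 2
    while f"{desired}-{n}" in existing:
        n += 1
    return f"{desired}-{n}"
```

A PATCH handler's job then reduces to running the colliding records through a function like this before writing them back.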

I didn't write a single line of code for any of that. I made decisions. I reviewed outputs. I approved directions. But the execution? That was my AI co-worker.

And that shift — from executing to directing — changed everything about how I think about building software.

The Trust Curve

You don't go from "I'll do it myself" to "I trust you completely" overnight. There's a curve, and it's steeper than you'd expect.

Phase 1: Suspicion. You check every line. You rewrite half the output. You wonder if it's actually saving time or just creating different work. This phase lasts about two weeks.

Phase 2: Calibration. You learn what the AI does well (boilerplate, patterns, multi-file refactoring) and what it doesn't (novel algorithms, subtle business logic, anything requiring human judgment about user experience). This phase lasts about a month.

Phase 3: Delegation. You start giving entire tasks, not just prompts. "Set up authentication for this API" instead of "write a JWT validation function." You trust the architecture decisions and focus your review on the things that matter — security, edge cases, user-facing behavior. This phase is ongoing.

Phase 4: Partnership. You start thinking in terms of "we." Not "I need to build this" but "we need to build this." You plan projects differently because your capacity has fundamentally changed. This is where I am now.

Each phase requires letting go of something. Suspicion requires letting go of ego. Calibration requires letting go of perfectionism. Delegation requires letting go of control. Partnership requires letting go of the idea that you need to understand every line of code in your system.

That last one is the hardest for engineers. And the most important.

The Delegation Framework

Not everything should be delegated to AI. I've learned this the hard way. Here's the framework I've settled on:

Delegate freely:

  • Boilerplate and scaffolding
  • Standard CRUD operations
  • File structure and project setup
  • Documentation and comments
  • Test generation for existing code
  • Multi-file refactoring
  • Content drafting and formatting

Delegate with review:

  • API design and architecture decisions
  • Database schema changes
  • Authentication and authorization logic
  • Error handling strategies
  • Performance-sensitive code
  • Third-party integrations

Keep for yourself:

  • Product decisions (what to build)
  • User experience judgment calls
  • Business logic that encodes company knowledge
  • Security architecture at the system level
  • Anything involving money, identity, or privacy
  • The "why" behind every decision

The pattern is simple: delegate execution, own judgment. The AI is excellent at "how." It's unreliable at "should we."

How Collaboration Actually Works

My daily workflow looks nothing like it did a year ago. Here's a realistic snapshot:

Morning: I review what the AI produced overnight or in the previous session. Not line-by-line code review — high-level assessment. Does the approach make sense? Did it miss any edge cases? Is the user experience right?

Mid-day: I work on the judgment-intensive parts. Product decisions, user feedback analysis, architectural trade-offs. These are the things where human context — knowing the user, understanding the business, feeling the friction — matters more than technical capability.

Afternoon: I set up the next batch of work. Clear briefs, specific constraints, examples of what "good" looks like. The quality of AI output is directly proportional to the quality of the input. Vague requests get mediocre code. Precise requests get production-ready code.
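For illustration, the shape of a brief that tends to get production-ready output looks something like this (the task, paths, and constraints here are hypothetical, not from a real project):

```
Task: Add a draft-preview mode to the admin panel.
Constraints:
  - Reuse the existing post renderer; no new templates.
  - Preview URLs must require an authenticated session.
  - No database schema changes.
Done looks like: an editor can open a preview URL for any draft
and see the post exactly as it will render when published.
```

The difference from "add a preview feature" is that every assumption the AI would otherwise have to guess at is pinned down in advance.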

The ratio has settled at roughly 20/80: I spend about 20% of my time actively creating and 80% directing, reviewing, and deciding. This sounds like management. And honestly? It is. I'm managing an AI engineering team.

What Changes When AI Is a Co-Worker

The most unexpected consequence isn't productivity. It's ambition.

When your execution capacity doubles — or triples — you stop asking "can I build this?" and start asking "should I build this?" The constraint isn't capability anymore. It's judgment.

Projects I would never have attempted as a solo developer become feasible:

  • A full content pipeline with 6-platform social media distribution
  • A personal website with an admin panel, draft system, and automation API
  • Daily content production that would normally require a 3-person team

These aren't toy projects. They're real systems serving real users. And they were built in weeks, not months.

But here's the counterpoint: the same capacity increase means you can build yourself into corners faster. More code means more maintenance. More systems mean more integration points. More automation means more failure modes.

The AI co-worker doesn't come with free maintenance. It comes with a management overhead that scales with output.

The Hard Truth About AI Co-Workers

AI co-workers still need management. And I don't mean prompt engineering — I mean actual management practices.

Clear expectations. If you don't specify what "done" looks like, the AI will make assumptions. Some will be correct. Some will be expensive.

Regular check-ins. You can't delegate a week-long task and check back on Friday. Review in small increments. Course-correct early.

Performance feedback. When the AI gets something wrong, analyze why. Was the instruction unclear? Was the context missing? Was it the wrong tool for the task? Each failure improves the next delegation.

Know when to take over. Not every problem is worth delegating. Sometimes the fastest path is to just write the code yourself. Recognizing when delegation costs more than execution is a management skill, not a technical one.

What This Means Going Forward

The "AI as tool" era is ending. The "AI as co-worker" era is beginning. And the skills that matter are shifting accordingly.

The engineers who thrive won't be the ones who write the best code. They'll be the ones who direct the best outcomes — who know what to build, how to evaluate quality, when to trust the output, and when to override it.

We spent decades learning to think like computers. Now we're learning to manage them.

And honestly? It's a better job. The grunt work is handled. The tedious parts are automated. What's left is the interesting stuff — the decisions, the design, the human judgment that no model can replicate.

I didn't expect to enjoy managing an AI team. But here I am, and I'm not going back.