
March 4, 2026 · Journal · 5 min read

When AI Left the Screen

I started 2026 thinking about AI the way most software engineers do — as a code problem. Better prompts. Smarter agents. Faster inference. The context window as the boundary of everything interesting.

Then Samsung announced they're converting every factory they own into an AI-driven operation by 2030. Deloitte opened a Physical AI Center in Shanghai. NVIDIA rebranded GTC from a GPU conference to an industrial AI conference. And somewhere between reading those announcements and writing this journal entry, my mental model of AI shifted permanently.

AI isn't just leaving the IDE. It's leaving the screen entirely.

The Moment It Clicked

I was reading the details of Samsung's AI-Driven Factory 2030 strategy when something clicked. They're deploying digital twin simulations of their entire manufacturing process — every conveyor belt, every quality checkpoint, every material flow. Then AI agents run on top of those simulations, making decisions about production scheduling, defect detection, and logistics in real time.

This isn't a chatbot. This isn't a copilot. This is AI as the operating system for physical reality.

And the thought that hit me was: have I been thinking too small?

As a builder who spends most of my time in the terminal, I find it easy to believe that the frontier of AI is better language models, smarter coding assistants, more capable reasoning systems. And those things matter. But the economic center of gravity is shifting toward physical AI — AI that doesn't just process information but coordinates physical systems.

What I'm Paying Attention To Now

Three things are reshaping how I think about where AI is going:

The physical AI stack is standardizing. Digital twin → AI agents → edge compute → robotics → feedback loop. This isn't theoretical anymore. Samsung and the Deloitte-NVIDIA partnership are deploying this stack at scale, and it's becoming as standardized as the web stack was in 2010. (There's a toy sketch of this loop after these three points.)

Edge computing is the new hard problem. Running AI in the cloud is essentially solved. Running AI on a robot arm in a factory with 5ms latency requirements, power constraints, and safety certification — that's where the unsolved engineering is. Qualcomm, AMD, and NVIDIA are all shipping specialized edge AI chips. The hardware is arriving. The software ecosystem is lagging.

"AI engineer" is splitting into subspecialties. Just like "web developer" eventually split into frontend, backend, DevOps, and SRE, "AI engineer" is splitting into LLM application developer, ML engineer, and now physical AI engineer. The skills required are different — simulation engineering, robotics programming, sensor fusion, safety systems. If I were starting my career today, I'd be looking hard at this intersection.

The Anxiety Part

I won't pretend this doesn't create some anxiety. When Samsung talks about humanoid robots on production lines and autonomous quality control agents, the subtext is clear: many jobs that currently exist in manufacturing won't exist in the same form by 2030.

But I've learned to be skeptical of both the utopian and dystopian narratives around AI and employment. The historical pattern is that automation doesn't eliminate work — it transforms it. The factory worker of 2030 won't be assembling components by hand. They'll be supervising AI agents, maintaining robotic systems, and managing digital twins. Different skills, different education, different career paths.

The transition period is the hard part. And we're entering the transition period right now.

What This Means for My Own Work

For me personally, this shift is changing what I build and what I study. I'm spending more time understanding simulation engineering — not because I'm planning to build factory AI, but because the patterns are transferable. Digital twins, edge deployment, real-time feedback loops — these patterns appear everywhere from manufacturing to healthcare to smart cities.

I'm also rethinking what "full-stack" means. In 2024, full-stack meant React + Node + PostgreSQL. In 2025, it expanded to include LLM integration and agent orchestration. In 2026, the most valuable engineers will understand how AI systems interact with physical systems — sensors, actuators, real-time constraints, safety requirements.

The terminal isn't going away. But the most impactful AI engineering is increasingly happening at the boundary between software and the physical world.

Looking Forward

A year from now, I expect physical AI to be as mainstream in tech discourse as agentic AI is today. The investments are too large and the deployments too significant to ignore.

For builders in the AI space, my suggestion is simple: start experimenting with simulation tools. Download NVIDIA Omniverse. Play with Isaac Sim. Build a digital twin of something simple — your desk, your office, a basic manufacturing process. The point isn't to become a robotics engineer overnight. It's to build intuition for how AI works when the output isn't text on a screen but actions in the physical world.
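
If Omniverse feels like too big a first step, a digital twin can start as a few dozen lines of plain Python. Below is a deliberately crude sketch, with made-up constants and simplified physics, of a "room twin" you could use to test a control policy before it ever touches hardware. It doesn't use any Omniverse or Isaac Sim APIs; the point is the shape of the exercise, not fidelity.

```python
import random

# Toy digital twin of a desk-sized system: a room whose temperature we
# simulate so a control policy can be tested entirely in software.
# All constants are illustrative assumptions.


class RoomTwin:
    """Simulated room: temperature drifts toward ambient, heater adds heat."""

    def __init__(self, temp_c: float = 18.0, ambient_c: float = 15.0) -> None:
        self.temp_c = temp_c
        self.ambient_c = ambient_c

    def step(self, heater_on: bool, dt_minutes: float = 1.0) -> float:
        drift = (self.ambient_c - self.temp_c) * 0.02 * dt_minutes
        heat = 0.5 * dt_minutes if heater_on else 0.0
        noise = random.uniform(-0.05, 0.05)   # stand-in for sensor noise
        self.temp_c += drift + heat + noise
        return self.temp_c


def policy(temp_c: float, target_c: float = 21.0) -> bool:
    """Trivial controller: heat whenever we're below target."""
    return temp_c < target_c


if __name__ == "__main__":
    twin = RoomTwin()
    for minute in range(120):
        reading = twin.step(heater_on=policy(twin.temp_c))
        if minute % 30 == 0:
            print(f"t={minute:3d} min  temp={reading:.1f} C")
```

Swap the crude physics for a proper simulator and the print statement for an actuator, and you have the skeleton of the loop that matters here: simulate, decide, act, observe.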

The next chapter of AI isn't going to be written in a chat window. It's going to be written on factory floors, in hospital operating rooms, on construction sites, and in logistics warehouses.

And the builders who understand both the software and the physical world will be the ones writing it.