Tech.Talk: AI This, AI That – The Agentic Wave
A simulation for training Agentic AI models
AI… AI… AI… that’s all we hear nowadays, even as much of what AI produces gets dismissed as “slop.” So, what’s next? How about applications of AI that actually do the work? With NVIDIA’s latest GTC and the new buzz in the Valley, Janam Gupta talks agentic.
Since the widespread adoption of AI, one question has dominated the conversation: who has the smartest model? OpenAI, Anthropic, Google, and Meta have poured billions into the arms race, and we are witnessing the results. But lately, the tech industry has been hinting at where the real disruption lies next. LLMs just might be history.
Don’t get me wrong, Generative AI is not going anywhere. Students and educators still need their darling chatbots for some “homework help.”
From Chatbot To Co-Worker
The AI industry is taking the “agentic turn”: a shift from models that generate text and images to autonomous systems that act on their own. Instead of just answering questions, these AI agents browse the web for you, manage your advertising campaigns, negotiate with vendors, and execute multi-step business workflows, all while you sip an espresso martini.
The agentic wave is fundamentally different from the first. Rather than making knowledge workers slightly more productive — drafting emails, summarizing docs, brainstorming — agentic AI can read natural-language prompts, spin up hundreds of cloud browsers, and extract data autonomously.
The next wave of AI won’t be about who has the smartest model, but about who can turn that intelligence into autonomous action at scale.
For businesses, this is bigger than a ten percent efficiency bump; this could amount to the elimination of entire budget categories. Remember how e-commerce agencies charged fifteen percent of ad spend to manually tweak campaigns? Replace that with an agent that understands your brand context, market conditions, and competitive dynamics and then acts on them 24/7. Jensen Huang made this exact case at NVIDIA’s GTC 2026 in San Jose this week, calling it the “inflection point of inference” — the moment AI stops training and starts doing. He shared that 100% of NVIDIA’s engineers use agentic AI tools daily, redefining the new normal.
How It Actually Works

Shall we pop the hood? The best models take LLMs, like those homework-helping GPTs, and wrap them in execution layers that interact with the real world.
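At its core, an execution layer is a loop: the model proposes an action, the layer runs it against the real world, and the result is fed back until the task is done. Here is a minimal sketch of that loop, assuming a hypothetical `call_model()` stand-in for a real LLM call; actual agent frameworks add planning, retries, and sandboxing on top.

```python
# Minimal agent loop: model picks an action, executor runs it, the
# observation goes back into the model's context. call_model() is a
# scripted stand-in for an LLM -- an illustration, not a real API.

def call_model(history):
    # Pretend-LLM: search first, then finish once results are in context.
    if not any(step["action"] == "search" for step in history):
        return {"action": "search", "query": "vendor pricing"}
    return {"action": "finish", "answer": "best quote found"}

# The "tools" the execution layer exposes to the model.
TOOLS = {
    "search": lambda step: f"results for {step['query']}",
}

def run_agent():
    history = []
    while True:
        step = call_model(history)
        if step["action"] == "finish":
            return step["answer"]
        # Execute the chosen tool and feed the observation back.
        step["observation"] = TOOLS[step["action"]](step)
        history.append(step)

print(run_agent())  # best quote found
```

Swap in a real model call and real tools (a browser, a spreadsheet API, a payment system) and this same loop is what “acting in the real world” means in practice.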
In web automation, the shift has been from screenshot-based approaches toward DOM-level parsing. For the non-techies: earlier agents took visual screenshots of webpages and worked from the pixels. Now agents read the site’s underlying structure directly. The result is faster, more reliable, and cheaper extraction, and it works even on messy websites. No more barfing over unappealing-looking pages. Did I mention it’s cheaper?
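To make the contrast concrete, here is a toy DOM-level extraction using only Python’s standard library. A screenshot-based agent would render the page, capture pixels, and ask a vision model “what’s the price?”; a DOM-level agent just walks the markup. The page and the `price` class are made up for illustration.

```python
# DOM-level extraction sketch: read the page structure directly instead of
# interpreting pixels. Page content and class names are hypothetical.
from html.parser import HTMLParser

PAGE = '<div class="product"><span class="price">$19.99</span></div>'

class PriceExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.price = None

    def handle_starttag(self, tag, attrs):
        # Flag when we enter the element whose class marks the price.
        if ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price and self.price is None:
            self.price = data.strip()
            self.in_price = False

parser = PriceExtractor()
parser.feed(PAGE)
print(parser.price)  # $19.99
```

No rendering, no vision model, no guessing from layout: the structure carries the answer, which is exactly why DOM-level parsing is faster and cheaper.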
In e-commerce, systems build “context graphs” that map historical data, past decisions, seasonal patterns, competitive dynamics, and market conditions into a unified representation. Forecasting has never been easier. When the AI adjusts something, it does so with a genuine understanding of what worked or failed before.
What Makes a “Winner”
For agentic startups, the art of “elevating the villain” cannot be overlooked: make the pain point vivid and visceral before introducing the solution. Agentic AI is inherently complex, but the value proposition is simple: it does the work for you. Founders who bridge that gap will close rounds.
Then there’s my favorite unofficial business term: the Steve Jobs factor. In the early stages, it’s always about the people. Technical moats in AI can be transient; what’s cutting edge today may be commoditized within months. What matters is whether the founding team is ready for the grit and grind to navigate the fastest-moving technology landscape.
The winners will figure out how to package intelligence into systems that execute and operate effortlessly in the messy real world, potentially replacing workflows end-to-end. They will strike gold.
The Cyber-Physical Connection

Here’s where it gets exciting. What we’ve seen so far is AI in the digital world, and the physical form of AI is just starting to take shape. GTC 2026 closed with Jensen Huang sharing the stage with a surprise guest: Olaf, Disney’s snowman from Frozen. No, it wasn’t a kid in a costume. It was a free-roaming autonomous robot built by Walt Disney Imagineering on NVIDIA’s Newton physics engine. It walks, talks, balances on a moving boat, and apparently complains that his legs get sore from standing too long.
Huang called it “Physical AI” — systems that move through real environments and interact with real people. Every industrial company, he said, will become a robotics company. Big claim. But so was “every company will become a software company,” and look how that turned out.
The Bigger Picture
The agentic wave represents a particularly compelling opportunity. Startups like Retriever and Akai are led by founders with deep roots in the ecosystem: engineers who built infrastructure at big tech and are now entrepreneurs navigating Silicon Valley’s venture farm.
The e-commerce automation space has massive implications. AI systems autonomously managing operations at a fraction of current costs are being pitched in Bay Area conference rooms right now.
As the AI industry matures past the hype cycle of bigger models and flashier chatbots, the winners will focus on the unsexy but essential work of making AI automate real-world processes. LLMs gave us intelligence; agentic will give us autonomy. If newly funded startups and GTC are any indication, the future is already in production.
All photos courtesy of NVIDIA

