In this issue of The AI Edge
🔥 The Real AI Bottleneck — Forget the bubble talk, the real limit to AI growth isn’t hype, it’s energy and compute. The future of AI will be powered by nuclear, not narratives.
🧰 Build Your First Agent — A simple, step-by-step prompt to design, deploy, and refine your own AI agent, no coding required.
🎯 The Agentic Internet Arrives — ChatGPT’s new in-app ecosystem turns conversation into the new interface, and marks the beginning of truly interoperable AI.
🔥 Signal, Not Noise
I'm skeptical of the claim that we're in an AI bubble when what we're really facing is an energy and compute bottleneck. We're seeing multi-hundred-million-dollar deals between the major compute companies and AI service providers (most recently OpenAI and AMD). Yet there is a more fundamental constraint, one tied directly to energy production and consumption.
We're hearing the same energy-consumption scare tactics that circulated years ago around Bitcoin mining. Today's version paints a picture of AI consuming ever more energy and taxing natural resources. But this misses the point: it assumes we're making no progress in efficient energy production, and it ignores the fact that higher energy consumption is correlated with a higher standard of living.

The real danger is failing to fund energy sources adequately, and most importantly failing to fund nuclear energy. It isn't outside the realm of possibility that future data centers will be paired with their own nuclear reactors; in fact, I don't see any other way around this. China is building more coal-burning power plants, but that wouldn't be palatable in the US.
Residential electricity prices are skyrocketing because of demand from AI data centers and the infrastructure upgrades needed to support that demand.

This isn't sustainable for the average household, and new energy sources don't appear overnight; they take years of planning and investment. There should be a national outcry to build and fund these energy sources, but the actual outcry is over how much water an LLM prompt uses instead.
📌 Quick Hits
Google’s AI Privacy Firestorm — Google’s partnership with AI startup Nayya, meant to simplify employee health benefits, sparked backlash after staff were told to share personal data to access their benefits. The policy was quickly reversed after employee outrage and media scrutiny. A reminder: AI efficiency means nothing without trust. Read more →
Pulse-Fi: Heart Monitoring Through WiFi — Researchers at UC Santa Cruz have developed Pulse-Fi, a breakthrough system that measures heart rate using nothing but cheap WiFi chips and AI. It works from up to 10 feet away and across 17 body positions, with no wearables, no wires, just signal and code. Read more →
OpenAI Launches AgentKit — A new all-in-one toolkit for building and deploying AI agents just dropped. AgentKit lets developers design multi-agent workflows visually, embed chat interfaces with ChatKit, and measure performance with built-in evals and guardrails. It’s a major step toward making “AI agents in production” as easy as building an app. Read more →
🧰 Prompt of the Week
AI agents don't feel real until you actually build one. It can be intimidating, but you can use your LLM of choice to help. Here is a prompt template you can customize for any AI agent you'd like to build:
You are an expert AI systems designer and operations strategist helping me create my first AI agent.
I want to build an agent that will: [Describe your goal here — e.g., research topics for my newsletter, summarize long reports, manage content ideas, identify emerging AI trends, etc.]
Design the agent from the ground up. Please outline:
Purpose: The specific problem it solves and measurable outcomes.
Core Capabilities: What the agent can do (data retrieval, reasoning, writing, summarizing, scheduling, etc.).
Tools / Integrations Needed: APIs, connectors, or systems it should access (web search, docs, Notion, Slack, email, etc.).
Memory & Context Strategy: How it should remember past interactions and apply them intelligently.
Prompting / Instruction Framework: A base system prompt or rule set for consistency and accuracy.
Error Handling & Guardrails: How it should flag uncertainty or limit risks.
Testing Plan: A step-by-step method for validating its performance and accuracy.
Improvement Loop: How it learns or adapts over time (manual feedback, automatic evaluation, etc.).
Finally, provide the exact base prompt I could use to deploy this agent, along with a short checklist of what I’d need to build it using either:
OpenAI’s AgentKit (if I want to code it), or
ChatGPT’s custom GPT builder (if I want to no-code it).
🎯 AI in the Wild
OpenAI is turning ChatGPT into an app-store experience. This is revolutionary because it turns ChatGPT into a hub for more capabilities. Historically, ChatGPT could generate text, images, or code, but for anything beyond that you had to leave the app. Not anymore.
Take these two sample prompts below:
"Zillow, find me 3-bedroom homes under $600K near Chicago"
"Spotify, make a playlist for my morning run"
Before these app integrations, those prompts would have gone nowhere. Now you can use natural language to get an answer and have it span multiple apps, saving significant time.
The real unlock will come in the future, as more apps are added to this framework. It will create an interoperable AI ecosystem, where you won't have to switch between all the different apps that you have. It is similar to what search engines did for the original internet, making it easier to navigate and find what you needed. It is a new paradigm and the start of an agentic internet.
💬 The AI Takeaway
The management of AI agents will be an essential skill for every employee in an organization. We're at the start of a trend in which AI agents will eventually outnumber human employees. It is going to require a new style of management and flexibility, particularly from the management class.
AI agents don't need empathy, don't have feelings, and don't respond to the same incentives that humans do. They also don't sleep, don't need food, and can run 24/7. This challenges deeply held beliefs in current management theory.
Can you, as the manager, communicate and articulate what is needed clearly enough that an AI agent can work semi-autonomously to achieve the outcomes you need? If prompting is any indicator, the answer today is a resounding "no." People struggle with LLMs because they write vague statements and don't use the right level of specificity. That's not the LLM's fault; it's the person's. Human employees can pick up on nuance, trust gut feelings, and apply critical thinking. AI agents will need a much higher level of specificity than was historically required to lead high-performing teams.
If we lean too heavily on AI agents, we run the risk of creating a soulless culture and environment for our human employees: continued disengagement and people phoning it in at work. The solution is to use humans for what they do best and do the same with AI agents. Skills like judgment, creativity, empathy, and relationship-building aren't going away any time soon; these are inherent human strengths, and we should lean into them. Conversely, AI agents excel at speed, scale, optimization, and parallel execution.
The management of AI agents is going to fundamentally change corporate structures, business models and strategy. It's a trend I'm keeping an eye on, and so should you.
-Ylan