In this issue of The AI Edge

🔥 AI agents are in their pre-Cambrian moment — Early failures are noise, not signal. The real breakthroughs come from focusing on the few documented, high-impact processes where agents can win today.

🎯 Execution beats experimentation — Success requires cultural alignment, incentive redesign, and actually tracking where freed-up hours go. Most “AI problems” are really measurement problems.

💬 Governance can’t smother innovation — Put risks in context, talk directly with your legal and privacy partners, and diffuse AI broadly. Breakthroughs happen at the edges, not from locking AI to a privileged few.

🔥 Signal, Not Noise

The Cambrian explosion was a dramatic burst of evolutionary innovation about 540 million years ago. Life shifted from simple organisms to much more complex ones, setting the stage for the evolution that followed.

I think about this in relation to AI agents today. We are in the pre-Cambrian phase, with lots of experiments and periodic successes. Then every few months there is a big step change, whether that is easier techniques or new use cases that AI agents can be applied to. The stories you hear about AI agents failing are noise. Failure is the price of eventual success, and there is no shortcut.

The next question is how to maximize success while minimizing failure. If you're not part of a startup or a tech company (which includes the majority of us), you can't bring a business plan to your CEO showing that you're going to fail 90% of the time and only succeed 10% of the time. That would be career suicide.

What you'll find in any organization is that most processes in their current form are not good candidates for AI agents. Many processes aren't fully documented, and it is very challenging to take tribal knowledge as your source of truth. So 80/20 it. There is probably a small handful of well-documented processes, and some of those drive the majority of business outcomes (i.e., revenue, profitability, customer acquisition, etc.). And if you're applying it to a revenue-type process, you don't even have to name your initiative "AI agent for revenue." Just call it "Revenue Acceleration" and thank me later.
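If it helps to make that 80/20 selection concrete, here is a back-of-the-napkin sketch in Python. Everything in it (the process names, the scores, the scoring formula) is invented for illustration, not pulled from any real framework:

    # Hypothetical sketch: rank candidate processes by documentation
    # quality and business impact. All names and scores are invented.
    candidates = [
        # (process, documentation 0-10, business impact 0-10)
        ("invoice reconciliation",  9, 6),
        ("lead qualification",      8, 9),
        ("tribal-knowledge triage", 2, 8),  # poorly documented = poor agent fit
        ("expense report review",   7, 3),
    ]

    def agent_fit(doc: int, impact: int) -> int:
        """Well-documented AND high-impact processes float to the top."""
        return doc * impact

    for name, doc, impact in sorted(candidates, key=lambda c: agent_fit(c[1], c[2]), reverse=True):
        print(f"{name:24} fit={agent_fit(doc, impact)}")

The multiplication is deliberate: a process scoring zero on documentation should rank at the bottom no matter how much revenue it touches.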

As you start applying AI agents and documenting successes, the next challenge is cultural. There is still a lot of fear related to AI, specifically employees thinking they are going to lose their jobs. Using AI will make people more productive, and yes, there is a future risk of a role being partially automated. The challenge here is aligning incentives. You can't expect individuals to fully utilize AI without there being a company workforce plan that helps with the reallocation of resources.

All of these gains are additive, and the current struggle in many organizations is taking all of the hours saved and actually tracking where they are reallocated. If you're not seeing improvements in, say, product development, marketing, or sales from these freed-up hours, that is a measurement problem, not an AI problem. Getting the true value from AI forces accurate metric tracking and measurement.
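To show what "tracking where the hours go" can look like in its simplest form, here is a hypothetical sketch; the teams, numbers, and initiatives are all made up:

    # Hypothetical sketch: a minimal ledger comparing hours freed by AI
    # against hours explicitly reallocated to named initiatives.
    hours_saved = {"support": 120, "finance": 80, "marketing": 60}

    hours_reallocated = [
        ("support", "new product QA",   90),
        ("finance", "pricing analysis", 80),
        # marketing hours were "saved" but never assigned anywhere;
        # that gap is the measurement problem, not an AI problem.
    ]

    tracked = {}
    for team, initiative, hrs in hours_reallocated:
        tracked[team] = tracked.get(team, 0) + hrs

    for team, saved in hours_saved.items():
        used = tracked.get(team, 0)
        print(f"{team:10} saved={saved:4} reallocated={used:4} unaccounted={saved - used}")

If the "unaccounted" column never shrinks, the problem is the ledger, not the model.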

On the horizon are full-stack AI companies: Y Combinator, the startup accelerator, has said it wants to fund more companies like this. Keep an eye on this trend, as you'll hear more about it, and it will eventually make its way into larger, more traditional companies.

📌 Quick Hits

Start-up Culture Level: Expert — Cursor, a fast-growing AI coding company, has grown to $100M in ARR at remarkable speed. Their recruiting process is uniquely aggressive: every good name spotted in Slack can trigger a full-team outreach. Internally, everyone uses the Cursor product, and there are cool perks. The culture emphasizes raising the ceiling for elite builders over democratizing for the masses. Read more →

Automaker Race to Market Gets AI Boost — Nissan has partnered with UK firm Monolith to deploy AI that predicts the outcomes of physical vehicle tests using decades of engineering data. The pilot cut chassis bolt testing by 17%, and the aim is to halve full testing time across Europe, bringing new models to customers faster. Legacy auto development is about to meet its AI accelerator. Read more →

AI Tools Giga List Unveiled — Andreessen Horowitz (a16z) just dropped Apps Unwrapped, a curated list of 36 frontier AI apps across categories like build, create, productivity, and fun. From chatbots that converse with your docs to avatars that scale your content, the roster shows the next-gen AI stack is being built now. Bookmark the list. Read more →

🧰 Prompt of the Week

I use LLMs frequently to help accelerate problem solving and generate ideas. Here is a prompt you can use in any setting, whether personal or professional:

Act as my personal system-level problem solver. I will describe a task, goal, or idea. For every input I give you, return the items below:

  1. A 3-step version for beginners

  2. A detailed expert-level version

  3. The hidden pitfalls most people miss

  4. The 80/20 shortcut

  5. Three high-leverage suggestions I didn't ask for but should consider

  6. A ready-to-copy template or example based on my exact situation

Ask clarifying questions only if absolutely necessary.
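If you'd rather wire this prompt into a tool than paste it into a chat window, here is a minimal sketch using the OpenAI Python SDK. The model name is a placeholder and the client setup is an assumption; any chat-style LLM API (including your organization's approved one) works the same way:

    # Minimal sketch: reuse the problem-solver prompt programmatically.
    # Assumes the `openai` package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    SYSTEM_PROMPT = """Act as my personal system-level problem solver. \
    I will describe a task, goal, or idea. For every input, return:
    1. A 3-step version for beginners
    2. A detailed expert-level version
    3. The hidden pitfalls most people miss
    4. The 80/20 shortcut
    5. Three high-leverage suggestions I didn't ask for but should consider
    6. A ready-to-copy template or example based on my exact situation
    Ask clarifying questions only if absolutely necessary."""

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in whatever your org approves
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "I want to launch an internal AI newsletter."},
        ],
    )
    print(response.choices[0].message.content)

It is the same prompt verbatim; only the delivery mechanism changes.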

🎯 AI in the Wild

Sometimes the biggest barrier to using generative AI in an organization is just starting. Employees will dip their toe in the water, use it for a few days, and then go back to their normal routines. Many people learn by doing, so no amount of telling them or laying out a scenario will work. They need to see it to believe it.

Anthropic recently published a list of Claude use cases applicable to most companies. It is one of the most user-friendly walkthroughs I've seen yet. Take contract redlining, a common occurrence in any company: it is very time consuming, and using an LLM reduces the overall level of effort. Or take creating a structured interview process: historically just as time consuming, and an LLM can turn your stream of consciousness into an actual structure.

And just remember, if you're doing this on a work laptop make sure to use your organization's LLM of choice. You do not want to put any proprietary company information into a public LLM, and even internally you'll want to follow your company's policies.

💬 The AI Takeaway

You can't have an AI enabled or an AI first organization if the majority of the time is spent on AI safety and governance at the expense of innovation. Organizations struggle with this because it is so easy to be fearful of the risk and bad outcomes AI can drive. Yet when you look deeper, in many cases AI is amplifying the bad habits and misaligned incentives in an organization. It is important to remember that AI is still a tool, and requires human judgment on where to apply it and how to use it.

What I've found effective in defusing the scaremongering tactics of the AI safetyists is showing baseline comparisons to other activities that we do each day. So let's say that your risk officer comes to you and talks about an AI scenario that is a one-in-a-million type of event. This has to be put into the context of all the other activities that currently go on in an organization. A company has a much higher risk of an employee committing fraud or dying on their drive to work. And yet we don't lock down those activities.

Cybersecurity risk is also a common area that gets brought up. It usually follows this pattern: a generative AI solution for employees is being rolled out, and the fear stories start about all the bad things that employees could do. Once again, context and baseline comparisons need to be taken into account. It could be argued that the biggest cybersecurity risk is giving employees laptops, an internet connection, and email. Social engineering continues to be one of the top cyber risks, and it is basically a people problem.

The key is to have these conversations directly with your risk officer, privacy officer, legal counsel, etc. within your organization. You will be amazed at what they are encountering and reading in the general media, and how much of the hyped risk can be dispelled. And vice versa: as an AI officer, there were many privacy laws, regulations, and so on that I had only a cursory understanding of until meeting the experts in the organization. What these conversations do is create an environment with the right guardrails that at the same time encourages innovation and enables employees.

The last area is culture. The AI safetyists would love to lock down AI capabilities to only the privileged few and so-called "experts." Yet with any disruptive technology, including generative AI, much of the innovation occurs at the edges. There truly is wisdom in the crowd, and someone plugging away in their basement halfway around the world can innovate just as easily as someone in a perfect academic setting. I've found this to be the case in a company setting. Diffusing generative AI throughout the organization has brought innovation that I never would have thought of on my own.

AI (and now generative AI) is a trend that is not only here to stay but will also accelerate. Future generations will look back and wonder why people were so against it. Much as farming is now mechanized and you don't really see haberdasheries anymore, that is how we'll eventually view generative AI: an amazing technology that can accelerate innovation in any industry and help us solve the hard problems of humanity.

-Ylan
