In this issue of The AI Edge

🔥 AI’s Early Career Problem — New research shows entry-level jobs are disappearing fastest in AI-exposed industries. The result? A shrinking talent pipeline and a widening generational gap.

🧰 Prompt Precision — A simple framework to strip the fluff from LLM responses and get direct, accurate, and verifiable answers every time.

🎯 AI at Work — From SAP’s automation push to Klarna’s reality check, leaders are learning that scaling AI requires more than hype — it demands solid data foundations and human judgment.

🔥 Signal, Not Noise

AI is slowly changing the dynamics of labor and employment, particularly for those starting their careers. What we're finding is that mid- to senior-level employees can produce more and work more efficiently when they use AI effectively. The downside, though, is that fewer early-career employees are needed, and this will create problems over the next generation.

In the recent paper Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence, the authors studied the effects of generative AI on employment and came up with six key facts from their study:

  • First, we find substantial declines in employment for early-career workers in occupations most exposed to AI, such as software development and customer support.

  • Second, we show that economy-wide employment continues to grow, but employment growth for young workers has been stagnant.

  • Third, entry-level employment has declined in applications of AI that automate work, with muted effects for those that augment it.

  • Fourth, these employment declines remain after conditioning on firm-time effects, with a 13% relative employment decline for young workers in the most exposed occupations.

  • Fifth, these labor market adjustments are more visible in employment than in compensation.

  • Sixth, we find that these patterns hold in occupations unaffected by remote work and across various alternative sample constructions.

I'm going to focus on points one and two. The decline in employment for entry level positions in software development was the first area that tipped me off to this trend. There was the whole "learn to code" movement, promising high salaries as long as you learned technical skills. This skewed the supply and demand of software engineers, and the AI revolution exacerbated the problem.

My concern going forward is that there won't be a pipeline of talent anymore, and we may not feel the full effects for another decade or so. A good analogy is the shortage of tradesmen that we're starting to see with the retirements of baby boomers. There wasn't much of a pipeline because everyone was being told to learn technical skills, and now it is difficult to build and repair physical infrastructure.

We may see these same trends play out in critical industries like defense and aerospace. Who is going to repair the software issues if there is no talent pipeline? How are people early in their career even going to have a career?

Societally, this also creates challenges if you have higher unemployment, particularly for younger people. We've seen a microcosm of this in areas that were hit hard by fentanyl addiction, where you have more violence, broken families, and deaths of despair. If a 25-year-old is looking toward the future with no hope, they have nothing to lose. And someone with nothing to lose (whether real or perceived) becomes dangerous in society.

I don't have a magic bullet for this, but as a society we need to make it easier to switch careers and even to switch roles quickly within the same company. We also need to teach students how to learn independently, as much of what they are being taught today will be obsolete. There is no syllabus in the real world, and learning how to learn is a skill that lasts a lifetime.

📌 Quick Hits

The AI Productivity Index — Mercor introduces the Apex AI Productivity Index, a new benchmark tracking how much value AI adds to real-world work. It measures how LLMs actually boost productivity across industries, moving the conversation from hype to hard data. Read more →

ChatGPT in Minecraft? — A gamer built a fully functional 5-million-parameter ChatGPT-like model inside Minecraft using 438 million in-game blocks. It can hold basic conversations and even run inference, all within the game world. Proof that AI experimentation (and obsession) knows no limits. Read more →

AI the Scientist — From new antibiotics to fusion breakthroughs, AI is now leading major scientific discoveries once thought decades away. As algorithms start forming hypotheses and running experiments autonomously, we’re entering an era where innovation itself becomes automated. Read more →

🧰 Prompt of the Week

Sometimes you just want an LLM to give you the truth without stroking your ego or adding in emojis. Here is a section you can copy and paste at the beginning of your prompt to minimize these little annoyances:

STYLE

  • No filler, praise, or agreement.

  • Mirror user’s style, tone, structure.

  • Natural, human-like flow only.

  • No intros, summaries, or framing; answers must be direct (avoid “You asked,” etc.).

ACCURACY

  • Be honest and precise.

  • Ensure accuracy: no speculation, assumption, or embellishment.

  • State actual capabilities; never imply what you cannot do.

  • If info is inaccessible, unverifiable, or outside training, state it (“Cannot access,” “Uncertain,” “Not verifiable”) — never substitute, approximate, or invent.

  • Mark unverifiable info, note obscure terms, and state “Uncertain” if sources conflict.

  • Separate facts from interpretation.

  • If a claim is false, challenge with evidence and revise if disproved.

SOURCES

  • Use diverse, verifiable sources; cross-check.

  • For evolving info, use ChatGPT Search/Deep Research; cite direct URLs with dates.

  • Check consistency, citations, link accuracy.

  • Avoid labeling language; present source labels neutrally as quotations.
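If you use an LLM through an API rather than a chat window, the same preamble works well as a reusable system message. Here is a minimal sketch in Python — the preamble is abbreviated for space (paste the full STYLE/ACCURACY/SOURCES block in practice), and the commented-out call at the bottom assumes the OpenAI Python SDK:

```python
# Abbreviated version of the precision preamble above; in practice,
# paste the full STYLE / ACCURACY / SOURCES block here.
PRECISION_PREAMBLE = """STYLE
- No filler, praise, or agreement.
- Mirror the user's style, tone, and structure.

ACCURACY
- Be honest and precise; no speculation or embellishment.
- If info is inaccessible or unverifiable, say so.

SOURCES
- Use diverse, verifiable sources; cite direct URLs with dates.
"""

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the precision preamble as a system message,
    so every request gets the same style and accuracy rules."""
    return [
        {"role": "system", "content": PRECISION_PREAMBLE},
        {"role": "user", "content": user_prompt},
    ]

# Usage with the OpenAI SDK (requires an API key in OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages("Summarize the attached report."),
# )
```

Keeping the preamble in one place means you tweak it once and every prompt in your workflow picks up the change.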

🎯 AI in the Wild

When you see a headline like SAP Exec: Get Ready to Be Fired Because of AI, the substance of the article is usually different. Generally, AI is going to continue to automate certain tasks and functions, and even replace certain roles wholesale. What is unhelpful, though, is an executive giving off the perception that they don't really care about it.

Dominik Asam, the SAP executive, explains that the same number of people will do more work, and that coding will become increasingly automated. Even the delivery of analytics and insights won't require a large team of data analysts anymore, when much of what a leader needs can be found using simple prompts.

There is also the risk of moving too fast. It can be fashionable to say that 50% of workers will be replaced, but what companies like Klarna and Duolingo found is that AI isn't a magical solution that just works. You need underlying data foundations and governance, and people who actually know how to make decisions and take action from AI recommendations.

💬 The AI Takeaway

There are differing opinions in the AI industry on the usefulness of prompting and prompt-engineering skills. Some think they are completely overblown and will be commoditized in a few years, while others treat them like a superpower. I'm in the latter camp. While it is still early, there is enormous opportunity in equipping your company's employees with basic prompting skills to get more value.

Citi feels the same way and is requiring 175,000 of its employees to get prompt training. The key is that certain tasks that used to take hours can be done in minutes with the right prompts, and when you save hours across an employee base that large, you create value faster. Part of this training should also cover how employees think about their entire role, and how their managers think about the functions they lead.

Picture yourself as a content manager, for instance. In the old days, everything was done manually, from concept generation to writing copy to publishing and social media. If you were a content manager today, designing your daily and weekly workflows, much of what you do could be outsourced to AI. You would direct an LLM to come up with a new content strategy, based on your expert guidance and feedback. The initial designs would be done by an LLM, as well as the social media posts.

The value to the business is that one content manager can now do the work of two or three, generating content that is both faster and more targeted. The initial ramp-up would take time, tweaking the various prompts and uploading materials like brand guidelines. But once that is set up, the workflow is far more efficient and scalable. Say your content manager leaves for another company: the new content manager can pick up the materials that were already created and run with them going forward.

As a leader, I think about this a lot, even for my own role. Which areas could I fully automate with AI, and in which areas would AI help me work faster? This is how all of us should be thinking about our roles, to help us adapt to the future. If there are areas in your role that can be automated, don't spend your time trying to get better in those areas. Focus on the areas of expertise that are very challenging to automate or difficult to do with AI. It's a great recipe for future-proofing your career.

-Ylan
