In this issue of The AI Edge
🔥 Privacy is the real fault line — LLMs now hold medical notes, secrets and sensitive work data, and none of it is truly private or protected.
🎯 Model bias can become political power — Safety filters and government access risks mean AI can subtly change perception. Open source is the antidote.
💬 You need a personal risk calculus — AI and robotics are changing the way we think about privacy. Only you can decide what fits you, and there is no universal “right” answer.
🔥 Signal, Not Noise
Privacy is on my mind. With any AI, including LLMs, there is a convenience factor that makes you forget about privacy. What are the LLMs using my data for? How are they creating products or services based on what I'm prompting?
When LLMs first came into public consciousness, this wasn't a problem. People were using them for recipes or trip planning, and maybe a few simple things for work. Now people are putting almost any type of information into an LLM. And the prompts aren't private: they can be subpoenaed, there is no legal privilege, and LLM providers have data retention policies that keep this information even after you delete it.
That changes how you think about what you put into an LLM. Do I really want to upload the clinical notes from my recent doctor visit? Maybe. Do I want my deepest secrets to possibly become public knowledge? Absolutely not. Each of us needs an individual risk calculus as LLMs become a greater part of our lives.
I tend to rail against the hypocrisy of the AI safetyists, but privacy is one area of AI safety that is more signal than noise. And not just because we might be embarrassed by our private information becoming public. There are larger political implications, particularly outside the US. Let's say an authoritarian government demands access to an LLM provider, and that provider has to comply. We have no way of knowing what influence the government has over that provider, or how it could use the LLM to slowly shape the perceptions of its populace.
We have already seen authoritarian-lite behavior from current LLM providers, where historical images were generated inaccurately because representation was emphasized over accuracy. These kinds of safety guardrails influence how people think, and if enough people get their information from these models, that can nudge them in a different direction.
So what is the solution? Equal parts privacy and open source. Demand that LLM providers put real privacy guarantees in place, and if they don't, vote with your money and find a provider that offers end-to-end privacy. If consumers reach a critical mass here, it will change the business model of LLM providers. On the open source side, none of the current large LLM providers have revealed the code behind their models. Open source ensures that a larger community can actually inspect the code and see where a company has made trade-offs and where it has manipulated the outputs of an LLM.
So think differently about using your LLM of choice, and make sure it aligns with your risk profile. No one (including me) can tell you what is right for you.
📌 Quick Hits
China Drops a Benchmark Bomb — Moonshot AI just unveiled Kimi K2 Thinking, a frontier-scale model that claims to beat GPT-5 and Sonnet 4.5 across key reasoning tests. The shocker: it's free. That means world-class cognitive capability is no longer gated behind American labs or premium APIs. Global AI competition just leveled up, and the open-access wave is accelerating. Read more →
AI's Race Redefined — The world's eyes are on the U.S., but China may be quietly pulling ahead in the AI arms race. An in-depth feature in the Financial Times argues Beijing has the means, motive and opportunity to take the global lead in AI development. Read more →
AI's Service Split Moment — In the essay Why ACE is cheap, but AC repair is a luxury, the authors map how massive productivity gains in AI-powered sectors are collapsing costs in some areas (think chips, bits, compute) while locking in steep wage and cost dynamics in hands-on services. Read more →
🧰 Prompt of the Week
There are times when you don't want to get too specific in your prompting. I've used the prompt below as I've spent more time with different LLMs, to help identify strengths and opportunities in different parts of my life. I've also used it as a starting point for getting more introspective, since the output can guide you toward the areas that matter most to you.
Based on what you know about me, make recommendations about how I can improve my personal life and my professional life.
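If you want to run this prompt through an API instead of a chat app, here is a minimal sketch using the OpenAI Python client. Everything specific in it is a placeholder: the model name is illustrative, and because an API call has no memory of you, you have to supply your own background in the prompt yourself.

```python
# Minimal sketch: sending the prompt of the week to an LLM over an API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in
# your environment; the model name and background text are placeholders.
from openai import OpenAI

client = OpenAI()

# Unlike a chat interface with memory, an API call knows nothing about you,
# so paste in a short description of yourself for the model to work from.
background = "A short paragraph about your work, goals, and routines."

prompt = (
    "Based on what you know about me, make recommendations about how I can "
    "improve my personal life and my professional life.\n\n"
    f"What you know about me: {background}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```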
🎯 AI in the Wild
There is a general uneasiness in knowing that a ChatGPT or Claude output could be obtained or even leaked publicly. It's something I think about a lot, since there are things I've asked these LLMs that I would never want to be public. Then I stumbled upon Maple AI, an innovative LLM solution with end-to-end privacy built in.
In a world where data privacy and security matter more and more, Maple AI is changing how companies approach AI adoption. The platform is designed to protect sensitive information while still delivering AI-driven insights and automation. Unlike traditional AI solutions, which often require companies to share sensitive data with third-party providers, Maple AI lets organizations keep their data on-premise and under their own control.
The platform also prioritizes transparency and explainability, giving users clear insight into how AI-driven decisions are made, which is essential for building trust in these systems and using them responsibly. That combination matters even more in a shifting regulatory landscape, where companies have to balance innovation against data protection. As awareness of data privacy grows, a provider like Maple AI is well positioned to help companies harness AI, whether for operational efficiency, customer experience, or innovation, while keeping sensitive data protected.
💬 The AI Takeaway
When AI first came into public consciousness more than a decade ago, everyone had the image of the Terminator in their mind. That was the big threat, and that was where AI was going. But after the initial emotional reaction, we could see the benefits of having robots in our lives, and the real risk seemed to be that manual labor jobs would go away.
Well, it was the opposite. AI has had a much larger impact on knowledge workers. That is where the risk is, and why using AI in your job is paramount. Robotics, meanwhile, isn't really in the public consciousness, and outside of robots used for war it isn't in the news much.
Recently, there was a story and ad campaign for the 1x Neo robot, a very benign, friendly-looking robot. It can vacuum, load the dishwasher, and eventually take on the other household chores that no one enjoys. This may be the next big productivity enhancer. Saving hours of housework each day would unlock even greater potential for innovation.
The fusion of robotics and AI will change the physical space as much as AI on its own changed the digital space. A fleet of robots powering manufacturing and other labor-intensive businesses. Your own household robot keeping everything tidy. It isn't science fiction anymore to imagine that robots will eventually outnumber people.
Once again, as I alluded to earlier in this newsletter, there is the privacy question. Do I really want a robot seeing the most intimate details of my life? What if it gets hacked and video of me ends up on the internet? As machines (AI and robots) become a bigger part of our lives, some people will willingly give up their privacy, just as they did with social media. Maybe that is the direction these innovations are heading. But a time when privacy is valued more highly is coming, too.
Privacy and innovation aren't mutually exclusive. Because every person (and company) has their own agenda, it is essential to have privacy-focused and open source technology. That is the only way to align incentives and ensure choice.
-Ylan

