When AI starts writing its own release notes — and what you should build tomorrow
Big shifts in AI this week, three quick wins for builders, and one tool you need to try.
Hi fam!
You know that moment when you open your IDE and the auto-complete writes more lines than you do?
Welcome to November 2025: we’re not just talking smart assistants anymore — we’re talking assistants that ship updates.
This edition of Like Magic AI is all about the agents doing work, the models doing launches, and how you — yes you — can ride the wave rather than get swept under.
(And yes, there’s a no-code puzzle in here for you builders.)
The future of AI feels Like Magic, and it’s here!

Big Idea: “Agents managing agents: the new frontier”
Last week, a major AI lab quietly released an update to its foundation model — but instead of human-written release notes, the notes were generated by the model itself. Think about that for a second. The model wrote the summary, flagged its own improvements, and drafted the migration guide. Why it matters:
It signals that agentic systems are moving from being “smart assistants” to being “self-managing systems”.
For builders, this means the role of “model operator” shifts — less manual prompt-tweaking, more orchestrating multiple agents, monitoring flows, and handling emergent behaviour.
For startups, this opens a new category: systems that not only use AI but supervise AI — you’re not the operator, you’re the architect.
What you should build:
A mini-agent that monitors your own AI-pipeline logs, flags anomalies and drafts suggested fixes. (Think: “Hey, ingestion latency up 17%, here’s a fix I propose.”)
A marketplace of “super-agents” that plug into no-code builders (like your Replit integrations) and carry out routine tasks.
A UX that treats the AI-agent like a team member: chat interface, status updates, version history, accountability.
The kicker: once the agent starts documenting its own updates, you’ve crossed into meta-AI territory. And that’s where exponential signup curves start.
So this week: shift your mindset from “what can the model do for me?” to “how can the model manage what I built for it?” Because that’s where the magic lies.
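To make the first build idea concrete, here is a minimal sketch of a log-watching mini-agent, assuming your pipeline emits JSON-lines records with a latency_ms field. The field name, the 15% threshold, and the suggested fix are my own illustrative choices, not a real API:

```python
import json
import statistics

def flag_latency_anomalies(log_lines, threshold_pct=15.0):
    """Flag log entries whose latency exceeds the median by more than threshold_pct."""
    latencies = [json.loads(line)["latency_ms"] for line in log_lines]
    baseline = statistics.median(latencies)
    alerts = []
    for i, latency in enumerate(latencies):
        increase = (latency - baseline) / baseline * 100
        if increase > threshold_pct:
            alerts.append(
                f"Ingestion latency up {increase:.0f}% at entry {i} "
                f"({latency} ms vs {baseline} ms median). Proposed fix: check batch size."
            )
    return alerts

# Toy log stream: the last entry is ~18% above the median and gets flagged.
logs = [
    '{"latency_ms": 100}',
    '{"latency_ms": 102}',
    '{"latency_ms": 120}',
]
print(flag_latency_anomalies(logs))
```

From here, the "drafts suggested fixes" half is just piping each alert into your model of choice and posting the reply to your team channel.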
Other News
🧠 Google DeepMind launches “MemoryGraph” model: A new architecture that claims to retain training context weeks later — meaning agents can develop long-term projects, not just one-off tasks.
Why we care: If your AI “forgets” between sessions, you’re stuck repeating the cycle. This could break it.

🤖 OpenAI opens Sora text-to-video to select creatives: Their new model is still closed but now in wider red-team/creative access. The public launch date remains unannounced.
Why we care: Video generation is creeping into the workflow. If you’re building content apps, your stack changes.

🌍 EU AI Act enters enforcement phase: The new disclosure and transparency rules for high-risk AI systems kick in Jan 2026. For startups building agents, this matters.
Why we care: If you’re deploying an agent that makes decisions (even internal ones), you might need logs, audit-chains, and “why did the agent do this” explanations.

📈 Startup funding: “AgentOps” rounds surge 4× in Q3: VCs are increasingly backing companies building orchestration layers over LLMs rather than LLMs themselves.
Why we care: Funding is flowing where builders like you can plug in wins — maybe you should consider AgentOps as your angle.
Run ads IRL with AdQuick
With AdQuick, you can plan, deploy, and measure out-of-home campaigns just as easily as digital ads, making them a no-brainer to add to your team’s toolbox.
You can learn more at www.AdQuick.com

Tool / Build Highlight
Tool: “Replit AgentKit” (Beta)
This no-code kit plugs into your Replit workspace and lets you spin up a chat-agent with:
event triggers (webhooks)
scheduled tasks (cron-style)
conversation memory
custom actions (call your API, write a file, send an email)
Why it matters: You can build an operational agent in under 30 minutes without learning infra.
Try this: Spin one up that watches your GitHub repo, and when there’s a new commit, it posts a summary in Slack and assigns a follow-up task.
Bonus: You can layer on one of the new memory-models (see MemoryGraph above) to give it context across days.
Pro tip: Log all interactions to a CSV so you can later analyse “what actions did the agent take”, “how many were reverted”, “what failed” — this is your feedback loop for improvement.
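A minimal sketch of that CSV feedback loop. The agent_actions.csv path and the column names are illustrative assumptions, not AgentKit APIs:

```python
import csv
import os
from datetime import datetime, timezone

LOG_PATH = "agent_actions.csv"  # hypothetical log location
FIELDS = ["timestamp", "action", "outcome", "reverted"]

def log_action(action, outcome, reverted=False):
    """Append one agent action to the CSV feedback log; write a header on first use."""
    new_file = not os.path.exists(LOG_PATH)
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "outcome": outcome,
            "reverted": reverted,
        })

# Example entries: one success, one failure your review pass can catch later.
log_action("summarise_commit", "posted summary to Slack")
log_action("assign_task", "failed: missing assignee")
```

Because it's plain CSV, your weekly review is one pivot table away: count actions, count reverts, list failures.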
Founders / Builders Tip
“Build for clarity, not just efficiency.” Many AI agents are built to do things — summarise, generate, automate. But the next wave? Agents that explain themselves. As a builder, ask yourself:
When my agent runs, can another human follow what happened and why?
Can I generate a log or human-readable explanation alongside the agent’s action?
Can I build the “undo” or “supervise” button easily?
Because if you don’t build the supervision & transparency layer now, your user (or client) will hate you when everything breaks at scale.
Bonus: The clearer your agent’s actions, the faster you can iterate, the faster you can onboard non-tech users.
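The three questions above can be answered with a surprisingly small amount of structure. Here is one way to sketch that supervision layer in plain Python; SupervisedAgent and the do/undo hooks are hypothetical names I made up for illustration, not a real framework:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AgentAction:
    """One agent step, paired with a human-readable reason and an undo hook."""
    description: str
    reason: str
    undo: Callable[[], None]

class SupervisedAgent:
    def __init__(self):
        self.history: List[AgentAction] = []

    def act(self, description, reason, do, undo):
        """Run an action, recording what happened, why, and how to reverse it."""
        do()
        self.history.append(AgentAction(description, reason, undo))

    def explain(self):
        """Answer 'can another human follow what happened and why?'"""
        return [f"{a.description} (because {a.reason})" for a in self.history]

    def undo_last(self):
        """The 'undo' button: reverse and drop the most recent action."""
        action = self.history.pop()
        action.undo()
        return f"Reverted: {action.description}"

# Usage: the agent drafts a reply, explains itself, then a human reverts it.
inbox = []
agent = SupervisedAgent()
agent.act(
    "drafted reply to ticket #42",
    "ticket was unanswered for 48h",
    do=lambda: inbox.append("draft"),
    undo=lambda: inbox.pop(),
)
print(agent.explain())
print(agent.undo_last())
```

The design choice worth copying: every action is forced to carry its reason and its reversal at the moment it runs, so the transparency layer can never drift out of sync with what the agent actually did.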
The New Enterprise Approach to Voice AI Deployment
A practical, repeatable lifecycle for designing, testing, and scaling Voice AI. Learn how BELL helps teams deploy faster, improve call outcomes, and maintain reliability across complex operations.

Pollo AI: When “Realistic Video” Takes a Creative Detour 🎬✨
Realistic image-to-video tools are getting better every month… but some of them still like to take the scenic route. This week we put Pollo AI to the test — a tool that promises smooth motion, believable lighting, and lifelike animation.

What we got was a mix of “huh, nice,” “wait, what,” and “Dalí would be proud.”
If you’ve ever wondered how close (or far) we are from true realism in AI-generated video, buckle in — this experiment delivered surprises, frustrations, and one runaway glass.
Closing
That’s it for this week’s dive. If you’re building an AI-agent stack in Replit, remember: the model is only half the job. The OTHER half? The wiring, context-memory, supervision, transparency — the real “magic” under the hood.

Let me know what you’re working on — hit reply and tell me one agent you’ve built (or are thinking about). I might feature a few of your builds in an upcoming issue.

Until next week: keep pushing the boundaries — and remember: magic is just a well-orchestrated algorithm away.
P.S. Next issue: “From prompt to product — how builders are shipping AI start-ups in 30 days or less.” Stay tuned.

Like Magic AI NFT 🏞️
Our master plan is to publish an NFT image in each newsletter and hand it out to our subscribers. The earlier you subscribe, the smaller the series you join. It's a future collectible, a piece of digital art that captures the essence of this moment in time.
Thank you for being a valued subscriber. Together, let's embrace the magic of AI and creativity!

LMAI156-21112025
Was this email forwarded to you? Sign up here 👇



