For YC W25 I moved to SF in January, and shortly after settling into my new place, I started receiving notifications for likes on a Twitter post I didn’t even remember writing. When I dug a little deeper, I discovered my podcast agents had already summarized the latest AI breakthroughs, shared insightful discussions on Twitter, and even curated a personalized newsletter. A quick check on Spotify confirmed they’d published a podcast episode covering the week’s AI news.

Over the Christmas break, I decided to build something purely for fun—an automated multi-agent workflow that curates the best AI news each week and transforms it into a newsletter, a tweet, and a fully automated weekly podcast. I just wanted it to appear on Spotify so I could listen without having to scour countless feeds. The entire pipeline runs on a weekly cron job in ECS, transforming raw content into curated, multi-format updates. Pretty cool, right? It was definitely a fun project to build.
With AI taking the spotlight and breakthrough after breakthrough flooding every feed, keeping up with all the updates was quickly becoming impossible—especially with the surge in AI-generated content. So, in a very meta move, I thought: Why not use AI to fight AI-driven info overload?
This is how I built an automated multi-agent system that transforms AI news into weekly podcasts, newsletters, and social media content using LangGraph, topic modeling, and TTS, all running autonomously on ECS.
The Information Overload Problem
At first, I deployed a Telegram bot to process newsletters, RSS feeds, and similar sources. Using standard tools like LangChain and the OpenAI APIs, the bot identified, extracted, and structured key findings from various AI newsletters and feeds. Ironically, as I added more sources, the Telegram chat morphed into another sprawling feed—a never-ending scroll of over 1.4K updates!
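To give a flavor of that step, here's a stripped-down sketch of the parser, assuming a LangChain setup; the `NewsItem` schema and the model choice are hypothetical stand-ins, not the bot's actual code:

```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

# Hypothetical schema for one extracted update; the real bot's fields differ.
class NewsItem(BaseModel):
    title: str = Field(description="Short headline for the update")
    summary: str = Field(description="One-paragraph summary of the key finding")
    is_ai_related: bool = Field(description="True only for genuine AI/ML news")
    source_url: str = Field(description="Link back to the original item")

# Model choice is illustrative; any structured-output-capable model works.
llm = ChatOpenAI(model="gpt-4o-mini").with_structured_output(NewsItem)

def parse_update(raw_text: str) -> NewsItem:
    """Turn one raw newsletter/RSS entry into a structured record."""
    return llm.invoke(
        "Extract the key finding from this AI newsletter item:\n\n" + raw_text
    )
```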

Then, over Christmas, inspired by the magic of NotebookLM, I took it one step further: convert all these AI updates into a single, digestible weekly podcast. Instead of dealing with an overwhelming feed, I envisioned a well-curated conversation culminating in a podcast episode—perfect for those times when reading just isn’t an option (because, let's face it, I'm pretty lazy). The result? A multi-agent system that clusters AI news, filters out the fluff, and uses “editor” and “domain expert” agents to engage in natural dialogue, resulting in a polished podcast transcript with TTS output for platforms like Spotify and Apple Podcasts.
An autonomous system that produces a weekly AI news podcast and newsletter
How the System Works
The pipeline comes together as follows:
1. Collection & Clustering
   - All AI/ML-related updates (from newsletters, RSS, etc.) are ingested from my database.
   - Content is vetted for "true AI relevance" (distinguishing actual AI technology from generic tech news).
   - Each update already comes with embeddings generated by the initial Telegram bot parser.
2. Topic Modelling
   - Unsupervised topic modeling groups similar updates into clusters (see the first sketch after this list).
   - I use OpenAI's `text-embedding-3-large` to generate embeddings, then apply UMAP for dimensionality reduction and HDBSCAN for density-based clustering.
   - Key terms per topic are surfaced using c-TF-IDF, KeyBERT, MMR, and sometimes OpenAI for additional label suggestions.
   - For more details, check out my earlier post, “Mapping out the AI Landscape with Topic Modelling.”
3. Multi-Agent Dialogue
   - Using LangGraph, I spawn AI “Perspectives” (or domain experts) to discuss each topic cluster. Each perspective—say, a research analyst or a business strategist—brings a unique viewpoint and uses tools for deep-dive research.
   - This technique, inspired by the STORM paper, generates multi-perspective, grounded conversations that pack in more information and references.
   - The dialogue is orchestrated as a structured state machine for smooth, natural interactions.
4. Content Generation & Distribution
   - Newsletter: The curated topics and conversations are assembled into a comprehensive newsletter complete with source references.
   - Podcast Transcript: The system pieces together all the topic-level discussions and the newsletter into a cohesive, multi-speaker transcript.
   - TTS: The transcript is converted to audio using multiple AI voices (also sketched below) and uploaded to Transistor.fm, reaching platforms like Spotify and Apple Podcasts.
   - Tweet: A concise tweet is automatically generated to spotlight key updates and invite further engagement.
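Here's a stripped-down sketch of the clustering step, using the `openai`, `umap-learn`, and `hdbscan` packages; the hyperparameters are illustrative guesses, not my tuned production values:

```python
import hdbscan
import umap
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> list[list[float]]:
    # Same embedding model as mentioned above (3072 dimensions).
    resp = client.embeddings.create(model="text-embedding-3-large", input=texts)
    return [d.embedding for d in resp.data]

def cluster_updates(texts: list[str]):
    vectors = embed(texts)
    # Squash the high-dimensional embeddings before density-based clustering;
    # n_neighbors / min_cluster_size are illustrative, not production values.
    reduced = umap.UMAP(n_neighbors=15, n_components=5, metric="cosine").fit_transform(vectors)
    labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(reduced)
    return labels  # -1 marks noise: updates that fit no topic cluster
```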
All these steps run completely automatically—resulting in a weekly multi-format digest: newsletter, tweet, and a conversation-based podcast delivered right to you.
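And since the TTS leg is the most mechanical part, here's roughly what the per-speaker rendering could look like. I'm using OpenAI's text-to-speech endpoint purely as a stand-in (the provider and voice mapping above aren't specified), so treat this as a sketch:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Hypothetical speaker-to-voice mapping ("multiple AI voices").
VOICES = {"HOST": "alloy", "EXPERT": "onyx"}

def synthesize(transcript: list[tuple[str, str]], out_dir: Path) -> list[Path]:
    """Render each (speaker, line) pair to its own MP3 segment."""
    paths = []
    for i, (speaker, line) in enumerate(transcript):
        resp = client.audio.speech.create(
            model="tts-1", voice=VOICES[speaker], input=line
        )
        path = out_dir / f"segment_{i:03d}.mp3"
        resp.write_to_file(path)  # segments get concatenated and uploaded afterwards
        paths.append(path)
    return paths
```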
AI Agent Architecture
The architecture is modeled after a real editorial team, just entirely AI-based:
- Research Analyst: Uses advanced clustering and topic modeling to sift through the noise.
- Content Strategist: Crafts the narrative and shapes the newsletter.
- Domain Experts: Represent specialized viewpoints (like academic researchers or industry insiders) and engage in natural multi-agent dialogues.
- Podcast Editor: Assembles the conversation, ensures coherence, and finalizes the transcript.
To be a bit more specific, the graph loops the expert dialogue over each topic cluster, then hands everything off to the editor stages that produce the newsletter and the final transcript.
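Here's a minimal LangGraph sketch of that flow; the state fields and node names are hypothetical, but the shape (loop the dialogue over each cluster, then hand off to the editor) matches the description above:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class EpisodeState(TypedDict):
    clusters: list[str]   # topic clusters from the modeling step
    dialogues: list[str]  # one expert discussion per cluster
    newsletter: str
    transcript: str

def run_expert_dialogue(state: EpisodeState) -> dict:
    # In the real system, the "perspective" agents debate this cluster.
    cluster = state["clusters"][len(state["dialogues"])]
    return {"dialogues": state["dialogues"] + [f"Discussion of: {cluster}"]}

def route(state: EpisodeState) -> str:
    # Keep looping until every topic cluster has been discussed.
    return "dialogue" if len(state["dialogues"]) < len(state["clusters"]) else "editor"

def write_newsletter(state: EpisodeState) -> dict:
    return {"newsletter": "\n\n".join(state["dialogues"])}

def edit_transcript(state: EpisodeState) -> dict:
    return {"transcript": "HOST: Welcome back!\n" + "\n".join(state["dialogues"])}

builder = StateGraph(EpisodeState)
builder.add_node("dialogue", run_expert_dialogue)
builder.add_node("editor", write_newsletter)
builder.add_node("podcast", edit_transcript)
builder.add_edge(START, "dialogue")
builder.add_conditional_edges("dialogue", route, {"dialogue": "dialogue", "editor": "editor"})
builder.add_edge("editor", "podcast")
builder.add_edge("podcast", END)
graph = builder.compile()

episode = graph.invoke({"clusters": ["agents", "evals"], "dialogues": [],
                        "newsletter": "", "transcript": ""})
```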

The outcome? A surprisingly human-sounding dialogue about the week’s AI developments, automatically delivered to all the podcast platforms I use. Noice.
Future Improvements
While the system already solves my biggest headache—AI info overload—there’s still plenty more I’d like to explore:
- Temporal Topic Analysis: Extend the pipeline to track how AI trends evolve weekly or monthly using dynamic topic modeling.
- More Natural Dialogue: Although the podcast already sounds human, fine-tuning for natural filler words like “um,” “yeah,” and “really?” could add even more authenticity (and a touch of humor). I’m eager to test Google’s multi-speaker dialogue TTS once it’s more widely available.
- Community Engagement: Imagine letting listeners vote on the most relevant updates, paving the way for Personalized Podcasts. With AI-generated content, production constraints vanish, allowing a custom AI news experience tailored to each listener’s interests.
- Explore Other TTS Models: Google’s multi-speaker Cloud Text-to-Speech voices are currently invite-only, but once they open up, I might switch to see if I can further elevate the audio experience.
Conclusion
This project was a blast to build—and even better, it’s solving one of my biggest challenges: staying updated without drowning in information. By merging topic modeling, multi-agent dialogue, and seamless multi-channel distribution, I now get a weekly digest (newsletter, tweet, and podcast) that takes care of itself.
Feel free to check out the newsletter on Dev.to, follow me on Twitter, and listen to the podcast on Spotify or Apple Podcasts.