There is a lot to unpack this week. We shipped a redesigned social video editor, gave the episode suggestion system a major brain upgrade, made interviews smarter with guest context, and fixed a long list of things that were quietly bothering us. Here is everything that landed.
New Features
Social Video Editor — Redesigned from the Ground Up
The social clip editor got a full makeover. You now get an integrated editing experience with an iPhone frame preview so you can see exactly how your clip will look on a phone before you publish. A new filmstrip transport bar lets you scrub through the video, and the style panel is cleaner and more organized — preset grids, caption controls, background options, and template pickers all in one place.
The content hub that lists your clip batches has also been redesigned. Batches are now grouped by podcast with collapsible sections, and there is a new filter toolbar so you can search, filter by status, and toggle between card and list views.
Schedule Episodes from AI Suggestions
Your podcast can now run entirely on autopilot using AI-generated episode suggestions instead of just RSS feeds. When scheduling a podcast, you can choose whether each episode should come from your RSS feed or from your suggestion queue — and set a fallback if the primary source has nothing available.
A new suggestion queue in settings lets you pin specific topic ideas for upcoming episodes. When the queue is empty, an AI picker automatically selects the best suggestion based on urgency, value, and your recent episode history. Suggestions older than three days are automatically refreshed before selection.
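A minimal sketch of the queue-selection flow described above. All names here are illustrative, and the AI scoring step is stubbed out: the real picker weighs urgency, value, and episode history via a model call.

```typescript
// Hypothetical shape of a queued suggestion.
interface Suggestion {
  id: string;
  topic: string;
  pinned: boolean;
  updatedAt: Date; // last time the suggestion was generated or refreshed
}

const STALE_AFTER_MS = 3 * 24 * 60 * 60 * 1000; // three days

function isStale(s: Suggestion, now: Date): boolean {
  return now.getTime() - s.updatedAt.getTime() > STALE_AFTER_MS;
}

// Pinned suggestions win; otherwise fall through to the AI picker
// (stubbed here as "first candidate"). Stale suggestions are flagged
// so they can be refreshed before the final selection.
function pickSuggestion(
  queue: Suggestion[],
  now: Date
): { chosen: Suggestion | null; needsRefresh: Suggestion[] } {
  const needsRefresh = queue.filter((s) => isStale(s, now));
  const pinned = queue.find((s) => s.pinned);
  const chosen = pinned ?? queue[0] ?? null;
  return { chosen, needsRefresh };
}
```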
Cover Art Shapes Your Social Clip Colors
The AI color engine for social clips now looks at your podcast's cover art when picking a color palette. Instead of guessing a theme, it matches the colors and mood of your existing visual brand. You will also notice that highlighted words in captions are now always visually distinct from the surrounding text — we enforce a minimum contrast ratio so the highlight effect is never invisible.
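The contrast guarantee above can be sketched with the standard WCAG 2.x contrast-ratio formula. The 3:1 threshold below is an assumption for illustration; the product may enforce a different minimum.

```typescript
type Rgb = [number, number, number];

// WCAG relative luminance: linearize each sRGB channel, then weight.
function relativeLuminance([r, g, b]: Rgb): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio is (lighter + 0.05) / (darker + 0.05), ranging 1..21.
function contrastRatio(a: Rgb, b: Rgb): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// The highlight color must stand out from the surrounding caption text.
function highlightIsVisible(highlight: Rgb, text: Rgb): boolean {
  return contrastRatio(highlight, text) >= 3;
}
```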
Gradient Background Color Controls
When you enable animated gradient backgrounds in the social clip editor, three new color pickers appear — one for each gradient stop. You can dial in exactly the gradient you want and see it update live in the preview.
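Conceptually, the three pickers map onto the stops of a CSS gradient. The sketch below is illustrative only; the fixed stop positions (0/50/100%) and angle parameter are assumptions, not the editor's exact behavior.

```typescript
// Build a three-stop CSS gradient string from the picked colors.
function threeStopGradient(angleDeg: number, colors: [string, string, string]): string {
  const [a, b, c] = colors;
  return `linear-gradient(${angleDeg}deg, ${a} 0%, ${b} 50%, ${c} 100%)`;
}
```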
Richer Context for Interview Guests
When creating an interview invite, you can now give your AI host more to work with. Three new optional fields let you add background notes about your guest, paste in their LinkedIn About section, and upload their resume or CV as a PDF. All of this context is extracted and fed into the AI agent's briefing so it can ask more informed, personalized questions.
Episode SEO Editor
Episodes now have a dedicated Content and SEO tab in the episode detail page. You can edit seven sections — title, description, show notes, highlights, FAQs, chapter markers, and tags — and regenerate any individual section with AI without touching the others. A one-click backfill lets you enrich all your existing episodes at once.
Featured Episodes on Your Public Show Page
You can now designate one episode as your show's featured episode. It appears as a prominent "Start Here" card at the top of your public podcast page, making it easy to guide new listeners to your best work.
Podcasting 2.0 Fields and AI Disclosure
When syncing your show to podcast directories, the system now sends Podcasting 2.0 metadata fields including medium, license, and location. Episode publishes now include your transcript URL and formatted show notes. An AI disclosure tag is automatically attached to both shows and episodes to comply with emerging transparency standards.
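The feed fields above can be pictured as small helpers emitting tags from the Podcasting 2.0 namespace (`podcast:medium`, `podcast:license`, `podcast:location`, `podcast:transcript`). The values and the `text/vtt` default below are examples, not the product's actual output, and the AI disclosure tag is omitted here since its exact form is still evolving.

```typescript
interface ShowMeta {
  medium: string;   // e.g. "podcast"
  license: string;  // e.g. an SPDX-style identifier
  location: string; // human-readable place name
}

// Show-level Podcasting 2.0 tags.
function podcast20Tags(meta: ShowMeta): string {
  return [
    `<podcast:medium>${meta.medium}</podcast:medium>`,
    `<podcast:license>${meta.license}</podcast:license>`,
    `<podcast:location>${meta.location}</podcast:location>`,
  ].join('\n');
}

// Episode-level transcript reference.
function transcriptTag(url: string, mimeType = 'text/vtt'): string {
  return `<podcast:transcript url="${url}" type="${mimeType}" />`;
}
```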
Improvements
More Human-Sounding AI Content
Writing guidelines that discourage AI-isms — em dashes, buzzwords, formulaic transitions, hedging phrases — are now applied consistently across every AI-generated piece of content: podcast scripts, outlines, episode metadata, articles, show notes, and social clip captions. The goal is for generated content to sound like something a real person actually wrote and said.
Higher Quality Social Clip Renders
Video renders from Remotion Lambda now use PNG frames instead of JPEG during processing, which eliminates compression artifacts on text and caption pills. The final H.264 output uses a higher quality setting that matches what you see in the in-browser preview.
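As a rough illustration, the change amounts to a render configuration along these lines. The field names mirror Remotion's render options, but treat this as a hedged sketch under assumed values, not the product's exact configuration.

```typescript
// Hypothetical render options: lossless PNG intermediate frames avoid
// compression artifacts on text, and a lower CRF raises H.264 quality.
const renderOptions = {
  codec: 'h264' as const,
  imageFormat: 'png' as const, // was 'jpeg'; lossy frames blurred caption pills
  crf: 16,                     // lower CRF = higher quality (assumed value)
};
```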
Podcast Artwork Fixed for Directory Listings
Cover art uploaded to podcast directories is now automatically resized and processed into a compliant square format before upload. Previously, non-square or out-of-spec images were silently dropped by directory backends, causing your artwork to be missing from generated podcast feeds.
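The normalization step boils down to computing a centered square crop, which an image library would then apply and resize to a directory-compliant size (3000x3000 is a common target, assumed here). A minimal sketch of the crop math:

```typescript
interface CropBox {
  left: number;
  top: number;
  size: number;
}

// Largest centered square that fits inside the source image.
function centerSquareCrop(width: number, height: number): CropBox {
  const size = Math.min(width, height);
  return {
    left: Math.floor((width - size) / 2),
    top: Math.floor((height - size) / 2),
    size,
  };
}
```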
Faster and Cheaper Suggestion Pipeline
The episode suggestion pipeline was migrated to a newer AI SDK, upgraded to the latest Claude models, and simplified from four stages down to three. Web research is now handled by Anthropic's built-in search tool, removing the Tavily API dependency entirely. All 51 tests pass.
Social Clip Editor Controls Are Now Template-Aware
The editor panel now reads each video template's capabilities from a manifest rather than relying on scattered hardcoded conditionals. This means the right controls appear for the right templates automatically, and adding new templates in the future requires no changes to the editor UI — just an entry in the manifest.
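The manifest-driven approach can be pictured like this: each template declares its capabilities in one place, and the editor derives which controls to render. Template and capability names below are illustrative, not the product's actual manifest.

```typescript
type Capability = 'captions' | 'gradientBackground' | 'highlightColor';

// One entry per template; adding a template means adding a line here,
// with no changes to the editor UI itself.
const templateManifest: Record<string, Capability[]> = {
  viral: ['captions', 'highlightColor'],
  minimal: ['captions'],
  gradient: ['captions', 'gradientBackground'],
};

function supportedControls(template: string): Capability[] {
  return templateManifest[template] ?? [];
}

// The editor asks this instead of hardcoding per-template conditionals.
function supports(template: string, cap: Capability): boolean {
  return supportedControls(template).includes(cap);
}
```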
Caption Density Slider Now Works
The caption density control in the social clip editor was saved correctly but never forwarded to the video renderer, making it a no-op in both preview and final render. It is now wired all the way through across all six templates.
Preset Switching Updates Fine-Tune Defaults Immediately
When you switch style presets in the social clip editor, slider defaults for font size and weight now update to reflect the new preset right away instead of pulling from a stale database value.
Bug Fixes
Fixed a crash in the background job pipeline where importing video rendering code on the server caused a React context error. Video rendering packages are now always loaded dynamically at runtime in server-side code, keeping them out of the statically built server bundle.
Fixed a database query ambiguity error that was causing episode fetches to fail after a new relationship was added between the episodes and podcast feeds tables. Queries now use explicit foreign key references to avoid the conflict.
Fixed highlighted words in social clip captions being completely invisible because the highlight background color defaulted to the same value as the active word color. Both are now always distinct.
Fixed caption shadow controls in the editor showing a different value than what was actually rendered in the video, due to theme-derived shadow values not being resolved the same way in the sidebar as in the renderer.
Fixed a race condition where switching between clips could briefly leave the editor showing the previous clip's style values.
Fixed the initial render of social clips using a different default text shadow than the editor preview for non-viral templates, so what you see in preview is what you get in the rendered video.