VP Land is a newsletter & YouTube channel covering the latest updates in creative technology for M&E professionals.
Here's a quick roundup of the most significant developments in media and entertainment technology this week:
🏰 Disney Invests $1B in OpenAI to Bring 200+ Characters to Sora
Disney has agreed to invest $1 billion in OpenAI, licensing over 200 characters from Disney, Marvel, Pixar, and Star Wars for the Sora video model. The three-year deal allows users to generate short, social-style videos using these IP assets, though it explicitly bans the use of actor likenesses and voices. Select fan-created content will be curated and streamed on Disney+, marking the first time a major studio has integrated user-generated AI content into its primary streaming service.
📺 McDonald's Pulls AI Christmas Ad After Backlash
McDonald's Netherlands retracted its "The Most Terrible Time of the Year" commercial after viewers criticized the AI-generated visuals as "low quality." Created by TBWA\Neboko and The Sweetshop, the 45-second spot used generative AI to depict chaotic holiday scenes like exploding trees, but faced immediate complaints about "AI slop" and unnatural motion. The company stated it pulled the ad to respect the audience's reaction and acknowledged the experiment as a learning moment for future productions.
🔊 ElevenLabs Partners with Meta to Power Dubbing and Voices on Instagram
ElevenLabs is integrating its audio technology into Meta's platforms, powering features like automatic translation and dubbing for Instagram Reels. The partnership also brings ElevenLabs' library of over 11,000 voices to Horizon, allowing users to generate distinct character voices for their avatars. This integration leverages ElevenLabs' ability to scale audio across 70+ languages, aiming to make global content more accessible through seamless localization.
🎨 New OpenAI Image Models Reportedly Surface with Improved Text Rendering
Users on the blind testing platforms Design Arena and LMArena have spotted two new models, codenamed "Chestnut" and "Hazelnut," believed to be versions of OpenAI's next-generation image generator. Early tests indicate these models significantly outperform DALL-E 3 in rendering legible text, complex code, and realistic skin textures, without the "yellowish" tint common in previous versions. Speculation suggests these models may be released alongside the rumored GPT-5.2 update.
🕺 sync. Launches react-1 to Edit Avatar Emotions Without Reshoots
sync. released react-1, a video-to-video AI model that lets creators alter an avatar's performance and emotional expression (e.g., "happy," "sad") in post-production. Unlike traditional lip-sync tools, react-1 reanimates the entire face and head movement to match a new audio track or emotion prompt while preserving the original avatar's identity. The tool supports up to 4K resolution and is available now via API and a research preview.
Want to stay on top of stories like this throughout the week?
Get VP Land, the free newsletter sent twice a week covering the latest trends, projects, and developments in creative technology. Subscribe here.