Hedra Labs has launched Character-1, a new multimodal foundation model that supports controllable video generation. Built by a team of former Stanford researchers, including Michael Bach and Alex Bergman, Character-1 focuses on generating expressive human characters with full-motion video and synchronized sound. The model lets users create and animate faces from audio and text, potentially making dedicated lip-sync apps obsolete, and it also supports dynamic 3D content generation. The technology is in its early stages but is already pushing the boundaries of the uncanny valley. Character-1 is currently available for free and has received positive feedback from early users, who highlight its potential for storytelling and creative applications. Hedra Labs aims to give creators tools for expressive control and has laid out an ambitious roadmap for future development; the research preview of Character-1 has shown promising results.
Hedra's Character-1 just dropped 12 hours ago. And users are having a field day bringing characters to life with AI. 9 mind-blowing examples you must watch: 1. Thanos the Rap God https://t.co/pjPOirwiav
Exciting news from Hedra AI, a new text-to-video (incl. audio) generative AI tool on the market. Hedra AI just published the research preview of our foundation model, Character-1. "This is just the first glimpse of what's to come as we work towards building a multimodal creation… https://t.co/w5ZBXddcOL
Hedra just announced Character-1. You can generate videos with human characters easily using AI. Here are 10 mind-blowing reveals and examples: https://t.co/vOKw8xFFH8
The new video avatar generator Hedra is available for free (at least for now). It's way too much fun... :-) https://t.co/zx5zy5Mx9v https://t.co/fGe5Txiqov
First official announcement from Hedra, the company I've been working with for the last few months. Our foundation model, Character-1 is just the first glimpse of what's to come. I'm excited about what we're building, and if you've been following my work you know that I'm all… https://t.co/mox4lw4e64
Storytelling will soon be very different with @hedra_labs. #hedra https://t.co/8e2ptyoQQc
congrats to @mjlbach and the @hedra_labs team on the launch of their CHARACTER-1 foundation model research preview. Amazing output. https://t.co/E0I9eepBBR
Character-1 from Hedra is probably the best Lip Sync I have seen so far. Congratulations on the launch @hedra_labs https://t.co/y5eEhLMblt
🚨#BreakingNews🚨 Boom, another new AI has arrived! 🎉 @hedra_labs lets you animate faces effortlessly from audio or text. Want to make any image talk or sing? No problem, go check out HedraLabs! 🚀 https://t.co/uN7VYwhQRm
I got early access to @hedra_labs Character-1 and it's incredible. The model generates video and dynamic 3D content with a focus on expressive humans. Workflow and examples below: #hedra https://t.co/KNcsV0f1qk
With HEDRA you can create a character and a voice in app or you can import your own. This tech is in its infancy but already it's pushing the frontiers of the uncanny valley. A couple of things to flag up: MID SHOTS and MCUs work best, and it only outputs 1:1 at present but team… https://t.co/e607adIuhw
Congrats to team @hedra_labs on the launch! Here's AI me 😎 -- We're excited to see what creators do with their character-centric video model. They have an ambitious roadmap in pursuit of providing creators the tools they need for expressive control. We loved working with… https://t.co/zmbPr8sSn6 https://t.co/mxqvVRC1DF
had so much fun with @hedra_labs new multimodal foundation model supporting controllable video generation, led by a top ex Stanford research team @mjlbach @alexwbergman Hedra's research preview generates expressive characters with full motion video + synced sound! 🎬 https://t.co/Khx8F9D2ht
Lip sync apps just became irrelevant. @hedra_labs is a foundational video model focused on humans — these are expressive generations brimming with emotion. Oh and they can do beards too :) I got a sneak peek into what’s next and I couldn’t be more excited https://t.co/Z63ajusurn