Runway has launched the Gen-3 Alpha model, available to the public as of July 1st. This new AI video model, announced on June 17th, lets users generate high-quality videos from text prompts. The community has been actively testing and sharing results, showcasing impressive and creative videos. Gen-3 introduces advanced features such as dynamic camera movements, detailed animations, and the ability to animate text. It is being compared favorably to OpenAI's Sora, with many users noting its video quality and creative potential. The model has sparked significant interest and collaboration within the AI community, with numerous examples of its capabilities shared online. Early access was rolled out through Runway's Creative Partners Program ahead of the public launch.
“Hyperspeed POV shot: going through a crystal cave, then through a portal, then through a burning city.” First try. Runway Gen-3 is clearly the best text-to-video system. Yet, it's closer to a world model, but isn't quite there. Also not controllable enough yet for full use. https://t.co/3fuQS6sFXP
That's groovy man - made with Gen-3 from @runwayml and music with @udiomusic https://t.co/n2jsYvDpWI
posting some @runwayml Gen-3s on @getremixai. AI video is going to be amazing for music videos and new types of creator shorts https://t.co/toWyuSDTjN
"Imagine" - made with Gen-3 from @runwayml https://t.co/Im58kBQ41S
Here's a little clip from today's video that will sort of blow you away: @runwayml Gen-2, waaay back when it was still on Discord vs. Gen-3 today. 14 months is 14 years in AI time. Full video on the channel, collecting all the best prompt tips along w/ my findings! https://t.co/jKEPWqAQQF
Gen-3 ❤️ #runway Prompt share summary 🫶🏼 Community A short 🧵 of prompt share posts so far (in order). Please let me know specifically what is useful 🙏🏼 1. Introduction to prompting https://t.co/ZQLHYDqpdu
In honor of my original ai bird video getting ready to hit 24M on TikTok, I created this new #Gen3 @runwayml version. I had to shorten it for X but will post the full version elsewhere. #ai #aivideo #runwaygen3 https://t.co/IpKim3mrrK
Gen-3 from @runwayml animates text so nicely https://t.co/6yhcie2Mwl
Alright, Runway Gen3 is here! Now it's time for side-by-sides: Runway Gen3 vs OpenAI Sora ↓ https://t.co/zc7RTRf67P
Poetry in motion. @runwayml Gen-3 test. https://t.co/6s6FaZERAI
Generated with Gen-3 Alpha. Love the image quality. See how @runwayml built this at Ray Summit. https://t.co/vEtJmQ9HOs https://t.co/wEMsUKin1w https://t.co/DnuSt1MWgn
Exploring Space And Time @runwayml #gen3 #gen3alpha https://t.co/GJGq2D4yxu
I released "Cinematic AI" almost 9 months ago and now with the recent release of @LumaLabsAI Dream Machine and @runwayml GEN3, I feel it is time again to capture this extraordinary leap forward in cinematic AI video on the blockchain. CINEMATIC AI MMXXIV Coming soon... https://t.co/FTtFAwwoIj https://t.co/EMyJGT4FQC
While I do want image-to-video from @runwayml I'm impressed with the quality of its video from text, especially compared to other platforms. I put this prompt into Luma Dream Machine and Runway Gen-3: "Experience a drone flight over the first human base on Mars. Glide over the… https://t.co/G0Y90s6zD9
I thought I'd share a few prompts I've tried on Gen-3 Alpha that have, and haven't worked. I can't use it for any longer form projects yet as it doesn't have character consistency and image-to-video isn't live yet (soon, I'm told). "A conductor raises their baton. Colored light… https://t.co/yNv4Uc2OYC
Gen 3 is now Public... Let's Test Drive it! https://t.co/SMOkLb7cgO https://t.co/KEUc7hf20v
Gen-2 vs. Gen-3: The Evolution of Text-to-Video! I spent some time today comparing Runway Gen-3 with Runway Gen-2 and honestly I am blown away by the quality difference. I tested out specifically scenes with water and Gen-3 produced some spectacular movement in each of the… https://t.co/FbRF0bIc5N
A comparison between @runwayml Gen-3 and SORA done by the awesome @amebagpt 👏 You can see and judge side by side, while also keeping in mind that only one of those is available to actually use right now! 😉 https://t.co/RLHh8pcMXf
Side by side comparison of @runwayml's new Gen-3 model vs Sora. To do this, I have taken the videos from @OpenAI and used the same prompts in Runway. This would be biased against Runway of course, but interesting to see the difference. Runway is very close. You have no… https://t.co/LWYHiANR5T
Gen-3 for all https://t.co/ill2wF8dTb
Playing with and trying to work out the most effective way to prompt @runwayml Gen-3 for a story tomorrow. So far, mixed results but I've found the motion is better than the image realism.
📢Gen-3 Alpha is available to everyone: https://t.co/zaMftb2IUz
gen-3 is p legit video by @luiscape using @runwayml gpu-themed banger by me using @suno_ai_ https://t.co/ZGomRll8He
Every time I touch a new #GenAI tool I use the same initial test prompt: "A spaceship landing in the jungle." This is what @runwayml Gen-3 did on the first try. Mind. Officially. Blown. @c_valenzuelab @iamneubert https://t.co/H1WIsmGQFZ
Runway Gen-3 is out and is amazing! Unlike LumaLabs, Gen-3 requires a paid plan to try. Considering the recent takes on both on X, which one would you choose and why?
If I post a Gen 3 video from Runway, it will only be if it's combined with 3D or another technique, or if it tells a good story! Why don't people use it to tell something deeper than just typing a prompt? 99% of what I see from Luma AI and Runway is pretty boring. And what is…
On June 17th, Runway announced Gen-3 Alpha, and today, July 1st, it's available for everyone. This is how you do it, OpenAI. Take note, you demoed Sora in February, and people are still waiting. 🙄 https://t.co/pbGSeSyZLW
GEN-3 Alpha is out! And this is one of my favorite edits 🥹 https://t.co/CFUCxlpuns
GEN-3 Alpha is available now - for everyone! Our team has poured their hearts and souls into building this new generation of models, ushering in a new era of creativity accessible to all. This model was built by artists, for artists. We’re just getting started. https://t.co/X7mbllUYnc
Runway Gen-3 is feeling less like a rehash of the training data and more like a world simulator. The man's movements. The attempt to get the space sprinkler physics right. His eye/face movements at the end. Very impressive model that creates its own impressions of the world. https://t.co/zkt32Vp6aS
"Rogue Runway: Intergalactic Edition" Made with @runwayml Gen-3 🤯 🔥 Feel free to join our 'Rogue Runway' Discord to get the prompts, make suggestions and create your own editions: https://t.co/jN45L4Wp4S #runway #gen3 #aiart #aigenerated #aifilm #aimovie #aifashion https://t.co/kBDZn6cJXq
"An astronaut running through an alley in Rio de Janeiro"🔥🔥 The Gen-3 Alpha model from RunwayML was just released a couple of days ago. It is now possible with Gen-3 to generate videos from a single prompt. Sora from OpenAI was mind-blowing but this is at the same level if not… https://t.co/22tCW0u7Hm
Gen-3 Alpha is WILD. Now, you will be able to generate high quality videos. I tested with a few prompts + sharing examples of other creators. Here are 10 examples that will blow your mind. 1. Prompt - Cinematic Shot, close up of Polar Bear, running towards his mother https://t.co/I4suXwO8D2
For those who missed it, here's DJ FL-AI's first full length music video. Made entirely with @runwayml gen-3. https://t.co/1B7Ja9Zuen
Runway Gen-3 dropped a few days ago. Text and images can now make epic videos. Crazy, huh? 10 examples: https://t.co/fRJ7BPfhKl
After 48 hours with Gen-3 Alpha, here are some initial thoughts: - Prompting requires you to be quite specific to what you want. Definitely a learning curve there. - Human motions are mostly cursed af (but I like it). - Alpha is text-to-video only atm, so it’s hit or miss to get…
Gen-3 Alpha is pure magic. People with access are dropping wild AI videos. 10 examples (100% AI) 1. The Presidential Debate https://t.co/VWpwdpmiHp
An honest take on Runway Gen-3... https://t.co/9ciTOBbwMb
A new cinematic AI piece. "The Unsuspecting Public" I've been constantly prompting Runway GEN3 to the point that I'm slowly starting to get a sense for how best to use it in my AI art practice. For this piece, I used Runway GEN2, GEN3, and Luma. Each AI video tool served a very… https://t.co/JFOvgYiMcB
oooh, runway gen3 🥹 dynamic camera movement, low angle shot, of a slimey dripping green monster, melting morphing into a white cat two 5s results put together inspired by @Uncanny_Harry https://t.co/RryeSOqFgC
runway gen3 dynamic camera movement, low angle shot, purplish glitter hill in the living room, tornado, transforms into a walking giant peas monster, flys over the house, milkyway (it missed some parts of the prompt, but i think it is a cool result) https://t.co/8rVb14Kj1V
Beginners guide to prompting large language models (LLM’s) like @ChatGPTapp , @AnthropicAI & @MSFTCopilot Talking to a LLM (prompting) is like having a conversation. This guide will help you learn to communicate better with large language models to get the most out of them. https://t.co/tkopmKUWhE
Decided to test @runwayml GEN3-alpha with typography, found that not only you can prompt type styles but also animate the letters. Music: "Jazzy Nights" by Andres Vegas (Apersonal) #gen3 #aianimation #aifilm https://t.co/n16wuIry39
Gen3 is insane with text effect. 🔥🤩😱 https://t.co/MZKc7rNeiq
Is Runway Gen-3 model release the Midjourney 4.0 or 5.0 moment for AI video? 🤔
Gen-3 Alpha is an imagination black box. Creative Partners Program members with access just dropped more new AI videos. 11 litty examples right hurrr:
Give me a prompt for Runway Gen-3 (500 characters max), and I'll generate the video. I'll then post the result in the comments section for everyone to see. share and like for more exposure! https://t.co/CNneawM65o
Most people tend to use relatively short prompts when using generative AI. But, a new form of prompts can provide more detailed and accurate responses. https://t.co/WERAIgCHGg https://t.co/18w7R15oBz
#gen3alpha is incredible. It understands the physical properties of things like nothing else I've seen in generative AI. #runwayml @runwayml https://t.co/AMuFagQ2cs
Alright, here it is, my first music video made with video assets from Gen-3 @runwayml. The music was made with @suno_ai_ Presenting "For All to See" https://t.co/69jN3z22Qf
Gen-3 Alpha is pretty wild. People with access just dropped more new AI videos 11 wild examples:
runway gen 3 (text to video) + eleven labs (video to sound effects) dynamic camera movement, a toy car crashing into a feather explosion and transforming into a transformer and walking among people https://t.co/0kUs39qFeJ
The adherence of generated video to this maximum length prompt is super impressive. @runwayml Gen-3 takes generative video to a level never before experienced. Prompt: "The footage on the TV has that grainy, “taped on a bad VHS over a wedding video and left in the sun” type… https://t.co/YaUyBVl5YZ
Less than 48 hours after RunwayML released Gen-3 Alpha. It looks better than OpenAI's Sora. Here are 10 wild examples: https://t.co/1iczSM5WNO
Hey #Gen3 #RunwayCCP folks! I’m putting together a Prompt Tips video for tomorrow. Can you kindly share what you’ve learned below? Would love for the larger community to hit the ground running when it drops public!
The 3D coherence of Gen-3 is mind blowing. @runwayml https://t.co/Vbe1jHVszj
Wow... Runway Gen-3 excels at creating stunning title screen animations. 9 impressive examples: 1. https://t.co/PAi80T7WNA
Exploring weird @runwayml gen-3 prompts for future DJ FL-AI videos. This one is pretty great: "Cinematic psychedelic footage, bokeh, morphing people and creatures, artistic style. " https://t.co/5I8iin3Ghj
🚨 Gen-3 Alpha Prompt share 🚨 https://t.co/3htiN41c5F
The @runwayml Gen3 new AI video model beta is open for their sleepless CPP members – here are some cool videos the community is creating instead of going to sleep: Starting with @AzeAlter https://t.co/sZxzQZXOj5
Gen-3 ❤️#runway Prompt share 🫶🏼 Community Subject: Descriptive vs Structured prompting Personal observation: Noticed that I prompt in two distinct ways. They both seem to work incredibly well, but they also feel like they engage completely different parts of my brain. When… https://t.co/6i8TPtw88e
feel ~ runway gen3 https://t.co/BV6X7qw1fl
Let's collaborate and make a mega Runway ML Gen-3 thread... Send me your prompts to try! I'll start... Prompt: Cinematic shot of The word "Matt" made up of curving fluid form building facade made up of nested metal, wood, and stone pieces. Camera Movement - zoom out to reveal… https://t.co/shShKwQUrb
BREAKING: Runway launched Gen-3 Alpha for Creative Partners Program. And they generated some crazy videos. 10 wild examples: https://t.co/xDMKkEaBlC
#runway Gen-3 ❤️ Prompt share 🫶🏼 Community Note: animals are not real (this is a prompt technique test) Important: using token averaging across prompt by adding 'unnecessary' tokens to suppress unwanted camera movement / create natural movement, prompt structure may evolve… https://t.co/MxYzFCqKNU
A Comprehensive Overview of Prompt Engineering for ChatGPT By Aswin Ak Prompt engineering is crucial to leveraging ChatGPT’s capabilities, enabling users to elicit relevant, accurate, high-quality responses from the model. As language models like ChatGPT become more… https://t.co/GMRF4urO84
#runway Gen-3 ❤️ Prompt share 🧵 Community 🫶🏼 Note: not real animals Comments: Adding 'unnecessary' descriptive tokens to prevent unwanted camera movement through token averaging, prompt not yet stable to be completely re-useable, still testing Static closeup shot of a green… https://t.co/m6N2Zgrkv2