
Create your own bullet time images using $9 cameras



If you have ever thought about how cool it would be to create the awesome bullet time visual effects used in movies without a Hollywood budget, a recent experiment by 3DSage shows that with a bit of creativity and technical skill, and a handful of affordable $9 cameras, you can produce the stunning bullet time effect popularized by “The Matrix.” The video below shows how affordable technology can be used to recreate a high-end cinematic technique, demonstrating that expensive equipment is not always necessary for creating amazing visual effects.

What is Bullet Time?

Bullet time is a visual effect or filming technique that creates the illusion of slowing down time during an action sequence, allowing high speed movements to be viewed in slow motion and from multiple angles simultaneously. This effect gives the viewer a dramatic experience as though they are moving through the scene at normal speed while the environment around them moves in slow motion.

The technique typically involves the use of multiple cameras capturing the same action from different angles. These cameras are either set up in a circular pattern around the subject or moved along a track to capture the movement dynamically. The captured frames are then processed and sequenced to create a continuous, fluid motion that appears to ‘wrap around’ the subject seamlessly.
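
In practice, that sequencing step is straightforward: pick the same instant from every camera and play those frames back in rig order. Below is a minimal Python sketch of the assembly, assuming each camera’s footage has already been extracted into its own folder of numbered frames; the folder layout, frame index, and output settings are placeholders rather than details from 3DSage’s project.

```python
# Minimal sketch of the frame-sequencing step, assuming each camera's footage has
# already been extracted into its own folder of numbered PNG frames (frames/cam01,
# frames/cam02, ...). The folder names, frame index, and output settings are
# placeholders for illustration, not details from 3DSage's project.
import glob
import cv2  # pip install opencv-python

CAMERA_DIRS = sorted(glob.glob("frames/cam*"))  # one folder per camera, in rig order
FREEZE_INDEX = 120                              # the instant to "freeze" across all cameras

# Take the same frame index from every camera to build the wrap-around sweep.
sweep = []
for cam_dir in CAMERA_DIRS:
    frames = sorted(glob.glob(f"{cam_dir}/*.png"))
    sweep.append(cv2.imread(frames[FREEZE_INDEX]))

# Write the sweep out as a short clip; each camera contributes a single frame.
height, width = sweep[0].shape[:2]
writer = cv2.VideoWriter("bullet_time_sweep.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), 10, (width, height))
for frame in sweep:
    writer.write(frame)
writer.release()
```

Played back at a low frame rate, those frames already read as a camera orbiting a frozen subject; frame interpolation, covered later in this article, is what makes the orbit look smooth.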

The concept was popularized by the film “The Matrix” (1999), directed by the Wachowskis. In the movie, bullet time is used to showcase complex fight scenes and bullet dodges, emphasizing the supernatural abilities of the characters. The term itself has since been used widely in film, video games, and other media to refer to similar effects that emphasize motion and perception in dynamic visual contexts.

Creating Bullet Time Images

The process starts with the purchase of ten $9 cameras from Amazon, selected for their robust construction despite their low price. Early testing highlighted some challenges, including:

  • Lower-than-expected video quality
  • Poor audio performance

Nonetheless, these cameras proved sufficient for basic video tasks, making them viable for the experiment. The experimenter’s determination to push the boundaries of what could be achieved with budget equipment drove them to find creative solutions to these initial hurdles.

Designing the Bullet Time Rig

Driven by the desire to replicate the mesmerizing bullet time effect, 3DSage designed a custom rig using 3D modeling software. The setup was engineered to hold the cameras at precise angles, allowing simultaneous multi-angle video capture, a crucial component for creating the desired visual effect.

The rig’s design went through multiple iterations to ensure optimal camera positioning and stability. 3DSage’s attention to detail and willingness to refine the design were key factors in the project’s success. If you build your own rig, expect to go through a few iterations as well; the small planning script below can help you rough out camera positions before committing to a print.
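
As a rough planning aid (not part of 3DSage’s published workflow), the geometry of a circular rig is easy to script: spread the cameras evenly along an arc around the subject and toe each one in so its lens points back at the centre. The camera count, radius, and arc length below are placeholder values to swap for your own.

```python
# Hypothetical rig-planning helper: spreads cameras evenly along a circular arc
# centred on the subject and reports where each one sits and how far it must be
# rotated (toed in) to face the subject. All numbers are placeholders.
import math

def rig_layout(num_cameras=10, radius_cm=60.0, arc_degrees=120.0):
    step = arc_degrees / (num_cameras - 1)
    layout = []
    for i in range(num_cameras):
        theta = -arc_degrees / 2 + i * step              # placement angle along the arc
        x = radius_cm * math.sin(math.radians(theta))    # sideways offset from the centre line
        y = radius_cm * math.cos(math.radians(theta))    # distance out in front of the subject
        toe_in = -theta                                  # mount rotation so the lens faces the subject
        layout.append((round(x, 1), round(y, 1), round(toe_in, 1)))
    return layout

for i, (x, y, toe_in) in enumerate(rig_layout(), start=1):
    print(f"camera {i:2d}: x = {x:6.1f} cm, y = {y:6.1f} cm, toe-in = {toe_in:6.1f} deg")
```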

Overcoming Technical Hurdles

As you will see in the process documented in the video above, the project faced several technical obstacles. Initial configurations of the camera rig had issues with camera angles and spacing. After multiple redesigns and adjustments to the 3D-printed mounts, each camera was positioned precisely to capture the correct angle.

To improve video fluidity, software tools like Flowframes were used to enhance frame transitions, significantly smoothing the video output. Flowframes is a Windows GUI for video interpolation that supports RIFE (PyTorch & NCNN), DAIN (NCNN), and FLAVR (PyTorch) implementations. It ships with RIFE-NCNN, which runs on Tencent’s NCNN framework and therefore works on any modern (Vulkan-capable) GPU. Here is an explanation of some of the more important settings, with a command-line alternative sketched after the list.

  • Processing Style: Either run all steps at once, or each step manually, in case you want to edit frames, or deduplicate manually.
  • Maximum Video Size: Frames are exported at this resolution if the video is larger. Lower resolutions speed up interpolation a lot.
  • Export Name Pattern: Customize the pattern of the filenames of outputs using variables.
  • Input Media To Preserve: Toggle transfer of Audio, Subtitles and MKV Metadata.
  • Enable Transparency: Interpolate transparency. Only active if the input and output support transparency (PNG/GIF).
  • Import HQ JPEGs: Will extract JPEG instead of PNG frames from videos. Fast and lightweight, but with a tiny (invisible) quality loss.
  • Frame De-Duplication: This is meant for 2D animation. Removing duplicates makes a smooth interpolation possible.
    • You should disable this completely if you only use content without duplicates (e.g. camera footage, CG renders).
    • “During Extraction” works for most content. Use “Accurate (After Extraction)” for fine-tuning the sensitivity.
  • Loop Interpolation: This will make looped animations interpolate to a perfect loop by interpolating back to the first frame at the end.
  • Fix Scene Changes: This avoids interpolating scene changes (cuts), as doing so would produce a weird morphing effect.
  • Auto-Encode: Encode video while interpolating. Optionally delete the already encoded frames to minimize disk space usage.
  • RIFE UHD Mode: This mode changes some scaling parameters and should improve results on high-resolution video.
  • GPU IDs: 0 is the default for setups with one dedicated GPU. Four dedicated GPUs would mean 0,1,2,3 for example.
  • NCNN Processing Threads: Increasing this number to 2, 3 or 4 can improve GPU utilization, but also slow things down.
  • RIFE CUDA Fast Mode: Utilizes Half-Precision (fp16) to speed things up and reduce VRAM usage, but can be unstable.
  • Encoding Options: Set options for video/GIF encoding. Refer to the FFmpeg documentation for details.
  • Minimum Video Length: Make sure the output is as long as this value by looping it.
  • Maximum Output Frame Rate: Limit frame rate by downsampling, for example, if you want a 60 FPS output from a 24 FPS video.
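
Flowframes itself is point-and-click, but if you prefer to keep the whole pipeline scriptable, FFmpeg’s minterpolate filter can perform a similar, if less sophisticated, motion-interpolation pass. A minimal sketch, assuming FFmpeg is installed and on your PATH; the file names and target frame rate are placeholders:

```python
# Rough command-line alternative to Flowframes using FFmpeg's motion-compensated
# interpolation filter. This is not RIFE; file names and frame rate are placeholders.
import subprocess

INPUT = "bullet_time_sweep.mp4"    # the raw multi-camera sweep
OUTPUT = "bullet_time_smooth.mp4"  # motion-interpolated result
TARGET_FPS = 60                    # frame rate to interpolate up to

subprocess.run([
    "ffmpeg", "-y",
    "-i", INPUT,
    "-vf", f"minterpolate=fps={TARGET_FPS}:mi_mode=mci",  # motion-compensated interpolation
    "-c:v", "libx264", "-crf", "18",                      # reasonably high-quality H.264 output
    OUTPUT,
], check=True)
```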

The final video successfully captured the dynamic, immersive bullet time effect reminiscent of “The Matrix.” This experiment not only served as a valuable learning experience but also demonstrated how combining low-cost technology with innovative thinking can lead to professional-quality results.


This fantastic exploration into bullet time photography using inexpensive video cameras illustrates a key point: with the right knowledge and a creative approach, the possibilities are virtually limitless. Whether you are an experienced photographer or a tech enthusiast eager to try new things, this project shows that expensive equipment is not necessary for creating captivating visual effects.

Video Credit: 3DSage
