Bubblerama

About Technologies

This project combines two types of innovative 3D animation technology into a single pipeline to achieve a modern, high-quality final result while simultaneously accelerating and streamlining the animation production cycle.

Character animation and movement transfer to the digital environment rely exclusively on surface motion capture technology. Because of the specific nature of the characters and their movements, the conventional approach, in which an actor wears a motion capture suit fitted with optical sensors and their movements are transferred to a humanoid digital character, is not applied. Instead, a unique grid of inertial sensors (IMUs, inertial measurement units) is developed and customized for each animated object. This sensor grid can be adapted to complex 3D geometries, including fabric-like, tubular, and vertical structures.

For this project, fabric-type sensor grids are uniquely developed and tailored for each object (animated character), taking into account its geometry, range of motion, and material properties. The most complex objects will be equipped with grids containing up to 60 individual sensors embedded in parallel across three planes, ensuring that both the movement and the material properties of the physical objects are accurately transferred into the digital model. The captured motion data is then encoded and represented in software as local coordinates, where each sensor's position within the physical object corresponds to a designated location in its digital model.
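As a rough illustration of this sensor-to-coordinate mapping, the sketch below assigns each slot in a three-plane grid a fixed local coordinate in the digital model. The grid dimensions, spacing, and all names here are assumptions for illustration only, not the project's actual software.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorSlot:
    """A sensor's position in the physical grid (hypothetical layout)."""
    plane: int  # 0..2, the three parallel sensor planes
    row: int
    col: int

def build_sensor_map(planes: int, rows: int, cols: int, spacing_mm: float) -> dict:
    """Assign each sensor slot a local (x, y, z) coordinate in millimetres."""
    return {
        SensorSlot(p, r, c): (c * spacing_mm, r * spacing_mm, p * spacing_mm)
        for p in range(planes)
        for r in range(rows)
        for c in range(cols)
    }

# An assumed 3 x 4 x 5 layout yields a 60-sensor grid, matching the
# sensor count mentioned for the most complex objects:
sensor_map = build_sensor_map(planes=3, rows=4, cols=5, spacing_mm=25.0)
```

In a real pipeline, each incoming reading would be looked up by its slot and written to the corresponding vertex region of the digital model.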

The selected technology provides high precision: tests have shown that the maximum error of each individual sensor does not exceed 0.2 mm. Furthermore, current computer connectivity allows the simultaneous transmission of data from up to 300 individual sensors, enabling semi-automated real-time data processing. Together, these two factors make possible a highly detailed sensor grid and a realistic depiction of the physical model's movements in the digital environment. They also significantly reduce the post-processing time normally required to clean up excess movements, correct inaccuracies, or enhance material realism; in traditional motion capture workflows, such refinements demand extensive animator work hours.
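The two figures quoted above, a 0.2 mm per-sensor error budget and a 300-sensor transmission limit, can be sketched as a simple frame validator. The frame format and function names below are assumptions for illustration.

```python
MAX_SENSORS = 300   # simultaneous-transmission limit quoted in the text
MAX_ERROR_MM = 0.2  # per-sensor error budget quoted in the text

def validate_frame(frame: dict) -> list:
    """Return the IDs of sensors whose reported error exceeds the budget.

    `frame` maps a sensor ID to ((x, y, z), error_mm); this layout is a
    hypothetical simplification, not a real device protocol.
    """
    if len(frame) > MAX_SENSORS:
        raise ValueError(f"frame carries {len(frame)} sensors, limit is {MAX_SENSORS}")
    return [sid for sid, (pos, err_mm) in frame.items() if err_mm > MAX_ERROR_MM]

frame = {
    "s001": ((10.0, 0.0, 5.0), 0.05),
    "s002": ((12.5, 0.1, 5.2), 0.31),  # out of tolerance
}
flagged = validate_frame(frame)  # ["s002"]
```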

The second technology employed in this project is Unreal Engine, a real-time 3D platform that will be used for environment creation, the placement of moving objects in the digital space, and animation post-processing. Unreal Engine is currently the most in-demand and fastest-growing 3D environment and game development platform. For digital environment creation it accepts high-poly geometry from all major 3D modeling programs (such as SketchUp, 3ds Max, Cinema 4D, and Blender), as well as any .fbx file. Unreal Engine stands out for its vast library of integrated digital materials, accurate and fast lighting, and the ability to use an unlimited number of virtual cameras for animation and cinematic rendering. Each virtual camera can be set to match the specifications of its real-world counterpart, including focal length, sensor type and size, and movement within the digital space.
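The idea of matching a virtual camera to its real-world counterpart can be sketched as a plain data structure. This is not the Unreal Engine API; the class and field names are assumptions, and only the optics relation (field of view from focal length and sensor width) is standard.

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    """Hypothetical description of a virtual camera mirroring a real one."""
    name: str
    focal_length_mm: float
    sensor_width_mm: float
    sensor_height_mm: float

    def horizontal_fov_deg(self) -> float:
        """Horizontal field of view implied by focal length and sensor width."""
        return math.degrees(
            2 * math.atan(self.sensor_width_mm / (2 * self.focal_length_mm))
        )

# A 35 mm lens on a full-frame (36 x 24 mm) sensor gives roughly a 54 degree
# horizontal field of view:
cam = VirtualCamera("hero_cam", focal_length_mm=35.0,
                    sensor_width_mm=36.0, sensor_height_mm=24.0)
```

Keeping such parameters in data rather than hard-coding them makes it straightforward to reproduce a specific physical camera setup inside the engine.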

Additionally, Unreal Engine allows for the real-time modeling of an unlimited number of frames and camera positions within specific animation sequences, providing nearly limitless framing possibilities according to the creative and artistic vision. The platform supports ray tracing and path tracing for rendering images, while its Lumen lighting technology—available from Unreal Engine 5.0 onward—ensures realistic light representation.

By integrating these two technologies, customized surface motion capture sensor grids and the Unreal Engine platform, together with software developed specifically for this project, we can ensure not only precise data transmission but also real-time error correction and automatic refinement of minor inaccuracies. This will allow us to faithfully transfer even the most complex physical object movements into the digital environment while enhancing the 3D objects with realistic material properties. Motion sensor data will also be integrated into the Unreal Engine-generated digital world in real time, enabling the simulation of 3D object interactions, with preassigned material properties and physics, captured through virtual cameras.
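One minimal example of the kind of automatic real-time refinement described above is a sliding-window moving average that damps jitter in a sensor stream. The window size and the one-dimensional simplification are assumptions for illustration; the project's actual correction software is not described in detail here.

```python
from collections import deque

class MovingAverage:
    """Smooth a stream of sensor samples with a fixed-size sliding window."""

    def __init__(self, window: int = 5):
        self.samples = deque(maxlen=window)  # old samples fall off automatically

    def update(self, value: float) -> float:
        """Accept a new sample and return the current windowed average."""
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)

smoother = MovingAverage(window=3)
# A transient spike (30.0) in an otherwise steady stream is damped:
out = [smoother.update(v) for v in [10.0, 10.2, 30.0, 10.1]]
```

In practice each sensor axis would get its own filter, and outliers beyond the error budget could be rejected before averaging.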


Key Benefits of This Technological Integration:

  • Highly accurate motion capture and rendering
  • Efficient and streamlined environment creation
  • Fast and flexible setup of virtual cameras in the digital space
  • Elimination of traditional frame-by-frame rendering, significantly reducing animation production time

This approach not only enhances animation quality but also optimizes the overall workflow, making the production process more efficient and cost-effective.


Copyright © 2025 Bubblerama - All Rights Reserved.
