Roadmap

Alpha
Finished
Phase 1: Foundation and Camera Feed Integration

Established a modern and versatile application framework. Invested considerable time in optimizing the project's development process, prioritizing ease of use and developer-friendliness.


Implemented a highly efficient camera feed pipeline to seamlessly integrate image inputs from diverse sources.
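
As an illustration only (the actual pipeline is not documented in this roadmap), a source-agnostic feed in Python can be as thin as a wrapper around OpenCV's VideoCapture, which accepts device indices, video files, and network streams alike:

```python
# Illustrative sketch only: AImation's actual pipeline is not documented here.
import cv2

def frames(source):
    """Yield frames from a device index, a video file path, or a stream URL."""
    cap = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            yield frame
    finally:
        cap.release()

# Usage: the same generator serves a webcam (0), a file ("clip.mp4"),
# or a network stream ("rtsp://...").
for frame in frames(0):
    cv2.imshow("feed", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cv2.destroyAllWindows()
```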

Alpha
Finished
Phase 2: Precision and Hardware-Independent Recording

Established infrastructure for the storage, processing, and retrieval of images from diverse sources.

Developed a system to estimate intrinsic camera parameters for various hardware devices.
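
The roadmap does not name the estimation method; a minimal sketch of one standard approach, OpenCV's checkerboard calibration, looks like this (the folder name and pattern size are illustrative assumptions):

```python
# Sketch only: OpenCV checkerboard calibration, one common way to estimate
# intrinsics. Folder name and pattern size are illustrative assumptions.
import glob

import cv2
import numpy as np

PATTERN = (9, 6)  # inner-corner count of the printed checkerboard (assumed)

# 3D reference points of the board corners in its own plane (z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration_shots/*.png"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix; dist holds the lens-distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f"reprojection RMS: {rms:.3f}")
```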

Alpha
Finished
Phase 3: Pose Estimation, Editor, and Advanced Pose Tools

Developed infrastructure for running and reading from AI models.

Implemented base pose estimation with 33 keypoints for accurate human movement tracking.
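
The 33-keypoint layout matches the topology of Google's MediaPipe Pose; whether AImation uses that model internally is not stated. A minimal sketch of reading 33 keypoints per frame with MediaPipe:

```python
# MediaPipe Pose is one model with exactly this 33-keypoint topology; its use
# here is an assumption made only for illustration.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture(0)  # any camera feed source
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        # 33 normalized (x, y, z) keypoints, each with a visibility score.
        kps = [(lm.x, lm.y, lm.z) for lm in result.pose_landmarks.landmark]
        assert len(kps) == 33
cap.release()
```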

Developed accompanying tools for efficient storage, retrieval, and manipulation of motion data.

Alpha
Finished
Phase 4: Depth and User-Friendly GUI

Implemented robust stereo calibration for accurate extrinsic parameter estimation and 3D reconstruction.
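
As a sketch of what this step typically involves (the exact solver is not named here), OpenCV's stereoCalibrate recovers the rotation and translation between two pre-calibrated cameras, which triangulation then uses for 3D reconstruction:

```python
# Sketch only: cv2.stereoCalibrate is the standard route from matched
# checkerboard views to the rotation R and translation T between two cameras.
import cv2
import numpy as np

def stereo_extrinsics(obj_pts, img_pts_l, img_pts_r, K_l, d_l, K_r, d_r, size):
    """Recover R, T between two pre-calibrated cameras (intrinsics held fixed)."""
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts_l, img_pts_r, K_l, d_l, K_r, d_r, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R, T

def triangulate(K_l, K_r, R, T, pts_l, pts_r):
    """Lift matched 2D keypoints (2xN arrays, one per view) into Nx3 3D points."""
    P_l = K_l @ np.hstack([np.eye(3), np.zeros((3, 1))])  # left camera at origin
    P_r = K_r @ np.hstack([R, T])
    pts4d = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)  # homogeneous 4xN
    return (pts4d[:3] / pts4d[3]).T
```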

Created a basic GUI for system calibration and motion capture recording.

Enabled multi-device recording.

Alpha
Finished
Phase 5: GPU Computing and Enhanced Performance

Implemented the infrastructure and code required to run the AI model efficiently on Windows GPUs. It now achieves 23 frames per second, delivering near-real-time output on a standard PC (NVIDIA GTX 1060).
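
The runtime behind this is not specified; one common route to Windows-GPU inference, sketched below with a hypothetical model file and input shape, is ONNX Runtime with a GPU execution provider:

```python
# Assumption for illustration: ONNX Runtime with a GPU execution provider.
# Model file and input shape are hypothetical; "DmlExecutionProvider" would
# be the DirectML alternative on Windows.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "pose_model.onnx",  # hypothetical exported model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"])

frame = np.random.rand(1, 3, 256, 256).astype(np.float32)  # stand-in frame tensor
input_name = session.get_inputs()[0].name
keypoints = session.run(None, {input_name: frame})[0]
print(keypoints.shape)
```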

Made substantial progress on the essential code for the 3D editor.

Alpha
Finished
Phase 6: Hand Estimation, Face Estimation, Networking Backend, and Multi-View Multi-Screen 3D Viewport

Expanded the pose model to include an additional 42 keypoints, with 21 keypoints dedicated to each hand.

Established the foundation for a gesture recognition system.

Incorporated the face mesh AI model into the system, although it is currently inactive due to the additional processing requirements for face mesh data.

Created a networking backend to enable smooth connections between network devices and AImation.

Developed a fully operational multi-view, multi-screen 3D viewport that enables users to observe the animation from different angles.

Alpha
Finished
Phase 7: Mobile Application Development

Implemented the protocol required for remote control of AImation Studio.

Began developing a mobile application for iOS and Android platforms, focusing on AImation control and image streaming.

Alpha
Finished
Phase 8: Refining Animation Output & Editor Experience

Implemented advanced post-processing tools to achieve smooth and precise animations.

Developed an FBX export tool for easy transfer of animations to 3D software platforms.

Enhanced the 3D editor and user interface for improved usability.

Alpha
In progress
Phase 9: Advanced Hand Detection and Motion Capture Filters

Enhanced hand detection capabilities.

Initiate the development of additional calibration methods that let mobile phones estimate depth from a single sensor, enhancing the system's depth perception.

Begin implementing smoothing splines and Butterworth filters to smooth and polish motion data.
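
A minimal sketch of the two filter families named above, applied to a single synthetic keypoint channel (the cutoff frequency and smoothing factor are illustrative choices):

```python
# Both filter families applied to a stand-in keypoint channel over time.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.signal import butter, filtfilt

fps = 23.0                                   # capture rate quoted in Phase 5
t = np.arange(0, 4, 1 / fps)
noisy = np.sin(t) + 0.05 * np.random.randn(t.size)  # synthetic motion channel

# Zero-phase Butterworth low-pass: filtfilt runs the filter forward and
# backward, so the smoothed curve stays aligned with the original frames.
b, a = butter(N=4, Wn=6.0, fs=fps, btype="low")     # 6 Hz cutoff (illustrative)
smooth_butter = filtfilt(b, a, noisy)

# Smoothing spline: the parameter s trades fidelity against smoothness.
smooth_spline = UnivariateSpline(t, noisy, s=0.05)(t)
```

Zero-phase filtering matters for this use case: a one-directional filter would lag the smoothed curve behind the recorded motion on the animation timeline.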

Alpha
In progress
Phase 10: Testing and Public Beta Release

Conduct extensive testing to ensure stability and performance.

Develop detailed tutorials and guides to assist users in navigating and utilizing every aspect of our application.

Release the application to the public as a beta version.

Alpha
In progress
Phase 11: Integration with Game Engines, Custom Streaming, and More Export Formats

Integrate AImation Studio with popular animation tools such as Blender and Maya, as well as game engines like Unity and Unreal, and allow users to stream their animations to custom clients.


Implement industry-standard exporters for motion formats.

Alpha
In progress
Phase 12: Additional Animation Tooling & Face Capture

Implement a graph-based animation retargeting system.

Implement an advanced inverse kinematics solver with constraints.

Integrate face mesh functionality into the estimation pipeline.

Implement dedicated face recording window with additional tooling for face recording sessions.


Alpha
In progress
Phase 13: Additional Device Support

Expand our device support by incorporating action cameras, other high-quality cameras, and AI cameras.

At present, we already support OpenCV-compatible depth cameras and their corresponding APIs.

Alpha
In the future
Phase 14: Gesture Recognition System

Integrate a modular gesture recognition AI model.

Enhance animation output with recognized gestures.

Allow user customization of gesture sets.

Alpha
In the future
Phase 15: Programmable Actions

Enable programmable actions based on recognized gestures.

Implement a user-friendly interface for action creation.

Alpha
In the future
Phase 16: API for Game Integration

Develop an efficient and robust API for VR/AR games to enable seamless real-time avatar control.

Release AImation as a high-performance API, empowering game developers to leverage its capabilities.

Moreover, integrate AImation with AR goggles to enhance real-time business meetings conducted over the web.

Alpha
In the future
Phase 17: Enable Mobile GPU Computation

Allow our models, which are compact enough, to run on mobile devices.


Implement the extraction of depth data from depth sensors on iOS.


Implement depth estimation using the multi-camera API on Android, given that most new phones have a multi-camera setup. Additionally, research further techniques for depth extraction on smartphones.


The mobile team handles these tasks, which run in parallel with AImation Studio development.

Alpha
In the future
Future

Enhance the system by introducing the capability to substitute pose estimation AI models with object feature extractor AI models. This new functionality will enable the recording and reconstruction of objects within a 3D space.


Research and develop additional tooling and AI models with a strong focus on single device depth perception and multi-device depth perception.


Explore potential integrations into fitness-related markets, with the goal of leveraging the technology for applications such as personalized workout tracking, posture correction, and gesture-based exercise guidance.


Investigate potential integrations into DIY security systems, aiming to enhance surveillance capabilities, object recognition, and anomaly detection to provide users with a more comprehensive and intelligent home security solution.


Explore opportunities for integration into the healthcare industry, including remote patient monitoring, fall detection, and gesture-based control for medical devices.


Explore possibilities for integrating the technology into augmented reality (AR) and virtual reality (VR) applications, with the aim of enhancing immersive experiences and interactions within these environments.


Continuously conduct research and development efforts to expand the capabilities and explore new potential applications of the technology. Collaborate with industry partners and experts to identify market opportunities and optimize integrations.