Hi! I'm Balamurugan Thambiraja, presently pursuing a PhD in the Neural Capture and Synthesis group of Prof. Justus Thies at the Max Planck Institute for Intelligent Systems, Germany. During my Ph.D., I am collaborating with Darren Cosker and Sadegh Aliakbarian from Mesh Labs, Microsoft, Cambridge.
In my Ph.D., I focus on human motion and its associated dynamics. Recently, I have been exploring the potential of diffusion models and large language models for motion synthesis and editing.
In general, I'm interested in the temporal dimension, particularly in how things change and evolve over time. I aim to improve video synthesis and our comprehension of video information.
Before the PhD, I did a Master's in Informatics at TUM, Germany, where I worked on human modelling and sign language synthesis in the Visual Computing and Artificial Intelligence group of Prof. Matthias Niessner. Before that, I did my Bachelor's in Electrical and Electronics Engineering at Kumaraguru College of Technology, Coimbatore, India.
3DiFACE is a novel diffusion-based method for synthesizing and editing holistic 3D facial animation from an audio sequence: given an audio clip, it can synthesize a diverse set of facial animations and seamlessly edit them.
View Project

Imitator is a novel method for personalized speech-driven 3D facial animation. Given an audio sequence and a personalized style embedding as input, we generate person-specific motion sequences with accurate lip closures for bilabial consonants ('m', 'b', 'p'). The style embedding of a subject can be computed from a short reference video (e.g., 5 s).
View Project

We present a novel method to synthesize sign pose sequences from input text using a transformer-based approach. We achieved state-of-the-art results on the RWTH-PHOENIX-2014T benchmark by utilizing relative positional embeddings and a relative patch discriminator.
Thesis (pdf) Presentation (pptx)

Worked on an online real-time virtual try-on system. Designed and developed a novel FLOW-based virtual try-on method.
Developed image processing and computer vision algorithms in CUDA for neutron imaging.
Worked on real-time head pose and eye gaze estimation for a driver awareness monitoring system. Contributed to the development of an eye gaze tracking solution that runs in real time on edge-computing devices.
At Wirecard, I worked on the prepaid card application platforms NARADA and CORECARD, where I mainly automated manual tasks using Python, VB Scripts, RUNDECK, and shell scripts.