
Title: Digitizing Humans into VR with a Glimpse into Deep Learning Applications


The age of social media and immersive technologies has created a growing need for processing detailed visual representations of ourselves. With recent advancements in graphics, we can now generate highly realistic digital characters for games, movies, and virtual reality. However, creating compelling digital content still requires a complex and largely manual workflow. While cutting-edge computer vision algorithms can detect and recognize humans reliably, obtaining functional digital models and their animations automatically still remains beyond reach. Such models would not only be visually pleasing but would also bring semantic structure into the captured data, enabling new possibilities such as intuitive data manipulation and machine perception. With the democratization of 3D sensors, many difficult vision problems can be turned into geometric ones, for which effective data-driven solutions exist. My research aims at pushing the boundaries of data-driven digitization of humans and developing frameworks that are accessible to anyone. Such a system should be fully unobtrusive and operate in unconstrained environments. With these goals in mind, I will showcase several highlights of our current research efforts in dynamic shape reconstruction, human body scanning, facial capture, and the digitization of human hair. By the end of this decade, our homes will be equipped with 3D sensors that digitally monitor our actions, habits, and health. These advances will help machines understand our appearances and movements, revolutionizing the way we interact with computers and enabling new forms of live communication through compelling virtual avatars.


Hao Li joined the University of Southern California in 2013 as an assistant professor of Computer Science. Before his faculty appointment he was a research lead at Industrial Light & Magic/Lucasfilm, where he developed next-generation real-time performance capture technologies for virtual production and visual effects. Prior to joining the force, Hao spent a year as a postdoctoral researcher at Columbia and Princeton Universities. His research spans geometry processing, 3D reconstruction, performance capture, and human hair digitization. While primarily developed to improve film production, his work on markerless dynamic shape reconstruction has also impacted the fields of human shape analysis and biomedicine. His algorithms are widely deployed in industry, ranging from leading visual effects studios to manufacturers of state-of-the-art radiation therapy systems. He was named one of MIT Technology Review's top 35 innovators under 35 in 2013 and one of CSQ's NextGen 10: Innovators under 40 in 2014. He also received the Google Faculty Award in 2015, the SNF Fellowship for Prospective Researchers in 2011, and the Best Paper Award at SCA 2009. He obtained his PhD from ETH Zurich in 2010 and received his MSc degree in Computer Science in 2006 from the University of Karlsruhe (TH). He was a visiting professor at Weta Digital in 2014 and a visiting researcher at EPFL in 2010, Industrial Light & Magic (Lucasfilm) in 2009, Stanford University in 2008, the National University of Singapore in 2006, and ENSIMAG in 2003.

digitizing_humans_into_vr_with_a_glimpse_into_deep_learning_applications.txt · Last modified: 2016/09/01 19:15 (external edit)