Hybrid Parallelism for Visualization
Hank Childs (University of Oregon and Lawrence Berkeley National Laboratory)
Many of today’s parallel visualization programs are designed for distributed-memory parallelism, but not for the shared-memory parallelism available on GPUs or multi-core CPUs. However, supercomputer architectures increasingly provide more and more cores per node, whether through GPUs or through CPUs with higher core counts. To make the best use of such hardware, we must evaluate the benefits of hybrid parallelism, which blends distributed- and shared-memory approaches, for visualization's data-intensive workloads. In this talk, Hank explores the fundamental challenges and opportunities of hybrid parallelism for visualization, and discusses recent results that measure its benefit.
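The structure of hybrid parallelism can be sketched as two nested levels of concurrency. The following toy (not from the talk) simulates the distributed-memory level with a simple loop over "ranks" (in practice these would be MPI processes, e.g. via mpi4py), while each rank uses shared-memory threads to process its block of the dataset; `process_cell` is a hypothetical stand-in for per-cell visualization work.

```python
# Hypothetical sketch of hybrid parallelism for a visualization task.
# Distributed-memory level: each rank owns one block of the dataset.
# Shared-memory level: each rank's threads cooperate on its block.
from concurrent.futures import ThreadPoolExecutor

def process_cell(value):
    """Stand-in for per-cell work (e.g. an isocontour test)."""
    return value * 2

def rank_work(block, num_threads=4):
    """Shared-memory phase: a thread pool processes one rank's block."""
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(process_cell, block))

def hybrid_run(dataset, num_ranks=2):
    """Distributed-memory phase: split the dataset across ranks,
    run each rank's shared-memory phase, then gather the results."""
    size = len(dataset) // num_ranks
    blocks = [dataset[i * size:(i + 1) * size] for i in range(num_ranks)]
    results = [rank_work(b) for b in blocks]  # each "rank" runs its block
    return [x for block in results for x in block]  # gather

print(hybrid_run([1, 2, 3, 4], num_ranks=2))  # → [2, 4, 6, 8]
```

The point of the two-level design is that only the outer level pays communication and data-duplication costs; the inner level shares one copy of the block among all threads on a node, which is where the measured benefits for data-intensive visualization workloads tend to come from.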
Hank Childs is an assistant professor at the University of Oregon and a computer systems engineer at Lawrence Berkeley National Laboratory. His research focuses on scientific visualization, high performance computing, and the intersection of the two. He received the Department of Energy Early Career Award in 2012 to research exploratory visualization use cases on exascale machines. Additionally, Hank is one of the founding members of the team that developed the VisIt visualization and analysis software. He received his Ph.D. from UC Davis in 2006.
Depth, You, and the World
Jamie Shotton (Microsoft Research, Cambridge)
Consumer-level depth cameras such as Kinect have changed the landscape of 3D computer vision. In this talk we will discuss two approaches that both learn to directly infer correspondences between observed depth image pixels and 3D model points. These correspondences can then be used to drive an optimization of a generative model to explain the data. The first approach, the "Vitruvian Manifold", aims to fit an articulated 3D human model to a depth camera image, and extends our original Body Part Recognition algorithm used in Kinect. It applies a per-pixel regression forest to infer direct correspondences between image pixels and points on a human mesh model. This allows an efficient "one-shot" continuous optimization of the model parameters to recover the human pose. The second approach, "Scene Coordinate Regression", addresses the problem of camera pose relocalization. It uses a similar regression forest, but now aims to predict correspondences between observed image pixels and 3D world coordinates in an arbitrary 3D scene. These correspondences are again used to drive an efficient optimization of the camera pose to a highly accurate result from a single input frame.
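The common pattern in both approaches is: predict correspondences, then fit pose by optimization. As a minimal illustration (not the talk's actual pipeline), the sketch below replaces the regression forest with given point correspondences and the articulated/6-DoF optimization with its simplest analogue: a closed-form least-squares fit of a 2D rotation and translation mapping model points onto observed points.

```python
# Toy sketch of the correspondence-then-optimize idea: given predicted
# correspondences between observed points and model points (which in
# the talk come from a per-pixel regression forest), recover the pose
# as the least-squares rigid transform. 2D closed form for simplicity.
import math

def fit_rigid_2d(model_pts, observed_pts):
    """Least-squares 2D rotation + translation mapping model -> observed."""
    n = len(model_pts)
    mcx = sum(p[0] for p in model_pts) / n
    mcy = sum(p[1] for p in model_pts) / n
    ocx = sum(q[0] for q in observed_pts) / n
    ocy = sum(q[1] for q in observed_pts) / n
    # Optimal rotation angle from centered correspondences.
    s = c = 0.0
    for (px, py), (qx, qy) in zip(model_pts, observed_pts):
        px, py, qx, qy = px - mcx, py - mcy, qx - ocx, qy - ocy
        c += px * qx + py * qy
        s += px * qy - py * qx
    theta = math.atan2(s, c)
    # Translation aligns the rotated model centroid with the observed one.
    tx = ocx - (mcx * math.cos(theta) - mcy * math.sin(theta))
    ty = ocy - (mcx * math.sin(theta) + mcy * math.cos(theta))
    return theta, (tx, ty)

# A model square observed after a 90-degree rotation about the origin:
# the correspondences alone are enough to recover the pose in one shot.
model = [(0, 0), (1, 0), (1, 1), (0, 1)]
observed = [(-y, x) for x, y in model]
theta, t = fit_rigid_2d(model, observed)
print(round(math.degrees(theta)))  # → 90
```

The "one-shot" quality in the talk comes from the same structure: once per-pixel correspondences are available, pose recovery becomes a single continuous optimization rather than a search, which is what makes a highly accurate result from a single input frame feasible.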
Jamie Shotton studied Computer Science at the University of Cambridge, and remained at Cambridge for his PhD in Computer Vision and Visual Object Recognition, graduating in 2007. He was awarded the Toshiba Fellowship and travelled to Japan to continue his research at the Toshiba Corporate Research & Development Center in Kawasaki. In 2008 he returned to the UK and started work at Microsoft Research Cambridge in the Machine Learning & Perception group where he is now a Senior Researcher. His research interests include human pose and shape estimation, object recognition, machine learning, gesture and action recognition, and medical imaging. He has published papers in all the major computer vision conferences and journals. His work on machine learning for body part recognition for Kinect was awarded the Best Paper Award at the IEEE Conference on Computer Vision and Pattern Recognition 2011, and the Royal Academy of Engineering's MacRobert Award 2011. He and the Kinect team received Microsoft's Outstanding Technical Achievement Award in 2012.
Reality-Inspired Constraints for Shape Modeling and Editing
Olga Sorkine-Hornung (ETH Zürich)
Digital shapes can be turned into physical objects using modern manufacturing processes, a step that is easier today than ever thanks to the advancements in 3D printing. Current digital modeling tools, however, often do not produce reality-ready 3D models: the shapes might look great as virtual objects, but be riddled with problems that prevent their direct manufacturing in practice, such as self-intersections, structural instability, imbalance and more. These problems are usually removed through a tedious, iterative post-process involving repeated simulations and manual corrections. In this talk, I will show that incorporating some physics laws directly into the interactive modeling framework can be done inexpensively and is beneficial for geometric modeling: while not being as restrictive and parameter-heavy as a full-blown physical simulation, it allows users to creatively model shapes with improved realism and use them directly in fabrication.
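One of the cheapest such physics laws to check interactively is static balance. The illustrative sketch below (an assumption for this writeup, not the talk's actual formulation) tests whether a 2D shape would stand upright: it is stable only if its center of mass projects inside the support interval spanned by its ground-contact points.

```python
# Illustrative balance check for a 2D shape resting on the ground
# (y == 0). Stable iff the center of mass projects into the x-range
# of the vertices touching the ground. A toy stand-in for the kind of
# reality-inspired constraint a modeling tool could evaluate live.
def polygon_centroid(vertices):
    """Centroid of a simple 2D polygon via the shoelace formula."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

def is_balanced(vertices, eps=1e-9):
    """True if the centroid lies above the support interval."""
    contacts = [x for x, y in vertices if abs(y) < eps]
    cx, _ = polygon_centroid(vertices)
    return min(contacts) <= cx <= max(contacts)

# An upright box is balanced; a strongly sheared one tips over.
print(is_balanced([(0, 0), (1, 0), (1, 2), (0, 2)]))  # → True
print(is_balanced([(0, 0), (1, 0), (4, 2), (3, 2)]))  # → False
```

Because a check like this is a handful of arithmetic operations per edit, it can run inside the modeling loop and steer the user away from unbalanced designs, rather than leaving imbalance to be discovered in a post-process simulation.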
Olga Sorkine-Hornung is an Assistant Professor of Computer Science at ETH Zurich, where she leads the Interactive Geometry Lab at the Institute of Visual Computing. Prior to joining ETH she was an Assistant Professor at the Courant Institute of Mathematical Sciences, New York University (2008-2011). She earned her BSc in Mathematics and Computer Science and PhD in Computer Science from Tel Aviv University (2000, 2006). Following her studies, she received the Alexander von Humboldt Foundation Fellowship and spent two years as a postdoc at the Technical University of Berlin. Olga is interested in theoretical foundations and practical algorithms for digital content creation tasks, such as shape representation and editing, artistic modeling techniques, computer animation and digital image manipulation. She also works on fundamental problems in digital geometry processing, including reconstruction, parameterization, filtering and compression of geometric data. Olga received the EUROGRAPHICS Young Researcher Award (2008), the ACM SIGGRAPH Significant New Researcher Award (2011), the ERC Starting Grant (2012), the ETH Latsis Prize (2012) and the Intel Early Career Faculty Award (2013).