The Virtual Environments Group (formerly the Living Environments Laboratory) is a space where scholars explore the connections among environment, technology, human action, experience, and visualization.
We are a multidisciplinary group of faculty and students who share an affinity for incorporating 3D design, visualization, and virtual reality environments into education, research, and creative expression. Our primary research instrument is a six-sided virtual reality CAVE.
We engage in projects with departments such as Industrial and Systems Engineering, Design Studies, Library and Information Studies, Nursing, Art, and Dance, as well as projects in the Digital Humanities. We are broadening our intellectual reach by expanding the types of scientific and scholarly data we analyze and visualize, and we welcome collaborators from across campus, the state, and the nation to join this endeavor through data acquisition, visualization practices, and 3D data analysis.
The group explores how technology shapes health in everyday living and how that understanding can be used to create technologies that fit better into everyday lives. In Project HealthDesign, the group partners with teams across the country to build computer tools that help people monitor their health in everyday living and share those observations with their clinicians. In the new School of Nursing building, a state-of-the-art, fully instrumented apartment will allow scientists, clinicians, and the public to see and test the technologies of the future that will improve life today.
Assistant Professor Kevin Ponto’s research focuses on advancing the field of virtual reality, ranging from creating novel and natural interfaces for immersive virtual environments to developing methods, techniques, and tools to better understand, evaluate, and develop interactive virtual experiences. The challenges and benefits of this research span many disciplines, continuing Ponto’s passion for working in interdisciplinary environments.
Assistant Professor Karen Schloss investigates how observers make predictions about objects and entities based on their cognitive and emotional responses to perceptual information. She is currently focusing on how people’s associations with colors influence cognitive processing in three broad areas: (1) aesthetic response, (2) judgment and decision making, and (3) interpretation of information visualizations. In doing so, she takes an empirical approach to design, with the goals of understanding how to communicate effectively through visualizations and what determines affective response to perceptual features.
Spaces, Equipment, and Technology
The VE Group’s projects would not be possible without advanced visualization technologies. This includes four display environments located in two spaces. See video demos of our technology in action.
The fully immersive CAVE C6 is a room with four 9’6” x 9’6” walls, a ceiling, and a floor — all of which are projection screens, except for the floor, which is a clear plexiglass surface. Behind each projection screen are two projectors, which form 3D rear-projected images on each screen. The system also has a 5.1 surround sound audio system and ceiling microphones for two-way communication with the development lab. The C6 uses InterSense head and wand tracking for fully viewpoint-dependent stereoscopic (3D) viewing; ultrasonic tracking sensors embedded in the CAVE corners provide full tracking of the head and wand. An IP-based camera mounted in the rear of the CAVE can record and stream video of activity in the space to other parts of the Discovery Building.
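Viewpoint-dependent rendering on a fixed screen like a CAVE wall requires an asymmetric (off-axis) projection frustum computed from the tracked head position each frame. The sketch below is an illustrative, standalone formulation (following the widely used generalized perspective projection approach); the screen corner coordinates and eye position are made-up example values, not the C6's actual calibration.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def off_axis_frustum(pa, pb, pc, pe, near):
    """Asymmetric frustum extents (left, right, bottom, top) at the near
    plane for a screen with lower-left corner pa, lower-right pb, and
    upper-left pc, viewed from tracked eye position pe."""
    vr = normalize(sub(pb, pa))    # screen-space right axis
    vu = normalize(sub(pc, pa))    # screen-space up axis
    vn = normalize(cross(vr, vu))  # screen normal, toward the viewer
    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = -dot(va, vn)               # perpendicular eye-to-screen distance
    l = dot(vr, va) * near / d
    r = dot(vr, vb) * near / d
    b = dot(vu, va) * near / d
    t = dot(vu, vc) * near / d
    return l, r, b, t

# Example: a ~2.9 m (9'6") square wall, eye centered 1.5 m in front of it.
# With the eye centered, the frustum comes out symmetric; moving the eye
# off-center skews it, which is what keeps the imagery perspective-correct.
print(off_axis_frustum(pa=[-1.45, 0.0, -1.5], pb=[1.45, 0.0, -1.5],
                       pc=[-1.45, 2.9, -1.5], pe=[0.0, 1.45, 0.0],
                       near=0.1))
```

The four extents feed directly into an off-axis projection matrix (e.g. OpenGL's `glFrustum`), recomputed per eye, per wall, per frame from the tracker data.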
The development lab, or dev lab for short, includes three display systems: a single “Powerwall” 3D interactive rear-projected display (9’6” x 9’6”), a 63” 3D HDTV, and a ceiling-mounted front projection display.
DSCVR is an immersive 3D tiled VR display system consisting of 20 consumer-grade 3D televisions arranged in a half cylinder, driven by 10 Alienware PCs with NVIDIA GeForce graphics cards. User tracking is accomplished via a Microsoft Kinect 2 depth camera, and interaction occurs via a tracked PS3 dual analog controller. The total resolution of the display system is 41 megapixels.
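The 41-megapixel figure is consistent with each of the 20 tiles being a standard 1080p panel — an assumption on my part, since the source does not state the per-TV resolution:

```python
# Back-of-envelope check of DSCVR's quoted total resolution.
# Assumption: each consumer-grade 3D TV is a 1920x1080 (1080p) panel.
num_tvs = 20
width, height = 1920, 1080

total_pixels = num_tvs * width * height
print(f"{total_pixels:,} pixels ≈ {total_pixels / 1e6:.0f} megapixels")
```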
Other Visualization Equipment
The VE Group possesses a Faro Focus S120 LiDAR scanner to enable rapid acquisition of real-world environments. The group also uses two Oculus Rift DK2 and two Oculus Rift CV1 headsets, six Microsoft Kinect RGBD cameras, a Leap Motion controller, an HTC Vive, and a 63” 3D HD television. A common middleware framework developed in-house allows immersive visualization of a variety of data types across all three laboratories, independent of the underlying VR software.
|Data Set|Description|
|---|---|
|Visible Human Project|Complete 3D representations of the male and female human bodies, from the U.S. National Library of Medicine|
|Taliesin Estate|3D scans and quadcopter videos of the estate of architect Frank Lloyd Wright, to be turned into a 3D point cloud|
|Underwater Shipwreck|Video footage, courtesy of the Woods Hole Oceanographic Institution, used to study underwater point cloud reconstruction|
|Two Galaxies Colliding|Time-varying point cloud data set of 2,500 time-steps spanning 2–3 minutes, from the University of Wisconsin–Madison Astronomy Department|
|Sequential Textual Information|Various data sets of textual information over time, visualized using the WordCAKE tool|
|DC Smith Greenhouse|3D scan of the DC Smith Greenhouse on the UW–Madison campus; produced by the first Model This! contest winners|
|Little Norway Stave Church|3D scan of the famous church from the 1893 World’s Fair in Chicago; once located in Mt. Horeb, WI, now being shipped back to Norway|
|Data Type|Supported Formats|
|---|---|
|Point Cloud|PCD, E57, PTX, PTZ, XYZ|
|Molecular File|Protein Data Bank (PDB) format and various other molecule coordinate/structure files|
|Volumetric Image|DICOM, TIFF stacks, RAW, Interfile, VEVO, and others|
|Graphics / Game Engines|Unity 3D, Ogre 3D, and OpenSceneGraph|
|ParaView-supported formats|VTK, H5Part HDF5 particle files, and more|
|Autodesk Formats|FBX, Revit, 3D Studio Max, Maya, etc.|