RESEARCH

Projects and labs within Arts and Technology

Creative Automata Lab

The Creative Automata Lab exists to explore how abstract, foundational computing artifacts are represented. These representations include functions, equations, dynamic models, and formal automata, as well as the control and data involved in them.
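For readers unfamiliar with these artifacts, the short Python sketch below shows one conventional, purely textual way of encoding a formal automaton; it is a generic illustration under assumed conventions, not the lab's own representation, and the example machine (tracking the parity of 1s in a bit string) is hypothetical.

    # Minimal sketch: a deterministic finite automaton (DFA) encoded as plain data.
    # Illustrative only; not code from the Creative Automata Lab.
    dfa = {
        "start": "even",                  # initial state
        "accept": {"even"},               # accepting states
        "delta": {                        # transition table: (state, symbol) -> state
            ("even", "0"): "even",
            ("even", "1"): "odd",
            ("odd", "0"): "odd",
            ("odd", "1"): "even",
        },
    }

    def accepts(machine, word):
        """Return True if the automaton accepts the input string."""
        state = machine["start"]
        for symbol in word:
            state = machine["delta"][(state, symbol)]
        return state in machine["accept"]

    print(accepts(dfa, "1001"))   # True: an even number of 1s
    print(accepts(dfa, "1011"))   # False: an odd number of 1s

Data-level encodings like this one are the starting point for the visual, physical, or interactive representations the lab studies.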

Researchers and artists work together to merge the scientific with the aesthetic, focusing on human interaction with metaphor and analogy. Research includes historical mechanisms for computing mathematical functions, such as mechanical and electronic devices.

Researchers seek to investigate the next generation of technology using games, cinema, 3D printing, consumer electronics, virtual and augmented reality, and Web-based interaction. They also seek to determine how the humanities will be a part of the process.

Paul Fishwick serves as Director of the Creative Automata Laboratory and holds an ATEC Distinguished University Chair.

ATEC & EMAC ArtSciLab

The UT Dallas ArtSciLab exists to support trans-disciplinary innovation that involves art, scientific research, technology development, and education. Research includes collaboration between artists and scientists who seek to investigate problems of cultural timeliness and societal urgency.

Research to date includes partnerships with Leonardo in San Francisco, Observatoire Leonardo des Arts et Techno-Sciences in Paris, and Cognovo’s Consortium on Cognitive Innovation at Plymouth University in the U.K. (funded by the European Union).

Students and faculty are encouraged to participate in ArtSciLab projects, which focus on innovating scholarly and professional publication and presentation.

Projects include:

  • “STEM to STEAM,” an effort to integrate arts, humanities, and design into STEM education and research strategies, such as the STEAM Summer Camp and the UT Dallas STEAM working group;
  • the NSF-funded SEAD study, which facilitates new collaborations between artists, scientists, engineers, and humanities scholars;
  • multi-modal data representation, a collaborative project of multi-sensory research with the ATEC Sound Design program; and
  • the DataRemix Engine, which develops innovative systems to analyze and display scientific data for research and art.

Roger Malina serves as Director of the UT Dallas ArtSciLab, holds an Arts and Technology Distinguished Chair, and is the Associate Director of ATEC.

antÉ – the Institute for Research in Anticipatory Systems

antÉ, which is open to all UT Dallas researchers, exists to prepare scientists, particularly those who seek to quantify anticipatory capabilities in high-performance physical and mental activities, aging, and illness.

Research to date has included UT Dallas faculty in computer science, electrical engineering, brain science, and business who have studied creativity, human performance, aging, and cognitive plasticity. antÉ faculty have also consulted with other researchers.

antÉ is currently establishing an international study group of researchers from seven countries.

antÉ Director Dr. Mihai Nadin is a founder of research in anticipatory systems. He has contributed to a variety of projects, including the German automaker Audi’s development of the adaptive car and studies in risk mitigation and neural damage as they relate to anticipation and adaptability. In 2012, he received international recognition with the Distinguished Fellowship at Germany’s Hanse Institute for Advanced Study.

Mihai Nadin serves as Director of the Institute for Research in Anticipatory Systems and is an Ashbel Smith Professor.

Sound Design Research Initiative

The Arts and Technology Sound Design Research Initiative is developing two deeply interrelated axes of research, production, and teaching, focusing on (1) the rendition of immersive auditory environments and (2) sonification.

Rendition of immersive auditory environments

Creating realistic presentations of sound objects in a 3D space (a minimal illustrative sketch follows this list):

  • Study of extended auditory perception
  • Analysis of the tactile and visual perceptions involved in perceiving auditory space
  • Recording and reproduction of complex auditory scenes
  • Production of the auditory components for all the audiovisual productions developed in the Arts and Technology program
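As a hypothetical illustration of one elementary cue used to place a sound object between two loudspeakers, the Python/NumPy sketch below applies constant-power stereo panning to a test tone. This is a generic textbook technique, not the initiative's own rendering pipeline, which targets full 3D auditory environments.

    import numpy as np

    # Illustrative sketch only: constant-power stereo panning of a mono tone.
    # Real immersive rendering adds many more cues (HRTFs, reverberation, etc.).
    fs = 44100                                  # sample rate in Hz
    t = np.arange(fs) / fs                      # one second of samples
    mono = 0.5 * np.sin(2 * np.pi * 440 * t)    # 440 Hz test tone

    def pan(signal, position):
        """Pan a mono signal; position ranges from -1.0 (hard left) to +1.0 (hard right)."""
        angle = (position + 1.0) * np.pi / 4.0  # map position to 0..pi/2
        left = np.cos(angle) * signal           # constant-power gain laws
        right = np.sin(angle) * signal
        return np.stack([left, right], axis=1)  # (samples, 2) stereo buffer

    stereo = pan(mono, 0.5)                     # place the tone halfway to the right

Constant-power gains keep the perceived loudness steady as the source moves; full 3D rendition replaces this with binaural (HRTF-based) or multichannel techniques.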

Major Sponsored Projects

The rendition of immersive auditory environments includes audio engineering topics as well as aesthetic and design principles related to the design, creation, recording, processing, and integration of the various auditory components (music, dialog, sound effects) in films, animations, 2D and 3D video games, artistic installations, and virtual environments.

This highly interdisciplinary axis of research engages collaborations with industry and with multiple scientific disciplines: behavioral and brain sciences, physics, real-time digital signal processing, and acoustics.

Sonification

Creating multimodal representations of large data sets (a minimal sonification sketch follows this list):

  • Computational analysis of large data sets, with an emphasis on sonification supported by visualization
  • Generation of multimodal representations of these data sets with an audio-centric integration of sound and visual technology
  • Controlled projection and display of the representations, with precise 3D location of sound stimuli tied to animated data visualizations
  • Controlled presentation of multimodal stimuli (anechoic chamber)
  • Collection of data generated by recording human multimodal reactions
  • Study of the influence of sound on cognitive perception in multimodal representations
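As a hypothetical illustration of the general idea, the Python/NumPy sketch below performs a minimal parameter-mapping sonification, turning a small, made-up data series into a sequence of tones whose pitch tracks the data values. It is not the initiative's own sonification system.

    import numpy as np

    # Toy parameter-mapping sonification: each data value becomes a short tone
    # whose pitch rises with the value. Illustrative sketch only.
    fs = 44100                                      # sample rate in Hz
    data = np.array([2.0, 3.5, 1.0, 4.2, 2.8])      # hypothetical data series

    def sonify(values, low_hz=220.0, high_hz=880.0, dur=0.25):
        """Map each value linearly onto a frequency and render one tone per value."""
        norm = (values - values.min()) / (values.max() - values.min())  # assumes non-constant data
        freqs = low_hz + norm * (high_hz - low_hz)
        t = np.arange(int(fs * dur)) / fs
        tones = [0.3 * np.sin(2 * np.pi * f * t) for f in freqs]
        return np.concatenate(tones)                # one audio buffer for the whole series

    audio = sonify(data)   # pass to any audio output, or write to a WAV file

A real system would layer timbre, spatial location, and synchronized visuals on top of such a mapping, which is where the multimodal and audio-centric goals above come in.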

Sonification itself is a robust area of research internationally and is a large component of our graduate research agenda. To integrate with the more far-reaching goals of the overall ATEC/EMAC program, however, both spurs of the sound design research agenda are designed to create collaborative results within ATEC/EMAC (with animators, game theorists/designers, social media designers, etc.) as well as across disciplines, facilitating research in the geosciences, mind/brain research, user interface design in computer science, the physics of weather and dust storms, and the testing of new acoustic and signal processing research in both mechanical and electrical engineering.

Narrative Systems Research Lab

The Narrative Systems Research Lab pursues models of understanding, structural research, and the creation of new work in the fields of narrative and interactive media.

Through independent research, collaborative projects, and serious game development, researchers in the lab forge connections between narrative, new media, digital games, medicine and healing, the fine arts, engineering, literature, and the humanities.

Members of the lab pursue research in a variety of areas, including:

  • perception, phenomenology, and narrative immersion;
  • character complexity in intelligent systems;
  • applying traditional literary theories such as post-structuralism to interactive narratives;
  • applying principles of computation and artificial agent design to narrative systems;
  • the effect of digital feedback on narrative immersion;
  • engagement with informal learning environments, such as museums, through interactive storytelling;
  • healing and the arts.

Monica Evans serves as Director of the Narrative Systems Research Lab.

Future Immersive Virtual Environments (FIVE) Lab

The Future Immersive Virtual Environments (FIVE) Lab performs research on state-of-the-art virtual reality (VR) systems and 3D user interfaces (3DUIs). We investigate the effects of system fidelity through user studies focused on performance, experience, learning, and training. 

Our goal is to better understand today’s technologies to push the boundaries of tomorrow’s immersive virtual environments.

Ryan P. McMahan serves as Director of the Future Immersive Virtual Environments (FIVE) Lab.

SeRViCE Lab

The Sensing, Robotics, Vision, Control and Estimation Lab was founded in January 2010. We are interested in topics of control and estimation with applications in robotics, autonomous vehicles, and sensor management. We have a particular focus and expertise in vision-based control and estimation and in nonlinear control.

Members of the lab have backgrounds in Electrical Engineering, Computer Engineering, Mechanical Engineering, and Computer Science. This breadth allows us to address all stages of research, advancing from theoretical and mathematical development through advanced simulation to implementation on robot platforms.

Dr. Nicholas R. Gans serves as Director of the Sensing, Robotics, Vision, Control and Estimation Lab.

Visual Computing Laboratory

VESTIGE (Visual Engineering for Specification, Transformation, Integration, Generation, and Evolution of digital information) denotes one of the group's major research activities. The primary aim of this project framework is to develop visual programming and visual language technology and to apply such technology to multimedia/Web authoring and presentation, software engineering, digital document interchange, data mining, and parallel/distributed programming.

The group's latest research has been in a spatial graph grammar formalism and its applications. Example applications include XML generation and transformation and multimedia document design. Visual data mining is also one of the current research directions.
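For readers unfamiliar with graph grammars, the toy Python sketch below applies a single node-replacement production to a graph stored as an adjacency list. It illustrates only the general idea of graph rewriting under simplified, assumed conventions; it is not the lab's spatial graph grammar formalism, which also reasons about spatial relations among nodes, and the production rule and node names are hypothetical.

    # Toy node-replacement rewriting step, for illustration only.
    graph = {                         # directed graph as an adjacency list
        "Doc": {"Author"},
        "Author": set(),
    }

    def apply_production(g, target, new_nodes, new_edges):
        """Replace `target` with `new_nodes`, re-attaching its old out-neighbors to the first new node."""
        old_neighbors = g.pop(target)
        for node in new_nodes:
            g.setdefault(node, set())
        for src, dst in new_edges:
            g[src].add(dst)
        g[new_nodes[0]].update(old_neighbors)   # simplistic embedding rule
        return g

    # Hypothetical production: Doc ::= Title -> Body
    apply_production(graph, "Doc", ["Title", "Body"], [("Title", "Body")])
    # graph now contains Title -> Body and Title -> Author; the Doc node is gone.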

Dr. Kang Zhang serves as Director of the Visual Computing Laboratory.

Recent Research News