Pictured: Vignesh Narayanan surrounded by his undergraduate and graduate research assistants.
The statistics are impressive. According to Grand View Research, the global artificial intelligence (AI) market is valued at more than $196 billion. In addition, Forbes found that 83% of companies claim that AI is a top priority in their business plans.
Since 2021, Assistant Professor Vignesh Narayanan has taught in the Department of Computer Science and Engineering and is affiliated with the Artificial Intelligence Institute of the University of South Carolina (AIISC) and the Carolina Autism and Neurodevelopment (CAN) Research Center. He is passionate about the integration of AI and dynamic systems, and its impact on safety and efficiency for consumers. Narayanan’s research centers on the interaction between humans and dynamic systems, with the goal of preventing such systems from behaving unsafely as they change over time.
Narayanan came to the College of Engineering and Computing (CEC) from Washington University in St. Louis, where he conducted postdoctoral research in applied mathematics, dynamical systems and computational neuroscience. He says his current faculty position is a good fit because of his interdisciplinary background.
“USC was looking for candidates with a unique background in dynamic modeling and AI with applications to neuroscience. It was a perfect fit for my background,” Narayanan says.
Much of Narayanan’s research involves dynamical systems theory, a field of applied mathematics in which researchers study how systems change over time.
“This can refer to systems as diverse as sensor and computing systems, robotic systems, neural systems, chemical systems or batteries, but they can all be studied as dynamical systems because there is change in their behavior,” Narayanan says. “Researchers of dynamical systems and control try to understand this behavior to determine if the systems can be steered in a desired fashion, and to detect and compensate for deviations from safe behavior.”
According to Narayanan, the overarching importance of his collection of research boils down to enhancing the safety and efficiency of dynamic systems using AI. Traditionally, these systems were less interactive, but in the last few decades, AI and dynamic systems have become more integrated, as seen in the advent of cameras and autopilots in vehicles and chatbots used in health settings.
“You have AI models that try to make sense of data and a human user interacting with all these components. We want to make sure the AI systems are facilitating efficiency and safety,” Narayanan explains. “AI is becoming integrated with every system we use. These systems change over time, and we want to understand how they are evolving so we prevent deviation from safe behavior.”
Since arriving on campus, Narayanan has participated in several collaborative and independent research activities. These include a study on the dynamics of information dissemination and the formation of opinions. He and other researchers are analyzing how individuals interact with external sources, such as media and social media, to construct a simulator (a digital twin) designed to capture how information propagates in dynamic environments over time.
In conjunction with departments from the CEC and the USC School of Medicine, Narayanan is studying chatbots in hopes of increasing the safety and reliability of virtual health assistants (VHAs) in mental health settings. While current VHAs can perform simple tasks in medical settings, such as scheduling appointments and setting reminders, this project aims to produce safety-constrained VHAs that adhere to medical guidelines and protocols while providing understandable guidance to users.
“The difference in the number of patients and the practitioners available to help them is huge,” Narayanan says. “To ease the impact of this shortage, we are trying to develop a chatbot that can interact with patients on a greater level and help the practitioners and patients.”
Narayanan is also studying autopilot systems found in drones and ground vehicles to improve the collaboration between the AI software in a device and its user, as well as between multiple such systems operating in tandem, improving both safety and performance. This field of research, called collaborative autonomy, ensures that AI and human systems collaborate instead of competing. For example, when a human driver is operating a vehicle with an automatic steering system, the driver continues to monitor the autopilot to ensure it is not doing something harmful and intervenes when necessary. This collaboration is not yet seamless.
Furthermore, when multiple drones or ground vehicles work as a team to execute a shared task, the communication among their autopilot systems should be private and not, for example, easily accessible to unauthorized entities. Likewise, most AI systems rely on the data provided to them for learning. If they are fed false or inaccurate data, it is crucial for the AI system to identify and flag such adversarial information, refraining from incorporating it into the learning process. Failure to do so could result in potential harm to the physical system or to the human user.
“These are all challenges when designing AI systems so that they can collaborate to complete a task efficiently,” Narayanan says. “We want to understand how that collaboration between humans and AI can be seamlessly designed.”