The mDOT Center

Transforming health and wellness via temporally-precise mHealth interventions
mDOT@MD2K.org
901.678.1526
 

CP 11: Capturing Autobiographical Memory Formation in People Moving Through Real-world Spaces Using Synchronized Wearables and Intracranial Recordings of EEG



Collaborating Investigator:

Luis Garcia, The University of Utah

Funding Status: 

1R61MH135109

NIH/NIMH

03/15/24 – 02/28/27

Associated with:

TR&D3

*Detecting Context Shifts in the Human Experience Using Multimodal Foundation Models
Authors:

Iris Nguyen, Liying Han, Burke Dambly, Alireza Kazemi, Marina Kogan, Cory Inman, Mani Srivastava, Luis Garcia

Publication Venue:

Proceedings of the 23rd ACM Conference on Embedded Networked Sensor Systems

Publication Date:

May 6, 2025

Abstract:
Detecting context shifts in human experience is critical for applications in cognitive modeling, human-AI interaction, and adaptive neurotechnology. However, formalizing and identifying these shifts in real-world settings remains challenging due to annotation inconsistencies, data sparsity, and the multimodal nature of human perception.
 
In this poster, we explore the use of multimodal foundation models for detecting context shifts by leveraging neural, wearable, and environmental sensors. Initial findings from a neuroscience-driven annotation study highlight discrepancies in human-labeled transitions, emphasizing the need for a model-driven approach. Given the limited availability of labeled datasets, we examine: 1) surrogate models trained on synthetic datasets, 2) sensor fusion techniques to align real-world neural and behavioral signals, and 3) the role of foundation models in interpreting multimodal context shifts.
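To illustrate the model-driven approach described above, the sketch below flags a context shift when foundation-model embeddings of consecutive multimodal windows drift apart. This is a minimal sketch, not the authors' pipeline: the encoder, window format, and threshold are placeholder assumptions.

import numpy as np

def detect_context_shifts(windows, embed, threshold=0.35):
    """Flag a context shift whenever consecutive window embeddings drift.

    `embed` is any callable mapping one multimodal window to a 1-D
    feature vector (a stand-in for a foundation-model encoder).
    """
    shifts, prev = [], None
    for t, w in enumerate(windows):
        z = np.asarray(embed(w), dtype=float)
        z /= np.linalg.norm(z)                    # unit-normalize embedding
        if prev is not None and 1.0 - float(prev @ z) > threshold:
            shifts.append(t)                      # boundary before window t
        prev = z
    return shifts

Any encoder can be plugged in, e.g. detect_context_shifts(windows, embed=my_encoder); the cosine-distance threshold would need tuning against annotated transitions.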
*Understanding Factors Behind IoT Privacy—A User’s Perspective on RF Sensors
Authors:

Akash Deep Singh, Brian Wang, Luis Garcia, Xiang Chen, Mani Srivastava

Publication Date:

January 1, 2024

Abstract:

While IoT sensors in physical spaces have provided utility and comfort in our lives, their instrumentation in private and personal spaces has led to growing concerns regarding privacy. The existing notion behind IoT privacy is that sensors whose data can easily be understood and interpreted by humans (such as cameras) are more privacy-invasive than sensors whose data is not human-understandable, such as RF (radio-frequency) sensors. However, given recent advancements in machine learning, we can not only make sensitive inferences from RF data but also translate between modalities. Thus, the existing notions of privacy for IoT sensors need to be revisited. In this paper, our goal is to understand what factors affect the privacy notions of a non-expert user (someone who is not well-versed in privacy concepts). To this end, we conduct an online study of 162 participants from the USA to find out what factors affect a user’s privacy perception regarding an RF-based device or sensor. Our findings show that a user’s perception of privacy depends not only upon the data collected by the sensor but also on the inferences that can be made from that data, familiarity with the device and its form factor, and the control the user has over the device design and its data policies. When the data collected by the sensor is not human-interpretable, it is the inferences that can be made from the data, and not the data itself, that users care about when making informed decisions regarding device privacy.

*PrivacyOracle: Configuring Sensor Privacy Firewalls with Large Language Models in Smart Built Environments
Publication Venue:

2024 IEEE Security and Privacy Workshops (SPW)

Publication Date:

May 1, 2024

Abstract:

Modern smart buildings and environments rely on sensory infrastructure to capture and process information about their inhabitants. However, it remains challenging to ensure that this infrastructure complies with privacy norms, preferences, and regulations; individuals occupying smart environments are often occupied with their tasks, lack awareness of the surrounding sensing mechanisms, and are not technical experts. This problem is only exacerbated by the increasing number of sensors being deployed in these environments, as well as by the services seeking to use their sensory data. As a result, individuals face an unmanageable number of privacy decisions, preventing them from effectively acting as their own “privacy firewall” for filtering and managing the multitude of personal information flows. These decisions often require qualitative reasoning over privacy regulations and an understanding of privacy-sensitive contexts.
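The general pattern of delegating such qualitative privacy decisions to a language model can be sketched as follows. This is an illustrative assumption, not PrivacyOracle's actual interface: the prompt wording, JSON policy format, and `llm` calling convention are all hypothetical.

import json

POLICY_PROMPT = """You are a privacy firewall for a smart built environment.
Sensor: {sensor}
Requested data flow: {flow}
Occupant preference: {preference}
Applicable regulation: {rule}
Respond with JSON: {{"decision": "allow" or "deny", "reason": "<short>"}}"""

def decide_flow(llm, sensor, flow, preference, rule):
    """Ask an LLM to reason qualitatively over a privacy rule.

    `llm` is any callable from prompt string to response text; this
    interface is an assumption for illustration only.
    """
    reply = llm(POLICY_PROMPT.format(sensor=sensor, flow=flow,
                                     preference=preference, rule=rule))
    return json.loads(reply)   # e.g. {"decision": "deny", "reason": "..."}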

*Systems and Methods for Using Ultrawideband Audio Sensing Systems
Authors:

Ziqi Wang, Mani Srivastava, Akash Deep Singh, Luis Garcia, Zhe Chen, Jun Luo

Publication Venue:

United States Patent Application 20230288549

Publication Date:

September 14, 2023

Abstract:

Systems and methods for simultaneously recovering and separating sounds from multiple sources using Impulse Radio Ultra-Wideband (IR-UWB) signals are described. In one embodiment, a device can be configured to generate an audio signal based on audio source ranging using ultrawideband signals. In an embodiment, the device includes transmitter circuitry, receiver circuitry, memory, and a processor. The processor is configured to generate a radio signal comprising an ultra-wideband Gaussian pulse modulated onto a radio-frequency carrier. The processor is further configured to transmit the radio signal using the transmitter circuitry, receive one or more backscattered signals at the receiver circuitry, demodulate the one or more backscattered signals to generate one or more baseband signals, and generate a set of data frames based on the one or more baseband signals.
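The transmitted waveform described above, a Gaussian pulse modulated onto an RF carrier, can be sketched numerically as below. The sample rate, carrier frequency, and pulse width are illustrative values, not parameters taken from the patent.

import numpy as np

def ir_uwb_pulse(fs=20e9, fc=4e9, sigma=0.25e-9, duration=4e-9):
    """Gaussian pulse modulated onto an RF carrier, as in IR-UWB ranging.

    fs: sample rate (Hz), fc: carrier frequency (Hz),
    sigma: Gaussian pulse width (s). All values are illustrative.
    """
    t = np.arange(-duration / 2, duration / 2, 1.0 / fs)
    envelope = np.exp(-t**2 / (2 * sigma**2))        # Gaussian envelope
    return t, envelope * np.cos(2 * np.pi * fc * t)  # modulated pulse

t, pulse = ir_uwb_pulse()   # 80 samples at 20 GS/s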

*TinyOdom: Hardware-Aware Efficient Neural Inertial Navigation
Authors:

Swapnil Sayan Saha, Sandeep Singh Sandha, Luis Garcia, Mani Srivastava

Publication Venue:

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Publication Date:

July 7, 2022

Abstract:

Deep inertial sequence learning has shown promising odometric resolution over model-based approaches for trajectory estimation in GPS-denied environments. However, existing neural inertial dead-reckoning frameworks are not suitable for real-time deployment on ultra-resource-constrained (URC) devices due to substantial memory, power, and compute bounds. Current deep inertial odometry techniques also suffer from gravity pollution, high-frequency inertial disturbances, varying sensor orientation, heading rate singularity, and failure in altitude estimation. In this paper, we introduce TinyOdom, a framework for training and deploying neural inertial models on URC hardware. TinyOdom exploits hardware- and quantization-aware Bayesian neural architecture search (NAS) with a temporal convolutional network (TCN) backbone to train lightweight models targeted towards URC devices.
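A TCN-style inertial regressor of the kind named above can be sketched as follows. This is a toy model, not the released TinyOdom code: the channel widths, depth, and output head are placeholders rather than the NAS-selected architecture.

import torch
import torch.nn as nn

class TinyTCN(nn.Module):
    """Toy temporal convolutional regressor: IMU window -> 2-D velocity."""
    def __init__(self, in_ch=6, hidden=32, levels=3, kernel=5):
        super().__init__()
        layers, ch = [], in_ch
        for i in range(levels):
            d = 2 ** i                              # dilation doubles per level
            layers += [nn.Conv1d(ch, hidden, kernel,
                                 padding=(kernel - 1) * d // 2, dilation=d),
                       nn.ReLU()]
            ch = hidden
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, 2)            # vx, vy over the window

    def forward(self, x):                           # x: (batch, 6, T)
        return self.head(self.tcn(x).mean(dim=-1))  # global average pool

# v = TinyTCN()(torch.randn(1, 6, 200))  # one 200-sample 6-axis IMU window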

*PhysioGAN: Training High Fidelity Generative Model for Physiological Sensor Readings
Authors:

Moustafa Alzantot, Luis Garcia, Mani Srivastava

Publication Date:

April 25, 2022

Abstract:

Generative models such as the variational autoencoder (VAE) and generative adversarial networks (GANs) have proven to be incredibly powerful for generating synthetic data that preserves the statistical properties and utility of real-world datasets, especially in the context of images and natural language text. Nevertheless, until now there has been no successful demonstration of how to apply either method to generate useful physiological sensory data; state-of-the-art techniques in this context have achieved only limited success. We present PHYSIOGAN, a generative model that produces high-fidelity synthetic physiological sensor readings. PHYSIOGAN consists of an encoder, a decoder, and a discriminator. We evaluate PHYSIOGAN against state-of-the-art techniques on two real-world datasets: an ECG classification dataset and an activity recognition dataset from motion sensors. We compare PHYSIOGAN to baseline models not only on the accuracy of class-conditional generation but also on the sample diversity and sample novelty of the synthetic datasets. We show that PHYSIOGAN generates samples with higher utility than other generative models: classification models trained only on synthetic data generated by PHYSIOGAN suffer only a 10% and 20% decrease in classification accuracy relative to models trained on real data. Furthermore, we demonstrate the use of PHYSIOGAN for sensor data imputation, producing plausible results.
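The class-conditional generation the abstract refers to can be sketched structurally as below. This is only a skeleton under stated assumptions: the layer sizes and conditioning scheme are hypothetical, and PHYSIOGAN's actual encoder/decoder/discriminator design and losses are not reproduced here.

import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Class-conditional generator skeleton: (noise, label) -> sensor window."""
    def __init__(self, z_dim=64, n_classes=5, seq_len=128, channels=3):
        super().__init__()
        self.emb = nn.Embedding(n_classes, z_dim)    # learned label embedding
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 256), nn.ReLU(),
            nn.Linear(256, seq_len * channels))
        self.seq_len, self.channels = seq_len, channels

    def forward(self, z, y):
        h = torch.cat([z, self.emb(y)], dim=-1)      # condition on class label
        return self.net(h).view(-1, self.channels, self.seq_len)

# g = CondGenerator(); x = g(torch.randn(8, 64), torch.randint(0, 5, (8,)))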

*Aerogel: Lightweight Access Control Framework for WebAssembly-Based Bare-Metal IoT Devices
Publication Venue:

The Sixth ACM/IEEE Symposium on Edge Computing (ACM SEC ’21)

Publication Date:

December 2021

Abstract:

Application latency requirements, privacy, and security concerns have naturally pushed computing onto smartphones and IoT devices in a decentralized manner. In response to these demands, researchers have developed micro-runtimes for WebAssembly (Wasm) on IoT devices that allow applications to be streamed to a runtime able to execute device-independent target binaries. However, the migration to Wasm and the associated security research have neglected the urgent need for access control on bare-metal, memory management unit (MMU)-less IoT devices that sense and actuate upon the physical environment. This paper presents Aerogel, an access control framework that addresses security gaps between bare-metal IoT devices and the Wasm execution environment, covering access control for sensors, actuators, processor energy usage, and memory usage.
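The mediation pattern behind such a framework can be shown in a few lines. This sketch is illustrative only: Aerogel targets bare-metal Wasm runtimes in a systems language, and the app names, capability strings, and stub host functions below are hypothetical.

HOST_FUNCS = {
    "read_sensor":  lambda: 42,        # stub sensor read
    "set_actuator": lambda v: None,    # stub actuator write
}

ACL = {
    "app_sensing": {"read_sensor"},                   # sensing-only app
    "app_control": {"read_sensor", "set_actuator"},   # may also actuate
}

def host_call(app_id, capability, *args):
    """Gate each host function a Wasm app invokes against its ACL."""
    if capability not in ACL.get(app_id, set()):
        raise PermissionError(f"{app_id} denied {capability}")
    return HOST_FUNCS[capability](*args)

# host_call("app_sensing", "set_actuator", 1)  -> raises PermissionError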

TL;DR:

CP11 is developing a smartphone-based recording application called the CAPTURE app that synchronizes invasive neural recordings with continuous audio-visual, accelerometry, GPS, subjective report, autonomic physiology, and wearable eye tracking recordings during real-world behaviors like autobiographical memory encoding.

This project aims to integrate wearable devices—smartphones capturing audio-visual, accelerometry, GPS, physiological, and eye-tracking data—with synchronized intracranial neural recordings to study autobiographical memory (AM) formation in real-world settings. AM, which is uniquely personal and complex, has been challenging to analyze due to the limitations of traditional neuroimaging. This study will investigate how the brain encodes, processes, and retrieves AM by tracking real-world behaviors, offering insights into cognitive and neural mechanisms that are compromised in disorders like Alzheimer’s. By developing the CAPTURE app to record multimodal data synchronized with invasive neural data during daily experiences, the research will establish a foundation for neuromodulation approaches to enhance memory in real-world contexts. The project will leverage the NeuroPace Responsive Neurostimulation System, implanted in over 2,000 epilepsy patients, as a unique opportunity to collect direct neural data associated with AM formation and potentially develop real-world memory restoration tools.

TR&D3 is pushing new methods for efficiently learning robust embedding representations from multimodal sensor data streams from wearables and intracranial recordings.

CP11 challenges TR&D3 to develop neural foundation models to detect spatiotemporal events.

CP11 will receive deep-learning-based neural foundation models that can be used to build analytics pipelines for a variety of downstream tasks involving the detection of spatiotemporal events as people move through real-world spaces, with the goal of assisting memory formation.

This project aims to unlock the potential of combining wearable mobile recording devices, such as smartphones with continuous audio-visual, accelerometry, GPS, subjective report, autonomic physiology, and wearable eye-tracking recordings, with precisely synchronized intracranial neural recordings during real-world behaviors. Autobiographical memory (AM) formation is a critical human behavior that has been difficult to study with traditional neuroimaging methods. Thus, the proposed project aims to develop a smartphone-based recording application (the CAPTURE app; R61 phase) synchronized with wearables and invasive neural recordings during real-world behaviors like autobiographical memory encoding (R33 phase).
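The synchronization step at the heart of the CAPTURE app can be illustrated with a toy alignment routine. The sketch below is a deliberate simplification: it assumes all streams already share one clock and uses plain linear interpolation, whereas a real pipeline must also handle clock drift and dropped samples; the stream names and rates are assumptions.

import numpy as np

def align_to_neural(neural_ts, stream_ts, stream_vals):
    """Resample a wearable stream onto neural-recording timestamps."""
    return np.interp(neural_ts, stream_ts, stream_vals)

neural_ts = np.arange(0.0, 10.0, 1 / 250)   # 250 Hz neural clock (assumed)
hr_ts = np.arange(0.0, 10.0, 1 / 32)        # 32 Hz wearable stream (assumed)
hr = 60 + 5 * np.sin(hr_ts / 2)             # synthetic heart-rate trace
hr_on_neural = align_to_neural(neural_ts, hr_ts, hr)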

Category:

CP, TR&D3