Software/Data

Working Memory Dataset: 1st release Feb 2023

Working Memory (WM) involves the temporary retention of information over short periods of time. It is an important aspect of cognitive function that allows humans to perform a variety of tasks requiring online processing, such as dialling a phone number or recalling a route. Inherent limitations in an individual's capacity to hold information mean that people often forget important specifics during such tasks. Most existing wearable and assistive technologies, however, target memory functions that are longer-term in nature (e.g., episodic memory). Motivated by this gap, we leverage multimodal, wearable sensor data to reliably extract attentional focus during such activities and to intelligently cue users in situ, improving the recall of those tasks.

Access the dataset here. 

Publication: Under Preparation

Dataset Description: We collected data from 20 volunteers performing retention of cognitive information during desktop-based navigation tasks in four different simulated environments: (a) an indoor dorm, (b) a familiar, suburban campus area, (c) the downtown area of a mid-size US city (Baltimore), and (d) a dense, cosmopolitan city (New York City). We designed two data collection procedures for this work: physiology-driven episode extraction, and verbal cueing with navigation retracing. We employed multimodal, wearable and eye-tracking-based sensing modules comprising inertial measurement units (IMUs) (accelerometer, gyroscope, and magnetometer), Galvanic Skin Response (GSR), Photoplethysmography (PPG), Electroencephalogram (EEG), and eye-tracking sensors.
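These modalities stream at different native sampling rates, so analyses of such data typically begin by resampling every signal onto a shared timeline. The sketch below is purely illustrative: the function name, rates, and toy signals are our own assumptions and not part of the released data format.

```python
import numpy as np

def resample_to_common_timeline(streams, rate_hz, duration_s):
    """Linearly interpolate each (timestamps, values) stream onto a
    shared uniform timeline so multimodal signals can be compared."""
    t_common = np.arange(0.0, duration_s, 1.0 / rate_hz)
    return t_common, {
        name: np.interp(t_common, ts, vals)
        for name, (ts, vals) in streams.items()
    }

# Toy streams at different native rates (illustrative only).
t_gsr = np.linspace(0, 10, 40)    # ~4 Hz GSR-like signal
t_ppg = np.linspace(0, 10, 640)   # ~64 Hz PPG-like signal
streams = {
    "gsr": (t_gsr, np.sin(t_gsr)),
    "ppg": (t_ppg, np.cos(t_ppg)),
}
t, aligned = resample_to_common_timeline(streams, rate_hz=32, duration_s=10)
print(t.shape, aligned["gsr"].shape, aligned["ppg"].shape)
```

Once the streams share a timebase, windowed features from GSR, PPG, EEG, and gaze can be computed over the same intervals.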

------------------------------------------------------##############---------------------------------------------------

Multi-view Dataset: 1st release Feb 2023

Deep video representation learning has achieved excellent performance in video action recognition, but performance degrades significantly when models are applied to clips captured from varying perspectives. Representations learned by existing video action recognition (VAR) models often entangle view and action information, making it challenging to learn a view-invariant representation. To address this issue, we collected a large-scale multi-view video dataset. The dataset includes rich metadata to facilitate further research on robust VAR systems.

Access the dataset here. 

Publications: Under Review in ICASSP 2023 and IEEE Transactions on Image Processing

Dataset Description: We collected data on ten micro-actions, including static and dynamic poses, with regular, wide-angle, and drone cameras from 12 volunteers in different environments and lighting conditions. We obtained approximately ten hours of video data in a time-controlled and safe setup, including background-only recordings of each scene. The videos were collected indoors and outdoors under varying realistic lighting conditions, across multiple realistic backgrounds, and with varying camera settings.

------------------------------------------------------##############---------------------------------------------------

The Firearm Recoil Dataset: 1st release Oct 2021

This dataset was collected using a wrist-worn accelerometer to record the recoil generated by one subject's use of 15 different firearms across the handgun, rifle, and shotgun classes. Each firearm is also labeled by action type (auto-loading or not). Data was collected at a private range, where the subject was instructed to conduct the shooting exercise as they would during a normal day at a shooting range. Slow, deliberate shots were taken, with the subject taking time to aim at a target from a standing position, feet shoulder-width apart.

Access the dataset here. 

Publications:

Dataset Description:

A wrist-worn tri-axis accelerometer (AX3 Watch, Axivity Ltd) was used. Data was collected at 1600 Hz with a range of ±16 g. The participant was 27 years old, 6'2'' tall, and weighed 180 lbs. The subject was right-handed, and the sensor was placed on the right wrist. Data is saved in separate CSV files, one per firearm.
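As a rough illustration of how such recordings might be analyzed, the sketch below flags candidate recoil events as acceleration-magnitude peaks. The threshold, refractory window, and synthetic trace are assumptions for demonstration only and are not derived from the dataset or any published method.

```python
import numpy as np

FS_HZ = 1600  # sampling rate stated in the dataset description

def detect_recoil_events(accel_xyz, threshold_g=8.0, refractory_s=0.5):
    """Flag candidate shots as acceleration-magnitude peaks above a
    threshold, suppressing re-triggers within a refractory window.
    Threshold and window are illustrative assumptions."""
    mag = np.linalg.norm(accel_xyz, axis=1)
    refractory = int(refractory_s * FS_HZ)
    events, last = [], -refractory
    for i, m in enumerate(mag):
        if m > threshold_g and i - last >= refractory:
            events.append(i)
            last = i
    return events

# Synthetic 2-second trace with two impulse-like "recoils".
n = 2 * FS_HZ
accel = np.zeros((n, 3))
accel[:, 2] = 1.0        # gravity on one axis
accel[800, 0] = 15.0     # spike at 0.5 s
accel[2400, 0] = 14.0    # spike at 1.5 s
print(detect_recoil_events(accel))
```

On real files, the three axis columns would be loaded from the per-firearm CSVs; the exact column layout should be checked against the released files.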

------------------------------------------------------##############---------------------------------------------------

The MPSC-rPPG Dataset: 1st release September 2021

The MPSC-rPPG dataset was collected to capture high-resolution, high-frame-rate facial video (input) with a simultaneous wrist PPG signal (ground truth). The dataset covers variations in individuals, background, skin tone, and brightness. We believe that open access to the MPSC-rPPG dataset will enable the development and validation of different PPG extraction methods, so the data is made available. If you use this dataset in your research, please cite the following paper. The dataset is available at the link below; however, please do not use any subject's face or description in your presentation, report, or paper.

Access the dataset here.

Project GitHub page with source code available here.

Publication:

Dataset Description:

The dataset contains RGB DSLR facial videos recorded under artificial light from a distance of 3-6 feet. The subjects wore an Empatica E4 wristwatch during video collection to record the PPG simultaneously. We align the videos and the corresponding Empatica PPG signal with an error bound of 1/30 seconds by leveraging Empatica's Event Marker feature. The two hours of rPPG data were contributed by two female and six male volunteers, several of whom participated multiple times, covering heterogeneity in sex, facial hair, fitness level, skin color, and spectacles usage.
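As a rough sketch of a typical rPPG pipeline on such data (not the method of any particular paper; the whole-frame region of interest, rates, and synthetic frames are illustrative assumptions):

```python
import numpy as np

def green_channel_trace(frames):
    """A common first step in rPPG extraction: average the green
    channel over a region of interest (here, the whole frame)
    for each video frame, yielding a raw pulse-related trace."""
    return np.array([f[:, :, 1].mean() for f in frames])

def align_ppg(ppg, ppg_fs, video_fs, n_frames):
    """Resample the wrist PPG ground truth onto the video frame
    timeline (the dataset reports a ~1/30 s alignment bound)."""
    t_video = np.arange(n_frames) / video_fs
    t_ppg = np.arange(len(ppg)) / ppg_fs
    return np.interp(t_video, t_ppg, ppg)

# Synthetic example: 90 frames (3 s at 30 fps) with a pulsing green channel.
fps, n = 30, 90
pulse = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * np.arange(n) / fps)  # ~72 bpm
frames = [np.full((8, 8, 3), p) for p in pulse]
trace = green_channel_trace(frames)
ppg64 = np.sin(2 * np.pi * 1.2 * np.arange(192) / 64)  # 3 s at 64 Hz
print(trace.shape, align_ppg(ppg64, 64, fps, n).shape)
```

In practice, the region of interest would be a detected face patch rather than the whole frame, and the raw trace would be band-pass filtered before heart-rate estimation.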

------------------------------------------------------##############---------------------------------------------------

Badminton Activity Recognition (BAR): 1st release 2020

The Badminton Activity Recognition (BAR) Dataset covers 12 commonly played badminton strokes. In addition to the strokes, the dataset captures the associated leg movements. We believe in open access, so the data is made available without any password protection. If you use this dataset in your research, please cite the following dataset and paper. The dataset can be found at the DOI link given below.

Access the dataset here.

Publication:

Dataset Description:

------------------------------------------------------##############---------------------------------------------------

A Circuit-level Green Building Dataset with Appliance-, Room-, and Floor-level Information for Energy Disaggregation

We are making available a floor-, room-, and appliance-level dataset for one of our locations. The data was collected at the circuit level from a three-story townhome (approx. 2,000 sq. ft.) with a variety of appliances. Data is available at minute-by-minute, hour-by-hour, and day-by-day resolution. We believe in open access, so the data is made available without any password protection. If you use these datasets in your research, please cite the following papers. We plan to release subsequent circuit-level datasets from this location in due course. Stay tuned.
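As an illustration of working with multi-resolution energy data, the sketch below rolls synthetic minute-level readings up into hourly totals. The column layout and units (kWh per minute) are our own assumptions for demonstration, not the released schema.

```python
from datetime import datetime, timedelta

def aggregate_energy(minute_readings, start, bucket_minutes=60):
    """Roll minute-level circuit readings (assumed kWh per minute)
    up to coarser buckets, e.g. hour-by-hour totals."""
    buckets = {}
    for i, kwh in enumerate(minute_readings):
        ts = start + timedelta(minutes=i)
        # Snap the timestamp down to the start of its bucket.
        key = ts - timedelta(minutes=ts.minute % bucket_minutes,
                             seconds=ts.second)
        buckets[key] = buckets.get(key, 0.0) + kwh
    return dict(sorted(buckets.items()))

# Two hours of synthetic minute data: 0.01 kWh/min, then 0.02 kWh/min.
readings = [0.01] * 60 + [0.02] * 60
start = datetime(2023, 2, 1, 0, 0)
hourly = aggregate_energy(readings, start)
print({k.isoformat(): round(v, 2) for k, v in hourly.items()})
```

The same rollup applied per circuit gives the hour-by-hour and day-by-day views described above, which is the usual starting point for energy disaggregation baselines.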

Publications:

Datasets: 

Dataset Description: