An egocentric dataset created using Aria Gen 2, for exploring the fundamental capabilities of the device in your research applications.
The Aria Gen 2 Pilot Dataset (A2PD) is an egocentric multimodal dataset captured using Aria Gen 2 glasses, and it is available for download today. Additional data will be released in upcoming batches; stay tuned for more details.
The initial release features everyday scenarios involving a primary user and her three friends, including cleaning, cooking, eating, playing, and walking outside. Every user in the sequences is equipped with Aria Gen 2 glasses. The dataset includes comprehensive raw sensor data alongside outputs from various machine perception algorithms.
A2PD illustrates Aria Gen 2’s ability to perceive the wearer, the surrounding environment, and the interactions between wearer and environment, while maintaining robust performance across diverse users and conditions.
Dataset Contents
Sensor Data
Rich Machine Perception Output
On-device Machine Perception
All sequences are captured with on-device machine perception, including Eye Tracking, Hand Tracking, and SLAM. The diverse and accurate perception data in A2PD enables researchers to build real-time prototypes, eliminating the need for offline processing.
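As a starting point, the snippet below sketches how a sequence's sensor streams might be enumerated with the projectaria_tools Python package, assuming A2PD recordings ship as VRS files as in earlier Aria releases; the file name is a placeholder.

from projectaria_tools.core import data_provider

# Open a recording; the path is a placeholder for an actual A2PD sequence file.
provider = data_provider.create_vrs_data_provider("sequence.vrs")

# List every sensor stream captured on-device (cameras, IMUs, eye tracking, etc.).
for stream_id in provider.get_all_streams():
    print(stream_id, provider.get_label_from_stream_id(stream_id))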


Machine Perception Services (MPS)
All recordings are also processed offline by our popular Machine Perception Services (MPS) for SLAM and Hand Tracking. MPS offers higher-precision hand tracking results, an additional closed-loop trajectory, and semi-dense point clouds.
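A minimal sketch of loading these MPS outputs with projectaria_tools, assuming the standard MPS file layout; the paths below are placeholders.

from projectaria_tools.core import mps

# Closed-loop trajectory: globally consistent 6-DoF device poses over time.
trajectory = mps.read_closed_loop_trajectory("mps/slam/closed_loop_trajectory.csv")

# Semi-dense point cloud produced by offline SLAM.
points = mps.read_global_point_cloud("mps/slam/semidense_points.csv.gz")

print(f"{len(trajectory)} poses, {len(points)} points")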
Directional Automatic Speech Recognition
The contact microphones on the Aria Gen 2 device distinctly capture the wearer's voice versus other voices. To demonstrate this capability, we apply directional ASR to all sequences. The directional ASR algorithm distinguishes between self and others, and provides accurate start and end timestamps for each utterance.
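Consuming this output could look like the sketch below; the file name and column layout (speaker tag, start/end timestamps, transcript) are hypothetical and should be adapted to the schema shipped with the dataset.

import csv

with open("directional_asr.csv") as f:  # hypothetical file name and schema
    for row in csv.DictReader(f):
        who = "wearer" if row["speaker"] == "self" else "other"
        print(f"[{row['start_ns']} - {row['end_ns']}] {who}: {row['text']}")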

Heart Rate Estimation
The Aria Gen 2 device uses photoplethysmography (PPG) sensors to continuously measure the wearer's heart rate, and the heart rate estimation algorithm is run on all sequences in the dataset. While we don't have ground-truth heart rate data in this release, the estimated heart rate tracks physical activity closely, showing higher values after running or jumping and lower values during rest. We plan to compare these estimates to chest strap sensor measurements in a future study.
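A quick sanity check along those lines might compare the mean estimated heart rate inside an activity window against a rest window, as sketched below; the file name, column names, and window times are placeholders, not part of the documented dataset format.

import pandas as pd

hr = pd.read_csv("heart_rate.csv")  # assumed columns: timestamp_ns, bpm

def mean_bpm(t0_ns, t1_ns):
    window = hr[(hr["timestamp_ns"] >= t0_ns) & (hr["timestamp_ns"] < t1_ns)]
    return window["bpm"].mean()

# Placeholder windows: one during activity, one at rest.
print("active:", mean_bpm(60e9, 120e9), "rest:", mean_bpm(300e9, 360e9))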

Hand Object Interaction
The Aria Gen 2 device can estimate hand poses and identify when hands interact with objects. To show how this works, we run our hand-object interaction model on every sequence in the dataset. This model creates segmentation masks for the left hand, the right hand, and any objects being touched.
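Overlaying such masks on an RGB frame could be done as below; the per-frame PNG masks and the integer label convention (1 left hand, 2 right hand, 3 interacted object) are assumptions for illustration, not the dataset's documented format.

import numpy as np
from PIL import Image

rgb = np.array(Image.open("frame.png").convert("RGB")).astype(np.float32)
mask = np.array(Image.open("hoi_mask.png"))  # hypothetical label image

colors = {1: (255, 0, 0), 2: (0, 255, 0), 3: (0, 0, 255)}  # left, right, object
for label, color in colors.items():
    # Blend the class color into the pixels belonging to each mask label.
    rgb[mask == label] = 0.5 * rgb[mask == label] + 0.5 * np.array(color, dtype=np.float32)

Image.fromarray(rgb.astype(np.uint8)).save("overlay.png")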


3D Object Detection
The Aria Gen 2 device uses a wide field-of-view camera system with one RGB camera and four computer vision (CV) cameras. We apply the Egocentric Voxel Lifting (EVL) algorithm to all indoor recordings, detecting objects in the environment with both 2D and 3D bounding boxes. With EVL, Aria Gen 2 can accurately reconstruct scenes and objects in detail.
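For working with the 3D detections, the sketch below converts an oriented 3D box given as center, size, and rotation into its eight corner points; this center-size-rotation parametrization is a common convention, not necessarily the exact schema of the dataset files.

import numpy as np

def box_corners(center, dimensions, rotation):
    """center (3,), dimensions (3,) full extents, rotation (3, 3) world-from-box."""
    half = 0.5 * np.asarray(dimensions)
    # All sign combinations of the half extents give the 8 corners in the box frame.
    signs = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
    return (np.asarray(rotation) @ (signs * half).T).T + np.asarray(center)

corners = box_corners([0.0, 0.0, 1.0], [0.4, 0.3, 0.2], np.eye(3))
print(corners.shape)  # (8, 3)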
Depth Estimation
With four overlapping CV cameras, Aria Gen 2 can now produce reliable depth maps, effectively functioning as a precise depth capture device. To achieve this, we rectify the front-left and front-right CV camera images and process them using the FoundationStereo model to generate corresponding depth images.
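The final step of such a stereo pipeline is the standard disparity-to-depth conversion, depth = focal_length * baseline / disparity for a rectified pair, sketched below with placeholder values rather than the device's actual calibration.

import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    depth = np.full_like(disparity, np.inf, dtype=np.float32)
    valid = disparity > 0  # zero disparity means no match / infinite depth
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

disp = np.array([[8.0, 16.0], [0.0, 32.0]], dtype=np.float32)
print(disparity_to_depth(disp, focal_px=300.0, baseline_m=0.1))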

Comprehensive tools to load and visualize data easily
Tools for working with the Aria Gen 2 Pilot Dataset allow researchers to access, interact with, and visualize all raw data and machine perception algorithm results available in the dataset.
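As an illustration, the sketch below logs a trajectory and point cloud to the Rerun viewer, which Project Aria tooling commonly builds on; the arrays are stand-ins for values loaded from an actual sequence.

import numpy as np
import rerun as rr

rr.init("a2pd_viewer", spawn=True)

points = np.random.rand(1000, 3)                                # stand-in point cloud
trajectory = np.cumsum(0.01 * np.random.randn(200, 3), axis=0)  # stand-in device path

rr.log("world/points", rr.Points3D(points))
rr.log("world/trajectory", rr.LineStrips3D([trajectory]))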

Visualizers of representative raw sensor signals.

Visualizer of machine perception algorithm outputs.
Learn more about Aria Gen 2 Pilot Dataset
For more information about the Aria Gen 2 Pilot Dataset and the methods behind it, check out our paper on arXiv.

BibTeX Citation
If you use the Aria Gen 2 Pilot Dataset in your research, please cite the following:
@misc{kong2025ariagen2pilot,
  title={Aria Gen 2 Pilot Dataset},
  author={Chen Kong and James Fort and Aria Kang and Jonathan Wittmer and Simon Green and Tianwei Shen and Yipu Zhao and Cheng Peng and Gustavo Solaira and Andrew Berkovich and Nikhil Raina and Vijay Baiyya and Evgeniy Oleinik and Eric Huang and Fan Zhang and Julian Straub and Mark Schwesinger and Luis Pesqueira and Xiaqing Pan and Jakob Julian Engel and Carl Ren and Mingfei Yan and Richard Newcombe},
  year={2025},
  eprint={2510.16134},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.16134},
}