Datasets

The various shared datasets are listed below as links leading to sections with their details and illustrations:
WAIr-JaM: Wide Angle InfraRed dataset of JRL and MIS lab

The WAIr-JaM dataset was created for the evaluation of visual Simultaneous Localization and Mapping outdoors with both an infrared wide-angle camera (120-degree field-of-view) and an RGB camera (90-degree by 75-degree field-of-view). It was captured with a hand-held Azure Kinect camera in passive IR mode while walking along two paths, one of 130 m in 2'30'' and one of about 210 m in 3', each with loop closure.
ROS bags and calibration images are provided together with configuration files for running StellaVSLAM.
More details on the dataset organization are in the WAIr-JaM dataset shared folder (5.3 GB), to be downloaded from: WAIr-JaM shared directory.
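As an illustration of how the provided ROS bags can also be consumed outside StellaVSLAM, here is a minimal Python sketch that dumps image frames with the rosbag API; the bag filename and the image topic below are assumptions, so check the documentation in the shared folder for the actual values.

```python
# Minimal sketch: dump images from a WAIr-JaM ROS bag.
# The bag filename and topic name are assumptions; see the dataset
# documentation in the shared folder for the actual values.
import cv2
import rosbag
from cv_bridge import CvBridge

bridge = CvBridge()
with rosbag.Bag("wair_jam_sequence1.bag") as bag:                       # hypothetical filename
    for _, msg, stamp in bag.read_messages(topics=["/ir/image_raw"]):  # assumed topic
        # Convert the sensor_msgs/Image message to an OpenCV array and save it.
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        cv2.imwrite("ir_{}.png".format(stamp.to_nsec()), frame)
```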
This dataset is published with the following paper:
CoDeHy-JRL: Objects Color-Depth-Hyperspectral dataset of JRL

The CoDeHy-JRL dataset was created for the evaluation of object 3D reconstruction from all-around views captured with a rig combining an RGB-D camera and a hyperspectral camera, in order to build a hyperspectral-depth camera. It was captured with an Azure Kinect camera and an NH-5 linescan hyperspectral camera observing 4 objects of the YCB dataset under 24 to 29 orientations, with loop closure.
The code repository HSI_3D_Object_Reconstruction provides both the intrinsic and extrinsic parameters of the cameras and the programs to reproduce the results of the article:
The CoDeHy-JRL dataset is available in a shared folder (4 GB + 3.3 GB): CoDeHy-JRL shared directory.
CD-MaJ: the Color-Depth dataset of MIS lab and JRL

The CD-MaJ dataset was created for the evaluation of visual Simultaneous Localization and Mapping with RGB-D cameras of various fields-of-view and resolutions, with respect to ground-truth measurements obtained by motion capture. It was generated with an Azure Kinect camera mounted on a Pioneer 3AT mobile robot moving in an indoor area covered by an OptiTrack system whose readings are synchronized with the camera captures.
Seven RGB-D image sequences are shared, captured while the mobile robot was moving. Various modes of the Azure Kinect camera are considered: narrow or wide depth-camera field-of-view, high or low resolution, at 15 or 30 images per second (except for the wide field-of-view at 30 images per second, which is not possible). The motion-capture data, synchronized in software via ROS with the image capture, are also provided in rosbags.
More details on the camera frames and pose representation are in the CD-MaJ dataset archive (15.4 GB), to be downloaded from: CD-MaJ shared directory.
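To give an idea of how the synchronized motion-capture readings can serve as ground truth, the sketch below computes a position RMSE after a rigid alignment between an estimated camera trajectory and the OptiTrack one; it assumes both trajectories are already time-associated as Nx3 arrays in metres and is not the evaluation code of the paper.

```python
# Minimal sketch: RMSE of estimated camera positions against the OptiTrack
# ground truth, after an SVD-based rigid alignment (Umeyama-style, no scale).
# Both inputs are assumed Nx3, time-associated and in metres; illustration only.
import numpy as np

def align_and_rmse(estimated, ground_truth):
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    est_c, gt_c = est - est.mean(axis=0), gt - gt.mean(axis=0)
    # Best rotation mapping the centred estimated trajectory onto the ground truth.
    u, _, vt = np.linalg.svd(est_c.T @ gt_c)
    s = np.eye(3)
    s[2, 2] = np.sign(np.linalg.det(u @ vt))
    rot = (u @ s @ vt).T
    aligned = (rot @ est_c.T).T + gt.mean(axis=0)
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
```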
This dataset is published with the following paper:
SVMIS+: Spherical Vision dataset onboard hexarotor

The SVMIS+ dataset, created for the evaluation of visual gyroscopes in an outdoor environment with respect to ground-truth measurements, was generated with a Ricoh Theta S dual-fisheye camera mounted on a DJI Matrice 600 Pro hexarotor drone, with an additional onboard computer to record images synchronized with IMU and GPS readings.
Both raw dual-fisheye and equirectangular-transformed image sequences are shared, captured while the drone was flying and thus rotating and translating the camera (4'36'' video at 10 FPS). The drone flight data (IMU and GPS measurements), synchronized in software via ROS with the image capture, are also provided in the form of camera poses. In the equirectangular images, the sky lies in the top part of the image and the ground in the bottom part when the drone is static (the horizon line is then straight and horizontal in the image).
More details on the camera frames and camera pose representation are in the SVMIS+ dataset, provided as archives to be downloaded from SVMISplus.zip (2.2 GB) for the dual-fisheye images and SVMISplus_er_pano.zip (3.1 GB) for the equirectangular images.
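Since the equirectangular images cover the full sphere, the usual pixel-to-viewing-direction conversion sketched below may help when combining them with the provided camera poses; the longitude/latitude convention chosen here (sky at the top row) is only an assumption consistent with the description above, not the dataset's documented convention.

```python
# Minimal sketch: equirectangular pixel <-> unit viewing direction.
# Convention assumed here: u in [0, W) spans longitude [-pi, pi),
# v in [0, H) spans latitude from +pi/2 (top row, sky) to -pi/2 (bottom row, ground).
# Check the dataset notes for the exact frame convention before relying on it.
import numpy as np

def pixel_to_ray(u, v, width, height):
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def ray_to_pixel(d, width, height):
    x, y, z = d / np.linalg.norm(d)
    lon, lat = np.arctan2(y, x), np.arcsin(z)
    u = (lon + np.pi) / (2.0 * np.pi) * width - 0.5
    v = (np.pi / 2.0 - lat) / np.pi * height - 0.5
    return u, v
```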
This dataset is published with the following articles (André et al. for the dual-fisheye images and Berenguel-Baeta et al. for the equirectangular images):
FULLSCAN: FullScan project dataset

The FULLSCAN dataset, created for studying the nD reconstruction of stained-glass windows, was generated with seven different sensors:
Data were acquired and structured by MIS (UPJV, France), OMI (NAIST, Japan), VIBOT (UBFC, France) and IRSEEM (Esigelec, France).
For the moment (June 2022), only hyperspectral data (raw and corrected) of the REHS sensor (NAIST) and full-view equirectangular images of the Ricoh Theta V camera (MIS) are shared for three of the main stained-glass windows of the Choir and Transept parts of the Amiens cathedral triforium:
| Stained-glass | ThetaV | REHS params | REHS raw | REHS comp |
|---|---|---|---|---|
| III | ✔ | ✔ | ✔ | ✔ |
| XVIII | ✔ | ✔ | ✔ | ✔ |
| SouthPortal | ✔ | ✔ | ✔ | ✔ |
Stained-glass III represents St Peter; XVIII represents St Bishop.
In the HSI data from the REHS sensor, "params" contains the angular directions of the captured pixels and the wavelengths of the 2068 channels (capture time is provided for information), "raw" stands for the raw captured data (the hyperspectral cube), and "comp" stands for the HSI data compensated for illumination variation. When using these spectral data, please cite:
When using the equirectangular images, please cite:
C++ code to densely align compensated HSI data with RGB equirectangular images is also provided at: github.com/jrl-umi3218/hsrgbalign.
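For illustration, the sketch below plots the spectrum of one scanned pixel; the file names and the NumPy loading step are assumptions (the actual data format is described in the shared folder), only the 2068-channel cube layout comes from the description above.

```python
# Minimal sketch: plot the spectrum of one scanned pixel of a FULLSCAN HSI cube.
# The loading step is an assumption (hypothetical .npy files); here the cube is
# taken as an array of shape (rows, cols, 2068), with the wavelengths coming
# from the "params" data.
import numpy as np
import matplotlib.pyplot as plt

cube = np.load("window_III_comp.npy")                   # hypothetical filename/format
wavelengths = np.load("window_III_wavelengths.npy")     # 2068 values, hypothetical

row, col = 100, 200                                     # arbitrary pixel of the scan
plt.plot(wavelengths, cube[row, col, :])
plt.xlabel("wavelength (nm)")
plt.ylabel("compensated intensity")
plt.title("Spectrum at pixel ({}, {})".format(row, col))
plt.show()
```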
PanoraMIS: Panoramic Vision dataset of the MIS laboratory

The PanoraMIS dataset gathers and extends the previous OVMIS and SVMIS datasets (kept at the bottom of this page for legacy) to spherical vision along long paths of a Seekur Jr mobile robot in various urban and natural environments, with synchronized position and 3D orientation ground truth.
Dedicated website: mis.u-picardie.fr/~panoramis.
For more details on the PanoraMIS dataset (e.g. acquisition setup or path shapes), see the article:
ConveJRL: Convenience store objects dataset of JRL

The ConveJRL dataset has been made at CNRS-AIST JRL (Tsukuba, Japan). It was created for evaluating cost functions of eye-to-hand visual servoing of a robot arm for object manipulation. It contains images of a webcam observing 13 axial-symmetric objects undergoing pure rotations on a turntable, each over 360 degrees with a step of less than 1 degree.
The ConveJRL dataset features 6850 images.
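As a toy example of the kind of image-based cost that such turntable sequences allow to profile against object rotation, the sketch below evaluates a plain sum-of-squared-differences cost between a current and a desired image; it is only an illustration, not necessarily one of the cost functions studied in the paper.

```python
# Minimal sketch: sum-of-squared-differences photometric cost between the
# current and the desired grayscale images. Illustration only.
import cv2
import numpy as np

def ssd_cost(current_path, desired_path):
    cur = cv2.imread(current_path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    des = cv2.imread(desired_path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    return float(np.sum((cur - des) ** 2)) / cur.size
```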
For more details on the ConveJRL dataset please see the following paper (to cite if using the dataset):
The ConveJRL dataset (4 GB) is available for download here: ConveJRL.zip.
ArUcOmni: Panoramic dataset of ArUco markers

The ArUcOmni dataset is a collaboration between L@bISEN, Vision-AD team, Yncréa Ouest, ISEN (Brest, France) and the MIS lab of UPJV. It was created for evaluating the adaptation of the detection and pose estimation of the so-called ArUco fiducial markers to panoramic vision. It contains images of a hypercatadioptric camera and a fisheye camera observing a 3-plane rig displaying ArUco markers, whose relative poses serve as ground truth.
Target images showing calibration checkerboards are provided to calibrate both hypercatadioptric and fisheye cameras.
The ArUcOmni dataset features 189 hypercatadioptric images and 36 fisheye images. Matlab scripts are provided to compute the pose estimation errors with respect to the ground truth.
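For reference, the sketch below shows the standard ArUco detection pipeline on a perspective image with OpenCV; the dictionary choice is an assumption, and the panoramic adaptation evaluated with ArUcOmni is the one described in the paper, not this code.

```python
# Minimal sketch: detect ArUco markers in a perspective image with OpenCV
# (>= 4.7 API; older versions use cv2.aruco.detectMarkers directly).
# This is the standard pinhole pipeline only, not the panoramic adaptation.
import cv2

image = cv2.imread("view.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # assumed dictionary
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _ = detector.detectMarkers(gray)
print("detected marker ids:", None if ids is None else ids.ravel().tolist())
```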
For more details on the ArUcOmni dataset please see the following paper:
The ArUcOmni dataset (486 MB) is available for download here: ArUcOmni_dataset.zip.
LFMIS: Light-Field dataset of the MIS laboratory

The LFMIS dataset, created for the evaluation of planar object visual tracking, was generated with a static Lytro Photo camera pointed toward the end-effector of a Stäubli manipulator holding a planar target.
Target images (12) showing the calibration rigs are provided to calibrate the light-field camera.
Two experimental setups with a textured planar object led to the following datasets:
For each sequence, the ground truth poses of the planar object measured by the Stäubli manipulator are provided.
For more details on the LFMIS dataset (e.g. acquisition setup or path shapes), see Section VI in the paper:
The LFMIS dataset (2.24 GB) is available for download here: LFMIS_dataset.zip.
AFAMIS: Image templates dataset for registration evaluation

The AFAMIS dataset, created for the evaluation of image region tracking under translation and projective motion models, was generated from the MS-COCO dataset and the Yale face database. We built a set of 110,000 images to evaluate our AFA-LK tracker (Adaptive Forward Additive Lucas-Kanade tracker) with respect to state-of-the-art methods in:
The AFAMIS dataset (520 MB) is available for download here: AFAMIS_dataset.zip.
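To make the projective motion model concrete, the sketch below warps an image with an example homography using OpenCV; it only illustrates the motion model under which the templates are registered, not the AFA-LK tracker itself.

```python
# Minimal sketch: warp an image with a homography, i.e. the projective motion
# model under which the templates are registered. Illustration only; this is
# not the AFA-LK tracker.
import cv2
import numpy as np

image = cv2.imread("template.png")
h, w = image.shape[:2]
# Example homography: small rotation and translation plus a mild projective distortion.
H = np.array([[0.98, -0.10, 5.0],
              [0.10,  0.98, 3.0],
              [1e-4,  2e-4, 1.0]])
warped = cv2.warpPerspective(image, H, (w, h))
cv2.imwrite("template_warped.png", warped)
```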
SVMIS: Spherical Vision dataset of the MIS laboratory

The SVMIS dataset, created for the evaluation of visual gyroscopes in structured and unstructured environments, was generated with a dual-fisheye camera mounted on the end-effector of a Stäubli manipulator (TX-60) and on a Parrot fixed-wing drone (Disco FPV).
Multi-plane target images are provided to calibrate the dual-fisheye camera.
Two image datasets acquired with an industrial manipulator are provided (the ground truth is given by the robot's odometry):
An experimental scenario with the Disco drone is also considered, leading to an outdoor image sequence in which the camera rotates and translates (6'41'' video at 30 FPS over a 4.2 km trajectory). The drone flight data (including the IMU measurements) are also provided, but they are not synchronized with the dual-fisheye camera.
For more details about the organization of the SVMIS dataset, see Section V in the paper:
The SVMIS dataset (1.6 GB) is available for download here: SVMIS_dataset.zip.
Following requests, we slightly extended the SVMIS dataset: two videos (810 MB each), obtained as the equirectangular conversion of the dual-fisheye video acquired with the Disco drone, are now available. The conversion is done with the Ricoh Theta desktop software, with or without the top/down correction:
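As a rough illustration of what such a top/down correction amounts to, the sketch below resamples an equirectangular image under a 3D rotation of the viewing sphere; the longitude/latitude convention and the origin of the rotation matrix (e.g. an IMU attitude) are assumptions, and this is not the Ricoh software's implementation.

```python
# Minimal sketch: re-render an equirectangular image under a 3D rotation of the
# viewing sphere (the kind of operation behind a top/down correction).
# Conventions assumed: longitude spans image width, latitude +pi/2 at the top row;
# R maps an output viewing direction to the corresponding input direction.
import cv2
import numpy as np

def rotate_equirectangular(image, R):
    h, w = image.shape[:2]
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    lon = (u + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / h * np.pi
    rays = np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)
    # Rotate every output viewing direction, then look it up in the input image.
    src = rays @ R.T
    src_lon = np.arctan2(src[..., 1], src[..., 0])
    src_lat = np.arcsin(np.clip(src[..., 2], -1.0, 1.0))
    map_x = ((src_lon + np.pi) / (2.0 * np.pi) * w - 0.5).astype(np.float32)
    map_y = ((np.pi / 2.0 - src_lat) / np.pi * h - 0.5).astype(np.float32)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
```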
OVMIS: Omnidirectional Vision dataset of the MIS laboratory

The OVMIS dataset, created for the evaluation of visual compasses in structured and unstructured environments, was generated with a hypercatadioptric camera mounted on the end-effector of a Stäubli manipulator and on a Pioneer mobile robot.
Target images showing the calibration rigs are provided to calibrate the hypercatadioptric camera.
3600 images sampling a horizontal disk of the manipulator workspace are provided, with the camera frame poses measured by the Stäubli manipulator. These 3600 images are obtained from a pure rotation of the camera around its optical axis with a 2.5° step, leading to 144 images at each of 25 collection positions.
Furthermore, three experimental scenarios with the Pioneer mobile robot led to the following datasets:
For each scenario, the ground truth (Pioneer's odometry corrected with the gyroscopic measurements) is provided.
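For illustration, a naive visual compass on such catadioptric images can be sketched as follows: unwrap each image to a radius-by-angle panorama around an assumed image centre, then search for the angular shift that best correlates the two. The centre, radii and bin counts below are placeholders, and this is not the method evaluated in the paper.

```python
# Minimal sketch of a naive visual compass on catadioptric images: polar-unwrap
# each image around an assumed centre, then pick the angular shift maximising
# the correlation of the column intensities. Placeholders: centre, radii, bins.
import cv2
import numpy as np

def unwrap(img, center, r_min, r_max, n_theta=360, n_r=60):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radius = np.linspace(r_min, r_max, n_r)
    tt, rr = np.meshgrid(theta, radius)
    map_x = (center[0] + rr * np.cos(tt)).astype(np.float32)
    map_y = (center[1] + rr * np.sin(tt)).astype(np.float32)
    return cv2.remap(gray, map_x, map_y, cv2.INTER_LINEAR)

def yaw_shift_deg(img_a, img_b, center=(320, 240), r_min=60, r_max=220):
    a = unwrap(img_a, center, r_min, r_max).mean(axis=0)
    b = unwrap(img_b, center, r_min, r_max).mean(axis=0)
    a, b = a - a.mean(), b - b.mean()
    scores = [np.dot(np.roll(b, s), a) for s in range(a.size)]
    return 360.0 * int(np.argmax(scores)) / a.size
```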
For more details on the organization of the OVMIS dataset, see Section IV-A in the paper:
The OVMIS dataset (2.72 GB) is available for download here: OVMIS_dataset.zip.