The shared datasets are listed below as links leading to sections with their details and illustrations:
The PanoraMIS dataset gathers and extends the previous OVMIS and SVMIS datasets (kept at the bottom of this page for legacy purposes) to spherical vision on long paths of a Seekur Jr mobile robot in various urban and natural environments, with synchronized position and 3D orientation ground truth.
Dedicated website: mis.u-picardie.fr/~panoramis.
For more details on the PanoraMIS dataset (e.g. acquisition setup or path shapes), see the article:
The ArUcOmni dataset is a collaboration between L@bISEN, Vision-AD team, Yncréa Ouest, ISEN (Brest, France) and the MIS lab of UPJV. It was created to evaluate the adaptation of the detection and pose estimation of the so-called ArUco fiducial markers to panoramic vision. It contains images of a hypercatadioptric camera and a fisheye camera observing a 3-plane rig displaying ArUco markers whose relative poses serve as ground truth.
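As a quick point of reference, marker detection in these images can be attempted with OpenCV's standard (perspective) ArUco pipeline; the Python sketch below is illustrative only (the dictionary and file name are assumptions) and does not implement the panoramic adaptation evaluated with this dataset.

```python
import cv2

# Illustrative baseline with OpenCV's ArUco module (OpenCV >= 4.7 API).
# The dictionary and file name are assumptions, not dataset specifics.
image = cv2.imread("hypercatadioptric_view.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
detector = cv2.aruco.ArucoDetector(dictionary)
corners, ids, rejected = detector.detectMarkers(gray)

if ids is not None:
    print("Detected marker ids:", ids.ravel().tolist())
    cv2.aruco.drawDetectedMarkers(image, corners, ids)
```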
Target images showing calibration checkerboards are provided to calibrate both the hypercatadioptric and fisheye cameras.
The ArUcOmni dataset features 189 hypercatadioptric images and 36 fisheye images. Matlab scripts are provided to compute the pose estimation errors with respect to the ground truth.
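The exact metrics are defined in the provided scripts; as an illustration, a common convention is to take the angle of the residual rotation and the Euclidean distance between translations, e.g. in Python:

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Rotation error (degrees) and translation error between two poses.

    Assumed convention: the rotation error is the angle of the residual
    rotation R_gt^T R_est; the translation error is the Euclidean distance
    between the two translation vectors.
    """
    R_err = R_gt.T @ R_est
    # Clip guards against numerical drift slightly outside [-1, 1].
    cos_angle = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))
    trans_err = np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))
    return rot_err_deg, trans_err
```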
For more details on the ArUcOmni dataset, please see the following paper:
The ArUcOmni dataset (486 MB) is available for download here: ArUcOmni_dataset.zip.
The LFMIS dataset, created for the evaluation of planar object visual tracking, was generated with a static Lytro Photo camera pointed toward the end-effector of a Stäubli manipulator holding a planar target.
Twelve target images showing the calibration rigs are provided to calibrate the light-field camera.
Two experimental setups with a textured planar object led to the following datasets:
For each sequence, the ground truth poses of the planar object measured by the Stäubli manipulator are provided.
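A minimal sketch, assuming the poses are provided as 4×4 homogeneous matrices in the robot base frame, of turning these absolute ground-truth poses into the relative motion of the object between two frames:

```python
import numpy as np

def relative_pose(T_ref, T_k):
    """Relative motion from frame ref to frame k, i.e. T_ref^-1 @ T_k.

    Assumes 4x4 homogeneous poses expressed in a common (robot base)
    frame, as manipulator-measured poses usually are.
    """
    return np.linalg.inv(T_ref) @ T_k
```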
For more details on the LFMIS dataset (e.g. acquisition setup or path shapes), see Section VI in the paper:
The LFMIS dataset (2.24 GB) is available for download here: LFMIS_dataset.zip.
The AFAMIS dataset, created for the evaluation of image region tracking under translation and projective motion models, was generated from the MS-COCO dataset and the Yale Face Database. We built a set of 110,000 images to evaluate our AFA-LK tracker (Adaptive Forward Additive Lucas-Kanade tracker) against state-of-the-art methods in:
The AFAMIS dataset (520 MB) is available for download here: AFAMIS_dataset.zip.
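Benchmarks of this kind are commonly synthesized by warping source images with known ground-truth motions. The following sketch illustrates the general idea for the projective case; it is not the exact AFAMIS generation procedure, and the homography values and file name are arbitrary:

```python
import cv2
import numpy as np

# Warp an image with a known homography so that the ground-truth
# projective motion of any template region is available by construction.
image = cv2.imread("coco_image.jpg")  # placeholder file name
h, w = image.shape[:2]

# A small projective perturbation around the identity (arbitrary values).
H = np.array([[1.02,  0.01,  3.0],
              [-0.01, 0.98, -2.0],
              [1e-5,  2e-5,  1.0]])
warped = cv2.warpPerspective(image, H, (w, h))
```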
The SVMIS dataset, created for the evaluation of visual gyroscopes in structured and unstructured environments, was generated with a dual-fisheye camera mounted on the end-effector of a Stäubli manipulator (TX-60) and on a Parrot fixed-wing drone (Disco FPV).
Multi-plane target images are provided to calibrate the dual-fisheye camera.
Two image datasets acquired with an industrial manipulator are provided (the ground truth is given by the robot's odometry):
An experimental scenario with the Disco drone is also considered, leading to an outdoor image sequence in which the camera is rotated and translated (a 6'41'' video at 30 FPS over a 4.2 km trajectory). The drone flight data (including the IMU measurements) are also provided, but they are not synchronized with the dual-fisheye camera.
For more details about the organization of the SVMIS dataset, see Section V in the paper:
The SVMIS dataset (1.6 GB) is available for download here: SVMIS_dataset.zip.
Following several requests, we have slightly extended the SVMIS dataset. Two videos (810 MB each), which are equirectangular conversions of the dual-fisheye video acquired with the Disco drone, are now available. The conversion was done with the Ricoh Theta desktop software, with or without the top/down correction:
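For readers who want to reproduce such a conversion themselves, its geometry can be sketched as follows, assuming a single fisheye lens with an equidistant projection model; the Ricoh software additionally handles per-lens calibration and the blending of the two hemispheres, which this sketch does not:

```python
import numpy as np
import cv2

def fisheye_to_equirect(fisheye, out_w=2048, out_h=1024, fov_deg=190.0):
    """Map one equidistant-model fisheye image to an equirectangular view.

    Simplified sketch: a real dual-fisheye stitch also needs per-lens
    calibration and blending across the two hemispheres.
    """
    h_in, w_in = fisheye.shape[:2]
    cx, cy = w_in / 2.0, h_in / 2.0
    f = (w_in / 2.0) / np.radians(fov_deg / 2.0)  # equidistant: r = f * theta

    # Spherical coordinates of every output pixel.
    lon = (np.arange(out_w) / out_w - 0.5) * 2.0 * np.pi   # [-pi, pi]
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi         # [pi/2, -pi/2]
    lon, lat = np.meshgrid(lon, lat)

    # Unit ray for each pixel, optical axis along +z.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle from the optical axis
    r = f * theta
    norm = np.sqrt(x**2 + y**2) + 1e-12
    map_x = (cx + r * x / norm).astype(np.float32)
    map_y = (cy - r * y / norm).astype(np.float32)
    return cv2.remap(fisheye, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
```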
The OVMIS dataset, created for the evaluation of visual compasses in structured and unstructured environments, was generated with a hypercatadioptric camera mounted on the end-effector of a Stäubli manipulator and on a Pioneer mobile robot.
Target images showing the calibration rigs are provided to calibrate the hypercatadioptric camera.
3600 images sampling a horizontal disk of the manipulator workspace are provided, together with the camera frame poses measured by the Stäubli manipulator. These images are obtained by a pure rotation of the camera around its optical axis in 2.5° steps, yielding 144 images at each of 25 collection positions (144 × 25 = 3600).
Furthermore, three experimental scenarios with the Pioneer mobile robot led to the following datasets:
For each scenario, the ground truth (Pioneer's odometry corrected with the gyroscopic measurements) is provided.
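Visual-compass estimates are usually scored by the heading error wrapped to (-180°, 180°]; a minimal sketch, assuming headings expressed in degrees:

```python
import numpy as np

def heading_error_deg(est, gt):
    """Signed heading error wrapped to (-180, 180] degrees."""
    err = (np.asarray(est) - np.asarray(gt) + 180.0) % 360.0 - 180.0
    # Modulo yields [-180, 180); map -180 to +180 for the half-open interval.
    return np.where(err == -180.0, 180.0, err)
```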
For more details on the organization of the OVMIS dataset, see Section IV.A in the paper:
The OVMIS dataset (2.72 GB) is available for download here: OVMIS_dataset.zip.