Datasets

The shared datasets are listed below as links leading to the sections with their details and illustrations:

PanoraMIS | ArUcOmni | LFMIS | AFAMIS | SVMIS | OVMIS

PanoraMIS: Panoramic Vision of the MIS laboratory

The PanoraMIS dataset gathers and extends the previous OVMIS and SVMIS datasets (kept at the bottom of this page for legacy) to spherical vision along long paths of a Seekur Jr mobile robot in various urban and natural environments, with synchronized position and 3D orientation ground truth.

Dedicated website: mis.u-picardie.fr/~panoramis.

For more details on the PanoraMIS dataset (e.g., the acquisition setup or the path shapes), see the article:

Houssem-Eddine Benseddik, Fabio Morbidi, Guillaume Caron, PanoraMIS: An Ultra-wide Field of View Image Dataset for Vision-based Robot-Motion Estimation, SAGE International Journal of Robotics Research, IJRR, 14 pages, in press, to appear in 2020. PDF

ArUcOmni: Panoramic dataset of ArUco markers

The ArUcOmni dataset is a collaboration between L@bISEN, Vision-AD team, Yncréa Ouest, ISEN (Brest, France) and the MIS lab of UPJV. It was created to evaluate the adaptation of the detection and pose estimation of the so-called ArUco fiducial markers to panoramic vision. It contains images from a hypercatadioptric camera and a fisheye camera observing a 3-plane rig displaying ArUco markers, whose relative poses serve as ground truth.


Target images showing calibration checkerboards are provided to calibrate both hypercatadioptric and fisheye cameras.
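No particular calibration toolchain is prescribed; as an illustration, the checkerboard corners can be extracted with OpenCV before fitting the omnidirectional camera model of your choice. This is a minimal Python sketch, in which the file name and the 8×6 inner-corner grid size are assumptions, not dataset specifics:

```python
import cv2

# Hypothetical file name; the actual calibration images ship with the dataset.
img = cv2.imread("calibration/checkerboard_000.png", cv2.IMREAD_GRAYSCALE)

# Inner-corner grid size of the checkerboard (assumed here to be 8x6).
pattern_size = (8, 6)

found, corners = cv2.findChessboardCorners(img, pattern_size)
if found:
    # Refine corner locations to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)
    # The refined corners can then be fed to the calibration routine of the
    # chosen omnidirectional camera model (e.g., a unified sphere model).
```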

The ArUcOmni dataset features 189 hypercatadioptric images and 36 fisheye images. Matlab scripts are provided to compute the pose estimation errors with respect to the ground truth.
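The provided Matlab scripts are the reference implementation; as an illustration of the kind of metrics involved, here is a minimal Python sketch of common rotation and translation error measures, assuming 4×4 homogeneous pose matrices (the scripts' actual conventions may differ):

```python
import numpy as np

def pose_errors(T_est, T_gt):
    """Rotation error (degrees) and translation error between two 4x4
    homogeneous poses; a common metric, not necessarily the one
    implemented by the dataset's Matlab scripts."""
    # Relative rotation between estimate and ground truth.
    R_err = T_est[:3, :3].T @ T_gt[:3, :3]
    # Geodesic angle on SO(3): arccos((trace(R) - 1) / 2).
    cos_angle = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))
    # Euclidean distance between the two estimated positions.
    trans_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    return rot_err_deg, trans_err
```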

For more details on the ArUcOmni dataset, please see the following paper:

Jaouad Hajjami, Jordan Caracotte, Guillaume Caron, Thibault Napoléon, ArUcOmni: detection of highly reliable fiducial markers in panoramic images, IEEE Conference on Computer Vision and Pattern Recognition (CVPR) workshop on Omnidirectional Computer Vision, OmniCV, June 2020.

The ArUcOmni dataset (486 MB) is available for download here: ArUcOmni_dataset.zip.

LFMIS: Light-Field dataset of the MIS laboratory

The LFMIS dataset, created for the evaluation of planar object visual tracking, was generated with a static Lytro Photo camera pointed toward the end-effector of a Stäubli manipulator holding a planar target.

Light-field camera pointed toward an industrial manipulator

Twelve target images showing the calibration rigs are provided to calibrate the light-field camera.

Two experimental setups with a textured planar object led to the following datasets:

  • Sequence 1: pure translation, rectangular shape (9 light-fields)
  • Sequence 2: free combination of translation and rotation (9 light-fields)

For each sequence, the ground truth poses of the planar object measured by the Stäubli manipulator are provided.
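These ground-truth poses relate to the tracking task through standard multi-view geometry: a plane with normal n and distance d in the reference camera frame induces a homography between two views. The sketch below illustrates this under a pinhole model for a single (sub-aperture) view; all symbols are hypothetical, and this is not the method of the paper cited next:

```python
import numpy as np

def plane_induced_homography(K, R, t, n, d):
    """Homography mapping points on a plane (normal n, distance d in the
    reference view) into the current view, for relative motion (R, t):
    H = K (R - t n^T / d) K^{-1}. Standard multi-view geometry, given only
    as an illustration of how the ground-truth poses can be used."""
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]  # Normalize so that H[2, 2] = 1.
```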

For more details on the LFMIS dataset (e.g., the acquisition setup or the path shapes), see Section VI in the paper:

Nathan Crombez, Guillaume Caron, Takuya Funatomi, Yasuhiro Mukaigawa, Reliable planar object pose estimation in light-fields from best sub-aperture camera pairs, IEEE Robotics and Automation Letters, RA-L, vol. 3, no. 8, pp. 3561-3568, October 2018. PDF

The LFMIS dataset (2.24 GB) is available for download here: LFMIS_dataset.zip.

AFAMIS: Image templates dataset for registration evaluation

The AFAMIS dataset, created for the evaluation of image region tracking under translation and projective motion models, was generated from the MS-COCO dataset and the Yale face database. We built a set of 110,000 images to evaluate our AFA-LK tracker (Adaptive Forward Additive Lucas-Kanade tracker) against state-of-the-art methods in:

Yassine Ahmine, Guillaume Caron, El Mustapha Mouaddib, Fatima Chouireb, Adaptive Lucas-Kanade tracking, Elsevier Image and Vision Computing, IMAVIS, vol. 88, pp. 1 - 8, August 2019. PDF
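AFA-LK itself is described in the paper above and is not reproduced here; as a stand-in baseline illustrating region alignment under the same two motion models, one can use OpenCV's ECC alignment. A minimal sketch with hypothetical file names:

```python
import cv2
import numpy as np

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)

# Translation motion model: 2x3 warp, only the last column is estimated.
warp_t = np.eye(2, 3, dtype=np.float32)
cc, warp_t = cv2.findTransformECC(template, frame, warp_t,
                                  cv2.MOTION_TRANSLATION, criteria)

# Projective motion model: full 3x3 homography.
warp_h = np.eye(3, 3, dtype=np.float32)
cc, warp_h = cv2.findTransformECC(template, frame, warp_h,
                                  cv2.MOTION_HOMOGRAPHY, criteria)
```

MOTION_TRANSLATION estimates a 2-parameter shift, while MOTION_HOMOGRAPHY estimates the full 8-parameter projective warp, mirroring the two motion models of the evaluation.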

The AFAMIS dataset (520 MB) is available for download here: AFAMIS_dataset.zip.

SVMIS: Spherical Vision dataset of the MIS laboratory

The SVMIS dataset, created for the evaluation of visual gyroscopes in structured and unstructured environments, was generated with a dual-fisheye camera mounted on the end-effector of a Stäubli manipulator (TX-60) and on a Parrot fixed-wing drone (Disco FPV).

Dual-fisheye camera mounted on an industrial manipulator | Disco drone embedding a dual-fisheye camera

Multi-plane target images are provided to calibrate the dual-fisheye camera.

Two image datasets acquired with the industrial manipulator are provided (the ground truth is given by the robot's odometry):

  • OneDOF: 720 images obtained by rotating the camera with a 2.5° step size about its "vertical" axis, i.e. 360°/2.5° = 144 images at each of 5 collection points.
  • ThreeDOFs: 94 images taken at a unique collection point.

An experimental scenario with the Disco drone is also considered, leading to an outdoor image sequence in which the camera is rotated and translated (6'41'' video at 30 FPS over a 4.2 km trajectory). The drone flight data (including the IMU measurements) are also provided, but they are not synchronized with the dual-fisheye camera.
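Because the two streams are unsynchronized, a user typically has to align them; the sketch below pairs each camera frame with the nearest IMU sample by timestamp, assuming both streams expose timestamps in a common clock (the dataset does not guarantee this, so a clock offset may have to be estimated first):

```python
import numpy as np

def nearest_indices(imu_times, frame_times):
    """For each camera frame timestamp, index of the closest IMU sample.
    Assumes both arrays are sorted and expressed in the same clock."""
    idx = np.searchsorted(imu_times, frame_times)
    idx = np.clip(idx, 1, len(imu_times) - 1)
    # Choose between the left and right neighbor, whichever is closer.
    left_closer = (frame_times - imu_times[idx - 1]) < (imu_times[idx] - frame_times)
    return idx - left_closer.astype(int)
```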

For more details about the organization of the SVMIS dataset, see Section V in the paper:

Guillaume Caron and Fabio Morbidi, Spherical Visual Gyroscope for Autonomous Robots using the Mixture of Photometric Potentials, in Proc. of IEEE Int. Conf. on Robotics and Automation, ICRA, pp. 820-827, May 2018. PDF

The SVMIS dataset (1.6 GB) is available for download here: SVMIS_dataset.zip.

In response to requests, we have slightly extended the SVMIS dataset. Two videos (810 MB each), which are the equirectangular conversions of the dual-fisheye video acquired with the Disco drone, are now available. The conversion was done with the Ricoh Theta desktop software, with or without the top/down correction.
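For readers who prefer to reproduce something similar themselves, the sketch below maps one fisheye half to its share of an equirectangular panorama, assuming an ideal equidistant fisheye model with a 190° field of view; the real camera requires its calibrated model, and the Ricoh Theta software remains the reference for these videos.

```python
import numpy as np

def equirect_from_fisheye(fisheye, fov_deg=190.0):
    """Map one square fisheye image (front hemisphere, ideal equidistant
    model) to its half of an equirectangular panorama. Idealized sketch;
    the camera model and sign conventions are simplifying assumptions."""
    h_out, w_out = fisheye.shape[0], 2 * fisheye.shape[0]
    # Longitude/latitude of each output panorama pixel.
    lon = (np.arange(w_out) / w_out - 0.5) * 2.0 * np.pi   # [-pi, pi)
    lat = (0.5 - np.arange(h_out) / h_out) * np.pi         # (pi/2, -pi/2]
    lon, lat = np.meshgrid(lon, lat)
    # Unit viewing ray per pixel; the fisheye looks along +z.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    # Equidistant model: image radius grows linearly with the angle
    # from the optical axis, reaching the image border at fov/2.
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    phi = np.arctan2(y, x)
    r = theta / np.radians(fov_deg / 2.0)
    u = (0.5 + 0.5 * r * np.cos(phi)) * (fisheye.shape[1] - 1)
    v = (0.5 + 0.5 * r * np.sin(phi)) * (fisheye.shape[0] - 1)
    # Nearest-neighbor sampling, only inside the fisheye circle.
    valid = r <= 1.0
    ui = np.clip(np.round(u).astype(int), 0, fisheye.shape[1] - 1)
    vi = np.clip(np.round(v).astype(int), 0, fisheye.shape[0] - 1)
    out = np.zeros((h_out, w_out) + fisheye.shape[2:], dtype=fisheye.dtype)
    out[valid] = fisheye[vi[valid], ui[valid]]
    return out
```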

OVMIS: Omnidirectional Vision dataset of the MIS laboratory

The OVMIS dataset, created for the evaluation of visual compasses in structured and unstructured environments, was generated with a hypercatadioptric camera mounted on the end-effector of a Stäubli manipulator and on a Pioneer mobile robot.

Hypercatadioptric camera mounted on an industrial manipulator | Pioneer 3AT mobile robot embedding a hypercatadioptric camera

Target images showing the calibration rigs are provided to calibrate the hypercatadioptric camera.

3600 images sampling a horizontal disk of the manipulator workspace, with the camera frame poses measured by the Stäubli manipulator, are provided. These images were obtained by a pure rotation of the camera with a 2.5° step around its optical axis, i.e. 360°/2.5° = 144 images at each of 25 collection positions.

Furthermore, three experimental scenarios with the Pioneer mobile robot led to the following datasets:

  • Scenario 1: indoor, pure rotation (160 images acquired during 49.69 s)
  • Scenario 2: outdoor, pure rotation (156 images acquired during 50.22 s)
  • Scenario 3: outdoor, rotation and translation (318 images acquired during 160.99 s for a 25.27 m long trajectory; mean speed: 0.187 m/s)

For each scenario, the ground truth (Pioneer's odometry corrected with the gyroscopic measurements) is provided.
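The paper below estimates the rotation by phase correlation between omnidirectional images; as a minimal illustration of the principle (not the paper's actual implementation), assume the images have been unwrapped into panoramas, so that a rotation about the vertical axis becomes a horizontal circular shift:

```python
import numpy as np

def compass_angle_deg(pano_ref, pano_cur):
    """Rotation between two unwrapped panoramas, estimated as the
    horizontal circular shift maximizing the phase correlation.
    Illustration of the principle only; the sign convention depends
    on which image is taken as the reference."""
    F_ref = np.fft.fft2(pano_ref)
    F_cur = np.fft.fft2(pano_cur)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross = F_ref * np.conj(F_cur)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    _, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # One full image width corresponds to a 360 degree rotation.
    width = pano_ref.shape[1]
    if dx > width / 2:   # wrap the shift to [-width/2, width/2)
        dx -= width
    return 360.0 * dx / width
```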

For more details on the organization of the OVMIS dataset, see Section IV.A in the paper:

Fabio Morbidi and Guillaume Caron, Phase Correlation for Dense Visual Compass from Omnidirectional Camera-Robot Images, IEEE Robotics and Automation Letters, RA-L, vol. 2, no. 2, pp. 688-695, April 2017. PDF

The OVMIS dataset (2.72 GB) is available for download here: OVMIS_dataset.zip.