FruitBin contains more than 1M images and 40M instance-level 6D pose annotations covering both symmetric and asymmetric fruits, with and without texture. Rich annotations and metadata (including 6D pose, segmentation mask, point cloud, 2D and 3D bounding boxes, and occlusion rate) allow the dataset to be tuned for benchmarking the robustness of object instance segmentation and 6D pose estimation models with respect to variations in lighting, texture, occlusion, camera pose and scene. We further propose three scenarios posing significant challenges for 6D pose estimation models: new scene generalization, new camera viewpoint generalization, and occlusion robustness. We report results on these three scenarios for two 6D pose estimation baselines using RGB or RGB-D images. To the best of our knowledge, FruitBin is the first dataset for the challenging task of fruit bin picking and the largest dataset for 6D pose estimation, with the most comprehensive set of challenges, tunable over scenes, camera poses and occlusions.
Estimating fluid dynamics is classically done by simulating and integrating numerical models that solve the Navier-Stokes equations, which is computationally complex and time-consuming even on high-end hardware. This notoriously hard problem has recently been addressed with machine learning, in particular graph neural networks (GNNs) and variants trained and evaluated on datasets of static objects in static scenes with fixed geometry. We go beyond existing work in complexity and introduce a new model, method and benchmark. We propose EAGLE, a large-scale dataset of ∼1.1 million 2D meshes resulting from simulations of unsteady fluid dynamics caused by a moving flow source interacting with nonlinear scene structure, comprising 600 different scenes of three different types. To forecast pressure and velocity on the challenging EAGLE dataset, we introduce a new mesh transformer. It leverages node clustering, graph pooling and global attention to learn long-range dependencies between spatially distant data points without needing the large number of iterations that existing GNN methods require. We show that our transformer outperforms state-of-the-art methods on both existing synthetic and real datasets and on EAGLE. Finally, we highlight that our approach learns to attend to the airflow, integrating complex information in a single iteration.
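The cluster-pool-attend idea above can be illustrated with a toy sketch. This is a hypothetical illustration, not the EAGLE mesh transformer itself: the function name `cluster_pool_attend`, the nearest-center clustering, the mean pooling and the single-head attention are all our own simplifications, assumed only to convey how pooled cluster tokens let one attention pass propagate information between spatially distant mesh nodes.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cluster_pool_attend(pos, feats, n_clusters=16, seed=0):
    """Toy sketch (NOT the EAGLE architecture): pool mesh-node features
    into clusters, run one round of global self-attention over the
    cluster tokens, and scatter the context back to every node."""
    rng = np.random.default_rng(seed)
    n, d = feats.shape
    # 1) Node clustering: assign each node to its nearest center
    #    (centers picked among the nodes for simplicity).
    centers = pos[rng.choice(n, n_clusters, replace=False)]
    assign = np.argmin(((pos[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    # 2) Graph pooling: mean feature per cluster.
    pooled = np.zeros((n_clusters, d))
    np.add.at(pooled, assign, feats)
    counts = np.bincount(assign, minlength=n_clusters).clip(min=1)
    pooled /= counts[:, None]
    # 3) Global attention among the (few) cluster tokens: every cluster
    #    can exchange information with every other in a single step.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = pooled @ Wq, pooled @ Wk, pooled @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))
    ctx = attn @ v
    # 4) Broadcast each cluster's context back to its nodes (residual add).
    return feats + ctx[assign]
```

Because attention runs over a few dozen cluster tokens rather than thousands of mesh nodes, long-range interactions are captured in one pass instead of being propagated hop by hop as in message-passing GNNs.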
A collection of urban data graphs in RDF/OWL formats derived from CityGML Grand Lyon Open data
We provide a large-scale dataset of textured meshes with over 343k stimuli generated from 55 source models, quantitatively characterized in terms of geometric, color, and semantic complexity to ensure their diversity. The dataset covers a wide range of compression-based distortions applied to the geometry, the texture mapping and the texture image. It can be used to train no-reference quality metrics and to develop rate-distortion models for meshes.
From this dataset, we carefully selected a challenging subset of 3000 stimuli, which we annotated in a large-scale crowdsourced subjective experiment based on the double stimulus impairment scale (DSIS) method. Over 148k quality scores were collected from 4513 participants. To the best of our knowledge, this is the largest quality assessment dataset of textured meshes with subjective scores and Mean Opinion Scores (MOS) to date. It is valuable for training and benchmarking quality metrics.
Quality scores for the remaining stimuli in the dataset (i.e. those not involved in the subjective experiment) were predicted (Pseudo-MOS) using Graphics-LPIPS, a deep-learning-based quality metric trained and tested on the subset of annotated stimuli.
This dataset was created at the LIRIS lab, Université de Lyon. It is associated with the following reference; please cite it if you use the dataset.
Yana Nehmé, Florent Dupont, Jean-Philippe Farrugia, Patrick Le Callet, Guillaume Lavoué. Textured Mesh Quality Assessment: Large-Scale Dataset and Deep Learning-based Quality Metric, ArXiv preprint arXiv:2202.02397, 2022.
A set of data samples illustrating the range of data formats and sizes used in the field of Urban Data.
These datasets are the outputs of the cityGMLto3DTiles data pipeline, which transforms CityGML datasets from the Métropole de Lyon into 3D Tiles tilesets. They also include the contents of the 3DCityDB Docker container volume used to produce the 3D Tiles found in this dataset repository: https://datasets.liris.cnrs.fr/3dtiles-tilesets-metropolis-lyon-version1
Instructions for reproducing these datasets manually can be found here: https://github.com/VCityTeam/UD-Reproducibility/tree/master/Computations/3DTiles
This dataset contains the Bidirectional Reflectance Distribution Functions (BRDFs) related to the study presented in the reference below.
Guillaume Lavoué, Nicolas Bonneel, Jean-Philippe Farrugia, Cyril Soler, Perceptual Quality of BRDF Approximations: Dataset and Metrics, Computer Graphics Forum (Eurographics 2021), May 2021.
The dataset consists of 100 source BRDFs (from the MERL-MIT BRDF database), subjected to approximations with different models, producing a total of 2026 BRDFs (including the references). The dataset is provided in two formats: the standard MERL binary format and our own TITOPO format.
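For users of the MERL-format files, here is a minimal sketch of a reader for the standard MERL binary layout: a header of three 32-bit little-endian integers giving the theta_h, theta_d and phi_d sampling resolutions, followed by 3×N float64 values stored channel-major (red block, then green, then blue). The function name `read_merl_brdf` is our own; the TITOPO format is specific to this dataset and is not covered here.

```python
import struct
import numpy as np

def read_merl_brdf(path):
    """Read a BRDF in the standard MERL binary layout.

    Header: three little-endian int32 values (theta_h, theta_d, phi_d
    sampling resolutions). Body: 3 * rh * rd * rp float64 values, one
    dense block per RGB channel. Returns an array of shape (3, rh, rd, rp).
    """
    with open(path, "rb") as f:
        rh, rd, rp = struct.unpack("<3i", f.read(12))
        n = rh * rd * rp
        vals = np.fromfile(f, dtype="<f8", count=3 * n)
    if vals.size != 3 * n:
        raise ValueError("truncated MERL BRDF file")
    return vals.reshape(3, rh, rd, rp)
```

Note that the MERL reference code additionally applies per-channel scale factors when evaluating the stored values; the sketch above returns the raw values as written in the file.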
3D Tiles tilesets of various boroughs of the Métropole de Lyon (data derived from Grand Lyon Open Data)