Update README.md

README.md (changed)

@@ -65,6 +65,32 @@ ROVR Open Dataset includes various data types to support different autonomous dr
</tr>
</table>

## Omni-Quad Dataset

We are excited to introduce the **Omni-Quad Dataset**, a surround-view extension of the ROVR Open Dataset.
It is built using **4 LightCone (LC) units** mounted in the **front, rear, left, and right** directions, forming a synchronized **360° multi-view LiDAR–camera perception system**.

The Omni-Quad Dataset provides high-fidelity 3D perception data with dense, multi-directional coverage, making it well suited for:

- Surround-view depth estimation
- Multi-view LiDAR fusion (see the sketch after this list)
- 360° perception and scene understanding
- Panoptic and semantic analysis
- Autonomous driving and robotics research
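
As a rough, unofficial illustration of the multi-view LiDAR fusion use case above, the sketch below merges four per-sensor point clouds into a single 360° cloud in the ego frame using 4×4 extrinsic transforms. The sensor names, the `EXTRINSICS` values, and the random input arrays are placeholders for illustration only; they are not the dataset's actual calibration, file format, or loading API.

```python
import numpy as np

# Hypothetical extrinsics: 4x4 transforms from each LC sensor frame to the
# ego (vehicle) frame. Real values would come from the dataset's calibration.
def yaw_transform(yaw_deg: float) -> np.ndarray:
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

EXTRINSICS = {
    "front": yaw_transform(0.0),
    "left":  yaw_transform(90.0),
    "rear":  yaw_transform(180.0),
    "right": yaw_transform(-90.0),
}

def fuse_surround_view(clouds: dict[str, np.ndarray]) -> np.ndarray:
    """Merge per-sensor (N, 3) point clouds into one 360° cloud in the ego frame."""
    fused = []
    for name, points in clouds.items():
        T = EXTRINSICS[name]
        homog = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
        fused.append((homog @ T.T)[:, :3])                          # sensor -> ego
    return np.vstack(fused)

# Example with random stand-in points; real clouds would be loaded per sensor.
clouds = {name: np.random.rand(1000, 3) * 50 for name in EXTRINSICS}
fused = fuse_surround_view(clouds)
print(fused.shape)  # (4000, 3)
```

With real data, the per-sensor clouds and extrinsics would instead be read from whatever calibration and point-cloud files the release provides.
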
### Key Characteristics

- **Full 360° coverage** using four synchronized LC sensors
- **High-density multi-view LiDAR point clouds**
- **Synchronized camera–LiDAR data** ideal for multimodal fusion (see the projection sketch below)
- **Globally collected** across diverse real-world environments
- Suitable for **fusion-based perception, 3D reconstruction, and scene analysis**
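
As a minimal sketch of what synchronized camera–LiDAR data enables, the snippet below projects ego-frame LiDAR points into one camera image with a simple pinhole model. The intrinsic matrix `K`, the extrinsic `T_ego_to_cam`, and the image resolution are assumed placeholder values, not calibration shipped with the dataset.

```python
import numpy as np

def project_to_image(points_ego: np.ndarray, T_ego_to_cam: np.ndarray,
                     K: np.ndarray, img_w: int, img_h: int) -> np.ndarray:
    """Project (N, 3) ego-frame points to pixel coordinates.

    Returns the (M, 2) pixel positions of points that fall inside the image
    and lie in front of the camera.
    """
    homog = np.hstack([points_ego, np.ones((points_ego.shape[0], 1))])
    cam = (homog @ T_ego_to_cam.T)[:, :3]   # points in the camera frame
    cam = cam[cam[:, 2] > 0.1]              # keep points in front of the camera
    uvw = cam @ K.T                         # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < img_w) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    return uv[inside]

# Placeholder intrinsics/extrinsics for illustration only.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
T_ego_to_cam = np.eye(4)  # identity stands in for a real calibrated extrinsic
pixels = project_to_image(np.random.rand(5000, 3) * 40 - 20, T_ego_to_cam, K, 1920, 1080)
print(pixels.shape)
```
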
Below is a preview GIF showing the fused 360° surround-view point cloud output:

<div align="center">
<img src="https://raw.githubusercontent.com/rovr-network/ROVR-Open-Dataset/main/images/Omni-Quad%20Dataset.gif" width="600"/>
</div>

## ROVR Open Dataset Overview

### Dataset Volume