# SAGA: Semantic-Aware Gray color Augmentation for Visible-to-Thermal Domain Adaptation across Multi-View Drone and Ground-Based Vision Systems

<a href="https://arxiv.org/pdf/2504.15728">
  <img src="https://img.shields.io/badge/Paper-arxiv.2504.15728-red" alt="Paper Link" width="190px">
</a>
[Manjunath D](https://scholar.google.com/citations?user=379B-doAAAAJ&hl=en), [Aniruddh Sikdar](https://scholar.google.com/citations?user=FdgpBuoAAAAJ&hl=en&authuser=1), [Prajwal Gurunath](https://scholar.google.com/citations?user=1D-q8wwAAAAJ&hl=en&oi=ao), [Sumanth Udupa](https://scholar.google.com/citations?user=d3cLdNoAAAAJ&hl=en&oi=ao), [Suresh Sundaram](https://scholar.google.com/citations?user=5iAMbhMAAAAJ&hl=en&authuser=1)

Domain-adaptive thermal object detection plays a key role in facilitating visible (RGB)-to-thermal (IR) adaptation by reducing the need for co-registered image pairs and minimizing reliance on large annotated IR datasets. However, inherent limitations of IR images, such as the lack of color and texture cues, pose challenges for RGB-trained models, leading to increased false positives and poor-quality pseudo-labels. To address this, we propose Semantic-Aware Gray color Augmentation (SAGA), a novel strategy for mitigating color bias and bridging the domain gap by extracting object-level features relevant to IR images. Additionally, to validate SAGA on drone imagery, we introduce IndraEye, a multi-sensor (RGB-IR) dataset designed for diverse applications. The dataset contains 5,612 images with 145,666 instances, captured from diverse angles, altitudes, backgrounds, and times of day, offering valuable opportunities for multimodal learning, domain adaptation for object detection and segmentation, and exploration of sensor-specific strengths and weaknesses. IndraEye aims to advance the development of more robust and accurate aerial perception systems, especially in challenging environments. Experimental results show that SAGA significantly improves RGB-to-IR adaptation on autonomous driving benchmarks and the IndraEye dataset, achieving consistent performance gains of +0.4 to +7.6 when integrated with state-of-the-art domain adaptation techniques. The dataset and code are available at https://bit.ly/indraeye.
#### Download the dataset from [here](https://bit.ly/indraeye).
#### Project page: [link](https://sites.google.com/view/indraeye).
### IndraEye dataset structure

```sh
[data]
└── IndraEye_eo-ir_split_version3
    ├── eo
    │   ├── train
    │   │   ├── Annotations   (Pascal VOC format)
    │   │   ├── annotations   (COCO JSON format)
    │   │   ├── images        (.jpg images with individual .json files)
    │   │   ├── labels        (.txt, YOLO format)
    │   │   └── labelTxt      (.txt, DOTA format)
    │   └── val
    │       └── (same layout as train)
    └── ir
        ├── train
        │   ├── Annotations   (Pascal VOC format)
        │   ├── annotations   (COCO JSON format)
        │   ├── images        (.jpg images with individual .json files)
        │   ├── labels        (.txt, YOLO format)
        │   └── labelTxt      (.txt, DOTA format)
        └── val
            └── (same layout as train)
```
Class list (in order of class ID): 0: "backhoe_loader", 1: "bicycle", 2: "bus", 3: "car", 4: "cargo_trike", 5: "ignore", 6: "motorcycle", 7: "person", 8: "rickshaw", 9: "small_truck", 10: "tractor", 11: "truck", 12: "van"
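
The mapping above can be verified directly from the COCO-format annotations. A minimal sketch in Python (the JSON file name under `annotations/` is an assumption; point it at the file in your download):

```python
import json

# Example path; adjust to the split and JSON file name in your download.
ann_file = "IndraEye_eo-ir_split_version3/eo/train/annotations/train.json"

with open(ann_file) as f:
    coco = json.load(f)

# Category IDs should line up with the class list above.
for cat in sorted(coco["categories"], key=lambda c: c["id"]):
    print(f'{cat["id"]}: {cat["name"]}')

print(len(coco["images"]), "images,", len(coco["annotations"]), "annotations")
```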
<h2>SAGA</h2>
<div align="center">
  <img src="/images/SAGA.png" alt="SAGA" style="width:60%;">
  <p>
    Illustration of the SAGA augmentation. Objects are extracted from the image, converted to grayscale, and reintegrated into the original image while preserving background color information.
  </p>
</div>
# SAGA Usage

To convert RGB images to instance-gray images, use the following command:

```bash
python inst_gry.py --coco_json_file /path/to/coco/json --image_directory /path/to/images --inst_gry_directory /path/to/store/images
```
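
For reference, the sketch below reimplements the operation the figure above describes: grayscale each annotated object region and paste it back into the color image, leaving the background untouched. It is a minimal sketch under stated assumptions — bounding boxes rather than segmentation masks, OpenCV for image I/O — and `instance_gray`/`run` are illustrative names, not the actual API of `inst_gry.py`:

```python
import json
import os

import cv2  # pip install opencv-python


def instance_gray(image, boxes):
    """Gray out annotated object regions while keeping the background in color.

    boxes holds COCO-style [x, y, w, h] bounding boxes; using boxes instead
    of segmentation masks is a simplifying assumption of this sketch.
    """
    out = image.copy()
    for x, y, w, h in boxes:
        x, y = max(int(x), 0), max(int(y), 0)
        w, h = int(w), int(h)
        patch = out[y:y + h, x:x + w]
        if patch.size == 0:
            continue
        gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
        # Replicate the gray channel so the patch stays 3-channel.
        out[y:y + h, x:x + w] = cv2.merge([gray, gray, gray])
    return out


def run(coco_json_file, image_directory, inst_gry_directory):
    with open(coco_json_file) as f:
        coco = json.load(f)

    # Group bounding boxes by image id.
    boxes_by_image = {}
    for ann in coco["annotations"]:
        boxes_by_image.setdefault(ann["image_id"], []).append(ann["bbox"])

    os.makedirs(inst_gry_directory, exist_ok=True)
    for info in coco["images"]:
        img = cv2.imread(os.path.join(image_directory, info["file_name"]))
        if img is None:
            continue
        aug = instance_gray(img, boxes_by_image.get(info["id"], []))
        cv2.imwrite(os.path.join(inst_gry_directory, info["file_name"]), aug)
```

The actual script may use segmentation masks for tighter object boundaries; boxes are used here only to keep the example self-contained.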
<table>
  <tr>
    <th><h2>IndraEye Dataset</h2></th>
    <th><h2>Qualitative Comparison</h2></th>
  </tr>
  <tr>
    <td align="center">
      <img src="/images/eo_ir.jpg" alt="IndraEye RGB-IR samples" style="width: 100%;">
      <p>
        IndraEye RGB-IR samples. (a, b): medium altitude with high scale variation and minimal slant angle, daytime.
        (c, d): medium altitude with high scale variation and minimal slant angle, nighttime.
        (e, f): high altitude with a smaller slant angle, covering a large area.
        (g, h): mid altitude with high scale variation.
      </p>
    </td>
    <td align="center">
      <img src="/images/cmt_pred.png" alt="Qualitative comparison of CMT predictions" style="height: 80%; width: 80%;">
      <p>
        Output predictions highlighting the impact of the SAGA augmentation on the CMT algorithm. (a) and (c) show increased false positives with vanilla CMT, while (b) and (d) show the reduction in false positives when SAGA is combined with CMT, showcasing its effectiveness.
      </p>
    </td>
  </tr>
</table>
### License

This repo is released under the CC BY 4.0 license. Please see the LICENSE file for more information.
### Contact

For inquiries, please contact: [email protected]
## Citation

If you use our dataset, code, or results in your research, please consider citing our paper:

```BibTeX
@misc{d2025sagasemanticawaregraycolor,
  title={SAGA: Semantic-Aware Gray color Augmentation for Visible-to-Thermal Domain Adaptation across Multi-View Drone and Ground-Based Vision Systems},
  author={Manjunath D and Aniruddh Sikdar and Prajwal Gurunath and Sumanth Udupa and Suresh Sundaram},
  year={2025},
  eprint={2504.15728},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.15728},
}
```