---
annotations_creators: []
language: en
size_categories:
- 10K<n<100K
---

# Dataset Card for ShowUI-web

## Dataset Structure

Each sample contains:

- `detections`: EmbeddedDocumentField(Detections) containing element bounding boxes:
  - `label`: Element type
  - `bounding_box`: A list of relative coordinates in `[0, 1]` in the format `[<top-left-x>, <top-left-y>, <width>, <height>]`
  - `text`: Text content of element
- `keypoints`: EmbeddedDocumentField(Keypoints) containing interaction points:
  - `label`: Element type (e.g., "ListItem")
  - `points`: A list of `(x, y)` keypoints in `[0, 1] x [0, 1]`
  - `text`: Text content associated with the interaction point

The dataset captures web interface elements and interaction points with detailed text annotations for web interaction research. Each element has both its bounding box coordinates and a corresponding interaction point, allowing for both element detection and precise interaction targeting.

## Dataset Creation

### Curation Rationale

The authors identified that most existing web datasets contain a high proportion of static text elements (around 40%) that provide limited value for training visual GUI agents, since modern Vision-Language Models already possess strong OCR capabilities. Instead, they focused on collecting visually distinctive interactive elements that better enable models to learn UI navigation skills. This selective approach prioritizes quality and relevance over raw quantity.

### Source Data

#### Data Collection and Processing

To build the dataset, the authors:

1. Developed a custom parser using PyAutoGUI
2. Selected 22 representative website scenarios (including Airbnb, Booking, AMD, Apple, etc.)
3. Collected multiple screenshots per scenario to maximize annotation coverage
4. Initially gathered 926,000 element annotations across 22,000 screenshots
5. Filtered out elements classified as static text, retaining 576,000 visually interactive elements
6. Focused on elements tagged with categories like "Button" or "Checkbox"

#### Who are the source data producers?

The data was collected from 22 publicly accessible websites across various domains (e-commerce, technology, travel, etc.). The screenshots and annotations were produced by the authors of the ShowUI paper (Show Lab, National University of Singapore and Microsoft).

## Bias, Risks, and Limitations

The paper does not explicitly discuss biases or limitations specific to this dataset, but potential limitations include:

- Limited to 22 website scenarios, which may not represent the full diversity of web interfaces
- Filtering out static text could limit a model's ability to handle text-heavy interfaces
- Potential overrepresentation of popular or mainstream websites compared to niche or specialized interfaces
- May not capture the full range of web accessibility features or alternative UI designs

### Recommendations

Users should be aware that this dataset deliberately excludes static text elements, which makes it complementary to text-focused datasets but potentially incomplete on its own. For comprehensive web navigation models, it should be used alongside datasets that include text recognition capabilities. Additionally, researchers may want to evaluate whether the 22 selected website scenarios adequately represent their target application domains.

## Citation

**BibTeX:**

```bibtex
@misc{lin2024showui,
      title={ShowUI: One Vision-Language-Action Model for GUI Visual Agent},
      author={Kevin Qinghong Lin and Linjie Li and Difei Gao and Zhengyuan Yang and Shiwei Wu and Zechen Bai and Weixian Lei and Lijuan Wang and Mike Zheng Shou},
      year={2024},
      eprint={2411.17465},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.17465},
}
```

**APA:**

Lin, K. Q., Li, L., Gao, D., Yang, Z., Wu, S., Bai, Z., Lei, W., Wang, L., & Shou, M. Z. (2024).
ShowUI: One Vision-Language-Action Model for GUI Visual Agent. arXiv preprint arXiv:2411.17465.
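
## Usage Example

Below is a minimal sketch of loading and inspecting a dataset with the schema described above using FiftyOne. The Hub repo id (`Voxel51/ShowUI-web`), the per-label `text` attribute, and the one-point-per-`Keypoint` layout are assumptions based on the field descriptions in this card, not a confirmed API for this exact dataset.

```python
# Minimal sketch: load the dataset and convert normalized interaction points
# to pixel coordinates. Repo id and field/attribute names are assumptions.
import fiftyone.utils.huggingface as fouh

# Assumed Hub location; adjust to the dataset's actual repo id
dataset = fouh.load_from_hub("Voxel51/ShowUI-web", max_samples=25)
dataset.compute_metadata()  # populate image width/height for each sample

sample = dataset.first()
w, h = sample.metadata.width, sample.metadata.height

# Interaction points are normalized (x, y) in [0, 1] x [0, 1]
for kp in sample.keypoints.keypoints:
    x, y = kp.points[0]  # assuming one interaction point per Keypoint
    print(kp.label, kp.text, (round(x * w), round(y * h)))
```

Converting the normalized keypoints to pixel coordinates this way is what makes them usable as click targets for an agent operating on the raw screenshot.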
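The static-text filtering step from the curation pipeline (step 5 above) can also be expressed concisely in FiftyOne. Since the released dataset already excludes static text, this is purely illustrative; the `detections` field name and the `"StaticText"` label value are assumptions, so inspect the actual label distribution first. The snippet continues from the one above.

```python
# Hypothetical sketch of the static-text filter described in the curation pipeline
from fiftyone import ViewField as F

# Inspect which element types are present
print(dataset.count_values("detections.detections.label"))

# Keep only visually interactive elements (label value is an assumption)
interactive = dataset.filter_labels("detections", F("label") != "StaticText")
print(len(interactive), "samples with interactive elements")
```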