# AdaptCLIP

Universal visual anomaly detection model based on CLIP with learnable adapters.
## Model Description
AdaptCLIP is a universal (zero-shot and few-shot) anomaly detection framework that combines CLIP's vision-language capabilities with lightweight learnable adapters for open-world anomaly detection on industrial and medical imagery. A rough illustration of the adapter idea follows below.
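The exact adapter architecture is defined in the official repository; as a minimal sketch only, adapters in this style are typically small bottleneck MLPs applied residually on top of frozen CLIP features. The `Adapter` module and its dimensions below are illustrative assumptions, not the AdaptCLIP implementation:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Illustrative bottleneck adapter: projects frozen CLIP features
    down to a small hidden size and back up, with a residual connection.
    (Hypothetical sketch, not the official AdaptCLIP code.)"""

    def __init__(self, dim: int, bottleneck: int = 128):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only the adapter parameters are trained; the CLIP backbone stays frozen.
        return x + self.up(self.act(self.down(x)))
```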
## Model Variants
| Checkpoint | Training Dataset | Description |
|---|---|---|
| `adaptclip_checkpoints/12_4_128_train_on_mvtec_3adapters_batch8/epoch_15.pth` | MVTec-AD | Trained on the MVTec-AD dataset |
| `adaptclip_checkpoints/12_4_128_train_on_visa_3adapters_batch8/epoch_15.pth` | VisA | Trained on the VisA dataset |
## Usage
```python
import torch

# Load the adapter checkpoint on CPU; move tensors to GPU as needed.
checkpoint = torch.load(
    "./adaptclip_checkpoints/12_4_128_train_on_mvtec_3adapters_batch8/epoch_15.pth",
    map_location="cpu",
)
```
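The layout of the saved file is not documented here. As a quick sketch, continuing from the snippet above, you can inspect the stored keys and tensor shapes before wiring the weights into a model; the `"state_dict"` nesting is an assumption, so the code falls back to the raw object:

```python
# Inspect checkpoint contents. Many training scripts nest weights under
# a "state_dict" key (an assumption here); otherwise use the dict as-is.
state = checkpoint.get("state_dict", checkpoint)
for name, value in list(state.items())[:10]:
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(name, shape)
```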
## Citation

If you find this model useful, please cite our work.

```bibtex
@inproceedings{adaptclip,
  title={AdaptCLIP: Adapting CLIP for Universal Visual Anomaly Detection},
  author={Gao, Bin-Bin and Zhou, Yue and Yan, Jiangtao and Cai, Yuezhi and Zhang, Weixi and Wang, Meng and Liu, Jun and Liu, Yong and Wang, Lei and Wang, Chengjie},
  booktitle={AAAI},
  year={2026}
}
```
## License

GPL-2.0
## Base Model

[openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336)
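The adapters attach to this frozen CLIP backbone. Whether the official code loads it via `transformers` or `open_clip` is not stated here; as one possible sketch, using the Hugging Face `transformers` API:

```python
from transformers import CLIPModel, CLIPProcessor

# Load the CLIP ViT-L/14-336 backbone that the adapters build on.
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

# Freeze the backbone; only the lightweight adapters are trained.
for param in model.parameters():
    param.requires_grad = False
```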