SentenceTransformer based on distilbert/distilroberta-base
This is a sentence-transformers model finetuned from distilbert/distilroberta-base on the sentence-transformers/all-nli dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: distilbert/distilroberta-base
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: sentence-transformers/all-nli
- Language: en
Model Sources
- Documentation: https://sbert.net
- Repository: https://github.com/UKPLab/sentence-transformers
- Hugging Face: https://huggingface.co/models?library=sentence-transformers
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
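The Pooling module above uses mean pooling (`pooling_mode_mean_tokens: True`): a sentence embedding is the average of its token embeddings, ignoring padding. A minimal sketch of that computation, assuming PyTorch tensors shaped like the transformer's output:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average the embeddings of real (non-padding) tokens, as the Pooling module does."""
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum embeddings of real tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens per sentence
    return summed / counts                         # (batch, 768)

# Toy check: batch of 2 sequences, 4 tokens each, hidden size 768
tokens = torch.randn(2, 4, 768)
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])
print(mean_pool(tokens, mask).shape)  # torch.Size([2, 768])
```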
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("tomaarsen/distilroberta-base-nli-adaptive-layer")
# Run inference
sentences = [
    'Introduction',
    'Analytical Perspectives.',
    'A man reads the paper.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Compute pairwise similarity scores between all embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
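Because this model was trained with AdaptiveLayerLoss, embeddings from earlier transformer layers are also trained to be meaningful, so you can trade some accuracy for speed by truncating layers before encoding. A minimal sketch, assuming the standard Hugging Face RobertaModel internals (where `encoder.layer` is a `ModuleList` of 6 layers for distilroberta-base):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("tomaarsen/distilroberta-base-nli-adaptive-layer")
# Keep only the first 3 of the 6 transformer layers; encoding becomes roughly
# twice as fast, at some cost in embedding quality.
model[0].auto_model.encoder.layer = model[0].auto_model.encoder.layer[:3]

embeddings = model.encode(["A man reads the paper."])
print(embeddings.shape)
# (1, 768)
```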
Evaluation
Metrics
Semantic Similarity (sts-dev)

| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.8456 |
| spearman_cosine    | 0.8486 |
| pearson_manhattan  | 0.8475 |
| spearman_manhattan | 0.8506 |
| pearson_euclidean  | 0.8495 |
| spearman_euclidean | 0.8527 |
| pearson_dot        | 0.7867 |
| spearman_dot       | 0.7816 |
| pearson_max        | 0.8495 |
| spearman_max       | 0.8527 |
Semantic Similarity (sts-test)

| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.8183 |
| spearman_cosine    | 0.8148 |
| pearson_manhattan  | 0.8132 |
| spearman_manhattan | 0.8088 |
| pearson_euclidean  | 0.8148 |
| spearman_euclidean | 0.8105 |
| pearson_dot        | 0.75   |
| spearman_dot       | 0.735  |
| pearson_max        | 0.8183 |
| spearman_max       | 0.8148 |
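These metric names (pearson_cosine, spearman_max, and so on) match the output of the EmbeddingSimilarityEvaluator in Sentence Transformers. A sketch of reproducing this style of evaluation, assuming the sentence-transformers/stsb dataset with sentence1, sentence2, and score columns (scores already normalized to [0, 1]):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("tomaarsen/distilroberta-base-nli-adaptive-layer")

# Assumption: the STS Benchmark validation split serves as the "sts-dev" data
stsb = load_dataset("sentence-transformers/stsb", split="validation")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=stsb["sentence1"],
    sentences2=stsb["sentence2"],
    scores=stsb["score"],
    name="sts-dev",
)
print(evaluator(model))  # dict of Pearson/Spearman values per similarity function
```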
Training Details
Training Dataset
sentence-transformers/all-nli
- Dataset: sentence-transformers/all-nli at e587f0c
- Size: 557,850 training samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:
|         | anchor                                            | positive                                         | negative                                         |
|:--------|:--------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|
| type    | string                                            | string                                           | string                                           |
| details | min: 7 tokens, mean: 10.38 tokens, max: 45 tokens | min: 6 tokens, mean: 12.8 tokens, max: 39 tokens | min: 6 tokens, mean: 13.4 tokens, max: 50 tokens |
- Samples:
| anchor | positive | negative |
|:-------|:---------|:---------|
| A person on a horse jumps over a broken down airplane. | A person is outdoors, on a horse. | A person is at a diner, ordering an omelette. |
| Children smiling and waving at camera | There are children present | The kids are frowning |
| A boy is jumping on skateboard in the middle of a red bridge. | The boy does a skateboarding trick. | The boy skates down the sidewalk. |
- Loss: AdaptiveLayerLoss with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "n_layers_per_step": 1,
      "last_layer_weight": 1.0,
      "prior_layers_weight": 1.0,
      "kl_div_weight": 1.0,
      "kl_temperature": 0.3
  }
  ```
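For reference, a minimal sketch of how this loss would be constructed in Sentence Transformers, using the triplet subset of the dataset (whose columns match the anchor/positive/negative samples above):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import AdaptiveLayerLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("distilbert/distilroberta-base")

# The "triplet" subset carries the anchor/positive/negative columns shown above
train_dataset = load_dataset(
    "sentence-transformers/all-nli", "triplet", split="train", revision="e587f0c"
)

# AdaptiveLayerLoss wraps the inner ranking loss so that earlier transformer
# layers, not just the last one, are trained to produce useful embeddings.
loss = AdaptiveLayerLoss(
    model=model,
    loss=MultipleNegativesRankingLoss(model),
    n_layers_per_step=1,
    last_layer_weight=1.0,
    prior_layers_weight=1.0,
    kl_div_weight=1.0,
    kl_temperature=0.3,
)
```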
Evaluation Dataset
sentence-transformers/all-nli
- Dataset: sentence-transformers/all-nli at e587f0c
- Size: 6,584 evaluation samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:
|         | anchor                                            | positive                                         | negative                                          |
|:--------|:--------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|
| type    | string                                            | string                                           | string                                            |
| details | min: 6 tokens, mean: 18.02 tokens, max: 66 tokens | min: 5 tokens, mean: 9.81 tokens, max: 29 tokens | min: 5 tokens, mean: 10.37 tokens, max: 29 tokens |
- Samples:
| anchor | positive | negative |
|:-------|:---------|:---------|
| Two women are embracing while holding to go packages. | Two woman are holding packages. | The men are fighting outside a deli. |
| Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink. | Two kids in numbered jerseys wash their hands. | Two kids in jackets walk to school. |
| A man selling donuts to a customer during a world exhibition event held in the city of Angeles | A man selling donuts to a customer. | A woman drinks her coffee in a small cafe. |
- Loss: AdaptiveLayerLoss with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "n_layers_per_step": 1,
      "last_layer_weight": 1.0,
      "prior_layers_weight": 1.0,
      "kl_div_weight": 1.0,
      "kl_temperature": 0.3
  }
  ```
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 128
- per_device_eval_batch_size: 128
- num_train_epochs: 1
- warmup_ratio: 0.1
- fp16: True
- batch_sampler: no_duplicates
All Hyperparameters
```
overwrite_output_dir: False
do_predict: False
eval_strategy: steps
prediction_loss_only: False
per_device_train_batch_size: 128
per_device_eval_batch_size: 128
per_gpu_train_batch_size: None
per_gpu_eval_batch_size: None
gradient_accumulation_steps: 1
eval_accumulation_steps: None
learning_rate: 5e-05
weight_decay: 0.0
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
max_grad_norm: 1.0
num_train_epochs: 1
max_steps: -1
lr_scheduler_type: linear
lr_scheduler_kwargs: {}
warmup_ratio: 0.1
warmup_steps: 0
log_level: passive
log_level_replica: warning
log_on_each_node: True
logging_nan_inf_filter: True
save_safetensors: True
save_on_each_node: False
save_only_model: False
no_cuda: False
use_cpu: False
use_mps_device: False
seed: 42
data_seed: None
jit_mode_eval: False
use_ipex: False
bf16: False
fp16: True
fp16_opt_level: O1
half_precision_backend: auto
bf16_full_eval: False
fp16_full_eval: False
tf32: None
local_rank: 0
ddp_backend: None
tpu_num_cores: None
tpu_metrics_debug: False
debug: []
dataloader_drop_last: False
dataloader_num_workers: 0
dataloader_prefetch_factor: None
past_index: -1
disable_tqdm: False
remove_unused_columns: True
label_names: None
load_best_model_at_end: False
ignore_data_skip: False
fsdp: []
fsdp_min_num_params: 0
fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
fsdp_transformer_layer_cls_to_wrap: None
accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
deepspeed: None
label_smoothing_factor: 0.0
optim: adamw_torch
optim_args: None
adafactor: False
group_by_length: False
length_column_name: length
ddp_find_unused_parameters: None
ddp_bucket_cap_mb: None
ddp_broadcast_buffers: None
dataloader_pin_memory: True
dataloader_persistent_workers: False
skip_memory_metrics: True
use_legacy_prediction_loop: False
push_to_hub: False
resume_from_checkpoint: None
hub_model_id: None
hub_strategy: every_save
hub_private_repo: False
hub_always_push: False
gradient_checkpointing: False
gradient_checkpointing_kwargs: None
include_inputs_for_metrics: False
eval_do_concat_batches: True
fp16_backend: auto
push_to_hub_model_id: None
push_to_hub_organization: None
mp_parameters:
auto_find_batch_size: False
full_determinism: False
torchdynamo: None
ray_scope: last
ddp_timeout: 1800
torch_compile: False
torch_compile_backend: None
torch_compile_mode: None
dispatch_batches: None
split_batches: None
include_tokens_per_second: False
include_num_input_tokens_seen: False
neftune_noise_alpha: None
optim_target_modules: None
batch_sampler: no_duplicates
multi_dataset_batch_sampler: proportional
```
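Putting the non-default hyperparameters together, here is a minimal sketch of a matching training script using the Sentence Transformers 3.x trainer API listed under Framework Versions. The output directory is illustrative, and the AdaptiveLayerLoss arguments are left at their defaults, which match the parameters documented above:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import AdaptiveLayerLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("distilbert/distilroberta-base")
dataset = load_dataset("sentence-transformers/all-nli", "triplet")
loss = AdaptiveLayerLoss(model=model, loss=MultipleNegativesRankingLoss(model))

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # illustrative path
    num_train_epochs=1,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["dev"],
    loss=loss,
)
trainer.train()
```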
Training Logs
| Epoch  | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:-------|:-----|:--------------|:----------------|:------------------------|:-------------------------|
| 0.0229 | 100  | 7.0517        | 3.9378          | 0.7889                  | -                        |
| 0.0459 | 200  | 4.4877        | 3.8105          | 0.7906                  | -                        |
| 0.0688 | 300  | 4.0315        | 3.6401          | 0.7966                  | -                        |
| 0.0918 | 400  | 3.822         | 3.3537          | 0.7883                  | -                        |
| 0.1147 | 500  | 3.0608        | 2.5975          | 0.7973                  | -                        |
| 0.1376 | 600  | 2.6304        | 2.3956          | 0.7943                  | -                        |
| 0.1606 | 700  | 2.7723        | 2.0379          | 0.8009                  | -                        |
| 0.1835 | 800  | 2.3556        | 1.9645          | 0.7984                  | -                        |
| 0.2065 | 900  | 2.4998        | 1.9086          | 0.8017                  | -                        |
| 0.2294 | 1000 | 2.1834        | 1.8400          | 0.7973                  | -                        |
| 0.2524 | 1100 | 2.2793        | 1.5831          | 0.8102                  | -                        |
| 0.2753 | 1200 | 2.1042        | 1.6485          | 0.8004                  | -                        |
| 0.2982 | 1300 | 2.1365        | 1.7084          | 0.8013                  | -                        |
| 0.3212 | 1400 | 2.0096        | 1.5520          | 0.8064                  | -                        |
| 0.3441 | 1500 | 2.0492        | 1.4917          | 0.8084                  | -                        |
| 0.3671 | 1600 | 1.8764        | 1.5447          | 0.8018                  | -                        |
| 0.3900 | 1700 | 1.8611        | 1.5480          | 0.8046                  | -                        |
| 0.4129 | 1800 | 1.972         | 1.5353          | 0.8075                  | -                        |
| 0.4359 | 1900 | 1.8062        | 1.4633          | 0.8039                  | -                        |
| 0.4588 | 2000 | 1.8565        | 1.4213          | 0.8027                  | -                        |
| 0.4818 | 2100 | 1.8852        | 1.3860          | 0.8002                  | -                        |
| 0.5047 | 2200 | 1.7939        | 1.5468          | 0.7910                  | -                        |
| 0.5276 | 2300 | 1.7398        | 1.6041          | 0.7888                  | -                        |
| 0.5506 | 2400 | 1.8535        | 1.5791          | 0.7949                  | -                        |
| 0.5735 | 2500 | 1.8486        | 1.4871          | 0.7951                  | -                        |
| 0.5965 | 2600 | 1.7379        | 1.5427          | 0.8019                  | -                        |
| 0.6194 | 2700 | 1.7325        | 1.4585          | 0.8087                  | -                        |
| 0.6423 | 2800 | 1.7664        | 1.5264          | 0.7965                  | -                        |
| 0.6653 | 2900 | 1.7517        | 1.6344          | 0.7930                  | -                        |
| 0.6882 | 3000 | 1.8329        | 1.4947          | 0.8008                  | -                        |
| 0.7112 | 3100 | 1.7206        | 1.4917          | 0.8089                  | -                        |
| 0.7341 | 3200 | 1.7138        | 1.4185          | 0.8065                  | -                        |
| 0.7571 | 3300 | 1.3705        | 1.2040          | 0.8446                  | -                        |
| 0.7800 | 3400 | 1.1289        | 1.1363          | 0.8447                  | -                        |
| 0.8029 | 3500 | 1.0174        | 1.1049          | 0.8464                  | -                        |
| 0.8259 | 3600 | 1.0188        | 1.0362          | 0.8466                  | -                        |
| 0.8488 | 3700 | 0.9841        | 1.1391          | 0.8470                  | -                        |
| 0.8718 | 3800 | 0.8466        | 1.0116          | 0.8485                  | -                        |
| 0.8947 | 3900 | 0.9268        | 1.1323          | 0.8488                  | -                        |
| 0.9176 | 4000 | 0.8686        | 1.0296          | 0.8495                  | -                        |
| 0.9406 | 4100 | 0.9255        | 1.1737          | 0.8484                  | -                        |
| 0.9635 | 4200 | 0.7991        | 1.0609          | 0.8486                  | -                        |
| 0.9865 | 4300 | 0.8431        | 0.9976          | 0.8486                  | -                        |
| 1.0    | 4359 | -             | -               | -                       | 0.8148                   |
Environmental Impact
Carbon emissions were measured using CodeCarbon.
- Energy Consumed: 0.244 kWh
- Carbon Emitted: 0.095 kg of CO2
- Hours Used: 0.849 hours
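For context, a minimal sketch of how a CodeCarbon measurement like this can be taken (the EmissionsTracker API is from the codecarbon package; the arithmetic loop is a stand-in for the actual training job):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # samples CPU/GPU/RAM power draw while running
tracker.start()
try:
    sum(i * i for i in range(10_000_000))  # stand-in for the training run
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg of CO2-equivalent
print(f"{emissions_kg:.6f} kg CO2-eq")
```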
Training Hardware
- On Cloud: No
- GPU Model: 1 x NVIDIA GeForce RTX 3090
- CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
- RAM Size: 31.78 GB
Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.0.0.dev0
- Transformers: 4.41.0.dev0
- PyTorch: 2.3.0+cu121
- Accelerate: 0.26.1
- Datasets: 2.18.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
AdaptiveLayerLoss
```bibtex
@misc{li20242d,
    title={2D Matryoshka Sentence Embeddings},
    author={Xianming Li and Zongxi Li and Jing Li and Haoran Xie and Qing Li},
    year={2024},
    eprint={2402.14776},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```