Instructions for using jfkback/hypencoder.8_layer with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use jfkback/hypencoder.8_layer with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="jfkback/hypencoder.8_layer")
```

```python
# Load model directly. HypencoderDualEncoder is a custom architecture and is
# not importable from the transformers library itself; the import path below
# assumes the authors' hypencoder package is installed.
from hypencoder_cb.modeling.hypencoder import HypencoderDualEncoder

model = HypencoderDualEncoder.from_pretrained("jfkback/hypencoder.8_layer", torch_dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
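Beyond loading the model, a minimal end-to-end scoring sketch is shown below: it tokenizes a query and a passage, encodes each side, and applies the query-generated network (the q-net) to the passage embedding to produce a relevance score. The attribute names (`query_encoder`, `passage_encoder`, `.representation`) follow the authors' hypencoder repository and are assumptions here, not part of the transformers API.

```python
# A hedged retrieval-scoring sketch. Assumes the authors' hypencoder package
# is installed; query_encoder / passage_encoder / .representation come from
# its README and are not transformers APIs.
from hypencoder_cb.modeling.hypencoder import HypencoderDualEncoder
from transformers import AutoTokenizer

model = HypencoderDualEncoder.from_pretrained("jfkback/hypencoder.8_layer")
tokenizer = AutoTokenizer.from_pretrained("jfkback/hypencoder.8_layer")

query_inputs = tokenizer("how do hypernetworks work", return_tensors="pt")
passage_inputs = tokenizer(
    "A hypernetwork is a neural network that generates the weights of another network.",
    return_tensors="pt",
)

# The query encoder outputs a small query-specific network (a q-net) rather
# than a vector; the passage encoder outputs an ordinary dense embedding.
q_net = model.query_encoder(
    input_ids=query_inputs["input_ids"],
    attention_mask=query_inputs["attention_mask"],
).representation
passage_embedding = model.passage_encoder(
    input_ids=passage_inputs["input_ids"],
    attention_mask=passage_inputs["attention_mask"],
).representation

# Relevance is computed by running the passage embedding through the q-net.
score = q_net(passage_embedding)
print(score)
```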
The model's `config.json`:

```json
{
  "architectures": [
    "HypencoderDualEncoder"
  ],
  "base_encoder_output_dim": 768,
  "loss_kwargs": [
    {}
  ],
  "loss_type": [
    "margin_mse"
  ],
  "passage_encoder_kwargs": {
    "model_name_or_path": "google-bert/bert-base-uncased"
  },
  "passage_encoder_type": "",
  "query_encoder_kwargs": {
    "converter_kwargs": {
      "activation_type": "relu",
      "vector_dimensions": [
        768,
        768,
        768,
        768,
        768,
        768,
        768,
        768,
        768,
        1
      ]
    },
    "model_name_or_path": "google-bert/bert-base-uncased"
  },
  "query_encoder_type": "",
  "shared_encoder": true,
  "torch_dtype": "float32",
  "transformers_version": "4.48.2"
}
```
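One plausible reading of `converter_kwargs`: the `vector_dimensions` list describes the shape of the query-generated network, nine linear maps in total (eight 768-to-768 hidden layers plus a final 768-to-1 output, matching the "8_layer" name) with ReLU activations between them. In the real model those weights are generated per query by the Hypencoder; the plain PyTorch sketch below only illustrates the q-net's architecture under that reading, not the model's actual code.

```python
import torch
from torch import nn

# Shape of the per-query network implied by converter_kwargs above.
# Note: in the actual model these weights are *generated* by the query
# encoder for each query; this sketch shows structure only.
vector_dimensions = [768, 768, 768, 768, 768, 768, 768, 768, 768, 1]

layers = []
for in_dim, out_dim in zip(vector_dimensions[:-1], vector_dimensions[1:]):
    layers.append(nn.Linear(in_dim, out_dim))
    if out_dim != 1:  # ReLU between hidden layers; none after the scalar output (assumed)
        layers.append(nn.ReLU())
q_net = nn.Sequential(*layers)

# A 768-dim passage embedding in, a scalar relevance score out.
passage_embedding = torch.randn(1, 768)
print(q_net(passage_embedding).shape)  # torch.Size([1, 1])
```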