Large-Scale Search Recommendation Dataset for Temporal Distribution Shift 🔥🔥🔥
Background
Temporal Distribution Shift (TDS) in real-world recommender systems refers to the phenomenon where the data distribution changes over time, driven by internal interventions (e.g., promotions, product launches) and external shocks (e.g., seasonality, media), degrading model generalization when models are trained under the IID assumption.
In our work, we propose ELBO_TDS [paper|code], a lightweight and theoretically grounded framework for addressing TDS in real-world recommender systems.
To investigate the TDS problem at industrial scale, we release a production-level search recommendation dataset spanning 13 consecutive days and covering heterogeneous feature types (categorical, numerical, sequential) with multiple implicit labels.
Dataset Use Cases
- Temporal Distribution Shift (TDS): temporal (e.g., day-wise) shards with a strict temporal validation/testing protocol
- General large-scale recommendation tasks: Supports CTR/CVR prediction, multi-task learning, and list-wise ranking
Dataset Summary
- Domain: user interaction logs in search scenario
- Time span: 13 consecutive days
- Scale: ~50M samples/day, ~650M samples total
- Unit: Session-level records with a list of item interactions per session
- Intended uses: CTR/ranking models
Dataset Structure
- Storage format: Parquet
- Row and session:
- Row: a user(request)-item pair with its corresponding implicit labels (Click, Add-to-Cart, Purchase).
- Session: a user(request) consists of 12 rows stored consecutively in the Parquet file. User(request) features are identical within each session; items are randomly sampled so that every session has the same length of 12.
- Features:
- 101 user feature fields (all features are converted to discrete IDs)
- 105 item feature fields (all features are converted to discrete IDs)
- Feature Fields Type:
- Categorical (e.g., item_id, user_id): the original categorical features are encrypted, and the semantics of each field are not revealed
- Numerical (e.g., click_count, order_count): numerical features are converted to categorical features by bucketization; bucket IDs are given, indexed from 1, and a larger bucket ID indicates a larger numerical value. The semantics of each field are not revealed
- Sequential (e.g., the user's last interacted items): the elements in each sequence are categorical, encrypted, and in reverse chronological order. The semantics of each field are not revealed
- Labels: each label is an implicit binary signal where 1 denotes positive and 0 denotes negative
  - Click: positive rate ~14.4%
  - Add-to-Cart: positive rate ~1.4%
  - Purchase: positive rate ~0.4%
- Data Type: each feature is stored as an array of int64; for categorical and numerical features, the array length is 1
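Because each session is stored as 12 consecutive rows, per-session tensors can be recovered with a simple reshape. A minimal sketch on a synthetic shard (the label column name `Click` is an assumption here; check the actual schema of the released files):

```python
import numpy as np
import pandas as pd

SESSION_LEN = 12  # fixed session length stated above

def to_sessions(df: pd.DataFrame) -> np.ndarray:
    """Group consecutive rows into sessions of 12 and return click
    labels shaped (num_sessions, SESSION_LEN)."""
    assert len(df) % SESSION_LEN == 0, "shard must contain whole sessions"
    return df['Click'].to_numpy().reshape(-1, SESSION_LEN)

# Tiny synthetic shard: 2 sessions of 12 rows each.
toy = pd.DataFrame({'Click': [1, 0] * 12})
labels = to_sessions(toy)
print(labels.shape)  # (2, 12)
```

The same reshape applies to any per-row column, which is convenient for list-wise ranking losses.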
Table 1: Summary of all feature fields. In the notation f_u_c_1, u indicates a user feature, c indicates a categorical feature, and 1 is the feature index. Similarly, s in f_u_s_1 indicates a sequential feature and n in f_u_n_1 a numerical feature; i in f_i_c_1 indicates an item feature.
| Feature Type | Field Type | Field Name |
|---|---|---|
| User Feature | Categorical | f_u_c_1, f_u_c_2, f_u_c_3, f_u_c_4, f_u_c_5, f_u_c_6, f_u_c_7 |
| User Feature | Numerical | f_u_n_1, f_u_n_2, f_u_n_3, f_u_n_4, f_u_n_5, f_u_n_6, f_u_n_7, f_u_n_8, f_u_n_9, f_u_n_10, f_u_n_11, f_u_n_12, f_u_n_13, f_u_n_14, f_u_n_15, f_u_n_16, f_u_n_17, f_u_n_18, f_u_n_19, f_u_n_20, f_u_n_21, f_u_n_22, f_u_n_23, f_u_n_24, f_u_n_25, f_u_n_26, f_u_n_27, f_u_n_28, f_u_n_29, f_u_n_30, f_u_n_31, f_u_n_32, f_u_n_33, f_u_n_34, f_u_n_35, f_u_n_36, f_u_n_37, f_u_n_38, f_u_n_39, f_u_n_40, f_u_n_41, f_u_n_42, f_u_n_43, f_u_n_44, f_u_n_45, f_u_n_46, f_u_n_47, f_u_n_48, f_u_n_49, f_u_n_50, f_u_n_51, f_u_n_52, f_u_n_53, f_u_n_54, f_u_n_55, f_u_n_56, f_u_n_57, f_u_n_58, f_u_n_59 |
| User Feature | Sequential | f_u_s_1, f_u_s_2, f_u_s_3, f_u_s_4, f_u_s_5, f_u_s_6, f_u_s_7, f_u_s_8, f_u_s_9, f_u_s_10, f_u_s_11, f_u_s_12, f_u_s_13, f_u_s_14, f_u_s_15, f_u_s_16, f_u_s_17, f_u_s_18, f_u_s_19, f_u_s_20, f_u_s_21, f_u_s_22, f_u_s_23, f_u_s_24, f_u_s_25, f_u_s_26, f_u_s_27, f_u_s_28, f_u_s_29, f_u_s_30, f_u_s_31, f_u_s_32, f_u_s_33, f_u_s_34, f_u_s_35 |
| Item Feature | Categorical | f_i_c_1, f_i_c_2, f_i_c_3, f_i_c_4, f_i_c_5, f_i_c_6, f_i_c_7, f_i_c_8, f_i_c_9, f_i_c_10, f_i_c_11 |
| Item Feature | Numerical | f_i_n_1, f_i_n_2, f_i_n_3, f_i_n_4, f_i_n_5, f_i_n_6, f_i_n_7, f_i_n_8, f_i_n_9, f_i_n_10, f_i_n_11, f_i_n_12, f_i_n_13, f_i_n_14, f_i_n_15, f_i_n_16, f_i_n_17, f_i_n_18, f_i_n_19, f_i_n_20, f_i_n_21, f_i_n_22, f_i_n_23, f_i_n_24, f_i_n_25, f_i_n_26, f_i_n_27, f_i_n_28, f_i_n_29, f_i_n_30, f_i_n_31, f_i_n_32, f_i_n_33, f_i_n_34, f_i_n_35, f_i_n_36, f_i_n_37, f_i_n_38, f_i_n_39, f_i_n_40, f_i_n_41, f_i_n_42, f_i_n_43, f_i_n_44, f_i_n_45, f_i_n_46, f_i_n_47, f_i_n_48, f_i_n_49, f_i_n_50, f_i_n_51, f_i_n_52, f_i_n_53, f_i_n_54, f_i_n_55, f_i_n_56, f_i_n_57, f_i_n_58, f_i_n_59, f_i_n_60, f_i_n_61, f_i_n_62, f_i_n_63, f_i_n_64, f_i_n_65, f_i_n_66, f_i_n_67, f_i_n_68, f_i_n_69, f_i_n_70, f_i_n_71, f_i_n_72, f_i_n_73, f_i_n_74, f_i_n_75, f_i_n_76, f_i_n_77, f_i_n_78, f_i_n_79, f_i_n_80, f_i_n_81, f_i_n_82, f_i_n_83, f_i_n_84, f_i_n_85, f_i_n_86, f_i_n_87, f_i_n_88, f_i_n_89, f_i_n_90, f_i_n_91, f_i_n_92, f_i_n_93 |
| Item Feature | Sequential | f_i_s_1 |
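The naming scheme in Table 1 is regular enough to partition a shard's columns by entity and feature type programmatically. A small sketch (the `Click` column is only illustrative of a non-feature column being skipped):

```python
import re
from collections import defaultdict

# f_{u|i}_{c|n|s}_{index}: entity (user/item), type (cat/num/seq), index
FIELD_RE = re.compile(r'^f_(u|i)_(c|n|s)_(\d+)$')

def partition_fields(columns):
    """Group column names like f_u_c_1 into buckets keyed by
    (entity, feature_type), e.g. ('u', 'c')."""
    groups = defaultdict(list)
    for name in columns:
        m = FIELD_RE.match(name)
        if m:
            groups[(m.group(1), m.group(2))].append(name)
    return dict(groups)

cols = ['f_u_c_1', 'f_u_n_1', 'f_i_s_1', 'Click']
print(partition_fields(cols))
# {('u', 'c'): ['f_u_c_1'], ('u', 'n'): ['f_u_n_1'], ('i', 's'): ['f_i_s_1']}
```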
Data Splits
- By day: 13 shards
- Recommended:
- Temporal: train = first N−2 days; valid = N−1; test = N
- Shuffle: only if temporal leakage is acceptable
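The recommended temporal protocol can be written directly as day indices (assuming shards are numbered day 1 through day 13, as in the loading example):

```python
def temporal_split(num_days: int = 13):
    """Day indices for the recommended temporal protocol:
    train = first N-2 days, valid = day N-1, test = day N."""
    train_days = list(range(1, num_days - 1))
    return train_days, num_days - 1, num_days

train_days, valid_day, test_day = temporal_split(13)
print(train_days, valid_day, test_day)  # days 1-11, 12, 13
```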
Data Loading
The data is stored in Parquet format, which can be read with libraries such as pandas or pyarrow. For example:

```python
import pandas as pd
df = pd.read_parquet('data/day1/', engine='pyarrow')

import pyarrow.dataset as ds
dataset = ds.dataset('data/day1/', format="parquet")
```
Citation
If you use this dataset, please cite:
@misc{zhu2025probabilisticframeworktemporaldistribution,
title={A Probabilistic Framework for Temporal Distribution Generalization in Industry-Scale Recommender Systems},
author={Yuxuan Zhu and Cong Fu and Yabo Ni and Anxiang Zeng and Yuan Fang},
year={2025},
eprint={2511.21032},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2511.21032},
}