---
license: cc-by-4.0
task_categories:
  - automatic-speech-recognition
  - audio-classification
language:
  - en
tags:
  - shouts
  - emotional_speech
  - distance_speech
  - smartphone_recordings
  - nonsense_phrases
  - non-native_accents
  - regional_accents
pretty_name: B(asic) E(motion) R(andom phrase) S(hou)t(s)
size_categories:
  - 1K<n<10K
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 48000
    - name: user_id
      dtype: string
    - name: age
      dtype: string
    - name: current_language
      dtype: string
    - name: first_language
      dtype: string
    - name: gender
      dtype: string
    - name: phone_model
      dtype: string
    - name: audio_id
      dtype: string
    - name: affect
      dtype: string
    - name: last_modified
      dtype: string
    - name: phone_position
      dtype: string
    - name: script
      dtype: string
    - name: shout_level
      dtype: string
  splits:
    - name: train
      num_bytes: 956572664.583
      num_examples: 3503
    - name: test
      num_bytes: 140282965
      num_examples: 532
    - name: validation
      num_bytes: 143434236
      num_examples: 488
  download_size: 1055596177
  dataset_size: 1240289865.583
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
---

# BERSt Dataset

We release the BERSt Dataset for various speech recognition tasks, including Automatic Speech Recognition (ASR) and Speech Emotion Recognition (SER).

Read the paper [here](https://www.sciencedirect.com/science/article/pii/S0885230825000403).

## Overview

- 4526 single-phrase recordings (~3.75 h)
- 98 professional actors
- 19 phone positions
- 7 emotion classes
- 3 vocal intensity levels
- varied regional and non-native English accents
- nonsense phrases covering all English phonemes
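
A minimal loading sketch with the Hugging Face `datasets` library. The repo id `chocobearz/BERSt` is an assumption based on the hosting page and may differ; the field names follow the dataset features listed in the metadata above.

```python
# Minimal loading sketch (repo id assumed; adjust if the dataset lives elsewhere).
from datasets import load_dataset

ds = load_dataset("chocobearz/BERSt")

print(ds)                        # DatasetDict with train / test / validation splits
sample = ds["train"][0]
print(sample["script"])          # nonsense phrase the actor was asked to say
print(sample["affect"])          # prompted emotion label
print(sample["shout_level"])     # vocal intensity level
print(sample["phone_position"])  # where the phone was placed
audio = sample["audio"]          # dict with "array", "path" and "sampling_rate" (48 kHz)
```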

## Data collection

The BERSt dataset was collected in home environments using a variety of smartphone microphones (the phone model is available as metadata). The recordings come from professional actors around the globe and cover varied regional English accents (UK, Canada, multi-state USA, Australia), as well as a subset of non-native English speakers (e.g. French, Russian, Hindi). The data includes 13 nonsense phrases for use cases that must be robust to linguistic context and high surprisal. Participants were prompted to speak, raise their voice, and shout each phrase while moving their phone to various distances and locations in their home, as well as with various obstructions to the microphone, e.g. in a backpack.

Baseline results from various state-of-the-art methods for ASR and SER show that this dataset remains a challenging task, and we encourage researchers to use this data to fine-tune and benchmark their models in these difficult conditions, which represent possible real-world situations.

Affect annotations are those provided to the actors as prompts; they have not been validated through perception studies. The speech annotations, however, have been checked and adjusted for mistakes in the produced speech.

## Data splits and organisation

For each phone position and phrase, the actors provided a single recording covering all three vocal intensity levels; these raw audio files are available.

Metadata in CSV format corresponds to the files split per utterance, with noise and silence before and after the speech removed; these clips are found inside clean_clips for each data split.

We provide train, test, and validation splits.

There is no speaker cross-over between splits; the test and validation sets each contain 10 speakers not seen in the training set.
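
A quick sanity-check sketch for the speaker-disjointness of the splits, using the `user_id` field (repo id assumed as above):

```python
# Sketch: verify there is no speaker cross-over between splits via the user_id column.
from datasets import load_dataset

ds = load_dataset("chocobearz/BERSt")  # repo id assumed
speakers = {split: set(ds[split]["user_id"]) for split in ds}
assert speakers["test"].isdisjoint(speakers["train"])
assert speakers["validation"].isdisjoint(speakers["train"])
assert speakers["test"].isdisjoint(speakers["validation"])
print({split: len(ids) for split, ids in speakers.items()})  # unique speakers per split
```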

## Baseline Results

Automatic speech recognition: word error rate (WER), character error rate (CER), and phone error rate (PER)

| Model | WER ↓ | CER ↓ | PER ↓ |
|---|---|---|---|
| Whisper - medium.en | 17.27% | 7.81% | 7.80% |
| Whisper - turbo | 17.93% | 7.28% | 7.30% |
| NeMo Quartznet | 39.49% | 15.24% | 15.77% |
| NeMo Fastconformer Transducer | 24.96% | 10.72% | 10.13% |
| Wav2Vec2-Base-960h | 49.65% | 18.94% | 19.90% |

(Plots: WER broken down by emotion, by distance, and by shout level.)
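
A rough sketch of how figures like these can be reproduced with off-the-shelf tools; this is not the paper's exact evaluation pipeline. It transcribes a small subset of clips with a pretrained Whisper model via `transformers` and scores WER/CER against the `script` field with `jiwer` (the repo id and subset size are placeholders, and PER is omitted since it needs a phonemizer).

```python
# Sketch: score an off-the-shelf ASR model on a small BERSt subset with jiwer.
import jiwer
import torch
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("chocobearz/BERSt", split="test")  # repo id assumed
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-medium.en",
    device=0 if torch.cuda.is_available() else -1,
)

refs, hyps = [], []
for ex in ds.select(range(16)):  # tiny subset, for illustration only
    audio = ex["audio"]
    out = asr({"raw": audio["array"], "sampling_rate": audio["sampling_rate"]})
    refs.append(ex["script"].lower())
    hyps.append(out["text"].lower())

print("WER:", jiwer.wer(refs, hyps))
print("CER:", jiwer.cer(refs, hyps))
```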

Speech emotion recognition: weighted accuracy (WA) and unweighted accuracy (UA)

| Model | UA ↑ | WA ↑ |
|---|---|---|
| SpeechBrain Wav2Vec2 | 20.7% | 20.8% |
| DAWN-hidden-SVM | 32.1% | 32.2% |
| Wav2Small-VAD-SVM* | 23.3% | 22.3% |

*Teacher model

SVM indicates an SVM trained on the hidden layers or VAD output; see the paper for details.
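
Assuming the common SER convention that unweighted accuracy (UA) is the mean per-class recall and weighted accuracy (WA) is plain overall accuracy, the two metrics can be computed as in this small sketch (the labels below are hypothetical and only illustrate the computation):

```python
# Sketch: unweighted (UA) vs. weighted (WA) accuracy with scikit-learn.
from sklearn.metrics import accuracy_score, balanced_accuracy_score

y_true = ["anger", "joy", "fear", "neutral", "sadness", "neutral"]   # hypothetical references
y_pred = ["anger", "fear", "fear", "neutral", "neutral", "neutral"]  # hypothetical predictions

wa = accuracy_score(y_true, y_pred)            # overall accuracy
ua = balanced_accuracy_score(y_true, y_pred)   # mean per-class recall
print(f"WA: {wa:.1%}  UA: {ua:.1%}")
```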

## Metadata Details

- Actor count
  - 98
- Gender counts
  - Woman: 61
  - Man: 34
  - Non-Binary: 1
  - Prefer not to disclose: 2
- Current daily language counts
  - English: 95
  - Norwegian: 1
  - Russian: 1
  - French: 1
- First language counts
  - English: 75
  - Non-English: 23
    - Spanish: 6
    - French: 3
    - Portuguese: 3
    - Chinese: 2
    - Norwegian: 1
    - Mandarin: 1
    - Tagalog: 1
    - Italian: 1
    - Hungarian: 1
    - Russian: 1
    - Hindi: 1
    - Swahili: 1
    - Croatian: 1

Pre-split data counts:

- Emotion counts
  - fear: 236
  - neutral: 234
  - disgust: 232
  - joy: 224
  - anger: 223
  - surprise: 210
  - sadness: 201
- Distance counts
  - Near body: 627
  - 1-2m away: 324
  - Other side of room: 316
  - Outside of room: 293
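
These distributions can be recomputed from the metadata columns; a quick sketch (repo id assumed as above; note that the list above gives pre-split counts, so per-clip numbers will differ):

```python
# Sketch: recompute label distributions from the metadata columns.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("chocobearz/BERSt")  # repo id assumed
affect_counts = Counter(sum((ds[split]["affect"] for split in ds), []))
gender_per_speaker = {
    uid: gender
    for split in ds
    for uid, gender in zip(ds[split]["user_id"], ds[split]["gender"])
}
print(affect_counts)                         # emotion label counts over clips
print(Counter(gender_per_speaker.values()))  # gender counts over unique speakers
```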

## Cite as

```bibtex
@article{tuttösí2025berstingscreamsbenchmarkdistanced,
  title   = {BERSting at the Screams: A Benchmark for Distanced, Emotional and Shouted Speech Recognition},
  author  = {Paige Tuttösí and Mantaj Dhillon and Luna Sang and Shane Eastwood and Poorvi Bhatia and Quang Minh Dinh and Avni Kapoor and Yewon Jin and Angelica Lim},
  journal = {Computer Speech \& Language},
  volume  = {95},
  pages   = {101815},
  year    = {2026},
  issn    = {0885-2308},
  doi     = {10.1016/j.csl.2025.101815},
  url     = {https://www.sciencedirect.com/science/article/pii/S0885230825000403},
}
```