jialucode committed on
Commit 35b1cb2 · verified · 1 Parent(s): d08f5d9

Update README.md

Files changed (1)
  1. README.md +269 -0
README.md CHANGED
@@ -58,4 +58,273 @@ Example QA:
  - Training diagnostic QA systems for auscultation sounds
  - Benchmarking audio-language models in healthcare
  - Studying generalization across unseen respiratory/cardiac datasets
+
+
+ ---
+ license: <!-- [CUSTOMIZE THIS] Dataset license, e.g. "cc-by-4.0", "cc-by-sa-4.0", "apache-2.0", "mit", etc. -->
+ language:
+ - <!-- [CUSTOMIZE THIS] Primary language of the dataset, e.g. "en", "zh", "multilingual" -->
+ multilinguality:
+ - <!-- [CUSTOMIZE THIS] Whether the dataset is monolingual, multilingual, or translation, e.g. "monolingual", "multilingual", "translation" -->
+ size_categories:
+ - <!-- [CUSTOMIZE THIS] Size category of the dataset, e.g. "10K<n<100K", "100K<n<1M", "1M<n<10M", etc. -->
+ source_datasets:
+ - <!-- [CUSTOMIZE THIS] Source of the dataset, e.g. "original", "extended", "derived" -->
+ task_categories:
+ - <!-- [CUSTOMIZE THIS] Task categories, e.g. "text-classification", "question-answering", "image-classification", etc. -->
+ task_ids:
+ - <!-- [CUSTOMIZE THIS] Specific task IDs, e.g. "sentiment-classification", "topic-classification", etc. -->
+ paperswithcode_id: <!-- [CUSTOMIZE THIS] Optional: paperswithcode ID if applicable -->
+ dataset_info:
+   config_name: <!-- [CUSTOMIZE THIS] Configuration name, e.g. "default" -->
+   features:
+     - name: <!-- [CUSTOMIZE THIS] Feature name, e.g. "text" -->
+       dtype: <!-- [CUSTOMIZE THIS] Data type, e.g. "string" -->
+     - name: <!-- [CUSTOMIZE THIS] Feature name, e.g. "label" -->
+       dtype: <!-- [CUSTOMIZE THIS] Data type, e.g. "int64" -->
+     # Add more features as needed
+   splits:
+     - name: train
+       num_examples: <!-- [CUSTOMIZE THIS] Number of examples in train split, e.g. 8000 -->
+     - name: validation
+       num_examples: <!-- [CUSTOMIZE THIS] Number of examples in validation split, e.g. 1000 -->
+     - name: test
+       num_examples: <!-- [CUSTOMIZE THIS] Number of examples in test split, e.g. 1000 -->
+ pretty_name: <!-- [CUSTOMIZE THIS] A human-readable name for the dataset -->
+ ---
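+
+ Once the placeholders above are filled in, the metadata block can be sanity-checked programmatically. A minimal sketch using `huggingface_hub`; the repo id and printed values are placeholders, not part of this template:
+
+ ```python
+ # Load the published dataset card and inspect its YAML metadata
+ from huggingface_hub import DatasetCard
+
+ card = DatasetCard.load("username/dataset_name")  # hypothetical repo id
+ print(card.data.license)          # e.g. "cc-by-4.0"
+ print(card.data.task_categories)  # e.g. ["question-answering"]
+ ```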
+
+ # <!-- [CUSTOMIZE THIS] Dataset Name -->
+
+ ## Dataset Description
+
+ ### Dataset Summary
+
+ <!-- [CUSTOMIZE THIS] Provide a short introduction to the dataset, including:
+ - What is this dataset about?
+ - What tasks does it support?
+ - How was it created?
+ - What makes it unique or valuable? -->
+
+ ### Supported Tasks and Leaderboards
+
+ <!-- [CUSTOMIZE THIS] Describe the tasks this dataset supports:
+ - What tasks can be performed on this dataset? (e.g., classification, Q&A, etc.)
+ - Are there leaderboards associated with this dataset?
+ - What metrics are appropriate for evaluating models on this dataset? -->
+
+ ### Languages
+
+ <!-- [CUSTOMIZE THIS] Specify the languages used in the dataset and any relevant language-specific information -->
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ <!-- [CUSTOMIZE THIS] Provide examples of data instances from the dataset. Include:
+ - A description of what each instance represents
+ - One or more concrete examples in JSON or dictionary format -->
+
+ ```python
+ # Example data instance
+ {
+     'feature1': 'value1',
+     'feature2': 'value2',
+     'label': 0
+ }
+ ```
+
+ ### Data Fields
+
+ <!-- [CUSTOMIZE THIS] Describe all data fields, including:
+ - Field name
+ - Data type
+ - Description of what the field represents
+ - For categorical fields, the possible values and their meanings -->
+
+ - `feature1`: a `string` feature representing <!-- description -->
+ - `feature2`: a `string` feature representing <!-- description -->
+ - `label`: an `int64` classification label, with 0 indicating <!-- meaning --> and 1 indicating <!-- meaning -->
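+
+ The declared schema can be verified directly once the dataset is loaded; a minimal sketch, assuming the fields above and a placeholder repo id:
+
+ ```python
+ # Print the declared type of every field in the train split
+ from datasets import load_dataset
+
+ dataset = load_dataset("username/dataset_name")  # hypothetical repo id
+ for name, dtype in dataset["train"].features.items():
+     print(name, dtype)  # e.g. feature1 Value(dtype='string'), label Value(dtype='int64')
+ ```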
+
+ ### Data Splits
+
+ <!-- [CUSTOMIZE THIS] Describe how the data is split:
+ - Number of instances in each split (train/validation/test)
+ - Criteria used for splitting the data
+ - Whether the splits are balanced or representative -->
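+
+ Reported split sizes can be confirmed after loading; a minimal sketch, assuming the three splits above:
+
+ ```python
+ # Report the number of rows in each split
+ from datasets import load_dataset
+
+ dataset = load_dataset("username/dataset_name")  # hypothetical repo id
+ for split, ds in dataset.items():
+     print(f"{split}: {ds.num_rows} examples")
+ ```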
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ <!-- [CUSTOMIZE THIS] Explain why this dataset was created:
+ - What need does it address?
+ - What gaps in existing datasets does it fill?
+ - What research questions was it designed to help answer? -->
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ <!-- [CUSTOMIZE THIS] Describe how the initial data was collected:
+ - What sources were used?
+ - What collection process was followed?
+ - How was the data normalized or standardized? -->
+
+ #### Who are the source language producers?
+
+ <!-- [CUSTOMIZE THIS] Describe who produced the original data:
+ - Was it written/created by professionals, crowdworkers, experts in a domain?
+ - Is it from a specific demographic or community?
+ - What motivated the original authors/speakers? -->
+
+ ### Annotations
+
+ #### Annotation process
+
+ <!-- [CUSTOMIZE THIS] If the dataset includes annotations, describe:
+ - How annotations were created (expert labeling, crowdsourcing, etc.)
+ - Annotation guidelines provided to annotators
+ - Quality control measures -->
+
+ #### Who are the annotators?
+
+ <!-- [CUSTOMIZE THIS] Describe who performed the annotations:
+ - Professional annotators, crowdworkers, domain experts?
+ - Demographic information if relevant
+ - How were annotators compensated? -->
+
+ ### Personal and Sensitive Information
+
+ <!-- [CUSTOMIZE THIS] Describe handling of personal information:
+ - Does the dataset contain personal information?
+ - What steps were taken to protect privacy?
+ - Were individuals notified or did they consent to data collection? -->
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ <!-- [CUSTOMIZE THIS] Consider the social impact:
+ - How might this dataset benefit society?
+ - Are there potential risks or harms from using this dataset?
+ - Are there specific applications that should be encouraged or discouraged? -->
+
+ ### Discussion of Biases
+
+ <!-- [CUSTOMIZE THIS] Discuss potential biases:
+ - What biases might be present in the data?
+ - How might these biases affect models trained on this data?
+ - What steps were taken to mitigate biases? -->
+
+ ### Other Known Limitations
+
+ <!-- [CUSTOMIZE THIS] Describe any other limitations:
+ - Coverage limitations
+ - Technical limitations
+ - Areas where the dataset may not perform well -->
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ <!-- [CUSTOMIZE THIS] Information about the dataset curators:
+ - Who created this dataset?
+ - Institutional affiliations
+ - Contact information if appropriate -->
+
+ ### Licensing Information
+
+ <!-- [CUSTOMIZE THIS] Detail the licensing:
+ - What license covers this dataset?
+ - Any restrictions on use
+ - Attribution requirements -->
+
+ ### Citation Information
+
+ <!-- [CUSTOMIZE THIS] Provide citation information:
+ - How should this dataset be cited?
+ - BibTeX citation -->
+
+ ```bibtex
+ @inproceedings{dataset_citation_key,
+   author    = {Author1 LastName and Author2 LastName},
+   title     = {Dataset Title},
+   booktitle = {Conference or Journal Name},
+   year      = {20XX},
+   url       = {URL to paper or dataset}
+ }
+ ```
+
+ ### Contributions
+
+ <!-- [CUSTOMIZE THIS] Acknowledge contributions:
+ - Who contributed to this dataset card?
+ - Thanks to reviewers or other contributors -->
+
+ ## How to Use
+
+ ### Loading the Dataset
+
+ ```python
+ # Example code to load the dataset
+ from datasets import load_dataset
+
+ dataset = load_dataset("username/dataset_name")
+
+ # Access splits
+ train_data = dataset["train"]
+ validation_data = dataset["validation"]
+ test_data = dataset["test"]
+
+ # Example usage
+ for example in train_data.select(range(3)):
+     print(example)
+ ```
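+
+ For large datasets, a split can also be streamed rather than downloaded in full; a minimal sketch, with the repo id again a placeholder:
+
+ ```python
+ # Stream the train split without downloading the whole dataset
+ from datasets import load_dataset
+
+ streamed = load_dataset("username/dataset_name", split="train", streaming=True)
+ print(next(iter(streamed)))  # first example
+ ```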
+
+ ### Example Preprocessing and Training
+
+ ```python
+ # Example preprocessing and model training code
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
+
+ # Load tokenizer and tokenize data
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+
+ def tokenize_function(examples):
+     return tokenizer(examples["feature1"], padding="max_length", truncation=True)
+
+ tokenized_dataset = dataset.map(tokenize_function, batched=True)
+
+ # Define model
+ model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
+
+ # Define training arguments
+ training_args = TrainingArguments(
+     output_dir="./results",
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     num_train_epochs=3,
+     evaluation_strategy="epoch",
+     save_strategy="epoch",
+     load_best_model_at_end=True,
+ )
+
+ # Define trainer
+ trainer = Trainer(
+     model=model,
+     args=training_args,
+     train_dataset=tokenized_dataset["train"],
+     eval_dataset=tokenized_dataset["validation"],
+ )
+
+ # Train model
+ trainer.train()
+ ```
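+
+ After training, the held-out test split can be scored with the same `Trainer`; a short follow-up to the block above, assuming the splits tokenized there:
+
+ ```python
+ # Evaluate the trained model on the test split
+ metrics = trainer.evaluate(tokenized_dataset["test"])
+ print(metrics)  # e.g. eval_loss and runtime statistics
+ ```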
+
+ ### Community and Support
+
+ <!-- [CUSTOMIZE THIS] Information on how to get help with the dataset:
+ - Links to community forums
+ - Ways to report issues or contribute improvements
+ - Contact information for maintainers -->