Update README.md
README.md CHANGED
@@ -35,6 +35,21 @@ in an exam format. **It is insufficient for training or finetuning an expert sys
 Patients need personalized, expert advice beyond what can be described on an exam
 or returned by an AI.
 
+### How to evaluate
+
+You can run [this LightEval evaluation](https://github.com/mapmeld/lighteval-tasks/blob/main/community_tasks/gen_counselor_evals.py) with the command:
+
+```
+lighteval accelerate \
+    "pretrained=meta-llama/Llama-3.1-8B-Instruct" \
+    "community|genetic-counselor-multiple-choice|0|0" \
+    --custom-tasks lighteval-tasks/community_tasks/gen_counselor_evals.py \
+    --override-batch-size=5 \
+    --use-chat-template
+```
+
+Llama-3.1-8B-Instruct scored 50.67% accuracy. I haven't been able to run all of the questions through OpenAI's models, but ChatGPT did well on a sample of them.
+
 ## Source information
 
 ### Source 1
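
For the OpenAI comparison mentioned in the diff, a minimal spot-check sketch is below. (In LightEval's task syntax, the trailing `0|0` is the few-shot count and the truncate-few-shots flag, so the command above is a zero-shot eval; the sketch mirrors that.) The dataset ID, the `question`/`options`/`answer` column names, and the model name are all placeholder assumptions, not this repo's actual schema.

```python
# Rough spot-check of a sample of questions against an OpenAI model.
# Dataset ID and column names ("question", "options", "answer") are
# assumptions -- substitute the real schema from this repo.
from datasets import load_dataset
from openai import OpenAI  # reads OPENAI_API_KEY from the environment

client = OpenAI()
rows = load_dataset("mapmeld/genetic-counselor-multiple-choice", split="test")
sample = rows.select(range(20))  # small sample, as described above

correct = 0
for row in sample:
    choices = "\n".join(
        f"{'ABCD'[i]}. {opt}" for i, opt in enumerate(row["options"])
    )
    prompt = (
        f"{row['question']}\n{choices}\n\n"
        "Reply with only the letter of the best answer."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the README doesn't name a model
        messages=[{"role": "user", "content": prompt}],
    )
    guess = reply.choices[0].message.content.strip()[:1].upper()
    correct += guess == row["answer"]  # assumes "answer" is a letter A-D

print(f"Sample accuracy: {correct / len(sample):.2%}")
```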