Advanced: Troubleshooting Variables
My variable is computing dates or time periods incorrectly
Brim includes an option to add today's date to the LLM prompt.
- If computing your variable requires knowledge of today's date, scroll down to the "Semantics" section, and make sure the box next to "Reference today's date" is checked.
- Explicitly specify the format you want for a time variable, and whether values should round up or down.
- Use the "timestamp" variable type for built-in formatting guardrails.
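To see why format and rounding matter, here is a minimal Python sketch (illustrative only, not Brim code) showing how two reasonable rounding choices give different answers to "how many months since this date":

```python
from datetime import date

def months_between(start: date, today: date, round_up: bool = False) -> int:
    """Whole months between two dates; the rounding direction must be explicit."""
    months = (today.year - start.year) * 12 + (today.month - start.month)
    if round_up and today.day > start.day:
        months += 1  # partial month counts as a full month
    elif not round_up and today.day < start.day:
        months -= 1  # partial month is discarded
    return months

print(months_between(date(2024, 1, 15), date(2024, 7, 10)))                 # → 5
print(months_between(date(2024, 1, 15), date(2024, 7, 10), round_up=True))  # → 6
```

Without an explicit instruction, the LLM has no way to know which of these two answers you expect, so spell it out in the variable instructions.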
My variable is pulling a lot of irrelevant evidence
The LLM tries to find all relevant evidence, and without clear boundaries it can cast too wide a net.
- Try explaining the difference between what you want included and excluded. You can do this in the Variable Instructions, or by labelling examples in Label Review and Optimizing the variable.
- Be as black and white as you can in the instruction. Try adding the instruction to "Ignore" the specific case(s) you want to exclude.
- Limit generation by Document Type or Document Date if that is relevant for your variable. You can find these options in the "Semantics" section.
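Conceptually, restricting by Document Type or Document Date acts as a pre-filter on the evidence pool before the LLM ever sees it. A hedged Python sketch of that idea (the document fields here are hypothetical, not Brim's actual data model):

```python
from datetime import date

def filter_documents(docs, allowed_types=None, start=None, end=None):
    """Keep only documents whose (hypothetical) type/date fields pass the restrictions."""
    kept = []
    for doc in docs:
        if allowed_types is not None and doc["type"] not in allowed_types:
            continue  # excluded document type
        if start is not None and doc["date"] < start:
            continue  # before the allowed window
        if end is not None and doc["date"] > end:
            continue  # after the allowed window
        kept.append(doc)
    return kept

docs = [
    {"type": "Pathology Report", "date": date(2023, 5, 1)},
    {"type": "Billing Note", "date": date(2023, 5, 2)},
    {"type": "Pathology Report", "date": date(2020, 1, 1)},
]
# Only the recent pathology report survives both restrictions.
print(filter_documents(docs, allowed_types={"Pathology Report"}, start=date(2022, 1, 1)))
```

Shrinking the pool this way reduces irrelevant evidence at the source, which is usually more reliable than asking the LLM to ignore it after the fact.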
I'm not getting any results when I generate
By default, Brim hides label instances that represent an "Empty" response. In Label Review > Variables > choose a variable, try clicking "Show Default value labels".
If "Show Default value labels" is on and you see values now:
- Verify that your data set has evidence for this variable.
- Try making the variable name more descriptive.
If "Show Default value labels" is on and you still don't see values:
- Verify that your data set is loaded.
- In the Variable Edit Screen, ensure that:
- The "Include variable in generation" box is checked.
- The "Restrict by Document Name" option is set to no restrictions or to correctly spelled Document names.
- The "Restrict by Document Date" option is set to no Restrictions or correct Document dates.
- Ensure that the patient you're looking for was included in your most recent label generation.
- Try generating again for this specific patient.
The LLM Reasoning for my variable is drawing the wrong conclusion
Vague or faulty LLM reasoning is often a consequence of the LLM not knowing how to behave in a specific situation:
- You can give in-line reasoning feedback and re-generate. Learn more here.
- Manually modify the prompt to add instructions for how to react if it is missing relevant information (e.g. "If there are no values for the input variable, return False.").
- Check that the variable has the correct evidence snippets.
- If the evidence snippets are correct, focus on modifying the prompt to be more specific or clarifying the logic.
- If the evidence snippets are incorrect, verify that important documents aren't being filtered out by title or date.
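The "missing information" fix above can be pictured as appending an explicit fallback clause to the prompt, so the LLM never has to improvise when its input is empty. A minimal sketch, assuming a plain-text prompt (the function and field names here are hypothetical, not Brim's internals):

```python
def build_prompt(instructions: str, evidence: list[str]) -> str:
    """Append an explicit fallback so the LLM knows what to do with missing input."""
    fallback = "If there are no values for the input variable, return False."
    evidence_text = "\n".join(evidence) if evidence else "(no evidence found)"
    return f"{instructions}\n{fallback}\n\nEvidence:\n{evidence_text}"

print(build_prompt("Determine whether the patient had a recurrence.", []))
```

The key point is that the fallback behavior is stated in the prompt itself, rather than left for the model to guess.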
Running my variable is very slow or very expensive
The root cause of a very slow or expensive variable is usually the number of documents it's running on.
- Can you level up the scope? Instead of one value per note, can your variable produce one value per patient?
- Consider using conditional generation to limit the documents that the variable is run on.
- Can you run in batches?
- Do you have the "Use Advanced LLM Model" option checked in Variable Advanced Settings? The advanced model is slower and more expensive, so uncheck it if your variable doesn't need it.
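Running in batches, as suggested above, simply means splitting the patient list into fixed-size chunks and generating each chunk separately, so a failure or cost overrun is contained to one chunk. A generic sketch (not a Brim API):

```python
def batches(items, size):
    """Yield successive fixed-size chunks of a list; the last chunk may be smaller."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

patients = [f"patient_{n}" for n in range(10)]
for chunk in batches(patients, 4):
    print(len(chunk))  # → 4, 4, 2
```

Smaller batches also make it easier to spot-check results early before committing to a full, expensive run.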