Language models in biomedical and clinical tasks

Exploring the use cases and limitations of LLMs

Ahmad Albarqawi
8 min read · Jan 23, 2023
Imaginary image representing language models as a growing brain (generated by the author using Midjourney)

Large language models (LLMs) provide unprecedented opportunities to augment humans in various industries, including healthcare. However, understanding the limitations of language models, and how to mitigate them, is essential before applying them in regulated environments. In recent years, multiple studies have proposed new techniques for tuning existing language models for medical tasks, and selecting the most suitable model for a specific task requires exploring each model's capabilities and weaknesses.
The availability of scientific data on the internet has advanced fine-tuning techniques, producing task-specific models like SciBERT and BioBERT, which are trained on various biomedical and clinical sources to provide focused capabilities for specific tasks. Most industry medical models are built by fine-tuning, which requires extensive training data to achieve high-quality results in a new domain, so their expansion is limited by data availability. Few-shot learners such as GPT-3 provide the ability to adapt to a new domain with zero or few examples, removing the operational cost of labeling vast amounts of data. Still, they come with issues, like the tendency to generate nonfactual information, and the research is restricted…
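To make the fine-tuning vs. few-shot contrast concrete, here is a minimal sketch of the few-shot approach: instead of retraining the model on labeled data, a handful of examples are embedded directly in the prompt. The task (symptom extraction from clinical notes), the example notes, and the `complete` function it would be sent to are all hypothetical illustrations, not taken from any specific model's API.

```python
# Few-shot prompting sketch: the prompt itself carries the labeled examples,
# so no fine-tuning data or training run is needed.

# Hypothetical labeled examples for a symptom-extraction task.
FEW_SHOT_EXAMPLES = [
    ("Patient reports severe headache and nausea.", "headache; nausea"),
    ("No fever, but persistent dry cough for two weeks.", "fever; dry cough"),
]

def build_prompt(note: str) -> str:
    """Assemble a few-shot prompt asking the model to extract symptoms."""
    lines = ["Extract the symptoms mentioned in each clinical note.", ""]
    for text, symptoms in FEW_SHOT_EXAMPLES:
        lines.append(f"Note: {text}")
        lines.append(f"Symptoms: {symptoms}")
        lines.append("")
    # The unlabeled note goes last; the model continues after "Symptoms:".
    lines.append(f"Note: {note}")
    lines.append("Symptoms:")
    return "\n".join(lines)

prompt = build_prompt("Patient presents with chest pain and shortness of breath.")
print(prompt)
# The prompt would then be sent to an LLM completion endpoint,
# e.g. complete(prompt) for some hypothetical client.
```

The trade-off described above shows up here directly: preparing this prompt costs two labeled examples instead of a labeling pipeline, but nothing constrains the model from completing with plausible-sounding, nonfactual symptoms.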


Written by Ahmad Albarqawi

Master’s data science scholar at UIUC. ahmadai.com
