Office of Sponsored Programs


Efficient Fine-Tuning of Large Language Models


TOPIC SPONSOR: 16 AF/A5

Many small and medium-sized data sets are spread across the USAF and need to be parsed into standardized outputs. An LLM and/or AI/ML solution may be able to parse this data rapidly and produce those outputs. However, when LLMs such as ChatGPT are asked complex questions, they may provide incorrect information (hallucination) or lose "focus" on the question when the prompt exceeds the model's context window (context overrun).
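
As an illustration of the context-overrun failure mode, the sketch below checks whether a document fits within a model's context window before prompting. It assumes the Hugging Face transformers library; the model name "gpt2", the function fits_in_context, and the reserved_for_answer budget are illustrative assumptions, not part of any prescribed solution.

    # Minimal sketch: detect a potential context overrun before prompting.
    # Assumes the Hugging Face `transformers` library; "gpt2" is a stand-in
    # model name chosen only because its tokenizer is small and public.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    def fits_in_context(prompt: str, reserved_for_answer: int = 256) -> bool:
        """Return True if the prompt plus an answer budget fits the window."""
        n_tokens = len(tokenizer(prompt)["input_ids"])
        return n_tokens + reserved_for_answer <= tokenizer.model_max_length

    document = "..."  # one of the data sets to be parsed
    if not fits_in_context(document):
        # Too long: the document must be chunked or summarized first,
        # or the model will lose "focus" on parts of the input.
        print("Document exceeds the context window; chunking required.")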

What are the most robust ways to incorporate new data sets into a large language model without truncating the breadth of the available data, while still allowing for complex answers and minimizing hallucinations?
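
One commonly studied direction for this question is parameter-efficient fine-tuning, which adapts a pretrained model to new data by training only a small set of added weights. The sketch below shows low-rank adaptation (LoRA) using the Hugging Face peft library; the model name and hyperparameter values are illustrative assumptions rather than a recommended configuration.

    # Minimal sketch: parameter-efficient fine-tuning with LoRA.
    # Assumes the Hugging Face `transformers` and `peft` libraries;
    # the model name and hyperparameters are illustrative only.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base_model = AutoModelForCausalLM.from_pretrained("gpt2")

    lora_config = LoraConfig(
        r=8,             # rank of the low-rank update matrices
        lora_alpha=16,   # scaling factor applied to the update
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base_model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of the base model

    # `model` can now be trained on a new data set with a standard training
    # loop or the Hugging Face Trainer; only the small LoRA weights update,
    # so the base model's existing knowledge is left intact.

Retrieval-augmented generation, which leaves the model weights fixed and retrieves relevant records at query time, is a complementary approach often compared against fine-tuning when the goal is to reduce hallucinations.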

