
This repository contains the artifacts for our paper "Incident Response Planning Using a Lightweight Large Language Model with Reduced Hallucination", conditionally accepted to NDSS 2026. We introduce a novel method that enables the effective use of a large language model (LLM) to provide decision support for incident response planning. Our method uses the LLM to translate system logs into effective response plans while addressing its limitations through fine-tuning, information retrieval, and decision-theoretic planning. Unlike prior work, which relies on prompt engineering of frontier models, our method is lightweight and can run on commodity hardware.

Our artifacts include:

- The first public fine-tuning dataset of incidents and response actions; this is the dataset we use to produce the results in the paper.
- The weights of the fine-tuned model.
- Python code for downloading the fine-tuned model and using it to generate an incident response plan.
- Python code for fine-tuning a new model based on our dataset.
- A video demonstration of our decision-support system for incident response.
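To illustrate the information-retrieval component mentioned above, the following is a minimal, self-contained sketch (not code from the repository) of how a log line might be paired with retrieved incident knowledge before being handed to the LLM. The knowledge-base entries, scoring scheme, and prompt format are all hypothetical simplifications; the actual method is described in the paper.

```python
def retrieve(log_line, knowledge_base, k=2):
    """Rank knowledge-base entries by simple keyword overlap with the log line.

    This stands in for the retrieval step; the real system's retrieval
    mechanism may differ.
    """
    log_tokens = set(log_line.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda entry: len(log_tokens & set(entry.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(log_line, knowledge_base):
    """Combine the log line with retrieved context into a planning prompt."""
    context = "\n".join(retrieve(log_line, knowledge_base))
    return (
        "Context:\n" + context + "\n\n"
        "Log: " + log_line + "\n"
        "Task: propose an incident response plan."
    )


# Illustrative knowledge base of past incidents and response actions.
kb = [
    "ssh brute force: block source IP, rotate credentials",
    "sql injection: patch input validation, audit database logs",
    "phishing email: quarantine message, reset affected accounts",
]
prompt = build_prompt("Failed ssh login attempts from 203.0.113.7", kb)
print(prompt)
```

The resulting prompt would then be passed to the fine-tuned model; grounding generation in retrieved context is one common way to reduce hallucination, which is the role retrieval plays in the method described above.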
