TU Delft Research on the Impact of AI-Generated Security Reports on OSS Maintainers & Security Triage

Hi everyone,

I’m currently pursuing my MSc at TU Delft in the Netherlands, where I’m conducting thesis research on how AI-generated security bug reports are affecting open-source maintainers and security triage practices.

As LLMs become more capable, they seem to be lowering the barrier to generating vulnerability reports, bug submissions, and security-related findings. While this may increase accessibility and reporting volume, I’m interested in understanding whether it also creates new operational challenges for maintainers and security teams, for example around report quality, trust, triage workload, false positives, or signal-to-noise ratio.

Given the strong overlap between OSS security tooling and vulnerability workflows in this community, I’m curious whether maintainers or security practitioners here have already started noticing changes related to AI-generated reports or AI-assisted submissions.

I’d be particularly interested in hearing perspectives on:

- whether the volume or quality of security reports has changed recently,
- whether AI-generated reports are easy to identify in practice,
- whether they meaningfully affect triage workflows or maintainer workload, and
- whether there are practices or tooling approaches that help filter useful signal from low-quality submissions.

I’d also be very grateful to speak with OSS maintainers or security practitioners who would be open to participating in a short research interview on this topic.

Thank you, and I’d genuinely appreciate any perspectives or discussion from the community.