Overview
- supervised machine learning
- evaluation frameworks
- large language models
- anomaly detection
- programmable logic controllers
Publications
- Detection and Defense Against Prominent Attacks on Preconditioned LLM-Integrated Virtual Assistants. CoRR.
- A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models. CoRR.
- A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models. UbiSec.
- Process Mining with Programmable Logic Controller Memory States. UbiSec.
- A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models. CoRR.
- Security Analysis of Software Updates for Industrial Robots. Critical Infrastructure Protection.