Logged off icon

 

Safety Critical Systems Club
For Everyone Working in System Safety

AN autonomous vehicle on a crossing with peopleA new class of cybersecurity vulnerability has emerged, known as environmental indirect prompt injection, where physical road signs are used to hijack the decision-making processes of autonomous vehicles and drones.

Researchers from the University of California, Santa Cruz (UCSC) developed a method called CHAI (Command Hijacking against Embodied AI). This attack exploits Large Vision-Language Models (LVLMs) that power "embodied" AI – systems that interact with the physical world. Unlike traditional prompt injection, which occurs via text files or web pages, CHAI uses visual cues in the environment.

By placing specific text on road signs, attackers can force an AI to interpret the sign as a direct command rather than a piece of passive environmental data. In tests, the researchers successfully tricked self-driving cars into ignoring stop signs or driving through pedestrian crossings even when pedestrians were present.

https://www.theregister.com/2026/01/30/road_sign_hijack_ai

img: midjourney

You are not authorised to post comments.

Comments powered by CComment