
ViDDAR: Vision Language Model-Based Task-Detrimental Content Detection for Augmented Reality.

Publication, Journal Article
Xiu, Y; Scargill, T; Gorlatova, M
Published in: IEEE Transactions on Visualization and Computer Graphics
May 2025

In Augmented Reality (AR), virtual content enhances user experience by providing additional information. However, improperly positioned or designed virtual content can be detrimental to task performance, as it can obscure real-world objects or distort how users interpret real-world information. In this paper, we examine two types of task-detrimental virtual content: obstruction attacks, in which virtual content prevents users from seeing real-world objects, and information manipulation attacks, in which virtual content interferes with users' ability to accurately interpret real-world information. We provide a mathematical framework to characterize these attacks and create a custom open-source dataset for attack evaluation. To address these attacks, we introduce ViDDAR (Vision language model-based Task-Detrimental content Detector for Augmented Reality), a comprehensive full-reference system that leverages Vision Language Models (VLMs) and advanced deep learning techniques to monitor and evaluate virtual content in AR environments, employing a user-edge-cloud architecture to balance performance with low latency. To the best of our knowledge, ViDDAR is the first system to employ VLMs for detecting task-detrimental content in AR settings. Our evaluation results demonstrate that ViDDAR effectively understands complex scenes and detects task-detrimental content, achieving up to 92.15% obstruction detection accuracy with a detection latency of 533 ms, and 82.46% accuracy for information manipulation content detection with a latency of 9.62 s.
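The paper's own mathematical framework for characterizing these attacks is not reproduced in this record. As a minimal illustrative sketch only, an obstruction attack can be quantified geometrically as the fraction of a real object's screen area that the virtual content covers; the names here (`Box`, `occlusion_ratio`, the 0.5 threshold) are assumptions for illustration, not taken from the paper:

```python
from dataclasses import dataclass


@dataclass
class Box:
    """Axis-aligned box in screen coordinates: (x1, y1) top-left, (x2, y2) bottom-right."""
    x1: float
    y1: float
    x2: float
    y2: float

    def area(self) -> float:
        return max(0.0, self.x2 - self.x1) * max(0.0, self.y2 - self.y1)


def occlusion_ratio(real: Box, virtual: Box) -> float:
    """Fraction of the real object's screen area covered by the virtual content."""
    # Intersection rectangle of the two boxes (empty if they do not overlap).
    ix1, iy1 = max(real.x1, virtual.x1), max(real.y1, virtual.y1)
    ix2, iy2 = min(real.x2, virtual.x2), min(real.y2, virtual.y2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    return inter / real.area() if real.area() > 0 else 0.0


def is_obstruction_attack(real: Box, virtual: Box, threshold: float = 0.5) -> bool:
    # Flag the virtual content as an obstruction if it hides more than
    # `threshold` of the real object's visible area.
    return occlusion_ratio(real, virtual) > threshold
```

In the actual system, a VLM judges whether the occluded region contains task-relevant real-world content; this sketch covers only the geometric overlap step.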


Published In

IEEE Transactions on Visualization and Computer Graphics

DOI

10.1109/tvcg.2025.3549147
EISSN

1941-0506

ISSN

1077-2626

Publication Date

May 2025

Volume

31

Issue

5

Start / End Page

3194 / 3203

Related Subject Headings

  • Software Engineering
  • 46 Information and computing sciences
  • 0802 Computation Theory and Mathematics
  • 0801 Artificial Intelligence and Image Processing
 

Citation

APA
Xiu, Y., Scargill, T., & Gorlatova, M. (2025). ViDDAR: Vision Language Model-Based Task-Detrimental Content Detection for Augmented Reality. IEEE Transactions on Visualization and Computer Graphics, 31(5), 3194–3203. https://doi.org/10.1109/tvcg.2025.3549147

Chicago
Xiu, Yanming, Tim Scargill, and Maria Gorlatova. “ViDDAR: Vision Language Model-Based Task-Detrimental Content Detection for Augmented Reality.” IEEE Transactions on Visualization and Computer Graphics 31, no. 5 (May 2025): 3194–3203. https://doi.org/10.1109/tvcg.2025.3549147.

ICMJE
Xiu Y, Scargill T, Gorlatova M. ViDDAR: Vision Language Model-Based Task-Detrimental Content Detection for Augmented Reality. IEEE Transactions on Visualization and Computer Graphics. 2025 May;31(5):3194–203.

MLA
Xiu, Yanming, et al. “ViDDAR: Vision Language Model-Based Task-Detrimental Content Detection for Augmented Reality.” IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 5, May 2025, pp. 3194–203. Epmc, doi:10.1109/tvcg.2025.3549147.

NLM
Xiu Y, Scargill T, Gorlatova M. ViDDAR: Vision Language Model-Based Task-Detrimental Content Detection for Augmented Reality. IEEE Transactions on Visualization and Computer Graphics. 2025 May;31(5):3194–3203.
