Do Vision-Language Models Leak What They Learn? Adaptive Token-Weighted Model Inversion Attacks

1 Singapore University of Technology and Design (SUTD)
2 University of Maryland, College Park
CVPR 2026

Abstract


Inverted examples

Model inversion (MI) attacks pose significant privacy risks by reconstructing private training data from trained neural networks. While prior studies have primarily examined unimodal deep networks, the vulnerability of vision-language models (VLMs) remains largely unexplored. In this work, we present the first systematic study of MI attacks on VLMs to understand their susceptibility to leaking private visual training data. Our work makes two main contributions. First, tailored to the token-generative nature of VLMs, we introduce a suite of token-based and sequence-based model inversion strategies, providing a comprehensive analysis of VLMs' vulnerability under different attack formulations. Second, based on the observation that tokens vary in their visual grounding, and hence their gradients differ in informativeness for image reconstruction, we propose Sequence-based Model Inversion with Adaptive Token Weighting (SMI-AW), a novel MI attack on VLMs. SMI-AW dynamically reweights each token's loss gradient according to its visual grounding, enabling the optimization to focus on visually informative tokens and more effectively guide the reconstruction of private images. Through extensive experiments and human evaluations on a range of state-of-the-art VLMs across multiple datasets, we show that VLMs are susceptible to training data leakage. Human evaluation of the reconstructed images yields an attack accuracy of 61.21%, underscoring the severity of these privacy risks. Notably, we demonstrate that publicly released VLMs are vulnerable to such attacks. Our study highlights the urgent need for privacy safeguards as VLMs become increasingly deployed in sensitive domains such as healthcare and finance.

Figure 1: (A) Designed for the token-generative characteristics of VLMs, we introduce a set of token-level and sequence-level MI strategies to investigate VLMs' privacy vulnerability. Conventional MI typically targets unimodal DNNs: the adversary seeks to reconstruct a training image x = G(w) that maximizes the likelihood of a target class label y under the target model M_DNN by repeating N inversion steps. In contrast, a VLM M_VLM generates a sequence of tokens, so the target output y = (y1, …, ym) is itself a sequence of m tokens. To address this unique nature of VLMs, we introduce several MI strategies: Token-based Model Inversion (TMI), Convergent Token-based Model Inversion (TMI-C), and Sequence-based Model Inversion (SMI). (B) Building on the insight that output tokens differ in their degree of visual grounding, and hence their gradients vary in informativeness for reconstructing images during inversion, we propose Sequence-based Model Inversion with Adaptive Token Weighting (SMI-AW), a novel MI attack for VLMs. SMI-AW adaptively adjusts each token's gradient contribution according to its visual grounding, allowing the optimization to concentrate on visually grounded tokens and more effectively recover private training images.
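To make the sequence-based formulation concrete, the SMI objective can be sketched as the summed negative log-likelihood of the m target tokens. This is a minimal illustration: the function and argument names are ours, and the VLM forward pass that produces the per-position logits is model-specific and omitted.

```python
import numpy as np

def sequence_mi_loss(token_logits, target_ids):
    """Sequence-based MI (SMI) objective for a target sequence y = (y1, ..., ym).

    token_logits : (m, vocab_size) array of the VLM's logits at each output
        position, given a candidate image x = G(w) (forward pass not shown).
    target_ids   : length-m array of target token ids.

    Returns the summed negative log-likelihood over all m tokens; a
    token-based variant (TMI) would instead optimize one token at a time.
    """
    # numerically stable log-softmax over the vocabulary at each position
    shifted = token_logits - token_logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # negative log-likelihood of each target token
    token_nll = -log_probs[np.arange(len(target_ids)), target_ids]
    return token_nll.sum()
```

Minimizing this loss with respect to the latent w (via the generator G) drives the reconstructed image toward one the VLM would caption with the target sequence.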

Proposed method: SMI-AW


Analysis of visual-textual attention across output tokens and inversion steps

We visualize the cross-attention map between the reconstructed image and each output token during inversion. Different tokens exhibit markedly different attention maps: visually grounded tokens show strong attention, while others produce weak responses, indicating limited reliance on the image. Moreover, attention patterns evolve over inversion steps, as a token's dependence on visual input changes when the reconstructed image becomes more consistent with the target output. These observations reveal that token-level gradients vary substantially in visual informativeness both across tokens and over time. This motivates our SMI-AW method, which dynamically reweights token contributions based on their visual attention strength.
(Attention visualizations shown for InternVL2 and MiniGPT-v2.)
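The per-token visual grounding discussed above can be summarized as a scalar per output token. The sketch below is illustrative: the helper name is ours, and we assume the cross-attention maps are available as an (m, H, W) array (the choice of layer and head aggregation is model-specific).

```python
import numpy as np

def token_attention_strength(cross_attn):
    """Summarize each output token's visual grounding as a scalar.

    cross_attn : (m, H, W) array of cross-attention maps between the m
        output tokens and the image patches (hypothetical input format;
        real VLMs expose attention in model-specific shapes).

    Uses the mean attention magnitude of each token's map as a simple
    proxy for how strongly that token relies on the visual input.
    """
    return np.abs(cross_attn).mean(axis=(1, 2))  # shape (m,)
```

Recomputing these strengths at each inversion step captures the observation that a token's reliance on the image evolves as the reconstruction improves.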

Sequence-based Model Inversion with Adaptive Token Weighting (SMI-AW)

We propose to use the magnitude of a token's attention map as a proxy for the informativeness of its loss gradient at each model inversion step, and to weight its contribution to the overall inversion gradient accordingly: tokens with higher visual attention receive larger weights, while those with weaker visual grounding are down-weighted. Given attention magnitudes α_i, the weight of token y_i is

\[ \beta_i = \frac{\alpha_i}{\sum_{j=1}^{m} \alpha_j} \quad (4) \]
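A minimal sketch of Eq. (4) and the resulting weighted objective, assuming the per-token attention magnitudes α_i and per-token losses are already available (function names are ours, not from the paper's code):

```python
import numpy as np

def smi_aw_weights(alpha):
    """Eq. (4): normalize per-token attention magnitudes alpha_i into
    weights beta_i that sum to one."""
    alpha = np.asarray(alpha, dtype=float)
    return alpha / alpha.sum()

def weighted_inversion_loss(token_nll, alpha):
    """SMI-AW objective: reweight each token's loss by its visual
    grounding, so gradients from visually grounded tokens dominate the
    inversion update.

    token_nll : length-m array of per-token negative log-likelihoods
        (assumed precomputed from the VLM's logits).
    alpha     : length-m array of attention magnitudes for the same tokens.
    """
    beta = smi_aw_weights(alpha)
    return (beta * np.asarray(token_nll, dtype=float)).sum()
```

Because the weights are recomputed from the current attention maps at every step, the effective objective adapts as the reconstructed image becomes more consistent with the target sequence.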


Attack results


Attack results on a range of state-of-the-art VLMs

We report the results of LLaVA-1.6-7B, Qwen2.5-VL-7B, MiniGPT-v2, and InternVL2.5-8B on the FaceScrub dataset.


Human evaluation results

We further conduct a human evaluation of reconstructed images on three datasets: FaceScrub, CelebA, and StanfordDogs. The user studies involve 4,240 participants for FaceScrub, 8,000 for CelebA, and 960 for StanfordDogs. The results show that 53.42% to 61.21% of the reconstructed samples are deemed successful attacks, i.e., human annotators recognize the generated images as depicting the same identity as those in the private image set.


Evaluation with Publicly Released VLM

We reconstruct images of celebrities from the pre-trained LLaVA-v1.6-7B model. For each pair, the left image shows a training image in D_priv, while the right image presents the reconstruction x_recon obtained via our model inversion attack. This result illustrates that the pre-trained VLM is vulnerable to training data leakage through model inversion. More results can be found in the paper.


Citation


@inproceedings{nguyen2026MIVLM,
  title={Do Vision-Language Models Leak What They Learn? Adaptive Token-Weighted Model Inversion Attacks},
  author={Ngoc-Bao Nguyen and Sy-Tuyen Ho and Koh Jun Hao and Ngai-Man Cheung},
  booktitle={CVPR},
  year={2026}
}

Acknowledgements


This research is supported by the National Research Foundation, Singapore under its AI Singapore Programmes (AISG Award No.: AISG2-TC-2022-007); The Agency for Science, Technology and Research (A*STAR) under its MTC Programmatic Funds (Grant No. M23L7b0021). This research is supported by the National Research Foundation, Singapore and Infocomm Media Development Authority under its Trust Tech Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore and Infocomm Media Development Authority. The work is sponsored by the SUTD Decentralised Gap Funding Grant.