Papers¶
A tabular view of the curated papers, organized by the reprogrammability taxonomy dimensions: configuration ($\lambda$), location ($\ell$), operator ($\tau$), and alignment ($\omega$).
| Paper | Configuration ($\lambda$) | Location ($\ell$) | Operator ($\tau$) | Alignment ($\omega$) | Venue |
|---|---|---|---|---|---|
| Adversarial Reprogramming of Neural Networks Elsayed et al. (2019) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Statistical (SA) | ICLR |
| Adversarial Reprogramming of Text Classification Neural Networks Neekhara et al. (2019) | Learnable | Embedding ($\mathcal{E}$) | Parametric (PR) | Statistical (SA) / Linear (LA) | EMNLP-IJCNLP |
| Language Models are Few-Shot Learners Brown et al. (2020) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | NeurIPS |
| Reprogramming Language Models for Molecular Representation Learning Vinod et al. (2020) | Learnable | Input ($\mathcal{X}_S$) | Parametric (PR) | Rule-based (RA) | NeurIPS Workshop |
| Learning how to ask: Querying LMs with mixtures of soft prompts Qin et al. (2021) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) | Identity (ID) | NAACL |
| PTR: Prompt Tuning with Rules for Text Classification Han et al. (2021) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) | Rule-based (RA) | arXiv |
| Prefix-Tuning: Optimizing Continuous Prompts for Generation Li et al. (2021) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) | Identity (ID) | ACL-IJCNLP |
| The Power of Scale for Parameter-Efficient Prompt Tuning Lester et al. (2021) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Identity (ID) | EMNLP |
| Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources Tsai et al. (2021) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Statistical (SA) / Linear (LA) | ICML |
| Voice2series: Reprogramming acoustic models for time series classification Yang et al. (2021) | Learnable | Input ($\mathcal{X}_S$) | Parametric (PR) | Statistical (SA) | ICML |
| WARP: Word-level Adversarial ReProgramming Hambardzumyan et al. (2021) | Learnable | Input ($\mathcal{X}_S$) | Concatenative (CO) | Linear (LA) | ACL-IJCNLP |
| Adversarial Reprogramming Revisited Englert et al. (2022) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Statistical (SA) | NeurIPS |
| An Explanation of In-context Learning as Implicit Bayesian Inference Xie et al. (2022) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | ICLR |
| Chain-of-Thought Prompting Elicits Reasoning in Large Language Models Wei et al. (2022) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | NeurIPS |
| Conditional Prompt Learning for Vision-Language Models Zhou et al. (2022) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) | Identity (ID) / Linear (LA) | CVPR |
| Cross-modal Adversarial Reprogramming Neekhara et al. (2022) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Linear (LA) | WACV |
| Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners Zhang et al. (2022) | Fixed | Embedding ($\mathcal{E}$) | Concatenative (CO) | Statistical (SA) | ICLR |
| Exploring Visual Prompts for Adapting Large-Scale Models Bahng et al. (2022) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Statistical (SA) | arXiv |
| In-context Learning and Induction Heads Olsson et al. (2022) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | arXiv |
| Learning To Retrieve Prompts for In-Context Learning Rubin et al. (2022) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | NAACL |
| Learning to Prompt for Vision-Language Models Zhou et al. (2022) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Identity (ID) | IJCV |
| Learning to Prompt for Vision-Language Models Zhou et al. (2022) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) | Linear (LA) | IJCV |
| Least-to-Most Prompting Enables Complex Reasoning in Large Language Models Zhou et al. (2022) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | ICLR |
| P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks Liu et al. (2022) | Learnable | Hidden ($\mathcal{H}$) | Concatenative (CO) | Linear (LA) | ACL |
| PPT: Pre-trained Prompt Tuning for Few-shot Learning Gu et al. (2022) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) | Rule-based (RA) | ACL |
| Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? Min et al. (2022) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Rule-based (RA) | EMNLP |
| Spot: Better frozen model adaptation through soft prompt transfer Vu et al. (2022) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) | Identity (ID) | ACL |
| Structured Prompting: Scaling In-Context Learning to 1,000 Examples Hao et al. (2022) | Fixed | Hidden ($\mathcal{H}$) | Concatenative (CO) | Identity (ID) | arXiv |
| Unleashing the Power of Visual Prompting At the Pixel Level Wu et al. (2022) | Learnable | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) / Statistical (SA) | arXiv |
| Visual Prompt Tuning Jia et al. (2022) | Fixed | Embedding ($\mathcal{E}$) / Hidden ($\mathcal{H}$) | Concatenative (CO) | Linear (LA) | ECCV |
| Visual Prompting via Image Inpainting Bar et al. (2022) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | NeurIPS |
| A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models Allingham et al. (2023) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | ICML |
| BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning Oh et al. (2023) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Rule-based (RA) | CVPR |
| Decomposed Prompting: A Modular Approach for Solving Complex Tasks Khot et al. (2023) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | ICLR |
| Deep Graph Reprogramming Jing et al. (2023) | Learnable | Input ($\mathcal{X}_S$) / Hidden ($\mathcal{H}$) | Concatenative (CO) / Parametric (PR) | Rule-based (RA) | CVPR |
| Explicit Visual Prompting for Low-Level Structure Segmentations Liu et al. (2023) | Learnable | Embedding ($\mathcal{E}$) / Hidden ($\mathcal{H}$) | Parametric (PR) | Identity (ID) | CVPR |
| From English to More Languages: Parameter-Efficient Model Reprogramming for Cross-Lingual Speech Recognition Yang et al. (2023) | Learnable | Input ($\mathcal{X}_S$) / Hidden ($\mathcal{H}$) | Additive (AD) | Rule-based (RA) | ICASSP |
| InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning Dai et al. (2023) | Learnable | Embedding ($\mathcal{E}$) | Parametric (PR) | Identity (ID) / Rule-based (RA) / Linear (LA) | NeurIPS |
| Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions Trivedi et al. (2023) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | ACL |
| Low-Resource Music Genre Classification with Cross-Modal Neural Model Reprogramming Hung et al. (2023) | Learnable | Input ($\mathcal{X}_S$) | Parametric (PR) | Statistical (SA) | ICASSP |
| MaPLe: Multi-modal Prompt Learning Khattak et al. (2023) | Learnable | Embedding ($\mathcal{E}$) / Hidden ($\mathcal{H}$) | Concatenative (CO) | Linear (LA) | CVPR |
| Neural Model Reprogramming with Similarity Based Mapping for Low-Resource Spoken Command Recognition Yen et al. (2023) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Statistical (SA) | Interspeech |
| On the Role of Attention in Prompt-tuning Oymak et al. (2023) | Learnable | Embedding ($\mathcal{E}$) / Hidden ($\mathcal{H}$) | Concatenative (CO) | Linear (LA) | ICML |
| PLOT: Prompt Learning with Optimal Transport for Vision-Language Models Chen et al. (2023) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) | Identity (ID) | ICLR |
| Reprogramming Pretrained Language Models for Antibody Sequence Infilling Melnyk et al. (2023) | Learnable | Embedding ($\mathcal{E}$) | Parametric (PR) | Linear (LA) | ICML |
| Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V Yang et al. (2023) | Fixed | Input ($\mathcal{X}_S$) | Additive (AD) | Rule-based (RA) | arXiv |
| TransHP: Image Classification with Hierarchical Prompting Wang et al. (2023) | Learnable | Embedding ($\mathcal{E}$) / Hidden ($\mathcal{H}$) | Concatenative (CO) | Linear (LA) | NeurIPS |
| Tuning Multi-mode Token-level Prompt Alignment across Modalities Wang et al. (2023) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) | Identity (ID) | NeurIPS |
| Understanding and Improving Visual Prompting: A Label-Mapping Perspective Chen et al. (2023) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Statistical (SA) | CVPR |
| Universal Prompt Tuning for Graph Neural Networks Fang et al. (2023) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Linear (LA) | NeurIPS |
| Visual Instruction Tuning Liu et al. (2023) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) / Parametric (PR) | Identity (ID) | NeurIPS |
| What Does a Platypus Look Like? Generating Customized Prompts for Zero-Shot Image Classification Pratt et al. (2023) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | ICCV |
| What Makes Good Examples for Visual In-Context Learning? Zhang et al. (2023) | Fixed | Input ($\mathcal{X}_S$) | Concatenative (CO) | Identity (ID) | arXiv |
| ArGue: Attribute-Guided Prompt Tuning for Vision-Language Models Tian et al. (2024) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) | Identity (ID) | CVPR |
| AutoVP: An Automated Visual Prompting Framework and Benchmark Tsao et al. (2024) | Learnable | Input ($\mathcal{X}_S$) | Concatenative (CO) | Statistical (SA) / Linear (LA) | ICLR |
| Bayesian-guided Label Mapping for Visual Reprogramming Cai et al. (2024) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Statistical (SA) | NeurIPS |
| Exploring the Transferability of Visual Prompting for Multimodal Large Language Models Zhang et al. (2024) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Statistical (SA) | CVPR |
| Joint Visual and Text Prompting for Improved Object-Centric Perception with Multimodal Large Language Models Jiang et al. (2024) | Fixed | Input ($\mathcal{X}_S$) / Embedding ($\mathcal{E}$) | Additive (AD) / Concatenative (CO) | Identity (ID) | arXiv |
| Model Reprogramming Outperforms Fine-tuning on Out-of-distribution Data in Text-Image Encoders Geng et al. (2024) | Learnable | Input ($\mathcal{X}_S$) / Embedding ($\mathcal{E}$) | Additive (AD) / Parametric (PR) | Identity (ID) | SatML |
| PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs Nasiriany et al. (2024) | Fixed | Input ($\mathcal{X}_S$) | Additive (AD) | Rule-based (RA) | ICML |
| PromptKD: Unsupervised Prompt Distillation for Vision-Language Models Li et al. (2024) | Learnable | Embedding ($\mathcal{E}$) | Concatenative (CO) | Identity (ID) | CVPR |
| Sample-specific Masks for Visual Reprogramming-based Prompting Cai et al. (2024) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Statistical (SA) | ICML |
| Time-LLM: Time Series Forecasting by Reprogramming Large Language Models Jin et al. (2024) | Learnable | Embedding ($\mathcal{E}$) | Parametric (PR) | Linear (LA) | ICLR |
| When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations Petrov et al. (2024) | Learnable / Fixed | Input ($\mathcal{X}_S$) / Embedding ($\mathcal{E}$) / Hidden ($\mathcal{H}$) | Concatenative (CO) | Identity (ID) | ICLR |
| Attribute-based Visual Reprogramming for Vision-Language Models Cai et al. (2025) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) / Concatenative (CO) | Rule-based (RA) | ICLR |
| Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want Lin et al. (2025) | Learnable | Embedding ($\mathcal{E}$) | Parametric (PR) | Linear (LA) | ICLR |
| Model Reprogramming Demystified: A Neural Tangent Kernel Perspective Chung et al. (2025) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Identity (ID) | arXiv |
| Refine: Inversion-free backdoor defense via model reprogramming Chen et al. (2025) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Identity (ID) | ICLR |
| Reprogramming pretrained language models for protein sequence representation learning Vinod et al. (2025) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Identity (ID) | Digital Discovery |
| Understanding Model Reprogramming for CLIP via Decoupling Visual Prompts Cai et al. (2025) | Learnable | Input ($\mathcal{X}_S$) | Additive (AD) | Linear (LA) | ICML |
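The Operator ($\tau$) and Alignment ($\omega$) columns denote where and how a reprogramming function touches the frozen model: operators act on the input side, alignments map the model's output back to the target task. As a minimal sketch (hypothetical NumPy helpers with illustrative shapes, not any paper's implementation), the operator families and two common alignment schemes can be written as:

```python
import numpy as np

# --- Operator families (tau), applied on the input side ---

def additive(x, delta):
    """Additive (AD): overlay a learnable perturbation on the source input."""
    return x + delta

def concatenative(x, prompt):
    """Concatenative (CO): prepend a (soft) prompt to the input sequence."""
    return np.concatenate([prompt, x], axis=0)

def parametric(x, weight, bias):
    """Parametric (PR): pass the target input through a learned transform."""
    return x @ weight + bias

# --- Alignment schemes (omega), applied on the output side ---

def identity_alignment(logits):
    """Identity (ID): reuse the source model's outputs directly."""
    return logits

def linear_alignment(logits, mapping):
    """Linear (LA): learned linear map from source logits to target labels."""
    return logits @ mapping
```

For example, a pixel-space visual reprogramming method pairs `additive` with a statistical or linear alignment, while soft prompt tuning pairs `concatenative` (in embedding space) with `identity_alignment`; statistical (SA) and rule-based (RA) alignments replace the linear map with a frequency-derived or hand-specified label mapping.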