Neural Network Reprogrammability Tutorial
What You'll Learn
- Understand the unified reprogrammability framework and its theoretical foundations
- Master the taxonomy of input manipulation techniques across different domains
- Apply reprogramming methods to real-world scenarios and evaluate trade-offs
- Assess trustworthiness implications including robustness and security considerations
- Implement practical solutions using provided code examples and frameworks
Why This Tutorial Matters
The massive scale of Foundation Models has created an adaptation bottleneck: how can we efficiently post-train or reuse large-scale pre-trained models for specific tasks without extensive fine-tuning? Neural Network Reprogrammability addresses this bottleneck through input-space manipulation and output-space alignment.
This unified framework encompasses three active research paradigms:
- Model Reprogramming: Learning input transformations to repurpose frozen models
- Prompt Tuning: Learning continuous or discrete prompts to guide model behavior
- Prompt Instruction: Using natural language/visual instructions and few-shot examples
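To make the first paradigm concrete, here is a minimal sketch of model reprogramming in PyTorch, assuming a frozen image classifier as the pre-trained backbone (a tiny linear stand-in is used below so the example is self-contained; in practice this would be, e.g., a frozen torchvision ResNet). The additive perturbation, its shape, and the parity-based label map are illustrative choices, not part of any specific published method:

```python
import torch
import torch.nn as nn

# Stand-in for a frozen pre-trained classifier (hypothetical; in practice
# this would be a real backbone with weights frozen).
pretrained = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
for p in pretrained.parameters():
    p.requires_grad = False

class Reprogrammer(nn.Module):
    """Learns an additive input perturbation (the 'program') plus a fixed
    many-to-one output label mapping; the pre-trained model stays frozen."""
    def __init__(self, frozen_model, num_target_classes=2):
        super().__init__()
        self.frozen = frozen_model
        # Trainable universal perturbation, same shape as the model input.
        self.delta = nn.Parameter(torch.zeros(1, 3, 32, 32))
        # Map each of the 10 source classes to a target class
        # (parity mapping here, purely illustrative).
        self.register_buffer("label_map",
                             torch.arange(10) % num_target_classes)
        self.num_target = num_target_classes

    def forward(self, x):
        source_logits = self.frozen(x + self.delta)  # frozen forward pass
        # Aggregate source-class logits into target-class logits.
        target_logits = torch.zeros(x.size(0), self.num_target,
                                    device=x.device)
        target_logits.index_add_(1, self.label_map, source_logits)
        return target_logits

model = Reprogrammer(pretrained)
x = torch.randn(4, 3, 32, 32)
out = model(x)
print(out.shape)  # target-task logits: (batch, num_target_classes)
# Only the input perturbation is trainable; the backbone is untouched.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)
```

Training then optimizes only `delta` with a standard cross-entropy loss on the target task, which is what makes the approach cheap relative to fine-tuning: the number of trainable parameters is the size of one input, independent of the backbone's size.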
You'll gain both theoretical understanding and practical skills to implement these techniques in your own research and applications.
Tutorial Format
Lecture-based tutorial with interactive demonstrations featuring:
- Theoretical foundations with formal mathematical framework
- Live demonstrations and code walkthroughs
- Case studies across computer vision, NLP, and multimodal domains
- Discussion of trustworthiness implications and best practices
- Hands-on exercises with provided implementation examples
Quick Info
Date: January 20-21, 2026
Venue: TBD
Contact: feng.liu1@unimelb.edu.au
Project: awesome-reprogrammability
Prerequisites
- Basic machine learning knowledge
- Familiarity with neural networks
- Python/PyTorch programming experience