Abstract
Fine-tuning large pre-trained models is a common practice in machine learning applications, yet its mathematical analysis remains largely unexplored. In this paper, we study fine-tuning through the lens of memorization capacity. Our new measure, the Fine-Tuning Capacity (FTC), is defined as the maximum number of samples a neural network can fine-tune, or equivalently, as the minimum number of neurons (m) needed to arbitrarily change N labels among K samples considered in the fine-tuning process. In essence, FTC extends the memorization capacity concept to the fine-tuning scenario. We analyze FTC for the additive fine-tuning scenario where the fine-tuned network is defined as the summation of the frozen pre-trained network f and a neural network g (with m neurons) designed for fine-tuning. When g is a ReLU network with either 2 or 3 layers, we obtain tight upper and lower bounds on FTC; we show that N samples can be fine-tuned with m = Θ(N) neurons for 2-layer networks, and with m = Θ(√N) neurons for 3-layer networks, no matter how large K is. Our results recover the known memorization capacity results when N = K as a special case.
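To make the additive fine-tuning setup concrete, below is a minimal PyTorch sketch of a fine-tuned network defined as the sum of a frozen pre-trained network f and a small trainable ReLU network g with m hidden neurons. This is an illustrative reading of the setting described in the abstract, not the authors' code; the class name `AdditiveFineTune`, the scalar-output assumption, and the particular dimensions are hypothetical choices for the example.

```python
import torch
import torch.nn as nn

class AdditiveFineTune(nn.Module):
    """Additive fine-tuning: the output is f(x) + g(x), where the
    pre-trained network f is frozen and only the 2-layer ReLU
    network g (with m hidden neurons) is trained."""

    def __init__(self, f: nn.Module, input_dim: int, m: int):
        super().__init__()
        self.f = f
        for p in self.f.parameters():  # freeze the pre-trained network
            p.requires_grad = False
        # g: 2-layer ReLU network with m hidden neurons, scalar output
        self.g = nn.Sequential(
            nn.Linear(input_dim, m),
            nn.ReLU(),
            nn.Linear(m, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.f(x) + self.g(x)


# Usage sketch: a stand-in pre-trained network and an optimizer that
# updates only the parameters of g.
f = nn.Linear(10, 1)                 # placeholder for a pre-trained model
model = AdditiveFineTune(f, input_dim=10, m=32)
opt = torch.optim.SGD(model.g.parameters(), lr=0.1)
```

In this sketch the optimizer receives only `model.g.parameters()`, so the pre-trained network stays fixed while the added ReLU network absorbs the label changes, mirroring the frozen-f-plus-g decomposition analyzed in the paper.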
| Original language | English |
| --- | --- |
| Pages (from-to) | 3264-3278 |
| Number of pages | 15 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 244 |
| Publication status | Published - 2024 |
| Event | 40th Conference on Uncertainty in Artificial Intelligence, UAI 2024, Barcelona, Spain, 2024 Jul 15 → 2024 Jul 19 |
Bibliographical note
Publisher Copyright: © 2024 Proceedings of Machine Learning Research. All rights reserved.
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability