Fix resuming PeftModel checkpoints in Trainer (#24274)
* Fix resuming checkpoints for PeftModels
Fix an error that occurred when resuming a PeftModel from a training checkpoint. It was caused by PeftModel.save_pretrained saving only adapter-related data, while _load_from_checkpoint expected a full torch-saved model. This PR fixes the issue and allows the adapter checkpoint to be loaded.
Resolves: #24252
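
A minimal sketch of the adapter-aware loading path, for illustration only (the real change lives in Trainer._load_from_checkpoint; the standalone helper `load_checkpoint`, the hardcoded file names, and the `is_trainable` flag here are assumptions, not the exact diff). It shows the idea: if the model is a PeftModel and the checkpoint directory contains adapter weights, reload them via PEFT's `load_adapter` instead of expecting `pytorch_model.bin`:

```python
import os

import torch
from peft import PeftModel

# PEFT's default adapter weight file written by PeftModel.save_pretrained
ADAPTER_WEIGHTS_NAME = "adapter_model.bin"


def load_checkpoint(model, resume_from_checkpoint):
    """Illustrative sketch: prefer the adapter checkpoint when the model is a PeftModel."""
    adapter_path = os.path.join(resume_from_checkpoint, ADAPTER_WEIGHTS_NAME)
    if isinstance(model, PeftModel) and os.path.exists(adapter_path):
        # save_pretrained only wrote adapter weights, so reload them through
        # load_adapter rather than loading a full torch state dict.
        model.load_adapter(resume_from_checkpoint, model.active_adapter, is_trainable=True)
    else:
        # Fall back to the regular full-model checkpoint.
        state_dict = torch.load(
            os.path.join(resume_from_checkpoint, "pytorch_model.bin"), map_location="cpu"
        )
        model.load_state_dict(state_dict, strict=False)
```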
* fix last comment
* fix nits
---------
Co-authored-by: younesbelkada <younesbelkada@gmail.com>