Recent Releases of hezar
hezar - 0.39.1
Changes:
- All build functions (previously only accessible from `hezar.builders`) are now also importable from their base modules, e.g. `from hezar.models import build_model`.
- Fixed the bug that made `librosa` a requirement for the base installation.
- Some updates to the docs.
- Python
Published by github-actions[bot] over 1 year ago
hezar - 0.39.0
Main Changes
- All dataset classes now have a `preprocessor` parameter which can be a path or an object, meaning that all manual preprocessor configs or paths are removed from the dataset configs.
- Dataset configs now have `hf_load_kwargs` to be passed to `datasets.load_dataset()`.
- Dataset config names are now supported either in `hf_load_kwargs` (`hf_load_kwargs={"name": "fa"}`) or by appending them to the path like `Dataset.load("<path>:<config_name>")`.
- Datasets now have the `_load()` method, which is responsible for loading data files either from the Hub or via custom data reading.
- Datasets now have a `max_size` in their config to override the length of the dataset (all `__len__` implementations are moved to the base class). This value can also be a fraction, e.g. 0.3 means 30% of the original length.
- Better LR scheduling implemented in the Trainer.
- The docs have been rewritten or improved a lot!
- `log_steps` and `save_steps` now accept float values between 0 and 1, representing a fraction of the total steps.
- Other bug fixes and improvements.
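The fraction semantics behind `max_size`, `log_steps`, and `save_steps` can be sketched in plain Python. This is only an illustration of the behavior described above; `resolve_size` is a hypothetical helper, not hezar's actual internal name:

```python
def resolve_size(value, total):
    """Resolve a size that may be an absolute count or a fraction.

    A float strictly between 0 and 1 is treated as a fraction of
    `total`; any other number is treated as an absolute count.
    """
    if isinstance(value, float) and 0 < value < 1:
        return int(value * total)
    return int(value)

# max_size=0.3 keeps 30% of a 1000-sample dataset
print(resolve_size(0.3, 1000))  # 300
# log_steps=0.1 logs every 10% of 500 total steps
print(resolve_size(0.1, 500))   # 50
# An integer value is used as-is
print(resolve_size(200, 1000))  # 200
```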
Published by github-actions[bot] over 1 year ago
hezar - 0.38.0
Main changes
- Resuming from checkpoint moved to the `TrainerConfig` (setting `resume_from_checkpoint` in `Trainer.train()` is now deprecated and raises an error)
- Resuming from checkpoints now supports inner-loop steps instead of only epochs
- Add data sampler for slicing data loaders (mainly used for training resumption)
- Re-order object initialization in the Trainer's init function
- Add support for optimizer checkpointing in `Trainer`
- Add option to disable preprocessing and postprocessing in `Model.predict()`
- Separate out the generation config in Whisper's model config into its own data class
- Drop support for Python 3.9
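The step-granular resumption idea can be illustrated with a toy sampler that skips the batches already consumed before the checkpoint. This is a simplified sketch with hypothetical names, not hezar's actual data sampler:

```python
def sliced_batches(indices, batch_size, resume_step=0):
    """Yield batches of dataset indices, skipping the first
    `resume_step` batches (already consumed before the checkpoint)."""
    batches = [
        indices[i:i + batch_size]
        for i in range(0, len(indices), batch_size)
    ]
    # Resuming mid-epoch means dropping the already-seen prefix
    return batches[resume_step:]

# 10 samples, batch size 2 -> 5 batches; resume from step 3
remaining = sliced_batches(list(range(10)), batch_size=2, resume_step=3)
print(remaining)  # [[6, 7], [8, 9]]
```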
Published by github-actions[bot] almost 2 years ago
hezar - 0.36.0
v0.36.0
1. Add gradient accumulation support to the Trainer
Now, you can set `gradient_accumulation_steps` (defaults to 1, which is the same as regular training) in the `TrainerConfig` to enable this feature. This technique mimics larger batch sizes without changing the actual batch size: for example, a batch size of 16 with 4 gradient accumulation steps is equivalent to a batch size of 64. This can lead to faster convergence.
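The mechanism can be shown with a toy loop in plain Python (no framework; the function name and numeric gradients are purely illustrative): each micro-batch gradient is scaled by `1 / accumulation_steps` so that the accumulated value applied every N steps equals the mean gradient of one N-times-larger batch.

```python
def train_with_accumulation(batch_grads, accumulation_steps):
    """Simulate gradient accumulation: average gradients over
    `accumulation_steps` micro-batches, then apply one update.
    Returns the list of applied updates."""
    updates = []
    accumulated = 0.0
    for step, grad in enumerate(batch_grads, start=1):
        # Scale each micro-batch gradient so the running sum is a mean
        accumulated += grad / accumulation_steps
        if step % accumulation_steps == 0:
            updates.append(accumulated)  # optimizer.step() equivalent
            accumulated = 0.0            # optimizer.zero_grad() equivalent
    return updates

# 4 micro-batches with accumulation_steps=4 produce a single update
# equal to the mean gradient, as one 4x-larger batch would.
print(train_with_accumulation([1.0, 2.0, 3.0, 2.0], 4))  # [2.0]
```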
2. Implement tools for training speech recognition models
In this release we added `SpeechRecognitionDataset`, `SpeechRecognitionDataCollator`, and `SpeechRecognitionMetricsHandler` so that you can easily train or finetune a Whisper model. Take a look at this example.
3. Split and refactor in Trainer for better subclassing
We split the `training_step` function of the Trainer so that it now only takes care of the forward/backward pass; the optimization step has moved to its own method, `optimization_step`. We also added `lr_scheduler_step` for customizing the LR scheduling step.
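The shape of this refactor can be sketched as a class with separately overridable hooks. This is a heavily simplified stand-in (toy "loss" computation, no real model or optimizer), not hezar's actual Trainer code:

```python
class Trainer:
    def training_step(self, batch):
        # Forward/backward pass only; returns the loss value
        loss = sum(batch) / len(batch)  # stand-in for a model forward
        return loss

    def optimization_step(self):
        # Optimizer update, separated so subclasses can override it
        self.steps += 1

    def lr_scheduler_step(self):
        # LR scheduling, also overridable independently
        self.lr *= 0.9

    def train(self, batches):
        self.steps, self.lr = 0, 1.0
        losses = []
        for batch in batches:
            losses.append(self.training_step(batch))
            self.optimization_step()
            self.lr_scheduler_step()
        return losses

losses = Trainer().train([[1.0, 3.0], [2.0, 4.0]])
print(losses)  # [2.0, 3.0]
```

Keeping the three concerns in separate methods means a subclass can, for example, swap the optimizer logic without re-implementing the forward/backward pass.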
4. Add support for more LR schedulers
5. Other bug fixes and improvements
Published by github-actions[bot] about 2 years ago
hezar - 0.35.0
This is a big one! We made a lot of changes and improvements in Hezar.
Improvements
- Add support for `accelerate` for distributed training
- Add resume-from-checkpoint feature to Trainer
- Improve saving/logging capabilities in Trainer
- Improve `print_info()`
- Add `ImageCaptioningDataset` and `ImageCaptioningDataCollator`
- Enhance padding in tokenizers
- Rewrite contribution docs
- Add tests workflow to actions
- Add `cache_dir` parameter to all `load()` methods
- Improve `OCRDataset` and bug fixes
- Add training scripts for image captioning
- Add training script for CRNN training
- Clean `registry.py`
- Change license from MIT to Apache 2.0
- Some improvements and bug fixes in `ViTRobertaImage2Text`
- Bug fixes in tests
- Safe class var handling in configs
- Add `return_scores` to `CRNNImage2Text`
- Add `get_state_dict_from_hub` to support loading from any (non-Hezar) model on the Hub
- Set default LR scheduler (reduce on plateau) in `Trainer`
Bug fixes
- Fix image captioning decoding bug
- Fix mixed precision bug on CPU
- Fix embedding config bug
Deletions
- Delete empty models modules
- Remove all `Union` annotations and replace them with `|`
Published by github-actions[bot] about 2 years ago