Recent Releases of poutyne
poutyne - v1.17.3
What's Changed
- Fix `TORCH_GREATER_EQUAL_1_12` with `lightning_utilities` by @atremblay in https://github.com/GRAAL-Research/poutyne/pull/174
- Add support for Python 3.12 by @freud14 in https://github.com/GRAAL-Research/poutyne/pull/175
Full Changelog: https://github.com/GRAAL-Research/poutyne/compare/v1.17.2...v1.17.3
- Python
Published by freud14 about 1 year ago
poutyne - v1.17.2
What's Changed
- fix: np.Inf was deprecated forever and is now gone by @dhdaines in https://github.com/GRAAL-Research/poutyne/pull/171
- Update Black, isort, PyLint and flake8. by @freud14 in https://github.com/GRAAL-Research/poutyne/pull/172
- Update Github Actions by @freud14 in https://github.com/GRAAL-Research/poutyne/pull/173
New Contributors
- @dhdaines made their first contribution in https://github.com/GRAAL-Research/poutyne/pull/171
Full Changelog: https://github.com/GRAAL-Research/poutyne/compare/v1.17.1...v1.17.2
Published by freud14 over 1 year ago
poutyne - v1.17
`FBeta` uses the non-deterministic torch function `bincount`. You can now make this function deterministic, either by passing the `make_deterministic` argument to the `FBeta` class or by using one of the PyTorch functions `torch.set_deterministic_debug_mode` or `torch.use_deterministic_algorithms`. Note that this might make your code slower.
Published by freud14 almost 3 years ago
poutyne - v1.16
- Add `run_id` and `terminate_on_end` arguments to MLFlowLogger.
Breaking change:
- In MLFlowLogger, except for `experiment_name`, all arguments must now be passed as keyword arguments. Passing `experiment_name` as a positional argument is also deprecated and will be removed in future versions.
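The keyword-only requirement can be enforced in Python with a bare `*` in the signature. The class below is a minimal illustration of the mechanism, not MLFlowLogger's actual signature (its full argument list is in the Poutyne documentation):

```python
class LoggerSketch:
    """Illustrative stand-in for a logger whose arguments are keyword-only."""

    def __init__(self, experiment_name, *, run_id=None, terminate_on_end=True):
        # Everything after the bare ``*`` must be passed by keyword.
        self.experiment_name = experiment_name
        self.run_id = run_id
        self.terminate_on_end = terminate_on_end


# Allowed: keyword arguments after experiment_name.
logger = LoggerSketch("my-exp", run_id="abc123")

# Passing run_id positionally raises a TypeError.
try:
    LoggerSketch("my-exp", "abc123")
except TypeError:
    print("positional run_id rejected")
```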
Published by freud14 almost 3 years ago
poutyne - v1.13
Breaking changes:
- The deprecated `torch_metrics` keyword argument has been removed. Users should use the `batch_metrics` or `epoch_metrics` keyword argument for torchmetrics' metrics.
- The deprecated `EpochMetric` class has been removed. Users should implement the `Metric` class instead.
Published by freud14 over 3 years ago
poutyne - v1.12
- Fix a bug when transferring the optimizer to another device, caused by a new feature in PyTorch 1.12, namely the "capturable" parameter in Adam and AdamW.
- Add utility functions for saving (`save_random_states`) and loading (`load_random_states`) Python's, NumPy's and PyTorch's (both CPU and GPU) random states. Furthermore, we also add the `RandomStatesCheckpoint` callback. This callback is now used in ModelBundle.
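The idea behind these utilities can be illustrated with Python's built-in `random` module alone. Note that this sketch is not Poutyne's implementation, which also covers NumPy's and PyTorch's CPU and GPU generators:

```python
import random


def save_python_random_state():
    """Capture the current state of Python's global RNG."""
    return random.getstate()


def load_python_random_state(state):
    """Restore a previously captured RNG state."""
    random.setstate(state)


state = save_python_random_state()
first = random.random()
load_python_random_state(state)
second = random.random()  # same value as `first`: the state was restored
```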
Published by freud14 over 3 years ago
poutyne - v1.10
- Add a WandB logger.
- Epoch and batch metrics are now unified. Their only difference is whether the metric for the batch is computed. The main interface is now the `Metric` class. It is compatible with TorchMetrics. Thus, TorchMetrics metrics can now be passed as either batch or epoch metrics. Metrics with the interface `metric(y_pred, y_true)` are internally wrapped into a `Metric` object and are still fully supported. The `torch_metrics` keyword argument and the `EpochMetric` class are now deprecated and will be removed in future versions.
- `Model.get_batch_size` is replaced by `poutyne.get_batch_size()`.
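The unification described above can be pictured as an object that both returns a per-batch value and accumulates an epoch-level result. The wrapper below is a hypothetical sketch of how a plain `metric(y_pred, y_true)` function could be adapted; it is not Poutyne's actual `Metric` class:

```python
class FunctionalMetricWrapper:
    """Adapt a plain metric(y_pred, y_true) function so it can serve both as a
    per-batch metric and as an average over an epoch (illustrative sketch)."""

    def __init__(self, fn):
        self.fn = fn
        self.reset()

    def reset(self):
        self.total = 0.0
        self.count = 0

    def update(self, y_pred, y_true):
        # Per-batch value, also accumulated for the epoch-level result.
        value = self.fn(y_pred, y_true)
        self.total += value
        self.count += 1
        return value

    def compute(self):
        # Epoch-level result: mean of the per-batch values.
        return self.total / self.count


def mean_absolute_error(y_pred, y_true):
    return sum(abs(p - t) for p, t in zip(y_pred, y_true)) / len(y_true)


metric = FunctionalMetricWrapper(mean_absolute_error)
metric.update([1.0, 2.0], [1.0, 4.0])  # batch value: 1.0
metric.update([3.0], [3.0])            # batch value: 0.0
epoch_value = metric.compute()         # mean over batches: 0.5
```

Note that averaging per-batch values is only one possible aggregation; non-decomposable metrics such as F1 accumulate counts instead.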
Published by freud14 almost 4 years ago
poutyne - v1.9
- Add support for TorchMetrics metrics.
- `Experiment` is now an alias for `ModelBundle`, a class quite similar to `Experiment` except that it allows instantiating an "Experiment" from a Poutyne Model or a network.
- Add support for PackedSequence.
- Add a flag to `TensorBoardLogger` to allow putting training and validation metrics in different graphs. This allows behavior closer to Keras.
- Add support for F-score on binary classification.
- Add `convert_to_numpy` flag to be able to obtain tensors instead of NumPy arrays in `evaluate*` and `predict*`.
Published by freud14 about 4 years ago
poutyne - v1.8
Breaking changes:
- When using the epoch metrics `'f1'`, `'precision'`, `'recall'` and associated classes, the default average has been changed to `'macro'` instead of `'micro'`. This changes the names of the metrics that are displayed and that are in the log dictionary in callbacks. This change also applies to `Experiment` when using `task='classif'`.
- Exceptions when loading checkpoints in `Experiment` are now propagated instead of being silenced.
Published by freud14 about 4 years ago
poutyne - v1.7
- Add `plot_history` and `plot_metric` functions to easily plot the history returned by Poutyne. `Experiment` also saves the figures at the end of the training.
- All text files (e.g. CSVs in CSVLogger) are now saved using UTF-8 on all platforms.
Published by freud14 over 4 years ago
poutyne - v1.6
- PeriodicSaveCallback and all its subclasses now have the `restore_best` argument.
- `Experiment` now contains a `monitoring` argument that can be set to false to avoid monitoring any metric and saving unneeded checkpoints.
- The format of the ETA time and total time now contains days, hours, and minutes when appropriate.
- Add `predict` methods to Callback to allow callbacks to be called during the prediction phase.
- Add `infer` methods to Experiment to more easily make inferences (predictions) with an experiment.
- Add a progress bar callback during predictions of a model.
- Add a method to compare the results of two experiments.
- Add `return_ground_truth` and `has_ground_truth` arguments to `predict_dataset` and `predict_generator`.
Published by freud14 over 4 years ago
poutyne - v1.5
- Add `LambdaCallback` to more easily define a callback from lambdas or functions.
- In Jupyter Notebooks, when coloring is enabled, the print rate of progress output is limited to one output every 0.1 seconds. This solves the slowness problem (and the memory problem on Firefox) when there is a large number of steps per epoch.
- Add `return_dict_format` argument to `train_on_batch` and `evaluate_on_batch`, and allow returning predictions and ground truths in `evaluate_*` even when `return_dict_format=True`. Furthermore, `Experiment.test*` now supports `return_pred=True` and `return_ground_truth=True`.
- Split the Tips and Tricks example into two examples: Tips and Tricks and Sequence Tagging With an RNN.
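The mechanism behind a lambda-based callback is simple to sketch in plain Python: store the given functions and forward the matching training events to them. The class below is illustrative only and does not reproduce `LambdaCallback`'s exact argument names, which are listed in the Poutyne documentation:

```python
class LambdaCallbackSketch:
    """Dispatch training events to plain functions (illustrative sketch)."""

    def __init__(self, on_epoch_end=None, on_train_end=None):
        self.on_epoch_end_fn = on_epoch_end
        self.on_train_end_fn = on_train_end

    def on_epoch_end(self, epoch_number, logs):
        if self.on_epoch_end_fn is not None:
            self.on_epoch_end_fn(epoch_number, logs)

    def on_train_end(self, logs):
        if self.on_train_end_fn is not None:
            self.on_train_end_fn(logs)


messages = []
callback = LambdaCallbackSketch(
    on_epoch_end=lambda epoch, logs: messages.append(f"epoch {epoch}: {logs['loss']:.2f}"),
)

# A training loop would invoke the callback at the matching events:
for epoch, loss in enumerate([0.9, 0.5], start=1):
    callback.on_epoch_end(epoch, {"loss": loss})
```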
Published by freud14 almost 5 years ago
poutyne - v1.4
- Add examples for image reconstruction and semantic segmentation with Poutyne.
- Add the following flags in `ProgressionCallback`: `show_every_n_train_steps`, `show_every_n_valid_steps`, `show_every_n_test_steps`. They allow showing only certain steps instead of all steps.
- Fix bug where all warnings were silenced.
- Add `strict` flag when loading checkpoints. In Model, a NamedTuple is returned as in PyTorch's `load_state_dict`. In Experiment, a warning is raised when there are missing or unexpected keys in the checkpoint.
- In CSVLogger, when multiple learning rates are used, we use the column names `lr_group_0`, `lr_group_1`, etc. instead of `lr`.
- Fix bug where EarlyStopping would be one epoch late and would disregard the monitored metric at the last epoch anyway.
Published by freud14 almost 5 years ago
poutyne - v1.3
- A progress bar is now shown when validating a model (similar to training). It can be disabled by passing `progress_options=dict(show_on_valid=False)` in the `fit*` methods.
- A progress bar is now shown when testing a model (similar to training). It can be disabled by passing `verbose=False` in the `evaluate*` methods.
- A new notification callback, `NotificationCallback`, allows receiving messages at specific times (start/end of training/testing and at any given epoch).
- A new logging callback, `MLflowLogger`, allows you to log experiment configuration and metrics during training, validation and testing.
- Fix bug where `evaluate_generator` did not support generators raising the StopIteration exception.
- Experiment now has a `train_data` and a `test_data` method.
- The Lambda layer now supports multiple arguments in its forward method.
Published by freud14 about 5 years ago
poutyne - v1.2
- A `device` argument is added to `Model`.
- The `optimizer` argument of `Model` can now be a dictionary. This allows passing different arguments to the optimizer, e.g. `optimizer=dict(optim='sgd', lr=0.1)`.
- The progress bar now uses 20 characters instead of 25.
- The progress bar is now more fluid since partial blocks are used, allowing increments of 1/8th of a block at a time.
- The function `torch_to_numpy` now does `.detach()` before `.cpu()`. This might slightly improve performance in some cases.
- In Experiment, the `load_checkpoint` method can now load arbitrary checkpoints by passing a filename instead of the usual argument.
- Experiment now has a `train_dataset` and a `test_dataset` method.
- Experiment is no longer considered a beta feature.
Breaking changes:
- In `evaluate`, `dataloader_kwargs` is now a dictionary keyword argument instead of arbitrary keyword arguments. Other methods are already this way. This was an oversight of the last release.
Published by freud14 about 5 years ago
poutyne - v1.1
- There is now a `TopKAccuracy` batch metric, and it is possible to use it as a string for k in 1 to 10 and 20, 30, …, 100, e.g. `'top5'`.
- Add `fit_dataset`, `evaluate_dataset` and `predict_dataset` methods, which allow passing PyTorch Datasets and create DataLoaders internally. Here is an example with MNIST.
- Colors now work correctly in Colab.
- The default color scheme was changed so that it looks good in Colab, notebooks and the command line. The previous one was not readable in Colab.
- Checkpointing callbacks no longer use the Python `tempfile` package for the temporary file. The use of this package caused problems when the temporary filesystem was not on the same partition as the final destination of the checkpoint. The temporary file is now created at the same place as the final destination. Thus, in most use cases, this renders the `temporary_filename` argument unnecessary. The argument is still available for those who need it.
- In Experiment, it is now possible to call the `test` method when training without logging.
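Top-k accuracy counts a prediction as correct when the true class is among the k highest-scoring classes. Here is a minimal pure-Python illustration of the computation, not Poutyne's `TopKAccuracy` implementation:

```python
def top_k_accuracy(scores, targets, k):
    """Fraction of samples whose true class is among the k highest scores.

    scores: list of per-class score lists, one per sample.
    targets: list of true class indices.
    """
    correct = 0
    for sample_scores, target in zip(scores, targets):
        # Indices of the k highest-scoring classes for this sample.
        top_k = sorted(range(len(sample_scores)),
                       key=lambda i: sample_scores[i],
                       reverse=True)[:k]
        correct += target in top_k
    return correct / len(targets)


scores = [
    [0.1, 0.5, 0.4],  # top-2 classes: 1, 2
    [0.7, 0.2, 0.1],  # top-2 classes: 0, 1
]
targets = [2, 1]
accuracy = top_k_accuracy(scores, targets, k=2)  # 1.0: both targets in top-2
```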
Published by freud14 over 5 years ago
poutyne - v1.0.0
Version 1.0.0 of Poutyne is here!
- Output is now very nicely colored and now has a progress bar. Both can be disabled with the `progress_options` argument. The `colorama` package needs to be installed to have the colors. See the documentation of the fit method for details.
- Multi-GPU support: uses `torch.nn.parallel.data_parallel` under the hood.
- Huge update to the documentation, with documentation of metrics and a lot of examples.
- No need to import `framework` anymore. Everything can now be imported from `poutyne` directly, i.e. `from poutyne import whatever_you_want`.
- `PeriodicSaveCallback`s (such as `ModelCheckpoint`) now have a `keep_only_last_best` flag, which allows keeping only the last best checkpoint even when the names differ between epochs.
- `FBeta` now supports an `ignore_index` as in `nn.CrossEntropyLoss`.
- Epoch metric strings `'precision'` and `'recall'` are now available directly without instantiating `FBeta`.
- Better ETA estimation in output by weighting recent batches more than older batches.
- Batch metrics `acc` and `bin_acc` now have class counterparts `Accuracy` and `BinaryAccuracy`, in addition to a `reduction` keyword argument as in PyTorch.
- Various bug fixes.
Published by freud14 over 5 years ago
poutyne - v0.8.2
- Add new callback methods `on_test_*` to callbacks. Callbacks can now be passed to the `evaluate*` methods.
- New epoch metrics for scikit-learn functions (see documentation of SKLearnMetrics).
- It is now possible to return multiple metrics for a single batch metric function or epoch metric object. Furthermore, their names can be changed. (See note in documentation of Model class)
- Computation of batch size is now added for dictionary inputs and outputs. (See documentation of the new method `get_batch_size`)
- Add a lot of type hinting.
Breaking changes:
- Ground truths and predictions returned by evaluate_generator and predict_generator are going to be concatenated in the next version, except when inside custom objects. A warning is issued in those methods. If the warning is disabled as instructed, the new behavior takes place. (See documentation of evaluate_generator and predict_generator)
- The methods on_batch_begin and on_batch_end have been renamed on_train_batch_begin and on_train_batch_end, respectively. When the old names are used, a warning is issued and backward compatibility is provided. This backward compatibility will be removed in the next version.
- EpochMetric classes now have a mandatory reset method.
- Support for Python 3.5 is dropped. (PyTorch was already not supporting it anyway.)
Published by freud14 over 5 years ago
poutyne - v0.7.2
Poutyne is now under LGPLv3 instead of GPLv3.
Essentially, what this means is that you can now include Poutyne into any proprietary software as long as you are willing to provide the source code and the modifications of Poutyne with your software. The LICENSE file contains more details.
This is not legal advice. You should consult your lawyer about the implications of the license for your own case.
Published by freud14 about 6 years ago
poutyne - v0.7
- Add automatic naming for class objects in `batch_metrics` and `epoch_metrics`.
- Add `get_saved_epochs` method to Experiment.
- The `optimizer` parameter can now be set to None in `Model` when there is no need for it.
- Fix warnings from the new PyTorch version.
- Various improvements to the code.
Breaking changes:
- Threshold of the binary_accuracy metric is now 0 instead of 0.5 so that it works using the logits instead of the probabilities.
- The attribute model of the Model class is now called network instead. A deprecation warning is in place until the next version.
Published by freud14 about 6 years ago
poutyne - v0.6
- Poutyne now has a new logo!
- Add a beta `Experiment` class that encapsulates logging and checkpointing callbacks so that it is possible to stop and resume optimization at any time.
- Add epoch metrics, allowing the computation of metrics over an epoch that are not decomposable, such as F1 score, precision and recall. While only these epoch metrics are currently available in Poutyne, epoch metrics make it possible to compute the AUROC metric, PCC metric, etc.
- Support for multiple batches per optimizer step. This allows having smaller batches that fit in memory instead of a big batch that does not, while retaining the advantage of the big batch.
- Add `return_ground_truth` argument to `evaluate_generator`.
- Data loading time is now taken into account for progress estimation.
- Various doc updates and example fine-tuning.
Breaking changes:
- metrics argument in Model is now deprecated. This argument will be removed in the next version. Use batch_metrics instead.
- pytoune package is now removed.
- If `steps_per_epoch` or `validation_steps` are greater than the generator length in `*_generator` methods, the generator is now cycled through instead of stopping as before.
Published by freud14 over 6 years ago
poutyne - v0.5
- Adding a new `OptimizerPolicy` class allowing phase-based learning rate policies. The two following learning rate policies are also provided:
  - "Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates", Leslie N. Smith, Nicholay Topin, https://arxiv.org/abs/1708.07120
  - "SGDR: Stochastic Gradient Descent with Warm Restarts", Ilya Loshchilov, Frank Hutter, https://arxiv.org/abs/1608.0398
- Adding a "bin_acc" metric for binary classification, in addition to the "accuracy" metric.
- Adding "time" in callbacks' logs.
- Various refactoring and small bug fixes.
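A phase-based learning rate policy pieces together phases, each with its own schedule over a number of steps. The sketch below illustrates the idea with a linear warm-up phase followed by a cosine decay phase; it is a hypothetical illustration of the concept, not Poutyne's `OptimizerPolicy` API:

```python
import math


def linear_phase(start, end, steps):
    """Learning rates linearly interpolated from start to end over `steps`."""
    return [start + (end - start) * i / (steps - 1) for i in range(steps)]


def cosine_phase(start, end, steps):
    """Learning rates following a half-cosine from start down to end."""
    return [end + (start - end) * (1 + math.cos(math.pi * i / (steps - 1))) / 2
            for i in range(steps)]


# A policy is just the concatenation of its phases.
policy = linear_phase(0.01, 0.1, steps=5) + cosine_phase(0.1, 0.001, steps=5)

# A training loop would then set the optimizer's learning rate to
# policy[step] at each step.
```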
Published by freud14 almost 7 years ago
poutyne - v0.4.1
Breaking changes:
- Update for PyTorch 0.4.1 (PyTorch 0.4 not supported).
- Keyword arguments must now be passed with their keyword names in most PyToune functions.
Non-breaking changes:
- `self.optimizer.zero_grad()` is called instead of `self.model.zero_grad()`.
- Support strings as input for all PyTorch loss functions, metrics and optimizers.
- Add support for generators that raise the StopIteration exception.
- Refactor of the Model class (no API-breaking changes).
- Now using pylint as code style linter.
- Fix typos in documentation.
Published by freud14 over 7 years ago
poutyne - v0.4
- New usage example using MNIST.
- New `*_on_batch` methods in Model.
- Every NumPy array is converted into a tensor and vice versa everywhere it applies, i.e. methods return NumPy arrays and can take NumPy arrays as input.
- New convenient simple layers (Flatten, Identity and Lambda layers).
- New callbacks to save optimizers and LR schedulers.
- New Tensorboard callback.
- Various bug fixes and improvements.
Published by freud14 over 7 years ago
poutyne - v0.3
Breaking changes:
- Update to PyTorch 0.4.0.
- When one or zero metrics are used, `evaluate` and `evaluate_generator` no longer return numpy arrays.
Other changes:
- Model now offers a `to()` method to send the PyTorch module and its input to a specified device (thanks to PyTorch 0.4.0).
- There is now an 'accuracy' metric that can be used as a string in the metrics list.
- Various bug fixes.
Published by freud14 almost 8 years ago
poutyne -
- ModelCheckpoint now writes the checkpoint atomically.
- New `initial_epoch` parameter to Model.
- The mean of losses and metrics is now weighted by the batch size (`len(y)`) instead of being a simple mean of the per-batch losses and metrics.
- Update to the documentation.
- Model's `predict` and `evaluate` make more sense now and now have a generator version.
- A few other bug fixes.
Published by freud14 almost 8 years ago