Recent Releases of kxy
kxy - https://github.com/kxytechnologies/kxy-python/releases/tag/v1.4.10
Change Log
v.1.4.10 Changes
- Added a function to construct features, derived from PFS mutual information estimation, that are expected to be linearly related to the target.
- Fixed a global name conflict in kxy.learning.base_learners.
v.1.4.9 Changes
- Changed the activation function used by PFS from ReLU to Swish/SiLU.
- Left setting the logging level to the user.
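For reference, Swish/SiLU is defined as x · sigmoid(x); a minimal sketch of the activation itself (illustrative only, not kxy's TensorFlow implementation):

```python
import math

def silu(x):
    """Swish/SiLU activation: x * sigmoid(x). Unlike ReLU, it is smooth
    everywhere and slightly negative for small negative inputs."""
    return x / (1.0 + math.exp(-x))

silu(0.0)   # 0.0
silu(10.0)  # close to 10.0: SiLU approaches the identity for large x
```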
v.1.4.8 Changes
- Froze the versions of all Python packages in the Dockerfile.
v.1.4.7 Changes
Changes related to optimizing Principal Feature Selection.
- Made it easy to change PFS' default learning parameters.
- Changed PFS' default learning parameters (the learning rate is now 0.005 and Adam's epsilon 1e-04).
- Added a seed parameter to PFS' fit method for reproducibility.
To globally change the learning rate to 0.003, change Adam's epsilon to 1e-5, and the number of epochs to 25, do:

```python
from kxy.misc.tf import set_default_parameter
set_default_parameter('lr', 0.003)
set_default_parameter('epsilon', 1e-5)
set_default_parameter('epochs', 25)
```
To change the number of epochs for a single run of PFS, use the epochs argument of the fit method of your PFS object. The fit method now also has a seed parameter you may use to make the PFS implementation deterministic.
Example:

```python
from kxy.pfs import PFS
selector = PFS()
selector.fit(x, y, epochs=25, seed=123)
```
Alternatively, you may use the kxy.misc.tf.set_seed method to make PFS deterministic.
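A helper like set_seed typically just forwards one seed to every source of randomness in play; a stdlib-only sketch of the idea (illustrative: the real kxy.misc.tf.set_seed would also have to seed TensorFlow's RNG for PFS' training loop):

```python
import random

def set_seed(seed):
    # Illustrative sketch only: forward the seed to each RNG in use,
    # so repeated runs draw identical random numbers.
    random.seed(seed)

set_seed(123)
first = [random.random() for _ in range(3)]
set_seed(123)
second = [random.random() for _ in range(3)]
# first == second: identical seeds reproduce identical draws
```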
v.1.4.6 Changes
Minor PFS improvements.
- Added more (robust) mutual information loss functions.
- Exposed the learned total mutual information between principal features and the target as an attribute of PFS.
- Exposed the number of epochs as a parameter of PFS' fit.
Published by drylks almost 4 years ago
kxy - https://github.com/kxytechnologies/kxy-python/releases/tag/v1.4.9
Published by drylks almost 4 years ago
kxy - https://github.com/kxytechnologies/kxy-python/releases/tag/v1.4.8
Published by drylks almost 4 years ago
kxy - https://github.com/kxytechnologies/kxy-python/releases/tag/v1.4.7
Published by drylks almost 4 years ago
kxy - https://github.com/kxytechnologies/kxy-python/releases/tag/v1.4.6
Published by drylks almost 4 years ago
kxy - https://github.com/kxytechnologies/kxy-python/releases/tag/v1.4.5
Fixing some package incompatibilities.
Published by drylks almost 4 years ago
kxy - https://github.com/kxytechnologies/kxy-python/releases/tag/v1.4.4
Adding Principal Feature Selection.
Published by drylks almost 4 years ago
kxy - Adding an option to anonymize explanatory variables before uploading the data in most df.kxy.* methods.
Published by drylks over 4 years ago
kxy - Making the license more permissive (AGPLv3 -> GPLv3)
Published by drylks almost 5 years ago
kxy - Cutting a release to be in sync with the latest PyPI version
Published by drylks about 5 years ago
kxy - Allowing the package to be accessible as an AWS lambda layer.
Published by drylks about 5 years ago
kxy - Adding support for RMSE
Adding regression root mean square error (RMSE) to the list of metrics whose achievable values we calculate.
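For reference, RMSE is sqrt(mean((y_true - y_pred)^2)); a minimal implementation:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between two equal-length sequences."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # sqrt(4/3), about 1.1547
```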
Published by drylks over 5 years ago
kxy - Adding maximum-entropy predictive models
Adding a maximum-entropy based classifier (kxy.MaxEntClassifier) and regressor (kxy.MaxEntRegressor) following the scikit-learn signature for fitting and predicting.
These models estimate the posterior mean E[u_y|x] and the posterior standard deviation sqrt(Var[u_y|x]) for any specific value of x, where the copula-uniform representations (u_y, u_x) follow the maximum-entropy distribution.
Predictions in the primal are derived from E[u_y|x].
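The copula-uniform representation of a variable is its value mapped through its (empirical) CDF, so each marginal becomes uniform on (0, 1); a minimal rank-based sketch of the idea (illustrative, not kxy's estimator):

```python
def copula_uniform(values):
    """Map each value to its normalized rank (empirical CDF), giving an
    approximately uniform marginal on (0, 1) that preserves ordering."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    u = [0.0] * len(values)
    for rank, i in enumerate(order):
        u[i] = (rank + 1) / (len(values) + 1)
    return u

copula_uniform([3.2, -1.0, 0.5, 7.8])  # [0.6, 0.2, 0.4, 0.8]
```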
Published by drylks over 5 years ago
kxy - Improving support for categorical variables
- Regression analyses now fully support categorical variables.
- Foundations for multi-output regressions are laid.
- Categorical variables are now systematically encoded and treated as continuous, consistent with what's done at the learning stage.
- Regression and classification are further normalized; most of the compute for classification problems now takes place on the API side and should be considerably faster.
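One common way to make categorical variables usable in a numeric pipeline is to replace each category with a numeric code; a simple illustrative encoder (not necessarily the scheme kxy uses):

```python
def encode_categories(values):
    """Assign each distinct category an integer code in order of first
    appearance (illustrative only)."""
    codes = {}
    return [codes.setdefault(v, len(codes)) for v in values]

encode_categories(['red', 'blue', 'red', 'green'])  # [0, 1, 0, 2]
```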
Published by drylks over 5 years ago
kxy - Making the mutual information analysis abide by variable groups.
Published by drylks over 5 years ago
kxy - Moving away from mutual information values and towards performance.
Published by drylks over 5 years ago
kxy - Switching to statsmodels for KDE-based entropy estimation of scalar random variables.
Published by drylks almost 6 years ago
kxy - Adding Gaussian kernel density estimator based entropy estimation
Published by drylks almost 6 years ago
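A Gaussian-KDE entropy estimator of this kind typically plugs the kernel density estimate into H ≈ -(1/n) Σ log p̂(x_i); a stdlib-only sketch of the idea (illustrative; kxy's estimator and bandwidth choice may differ):

```python
import math

def gaussian_kde_entropy(samples, bandwidth=0.5):
    """Resubstitution entropy estimate: H ~= -mean(log p_hat(x_i)),
    where p_hat is a Gaussian kernel density estimate (illustrative)."""
    n = len(samples)
    norm = n * bandwidth * math.sqrt(2.0 * math.pi)

    def p_hat(x):
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in samples) / norm

    return -sum(math.log(p_hat(x)) for x in samples) / n

# More spread-out samples yield a higher entropy estimate:
gaussian_kde_entropy([0.0, 1.0, 2.0, 3.0]) > gaussian_kde_entropy([0.0, 0.1, 0.2, 0.3])
```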