Recent Releases of opentsne
opentsne - v1.0.2
General maintenance to keep openTSNE up to date with Python versions and dependencies.
Changes
- build wheels for Python 3.12 (#255)
- update minimum Python version to 3.9 (4e86511b1a2c041d122cb2869480b0c96af79d63)
- add numpy 2.x support (aa3d76c2d86055caae0601cec10dd53db7769b8e)
Published by pavlin-policar over 1 year ago
opentsne - v1.0.0
Given the long-standing stability of openTSNE, it is only fitting that we release a v1.0.0.
Changes
- Various documentation fixes involving initialization, momentum, and learning rate (#243)
- Include Python 3.11 in the test and build matrix
- Uniform affinity kernel now supports mean and max modes (#242)
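A minimal sketch of the new modes; the symmetrize keyword used below is an assumption (the notes only name the mean and max modes), so the exact spelling may differ:

```python
import numpy as np
from openTSNE import TSNE, initialization
from openTSNE.affinity import Uniform

x = np.random.randn(1000, 50)  # toy data

# "mean" averages the two directed kNN affinities, "max" takes their element-wise
# maximum; the keyword name "symmetrize" is assumed, not confirmed by the notes.
affinities = Uniform(x, k_neighbors=30, symmetrize="mean")
init = initialization.pca(x, random_state=0)
embedding = TSNE().fit(affinities=affinities, initialization=init)
```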
Published by pavlin-policar over 2 years ago
opentsne - v0.7.0
Changes
- By default, we now add jitter to non-random initialization schemes. This has almost no effect on the resulting visualizations, but helps avoid potential problems when points are initialized at identical positions (#225)
- By default, the learning rate is now calculated as N/exaggeration. This speeds up convergence of the resulting embedding. Note that the learning rate during the early exaggeration (EE) phase will differ from the learning rate during the standard phase. Additionally, we now set momentum=0.8 in both phases; before, it was 0.5 during EE and 0.8 during the standard phase. This, again, speeds up convergence. (#220)
- Add PrecomputedAffinities to wrap square affinity matrices (#217); see the sketch after this list.
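A minimal sketch of wrapping a square affinity matrix; the random matrix and the explicit array initialization are purely illustrative:

```python
import numpy as np
from openTSNE import TSNE
from openTSNE.affinity import PrecomputedAffinities

n = 500
# Any symmetric, non-negative square matrix can stand in for real affinities;
# this random example is only a placeholder.
p = np.abs(np.random.randn(n, n))
p = (p + p.T) / 2

affinities = PrecomputedAffinities(p)
init = np.random.RandomState(0).normal(0, 1e-4, (n, 2))  # plain array initialization
embedding = TSNE().fit(affinities=affinities, initialization=init)
```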
Build changes
- Build universal2 macOS wheels, enabling ARM support (#226)
Bug Fixes
- Fix BH collapse for smaller data sets (#235)
- Fix updates in the optimizer not being stored correctly between optimization calls (#229)
- Fix inplace=True optimization changing the initializations themselves in some rare use-cases (#225)
As usual, a special thanks to @dkobak for helping with practically all of these bugs/changes.
Published by pavlin-policar about 3 years ago
opentsne - v0.6.2
Changes
- By default, we now use the MultiscaleMixture affinity model, enabling us to pass in a list of perplexities instead of a single perplexity value. This is fully backwards compatible (see the sketch after this list).
- Previously, perplexity values would be changed according to the dataset. E.g. if we passed in perplexity=100 with N=150, then TSNE.perplexity would be equal to 50. Instead, we now keep this value as is and add an effective_perplexity_ attribute (following the convention from scikit-learn), which holds the corrected perplexity values.
- Fix bug where the interpolation grid was being prepared even when using BH optimization during transform.
- Enable calling .transform with precomputed distances. In this case, the data matrix will be assumed to be a distance matrix.
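A minimal sketch of both additions, assuming that with metric="precomputed" the matrix passed to .transform holds distances from the new points to the training points:

```python
import numpy as np
from scipy.spatial.distance import cdist
from openTSNE import TSNE

x_train = np.random.randn(1000, 50)
x_new = np.random.randn(100, 50)

# A list of perplexities selects the MultiscaleMixture affinity model;
# a single value behaves exactly as before.
embedding = TSNE(perplexity=[50, 200], random_state=0).fit(x_train)

# Transform with precomputed distances; the (n_new, n_train) shape used here
# is an assumption based on the note above.
tsne_pre = TSNE(metric="precomputed", initialization="random", random_state=0)
embedding_pre = tsne_pre.fit(cdist(x_train, x_train))
new_coords = embedding_pre.transform(cdist(x_new, x_train))
```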
Build changes
- Build with oldest-supported-numpy
- Build Linux wheels on manylinux2014 instead of manylinux2010, following numpy's example
- Build macOS wheels on the macOS-10.15 Azure VM instead of macos-10.14
- Fix potential problem with clang-13, which actually optimizes code involving infinities when the -ffast-math flag is used
Published by pavlin-policar almost 4 years ago
opentsne - v0.6.0
Changes:
- Remove affinities from TSNE construction; allow custom affinities and initialization in the .fit method (see the sketch after this list). This improves the API when dealing with non-tabular data. This is not backwards compatible.
- Add metric="precomputed". This includes the addition of openTSNE.nearest_neighbors.PrecomputedDistanceMatrix and openTSNE.nearest_neighbors.PrecomputedNeighbors.
- Add knn_index parameter to openTSNE.affinity classes.
- Add (less-than-ideal) workaround for pickling Annoy objects.
- Extend the range of recommended FFTW boxes up to 1000.
- Remove deprecated openTSNE.nearest_neighbors.BallTree.
- Remove deprecated openTSNE.callbacks.ErrorLogger.
- Remove deprecated TSNE.neighbors_method property.
- Add and set as default negative_gradient_method="auto".
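A minimal sketch of the reworked API, building the affinities and the initialization separately and passing both to .fit; the specific classes and keyword values chosen here are just one possible combination:

```python
import numpy as np
from openTSNE import TSNE, initialization
from openTSNE.affinity import PerplexityBasedNN

x = np.random.randn(2000, 50)  # toy data

# Affinities and initialization are now constructed outside of TSNE itself.
affinities = PerplexityBasedNN(x, perplexity=30, n_jobs=4, random_state=0)
init = initialization.pca(x, random_state=0)
embedding = TSNE(n_jobs=4, random_state=0).fit(affinities=affinities, initialization=init)
```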
Published by pavlin-policar almost 5 years ago
opentsne -
Major changes:
- Remove the numba dependency and switch over to using Annoy for nearest neighbor search. Pynndescent is now optional and can be used if installed manually.
- Massively speed up transform by keeping the reference interpolation grid fixed. New points are limited to a circle centered around the reference embedding.
- Implement variable degrees of freedom.
Minor changes:
- Add spectral initialization using diffusion maps.
- Replace cumbersome ErrorLogger callback with the verbose flag.
- Change the default number of iterations to 750.
- Add learning_rate="auto" option (see the sketch after this list).
- Remove the min_grad_norm parameter.
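A minimal sketch of the two usability changes from the list above:

```python
import numpy as np
from openTSNE import TSNE

x = np.random.randn(1000, 20)  # toy data

# learning_rate="auto" picks the learning rate from the data set size, and
# verbose=True takes over progress reporting from the removed ErrorLogger callback.
embedding = TSNE(learning_rate="auto", verbose=True).fit(x)
```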
Bugfixes:
- Fix case where KL divergence was sometimes reported as NaN.
Published by pavlin-policar almost 6 years ago
opentsne - Replace FFTW with numpy's FFT
In order to make usage as simple as possible and remove the external dependency on FFTW (which previously needed to be installed locally), this update replaces FFTW with numpy's FFT.
Published by pavlin-policar over 7 years ago