| Name | Modified | Size |
|---|---|---|
| Optax 0.2.6 source code.tar.gz | 2025-09-15 | 3.7 MB |
| Optax 0.2.6 source code.zip | 2025-09-15 | 3.9 MB |
| README.md | 2025-09-15 | 5.4 kB |
| Totals: 3 items | | 7.6 MB |
## What's Changed
- Fix for https://github.com/google-deepmind/optax/issues/1328 by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1329
- Make pip quiet in a notebook by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1330
- Clean up freezing doctests by @rdyro in https://github.com/google-deepmind/optax/pull/1333
- Fix rendering issue of `Freezing` in transformations API page in the documentation by @rajasekharporeddy in https://github.com/google-deepmind/optax/pull/1331
- Remove the reference to `optax.transforms` in `freezing` documentation by @rajasekharporeddy in https://github.com/google-deepmind/optax/pull/1334
- Fix for https://github.com/google-deepmind/optax/issues/1335 by @rdyro in https://github.com/google-deepmind/optax/pull/1336
- Add Salimans et al. 2017 citation to make_perturbed_fun docstring. by @carlosgmartin in https://github.com/google-deepmind/optax/pull/1325
- Add tree utility functions. by @carlosgmartin in https://github.com/google-deepmind/optax/pull/1321
- Add tests to verify cross_entropy_losses accept per-logit masks. by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1343
- Remove reliance on chex.dataclass since it's not supported in newest JAX by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1350
- Add line too long (E501) to optax source code by @rdyro in https://github.com/google-deepmind/optax/pull/1347
- Simplify code by using new tree.size function. by @carlosgmartin in https://github.com/google-deepmind/optax/pull/1354
- Enable adaptive gradient clipping for high-dimensional tensors by @aymuos15 in https://github.com/google-deepmind/optax/pull/1340
- Extend the fromage optimizer to allow a learning rate schedule by @rdyro in https://github.com/google-deepmind/optax/pull/1359
- Fix ruff to check line-length=80 by @rdyro in https://github.com/google-deepmind/optax/pull/1360
- Add function tree_allclose. by @carlosgmartin in https://github.com/google-deepmind/optax/pull/1352
- fix CI failure from line-too-long by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1361
- Fix gradient NaN issues in sigmoid_focal_loss for extreme logits by @leochlon in https://github.com/google-deepmind/optax/pull/1346
- Internal changes by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1367
- Clean up and fix errors in DoG implementation and documentation. by @carlosgmartin in https://github.com/google-deepmind/optax/pull/1292
- Trimming the library. by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1370
- Address optimistic_adam interface re-work in the documentation. by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1381
- Small docs fixes by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1382
- Add missing entry for tree_cast_like in utilities.rst. by @carlosgmartin in https://github.com/google-deepmind/optax/pull/1377
- Remove type hint in test to align with new jax.nn annotations by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1385
- Bump jax version for optax by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1392
- Simplify l2 projection by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1394
- Make init_empty_state public by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1395
- Use OrderedDict in named_chain to preserve transformation order in the state object through jax.jit. by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1397
- Fix hlo equivalence test for abs_sqr, fix broken html links by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1404
- Add pyink config for external PRs (optional) by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1409
- Expose scale by muon mask in the muon alias by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1407
- add segmentation based (dice) loss by @aymuos15 in https://github.com/google-deepmind/optax/pull/1366
- fix CI by fixing pylint errors by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1411
- Add explanation to Newton Schulz step by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1410
- Fix doctests: add necessary dependency for sphinx-collections by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1417
- Add missing equations to optax.optimistic_gradient_descent. by @carlosgmartin in https://github.com/google-deepmind/optax/pull/1400
- Fix dtype casting inside tree_add_scale. by @carlosgmartin in https://github.com/google-deepmind/optax/pull/1376
- Update version number for release. by @copybara-service[bot] in https://github.com/google-deepmind/optax/pull/1419
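To illustrate the kind of fix in https://github.com/google-deepmind/optax/pull/1346: a focal loss computed naively as `log(1 - sigmoid(x))` produces NaN/-inf for large-magnitude logits. A minimal, hedged sketch of the underlying numerical trick (not optax's actual implementation; the helper names here are our own), routing everything through a stable log-sigmoid so the loss stays finite:

```python
import math

def log_sigmoid(x: float) -> float:
    # Numerically stable log(sigmoid(x)): branch on the sign so
    # exp() is only ever called on a non-positive argument.
    if x >= 0:
        return -math.log1p(math.exp(-x))
    return x - math.log1p(math.exp(x))

def sigmoid_focal_loss(logit: float, label: float, gamma: float = 2.0) -> float:
    # Focal loss -(1 - p_t)^gamma * log(p_t) with p = sigmoid(logit),
    # where log(p_t) is built from log-sigmoids rather than log(p) directly,
    # so extreme logits never hit log(0).
    log_p = log_sigmoid(logit)        # log(sigmoid(logit))
    log_not_p = log_sigmoid(-logit)   # log(1 - sigmoid(logit))
    p = math.exp(log_p)
    p_t = p * label + (1.0 - p) * (1.0 - label)
    log_p_t = log_p * label + log_not_p * (1.0 - label)
    return -((1.0 - p_t) ** gamma) * log_p_t

# A logit of 1000 would overflow the naive log(1 - sigmoid(x)) path;
# here the result is a finite (large) loss value.
print(math.isfinite(sigmoid_focal_loss(1000.0, 0.0)))
```

The same branch-on-sign log-sigmoid identity is the standard way to stabilize sigmoid-based losses; optax's actual fix lives in the PR above.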
## New Contributors
- @rajasekharporeddy made their first contribution in https://github.com/google-deepmind/optax/pull/1331
- @aymuos15 made their first contribution in https://github.com/google-deepmind/optax/pull/1340
- @leochlon made their first contribution in https://github.com/google-deepmind/optax/pull/1346
Full Changelog: https://github.com/google-deepmind/optax/compare/v0.2.5...v0.2.6