| Name | Modified | Size |
|---|---|---|
| NNI v3.0 Preview Release (v3.0rc1).tar.gz | 2023-05-10 | 37.7 MB |
| NNI v3.0 Preview Release (v3.0rc1).zip | 2023-05-10 | 39.0 MB |
| README.md | 2023-05-10 | 6.3 kB |
| Totals: 3 items | | 76.7 MB |
## Web Portal
- New look and feel
## Neural Architecture Search
- Breaking change: `nni.retiarii` is no longer maintained or tested. Please migrate to `nni.nas` (see the migration sketch after this list).
  - Inherit `nni.nas.nn.pytorch.ModelSpace` rather than using `@model_wrapper`.
  - Use `nni.choice` rather than `nni.nas.nn.pytorch.ValueChoice`.
  - Use `nni.nas.experiment.NasExperiment` and `NasExperimentConfig` rather than `RetiariiExperiment`.
  - Use `nni.nas.model_context` rather than `nni.nas.fixed_arch`.
  - Please refer to the quickstart for more changes.
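Putting those renames together, a minimal migration sketch might look as follows; `MutableLinear` and the exact constructor arguments are assumptions modeled on the `nni.nas` quickstart, not confirmed API.

```python
import nni
import torch.nn as nn
from nni.nas.nn.pytorch import ModelSpace, LayerChoice, MutableLinear

class MySpace(ModelSpace):                 # v3.0: inherit ModelSpace (was: @model_wrapper)
    def __init__(self):
        super().__init__()
        self.conv = LayerChoice([          # candidate ops, keyed by label
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
        ], label='conv')
        hidden = nni.choice('hidden', [64, 128])   # v3.0: nni.choice (was: ValueChoice)
        self.fc = MutableLinear(16, hidden)        # Mutable* layers accept mutable arguments

    def forward(self, x):
        return self.fc(self.conv(x).mean(dim=(2, 3)))  # global average pool, then classify

# v3.0: NasExperiment replaces RetiariiExperiment
# from nni.nas.experiment import NasExperiment
# experiment = NasExperiment(MySpace(), evaluator, strategy)
```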
- A refreshed experience of constructing model spaces (a short sketch follows this list).
  - Enhanced debuggability via the `freeze()` and `simplify()` APIs.
  - Enhanced expressiveness with `nni.choice`, `nni.uniform`, `nni.normal`, etc.
  - Enhanced customization experience with `MutableModule`, `ModelSpace` and `ParametrizedModule`.
  - Search spaces with constraints are now supported.
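Reusing the hypothetical `MySpace` from the sketch above, the two debugging APIs could be exercised like this; the exact shape of the dict accepted by `freeze()` is an assumption.

```python
space = MySpace()

# simplify() flattens the space into its raw mutables, keyed by label
print(space.simplify())   # e.g. {'conv': Categorical([...]), 'hidden': Categorical([64, 128])}

# freeze() materializes one concrete architecture from a sample of the mutables
arch = space.freeze({'conv': 0, 'hidden': 64})
print(arch)               # a plain nn.Module with the chosen conv and hidden size
```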
- Improved robustness and stability of strategies.
  - Supported search space types are now enriched for `PolicyBasedRL`, ENAS and Proxyless.
  - Each step of a one-shot strategy can be executed alone: model mutation, evaluator mutation and training.
  - Most multi-trial strategies now support specifying a seed for reproducibility (sketch below).
  - Performance of strategies has been verified on a set of benchmarks.
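For example, seeding a multi-trial strategy might look like the following; whether `Random` takes a `seed` keyword is an assumption based on the reproducibility note above.

```python
from nni.nas.strategy import Random

# assumed: most multi-trial strategies accept a seed for reproducible sampling
strategy = Random(seed=42)
```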
- Strategy/engine middleware.
  - Filtering, replicating, deduplicating or retrying models submitted by any strategy.
  - Merging or transforming models before executing (e.g., CGO).
  - Arbitrarily long chains of middleware (illustrated below).
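A shape sketch of such a chain; the module path `nni.nas.strategy.middleware` and the `Chain`/`Deduplication` names are assumptions, so treat this as illustrative only.

```python
from nni.nas.strategy import RegularizedEvolution
from nni.nas.strategy.middleware import Chain, Deduplication  # assumed module and classes

# wrap a base strategy with an arbitrarily long chain of middleware
strategy = Chain(
    RegularizedEvolution(),
    Deduplication('replace'),   # e.g., replace duplicated models before execution
)
```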
- New execution engine.
  - Improved debuggability via `SequentialExecutionEngine`: trials run in a single process, so breakpoints are effective (sketch below).
  - The old execution engine is now decomposed into an execution engine and a model format.
  - Enhanced extensibility of execution engines.
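Selecting the sequential engine might be done through the experiment config; `NasExperimentConfig.default` and a string-valued `execution_engine` field are assumptions about the config surface.

```python
from nni.nas.experiment import NasExperiment, NasExperimentConfig

# assumed: the default config exposes the engine choice by name
config = NasExperimentConfig.default(space, evaluator, strategy)
config.execution_engine = 'sequential'   # run trials in-process so debugger breakpoints hit
experiment = NasExperiment(space, evaluator, strategy, config)
```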
- NAS profiler and hardware-aware NAS.
  - New profilers profile a model space and quickly compute a profiling result for a sampled architecture or a distribution of architectures (`FlopsProfiler`, `NumParamsProfiler` and `NnMeterProfiler` are officially supported; see the sketch below).
  - Assemble a profiler with arbitrary strategies, including both multi-trial and one-shot.
  - Profilers are extensible. Strategies can be assembled with arbitrary customized profilers.
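Profiling one sampled architecture could look like this; the import path and the `profile()` call are assumptions based on the profiler names above.

```python
import torch
from nni.nas.profiler.pytorch.flops import FlopsProfiler  # assumed import path

profiler = FlopsProfiler(space, torch.randn(1, 3, 32, 32))  # analyze the space once up front
sample = {'conv': 0, 'hidden': 64}                          # one point in the space
print(profiler.profile(sample))                             # cheap per-sample FLOPs estimate
```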
## Compression
- The compression framework has been refactored; the new import path is `nni.contrib.compression` (a config sketch follows this list).
  - Config keys are refactored to support more detailed compression configurations. view doc
  - Support fusing multiple compression methods. view doc
  - Support distillation as a basic compression component. view doc
  - Support more compression targets, like `input`, `output` and any registered parameters. view doc
  - Support compressing any module type by customizing module settings. view doc
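A minimal pruning-flavored config under the new framework; the key names (`target_names`, `sparse_ratio`) are assumptions modeled on the refactored config schema.

```python
import torch.nn as nn
from nni.contrib.compression.pruning import L1NormPruner  # new framework import path

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

config_list = [{
    'op_types': ['Linear'],
    'target_names': ['weight'],   # targets can be input, output or any registered parameter
    'sparse_ratio': 0.5,          # assumed key for the desired sparsity
}]
pruner = L1NormPruner(model, config_list)
```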
- Pruning
  - Pruner interfaces have been fine-tuned for ease of use. view doc
  - Support configuring `granularity` in pruners. view doc
  - Support different masking modes: multiplying by zero or adding a large negative value.
  - Support manually setting dependency groups and global groups. view doc
  - A new, more powerful pruning speedup has been released; its applicability and robustness are greatly improved (sketch below). view doc
  - The end-to-end transformer compression tutorial has been updated and achieves more extreme compression performance. view doc
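Continuing the pruner sketch above, the new speedup might be applied like so; the `speedup.v2` module path and the `unwrap_model()` step are assumptions.

```python
import torch
from nni.compression.pytorch.speedup.v2 import ModelSpeedup  # assumed path of the new speedup

_, masks = pruner.compress()      # generate masks with the pruner configured above
pruner.unwrap_model()             # assumed: detach compression wrappers before speedup
ModelSpeedup(model, torch.randn(8, 128), masks).speedup_model()  # physically shrink the model
```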
- Quantization
  - Support using `Evaluator` to handle training/inference (sketch after this list).
  - Support more module fusion combinations. view doc
  - Support configuring `granularity` in quantizers. view doc
- Distillation
  - `DynamicLayerwiseDistiller` and `Adaptive1dLayerwiseDistiller` are supported.
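Driving quantization-aware training through an evaluator might look like the following; the `TorchEvaluator` import path, its argument order and the config keys are assumptions.

```python
import nni
import torch
import torch.nn.functional as F
from nni.contrib.compression.quantization import QATQuantizer
from nni.contrib.compression.utils import TorchEvaluator  # assumed import path

model = torch.nn.Linear(128, 10)
optimizer = nni.trace(torch.optim.SGD)(model.parameters(), lr=0.01)  # traced so NNI can re-create it

def training_step(batch, model):
    x, y = batch
    return F.cross_entropy(model(x), y)

def training_func(model, optimizers, training_step, *args, **kwargs):
    # the evaluator passes back the patched model/optimizer for quantization-aware training
    for batch in train_loader:         # train_loader: your own DataLoader
        loss = training_step(batch, model)
        optimizers.zero_grad(); loss.backward(); optimizers.step()

evaluator = TorchEvaluator(training_func, optimizer, training_step)
config_list = [{'op_types': ['Linear'], 'target_names': ['weight'], 'quant_dtype': 'int8'}]
quantizer = QATQuantizer(model, config_list, evaluator)
```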
- Compression documents have been updated for the new framework; for the old version, please view the v2.10 doc.
- New compression examples are under `nni/examples/compression`.
  - Create an evaluator: `nni/examples/compression/evaluator`
  - Prune a model: `nni/examples/compression/pruning`
  - Quantize a model: `nni/examples/compression/quantization`
  - Fusion compression: `nni/examples/compression/fusion`
## Training Services
- Breaking change: NNI v3.0 cannot resume experiments created by NNI v2.x.
- Local training service:
  - Reduced latency of creating trials.
  - Fixed "GPU metric not found".
  - Fixed bugs about resuming trials.
- Remote training service:
  - `reuse_mode` now defaults to `False`; setting it to `True` falls back to the v2.x remote training service (see the sketch after this list).
  - Reduced latency of creating trials.
  - Fixed "GPU metric not found".
  - Fixed bugs about resuming trials.
  - Supported viewing trial logs on the web portal.
  - Supported automatic recovery after temporary server failures (network fluctuation, out of memory, etc.).
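Opting back into the legacy behavior from Python might look like this; `reuse_mode` as a field on the remote training-service config follows the note above, while the rest is standard `Experiment` scaffolding.

```python
from nni.experiment import Experiment

experiment = Experiment('remote')
experiment.config.trial_command = 'python trial.py'
experiment.config.training_service.reuse_mode = True  # True falls back to the v2.x remote service
```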