Recent Releases of onnxruntime
onnxruntime - ONNX Runtime v1.22.2
What's new?
This release adds an optimized CPU/MLAS implementation of DequantizeLinear (8-bit) and introduces the build option client_package_build, which enables default options that are more appropriate for client/on-device workloads (e.g., thread spinning is disabled by default).
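For reference, thread spinning can already be disabled per session at runtime via a session configuration entry; a minimal Python sketch (the config key session.intra_op.allow_spinning is the existing session option for this; the model path is a placeholder):

```python
import onnxruntime as ort

# Disable intra-op thread spinning for this session -- one of the
# client-friendly defaults that client_package_build applies at build time.
so = ort.SessionOptions()
so.add_session_config_entry("session.intra_op.allow_spinning", "0")

sess = ort.InferenceSession("model.onnx", sess_options=so)  # placeholder model path
```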
Build System & Packages
- Add --client_package_build option (#25351) - @jywu-msft
- Remove the python installation steps from win-qnn-arm64-ci-pipeline.yml (#25552) - @snnn
CPU EP
- Add multithreaded/vectorized implementation of DequantizeLinear for int8 and uint8 inputs (SSE2, NEON) (#24818) - @adrianlizarraga
QNN EP
- Add support for the Upsample, Einsum, LSTM, and CumSum operators (#24265, #24616, #24646, #24820) - @quic-zhaoxul, @1duo, @chenweng-quic, @Akupadhye
- Fuse scale into Softmax (#24809) - @qti-yuduo
- Enable DSP queue polling when performance is set to “burst” mode (#25361) - @quic-calvnguy
- Update QNN SDK to version 2.36.1 (#25388) - @qti-jkilpatrick
- Include the license file from the QNN SDK in the Microsoft.ML.OnnxRuntime.QNN NuGet package (#25158) - @HectorSVC
Published by vraspar 7 months ago
onnxruntime - ONNX Runtime v1.22.1
What's new?
This release replaces static linking of dxcore.lib with optional runtime loading, lowering the minimum supported version from Windows 10 22H2 (10.0.22621) to 20H1 (10.0.19041). This enables compatibility with Windows Server 2019 (10.0.17763), where dxcore.dll may be absent.
- change dependency from gitlab eigen to github eigen-mirror #24884 - @prathikr
- Weaken dxcore dependency #24845 - @skottmckay
- [DML] Restore compatibility with Windows Sdk 10.0.17134.0 #24950 - @JulienMaille
- Disable VCPKG's binary cache #24889 - @snnn
Published by vraspar 8 months ago
onnxruntime - ONNX Runtime v1.22
Announcements
- This release introduces new APIs for the Model Editor, Auto EP selection infrastructure, and AOT compilation
- ONNX Runtime GPU packages require CUDA 12.x; packages built for CUDA 11.x are no longer published.
GenAI & Advanced Model Features
- Constrained Decoding: Introduced new capabilities for constrained decoding, offering more control over generative AI model outputs.
Execution & Core Optimizations
Core
- Auto EP Selection Infrastructure: Added foundational infrastructure to enable automatic selection of Execution Providers via selection policies, aiming to simplify configuration and optimize performance. (Pull Request #24430)
- Compile API: Introduced new APIs to support explicit compilation of ONNX models.
- See: OrtCompileApi Struct Reference
- See: EP Context Design
- Model Editor API: APIs for creating or editing ONNX models
- See: OrtModelEditorApi
Execution Provider (EP) Updates
CPU EP/MLAS
- KleidiAI Integration: Integrated KleidiAI into ONNX Runtime/MLAS for enhanced performance on Arm architectures.
- MatMulNBits Support: Added support for MatMulNBits, enabling matrix multiplication with weights quantized to 8 bits.
- GroupQueryAttention optimizations and enhancements
OpenVINO EP
- Added support for OpenVINO versions up to 2025.1
- Introduced Intel compiler-level optimizations for QDQ models
- Added support for selecting Intel devices based on LUID
- Improved the load_config feature to support the AUTO, HETERO, and MULTI plugins
- Miscellaneous bug fixes and optimizations
- For detailed updates, refer to Pull Request #24394: ONNXRuntime OpenVINO - Release 1.22. A minimal session setup is sketched below.
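As a rough illustration, creating a session on the OpenVINO EP with a device selection (the device string and model path are placeholders; see the OpenVINO EP documentation for the exact option set):

```python
import onnxruntime as ort

# Select OpenVINO's AUTO plugin with GPU preferred and CPU fallback.
sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": "AUTO:GPU,CPU"}],
)
```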
QNN EP
- SDK Update: Added support for QNN SDK 2.33.2.
- Added and updated operator support for Sum, Softmax, Upsample, Expand, ScatterND, and Einsum
- QNN EP can now be built as a shared or static library
- Enabled the QnnGpu backend
- For detailed updates, refer to recent QNN-tagged PRs. A minimal session setup is sketched below.
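A minimal sketch of creating a session on the QNN EP, assuming a Windows machine with the HTP (NPU) backend (backend_path is an existing provider option; paths are placeholders):

```python
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["QNNExecutionProvider"],
    provider_options=[{"backend_path": "QnnHtp.dll"}],  # HTP backend library
)
```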
TensorRT EP
- TensorRT Version: Added support for TensorRT 10.9.
- Note for onnx-tensorrt open-source parser users: Please check here for specific requirements.
- New Features:
- Added an EP option to enable the TRT Preview Feature
- Added support for loading TensorRT V3 plugins
- Bug Fixes:
- Resolved an issue related to multithreading scenarios.
- Fixed incorrect GPU usage that affected both TensorRT EP and CUDA EP.
NV TensorRT RTX EP
- New Execution Provider: Introduced a new Execution Provider specifically for Nvidia RTX GPUs, leveraging TensorRT for optimized performance.
CUDA EP
- MatMulNBits Enhancement: Added support for 8-bit weight-only quantization in MatMulNBits.
- Bug Fixes:
- Fixed incorrect GPU usage (also mentioned under TensorRT EP).
VitisAI EP
- Miscellaneous bug fixes and improvements.
Infrastructure & Build Improvements
Build System & Packages
- QNN NuGet Package: The QNN NuGet package is now built as ARM64X.
Dependencies / Version Updates
- CUDA Version Update: This release includes an update to the CUDA version. Users should consult the documentation for specific version requirements. CUDA 11-based GPU packages are no longer published.
Web
- WebGPU Expansion:
- Added WebGPU support to the node.js package (Windows and macOS).
- Enabled WebGPU when building from source for macOS, Linux, and Windows.
Mobile
- No major updates of note this release.
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
Yulong Wang, Jian Chen, Changming Sun, Satya Kumar Jandhyala, Hector Li, Prathik Rao, Adrian Lizarraga, Jiajia Qin, Scott McKay, Jie Chen, Tianlei Wu, Edward Chen, Wanming Lin, xhcao, vraspar, Dmitri Smirnov, Jing Fang, Yifan Li, Caroline Zhu, Jianhui Dai, Chi Lo, Guenther Schmuelling, Ryan Hill, Sushanth Rajasankar, Yi-Hong Lyu, Ankit Maheshkar, Artur Wojcik, Baiju Meswani, David Fan, Enrico Galli, Hans, Jambay Kinley, John Paul, Peishen Yan, Yateng Hong, amarin16, chuteng-quic, kunal-vaishnavi, quic-hungjuiw, Alessio Soldano, Andreas Hussing, Ashish Garg, Ashwath Shankarnarayan, Chengdong Liang, Clément Péron, Erick Muñoz, Fanchen Kong, George Wu, Haik Silm, Jagadish Krishnamoorthy, Justin Chu, Karim Vadsariya, Kevin Chen, Mark Schofield, Masaya, Kato, Michael Tyler, Nenad Banfic, Ningxin Hu, Praveen G, Preetha Veeramalai, Ranjit Ranjan, Seungtaek Kim, Ti-Tai Wang, Xiaofei Han, Yueqing Zhang, co63oc, derdeljan-msft, genmingz@AMD, jiangzhaoming, jing-bao, kuanyul-quic, liqun Fu, minfhong-quic, mingyue, quic-tirupath, quic-zhaoxul, saurabh, selenayang888, sfatimar, sheetalarkadam, virajwad, zz002, Ștefan Talpalaru
Published by MaanavD 10 months ago
onnxruntime - ONNX Runtime v1.21.1
What's new?
- Extend CMAKE_CUDA_FLAGS with all Blackwell compute capabilities #23928 - @yf711
- [ARM CPU] Fix fp16 const initialization on no-fp16 platform #23978 - @fajin-corp
- [TensorRT EP] Call cudaSetDevice at compute function for handling multithreading scenario #24010 - @chilo-ms
- Fix attention bias broadcast #24017 - @tianleiwu
- Deleted the constant SKIP_CUDA_TEST_WITH_DML #24113 - @CodingSeaotter
- [QNN EP] ARM64EC python package remove --vcpkg in build #24174 - @jywu-msft
- [wasm] remove --vcpkg in wasm build #24179 - @fs-eire
Published by amarin16 10 months ago
onnxruntime - ONNX Runtime v1.21.0
Announcements
- No large announcements of note this release! We've made a lot of small refinements to streamline your ONNX Runtime experience.
GenAI & Advanced Model Features
Enhanced Decoding & Pipeline Support
- Added "chat mode" support for CPU, GPU, and WebGPU.
- Provided support for decoder model pipelines.
- Added support for Java API for MultiLoRA.
API & Compatibility Updates
- Chat mode introduced breaking changes in the API (see migration guide).
Bug Fixes for Model Output
- Fixed Phi series garbage output issues with long prompts.
- Resolved gibberish issues with top_k on CPU.
Execution & Core Optimizations
Core Refinements
- Reduced default logger usage for improved efficiency (#23030).
- Fixed a visibility issue in the threadpool (#23098).
Execution Provider (EP) Updates
General
- Removed TVM EP from the source tree (#22827).
- Marked NNAPI EP for deprecation (following Google's deprecation of NNAPI).
- Fixed a DLL delay loading issue that impacts WebGPU EP and DirectML EP's usability on Windows (#23111, #23227)
TensorRT EP Improvements
- Added support for TensorRT 10.8.
- onnx-tensorrt open-source parser users: please check here for requirements.
- Assigned DDS ops (NMS, RoiAlign, NonZero) to TensorRT by default.
- Introduced the trt_op_types_to_exclude option to exclude specific ops from TensorRT assignment.
CUDA EP Improvements
- Added a Python API, preload_dlls, to coexist with PyTorch (usage sketched after this list).
- Miscellaneous enhancements for Flux model inference.
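A sketch of the preload_dlls usage mentioned above, assuming the default arguments load the CUDA/cuDNN libraries shipped with the onnxruntime package before any session is created:

```python
import onnxruntime

# Load CUDA and cuDNN DLLs up front so they do not conflict with the
# copies bundled inside a PyTorch installation in the same process.
onnxruntime.preload_dlls()

sess = onnxruntime.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["CUDAExecutionProvider"],
)
```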
QNN EP Improvements
- Introduced QNN shared memory support.
- Improved performance for AI Hub models.
- Added support for QAIRT/QNN SDK 2.31.
- Added Python 3.13 package.
- Miscellaneous bug fixes and enhancements.
- QNN EP is now built as a shared library/DLL by default. To retain the previous build behavior, use the build option --use_qnn static_lib.
DirectML EP Support & Upgrades
- Updated DirectML version from 1.15.2 to 1.15.4 (#22635).
OpenVINO EP Improvements
- Introduced OpenVINO EP Weights Sharing feature.
- Added support for various contrib ops in OVEP: SkipLayerNormalization, MatMulNBits, FusedGemm, FusedConv, EmbedLayerNormalization, BiasGelu, Attention, DynamicQuantizeMatMul, FusedMatMul, QuickGelu, SkipSimplifiedLayerNormalization
- Miscellaneous bug fixes and improvements.
VitisAI EP Improvements
- Miscellaneous bug fixes and improvements.
Mobile Platform Enhancements
CoreML Updates
- Added support for caching generated CoreML models.
Extensions & Tokenizer Improvements
Expanded Tokenizer Support
- Now supports more tokenizer models, including ChatGLM, Baichuan2, Phi-4, etc.
- Added full Phi-4 pre/post-processing support for text, vision, and audio.
- Introduced RegEx pattern loading from tokenizer.json.
Image Codec Enhancements
- ImageCodec now links to native APIs if available; otherwise, it falls back to built-in libraries.
Unified Tokenizer API
- Introduced a new tokenizer op schema to unify the tokenizer codebase.
- Added support for loading tokenizer data from a memory blob in the C API.
Infrastructure & Build Improvements
Runtime Requirements
All prebuilt Windows packages now require VC++ Runtime version >= 14.40 (instead of 14.38). If your VC++ Runtime version is lower than that, you may see a crash when ONNX Runtime is initializing. See https://github.com/microsoft/STL/wiki/Changelog#vs-2022-1710 for more details.
Updated minimum iOS and Android SDK requirements to align with React Native 0.76:
- iOS >= 15.1
- Android API >= 24 (Android 7)
All macOS packages now require macOS version >= 13.3.
CMake File Changes
- CMake version: Increased the minimum required CMake version from 3.26 to 3.28, and added support for CMake 4.0.
- Python version: Increased the minimum required Python version from 3.8 to 3.10 for building ONNX Runtime from source.
- Improved VCPKG support.
Added the following CMake options for the WebGPU EP:
- onnxruntime_USE_EXTERNAL_DAWN
- onnxruntime_CUSTOM_DAWN_SRC_PATH
- onnxruntime_BUILD_DAWN_MONOLITHIC_LIBRARY
- onnxruntime_ENABLE_PIX_FOR_WEBGPU_EP
- onnxruntime_ENABLE_DAWN_BACKEND_VULKAN
- onnxruntime_ENABLE_DAWN_BACKEND_D3D12
Added the CMake option onnxruntime_BUILD_QNN_EP_STATIC_LIB for building QNN EP as a static library. Removed the CMake option onnxruntime_USE_PREINSTALLED_EIGEN.
Fixed a build issue with Visual Studio 2022 17.3 (#23911)
Modernized Build Tools
- Now using VCPKG for most package builds.
- Upgraded Gradle from 7.x to 8.x.
- Updated JDK from 11 to 17.
- Enabled onnxruntime_USE_CUDA_NHWC_OPS by default for CUDA builds.
- Added support for WASM64 (build from source; no package published).
Dependency Cleanup
- Removed Google's nsync from dependencies.
Others
Updated Node.js installation script to support network proxy usage (#23231)
Web
- No updates of note.
Contributors
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
Changming Sun, Yulong Wang, Tianlei Wu, Jian Chen, Wanming Lin, Adrian Lizarraga, Hector Li, Jiajia Qin, Yifan Li, Edward Chen, Prathik Rao, Jing Fang, shiyi, Vincent Wang, Yi Zhang, Dmitri Smirnov, Satya Kumar Jandhyala, Caroline Zhu, Chi Lo, Justin Chu, Scott McKay, Enrico Galli, Kyle, Ted Themistokleous, dtang317, wejoncy, Bin Miao, Jambay Kinley, Sushanth Rajasankar, Yueqing Zhang, amancini-N, ivberg, kunal-vaishnavi, liqun Fu, Corentin Maravat, Peishen Yan, Preetha Veeramalai, Ranjit Ranjan, Xavier Dupré, amarin16, jzm-intel, kailums, xhcao, A-Satti, Aleksei Nikiforov, Ankit Maheshkar, Javier Martinez, Jianhui Dai, Jie Chen, Jon Campbell, Karim Vadsariya, Michael Tyler, PARK DongHa, Patrice Vignola, Pranav Sharma, Sam Webster, Sophie Schoenmeyer, Ti-Tai Wang, Xu Xing, Yi-Hong Lyu, genmingz@AMD, junchao-zhao, sheetalarkadam, sushraja-msft, Akshay Sonawane, Alexis Tsogias, Ashrit Shetty, Bilyana Indzheva, Chen Feiyue, Christian Larson, David Fan, David Hotham, Dmitry Deshevoy, Frank Dong, Gavin Kinsey, George Wu, Grégoire, Guenther Schmuelling, Indy Zhu, Jean-Michaël Celerier, Jeff Daily, Joshua Lochner, Kee, Malik Shahzad Muzaffar, Matthieu Darbois, Michael Cho, Michael Sharp, Misha Chornyi, Po-Wei (Vincent), Sevag H, Takeshi Watanabe, Wu, Junze, Xiang Zhang, Xiaoyu, Xinpeng Dou, Xinya Zhang, Yang Gu, Yateng Hong, mindest, mingyue, raoanag, saurabh, shaoboyan091, sstamenk, tianf-fff, wonchung-microsoft, xieofxie, zz002
Published by MaanavD 12 months ago
onnxruntime - ONNX Runtime v1.20.2
What's new?
Build System & Packages
- Merge Windows machine pools for Web CI pipeline to reduce maintenance costs (#23243) - @snnn
- Update boost URL for React Native CI pipeline (#23281) - @jchen351
- Move ORT Training pipeline to GitHub actions and enable CodeQL scan for the source code (#22543) - @snnn
- Move Linux GitHub actions to a dedicated machine pool (#22566) - @snnn
- Update Apple deployment target to iOS 15.1 and macOS 13.3 (#23308) - @snnn
- Deprecate macOS 12 in packaging pipeline (#23017) - @mszhanyi
- Remove net8.0-android MAUI target from MAUI test project (#23607) - @carzh
CUDA EP
- Fixes use of numeric_limits that causes a compiler error in Visual Studio 2022 v17.12 Preview 5 (#22738, #22868) - @tianleiwu
QNN EP
- Enable offloading graph input quantization and graph output dequantization to CPU by default. Improves inference latency by reducing the amount of I/O data copied between CPU and NPU. (#23368) - @adrianlizarraga. Toggling this behavior is sketched below.
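A sketch of toggling this behavior, assuming the provider option is named offload_graph_io_quantization as in the PR ("0" restores the previous behavior of running the graph-boundary Q/DQ on the NPU; paths are placeholders):

```python
import onnxruntime as ort

sess = ort.InferenceSession(
    "model_qdq.onnx",  # placeholder QDQ model path
    providers=["QNNExecutionProvider"],
    provider_options=[{
        "backend_path": "QnnHtp.dll",
        "offload_graph_io_quantization": "0",  # "1" (default) offloads to CPU
    }],
)
```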
Published by adrianlizarraga about 1 year ago
onnxruntime - ONNX Runtime v1.20.1
What's new?
Python Quantization Tool
- Prevent int32 quantized bias from clipping by adjusting the weight's scale (#22020) - @adrianlizarraga
- Update QDQ Pad, Slice, Softmax (#22676) - @adrianlizarraga
- Introduce get_qdq_config() helper to get QDQ configurations (#22677) - @adrianlizarraga (usage sketched after this list)
- Add reduce_range option to get_qdq_config() (#22782) - @adrianlizarraga
- Fix flaky test due to Pad reflect bug (#22798) - @adrianlizarraga
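A rough sketch of the new helper, assuming it is importable from onnxruntime.quantization as in the PR (model paths are placeholders; the calibration reader is a toy stand-in):

```python
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantType,
                                      get_qdq_config, quantize)

class RandomReader(CalibrationDataReader):
    """Toy calibration reader; the input name and shape are placeholders."""
    def __init__(self, n=8):
        self._it = iter({"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
                        for _ in range(n))
    def get_next(self):
        return next(self._it, None)  # None signals end of calibration data

# Build a QDQ quantization config from the float model, then quantize.
qdq_config = get_qdq_config(
    "model_fp32.onnx",  # placeholder float model
    RandomReader(),
    activation_type=QuantType.QUInt8,
    weight_type=QuantType.QInt8,
)
quantize("model_fp32.onnx", "model_qdq.onnx", qdq_config)
```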
CPU EP
- Refactor SkipLayerNorm implementation to address issues (#22719, #22862) - @amarin16, @liqunfu
QNN EP
- Add QNN SDK v2.28.2 support (#22724, #22844) - @HectorSVC, @adrianlizarraga
TensorRT EP
- Exclude DDS ops from running on TRT (#22875) - @chilo-ms
Packaging
- Rework the native library usage so that a pre-built ORT native package can be easily used (#22345) - @skottmckay
- Fix Maven Sha256 Checksum Issue (#22600) - @idiskyle
Contributions
Big thank you to the release manager @yf711, along with @adrianlizarraga, @HectorSVC, @jywu-msft, and everyone else who helped to make this patch release process a smooth one!
Published by sophies927 over 1 year ago
onnxruntime - ONNX Runtime v1.20.0
Release Manager: @apsonawane
Announcements
- All ONNX Runtime Training packages have been deprecated. ORT 1.19.2 was the last release for which onnxruntime-training (PyPI), onnxruntime-training-cpu (PyPI), Microsoft.ML.OnnxRuntime.Training (Nuget), onnxruntime-training-c (CocoaPods), onnxruntime-training-objc (CocoaPods), and onnxruntime-training-android (Maven Central) were published.
- ONNX Runtime packages will stop supporting Python 3.8 and Python 3.9. This decision aligns with NumPy Python version support. To continue using ORT with Python 3.8 and Python 3.9, you can use ORT 1.19.2 and earlier.
- ONNX Runtime 1.20 CUDA packages will include new dependencies that were not required in 1.19 packages. The following dependencies are new: libcudnn_adv.so.9, libcudnn_cnn.so.9, libcudnn_engines_precompiled.so.9, libcudnn_engines_runtime_compiled.so.9, libcudnn_graph.so.9, libcudnn_heuristic.so.9, libcudnn_ops.so.9, libnvrtc.so.12, and libz.so.1.
Build System & Packages
- Python 3.13 support is included in PyPI packages.
- ONNX 1.17 support will be delayed until a future release, but the ONNX version used by ONNX Runtime has been patched to include a shape inference change to the Einsum op.
- DLLs in the Maven build are now digitally signed (fix for issue reported here).
- (Experimental) vcpkg support added for the CPU EP. The DML EP does not yet support vcpkg, and other EPs have not been tested.
Core
- MultiLoRA support.
- Reduced memory utilization.
- Fixed alignment that was causing mmap to fail for external weights.
- Eliminated double allocations when deserializing external weights.
- Added ability to serialize pre-packed weights so that they don’t cause an increase in memory utilization when the model is loaded.
- Support bfloat16 and float8 data types in the Python I/O binding API (see the sketch after this list).
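For reference, the I/O binding API these types flow through looks roughly like this (tensor names and shapes are placeholders; numpy has no native bfloat16, so the sketch uses float32, and a bfloat16/float8 tensor would instead be bound via an OrtValue of that element type):

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

x = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input

binding = sess.io_binding()
binding.bind_cpu_input("input", x)   # placeholder tensor name
binding.bind_output("output")        # placeholder tensor name
sess.run_with_iobinding(binding)

result = binding.copy_outputs_to_cpu()[0]
```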
Performance
- INT4 quantized embedding support on CPU and CUDA EPs.
- Miscellaneous performance improvements and bug fixes.
EPs
CPU
- FP16 support for MatMulNBits, Clip, and LayerNormalization ops.
CUDA
- cuDNN frontend integration for convolution operators.
- Added support for cuDNN Flash Attention and Lean Attention in the MultiHeadAttention op.
TensorRT
QNN
- QNN HTP support for weight sharing across multiple ORT inference sessions. (See ORT QNN EP documentation for more information.)
- Support for QNN SDK 2.27.
OpenVINO
- Added support up to OpenVINO 2024.4.1.
- Compile-time memory optimizations.
- Enhancement of ORT EPContext Session option for optimized first inference latency.
- Added remote tensors to ensure direct memory access for inferencing on NPU.
DirectML
- DirectML 1.15.2 support.
Mobile
- Improved Android QNN support, including a pre-built Maven package and various performance improvements.
- FP16 support for ML Program models with CoreML EP.
- FP16 XNNPACK kernels to provide a fallback option if CoreML is not available at runtime.
- Initial support for using the native WebGPU EP on Android and iOS. Note: The set of initial operators is limited, and the code is available from the main branch, not ORT 1.20 packages. See #22591 for more information.
Web
- Quantized embedding support.
- On-demand weight loading support (offloads weights from the Wasm32 heap and enables 8B-parameter LLMs).
- Integrated Intel GPU performance improvements.
- Opset-21 support (Reshape, Shape, Gelu).
GenAI
- MultiLoRA support.
- Generations can now be terminated mid-loop.
- Logit soft capping support in Group Query Attention (GQA).
- Additional model support, including Phi-3.5 Vision Multi-Frame, ChatGLM3, and Nemotron-Mini.
- Python package now available for Mac.
- Mac / iOS now available in NuGet packages.
Full release notes for ONNX Runtime generate() API v0.5.0 can be found here.
Extensions
- Tokenization performance improvements.
- Support for latest Hugging Face tokenization JSON format (transformers>=4.45).
- Unigram tokenization model support.
- OpenCV dependency removed from C API build.
Full release notes for ONNX Runtime Extensions v0.13 can be found here.
Olive
- Olive command line interface (CLI) now available with support to execute well-defined, concrete workflows without the need to create or edit configs manually.
- Additional improvements, including support for YAML-based workflow configs, streamlined DataConfig management, simplified workflow configuration, and more.
- Llama and Phi-3 model updates, including an updated MultiLoRA example using the ORT generate() API. Full release notes for Olive v0.7.0 can be found here.
Contributors
Big thank you to the release manager @apsonawane, as well as @snnn, @jchen351, @sheetalarkadam, and everyone else who made this release possible!
Tianlei Wu, Yi Zhang, Yulong Wang, Scott McKay, Edward Chen, Adrian Lizarraga, Wanming Lin, Changming Sun, Dmitri Smirnov, Jian Chen, Jiajia Qin, Jing Fang, George Wu, Caroline Zhu, Hector Li, Ted Themistokleous, mindest, Yang Gu, jingyanwangms, liqun Fu, Adam Pocock, Patrice Vignola, Yueqing Zhang, Prathik Rao, Satya Kumar Jandhyala, Sumit Agarwal, Xu Xing, aciddelgado, duanshengliu, Guenther Schmuelling, Kyle, Ranjit Ranjan, Sheil Kumar, Ye Wang, kunal-vaishnavi, mingyueliuh, xhcao, zz002, 0xdr3dd, Adam Reeve, Arne H Juul, Atanas Dimitrov, Chen Feiyue, Chester Liu, Chi Lo, Erick Muñoz, Frank Dong, Jake Mathern, Julius Tischbein, Justin Chu, Xavier Dupré, Yifan Li, amarin16, anujj, chenduan-amd, saurabh, sfatimar, sheetalarkadam, wejoncy, Akshay Sonawane, AlbertGuan9527, Bin Miao, Christian Bourjau, Claude, Clément Péron, Emmanuel, Enrico Galli, Fangjun Kuang, Hann Wang, Indy Zhu, Jagadish Krishnamoorthy, Javier Martinez, Jeff Daily, Justin Beavers, Kevin Chen, Krishna Bindumadhavan, Lennart Hannink, Luis E. P., Mauricio A Rovira Galvez, Michael Tyler, PARK DongHa, Peishen Yan, PeixuanZuo, Po-Wei (Vincent), Pranav Sharma, Preetha Veeramalai, Sophie Schoenmeyer, Vishnudas Thaniel S, Xiang Zhang, Yi-Hong Lyu, Yufeng Li, goldsteinn, mcollinswisc, mguynn-intc, mingmingtasd, raoanag, shiyi, stsokolo, vraspar, wangshuai09
Full changelog: v1.19.2...v1.20.0
Published by sophies927 over 1 year ago
onnxruntime - ONNX Runtime v1.19.2
Announcements
- ORT 1.19.2 is a small patch release, fixing some broken workflows and introducing bug fixes.
Build System & Packages
- Fixed the signing of native DLLs.
- Disabled absl symbolize in Windows Release build to avoid dependency on dbghelp.dll.
Training
- Restored support for CUDA compute capability 7.0 and 7.5 with CUDA 12, and 6.0 and 6.1 with CUDA 11.
- Several fixes for training CI pipelines.
Mobile
- Fixed ArgMaxOpBuilder::AddToModelBuilderImpl() nullptr Node access for CoreML EP.
Generative AI
- Added CUDA kernel for Phi3 MoE.
- Added smooth softmax support in CUDA and CPU kernels for the GroupQueryAttention operator.
- Fixed number of splits calculations in GroupQueryAttention CUDA operator.
- Enabled causal support in the MultiHeadAttention CUDA operator.
Contributors
@prathikr, @mszhanyi, @edgchen1, @tianleiwu, @wangyems, @aciddelgado, @mindest, @snnn, @baijumeswani, @MaanavD
Thanks to everyone who helped ship this release smoothly!
Full Changelog: https://github.com/microsoft/onnxruntime/compare/v1.19.0...v1.19.2
Published by MaanavD over 1 year ago
onnxruntime - ONNX Runtime v1.19
Announcements
- Training (pypi) packages are delayed from package manager release due to some publishing errors. Feel free to contact @maanavd if you need release candidates for some workflows ASAP. In the meantime, binaries are attached to this post. This message will be deleted once this ceases to be the case. Thanks for your understanding :)
- Note also that the wrong commit was initially tagged with v1.19.0. The final commit has since been correctly tagged: https://github.com/microsoft/onnxruntime/commit/26250ae74d2c9a3c6860625ba4a147ddfb936907. This shouldn't affect much, but sorry for the inconvenience!
Build System & Packages
- NumPy 2.x support has been added
- Qualcomm SDK has been upgraded to 2.25
- ONNX has been upgraded from 1.16 → 1.16.1
- Default GPU packages use CUDA 12.x and cuDNN 9.x (previously CUDA 11.x/cuDNN 8.x). CUDA 11.x/cuDNN 8.x packages have moved to the aiinfra VS feed.
- TensorRT 10.2 support added
- Introduced Java CUDA 12 packages on Maven.
- Discontinued support for Xamarin. (Xamarin reached EOL on May 1, 2024)
- Discontinued support for macOS 11 and increased the minimum supported macOS version to 12. (macOS 11 reached EOL in September 2023)
- Discontinued support for iOS 12 and increased the minimum supported iOS version to 13.
Core
- Implemented DeformConv
- Fixed big-endian support and added build support for AIX
Performance
- Added QDQ support for INT4 quantization in CPU and CUDA Execution Providers
- Implemented FlashAttention on CPU to improve performance for GenAI prompt cases
- Improved INT4 performance on CPU (X64, ARM64) and NVIDIA GPUs
Execution Providers
TensorRT
- Updated to support TensorRT 10.2
- Removed calls to deprecated APIs
- Enabled refittable embedded engines when the ONNX model is provided as a byte stream
CUDA
- Upgraded cutlass to 3.5.0 for performance improvement of memory efficient attention.
- Updated MultiHeadAttention and Attention operators to be thread-safe.
- Added sdpa_kernel provider option to choose kernel for Scaled Dot-Product Attention.
- Expanded op support - Tile (bf16)
CPU
- Expanded op support - GroupQueryAttention, SparseAttention (for Phi-3 small)
QNN
- Updated to support QNN SDK 2.25
- Expanded op support - HardSigmoid, ConvTranspose 3D, Clip (int32 data), MatMul (int4 weights), Conv (int4 weights), PRelu (fp16)
- Expanded fusion support – Conv + Clip/Relu fusion
OpenVINO
- Added support for OpenVINO 2024.3
- Support for enabling EpContext using session options
DirectML
- Updated DirectML from 1.14.1 → 1.15
- Updated ONNX opset from 17 → 20
- Opset 19 and Opset 20 are supported with known caveats:
- GridSample-20: 5D not supported
- DeformConv not supported
Mobile
- Additional CoreML ML Program operators were added
- See supported operators list here
- Fixed packaging issue with macOS framework in onnxruntime-c cocoapod
- Removed Xamarin support
- Xamarin EOL was May 1, 2024
- Xamarin official support policy | .NET (microsoft.com)
Web
- Updated JavaScript packaging to align with best practices, including slight incompatibilities when apps bundle onnxruntime-web
- Improved CPU operators coverage for WebNN (now supported by Chrome)
Training
- No specific updates
GenAI
- Support for new models: Qwen, Llama 3.1, Gemma 2, Phi-3 small
- Support for building quantized models with the AWQ and GPTQ methods
- Performance improvements for Intel and Arm CPU
- Packaging and language bindings
- Added Java bindings (build from source)
- Separated OnnxRuntime.dll and directml.dll out of the GenAI package to improve usability
- Published packages for Windows Arm
- Support for Android (build from source)
- Bug fixes, like the long-prompt correctness issue for Phi-3.
Extensions
- Added C APIs for language, vision and audio processors including new FeatureExtractor for Whisper
- Support for Phi-3 Small Tokenizer and new OpenAI tiktoken format for fast loading of BPE tokenizers
- Added new CUDA custom operators such as MulSigmoid, Transpose2DCast, ReplaceZero, AddSharedInput and MulSharedInput
- Enhanced Custom Op Lite API on GPU and fused kernels for DORT
- Bug fixes, including null bos_token for Qwen2 tokenizer and SentencePiece converted FastTokenizer issue on non-ASCII characters, as well as necessary updates for MSVC 19.40 and numpy 2.0 release
Contributors
Changming Sun, Baiju Meswani, Scott McKay, Edward Chen, Jian Chen, Wanming Lin, Tianlei Wu, Adrian Lizarraga, Chester Liu, Yi Zhang, Yulong Wang, Hector Li, kunal-vaishnavi, pengwa, aciddelgado, Yifan Li, Xu Xing, Yufeng Li, Patrice Vignola, Yueqing Zhang, Jing Fang, Chi Lo, Dmitri Smirnov, mingyueliuh, cloudhan, Yi-Hong Lyu, Ye Wang, Ted Themistokleous, Guenther Schmuelling, George Wu, mindest, liqun Fu, Preetha Veeramalai, Justin Chu, Xiang Zhang, zz002, vraspar, kailums, guyang3532, Satya Kumar Jandhyala, Rachel Guo, Prathik Rao, Maximilian Müller, Sophie Schoenmeyer, zhijiang, maggie1059, ivberg, glen-amd, aamajumder, Xavier Dupré, Vincent Wang, Suryaprakash Shanmugam, Sheil Kumar, Ranjit Ranjan, Peishen Yan, Frank Dong, Chen Feiyue, Caroline Zhu, Adam Louly, Ștefan Talpalaru, zkep, winskuo-quic, wejoncy, vividsnow, vivianw-amd, moyo1997, mcollinswisc, jingyanwangms, Yang Gu, Tom McDonald, Sunghoon, Shubham Bhokare, RuomeiMS, Qingnan Duan, PeixuanZuo, Pavan Goyal, Nikolai Svakhin, KnightYao, Jon Campbell, Johan MEJIA, Jake Mathern, Hans, Hann Wang, Enrico Galli, Dwayne Robinson, Clément Péron, Chip Kerchner, Chen Fu, Carson M, Adam Reeve, Adam Pocock.
Big thank you to everyone who contributed to this release!
Full Changelog: https://github.com/microsoft/onnxruntime/compare/v1.18.1...v1.19.0
Published by MaanavD over 1 year ago
onnxruntime - ONNX Runtime v1.18.1
What's new?
Announcements:
- ONNX Runtime Python packages now have a numpy dependency >=1.21.6, <2.0. Support for numpy 2.0 will be added in a future release.
- CUDA 12.x ONNX Runtime GPU packages are now built against cuDNN 9.x (1.18.0 packages previously depended on cuDNN 8.x). CUDA 11.x ONNX Runtime GPU packages continue to depend on cuDNN 8.x.
- Windows packages require installation of the Microsoft Visual C++ Redistributable Runtime 14.38 or newer.
TensorRT EP:
- TensorRT Weightless API integration.
- Support for TensorRT hardware-compatible engines.
- Support for INT64 types in TensorRT constant layer calibration.
- Now using the latest commit of the onnx-tensorrt parser, which includes several issue fixes.
- Additional TensorRT support and performance improvements.
Packages:
- Publish CUDA 12 Java packages to the Azure DevOps feed.
- Various packaging pipeline fixes.
This patch release also features various other bug fixes, including a CUDA 12.5 build error fix.
Big thank you to @yf711 for driving this release as the release manager and to all our contributors!
@yf711 @jchen351 @mszhanyi @snnn @wangyems @jywu-msft @skottmckay @chilo-ms @moraxu @kevinch-nv @pengwa @wejoncy @pranavsharma @Craigacp @jslhcl @adrianlizarraga @inisis @jeffbloo @mo-ja @kunal-vaishnavi @sumitsays @neNasko1 @yufenglee @dhruvbird @wangshuai09 @xiaoyu-work @axinging @yuslepukhin @YUNQIUGUO @shubhambhokare1 @fs-eire @afantino951 @tboby @HectorSVC @baijumeswani
Published by sophies927 over 1 year ago
onnxruntime - ONNX Runtime v1.18.0
Announcements
- Windows ARM32 support has been dropped at the source code level.
- Python version >=3.8 is now required for build.bat/build.sh (previously >=3.7). Note: If you have Python version <3.8, you can bypass the tools and use CMake directly.
- The onnxruntime-mobile Android package and onnxruntime-mobile-c/onnxruntime-mobile-objc iOS cocoapods are being deprecated. Please use the onnxruntime-android Android package, and onnxruntime-c/onnxruntime-objc cocoapods, which support ONNX and ORT format models and all operators and data types. Note: If you require a smaller binary size, a custom build is required. See details on creating a custom Android or iOS package on Custom build | onnxruntime.
Build System & Packages
- CoreML execution provider now depends on coremltools.
- Flatbuffers has been upgraded from 1.12.0 → 23.5.26.
- ONNX has been upgraded from 1.15 → 1.16.
- EMSDK has been upgraded from 3.1.51 → 3.1.57.
- Intel neural_speed library has been upgraded from v0.1.1 → v0.3 with several important bug fixes.
- There is a new onnxruntime_CUDA_MINIMAL CMake option for building the ONNX Runtime CUDA execution provider without any operations apart from memcpy ops.
- Added build support for Mac Catalyst.
- Added initial support for RISC-V and three new build options for it: --rv64, --riscv_toolchain_root, and --riscv_qemu_path.
- TensorRT EP can now be built with protobuf-lite instead of the full version of protobuf.
- Some security-related compile/link flags have been moved from the default setting to a new build option: --use_binskim_compliant_compile_flags. Note: All our release binaries are built with this flag, but when building ONNX Runtime from source, this flag defaults to OFF.
- The Windows ARM64 build now depends on the PyTorch CPUINFO library.
- Windows OneCore build now uses “Reverse forwarding” apisets instead of “Direct forwarding”, so onnxruntime.dll in our Nuget packages will depend on kernel32.dll. Note: Windows systems without kernel32.dll need to have reverse forwarders (see API set loader operation - Win32 apps | Microsoft Learn for more information).
Core
- Added ONNX 1.16 support.
- Added additional optimizations related to Dynamo-exported models.
- Improved testing infrastructure for EPs developed as shared libraries.
- Exposed Reserve() in OrtAllocator to allow custom allocators to work when session.use_device_allocator_for_initializers is specified (see the sketch after this list).
- Improved lock contention due to memory allocations.
- Improved session creation time (graph and graph transformer optimizations).
- Added new SessionOptions config entry to disable specific transformers and rules.
- [C# API] Exposed SessionOptions.DisablePerSessionThreads to allow sharing of threadpool between sessions.
- [Java API] Added CUDA 12 Java support.
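A sketch of enabling the device-allocator path for initializers from Python (the config key is quoted from the bullet above; the model path is a placeholder):

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Route initializer allocations through the device allocator's Reserve()
# instead of the arena, so they don't inflate the arena's high-water mark.
so.add_session_config_entry("session.use_device_allocator_for_initializers", "1")

sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    sess_options=so,
    providers=["CUDAExecutionProvider"],
)
```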
Performance
- Improved 4bit quant support:
- Added HQQ quantization support to improve accuracy.
- Implemented general GEMM kernel and improved GEMV kernel performance on GPU.
- Improved GEMM kernel quality and performance on x64.
- Implemented general GEMM kernel and improved GEMV performance on ARM64.
- Improved MultiheadAttention performance on CPU.
Execution Providers
TensorRT
- Added support for TensorRT 10.
- Finalized support for DDS ops.
- Added Python support for user provided CUDA stream.
- Fixed various bugs.
CUDA
- Added support for multiple CUDA graphs.
- Added a provider option to disable TF32 (sketched after this list).
- Added Python support for user provided CUDA stream.
- Extended MoE to support of Tensor Parallelism and int4 quantization.
- Fixed bugs in BatchNorm and TopK kernel.
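A sketch of the TF32 toggle, assuming the CUDA provider option is named use_tf32 (the model path is a placeholder):

```python
import onnxruntime as ort

# Disable TF32 so fp32 MatMul/GEMM results match full-precision fp32.
sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=[("CUDAExecutionProvider", {"use_tf32": "0"})],
)
```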
QNN
- Added support for up to QNN SDK 2.22.
- Upgraded support from A16W8 → mixed 8/16-bit precision configurability per layer.
- Added fp16 execution support via the enable_htp_fp16 option.
- Added multiple partition support for QNN context binary.
- Expanded operator support and fixed various bugs.
- Added support for per-channel quantized weights for Conv.
- Integration with Qualcomm's AI Hub.
OpenVINO
- Added support for up to OpenVINO 2024.1.
- Added support for importing pre-compiled blob as EPContext blob.
- Separated device and precision as inputs by removing support for device_id in provider options and adding precision as separate CLI option.
- Deprecated CPU_FP32 and GPU_FP32 terminology and introduced CPU and GPU terminology.
- AUTO:GPU,CPU will only create the GPU blob, not the CPU blob.
DirectML
- Additional ONNX operator support: Resize-18 and Resize-19, Col2Im-18, IsNaN-20, IsInf-20, and ReduceMax-20.
- Additional contrib op support: SimplifiedLayerNormalization, SkipSimplifiedLayerNormalization, QLinearAveragePool, MatMulIntegerToFloat, GroupQueryAttention, DynamicQuantizeMatMul, and QAttention.
Mobile
- Improved performance of ARM64 4-bit quantization.
- Added support for building with QNN on Android.
- Added MacCatalyst support.
- Added visionOS support.
- Added initial support for creating ML Program format CoreML models.
- Added support for 1D Conv and ConvTranspose to XNNPACK EP.
Web
- Added WebNN EP preview.
- Improved WebGPU performance (MHA, ROE).
- Added more WebGPU and WebNN examples.
- Increased generative model support.
- Optimized Buffer management to reduce memory footprint.
Training
- Large Model Training
- Added optimizations for Dynamo-exported models.
- Added Mixtral integration using ORT backend.
- On-Device Training
- Added support for models >2GB to enable SLM training on edge devices.
GenAI
- Added additional model support: Phi-3, Gemma, LLama-3.
- Added DML EP support.
- Improved tokenizer quality.
- Improved sampling method and ORT model performance.
Extensions
- Created Java packaging pipeline and published to Maven repository.
- Added support for conversion of Huggingface FastTokenizer into ONNX custom operator.
- Unified the SentencePiece tokenizer with other Byte Pair Encoding (BPE) based tokenizers.
- Fixed Whisper large model pre-processing bug.
- Enabled eager execution for custom operator and refactored the header file structure.
Contributors
Yi Zhang, Yulong Wang, Adrian Lizarraga, Changming Sun, Scott McKay, Tianlei Wu, Peng Wang, Hector Li, Edward Chen, Dmitri Smirnov, Patrice Vignola, Guenther Schmuelling, Ye Wang, Chi Lo, Wanming Lin, Xu Xing, Baiju Meswani, Peixuan Zuo, Vincent Wang, Markus Tavenrath, Lei Cao, Kunal Vaishnavi, Rachel Guo, Satya Kumar Jandhyala, Sheil Kumar, Yifan Li, Jiajia Qin, Maximilian Müller, Xavier Dupré, Yi-Hong Lyu, Yufeng Li, Alejandro Cid Delgado, Adam Louly, Prathik Rao, wejoncy, Zesong Wang, Adam Pocock, George Wu, Jian Chen, Justin Chu, Xiaoyu, guyang3532, Jingyan Wang, raoanag, Satya Jandhyala, Hariharan Seshadri, Jiajie Hu, Sumit Agarwal, Peter Mcaughan, Zhijiang Xu, Abhishek Jindal, Jake Mathern, Jeff Bloomfield, Jeff Daily, Linnea May, Phoebe Chen, Preetha Veeramalai, Shubham Bhokare, Wei-Sheng Chin, Yang Gu, Yueqing Zhang, Guangyun Han, inisis, ironman, Ivan Berg, Liqun Fu, Yu Luo, Rui Ren, Sahar Fatima, snadampal, wangshuai09, Zhenze Wang, Andrew Fantino, Andrew Grigorev, Ashwini Khade, Atanas Dimitrov, AtomicVar, Belem Zhang, Bowen Bao, Chen Fu, Dhruv Matani, Fangrui Song, Francesco, Frank Dong, Hans Chen, He Li, Heflin Stephen Raj, Jambay Kinley, Masayoshi Tsutsui, Matttttt, Nanashi, Phoebe Chen, Pranav Sharma, Segev Finer, Sophie Schoenmeyer, TP Boudreau, Ted Themistokleous, Thomas Boby, Xiang Zhang, Yongxin Wang, Zhang Lei, aamajumder, danyue, Duansheng Liu, enximi, fxmarty, kailums, maggie1059, mindest, mo-ja, moyo1997
Big thank you to everyone who contributed to this release!
Published by yihonglyu almost 2 years ago
onnxruntime - ONNX Runtime v1.17.3
What's new?
General:
- Update copying API header files to make Linux logic consistent with Windows (#19736) - @mszhanyi
- Pin ONNX version to fix DML and Python packaging pipeline exceptions (#20073) - @mszhanyi
Build System & Packages:
- Fix minimal build with training APIs enabled bug affecting Apple framework (#19858) - @edgchen1
Core:
- Fix SplitToSequence op with string tensor bug (#19942) - @Craigacp
CUDA EP:
- Fix onnxruntime_test_all build break with CUDA (#19673) - @gedoensmax
- Fix broken pooling CUDA NHWC ops and ensure NCHW / NHWC parity (#19889) - @mtavenrath
TensorRT EP:
- Fix TensorRT build break caused by image update (#19880) - @jywu-msft
- Fix TensorRT custom op list concurrency bug (#20093) - @chilo-ms
Web:
- Add hardSigmoid op support and hardSigmoid activation for fusedConv (#19215, #19233) - @qjia7
- Add support for WebNN async API with Asyncify (#19415) - @Honry
- Add uniform support for conv, conv transpose, conv grouped, and fp16 (#18753, #19098) - @axinging
- Add capture and replay support for JS EP (#18989) - @fs-eire
- Add LeakyRelu activation for fusedConv (#19369) - @qjia7
- Add FastGelu custom op support (#19392) - @fs-eire
- Allow uint8 tensors for WebGPU (#19545) - @satyajandhyala
- Add and optimize MatMulNBits (#19852) - @satyajandhyala
- Enable ort-web with any Float16Array polyfill (#19305) - @fs-eire
- Allow multiple EPs to be specified in backend resolve logic (#19735) - @fs-eire
- Various bug fixes: (#19258) - @gyagp, (#19201, #19554) - @hujiajie, (#19262, #19981) - @guschmue, (#19581, #19596, #19387) - @axinging, (#19613) - @satyajandhyala
- Various improvements for performance and usability: (#19202) - @qjia7, (#18900, #19281, #18883) - @axinging, (#18788, #19737) - @satyajandhyala, (#19610) - @segevfiner, (#19614, #19702, #19677, #19857, #19940) - @fs-eire, (#19791) - @gyagp, (#19868) - @guschmue, (#19433) - @martholomew, (#19932) - @ibelem
Windows:
- Fix Windows memory mapping bug affecting some larger models (#19623) - @yufenglee
Kernel Optimizations:
- Fix GQA and Rotary Embedding bugs affecting some models (#19801, #19874) - @aciddelgado
- Update replacement of MultiHeadAttention (MHA) and GroupQueryAttention (GQA) (#19882) - @kunal-vaishnavi
- Add support for packed QKV input and Rotary Embedding with sm<80 using Memory Efficient Attention kernel (#20012) - @aciddelgado
Models:
- Add support for benchmarking LLaMA model end-to-end performance (#19985, #20033, #20149) - @kunal-vaishnavi
- Add example to demonstrate export of Open AI Whisper implementation with batched prompts (#19854) - @shubhambhokare1
This patch release also includes additional fixes by @spampana95 and @enximi. Big thank you to all our contributors!
Published by sophies927 almost 2 years ago
onnxruntime - ONNX Runtime v1.17.1
This patch release includes the following updates:
General
- Update thread affinity on server so it is only set with auto affinity (#19318) - @ivberg
Build System and Packages
- Fix bug that was breaking arm64 build by disabling __cpuid check on arm64 builds since intrinsic is not available (#19574) - @smk2007
Core
- Add capturestate / rundown ETW support logging for session and provider options (#19397) - @ivberg
- Restrict L2 cache core check on Intel devices (#19483) - @smk2007
Performance
- Optimize KahnsTopologicalSort and PriorityNodeCompare to fix performance degradation in session creation time that was affecting many models (#19475) - @smk2007
EPs
- Enable DirectML on Windows and CUDA on Linux for Node.js binding (#19274) - @jchen351
QNN
- Fix split index bugs uncovered by QNN SDK 2.19 release (#19381) - @adrianlizarraga
- Add job that builds x64 Python wheels for QNN EP so cached QNN models can be created on Windows x64 (#19499) - @adrianlizarraga
OpenVINO
- Fix bugs for API backwards compatibility (#19482) - @preetha-intel
DirectML
- Fix bug in external data packing that was causing crash (#19415) - @PatriceVignola
- Fix bug in allocation planner by disabling streams for DML EP (#19481) - @PatriceVignola
Web
- Fix bug with types export in package.json (#19458) - @fs-eire
Training
- Reduce onnxruntime-training package size so it can be published on PyPI (#19486) - @baijumeswani
- Update default std flag used during torch extensions compilation (#19516) - @baijumeswani
- Add ATen fallback support for bicubic interpolation algorithm (#19380) - @prathikr
Quantization
- Update Q/DQ quantization to ensure Microsoft opset (#19335) - @adrianlizarraga
- Add contrib Q/DQ ops to symbolic shape inference tool (#19340) - @adrianlizarraga
- Fix subgraph quantization regression (#19421) - @fxmarty
- Add DefaultTensorType option to specify the default tensor type to quantize (#19455) - @yufenglee
- Fix bug with command line argparse to process --symmetric [True|False] correctly (#19577) - @satyajandhyala
Whisper Model
- Fix bug in BeamSearch implementation of Whisper model that was causing a crash in some scenarios (#19345) - @petermcaughan
- Fix bug in Whisper model timestamps and temperature (#19509) - @kunal-vaishnavi
Published by YUNQIUGUO about 2 years ago
onnxruntime - ONNX Runtime v1.17.0
Announcements
In the next release, we will drop support for Windows ARM32 entirely.
General
- Added support for new ONNX 1.15 opsets: IsInf-20, IsNaN-20, DFT-20, ReduceMax-20, ReduceMin-20, AffineGrid-20, GridSample, ConstantOfShape-20, RegexFullMatch, StringConcat, StringSplit, and ai.onnx.ml.LabelEncoder-4.
- Updated C/C++ libraries: abseil, date, nsync, googletest, wil, mp11, cpuinfo, safeint, and onnx.
Build System and Packages
- Dropped CentOS 7 support. All Linux binaries now require glibc version >=2.28, but users can still build the source code for a lower glibc version.
- Added CUDA 12 packages for Python and Nuget.
- Added Python 3.12 packages for ONNX Runtime Inference. ONNX Runtime Training Python 3.12 packages cannot be provided at this time since training packages depend on PyTorch, which does not support Python 3.12 yet.
- Linux binaries (except those in AMD GPU packages) are built in a more secure way that is compliant with BinSkim's default policy (e.g., the binaries no longer have an executable stack).
- Added support for Windows ARM64X for users who build ONNX Runtime from source. No prebuilt package provided yet.
- Removed Windows ARM32 binaries from official packages. Users who still need these binaries can build them from source.
- Added AMD GPU package with ROCm and MiGraphX (Python + Linux only).
- Split ONNX Runtime GPU Nuget package into two packages.
- When building the source code for Linux ARM64 or Android, the C/C++ compiler must support BFloat16. Support for Android NDK 24.x has been removed. Please use NDK 25.x or 26.x instead.
- Link time code generation (LTCG or LTO) is now disabled by default when building from source. To re-enable it, users can add "--enable_lto" to the build command. All prebuilt binaries are still built with LTO.
Core
- Optimized graph inlining.
- Allow custom op to invoke internal thread-pool for parallelism.
- Added support for supplying a custom logger at the session level.
- Added new logging and tracing of session and execution provider options.
- Added new dynamic ETW provider that can trace/diagnose ONNX internals while maintaining great performance.
Performance
- Added 4bit quant support on NVIDIA GPU and ARM64.
EPs
TensorRT EP
- Added support for direct load of precompiled TensorRT engines and customizable engine prefix.
- Added Python support for TensorRT plugins via ORT custom ops.
- Fixed concurrent Session::Run bugs.
- Updated calls to deprecated TensorRT APIs (e.g., enqueueV2 → enqueueV3).
- Fixed various memory leak bugs.
QNN EP
- Added support for QNN SDK 2.18.
- Added context binary caching and model initialization optimizations (context caching is sketched after this list).
- Added mixed precision (8/16 bit) quantization support.
- Added device-level session options (soc_model, htp_arch, device_id), extreme_power_saver for htp_performance_mode, and vtcm_mb settings.
- Fixed multi-threaded inference bug.
- Fixed various other bugs and added performance improvements.
- QNN profiling of the NPU can be enabled dynamically with ETW or written out to CSV.
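A sketch of context binary caching from Python, assuming the session config keys ep.context_enable and ep.context_file_path (paths are placeholders); the first run dumps the context model, and later runs load it to skip backend compilation:

```python
import onnxruntime as ort

so = ort.SessionOptions()
so.add_session_config_entry("ep.context_enable", "1")                   # assumed key
so.add_session_config_entry("ep.context_file_path", "model_ctx.onnx")   # assumed key

sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    sess_options=so,
    providers=["QNNExecutionProvider"],
    provider_options=[{"backend_path": "QnnHtp.dll"}],
)
```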
OpenVINO EP
- Added support for OpenVINO 2023.2.
- Added AppendExecutionProvider_OpenVINO_V2 API for supporting new OpenVINO EP options.
DirectML EP
- Updated to DirectML 1.13.1.
- Updated operators LpPool-18 and AveragePool-19 with dilations.
- Improved Python I/O binding support.
- Added RotaryEmbedding.
- Added support for fusing subgraphs into DirectML execution plans.
- Added new Python API to choose a specific GPU on multi-GPU devices with the DirectML EP.
Mobile
- Added initial support for 4bit quantization on ARM64.
- Extended CoreML/NNAPI operator coverage.
- Added support for YOLOv8 pose detection pre/post processing.
- Added support for macOS in CocoaPods package.
Web
- Added support for external data format.
- Added support for I/O bindings.
- Added support for training.
- Added WebGPU optimizations.
- Transitioned WebGPU out of experimental.
- Added FP16 support for WebGPU.
Training
Large Model Training
- Enabled support for QLoRA (with support for BFloat16).
- Added symbolic shape support for Triton codegen (see PR).
- Made improvements to recompute optimizer with easy ON/OFF to allow layer-wise recompute (see PR).
- Enabled memory-efficient gradient management. For Mistral, we see ~10GB drop in memory consumption when this feature is ON (see PR).
- Enabled embedding sparsity optimizations.
- Added support for Aten efficient attention and Triton Flash Attention (see PR).
- Packages now available for CUDA 11.8 and 12.1.
On Device Training
- On-Device training will now support training on the web. This release focuses on federated learning and developer exploration scenarios. More features coming soon in future releases.
Extensions
- Modified the gen_processing_models tokenizer model to output int64, unifying the output datatype of all tokenizers.
- Implemented support for post-processing of YOLO v8 within the Python extensions package.
- Introduced 'fairseq' flag to enhance compatibility with certain Hugging Face tokenizers.
- Incorporated 'added_token' attribute into the BPE tokenizer to improve CodeGen tokenizer functionality.
- Enhanced the SentencePiece tokenizer by integrating token indices into the output.
- Added support for the custom operator implemented with CUDA kernels, including two example operators.
- Added more tests on the Hugging Face tokenizer and fixed identified bugs.
Known Issues
- The onnxruntime-training package is not yet available in PyPI but can be accessed in ADO as follows:

```
python -m pip install cerberus flatbuffers h5py numpy>=1.16.6 onnx packaging protobuf sympy setuptools>=41.4.0
pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT/pypi/simple/ onnxruntime-training
pip install torch-ort
python -m torch_ort.configure
```

Installation instructions can also be accessed here.
- For models with int4 kernel only:
- Crash may occur when int4 is applied on Intel CPUs with hybrid core if the E-cores are disabled in BIOS. Fix is in progress to be patched.
- Performance regression on the int4 kernel on x64 makes the op following MatMulNBits much slower. Fix is in progress to be patched.
- Current bug in BeamSearch implementation of T5, GPT, and Whisper may break models under heavy inference load using BeamSearch on CUDA. See #19345. Fix is in progress to be patched.
- Full support of ONNX 1.15 opsets is still in progress. A list of new ONNX 1.15 opset support that has been included in this release can be found above in the 'General' section.
- Some Cast nodes will not be removed (see https://github.com/microsoft/onnxruntime/pull/17953): Cast nodes from higher precision to lower precision (like fp32 to fp16) will be kept. If model results differ between ORT 1.16 and 1.17, check whether some Cast nodes were removed in 1.16 but kept in 1.17.
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: snnn, fs-eire, tianleiwu, mszhanyi, edgchen1, skottmckay, jchen351, adrianlizarraga, qjia7, Honry, HectorSVC, chilo-ms, axinging, jeffbloo, pengwa, yuslepukhin, guschmue, satyajandhyala, xadupre, RandyShuai, PeixuanZuo, RandySheriffH, er3x3, wschin, yf711, PatriceVignola, askhade, smk2007, natke, kunal-vaishnavi, YUNQIUGUO, liqunfu, cloudhan, wangyems, yufenglee, ajindal1, baijumeswani justinchuby, Craigacp, wejoncy, jywu-msft, hariharans29, nums11, jslhcl, jeffdaily, chenfucn, zhijxu-MS, mindest, BowenBao, sumitsays, prasanthpul, fdwr, pranavsharma, chentaMS, zhangxiang1993, souptc, zhanghuanrong, faxu, georgen117, sfatimar, thiagocrepaldi, adityagoel4512, ivberg, sophies927
NOTE: Please let us know via this GitHub issue if you contributed to this release but your name is missing from this list, and we will add you manually!
Published by YUNQIUGUO about 2 years ago
onnxruntime - ONNX Runtime v1.16.3
What's Changed
- Stable Diffusion XL demo update by @tianleiwu in https://github.com/microsoft/onnxruntime/pull/18496
- Fixed a memory leak issue (#18466) in TensorRT EP by @chilo-ms in https://github.com/microsoft/onnxruntime/pull/18467
- Fix a use-after-free bug in SaveInputOutputNamesToNodeMapping function by @snnn in https://github.com/microsoft/onnxruntime/pull/18456 . The issue was found by AddressSanitizer.
Published by snnn over 2 years ago
onnxruntime - ONNX Runtime v1.16.2
The patch release includes updates on:
- Performance optimizations for Llama2 on CUDA EP and DirectML EP
- Performance optimizations for Stable Diffusion XL model for CUDA EP
- Demos for text to image generation
- Mobile bug fixes for crash on some older 64-bit ARM devices and AOT inlining issue on iOS with C# bindings
- TensorRT EP bug fixes for user provided compute stream and stream synchronization
Published by snnn over 2 years ago
onnxruntime - ONNX Runtime v1.16.1
This release fixed some issues in 1.16.0
Published by snnn over 2 years ago
onnxruntime - ONNX Runtime v1.16.0
General
- Support for serialization of models >=2GB
APIs
- New session option session.disable_cpu_ep_fallback to disable default CPU EP fallback (see the sketch after this list)
- Java
- Support for fp16 and bf16 tensors as inputs and outputs, along with utilities to convert between these and fp32 data. On JDK 20 and newer, the fp16 conversion methods use the JDK's Float.float16ToFloat and Float.floatToFloat16 methods, which can be hardware accelerated and vectorized on some platforms.
- Support for external initializers so that large models can be instantiated without filesystem access
- C#
- Expose OrtValue API as the new preferred API to run inference in C#. This reduces garbage and exposes direct native memory access via Slice-like interfaces.
- Make Float16 and BFloat16 full-featured fp16 interfaces that support conversion and expose floating-point properties (e.g., IsNaN, IsInfinity, etc.)
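A sketch of the new session option from Python (the config key session.disable_cpu_ep_fallback is quoted from the bullet above; the model path is a placeholder):

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Fail session creation instead of silently falling back to the CPU EP
# when a node is not supported by the requested provider.
so.add_session_config_entry("session.disable_cpu_ep_fallback", "1")

sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    sess_options=so,
    providers=["CUDAExecutionProvider"],
)
```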
Performance
- Improve LLM quantization accuracy with smoothquant
- Support 4-bit quantization on CPU
- Optimize BeamScore to improve BeamSearch performance
- Add FlashAttention v2 support for Attention, MultiHeadAttention and PackedMultiHeadAttention ops
Execution Providers
- CUDA EP
- Initial fp8 support (QDQ, Cast, MatMul)
- Relaxed CUDA Graph constraints to allow more models to utilize it
- Allow CUDA allocator to be registered with ONNX Runtime externally
- TensorRT EP
- CUDA Graph support
- Support user provided cuda compute stream
- Misc bug fixes and improvements
- OpenVINO EP
- Support OpenVINO 2023.1
- QNN EP
- Enable context binary cache to reduce initialization time
- Support QNN 2.12
- Support for resize with asymmetric transformation mode on HTP backend
- Ops support: Equal, Less, LessOrEqual, Greater, GreaterOrEqual, LayerNorm, Asin, Sign, DepthToSpace, SpaceToDepth
- Support 1D Conv/ConvTranspose
- Misc bug fixes and improvements
Mobile
- Initial support for Azure EP
- Dynamic shape support for CoreML
- Improve React Native performance with JSI
- Mobile support for CLIPImageProcessor pre-processing and CLIP scenario
- Swift Package Manager support for ONNX Runtime inference and ONNX Runtime extensions via onnxruntime-swift-package-manager
Web
- webgpu ops coverage improvements (SAM, T5, Whisper)
- webnn ops coverage improvements (SAM, Stable Diffusion)
- Stability/usability improvements for webgpu
Large model training
- ORTModule + OpenAI Triton Integration now available. See details here
- Label Sparsity compute optimization support complete and enabled by default starting release 1.16
- New experimental embedding sparsity related optimizations available (disabled by default).
- Improves training performance of Roberta in Transformers by 20-30%
- Other compute optimizations like Gather/Slice/Reshape upstream support enabled.
- Optimizations for LLaMAv2 (~10% acceleration) and OpenAI Whisper
- Improvements to the logging and metrics system (initialization overhead, memory usage, statistics convergence tool, etc.).
- PythonOp enhancement: bool and tuple[bool] constants, materialize grads, empty inputs, save in context, customized shape inference, use full qualified name for export.
- SCELossInternal/SCELossGradInternal CUDA kernels can handle more elements than std::numeric_limits<int32_t>::max().
- Improvements to LayerNorm fusion
- A model cache for the exported ONNX model is introduced to avoid repeatedly exporting a model that has not changed across runs.
On-Device Training
- iOS support available starting this release
- Minimal build now available for On-Device Training. Basic binary size ~1.5 MB
- ORT-Extensions custom op support enabled through onnxblock for on-device training scenarios
ORT Extensions
This ORT release is accompanied by updates to onnxruntime-extensions. Features include:
- New Python API gen_processing_models to export ONNX data processing models from Hugging Face tokenizers such as LLaMA, CLIP, XLM-Roberta, Falcon, BERT, etc.
- New TrieTokenizer operator for RWKV-like LLM models, and other tokenizer operator enhancements.
- New operators for Azure EP compatibility: AzureAudioToText, AzureTextToText, AzureTritonInvoker for Python and NuGet packages.
- Processing operators have been migrated to the new Lite Custom Op API.
Known Issues
- ORT CPU Python package requires the execution provider to be explicitly provided. See #17631. A fix is in progress to be patched.
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: fs-eire, edgchen1, snnn, pengwa, mszhanyi, PeixuanZuo, tianleiwu, adrianlizarraga, baijumeswani, cloudhan, satyajandhyala, yuslepukhin, RandyShuai, RandySheriffH, skottmckay, Honry, dependabot[bot], HectorSVC, jchen351, chilo-ms, YUNQIUGUO, justinchuby, PatriceVignola, guschmue, yf711, Craigacp, smk2007, RyanUnderhill, jslhcl, wschin, kunal-vaishnavi, mindest, xadupre, fdwr, hariharans29, AdamLouly, wejoncy, chenfucn, pranavsharma, yufenglee, zhijxu-MS, jeffdaily, natke, jeffbloo, liqunfu, wangyems, er3x3, nums11, yihonglyu, sumitsays, zhanghuanrong, askhade, wenbingl, jingyanwangms, ashari4, gramalingam, georgen117, sfatimar, BowenBao, hanbitmyths, stevenlix, jywu-msft
Published by er3x3 over 2 years ago
onnxruntime - ONNX Runtime v1.15.1
This release fixed the following issues:
- A coding problem in test/shared_lib/test_inference.cc: it should use ASSERT_NEAR to test float values instead of ASSERT_EQ. Without this change, some DNNL/OpenVINO tests would fail on some AMD CPUs.
- A misaligned address error in the cublasGemmBatchedHelper function. The error only occurs when CUDA version = 11.8 and the GPU's CUDA Compute Capability is >= 80 (in other words: with TensorFloat-32 support). (#15981)
- A build issue where building with onnxruntime_ENABLE_MEMORY_PROFILE was broken in the 1.15.0 release. (#16124)
- Native onnxruntime library not loading in Azure App Service. This is because 1.15.0 introduced a call to the SetThreadDescription Windows API. Though the API is available in all Windows 10 versions, some sandbox environments block its use. (#15375)
- An alignment problem for the XNNPACK EP on Intel/AMD CPUs on PC platforms.
- Some training header files were missing in the 1.15.0 training nuget package.
- Some fields in the OrtCUDAProviderOptionsV2 struct were not initialized
- The *.dylib files in ONNX Runtime nuget package are not signed. (#16168)
Known issue
- Segfaults when loading model with local functions, works fine if model is inlined by ONNX (#16170)
- Cross building for iOS requires manually downloading protoc (#16238)
- C++
Published by snnn over 2 years ago
onnxruntime - ONNX Runtime v1.15.0
Announcements
Starting from the next release (ONNX Runtime 1.16.0), at the operating system level we will drop support for:
- iOS 11 and below; iOS 12 will be the minimum supported version
- CentOS 7, Ubuntu 18.04, and any Linux distro without glibc version >= 2.28
At the compiler level we will drop support for:
- GCC version <= 9
- Visual Studio 2019
Also, we will remove the onnxruntime_DISABLE_ABSEIL build option, since we will upgrade protobuf and the new protobuf version will need abseil.
General
- Added support for ONNX Optional type in C# API
- Added collectives to support multi-GPU inferencing
- Updated macOS build machines to macOS-12, which comes with Xcode 14.2, replacing Xcode 12.4
- Added Python 3.11 support (deprecate 3.7, support 3.8-3.11) in packages for Onnxruntime CPU, Onnxruntime-GPU, Onnxruntime-directml, and onnxruntime-training.
- Updated to CUDA 11.8. ONNX Runtime source code is still compatible with CUDA 11.4 and 12.x.
- Dropped the support for Windows 8.1 and below
- Eager mode code and the onnxruntime_ENABLE_EAGER_MODE cmake option are deleted.
- Upgraded Mimalloc version from 2.0.3 to 2.1.1
- Upgraded protobuf version from 3.18.3 to 21.12
- New dependency: cutlass, which is only used in CUDA/TensorRT packages.
- Upgraded DNNL from 2.7.1 to 3.0
Build System
- On POSIX systems, by default we disallow using the "root" user to build the code. If needed, you can append "--allow_running_as_root" to your build command to bypass the check.
- Add the support for building the source natively on Windows ARM64 with Visual Studio 2022.
- Added a Gradle wrapper and updated Gradle version from 6.8.3 to 8.0.1. (Gradle is the tool for building ORT Java package)
- When cross-compiling, the build scripts will try to download a prebuilt protoc from GitHub instead of building the binary from source, because protobuf now has many dependencies and it is not easy to set up a build environment for it.
Performance
- Improved string marshalling and reduced GC pressure
- Added a build option to allow using a lock-free queue in threadpool for improved CPU utilization
- Fix CPU memory leak due to external weights
- Added fused decoder multi-head attention kernel to improve GPT and decoder models (like T5, Whisper)
- Added packing mode to improve encoder models with inputs of large padding ratio
- Improved generation algorithms (BeamSearch, TopSampling, GreedySearch)
- Improved performance for StableDiffusion, ViT, GPT, Whisper models
Execution Providers
Two new execution providers: JS EP and QNN EP.
TensorRT EP
- Official support for TensorRT 8.6
- Explicit shape profile overrides
- Support for TensorRT plugins via ORT custom op
- Improve support for TensorRT options (heuristics, sparsity, optimization level, auxiliary stream, tactic source selection etc.)
- Support for TensorRT timing cache
- Improvements to our test coverage, specifically for opset16-17 models and package pipeline unit test coverage.
- Other misc bugfixes and improvements.
OpenVINO EP
- Support for OpenVINO 2023.0
- Dynamic shapes support for iGPU
- Changes to OpenVINO backend to improve first inference latency
- Deprecation of HDDL-VADM and Myriad VPU support
- Misc bug fixes.
QNN EP
DirectML EP
- Updated to DirectML 1.12
- Opset 16-17 support
Azure EP
- Added support for OpenAI whisper model
- Available in a Nuget pkg in addition to Python
Mobile
New packages
- Swift Package Manager for onnxruntime
- Nuget package for onnxruntime-extensions (supports Android/iOS for MAUI/Xamarin)
- React Native package for onnxruntime can optionally include onnxruntime-extensions
Pre/Post processing
- Added support for built-in pre and post processing for NLP scenarios: classification, question-answering, text-prediction
- Added support for built-in pre and post processing for Speech Recognition (Whisper)
- Added support for built-in post processing for Object Detection (YOLO). Non-max suppression, draw bounding boxes
Additional CoreML and NNAPI kernels to support customer scenarios
- NNAPI: BatchNormalization, LRN
- CoreML: Div, Flatten, LeakyRelu, LRN, Mul, Pad, Pow, Sub
Web
- [preview] WebGPU support
- Support building the source code with "MinGW make" on Windows.
ORT Training
On-device training:
- Official package for On-Device Training now available. On-device training extends ORT Inference solutions to enable training on edge devices.
- APIs and Language bindings supported for C, C++, Python, C#, Java.
- Packages available for Desktop and Android.
- For custom builds, refer to the build instructions.
Others
- Added graph optimizations which leverage the sparsity in the label data to improve performance. With these optimizations we see performance gains ranging from 4% to 15% for popular HF models over baseline ORT.
- Vision transformer models like ViT, BEIT and SwinV2 see up to 44% speedup with ORT Training + DeepSpeed over PyTorch eager mode on AzureML.
- Added optimizations for SOTA models like Dolly and Whisper. ORT Training + DS now gives ~17% speedup for Whisper and ~4% speedup for Dolly over PyTorch eager mode. Dolly optimizations on the main branch show a ~40% speedup over eager mode.
Known Issues
- The onnxruntime-training 1.15.0 packages published to pypi.org were actually built in Debug mode instead of Release mode. You can get the right one from https://download.onnxruntime.ai/ . We will fix the issue in the next patch release.
- XNNPack EP does not work on x86 CPUs without AVX-512 instructions, because we used wrong alignment when allocating buffers for XNNPack to use.
- The CUDA EP source code has a build error when CUDA version <11.6. See #16000.
- The onnxruntime-training builds are missing the training header files.
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: snnn, fs-eire, edgchen1, wejoncy, mszhanyi, PeixuanZuo, pengwa, jchen351, cloudhan, tianleiwu, PatriceVignola, wangyems, adrianlizarraga, chenfucn, HectorSVC, baijumeswani, justinchuby, skottmckay, yuslepukhin, RandyShuai, RandySheriffH, natke, YUNQIUGUO, smk2007, jslhcl, chilo-ms, yufenglee, RyanUnderhill, hariharans29, zhanghuanrong, askhade, wschin, jywu-msft, mindest, zhijxu-MS, dependabot[bot], xadupre, liqunfu, nums11, gramalingam, Craigacp, fdwr, shalvamist, jstoecker, yihonglyu, sumitsays, stevenlix, iK1D, pranavsharma, georgen117, sfatimar, MaajidKhan, satyajandhyala, faxu, jcwchen, hanbitmyths, jeffbloo, souptc, ytaous, kunal-vaishnavi
- C++
Published by snnn almost 3 years ago
onnxruntime - ONNX Runtime v1.14.1
This patch addresses packaging issues and bug fixes on top of v1.14.0:
- Mac OS Python build for x86 arch (issue: #14663)
- DirectML EP fixes: sequence ops (#14442), package naming to remove -dev suffix
- CUDA 12 build compatibility (#14659)
- Performance regression fixes: IOBinding input (#14719), Transformer models (#14732, #14517, #14699)
- ORT Training kernel fix (#14727)
Only select packages were published for this patch release; others can be found in the attachments below:
- PyPI: onnxruntime, onnxruntime-gpu, onnxruntime-directml, onnxruntime-training
- Nuget: Microsoft.ML.OnnxRuntime, Microsoft.ML.OnnxRuntime.Gpu, Microsoft.ML.OnnxRuntime.DirectML, Microsoft.AI.MachineLearning
- C++
Published by PatriceVignola almost 3 years ago
onnxruntime - ONNX Runtime v1.14.0
Announcements
- Building ORT from source will require cmake version >=3.24 instead of >=3.18.
General
- ONNX 1.13 support (opset 18)
- Threading
- New custom operator APIs
- Multi-stream Execution Provider refactoring
- Improves GPU utilization by putting parallel inference requests on different GPU streams. Updated for CUDA, TensorRT, and ROCM execution providers
- Improves memory efficiency by enabling GPU memory reuse across different streams
- Enables Execution Provider developers to customize their stream implementation by providing a "Stream" interface in the ExecutionProvider API
- [Preview] Rust API for ORT - not part of release branch but available to build in main.
Performance
- Support of quantization with AMX on Sapphire Rapids processors
- CUDA EP performance improvements:
- Improve performance of transformer models and decoding methods: beam search, greedy search, and top-p sampling
- Stable Diffusion model optimizations
- Changed the cudnn_conv_use_max_workspace default value to 1 (see the sketch after this list)
- Performance improvements to GRU and Slice operators
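For reference, a sketch of overriding the new cudnn_conv_use_max_workspace default via the Python provider-options interface; the model path and option value are illustrative:

```python
import onnxruntime as ort

# Provider options are passed as (name, options-dict) tuples.
providers = [
    ("CUDAExecutionProvider", {"cudnn_conv_use_max_workspace": "1"}),
    "CPUExecutionProvider",  # fallback
]
sess = ort.InferenceSession("model.onnx", providers=providers)
```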
Execution Providers
- TensorRT EP
- Adds support for TensorRT 8.5 GA versions
- Bug fixes
- OpenVINO EP
- Adds support for OpenVINO 2022.3
- DirectML EP:
- Updated to DML 1.10.1
- Additional operators: NonZero, Shape, Size, Attention, EmbedLayerNorm, SkipLayerNorm, BiasGelu
- Additional data types: Abs, Sign, Where
- Enable SetOptimizedFilePath export/reload
- Bug fixes/extensions: allow squeeze-13 axes, EinSum with MatMul NHCW
- ROCm EP: 5.4 support and GA ready
- [Preview] Azure EP - supports AzureML hosted models using Triton for hybrid inferencing on-device and on-cloud
Mobile
- Pre/Post processing
- Support updating mobilenet and super resolution models to move the pre and post processing into the model, including usage of custom ops for conversion to/from jpg/png
- onnxruntime-extensions python package includes the model update script to add pre/post processing to the model
- See example model update usage
- [Coming soon] onnxruntime-extensions packages for Android and iOS with DecodeImage and EncodeImage custom ops
- Updated the onnxruntime inference examples to demonstrate end-to-end usage with onnxruntime-extensions package
- SuperResolution model
- XNNPACK
- Added support for additional commonly used operators
- Add iOS build support
- XNNPACK EP is now included in the onnxruntime-c iOS package
- Added support for using the ORT allocator in XNNPACK kernels to minimize memory usage
Web
- onnxruntime-extensions included in default ort-web build (NLP centric)
- XNNPACK Gemm
- Improved exception handling
- New utility functions (experimental) to help with exchanging data between images and tensors.
Training
- Performance optimizations and bug fixes for Hugging Face models (i.e. Xlnet and Bloom)
- Stable diffusion optimizations for training, including support for Resize and InstanceNorm gradients and addition of ORT-enabled examples to the diffusers library
- FP16 optimizer exposed in torch-ort (details)
- Bug fixes for Hugging Face models
Known Issues
- The Microsoft.ML.OnnxRuntime.DirectML package name includes -dev-* suffix. This is functionally equivalent to the release branch build, and a patch is in progress.
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: snnn, skottmckay, edgchen1, hariharans29, tianleiwu, yufenglee, guoyu-wang, yuslepukhin, fs-eire, pranavsharma, iK1D, baijumeswani, tracysh, thiagocrepaldi, askhade, RyanUnderhill, wangyems, fdwr, RandySheriffH, jywu-msft, zhanghuanrong, smk2007, pengwa, liqunfu, shahasad, mszhanyi, SherlockNoMad, xadupre, jignparm, HectorSVC, ytaous, weixingzhang, stevenlix, tiagoshibata, faxu, wschin, souptc, ashbhandare, RandyShuai, chilo-ms, PeixuanZuo, cloudhan, dependabot[bot], jeffbloo, chenfucn, linkerzhang, duli2012, codemzs, oliviajain, natke, YUNQIUGUO, Craigacp, sumitsays, orilevari, BowenBao, yangchen-MS, hanbitmyths, satyajandhyala, MaajidKhan, smkarlap, sfatimar, jchen351, georgen117, wejoncy, PatriceVignola, adrianlizarraga, justinchuby, zhangxiang1993, gineshidalgo99, tlh20, xzhu1900, jeffdaily, suryasidd, yihonglyu, liuziyue, chentaMS, jcwchen, ybrnathan, ajindal1, zhijxu-MS, gramalingam, WilBrady, garymm, kkaranasos, ashari4, martinb35, AdamLouly, zhangyaobit, vvchernov, jingyanwangms, wenbingl, daquexian, sreekanth-yalachigere, NonStatic2014, mayavijx, mindest, jstoecker, manashgoswami, Andrews548, baowenlei, kunal-vaishnavi
- C++
Published by rui-ren about 3 years ago
onnxruntime - ONNX Runtime v1.13.1
Announcements
- Security issues addressed by this release
- A protobuf security issue CVE-2022-1941 that impacts users who load ONNX models from untrusted sources, for example, a deep learning inference service which allows users to upload their models then runs the inferences in a shared environment.
- An ONNX security vulnerability that allows reading of tensor_data outside the model directory, which allows attackers to read or write arbitrary files on an affected system that loads ONNX models from untrusted sources. (#12915)
- Deprecations
- CUDA 10.x support at source code level
- Windows 8.x support in Nuget/C API prebuilt binaries. Support for Windows 7+ Desktop versions (including Windows servers) will be retained by building ONNX Runtime from source.
- NUPHAR EP code is removed
- Dependency versioning updates
- A C++17 compiler is now required to build ORT from source. On Linux, GCC version >= 7.0 is required.
- Minimal numpy version bumped to 1.21.6 (from 1.21.0) for ONNX Runtime Python packages
- Official ONNX Runtime GPU packages now require CUDA version >=11.6 instead of 11.4.
General
- Expose all arena configs in Python API in an extensible way (see the sketch after this list)
- Fix ARM64 NuGet packaging
- Fix EP allocator setup issue affecting TVM EP
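A sketch of the dict-based arena configuration in Python; the class and helper names (OrtArenaCfg, OrtMemoryInfo, create_and_register_allocator) follow the current Python API and are assumed to match this release:

```python
import onnxruntime as ort

# Dict-based ("extensible") arena configuration: new knobs can be added as keys
# without changing a fixed constructor signature.
arena_cfg = ort.OrtArenaCfg({
    "max_mem": 0,                    # 0 = no cap
    "arena_extend_strategy": 0,      # 0 = kNextPowerOfTwo
    "initial_chunk_size_bytes": -1,  # -1 = library default
    "max_dead_bytes_per_chunk": -1,
})
mem_info = ort.OrtMemoryInfo("Cpu", ort.OrtAllocatorType.ORT_ARENA_ALLOCATOR, 0,
                             ort.OrtMemType.DEFAULT)
# Register a shared allocator built from this arena config.
ort.create_and_register_allocator(mem_info, arena_cfg)
```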
Performance
- Transformers CUDA improvements
- Quantization on GPU for BERT - notebook, documentation on QAT, transformer optimization toolchain and quantized kernels.
- Add fused attention CUDA kernels for BERT.
- Fuse Add (bias) and Transpose of Q/K/V into one kernel for Attention and LongformerAttention
- Reduce GEMM computation in LongformerAttention with a new weight format
- General quantization (tool and kernel)
- Quantization debugging tool - identify sensitive node/layer from accuracy drop discrepancies
- New quantize API based on QuantConfig
- New quantized operators: SoftMax, Split, Where
Execution Providers
- CUDA EP
- Official ONNX Runtime GPU packages are now built with CUDA version 11.6 instead of 11.4, but should still be backwards compatible with 11.4
- TensorRT EP
- Build option to link against pre-built onnx-tensorrt parser; this enables potential "no-code" TensorRT minor version upgrades and can be used to build against TensorRT 8.5 EA
- Improved nested control flow support
- Improve HashId generation used for uniquely identifying TRT engines. Addresses issues such as TRT Engine Cache Regeneration Issue
- TensorRT uint8 support
- OpenVINO EP
- OpenVINO version upgraded to 2022.2.0
- Support for INT8 QDQ models from NNCF
- Support for Intel 13th Gen Core Processor (Raptor Lake)
- Preview support for Intel discrete graphics cards Intel Data Center GPU Flex Series and Intel Arc GPU
- Increased test coverage for GPU Plugin
- SNPE EP
- Add support for Windows Dev Kit 2023
- Nuget Package is now available
- DirectML EP
- Update to DML 1.9.1
- New ops: LayerNormalization, Gelu, MatMulScale, DFT, FusedMatMul (contrib)
- Bug fixes: DML EP Fix InstanceNormalization with 3D tensors (#12693), DML EP squeeze all axes when empty (#12649), DirectML GEMM broken in opset 11 and 13 when optional tensor C not provided (#12568)
- [new] CANN EP - Initial integration of CANN EP contributed by Huawei to support Ascend 310 (#11477)
Mobile
- EP infrastructure
- Implemented support for additional EPs that use static kernels
- Required for EPs like XNNPACK to be supported in minimal build
- Removes need for kernel hashes to reduce maintenance overhead for developers
- NOTE: ORT format models will need to be regenerated as the format change is NOT backwards compatible. We're replacing hashes for the CPU EP kernels with operator constraint information for operators used by the model so that we can match any static kernels available at runtime.
- XNNPack
- Added more kernels including QDQ format model support
- AveragePool, Softmax,
- QLinearConv, QLinearAveragePool, QLinearSoftmax
- Added support for XNNPACK using threadpool
- See documentation for recommendations on how to configure the XNNPACK threadpool
- ORT format model peak memory usage
- Added ability to use ORT format model directly for initializers to reduce peak memory usage
- Enabled via SessionOptions config
- https://onnxruntime.ai/docs/reference/ort-format-models.html#load-ort-format-model-from-an-in-memory-byte-array
- Set "session.useortmodelbytesdirectly" and "session.useortmodelbytesfor_initializers" to "1"
Web
- Support for 4GB memory in WebAssembly
- Upgraded emscripten to 3.1.19
- Build from source support for onnxruntime-extensions and sentencepiece
- Initial XNNPACK support for Wasm optimizations
Training
- Training packages updated to CUDA version 11.6 and removed CUDA 10.2 and 11.3
- Performance improvements via op fusions like BiasSoftmax and Dropout fusion, Gather to Split fusion, etc., targeting SOTA models
- Added ATen support for GroupNorm, InstanceNormalization, Upsample nearest
- Bug fix for SimplifiedLayerNorm, seg fault for alltoall
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: snnn, baijumeswani, edgchen1, iK1D, skottmckay, cloudhan, tianleiwu, fs-eire, mszhanyi, WilBrady, hariharans29, chenfucn, fdwr, yuslepukhin, wejoncy, PeixuanZuo, pengwa, yufenglee, jchen351, justinchuby, dependabot[bot], RandySheriffH, sumitsays, wschin, wangyems, YUNQIUGUO, ytaous, pranavsharma, vvchernov, natke, Craigacp, RandyShuai, smk2007, zhangyaobit, jcwchen, yihonglyu, georgen117, chilo-ms, ashbhandare, faxu, jstoecker, gramalingam, garymm, jeffbloo, xadupre, jywu-msft, askhade, RyanUnderhill, thiagocrepaldi, mindest, jingyanwangms, wenbingl, ashari4, sfatimar, MaajidKhan, souptc, HectorSVC, weixingzhang, zhanghuanrong
- C++
Published by jchen351 over 3 years ago
onnxruntime - ONNX Runtime v1.12.1
This patch addresses packaging issues and bug fixes on top of v1.12.0.
- Java package: MacOS M1 support folder structure fix
- Android package: enable optimizations
- GPU (TensorRT provider): bug fixes
- DirectML: package fix
- WinML: bug fixes
See #12418 for full list of specific fixes included
- C++
Published by RandySheriffH over 3 years ago
onnxruntime - ONNX Runtime v1.12.0
Announcements
- For Execution Provider maintainers/owners: the lightweight compile API is now the default compiler API for all Execution Providers (this was previously only available for the mobile build). If you have an EP using the legacy compiler API, please migrate to the lightweight compile API as soon as possible. The legacy API will be deprecated in the next release (ORT 1.13).
- netstandard1.1 support is being deprecated in this release and will be removed in the next ORT 1.13 release
Key Updates
General
- ONNX spec support
- onnx opset 17
- onnx-ml opset 3 (TreeEnsemble update)
- BeamSearch operator for encoder-decoder transformers models
- Support for invoking individual ops without the need to create a separate graph
- For use with custom op development to reuse ORT code
- Support for feeding external initializers (for large models) as byte arrays for model inferencing
- Build switch to disable usage of abseil library to remove dependency
Packages
- Python 3.10 support
- Mac M1 support in Python and Java packages
- .NET 6/MAUI support in Nuget C# package
- Additional target frameworks: net6.0, net6.0-android, net6.0-ios, net6.0-macos
- NOTE: netstandard1.1 support is being deprecated in this release and will be removed in the 1.13 release
- onnxruntime-openvino package available on Pypi (from Intel)
Performance and Quantization
- Improved C++ APIs that now utilize RAII for better memory management
- Operator performance optimizations, including GatherElements
- Memory optimizations to support compute-intensive real-time inferencing scenarios (e.g. audio inferencing scenarios)
- CPU usage savings for infrequent inference requests by reducing thread spinning
- Memory usage reduction through use of containers from the abseil library, especially inlined vectors used to store tensor shapes and inlined hash maps
- New quantized kernels for weight symmetry to improve performance on ARM64 little core (GEMM and Conv)
- Specialized kernel to improve performance of quantized Resize by up to 2x
- Improved the thread job partition for QLinearConv, demonstrating up to ~20% perf gain for certain models
- Quantization tool: improved ONNX shape inference for large models
Execution Providers
- TensorRT EP
- TensorRT 8.4 support
- Provide option to share execution context memory between TensorRT subgraphs
- Workaround long CI test time caused by frequent initialization/de-initialization of TensorRT builder
- Improve subgraph partitioning and consolidate TensorRT subgraphs when possible
- Refactor engine cache serialization/deserialization logic
- Miscellaneous bug fixes and performance improvements
- OpenVINO EP
- Pre-Built ONNXRuntime binaries with OpenVINO now available on pypi: onnxruntime-openvino
- Performance optimizations of existing supported models
- New runtime configuration option 'enable_dynamic_shapes' added to enable dynamic shapes for each iteration (see the sketch after the Execution Providers list)
- ORTModule included as part of OVEP Python Package to enable Torch ORT Inference
- DirectML EP
- Updated to DirectML 1.9
- Opset 13-15 support: #11827, #11814, #11782, #11772
- Bug fixes: Xbox command list reuse, descriptor heap reset, command allocator memory growth, negative pad counts, node suffix removal
- TVM EP - details
- Updated to add model .dll ingestion and execution on Windows
- Updated documentation and CI tests
- [New] SNPE EP - details
- [Preview] XNNPACK EP - initial infrastructure with limited operator support, for use with ORT Mobile and ORT Web
- Currently supports Conv and MaxPool, with work in progress to add more kernels
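For the OpenVINO EP option mentioned above, a hedged sketch assuming the dict-style provider-options interface; the option spelling follows the note, and the value typing may differ by release:

```python
import onnxruntime as ort

providers = [
    ("OpenVINOExecutionProvider", {"enable_dynamic_shapes": True}),  # assumption: bool accepted
]
sess = ort.InferenceSession("model.onnx", providers=providers)
```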
Mobile
- Binary size reductions in Android minimal build - 12% reduction in size of base build with no operator kernels
- Added new operator support to NNAPI and CoreML EPs to improve ability to run super resolution and BERT models using NPU
- NNAPI: DepthToSpace, PRelu, Gather, Unsqueeze, Pad
- CoreML: DepthToSpace, PRelu
- Added Docker file to simplify running a custom minimal build to create an ORT Android package
- Initial XNNPACK EP compatibility
Web
- Memory usage optimizations
- Initial XNNPACK EP compatibility
ORT Training
- [New] ORT Training acceleration is also natively available through HuggingFace Optimum
- [New] FusedAdam Optimizer now available through the torch-ort package for easier training integration
- FP16_Optimizer Support for more DeepSpeed Versions
- Bfloat16 support for AtenOp
- Added gradient ops for ReduceMax and ReduceMin
- Updates to Min and Max grad ops to use distributed logic
- Optimizations
- Optimized perf for Gelu and GeluGrad kernels for mixed precision models
- Enabled fusions for SimplifiedLayerNorm
- Added bitmask versions of Dropout, BiasDropout and DropoutGrad which bring ~8x space savings for the mask output.
Known issues
- The Microsoft.ML.OnnxRuntime.DirectML package on Nuget has an issue and will be fixed in a patch. Fix: #12368
- The Maven package has a packaging issue for Mac M1 builds and will be fixed in a patch. Fix: #12335 / Workaround discussion
- Windows builds are not compatible with Windows 8.x in this release. Please use v1.11 for now.
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: snnn, edgchen1, fdwr, skottmckay, iK1D, fs-eire, mszhanyi, WilBrady, justinchuby, tianleiwu, PeixuanZuo, garymm, yufenglee, adrianlizarraga, yuslepukhin, dependabot[bot], chilo-ms, vvchernov, oliviajain, ytaous, hariharans29, sumitsays, wangyems, pengwa, baijumeswani, smk2007, RandySheriffH, gramalingam, xadupre, yihonglyu, zhangyaobit, YUNQIUGUO, jcwchen, chenfucn, souptc, chandru-r, jstoecker, hanbitmyths, RyanUnderhill, georgen117, jywu-msft, mindest, sfatimar, HectorSVC, Craigacp, jeffdaily, zhijxu-MS, natke, stevenlix, jeffbloo, guoyu-wang, daquexian, faxu, jingyanwangms, adtsai, wschin, weixingzhang, wenbingl, MaajidKhan, ashbhandare, ajindal1, zhanghuanrong, tiagoshibata, askhade, liqunfu
- C++
Published by RandySheriffH over 3 years ago
onnxruntime - ONNX Runtime v1.11.1
This is a patch release on 1.11.0 with the following fixes:
- Symbolic shape infer error (https://github.com/microsoft/onnxruntime/pull/10674)
- Quantization tool bug (https://github.com/microsoft/onnxruntime/pull/10940)
- Adds missing numpy type when looking for the ort correspondence (https://github.com/microsoft/onnxruntime/pull/10943)
- Profiling tool JSON format bug (https://github.com/microsoft/onnxruntime/pull/11046)
- Function bug fix (https://github.com/microsoft/onnxruntime/pull/11148)
- Add mobile helpers to Python build (https://github.com/microsoft/onnxruntime/pull/11196)
- Scoped GIL release in run_with_iobinding (https://github.com/microsoft/onnxruntime/pull/11248)
- Fix output type mapping for JS (https://github.com/microsoft/onnxruntime/pull/11049)
All official packages are attached, and Python packages are additionally published to PyPi.
- C++
Published by chilo-ms almost 4 years ago
onnxruntime - ONNX Runtime v1.11.0
Key Updates
General
- Support for ONNX 1.11 with opset 16
- Updated protobuf version to 3.18.x
- Enable usage of Mimalloc (details)
- Transformer model helper scripts
- On Windows, error strings in OrtStatus are now encoded in UTF-8. When you need to print it out to screen, first convert it to a wide char string by using the MultiByteToWideChar Windows API.
Performance
- Memory utilization related performance improvements (e.g. elimination of vectors for small dims)
- Performance variance stability improvement through dynamic cost model session option (details)
- New quantization data format support: S8S8 in QDQ format (see the sketch after this list)
- Added s8s8 kernels for ARM64
- Support to convert s8s8 to u8s8 automatically for x64
- Improved performance on ARM64 for quantized CNN model through:
- New kernels for quantized depthwise Conv
- Improved symmetrically quantized Conv by leveraging indirect buffer
- New Gemm kernels for symmetric quantized Conv and MatMul
- General quantization improvements, including new quantized operators (Resize, ArgMax) and quantization tool updates
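A sketch of producing an S8S8 QDQ-format model with the quantization tool; the toy calibration reader and the input name "input" are illustrative assumptions:

```python
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                      QuantType, quantize_static)

class ZeroReader(CalibrationDataReader):
    """Toy reader feeding one all-zeros batch; real calibration needs real data."""
    def __init__(self):
        self._data = iter([{"input": np.zeros((1, 3, 224, 224), np.float32)}])
    def get_next(self):
        return next(self._data, None)

quantize_static(
    "model.onnx", "model.s8s8.qdq.onnx", ZeroReader(),
    quant_format=QuantFormat.QDQ,     # QDQ data format
    activation_type=QuantType.QInt8,  # S8 activations
    weight_type=QuantType.QInt8,      # S8 weights
)
```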
API
- Java: Only a single OrtEnv can be created in any given execution of the JVM. Previously, the environment could be closed completely and a fresh one could be created with different parameters (e.g. global thread pool, or logging level) (details)
Packages
- Nuget packages
- C# packages now tested with .NET 5. .NET Core 2.1 support is deprecated as it has reached end of life support on August 21, 2021. We will closely follow .NET's support policy
- Removed PDB files. These are attached as release artifacts below.
- Pypi packages
- Python 3.6 is deprecated as it has reached EOL December 2021. Supported Python versions: 3.7-3.9
- Note: Mac M1 builds are not yet available in pypi but can be built from source
- OnnxRuntime with OpenVINO support available at https://pypi.org/project/onnxruntime-openvino/1.11.0/
Execution Providers
- CUDA
- Enable CUDA provider option configuration for C# to support workspace size configuration, and fix binary compatibility of the CUDAProviderOptions C API
- Preview support for CUDA Graphs (details)
- TensorRT
- TRT 8.2.3 support
- Memory footprint optimizations
- Support protobuf >= 3.11
- Updated flatbuffers version to 2.0
- Misc Bug Fixes
- DirectML
- Updated more operators to opset 13 (QuantizeLinear, DequantizeLinear, ReduceSum, Split, Squeeze, Unsqueeze).
- OpenVINO
- OpenVINO™ version upgraded to 2022.1.0 - the biggest OpenVINO™ upgrade in 3.5 years. This provides functional bug fixes, the API 2.0 change, and capability changes from the previous 2021.4.2 LTS release.
- Performance Optimizations of existing supported models.
- Pre-built OnnxRuntime binaries with OpenVINO enabled can be downloaded from https://github.com/intel/onnxruntime/releases/tag/v4.0 and https://pypi.org/project/onnxruntime-openvino/1.11.0/
- OpenCL (in preview)
- Introduced the EP for OpenCL to use with Mobile GPUs
- Available in the experimental/opencl branch for users to try. Provide feedback through Issues and Discussions in the repo.
- README is available here.
Mobile
- Added general support for converting a model to NHWC layout at runtime
- The execution provider sets its preferred layout, and shared infrastructure in ORT ensures the nodes assigned to the execution provider will be in that layout
- Added support for runtime optimization with minimal binary size impact
- Relevant optimizations are saved in the ORT format model for replay at runtime if applicable
- Added support for QDQ format models to the NNAPI EP
- Will fall back to the CPU EP's QDQ handling (via runtime optimizations) if NNAPI is not available
- Includes updates to the ORT QDQ optimizers so they work better with mobile scenarios
- Added helpers to:
- Analyze if a model can be used with the pre-built ORT Mobile package
- Update ONNX opset so model can be used with the pre-built package
- Convert dynamic inputs into fixed size inputs so that the model can be used with NNAPI/CoreML
- Optimize a QDQ format model for use with ORT
- Added Android and iOS packages with full ORT builds
- These packages have additional support for the full set of opsets and ops for ONNX models at the cost of a larger binary size.
Web
- Build option to create ONNX Runtime WebAssembly static library
- Support for concurrent creation of multiple inference sessions
- Upgraded emsdk version to 3.1.3 for more stable multi-threading and to enable LTO with the multi-threaded WebAssembly build
Known issues
- When using tensor sequences/sparse tensors, the generated profile is not valid JSON. (Fixed in https://github.com/microsoft/onnxruntime/pull/10974)
- There is a bug in the quantization tool for calibration when choosing percentile algorithm (Fixed in https://github.com/microsoft/onnxruntime/pull/10940). To fix this, please apply the typo fix in the python file.
- Mac M
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: snnn, edgchen1, skottmckay, yufenglee, wangyems, yuslepukhin, gwang-msft, iK1D, chilo-ms, fdwr, ytaous, RandySheriffH, hanbitmyths, chenfucn, yihonglyu, ajindal1, fs-eire, souptc, tianleiwu, YUNQIUGUO, hariharans29, oliviajain, xadupre, ashari4, RyanUnderhill, jywu-msft, weixingzhang, baijumeswani, georgen117, natke, Craigacp, jeffdaily, JingqiaoFu, zhanghuanrong, satyajandhyala, smk2007, ryanlai2, askhade, thiagocrepaldi, jingyanwangms, pengwa, scxiao, ashbhandare, BowenBao, SherlockNoMad, sumitsays, sfatimar, mosdav, harshithapv, liqunfu, tiagoshibata, gineshidalgo99, pranavsharma, jcwchen, nkreeger, xkszltl, faxu, suffiank, stevenlix, jeffbloo, feihugis
- C++
Published by chilo-ms almost 4 years ago
onnxruntime - ONNX Runtime v1.10.0
Announcements
- As noted in the deprecation notice in ORT 1.9, InferenceSession now requires the providers parameters to be set when enabling Execution Providers other than default CPUExecutionProvider. e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
- Python 3.6 support removed for Mac builds. Since 3.6 reached end-of-life in December 2021, it will no longer be supported from the next release (ORT 1.11) onwards
- Removed dependency on optional-lite
- Removed experimental Featurizers code
General
- Support for plug-in custom thread creation and join functions to enable usage of external threads
- Optional type support from op set 15
Performance
- Introduced indirect convolution method for QLinearConv with a symmetrically quantized filter (i.e., filter type is int8 and the filter's zero point is 0). The method leverages an indirect buffer instead of memcpy'ing the original data and doesn't need to compute the sum of each pixel of the output image for quantized Conv.
- X64: new kernels - including AVX2, AVX-VNNI, AVX-512 and AVX-512 VNNI - for general and depthwise quantized Conv.
- ARM64: new kernels for depthwise quantized Conv.
- Tensor shape optimization to avoid allocating heap memory in most cases - #9542
- Added transpose optimizer to push and cancel transpose ops, significantly improving perf for models requiring layout transformation
API
- Python
- Following through on the deprecation notice in ORT 1.9, InferenceSession now requires the providers parameters to be set when enabling Execution Providers other than default CPUExecutionProvider. e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
- C/C++
- New API to query CUDA stream to launch a custom kernel for scenarios where custom ops compiled into shared libraries need implicit synchronization with ORT CUDA kernels - #9141
- Updated Invalid -> OrtInvalidAllocator
- Updated every item in OrtCudnnConvAlgoSearch to a safer global name
- WinML
- New APIs to create OrtValues from Windows platform specific ID3D12Resources by exposing DirectML Execution Provider specific APIs. These APIs allow DML to extend the C-API and provide EP specific extensions.
- OrtSessionOptionsAppendExecutionProviderEx_DML
- DmlCreateGPUAllocationFromD3DResource
- DmlFreeGPUAllocation
- DmlGetD3D12ResourceFromAllocation
- Bug fix: LearningModel::LoadFromFilePath in UWP apps
Packages
- Added Mac M1 Universal2 build support for a single binary that runs natively on both Apple silicon and Intel-based Macs. These are included in the official Nuget packages. (build instructions)
- Windows C API Symbols are now uploaded to Microsoft symbol server
- Nuget package now supports ARM64 Linux C#
- Python GPU package now includes both TensorRT and CUDA EPs. Note: EPs need to be explicitly registered to ensure the correct provider is used. e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider']). Please also ensure you have appropriate TensorRT dependencies and CUDA dependencies installed.
Execution Providers
- TensorRT EP
- Python GPU release packages now include support for TensorRT 8.0. Enable TensorrtExecutionProvider by explicitly setting providers parameter when creating an InferenceSession. e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
- Published quantized BERT model example
- OpenVINO EP
- Add support for OpenVINO 2021.4.x
- Auto Plugin support
- IO Buffer/Copy Avoidance Optimizations for GPU plugin
- Misc fixes
- DNNL EP
- Add SoftmaxGrad op
- Add Transpose, Reshape, Pow and LeakyRelu ops
- Add DynamicQuantizeLinear op
- Add squeeze/unsqueeze ops
- DirectML EP
Mobile
- Added Xamarin support to the ORT C# Nuget packages
- Updated target frameworks in native package
- iOS and Android binaries now included in native package
- ORT format models now have backwards compatibility guarantee
Web
- Support WebAssembly SIMD for qgemm kernel to accelerate the performance of quantized models
- Upgraded existing WebGL kernels to the latest opset
- Optimized bundle size to support various production scenarios, such as WebAssembly only or WebGL only
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: snnn, gineshidalgo99, fs-eire, gwang-msft, edgchen1, hariharans29, skottmckay, jeffdaily, baijumeswani, fdwr, smk2007, suffiank, souptc, RyanUnderhill, iK1D, yuslepukhin, chilo-ms, satyajandhyala, hanbitmyths, thiagocrepaldi, wschin, tianleiwu, pengwa, xadupre, zhanghuanrong, SherlockNoMad, wangyems, RandySheriffH, ashbhandare, tiagoshibata, yufenglee, mindest, sumitsays, MaajidKhan, gramalingam, tracysh, georgen117, jywu-msft, sfatimar, martinb35, nkreeger, ytaous, ashari4, stevenlix, chandru-r, jingyanwangms, mosdav, raviskolli, faxu, liqunfu, kit1980, weixingzhang, pranavsharma, jcwchen, chenfucn, BowenBao, jeffbloo
- C++
Published by jingyanwangms about 4 years ago
onnxruntime - ONNX Runtime v1.9.1
This is a patch release on 1.9.0 with the following fixes:
- Microsoft.AI.MachineLearning NuGet Package Fixes
- Bug fix for OpenVINO EP Python API (#9166)
- Bump up TVM version for NUPHAR EP (#9159)
- Fixed build issue for iOS 11 and earlier versions (#9036)
- C++
Published by smk2007 over 4 years ago
onnxruntime - ONNX Runtime v1.9.0
Announcements
- GCC version < 7 is no longer supported
- CMAKE_SYSTEM_PROCESSOR needs to be set when cross-compiling on Linux because pytorch cpuinfo was introduced as a dependency for ARM big.LITTLE support. Set it to the value of the uname -m output of your target device.
General
- ONNX 1.10 support
- opset 15
- ONNX IR 8 (SparseTensor type, model local FunctionProtos; Optional type not yet fully supported this release)
- Improved documentation of C/C++ APIs
- IBM Power support
- WinML - DLL dependency fix supports learning models on Windows 8.1
- Support for sub-building onnxruntime-extensions and statically linking into onnxruntime binary for custom builds
- Add --use_extensions option to run models with custom operators implemented in onnxruntime-extensions
APIs
- Registration of a custom allocator for sharing between multiple sessions. (See RegisterAllocator and UnregisterAllocator APIs in onnxruntime_c_api.h)
- SessionOptionsAppendExecutionProvider_TensorRT API is deprecated; use SessionOptionsAppendExecutionProvider_TensorRT_V2
- New APIs: SessionOptionsAppendExecutionProvider_TensorRT_V2, CreateTensorRTProviderOptions, UpdateTensorRTProviderOptions, GetTensorRTProviderOptionsAsString, ReleaseTensorRTProviderOptions, EnableOrtCustomOps, RegisterAllocator, UnregisterAllocator, IsSparseTensor, CreateSparseTensorAsOrtValue, FillSparseTensorCoo, FillSparseTensorCsr, FillSparseTensorBlockSparse, CreateSparseTensorWithValuesAsOrtValue, UseCooIndices, UseCsrIndices, UseBlockSparseIndices, GetSparseTensorFormat, GetSparseTensorValuesTypeAndShape, GetSparseTensorValues, GetSparseTensorIndicesTypeShape, GetSparseTensorIndices
Performance and quantization
- Performance improvement on ARM
- Added S8S8 (signed int8, signed int8) matmul kernel. This avoids extending uint8 to int16 for better performance on ARM64 without dot-product instructions
- Expanded GEMM udot kernel to 8x8 accumulator
- Added sgemm and qgemm optimized kernels for ARM64EC
- Operator improvements
- Improved performance for quantized operators: DynamicQuantizeLSTM, QLinearAvgPool
- Added new quantized operator QGemm for quantizing Gemm directly
- Fused HardSigmoid and Conv
- Quantization tool - subgraph support
- Transformers tool improvements
- Fused Attention for BART encoder and Megatron GPT-2
- Integrated mixed precision ONNX conversion and parity test for GPT-2
- Updated graph fusion for embed layer normalization for BERT
- Improved symbolic shape inference for operators: Attention, EmbedLayerNormalization, Einsum and Reciprocal
Packages
- Official ORT GPU packages (except Python) now include both CUDA and TensorRT Execution Providers.
- Python packages will be updated next release. Please note that EPs should be explicitly registered to ensure the correct provider is used.
- GPU packages are built with CUDA 11.4 and should be compatible with 11.x on systems with the minimum required driver version. See: CUDA minor version compatibility
- Pypi
- ORT + DirectML Python packages now available: onnxruntime-directml
- GPU package can be used on both CPU-only and GPU machines
- Nuget
- C#: Added support for using netstandard2.0 as a target framework
- Windows symbol (PDB) files are no longer included in the Nuget package, reducing size of the binary Nuget package by 85%. To download, please see the artifacts below in Github.
Execution Providers
CUDA EP
- Framework improvements that boost CUDA performance of subgraph heavy models (#8642, #8702)
- Support for sequence ops for improved performance for models using sequence type
- Kernel perf improvements for Pad and Upsample (up to 4.5x faster)
TensorRT EP
- Added support for TensorRT 8.0 (x64 Windows/Linux, ARM Jetson), which includes new TensorRT explicit-quantization features (ONNX Q/DQ support)
- General fixes and quality improvements
OpenVINO EP
- Added support for OpenVINO 2021.4
DirectML EP
- Bug fix for Identity with non-float inputs affecting DynamicQuantizeLinear ONNX backend test
ORT Web
- WebAssembly
- SIMD (Single Instruction, Multiple Data) support
- Option to load WebAssembly from worker thread to avoid blocking main UI thread
- wasm file path override
- WebGL
- Simpler workflow for WebGL kernel implementation
- Improved performance with Conv kernel enhancement
ORT Mobile
- Added more example mobile apps
- CoreML and NNAPI EP enhancements
- Reduced peak memory usage when initializing session with ORT format model as bytes
- Enhanced partitioning to improve performance when using NNAPI and CoreML
- Reduce number of NNAPI/CoreML partitions required
- Add ability to force usage of CPU for post-processing in SSD models
- Improves performance by avoiding expensive device copy to/from NPU for cheap post-processing section of the model
- Changed to using xcframework in the iOS package
- Supports usage of arm64 iPhone simulator on Mac with Apple silicon
ORT Training
- Expanded supported input formats to include dictionaries and lists.
- Enable user defined autograd functions
- Support for fallback to PyTorch for execution
- Added support for deterministic compute to enable reproducibility with ORTModule
- Add DebugOptions and LogLevels to the ORTModule API to improve debuggability
- Improvements and additions to kernels/gradients: Concat, Split, MatMul, ReluGrad, PadOp, Tile, BatchNormInternal
- Support for ROCm 4.3.1 on AMD GPU
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: edgchen1, gwang-msft, tianleiwu, fs-eire, hariharans29, skottmckay, baijumeswani, RyanUnderhill, iK1D, souptc, nkreeger, liqunfu, pengwa, SherlockNoMad, wangyems, chilo-ms, thiagocrepaldi, KeDengMS, suffiank, oliviajain, chenfucn, satyajandhyala, yuslepukhin, pranavsharma, tracysh, yufenglee, hanbitmyths, ytaous, YUNQIUGUO, zhanghuanrong, stevenlix, jywu-msft, chandru-r, duli2012, smk2007, wschin, MaajidKhan, tiagoshibata, xadupre, RandySheriffH, ashbhandare, georgen117, Tixxx, harshithapv, Craigacp, BowenBao, askhade, zhangxiang1993, gramalingam, weixingzhang, natke, tlh20, codemzs, ryanlai2, raviskolli, pranav-prakash, faxu, adtsai, fdwr, wenbingl, jcwchen, neginraoof, cschreib-ibex
- C++
Published by wangyems over 4 years ago
onnxruntime - ONNX Runtime v1.8.2
This is a minor patch release on 1.8.1 with the following changes:
Inference
- Fix a crash issue when optimizing Conv->Add->Relu for CUDA EP
- ORT Mobile updates
- Change Pre-built iOS package to static framework to fix App Store submission issue
- Support for metadata in ORT format models
- Additional operators
- Bug fixes
Known issues
- cuDNN 8.0.5 causes memory leaks on T4 GPUs as indicated by the issue; an upgrade to a later version solves the problem.
- C++
Published by guoyu-wang over 4 years ago
onnxruntime - ONNX Runtime v1.8.1
This release contains fixes and key updates for 1.8.0. For all package installation details, please refer to https://www.onnxruntime.ai.
Inference
- Fixes for GPU package loading issues
- Fix for memory issue for models with convolution nodes while using the EXHAUSTIVE algo search mode
- ORT Mobile updates
- CoreML EP enabled in iOS mobile package
- Additional operators
- Bug fixes
- React Native package now available
Training
Performance updates for ONNX Runtime for PyTorch (training acceleration for PyTorch models):
- Accelerates most popular Hugging Face models as well as GPT-Neo and Microsoft TNLG and TNLU models
- Support for PyTorch 1.8.1 and 1.9
- Support for CUDA 10.2 and 11.1
- Preview packages for ROCm 4.2
- C++
Published by harshithapv over 4 years ago
onnxruntime - ONNX Runtime v1.8.0
Announcements
- This release
- Building onnxruntime from source now requires a C++ compiler with full C++14 support.
- Builds with OpenMP are no longer published. They can still be built from source if needed. The default threadpool option should provide optimal performance for the majority of models.
- New dependency for Python package: flatbuffers
- Next release (v1.9)
- Builds will require C++ 17 compiler
- GPU build will be updated to CUDA 11.1
General
- ONNX opset 14 support - new and updated operators from the ONNX 1.9 release
- Dynamically loadable CUDA execution provider
- Allows a single build to work for both CPU and GPU (excludes Python packages)
- Profiler tool now includes information on threadpool usage
- multi-threading preparation time
- multi-threading run time
- multi-threading wait time
- [Experimental] onnxruntime-extensions package
- Crowd-sourced library of common/shareable custom operator implementations that can be loaded and run with ONNX Runtime; community contributions are welcome! - microsoft/onnxruntime-extensions
- Currently includes mostly ops and tokenizers for string operations (full list here)
- Tutorials to export and load custom ops from onnxruntime-extensions: TensorFlow, PyTorch
Training
- torch-ort package released as the ONNX Runtime backend in PyTorch
- onnxruntime-training-gpu and onnxruntime-training-rocm packages now available for distributed training on NVIDIA and AMD GPUs
Mobile
- Official package now available
- Pre-built Android and iOS packages with support for selected operators and data types
- Objective-C API for iOS in preview
- Expanded operators supported by NNAPI (Android) and CoreML (iOS) execution providers
- All operators in the ai.onnx domain now support type reduction
- Create an ORT format model with the --enable_type_reduction flag, and perform a minimal build with the --enable_reduced_operator_type_support flag
ORT Web
- New ONNX Runtime Javascript API
- ONNX Runtime Web package
- Support WebAssembly and WebGL for CPU and GPU
- Support Web Worker based multi-threaded WebAssembly backend
- Supports ORT model format
- Improved WebGL performance
Performance
- Memory footprint reduction through shared pre-packed weights for shared initializers
- Pre-packing refers to weights that are pre-processed at model load time
- Allows pre-packed weights of shared initializers to also be shared between sessions, preserving memory savings from using shared initializers
Memory footprint reduction through arena shrinkage
- By default, the memory arena doesn't shrink and holds onto any allocated memory forever. This feature exposes a RunOption that scans the arena and potentially returns unused memory back to the system after the end of a Run (see the sketch below). This feature is particularly useful while running a dynamic shape model that may occasionally process an outlier inference request that requires a large amount of memory. If the shrinkage option is invoked as part of these Runs, the memory that was required for that Run is not held on to forever by the memory arena.
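A sketch of invoking the shrinkage option from Python; the run-config key below follows ORT's config-key naming and is an assumption for this release, as is the input name "input":

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
ro = ort.RunOptions()
# Ask the CPU arena (device 0) to return unused memory after this Run.
ro.add_run_config_entry("memory.enable_memory_arena_shrinkage", "cpu:0")
out = sess.run(None, {"input": np.zeros((1, 3, 224, 224), np.float32)}, ro)
```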
Quantization
- Native support of Quantize-Dequantize (QDQ) format for CPU
- Support for Concat, Transpose, GlobalAveragePool, AveragePool, Resize, Squeeze
- Improved performance on high-end ARM devices by leveraging dot-product instructions
- Improved performance for batched quant GEMM with optimized multi-threading logic
- Per-column quantization for MatMul
Transformers
- GPT-2 and beam search integration (example)
APIs
- WinML
- New native WinML API SetIntraOpThreadSpinning for toggling Intra Op thread spin behavior. When enabled, and when there is no current workload, IntraOp threads will continue to spin for some additional time as it waits for any additional work. This can result in better performance for the current workload but may impact performance of other unrelated workloads. This toggle is enabled by default.
- ORT Inferencing
- The following APIs have been added to this release. Please check the API documentation for information.
- KernelInfoGetAttributeArray_float
- KernelInfoGetAttributeArray_int64
- CreateArenaCfgV2
- AddRunConfigEntry
- CreatePrepackedWeightsContainer
- PrepackedWeightsContainer
- CreateSessionWithPrepackedWeightsContainer
- CreateSessionFromArrayWithPrepackedWeightsContainer
Execution Providers
- TensorRT
- Added support for TensorRT EP configuration using session options instead of environment variables.
- Added support for DLA on Jetson Xavier (AGX, NX)
- General bug fixes and quality improvements.
- OpenVINO
- Added support for OpenVINO 2021.3
- Removed support for OpenVINO 2020.4
- Added support for Loading/Saving of Blobs on MyriadX devices to avoid expensive model blob compilation at runtime.
- DirectML
- Supports ARM/ARM64 architectures now in WinML and ONNX Runtime NuGet packages
- Support for 8-dimensional tensors in: BatchNormalization, Cast, Join, LpNormalization, MeanVarianceNormalization, Padding, Tile, TopK
- Substantial performance improvements for several operators
- Resize nearest_mode "floor" and "round_prefer_ceil"
- Fusion activations for: Conv, ConvTranspose, BatchNormalization, MeanVarianceNormalization, Gemm, MatMul
- Decomposes unsupported QLinearSigmoid operation
- Removes strided 64-bit emulation in Cast
- Allows empty shapes on constant CPU inputs
Known issues
- This release has an issue that may result in segmentation faults when deployed on Intel 12th Gen processors with hybrid architecture capabilities with Performance and Efficient-cores (P-core and E-core). This has been fixed in ORT 1.9.
- The CUDA build of this release has a regression in that the memory utilization increases significantly compared to the previous releases. A fix for this will be released shortly as part of 1.8.1 patch. Here is an incomplete list of issues where this was reported - 8287, 8171, 8147.
- GPU part of source code is not compatible with
- Visual Studio 2019 16.10.0 (which was just released on May 25, 2021). 16.9.x is fine.
- clang 12
- CPU part of source code is not compatible with
- VS 2017 (https://github.com/microsoft/onnxruntime/issues/7936). Before we fix it please use VS 2019 instead.
- GCC 11. See #7918
- C# OpenVino EP is broken. #7951
- Python and Windows only: if your CUDNN DLLs are not in CUDA's installation dir, then you need to set manually "CUDNN_HOME" variable. Just putting them in %PATH% is not enough. #7965
- onnxruntime-win-gpu-x64-1.8.0.zip on this page misses important DLLs, please don't use it.
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, gwang-msft, baijumeswani, fs-eire, edgchen1, zhanghuanrong, yufenglee, thiagocrepaldi, hariharans29, skottmckay, weixingzhang, tianleiwu, SherlockNoMad, ashbhandare, tracysh, satyajandhyala, liqunfu, iK1D, RandySheriffH, suffiank, hanbitmyths, wangyems, askhade, stevenlix, chilo-ms, smk2007, kit1980, codemzs, raviskolli, pranav-prakash, chenfucn, xadupre, gramalingam, harshithapv, oliviajain, xzhu1900, ytaous, MaajidKhan, RyanUnderhill, mrry, orilevari, jingyanwangms, sfatimar, KeDengMS, jywu-msft, souptc, adtsai, tlh20, yuslepukhin, duli2012, pranavsharma, faxu, georgen117, jeffbloo, Tixxx, wschin, YUNQIUGUO, tiagoshibata, martinb35, alberto-magni, ryanlai2, Craigacp, suryasidd, fdwr, jcwchen, neginraoof, natke, BowenBao
- C++
Published by xzhu1900 over 4 years ago
onnxruntime - ONNX Runtime v1.7.2
This is a minor patch release on 1.7.1 with the following changes:
- Fix Microsoft.AI.MachineLearning NuGet package to correctly install on C# UWP projects in Visual Studio.
- C++
Published by smk2007 almost 5 years ago
onnxruntime - ONNX Runtime v1.7.1
The Microsoft.ML.OnnxRuntime.Gpu and Microsoft.ML.OnnxRuntime.Managed packages are uploaded to Nuget.org. Please note the version numbers for the Microsoft.ML.OnnxRuntime.Managed package.
- C++
Published by oliviajain almost 5 years ago
onnxruntime - ONNX Runtime v1.7.0
Announcements
Starting from this release, all ONNX Runtime CPU packages are now built without OpenMP. A version with OpenMP is available on Nuget (Microsoft.ML.OnnxRuntime.OpenMP) and PyPi (onnxruntime-openmp). Please report any issues in GH Issues.
Note: The 1.7.0 GPU package is uploaded on this Azure DevOps Feed due to the size limit on Nuget.org. Please use 1.7.1 for the GPU package through Nuget.
Key Feature Updates
General
- Mobile
- Custom operators now supported in the ONNX Runtime Mobile build
- Added ability to reduce types supported by operator kernels to only the types required by the models
- Expect a 25-33% reduction in binary size contribution from the kernel implementations. Reduction is model dependent, but testing with common models like Mobilenet v2, SSD Mobilenet and Mobilebert achieved reductions in this range.
- Custom op support for dynamic input
- MKLML/openblas/jemalloc build configs removed
- Removed dependency on gemmlowp
- [Experimental] Audio Operators
- Fourier Transforms (DFT, IDFT, STFT), Windowing Functions (Hann, Hamming, Blackman), and a MelWeightMatrix operator in the "com.microsoft.experimental" domain
- Buildable using ms_experimental build flag (included in Microsoft.AI.MachineLearning NuGet package)
Performance
- Quantization
- Quantization tool now supports quantization of models in QDQ (QuantizeLinear-DequantizeLinear) format
- Depthwise Conv quantization performance improvement
- Quantization support added for Pad, Split and MaxPool for channel last
- QuantizeLinear performance improvement on AVX512
- Optimization: Fusion for Conv + Mul/Add
- Transformers
- Longformer Attention CUDA kernel memory footprint reduction
- Einsum Float16 CUDA kernel for ALBERT and XLNet
- Python optimizer tool now supports fusion for BART
- CPU profiling tool for transformers models
APIs and Packages
- Python 3.8 and 3.9 support added for all platforms, removed support for 3.5
- ARM32/64 Windows builds are now included in the CPU Nuget and zip packages
- WinML
- .NET5 support - will work with .NET5 Standard 2.0 Projections
- Image descriptors expose NominalPixelRange properties
- Native support added for additional pixel ranges [0..1] and [-1..1] in image models.
- A new ImageNominalPixelRange property is added to the ImageFeatureDescriptor runtime class. Other similar properties exposed are the image's BitmapPixelFormat and BitmapAlphaMode.
- Bug fixes and performance improvements, including #6249
- [Experimental] Model Building API available under the Microsoft.AI.MachineLearning.Experimental namespace. (included in Microsoft.AI.MachineLearning NuGet package)
- Can be used to create dynamic models on the fly to enable engine-optimized and hardware accelerated dynamic tensor featurization code sample
Execution Providers
- CUDA EP
- Official GPU build now built with CUDA 11
- OpenVINO EP
- Support for OpenVINO 2021.2
- Deprecated support for OpenVINO 2020.2
- Support for OpenVINO EP options in the onnxruntime_perf_test tool
- General fixes
- TensorRT EP
- Support for TensorRT 7.2
- General fixes and perf improvements
- DirectML EP
- Support for DirectML 1.4.2
- DirectML PIX markers added to enable profiling graph at operator level.
- NNAPI EP
- Performance improvement for quantized models
- Support of per-channel quantization for QLinearConv
- Additional operator support – Min/Max/Pow
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: edgchen1, snnn, skottmckay, gwang-msft, hariharans29, tianleiwu, xadupre, yufenglee, ryanlai2, wangyems, suffiank, liqunfu, orilevari, baijumeswani, weixingzhang, pranavsharma, RandySheriffH, ashbhandare, oliviajain, smk2007, tracysh, stevenlix, fs-eire, Craigacp, faxu, mrry, codemzs, chilo-ms, jcwchen, zhanghuanrong, SherlockNoMad, iK1D, askhade, zhangxiang1993, yuslepukhin, tlh20, MaajidKhan, wschin, smkarlap, wenbingl, pengwa, duli2012, natke, alberto-magni, Tixxx, HectorSVC, jingyanwangms, jstoecker, kit1980, suryasidd, RandyShuai, sfatimar, jywu-msft, liuziyue, mosdav, thiagocrepaldi, souptc, fdwr
Published by oliviajain almost 5 years ago
onnxruntime - ONNX Runtime v1.6.0
Announcements
- OpenMP will be disabled in future official builds (build option will still be available). A NoOpenMP version of ONNX Runtime is now available with this release on Nuget and PyPi for C/C++/C#/Python users.
- In the next release, the MKL-ML, openblas, and jemalloc build options will be removed, and the Microsoft.ML.OnnxRuntime.MKLML NuGet package will no longer be published. Users of MKL-ML are recommended to use the Intel EPs. If you are using these options and identify issues switching to an alternative build, please file an issue with details.
Key Feature Updates
General
- ONNX 1.8 support / opset 13
- New contrib ops: BiasSoftmax, MatMulIntegerToFloat, QLinearSigmoid, Trilu
- ORT Mobile now compatible with NNAPI for accelerating model execution on Android devices
- Build support for Mac with Apple Silicon (CPU only)
- New dependency: flatbuffers
- Support for loading sparse tensor initializers in pruned models
- Support for setting the execution priority of a node
- Support for selection of cuDNN conv algorithms
- BERT Model profiling tool
Performance
- New session option to disable denormal floating-point numbers on CPUs with SSE3 support (see the sketch at the end of this section)
- Eliminates unexpected performance degradation due to denormals without needing to retrain the model
- Option to share initializers between sessions to improve memory utilization
- Useful when several models that use the same set of initializers, except for the last few layers, are loaded in the same process
- Eliminates wasteful memory usage when every model (session) would otherwise create a separate instance of the same initializer
- Exposed by the AddInitializer API (also shown in the sketch below)
- Transformer model optimizations
- Longformer: LongformerAttention CUDA operator added
- Support for BERT models exported from Tensorflow with 1 or 2 inputs
- Python optimizer supports additional models: openai-GPT, ALBERT and FlauBERT
- Quantization
- Support of per-channel QuantizeLinear and DequantizeLinear
- Support of LSTM quantization
- Quantization performance improvement on ARM
- CNN quantization perf optimizations, including u8s8 support and NHWC transformer in QLinearConv
- ThreadPool
- Use _mm_pause() in the spin loop to improve performance and power consumption
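A minimal sketch of both performance options above through the Python API, assuming a recent onnxruntime package; the model paths and the initializer name are placeholders, and sharing only takes effect if the models actually contain an initializer with that name:

```python
import numpy as np
import onnxruntime as ort

so = ort.SessionOptions()
# Flush denormal floats to zero via a session config entry.
so.add_session_config_entry("session.set_denormal_as_zero", "1")

# Provide one pre-allocated initializer that all sessions created with these
# options will reference, instead of each arena making its own copy.
shared_w = ort.OrtValue.ortvalue_from_numpy(
    np.ones((256, 256), dtype=np.float32))
so.add_initializer("shared_weight", shared_w)

sess_a = ort.InferenceSession("model_a.onnx", so)
sess_b = ort.InferenceSession("model_b.onnx", so)
```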
APIs and Packages
- Python - I/O Binding enhancements
- Usage Documentation (OrtValue and IOBinding sections)
- Python binding for the OrtValue data structure - an interface is exposed to allocate memory on a CUDA-supported device and define the contents of this memory, so allocators from other libraries are no longer needed to allocate and manage CUDA memory used with ORT
- Allows consuming ORT-allocated device memory as an OrtValue (see Scenario 4 in the IOBinding section of the documentation for an example)
- OrtValue instances can be used to bind inputs/outputs, in addition to the existing interfaces that allow binding a piece of memory directly or using numpy arrays; this may be particularly useful when binding ORT-allocated device memory (see the sketch at the end of this section)
- C# - float16 and bfloat16 support
- Windows ML
- NuGet package now supports UWP applications targeting Windows Store deployment for both CPU and GPU
- Minor API Improvements:
- Able to bind IIterable as inputs and outputs
- Able to create Tensor* via multiple buffers
- WindowsAI Redist now includes a statically linked C-Runtime package for additional deployment options
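A minimal sketch of the OrtValue/IOBinding flow described above, assuming a CUDA build and a recent Python package; the model path and the tensor names "input"/"output" are placeholders:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])

# Allocate and fill device memory through ORT itself (no third-party
# allocator), and pre-allocate the output on the same device.
x = ort.OrtValue.ortvalue_from_numpy(
    np.random.rand(1, 3, 224, 224).astype(np.float32), "cuda", 0)
y = ort.OrtValue.ortvalue_from_shape_and_type([1, 1000], np.float32, "cuda", 0)

io = sess.io_binding()
io.bind_ortvalue_input("input", x)
io.bind_ortvalue_output("output", y)
sess.run_with_iobinding(io)

result = io.copy_outputs_to_cpu()[0]  # copy back to host only when needed
```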
Execution Providers
- DNNL EP Updates
- DNNL updated from 1.1.1 to 1.7
- NNAPI EP Updates
- Support for CNN models
- Additional operator support - Resize/Flatten/Clip
- TensorRT EP Updates
- Int8 quantization support (experimental)
- Engine cache refactoring and improvements
- General fixes and performance improvements
- OpenVINO EP Updates
- OpenVINO 2021.1 support
- OpenVINO EP builds as shared library
- Multi-threaded inferencing support
- fp16 input type support
- Multi-device plugin support
- Hetero plugin support
- Enable build on ARM64
- DirectML EP Updates (1.3.0 -> 1.4.0)
- Utilizing the first public standalone release of the DirectML API through the DirectML NuGet package release
- General fixes and improvements
- nGraph EP has been removed; the OpenVINO EP is recommended instead
Additional notes
- VCRuntime2019 with OpenMP: pinning a process to NUMA node 1 forces execution to be single-threaded. A fix is in progress in VC++.
- Workaround: place the VS2017 vcomp DLL side-by-side so that ORT uses the VS2017 version
- Pip version >=20.3 is required for use on macOS Big Sur (11.x)
- The destructor of OrtEnv is now non-trivial and may perform DLL unloading. Do not call ReleaseEnv from DllMain or put OrtEnv in global variables; it is not safe to call FreeLibrary from DllMain (reference)
- Some unit tests fail on Pascal GPUs. See: https://github.com/microsoft/onnxruntime/issues/5914
- If using the default CPU package (built with OpenMP), consider tuning the OpenMP settings to improve performance. By default, the number of threads used for OpenMP parallel regions is set to the number of logical CPUs. This may not be optimal for machines with hyper-threading; when CPUs are oversubscribed, 99th-percentile latency can be 10x higher. Setting the OMP_NUM_THREADS environment variable to the number of physical cores is a good starting point. As noted in the Announcements, future official builds of ORT will be published without OpenMP
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: gwang-msft, snnn, skottmckay, edgchen1, hariharans29, wangyems, yufenglee, yuslepukhin, tianleiwu, SherlockNoMad, tracysh, ryanlai2, askhade, xadupre, liqunfu, RandySheriffH, jywu-msft, KeDengMS, pranavsharma, mrry, ashbhandare, iK1D, RyanUnderhill, MaajidKhan, wenbingl, kit1980, weixingzhang, tlh20, suffiank, Craigacp, smkarlap, stevenlix, zhanghuanrong, sfatimar, ytaous, tiagoshibata, fdwr, oliviajain, alberto-magni, jcwchen, mosdav, xzhu1900, wschin, codemzs, duli2012, smk2007, natke, zhijxu-MS, manashgoswami, zhangxiang1993, faxu, HectorSVC, take-cheeze, jingyanwangms, chilo-ms, YUNQIUGUO, jgbradley1, jessebenson, martinb35, Andrews548, souptc, pengwa, liuziyue, orilevari, BowenBao, thiagocrepaldi, jeffbloo
Published by duli2012 about 5 years ago
onnxruntime - ONNX Runtime v1.5.3
This is a minor patch release on 1.5.2 with the following changes:
- Fix shared provider unload crash #5553
- Minor minimal build header fix
Published by RyanUnderhill over 5 years ago
onnxruntime - ONNX Runtime v1.5.2
This is a minor patch release on 1.5.1 with the following changes:
- Remove dependency on cudnn64_7.dll for GPU C# nuget: https://github.com/microsoft/onnxruntime/pull/5386
- Add config keys header file in the packages for Linux and Mac: https://github.com/microsoft/onnxruntime/pull/5388
- Add flatbuffers verifier for ORT format buffer: https://github.com/microsoft/onnxruntime/pull/5378
- Use official flatbuffers v1.12: https://github.com/microsoft/onnxruntime/pull/5392
- Mitigate pybind11 build break when using Xcode 12 on macOS: https://github.com/microsoft/onnxruntime/pull/5381
- Support trilinear sampling in the Resize operator: https://github.com/microsoft/onnxruntime/pull/5300
- Update TensorRT parser to fix accuracy issue in some opset11 models: https://github.com/microsoft/onnxruntime/pull/5442
Published by tianleiwu over 5 years ago
onnxruntime - ONNX Runtime Training RC3.1
Fixes an issue discovered during validation.
Changes:
- https://github.com/microsoft/onnxruntime/pull/5350
Published by edgchen1 over 5 years ago
onnxruntime - ONNX Runtime Training RC3
See: https://github.com/microsoft/onnxruntime/releases/tag/v1.5.1
Published by edgchen1 over 5 years ago
onnxruntime - ONNX Runtime v1.5.1
Key Updates
General
- Reduced Operator Kernel build allows ORT binaries to be built with only required operators in the model(s) - learn more
- [Preview] ORT for Mobile Platforms - minimizes build size for mobile and embedded devices - learn more
- Transformer model inferencing performance optimizations
- Perf improvement for DistilBERT
- Benchmark tool supports more pretrained models
- Improvements in quantization tool
- Support quantization-aware training models
- Calibration tool now supports general preprocessing and calibration on input data
- Simplified quantization APIs
- Support for models larger than 2 GB
- New operators for static quantization: QLinearMul, QLinearAdd, QLinearSigmoid, and QLinearLeakyRelu
- Prepack constant matrix B for float GEMM (MatMul, Attention)
- Limited Python 3.8 support added in addition to 3.5-3.7 for official Python packages. Not yet supported for Windows GPU and Linux ARM builds.
- Telemetry enabled in Java and NodeJS packages for Windows builds. Note: data is not directly sent to Microsoft or ORT teams by ONNX Runtime; enabling telemetry means trace events are collected by the Windows operating system and may be sent to the cloud based on the user's privacy settings - learn more.
API
- Python API support for RegisterCustomOpsLibrary (see the sketch after this list)
- IO Binding API for the C/C++/C# language bindings. This allows the use of pre-allocated buffers on target devices, as well as specifying a target device for outputs with unknown shapes.
- Sharing of allocators between multiple sessions. This allows much better utilization of memory by not creating a separate arena for each session in the same process. See this for details.
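A minimal sketch of registering a custom op library from Python, assuming the custom kernels have been built into a shared library; both paths are placeholders:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Load the shared library; the custom op schemas/kernels it registers
# become visible to sessions created with these options.
so.register_custom_ops_library("./libcustom_ops.so")

sess = ort.InferenceSession("model_with_custom_ops.onnx", so)
```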
Windows ML
- NuGet package now supports UWP applications targeting Windows Store deployment (CPU only)
- NuGet package now supports .NET and .NET framework applications
- Rust developers can now deploy Windows ML - sample and documentation available here
- New APIs for additional performance control:
- IntraopNumThreads: Provides the ability to change the number of threads used in the threadpool for intra-operator execution of CPU operators through LearningModelSessionOptions.
- SetNamedDimensionOverrides: Provides the ability to override named input dimensions to concrete values through LearningModelSessionOptions in order to achieve better runtime performance.
- Support for additional ONNX format image type denotations – Gray8, normalized [0..1] and normalized [-1..1]
- Reduced Windows ML package size by separating debug symbols into separate distribution package.
Execution Providers
- CUDA updates
- CUDA 10.2 / cuDNN 8.0 in official package
- CUDA 11 support added and available to build from source
- CUDA conv kernels now support asymmetric padding to fully support models such as YOLOv3, improving GPU performance
- TensorRT EP updates
- Support for TensorRT 7.1
- Added TensorRT engine caching feature, turned on by setting the environment variable ORT_TENSORRT_ENGINE_CACHE_ENABLE=1 (see the sketch at the end of this section)
- TensorRT builds now produce the Execution Provider as a separate DLL. If enabled in the build, the provider is available as a shared library. This was previously enabled for the DNNL EP (ORT 1.3); other Execution Providers will be added in the future.
- OpenVINO EP updates
- Support for OpenVINO 2020.4
- Added runtime options for VPU hardware to select specific hardware device and enable fast compilation of models.
- Enable C# binding support for OpenVINO EP
- DirectML EP updates
- API available for Python (build from source) and C# Microsoft.ML.OnnxRuntime.DirectML
- 7 new operators for ONNX 1.7 (opset 12): Celu, GreaterOrEqual, LessOrEqual, ArgMin/ArgMax with select_last_index, GatherND with batch_dims, RoiAlign
- New integer data types were added to existing operators: Clip int, Max int, Min int, MaxPool int8, ReduceMin int8, ReduceMax int8, Pow int exponent
- Higher dimension support (1D to 8D) added to these operators: ElementWise, Activation, Reduce, ArgMin/ArgMax, Gather, Scatter*, OneHot
- 64-bit support for indices on GPUs that support it: Gather, Scatter, OneHot, ArgMax/ArgMin, Cast
- Android NNAPI EP updates:
- Support for dynamic input shape
- Int32/float32/uint8 data type
- 50% more supported operators (36 total)
- Support for Uint8 static quantization
- Smaller binary size
- Lower memory consumption
- CPU fallback for Android API level 26 and lower
- MiGraphX EP updates
- Added ONNX operators: GatherElements, NonZero, Equal, and Where
- Support for Boolean data type
- Improve support for existing operators:
- Asymmetric padding of AveragePool
- Multi-dimensional support for Convolution, Pooling, LRN, and BatchNormalization
- Ceil mode support for AveragePool and MaxPool
- More general approach to check whether constant folding is possible
- Improved graph partitioning logic
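A minimal sketch of enabling the TensorRT engine cache mentioned above from Python; the variable must be set before the session is created, and the model path is a placeholder:

```python
import os
import onnxruntime as ort

# Cache built TensorRT engines so later sessions skip the rebuild cost.
os.environ["ORT_TENSORRT_ENGINE_CACHE_ENABLE"] = "1"

sess = ort.InferenceSession(
    "model.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider"],
)
```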
Training (RC3 release)
- New and improved API to simplify integration with PyTorch trainer code - see instructions here
- Updated CUDA 11 / cuDNN 8.0 support for acceleration on NVIDIA A100
Dependency updates
macOS binaries now require OpenMP to be installed. See this for reference.
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
gwang-msft, snnn, skottmckay, hariharans29, thiagocrepaldi, tianleiwu, wangyems, RandySheriffH, yufenglee, SherlockNoMad, smk2007, jywu-msft, liqunfu, edgchen1, yuslepukhin, tiagoshibata, fdwr, ashbhandare, iK1D, wschin, BowenBao, zhanghuanrong, RyanUnderhill, ryanlai2, askhade, pranavsharma, martinb35, suffiank, ytaous, KeDengMS, rayankrish, natke, YUNQIUGUO, range4life, smkarlap, zhangxiang1993, xzhu1900, codemzs, weixingzhang, stevenlix, tracysh, mosdav, jingyanwangms, tlh20, souptc, orilevari, kit1980, yangchen-MS, faxu, fs-eire, wenbingl, chilo-ms, xkszltl, Andrews548, yuzawa-san, MaximKalininMS, jgbradley1, nickfeeney, zhijxu-MS, Tixxx, suryasidd, Craigacp, duli2012, jeffbloo
Published by tianleiwu over 5 years ago
onnxruntime - ONNX Runtime v1.4.0
Key Updates
- Performance optimizations for Transformer models
- GPT2 - Enable optimizations for Attention with Past State and Attention Mask
- BERT - Improve EmbedLayerNormalization fusion coverage
- Quantization updates
- Added new quantization operators: QLinearAdd, QAttention
- Improved quantization performance for transformer based models on CPU
- More graph fusion
- Further optimization in MLAS kernel
- Introduced pre-packing for the constant matrix B of DynamicQuantizeMatMul and QAttention
- New Python IOBinding APIs (bind_cpu_input, bind_output, copy_outputs_to_cpu) allow easier benchmarking (see the sketch at the end of this section)
- Users no longer need to allocate inputs and outputs on non-CPU devices using third-party allocators.
- Users no longer need to copy inputs to non-CPU devices; ORT handles the copy.
- Users can now use copy_outputs_to_cpu to copy outputs from non-CPU devices to CPU for verification.
- CUDA support for Einsum (opset12)
- ONNX Runtime Training updates
- Opset 12 support
- New sample for training experiment using Huggingface GPT-2.
- Upgraded docker image built from the latest PyTorch release
- Telemetry is now enabled by default for Python packages and Github release zip files (C API); see more details on what/how telemetry is collected in ORT
- [Coming soon] Availability of Python package for ONNX Runtime 1.4 for Jetpack 4.4
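A minimal sketch of the new IOBinding helpers named above, assuming a CUDA build; the model path and the tensor names are placeholders:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
io = sess.io_binding()

x = np.random.rand(1, 3, 224, 224).astype(np.float32)
io.bind_cpu_input("input", x)      # ORT copies the input to the device
io.bind_output("output", "cuda")   # leave the output on the device
sess.run_with_iobinding(io)

(result,) = io.copy_outputs_to_cpu()  # explicit copy back for verification
```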
Execution Providers
New Execution Providers available for preview:
- [Preview] AMD MIGraphX
- [Preview] ARM NN
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, tianleiwu, edgchen1, hariharans29, skottmckay, tracysh, yufenglee, fs-eire, codemzs, tiagoshibata, yuslepukhin, gwang-msft, wschin, smk2007, prabhat00155, liuziyue, liqunfu, ytaous, iK1D, BowenBao, askhade, pranavsharma, faxu, jywu-msft, ryanlai2, xzhu1900, KeDengMS, tlh20, smkarlap, weixingzhang, jeffbloo, RyanUnderhill, mrry, jgbradley1, stevenlix, zhanghuanrong, suffiank, Andrews548, pengwa, SherlockNoMad, orilevari, duli2012, yangchen-MS, yan12125, jornt-xilinx, ashbhandare, neginraoof, Tixxx, thiagocrepaldi, Craigacp, mayeut, chilo-ms, prasanthpul, martinb35, manashgoswami, zhangxiang1993, suryasidd, wangyems, kit1980, RandySheriffH, fdwr
Published by yuslepukhin over 5 years ago
onnxruntime - ONNX Runtime v1.3.1
This update includes changes to support the published packages for the Java and Node.js APIs for the 1.3.0 release.
- Maven: Java API CPU
- Maven: Java API GPU
- NPM: ONNX Runtime Node.js API
For all other APIs/builds, the 1.3.0 release packages are suggested. 1.3.1 does address the 1.3.0 issue of a crash when setting IntraOpNumThreads through the C/C++/C# APIs; if this fix is needed, it can be built from source using this release branch (with official release support).
Published by stevenlix over 5 years ago
onnxruntime - ONNX Runtime v1.3.0
Key Updates
General
- ONNX 1.7 support
- Opset 12
- Function expansion support, enabling several new ONNX 1.7 ops (such as NegativeLogLikelihoodLoss, GreaterOrEqual, LessOrEqual, and Celu) to run without a kernel implementation.
- [Preview] ONNX Runtime Training
- ONNX Runtime Training is a new capability released in preview to accelerate training transformer models. See the sample here to use this feature in your training experiments.
- Improved threadpool support for better resource utilization
- Improved threadpool abstractions that switch between OpenMP and Eigen threadpools based on build settings. All operators have been updated to use these new abstractions.
- The improved Eigen-based threadpool now allows ops to provide a cost (among other hints, such as thread affinity) for operations
- Simpler configuration of thread count: if built with OpenMP, use the OpenMP environment variables; otherwise use the ORT APIs to configure the number of threads (see the sketch at the end of this section)
- Support for sessions to share global threadpool. See this for more information.
- Performance improvements
- ~10% average measured latency improvements amongst key representative models (including ONNX model zoo models, MLPerf, and production models shipped in Microsoft products)
- Further latency improvements for Transformer models on CPU and GPU - benchmark script
- Improved batch inferencing latency for scikit-learn models for large batch sizes
- Significant improvements in the implementations of the following ONNX operators: TreeEnsembleRegressor, TreeEnsembleClassifier, LinearRegressor, LinearClassifier, SVMRegressor, SVMClassifier, TopK
- C# API optimizations - PR3171
- Telemetry enabled for Windows (more details on telemetry collection)
- Improved error reporting when a kernel cannot be found due to missing type implementation
- Minor fixes based on static code analysis
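A minimal sketch of configuring thread counts through the ORT API (for non-OpenMP builds), using today's Python surface; the values and model path are illustrative only:

```python
import onnxruntime as ort

so = ort.SessionOptions()
so.intra_op_num_threads = 4   # threads used inside a single operator
so.inter_op_num_threads = 2   # threads used across independent operators
# Inter-op parallelism only applies when the session runs in parallel mode.
so.execution_mode = ort.ExecutionMode.ORT_PARALLEL

sess = ort.InferenceSession("model.onnx", so)
```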
Dependency updates
Please note that this version of onnxruntime depends on the Visual C++ 2019 runtime; previous versions depended on Visual C++ 2017. Please also refer to https://github.com/microsoft/onnxruntime/tree/rel-1.3.0#system-requirements for the full set of system requirements.
APIs and Packages
- [General Availability] Windows Machine Learning APIs - package published on Nuget - Microsoft.AI.MachineLearning
- Performance improvements
- Opset updates
- [General Availability] ONNX Runtime with DirectML package published on NuGet - Microsoft.ML.OnnxRuntime.DirectML
- [General Availability] Java API - Maven package coming soon.
- [Preview] Javascript (node.js) API now available to build from the master branch.
- ARM64 Linux CPU Python package now available on Pypi. Note: this requires building ONNX for ARM64.
- Nightly dev builds from master (Nuget feed, TestPypi-CPU, GPU)
- API Updates
- I/O binding support for the Python API - this reduces execution time significantly by allowing users to set up inputs/outputs on the GPU prior to model execution.
- API to specify free dimensions based on both denotations and symbolic names (see the sketch after this list).
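A minimal sketch of both override styles via the Python API; the dimension name "batch" and the model path are placeholders, while DATA_BATCH is a standard ONNX denotation:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Pin a free dimension by its symbolic name...
so.add_free_dimension_override_by_name("batch", 1)
# ...or by its ONNX denotation.
so.add_free_dimension_override_by_denotation("DATA_BATCH", 1)

sess = ort.InferenceSession("model.onnx", so)
```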
Execution Providers
- OpenVINO v2.0 EP
- DirectML EP updates
- Updated graph interface to abstract GPU-dependent graph optimization
- ONNX opset 10 and 11 support
- Initial support of 8bit and quantized operators
- Performance optimizations
- [Preview] Rockchip NPU EP
- [Preview] Xilinx FPGA Vitis-AI EP
- Capability to build execution providers as DLLs - supported for DNNL EP, work in progress for other EPs.
- If enabled in the build, the provider will be available as a shared library. Previously, EPs had to be statically linked with the core code.
- No runtime cost to include the EP if it isn't loaded; can now dynamically decide when to load it based on the model
Contributions
We'd like to recognize our community members across various teams at Microsoft and other companies for all their valuable contributions. Our community contributors in this release include: Adam Pocock, pranavm-nvidia, Andrew Kane, Takeshi Watanabe, Jianhao Zhang, Colin Jermain, Andrews548, Jan Scholz, Pranav Prakash, suryasidd, and S. Manohar Karlapalem.
The ONNX Runtime Training code was originally developed internally at Microsoft, before being ported to Github. We’d like to recognize the original contributors: Aishwarya Bhandare, Ashwin Kumar, Cheng Tang, Du Li, Edward Chen, Ethan Tao, Fanny Nina Paravecino, Ganesan Ramalingam, Harshitha Parnandi Venkata, Jesse Benson, Jorgen Thelin, Ke Deng, Liqun Fu, Li-Wen Chang, Peng Wang, Sergii Dymchenko, Sherlock Huang, Stuart Schaefer, Tao Qin, Thiago Crepaldi, Tianju Xu, Weichun Wang, Wei Zuo, Wei-Sheng Chin, Weixing Zhang, Xiaowan Dong, Xueyun Zhu, Zeeshan Siddiqui, and Zixuan Jiang.
Known Issues
- The source doesn't compile on Ubuntu 14.04. See #4048
- Crash when setting IntraOpNumThreads using the C/C++/C# API. Fix is available in the master branch. Workaround: Setting IntraOpNumThreads is inconsequential when using ORT that is built with openmp enabled. Hence it's not required and can be safely commented out. Use the openmp env variables to set the threading params for openmp enabled builds (which is the recommended way).
Published by stevenlix almost 6 years ago
onnxruntime - ONNX Runtime v1.2.0
Key Updates
Execution Providers
- [Preview] Availability of Windows Machine Learning (WinML) APIs in Windows builds of ONNX Runtime, with DirectML for GPU acceleration
- Windows ML is a WinRT API designed specifically for Windows developers that already ships as an inbox component in newer Windows versions
- Compatible with Windows 8.1 for CPU and Windows 10 1709 for GPU
- Available as source code in the GitHub and pre-built Nuget packages (windows.ai.machinelearning.dll)
- For additional documentation and samples on getting started, visit the Windows ML API Reference documentation
- TensorRT Execution Provider upgraded to TRT 7
- CUDA updated to 10.1
- Linux build requires CUDA Runtime 10.1.243, cublas10-10.2.1.243, and CUDNN 7.6.5.32. Note: cublas 10.1.x will not work
- Windows build requires CUDA Runtime 10.1.243, CUDNN 7.6.5.32
- onnxruntime now depends on curand lib, which is part of the CUDA SDK. If you already have the SDK fully installed, then it won't be an issue
Builds and Packages
- NuGet package structure updated. There is now a separate managed assembly (Microsoft.ML.OnnxRuntime.Managed) shared between the CPU and GPU NuGet packages; the "native" NuGet depends on the "managed" NuGet to bring it into relevant projects automatically. PR 3104. Note that this should be transparent for customers installing the NuGet packages. ORT package details are here.
- Build system: support getting dependencies from vcpkg (a C++ package manager for Windows, Linux, and MacOS)
- Capability to generate an onnxruntime Android Archive (AAR) file from source, which can be imported directly in Android Studio
API Updates
- SessionOptions:
- default value of max_num_graph_transformation_steps increased to 10
- default value of graph optimization level changed to ORT_ENABLE_ALL (99)
- OrtEnv can be created/destroyed multiple times
- Java API
- Gradle now required to build onnxruntime
- Available on Android
- C API Additions (a Python analogue is sketched after this list):
- GetDenotationFromTypeInfo
- CastTypeInfoToMapTypeInfo
- CastTypeInfoToSequenceTypeInfo
- GetMapKeyType
- GetMapValueType
- GetSequenceElementType
- ReleaseMapTypeInfo
- ReleaseSequenceTypeInfo
- SessionEndProfiling
- SessionGetModelMetadata
- ModelMetadataGetProducerName
- ModelMetadataGetGraphName
- ModelMetadataGetDomain
- ModelMetadataGetDescription
- ModelMetadataLookupCustomMetadataMap
- ModelMetadataGetVersion
- ReleaseModelMetadata
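For orientation, the Python API exposes the same metadata and profiling surface; a minimal sketch, with the model path as a placeholder:

```python
import onnxruntime as ort

so = ort.SessionOptions()
so.enable_profiling = True
sess = ort.InferenceSession("model.onnx", so)

meta = sess.get_modelmeta()
print(meta.producer_name, meta.graph_name, meta.domain,
      meta.description, meta.version, meta.custom_metadata_map)

trace_path = sess.end_profiling()  # counterpart of SessionEndProfiling
```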
Operators
- This release introduces a change to the forward-compatibility pattern ONNX Runtime previously followed. This change was added to guarantee correctness of model prediction and removes behavior ambiguity due to missing opset information. This release adds a model opset number and IR version check - ONNX Runtime will not support models with ONNX versions higher than the supported opset implemented for that version (see version matrix). If higher opset versions are needed, consider using custom operators via ORT's custom schema/kernel registry mechanism.
- Int8 type support for Where Op
- Updates to Contrib ops:
- Changes: ReorderInput in kMSNchwcDomain, SkipLayerNormalization
- New: QLinearAdd, QLinearMul, QLinearReduceMean, MulInteger, QLinearAveragePool
- Added featurizer operators as an expansion of Contrib operators - these are not part of the official build and are experimental
Contributions
We'd like to recognize our community members across various teams at Microsoft and other companies for all their valuable contributions. Our community contributors in this release include: Eric Cousineau (Toyota Research Institute), Adam Pocock (Oracle), tinchi, Changyoung Koh, Andrews548, Jianhao Zhang, nicklas-mohr-jas, James Yuzawa, William Tambellini, Maher Jendoubi, Mina Asham, Saquib Nadeem Hashmi, Sanster, and Takeshi Watanabe.
Published by yufenglee almost 6 years ago
onnxruntime - ONNX Runtime v1.1.2
This is a minor patch release on 1.1.1.
This fixes a minor issue where some logging in execution_frame.cc could not be controlled by SessionLogVerbosityLevel in SessionOptions. PR #3043
Published by yufenglee about 6 years ago
onnxruntime - ONNX Runtime v1.1.1
This is a minor patch release on 1.1.0.
Summary
- Updated the default optimization level to apply all optimizations by default, for best performance on popular models
- Operator updates and other bug fixes
All fixes
- update default optimization level + fix gemm_activation fusion #2791
- Fix C# handling of unicode strings #2697
- Initialize max of softmax with lowest of float #2786
- Implement a more stable softmax #2715
- add uint8 support to where op #2792
- Fix memory leak in samples and test #2778
- Fix memory leak in TRT #2815
- Fix nightly build version number issue #2771
Published by RyanUnderhill about 6 years ago
onnxruntime - ONNX Runtime v1.1.0
Key Updates
- Performance improvements to accelerate BERT model inference latency on both GPU and CPU. Updates include:
- Additional fused CPU kernels as well as related transformers for key operators such as Attention, EmbedLayerNormalization, SkipLayerNormalization, FastGelu
- Further optimization such as parallelizing Gelu and LayerNorm, enabling legacy stream mode, improving performance of elementwise operators, and fusing add bias into SkipLayerNormalization and FastGelu
- Extended CUDA support for opset 11
- Performance improvement for Faster R-CNN and Mask R-CNN with new and updated implementations of opset 11 CUDA kernels, including Resize, Expand, Scatter, and Pad
- TensorRT Execution Provider updates, including support for inputs with dynamic shapes
- MKL-DNN (renamed DNNL) updated to v1.1
- [Preview] NN API Execution Provider for Android - see more
- [Preview] Java API for ONNX Runtime - see more
- Tool for Python API: Automatically maps a dataframe to the inputs of an ONNX graph based on schema information in the pandas frame
- Custom ops can be loaded from shared libraries: custom ops can now be packaged in shared libraries and distributed for use in multiple applications without modification.
Contributions
We'd like to thank our community members across various teams at Microsoft and other companies for all the valuable contributions.
We'd like to extend special recognition to these individuals for their contributions in this release: Jianhao Zhang (JD AI), Adam Pocock (Oracle), nihui (Tencent), and Nick Groszewski. From the Intel teams, we'd like to thank Patrick Foley, Akhila Vidiyala, Ilya Lavrenov, Manohar Karlapalem, Surya Siddharth Pemmaraju, Sreekanth Yalachigere, Michal Karzynski, Thomas V Trimeloni, Tomasz Dolbniak, Amy Zhuang, Scott Cyphers, Alexander Slepko and other team members on their valuable work to support the Intel Execution Providers for ONNX Runtime.
Published by RyanUnderhill about 6 years ago
onnxruntime - ONNX Runtime v1.0.0
Key Updates
General
- ONNX 1.6 compatibility - operator support for all opset11 ops on CPU, including Sequence ops.
- Free dimension override: Add ability to override free dimensions to the inputs of a model. Free dimensions are tensor shapes which aren't statically known at model author time and must be provided at runtime. Free dimensions are most often used for the batch size of a model's inputs, allowing for customizable batch sizes at runtime. This feature enables certain optimizations since the shape can be known a priori.
- Performance improvements to further accelerate model inferencing latency on CPU and GPU. Notable updates include:
- Additional CUDA operators added to support Object Detection and BERT models. Note: CUDA operator coverage is still limited and performance will vary significantly depending on the model and operator usage.
- Improved parallelism for operators that use GEMM and MatMul
- New implementation for 64 bits MatMul on x86_64 CPU
- Added ability to set the number of threads used by intra- and inter-operator parallelism, allowing optimal configuration for both sequential and concurrent inferencing scenarios
- Gelu fusion optimizer
- Threading updates:
- Eigen ThreadPool is now the default (previously there were two thread pool implementations, TaskThreadPool and Eigen ThreadPool)
- Ability to disable multithreading by setting the thread pool size to 1 and onnxruntime_USE_OPENMP to OFF.
- MLAS now uses the number of thread pool threads plus one as the parallelism level. (e.g. if you have 4 CPUs, you need to set the thread pool size to 3 so that you only have one thread per CPU)
- CPU Python package is manylinux1 compliant. The GPU Python package is manylinux2010 and compatible with CUDA 10.0/cuDNN 7.6
- Support for CentOS 6 and 7 for Python, C, and C++. Most of the code is now C++11 compliant (previously required C++14). C# .NET Core compatibility coming soon.
- Package for ArchLinux
- Telemetry - component level logging through Trace Logging for Windows builds. Data collection is limited and used strictly to identify areas for improvement. You can read more about the data collected and how to manage these settings here.
- Bug fixes to address various issues filed on Github and other channels
API updates
- Updates to the C API for clarity of usage. The 1.0 version of the API is now stable and will maintain backwards compatibility. Versioning is supported to accommodate future updates.
- C APIs are ABI compatible and follow Semantic Versioning. Programs linked with the current version of the ONNX Runtime library will continue to work with subsequent releases without updating any client code or re-linking.
- New session option available for serializing optimized ONNX models (see the sketch after this list)
- Enabled some new capabilities through the Python and C# APIs for feature parity, including registration of execution providers in Python and setting additional run options in C#.
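A minimal sketch of serializing the optimized model from Python; both paths are placeholders:

```python
import onnxruntime as ort

so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
# The optimized graph is written out when the session is created.
so.optimized_model_filepath = "model.optimized.onnx"

sess = ort.InferenceSession("model.onnx", so)
```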
Execution Providers (EP)
Updates
- General Availability of the OpenVINO™ EP for Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2, and the Intel® Vision Accelerator Design with Intel® Movidius™ Myriad™ VPU powered by OpenVINO™
- MKL-DNN EP updated from 0.18.1 to 1.0.2 for an average of 5-10% (up to 50%) performance improvement on ONNX Model Zoo model latency
- nGraph EP updated from 0.18 to 0.26, with support of new operators for quantization and performance improvements on LSTM ops (without peephole) and Pad op
- TensorRT EP updated to the latest TensorRT 6.0 libraries
- Android DNNLibrary version update
New EP support
- [Preview] NUPHAR (Neural-network Unified Preprocessing Heterogeneous ARchitecture) is a TVM and LLVM based EP offering model acceleration by compiling nodes in subgraphs into optimized functions via JIT
- [Preview] DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows, providing GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers
- [Preview] Support for Intel® Vision Accelerator Design with Intel® Arria™ 10 FPGA powered by OpenVINO™.
- [Preview] ARM Compute Library (ACL) Execution Provider targets ARM CPUs and GPUs for optimized execution of ONNX operators using the low-level libraries.
Build updates
- Three new cmake options: onnxruntime_USE_GEMMLOWP, onnxruntime_USE_AUTOML, onnxruntime_USE_DML
- Removed two cmake options: onnxruntime_USE_MLAS and onnxruntime_USE_EIGEN_THREADPOOL. These are always ON now.
- The minimal supported gcc version is 4.8.2
Tooling
- Availability of ONNX Go Live tool, which automates the process of shipping ONNX models by combining model conversion, correctness tests, and performance tuning into a single pipeline as a series of Docker images.
- Updates to the quantization tool
- Supports selective quantization for some nodes instead of all possible nodes
- Bias quantization for Conv nodes
- Node fusion for dynamic quantization
- onnxruntime_perf_test usage updates:
- new option "-y" for controlling inter_op_num_threads
- max optimization level is now 99, and 3 is now an invalid value. In most cases, this tool should be run with "-o 99"
Other Dependency Updates
- Replaced gsl with gsl-lite to be compatible with C++11
- Added NVIDIA cub
- Added Wil for DML execution provider
- Pybind11 updated from 2.2.4 to 2.4.0 to fix a compatibility issue with Baidu PaddlePaddle and some other Python modules that also depend on Pybind11
- TVM updated to a newer version
Published by snnn over 6 years ago
onnxruntime - ONNX Runtime v0.5.1
Bug Fixes
- Fix in C# API marshalling for InferenceSession.Run()
- Some fixes in OnnxRuntime server
Only NuGet packages are released for this patch release, because only the C# API users are impacted
Published by shahasad over 6 years ago
onnxruntime - ONNX Runtime v0.5.0
- Execution Provider updates
- MKL-DNN provider (subgraph based execution) for improved performance
- Intel OpenVINO EP now available for Public Preview - build instructions
- Update to CUDA 10 for inferencing with NVIDIA GPUs
- Base CPU EP has faster convolution performance using the NCHWc blocked layout. This layout optimization can be enabled by setting graph optimization level to 3 in the session options.
- C++ API for inferencing (wrapper on C API)
- ONNX Runtime Server (Beta) for inferencing with HTTP and GRPC endpoints
- Python Operator (Beta) to support custom Python code in a single node of an ONNX graph to make it easier for experimentation of custom operators
- Support for the Keras-based Mask R-CNN model. The model relies on some custom operators pending addition to ONNX; in the meantime, it can be converted using this script for inferencing with ONNX Runtime 0.5. Other object detection models can be found in the ONNX Model Zoo.
- Minor updates to the C API
- For consistency, all C APIs now return an ORT status code
- Code coverage for this release is 83%
Published by hariharans29 over 6 years ago
onnxruntime - ONNX Runtime v0.4.0
Key Updates
- New execution providers for improved performance on specialized hardware
- Intel nGraph
- NVIDIA TensorRT
- ONNX 1.5 compatibility
- Opset 10 operator support
- Supports newly added ONNX model zoo object detection models (YOLO v3, SSD)
- Quantization operators
- Updates to C API for Custom Operators
- Allocation of outputs during compute
- C++ wrapper to greatly simplify implementation
- Supports custom op DLLs when ONNX Runtime is compiled statically
- Graph optimizations with Constant Folding for improved performance
- Official binary packages
- Nuget package creation pipeline updated with security-focused tasks
- CredScan
- SDLNative Rules for PreFast
- BinSim
- Additional binaries built with MKL-ML published in Nuget
- Size reduction in Windows (700KB+), Linux (65%) and Mac (45%) binaries
Published by askhade almost 7 years ago
onnxruntime - ONNX Runtime v0.3.1
This is a patch release for 0.3.0.
Updates include
- Binary size reduction through usage of protobuf-lite and operator fixes
- Build option to disable contrib ops (ops not in ONNX standard)
- Build option to statically link MSVC
- Minor bug fixes
Published by jignparm almost 7 years ago
onnxruntime - ONNX Runtime v0.3.0
Key Updates
ONNX 1.4 compatibility
- Opset 9 operator support
- Support of large models >2GB
New build packages
- C/C#: OS X x64 CPU
- C: Linux x86 CPU
- C: Windows x86 CPU
Custom op registration via C API
Non-Tensor type support for input/output for C and C# API
Release Notes
- Default execution provider for CPU uses Eigen and MLAS; prior releases used MKL-DNN. See all build options here.
- OpenMP is required for the prebuilt binaries. See System Requirements for more details.
Published by RandySheriffH almost 7 years ago
onnxruntime - ONNX Runtime v0.2.1
Key Updates:
- ONNX Runtime C# packages are now available for Linux, with GPU support for both Windows and Linux. Find the APIs and package downloads here.
- The C API has been updated and is now in Beta (previously: experimental). This version is expected to be mostly stable, though it may change to support usage needs
- Support of additional operators with MKL-DNN: Relu, Sum, BatchNormalization
Release Notes
- The prebuilt-binaries in the CPU builds of the release require OpenMP at runtime. For Linux systems, it requires libgomp.so.1 to be installed. If OnnxRuntime fails to load, please try installing libgomp1.
- The binaries in the GPU builds require CUDA 9.1 and CuDNN 7.1 runtime libraries to be available in the system. For the Windows NuGet package of the v0.2.1 release, this is CUDA 10.0 and CuDNN 7.3 instead.
Published by raymondxyang about 7 years ago
onnxruntime - ONNX Runtime v0.1.5
This is just a minor patch to the previous 0.1.4 release.
Published by pranavsharma about 7 years ago
onnxruntime - ONNX Runtime v0.1.4
This is the first open source release of ONNX Runtime.
Published by pranavsharma about 7 years ago