Optimize and Accelerate Machine Learning Inferencing and Training

Speed up the machine learning process

Built-in optimizations that deliver up to 17X faster inferencing and up to 1.4X faster training

Plug into your existing technology stack

Support for a variety of frameworks, operating systems, and hardware platforms

Build using proven technology

Used in Office 365, Visual Studio, and Bing, delivering half a trillion inferences every day

Get Started Easily

Platform


Windows
Linux
Mac
Android
iOS
Web Browser (Preview)

API


Python
C++
C#
C
Java
JS
Obj-C
WinRT

Architecture


X64
X86
ARM64
ARM32
IBM Power

Hardware Acceleration


Default CPU
CoreML
CUDA
DirectML
NNAPI
oneDNN
OpenVINO
SNPE
TensorRT
ACL (Preview)
ArmNN (Preview)
CANN (Preview)
MIGraphX (Preview)
ROCm (Preview)
Rockchip NPU (Preview)
TVM (Preview)
Vitis AI (Preview)
XNNPACK (Preview)

Installation Instructions

Select a combination of the options above to generate installation instructions.
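
For example, the default Python packages install via pip; the CPU build suits the common case, while the GPU package adds the CUDA and TensorRT providers:

```shell
# Default CPU build
pip install onnxruntime

# GPU build (CUDA / TensorRT execution providers)
pip install onnxruntime-gpu
```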

Organizations and products using ONNX Runtime

Resources