OnnxRuntime
Ort::Float16_t Struct Reference

IEEE 754 half-precision floating point data type.

#include <onnxruntime_cxx_api.h>

Public Member Functions

constexpr Float16_t () noexcept
 
constexpr Float16_t (uint16_t v) noexcept
 
constexpr operator uint16_t () const noexcept
 
constexpr bool operator== (const Float16_t &rhs) const noexcept
 
constexpr bool operator!= (const Float16_t &rhs) const noexcept
 

Public Attributes

uint16_t value
 

Detailed Description

IEEE 754 half-precision floating point data type.

This type is required for type dispatching when using the C++ API. It is implicitly convertible to/from uint16_t. The structure has the same size and alignment as uint16_t, so uint16_t buffers can be freely cast to/from Ort::Float16_t to feed and retrieve data.

Generally, you can feed any of your own types as float16/bfloat16 data to create a tensor on top of it, provided it forms a contiguous buffer of 16-bit elements with no padding. You can also feed an array of uint16_t elements directly. For example:

uint16_t values[] = {15360, 16384, 16896, 17408, 17664};  // binary16 bit patterns for 1.0f .. 5.0f
constexpr size_t values_length = sizeof(values) / sizeof(values[0]);
std::vector<int64_t> dims = {values_length};  // one-dimensional example
Ort::MemoryInfo info("Cpu", OrtDeviceAllocator, 0, OrtMemTypeDefault);
// Note: this API takes the byte count, not the element count -> sizeof(values)
auto float16_tensor = Ort::Value::CreateTensor(info, values, sizeof(values),
                                               dims.data(), dims.size(),
                                               ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16);

Here is another, slightly more elaborate example. Assume you use your own float16 type and want the templated version of the API above, so the element type is deduced automatically from yours. You will need to supply an extra template specialization:

namespace yours { struct half {}; }  // assume this is your type; define this:
namespace Ort {
template <>
struct TypeToTensorType<yours::half> {
  static constexpr ONNXTensorElementDataType type = ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16;
};
}  // namespace Ort
std::vector<yours::half> values;
std::vector<int64_t> dims = {static_cast<int64_t>(values.size())};  // one-dimensional example
Ort::MemoryInfo info("Cpu", OrtDeviceAllocator, 0, OrtMemTypeDefault);
// Here we pass the element count -> values.size()
auto float16_tensor = Ort::Value::CreateTensor<yours::half>(info, values.data(), values.size(),
                                                            dims.data(), dims.size());

Constructor & Destructor Documentation

◆ Float16_t() [1/2]

constexpr Ort::Float16_t::Float16_t ( )
inline constexpr noexcept

◆ Float16_t() [2/2]

constexpr Ort::Float16_t::Float16_t ( uint16_t  v)
inline constexpr noexcept

Member Function Documentation

◆ operator uint16_t()

constexpr Ort::Float16_t::operator uint16_t ( ) const
inline constexpr noexcept

◆ operator!=()

constexpr bool Ort::Float16_t::operator!= (const Float16_t &rhs) const
inline constexpr noexcept

◆ operator==()

constexpr bool Ort::Float16_t::operator== (const Float16_t &rhs) const
inline constexpr noexcept

Member Data Documentation

◆ value

uint16_t Ort::Float16_t::value