tensor.qlinear_leakyrelu
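Signature (reconstructed from the Args and Returns sections below; exact trait bounds may differ between Orion versions):

```rust
fn qlinear_leakyrelu(self: @Tensor<i8>, a_scale: @Tensor<T>, a_zero_point: @Tensor<T>, alpha: T) -> Tensor<i8>;
```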
Applies the Leaky Relu operator to a quantized Tensor.
QLinear LeakyRelu takes as input a quantized Tensor, its scale and zero point, and a scalar alpha, and produces one output (a quantized Tensor) where the function f(x) = alpha * x for x < 0, f(x) = x for x >= 0, is applied to the data tensor elementwise. The quantization formula is y = saturate((x / y_scale) + y_zero_point). The scale and zero point must have the same shape and the same type. They must be either scalar (per-tensor quantization) or N-D tensors (per-row quantization).
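As an illustrative worked example (values assumed, not from the Orion docs): with a_scale = 0.25, a_zero_point = 10, and alpha = 0.5, a quantized input of -30 dequantizes to (-30 - 10) * 0.25 = -10; since -10 < 0, Leaky Relu gives -10 * 0.5 = -5, which requantizes to saturate(-5 / 0.25 + 10) = -10.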
Args
self (@Tensor<i8>) - The input quantized tensor (a).
a_scale (@Tensor<T>) - Scale for input a.
a_zero_point (@Tensor<T>) - Zero point for input a.
alpha (T) - The factor multiplied to negative elements.
Returns
A new Tensor<i8>, containing the result of the Leaky Relu.
Type Constraints
u32 tensor, not supported.
fp8x23wide tensor, not supported.
fp16x16wide tensor, not supported.
bool tensor, not supported.
Example
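A minimal sketch of how this operator might be called, modeled on the other qlinear examples in the Orion docs. The import paths, the IntegerTrait/FixedTrait constructors, and the FP16x16 fixed-point constants (65536 = 1.0, so 16384 = 0.25, 32768 = 0.5, 655360 = 10.0) are assumptions that may differ between Orion versions:

```rust
use core::array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, I8Tensor, FP16x16Tensor};
use orion::numbers::{i8, FP16x16, IntegerTrait, FixedTrait};

fn qlinear_leakyrelu_example() -> Tensor<i8> {
    // Quantized input: [-30, -20, 40] as i8.
    let quantized_data = TensorTrait::<i8>::new(
        shape: array![3].span(),
        data: array![
            IntegerTrait::<i8>::new(30_u8, true),
            IntegerTrait::<i8>::new(20_u8, true),
            IntegerTrait::<i8>::new(40_u8, false)
        ]
            .span(),
    );

    // Per-tensor scale = 0.25 and zero point = 10.
    let scale = TensorTrait::<FP16x16>::new(
        shape: array![1].span(), data: array![FixedTrait::<FP16x16>::new(16384, false)].span(),
    );
    let zero_point = TensorTrait::<FP16x16>::new(
        shape: array![1].span(), data: array![FixedTrait::<FP16x16>::new(655360, false)].span(),
    );

    // alpha = 0.5: negative dequantized values are halved before requantization.
    let alpha = FixedTrait::<FP16x16>::new(32768, false);

    return quantized_data.qlinear_leakyrelu(@scale, @zero_point, alpha);
}
>>> [-10, -5, 40]
```

The expected output follows the worked example above: each i8 value is dequantized with the scale and zero point, Leaky Relu is applied, and the result is requantized back to i8.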