tensor.reduce_log_sum_exp

   fn reduce_log_sum_exp(self: @Tensor<T>, axis: usize, keepdims: bool) -> Tensor<T>; 

Computes the logarithm of the sum of the exponentials of the input tensor's elements along the provided axis.
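
Conceptually, each output element is the standard log-sum-exp over the reduced axis. As a sketch of that definition (notation ours, not taken from the Orion source), for a reduction along an axis j of length K:

$$
\text{output}_{i_1,\dots,i_{j-1},\,i_{j+1},\dots,i_n} = \log\!\left(\sum_{k=1}^{K} \exp\big(\text{input}_{i_1,\dots,i_{j-1},\,k,\,i_{j+1},\dots,i_n}\big)\right)
$$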

Args

  • 'self'(@Tensor<T>) - The input tensor.

  • 'axis'(usize) - The dimension to reduce.

  • 'keepdims'(bool) - If true, retains reduced dimensions with length 1.

Panics

  • Panics if axis is not in the range of the input tensor's dimensions.

Returns

Returns a new Tensor<T> instance with the specified axis reduced by taking the logarithm of the sum of the exponentials of its elements.

Example

use core::array::{ArrayTrait, SpanTrait};
use orion::operators::tensor::{TensorTrait, Tensor};
use orion::operators::tensor::FP32x32Tensor;
use orion::numbers::{FixedTrait, FP32x32};

fn reduce_log_sum_exp() -> Tensor<FP32x32> {
    // Build a 3x2x2 tensor holding the values 1 through 12
    // (FP32x32 fixed point: mag = value * 2^32, sign = false for positive).
    let mut shape = ArrayTrait::<usize>::new();
    shape.append(3);
    shape.append(2);
    shape.append(2);

    let mut data = ArrayTrait::new();
    data.append(FP32x32 { mag: 4294967296, sign: false });
    data.append(FP32x32 { mag: 8589934592, sign: false });
    data.append(FP32x32 { mag: 12884901888, sign: false });
    data.append(FP32x32 { mag: 17179869184, sign: false });
    data.append(FP32x32 { mag: 21474836480, sign: false });
    data.append(FP32x32 { mag: 25769803776, sign: false });
    data.append(FP32x32 { mag: 30064771072, sign: false });
    data.append(FP32x32 { mag: 34359738368, sign: false });
    data.append(FP32x32 { mag: 38654705664, sign: false });
    data.append(FP32x32 { mag: 42949672960, sign: false });
    data.append(FP32x32 { mag: 47244640256, sign: false });
    data.append(FP32x32 { mag: 51539607552, sign: false });

    let tensor = TensorTrait::<FP32x32>::new(shape.span(), data.span());

    // Reduce along the last axis (axis 2) without keeping the reduced dimension.
    return tensor.reduce_log_sum_exp(axis: 2, keepdims: false);
}

>>> [[9215828, 16323477, 20115004], [22716772, 24699744, 26302432]]
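
To keep the reduced dimension instead of dropping it, keepdims can be set to true; per the Args above, the reduced axis is then retained with length 1. A minimal sketch reusing the tensor built in the example (same imports and data):

// keepdims: true retains axis 2 with length 1,
// so the result has shape (3, 2, 1) rather than (3, 2).
let reduced = tensor.reduce_log_sum_exp(axis: 2, keepdims: true);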