tensor.layer_normalization

fn layer_normalization(
    self: @Tensor<T>,
    scale: @Tensor<T>,
    B: Option<@Tensor<T>>,
    axis: Option<i32>,
    epsilon: Option<T>,
    stash_type: Option<usize>,
) -> (Tensor<T>, Tensor<T>, Tensor<T>);

Layer normalization of the input, performed in two stages. The first stage standardizes the elements so that they have zero mean and unit variance. The second stage then scales and shifts the outcome of the first stage.
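Concretely, following the ONNX LayerNormalization definition that this operator mirrors (a sketch of the reference formulas, not Orion's internals), the two stages over the normalized axes are:

$$
\mu = \frac{1}{n} \sum_{i} x_i,
\qquad
\sigma^2 = \frac{1}{n} \sum_{i} (x_i - \mu)^2
$$

$$
\text{InvStdDev} = \frac{1}{\sqrt{\sigma^2 + \epsilon}},
\qquad
Y = \frac{X - \mu}{\sqrt{\sigma^2 + \epsilon}} \cdot \text{scale} + B
$$

The three returned tensors are Y, Mean, and InvStdDev.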

Args

  • self(@Tensor<T>) - The input tensor.

  • scale(@Tensor<T>) - Scale tensor.

  • B(Option<@Tensor<T>>) - Bias tensor.

  • axis(Option<i32>) (default is -1) - The first normalization dimension. If rank(X) is r, the allowed range for axis is [-r, r). A negative value means counting dimensions from the back.

  • epsilon(Option<T>) (default is 0) - The epsilon value to use to avoid division by zero.

  • stash_type(Option<usize>) - Specifies the computation precision. Unused: the precision is determined by the element type of the tensor.

Panics

  • Panics if the rank of scale is not equal to 1.

Returns

  • A new normalized tensor Tensor<T>.

  • A tensor containing the mean Tensor<T>.

  • A tensor containing the inverse standard deviation Tensor<T>.
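Assuming the ONNX convention for the auxiliary outputs (an inference from the spec this operator mirrors, not verified against Orion's implementation), the normalized dimensions are kept with size 1 in Mean and InvStdDev. For the (3, 4) input of the example below, normalized over the default axis -1:

$$
\text{shape}(Y) = (3, 4),
\qquad
\text{shape}(\text{Mean}) = \text{shape}(\text{InvStdDev}) = (3, 1)
$$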

Example

use core::array::{ArrayTrait, SpanTrait};
use orion::operators::tensor::{TensorTrait, Tensor};
use orion::operators::tensor::FP16x16Tensor;
use orion::operators::tensor::FP16x16TensorPartialEq;
use orion::numbers::{FixedTrait, FP16x16};

fn layer_normalization_example() -> (Tensor<FP16x16>, Tensor<FP16x16>, Tensor<FP16x16>) {
    // Input tensor X of shape [3, 4].
    let mut shape = ArrayTrait::<usize>::new();
    shape.append(3);
    shape.append(4);

    let mut data = ArrayTrait::new();
    data.append(FP16x16 { mag: 41143, sign: true });
    data.append(FP16x16 { mag: 51803, sign: false });
    data.append(FP16x16 { mag: 113556, sign: false });
    data.append(FP16x16 { mag: 64774, sign: false });
    data.append(FP16x16 { mag: 866, sign: false });
    data.append(FP16x16 { mag: 698, sign: true });
    data.append(FP16x16 { mag: 106500, sign: false });
    data.append(FP16x16 { mag: 98929, sign: false });
    data.append(FP16x16 { mag: 7551, sign: false });
    data.append(FP16x16 { mag: 30689, sign: true });
    data.append(FP16x16 { mag: 38325, sign: false });
    data.append(FP16x16 { mag: 48164, sign: false });
    let X = TensorTrait::new(shape.span(), data.span());

    // Scale tensor of shape [4], matching the normalized (last) dimension.
    let mut shape = ArrayTrait::<usize>::new();
    shape.append(4);
    let mut data = ArrayTrait::new();
    data.append(FP16x16 { mag: 49855, sign: false });
    data.append(FP16x16 { mag: 150787, sign: false });
    data.append(FP16x16 { mag: 83498, sign: true });
    data.append(FP16x16 { mag: 30346, sign: false });
    let scale = TensorTrait::new(shape.span(), data.span());

    // Bias tensor of shape [4].
    let mut shape = ArrayTrait::<usize>::new();
    shape.append(4);
    let mut data = ArrayTrait::new();
    data.append(FP16x16 { mag: 54864, sign: true });
    data.append(FP16x16 { mag: 50952, sign: false });
    data.append(FP16x16 { mag: 8870, sign: true });
    data.append(FP16x16 { mag: 23216, sign: true });
    let bias = TensorTrait::new(shape.span(), data.span());

    // Normalize over the default axis (-1) with the default epsilon.
    return X.layer_normalization(
        @scale, Option::Some(@bias), Option::None, Option::None, Option::None
    );
}
>>> [[-0.48926553  1.0185822  -0.02138367 -0.39223218]
     [-0.7945549   0.99696046  0.04332176 -0.412645  ]
     [-0.5664707   0.7491956  -0.7896356  -0.5320859 ]]
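
The optional arguments can also be passed explicitly. The sketch below is a hypothetical variant of the call above, assuming axis is Cairo's native i32; the epsilon value is an arbitrary illustration (mag 6 is 6/2^16, roughly 9.2e-5 in FP16x16), not a recommended setting:

let (y, mean, inv_std_dev) = X.layer_normalization(
    @scale,
    Option::Some(@bias),
    Option::Some(-1),                              // axis = -1: normalize over the last dimension (the default)
    Option::Some(FP16x16 { mag: 6, sign: false }), // illustrative epsilon, ~9.2e-5
    Option::None,                                  // stash_type is unused
);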