Layer normalization of the input, in two stages. The first stage is standardization, which makes the normalized elements have zero mean and unit variance. The second stage scales and shifts the outcome of the first stage.
Args
self(@Tensor<T>) - The input tensor.
scale(@Tensor<T>) - Scale tensor.
B(Option<@Tensor<T>>) - Bias tensor.
axis(Option<i32>) (default is -1) - The first normalization dimension. If the rank of X is r, the allowed range of axis is [-r, r). A negative value means counting dimensions from the back.
epsilon(Option<T>) (default is 0) - The epsilon value to use to avoid division by zero.
stash_type(Option<usize>) - Specifies the computation precision. Unused: the precision is determined by the element type of the tensor.
Panics
Panics if the rank of the scale or bias tensor is not equal to 1.
Returns
A new normalized tensor Tensor<T>.
A tensor containing the mean Tensor<T>.
A tensor containing the inverse standard deviation Tensor<T>.
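The two-stage computation and the three return values can be sketched in NumPy (a hypothetical `layer_norm` helper for illustration, not the library's implementation; parameter names mirror the Args above):

```python
import numpy as np

def layer_norm(x, scale, bias=None, axis=-1, epsilon=1e-5):
    # Stage 1: standardization over the normalization dimensions,
    # i.e. every axis from `axis` through the last one.
    axes = tuple(range(axis % x.ndim, x.ndim))
    mean = x.mean(axis=axes, keepdims=True)
    inv_std_dev = 1.0 / np.sqrt(x.var(axis=axes, keepdims=True) + epsilon)
    standardized = (x - mean) * inv_std_dev

    # Stage 2: elementwise scale and (optional) shift.
    y = standardized * scale
    if bias is not None:
        y = y + bias

    # Three outputs: normalized tensor, mean, inverse standard deviation.
    return y, mean, inv_std_dev
```

For example, normalizing a 2x3 input over the last axis with a unit scale yields rows with zero mean and unit variance, plus the per-row mean and inverse standard deviation used to produce them.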