# Implement new operators in Orion
The Orion Framework offers an open-source ONNX runtime implementation for Validity and ZK Machine Learning. Are you interested in contributing? We sincerely appreciate your interest. This is exactly how we'll build a robust and transparent AI ecosystem! In this tutorial, you'll learn how to contribute to the Orion repository by implementing a new operator from scratch.
Throughout this tutorial, any concept that is directly explained in the official documentation will be met with a reference guiding you to the respective source. Feel free to dive in.
## Code Structure
The Orion repo uses Scarb, a Cairo package manager. You can find all the information about installing Scarb and Cairo here.
The repository is structured as follows:
In the `src` directory, you'll find the following folders:

- `numbers`: This folder contains a complete implementation of Signed Integer and Fixed Point arithmetic.
- `operators`: This directory includes the set of functions and operations used in calculating neural network models.
- `tests`: This is the location where we'll test our code.
In this tutorial, we will focus on the `operators` directory, as we will implement a new operator from scratch.
## What are Orion Operators?
Orion operators serve as the foundational components of machine learning models compliant with ONNX ops. ONNX is an open format for representing machine learning models that allows interoperability between various deep learning frameworks. It enables models to be trained in one framework and deployed in another without the need for extensive model conversion.
Orion operators represent specific computations or operations performed by machine learning models. Each operator defines a specific functionality, such as convolution, pooling, activation functions, matrix operations, and more. These operators are defined using a set of attributes and inputs that describe their behaviour and dependencies.
Ensuring compatibility with ONNX operators facilitates integration into the ONNX ecosystem. This enables researchers and developers to pre-train models using their preferred framework, before executing verifiable inferences with Orion.
We implemented two different types of operators, each with its own trait:

- `tensor` (`TensorTrait`): A full implementation of multi-dimensional arrays.
- `nn` (`NNTrait`): Operators designed for building neural networks.
## How to contribute?
This tutorial will focus specifically on implementing a new operator in the Orion repository, and will not cover the entirety of the contribution guidelines. If you intend to contribute to Orion, we kindly ask that you read the Contribution Guidelines carefully.
## How to implement new Orion Operators?
In this section, I will guide you through the process of adding new operators to the Orion repository. To illustrate this, we will build the Softmax operator from scratch.
### What is Softmax?
It is a non-linear activation function that takes a vector of real numbers as input and transforms them into a probability distribution over multiple classes. It's defined as follows:
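For an input vector $z$ with $K$ components:

```latex
\mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \quad i = 1, \ldots, K
```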
In other words, the softmax function exponentiates each element of the input vector and divides it by the sum of exponentiated values across all elements. This normalization ensures that the output values lie between 0 and 1, and their sum adds up to 1, resembling a probability distribution.
### Best practices before implementing an operator
Before implementing an operator in Orion, I recommend that you:

- Read the corresponding ONNX operator documentation.
- Understand how the ONNX backend has implemented it. It's essential to keep the operator's interface consistent with the one in ONNX.
- Consider whether the operator should be implemented in `NNTrait` or `TensorTrait`.
### Start coding!
#### Step 1: Add softmax to NNTrait
Since Softmax is a neural network operator, it belongs in `NNTrait`. It accepts an input tensor of a generic type `T` and an axis along which the softmax computation will occur. Given that the resulting values must range between 0 and 1, it should return a tensor of fixed point numbers with the same shape as the input tensor.

In `src/operators/nn/core.cairo`, we add softmax to `NNTrait`.
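A sketch of what the trait declaration could look like (the surrounding operators and trait bounds are elided; mirror the exact signature style of the neighboring operators in the repository):

```rust
trait NNTrait<T> {
    // ...existing operators (relu, sigmoid, ...)...

    /// Computes softmax along `axis`; for fixed point implementations,
    /// the output values lie in [0, 1] and sum to 1 along `axis`.
    fn softmax(tensor: @Tensor<T>, axis: usize) -> Tensor<T>;
}
```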
#### Step 2: Add the business logic
In the `src/operators/nn/functional` directory, create a new file named `softmax.cairo`.
The `functional` folder is where all the business logic resides. All functions should be implemented with generic types.
A softmax function can be implemented as follows:
`Softmax(input, axis) = Exp(input) / ReduceSum(Exp(input), axis=axis)`
So we can leverage the `exp` and `reduce_sum` operators from `TensorTrait` to implement softmax.

Here's the implementation in `softmax.cairo`:
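A simplified sketch of that logic, assuming `TensorTrait` exposes `exp`, `reduce_sum`, and tensor division (the repository version is generic over `T` with the trait bounds those operations require):

```rust
/// Softmax(input, axis) = Exp(input) / ReduceSum(Exp(input), axis=axis)
fn softmax(z: @Tensor<FixedType>, axis: usize) -> Tensor<FixedType> {
    // Exponentiate every element of the input tensor.
    let exp_tensor = (*z).exp();
    // Sum the exponentials along `axis`, keeping dims so division broadcasts.
    let sum = exp_tensor.reduce_sum(axis, true);
    // Normalize: each exponential divided by the sum along its axis.
    exp_tensor / sum
}
```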
#### Step 3: Add softmax to the implementations
Now, we need to add the softmax function to the different representations. In `nn/implementations/nn_fp8x23.cairo`, import the business logic and add the softmax implementation.
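For instance, the wiring could look like this (import path and impl name are indicative; follow the pattern of the operators already present in the file):

```rust
use orion::operators::nn::functional::softmax::softmax;

impl FP8x23NN of NNTrait<FP8x23> {
    // ...other operators...

    // Delegate to the generic business logic in functional/softmax.cairo.
    fn softmax(tensor: @Tensor<FP8x23>, axis: usize) -> Tensor<FP8x23> {
        softmax(tensor, axis)
    }
}
```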
Do the same for all the other fixed point implementations (`FP16x16NN`, `FP32x32NN`, `FP64x64NN`). Since softmax only supports fixed point tensors, it should panic for the other implementations. Below is an example with `U32NN`.
#### Step 4: Write the docstring
Navigate back to `operators/nn/core.cairo` and, just before the declaration of the softmax function, write the docstring and add the operator to the list preceding the trait, as shown below. This step is useful for generating the documentation while preparing your Pull Request, which can be done with the `scarb run docgen` command. We use a docstring style similar to Rust's, with a few variations.
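The general shape of such a docstring could be as follows (the section headings are indicative; copy the exact convention of the operators already documented in the file):

```rust
/// # NNTrait::softmax
///
/// Applies the Softmax function to an n-dimensional input tensor along the
/// given axis, rescaling elements so they lie in [0, 1] and sum to 1.
///
/// ## Args
///
/// * `tensor`(`@Tensor<T>`) - The input tensor.
/// * `axis`(`usize`) - The axis along which to compute the softmax.
///
/// ## Returns
///
/// A tensor of fixed point numbers with the same shape as the input tensor.
fn softmax(tensor: @Tensor<T>, axis: usize) -> Tensor<T>;
```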
Voilà! We have successfully implemented the softmax function in `NNTrait`!
## How to test the Orion Operator?
Now, let's test the softmax operator we've just implemented. When testing an operator in Orion, make sure to test it across all types of implementation.
Since softmax employs fixed point numbers for intermediate calculations and returns a tensor of `FixedType`, it is essential to test it across all fixed point implementations. As of now, Orion supports two fixed point implementations: `FP16x16` and `FP8x23`.
To simplify the task of writing tests, and to stay close to the ONNX tests, we've designed Nodegen! It lets you write your test in Python/NumPy, then generates the following Cairo code:

- Input data
- Expected output data
- Your tests
First, we'll create a `softmax.py` file in the `nodegen/node` directory. Next, we'll define a softmax function in Python. You can find the Python function in the ONNX implementation directory.
Finally, we create a `Softmax` class containing a test for each dtype.
Once set up, you can generate the tests and data by executing `scarb run nodegen softmax`.
The above code will generate six test files located in `tests/src/nodes`. As an example, here's the content of the generated `softmax_fp8x23.cairo` file:
If you'd like to expand the tests with additional cases, feel free to edit the generated Cairo file.
You're now ready to prepare your Pull Request. Please ensure you thoroughly read the Contribution Guidelines before making your first PR. Your contribution is greatly appreciated, and we sincerely value your interest 🫶.
Orion leverages Cairo to guarantee the reliability of inference, providing developers with a user-friendly framework to build complex and verifiable machine learning models. We invite the community to join us in shaping a future where trustworthy AI becomes a reliable resource for all.