The Support Vector Machine (SVM) model is a supervised learning technique used for classification and regression. It is commonly employed to solve binary classification problems, where it identifies the hyperplane that best divides a dataset into classes. This hyperplane results from maximizing the margin between the two classes. By determining this optimal hyperplane, we can make predictions for new data points and understand how the input attributes influence classification.
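For linearly separable data with labels $y_i \in \{-1, 1\}$, this maximum-margin hyperplane is the solution of the classic hard-margin problem:

$$\min_{w} \; \frac{1}{2}\lVert w \rVert^2 \quad \text{subject to} \quad y_i \,(w \cdot x_i) \ge 1 \;\; \text{for all } i$$

In practice, we will minimize a relaxed, differentiable version of this objective (hinge loss plus L2 regularization) by gradient descent.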
Below, we provide a brief review of implementing an SVM model using the gradient descent method with a linear kernel in Python, which we will later convert to Cairo to transform it into a verifiable ZKML support vector machine model using Orion's library. This is also an opportunity to become familiar with the main functions and operators that the framework offers for implementing an SVM.
Content overview:
Support Vector Machine with Python: We start with the basic implementation of SVM using gradient descent in Python.
Convert your model to Cairo: In the subsequent stage, we will create a new Scarb project and replicate our model in Cairo, which is a language for creating STARK-provable programs.
Implementing SVM model using Orion: To catalyze our development process, we will use the Orion Framework to construct the key functions to build our verifiable SVM classification model.
Generating the dataset
For the purposes of this tutorial, we generated linearly separable data using make_blobs from Scikit-learn.
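The exact generation code isn't reproduced here; below is a minimal sketch of what it could look like. The sample count, cluster centers, seed, and train/test split are assumptions for illustration, as is the appended bias column of ones (although the latter is consistent with the 100x3 X_train tensor generated later, whose third column holds the constant 1.0).

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split

# Hypothetical parameters: two well-separated clusters of points.
X, y = make_blobs(n_samples=125, centers=2, random_state=42)
y = np.where(y == 0, -1, 1)  # SVM works with labels in {-1, 1}

# Append a constant ones column so the bias is folded into the weights w.
X = np.hstack([X, np.ones((X.shape[0], 1))])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```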
Upon examining the graph, we notice that the data separates into two distinct groups. Our goal in this tutorial is to differentiate these groups using a Support Vector Machine (SVM). Using the provided data points, we will seek to find an optimal hyperplane that effectively separates these two clusters.
Loss function, gradient and Weight init
We will start by generating the key functions for SVM.
Next, we'll define the loss function and its gradient, with L2 regularization, both necessary to train our SVM. The loss function used for the SVM is the hinge loss, $\max(0,\ 1 - y_i \cdot (w \cdot x_i))$, which measures how far a sample is on the "wrong side" of the margin; if the sample is on the correct side of the margin, the loss is 0.
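The notebook's exact definitions are not reproduced above; here is a minimal sketch of one common formulation, in which the hinge loss is averaged over the samples and the L2 penalty is scaled by 1/C (both modeling choices are assumptions). The signatures match the training loop shown later.

```python
import numpy as np

def loss_function(w, X, y, C):
    # Hinge loss: zero for samples on the correct side of the margin,
    # growing linearly the further a sample sits past it.
    hinge = np.maximum(0, 1 - y * (X @ w))
    # L2 regularization encourages a small ||w||, i.e. a wide margin.
    return np.mean(hinge) + 0.5 * (1 / C) * np.dot(w, w)

def loss_gradient(w, X, y, C):
    margins = y * (X @ w)
    violators = margins < 1  # only margin-violating samples contribute
    grad_hinge = -(X[violators] * y[violators][:, None]).sum(axis=0) / len(y)
    return grad_hinge + (1 / C) * w

# Zero-initialize the weights (one per feature, bias column included).
w = np.zeros(X_train.shape[1])
losses = []
```

Note that with zero-initialized weights every hinge term equals 1, so the first reported loss is exactly 1.0, matching the training output below.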
Now, we declare the hyperparameters: the learning rate (learning_rate), the number of epochs (num_epochs), and the regularization parameter (C). Then, we will use gradient descent to adjust the weights of the SVM model. For the purposes of this tutorial, we stick with the following hyperparameters; however, the resulting hyperplane could be improved by tuning them.
```python
learning_rate = 0.01
num_epochs = 100
C = 1
```
Training
Next, we execute the training of the SVM model, adjusting its parameters to minimize the loss over 100 iterations, and we monitor the training progress by printing the loss.
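At each epoch, the weights are updated with the standard gradient descent rule, where $\eta$ is the learning rate:

$$w \leftarrow w - \eta \, \nabla_w L(w)$$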
```python
for epoch in range(num_epochs):
    loss = loss_function(w, X_train, y_train, C)
    losses.append(loss)
    if epoch % 25 == 0 or epoch == 99:
        print(f"Epoch {epoch}, Loss: {loss:.4f}")
    gradient_w = loss_gradient(w, X_train, y_train, C)
    w -= learning_rate * gradient_w
```

```
Epoch 0, Loss: 1.0000
Epoch 25, Loss: 0.5300
Epoch 50, Loss: 0.4594
Epoch 75, Loss: 0.4238
Epoch 99, Loss: 0.4092
```
After training the model and observing the decrease of the loss function, we evaluate its performance on both the training and test data. We will calculate the accuracy and display the final loss on the training data. In our case, the weights and the accuracies will be the values against which we compare the SVM implementation in Cairo with Orion.
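The evaluation code itself isn't shown above; a sketch of what it might look like, assuming a sign-based decision rule and labels in {-1, 1}:

```python
def predict(X, w):
    # A sample is classified by which side of the hyperplane it falls on.
    return np.sign(X @ w)

train_accuracy = np.mean(predict(X_train, w) == y_train)
test_accuracy = np.mean(predict(X_test, w) == y_test)
print(f"Final training loss: {loss_function(w, X_train, y_train, C):.4f}")
print(f"Train accuracy: {train_accuracy:.2%} | Test accuracy: {test_accuracy:.2%}")
```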
Now that we have a good understanding of the SVM models and their key functions, we will replicate the entire model in Cairo to make it fully verifiable. Since we will be rebuilding the model from scratch, this will be a good opportunity to get acquainted with Orion's built-in functions and the operators that make the transition to Cairo seamless.
Create a new Scarb project
Scarb is the Cairo package manager specifically created to streamline our Cairo and Starknet development process. Scarb will typically manage project dependencies, the compilation process (both pure Cairo and Starknet contracts), and the downloading and building of external libraries, accelerating our development with Orion. You can find all the information about installing Scarb and Cairo here.
To create a new Scarb project, open your terminal and run:
```
scarb new verifiable_support_vector_machine
```
A new project folder will be created for you. Make sure to update the content of the Scarb.toml file so that Orion is declared as a dependency of the project.
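For reference, the freshly scaffolded project typically looks like this (a sketch; the exact layout may vary across Scarb versions):

```
verifiable_support_vector_machine/
├── Scarb.toml
└── src/
    └── lib.cairo
```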
Now let's generate the necessary files to begin our transition to Cairo. In our Jupyter Notebook, we'll run the necessary code to convert our dataset obtained with make_blobs from Scikit-learn into fixed-point values and represent our X_train, y_train, X_test, and y_test values as fixed-point tensors in Orion.
```python
import os
import numpy as np

os.makedirs("src/generated", exist_ok=True)

tensor_name = ["X_train", "Y_train", "X_test", "Y_test"]

def generate_cairo_files(data, name):
    with open(os.path.join('src', 'generated', f"{name}.cairo"), "w") as f:
        f.write(
            "use core::array::ArrayTrait;\n" +
            "use orion::operators::tensor::{Tensor, TensorTrait, FP16x16Tensor};\n" +
            "use orion::numbers::{FixedTrait, FP16x16, FP16x16Impl};\n" +
            "\n" + f"fn {name}() -> Tensor<FP16x16>" + "{\n\n" +
            "let mut shape = ArrayTrait::new();\n"
        )
        for dim in data.shape:
            f.write(f"shape.append({dim});\n")
        f.write("let mut data = ArrayTrait::new();\n")
        # Each value is emitted as an (absolute magnitude, sign) pair in FP16x16.
        for val in np.nditer(data.flatten()):
            f.write(f"data.append(FixedTrait::new({abs(int(decimal_to_fp16x16(val)))}, {str(val < 0).lower()}));\n")
        f.write(
            "let tensor = TensorTrait::<FP16x16>::new(shape.span(), data.span());\n" +
            "return tensor;\n}"
        )

with open("src/generated.cairo", "w") as f:
    for n in tensor_name:
        f.write(f"mod {n};\n")

generate_cairo_files(X_train, "X_train")
generate_cairo_files(X_test, "X_test")
generate_cairo_files(y_train, "Y_train")
generate_cairo_files(y_test, "Y_test")
```
The X_train, y_train, X_test, and y_test tensor values will now be generated under the src/generated directory.
In src/lib.cairo replace the content with the following code:
```rust
mod generated;
mod train;
mod test;
mod helper;
```
This will tell our compiler to include the separate modules listed above during the compilation of our code. We will be covering each module in detail in the following section, but let’s first review the generated folder files.
```rust
use core::array::ArrayTrait;
use orion::operators::tensor::{Tensor, TensorTrait, FP16x16Tensor};
use orion::numbers::{FixedTrait, FP16x16, FP16x16Impl};

fn X_train() -> Tensor<FP16x16> {
    let mut shape = ArrayTrait::new();
    shape.append(100);
    shape.append(3);

    let mut data = ArrayTrait::new();
    // data has been truncated (only showing the first 5 values)
    data.append(FixedTrait::new(165613, false));
    data.append(FixedTrait::new(40488, false));
    data.append(FixedTrait::new(65536, false));
    data.append(FixedTrait::new(101228, false));
    data.append(FixedTrait::new(275957, false));

    let tensor = TensorTrait::<FP16x16>::new(shape.span(), data.span());
    return tensor;
}
```
Since Cairo does not come with built-in fixed-point numbers, we have to explicitly define them for our X and y values. Luckily, this is already implemented in Orion for us as a struct, as shown below:
```rust
// Example of a FP16x16.
struct FP16x16 {
    mag: u32,
    sign: bool
}
```
For this tutorial, we will use the FP16x16 fixed-point format, where the magnitude represents the absolute value and the boolean indicates whether the number is negative or positive. In this format, 16 bits are dedicated to the integer part of the number and 16 bits to the fractional part, so a value is stored as its decimal magnitude scaled by 2^16; for example, 1.0 is encoded as mag = 65536 with sign = false. This format allows us to work with a wide range of values and a high degree of precision for conducting the tensor operations. To replicate the key functions of SVM, we will conduct our operations using FP16x16 tensors, which are also represented as structs in Orion.
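To make the encoding concrete, here is a small sketch that decodes (mag, sign) pairs like those in the generated file back into decimals:

```python
def fp16x16_to_decimal(mag, sign):
    # Undo the 2**16 scaling, then apply the sign flag.
    value = mag / 2 ** 16
    return -value if sign else value

print(fp16x16_to_decimal(65536, False))   # 1.0  (the bias column)
print(fp16x16_to_decimal(165613, False))  # ~2.5271 (first X_train entry)
```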
A Tensor in Orion is built from a shape and a span holding the data.
Implementing SVM models using Orion
At this stage, now that we have generated our X and y fixed-point tensors, we will reproduce the SVM functions. We will begin by creating a separate file named helper.cairo to host all of our Support Vector Machine functions.
Now that we have implemented all the necessary functions for SVM, we can finally test our classification model. We begin by creating a new separate test file named test.cairo and import all the necessary Orion libraries, including our X values and y values (train and test) found in the generated folder. We also import all the key functions for SVM from the helper.cairo file, as we will rely on them to construct the model.
```rust
use traits::TryInto;
use core::array::{ArrayTrait, SpanTrait};
use orion::operators::tensor::{
    Tensor, TensorTrait, FP16x16Tensor, FP16x16TensorAdd, FP16x16TensorMul, FP16x16TensorSub,
    FP16x16TensorDiv
};
use orion::numbers::{FixedTrait, FP16x16, FP16x16Impl};
use orion::numbers::fixed_point::implementations::fp16x16::core::{
    HALF, ONE, FP16x16Mul, FP16x16Div, FP16x16IntoI32, FP16x16PartialOrd, FP16x16PartialEq
};
use verifiable_support_vector_machine::{
    generated::{X_train::X_train, Y_train::Y_train, X_test::X_test, Y_test::Y_test},
    train::{train}
};
use verifiable_support_vector_machine::{helper::{pred, accuracy}};

#[test]
#[available_gas(99999999999999999)]
fn svm_test() {
    let x_train = X_train();
    let x_test = X_test();
    let y_train = Y_train();
    let y_test = Y_test();

    let feature_size = *x_train.shape[1];
    let mut zero_array = ArrayTrait::new();

    let learning_rate = FixedTrait::new(655, false); // 655 is 0.01

    // 50 %
    let average_compare = FixedTrait::new_unscaled(50, false);

    let mut i = 0_u32;
    loop {
        if i >= feature_size {
            break ();
        }
        zero_array.append(FP16x16Impl::ZERO());
        i += 1;
    };

    let initial_w = TensorTrait::new(
        shape: array![feature_size].span(), data: zero_array.span()
    );

    let y_train_len = y_train.data.len();

    let (final_w, initial_loss, final_loss) = train(
        @x_train, @y_train, @initial_w, learning_rate, y_train_len, 100_u32
    );

    let final_y_pred = pred(@x_test, @final_w);
    let average_pred = accuracy(@final_y_pred, @y_test);

    let train_y_pred = pred(@x_train, @final_w);
    let average_train = accuracy(@train_y_pred, @y_train);

    assert(final_loss < initial_loss, 'No decrease in training loss');
    assert(average_pred > average_compare, 'It is better to flip a coin');
    assert(average_train > average_compare, 'It was not a good training');
}
```
Our model will be tested using the svm_test() function, which will follow these steps:
Data Retrieval: The function starts by fetching the feature values and their labels (X_train, y_train, X_test, y_test), all sourced from the generated folder.
SVM Construction: Once we have the data, we train our Support Vector Machine on the X_train and y_train values, using the functions implemented for this purpose.
Hyperplane Retrieval: After training, we obtain the weights "w" that define the hyperplane separating both classes.
Prediction Phase: At this stage, we use our trained SVM to make predictions on the test set.
Evaluation: Next, we evaluate the model, calculating accuracy to measure how well our SVM has classified.
Additional Checks: Basic checks are carried out to ensure the model has been trained correctly, verifying that the loss decreased and that our model's accuracy is better than flipping a coin.
Finally, we can execute the test file by running:

```
scarb test
```
If you've made it this far, well done! You are now capable of building verifiable ML models, making them more reliable and transparent than ever before.
We invite the community to join us in forging a future where AI is a transparent and reliable resource for all.