An Introduction To Torch (Pytorch) C++ Front-End

Leveraging Torch C++ Library for Neural Network Training

PyTorch v1.0 was released this week. One of the major things it introduced is a new C++ front-end: the ability to build models in C++ with an API very similar to PyTorch's. In this post I'm going to present the library's usage and show how you can build a model using our favorite programming language.

Installation

The first thing I noticed was the ease of use; installing and getting started is as simple as:


wget https://download.pytorch.org/libtorch/nightly/cpu/libtorch-shared-with-deps-latest.zip
unzip libtorch-shared-with-deps-latest.zip

That's it.
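
Unzipping gives you a self-contained libtorch folder. Its layout looks roughly like this (the exact contents depend on the build you downloaded):


libtorch/
├── bin
├── include      # ATen, c10 and torch headers
├── lib          # libtorch.so, libc10.so, ...
└── share/cmake  # CMake config consumed by find_package(Torch)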

Hello World

Let's create a random (3,3) matrix and multiply it element-wise by a (3,3) matrix full of ones.


#include <torch/torch.h>
#include <iostream>

int main(){
    // build a 2-D (3,3) Tensor
    at::Tensor mat = torch::rand({3,3});
    at::Tensor ones = torch::ones({3,3});
    std::cout << mat << std::endl;
    std::cout << mat * ones << std::endl;   // element-wise product
}
                

Compiling

At this point your directory should look like this:


.
├── CMakeLists.txt
├── first_example.cpp
└── build
            

In your "CMakeLists.txt" paste the following instructions :


cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(first_example)

find_package(Torch REQUIRED)

add_executable(first_example first_example.cpp)
target_link_libraries(first_example "${TORCH_LIBRARIES}")
set_property(TARGET first_example PROPERTY CXX_STANDARD 11)

To create your binary, i.e. build the example above, run the following commands. CMAKE_PREFIX_PATH must point to libtorch (wherever you unzipped the file you downloaded); in my case it was my home directory, so I wrote:


cd build

cmake -DCMAKE_PREFIX_PATH=~/libtorch ..

make 

./first_example

The output on my computer was:


0.1264  0.9519  0.9513
0.3406  0.5906  0.2082
0.2233  0.5381  0.2901
[ Variable[CPUFloatType]{3,3}  ]
0.1264  0.9519  0.9513
0.3406  0.5906  0.2082
0.2233  0.5381  0.2901
[ Variable[CPUFloatType]{3,3}  ]
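
One thing worth noting: the * operator on tensors is element-wise multiplication, which is why the second matrix printed is identical to the first (multiplying element-wise by ones changes nothing). If you want an actual matrix product, torch::mm (or torch::matmul) does that; a quick sketch:


// element-wise product vs. matrix product
at::Tensor a = torch::rand({3,3});
at::Tensor b = torch::ones({3,3});
std::cout << a * b << std::endl;           // element-wise, equal to a
std::cout << torch::mm(a, b) << std::endl; // true matrix multiplication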

Toying with Tensors

The Torch C++ API is quite similar to PyTorch's, which makes it easy to use one if you're familiar with the other; a lot of work went into the API design here.

C++ supports operator overloading, so you can use binary operators (+, -, *, ...) to operate on tensors, or use their functional counterparts.

Add the following lines to your previous file or create an entirely new file (make sure to edit CMakeLists.txt accordingly).


// summing and functions
at::Tensor mat2 = torch::rand({3,3});
auto sum_mat_mat2 = mat + mat2;
// or
auto sum_mat_mat2_f = torch::add(mat,mat2);


std::cout << sum_mat_mat2_f << std::endl;

Output:


➜  build ./first_example
0.3131  0.4840  0.9680
0.6355  0.0499  0.7635
0.5016  0.2171  0.8459
[ Variable[CPUFloatType]{3,3}  ]
0.3131  0.4840  0.9680
0.6355  0.0499  0.7635
0.5016  0.2171  0.8459
[ Variable[CPUFloatType]{3,3}  ]
0.6340  1.0769  1.6257
1.5995  0.5327  0.8489
1.2001  0.9391  1.1176
[ Variable[CPUFloatType]{3,3}  ]
            

You can look up further operations in the PyTorch docs; their C++ counterparts are similar.
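
As a small illustration (by no means an exhaustive list), here are a few common operations; they mirror their Python names almost one-to-one:


// a few more tensor operations, mirroring the PyTorch API
at::Tensor t = torch::rand({2,3});

std::cout << t.t() << std::endl;                    // transpose -> (3,2)
std::cout << t.sum() << std::endl;                  // sum of all elements
std::cout << t.reshape({3,2}) << std::endl;         // reshape
std::cout << torch::matmul(t, t.t()) << std::endl;  // (2,3) x (3,2) -> (2,2)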

Deep Hello World

There's nothing better than training a neural network in C++.


#include <torch/torch.h>
#include <iostream>



// build a neural network similar to how you would do it with PyTorch

struct Model : torch::nn::Module {

    // Constructor
    Model() {
        // construct and register your layers
        in = register_module("in",torch::nn::Linear(8,64));
        h = register_module("h",torch::nn::Linear(64,64));
        out = register_module("out",torch::nn::Linear(64,1));
    }

    // the forward operation (how data will flow from layer to layer)
    torch::Tensor forward(torch::Tensor X){
        // apply relu on the hidden layers and sigmoid on the output
        X = torch::relu(in->forward(X));
        X = torch::relu(h->forward(X));
        X = torch::sigmoid(out->forward(X));
        
        // return the output
        return X;
    }

    torch::nn::Linear in{nullptr}, h{nullptr}, out{nullptr};



};


int main(){

    Model model;
    
    auto in = torch::rand({8});

    auto out = model.forward(in);

    std::cout << in << std::endl;
    std::cout << out << std::endl;

}

As usual, go to your build folder and run the commands mentioned above to build your binary.

Output


0.2610
0.5681
0.7067
0.8388
0.0909
0.8252
0.3471
0.4768
[ Variable[CPUFloatType]{8} ]
0.4629
[ Variable[CPUFloatType]{1} ]
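
The example above only runs a forward pass. To actually train the model you would add a loss function and an optimizer. Below is a minimal sketch of a training loop (replacing the previous main) that reuses the Model struct from above on randomly generated data; the batch size, learning rate and number of epochs are arbitrary values chosen for illustration, not taken from any benchmark.


#include <torch/torch.h>
#include <iostream>

int main(){

    Model model;

    // a fake dataset: 64 samples with 8 features and random binary targets
    auto X = torch::rand({64,8});
    auto y = torch::randint(0, 2, {64,1}).to(torch::kFloat32);

    // plain SGD over the model's registered parameters
    torch::optim::SGD optimizer(model.parameters(), /*lr=*/0.01);

    for (int epoch = 0; epoch < 100; ++epoch){
        optimizer.zero_grad();                                   // reset gradients
        auto prediction = model.forward(X);                      // forward pass
        auto loss = torch::binary_cross_entropy(prediction, y);  // sigmoid output -> BCE loss
        loss.backward();                                         // back-propagate
        optimizer.step();                                        // update weights

        if (epoch % 10 == 0)
            std::cout << "epoch " << epoch << " | loss " << loss.item<float>() << std::endl;
    }
}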
               


Conclusion

The API is great: it keeps the same naming conventions and creation style we've known from PyTorch. As for speed and flexibility, I didn't run any benchmarks, but the C++ version will probably be a bit faster (I think). All I can say is that this makes me love PyTorch even more. Great work from the team, truly amazing.

~ Till next time.