Portability between deep learning frameworks – with ONNX


In recent years, the number of deep learning frameworks has exploded. Companies such as Google, Facebook and Amazon have released their deep learning frameworks TensorFlow, PyTorch and MXNet as open source or are actively involved in their development. Each of these frameworks has different strengths and weaknesses, with different consequences for development and deployment. This article introduces the Open Neural Network Exchange (ONNX), a model standard that makes it possible to exchange models between frameworks. Through this interoperability, we can exploit the advantages of each framework to the fullest, depending on the situation.

Deep learning frameworks: Background


Deep Learning Framework trends: PyTorch, Caffe2, TensorFlow, Theano

In recent years, the framework Theano was heavily used; nowadays its development has been discontinued. It is currently unclear which frameworks will establish themselves and which will disappear. Every framework has a different background and purpose: some were designed for research, while others were intended for production. Besides the deep learning libraries, there are numerical frameworks that optimise the operations for the underlying hardware. The choice of numerical library has an impact on the runtime of the models.

Hardware manufacturers such as NVIDIA and Intel are developing such frameworks to run the models as efficiently as possible on GPUs or CPUs.

Companies that want to implement deep learning in their daily business are overwhelmed by the range of possibilities. The selection of a framework can have severe consequences for different areas of the company. The speed of innovation can suffer significant losses, as the deployment of a model may be delayed long after model development. One reason for this may be that the chosen framework is designed more for development than for production.

Deep Learning Zoo


The graphic above shows a small selection of the deep learning framework zoo and its technical possibilities. A general problem among the frameworks is the portability of models from one framework to another. Interoperability would allow the advantages of the different frameworks to be exploited depending on the phase, whether development or deployment. For example, PyTorch is ideally suited for prototyping and experimentation, while TensorFlow Serving provides an easy way to deploy a TensorFlow model.

Open Neural Network Exchange (ONNX)

Framework interoperability with ONNX


In 2017, Microsoft, Facebook and Amazon joined forces to tackle the challenge of model portability. The result is the new standard Open Neural Network Exchange (ONNX). The vision behind ONNX is to export a model developed with framework A and import it into framework B without any problems. A list of supported frameworks can be found here.

Seen from a very abstract perspective, one of the main differences between deep learning libraries is the way data flows through the operations. TensorFlow and Caffe2 use a static graph to run computations, while PyTorch uses a dynamic graph. The choice of computation model leads to differences in programming and runtime behaviour. However, this is not an issue for the ONNX standard: through the interfaces of the libraries, the relevant information, such as structure and weights, can be extracted and transformed. The ONNX specification consists of three essential components that enable import and export:

  1. An extensible computation graph
  2. Fixed operators and functions
  3. Defined standard data types

The exact definition with all its details can be found in the GitHub repository onnx/onnx.

MNIST Example

MNIST trained model from PyTorch to TensorFlow with ONNX


To get to know ONNX a little better, we will look at a practical example with PyTorch and TensorFlow. We train a model in PyTorch and convert it to ONNX. Then the converted model is loaded into TensorFlow to run inference. We are using the MNIST dataset. Python 3 and pip3 are required to follow the tutorial. We install the needed packages with pip3:
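
The original install command is not preserved in this copy of the post; a plausible package set for this tutorial would be:

```shell
# assumed packages for the PyTorch-to-TensorFlow walkthrough;
# the exact versions from the original post are not preserved
pip3 install torch torchvision onnx onnx-tf tensorflow
```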

First, we define the neural network architecture with PyTorch. Our chosen architecture consists of two convolutional layers and two fully connected layers. We use the ReLU activation function and max pooling. The input data is an image with a single colour channel.
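
The code listing is missing from this copy of the post; an architecture matching the description (two convolutional layers, two fully connected layers, ReLU, max pooling, one input channel) might look like the classic PyTorch MNIST example:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # one input channel (greyscale), two convolutional layers
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        # two fully connected layers; 320 = 20 channels * 4 * 4 spatial
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        # convolution -> max pooling -> ReLU, twice
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)
```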

In the main() function, we put the essential parts together. After training, it is necessary to save the weights with torch.save(model.state_dict(), file). The full training, test and main() functions can be found in the repository.
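
A minimal sketch of the save-and-restore step (using a small stand-in module and an assumed file name, since the original listing is not preserved):

```python
import torch
import torch.nn as nn

# stand-in for the trained network; in the post this is the CNN defined above
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# persist only the learned parameters, not the whole module
torch.save(model.state_dict(), "mnist_cnn.pt")

# restoring later: instantiate the same architecture, then load the weights
restored = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
restored.load_state_dict(torch.load("mnist_cnn.pt"))
```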

Before we export the model to ONNX, we need to load it back into PyTorch. Then we define a dummy_input matching the input shape of the model. The dummy_input is required because PyTorch uses a dynamic graph while ONNX requires a static one: the export traces the model with this input to record the graph.

The model can be read with onnx.load(file). Via the prepare(model) method of the onnx/onnx-tensorflow package, the weights are bound to a static graph.

Afterwards, we can run the predictions in the TensorFlow runtime environment. For the preprocessing, we scale the image to 28×28 pixels and convert it to greyscale. Then we convert the array to Float32 and transform the axes to the dimensions required by the input tensor.
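
The preprocessing steps can be sketched as follows (the input image is a random stand-in here; in practice it would come from disk, e.g. via Image.open):

```python
import numpy as np
from PIL import Image

# stand-in input; in practice e.g. Image.open("digit.png")
raw = Image.fromarray(np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8))

# scale to 28x28 pixels and convert to greyscale ("L" mode)
img = raw.resize((28, 28)).convert("L")

# Float32 array in the (batch, channel, height, width) layout the model expects
x = np.asarray(img, dtype=np.float32).reshape(1, 1, 28, 28)

# inference with the prepared backend then reduces to:
#   output = tf_rep.run(x)
#   digit = np.argmax(output)
```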

Limits of ONNX

At first glance, the ONNX standard is an easy-to-use way to ensure the portability of models. The use of ONNX is straightforward as long as we meet these two conditions:

  1. We are using supported data types and operations of the ONNX specification.
  2. We don’t implement any custom layers or operations.

Furthermore, we need to double-check that the operations and functions used are implemented in the backends for both export and import.

The ONNX project is developing at a rapid pace and continually releases new versions that enhance compatibility between the frameworks. If a project stays within the limits described above, the use of ONNX is entirely unproblematic.

If these conditions are not met, the missing functionality has to be implemented in the ONNX backends before it can be used. Such a custom implementation can turn out to be very time-consuming and laborious.

Summary

The need for model portability is greater than ever. There are more and more deep learning frameworks on the market, and portability allows the advantages of the individual frameworks to be better exploited. ONNX is an easy-to-use framework with a lot of potential to become the standard for exchanging models between libraries. It ensures that developed models can be used flexibly and over the long term. Furthermore, research results can go into production faster, as long as only data types and operations supported by ONNX are used. Otherwise, they must be implemented in the ONNX backends.

The German version of this post can be found here. Check out more posts on deep learning on our blog.
