This is a TensorFlow Lite C++ Inference Example. It shows how to run a pre-trained TensorFlow Lite model on an Android device.
In this example, we’ll take a pre-trained [TensorFlow Lite](https://www.tensorflow.org/lite) model, load it in [C++](https://en.cppreference.com/w/), and run inference with it in an Android app using the [TensorFlow Lite C++ API](https://www.tensorflow.org/lite/guide/inference#using_the_tensorflowlite_c_api).
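On Android, the `.tflite` model is typically bundled in the app’s assets and read into memory before the interpreter is built. The sketch below is a minimal illustration of that setup using the NDK asset manager and the TensorFlow Lite C++ API; the asset name `model.tflite` is an assumption for this example.

```cpp
// Minimal sketch: read a .tflite FlatBuffer out of the APK's assets and build
// a TensorFlow Lite interpreter from it. The asset name "model.tflite" is an
// assumption for this example.
#include <memory>
#include <vector>

#include <android/asset_manager.h>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

struct TfLiteBundle {
  std::vector<char> buffer;                        // raw FlatBuffer bytes
  std::unique_ptr<tflite::FlatBufferModel> model;  // non-owning view of buffer
  std::unique_ptr<tflite::Interpreter> interpreter;
};

TfLiteBundle LoadFromAssets(AAssetManager* assets) {
  TfLiteBundle b;

  // Copy the model out of the APK; the buffer must outlive the interpreter.
  AAsset* asset = AAssetManager_open(assets, "model.tflite", AASSET_MODE_BUFFER);
  b.buffer.resize(AAsset_getLength(asset));
  AAsset_read(asset, b.buffer.data(), b.buffer.size());
  AAsset_close(asset);

  // Map the FlatBuffer and build the interpreter with the built-in ops.
  b.model = tflite::FlatBufferModel::BuildFromBuffer(b.buffer.data(), b.buffer.size());
  tflite::ops::builtin::BuiltinOpResolver resolver;
  tflite::InterpreterBuilder(*b.model, resolver)(&b.interpreter);
  b.interpreter->AllocateTensors();  // allocate input/output tensor memory
  return b;
}
```

In production code you would also check that the asset, the model, and the interpreter were created successfully before using them.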
TensorFlow Lite Overview
TensorFlow Lite is a lightweight machine learning framework designed for mobile and embedded devices. It allows you to run inference on models trained in TensorFlow, and provides a simple C++ API for deploying these models on embedded devices.
TF Lite supports a wide range of hardware platforms, including:
– Arm 32-bit (armeabi-v7a) and 64-bit (arm64-v8a) processors
– x86 32-bit and 64-bit processors
– GPUs, through delegates that use OpenCL and OpenGL ES
TensorFlow Lite also provides Java and Swift APIs which can be used on Android and iOS devices respectively.
TensorFlow Lite C++ Inference Example
This is an example of using TensorFlow Lite for inference in C++.
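A minimal sketch of a single inference step follows. It assumes a float image-classification model with one input tensor and one output tensor of per-class scores, and that the image has already been resized and normalized into `pixels`; these names are illustrative.

```cpp
// Minimal sketch: copy preprocessed pixels into the input tensor, run the
// graph, and return the index of the highest-scoring class. Assumes a float
// model with a single input and a single output of per-class scores.
#include <algorithm>
#include <cstdio>
#include <vector>

#include "tensorflow/lite/interpreter.h"

int Classify(tflite::Interpreter* interpreter, const std::vector<float>& pixels) {
  // Fill the first input tensor with the preprocessed image.
  float* input = interpreter->typed_input_tensor<float>(0);
  std::copy(pixels.begin(), pixels.end(), input);

  // Run inference.
  if (interpreter->Invoke() != kTfLiteOk) {
    std::fprintf(stderr, "Invoke failed\n");
    return -1;
  }

  // Read the per-class scores from the first output tensor.
  const TfLiteTensor* output = interpreter->tensor(interpreter->outputs()[0]);
  const float* scores = interpreter->typed_output_tensor<float>(0);
  const int num_classes = output->dims->data[output->dims->size - 1];
  return static_cast<int>(std::max_element(scores, scores + num_classes) - scores);
}
```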
TensorFlow Lite Model Optimization
TensorFlow Lite is a great tool for running machine learning models on mobile devices. However, even a converted model can still be too large or too slow for a constrained device. In this article, we’ll show you how to use TensorFlow Lite’s model optimization functionality, such as post-training quantization, to create smaller and faster models.
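The optimizations themselves are applied when the model is converted, so on the C++ side the main visible difference is the type of the input and output tensors. The hedged sketch below shows how an app might check whether the loaded model expects quantized or float input; it assumes a single input tensor.

```cpp
// Minimal sketch: inspect the first input tensor to see whether the model was
// quantized at conversion time. Quantized models usually take uint8/int8 input
// described by a scale and zero point instead of float32.
#include <cstdio>

#include "tensorflow/lite/interpreter.h"

void DescribeInput(const tflite::Interpreter& interpreter) {
  const TfLiteTensor* input = interpreter.tensor(interpreter.inputs()[0]);
  if (input->type == kTfLiteUInt8 || input->type == kTfLiteInt8) {
    // Real value = scale * (quantized value - zero_point).
    std::printf("Quantized input: scale=%f zero_point=%d\n",
                input->params.scale, input->params.zero_point);
  } else if (input->type == kTfLiteFloat32) {
    std::printf("Float32 input\n");
  }
}
```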
TensorFlow Lite Hardware Acceleration
TensorFlow Lite applies “graph optimization” ahead of time: when a model is converted, operations are fused and constants folded so that the resulting model is already optimized for on-device execution.
The conversion is performed with the TensorFlow Lite Converter, which takes a trained TensorFlow model as input and outputs a FlatBuffer (.tflite) file that the TensorFlow Lite interpreter can execute.
At runtime, hardware acceleration is provided through delegates, which hand the parts of the graph they support to an accelerator such as the GPU or the Android Neural Networks API (NNAPI), as shown in the sketch below.
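As a hedged example of enabling acceleration from C++, the sketch below applies the GPU delegate on Android after the interpreter is built and before the first call to Invoke; operations the delegate cannot handle stay on the CPU. Ownership handling is simplified here: the delegate must outlive the interpreter.

```cpp
// Minimal sketch: offload supported parts of the graph to the GPU delegate.
// Call this after the interpreter is built and before the first Invoke().
#include "tensorflow/lite/delegates/gpu/delegate.h"
#include "tensorflow/lite/interpreter.h"

TfLiteDelegate* EnableGpu(tflite::Interpreter* interpreter) {
  TfLiteGpuDelegateOptionsV2 options = TfLiteGpuDelegateOptionsV2Default();
  TfLiteDelegate* delegate = TfLiteGpuDelegateV2Create(&options);
  if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    TfLiteGpuDelegateV2Delete(delegate);
    return nullptr;  // fall back to CPU execution
  }
  // Caller keeps the delegate alive for the interpreter's lifetime and
  // releases it with TfLiteGpuDelegateV2Delete() when done.
  return delegate;
}
```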
TensorFlow Lite Supported Devices
TensorFlow Lite is supported on a wide variety of devices, from single-core Arm Cortex-M microcontrollers to custom ASICs designed for deploying TensorFlow Lite models.
TensorFlow Lite Future Directions
TensorFlow Lite is an open-source deep learning framework for on-device inference. It enables low-latency inference of on-device machine learning models with a small binary size. Currently, TensorFlow Lite is in Developer Preview, and it supports a limited set of TensorFlow operations. Developers build and train models with TensorFlow, and then convert these models to TensorFlow Lite to run on mobile and embedded devices.
In the future, we plan to support a wider range of TensorFlow operations, including custom operations. We also plan to optimize the converter to generate more efficient code for a variety of hardware platforms.
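When a model references an operation the interpreter does not know, the kernel is registered with the op resolver before the interpreter is built. The sketch below is hypothetical: `Register_MY_CUSTOM_OP()` and the name `"MyCustomOp"` are placeholders for a kernel you would implement yourself.

```cpp
// Hypothetical sketch: register a custom kernel with the op resolver so the
// interpreter can execute a model that references it. Register_MY_CUSTOM_OP()
// and "MyCustomOp" are placeholders for a kernel you implement yourself.
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

extern TfLiteRegistration* Register_MY_CUSTOM_OP();  // placeholder custom kernel

std::unique_ptr<tflite::Interpreter> BuildWithCustomOp(
    const tflite::FlatBufferModel& model) {
  tflite::ops::builtin::BuiltinOpResolver resolver;
  resolver.AddCustom("MyCustomOp", Register_MY_CUSTOM_OP());
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(model, resolver)(&interpreter);
  return interpreter;
}
```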
In this tutorial, we focused on image classification and ran the TensorFlow Lite C++ example on an Android device. To use TensorFlow Lite in your own apps, we recommend using the TensorFlow Lite Support Library. The support library provides a consistent API layer on top of TensorFlow Lite that can be used across a wide range of devices and platforms. It also includes a number of tools to help with debugging and profiling your models.
– [TensorFlow Lite Swift for TensorFlow Inference Tutorial](https://www.tensorflow.org/lite/tutorials/swift_inference)
– [Classify images of flowers](https://www.tensorflow.org/lite/models/image_classification/overview)
– [Pose estimation with TensorFlow Lite](https://www.tensorflow.org/lite/models/pose_estimation/overview)