Compiling the TensorFlow C/C++ API
TensorFlow's Python API is hugely popular thanks to its convenience and practicality, but in real applications we may also need bindings for other programming languages. This article describes how to compile TensorFlow's C/C++ API. Build environment:
Ubuntu 16.04
Python 3.5
CUDA 9.0
cuDNN 7
Bazel 0.17.2
TensorFlow 1.11.0
1. Install Bazel
- Install the JDK
sudo apt-get install openjdk-8-jdk
- Add the Bazel package source
echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
- Install and update Bazel
sudo apt-get update && sudo apt-get install bazel
- See the official Bazel installation guide for more details
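Once installed, you can confirm that the expected release is on your PATH (this setup assumes Bazel 0.17.2):
bazel version   # prints the installed Bazel release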
2. Build the TensorFlow library
- Download the TensorFlow source code
- Enter the source root directory and run
./configure
to configure the build. You can follow the official site -> Build from source -> View sample configuration session. The key settings are the Python path, the CUDA and cuDNN versions and paths, and your GPU's compute capability (which you can look up on NVIDIA's site). Below is my configuration session, for reference only.
You have bazel 0.17.2 installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3.5

Found possible Python library paths:
  /usr/local/lib/python3.5/dist-packages
  /usr/lib/python3/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python3.5/dist-packages]

Do you wish to build TensorFlow with Apache Ignite support? [Y/n]: n
No Apache Ignite support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [Y/n]: n
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]:

Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]:

Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Do you wish to build TensorFlow with TensorRT support? [y/N]: n
No TensorRT support will be enabled for TensorFlow.

Please specify the locally installed NCCL version you want to use. [Default is to use https://github.com/nvidia/nccl]:

Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 6.1]:

Do you want to use clang as CUDA compiler? [y/N]: n
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
    --config=mkl            # Build with MKL support.
    --config=monolithic     # Config for mostly static monolithic build.
    --config=gdr            # Build with GDR support.
    --config=verbs          # Build with libverbs support.
    --config=ngraph         # Build with Intel nGraph support.
Configuration finished
- Enter the tensorflow directory and build. After a successful build, the libtensorflow_cc.so file will appear under bazel-bin/tensorflow.
C version: bazel build :libtensorflow.so
C++ version: bazel build :libtensorflow_cc.so
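The targets above are relative because they are run from inside the tensorflow/ subdirectory. As a sketch, an equivalent invocation from the source root can also pass the "--config=opt" optimization config mentioned at the end of the configure session (optional; any of the other listed configs can be added the same way):
bazel build --config=opt //tensorflow:libtensorflow.so      # C library
bazel build --config=opt //tensorflow:libtensorflow_cc.so   # C++ library
# The shared libraries land under bazel-bin/tensorflow:
ls bazel-bin/tensorflow/libtensorflow_cc.so bazel-bin/tensorflow/libtensorflow_framework.so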
3. Build the other dependencies
- Enter the tensorflow/contrib/makefile directory and run
./build_all_linux.sh
On success, a gen folder will appear.
- If you see an error like /autogen.sh: 4: autoreconf: not found, install the corresponding dependencies and rerun the script:
sudo apt-get install autoconf automake libtool
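After the missing tools are installed, rerun the script. A quick sanity check, assuming you are still inside tensorflow/contrib/makefile, is to list the generated directories that the CMakeLists.txt below will reference:
./build_all_linux.sh
ls gen/protobuf/include gen/host_obj gen/proto   # these should exist after a successful run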
4. Test
- CMakeLists.txt (the commands to build and run this test project are sketched at the end of this section)
cmake_minimum_required(VERSION 3.8)
project(Tensorflow_test)

set(CMAKE_CXX_STANDARD 11)

set(SOURCE_FILES main.cpp)

include_directories(
        /media/lab/data/yongsen/tensorflow-master
        /media/lab/data/yongsen/tensorflow-master/tensorflow/bazel-genfiles
        /media/lab/data/yongsen/tensorflow-master/tensorflow/contrib/makefile/gen/protobuf/include
        /media/lab/data/yongsen/tensorflow-master/tensorflow/contrib/makefile/gen/host_obj
        /media/lab/data/yongsen/tensorflow-master/tensorflow/contrib/makefile/gen/proto
        /media/lab/data/yongsen/tensorflow-master/tensorflow/contrib/makefile/downloads/nsync/public
        /media/lab/data/yongsen/tensorflow-master/tensorflow/contrib/makefile/downloads/eigen
        /media/lab/data/yongsen/tensorflow-master/bazel-out/local_linux-py3-opt/genfiles
        /media/lab/data/yongsen/tensorflow-master/tensorflow/contrib/makefile/downloads/absl
)

add_executable(Tensorflow_test ${SOURCE_FILES})

target_link_libraries(Tensorflow_test
        /media/lab/data/yongsen/tensorflow-master/bazel-bin/tensorflow/libtensorflow_cc.so
        /media/lab/data/yongsen/tensorflow-master/bazel-bin/tensorflow/libtensorflow_framework.so
)
- Create a session
#include <tensorflow/core/platform/env.h>
#include <tensorflow/core/public/session.h>
#include <iostream>

using namespace std;
using namespace tensorflow;

int main()
{
    Session* session;
    Status status = NewSession(SessionOptions(), &session);
    if (!status.ok()) {
        cout << status.ToString() << "\n";
        return 1;
    }
    cout << "Session successfully created.\n";
    return 0;
}
- Check the TensorFlow version
#include <iostream>
#include <tensorflow/c/c_api.h>

int main()
{
    std::cout << "Hello from TensorFlow C library version" << TF_Version();
    return 0;
}
// Output: Hello from TensorFlow C library version1.11.0-rc1
- If the compiler complains about missing header files, search for their exact paths under the tensorflow root directory and add them to the CMakeLists.txt.
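For example, a missing header can usually be located with find from the TensorFlow root and its directory then added to include_directories; the header name below is only a placeholder:
cd /media/lab/data/yongsen/tensorflow-master
find . -name "some_missing_header.h"   # placeholder; use the file name the compiler reports

To build and run the test program against the CMakeLists.txt above, a standard out-of-source CMake workflow is enough; the LD_LIBRARY_PATH line is only needed if the loader cannot find the TensorFlow shared libraries at run time:
mkdir build && cd build
cmake ..
make
# Only if libtensorflow_cc.so / libtensorflow_framework.so are not found at run time:
# export LD_LIBRARY_PATH=/media/lab/data/yongsen/tensorflow-master/bazel-bin/tensorflow:$LD_LIBRARY_PATH
./Tensorflow_test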
For more content like this, follow 「seniusen」!