netrans model conversion examples

xujiao 2025-04-07 11:31:19 +08:00
parent 99c9a6fcde
commit b39abda2fa
24 changed files with 1646 additions and 0 deletions

examples/caffe/README.md (new file, 168 lines)

@@ -0,0 +1,168 @@
# Caffe Model Conversion Example
This document uses lenet_caffe as an example to show how to convert a Caffe model with Netrans.
Netrans supports all Caffe models.
## Installing Netrans
1. First locate your Netrans download directory, then add Netrans to your shell configuration with the command below, replacing the placeholder with your actual download directory.
```bash
export NETRANS_PATH=<netrans-download-dir>/bin
```
2. Install netrans_py:
```bash
cd netrans_py
pip3 install -e .
```
## Preparing the Data
To convert a Caffe model, the model's project directory must contain the following files:
- a model definition file ending in .prototxt
- a model weights file ending in .caffemodel
- dataset.txt: a text file listing the calibration data paths (image and NPY formats are supported)
Our example comes with the data already prepared; enter its directory with the commands below.
```bash
cd netrans/
cd examples/caffe
```
The directory looks like this:
```bash
lenet_caffe/
├── 0.jpg                    # calibration data
├── dataset.txt              # file listing the data paths
├── lenet_caffe.caffemodel   # Caffe model weights
└── lenet_caffe.prototxt     # Caffe model definition
```
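dataset.txt lists one calibration input per line, relative to the model directory; in this example it contains a single entry:
```bash
0.jpg
```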
## Using the netrans_cli Command-Line Tools
Before using netrans_cli, copy the command-line scripts into the current directory:
```bash
cp ../../netrans_cli/*sh ./
```
The directory now looks like this:
```bash
caffe/
├── example.py
├── export.sh
├── gen_inputmeta.sh
├── import_model.sh
├── infer.sh
├── lenet_caffe
│ ├── 0.jpg
│ ├── dataset.txt
│ ├── lenet_caffe.caffemodel
│ └── lenet_caffe.prototxt
└── quantize.sh
```
### Importing the Model
```bash
./import_model.sh lenet_caffe
```
This step generates a network definition file ending in .json and a weights data file ending in .data.
The lenet_caffe directory now looks like this:
```bash
lenet_caffe/
├── 0.jpg
├── dataset.txt
├── lenet_caffe.caffemodel
├── lenet_caffe.data
├── lenet_caffe.json
└── lenet_caffe.prototxt
```
### Generating the Preprocessing Config
Input data is normally preprocessed before inference. To make sure the model is fed its inputs correctly, generate the corresponding configuration file.
```bash
./gen_inputmeta.sh lenet_caffe
```
The lenet_caffe directory now looks like this:
```bash
lenet_caffe/
├── 0.jpg
├── dataset.txt
├── lenet_caffe.caffemodel
├── lenet_caffe.data
├── lenet_caffe_inputmeta.yml
├── lenet_caffe.json
└── lenet_caffe.prototxt
```
### Quantizing the Model
To improve inference efficiency and speed up inference, quantize the model with the command below.
quantize.sh takes two arguments: the model directory name and the quantization type. Supported types are float, int16, int8, and uint8.
```bash
./quantize.sh lenet_caffe uint8
```
The lenet_caffe directory now looks like this:
```bash
lenet_caffe/
├── 0.jpg
├── dataset.txt
├── lenet_caffe_asymmetric_affine.quantize
├── lenet_caffe.caffemodel
├── lenet_caffe.data
├── lenet_caffe_inputmeta.yml
├── lenet_caffe.json
└── lenet_caffe.prototxt
```
### Exporting the Model
Finally, use export.sh to export the model to NBG format and generate an application project. export.sh takes the same two arguments: the model directory name and the quantization type (float, int16, int8, or uint8). The quantization type must match the one used with quantize.sh.
```bash
./export.sh lenet_caffe uint8
```
The lenet_caffe directory now looks like this:
```bash
lenet_caffe/
├── 0.jpg
├── dataset.txt
├── lenet_caffe_asymmetric_affine.quantize
├── lenet_caffe.caffemodel
├── lenet_caffe.data
├── lenet_caffe_inputmeta.yml
├── lenet_caffe.json
├── lenet_caffe.prototxt
└── wksp
└── asymmetric_affine
├── BUILD
├── dump_core_graph.json
├── graph.json
├── lenetcaffeasymmetricaffine.2012.vcxproj
├── lenet_caffe_asymmetric_affine.export.data
├── lenetcaffeasymmetricaffine.vcxproj
├── main.c
├── makefile.linux
├── network_binary.nb
├── vnn_global.h
├── vnn_lenetcaffeasymmetricaffine.c
├── vnn_lenetcaffeasymmetricaffine.h
├── vnn_post_process.c
├── vnn_post_process.h
├── vnn_pre_process.c
└── vnn_pre_process.h
```
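From here the generated project can be built for the target. A minimal sketch, assuming the cross-compilation toolchain expected by the generated makefile.linux is already set up:
```bash
cd lenet_caffe/wksp/asymmetric_affine
# build the application that loads network_binary.nb
make -f makefile.linux
```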
## Using the netrans_py Python API
A ready-made script built on the Python API is also provided. First copy it into the current directory with the command below.
### Preparing the Example Script
```bash
cp ../../netrans_py/example.py ./
```
### Running the Example Script
```bash
python3 example.py lenet_caffe -q uint8
```

examples/caffe/lenet_caffe/0.jpg (binary file not shown; 553 B)

examples/caffe/lenet_caffe/dataset.txt (new file)

@@ -0,0 +1 @@
0.jpg

examples/caffe/lenet_caffe/lenet_caffe.caffemodel (binary file not shown)

examples/caffe/lenet_caffe/lenet_caffe.prototxt (new file, 136 lines)

@@ -0,0 +1,136 @@
name: "LeNet"
layer {
name: "input"
type: "Input"
top: "data"
input_param {
shape {
dim: 64
dim: 1
dim: 28
dim: 28
}
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 20
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 50
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool2"
top: "ip1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "ip1"
top: "ip1"
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 10
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "prob"
type: "Softmax"
bottom: "ip2"
top: "prob"
}

examples/darknet/README.md (new file, 167 lines)

@@ -0,0 +1,167 @@
# Darknet Model Conversion Example
This document uses yolov4_tiny as an example to show how to convert a Darknet model with Netrans.
Netrans supports the Darknet models listed on the Darknet [website](https://pjreddie.com/darknet/).
## Installing Netrans
1. First locate your Netrans download directory, then add Netrans to your shell configuration with the command below, replacing the placeholder with your actual download directory.
```bash
export NETRANS_PATH=<netrans-download-dir>/bin
```
2. Install netrans_py:
```bash
cd netrans_py
pip3 install -e .
```
## Preparing the Data
To convert a Darknet model, the model's project directory must contain the following files:
- a .cfg file: the network structure configuration
- a .weights file: the trained weights
- dataset.txt: a text file listing the calibration data paths
Our example comes with the data already prepared; enter its directory with the commands below.
```bash
cd netrans/
cd examples/darknet
```
The directory looks like this:
```bash
yolov4_tiny/
├── 0.jpg                 # calibration data
├── dataset.txt           # file listing the data paths
├── yolov4_tiny.cfg       # network structure configuration
└── yolov4_tiny.weights   # pretrained weights
```
## Using the netrans_cli Command-Line Tools
Before using netrans_cli, copy the command-line scripts into the current directory:
```bash
cp ../../netrans_cli/*sh ./
```
The directory now looks like this:
```bash
darknet/
├── export.sh
├── gen_inputmeta.sh
├── import_model.sh
├── infer.sh
├── quantize.sh
└── yolov4_tiny
├── 0.jpg
├── dataset.txt
├── yolov4_tiny.cfg
└── yolov4_tiny.weights
```
### Importing the Model
```bash
./import_model.sh yolov4_tiny
```
This step generates a network definition file ending in .json and a weights data file ending in .data.
The yolov4_tiny directory now looks like this:
```bash
yolov4_tiny/
├── 0.jpg
├── dataset.txt
├── yolov4_tiny.cfg
├── yolov4_tiny.data
├── yolov4_tiny.json
└── yolov4_tiny.weights
```
### Generating the Preprocessing Config
Input data is normally preprocessed before inference. To make sure the model is fed its inputs correctly, generate the corresponding configuration file.
```bash
./gen_inputmeta.sh yolov4_tiny
```
The yolov4_tiny directory now looks like this:
```bash
yolov4_tiny/
├── 0.jpg
├── dataset.txt
├── yolov4_tiny.cfg
├── yolov4_tiny.data
├── yolov4_tiny_inputmeta.yml
├── yolov4_tiny.json
└── yolov4_tiny.weights
```
### Quantizing the Model
To improve inference efficiency and speed up inference, quantize the model with the command below.
quantize.sh takes two arguments: the model directory name and the quantization type. Supported types are float, int16, int8, and uint8.
```bash
./quantize.sh yolov4_tiny uint8
```
The yolov4_tiny directory now looks like this:
```bash
yolov4_tiny/
├── 0.jpg
├── dataset.txt
├── yolov4_tiny_asymmetric_affine.quantize
├── yolov4_tiny.cfg
├── yolov4_tiny.data
├── yolov4_tiny_inputmeta.yml
├── yolov4_tiny.json
└── yolov4_tiny.weights
```
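The asymmetric_affine part of the .quantize filename is the quantizer name that quantize.sh maps each quantization type to:
```bash
# uint8 -> asymmetric_affine
# int8  -> dynamic_fixed_point-8
# int16 -> dynamic_fixed_point-16
```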
### Exporting the Model
Finally, use export.sh to export the model to NBG format and generate an application project. export.sh takes the same two arguments: the model directory name and the quantization type (float, int16, int8, or uint8). The quantization type must match the one used with quantize.sh.
```bash
./export.sh yolov4_tiny uint8
```
The yolov4_tiny directory now looks like this:
```bash
yolov4_tiny/
├── 0.jpg
├── dataset.txt
├── inputs_outputs.txt
├── yolov4_tiny_asymmetric_affine.quantize
├── yolov4_tiny.data
├── yolov4_tiny_inputmeta.yml
├── yolov4_tiny.json
├── yolov4_tiny.weights
└── wksp
└── asymmetric_affine
├── BUILD
├── dump_core_graph.json
├── graph.json
├── yolov4_tinyasymmetricaffine.2012.vcxproj
├── yolov4_tiny_asymmetric_affine.export.data
├── yolov4_tinyasymmetricaffine.vcxproj
├── main.c
├── makefile.linux
├── network_binary.nb
├── vnn_global.h
├── vnn_yolov4_tinyasymmetricaffine.c
├── vnn_yolov4_tinyasymmetricaffine.h
├── vnn_post_process.c
├── vnn_post_process.h
├── vnn_pre_process.c
└── vnn_pre_process.h
```
## Using the netrans_py Python API
A ready-made script built on the Python API is also provided. First copy it into the current directory with the command below.
### Preparing the Example Script
```bash
cp ../../netrans_py/example.py ./
```
### Running the Example Script
```bash
python3 example.py yolov4_tiny -q uint8
```

examples/darknet/yolov4_tiny/0.jpg (binary file not shown; 16 KiB)

examples/darknet/yolov4_tiny/dataset.txt (new file)

@@ -0,0 +1 @@
0.jpg

examples/darknet/yolov4_tiny/yolov4_tiny.cfg (new file, 294 lines)

@@ -0,0 +1,294 @@
[net]
# Testing
#batch=1
#subdivisions=1
# Training
batch=64
subdivisions=1
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.00261
burn_in=1000
max_batches = 2000200
policy=steps
steps=1600000,1800000
scales=.1,.1
#weights_reject_freq=1001
#ema_alpha=0.9998
#equidistant_point=1000
#num_sigmas_reject_badlabels=3
#badlabels_rejection_percentage=0.2
[convolutional]
batch_normalize=1
filters=32
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[route]
layers=-1
groups=2
group_id=1
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
[route]
layers = -1,-2
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky
[route]
layers = -6,-1
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[route]
layers=-1
groups=2
group_id=1
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[route]
layers = -1,-2
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[route]
layers = -6,-1
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[route]
layers=-1
groups=2
group_id=1
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[route]
layers = -1,-2
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[route]
layers = -6,-1
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
##################################
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear
[yolo]
mask = 3,4,5
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=80
num=6
jitter=.3
scale_x_y = 1.05
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
ignore_thresh = .7
truth_thresh = 1
random=0
resize=1.5
nms_kind=greedynms
beta_nms=0.6
#new_coords=1
#scale_x_y = 2.0
[route]
layers = -4
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[upsample]
stride=2
[route]
layers = -1, 23
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear
[yolo]
mask = 1,2,3
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=80
num=6
jitter=.3
scale_x_y = 1.05
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
ignore_thresh = .7
truth_thresh = 1
random=0
resize=1.5
nms_kind=greedynms
beta_nms=0.6
#new_coords=1
#scale_x_y = 2.0

examples/darknet/yolov4_tiny/yolov4_tiny.weights (binary file not shown)

examples/onnx/README.md (new file, 186 lines)

@@ -0,0 +1,186 @@
# ONNX Model Conversion Example
This document uses yolov5s as an example to show how to convert an ONNX model with Netrans.
Netrans supports ONNX up to version 1.14.0 and opsets up to 19.
## Installing Netrans
1. First locate your Netrans download directory, then add Netrans to your shell configuration with the command below, replacing the placeholder with your actual download directory.
```bash
export NETRANS_PATH=<netrans-download-dir>/bin
```
2. Install netrans_py:
```bash
cd netrans_py
pip3 install -e .
```
## Preparing the Data
To convert an ONNX model, prepare:
- an .onnx file: the network model
- dataset.txt: a text file listing the calibration data paths
Our example comes with the data already prepared; enter its directory with the commands below.
```bash
cd netrans/
cd examples/onnx
```
The directory looks like this:
```
yolov5s/
├── 0.jpg          # calibration data
├── dataset.txt    # file listing the data paths
└── yolov5s.onnx   # network model
```
### 3.1 Converting the ONNX example model yolov5s with netrans_cli
Before using netrans_cli, copy the command-line scripts into the current directory:
```bash
cp ../../netrans_cli/*sh ./
```
The directory now looks like this:
```
onnx/
├── export.sh
├── gen_inputmeta.sh
├── import_model.sh
├── infer.sh
├── quantize.sh
└── yolov5s
├── 0.jpg
├── dataset.txt
└── yolov5s.onnx
```
#### 3.1.1 Importing the Model
```bash
./import_model.sh yolov5s
```
This step generates a network definition file ending in .json and a weights data file ending in .data.
The yolov5s directory now looks like this:
```
yolov5s/
├── 0.jpg
├── dataset.txt
├── yolov5s.data
├── yolov5s.json
└── yolov5s.onnx
```
#### 3.1.2 Generating the Preprocessing Config
Input data is normally preprocessed before inference. To make sure the model is fed its inputs correctly, generate the corresponding configuration file.
```bash
./gen_inputmeta.sh yolov5s
```
The yolov5s directory now looks like this:
```
yolov5s/
├── 0.jpg
├── dataset.txt
├── yolov5s.data
├── yolov5s_inputmeta.yml
├── yolov5s.json
└── yolov5s.onnx
```
For yolov5s, we need to change the mean in the yml to 0 and the scale to 0.003921568627 (1/255).
Open the `yolov5s_inputmeta.yml` file
and change lines 30-33 to:
```
scale:
- 0.003921568627
- 0.003921568627
- 0.003921568627
```
Save and close the file.
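For reference, the preprocessing block of the file should then look roughly like this (the mean layout is assumed from the generated template):
```
mean:
- 0
- 0
- 0
scale:
- 0.003921568627
- 0.003921568627
- 0.003921568627
```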
#### 3.1.3 Quantizing the Model
```bash
./quantize.sh yolov5s uint8
```
quantize.sh takes two arguments: the model directory name and the quantization type. Supported types are float, int16, int8, and uint8.
The yolov5s directory now looks like this:
```
yolov5s/
├── 0.jpg
├── dataset.txt
├── yolov5s_asymmetric_affine.quantize
├── yolov5s.data
├── yolov5s_inputmeta.yml
├── yolov5s.json
└── yolov5s.onnx
```
#### 3.1.4 Exporting the Model
```bash
./export.sh yolov5s uint8
```
The yolov5s directory now looks like this:
```
yolov5s/
├── 0.jpg
├── dataset.txt
├── wksp
│ └── asymmetric_affine
│ ├── BUILD
│ ├── dump_core_graph.json
│ ├── graph.json
│ ├── main.c
│ ├── makefile.linux
│ ├── network_binary.nb
│ ├── vnn_global.h
│ ├── vnn_post_process.c
│ ├── vnn_post_process.h
│ ├── vnn_pre_process.c
│ ├── vnn_pre_process.h
│ ├── vnn_yolov5sasymmetricaffine.c
│ ├── vnn_yolov5sasymmetricaffine.h
│ ├── yolov5sasymmetricaffine.2012.vcxproj
│ ├── yolov5s_asymmetric_affine.export.data
│ └── yolov5sasymmetricaffine.vcxproj
├── yolov5s_asymmetric_affine.quantize
├── yolov5s.data
├── yolov5s_inputmeta.yml
├── yolov5s.json
└── yolov5s.onnx
```
### 3.2 Converting the ONNX example model yolov5s with netrans_py
#### 3.2.1 Installing netrans_py
```bash
cd netrans_py
pip3 install -e .
```
#### 3.2.2 Preparing the Example Script
```bash
cd ../examples/onnx
cp ../../netrans_py/example.py ./
```
#### 3.2.3 Running the Example Script
```bash
python3 example.py yolov5s -q uint8 -m 0 -s 0.003921568627
```
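Here `-q` selects the quantization type, while `-m` and `-s` set the preprocessing mean and scale, presumably covering the same settings as the manual inputmeta edit in section 3.1.2:
```bash
# -q uint8            quantization type
# -m 0                preprocessing mean
# -s 0.003921568627   preprocessing scale (1/255)
python3 example.py yolov5s -q uint8 -m 0 -s 0.003921568627
```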

examples/onnx/yolov5s/0.jpg (binary file not shown; 16 KiB)

examples/onnx/yolov5s/dataset.txt (new file)

@@ -0,0 +1 @@
0.jpg

examples/onnx/yolov5s/yolov5s.onnx (binary file not shown)

examples/tensorflow/README.md (new file, 175 lines)

@@ -0,0 +1,175 @@
# TensorFlow Model Conversion Example
This document uses lenet as an example to show how to convert a TensorFlow model with Netrans.
Netrans supports TensorFlow versions 1.4.x, 2.0.x, 2.3.x, 2.6.x, 2.8.x, 2.10.x, and 2.12.x, for models saved with tf.io.write_graph().
## Installing Netrans
1. First locate your Netrans download directory, then add Netrans to your shell configuration with the command below, replacing the placeholder with your actual download directory.
```bash
export NETRANS_PATH=<netrans-download-dir>/bin
```
2. Install netrans_py:
```bash
cd netrans_py
pip3 install -e .
```
## Preparing the Data
To convert a TensorFlow model, the model's project directory must contain the following files:
- a .pb file: the frozen-graph model
- inputs_outputs.txt: the input/output node definition file
- dataset.txt: a text file listing the calibration data paths
Our example comes with the data already prepared; enter its directory with the commands below.
```bash
cd netrans/
cd examples/tensorflow
```
The directory looks like this:
```bash
lenet/
├── 0.jpg                # calibration data
├── dataset.txt          # file listing the data paths
├── inputs_outputs.txt   # input/output node definition file
└── lenet.pb             # frozen-graph model
```
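inputs_outputs.txt holds the importer arguments that identify the graph's input and output nodes and the input size; for lenet it contains:
```bash
--inputs input/x-input --outputs output --input-size-list "28,28,1"
```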
## Using the netrans_cli Command-Line Tools
Before using netrans_cli, copy the command-line scripts into the current directory:
```bash
cp ../../netrans_cli/*sh ./
```
The directory now looks like this:
```bash
tensorflow/
├── export.sh
├── gen_inputmeta.sh
├── import_model.sh
├── infer.sh
├── lenet
│ ├── 0.jpg
│ ├── dataset.txt
│ ├── inputs_outputs.txt
│ └── lenet.pb
└── quantize.sh
```
### Importing the Model
```bash
./import_model.sh lenet
```
This step generates a network definition file ending in .json and a weights data file ending in .data.
The lenet directory now looks like this:
```bash
lenet/
├── 0.jpg
├── dataset.txt
├── inputs_outputs.txt
├── lenet.data
├── lenet.json
└── lenet.pb
```
### Generating the Preprocessing Config
Input data is normally preprocessed before inference. To make sure the model is fed its inputs correctly, generate the corresponding configuration file.
```bash
./gen_inputmeta.sh lenet
```
The lenet directory now looks like this:
```bash
lenet/
├── 0.jpg
├── dataset.txt
├── inputs_outputs.txt
├── lenet.data
├── lenet_inputmeta.yml
├── lenet.json
└── lenet.pb
```
### Quantizing the Model
To improve inference efficiency and speed up inference, quantize the model with the command below.
quantize.sh takes two arguments: the model directory name and the quantization type. Supported types are float, int16, int8, and uint8.
```bash
./quantize.sh lenet uint8
```
The lenet directory now looks like this:
```bash
lenet/
├── 0.jpg
├── dataset.txt
├── inputs_outputs.txt
├── lenet_asymmetric_affine.quantize
├── lenet.data
├── lenet_inputmeta.yml
├── lenet.json
└── lenet.pb
```
### Exporting the Model
Finally, use export.sh to export the model to NBG format and generate an application project. export.sh takes the same two arguments: the model directory name and the quantization type (float, int16, int8, or uint8). The quantization type must match the one used with quantize.sh.
```bash
./export.sh lenet uint8
```
The lenet directory now looks like this:
```bash
lenet/
├── 0.jpg
├── dataset.txt
├── inputs_outputs.txt
├── lenet_asymmetric_affine.quantize
├── lenet.data
├── lenet_inputmeta.yml
├── lenet.json
├── lenet.pb
└── wksp
└── asymmetric_affine
├── BUILD
├── dump_core_graph.json
├── graph.json
├── lenetasymmetricaffine.2012.vcxproj
├── lenet_asymmetric_affine.export.data
├── lenetasymmetricaffine.vcxproj
├── main.c
├── makefile.linux
├── network_binary.nb
├── vnn_global.h
├── vnn_lenetasymmetricaffine.c
├── vnn_lenetasymmetricaffine.h
├── vnn_post_process.c
├── vnn_post_process.h
├── vnn_pre_process.c
└── vnn_pre_process.h
```
## Using the netrans_py Python API
### Installing netrans_py
```bash
cd netrans_py
pip3 install -e .
```
### Preparing the Example Script
```bash
cd ../examples/tensorflow
cp ../../netrans_py/example.py ./
```
### Running the Example Script
```bash
python3 example.py lenet -q uint8
```

examples/tensorflow/export.sh (new executable file, 137 lines)

@@ -0,0 +1,137 @@
#!/bin/bash
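# Export an imported (and, unless float, quantized) model to an NBG application project.
# Usage: ./export.sh <model_dir> <float|uint8|int8|int16>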
if [ -z "$NETRANS_PATH" ]; then
echo "Need to set enviroment variable NETRANS_PATH"
exit 1
fi
OVXGENERATOR=$NETRANS_PATH/pnnacc
OVXGENERATOR="$OVXGENERATOR export ovxlib"
DATASET=dataset.txt
VERIFY='FALSE'
function export_network()
{
NAME=$1
pushd $NAME
QUANTIZED=$2
if [ ${QUANTIZED} = 'float' ]; then
TYPE=float;
quantization_type="none_quantized"
generate_path='./wksp/none_quantized'
elif [ ${QUANTIZED} = 'uint8' ]; then
quantization_type="asymmetric_affine"
generate_path='./wksp/asymmetric_affine'
TYPE=quantized;
elif [ ${QUANTIZED} = 'int8' ]; then
quantization_type="dynamic_fixed_point-8"
generate_path='./wksp/dynamic_fixed_point-8'
TYPE=quantized;
elif [ ${QUANTIZED} = 'int16' ]; then
quantization_type="dynamic_fixed_point-16"
generate_path='./wksp/dynamic_fixed_point-16'
TYPE=quantized;
else
echo "=========== wrong quantization_type ! ( float / uint8 / int8 / int16 )==========="
exit -1
fi
echo " ======================================================================="
echo " =========== Start Generate $NAME ovx C code with type of ${quantization_type} ==========="
echo " ======================================================================="
mkdir -p "${generate_path}"
# To import the C code into a Windows IDE, change the --target-ide-project parameter from 'linux64' to 'win32'.
if [ ${QUANTIZED} = 'float' ]; then
cmd="$OVXGENERATOR \
--model ${NAME}.json \
--model-data ${NAME}.data \
--model-quantize ${NAME}.quantize \
--dtype ${TYPE} \
--pack-nbg-viplite \
--model-quantize ${NAME}_${quantization_type}.quantize \
--with-input-meta ${NAME}_inputmeta.yml\
--optimize 'VIP8000NANOQI_PLUS_PID0XB1'\
#--optimize None\
--target-ide-project 'linux64' \
--viv-sdk ${NETRANS_PATH}/pnna_sdk \
--output-path ${generate_path}/${NAME}_${quantization_type}"
else
if [ -f ${NAME}_${quantization_type}.quantize ]; then
echo -e "\033[31m using ${NAME}_${quantization_type}.quantize \033[0m"
else
echo -e "\033[31m Can not find ${NAME}_${quantization_type}.quantize \033[0m"
exit -1;
fi
cmd="$OVXGENERATOR \
--model ${NAME}.json \
--model-data ${NAME}.data \
--model-quantize ${NAME}.quantize \
--dtype ${TYPE} \
--pack-nbg-viplite \
--model-quantize ${NAME}_${quantization_type}.quantize \
--with-input-meta ${NAME}_inputmeta.yml\
--optimize 'VIP8000NANOQI_PLUS_PID0XB1'\
--target-ide-project 'linux64' \
--viv-sdk ${NETRANS_PATH}/pnna_sdk \
--output-path ${generate_path}/${NAME}_${quantization_type}"
fi
if [ "${VERIFY}" = 'TRUE' ]; then
echo $cmd
fi
eval $cmd
# copy input file into source code folder
# sourcefile="`cat ${DATASET}`"
# cpcmd="cp -fr $sourcefile ${generate_path}/"
# echo $cpcmd
# eval $cpcmd
# temp='wksp/temp'
# mkcmd="mkdir -p ${temp}"
# eval $mkcmd
# sourcefile="`cat ${DATASET}`"
# cpcmd="cp -fr $sourcefile ${temp}/"
# echo $cpcmd
# eval $cpcmd
cpcmd="cp ${generate_path}_nbg_viplite/network_binary.nb ${generate_path}/"
eval $cpcmd
delcmd="rm -rf ${generate_path}_nbg_viplite"
eval $delcmd
# rm -rf ${generate_path}
# mvcmd="mv ${temp} ${generate_path}"
# eval $mvcmd
echo " ======================================================================="
echo " =========== End Generate $NAME ovx C code with type of ${quantization_type} ==========="
echo " ======================================================================="
popd
}
if [ "$#" -lt 2 ]; then
echo "Input a network name and quantized type ( float / uint8 / int8 / int16 )"
exit -1
fi
if [ ! -e "${1%/}" ]; then
echo "Directory ${1%/} does not exist !"
exit -2
fi
export_network ${1%/} ${2%/}

examples/tensorflow/gen_inputmeta.sh (new executable file, 28 lines)

@@ -0,0 +1,28 @@
#!/bin/sh
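# Generate the <model>_inputmeta.yml preprocessing configuration for an imported model.
# Usage: ./gen_inputmeta.sh <model_dir>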
if [ -z "$NETRANS_PATH" ]; then
echo "Need to set enviroment variable NETRANS_PATH"
exit 1
fi
if [ "$#" -ne 1 ]; then
echo "Enter a network name !"
exit 2
fi
if [ ! -e "${1%/}" ]; then
echo "Directory ${1%/} does not exist !"
exit 3
fi
netrans=$NETRANS_PATH/pnnacc
NAME=${1%/}
cd $NAME
$netrans generate \
inputmeta \
--model ${NAME}.json \
--separated-database

examples/tensorflow/import_model.sh (new executable file, 209 lines)

@@ -0,0 +1,209 @@
#!/bin/bash
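# Import a Caffe/TensorFlow/ONNX/TFLite/Darknet/PyTorch model into Netrans .json/.data
# form; the source framework is detected from the files present in <model_dir>.
# Usage: ./import_model.sh <model_dir>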
if [ -z "$NETRANS_PATH" ]; then
echo "Need to set enviroment variable NETRANS_PATH"
exit 1
fi
function import_caffe_network()
{
NAME=$1
CONVERTCAFFE=$NETRANS_PATH/pnnacc
CONVERTCAFFE="$CONVERTCAFFE import caffe"
if [ -f ${NAME}.json ]; then
echo -e "\033[31m rm ${NAME}.json \033[0m"
rm ${NAME}.json
fi
if [ -f ${NAME}.data ]; then
echo -e "\033[31m rm ${NAME}.data \033[0m"
rm ${NAME}.data
fi
echo "=========== Converting $NAME Caffe model ==========="
if [ -f ${NAME}.caffemodel ]; then
cmd="$CONVERTCAFFE \
--model ${NAME}.prototxt \
--weights ${NAME}.caffemodel \
--output-model ${NAME}.json \
--output-data ${NAME}.data"
else
echo "=========== fake Caffe model data file==========="
cmd="$CONVERTCAFFE \
--model ${NAME}.prototxt \
--output-model ${NAME}.json \
--output-data ${NAME}.data"
fi
}
function import_tensorflow_network()
{
NAME=$1
CONVERTF=$NETRANS_PATH/pnnacc
CONVERTF="$CONVERTF import tensorflow"
if [ -f ${NAME}.json ]; then
echo -e "\033[31m rm ${NAME}.json \033[0m"
rm ${NAME}.json
fi
if [ -f ${NAME}.data ]; then
echo -e "\033[31m rm ${NAME}.data \033[0m"
rm ${NAME}.data
fi
echo "=========== Converting $NAME Tensorflow model ==========="
cmd="$CONVERTF \
--model ${NAME}.pb \
--output-data ${NAME}.data \
--output-model ${NAME}.json \
$(cat inputs_outputs.txt)"
}
function import_onnx_network()
{
NAME=$1
CONVERTONNX=$NETRANS_PATH/pnnacc
CONVERTONNX="$CONVERTONNX import onnx"
if [ -f ${NAME}.json ]; then
echo -e "\033[31m rm ${NAME}.json \033[0m"
rm ${NAME}.json
fi
if [ -f ${NAME}.data ]; then
echo -e "\033[31m rm ${NAME}.data \033[0m"
rm ${NAME}.data
fi
echo "=========== Converting $NAME ONNX model ==========="
cmd="$CONVERTONNX \
--model ${NAME}.onnx \
--output-model ${NAME}.json \
--output-data ${NAME}.data"
}
function import_tflite_network()
{
NAME=$1
CONVERTTFLITE=$NETRANS_PATH/pnnacc
CONVERTTFLITE="$CONVERTTFLITE import tflite"
if [ -f ${NAME}.json ]; then
echo -e "\033[31m rm ${NAME}.json \033[0m"
rm ${NAME}.json
fi
if [ -f ${NAME}.data ]; then
echo -e "\033[31m rm ${NAME}.data \033[0m"
rm ${NAME}.data
fi
echo "=========== Converting $NAME TFLite model ==========="
cmd="$CONVERTTFLITE \
--model ${NAME}.tflite \
--output-model ${NAME}.json \
--output-data ${NAME}.data"
}
function import_darknet_network()
{
NAME=$1
CONVERTDARKNET=$NETRANS_PATH/pnnacc
CONVERTDARKNET="$CONVERTDARKNET import darknet"
if [ -f ${NAME}.json ]; then
echo -e "\033[31m rm ${NAME}.json \033[0m"
rm ${NAME}.json
fi
if [ -f ${NAME}.data ]; then
echo -e "\033[31m rm ${NAME}.data \033[0m"
rm ${NAME}.data
fi
echo "=========== Converting $NAME darknet model ==========="
cmd="$CONVERTDARKNET \
--model ${NAME}.cfg \
--weight ${NAME}.weights \
--output-model ${NAME}.json \
--output-data ${NAME}.data"
}
function import_pytorch_network()
{
NAME=$1
CONVERTPYTORCH=$NETRANS_PATH/pnnacc
CONVERTPYTORCH="$CONVERTPYTORCH import pytorch"
if [ -f ${NAME}.json ]; then
echo -e "\033[31m rm ${NAME}.json \033[0m"
rm ${NAME}.json
fi
if [ -f ${NAME}.data ]; then
echo -e "\033[31m rm ${NAME}.data \033[0m"
rm ${NAME}.data
fi
echo "=========== Converting $NAME pytorch model ==========="
cmd="$CONVERTPYTORCH \
--model ${NAME}.pt \
--output-model ${NAME}.json \
--output-data ${NAME}.data \
$(cat input_size.txt)"
}
function import_network()
{
NAME=$1
pushd $NAME
if [ -f ${NAME}.prototxt ]; then
import_caffe_network ${1%/}
elif [ -f ${NAME}.pb ]; then
import_tensorflow_network ${1%/}
elif [ -f ${NAME}.onnx ]; then
import_onnx_network ${1%/}
elif [ -f ${NAME}.tflite ]; then
import_tflite_network ${1%/}
elif [ -f ${NAME}.weights ]; then
import_darknet_network ${1%/}
elif [ -f ${NAME}.pt ]; then
import_pytorch_network ${1%/}
else
echo "=========== can not find suitable model files ==========="
fi
echo $cmd
eval $cmd
if [ -f ${NAME}.data -a -f ${NAME}.json ]; then
echo -e "\033[31m SUCCESS \033[0m"
else
echo -e "\033[31m ERROR ! \033[0m"
fi
popd
}
if [ "$#" -ne 1 ]; then
echo "Input a network name !"
exit -1
fi
if [ ! -e "${1%/}" ]; then
echo "Directory ${1%/} does not exist !"
exit -2
fi
import_network ${1%/}

examples/tensorflow/infer.sh (new executable file, 65 lines)

@@ -0,0 +1,65 @@
#!/bin/bash
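# Run inference with the imported model (float or quantized) on the CPU device.
# Usage: ./infer.sh <model_dir> <float|uint8|int8|int16>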
if [ -z "$NETRANS_PATH" ]; then
echo "Need to set enviroment variable NETRANS_PATH"
exit 1
fi
TENSORZONX=$NETRANS_PATH/pnnacc
TENSORZONX="$TENSORZONX inference"
DATASET=./dataset.txt
function inference_network()
{
NAME=$1
pushd $NAME
QUANTIZED=$2
inf_path='./inf'
if [ ${QUANTIZED} = 'float' ]; then
TYPE=float32;
quantization_type="float32"
elif [ ${QUANTIZED} = 'uint8' ]; then
quantization_type="asymmetric_affine"
TYPE=quantized;
elif [ ${QUANTIZED} = 'int8' ]; then
quantization_type="dynamic_fixed_point-8"
TYPE=quantized;
elif [ ${QUANTIZED} = 'int16' ]; then
quantization_type="dynamic_fixed_point-16"
TYPE=quantized;
else
echo "=========== wrong quantization_type ! ( float / uint8 / int8 / int16 )==========="
exit -1
fi
cmd="$TENSORZONX \
--dtype ${TYPE} \
--batch-size 1 \
--model-quantize ${NAME}_${quantization_type}.quantize \
--model ${NAME}.json \
--model-data ${NAME}.data \
--output-dir ${inf_path} \
--with-input-meta ${NAME}_inputmeta.yml \
--device CPU"
echo $cmd
eval $cmd
echo "=========== End inference $NAME model ==========="
popd
}
if [ "$#" -lt 2 ]; then
echo "Input a network name and quantized type ( float / uint8 / int8 / int16 )"
exit -1
fi
if [ ! -e "${1%/}" ]; then
echo "Directory ${1%/} does not exist !"
exit -2
fi
inference_network ${1%/} ${2%/}

examples/tensorflow/lenet/0.jpg (binary file not shown; 553 B)

examples/tensorflow/lenet/dataset.txt (new file)

@@ -0,0 +1 @@
0.jpg

examples/tensorflow/lenet/inputs_outputs.txt (new file)

@@ -0,0 +1 @@
--inputs input/x-input --outputs output --input-size-list "28,28,1"

examples/tensorflow/lenet/lenet.pb (binary file not shown)

examples/tensorflow/quantize.sh (new executable file, 76 lines)

@@ -0,0 +1,76 @@
#!/bin/bash
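# Quantize an imported model; writes <model>_<quantizer>.quantize in <model_dir>.
# Usage: ./quantize.sh <model_dir> <uint8|int8|int16>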
if [ -z "$NETRANS_PATH" ]; then
echo "Need to set enviroment variable NETRANS_PATH"
exit 1
fi
TENSORZONEX=$NETRANS_PATH/pnnacc
TENSORZONEX="$TENSORZONEX quantize"
DATASET=./dataset.txt
function quantize_network()
{
NAME=$1
pushd $NAME
QUANTIZED=$2
if [ ${QUANTIZED} = 'float' ]; then
echo "=========== do not need quantied==========="
exit -1
elif [ ${QUANTIZED} = 'uint8' ]; then
quantization_type="asymmetric_affine"
elif [ ${QUANTIZED} = 'int8' ]; then
quantization_type="dynamic_fixed_point-8"
elif [ ${QUANTIZED} = 'int16' ]; then
quantization_type="dynamic_fixed_point-16"
else
echo "=========== wrong quantization_type ! ( uint8 / int8 / int16 )==========="
exit -1
fi
echo " ======================================================================="
echo " ==== Start Quantizing $NAME model with type of ${quantization_type} ==="
echo " ======================================================================="
if [ -f ${NAME}_${quantization_type}.quantize ]; then
echo -e "\033[31m rm ${NAME}_${quantization_type}.quantize \033[0m"
rm ${NAME}_${quantization_type}.quantize
fi
cmd="$TENSORZONEX \
--batch-size 1 \
--qtype ${QUANTIZED} \
--rebuild \
--quantizer ${quantization_type%-*} \
--model-quantize ${NAME}_${quantization_type}.quantize \
--model ${NAME}.json \
--model-data ${NAME}.data \
--with-input-meta ${NAME}_inputmeta.yml \
--device CPU"
echo $cmd
eval $cmd
if [ -f ${NAME}_${quantization_type}.quantize ]; then
echo -e "\033[31m SUCCESS \033[0m"
else
echo -e "\033[31m ERROR ! \033[0m"
fi
popd
}
if [ "$#" -lt 2 ]; then
echo "Input a network name and quantized type ( uint8 / int8 / int16 )"
exit -1
fi
if [ ! -e "${1%/}" ]; then
echo "Directory ${1%/} does not exist !"
exit -2
fi
quantize_network ${1%/} ${2%/}