Installing and Using the TensorFlow 2 Object Detection API


  • Downloading the TF2 Object Detection API
    • Installation issues
  • Exporting the pb model and running inference tests
  • Other issues

Downloading the TF2 Object Detection API

References
Link: link

Link: link

Installation Issues

Leaving this as a placeholder for now; I'll fill in more details when I have time.

  1. Packages that fail to download during the setup install can be installed separately with pip.
  2. Add a path file under Anaconda\Lib\site-packages, e.g. tensorflowmodel.pth (any name works, but it must end with .pth), listing the directories of the module files you want on the import path (a quick way to verify it worked is shown in the sketch after this list):
    C:\Users\PaulY\Desktop\models
    C:\Users\PaulY\Desktop\models\research
    C:\Users\PaulY\Desktop\models\research\slim
  3. Always run training with the .py files inside object_detection itself; copying them elsewhere and running them there leads to strange errors.
  4. Keep the pre-trained model, the .pbtxt file, and other training inputs in a separate folder (easier to find).
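
A quick way to check that the .pth file from item 2 took effect is to confirm those directories show up on sys.path and that object_detection imports cleanly. A minimal sketch (the paths printed should include the ones listed above):

import sys

# The directories listed in tensorflowmodel.pth should appear here
for p in sys.path:
    print(p)

# If the .pth file is picked up, these imports should succeed
import object_detection
from object_detection.utils import label_map_util
print("object_detection imported from:", object_detection.__file__)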
Quick command reference:

  • Visualize the structure of the exported pb model: saved_model_cli show --dir C:/Users/PaulY/Desktop/tf2/litemodel --all
  • PC-side testing: the test script under models/research
  • TF2 training: python model_main_tf2.py --logtostderr (config-related settings are changed inside the .py file)
  • TF2 export to pb: python exporter_main_v2.py --input_type image_tensor (paths are changed inside the file)

Fuller, flag-driven forms of the last two commands are shown below.
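
For reference, if you prefer passing everything on the command line instead of editing defaults inside the scripts, the training and export commands look roughly like this (the path arguments are placeholders for your own pipeline config, training directory, and output directory):

python model_main_tf2.py --pipeline_config_path=path/to/pipeline.config --model_dir=path/to/training_dir --alsologtostderr
python exporter_main_v2.py --input_type image_tensor --pipeline_config_path path/to/pipeline.config --trained_checkpoint_dir path/to/training_dir --output_directory path/to/exported_model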

Exporting the pb Model and Running Inference Tests

Use exporter_main_v2.py to export the model. The exported model can then be inspected with:

saved_model_cli show --dir C:/Users/PaulY/Desktop/tf2/litemodel --all

Note: the export produces a folder; the .pb file inside its saved_model subfolder is the one that contains both the model structure and the weights. A typical layout is sketched below.
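
For orientation, the exported folder from exporter_main_v2.py usually looks something like this (the top-level name is whatever output directory you chose):

exported_model/
    checkpoint/
        checkpoint
        ckpt-0.data-00000-of-00001
        ckpt-0.index
    saved_model/
        assets/
        variables/
        saved_model.pb
    pipeline.config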
The inference code is as follows:


"""
Object Detection From TF2 Saved Model
=====================================
"""import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'    # Suppress TensorFlow logging (1)
import pathlib
import tensorflow as tf
import argparsetf.get_logger().setLevel('ERROR')           # Suppress TensorFlow logging (2)# Enable GPU dynamic memory allocation
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:tf.config.experimental.set_memory_growth(gpu, True)# Initiate argument parser
parser = argparse.ArgumentParser(description="model inference sample")
parser.add_argument("-m","--saved_model_dir",help="Path to saved model directory.",type=str, default="exported_models/my_model/saved_model")
parser.add_argument("-l","--labels_path",help="Path to the labels (.pbtxt) file.", type=str, default="annotations/label_map.pbtxt")
parser.add_argument("-i","--images_dir",help="Path of input images file.", type=str, default="images/test")
parser.add_argument("-o","--output_inference_result",help="Path of output inference result file.", type=str, default='inference_result/')
args = parser.parse_args()# %%
# Load the model
# ~~~~~~~~~~~~~~
# Next we load the downloaded model
import time
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utilsPATH_TO_SAVED_MODEL      = "C:/Users/PaulY/Desktop/tf2/tolite/saved_model"
PATH_TO_LABELS           = "C:/Users/PaulY/Desktop/tf2/training/label_map.pbtxt"
PATH_TO_IMAGES           = "C:/Users/PaulY/Desktop/testdata/VOC/JPEGImages"
PATH_TO_INFERENCE_RESULT = "C:/Users/PaulY/Desktop/tf2/inference_result/"
###
#更改这个地方就可以换成你的路径
###
print('Loading model...', end='')
start_time = time.time()# Load saved model and build the detection function
detect_fn = tf.saved_model.load(PATH_TO_SAVED_MODEL)end_time = time.time()
elapsed_time = end_time - start_time
print('Done! Took {} seconds'.format(elapsed_time))# %%
# Load label map data (for plotting)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Label maps correspond index numbers to category names, so that when our convolution network
# predicts `5`, we know that this corresponds to `airplane`.  Here we use internal utility
# functions, but anything that returns a dictionary mapping integers to appropriate string labels
# would be fine.category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS,use_display_name=True)# %%
# Putting everything together
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
# The code shown below loads an image, runs it through the detection model and visualizes the
# detection results, including the keypoints.
#
# Note that this will take a long time (several minutes) the first time you run this code due to
# tf.function's trace-compilation --- on subsequent runs (e.g. on new images), things will be
# faster.
#
# Here are some simple things to try out if you are curious:
#
# * Modify some of the input images and see if detection still works. Some simple things to try out here (just uncomment the relevant portions of code) include flipping the image horizontally, or converting to grayscale (note that we still expect the input image to have 3 channels).
# * Print out `detections['detection_boxes']` and try to match the box locations to the boxes in the image.  Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).
# * Set ``min_score_thresh`` to other values (between 0 and 1) to allow more detections in or to filter out more detections.
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import warnings
import oswarnings.filterwarnings('ignore')   # Suppress Matplotlib warningsdef load_image_into_numpy_array(path):"""Load an image from file into a numpy array.Puts image into numpy array to feed into tensorflow graph.Note that by convention we put it into a numpy array with shape(height, width, channels), where channels=3 for RGB.Args:path: the file path to the imageReturns:uint8 numpy array with shape (img_height, img_width, 3)"""return np.array(Image.open(path))def load_images_path(images_dir):images_path_list = []images_filename_list =  os.listdir(images_dir)for img_path in images_filename_list:if img_path.endswith(".jpg") == True:img_path = os.path.join('%s/%s' % (images_dir, img_path))images_path_list.append(img_path)return images_path_listIMAGE_PATHS = load_images_path(PATH_TO_IMAGES)for image_path in IMAGE_PATHS:print('Running inference for {}... '.format(image_path), end='')image_np = load_image_into_numpy_array(image_path)# Things to try:# Flip horizontally# image_np = np.fliplr(image_np).copy()# Convert image to grayscale# image_np = np.tile(#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.input_tensor = tf.convert_to_tensor(image_np)# The model expects a batch of images, so add an axis with `tf.newaxis`.input_tensor = input_tensor[tf.newaxis, ...]# input_tensor = np.expand_dims(image_np, 0)detections = detect_fn(input_tensor)# All outputs are batches tensors.# Convert to numpy arrays, and take index [0] to remove the batch dimension.# We're only interested in the first num_detections.num_detections = int(detections.pop('num_detections'))detections = {key: value[0, :num_detections].numpy()for key, value in detections.items()}detections['num_detections'] = num_detections# detection_classes should be ints.detections['detection_classes'] = detections['detection_classes'].astype(np.int64)image_np_with_detections = image_np.copy()viz_utils.visualize_boxes_and_labels_on_image_array(image_np_with_detections,detections['detection_boxes'],detections['detection_classes'],detections['detection_scores'],category_index,use_normalized_coordinates=True,max_boxes_to_draw=200,min_score_thresh=.30,agnostic_mode=False)plt.figure()# plt.imshow(image_np_with_detections)image_filename = os.path.join(PATH_TO_INFERENCE_RESULT, os.path.basename(image_path))plt.imsave(image_filename, image_np_with_detections)print('Done')# plt.show()# sphinx_gallery_thumbnail_number = 2
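
One thing to watch: the script defines command-line flags with argparse but then ignores them in favor of the hard-coded PATH_TO_* constants. If you would rather drive it from the command line, a minimal change (keeping everything else as is) is to take the paths from args instead:

PATH_TO_SAVED_MODEL      = args.saved_model_dir
PATH_TO_LABELS           = args.labels_path
PATH_TO_IMAGES           = args.images_dir
PATH_TO_INFERENCE_RESULT = args.output_inference_result

after which it can be invoked roughly as follows (the script name is whatever you saved it under, e.g. the test.py from the traceback below; the paths are placeholders):

python test.py -m path/to/saved_model -l path/to/label_map.pbtxt -i path/to/images -o path/to/inference_result/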


Error:
Traceback (most recent call last):
File "C:\Users\PaulY\Desktop\models\research\test.py", line 43, in
detections = detect_fn(input_tensor)
TypeError: '_UserObject' object is not callable
You can use the saved_model_cli command above to check whether the model structure was damaged during export and some parameters were lost; switching to a correctly exported model resolves this. A signature-based workaround is sketched below.
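
If re-exporting is not an option, a workaround used in the official Object Detection tutorials is to call the model through its serving signature instead of calling the loaded object directly. A minimal sketch, assuming the same PATH_TO_SAVED_MODEL and input_tensor as in the script above:

import tensorflow as tf

loaded = tf.saved_model.load(PATH_TO_SAVED_MODEL)
print(list(loaded.signatures.keys()))    # usually ['serving_default'] for OD API exports

# Call the serving signature when the loaded object itself is not callable
detect_fn = loaded.signatures['serving_default']
detections = detect_fn(input_tensor)     # returns a dict of batched output tensors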

Other Issues

  1. If import tensorflow.compat.v1 as tf fails with an error that the compat module cannot be found, uninstalling scipy with pip and reinstalling it can fix it (see the commands below).
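
For reference, that is just:

pip uninstall scipy
pip install scipy
python -c "import tensorflow.compat.v1 as tf; print(tf.__version__)"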
