
How to Train Custom Object Detection Model for LabVIEW Using SSD MobileNet V1 with TensorFlow 1.x

Hi all,

I have a requirement to deploy a custom object detection model inside LabVIEW using the Vision Development Module (VDM). I’ve seen NI’s example that loads the frozen_inference_graph.pb for SSD MobileNet V1, and it works well with LabVIEW’s Deep Learning functions (LabVIEW 2019).

I have already annotated my custom dataset (around 30 images) using Roboflow and exported it in the TFRecord format with a label_map.pbtxt.

However, I’m struggling to understand how to:

  1. Train a TensorFlow 1.x SSD MobileNet V1 model using my custom dataset (preferably TensorFlow 1.4+ compatible).

  2. Ensure the output is in the form of a frozen_inference_graph.pb that can be directly loaded in LabVIEW VDM.

  3. Modify or create the correct pipeline.config file for SSD MobileNet V1 training (my current draft is sketched after this list).

  4. Perform the training using model_main.py and verify the export steps (the commands I intend to run are also shown after the list).
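
For point 3, my current understanding is that I would start from the stock ssd_mobilenet_v1_coco pipeline.config that ships with the TF 1.x Object Detection API and only change a few fields. The paths and num_classes below are placeholders for my own dataset, so please correct me if more needs to change:

model {
  ssd {
    num_classes: 1    # number of classes in my label_map.pbtxt
    ...               # rest of the stock ssd_mobilenet_v1 settings left unchanged
  }
}
train_config {
  batch_size: 8
  fine_tune_checkpoint: "pre-trained/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
  num_steps: 20000
}
train_input_reader {
  tf_record_input_reader {
    input_path: "data/train.record"
  }
  label_map_path: "data/label_map.pbtxt"
}
eval_input_reader {
  tf_record_input_reader {
    input_path: "data/valid.record"
  }
  label_map_path: "data/label_map.pbtxt"
}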
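For points 1, 2 and 4, the commands I was planning to run (from the tensorflow/models "research" directory with the Object Detection API installed) look roughly like this; paths and step counts are placeholders:

# Train with the TF 1.x Object Detection API
python object_detection/model_main.py \
    --pipeline_config_path=training/pipeline.config \
    --model_dir=training/ \
    --num_train_steps=20000 \
    --alsologtostderr

# Export a frozen graph once training finishes
python object_detection/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path=training/pipeline.config \
    --trained_checkpoint_prefix=training/model.ckpt-20000 \
    --output_directory=exported_model/

As far as I can tell, exported_model/ should then contain the frozen_inference_graph.pb that the VDM deep learning VIs expect. Is this the right procedure?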

Message 1 of 2

I remember that example. If I recall correctly, it uses Intel OpenVINO to optimize the model so it runs on a CPU, which is a fairly dated approach.

The better way to solve this, I think, is to use the Python Node in LabVIEW. It can run on the GPU if you have a working Python + CUDA setup.

Your Python code just needs to be modular, e.g. def init(), def process(), and def close().

Use a global variable in Python to hold your class object between calls, and you will have a very fast Python + LabVIEW cross application.
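
A rough sketch of what I mean (module name, function names, and the tensor names are just an example for an Object Detection API frozen graph, not tested code):

# labview_detector.py - called from the LabVIEW Python Node
import numpy as np
import tensorflow as tf   # TensorFlow 1.x

_detector = None          # global keeps the loaded model alive between node calls

class _Detector:
    def __init__(self, pb_path):
        graph = tf.Graph()
        with graph.as_default():
            graph_def = tf.GraphDef()
            with tf.gfile.GFile(pb_path, "rb") as f:
                graph_def.ParseFromString(f.read())
            tf.import_graph_def(graph_def, name="")
        self.sess = tf.Session(graph=graph)
        # standard output tensors of an Object Detection API frozen graph
        self.image_t   = graph.get_tensor_by_name("image_tensor:0")
        self.boxes_t   = graph.get_tensor_by_name("detection_boxes:0")
        self.scores_t  = graph.get_tensor_by_name("detection_scores:0")
        self.classes_t = graph.get_tensor_by_name("detection_classes:0")

    def run(self, image):
        boxes, scores, classes = self.sess.run(
            [self.boxes_t, self.scores_t, self.classes_t],
            feed_dict={self.image_t: np.expand_dims(image, 0)})
        return boxes[0], scores[0], classes[0]

    def close(self):
        self.sess.close()

def init(pb_path):
    # call once from LabVIEW to load frozen_inference_graph.pb
    global _detector
    _detector = _Detector(pb_path)
    return 0

def process(image):
    # call per frame; image is an HxWx3 uint8 array coming from LabVIEW
    boxes, scores, classes = _detector.run(np.asarray(image, dtype=np.uint8))
    # flatten to plain lists so the Python Node can hand them back to LabVIEW
    return boxes.flatten().tolist(), scores.tolist(), classes.tolist()

def close():
    # call once from LabVIEW when the application shuts down
    global _detector
    if _detector is not None:
        _detector.close()
        _detector = None
    return 0

In LabVIEW you open a Python session, call init once, call process inside your acquisition loop, and call close when you stop.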

Message 2 of 2