05-27-2025 11:19 PM - edited 05-27-2025 11:21 PM
Hi all,
I have a requirement to deploy a custom object detection model inside LabVIEW using the Vision Development Module (VDM). I've seen NI's example that loads the frozen_inference_graph.pb for SSD MobileNet V1, and it works well with LabVIEW's Deep Learning functions (LabVIEW 2019).
✅ I have already annotated my custom dataset (around 30 images) using Roboflow and exported it in the TFRecord format with a label_map.pbtxt.
However, I’m struggling to understand how to:
1. Train a TensorFlow 1.x SSD MobileNet V1 model on my custom dataset (preferably TensorFlow 1.4+ compatible).
2. Ensure the output is a frozen_inference_graph.pb that can be loaded directly in LabVIEW VDM.
3. Modify or create the correct pipeline.config file for SSD MobileNet V1 training.
4. Perform the training using model_main.py and verify the export steps.
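For reference, here is a minimal sketch of the fields that usually need editing in the sample SSD MobileNet V1 pipeline.config from the TF1 Object Detection API. All paths and the class count below are placeholders for your own setup, not verified values:

```
model {
  ssd {
    num_classes: 1  # set to the number of classes in your label_map.pbtxt
    # ... rest of the sample SSD MobileNet V1 config stays as shipped ...
  }
}
train_config {
  batch_size: 8
  # start from the COCO-pretrained checkpoint for transfer learning
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco/model.ckpt"
  num_steps: 20000
}
train_input_reader {
  tf_record_input_reader {
    input_path: "data/train.record"   # your Roboflow TFRecord export
  }
  label_map_path: "data/label_map.pbtxt"
}
eval_input_reader {
  tf_record_input_reader {
    input_path: "data/val.record"
  }
  label_map_path: "data/label_map.pbtxt"
}
```

After training with model_main.py, the TF1 Object Detection API's export_inference_graph.py script (run with --input_type image_tensor, pointing at your pipeline.config and the last model.ckpt) is what produces the frozen_inference_graph.pb you can load in VDM.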
05-29-2025 07:19 AM
I remember that example. If I recall correctly, it uses Intel OpenVINO to optimize the model to run on a CPU, which is fairly dated at this point.
In my opinion, the best way to solve this is to use the Python Node in LabVIEW. It can run on a GPU if you have a working Python + CUDA setup.
Your Python code just needs to be modular, e.g. def init(), def process(), def close(), and use global (module-level) variables in Python to hold your objects between calls. That way you get a very fast Python + LabVIEW cross-application.
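The init/process/close pattern above can be sketched as a small Python module for the LabVIEW Python Node. Everything here is a hypothetical illustration: the function names follow the pattern suggested above, and the "inference" step is a placeholder where your real TensorFlow session call would go.

```python
# Hypothetical Python Node module sketch: LabVIEW calls each function by name.
# Module-level state (the "global variable" idea) keeps the model alive
# between process() calls instead of reloading it every frame.

_state = {}  # holds the loaded model / session between calls

def init(model_path):
    """Called once from LabVIEW at startup; load the model here."""
    # Placeholder: real code would load the frozen graph / start a TF session.
    _state["model_path"] = model_path
    _state["ready"] = True
    return 0  # status code back to LabVIEW

def process(pixels):
    """Called per frame; run inference on the image data from LabVIEW."""
    if not _state.get("ready"):
        return []
    # Placeholder transform standing in for the real session.run(...) call.
    return [p * 2 for p in pixels]

def close():
    """Called once at shutdown; release the model and clear state."""
    _state.clear()
    return 0
```

The key design point is that init() pays the model-loading cost once, so per-frame process() calls stay fast; close() releases everything when the VI stops.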