
calling a deep learning model

Hello everyone

I'm working on a deep learning project using LabVIEW and Python. I created my model in Python with TensorFlow and Keras dependencies and saved it to a .pb file in order to call it in LabVIEW using the IMAQ DL functions, but it doesn't work. I realised that maybe I should transform the created model into a TensorFlow graph in order to get the expected result in LabVIEW. I need your help with a way to call the Python model in LabVIEW and run it successfully.

Here are my Python and LabVIEW codes.

Please help me.

0 Kudos
Message 1 of 15
(6,926 Views)

@Hazou51 wrote:

I realised that maybe I should transform the created model into a TensorFlow graph

 

yes - if you want to use the IMAQ deep learning .vis

see: http://zone.ni.com/reference/en-XX/help/370281AE-01/nivisionconcepts/deeplearning_faq/

how did you transform your keras model into a .pb file?

 

my first attempt would be: https://www.tensorflow.org/api_docs/python/tf/keras/Model#save

 

The saved_model.pb file stores the actual TensorFlow program, or model, and a set of named signatures, each identifying a function that accepts tensor inputs and produces tensor outputs.

Link

 

 

import os
# assumes `model` is the trained Keras model and head_tail = os.path.split(filepath)

model.save(os.path.join(head_tail[0], modelname + ".h5"))
# will create a single .h5 file

model.save(os.path.join(head_tail[0], modelname))
# will create a folder <modelname>, which contains the "saved_model.pb" file

 

 

however, there are more ways to export a .pb file :

https://www.tensorflow.org/api_docs/python/tf/io/write_graph

 

The first step is to get the computation graph of the TensorFlow backend which represents the Keras model, where the forward pass and training-related operations are included.

Then the graph is converted to a GraphDef protocol buffer, after which it is pruned so that subgraphs not necessary to compute the requested outputs, such as the training operations, are removed. This step is referred to as freezing the graph.

https://www.dlology.com/blog/how-to-convert-trained-keras-model-to-tensorflow-and-make-prediction/
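The TF 1.x freezing steps described above can be sketched with a minimal toy example - this assumes the tf.compat.v1 API, and the graph and node names here are illustrative stand-ins for the Keras backend graph:

```python
import tensorflow.compat.v1 as tf  # TF1-style API, also available in TF 2.x

tf.disable_eager_execution()

# toy graph standing in for the Keras backend graph (illustrative names)
x = tf.placeholder(tf.float32, [None, 4], name="input")
w = tf.Variable(tf.ones([4, 1]), name="weights")
y = tf.identity(tf.matmul(x, w), name="output")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # prune the graph to the requested output and bake the variables
    # in as constants - this is the "freezing" step
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output"])
    tf.io.write_graph(frozen, ".", "frozen_model.pb", as_text=False)
```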

 

 

so, which .pb (SavedModel or Frozen Model) did you try to load in loadModel&data.vi ‏25 KB?

are you using LabVIEW x64 and IMAQ x64?

 

are you using TensorFlow 2.x?

Note: Models should be compatible with TensorFlow version 1.4.1.

http://zone.ni.com/reference/en-XX/help/370281AE-01/imaqvision/imaq_dl_model_create/

 

0 Kudos
Message 2 of 15
(6,875 Views)

Hello , 

Yes, I'm using IMAQ DL and LabVIEW 64-bit with TensorFlow 2, and I used the frozen model to create the graph, but I get this error:

I'm thankful for your help 🙂 

 

0 Kudos
Message 3 of 15
(6,854 Views)


the joys of upgrading ...

it looks like the Frozen Model format is doomed

Frozen model is a deprecated format and support is added for backward compatibility purpose.

https://github.com/tensorflow/tfjs/tree/master/tfjs-converter

 

 

you want to export your trained Keras NN to a "frozen_model.pb" using this source, as in graphe.PNG ‏80 KB - which obviously throws a TensorFlow 1.x related error

I can't tell you the solution to this issue either - have you tried TensorFlow 1.5 to create the frozen_model?

In Python 3.7.6 x64 with TensorFlow 2.0 it is possible to export a trained Keras neural net

 

1# as a .h5 file or 

2# to export a trained model as a "saved_model.pb" - see also this

 

however, with approach 2# the neural net's architecture (graph) and the trained weights are stored separately

export_keras_NN.PNG
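For illustration, here is a minimal sketch of what approach 2# produces on disk - assuming a tiny stand-in model and tf.saved_model.save (the folder and model here are made up for the example):

```python
import os
import tensorflow as tf

# tiny stand-in model (replace with your trained Keras net)
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])

# SavedModel export: creates a folder, not a single file
tf.saved_model.save(model, "my_saved_model")

# the graph/signatures live in saved_model.pb,
# the trained weights separately under variables/
for root, _, files in sorted(os.walk("my_saved_model")):
    for name in sorted(files):
        print(os.path.join(root, name))
```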

 

I can't tell you how to use IMAQ + saved_model.pb - but the manual says it is supported:

http://zone.ni.com/reference/en-XX/help/370281AE-01/imaqvision/imaq_dl_model_create/

 

 

 

0 Kudos
Message 5 of 15
(6,803 Views)
0 Kudos
Message 6 of 15
(6,798 Views)

@alexderjuengere wrote:

it looks like the Frozen Model format is doomed

Frozen model is a deprecated format and support is added for backward compatibility purpose.

https://github.com/tensorflow/tfjs/tree/master/tfjs-converter

 


or it isn't:

The key to exporting the frozen graph is to convert the model to concrete function, extract and freeze graphs from the concrete function, and serialize to hard drive.

https://leimao.github.io/blog/Save-Load-Inference-From-TF2-Frozen-Graph/
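The concrete-function route from that blog post can be sketched like this in TF 2.x - a minimal example using a stand-in model (the model and file names are assumptions for illustration):

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2)

# stand-in model (replace with your trained Keras net)
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])

# 1) convert the model to a concrete function
full_model = tf.function(lambda x: model(x))
concrete = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# 2) freeze: variables become constants, training subgraphs are pruned
frozen_func = convert_variables_to_constants_v2(concrete)

# these are the node names you later need on the LabVIEW side
print("inputs :", [t.name for t in frozen_func.inputs])
print("outputs:", [t.name for t in frozen_func.outputs])

# 3) serialize the frozen GraphDef to disk
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  ".", "frozen_model.pb", as_text=False)
```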

0 Kudos
Message 7 of 15
(6,767 Views)

Hello, I'm grateful for your help.

I generated a .pbtxt file using tf.io.write_graph, and I think I need to know the input node and the output node of the created graph in order to pass them to the IMAQ DL functions.
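To find those node names, one option is to parse the .pbtxt back into a GraphDef and list its nodes - a small self-contained sketch (the toy graph and file name here are illustrative, not the poster's actual model):

```python
import tensorflow as tf
from google.protobuf import text_format

# build a tiny graph and dump it as .pbtxt (stands in for your exported file)
with tf.Graph().as_default() as g:
    x = tf.compat.v1.placeholder(tf.float32, [None, 4], name="input")
    y = tf.identity(x * 2.0, name="output")
tf.io.write_graph(g.as_graph_def(), ".", "graph.pbtxt", as_text=True)

# parse it back and list candidate input/output node names
graph_def = tf.compat.v1.GraphDef()
with open("graph.pbtxt") as f:
    text_format.Merge(f.read(), graph_def)

inputs = [n.name for n in graph_def.node if n.op == "Placeholder"]
print("input candidates:", inputs)
print("last node (often the output):", graph_def.node[-1].name)
```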

I'll try all your suggestions and share the result.

Thank you very much . 

0 Kudos
Message 8 of 15
(6,759 Views)

I don't think the Vision toolkit supports TF2.0

 

I have successfully used it with a TF1.14 frozen model, but be aware it is CPU only.

 

Currently I am using a toolkit provided by these guys. https://www.anscenter.com/

 

It supports GPU acceleration and works well, I recommend you get in touch with them.

 

 

Message 9 of 15
(6,724 Views)

@Neil.Pate wrote:

I don't think the Vision toolkit supports TF2.0

 


I haven't tested it yet, but I am very confident that a proper "frozen_model.pb" which was trained in TF 2.0 will be compatible with TF 1.x,

and therefore compatible with IMAQ

 

it would be nice if there were an IMAQ example which illustrates how to use the "saved_model.pb" + variables successfully

 

 


@Neil.Pate wrote:

 

I have successfully used it with a TF1.14 frozen model, but be aware it is CPU only.

 

 


good point.

in the context of the scenario "train a model in Python TensorFlow, apply the model in LabVIEW IMAQ":

 

"CPU only" is a killer argument against object detection or semantic segmentation neural nets, e.g. DenseNet201, MobileNet SSD, Mask R-CNN -

but for a classic 3-layer fully connected net or a small convolutional neural net like a 3-block VGG-style architecture,

"CPU only" is an option.

 

 

0 Kudos
Message 10 of 15
(6,704 Views)