01-21-2021 07:56 PM
Hello, I just had a quick question about a neural network I want to deploy on hardware using NI and LabVIEW. I was wondering whether models built in PyTorch can be used in LabVIEW. I found the following webpage, which mentions TensorFlow but not PyTorch:
I see that the Python node can call functions that use NumPy and many other packages. Does this hold true for the torch library?
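For concreteness, the kind of module I would point the Python Node at is just a plain function that imports torch. A hypothetical sketch (assuming torch is installed in the environment the node uses):

    # torch_infer.py (hypothetical module name) - the Python Node would call predict()
    import torch

    def predict(values):
        # LabVIEW passes a 1D array of floats in as a list
        x = torch.tensor(values, dtype=torch.float32)
        # return plain Python lists so LabVIEW can read the result back
        return torch.sigmoid(x).tolist()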
Thank you!
03-24-2021 07:21 AM
I would also like to know that
06-16-2025 09:14 AM
@martyste wrote:
I would also like to know that
I know this topic is 4 years old, but last weekend I played around a bit with PyTorch, and I thought it might be useful for someone else, so I'd like to share. Until now I had only used TensorFlow, and I really underestimated PyTorch: it's quite powerful and very easy to use.
For model inference, the fastest solution in terms of performance (I work with large X-ray images) turned out to be OpenVINO.
It’s actually quite straightforward. For example, let’s train a simple XOR network with four neurons in the hidden layer:
Then a more or less "classical" Python script to train it would look like this:
import torch as pt
import torch.nn as nn
import torch.optim as opt
import openvino as ov
class XORModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.network = nn.Sequential(
            nn.Linear(2, 4),
            nn.ReLU(),  # Rectified Linear Unit
            nn.Linear(4, 1),
            nn.Sigmoid()
        )

    def forward(self, x):
        return self.network(x)

pt.manual_seed(42)  # set seed for reproducibility

# Data
X = pt.tensor([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=pt.float32)
y = pt.tensor([[0], [1], [1], [0]], dtype=pt.float32)

# Model, loss, optimizer
model = XORModel()
criterion = nn.BCELoss()
optimizer = opt.Adam(model.parameters(), lr=0.1)
print(model)

# Training
for epoch in range(8000):
    optimizer.zero_grad()
    outputs = model(X)
    loss = criterion(outputs, y)
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 1000 == 0:
        print(f"Epoch {epoch+1}, Loss: {loss.item():.6f}")

with pt.no_grad():  # gradients are not needed for export
    ov_input = pt.zeros(1, 2)  # batch size 1, input size 2
    # Convert to an OpenVINO model
    ov_model = ov.convert_model(model, example_input=ov_input)
    # Save the OpenVINO IR model (produces .xml and .bin files)
    ov.save_model(ov_model, 'xor_model_openvino.xml')
Obviously to run this, you’ll need to install the following:
pip install torch
pip install openvino
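Before moving on to LabVIEW, it's worth loading the saved IR back in Python and checking that it reproduces the XOR truth table. A minimal sketch of that check (my addition; the file name follows the script above):

    # Sanity check: run the exported OpenVINO IR on all four XOR inputs
    import numpy as np
    import openvino as ov

    core = ov.Core()
    compiled = core.compile_model('xor_model_openvino.xml', 'CPU')
    request = compiled.create_infer_request()

    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        x = np.array([[a, b]], dtype=np.float32)  # batch size 1, input size 2
        request.infer({0: x})                     # single input, index 0
        y = request.get_output_tensor().data      # shape (1, 1)
        print(f'{a} XOR {b} -> {y[0, 0]:.4f}')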
Now, for inference, we’ll use a simple wrapper DLL:
#include <openvino/openvino.hpp>

extern "C" __declspec(dllexport)
float infer_xor(const char* lv_model, float input1, float input2) {
    try {
        // Create OpenVINO Core (do not use static here)
        ov::Core core;
        // Load and compile the model
        std::shared_ptr<ov::Model> model = core.read_model(lv_model);
        ov::CompiledModel compiled_model = core.compile_model(model, "CPU");
        ov::InferRequest infer_request = compiled_model.create_infer_request();
        // Prepare input tensor
        ov::Tensor input_tensor(ov::element::f32, { 1, 2 });
        float* input_data = input_tensor.data<float>();
        input_data[0] = input1;
        input_data[1] = input2;
        // Run inference
        infer_request.set_input_tensor(input_tensor);
        infer_request.infer();
        // Get output
        ov::Tensor output_tensor = infer_request.get_output_tensor();
        float* output_data = output_tensor.data<float>();
        return output_data[0];
    }
    catch (const std::exception&) {
        // Log or handle error if needed
        return -1.0f;
    }
}
And here’s how it’s used in LabVIEW:
From a performance perspective, it makes sense to split initialization and inference into separate functions, like this:
ov::Core core;
std::shared_ptr<ov::Model> model;
ov::CompiledModel compiled_model;
ov::InferRequest infer_request;

extern "C" __declspec(dllexport)
bool init(const char* lv_model) {
    try {
        model = core.read_model(lv_model);
        compiled_model = core.compile_model(model, "CPU");
        infer_request = compiled_model.create_infer_request();
        return true;
    }
    catch (const std::exception&) {
        return false;
    }
}

extern "C" __declspec(dllexport)
float inference(float input1, float input2) {
    try {
        ov::Tensor input_tensor(ov::element::f32, { 1, 2 });
        float* input_data = input_tensor.data<float>();
        input_data[0] = input1;
        input_data[1] = input2;
        infer_request.set_input_tensor(input_tensor);
        infer_request.infer();
        ov::Tensor output_tensor = infer_request.get_output_tensor();
        float* output_data = output_tensor.data<float>();
        return output_data[0];
    }
    catch (const std::exception&) {
        return -1.0f;
    }
}

extern "C" __declspec(dllexport)
void close() {
    try {
        // Explicitly reset all OpenVINO objects; otherwise LabVIEW is unable to close
        infer_request = ov::InferRequest();  // Reset to default
        compiled_model = ov::CompiledModel();
        model.reset();
        core = ov::Core();                   // Reinitialize to release internal state
    }
    catch (const std::exception&) {
        // Optional: log error
    }
}
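Before wiring it into LabVIEW, the same three entry points can be smoke-tested from Python via ctypes. A hedged sketch (the DLL and model names follow the post; the working-directory assumption is mine):

    # Run from the folder containing LVOpenVINO.dll, the OpenVINO runtime
    # DLLs, and the IR files
    import ctypes

    lib = ctypes.CDLL('./LVOpenVINO.dll')
    lib.init.argtypes = [ctypes.c_char_p]
    lib.init.restype = ctypes.c_bool
    lib.inference.argtypes = [ctypes.c_float, ctypes.c_float]
    lib.inference.restype = ctypes.c_float
    lib.close.restype = None

    assert lib.init(b'xor_model_openvino.xml'), 'model failed to load'
    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(f'{a} XOR {b} -> {lib.inference(a, b):.4f}')
    lib.close()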
Then you can use it, for example, to explore how the XOR network works under the hood:
This was developed and tested on Windows, but it should work on a Linux target as well.
Something like that.
Python 3.13.5 + PyTorch 2.7.1 + OpenVINO 2025.1.0 + Visual Studio 2022 v.17.14.5 + LabVIEW 2025Q1 25.1.2f2 was used (all 64-bit).
Test project in the attachment. To run it, you will need to download the Intel Distribution of the OpenVINO Toolkit and place the runtime DLLs from \runtime\bin\intel64\Release and \runtime\3rdparty\tbb\bin\ next to LVOpenVINO.dll.
06-16-2025 10:57 AM
This is interesting, but IMO LabVIEW should have jumped on support for TensorFlow and PyTorch years ago.
Back before I jumped off the LabVIEW boat into the Python swamp, I was curious about how neural nets worked, so I built some neurons in LabVIEW : )
https://github.com/dnparadice/LabGRAD
06-17-2025 04:50 AM
@Jay14159265 wrote:
Back before I jumped off the LabVIEW boat into the Python swamp... : )
This is a very funny analogy. How many estuarine crocodiles have you encountered so far? 😁
06-17-2025 10:39 AM
@rolfk wrote:
@Jay14159265 wrote:
Back before I jumped off the LabVIEW boat into the Python swamp... : )
This is a very funny analogy. How many estuarine crocodiles have you encountered so far? 😁
Dynamic typing will eat you alive ...
In production Python code there have to be unit tests for EVERYTHING, because there is no static typing / compiler 😆
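A contrived example of what I mean (made up for illustration): this imports cleanly and only fails when the bad call actually runs, while a static checker such as mypy would flag it immediately:

    def scale(values: list[float], factor: float) -> list[float]:
        return [v * factor for v in values]

    print(scale([1.0, 2.0], 3.0))   # OK: [3.0, 6.0]
    print(scale([1.0, 2.0], "3"))   # TypeError only at runtime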
06-17-2025 10:47 AM - edited 06-17-2025 10:48 AM
@Jay14159265 wrote:
@rolfk wrote:
@Jay14159265 wrote:
Back before I jumped off the LabVIEW boat into the Python swamp... : )
This is a very funny analogy. How many estuarine crocodiles have you encountered so far? 😁
Dynamic typing will eat you alive ...
In production Python code there have to be unit tests for EVERYTHING, because there is no static typing / compiler 😆
That's my biggest gripe with Python for sure. And I can't understand how people find the var datatype in C# such a great feature, as it is basically introducing dynamic datatypes into C# too. I'm heavily into strict datatypes everywhere and accept Variants only in very controlled situations.
I also don't like that the formatting of text code has such an important impact, making simple indenting an actual syntax element.
Encouraging indenting? Absolutely yes! Enforcing it strictly? Not for me!
06-17-2025 11:38 AM
@rolfk wrote:
I also don't like that the formatting of text code has such an important impact, making simple indenting an actual syntax element.
Encouraging indenting? Absolutely yes! Enforcing it strictly? Not for me!
It's especially funny how spaces and tabs aren't "compatible" in Python. I'm a "Tab"-oriented person, and sometimes, even in code that looks visually correct, I get errors like this:
for i in range(num_detections):
    score = detection_scores[0][i]
    if score > 0.75:
        box = detection_boxes[0][i]
        class_id = int(detection_classes[0][i]) - 1

    if score > 0.75:
    ^
IndentationError: unindent does not match any outer indentation level
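One way to hunt those down is the standard-library tabnanny checker (an aside from me; the file name is made up):

    # tabnanny flags exactly this kind of ambiguous tab/space indentation
    import tabnanny
    tabnanny.check('detect_script.py')  # prints the offending lines, if any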