DeepSeek R1 is now available in LabVIEW!

🚀 Local execution of DeepSeek-R1-Distill-Llama-8B-Q8_0 on GPU (RTX 3090) using LabVIEW's QMH Template Architecture
In this video, we showcase how recent families of reasoning models have transformed the way LLMs are used in practice.
We demonstrate a full reasoning pipeline on LabVIEW, including the Chain of Thought (CoT) logic and the final response generation.
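DeepSeek-R1-family models emit their chain of thought between `<think>` tags before the final answer, so a reasoning pipeline like this one has to split the stream into the two phases. A minimal Python sketch of that split (the tag format follows DeepSeek-R1's published chat template; the toolkit's actual LabVIEW implementation is not shown in the post):

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Split a DeepSeek-R1 completion into (chain_of_thought, final_answer).

    R1-family models wrap their reasoning in <think>...</think> and
    place the user-facing answer after the closing tag.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        # No reasoning block: treat the whole output as the answer.
        return "", raw.strip()
    thought = match.group(1).strip()
    answer = raw[match.end():].strip()
    return thought, answer

raw = "<think>2 GPUs x 24 GB each = 48 GB total.</think>You have 48 GB of VRAM."
cot, answer = split_reasoning(raw)
print(cot)     # the hidden reasoning phase
print(answer)  # the text shown to the user
```

Displaying the two parts separately is what makes the CoT phase visible in the demo.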

 


🧠 The demo runs locally on an RTX 3090 GPU with the model DeepSeek-R1-Distill-Llama-8B-Q8_0, fully integrated into the LabVIEW GenAI Toolkit — already available for download on our website.

💡 We adopt a functional approach to deep learning, aiming to streamline the entire process—from model import and fine-tuning to deployment—without relying on traditional code-based stacks.
Instead of using complex programming languages, we leverage LabVIEW as a graphical abstraction engine, making LLM integration faster, more visual, and accessible to engineers and researchers.
👉 The GenAI Toolkit is live!
Try it now and explore what's possible: 🌐 https://lnkd.in/eUtVumG2

🎥 Watch more demos on our official YouTube channel:
👉 https://www.youtube.com/@graiphic
📬 hello@graiphic.io or DM us for support

 


Youssef Menjour


Graiphic


www.Graiphic.io


y.menjour@graiphic.io


LabVIEW architect passionate about robotics and deep learning.



Follow us on LinkedIn
linkedin.com/company/graiphic



Follow us on YouTube
youtube.com/@graiphic



Message 1 of 5

Yes, it works. I got this working as well, back in February, on pure CPU, but not the distilled version. Instead, I'm using the "full" DeepSeek R1 Q8_0 model, because I have enough RAM.

 

After that, I added GPUs and spent a few weeks learning and training the model on LabVIEW code. Now, with the help of scripting, it works like this (note the "aha" thinking moment, where DeepSeek recognizes its own mistakes and fixes them on the fly; I love this):

bs1c.gif

The context is saved, so I can refine the diagram if needed with new prompts:

bs2c.gif

A "Copilot"-like feature has also been implemented. For example, if I drop an "IMAQ Create" on the diagram and then add a "File Control," DeepSeek recognizes the idea and automatically adds "IMAQ File Open," wiring everything correctly (note that a string indicator is also added to prevent a broken block diagram):

bs3c.gif

Or, for instance, if I change the image type from the default grayscale to RGB, DeepSeek adjusts the code accordingly:

bs4c.gif

And so on. It's pretty helpful for me.

The backend currently runs on a powerful Xeon W7-3555 workstation with 56 logical cores, 1 TB of RAM, and three RTX 4500 Ada GPUs, giving 72 GB of GPU memory in total:

PC1.png

Performance is acceptably fast (for generating LabVIEW nodes we don't need many tokens per second), though unfortunately I can't load the entire DeepSeek model onto the GPUs, only part of the layers, around 10%.
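The ~10% figure lines up with a back-of-envelope estimate: at Q8_0 the full 671B model weighs roughly 671 GB, while three 24 GB cards give 72 GB of VRAM. A rough Python sketch (the model size and VRAM reserve are assumptions, not measured values):

```python
def offloadable_fraction(model_gb: float, vram_gb: float, reserve_gb: float = 2.0) -> float:
    """Rough fraction of a model's layers that fit in GPU memory.

    Assumes layers are roughly uniform in size and reserves some VRAM
    for the KV cache and CUDA buffers.
    """
    usable = max(vram_gb - reserve_gb, 0.0)
    return min(usable / model_gb, 1.0)

# DeepSeek R1 671B at Q8_0 is roughly 1 byte/param ~ 671 GB of weights;
# three RTX 4500 Ada cards give 3 x 24 GB = 72 GB of VRAM.
frac = offloadable_fraction(model_gb=671.0, vram_gb=72.0, reserve_gb=6.0)
print(f"{frac:.0%} of layers can be offloaded")
```

By the same arithmetic, the 8B distilled model at Q8_0 (~8.5 GB) fits entirely on a single RTX 3090.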

Message 2 of 5

Interesting. At the moment we haven't delved into that Copilot part yet. I assume you fine-tuned DeepSeek on the NI documentation using a Colab notebook with Unsloth. However, I don't understand why you used DeepSeek for this purpose: LabVIEW is inherently a graphical language, yet you employed an LLM instead of a VLM (I would have preferred Qwen for better efficiency). I also assume you implemented a tagging system to instruct LabVIEW to execute a script block (using JSON or XML, considering that MCP doesn't exist in LabVIEW at the moment).
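Such a tagging system can be as simple as scanning the model's reply for a delimited JSON payload and handing it to LabVIEW scripting. A hypothetical sketch (the `<script>` tag and the command schema are invented here for illustration, not taken from the post):

```python
import json
import re

# Hypothetical envelope: the model is prompted to emit scripting commands
# inside <script>...</script> tags as JSON; everything else is plain chat.
SCRIPT_TAG = re.compile(r"<script>(.*?)</script>", re.DOTALL)

def extract_script_blocks(reply: str) -> list[dict]:
    """Pull JSON command objects out of a model reply for the LabVIEW side."""
    blocks = []
    for payload in SCRIPT_TAG.findall(reply):
        try:
            blocks.append(json.loads(payload))
        except json.JSONDecodeError:
            pass  # malformed block: ignore it rather than break the diagram
    return blocks

reply = (
    'Adding the file open node now. '
    '<script>{"action": "place_node", "node": "IMAQ File Open", '
    '"wire_to": ["IMAQ Create", "File Control"]}</script>'
)
for cmd in extract_script_blocks(reply):
    print(cmd["action"], "->", cmd["node"])
```

The LabVIEW process would then map each command object onto VI Scripting calls (place the node, wire the terminals).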

Also, I think using the original version of DeepSeek for this task isn't really necessary – it's way too large, and it clearly limits usability (not everyone is lucky enough to have your kind of hardware).

Regarding your memory usage, you might want to take a look at how you're managing execution – llama.cpp allows for better memory distribution if properly configured. It looks like your execution parameters might not be fully optimized.
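For reference, llama.cpp's `--tensor-split` flag takes per-GPU proportions, and weighting them by available VRAM is one way to balance the layers across cards. A small sketch (assuming three identical 24 GB cards, as in the setup above):

```python
def tensor_split(vram_per_gpu: list[float]) -> list[float]:
    """Proportions for llama.cpp's --tensor-split flag.

    llama.cpp distributes offloaded layers across GPUs according to
    these ratios; weighting by free VRAM avoids overflowing the
    smallest card.
    """
    total = sum(vram_per_gpu)
    return [round(v / total, 3) for v in vram_per_gpu]

# Three RTX 4500 Ada cards (24 GB each) -> an even split.
print(tensor_split([24.0, 24.0, 24.0]))
# e.g.: llama-server -m model.gguf --n-gpu-layers 9 --tensor-split 0.333,0.333,0.333
```

The `--n-gpu-layers 9` value in the comment is only an example; the right count depends on the model's layer size and how much VRAM the KV cache needs.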

Anyway, well done!

 

Local GPU (RTX 3090) execution of DeepSeek-R1-Distill-Llama-8B-Q8_0 is fully operational with the LabVIEW GenAI Toolkit (no need for monster hardware 😀), and it has also been working since February.

 





Message 3 of 5

You really got me! I totally believed it!
Thing is, we took it seriously — we don’t do April Fools, we build.





Message 4 of 5

Damn, this one was quite convincing, and the GIFs were really well done as well!

Message 5 of 5