03-30-2025 07:42 PM
I’m excited to share my first successful attempt at integrating LabVIEW with a local AI model, LLaMA 2.
I was able to send prompts to the AI model and receive responses directly inside LabVIEW using its HTTP API.
Since I’m running LLaMA 2 on CPU only, responses are very slow, so I set the timeout to infinite (-1). If you have a GPU, you can download and use optimized models for faster performance.
https://ollama.com/search
Steps to Integrate LLaMA 2 with LabVIEW:
1-Install Ollama from ollama.com, then download LLaMA 2 by running (in a command prompt):
ollama pull llama2
2-Run LLaMA 2 locally with:
ollama run llama2
3-In LabVIEW:
Use JSON to format the request and extract the AI response.
Use ‘HTTP Client POST.vi’ to send data to http://localhost:11434/api/generate.
I set the timeout to -1 to handle slow responses on CPU.
4-Enter a prompt, run the VI, and get AI-generated responses inside LabVIEW.
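For anyone who wants to check the request and response format before wiring it up in LabVIEW, here is a small Python sketch of the same JSON handling (the prompt and the sample reply below are illustrative, not real model output). One detail worth knowing: Ollama's /api/generate streams newline-delimited JSON chunks by default, so including "stream": false in the request body gets you a single JSON object, which is much easier to parse with LabVIEW's JSON VIs:

```python
import json

def build_generate_request(model: str, prompt: str) -> str:
    """Build the JSON body for Ollama's /api/generate endpoint.
    "stream": False asks for one complete JSON reply instead of
    newline-delimited chunks."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def extract_response(body: str) -> str:
    """Pull the generated text out of a non-streaming reply."""
    return json.loads(body)["response"]

# Equivalent of the strings wired into HTTP Client POST.vi:
request_body = build_generate_request("llama2", "Why is the sky blue?")
print(request_body)

# Illustrative reply shape (a real reply carries extra fields,
# e.g. token counts and timings):
sample_reply = '{"model": "llama2", "response": "Because of Rayleigh scattering.", "done": true}'
print(extract_response(sample_reply))
```

In LabVIEW, the first string is what you POST to http://localhost:11434/api/generate, and the second function corresponds to unflattening the reply and reading its "response" field.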
This is the VI:
https://drive.google.com/file/d/1fqg_g48y_o1uIfUcdm7U8hUpU20qNC7_/view?usp=sharing
I think this is just the beginning! It expands LabVIEW's capabilities by integrating AI-powered automation.
My attempt was made with the help of ChatGPT.