
Is it possible to port "Find Global Min on Surface" to FPGA?

We are trying to solve a physics problem using Find Global Min on Surface, and it works, which is a good thing. But to really make it usable, I need around 10,000 rounds of function calls. That is simply too long if we do it through DAQ to get data and feedback; it would take hours to get the job done.

 

I was wondering if it is possible to port this to an FPGA, in which case I could run 10,000 tries in less than 10 seconds. That would make a huge difference in usability.

 

I opened "Find Global Min on Surface" and noticed there aren't many mathematical calculations underneath; most of it seems like housekeeping and comparisons. In my mind this should be doable. Am I right on this? And can anyone give me some advice on how to get started? Thanks~

Message 1 of 7

This is doable on an FPGA. 

 

On the FPGA you process data as each sample or group of samples runs through the chip. So as each value passes through, you keep a running max or min.

 

At some point you know it is the end of an image or data set; there you take the current max and min, send them to the host or other destination, then reset and continue looking for the max or min.
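Roughly, in Python pseudocode (on the FPGA target this would be a loop in G, and the send would be something like a DMA FIFO write to the host; samples and samples_per_frame stand in for your real acquisition):

def stream_min_max(samples, samples_per_frame):
    # Running min/max over a stream, reset at each frame boundary.
    cur_min, cur_max, count = float("inf"), float("-inf"), 0
    for s in samples:
        cur_min = min(cur_min, s)       # keep a running min as each value passes
        cur_max = max(cur_max, s)       # keep a running max as each value passes
        count += 1
        if count == samples_per_frame:  # end of image / data set
            yield cur_min, cur_max      # send to host or other destination
            cur_min, cur_max, count = float("inf"), float("-inf"), 0  # reset

The point is that nothing is stored except the current extremes and a count, which is exactly the kind of state that fits on an FPGA.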


Certified LabVIEW Architect, Certified Professional Instructor
ALE Consultants

Introduction to LabVIEW FPGA for RF, Radar, and Electronic Warfare Applications
Message 2 of 7

Thank you for your advice.

 

I found I can use SPGD (stochastic parallel gradient descent) to solve my problem. My problem is differentiable, so dithering around the current value can help me find the right direction to achieve my goal.
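In case it helps anyone later, here is roughly what I mean in Python pseudocode (measure_metric, gain, and dither are placeholders for my real system, not any library's API):

import random

def spgd_step(u, measure_metric, gain=0.5, dither=0.01):
    # One SPGD iteration: apply a random +/- dither to every control value,
    # measure the metric on both sides, and step along the dither direction
    # scaled by the measured difference. Use a negative gain to minimize.
    delta = [dither * random.choice((-1.0, 1.0)) for _ in u]
    dj = (measure_metric([ui + di for ui, di in zip(u, delta)])
          - measure_metric([ui - di for ui, di in zip(u, delta)]))
    return [ui + gain * dj * di for ui, di in zip(u, delta)]

Running this in a loop against the live system should both find the optimum and keep tracking it if it drifts.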

 

Message 3 of 7

FPGA? A GPU might make sense too.

 

What library is Find Global Min on Surface in? It's not in LV by default.

 

It won't just run faster if you copy\convert the function, as FPGAs are clocked slower than normal CPUs. To get the benefit of the FPGA, you'd need to convert the task into parallel tasks. That might not be easy.
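Roughly (a Python sketch of the idea, not that library's code; surface_rows stands in for your data):

def surface_min(surface_rows):
    # Each row is an independent partial search; on an FPGA these can run
    # concurrently as parallel comparators, followed by one final reduction
    # over the partial results. On a CPU this is just a sequential loop.
    partial_minima = [min(row) for row in surface_rows]
    return min(partial_minima)

It's only the "run all rows at once" part that an FPGA buys you.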

 

Why does running Find Global Min on Surface 10,000 times take hours? What is your data? What are you actually trying to do?

Message 4 of 7

A GPU is not good enough when it comes to latency. I need to set and read a real-world signal to let the GA (genetic algorithm) move to the next step; it takes at least 10 ms or so to finish a cycle, which makes the process run at a fairly low rate.

 

I want to make sure I can not only solve this problem using the GA but also lock the result, because in the real world the system is not all that stable; it will drift around, and I need to make sure it won't go too far away.

Message 5 of 7

@jiangliang wrote:

A GPU is not good enough when it comes to latency. I need to set and read a real-world signal to let the GA (genetic algorithm) move to the next step; it takes at least 10 ms or so to finish a cycle, which makes the process run at a fairly low rate.

 

I want to make sure I can not only solve this problem using the GA but also lock the result, because in the real world the system is not all that stable; it will drift around, and I need to make sure it won't go too far away.


Well, that answers my first question\remark...

 

Still no idea about:

 

wiebe@CARYA wrote:

FPGA? A GPU might make sense too.

 

What library is Find Global Min on Surface in? It's not in LV by default.

 

It won't just run faster if you copy\convert the function, as FPGAs are clocked slower than normal CPUs. To get the benefit of the FPGA, you'd need to convert the task into parallel tasks. That might not be easy.

 

Why does running Find Global Min on Surface 10,000 times take hours? What is your data? What are you actually trying to do?


 

Message 6 of 7

Well, it is an optical problem. Actually, it can be solved by SPGD quite well; I wasn't aware of that at the beginning, so I was trying to use a GA for it.

 

Message 7 of 7