01-21-2021 09:19 PM - edited 01-21-2021 09:25 PM
We are trying to solve a physics problem using Find Global Min on Surface, and it works, which is a good thing. But to make it really usable, I need to run around 10000 rounds of function calls. That is simply too long if we are using DAQ to get the data and feedback; it would take hours to get the job done.
I was wondering if it is possible to port this to an FPGA? In that case I could run 10000 tries in less than 10 seconds, which would make a huge difference in usability.
I opened "Find Global Min on Surface" and noticed there aren't many mathematical calculations going on underneath; most of it seems like housekeeping and comparison. In my mind this should be doable. Am I right about this? And can anyone give me some advice on how to get started? Thanks~
01-22-2021 06:46 AM - edited 01-22-2021 06:48 AM
This is doable on an FPGA.
In the FPGA you process data as each sample or group of samples runs through the chip. So as each value passes through, you keep a running max or min value.
At some point you know it is the end of an image or data set; at that point you take the current max and min, send them to the host or other destination, then reset and continue looking for the max or min.
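To sketch that streaming idea in Python (just as a stand-in for the actual LabVIEW FPGA logic; the function name and set size are illustrative only):

def stream_min_max(samples, samples_per_set):
    """Yield (min, max) once per completed data set."""
    cur_min = float("inf")
    cur_max = float("-inf")
    count = 0
    for x in samples:
        # update the running extrema as each sample passes through
        cur_min = min(cur_min, x)
        cur_max = max(cur_max, x)
        count += 1
        if count == samples_per_set:
            # end of an image / data set: report, reset, keep going
            yield cur_min, cur_max
            cur_min = float("inf")
            cur_max = float("-inf")
            count = 0

# example: two sets of three samples each
data = [3.1, -0.5, 2.2, 7.8, 1.0, -4.4]
for lo, hi in stream_min_max(data, samples_per_set=3):
    print(lo, hi)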
02-18-2021 08:48 AM
Thank you for your advice.
I found I can use SPGD to solve my problem. My problem is differentiable, so dithering around the current value can help me find the right direction to achieve my goal.
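For reference, here is a minimal SPGD-style sketch in Python; measure() is just a placeholder for the real DAQ read of the objective, and the gain and dither amplitude are made-up values that would need tuning for a real system:

import numpy as np

def spgd_step(u, measure, dither=0.05, gain=2.0):
    """One SPGD iteration: apply a random +/- dither to the controls,
    estimate the gradient direction from the measured difference,
    and step that way (this version maximizes; flip the sign to minimize)."""
    delta = dither * np.random.choice([-1.0, 1.0], size=u.shape)
    j_plus = measure(u + delta)   # objective with positive perturbation
    j_minus = measure(u - delta)  # objective with negative perturbation
    return u + gain * (j_plus - j_minus) * delta

# toy objective: maximize -sum((u - 1)^2), optimum at u = 1
def measure(u):
    return -np.sum((u - 1.0) ** 2)

u = np.zeros(4)
for _ in range(1000):
    u = spgd_step(u, measure)
print(u)  # ends up near 1.0 in each element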
02-22-2021 04:03 AM
FPGA? A GPU might make sense too.
What library is Find Global Min on Surface in? It's not in LV by default.
It won't just run faster if you copy/convert the function, as FPGAs are slower than normal CPUs. To get the benefits of the FPGA, you'd need to convert the task into parallel tasks. That might not be easy.
Why does Find Global Min on Surface 10000X take hours? What is your data? What are you actually trying to do?
02-24-2021 06:44 AM
A GPU is not good enough when it comes to latency. I need to set and read a real-world signal to let the GA move to the next step, and it takes at least 10 ms or so to finish a cycle, which makes the process run at a fairly low rate.
I want to make sure I can not only solve this problem using GA but also lock the result, because in the real world the system is not really all that stable; it will drift around, and I need to make sure it won't go too far away.
02-24-2021 11:52 AM - edited 02-24-2021 11:52 AM
@jiangliang wrote:
A GPU is not good enough when it comes to latency. I need to set and read a real-world signal to let the GA move to the next step, and it takes at least 10 ms or so to finish a cycle, which makes the process run at a fairly low rate.
I want to make sure I can not only solve this problem using GA but also lock the result, because in the real world the system is not really all that stable; it will drift around, and I need to make sure it won't go too far away.
Well, that answers my first question/remark...
Still no idea about:
wiebe@CARYA wrote:
FPGA? A GPU might make sense too.
What library is Find Global Min on Surface in? It's not in LV by default.
It won't just run faster if you copy/convert the function, as FPGAs are slower than normal CPUs. To get the benefits of the FPGA, you'd need to convert the task into parallel tasks. That might not be easy.
Why does Find Global Min on Surface 10000X take hours? What is your data? What are you actually trying to do?
02-28-2021 03:59 AM
Well, it's an optical problem. Actually it can be solved by SPGD quite well; I wasn't aware of that at the beginning, so I was trying to use GA for it.