04-16-2013 06:37 AM
Hi
How do I implement a simple parallel for loop on the GPU? I can implement it on the CPU, but I need to use the GPU for parallelism.
Regards
Ritesh
04-16-2013 09:06 AM
Ritesh,
do you have the GPU Analysis Toolkit? If so, do you have an NVIDIA CUDA-capable graphics card or something else?
Norbert
04-17-2013 02:33 AM
Ritesh,
did you search for examples of this? There is also a very general tutorial on the topic. Have you read it? Did you check the links in the "next steps" section?
If you run one of the examples that uses the GPU functions, does it work?
thanks,
Norbert
04-19-2013 11:09 AM
Hi
Yes, I did look through the examples, but nothing helped me get started. My goal is to run IMAQ pattern matching 100 or more times in one iteration (on the GPU). But I also want to start small by doing a simple calculation on the GPU.
Regards
Ritesh
04-22-2013 12:20 PM
Hi reigngt09,
Here are some discussion forums and other resources that might help you get started with the GPU Analysis Toolkit.
Let me know if they help.
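In case a concrete example helps you start small: below is a minimal plain-CUDA sketch (not LabVIEW toolkit code) of how a simple for loop maps onto the GPU. Each iteration of the loop becomes one GPU thread that squares one array element. This assumes you have a CUDA-capable card and the CUDA toolkit installed; the kernel and variable names here are just for illustration.

#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles one array element -- this is the GPU
// equivalent of one iteration of the CPU for loop.
__global__ void squareKernel(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard: the grid may be larger than n
        out[i] = in[i] * in[i];
}

int main(void)
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    float h_in[n], h_out[n];
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc((void **)&d_in, bytes);
    cudaMalloc((void **)&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block; enough blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    squareKernel<<<blocks, threads>>>(d_in, d_out, n);

    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[10] = %f (expected 100)\n", h_out[10]);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

You would compile this with nvcc (e.g. "nvcc square.cu -o square", where the file name is up to you) and run it from the command line. Once a simple kernel like this works on your card, batching many operations per launch is the same idea: more independent work items, one thread (or block) per item.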