LabWindows/CVI Idea Exchange


One thing I come across very often is that the drawing updates of Table controls are very slow. Sometimes they bring the system close to hanging, without increasing the processor load, and user interface events (e.g. Commit events) are not recognized quickly enough (delays of up to 30 s, depending also on other events, e.g. Timer and hardware events).

 

On most of my projects I have changed the Table controls to Tree controls, which are much quicker at drawing updates. But sometimes I need Table features (e.g. ring cells, picture cells), so I am not able to change the Table into a Tree.

 

In my experience, Table controls can only be updated about every 2 s, while Tree controls can be updated every 200 ms or faster without blocking user interface events.

 

You should make Table controls nearly as quick as Tree controls...

 

Thanks

We have recently dropped CVI (as of 2009) as an option for use with our many data visualization applications. The graphics performance is just too slow and clunky to put up with any longer, and it gets worse as we add features or try to make 'native looking' applications (that resize, animate, etc.).

 

Things like dragging/updating cursors are noticeably clunky when you have more than one graph updating (linked cursors across more than one graph).

 

Updating datasets in large tables is slow enough that you can watch it step through the rows. Even using suggested tips, like setting ATTR_CTRL_VAL via SetTableCellAttribute instead of calling SetTableCellVal, when a large table has to update it's painfully noticeable. Basically, any operation that updates a large portion of the UI suffers.
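One partial workaround is to write whole cell ranges in a single call instead of looping over cells. A minimal sketch, assuming a numeric table (the function name and control IDs are made up):

#include <userint.h>

/* Write one column of doubles in a single call; looping over
   SetTableCellVal repaints on every cell, SetTableCellRangeVals
   repaints once for the whole range. */
void FillColumn (int panel, int table, int column, double *vals, int numRows)
{
    /* MakeRect (top, left, height, width); table indices are 1-based */
    SetTableCellRangeVals (panel, table, MakeRect (1, column, numRows, 1),
                           vals, VAL_COLUMN_MAJOR);
}

It helps, but as said above, with really large tables the repaint is still painfully visible.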

 

Another example: try to resize and move controls (as most other applications do) on EVENT_PANEL_SIZING.
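For reference, this is the kind of handler I mean, and it already feels sluggish with just a couple of controls (PANEL_GRAPH is a placeholder control ID):

#include <userint.h>

int CVICALLBACK PanelCB (int panel, int event, void *callbackData,
                         int eventData1, int eventData2)
{
    int width, height;

    if (event == EVENT_PANEL_SIZING || event == EVENT_PANEL_SIZE)
    {
        GetPanelAttribute (panel, ATTR_WIDTH,  &width);
        GetPanelAttribute (panel, ATTR_HEIGHT, &height);
        /* stretch the graph to track the panel; this repaint is what lags */
        SetCtrlAttribute (panel, PANEL_GRAPH, ATTR_WIDTH,  width  - 20);
        SetCtrlAttribute (panel, PANEL_GRAPH, ATTR_HEIGHT, height - 20);
    }
    return 0;
}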

 

I'm going to go out on a limb and guess that CVI doesn't use any graphics card acceleration, since running on a workstation or a netbook doesn't seem to make much difference in graphics performance.

 

Our clients notice when our applications look 'clunky' and 'slow' when compared to smooth, responsive apps/interfaces from competitors.  It's often the little things that make a big difference in appearance.


Greg

 

I apologize if this has already been addressed, but we identified this as an issue years ago and it forced us to implement a workaround.

 

We use PlotXY for large data sets (up to 16 MB has become pretty standard lately). To address the lag years ago, we pared the data to be viewed down to 4000 points in background code and sent that to PlotXY. However, when the user zooms (which is always), you must go back to the original data to find the new best 4000 points before plotting the new data. The code usually works well, but there are instances where it can shut down unexpectedly; not often enough to chase down fruitfully, though.

 

As for a truly automated approach, perhaps a more logical course of action is to retrieve the panel size and use that for the data sizing. Regardless, it would be nice to use the built-in CVI function directly, so that subsequent zooming etc. could work via standard functions.
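Something like this sketch is what I have in mind: size the reduced data set from the control's width and let a helper do the pare-and-plot (the function name, stride decimation, and buffer size are all made up for illustration):

#include <userint.h>

void PlotDecimated (int panel, int graph, double *x, double *y, int n)
{
    static double xd[8192], yd[8192];
    int width, stride, i, m = 0;

    /* size the reduced set from the graph control's pixel width */
    GetCtrlAttribute (panel, graph, ATTR_WIDTH, &width);
    if (width < 1) width = 1;
    stride = n / width;
    if (stride < 1) stride = 1;

    for (i = 0; i < n && m < 8192; i += stride)
    {
        xd[m] = x[i];
        yd[m] = y[i];
        m++;
    }

    DeleteGraphPlot (panel, graph, -1, VAL_DELAYED_DRAW);
    PlotXY (panel, graph, xd, yd, m, VAL_DOUBLE, VAL_DOUBLE,
            VAL_THIN_LINE, VAL_EMPTY_SQUARE, VAL_SOLID, 1, VAL_RED);
}

On every zoom you would call this again on the slice of the original data inside the new axis range; it would be much nicer if PlotXY did that internally.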

I hope CVI can provide its own interprocess communication mechanisms, such as shared memory, pipes, and message queues, so that we don't need to call the OS's API directly.
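Today that means Win32 boilerplate like the following for shared memory (the mapping name and function are made up); a built-in CVI mechanism could hide all of it:

#include <windows.h>
#include <string.h>

/* create (or open) a 4 KB named shared-memory block and map it in */
void SharedMemoryDemo (void)
{
    HANDLE hMap = CreateFileMapping (INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                     0, 4096, "Local\\MyCviSharedBlock");
    char *shared = (char *) MapViewOfFile (hMap, FILE_MAP_ALL_ACCESS, 0, 0, 4096);

    /* any process that opens the same mapping name sees this buffer */
    strcpy (shared, "hello from process A");

    UnmapViewOfFile (shared);
    CloseHandle (hMap);
}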

David

My $2000, run-of-the-mill desktop PC has 4 teraflops of brute computational power hiding in 4 GPUs, none of which is accessible to the programs I develop in LabWindows.

Shame!

With the release of OpenCL it is possible to build a "platform independent" GPU computing library. That would place LabWindows on the same or better footing as LabVIEW, which already has some GPU computing support.

The advantages are obvious: huge gains in data processing speed, real-time applications with streaming data, pattern-recognition (video) applications, image processing, and data-parallel tasks in the (technical) modeling arena.

I am playing with some optimization algorithms (genetic algorithms, evolutionary algorithms) that show amazing gains, since they are ideal for data-parallel execution! I am currently working on the specs for a new type of controller that would optimize several parameters to figure out the state of a tissue culture (expanding, producing, overgrowing, etc.), maximize productivity, and calculate the optimal settings using evolutionary algorithms... Any complex process control could take advantage of this kind of application, which is currently unavailable because of computational limitations. I had a previous optimization task that would have taken 150,000 years to complete using a brute-force algorithm in a "monofilament" (single-threaded) CPU-based application. Converted to a genetic algorithm, it gives me a good-enough solution in 3-4 days on the same standard PC. Now, with the GPU, that problem could be solved in fifteen minutes while expanding the evolutionary depth and finding better solutions with even more complex fitness functions.

Ask yourself what you want: tinkering with the conveniences of an IDE that already does the job well enough, or opening the door to new landscapes that could be conquered with the simple elegance and effectiveness of LabWindows and the power of GPUs?
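As a taste of how little host code plain-C OpenCL needs, here is a sketch that scores a whole population in one parallel pass; the population size, the toy fitness function, and the omission of error checking are all for illustration only:

#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

#define POP 4096   /* population size, arbitrary for this sketch */

static const char *src =
    "__kernel void fitness(__global const float *gene, __global float *score)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    float x = gene[i];\n"
    "    score[i] = -(x - 3.0f) * (x - 3.0f);   /* toy fitness, peak at 3 */\n"
    "}\n";

int main (void)
{
    float gene[POP], score[POP];
    int i;
    for (i = 0; i < POP; i++)
        gene[i] = 10.0f * rand () / RAND_MAX;

    cl_platform_id plat;  clGetPlatformIDs (1, &plat, NULL);
    cl_device_id   dev;   clGetDeviceIDs (plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context       ctx = clCreateContext (NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q   = clCreateCommandQueue (ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource (ctx, 1, &src, NULL, NULL);
    clBuildProgram (prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel (prog, "fitness", NULL);

    cl_mem gBuf = clCreateBuffer (ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  sizeof gene, gene, NULL);
    cl_mem sBuf = clCreateBuffer (ctx, CL_MEM_WRITE_ONLY, sizeof score, NULL, NULL);
    clSetKernelArg (k, 0, sizeof gBuf, &gBuf);
    clSetKernelArg (k, 1, sizeof sBuf, &sBuf);

    /* one work item per individual: the whole generation is scored at once */
    size_t n = POP;
    clEnqueueNDRangeKernel (q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer (q, sBuf, CL_TRUE, 0, sizeof score, score, 0, NULL, NULL);

    printf ("score[0] = %g\n", score[0]);
    return 0;
}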

 

Not sure if any other IDEs do this, but it would be nice to have an option so that, when cutting and pasting code to somewhere else in the file, the indentation corrects itself.

 

So instead of absolute indentation from where the code was previously (the column number of each line), it would be relative indentation based on the first line that was cut/copied and pasted.
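For example (a made-up snippet; ReportError is just a placeholder): cutting these two lines from inside a deeply nested block

        if (error < 0)
            ReportError (error);

and pasting them at the top level currently keeps the original columns. With relative indentation, only the one-level offset between the two lines would be preserved:

if (error < 0)
    ReportError (error);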

In principle, CVI supports external compilers such as Intel's ICL for an optimized release version, and I managed to successfully compile release versions using ICL 11.1.

 

However, documentation on this issue is sparse.

 

It is even worse if one attempts to use an external linker, which might be appropriate when using e.g. Intel's MKL. Here I would love to see support for external linkers in combination with improved documentation.

 

Similarly, CUDA is becoming more attractive for demanding floating-point applications; I would consider it very useful if NI could provide e.g. an application note on how to do this in an easy-to-follow tutorial.
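Until such a tutorial exists, the route that seems workable is to compile the CUDA code into a DLL with nvcc and load it from CVI through the Windows API; a rough sketch, where saxpy.dll and its exported GpuSaxpy function are hypothetical:

#include <windows.h>

typedef int (*GpuSaxpyFn) (int n, float a, const float *x, float *y);

void RunOnGpu (float *xData, float *yData, int numPoints)
{
    /* saxpy.dll built separately, e.g.: nvcc --shared -o saxpy.dll saxpy.cu */
    HMODULE lib = LoadLibrary ("saxpy.dll");
    if (lib != NULL)
    {
        GpuSaxpyFn GpuSaxpy = (GpuSaxpyFn) GetProcAddress (lib, "GpuSaxpy");
        if (GpuSaxpy != NULL)
            GpuSaxpy (numPoints, 2.0f, xData, yData);  /* y = a*x + y on the GPU */
        FreeLibrary (lib);
    }
}

An NI application note could spell out exactly this kind of recipe, including the calling-convention and runtime-library pitfalls.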

Well, the title says it all: extend the current partial support of the C99 standard to full support.
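For context, these are the kinds of C99 constructs one wants to compile everywhere (a made-up snippet, not a statement of exactly which of these CVI rejects today):

#include <stdio.h>

struct Point { double x, y; };

void Demo (int n)
{
    double samples[n];                        /* variable-length array */
    struct Point p = { .y = 2.0, .x = 1.0 };  /* designated initializer */

    for (int i = 0; i < n; i++)               /* declaration inside for loop */
        samples[i] = p.x * i;

    printf ("%g\n", samples[n - 1]);
}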

In the Advanced Analysis Library I'd like to see a function for efficiently computing binomial coefficients. The 'standard' definition

 

a! / ( b! * ( a - b )! )

 

is neither efficient nor safe from overflow. A better approach, for example, would be to compute ln(n!) via the gamma function, as outlined in Numerical Recipes.
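For illustration, a sketch of that approach built on the standard C99 lgamma function (essentially the bico routine from Numerical Recipes):

#include <math.h>

/* Binomial coefficient a-choose-b computed in log space, so the
   factorials never overflow; floor (0.5 + ...) removes rounding error
   since the exact result is an integer. */
double BinomialCoef (int a, int b)
{
    if (b < 0 || b > a)
        return 0.0;
    return floor (0.5 + exp (lgamma (a + 1.0) - lgamma (b + 1.0)
                             - lgamma (a - b + 1.0)));
}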