LabVIEW Idea Exchange


As much as I love Quick Drop, I don't love this:

 

quick_drop_delay.png

 

This delay occurs because we have to load all the palette information in the background to be able to drop objects by name.  You can choose Tools > Options > Controls/Functions Palettes > Load palettes during launch to front-load this delay during LabVIEW launch, but (1) many of you don't like the longer launch time any better, and (2) many users still don't know about this setting.

 

I'm sure there's a way to solve this problem and make Quick Drop instantly usable, but I don't know enough about the underlying palette code to get it done.  Solving this problem, by the way, could have positive side effects on LabVIEW launch time, palette search performance, and maybe some other areas as well.

There are a few simple math primitives that would be nice to have as native functions (polymorphic across all possible data types).  A few I can think of are absolute difference and A+-B (A and B in, A+B and A-B out).  I use these all the time for generating distances and ranges, and it would be nice to have native versions instead of making my own (one for each data type, i.e. U8, cluster, etc.).  I am sure there are a few more simple math functions used all the time that could be native functions.
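Purely as an illustration of what these two primitives would compute (Python used as a stand-in here, since the request is for native, polymorphic LabVIEW nodes; the function names are placeholders):

```python
def absolute_difference(a, b):
    """|A - B|: handy for distances and ranges."""
    return abs(a - b)

def plus_minus(a, b):
    """A and B in, (A + B, A - B) out -- the proposed 'A +- B' primitive."""
    return a + b, a - b

# Example: turn a center value and a half-width into range bounds
upper, lower = plus_minus(10, 3)            # (13, 7)
print(absolute_difference(upper, lower))    # 6
```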

 

 

Hi

 

Certainly, everybody tries to evaluate a VI in terms of performance. This includes the time a particular piece of code takes to run. We have to use the Tick Count VI in order to determine the execution time.

How about adding an additional button in the toolbar itself which measures the time taken by a particular piece of code to execute?

This can certainly be a good feature in LabVIEW!!!
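For reference, the measurement such a button would automate is the usual "read the tick count before and after the code" pattern; a rough text-language sketch (Python, purely illustrative):

```python
import time

def time_it(func, *args):
    """Rough analogue of bracketing a piece of code with two Tick Count reads."""
    start = time.monotonic()
    result = func(*args)
    elapsed_ms = (time.monotonic() - start) * 1000.0
    print(f"Execution time: {elapsed_ms:.3f} ms")
    return result

# Example: time a dummy workload
time_it(sum, range(1_000_000))
```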

I love Quick Drop, it's great and everyone should try it. BUT: try it with Darren's shortcuts, not the defaults LV comes with.

 

I cringe when I am using another PC at a client and I do CTRL-space to invoke quick drop and they have not got it configured to load at startup, as this normally causes LV to grind to a halt for 30 seconds or so. This feels like a lifetime when the customer is peering over your shoulder and you are billing by the hour!

 

The vast majority of the time I drop something using Quick Drop, it is using Darren's shortcuts. I don't really see the need for things like the Laplace transform* and hundreds of other seldom-used primitives to clog up Quick Drop. Having everything in Quick Drop slows it down at launch time, and makes it less usable in my opinion.

 

How about an option to remove the built-in Quick Drop items? Or to display only the ones that are listed in the LV.ini file.

 

*Disclaimer: I am sure there is someone out there who uses the Laplace transform daily, I mean you no offense!

 

Discussed here: http://lavag.org/topic/9570-new-wire-type-the-null-wire/

 

 

 

Summary: a right-click menu option "Visible Items -> Sequence Container" on all VIs, prims, etc. (almost anything on the diagram) to allow us to force flow control by wiring, without having to wire into/out of the actual node.

There has been a lot of discussion on this but no LabVIEW Idea (that I was able to find). Please move the Stacked Sequences from the Programming Palette to a "retro" or "classic" palette. This is to dissuade novices from overusing them.

 


Previous wording:

 

There has been a lot of discussion on this but no LabVIEW Idea (that I was able to find).

Retain them for legacy code but please remove them from the Programming Palette

 

Let's vote this in and get rid of Stacked Sequences forever.

I would go so far as to release a patch to remove them from all installed LabVIEW versions!


We have an application that extensively uses queues, both named and unnamed.  We suspect that one or more of these queues -- probably an unnamed one -- is not being properly drained, and over time is leading to a memory leak.  I would like a means to programmatically examine all the queues in use to determine whether any are growing without bound.  I looked for a way to do this and found this link.

 

The answer here is pretty unsatisfactory.  Our queues have a multitude of types, and replacing every get/release queue with a registration VI specific to that type would be very tiresome.  What we would like is a way to obtain a generic queue reference to each queue -- named or unnamed, suitable for use in Get Queue Status (providing the Elements Out output is not used, as that would require knowledge of type).

 

It would be fine if the refnums were "read only", that is, they could not be used to modify the queue in any way.  Come to think of it, read only refnums might be useful in themselves, but that's another post.

 

If anyone can think of a way to do this with the existing features of 8.6.1 or LV 2009, I'd like to know about it, but there seems to be no existing method for doing this.
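To make the request concrete, here is the kind of type-agnostic monitoring being asked for, sketched in Python with a hypothetical registry (all names are placeholders; today the LabVIEW equivalent would need a registration VI per queue type):

```python
import queue

# Hypothetical registry: every queue is registered once at creation, and a
# single monitor call later reports the depth of each one without knowing its
# element type -- the role a generic, read-only queue refnum would play.
registry = {}

def register_queue(name, q):
    registry[name] = q

def queue_status():
    """Return {name: number of elements} for every registered queue."""
    return {name: q.qsize() for name, q in registry.items()}

# Usage: register each queue where it is created, then poll periodically
acq = queue.Queue()
register_queue("acquisition", acq)
acq.put("sample 1")
print(queue_status())   # {'acquisition': 1}
```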

Cluster Size as a Wired Input:

 

  • Easier to see
  • More implicit
  • Nearly impossible to forget to set it (if it were a required input).

 Cluster Size.gif

As I see it, working with the IMAQ Image typedef always results in problems for me, so I've gotten to where I try to avoid using it.  Working with IMAQ should be so much better than it is.

 

Here are some Ideas:

 

1.  Allow IMAQ streaming to disk in RAW format, with a string input for a per-image header.  You could call this VI once to write a header to the file, then for every call after that, whatever you input into the string would be written between images.  This VI should allow you to append the images.  Most of the time I DON'T want to write out to a specified image format.  I want to stream raw video to disk.

 

Also, we are entering an era where lots of image data is being dumped to disk, so the raw stream-to-disk function would need to take advantage of multithreading and DMA transfers to be as fast and as efficient as possible.  The VI should allow you to split the output into 2 GB files if so desired.
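To illustrate the behavior being proposed (write a header once, then a caller-supplied string between raw frames, with optional splitting at 2 GB), here is a rough sketch in Python; the class, file layout, and names are placeholders, not an existing IMAQ API:

```python
import numpy as np

MAX_FILE_BYTES = 2 * 1024**3   # optional split threshold (~2 GB)

class RawImageStream:
    """Placeholder sketch: append raw frames to disk, each preceded by a
    caller-supplied string (timestamp, exposure settings, etc.)."""

    def __init__(self, base_path, file_header=""):
        self.base_path, self.index, self.written = base_path, 0, 0
        self.f = open(f"{base_path}_{self.index:03d}.raw", "ab")
        self.f.write(file_header.encode())

    def write_frame(self, frame: np.ndarray, frame_header=""):
        blob = frame_header.encode() + frame.tobytes()
        if self.written + len(blob) > MAX_FILE_BYTES:   # roll over to a new file
            self.f.close()
            self.index += 1
            self.written = 0
            self.f = open(f"{self.base_path}_{self.index:03d}.raw", "ab")
        self.f.write(blob)
        self.written += len(blob)

    def close(self):
        self.f.close()

# Usage: stream = RawImageStream("run42", "CameraX, 640x480, U16\n")
#        stream.write_frame(np.zeros((480, 640), np.uint16), "t=0.001 s\n")
```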

 

See the block diagram below for how this would change the LabVIEW code required to grab images and save them to disk.

IMAQ Code Processing.JPG

 

 

 

Also, it would be nice to be able to specify what sort of image you want to use in the frame-grabbing operation.  This could be set in the camera's .icd file, or by IMAQ create.vi.

Notice in the above example, I make IMAQ create.vi create U16 images, but when the image is output, I have no choice but to take it in I16 form.  I would like to have the image in its native U16 form.  I am converting the image from I16 to U16 with the "To Unsigned Word Integer" function.  I don't think that this should work, but it does, and the fact that it does helps me out.

 

In general it would be nice to have better support for images of all flavors: U16, U32 and I32 grayscale, and double-precision grayscale.

 

 While you are at it, you might as well add in a boolean input (I32 bit result Image? (T/F)) to most Image math processing functions, so the coercion is not forced to happen.

 

Really though....... LETS GET TO THE POINT OF THIS DISCUSSION.....

 

The only reason that I have a problem with all this is because of speed issues.  I feel that arbitrarily converting from one data type to another wastes time and processing power.  When most of my work is images, this is a serious problem.  Also, after checking out the image math functions, they are WAY slower than they need to be compared with their array counterparts.

 

Solution: spend more time developing the IMAQ package to be speedier and more efficient.  For example, IMAQ Array to Image is quite a beast and will essentially eliminate any quick processing time.  Why is this?  It doesn't need to be.  NI should deal with images internally with pointers, and simply point to the array data in memory.  I don't see how or why converting an array to an image needs to take so much time.

 

Discussions on this subject:

 

http://forums.ni.com/ni/board/message?board.id=200&thread.id=23785&view=by_date_ascending&page=1

 

And

 

http://forums.ni.com/ni/board/message?board.id=170&thread.id=376733

 

And

 

http://forums.ni.com/ni/board/message?board.id=200&message.id=22423&query.id=1029985#M22423

 

 

Hello:

 

I am going to be testing the 64-bit version of LabVIEW soon.  But the major code that I want to port to it also uses the Vision and Advanced Signal Processing toolkits.  Therefore, I am VERY, VERY interested in 64-bit versions of those toolkits.  I work at times with hundreds of high-resolution images, and effectively having no memory-addressing limitation, which 64-bit offers, will be a significant advance for me.  Right now, I post-process with the 64-bit version of ImageJ to do some of the work that I need with huge image sets.

Currently, you can only change plot properties (color, line width, line style, etc.) for the first plot.  Subsequent plots get default settings and cannot be changed.  My proposal would be to add support for all of the properties available in the property node.

I16 is currently the only output representation of this function, and it is particularly unhandy - I typically end up typecasting it 90% of the time. I propose that the output representation of this primitive be changeable via the right-click menu.

 

BoolOutputRepresentation.png

Currently, if you use the stock one and two button dialog VIs in your code, they will block the root loop while being displayed.

For those of you who don't know what the 'root loop' is, this is the core process of LabVIEW that many UI functions must execute under.  One of the key functions that executes in the root loop is VI Server calls to open a new dynamic VI.  So, if your code has multiple threads all performing operations that involve dynamic calls to other 'plug-in' VIs and one part of your code calls one of these stock dialog VIs, then all the other threads will be blocked until the user clears the dialog.

As a result of this, I have had to write my own 'root loop safe' versions of these dialogs to prevent this from happening.

 

As a side note, dropping down a menu and leaving it down without selecting anything also blocks the root loop.  It would be great if they could fix this too!

 

 

Right now, you can only set a single variant attribute at a time.  You can get "all" variant attributes, but it would be nice to be able to get a specific "group" of them.  (the same could be said for delete)

 

From a programmer standpoint, the nice way to accomplish this would be to wire in an array of strings for the variant attributes you are interested in, instead of placing the function in a for loop.  It would also be nice to wire in the data type for all of those attributes, so you don't have to call the variant-to-data.
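A text-language analogue of the proposed "get a group of attributes in one call" behavior (Python; the function name and data are placeholders only):

```python
def get_attribute_group(attributes, names):
    """Return the requested attributes in one call, in the requested order;
    a missing name comes back as None instead of raising an error."""
    return [attributes.get(name) for name in names]

attrs = {"gain": 1.5, "offset": 0.2, "units": "V"}
print(get_attribute_group(attrs, ["gain", "units", "range"]))   # [1.5, 'V', None]
```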

It would be nice if it were possible to add another 'Reentrant' setting.

This setting would make sure VI A always uses a specific instance, where VI B uses another instance. Sort of a single parent sub-vi.

This would allow for look-up VIs that have a separate set of data per VI that is calling them.

 

So you can store some variables that are only valid for a single calling VI; if another VI calls the same subVI, it will be calling a second instance and gets different variables back.

 

Ton

I hope this is the correct venue for ideas about the desktop execution trace toolkit.  It is a LabVIEW-related tool.

 

In the course of investigating several LabVIEW crashes, one of NI's AEs suggested the DETT.  This seemed like a really good idea because it runs as a separate application and therefore doesn't lose data on the crash.  Better yet, the last thing in the trace would be likely to be related to the crash.  So I started my eval period of the DETT.  I am debugging a LV 8.6.1 program, but since I have installed LV 2009, the 2009 version of the DETT came up when I started tracing.  It seemed to work, however.

 

Sadly, the DETT sucked.  After about a minute of tracing, it got a buffer overflow and popped up this dialog:

trace tool mem full.PNG

When I dismissed this, I got the usual popup about "Not enough memory to complete this operation."  Following this, the DETT was basically frozen.  I couldn't view the trace, specify filters, nothing.  I had to restart the application.  I tried a few hacks like disabling screen update while running, but nothing changed.  The DETT app was using about 466 MB at the time, and adequate system memory was available.

 

Possibly this is a stripped-down eval version.  If so, it is a mistake to make an eval version work so badly that one is persuaded not to buy the full version, which is the way I feel now.

 

I have some suggestions about how to improve the tool.  If these are implemented, I would recommend that we buy the full version.

 

  1. Stop barfing when the buffer overflows.
  2. A wraparound (circular) buffer should be an option.  Often one is interested in the latest events, not the first ones. 
  3. There should be a way to specify an event as a trigger to start and/or stop tracing, like in a logic analyzer.  Triggers could be an event match, VI match, user event, etc.
  4. The tools for analyzing events in the buffer (when it doesn't overflow) are useless. A search on a VI that is obviously present fails to find any event for that VI.  Searching should be able to be done based on something like the trigger mentioned above.
  5. The display filter is a good start but needs to be smarter.  It should be possible to filter out specific patterns, not just whole classes of events.
  6. The export to text is broken.  It loses the name of the VI that has a refnum leak.
  7. Refnum leak events are useless.  They don't give even as much as a probe would show, like what the refnum is to, the type, etc.
  8. The tool should be able to show concurrent thread/VI activity side-by-side, not serially, so one can see what is happening in parallel operations.

Do this stuff and you will have a useful tool.

 

John Doyle

Measures of Mean.vi only takes double precision float as input. When I use single precision to save space, and where A/D resolution is the limiting factor anyhow, it is pretty annoying to have to promote a whole array of data to double just to get the median (or another of the functions in Measures of Mean). Can't this thing be recompiled to be able to do both double and single?

Searching an array for a certain element can be expensive for large arrays. The speed could be dramatically increased if we can assume that the input array is sorted. The speed difference can be several orders of magnitude.

 

There is an old example that discusses this in more detail. I also wrote a similar tool long ago when I needed to recursively score positions for the tic-tac-toe solver, using a retrograde analysis similar to what's used to generate endgame tablebases for chess. (It literally took hours with the plain old "search array"!).

 

(A similar tool is the "search ordered table.vi", which only works for DBL and returns a fractional index.)

 

Suggestion:

 

"Search array" should have a boolean input (default=FALSE) where we can tell the algorithm that the array is sorted.

 

(The output would be exactly as before, with -1 indicating "not found".)  Setting this input to TRUE would switch to a binary search, where in the worst case only log2(N) instead of N comparisons need to be made (e.g. 20 instead of 1048576 for a ~50000x speedup!!!).

 

It could look as follows.

 

Of course it should continue to accept all possible datatypes (including clusters) and not be limited to simple datatypes as the polymorphic example quoted above.
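For reference, the search that the proposed boolean input would switch to is an ordinary binary search; a minimal sketch in Python (the real primitive would of course stay polymorphic over all LabVIEW types, as noted above):

```python
def search_sorted_array(array, element):
    """Binary search over an array assumed to be sorted ascending.
    Returns an index of the element, or -1 if it is not found (matching the
    existing Search 1D Array convention). Worst case ~log2(N) comparisons
    instead of N."""
    lo, hi = 0, len(array) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if array[mid] == element:
            return mid
        elif array[mid] < element:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(search_sorted_array([1, 3, 5, 7, 11], 7))   # 3
print(search_sorted_array([1, 3, 5, 7, 11], 4))   # -1
```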


" ... when a TDMS file is open, LabVIEW will create an index structure in memory that is used for random access to the file. The built-in LabVIEW TDM Streaming API will always create this index, even if you're just writing." - Herbert Engels

This feature will cause an apparent memory leak if your program periodically writes to a TDMS file over a long period of time. 

 

Feature Request: 

Disable indexing option for "TDMS Open.vi"

It's very rare for me to deal with large arrays, so it seems ridiculous when I'm debugging to find that VIs have not retained their values. I must turn on "retain values", re-run, and go through a huge test sequence again to trigger the problem.

 

All to save 100 KB of memory, when I've got 2 GB to throw around.

 

How about a global setting "RETAIN WIRE VALUES" and a per-VI setting to override the global?

 

Wouldn't take much...