LabVIEW Idea Exchange


Hello,

 

Use Tab to tab through the text fields and map Enter to the OK button.

 

This would streamline the development process when creating lots of VIs. It would allow: double-click the icon --> enter the first line --> hit Tab --> enter the second line --> hit Enter --> voilà.

 

As both my hands are on the keyboard at that point, this would save me from reaching for the mouse, all the way at the far end of the keyboard, just to click OK.

 

Pushing LabVIEW efficiency further and further, one second at a time!

 

As it takes on average 1 second to hit the OK button, if this feature is ever added I will have to create approximately 300 VIs to make this posting profitable... but that is just for myself; think of the millions of seconds that will be saved every year on the planet!

Make the graph smart enough not to draw points (symbols) on every value if doing so makes the points overlap.

 

If a plot is set to have points, the appearance of the plot will depend on how dense the points are. If the density is so high that there is no room left between the point symbols, the plot will instead look like a thick line without points. This often defeats the purpose of the points - namely, to make it possible to distinguish between plots (if you have enough plots, color and/or line style may not be enough, and points are often a safer bet than color or dotted lines).

 

Yes, you can achieve this yourself by decimating the data; however, this reduces the amount of information in the plot that could have been included if the graph simply left out some of the point symbols.
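For what it's worth, the thinning being asked for fits in a few lines. This is an illustrative Python sketch, not anything LabVIEW ships; the function name and the `min_gap` threshold are assumptions. The idea: every sample still contributes to the line, but a point symbol is drawn only when it sits at least some minimum pixel distance from the last symbol drawn.

```python
# Sketch of marker thinning a graph could apply internally: keep the
# full line, but draw a point symbol only when it is at least `min_gap`
# pixels away from the last drawn symbol. Illustrative names only.

def thin_markers(xs_px, ys_px, min_gap=8.0):
    """Return indices of points whose symbols should be drawn.

    xs_px, ys_px: point coordinates already mapped to pixels.
    min_gap:      minimum pixel distance between drawn symbols.
    """
    drawn = []
    last = None
    for i, (x, y) in enumerate(zip(xs_px, ys_px)):
        if last is None:
            drawn.append(i)
            last = (x, y)
            continue
        dx, dy = x - last[0], y - last[1]
        if (dx * dx + dy * dy) ** 0.5 >= min_gap:
            drawn.append(i)
            last = (x, y)
    return drawn

# A dense horizontal run: 100 points spaced 1 px apart.
idx = thin_markers(list(range(100)), [0.0] * 100, min_gap=8.0)
print(len(idx))  # -> 13 symbols drawn instead of 100
```

All 100 samples remain in the data; only the symbol rendering is decimated, which is exactly what manual decimation loses.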

 

 

Maybe this option already exists and I have never found it, but what I would like is the ability to stop auto-wiring from trying to wire two outputs together.

 

I know you can play around with the auto-wiring jump tolerance, but I still don't understand why LV will let you connect two outputs together and then complain (for obvious reasons) that there is an error on the block diagram. I would actually vote for an explosion noise every time we let the magic smoke out of the primitives by wiring their outputs together.

 

Have I missed something terribly obvious here?

 

For all the purists out there, I would be absolutely satisfied with a checkbox option that I can turn off when configuring LV, so if someone absolutely wants to wire outputs together they can keep the normal behaviour.

Add the capability to dynamically change the  "Chart history length"  property node of a Chart indicator.
Add the capability to embed a picture/image into the background of a Tab control and a cluster control using the control's property node.

Add a code snippet of a basic State Machine to the Programming >> Structures palette.

 

Palette

Hi

 

I think the title is self-explanatory. 🙂

 

How about modifying the thermometer indicator so that there are options in it for setting the colour (which fills the thermometer) for a range of temperatures?

For example, if the range is from 0 to 100, then:

GREEN for 0 to 30
ORANGE for 31 to 60
RED for 61 to 100

 

(not trying to make a traffic light system......hahaha)

 

The colour and the range could be set by the user.

If you are using Global Variables in a project and you drag a control from the global VI's front panel to the block diagram of another VI, the item gets placed on the BD as a constant. It would be more desirable for this action to drop a global variable node associated with the dragged control. If either the global VI in the project or the global VI's icon is dragged to a diagram, the result is a global variable node, but it always defaults to the first control in the global, so you have to change it manually (this default choice is natural, as LV would have no clue which control you wanted - fair enough).

 

This idea is to create some way of moving a control from the front panel of the global VI to the block diagram of another VI in such a way that LabVIEW knows to drop a global variable node instead of a constant. Because changing the standard behavior for dragging a control from any panel to any diagram would result in user interface inconsistencies, the suggestion is that ctrl+alt+drag be used to indicate that the user is performing a special drag, and when this drag is from the FP of the global VI to the diagram of another VI, LabVIEW should drop a global variable node.

 

Note: before I get people commenting that GVs are bad bad bad and should be removed - this idea does not attempt to rationalise the need for (or hatred of) GVs; let's try and leave that for a different thread.


I love Quick Drop; it's great and everyone should try it. BUT: try it with Darren's shortcuts, not the defaults LV comes with.

 

I cringe when I am using another PC at a client's and I press Ctrl-Space to invoke Quick Drop and they have not got it configured to load at startup, as this normally causes LV to grind to a halt for 30 seconds or so. This feels like a lifetime when the customer is peering over your shoulder and you are billing by the hour!

 

The vast majority of the time I drop something using Quick Drop, it is one of Darren's shortcuts. I don't really see the need for things like the Laplace transform* and hundreds of other seldom-used primitives to clog up Quick Drop. Having everything in Quick Drop slows it down at launch time and makes it less usable, in my opinion.

 

How about an option to remove the built-in Quick Drop items? Or to display only the ones that are listed in the LV.ini file?

 

*Disclaimer: I am sure there is someone out there who uses the Laplace transform daily, I mean you no offense!

 

OK, so the subject title is not really conveying what I want. It will be easier to describe with an example:

 

I use the CTRL+click_drag_release to copy LV objects ALL the time. Really, ALL the time.

 

At least 10 times per day, I seem to end up relocating my original object instead of making a copy of it. This of course breaks the wires associated with the original object and immediately calls for a Ctrl-Z! Same problem on the FP: I end up moving the original control instead of copying it.

 

This is probably happening because I am releasing the mouse button too soon.

 

Now, I use this technique in a variety of other packages (Visio etc.), and the exact same physical behaviour very seldom results in a moved object; 99% of the time it is a copy.

 

In LV it seems you have to pause for a fraction of a second before letting go of the mouse button (while still holding CTRL). 

 

What I would like is for the required delay between releasing the mouse button and releasing Ctrl to be decreased (or settable somewhere, for obsessive-compulsives like myself).

 

This may seem like a stupid, totally trivial request, but it really breaks my workflow having to reach for Ctrl-Z when the copy hasn't worked, and waiting for the object to be dropped also breaks my flow.

Currently, property nodes allow you to ignore errors that occur inside the property node; however, it would be useful if we could tell the property node to ignore input errors as well. In a block diagram, the "Error In" input of a property node is often used to force data flow, but if you are not concerned with input errors you are forced to clear them first. If I am updating a property of a control - enabling or disabling it, changing some attribute such as color or label text, or something similar - the input error is often meaningless. It would be nice if we could simply ignore any input errors.

We have an application that extensively uses queues, both named and unnamed.  We suspect that one or more of these queues -- probably an unnamed one -- is not being properly drained, and over time is leading to a memory leak.  I would like a means to programmatically examine all the queues in use to determine whether any are growing without bound.  I looked for a way to do this and found this link.

 

The answer here is pretty unsatisfactory.  Our queues have a multitude of types, and replacing every get/release queue with a registration VI specific to that type would be very tiresome.  What we would like is a way to obtain a generic queue reference to each queue -- named or unnamed, suitable for use in Get Queue Status (providing the Elements Out output is not used, as that would require knowledge of type).

 

It would be fine if the refnums were "read only", that is, they could not be used to modify the queue in any way.  Come to think of it, read only refnums might be useful in themselves, but that's another post.

 

If anyone can think of a way to do this with the existing features of 8.6.1 or LV 2000, I'd like to know about it, but there seems to be no existing method for doing this.
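To show the shape of the feature being requested, here is an illustrative Python sketch (not LabVIEW, and all names are assumptions): every queue is registered at creation under a weak reference, so one type-agnostic registry can be polled for element counts without keeping abandoned queues alive and without knowing the element type. The per-type registration pain the linked answer implies disappears when the registry never touches the elements themselves.

```python
import queue
import weakref

# Illustrative sketch: register every queue at creation in a single
# weak-reference registry, then poll element counts periodically to
# spot a queue that grows without bound. Weak refs ensure the registry
# does not itself keep released/abandoned queues alive.

_registry = {}   # name -> weakref to the queue
_counter = 0

def make_queue(name=None):
    """Create a queue and register it under a (possibly generated) name."""
    global _counter
    q = queue.Queue()
    if name is None:
        _counter += 1
        name = "unnamed-%d" % _counter
    _registry[name] = weakref.ref(q)
    return q

def queue_status():
    """Return {name: element count} for every queue still alive."""
    status = {}
    for name, ref in list(_registry.items()):
        q = ref()
        if q is None:
            del _registry[name]   # queue was released; drop stale entry
        else:
            status[name] = q.qsize()
    return status

producer_q = make_queue("acquisition")
leaky_q = make_queue()            # unnamed, never drained
for i in range(5):
    leaky_q.put(i)
print(queue_status())             # {'acquisition': 0, 'unnamed-1': 5}
```

Note that `queue_status()` here is read-only with respect to the queues, much like the "read only refnum" idea above: it reports counts without being able to enqueue, dequeue, or release anything.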

Hi,

This is my first thread in the Idea Exchange. The other "demands" have already been put forward by others, so I didn't need to start a thread. I don't know if there is already a thread on this subject; my brief search didn't show any results.

 

Well, I have come across this requirement, and I am sure many of you must have in the past.

I think it would be nice to have "free labels" on the front panel whose text values can be set programmatically.

 

There are labels which come with a 3D frame by default, and there are free labels without this 3D "raised" background. But their text cannot be set programmatically.

I know we can customize String indicators - make them transparent, remove their borders, and so on - so they appear like labels, yet with properties.

 

But, like many other demands here, I guess this one would be good too, to have shipped with the next LabVIEW. It would reduce the unnecessary burden of controls if I have many instances of this object on my FP.

 

Looking forward to comments.

I like very much the possibility to cycle between "wire" and "select" by pressing the space bar, when editing the block diagram. But usually, I need to edit string or path constants: so I'd like to select the "text edit" tool if I press the space bar again.

The same should apply to front panel editing: the cycle should be "operate value (finger)", "select (arrow)", and text. That's useful for editing labels.

I don't like using the auto-selection tool when editing BDs or FPs, because you need to be very accurate when pointing the mouse.

Let's suppose I'm sending a sequence of commands to a VISA-controlled instrument (see picture).

While debugging, I realize I missed a command in the sequence (in the following example, I forgot to set the output impedance):

insert_vi.jpg

As of now, I need to delete the wires between the 2nd and 3rd VIs, place the missing VI, then draw the four missing connections.

I'd like LabVIEW to recognize that the I/O pattern of the VI to be inserted is compatible with the existing flow, and to insert it correctly.

Something similar already happens when you place a new VI into the block diagram, and you drag it near a VI with compatible I/O (blinking wires appear).

Most new LabVIEW users start programming from scratch. They find it very hard to work out where a particular object (function) came from. If its palette location were displayed in the context help, it would be much easier to find that object.

 

Ex: for "Create User Event", the context help could display:

 

Location: Programming->Dialog&User Interface->Events->

 

(or in a shorter form that everybody understands).

 

Sorry if this idea has been discussed before, or if it's not needed at all. For a newbie this will definitely help.

As I see it, working with the IMAQ image typedef always results in problems for me, so I've gotten to where I try to avoid using it. Working with IMAQ should be so much better than it is.

 

Here are some Ideas:

 

1.  Allow IMAQ to stream RAW format to disk, with a string input for a header for each image. You could call this VI once to write a header to the file, then on every call after that, everything you input into the string would be written between images. This VI should allow you to append images. Most of the time I DON'T want to write out to a specified image format. I want to stream raw video to disk.

 

Also, we are entering an era where lots of image data is being dumped to disk, so the raw stream-to-disk function would need to take advantage of multithreading and DMA transfer to be as fast and as efficient as possible. The VI should allow you to split the output into 2 GB files if so desired.

 

See the block diagram below for how this would change the LabVIEW code required to grab images and save them to disk.

IMAQ Code Processing.JPG
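As a sketch of the requested writer (illustrative Python, not an existing IMAQ VI; the class name, file-naming scheme, and header layout are all assumptions): write the caller's string header before each frame's raw bytes, and roll over to a new file before a configurable size limit (e.g. 2 GB) would be exceeded.

```python
class RawStreamWriter:
    """Sketch of the proposed raw streamer: a caller-supplied string
    header is written before each frame's raw bytes, and output rolls
    over to a new file before exceeding max_bytes (e.g. 2 GB).
    Illustrative only; a real version would sit on DMA buffers."""

    def __init__(self, base_path, max_bytes=2 * 1024**3):
        self.base_path = base_path
        self.max_bytes = max_bytes
        self.part = 0
        self.fh = None
        self._open_next()

    def _open_next(self):
        if self.fh:
            self.fh.close()
        self.part += 1
        # File naming is an assumption: video.part001.raw, part002, ...
        self.fh = open("%s.part%03d.raw" % (self.base_path, self.part), "wb")

    def write_frame(self, header, frame_bytes):
        record = header.encode("utf-8") + frame_bytes
        if self.fh.tell() + len(record) > self.max_bytes:
            self._open_next()          # split before exceeding the limit
        self.fh.write(record)

    def close(self):
        self.fh.close()

w = RawStreamWriter("video", max_bytes=64)   # tiny limit for the demo
for i in range(4):
    w.write_frame("FRAME %d\n" % i, b"\x00" * 24)  # 24-byte dummy frame
w.close()
```

With an 8-byte header plus 24 bytes of frame data, each record is 32 bytes, so the 64-byte demo limit yields two files of two frames each; with the real 2 GB limit the same logic applies unchanged.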

 

 

 

Also, it would be nice to be able to specify what sort of image you want the frame-grabbing operation to use. This could be set in the camera's .icd file, or by IMAQ Create.vi.

Notice in the above example that I make IMAQ Create.vi create U16 images, but when the image is output I have no choice but to take it in I16 form. I would like to have the image in its native U16 form. I am converting the image from I16 to U16 with "To Unsigned Word Integer". I don't think this should work, but it does, and the fact that it does helps me out.

 

In general, it would be nice to have better support for images of all flavors: U16, U32 and I32 grayscale, and double-precision grayscale.

 

While you are at it, you might as well add a boolean input ("I32 result image? (T/F)") to most image math functions, so the coercion is not forced to happen.

 

Really though... LET'S GET TO THE POINT OF THIS DISCUSSION...

 

The only reason I have a problem with all this is speed. I feel that arbitrarily converting from one data type to another wastes time and processing power. When most of my work is images, this is a serious problem. Also, after checking out the image math functions, they are WAY slower than they need to be compared with their array counterparts.

 

Solution: spend more time developing the IMAQ package to be speedier and more efficient. For example, IMAQ Array to Image is quite a beast and will essentially eliminate any quick processing time. Why is this? It doesn't need to be. NI should deal with images internally via pointers, and simply point to the array data in memory. I don't see how or why converting an array to an image needs to take so much time.

 

Discussions on this subject:

http://forums.ni.com/ni/board/message?board.id=200&thread.id=23785&view=by_date_ascending&page=1

http://forums.ni.com/ni/board/message?board.id=170&thread.id=376733

http://forums.ni.com/ni/board/message?board.id=200&message.id=22423&query.id=1029985#M22423

 

 

Hello:

 

I am going to be testing the 64-bit version of LabVIEW soon, but the major code that I want to port to it also uses the Vision and Advanced Signal Processing toolkits. Therefore, I am VERY, VERY interested in 64-bit versions of those toolkits. I work at times with hundreds of high-resolution images, and effectively removing the memory-addressing limitation, as 64-bit does, will be a significant advance for me. Right now, I post-process with the 64-bit version of ImageJ to do some of the work that I need with huge image sets.

Currently, you can only change plot properties (color, line width, line style, etc.) for the first plot. Subsequent plots get default settings and cannot be changed. My proposal is to add support for all of the properties available in the property node.

What I want is the chance to set a new value in a "conditional probe".

For example:

I have a probe where I check whether a value is greater than some limit. Now my value is greater, and that is a bug.

Now I want to run the program from this point with a corrected/new value.

 

Jürgen