LabVIEW Idea Exchange


Up to LV 2010 the "Nonlinear Curve Fit.vi" has an input for data weighting. This weighting could of course be related to the measurement errors of the data, but as that relationship is not documented and not well known to the everyday user, it would be nice to have an additional instance where the input is not the "Weight" but the "Std. Dev. of y".

In a similar way, it is not entirely straightforward to calculate the standard deviations of the "best fit coefficients" from the "covariance matrix", so an additional output "Std. Dev. of best fit coefficients" would be very helpful.
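For comparison, here is how both conveniences look in SciPy (a sketch for illustration only, not the LabVIEW implementation): the standard deviation of y is passed directly as `sigma`, and the coefficient uncertainties come from the diagonal of the covariance matrix.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)

x = np.linspace(0, 4, 50)
y = model(x, 2.5, 1.3) + np.random.normal(0, 0.1, x.size)
sigma_y = np.full(x.size, 0.1)   # the proposed "Std. Dev. of y" input

# sigma plays the role of the weighting; absolute_sigma treats it as a true std. dev.
popt, pcov = curve_fit(model, x, y, sigma=sigma_y, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))    # the proposed "Std. Dev. of best fit coefficients" output
print(popt, perr)
```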

Right now in LabVIEW, plot data can be exported to Excel and saved in different image formats (*.png, *.jpeg, *.eps, etc.). In addition to the other properties, it would be a really cool and useful option to save the plot data as a LabVIEW figure with the extension "vifig" (*.vifig). The idea is that when the plot is saved as a LabVIEW figure, the tester could later reload the *.vifig with an interactive tool or something similar to view it, change the graphic object properties, manipulate the legend, x-axis label, y-axis label, and so on, and finally save it again.

 

Consider a situation where a user who does not know LabVIEW wants to analyse the data (zoom in, zoom out, find delays, etc.) and change a few properties of the plot; today they would need other programs such as Origin or Excel. This could be done very easily with the help of an interactive tool. Unlike LabVIEW, software such as MATLAB and Origin already offers this very useful option.

 

Thank you.

I'd like the Equal To Zero? and Not Equal To Zero? primitives to support the error cluster wire.  The node would look at the Error Code and compare it to zero, resulting in a Boolean accordingly.

 

Error_Cluster_Zero.png

 

Thanks,

 

Steve K

In my experience, it is common to open files written by other people and realize that constants are displayed in hex or binary format only after spending time debugging.

 

I think it would be nice to see the display format directly on the block diagram.

This could also apply to controls/indicators.

 

 dispFormatHint.png

A recent request in the LabVIEW Forums was to append additional lines of text to the end of a (LabVIEW-generated, I assume) HTML Report.  "Easy," I thought, "just use the Template input of New Report, move to the end, and Append away!"  So I tried it -- it overwrote the earlier Report.  Oops!

 

Examination of the Toolkit showed there were no functions analogous to "Find Last Row" or "Read ..." from the Excel-specific sub-Palettes in the HTML or Report sub-palettes.  Indeed, when examining the New Report functions for HTML, it was clear that the Template input was simply ignored!

 

It seems to me (naively) that it should be fairly simple to add code to (a) allow a Template to be specified for HTML, and (b) allow a positioning function to place the "appended" input at the end of the existing HTML <body></body> block.
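For what it's worth, the positioning part by itself is not complicated. A minimal Python sketch (not the Report Generation Toolkit API, just the idea) of appending new content immediately before the closing body tag of a template:

```python
def append_to_html_body(template_path, new_html, out_path):
    """Copy an HTML template and insert new_html just before </body>."""
    with open(template_path, "r", encoding="utf-8") as f:
        html = f.read()
    idx = html.lower().rfind("</body>")
    if idx == -1:
        html += new_html              # no </body> found: append at the end
    else:
        html = html[:idx] + new_html + html[idx:]
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(html)
```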

 

A potentially more "useful" (but, I would think, also fairly simple) extension would be to allow a "Read Next HTML Element" function, which would return the next Line, Image, whatever, along with a Tag identifying it, allowing the user to copy from the Template file selected Elements and append new Elements where appropriate.  For this to work "nicely", you would need to also return a "No More HTML Elements" flag when you reached the end of the Template.  This would allow the User to copy as much as needed, adding in new data where appropriate, as well as "appending to the end".  As with Excel, there would be no issue with having the Template File be the same or different from the final Save Report file.

 

Bob Schor

Currently LabVIEW Timestamps truncate values rather than round them when using the %u format code.  The numeric format codes all round values (0.499999 formatted with %.2g is 0.50).  I would like an alternative to the %u code that does the same.

 

See https://forums.ni.com/t5/LabVIEW/Timestamp-Formatting-Truncates-Instead-of-Rounds/m-p/3782011 for additional discussion.

 

 

 

My use case here was generating event logs.  I was logging the timestamp when parameters started/stopped failing.  To save space, I stored a set of data identical to the compressed digital waveform data type.

 

I had a couple of parameters which would alternate failing (one would fail, then the other would start failing as the first stopped failing).  As they had different T0 values, the floating-point math of applying an offset multiplied by dT to each T0 gave one value that truncated to 46.872 and another that truncated to 46.873, leading to a strange ordering of events in the log file (both would appear to be failing/not failing for a brief time).

 

To make the timestamps match, I made the attached VI (saved in LabVIEW 2015) to do my own rounding (based on the feedback in the linked forum post).
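The workaround itself is essentially "round first, then format." Here is a Python stand-in for the attached VI (hypothetical helper, purely to illustrate the idea):

```python
from datetime import datetime, timezone

def format_rounded(seconds_since_epoch, digits=3):
    """Round to the requested fractional-second precision before formatting,
    so e.g. 46.8729 s becomes .873 instead of truncating to .872."""
    rounded = round(seconds_since_epoch, digits)
    dt = datetime.fromtimestamp(rounded, tz=timezone.utc)
    frac = f"{rounded % 1:.{digits}f}"[2:]      # fractional part, already rounded
    return dt.strftime("%H:%M:%S") + "." + frac

print(format_rounded(46.8729))   # 00:00:46.873
```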

Interpolate 2D Scattered uses triangulation, which is fairly CPU-intensive.

 

The most difficult part (and the part where most of the CPU time is spent) is defining the triangles.  Once the triangles are specified, the interpolation is fairly quick (and could possibly be done on an FPGA).  This is the same idea as using the scatteredInterpolant class in MATLAB (see: http://www.mathworks.com/help/matlab/math/interpolating-scattered-data.html#bsow6g6-1 ).

 

The Interpolate 2D Scattered function needs to be broken up into two pieces, just like Spline Interpolant and Spline Interpolate are.
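For illustration, this is the split I mean, shown with SciPy rather than LabVIEW: the costly Delaunay triangulation is built once and then reused for any number of cheap interpolation calls.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
points = rng.random((500, 2))                        # scattered (x, y) locations
values = np.sin(points[:, 0]) * np.cos(points[:, 1])

tri = Delaunay(points)                               # "Interpolant": expensive, done once
interp = LinearNDInterpolator(tri, values)           # "Interpolate": cheap, reuses tri
print(interp(0.5, 0.5))
```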

 

 

 

As I see it, working with the IMAQ Image typedef always results in problems for me, so I've gotten to where I try to avoid using it.  Working with IMAQ should be much better than it is.

 

Here are some Ideas:

 

1.  Allow IMAQ streaming to disk in a RAW format, with a string input for a header for each image.  You could call this VI once to write a header, and then for every call after that, everything you input into the string would be written between images.  This VI should allow you to append the images.  Most of the time I DON'T want to write out to a specified image format; I want to stream raw video to disk.

 

Also, we are entering an era where lots of image data is being dumped to disk, so the raw stream-to-disk function would need to take advantage of multithreading and DMA transfer to be as fast and as efficient as possible.  The VI should allow you to split files into 2 GB sizes if so desired.
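A rough sketch of the file layout I have in mind (plain Python with hypothetical names, not an IMAQ API): the caller-supplied header string is written before each image's raw bytes, and a new file is started whenever the optional 2 GB limit would be exceeded.

```python
MAX_FILE_SIZE = 2 * 1024**3   # optional 2 GB split

class RawImageStream:
    """Append raw image data to disk, with a string header between images."""
    def __init__(self, base_path):
        self.base_path = base_path
        self.index = 0
        self.f = open(f"{self.base_path}_{self.index:03d}.raw", "ab")

    def write_image(self, header, image_bytes):
        # Start a new file if this image would push us past the size limit.
        if self.f.tell() + len(header) + len(image_bytes) > MAX_FILE_SIZE:
            self.f.close()
            self.index += 1
            self.f = open(f"{self.base_path}_{self.index:03d}.raw", "ab")
        self.f.write(header.encode("ascii"))   # caller-defined text between images
        self.f.write(image_bytes)              # raw pixel data, no image-format overhead

    def close(self):
        self.f.close()
```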

 

See the block diagram to see how this would change the LabVIEW code required to grab images and save them to disk.

IMAQ Code Processing.JPG

 

 

 

Also, it would be nice to be able to specify what sort of image you want to use in the frame-grabbing operation.  This could be set in the camera's .icd file, or by IMAQ Create.vi.

Notice in the above example that I make IMAQ Create.vi create U16 images, but when the image is output, I have no choice but to take it in I16 form.  I would like to have the image in its native U16 form.  I am converting the image from I16 to U16 with the "To Unsigned Word Integer" function.  I don't think this should work, but it does, and the fact that it does helps me out.

 

In general it would be nice to have better support for images of all flavors: U16, U32 and I32 grayscale, and double grayscale.

 

 While you are at it, you might as well add in a boolean input (I32 bit result Image? (T/F)) to most Image math processing functions, so the coercion is not forced to happen.

 

Really though... LET'S GET TO THE POINT OF THIS DISCUSSION...

 

The only reason I have a problem with all this is speed.  I feel that arbitrarily converting from one data type to another wastes time and processing power.  When most of my work is images, this is a serious problem.  Also, after checking out the image math functions, they are WAY slower than they need to be compared with their array counterparts.

 

Solution: spend more time developing the IMAQ package to be speedier and more efficient.  For example, IMAQ Array to Image is quite a beast and essentially eliminates any chance of quick processing.  Why is this?  It doesn't need to be.  NI should deal with images internally with pointers and simply point to the array data in memory.  I don't see why converting an array to an image needs to take so much time.

 

Discussions on this subject:

 

http://forums.ni.com/ni/board/message?board.id=200&thread.id=23785&view=by_date_ascending&page=1

 

And

 

http://forums.ni.com/ni/board/message?board.id=170&thread.id=376733

 

And

 

http://forums.ni.com/ni/board/message?board.id=200&message.id=22423&query.id=1029985#M22423

 

 

Hello:

 

I am going to be testing the 64-bit version of LabVIEW soon.  But the major code that I want to port to it also uses the Vision and Advanced Signal Processing Toolkits.  Therefore, I am VERY, VERY interested in 64-bit versions of those toolkits.  I work at times with hundreds of high-resolution images, and effectively removing the memory-addressing limitation, as 64-bit does, will be a significant advance for me.  Right now, I post-process with the 64-bit version of ImageJ to do some of the work that I need with huge image sets.

I need to adjust the sampling rate of a waveform. The TSA Resampling VI accomplishes this task, but it does not keep the t0 value, since it creates a new waveform instead of adjusting the original one.

I think an additional feature should be added to keep the time reference of the original waveform, or the documentation should state that you have to modify the VI yourself in order to keep t0.
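As a sketch of the behaviour I would expect (a Python stand-in for the waveform type, with hypothetical names): only dt and Y change, while t0 is carried through unchanged.

```python
def resample_keep_t0(t0, dt, y, factor):
    """Decimate Y by an integer factor; dt scales, t0 stays the same.
    (Naive decimation for illustration only -- TSA Resampling does real filtering.)"""
    return t0, dt * factor, y[::factor]
```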

 

TSA.png

 

 

 

How can a user know whether a block they are using is slow or fast?

For example, variant attributes are organized in a red-black tree, so a search is O(log(n)), while a regular array search is much slower, O(n).

At the moment there are two options:

 

1. Benchmark each code section.

2. Read about tricks on the forums.

 

Instead, if NI added an efficiency analysis to each function with a simple automatic tool, then we could easily find what we are searching for.
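For example, this is the kind of micro-benchmark one currently has to write by hand (a Python stand-in, not LabVIEW), just to learn that a sorted/tree lookup beats a linear array search:

```python
import bisect
import timeit

n = 100_000
data = list(range(n))       # sorted data
target = n - 1              # worst case for the linear search

linear = timeit.timeit(lambda: target in data, number=100)                    # O(n)
binary = timeit.timeit(lambda: bisect.bisect_left(data, target), number=100)  # O(log n)
print(f"linear: {linear:.4f} s, binary: {binary:.4f} s")
```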

 

Hi,

I would recommend bipolar logarithmic scales in graphs, or a new graph type. This one: http://www.didactronic.de/Halbleiter+Dioden/diodenkennl2.gif is not log-scaled, but it would make sense.

Sure, the log of negative values is not defined (and I think that is what gets calculated to plot the data), but for plotting measured data it would be great to calculate it like:

 

measurement / abs(measurement) * log(abs(measurement))

 

I could do this manually, but I prefer to have the real values on the scales, and not the log(measurement) value.
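A minimal sketch of that transform (Python, only to show the intent; putting the original values back on the scale is exactly the part that needs native support):

```python
import numpy as np

def bipolar_log(x, floor=1e-12):
    """sign(x) * log10(|x|), with a small floor so exact zeros don't blow up.
    The floor value is an arbitrary choice for this sketch."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.log10(np.maximum(np.abs(x), floor))
```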

 

kind regards

Hello All, I find myself often writing duplicate code just to be able to filter out the objects I want to work with from a collection that has many.

 

Examples of that can be:

- sending message objects in a queue,

- working with arrays containing multiple object types,

- parsing large objects composed of smaller ones,

- and many more...

 

The duplicated code is almost always the same: trying to cast to a more specific or more generic class, catching the error in a case structure, and invoking the method I'm interested in. Being able to perform the cast as part of the terminal would solve the problem with no extra code on the diagram. In the background, the compiler can treat this the same way as cast+case, but we wouldn't have to worry about it.
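Translated to a text language, the boilerplate looks like this (Python, hypothetical class names, purely to show the cast-and-case pattern the terminal could absorb):

```python
class Message:                      # base class of everything in the collection
    pass

class SpecificMessage(Message):     # the subclass whose method we actually need
    def handle(self):
        print("handled")

collection = [Message(), SpecificMessage(), Message()]

# Today's pattern: downcast, catch the "wrong class" case, call the method.
for obj in collection:
    if isinstance(obj, SpecificMessage):
        obj.handle()
    # else: skip, exactly like the error case of "To More Specific Class"
```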

 

The option could be enabled whenever you try to wire an object from the same hierarchy that does not have the specific method you need because it is on a different level of the hierarchy.

 

ClassIdea.png

 

Please let me know what you think.

 

My suggestion is to add functionality to a probe to provide the ability to replace the data passing through that point with a value of the developer's choosing, either for a single iteration or for multiple iterations.

 

Occasionally I have found instances where it would be greatly helpful to be able to inject a value into a 'wire' for either a single iteration or for every iteration, overwriting the value that would naturally be set by the previous operation. This may be a desire to insert an erroneous value to test my code's response to unexpected values without disrupting the code that is in place, or, as I hate to admit, to recover a runaway thread without disrupting the execution of the remainder of the application and any associated, unsaved data.

 

For example, consider a while loop that has managed to miss its exit condition due to an unforeseen race condition, holding up a weekend's worth of data collection from being saved to disk. Rather than being forced to click the dreaded Abort button, it would be nice to be able to simply force the next evaluation of the conditional terminal to be True, exiting the while loop and freeing the execution of the remainder of the application.

 

In the "R" language, Hadley Wickham is a superstar.  (personal, git)

 

One of his packages is "lubridate".  It has a stack of very useful time/date handling utilities.  They are most useful for "speaking human" or in average-person-facing user interfaces.

 

You should look at the utility here.  Things like elapsed time in days, or holiday handling built-in.  That might be useful to have in a clean and native package.

Instead of using sequence locals, it may be better to be able to use shift registers in a sequence structure.

I'd like a comparison block to tell us if a timestamp is a valid one or not.

By "not valid" I mean one showing something like "DD/MM/YYYY" instead of numbers, as a newly placed timestamp control does.

 

You can't just check "= 0" because actually an invalid timestamp could also be a NaN or any large positive/negative number!

I would like to be able to right click on any Queue or Notifier (that has been previously setup, i.e. wired) and find all of the Queue or Notifier functions associated with it. Extend the idea to specifically find where these functions are setup and terminated.

 

In the pictured example, I have been able to easily troubleshoot a problem where someone has released a Notifier in two separate locations...

 

22266i6DB10B9728172C12

 

 

 

Add CUDA support for analysis, computation and vision.

Is there any reason why backwards compatibility couldn't be added to the LabVIEW datalog access VIs? 

 

Why does a LabVIEW 8.x (or 7, or 6, or 5, or 4, or 3) datalog file NEED to be converted to LabVIEW 2009 datalog format?

 Shouldn't it be optional?

 

 

I have around 8 terabytes of data in LabVIEW 7 and 8 formats.  Switching to version 2009 will be very painful (as was the switch to other versions), in that it requires me to convert each datalog file.

 

- Yes, I know about SilentDatalogConvert, and yes, I could write a simple program to churn through all my data files. But that much data would take weeks or months of continuous chugging, which seems silly.

 

I'd even settle for backwards compatibility with caveats - read-only for example.

 

The cluster type of my datalog files hasn't changed in 15 years. Maybe do a cluster type check first to determine whether the format really requires an update, but even then, allow access without conversion.