LabVIEW Idea Exchange


This function already handles non-numeric characters such as the plus and minus signs, "e", "E" and the localized decimal point. It should also be able to handle the "thousands" separator (in the US, the comma).

 

Currently, if the string is "2,000,000" it returns simply "2". Ouch!
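Since LabVIEW diagrams cannot be pasted here, a minimal C sketch of the requested behavior (the function name `parse_grouped` and the hard-coded `,` are illustrative; a real implementation would use the localized separator):

```c
#include <stdlib.h>

/* Strip the grouping separator (',' in US locales) before the standard
 * conversion -- what an improved Fract/Exp String To Number could do
 * internally.  parse_grouped is a hypothetical name. */
double parse_grouped(const char *s)
{
    char buf[64];
    size_t n = 0;
    for (; *s != '\0' && n < sizeof buf - 1; ++s)
        if (*s != ',')            /* drop every thousands separator */
            buf[n++] = *s;
    buf[n] = '\0';
    return strtod(buf, NULL);     /* strtod alone stops at the first comma */
}
```

`strtod` on its own exhibits exactly the reported problem: given "2,000,000" it stops at the first comma and returns 2.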

Curvature in NI Vision has a property that makes a lot of sense: "If the current point is too close to either end of the array to choose the additional points, the curvature is calculated as 0." (Vision Manual).

"Too close" here means within roughly half the kernel size of either end of the array.

 

curvature.png

 

This makes no sense when I'm working on a contour that is "closed" (starting point = ending point) - for example, when I am trying to analyse a particle and its "turning points".

 

 

I'm losing 1 kernel width of data at exactly the starting point/end point - as marked in the picture - and in this synthetically generated and exaggerated case, I'm losing the information about one edge!

To fix this, I either rotate the ROI or change the search direction, calculate the missing data and replace the values in the curvature profile. (Or - calculate the curvature myself.) 

This makes absolutely no sense. 
Vision could easily recognize that starting point = ending point, or just let me set a boolean if there is a reason not to make this automatic. (I can't think of one.)

 

It would be nice if we could programmatically set the inclusion of the upper and lower limits of the "In Range and Coerce" function via two additional input terminals. These two extra terminals should be optional; when a value is wired, it should override the context-menu setting.

 

Currently this requires a case structure, with up to four cases to cover all combinations of the two settings.
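A C sketch of the proposed semantics (function and parameter names are illustrative): the two booleans replace the four-case structure entirely.

```c
#include <stdbool.h>

/* Proposed behavior: two extra boolean inputs select whether each limit
 * is included in the range test. */
bool in_range(double x, double lo, double hi, bool inc_lo, bool inc_hi)
{
    bool above = inc_lo ? (x >= lo) : (x > lo);  /* lower limit check */
    bool below = inc_hi ? (x <= hi) : (x < hi);  /* upper limit check */
    return above && below;
}
```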

This has been discussed a lot. The usual replies are that each array element cannot have a property of its own, and that this could require a huge amount of memory. However, there is already a property called Index Values which, when written with zero, returns the value of the active element (the visible or last-edited element). It would be very helpful if we could simply set the Index Values property ourselves and read back the value we need.

 

array element reference.png

 

This would be a huge help and save a lot of time when working by reference with an array whose data type is a variant (unknown), e.g. an unknown cluster.

 

I personally believe this is possible: if memory were really the obstacle, the existing Index Values property and its value read-back would not exist either. We just need to extend it. It would be useful in lots and lots of projects. 🙂

Hello,

 

Burst random signals are commonly used for shaker vibration tests. LabVIEW has several random signal generators among the built-in functions and in the Time Series Analysis toolkit, but no burst random signal. As long as LabVIEW lacks one, it is difficult to propose LabVIEW for shaker vibration testing.

We have to compete with NVH vendors such as LMS and B&K, and lacking such a basic function is an obstacle for the DSA business. Even Ono Sokki and A&D, automotive testing suppliers in Japan, can offer it; NI should too, to meet customers' expectations.
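For illustration, a burst random signal is just noise gated on and off periodically. A minimal C sketch (the names, the plain `rand()` noise source and the rectangular gate are simplifying assumptions; real burst random excitation is normally synchronized to the analysis frame):

```c
#include <stdlib.h>
#include <stddef.h>

/* Burst random: noise that is "on" for burst_len samples out of every
 * period samples and zero in between. */
void burst_random(double *out, size_t n, size_t period,
                  size_t burst_len, double amplitude)
{
    for (size_t i = 0; i < n; ++i) {
        if (i % period < burst_len)
            out[i] = amplitude * (2.0 * rand() / RAND_MAX - 1.0); /* noise on */
        else
            out[i] = 0.0;                                         /* quiet gap */
    }
}

/* Demo helper: returns 1 when every sample outside the burst window is zero. */
int demo_gaps_are_zero(void)
{
    double s[100];
    burst_random(s, 100, 50, 20, 1.0);
    for (size_t i = 0; i < 100; ++i)
        if (i % 50 >= 20 && s[i] != 0.0)
            return 0;
    return 1;
}
```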

 

Saku Kakibe

LabVIEW should support loop unrolling...

 

For those who do not know what loop unrolling is (from wikipedia http://en.wikipedia.org/wiki/Loop_unwinding)

Loop unwinding, also known as loop unrolling, is a loop transformation technique that attempts to optimize a program's execution speed at the expense of its binary size (space-time tradeoff). The transformation can be undertaken manually by the programmer or by an optimizing compiler.

 

The goal of loop unwinding is to increase a program's speed by reducing (or eliminating) instructions that control the loop, such as pointer arithmetic and "end of loop" tests on each iteration;[1] reducing branch penalties; as well as "hiding latencies, in particular, the delay in reading data from memory".[2] Loops can be re-written instead as a repeated sequence of similar independent statements eliminating this overhead.[3]

 

Example with textual programming language:

 

void process(int x);  /* stand-in loop body; "delete" is a reserved word in C++ */

int x;
for (x = 0; x < 100; x++)
{
   process(x);
}

 

Becomes:

 

int x;
for (x = 0; x < 100; x += 5)
{
   process(x);
   process(x + 1);
   process(x + 2);
   process(x + 3);
   process(x + 4);
}

 

Who is with me? 😉

Using the 3D Surface Plot in LV2011 it is possible to switch on or off "Surface", "Mesh" or "Normal".

Being in the need of the surface normal angle for some calculation I was told that this data is not accessible through property nodes.

 

I would really prefer accessing this data instead of calculating it twice!!!

 

 

And by the way: displaying large data arrays in 3D plots takes considerable computing power, yet runs on one core only. Wouldn't it be possible to have a parallelization option here?

I assume that the LabVIEW sort algorithm is a comparison-based sort (my guess is a combination of mergesort and insertion sort, perhaps even timsort). For arrays of integers or strings this works perfectly. For clusters and classes the implementation is not as smooth, since the algorithm always sorts on the first element of the cluster or class. For this reason, in my opinion, the sort VI should offer some way to customize the comparison it uses. Many languages provide this ability, but LabVIEW does not. It would be very useful when sorting classes and clusters.

 

An example of how this could be implemented is as follows. When the sort VI has a non-standard LabVIEW data type as input, one should be able to attach a reference to a VI (or pick one from a menu) with two inputs and one output. In this VI the user defines the comparison rules for the data type. The VI outputs a boolean telling whether input 1 is greater than input 2 (depending on which sorting algorithm LabVIEW uses).
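This is exactly the pattern textual languages use. In C, for instance, `qsort` takes the comparison as a callback, with the struct playing the role of a LabVIEW cluster:

```c
#include <stdlib.h>
#include <string.h>

typedef struct { char name[16]; int age; } person_t;  /* "cluster" */

/* User-defined rule: sort by age first, then by name. */
static int cmp_person(const void *a, const void *b)
{
    const person_t *pa = a;
    const person_t *pb = b;
    if (pa->age != pb->age)
        return (pa->age > pb->age) - (pa->age < pb->age);
    return strcmp(pa->name, pb->name);
}

void sort_people(person_t *p, size_t n)
{
    qsort(p, n, sizeof *p, cmp_person);  /* comparator passed as callback */
}

/* Demo helper: youngest person's age after sorting a small sample. */
int demo_youngest_age(void)
{
    person_t p[3] = { { "ann", 40 }, { "bob", 25 }, { "cat", 33 } };
    sort_people(p, 3);
    return p[0].age;
}
```

The proposed LabVIEW feature would be the graphical equivalent: a VI reference wired into Sort 1D Array instead of a function pointer.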

Well, I think the title says it all...

 

There are many threads on the NI website about this particular topic, but none of them really shows how to deal with this "problem" correctly!

 

It is a very common task to synchronize an AI signal (let's say 0-10 V from a torque sensor) with a counter signal (e.g. the angular position of a drive which causes the torque). How do I correctly display the torque over the drive angle in an X-Y graph?

 

It would be great if NI offered a reference example in the LV Example Finder showing how to solve such a task elegantly and efficiently.

 

I'm not sure if this is the appropriate place for this suggestion, but anyway...I would love to see this in the LV example finder!

 

Regards

A:T:R

As requested in this thread (http://forums.ni.com/t5/LabVIEW/Picture-variable-to-image-IMAQ/m-p/1626254), it would be nice if there were a function that could convert the picture type that is output from the Graphics and Sound toolbox into an IMAQ image type. 

 

Currently, the necessary steps are to use a trio of Picture to Pixmap, Unflatten Pixmap and then IMAQ ArrayToImage or IMAQ ArrayToColorImage, or to save the file as a .bmp and then load it as an IMAQ image.

 

Thanks!

 

In a complex project, when you build Packed Project Libraries (lvlibp) or DLLs, it is very important to compile your lvlibs in the right order

(from the lvlib with the fewest dependencies to the one with the most).

 

But we have no way to see the lvlib hierarchy

(we only have the VI hierarchy and the class hierarchy).
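"Compile from fewest to most dependencies" is a topological sort of the lvlib dependency graph, so a hierarchy view could also drive the build order automatically. A C sketch of the idea (the adjacency-matrix representation and all names are illustrative):

```c
/* Build order = topological sort of the lvlib dependency graph.
 * dep[i][j] != 0 means library i depends on library j; order[] is filled so
 * every library appears after everything it depends on.
 * Returns 0 on success, -1 if the dependencies contain a cycle. */
#define MAXLIB 32

int build_order(int n, int dep[][MAXLIB], int order[])
{
    int done[MAXLIB] = { 0 };
    int count = 0;
    while (count < n) {
        int progressed = 0;
        for (int i = 0; i < n; ++i) {
            if (done[i])
                continue;
            int ready = 1;
            for (int j = 0; j < n; ++j)
                if (dep[i][j] && !done[j]) { ready = 0; break; }
            if (ready) {
                order[count++] = i;   /* all its dependencies already built */
                done[i] = 1;
                progressed = 1;
            }
        }
        if (!progressed)
            return -1;                /* circular dependency */
    }
    return 0;
}

/* Demo helper: library 0 depends on 1, 1 depends on 2 -> build 2 first. */
int demo_first_built(void)
{
    int dep[MAXLIB][MAXLIB] = { 0 };
    int order[MAXLIB];
    dep[0][1] = 1;
    dep[1][2] = 1;
    if (build_order(3, dep, order) != 0)
        return -1;
    return order[0];
}
```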

 

 

Back in my Pascal days I could access a string as an array, which made processing data presented as a string much easier. In LabVIEW I have to convert the string to an array and then process the elements. This would not be too bad if there were an ASCII option in the radix for an unsigned integer or case structure.
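For comparison, this is what the Pascal-style access looks like in C, with an ordinary case structure on the character code:

```c
/* Index a string like an array and switch on each character's code --
 * the style of processing the poster misses in LabVIEW. */
int count_digits(const char *s)
{
    int count = 0;
    for (int i = 0; s[i] != '\0'; ++i) {
        switch (s[i]) {                 /* case structure on an ASCII value */
        case '0': case '1': case '2': case '3': case '4':
        case '5': case '6': case '7': case '8': case '9':
            ++count;
            break;
        default:
            break;
        }
    }
    return count;
}
```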

 

 

Up to LV 2010, "Nonlinear Curve Fit.vi" has an input for the data weighting. This can be related to the measurement errors of the data, but since the relation is not documented and not well known to the everyday user, it would be nice to have an additional instance whose input is not "Weight" but "Std. Dev. of y".

In a similar way, it is not totally straightforward to calculate the standard deviation of the "best fit coefficients" from the "covariance matrix". So an additional output "Std. Dev. of best fit coefficients" would be very helpful.

It would be nice if it was possible to add another 'Reentrant' setting.

This setting would make sure VI A always uses a specific instance, where VI B uses another instance. Sort of a single parent sub-vi.

This would allow for look-up VIs that keep a separate set of data per VI that is calling them.

 

So you could store variables that are only valid within a single caller VI; if another VI calls the same subVI, it gets a second instance and different variables back.

 

Ton

While creating a LabVIEW VI directly from Vision Assistant, the buffers need to be allocated manually.

optim.PNG

 

 

I have faced the minor hitch of having to change the palette to actually view the image when starting with a color image and ending with a binary one. Automatically allocating an optimized buffer for the image destination would save some time and avoid the "Incompatible image type" error. The problem multiplies when you have created a subVI and used an IMAQ Extract function before this, because the buffer automatically gets overwritten. I know this is a luxury, but still...

The Hough Transform would be a great addition for vision projects. It would be helpful in complex applications like lane detection; a readily available Hough transform with tweakable parameters would be of great help in the machine vision field.

LabVIEW's units support angular measurements of degrees, minutes, seconds, radians and steradians, but I see no support for a full revolution. If this were added, we could use 'rev/min', which is a very common unit.

 

I think that users don't really use units as much as they might because of limitations such as this. There are some other things that I would like changed with units, but this should be an easy one to fix.
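The missing unit is just a scale factor: 1 rev = 2π rad, so rev/min converts to rad/s by multiplying by 2π/60. In C:

```c
/* 1 rev = 2*pi rad, so rev/min -> rad/s is a fixed scale of 2*pi/60.
 * This is the conversion a 'rev' unit would encode. */
double rpm_to_rad_per_s(double rpm)
{
    const double PI = 3.14159265358979323846;
    return rpm * 2.0 * PI / 60.0;
}
```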

LabVIEW AF is a fantastic tool for creating multi-threaded messaging applications the right way. It promotes best practices and helps write big yet scalable applications.

I feel, however, that there is not enough included in the box 🙂 When looking at many other industry implementations of the actor model, e.g. Scala Akka, Erlang, Elixir, microservices in many languages, etc., you find that the frameworks usually include many more features. https://getakka.net/

 

I understand the decisions behind the design for AF yet would like to start a discussion about some of these.

I would like to see the following features being included into AF:
1. Ability to Get Actor Enqueuer by Path through hierarchy of actors e.g. /root/pactor/chactor could be used to get the enqueuer to chactor. This would probably require actors to have unique paths pointing to them. Having a system manager keeping all enqueuers on local system is not a bad thing. It is a good thing.

2. Ability to seamlessly communicate over the network and create your own queue specifications. For example, actors should be able to talk between physical systems. Using the Network Actor is not a good solution, because it is simply too verbose and difficult to understand. The philosophy of Akka should be embraced here: "This effort has been undertaken to ensure that all functions are available equally when running within a single process or on a cluster of hundreds of machines. The key for enabling this is to go from remote to local by way of optimization instead of trying to go from local to remote by way of generalization. See this classic paper for a detailed discussion on why the second approach is bound to fail."

3. Improved debugging features like MGI Monitored Actor being inbuilt. https://www.mooregoodideas.com/actor-framework/monitored-actor/monitored-actor-2-0/

4. Included Subscriber Publisher communication scheme as a standard way for communicating outside the tree hierarchy. https://forums.ni.com/t5/Actor-Framework-Documents/Event-Source-Actor-Package/ta-p/3538542?profile.language=en&fbclid=IwAR3ajPR1lvFDyPFP_aRqFZzxR4FCQXh2nB2z0LYmPRQlnvXnsC_GQaWuZQk

5. Certain templates, standard actors, HALs, MALs should be part of the framework e.g. TDMS Logger Actor, DAQmx Actor. Right now the framework feels naked when you start with it, and substantial effort is required to prepare it for real application development. The fact that standard solutions would not be perfect for everyone should not prevent us from providing them, because still 80% of programmers will benefit.

6. Interface-based messaging. The need to create messages for all communication is a major drag on productivity and speed of actor programming, and it also decreases readability. It is better with the BD Preview in the Choose Implementation dialog in LV19, but still 🙂

7. This is more of an object orientation thing rather than actor thing but would have huge implications for actor core VI, or Receive Message VI. Please add pattern matching into OO LV. It could look like a case structure adapting to a class hierarchy on case selector and doing the type casting to the specific class inside the case structure. You could have dynamic, class based behavior without creating dynamic dispatch VIs, and you would still keep everything type safe. https://docs.scala-lang.org/tour/pattern-matching.html

 

The natural way for programming languages to evolve is to take ideas from the community and build them into the language. This needs to become the LabVIEW way too. I saw a demo of LV20 features and I am really happy to see that many new features were inspired by the community. Let's take some ideas about actors and add them in.

 

I wanted to share my view on the direction I think AF should go. I see in it the potential to become the standard way to do big applications, but for that we need to minimize the barrier to entry.

LabVIEW comes with an example (Check TDMS File for Fragmentation.vi) to check whether a TDMS file is fragmented:

Check TDMS File for Fragmentation.vi Block Diagram

I created a subVI based on this example and added it to my personal library. How about having a "TDMS Is Fragmented" function natively in the "TDM Streaming" palette?

 

This is a proposal for the VI's icon:

TDMS is fragmented Icon.png

Hello, I would like to be able to copy the configuration of a specific variable in the Fuzzy System Designer. It is very tedious to specify the ranges and values for these variables one by one, and in some cases many variables are configured in a very similar way. There should be a copy button next to them.

 

Fuzzy System Designer Idea