03-13-2018 10:47 AM
Good Day!
I am working on a VI which will gather temperature, voltage, and current signals corresponding to critical values in a thermal cycle. I have spent a fair amount of time working through forums and tutorials to set up custom scaling equations for my voltage and current channels (see the attached VI), but I would also like to set up custom scaling equations for my thermocouple channels. I know that the Create Channel VI can be configured to read a thermocouple signal, but I don't have the option to add scaling equations to the thermocouple data when I do this. Custom scaling equations are necessary because each thermocouple in my system has a slightly different calibration equation (and therefore, using LabVIEW's built-in calibration adds uncertainty to my measurements).
I would like to add scaling equations to the thermocouple channels, preferably using a similar programming architecture to the one I used on the voltage and current blocks. However, I know this may not be possible, so I'm open to suggestions. Please take a look at my VI and let me know what you think is the most efficient way to accomplish this.
03-13-2018 12:00 PM - edited 03-13-2018 12:02 PM
I assume those calibration values for the temperature channels can change, so you need a way to load them from a file. The same is true for your voltage and current channels. You have the scaling constants hard-coded, and any time you run a calibration check, if those constants are wrong or have changed slightly, you'll need to change the code and recompile.
I would suggest storing your slope and y-intercept values in a CSV file or similar. When you run the program, the first step is to grab your calibration values. When you run a calibration routine, if those values need to be updated, just write the new constants to the CSV file. No need for a code recompile.
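In rough text form, that load-at-startup step might look like this (a minimal Python sketch, since a block diagram can't be shown inline; the file name and column layout are assumptions):

    # calibration.csv (assumed layout), one row per channel:
    # channel,slope,intercept
    # ai0,1.0023,-0.15
    import csv

    def load_calibration(path="calibration.csv"):
        """Read (slope, intercept) pairs keyed by channel name."""
        cal = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                cal[row["channel"]] = (float(row["slope"]), float(row["intercept"]))
        return cal

    m, b = load_calibration()["ai0"]
    scaled = m * 1.234 + b   # y = mx + b applied to one raw reading

After a calibration check, you edit the CSV and restart the program; the code itself never changes.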
Another way to do it is store your scales in MAX and reference them by name in your code.
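For reference, the same idea expressed through NI's nidaqmx Python API (a sketch of the DAQmx calls behind the LabVIEW VIs; the channel name and the scale name "MyLinearScale", assumed to already exist in MAX, are placeholders):

    import nidaqmx
    from nidaqmx.constants import VoltageUnits

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan(
            "cDAQ1Mod1/ai0",
            min_val=0.0, max_val=10.0,
            units=VoltageUnits.FROM_CUSTOM_SCALE,  # apply the named scale
            custom_scale_name="MyLinearScale",     # scale defined in MAX
        )
        reading = task.read()  # value comes back already scaled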
03-13-2018 01:47 PM
Thanks for the reply. We aren't calibrating this system very frequently, and we do it by hand, so that's why they are hard coded. I am familiar with storing scales in MAX and calling them by name, but when I have the thermocouple configuration selected for the Create Channel sub VI, there is no terminal available for the custom scaling. This is different from the current and voltage configurations, which have a terminal available for custom scaling equations.
03-13-2018 01:57 PM
@meierdh wrote:
Thanks for the reply. We aren't calibrating this system very frequently, and we do it by hand, so that's why they are hard coded. I am familiar with storing scales in MAX and calling them by name, but when I have the thermocouple configuration selected for the Create Channel sub VI, there is no terminal available for the custom scaling. This is different from the current and voltage configurations, which have a terminal available for custom scaling equations.
Don't the temperature functions return a temperature value, meaning a scale is not required? You aren't measuring a voltage and scaling it to a temperature; the hardware does that auto-magically.
03-13-2018 02:23 PM
You are correct. The 9213 card that I'm using returns a temperature. However, I need to either
a) Get the raw voltage from the card and convert that to a temperature (using my own calibration equation) after acquiring the raw voltage.
b) Get the temperature data from the 9213 and calibrate THOSE values, rather than calibrating a voltage.
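Option (b) in rough text form (a nidaqmx Python sketch; the channel names, thermocouple type, and calibration constants are assumptions):

    import nidaqmx
    from nidaqmx.constants import CJCSource, TemperatureUnits, ThermocoupleType

    CAL = {0: (1.0012, -0.21), 1: (0.9978, 0.34)}  # per-channel (m, b)

    with nidaqmx.Task() as task:
        for ch in CAL:
            task.ai_channels.add_ai_thrmcpl_chan(
                f"cDAQ1Mod1/ai{ch}",
                thermocouple_type=ThermocoupleType.K,
                units=TemperatureUnits.DEG_C,
                cjc_source=CJCSource.BUILT_IN,  # the 9213 has on-board CJC
            )
        temps = task.read()  # one reading per channel, in deg C
        corrected = [m * t + b for t, (m, b) in zip(temps, CAL.values())]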
03-13-2018 04:48 PM
@meierdh wrote:
You are correct. The 9213 card that I'm using returns a temperature. However, I need to either
a) Get the raw voltage from the card and convert that to a temperature (using my own calibration equation) after acquiring the raw voltage.
b) Get the temperature data from the 9213 and calibrate THOSE values, rather than calibrating a voltage.
First of all, what do you mean by "We aren't calibrating this system very frequently"? Once every 5 years is too often to make me want to recompile software just to update calibration values. It's just not a good idea.
It looks like you are already applying an equation to the temp values. If this is working, I would continue along those lines but improve it. My suggestion earlier was to read those calibration values from a CSV file or similar. Store them in an array and pass them into a subVI, along with your temp values, and run the calculations on the array. Much cleaner.
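That subVI boils down to a couple of lines (a sketch; the array shapes and values are made up):

    def scale_channels(samples, cal):
        """samples: one list of readings per channel; cal: [(m, b), ...]."""
        return [[m * x + b for x in ch] for ch, (m, b) in zip(samples, cal)]

    temps = [[24.9, 25.0, 25.1], [30.2, 30.1, 30.3]]  # 2 channels, 3 samples
    cal = [(1.0012, -0.21), (0.9978, 0.34)]           # loaded from the CSV
    print(scale_channels(temps, cal))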
03-13-2018 05:07 PM
I also notice that you are reading 5 samples on each channel, but when you split out the channels in your while loop, you are coercing each channel down to one DBL value. Is this intentional? Setting the Read VI to multiple channels, single sample would do the same thing.
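In other words (made-up values; each channel produced 5 samples, but only one DBL per channel survives the coercion):

    samples = {"TC0": [25.0, 25.1, 24.9, 25.0, 25.2],
               "TC1": [30.1, 30.0, 30.2, 30.1, 30.3]}
    coerced = {ch: vals[0] for ch, vals in samples.items()}  # 4 of 5 samples lost
    print(coerced)  # {'TC0': 25.0, 'TC1': 30.1}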
03-13-2018 06:17 PM
With regard to calibration - I don't know what you mean by "compiling software." If I need to update a calibration equation, I can update it by altering the values in the clusters for each create channel sub-VI. We are constantly switching out sensors into various channels, and the clusters allow a simple and easy way to update calibration equations without dealing with a CSV, which I don't know how to do. I appreciate your suggestion, but I am not interested in altering this configuration unless it will impact the accuracy of our data.
With regard to the temperature values... I am applying an equation to the values before they are displayed in graphs/indicators on the front panel. However, because the data is saved to a .TDMS file before the while loop, these equations are not applied to the data that is saved. If I am misunderstanding something about how this VI works (highly likely, as I am in the process of teaching myself LabVIEW), please let me know.
With regard to the sampling rates - I was not intentionally coercing the channels down to one value. I'll update the while loop to loop faster to fix this. Thank you for the tip.
03-14-2018 09:35 AM - edited 03-14-2018 10:01 AM
@meierdh wrote:
With regard to calibration - I don't know what you mean by "compiling software." If I need to update a calibration equation, I can update it by altering the values in the clusters for each create channel sub-VI. We are constantly switching out sensors into various channels, and the clusters allow a simple and easy way to update calibration equations without dealing with a CSV, which I don't know how to do. I appreciate your suggestion, but I am not interested in altering this configuration unless it will impact the accuracy of our data.
So are you planning on constantly running your test code from the LabVIEW development environment?
Once you have a program that works, you compile it into a stand-alone executable and run the exe, rather than starting LabVIEW and running your VIs.
At my company, all of our instruments and measuring devices (like active probes and current shunts) are calibrated yearly. It just doesn't make sense to have to go back into the source code every year to update calibration factors, because then I would also have to compile and distribute a new exe, AND every one of our test stations would require a DIFFERENT exe because the cal factors will be different.
That is why everyone is concerned about hard-coding calibration factors. So do as others have said and store the calibration factors in a text file that can be updated without touching the source code.
Not to mention NI licensing: you are only supposed to run ONE instance of the LabVIEW development environment at a time, even though you can install it on multiple computers. So if someone is using LabVIEW to run your test code, you cannot legally use the same LabVIEW serial number on another computer to develop code.
Compiled LabVIEW programs and their required runtime libraries are free to distribute in as many copies as you want. You can even charge money for your compiled LabVIEW EXEs (but not for the runtime libraries).
03-14-2018 11:47 AM
@meierdh wrote:
With regard to calibration - I don't know what you mean by "compiling software." If I need to update a calibration equation, I can update it by altering the values in the clusters for each create channel sub-VI. We are constantly switching out sensors into various channels, and the clusters allow a simple and easy way to update calibration equations without dealing with a CSV, which I don't know how to do. I appreciate your suggestion, but I am not interested in altering this configuration unless it will impact the accuracy of our data.
I'm not sure of your purpose behind the original post if you are not interested in changing anything. You asked for "the most efficient way of accomplishing this" and what you are doing is not efficient. For each of the TC channels you have an expression node with an equation mx+b. Those constants in the expression nodes are hard-coded values. As I mentioned earlier, I would store those constants in a text file, read them in during the startup of your application and use them in a for loop to scale your TC values (as I show in message 6 above). In my opinion, that would be the most efficient way of accomplishing the scaling of the values.
As far as reading from a CSV file, you can't get much more simple than this:
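A bare-bones text equivalent (a Python sketch; a header-less two-column file of slope,intercept rows is assumed):

    with open("calibration.csv") as f:
        rows = [[float(v) for v in line.split(",")] for line in f if line.strip()]
    slopes = [r[0] for r in rows]      # one slope per channel
    intercepts = [r[1] for r in rows]  # one intercept per channel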
I can't speak to the TDMS issue that you mention, as I have never needed the high-speed streaming capability of TDMS. If you are only reading 1-5 samples per channel, the argument can be made that it's not needed here either. I don't know whether you are interested in altering that configuration, but since the TDMS file isn't recording the scaled values you want, I think some alteration is necessary.