02-14-2023 11:00 AM - edited 02-14-2023 11:00 AM
@Yamaeda wrote:
You can open Flatten Pixmap.vi. There doesn't seem to be much strange going on. I think it'd be easier to use a couple of Split Byte functions instead of those ANDs and shifts (like they do in the color-to-RGB functions).
The (benign) coercion dots are an eyesore, but otherwise it is pretty compact and probably not the primary cause of the memory issues (unless we use 32-bit LabVIEW), except that it requires data copies. Still, it is a detour to convert to "image data" with interleaved 3 bytes/pixel, which then needs to be untangled again when writing the PNG.
The entire loop stack could be dramatically simplified by never going to a 2D array (... just to reshape it again to 1D at the end 😮 ). This may or may not avoid a data copy, depending on what the compiler does with it. 😄
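(G doesn't translate to text, so here is the same idea as a rough NumPy sketch, purely illustrative; the 0x00RRGGBB layout and the function name are assumptions, not the actual contents of Flatten Pixmap.vi.)

```python
import numpy as np

# Illustrative only: interleave a 1-D array of 0x00RRGGBB u32 pixels into
# 3 bytes/pixel RGB without ever building a 2-D intermediate array.
def flatten_pixmap_1d(pixels_u32: np.ndarray) -> np.ndarray:
    # Reinterpret each u32 pixel as its 4 raw bytes (little-endian order: B, G, R, unused),
    # the text equivalent of splitting bytes instead of ANDing and shifting.
    as_bytes = pixels_u32.astype('<u4').view(np.uint8).reshape(-1, 4)
    rgb = as_bytes[:, [2, 1, 0]]      # keep R, G, B; drop the unused byte
    return rgb.reshape(-1)            # interleaved 3 bytes/pixel, ready for the PNG writer
```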
02-14-2023 11:12 AM
@paul_cardinale wrote:
wiebe@CARYA wrote:
Exactly what sizes are you dealing with?
I have a faint recollection of a 'streaming' PNG 'library', used for a forum user who wanted to save circles (IIRC) in an nn000Xmm000 image. This is possible; the complete image never has to live in LabVIEW.
Biggest one is about 1400 x 9400
Before or after the doubling?
I assume memory isn't a problem? I mean, there's plenty available?
Note you're actually quadrupling the image, at least its surface area.
That's 200 MB in data (4 bytes per pixel), and that might be critical for 32 bit LabVIEW.
Are you on 32 bit?
This isn't a problem for Lv20, 64 bit:
To quote from Managing Large Data Sets in LabVIEW - NI Community:
"LabVIEW 8.x, due to its larger feature set, only allows a maximum array size of about 800 MBytes."
Although the linked article words it poorly, a few data copies could easily get near that critical memory limit.
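A quick back-of-the-envelope check (plain Python, purely illustrative; the 1400 x 9400 size and 4 bytes/pixel are taken from this thread):

```python
# Rough memory footprint of the pixmap data, before and after doubling each side.
w, h = 1400, 9400                  # original size reported above
bytes_per_pixel = 4                # 32-bit RGB pixmap data

original = w * h * bytes_per_pixel
doubled = (2 * w) * (2 * h) * bytes_per_pixel   # doubling each side quadruples the area

print(f"original: {original / 1e6:.0f} MB, doubled: {doubled / 1e6:.0f} MB")
# original: 53 MB, doubled: 211 MB -- and every extra data copy adds that much again
```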
02-14-2023 11:14 AM
wiebe@CARYA wrote:
@paul_cardinale wrote:
wiebe@CARYA wrote:
Exactly what sizes are you dealing with?
I have a faint recollection of a 'streaming' PNG 'library', used for a forum user who wanted to save circles (IIRC) in an nn000Xmm000 image. This is possible; the complete image never has to live in LabVIEW.
Biggest one is about 1400 x 9400
Before or after the doubling?
I assume memory isn't a problem? I mean, there's plenty available?
Note you're actually quadrupling the image, at least its surface area.
That's 200 MB in data (4 bytes per pixel), and that might be critical for 32 bit LabVIEW.
Are you on 32 bit?
This isn't a problem for Lv20, 64 bit:
To quote from Managing Large Data Sets in LabVIEW - NI Community:
"LabVIEW 8.x, due to its larger feature set, only allows a maximum array size of about 800 MBytes."
Although the linked article words it poorly, a few data copies could easily get near that critical memory limit.
That size is before doubling. And we are using LV 32-bit.
02-14-2023 11:33 AM
I guess that's a lot of memory for 32 bit LabVIEW.
I see few options, except:
1) start using 64 bit LabVIEW or
2) avoid the need to do this.
3) keep a 1x1 copy in memory and save it at 2x2 the size (using something like the little library I described); see the sketch below.
Option 2) would require more info on the 'why'. For display, setting the picture control's zoom factor to 2 does the trick, but I'm sure there are other reasons you need this.
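(Purely as an illustration of option 3, and not in G: a minimal Python sketch using the third-party pypng package, which accepts rows from an iterator. The function and variable names are made up; the point is only that the 2x image is written row by row and never exists in memory as a whole.)

```python
import numpy as np
import png  # third-party 'pypng' package

def save_doubled(rgb: np.ndarray, path: str) -> None:
    """rgb: (height, width, 3) uint8 array at the original 1x size."""
    h, w, _ = rgb.shape
    writer = png.Writer(width=2 * w, height=2 * h, greyscale=False, bitdepth=8)

    def rows():
        for row in rgb:
            wide = np.repeat(row, 2, axis=0).reshape(-1)  # double each pixel horizontally
            yield wide                                    # emit the row twice to double vertically
            yield wide

    with open(path, 'wb') as f:
        writer.write(f, rows())   # streamed row by row; the 2x image is never held whole
```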
02-14-2023 11:38 AM - edited 02-14-2023 11:51 AM
@altenbach wrote:
The entire loop stack could be dramatically simplified by never going to a 2D array (... just to reshape it again to 1D at the end 😮 ). This may or may not avoid a data copy, depending on what the compiler does with it.
Here's how I would probably have done the flatten pixmap. Not sure about performance, but definitely fewer buffer allocations. (The code could be further simplified (see insert) by using concatenating tunnels, but there might be a penalty. Not tested.)
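(For readers without the picture, the trade-off sketched in NumPy rather than G is roughly one preallocated output buffer filled in place versus building pieces and joining them at the end. The names are made up; this is not the posted diagram.)

```python
import numpy as np

def interleave_preallocated(pixels_u32: np.ndarray) -> np.ndarray:
    """One output buffer, filled in place (the 'fewer buffer allocations' approach)."""
    out = np.empty(pixels_u32.size * 3, dtype=np.uint8)
    out[0::3] = (pixels_u32 >> 16) & 0xFF   # R
    out[1::3] = (pixels_u32 >> 8) & 0xFF    # G
    out[2::3] = pixels_u32 & 0xFF           # B
    return out

def interleave_concatenated(pixels_u32: np.ndarray) -> np.ndarray:
    """Build per-channel pieces and join them at the end (akin to concatenating tunnels)."""
    r = ((pixels_u32 >> 16) & 0xFF).astype(np.uint8)
    g = ((pixels_u32 >> 8) & 0xFF).astype(np.uint8)
    b = (pixels_u32 & 0xFF).astype(np.uint8)
    return np.stack([r, g, b], axis=1).reshape(-1)
```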
02-14-2023 08:01 PM
@paul_cardinale wrote:
wiebe@CARYA wrote:
Exactly what sizes are you dealing with?
I have a faint recollection of a 'streaming' PNG 'library', used for a forum user who wanted to save circles (IIRC) in an nn000Xmm000 image. This is possible; the complete image never has to live in LabVIEW.
Biggest one is about 1400 x 9400
The G-Image library I mentioned earlier also has functions for resizing, and for loading and saving PNG files (to disk or to a byte stream), separate from LabVIEW's built-in PNG functions.
Using these functions, a 9400 x 1400 PNG image can be loaded, resized to twice its size, and then saved as PNG again without memory issues. The resize function offers different resizing filters, where Box is a simple nearest-neighbour-type filter. Task Manager shows LabVIEW 2020 (32-bit) peaking at a bit over 800 MB of RAM when running the snippet below. Repeating the same test with a 1400 x 9400 image shows similar results.
G-Image is in LV2020 on VIPM, but I can save it to LV2018 if you want to try it out.
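(For what it's worth, a Box / nearest-neighbour 2x upscale is conceptually just pixel duplication. A tiny NumPy sketch of that idea, not the G-Image API:)

```python
import numpy as np

# Box / nearest-neighbour 2x resize: every source pixel becomes a 2 x 2 block.
def upscale_2x(rgb: np.ndarray) -> np.ndarray:
    """rgb: (height, width, 3) uint8 array."""
    return np.repeat(np.repeat(rgb, 2, axis=0), 2, axis=1)
```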
02-15-2023 04:05 AM
@Dataflow_G wrote:
@paul_cardinale wrote:
wiebe@CARYA wrote:
Exactly what sizes are you dealing with?
I have a faint recollection of a 'streaming' PNG 'library', used for a forum user who wanted to save circles (IIRC) in an nn000Xmm000 image. This is possible; the complete image never has to live in LabVIEW.
Biggest one is about 1400 x 9400
The G-Image library I mentioned earlier also has functions for resizing, and for loading and saving PNG files (to disk or to a byte stream), separate from LabVIEW's built-in PNG functions.
Would that work for nn000Xmm000 images? That would need at least 400 MiB, up to 20 GiB...
This 'library' could compute the value of each pixel while streaming it, so the image never had to be allocated in memory.
IIRC, it was for rendering Fresnel patterns at a (very) high resolution (for lens design or holography, maybe?).
Anyway, it's OT, and I might not be able to find the library anyway.
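(The streaming idea itself is easy to sketch outside LabVIEW: compute each row on the fly and hand it straight to the PNG writer. Python + the third-party pypng package below, with a made-up radial pattern standing in for whatever the real renderer drew; none of this is the original library.)

```python
import math
import png  # third-party 'pypng' package

def save_pattern(path: str, width: int, height: int) -> None:
    writer = png.Writer(width=width, height=height, greyscale=True, bitdepth=8)

    def rows():
        for y in range(height):
            # Each pixel value is computed on the fly; only one row exists at a time.
            yield [int(255 * (0.5 + 0.5 * math.cos((x * x + y * y) / 5000.0)))
                   for x in range(width)]

    with open(path, 'wb') as f:
        writer.write(f, rows())
```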