Craig,
just a few points I might make - if you are being limited by the disk and
memory transfer rates, you might want to consider a RAID disk array to speed
up the disk transfer rate. If you are memory-bandwidth limited, then again
you might want to consider a processor that uses a 133MHz FSB (an 850 will
only use a 100MHz FSB; the 1000MHz part uses 133MHz). Only if you are 100%
sure you are being limited by the processor should you move to a dual-CPU
setup.
Ben
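
To tell which of those limits actually applies, one rough check is to compare how much CPU time a run consumes against the wall-clock time it takes. This is a minimal sketch in Python, not LabVIEW; `cpu_bound_work` is a hypothetical stand-in for the real workload, and the thresholds are only rules of thumb:

```python
# Rough bottleneck check: a CPU-time / wall-time ratio near 1.0 suggests the
# processor is the limit; a low ratio suggests the run is mostly waiting on
# disk or memory rather than computing.
import time

def cpu_bound_work(n=500_000):
    # Hypothetical stand-in for the real number-crunching step.
    total = 0
    for i in range(n):
        total += i * i
    return total

def cpu_wall_ratio(func):
    t0_wall = time.perf_counter()
    t0_cpu = time.process_time()
    func()
    wall = time.perf_counter() - t0_wall
    cpu = time.process_time() - t0_cpu
    return cpu / wall if wall > 0 else 0.0

if __name__ == "__main__":
    ratio = cpu_wall_ratio(cpu_bound_work)
    print(f"CPU/wall ratio: {ratio:.2f}")  # near 1.0 suggests CPU-bound
```

Running the same measurement around the disk-heavy vi instead should show a much lower ratio if the drive is the real constraint.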
"Kevin B. Kent" wrote:
> Craig,
> First off you know that the OS has to support dual procs. This means
> NT (and I guess 2000) or one of the Unices.
> You will not see a 2x performance increase; it may be more along the lines
> of 30-50% or so (I have no data to support this, however). LabVIEW will take
> advantage of the 2nd proc by doubling the number of threads you can use. As
> you suspect, though, the bottleneck is in the disk drive and memory. That
> said, you will probably see improvements if you run one app on one proc and
> the UI proc on the other. The entire machine's response will improve.
> I know it is not a hard answer, just some info to consider.
>   Kevin Kent
>
> Craig Graham wrote:
>
>> I currently have a dual-CPU motherboard system as my LabVIEW development/
>> data analysis machine, although only one CPU is present. I may soon have
>> the opportunity to either add a second CPU, or to replace the existing one
>> with a single faster one, and I'm wondering which way to go to improve
>> performance. I've done a comparison, and for the price of a single
>> PIII-1GHz, I could instead buy a pair of PIII-850MHz processors.
>>
>> Probably the two most intensive programs that this machine runs are:
>>
>> 1) A vi that reads several thousand datafiles, each of the order of a few
>> tens of K, does various calculations on the data and saves them to a few
>> new large files, from which it calls Origin (a graphing package) via DDE
>> to load these files and present the results. Both programs run in
>> parallel: the numbercruncher opens all the files successively and streams
>> the data point-by-point through the calculation and out into the
>> destination files. The graphing program loads each completed destination
>> file as it becomes available and, as the cruncher does the next set,
>> produces the hardcopy graphs for the completed set. Obviously a lot of
>> disk IO is involved; the destination files are a few megs in size.
>>
>> 2) A vi that loads a datafile and interactively allows the user to fit a
>> combination of many functions to the data, with perhaps up to tens of
>> parameters to vary to get the best fit. This is very CPU intensive.
>>
>> What kind of performance increase is seen in general in LabVIEW when
>> going to a dual CPU system? Does it turn out to be a waste, because of
>> the shared memory and disk access? I have a suspicion that multi-CPU is
>> only useful if you have many parallel CPU-intensive routines that can
>> execute entirely within the local cache of the processor, but I don't
>> think I have any such thing here.