07-06-2007 09:43 AM
07-09-2007 06:54 PM
Regards,
Nick D.
07-10-2007 02:21 PM
07-12-2007 12:27 PM
Hey Dominator,
Would you be able to use a third-party program to monitor the packets being sent? One such program is:
Can you try to verify that you are sending the same message in both Linux and VISA? If the same message is being sent, then I'd be curious why it's not responding the same way.
Regards,
Nick D.
07-19-2007 03:21 PM
07-19-2007 03:32 PM
This is my PC SYSTEM INFORMATION
National Instruments Technical Support Form (Version 4.0.2.3002)
Generated: Thursday, July 19, 2007 4:30 PM Eastern Daylight Time
System Information:
Operating System(OS): Microsoft Windows 2000
OS Version: 5.00.2195
OS Info: Service Pack 4
Processor: Intel(R) Pentium(R) 4 CPU 2.80GHz / x86 Family 15 Model 3 Stepping 4 / GenuineIntel / 2793 MHz
Number of Processors: 1
Physical Memory: 1,046,516 KB RAM
Drive C:\ 5,990,440 of 38,989,752 KB free
Drive E:\ 3,736,783 of 19,936,633 KB free
NI Software Information:
CVI Run-Time 7.1.0.307
LabVIEW Run-Time 8.0
Measurement & Automation Explorer 4.0.2.3002
Measurement Studio for VS2003 7.1
.NET Languages Hardware Support 7.1.0
Common 8.0.11.194
NI-VISA 8.0.11.138
NI-PAL Software 1.10.3f0
NI Spy 2.3.2.49152
NI-VISA 3.5
visa32.dll 3.5.0.49152
NiVisaServer.exe 3.5.0.49152
NIvisaic.exe 3.5.0.49152
MAX Summary:
07-20-2007 01:33 PM
Hi,
It looks like NI-VISA does not have a Port Mapper running, so I have to create a raw socket to talk through the interrupt channel instead of using RPC clnt_create(pcAddr, DEVICE_INTR, DEVICE_INTR_VERSION, "tcp").
Let me try the raw socket, and I'll let you know.
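By the way, if your RPC library still provides the classic clnttcp_create(), a possible alternative to a completely raw socket would be to keep using ONC RPC but supply the port explicitly, since clnttcp_create() skips the Port Mapper lookup whenever sin_port is already non-zero. This is only an untested sketch; pcAddr and intrPort are placeholders for the address and port received in the create_intr_chan() call from the PC, and MakeIntrClient is just a hypothetical helper name:

#include <rpc/rpc.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>

/* pcAddr  : dotted-decimal IP of the VISA host, as received in create_intr_chan()
   intrPort: interrupt-channel port number, also received in create_intr_chan()   */
CLIENT *MakeIntrClient(const char *pcAddr, unsigned short intrPort)
{
    struct sockaddr_in addr;
    int sock = RPC_ANYSOCK;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = inet_addr(pcAddr);
    addr.sin_port        = htons(intrPort);   /* non-zero port => no Port Mapper query */

    /* DEVICE_INTR / DEVICE_INTR_VERSION come from the rpcgen'ed VXI-11 header */
    return clnttcp_create(&addr, DEVICE_INTR, DEVICE_INTR_VERSION, &sock, 0, 0);
}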
Regards,
Dominator.
07-23-2007 12:02 AM - edited 07-23-2007 12:02 AM
Unlike DEVICE_CORE and DEVICE_ASYNC, the DEVICE_INTR channel is not intended to be created from the application side. The key difference between DEVICE_INTR and the other channels is that its RPC functions are "reverse direction" calls: the RPC client for DEVICE_INTR is the instrument side, and the server is the VISA/application side.
A further difference is that the VXI-11 Port Mapper is normally provided only for the DEVICE_CORE channel, on the instrument side. Although DEVICE_ASYNC has a similar caller/callee relationship for its RPCs, its actual port number is reported in the create_link() response (through the abortPort field of Create_LinkResp) rather than by a Port Mapper lookup for prog# 395184. The DEVICE_INTR channel is likewise unrelated to the Port Mapper, because the server (the VISA/application side) announces its own IP address and port number when the application invokes create_intr_chan(). The instrument thereby learns everything it needs to know about the DEVICE_INTR server (the VISA application side), namely its IP address and port number. This is why DEVICE_INTR does not involve the Port Mapper.
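For reference, the three channels and the direction of each connection can be summarized like this (the program numbers are the well-known VXI-11 assignments, normally taken from the rpcgen'ed header; the comments only restate the paragraph above):

/* VXI-11 ONC RPC channels (all are version 1) */
#define DEVICE_CORE   0x0607AF  /* 395183: app -> instrument, port resolved via Port Mapper            */
#define DEVICE_ASYNC  0x0607B0  /* 395184: app -> instrument, port taken from create_link() abortPort  */
#define DEVICE_INTR   0x0607B1  /* 395185: instrument -> app, addr/port supplied by create_intr_chan() */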
If your test environment is running under Win32, I recommend installing a network capture tool in order to spy on the traffic. (I personally prefer "Squeezer", but it may not have an English GUI... it always appears in Japanese on my PC.) Then try the VISA calls viOpen() and viEnableEvent() for SRQ. create_intr_chan() will be invoked, and immediately afterwards the "reverse direction" RPC call will follow. From then on, whenever your instrument generates an SRQ, device_intr_srq() will be invoked from the instrument. Its server entity is normally embedded inside VISA, but the call will be visible to a network spy tool.
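For what it's worth, the minimal VISA sequence that should trigger that create_intr_chan() exchange on a TCPIP INSTR resource looks roughly like this (error checking omitted; the resource string and timeout are placeholders):

#include "visa.h"
#include <stdio.h>

int main(void)
{
    ViSession   rm = VI_NULL, vi = VI_NULL;
    ViEventType evtType;
    ViEvent     evt;

    viOpenDefaultRM(&rm);
    /* Placeholder resource string -- substitute your instrument's address */
    viOpen(rm, "TCPIP0::192.168.1.10::inst0::INSTR", VI_NULL, VI_NULL, &vi);

    /* Enabling SRQ events is the step that should make VISA call
       create_intr_chan() and accept the reverse-direction RPC connection. */
    viEnableEvent(vi, VI_EVENT_SERVICE_REQ, VI_QUEUE, VI_NULL);

    /* Each device_intr_srq() sent by the instrument should surface here. */
    if (viWaitOnEvent(vi, VI_EVENT_SERVICE_REQ, 10000, &evtType, &evt) >= VI_SUCCESS)
    {
        printf("Got SRQ event\n");
        viClose(evt);
    }

    viDisableEvent(vi, VI_EVENT_SERVICE_REQ, VI_QUEUE);
    viClose(vi);
    viClose(rm);
    return 0;
}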
> It looks like NI-VISA does not have a Port Mapper running
I think that is a perfectly normal implementation. In the VXI-11 architecture, the Port Mapper is only needed for establishing the DEVICE_CORE channel.
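So in a packet capture, the only Port Mapper transaction you should see is the one that resolves DEVICE_CORE on the instrument, roughly the equivalent of this (sketch only; the instrument IP is a placeholder):

#include <rpc/rpc.h>
#include <rpc/pmap_clnt.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    struct sockaddr_in inst;
    memset(&inst, 0, sizeof(inst));
    inst.sin_family      = AF_INET;
    inst.sin_addr.s_addr = inet_addr("192.168.1.20");   /* placeholder instrument IP */

    /* Ask the instrument's Port Mapper (port 111) where DEVICE_CORE listens. */
    unsigned short corePort = pmap_getport(&inst, 0x0607AF /* DEVICE_CORE */, 1, IPPROTO_TCP);
    printf("DEVICE_CORE is on port %u\n", (unsigned)corePort);   /* 0 means the lookup failed */
    return 0;
}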
Message edited by Makoto on 07-23-2007 02:03 PM.
07-31-2007 10:59 AM
08-02-2007 12:21 AM
As for the raw-socket sender routine, it looks a bit strange to me.
The contents of tdsIntrSrqData (of type INTR_SRQ_DATA) look okay.
The part that strikes me as strange is:
//Send data to TCP Socket
*s_pobjTcpClientInterrupt << sData;
I believe the above stream I/O (?) operator sends the given data (sData in this case) over the raw TCP/IP socket. However, in your code sData is a string type and appears to contain a human-readable "hex dump" of the tdsIntrSrqData contents. The data must be sent out as raw binary, not as a formatted string. Or does that socket stream I/O call accept a hex-dump string and convert it back to binary when sending?
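What I mean is roughly the following (just an untested sketch: 'sock' is a placeholder for whatever native socket handle sits behind s_pobjTcpClientInterrupt, and in a real VXI-11 interrupt channel the data also needs network byte order and ONC RPC record framing):

#include <sys/types.h>
#include <sys/socket.h>   /* Winsock builds would use winsock2.h and int instead of ssize_t */

/* Sketch only: send the structure itself, byte for byte, over a connected TCP socket. */
static bool SendRaw(int sock, const INTR_SRQ_DATA &data)
{
    const char *p   = reinterpret_cast<const char *>(&data);
    size_t      len = sizeof(data);
    while (len > 0)
    {
        ssize_t n = send(sock, p, len, 0);
        if (n <= 0)
            return false;              /* error or connection closed */
        p   += n;
        len -= n;
    }
    return true;
}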