Hi guys,
I have been trying to implement a simple client/server pair of apps to
help me measure communication performance. The server runs on a custom
ZYNQ board and is only supposed to send packets, while the client runs
on a PC and is only supposed to receive them. I am using lwIP v1.4.1.
This is the main loop of the server app:
for (i = 0; i < 8000; i++)
{
if ((nsent = send(clisock, sendbuf, TCP_MAX_DATA_LEN, 0)) < 0)
{
xil_printf("send error! %d\r\n", nsent);
break;
}
xil_printf("i: %d, nsent: %d\r\n", i, nsent);
}
send(clisock, sendbuf, 10, 0); /* short final send so the client knows to stop */
The main loop of the client app:
while (1)
{
nrecv = recv(sock, g_recvline, TCP_MAX_DATA_LEN, 0);
if (nrecv < TCP_MAX_DATA_LEN)
break;
}
The problem is that after some number of packets, the client app receives
fewer than TCP_MAX_DATA_LEN (which is 1446) bytes even though no packet
of a smaller size should arrive at that point. For example, after the
1013th packet is sent, the client receives less than 1446 bytes.
But the real problem is that send() always returns 1446! I have
verified on the PC side that one received packet is indeed smaller
than 1446. I am providing a capture file (please download it from this
link http://s000.tinyupload.com/?file_id=02107988891904422407); the communication
starts at packet no. 55 (192.168.0.101 is the PC, whereas 192.168.0.240 is the ZYNQ
board). Then take a look at packet no. 466; I think that is the packet
that confuses the client app.
Everything I have described so far happens when I DON'T have the
xil_printf("i: %d, nsent: %d\r\n", i, nsent) call in the server app, or when I
only print nsent. But when I add that print, the server app works as
expected (wtf?). I have also made applications in which the PC sends
packets and the server receives them, and in that case everything works as
expected regardless of whether any prints are present.
What do you think of this problem? I would be really grateful if you
could help me.
Best regards,
Nenad