


From: Alain M.
Subject: Re: [lwip-devel] lwip deadlock in tcpip_apimsg using FreeRtos ST Arm Port with lwip sockets
Date: Fri, 23 Jan 2009 14:57:57 -0200
User-agent: Thunderbird 2.0.0.17 (X11/20080914)

Hi Simon,

I have been thinking about this and a simple idea struck me!

There is some talk about creating a new socket interface, so why not do both things together:

a new socket API that is thread-safe!

That way, compatibility would not be an issue, as it would just be one more compile-time option (there are already 2 socket APIs), and at the same time it could have a new prerequisite from the OS. And it could be used as an alternative only by those who need it...

This is especially interesting because there *seem* to be *volunteers* for the implementation of a new socket API :) :)

Alain

address@hidden wrote:
My previous post was a little short because I didn't have the time, so I'll try again:

What you wrote is correct, except the fix is not that easy: making the semaphore a counting one would not help much, as you couldn't tell which operation has completed when the semaphore is signaled. The solution would be for each task that accesses the socket to have its own semaphore, if we didn't want to create a new semaphore for each call. So I'm afraid this won't be that easy to fix.

However, I don't think you want to have multiple tasks reading or multiple tasks writing at the same time: what most users asking this question want is to have one thread reading while another thread writes (i.e. full-duplex).

This would be easier to solve if we had one semaphore for read operations and one for write operations. Still, I have to disappoint you about a planned fix for this. The current position of lwIP is simply 'not supported'! :-(

Simon


M T wrote:
I'm seeing a very strange deadlock when multiple FreeRtos threads access the same socket. For some reason, one of the tasks blocks when waiting for the apimsg->msg.conn->op_completed semaphore. I believe this semaphore is used to ensure that a message was processed by the lwip internals. Has anybody seen this issue?

I'm not an expert at the internals of lwip, but I tried to trace the problem down. Please let me know if my reasoning is not correct: Essentially, any message that needs to go to the internals uses the sys_mbox_post function, which enqueues the message into a queue. The receiver of the queue routes the message to the correct processing entity based on the message and signals the semaphore, indicating that the message was processed. However, if you are using the socket API, the semaphore used is the socket's semaphore (and it is a binary semaphore, not a counting one). So, when multiple tasks try to send a packet, there is a possibility that one of the tasks will starve, because the receiver task has already given the semaphore once and will not give it a second time (I'm not entirely sure about this statement).

I modified the tcpip_apimsg function to wait if a message is being processed and then post the message. This seems to have reduced (but not eliminated) the problem.

Please let me know if my reasoning makes sense and if you guys have seen a fix for this problem...

Thanks,
MT
------------------------------------------------------------------------

_______________________________________________
lwip-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/lwip-devel



