[Spread-users] Threading of spread daemon

Jonathan Stanton jonathan at cnds.jhu.edu
Thu Jun 13 10:39:31 EDT 2002


On Thu, Jun 13, 2002 at 06:24:16PM +0400, Muthal Sangam wrote:
> Hi all,
> 
> I am interested in implementing a distributed lock manager and other tricky
> distributed services using spread. But these services are meant to be used
> from inside a unix kernel :-)  There are differences in the runtime
> environment regarding which I need some help/guidance.

Very interesting.
> 
> I understand spread's present architecture as follows:
> A spread thread (process) does, SERIALLY & IN THE ORDER DESIRED,
> 1. Protocol processing
> 2. Handling requests from apps (join/send)
> 3. Queuing of events to apps (message recv / leave events)
> 
> Applications queue requests and read events from a bi-directional stream
> socket. When there are no events, they will block. Multiple threads in a
> single application process sharing a group under one process name are
> protected against inconsistent actions of each other in the thread-safe
> user library via locks.

All correct.
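To make the last point concrete, here is a minimal sketch (not the actual Spread library code; all names are hypothetical) of how a thread-safe client library can let several threads share one daemon connection by serializing whole requests under a lock. An in-memory buffer stands in for the stream socket:

```c
/* Sketch only: several app threads share one connection to the daemon.
 * A mutex around each complete request keeps the byte stream the
 * daemon sees free of interleaved partial writes. */
#include <pthread.h>
#include <string.h>

typedef struct {
    pthread_mutex_t lock;     /* serializes access to the shared stream */
    char            out[256]; /* stands in for the socket to the daemon */
    size_t          used;
} conn_t;

void conn_init(conn_t *c)
{
    pthread_mutex_init(&c->lock, NULL);
    c->used = 0;
}

/* The whole request is copied under the lock, so concurrent callers
 * cannot corrupt each other's framing. Returns -1 when out of room. */
int conn_send(conn_t *c, const char *msg, size_t len)
{
    int rc = -1;
    pthread_mutex_lock(&c->lock);
    if (c->used + len <= sizeof c->out) {
        memcpy(c->out + c->used, msg, len);
        c->used += len;
        rc = 0;
    }
    pthread_mutex_unlock(&c->lock);
    return rc;
}
```

In the real library the critical section would write a framed request to the socket, but the locking discipline is the same: one lock, one whole request per acquisition.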

> 
> Now, inside the kernel there is no need to use a stream socket to communicate
> between spread's thread and the application (the code of both being now inside
> the kernel). So this means,

It may be true that it is not 'needed', but I think it will turn out that
some 'decoupling' of the apps and the spread process is needed, for the
reasons you mention below.

This decoupling can be done by means other than a stream socket. Even for
a userspace implementation, I have thought about a shared memory approach
with a few locks to provide ordering and buffer control.
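As a sketch of that idea (my own illustration, not existing Spread code): a bounded ring buffer in shared memory, guarded by a lock, gives the same two properties a stream socket provides — FIFO ordering and back-pressure when the buffer is full:

```c
/* Sketch of the shared-memory decoupling: a bounded, lock-protected
 * ring buffer between app and daemon. All names are hypothetical. */
#include <pthread.h>
#include <stddef.h>

#define RING_SLOTS 8

typedef struct {
    pthread_mutex_t lock;
    void *slot[RING_SLOTS];
    int   head, tail, count;
} ring_t;

void ring_init(ring_t *r)
{
    pthread_mutex_init(&r->lock, NULL);
    r->head = r->tail = r->count = 0;
}

/* Returns -1 when full: the producer must back off, which is the
 * "buffer control" the socket's flow control used to give us. */
int ring_put(ring_t *r, void *msg)
{
    int rc = -1;
    pthread_mutex_lock(&r->lock);
    if (r->count < RING_SLOTS) {
        r->slot[r->tail] = msg;
        r->tail = (r->tail + 1) % RING_SLOTS;
        r->count++;
        rc = 0;
    }
    pthread_mutex_unlock(&r->lock);
    return rc;
}

/* Messages come out in the order they went in, preserving the
 * ordering the stream socket gave for free. NULL when empty. */
void *ring_get(ring_t *r)
{
    void *msg = NULL;
    pthread_mutex_lock(&r->lock);
    if (r->count > 0) {
        msg = r->slot[r->head];
        r->head = (r->head + 1) % RING_SLOTS;
        r->count--;
    }
    pthread_mutex_unlock(&r->lock);
    return msg;
}
```

In a kernel, the mutex would become whatever primitive the kernel offers (a spinlock or semaphore), but the shape is the same.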

> 
> 1. Instead of queuing requests to spread, in principle the desired
> actions can be directly invoked by the application.
> 2. Whenever an event occurs that the application has to be notified of,
> the app can be directly called and given a pointer to the message.
> 
> In the case of 1, it means that the application will call in arbitrarily,
> regardless of what the spread thread is currently doing. It can even run
> concurrently if there are multiple CPUs. Can I approach this problem by not
> allowing app calls until it is "SAFE" to do so, by means of locks? Both the
> spread thread and the application threads that request will serialize on
> some big lock. This makes the order of calling non-deterministic (I think
> there is some priority in handling events), as we can't say who will get
> the lock. Does it matter here?

With a 'big' lock this could be correct. The way spread works now, there
is a main 'event loop'. Whenever control reaches back to that main loop,
the constraints on what happens next are pretty loose, i.e. almost
non-deterministic. However, before returning back to that loop, the spread
code is VERY dependent on being non-interruptible and single-threaded.

The priorities in the event loop are used mainly to disable certain types
of events at certain times. They also affect the order events are handled
in, but I think that could be removed without too much work.

With locks, whatever constraints are needed can be provided by the use of
multiple locks representing different priorities of resources.
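One way to picture that (a sketch under my own naming, not anything in Spread today): give each priority level its own lock and enforce a fixed acquisition order, highest priority first. The hierarchy prevents deadlock while still letting high-priority work, such as protocol processing, exclude lower-priority session work:

```c
/* Sketch: a lock hierarchy standing in for the event loop's
 * priorities. Locks may only be taken in ascending level order
 * (highest-priority resource = lowest number first), which is the
 * classic deadlock-avoidance rule. Names are illustrative. */
#include <pthread.h>

enum { PRIO_PROTOCOL = 0, PRIO_SESSION = 1, PRIO_COUNT = 2 };

static pthread_mutex_t prio_lock[PRIO_COUNT] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

/* `held` is a per-thread bitmask of locks already taken. Refuse any
 * acquisition that would go "up" the hierarchy or repeat a lock. */
int acquire(unsigned *held, int level)
{
    if (*held >> level)
        return -1;              /* would violate the lock order */
    pthread_mutex_lock(&prio_lock[level]);
    *held |= 1u << level;
    return 0;
}

/* Release in reverse order of acquisition. */
void release_all(unsigned *held)
{
    for (int i = PRIO_COUNT - 1; i >= 0; i--)
        if (*held & (1u << i))
            pthread_mutex_unlock(&prio_lock[i]);
    *held = 0;
}
```

A thread doing protocol work would take `PRIO_PROTOCOL` (and optionally `PRIO_SESSION` below it); a thread that only took `PRIO_SESSION` can never then grab the protocol lock, so the ordering constraint is structural rather than a property of who wins a single big lock.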

> 
> In the case of 2, when the app code is directly called, it can sometimes
> call back with requests! Will this re-entrancy be safe?!

This is not guaranteed to be safe. Spread was written from the assumption
that there is some 'queue'-like set of buffers between the daemon and the
apps. However, remember also that all application-initiated events are
"requests" in the spread model, so an app should always expect that events
it generates are not handled immediately, but rather are only a request;
until the daemon tells the app what the result of the request was, the
app can assume nothing.

Because of this request model, any events the app wants to generate should
just be put in a little memory queue, and then they can be handled whenever
the callback into the app finishes. That should remove the problem.
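Concretely, that deferral could look like the following sketch (hypothetical names, not Spread API): the callback never re-enters the daemon, it only appends to a small queue, which the daemon drains once the callback has returned and its own state is consistent again:

```c
/* Sketch of the deferred-request idea: app callbacks queue requests
 * instead of re-entering the daemon. All names are hypothetical. */
#include <stddef.h>

#define MAX_PENDING 16

typedef struct {
    int pending[MAX_PENDING];   /* request ids queued by callbacks */
    int n;
} req_queue_t;

/* Safe to call from inside an app callback: it only touches the
 * queue, never the daemon's state. Returns -1 when full. */
int queue_request(req_queue_t *q, int req)
{
    if (q->n >= MAX_PENDING)
        return -1;
    q->pending[q->n++] = req;
    return 0;
}

/* Called by the daemon AFTER the callback returns. `handle` may be
 * NULL in this sketch; returns how many requests were drained. */
int drain_requests(req_queue_t *q, void (*handle)(int))
{
    int done = q->n;
    for (int i = 0; i < q->n; i++)
        if (handle)
            handle(q->pending[i]);
    q->n = 0;
    return done;
}
```

The key property is that `drain_requests` runs only at the point where the daemon would have read from the socket anyway, so the re-entrancy question never arises.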

Hope this helps, I'd be interested in hearing what you end up doing.

Jonathan

-- 
-------------------------------------------------------
Jonathan R. Stanton         jonathan at cs.jhu.edu
Dept. of Computer Science   
Johns Hopkins University    
-------------------------------------------------------




