[spread-users] Interface: java vs C

Ryan Caudy rcaudy at gmail.com
Sat Nov 27 19:25:36 EST 2004


Have you tried making the change to the Java interface that Cédric
Coulon suggested on November 22, and seeing whether it alters your
results?  --Ryan
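
For what it's worth, a flat ~40 ms floor that disappears under load is the
classic signature of Nagle's algorithm interacting with TCP delayed ACKs. If
the Java interface opens its daemon socket without disabling Nagle, enabling
TCP_NODELAY is the usual remedy. A standalone sketch with plain java.net
(this is only an illustration of the socket option, not the actual Spread
interface code, and I don't know whether this is what Cédric's change did):

```java
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Demonstrates disabling Nagle's algorithm on a client socket, so small
// writes are sent immediately instead of being coalesced (which, combined
// with delayed ACKs, can add ~40 ms per small message).
public class NoDelayDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server =
                 new ServerSocket(0, 1, InetAddress.getLoopbackAddress());
             Socket client =
                 new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort())) {
            client.setTcpNoDelay(true);  // send small writes without buffering delay
            System.out.println("tcpNoDelay=" + client.getTcpNoDelay());
        }
    }
}
```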


On Fri, 19 Nov 2004 14:38:46 +0100, Datoh <datoh at free.fr> wrote:
> Hi,
> 
> I would like to continue your previous discussion about message
> reception times.
> Alberto Martinez was surprised by his response times (about 40 ms),
> and Yair Amir answered that the response time should be around 3 ms,
> as described in this paper:
> http://www.cnds.jhu.edu/pub/papers/cnds-2004-1.pdf
> 
> I see the same response times as Alberto, and like me he uses the
> Java interface to connect to the daemon. So I had the idea of
> comparing the two interfaces: Java and C.
> 
> I implemented a simple client/server program in C and in Java.
> I used a single PC for the test, with a Spread daemon, a client and a server.
> The server multicasts messages (the number of messages, the time in ms
> between messages and the size of the messages are given as
> parameters). The client simply receives the messages and displays
> the delivery time. The results are surprising:
> 
> - With a low workload (200 ms between two multicasts):
> With the C interface, the first message is delivered in 40 ms and the
> rest in 3-7 ms.
> With the Java interface, the first n messages are delivered in 40 ms
> and the rest in 3-7 ms. n depends on the message size: as the message
> size increases, n decreases.
> 
> I ran the test on three completely different PCs and the results were
> the same. I also used two PCs (2 daemons, 2 clients and 1 server) in
> three different clusters of PCs and the results were still the same
> (I assume the clocks of the PCs are synchronized).
> 
> I used JDK 1.4.2 and 1.5 and the latest version of Spread (daemon and
> interface): 3.17.3.
> 
> It seems there is a buffering problem in the Java interface: the
> messages are multicast faster once the buffer is full.
> Can you confirm my results?
> 
> I attach spread.conf and the server and the client in C and in Java.
> 
> Datoh.
> 
> Java interface:
> spread -n node0 -c spread.conf &
> java TestSpreadClient 20 &   (receive 340 messages)
> java TestSpreadSvr 20 200 5000 &   (multicast 340 messages with 200 ms
> between each message - the size of a message is around 5000 bytes)
> 
> C interface:
> spread -n node0 -c spread.conf &
> ./TestSpreadClient 20 &   (receive 340 messages)
> ./TestSpreadSvr 20 200 5000 &   (multicast 340 messages with 200 ms
> between each message - the size of a message is around 5000 bytes)
> 
> 
> #include <sp.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <time.h>
> #include <unistd.h>
> #include <sys/timeb.h>
> #include <sys/types.h>
> 
> long GetTime(void);
> 
> long GetTime(void) {
>         struct timeb tp;
>         ftime(&tp);
>         return (long) ((tp.time * 1000) + tp.millitm);
> }
> 
> int main(int nbarg, char * args[]) {
>         int nb = atoi(args[1]);
>         int sleep_time = atoi(args[2]);
>         int size = atoi(args[3]);
> 
>         char name[15];
>         char private_group[MAX_GROUP_NAME];
>         mailbox mbox;
>         char * mes;
>         char * dump;
>         int i;
> 
>         sprintf(name, "%ld", GetTime());
>         dump = (char *) malloc(size + 1);
>         dump[size] = 0;
>         for(i=0; i<size; i++)
>                 dump[i] = 'd';
> 
>         size += strlen(name) + 1;
>         mes = (char *) malloc(size + 1);
> 
>         SP_connect("4683@localhost", name, 0, 0, &mbox, private_group);
>         SP_join(mbox, "testSpread");
> 
>         for ( ; nb>0; nb--) {
>                 sprintf(mes, "%ld:%s", GetTime(), dump);
>                 SP_multicast(mbox, FIFO_MESS, "testSpread", 1, strlen(mes), mes);
>                 usleep(sleep_time*1000);
>         }
>         SP_leave(mbox, "testSpread");
>         SP_disconnect(mbox);
> 
>         return 0;
> }
> 
> 
> Spread_Segment  127.0.0.255:4683 {
>         node0          127.0.0.1
> }
> 
> 
> #include <sp.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <time.h>
> #include <sys/timeb.h>
> #include <sys/types.h>
> 
> long GetTime(void);
> 
> long GetTime(void) {
>         struct timeb tp;
>         ftime(&tp);
>         return (long) ((tp.time * 1000) + tp.millitm);
> }
> 
> int main(int nbarg, char * args[]) {
>         int nbMax = atoi(args[1]);
> 
>         char name[15];
>         int nb = 0;
>         long avg = 0;
>         long max = -1;
>         long min = 1000000;
>         long t;
>         char private_group[MAX_GROUP_NAME];
>         char list_group[10][MAX_GROUP_NAME];
>         mailbox mbox;
>         char mes[1000000];
>         service type_service;
>         int nb_group;
>         int16 type_mes;
>         int endian;
>         int len;
>         char * dump;
> 
>         sprintf(name, "%ld", GetTime());
> 
>         SP_connect("4683@localhost", name, 0, 0, &mbox, private_group);
>         SP_join(mbox, "testSpread");
> 
>         while (nbMax>nb) {
>                 len = SP_receive(mbox, &type_service, private_group, 10, &nb_group, list_group, &type_mes, &endian, 1000000, mes);
>                 if (len < 0) break;   /* SP_receive returns a negative error code on failure */
>                 mes[len] = 0;
>                 dump = strchr(mes, ':');
>                 dump[0] = 0;
>                 t = GetTime() - atol(mes);
>                 printf("%d: %ld\n", nb, t);
>                 nb++;
>                 avg += t;
>                 if (max < t) max = t;
>                 if (min > t) min = t;
>         }
>         printf("%d / %ld / %ld / %ld\n", nb, min, avg / nb, max);
>         SP_leave(mbox, "testSpread");
>         SP_disconnect(mbox);
> 
>         return 0;
> }
> 
> 
> 
> 


-- 
---------------------------------------------------------------------
Ryan W. Caudy
<rcaudy at gmail.com>
---------------------------------------------------------------------
Bloomberg L.P.
<rcaudy1 at bloomberg.net>
---------------------------------------------------------------------
[Alumnus]
<caudy at cnds.jhu.edu>         
Center for Networking and Distributed Systems
Department of Computer Science
Johns Hopkins University          
---------------------------------------------------------------------
