[Spread-users] Maximum message rate achieved is only 6600 msgs/sec (for message size 1300 bytes)

John Lane Schultz jschultz at spreadconcepts.com
Wed Nov 7 12:22:37 EST 2007


How does your client application connect to the daemon?

In particular, you should use the spread_name form of "<port>" rather than
"<port>@<host-name>".  For example, SP_connect("4803", ...) rather
than SP_connect("4803@localhost", ...).

This can make a big difference on *nix boxes: with just a port, the client
library connects to the daemon over its Unix domain socket rather than over
a TCP loopback connection.
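
For reference, here is a minimal connect sketch showing both forms (untested;
the private name "perftest" is just a placeholder):

       #include <stdio.h>
       #include <sp.h>

       int
       main(void)
       {
           mailbox mbox;
           char    private_group[MAX_GROUP_NAME];
           int     ret;

           /* Port-only spread_name: on *nix the library reaches the
            * daemon over its Unix domain socket, skipping TCP loopback. */
           ret = SP_connect("4803", "perftest", 0, 1, &mbox, private_group);

           /* The slower, TCP-based form for comparison:
            * ret = SP_connect("4803@localhost", "perftest", 0, 1,
            *                  &mbox, private_group);
            */

           if (ret < 0) {
               SP_error(ret);
               return 1;
           }

           SP_disconnect(mbox);
           return 0;
       }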

I would also recommend upgrading to a newer version.  Spread 4.0.0 is
available and only requires minor changes to your application.  If you
don't want to change your application at all then I believe 3.17.4
would be better for you.

As a sanity check you could write a simple echo program that uses "pipe"
and "fork" to see what kind of IPC throughput is possible on your
machine.  Of course, Spread should do worse than such a program due to
the extra work it does, but it should be in the same ballpark.

The manpage for "pipe" often has such an example program.  Here is a
modified example with the echo loops filled in.  To see the maximum
throughput you should also pipeline some number of sends before you begin
the send + recv loop (see the sketch after the program).

       #include <sys/wait.h>
       #include <sys/time.h>
       #include <assert.h>
       #include <stdio.h>
       #include <stdlib.h>
       #include <unistd.h>
       #include <string.h>

       /* Read exactly n bytes; a pipe read may return fewer. */
       static ssize_t
       readn(int fd, char *buf, size_t n)
       {
           size_t  done = 0;
           ssize_t r;

           while (done < n) {
               if ((r = read(fd, buf + done, n - done)) <= 0)
                   return r;                   /* EOF or error */
               done += (size_t) r;
           }
           return (ssize_t) done;
       }

       int
       main(int argc, char *argv[])
       {
           int            pfd[2];
           int            cfd[2];
           pid_t          cpid;
           char           sendbuf[1300];
           char           recvbuf[sizeof(sendbuf)];
           long           count, i;
           struct timeval start, end;
           double         elapsed;

           assert(argc == 2);
           count = atol(argv[1]);      /* number of round trips to time */

           if (pipe(pfd) == -1) { perror("p1"); exit(EXIT_FAILURE); }
           if (pipe(cfd) == -1) { perror("p2"); exit(EXIT_FAILURE); }

           cpid = fork();
           if (cpid == -1) { perror("fork"); exit(EXIT_FAILURE); }

           /* Child reads from pfd[0] and writes to cfd[1] */
           /* Parent reads from cfd[0] and writes to pfd[1] */

           if (cpid == 0) {    /* Child: echo each message back */
               close(pfd[1]);  /* Close parent's write end */
               close(cfd[0]);  /* Close parent's read end */

               for (i = 0; i < count; ++i) {
                   if (readn(pfd[0], recvbuf, sizeof(recvbuf)) <= 0) break;
                   if (write(cfd[1], recvbuf, sizeof(recvbuf)) <= 0) break;
               }

               close(pfd[0]);
               close(cfd[1]);
               _exit(EXIT_SUCCESS);

           } else {            /* Parent: send and time the echoes */
               close(cfd[1]);  /* Close child's write end */
               close(pfd[0]);  /* Close child's read end */

               memset(sendbuf, 'x', sizeof(sendbuf));
               gettimeofday(&start, NULL);

               for (i = 0; i < count; ++i) {
                   if (write(pfd[1], sendbuf, sizeof(sendbuf)) <= 0) break;
                   if (readn(cfd[0], recvbuf, sizeof(recvbuf)) <= 0) break;
               }

               gettimeofday(&end, NULL);
               elapsed = (end.tv_sec - start.tv_sec) +
                         (end.tv_usec - start.tv_usec) / 1e6;
               printf("%ld msgs in %.3f sec: %.0f msgs/sec\n",
                      i, elapsed, i / elapsed);

               close(pfd[1]);          /* Reader will see EOF */
               wait(NULL);             /* Wait for child */
               exit(EXIT_SUCCESS);
           }
       }
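
A rough sketch of the pipelining change, replacing the parent's send + recv
loop above (DEPTH is an arbitrary choice; keep DEPTH * message size under
the pipe's buffer capacity so the priming writes don't block):

           #define DEPTH 8                     /* messages kept in flight */

           for (i = 0; i < DEPTH; ++i)         /* prime the pipe */
               if (write(pfd[1], sendbuf, sizeof(sendbuf)) <= 0) break;

           for (i = 0; i < count; ++i) {
               if (readn(cfd[0], recvbuf, sizeof(recvbuf)) <= 0) break;
               if (i + DEPTH < count)          /* top the pipeline back up */
                   if (write(pfd[1], sendbuf, sizeof(sendbuf)) <= 0) break;
           }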

 
Cheers!
John

---
John Lane Schultz
Spread Concepts LLC
Phn: 443 838 2200 
Fax: 301 560 8875

Wednesday, November 7, 2007, 3:20:56 AM, you wrote:

> From: "Sandesh Kumar Sodhi" <ssodhi at intellinet-india.com>
> To: <spread-users at lists.spread.org>
> Date: Wed, 7 Nov 2007 13:50:56 +0530

> Hi,

> I was doing performance testing of the GCS daemon.
> The maximum message rate that I am able to achieve is
> around 6600 messages per second. My message size is 1300 bytes.
> Could you please let me know if this is the maximum that we
> can achieve? Please do let me know if performance parameters can be
> tweaked for GCS to improve performance.

> If this is the maximum rate that can be achieved for this message size, then
> we shall have to remove the use of GCS from our platform.

> The GCS version I am using is: 3.16.0
> Message Type UNRELIABLE_MESS
> Message size 1300 bytes


> MY SETUP
> ~~~~~~~~~~~~~~~~~~~~~

> PerformanceTestGCS------GCSDaemon

> Both running on the same machine (Solaris).
> No other GCS daemon is running.


> CONFIGURATION OF MY MACHINE
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

> root@neptune ~$prtdiag
> System Configuration: Sun Microsystems  sun4u Netra 240
> System clock frequency: 167 MHZ
> Memory size: 2GB

> ==================================== CPUs ====================================
>                E$          CPU                  CPU
> CPU  Freq      Size        Implementation       Mask    Status      Location
> ---  --------  ----------  -------------------  -----   ------      --------
>   0  1503 MHz  1MB         SUNW,UltraSPARC-IIIi  3.4    on-line     MB/P0
>   1  1503 MHz  1MB         SUNW,UltraSPARC-IIIi  3.4    on-line     MB/P1

> TEST APPLICATION
> ~~~~~~~~~~~~~~~~~~~~~~~~~
> Attached.

> TEST APPLICATION OUTPUT
> ~~~~~~~~~~~~~~~~~~~~~~~~~~

> Please look at the "msgs/sec:   6666" fields in the following output.

> root@neptune ~/newcvsroot/IntelliSS7/common/src/gcs-client
> $./performanceTestGCSD -s 7805 -c 100000 -b 1000000 -d 1300 -g 1
> Using g_burst: 1000000
> Using g_thiscount: 100000
> Using g_data_size: 1300
> Using g_use_gcs: 1
> In Send Loop
> Sent 0 message
> In Receive Loop
> received REGULAR membership caused by JOIN for group simple_group with 1
> members:
>         #ssodhi#neptune
> grp id is 117440513 1194449876 1
> Read message type:  4352 sec   1194449915 usec   917019 msgs/sec:  
> byte/sec:        0 count        0
> Sent 100000 message
> Read message type:     1 sec   1194449931 usec   429738 msgs/sec:   6250
> byte/sec:  8125000 count   100000
> Sent 200000 message
> Read message type:     1 sec   1194449946 usec   972752 msgs/sec:   6666
> byte/sec:  8666666 count   200000
> Sent 300000 message
> Read message type:     1 sec   1194449962 usec   504915 msgs/sec:   6250
> byte/sec:  8125000 count   300000
> Sent 400000 message
> Read message type:     1 sec   1194449978 usec    86120 msgs/sec:   6250
> byte/sec:  8125000 count   400000
> Sent 500000 message
> Read message type:     1 sec   1194449993 usec   591098 msgs/sec:   6666
> byte/sec:  8666666 count   500000
> Sent 600000 message
> Read message type:     1 sec   1194450009 usec    86466 msgs/sec:   6250
> byte/sec:  8125000 count   600000
> Sent 700000 message
> Read message type:     1 sec   1194450024 usec   587219 msgs/sec:   6666
> byte/sec:  8666666 count   700000
> Sent 800000 message
> Read message type:     1 sec   1194450040 usec    93531 msgs/sec:   6250
> byte/sec:  8125000 count   800000
> Sent 900000 message
> Read message type:     1 sec   1194450055 usec   624370 msgs/sec:   6666
> byte/sec:  8666666 count   900000
> Sent 1000000 message
> Main.Sleeping.....................
> Read message type:     1 sec   1194450071 usec   124368 msgs/sec:   6250
> byte/sec:  8125000 count  1000000


>  



> Thanks and Regards,
> Sandesh Kumar Sodhi
> Ph: +91-80-41256018
> Extension: 2433
> Mobile: +91-9341784637
>  




