Hi Everyone,
I am trying to convert my Cheetah database product to a client/server
format and have run into a problem. When I add a batch of records in
a loop it takes far too long (e.g. 25 records take over 8 seconds).
I dusted off my old copy of CodeBase and the same 25 records take about
a second (in client/server mode).
I am obviously missing the point somewhere along the way.
Here is what I have done: I have one main DLL which works with both the
client EXE and the server EXE. This DLL contains all of my database and
index routines. When the client app starts, a call is made to a server
connection routine found in the DLL. The DLL connects to the server and
saves the server's handle in a global variable. From then on, every time the
client app calls the DLL, the DLL checks whether a server handle is
available. If there is, the request is simply packaged in a string
and sent to the server. The server then dispatches the request to its
own DLL to perform the database function. The result (usually an error code)
is then sent back to the client. That's it.
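For example, a typical client-side wrapper in the DLL just packs an opcode and
the data into one string and hands it to a common send routine. This is only a
simplified sketch - %REQ_CREATE_DB, the MKL$ packing and SendRequest are
stand-ins for my actual names (SendRequest is sketched further below):

    FUNCTION CheetahCreateDatabase(BYVAL sDBName AS STRING) AS LONG
        LOCAL Buffer AS STRING
        ' Pack the request opcode, the payload length and the payload into one string
        Buffer = MKL$(%REQ_CREATE_DB) & MKL$(LEN(sDBName)) & sDBName
        ' SendRequest does the TCP SEND / TCP RECV round trip and returns the
        ' error code that the server sends back
        FUNCTION = SendRequest(Buffer)
    END FUNCTION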
On the client side:
I connect using "TCP OPEN PORT %DEFAULT_PORT AT m_Server$ AS hSocket&"
hSocket& is then saved to a global variable. I have never had a problem
connecting.
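The connect routine in the DLL is basically just that one statement plus a
check that it worked (simplified here, error handling trimmed):

    hSocket& = FREEFILE                               ' grab a free file/socket number
    TCP OPEN PORT %DEFAULT_PORT AT m_Server$ AS hSocket&
    IF ERR THEN hSocket& = 0                          ' zero tells the rest of the DLL there is no server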
Say, for example, "create database" is now called by the client. The
DLL checks that the global hSocket& is > 0 and, if so, creates a send buffer
and sends it using "TCP SEND hSocket&, Buffer$". Right after that there is
a "TCP RECV hSocket&, LEN(Buffer$), Buffer$". Then the function exits.
On the Server side:
I am (now) using Erik Olsen's HTTP server as a guide, although I experienced
the same problem with Don Dickinson's rserver (sorry Don, you can't reach me
via email).
When %FD_ACCEPT fires, the following occurs: "TCP ACCEPT fnum AS hSocket",
and then a new thread is created with "THREAD CREATE SocketThread(hSocket) TO idThread(hSocket)".
In the thread I use "TCP RECV hSocket&, DataLen, Buffer$" to receive the
data sent from the client. That buffer is processed to obtain the request,
and a SELECT CASE determines which database routine in the DLL to call. The DLL
routine does the processing and sends the result back to the client with a "TCP SEND".
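Roughly, the thread procedure looks like this (simplified - ParseRequest, the
%REQ_xxx constants and db_CreateDatabase are stand-ins for my actual dispatch
and DLL routine names):

    FUNCTION SocketThread(BYVAL hSock AS LONG) AS LONG
        LOCAL Buffer AS STRING
        LOCAL ErrCode AS LONG
        Buffer = SPACE$(DataLen)                      ' DataLen is set elsewhere, as in my real code
        TCP RECV hSock, DataLen, Buffer               ' read the client's request
        SELECT CASE ParseRequest(Buffer)              ' pull the opcode out of the buffer
            CASE %REQ_CREATE_DB
                ErrCode = db_CreateDatabase(Buffer)   ' call into the database DLL
            ' ... other CASEs for the rest of the database routines
        END SELECT
        TCP SEND hSock, MKL$(ErrCode)                 ' send the error code back to the client
    END FUNCTION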
Does this methodology sound correct??? Any ideas about why this could be
so slow? I know it's not much to go on, but the code is so huge there is no
way I would be able to post it here.
Maybe I am opening and closing
too many connections. I apologize for being such a rookie at this.
For those of you who have developed similar systems - have you experienced
similarly slow results when sending many TCP requests to the server in succession?
I am doing all the tests on a LOCAL machine - not over a network. This
doesn't seem to be a factor because CodeBase runs fast on the same machine....
Any help is greatly appreciated.
Thanks,
------------------
Paul Squires
mailto:[email protected]