
Minimum wait between TCP SEND requests


  • Paul Purvis
    replied
    Mike, I do know what you are talking about.
    In the send email program I wrote, if I did not use the code as above, it might well be the third or fourth TCP SEND before TCP LINE returned the proper information.

    So if you did as I did and used the SMTP example program, or any of the many other programs on this forum that issue a TCP LINE command directly after a TCP SEND, I found it failed to retrieve adequate information in many situations. It is not the compiler's fault; it is just the way packet communication works, and there are many things you do not have control over while sending packets.
    It all comes back to knowing what to expect from the server, whether for a standard protocol or a special program, and setting your program to look for a full, complete response.

    Try that DO WHILE NOT EOF loop and see what you get.
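    Paul's DO WHILE NOT EOF idea can be sketched outside PowerBASIC too. Here is a minimal Python illustration (not the original code; the helper name and the reliance on SMTP's multi-line reply format are my assumptions): keep reading lines until the reply's final line arrives, which for SMTP is the line with a space, not a hyphen, after the three-digit code.

```python
def read_smtp_reply(readline):
    """Collect one complete (possibly multi-line) SMTP reply.
    readline() must return one decoded line per call, e.g. "250-SIZE ...".
    Continuation lines have '-' after the 3-digit code; the final line
    has a space there instead, so loop until it shows up."""
    lines = []
    while True:
        line = readline()
        lines.append(line)
        if len(line) < 4 or line[3] != "-":
            break  # final line of the reply
    return lines
```

    Parsing for the end of the reply, rather than sleeping and hoping, is the portable version of "look for a full complete response from the server".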
    Last edited by Paul Purvis; 11 Sep 2009, 04:46 PM.

  • Paul Purvis
    replied
    I do not think I can say everything on my mind about this subject, but I will try a few points. Be cautioned: I am not an expert in any way, just a trial-and-error guy.

    On packet sizes:
    Be wise about your use of transmitted TCP packets.
    Know your connection type (speed) from the client to the endpoint.
    The hardware and software built into the client and server will only buffer so many connections at one time; consider that.
    Speed counts here; allow for variations in timeouts and packet sizes.

    Performance on multitasking systems and networks:
    Be wise about the resources being used: CPU, the particular machine's performance, network infrastructure, network settings, and the number of computers on the network.

    Here is a recent idea of mine, never tried.
    If you are trying to send a lot of data, you may want to time how long it takes to connect to the server with the TCP OPEN command. The first connection should also cache the route in some way, making your next connection much faster if it is made soon after (even just pinging the endpoint first helps). By timing that first connection and comparing the response time, you might get an idea of what value to set the timeout to. This is just a thought.
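    That timing idea might look like this in Python (a guess at one way to do it; the scaling factor and floor are illustrative tuning knobs, not measured values):

```python
import socket
import time

def calibrated_timeout(host, port, floor=2.0, factor=10.0):
    """Time a throwaway connect, then scale the result into a timeout.
    floor and factor are made-up defaults; real values come from testing."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=20):
        pass  # connect only, then close; this may also warm cached route state
    elapsed = time.monotonic() - start
    return max(floor, elapsed * factor)
```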

    I have a location now that keeps having internet issues; it is DSL and there is noise on the telephone line, so we had to reduce the buffer size.

    here is some of the program listing
    Code:
      ' BLOCKSIZETOSEND&&=528&&
       BLOCKSIZETOSEND&&=1412&&
      'BLOCKSIZETOSEND&&=2840&&
      ' BLOCKSIZETOSEND&&=5696&&
      ' BLOCKSIZETOSEND&&=32684&&/2&&
      ' BLOCKSIZETOSEND&&=32684&&
    The actual packet size consists of BLOCKSIZETOSEND&& plus a few other bytes of overhead.

    I create a separate EXE for each setting and copy whichever one is needed to send.exe:
    sendtiny.exe, sendbasic.exe, sendreg.exe, sendlarge.exe, sendvlarge.exe, sendhuge.exe

    This is not the right way to do it; it is best to make the buffer size variable, with a default value set in the program when none is given, and minimum and maximum values that can be used.
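    That variable-buffer-size approach, sketched in Python with the constants from the listing above as the bounds (the clamp-to-range policy itself is just one reasonable choice, not from the post):

```python
# Bounds borrowed from the block sizes in the listing above.
MIN_BLOCK = 528
DEFAULT_BLOCK = 1412
MAX_BLOCK = 32684

def block_size(requested=None):
    """Return the send block size: the default when unset, otherwise
    the requested value clamped into [MIN_BLOCK, MAX_BLOCK]."""
    if requested is None:
        return DEFAULT_BLOCK
    return max(MIN_BLOCK, min(MAX_BLOCK, requested))
```

    One binary with a setting replaces the six EXEs, and out-of-range values can never crash the transfer.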

    When the connection gets bad, until it is fixed, I use sendbasic.exe, and if it is very, very bad, I use sendtiny.exe.

    On our systems, sending very large packets across the internet degrades internet performance for the whole network; a bad idea.
    On a local (intranet) system, the larger packets have little effect, if any.

    The sleep states allow other processes to run. I suggest you keep an eye on CPU usage while writing and testing both the client and server programs. Your program is not the only program running, and you need to share the CPU; even your own client or server needs CPU time to transmit the data.

    While writing an email send program, I had to use the code below while connecting, and after each send to the server with a TCP SEND.

    Code:
    TCP OPEN PORT semailsmtpserverportnumber AT SmtpHost AS hTCP TIMEOUT 20000
    
    IF ERR THEN
        STDOUT "cannot connect to " + SmtpHost
        GOTO SENDERROR
    END IF
    
    ' drain the server greeting until the full response has arrived
    DO WHILE NOT EOF(hTCP)
       TCP LINE hTCP, sline
       e = VAL(LEFT$(sline, 3))   ' numeric reply code, e.g. 220
    LOOP
    
    TCP PRINT hTCP, "EHLO " & LOCALHOST
    ' again, read until the complete EHLO response has been received
    DO WHILE NOT EOF(hTCP)
       TCP LINE hTCP, sline
       e = VAL(LEFT$(sline, 3))
    LOOP
    Also, one thing I did not do in this program, and wish I had done in another, was to try connecting to the server several times before giving up.

    If you do not get a connection on the first try, put that TCP OPEN inside a loop for X number of tries, and increase your timeout on each pass if you want.
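    That retry loop might look like this in Python (the doubling policy and try count are illustrative choices, not from the post):

```python
import socket

def connect_with_retries(host, port, tries=3, base_timeout=5.0):
    """Attempt the connection up to `tries` times, doubling the
    timeout on each pass; re-raise the last error if all attempts fail."""
    timeout = base_timeout
    last_err = None
    for _ in range(tries):
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:
            last_err = exc
            timeout *= 2  # give the next attempt more room
    raise last_err
```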

    I wish there were a way to change the timeout during execution of the TCP SEND commands; that would be nice.

    Actually, I think I just gave you a way: connect and disconnect, start a timer, connect and disconnect again, read the timer, then connect a third time, setting your buffer size and timeout from what you measured. I have never done it like this; just food for thought.

    For large transfers, I sleep every so many packets sent, using a counter variable inside my TCP SEND loop.
    This also makes it easier on the server to handle other connections; my programs give up time on the server so other computers running the same software can talk to it. The fastest way for your own program is not always the best way to do things; you can also choke down your own computer system.
    Code:
    ' the lower the divisor in the MOD statement,
    ' the shorter the SLEEP period you may want to use
    ' testing is the only way to know your particular needs
    IF counter& MOD 10& = 0& THEN SLEEP 30
    ' SLEEP 20 or SLEEP 30 or SLEEP 40 or SLEEP 50
    ' i do not go much beyond 50
    I made a change on a Windows XP system using the group policy editor: I set the QoS reserved bandwidth to 0, as some hack suggested, to improve internet speed on an XP system. The XP system was a virtual machine. Well, it slowed down the whole virtual machine, so I was quick not only to set it back but to raise the level from the default of 20 to 30.

    Which also reminds me: if you are, like me, mostly running Windows 2000, you should consider installing the QoS service on your network adapter (I believe it does not get installed automatically during the OS install), and do not disable QoS on any other Windows OS unless you have a reason.
    Last edited by Paul Purvis; 11 Sep 2009, 04:29 PM.

  • Michael Mattias
    replied
    Why do many programs sleep between send statements?
    Because the programmer wants his GUI to update during blocking calls but is too lazy to write the code necessary to execute that blocking call in a separate thread of execution?

    MCM
    (You KNOW that is correct for at least SOME of the aforementioned examples!)
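    For what it's worth, the separate-thread version is not much code. A minimal Python sketch of the pattern MCM describes (queue-based; the shutdown-on-None convention is my choice):

```python
import queue
import threading

def start_sender(send_fn):
    """Run blocking sends on a worker thread so the caller (e.g. a GUI
    loop) never stalls. Put bytes on the returned queue to send them;
    put None to shut the worker down."""
    q = queue.Queue()
    def worker():
        while True:
            data = q.get()
            if data is None:
                break
            send_fn(data)  # the blocking call lives here, off the UI thread
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return q, t
```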

  • Thomas Tierney
    replied
    What are you sending the file to? That may help.

  • Mike Doty
    replied
    Why do many programs sleep between send statements?

    The PowerBASIC TCP RECV demos use a buffer of 1024 bytes.
    Send a packet of 10 million bytes and it will take a very long time to finish.

    I'm using a modified version of Don Dickinson's tcpsafereceive and can transmit 50 megabytes in 8 seconds over the internet.
    I would like to see how others send large files using a larger buffer. I have tried increasing the buffer size beyond 1460 bytes, and entire packets are lost using the PowerBASIC echo server demo.
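    The "safe receive" pattern generally means framing each message and looping until every byte arrives. A Python sketch of that idea (not Don Dickinson's actual code; the 4-byte length prefix is an assumption):

```python
import struct

def send_msg(sock, payload):
    """Prefix the payload with its 4-byte big-endian length, then send it all."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock):
    """Read one length-prefixed message, looping until every byte arrives,
    so large payloads survive being split across many TCP segments."""
    def recv_exact(n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(min(4096, n - len(buf)))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf
    (length,) = struct.unpack("!I", recv_exact(4))
    return recv_exact(length)
```

    With this, the receive buffer size becomes a tuning detail rather than a correctness issue.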

  • Michael Mattias
    replied
    > doesn't want the TCP driver to combine packets, which it will normally do

    Perhaps....
    Code:
    FUNCTION .....
       TCP OPEN
       TCP SEND
       TCP CLOSE
    ...
    .. will provide the desired behavior?

    He may have to queue up the input to this function, since I'm sure he can send many such requests quickly (code not shown), but a search on 'queue' in the Source Code Forum will turn up multiple ways to do the same.
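    The open/send/close shape sketched above, in Python for illustration (one connection per message; the function name is mine):

```python
import socket

def send_one_shot(host, port, payload, timeout=10.0):
    """Open a fresh connection, send one message, and close.  Nothing
    sent later can be coalesced with this payload on the same socket."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(payload)
```

    The trade-off is a full connection handshake per message, so this suits occasional sends, not high-rate streams.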

    MCM

  • Thomas Tierney
    replied
    There really is no programmatic reason to do this. It is normally driven either by an RFC, for known protocols, or by requirements for other cases. If you have written the server, I think the server should be changed to issue a response to each packet sent, whether a good response or a bad one. If the server is not under your control, there should be guidance on how to use it.

  • Tom Hanlin
    replied
    The comment in the code rather hints that Mike doesn't want the TCP driver to combine packets, which it will normally do when it can, for reasons of efficiency. Why this would be considered a problem is not clear.

  • Michael Mattias
    replied
    You should not have to delay at all. From Help file (9.0.1):
    The TCP SEND statement does not return until string_expression has been sent, or an error occurs. That is, TCP SEND is a synchronous or "blocking" statement. If a time-out occurs, ERR will be set to indicate a run-time Error 24 ("Device timeout"). See TCP OPEN to specify the TCP socket timeout value.
    If you have problems, I think the problem is elsewhere. However, it cannot hurt to test the system ERR value in this procedure.
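    The same blocking-plus-error-check pattern in Python, for comparison: sendall() blocks like TCP SEND, and a timeout surfaces as an exception rather than an ERR flag (the return-value convention here is just for illustration).

```python
import socket

def send_or_report(sock, data):
    """Blocking send with the error check folded in.  Returns None on
    success, or a short description of the failure."""
    try:
        sock.sendall(data)  # blocks until sent or the socket timeout fires
        return None
    except socket.timeout:
        return "timeout"    # roughly PB's run-time Error 24
    except OSError as exc:
        return "socket error: %s" % exc
```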

  • Mike Doty
    started a topic Minimum wait between TCP SEND requests

    Minimum wait between TCP SEND requests

    Is there a minimum number of milliseconds that should be used between TCP SEND requests? 250 milliseconds works in an application, but it is a kludge.

    Code:
    SUB SendIt(MySocket AS LONG, s AS STRING)
       TCP SEND #MySocket, s + $Terminator
       SLEEP 250  'minimum 250 milliseconds between sends so packets don't get combined
    END SUB
    Last edited by Mike Doty; 10 Sep 2009, 01:16 PM. Reason: 210 milliseconds will sometimes combine packets
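    Since TCP is a byte stream, "packets" may legally be merged or split no matter how long the delay; the terminator already appended in SendIt is the real fix, provided the receiver parses by it. A Python sketch of that receiver-side framing (CRLF standing in for $Terminator, which is an assumption):

```python
TERMINATOR = b"\r\n"  # stand-in for $Terminator; use the real delimiter

def split_messages(buffer):
    """Split buffered bytes into complete messages plus the unfinished
    tail, which the caller keeps and prepends to the next recv()."""
    parts = buffer.split(TERMINATOR)
    return parts[:-1], parts[-1]
```

    With framing on the receive side, the SLEEP between sends can go away entirely.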