  • Port 80 "conversation"

    I am trying to have a "conversation" between my client software and my CGI.exe on a hosted server via the internet.

    The "conversation" is defined as:
    Client initiates request on port 80 (so firewalls don't squawk)
    CGI.exe responds with some data
    Client responds to CGI.exe
    CGI.exe responds to client with more data
    Client responds to CGI.exe
    CGI.exe responds to client with more data
    etc
    etc
    2MB later CGI.exe terminates "conversation" and quits.

    As I understand it, the POST method is designed for a *single* POST to the CGI.exe and waits for a return. The CGI.exe can return multiple pieces of data for a while, but when it terminates, the client somehow knows the POST reply is finished. (how does it know this?)

    Is it possible for the client to send a response to each piece it receives?
    If I initiate another POST from the client, it will spawn a second instance of the CGI.exe.

    How is this typically done on port 80?
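
    (For reference: with plain HTTP, the client knows a reply is finished when it has read the number of bytes promised in the Content-Length header, or when the server closes the connection. A minimal CGI sketch of that, in Python for brevity, with an illustrative body:)

    #!/usr/bin/env python3
    # Minimal CGI sketch: the Content-Length header is what lets the
    # client know when the POST reply is complete.
    import sys

    body = b"some data produced by the CGI"   # illustrative payload

    sys.stdout.write("Content-Type: application/octet-stream\r\n")
    sys.stdout.write("Content-Length: %d\r\n" % len(body))
    sys.stdout.write("\r\n")
    sys.stdout.flush()
    # Without Content-Length, the server would instead signal the end
    # of the reply by closing the connection.
    sys.stdout.buffer.write(body)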

  • #2
    As far as I know - something like this can't be done with CGI applications.

    During this conversation - what are you looking to do? Is the extra data that gets sent mid-conversation known before the conversation starts, or is it conditional based on the conversation?
    Adam Drake
    PowerBASIC



    • #3
      Mike,

      A CGI application is a 'process' within the world of a web server. It's the web server that is actually controlling the communication between the user and your CGI. Therefore, you can't just "talk" to a CGI application directly.

      However, it's not that hard to accomplish what you want. In fact, the 'echo server' example in the PB install directory does just this. Don wrote a nice program called rServer that takes the concept a whole lot farther as well.

      BUT, you'd better be ready to commit some learning time. The use of true client/client communications like this would be a lot more prevalent if you could just zip a couple of apps together and "talk" over port 80. There are a lot of variables to consider, not the least of which is security.

      You also need to ask yourself what it is you are trying to do. Do you really need 2 apps to "talk" together directly, or are you only looking for one (server) app to control the process and distribution of data?

      Perhaps you can explain your requirements a bit more and we can offer some options.
      Software makes Hardware Happen



      • #4
        Yes, all the data is derived from the initial POST request; I just don't want to send 2MB to 12MB of data in one lump. If there is corruption, you won't know it for five minutes and then you've got to request the whole thing again. If I break it up into pieces and a single piece is corrupted, I can request just that piece again. Each piece has a CRC.
        Also, if the connection is lost I can handle it quickly.
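
        Roughly what I mean, as a sketch (Python for brevity; the 64 KB piece size and the framing are just examples, not the real protocol):

        # Split the payload into numbered pieces, each carrying its own CRC32,
        # so a corrupted piece can be detected and re-requested on its own.
        import zlib

        CHUNK_SIZE = 64 * 1024  # example piece size

        def make_pieces(data: bytes):
            """Server side: number each piece and attach its CRC32."""
            return [(i // CHUNK_SIZE,
                     zlib.crc32(data[i:i + CHUNK_SIZE]),
                     data[i:i + CHUNK_SIZE])
                    for i in range(0, len(data), CHUNK_SIZE)]

        def piece_ok(expected_crc: int, chunk: bytes) -> bool:
            """Client side: re-request the piece whenever this returns False."""
            return zlib.crc32(chunk) == expected_crc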

        Added:
        Yes, the concept is multiple clients requesting data from the server. How can the server control this when a request must come from the client? If the server initiated anything to a client, the user's firewall would block it.

        Security is very well implemented. The CGI checks for encrypted data that identifies bona fide users. I am not at all worried about that.

        Added:

        Here, Fred suggests a named pipe. Will this work on port 80 without firewall issues?

        I have used TTDS for a few years and re-written it several times to suit, but the stumbling blocks are firewalls and port forwarding. This has got to run without any network admin configuration, hence port 80.
        Last edited by Mike Trader; 10 Jan 2008, 10:57 AM.



        • #5
          I have used TTDS for a few years and re-written it several times to suit, but the stumbling blocks are firewalls and port forwarding. This has got to run without any network admin configuration, hence port 80.
          There is a reason some ports are "assigned" specific tasks. Just because port 80 is "open" does not mean that you can do anything you want over it. Most firewalls are not going to allow non-HTTP type traffic. There is absolutely no way in the world you are going to circumvent every (or even most) firewalls without resorting to somewhat illegal, or at least unethical, methods... think 'Spyware'. Any basic firewall is not going to simply say "Oh, look, the data is on port 80 so I'll just leave it alone". Any firewall that did that wouldn't be much use.

          It sounds to me like you are trying to reinvent the wheel, though. FTP (preferably SFTP, i.e. secured FTP) is designed for the exact purpose you are proposing, including the breaking up of large datagrams, verifying delivery (using TCP) and resending what didn't make it properly. While I'm a big fan of TTDS, sometimes it's best to use the tools that were created for specific tasks such as this (might be).
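
          Something along these lines (a rough sketch using Python's standard ftplib; host, credentials and filenames are placeholders):

          # Fetch a file over FTP; TCP underneath already provides in-order,
          # checksummed delivery, so no per-piece CRC is needed at this layer.
          from ftplib import FTP

          def download(host, user, password, remote_name, local_name):
              with FTP(host) as ftp:
                  ftp.login(user, password)
                  with open(local_name, "wb") as f:
                      ftp.retrbinary("RETR " + remote_name, f.write)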

          Another way to look at it would be to put a web server front end up and create an "upload" page for your customers. This is usually done with Javascript and/or PHP (there are other methods as well, but these are the most common). Then your clients initiate the transfer by going to the web page, selecting the file to send, and clicking 'send'. You can decide what you want to do with the uploaded file: place each in the same folder, or have the customer "login" and place them in customer-specific folders. You might still have to deal with user rights on their end, depending on whether the admin allows file uploads this way or not (any good one would not), but I'd guess the bulk of customers would be able to successfully get a file uploaded.

          AFAIK, named pipes only work on the local LAN. The communication protocols of the Internet are TCP/IP and UDP. I'm making the assumption that this is an Internet-based application, since you are worried about firewalls.
          Software makes Hardware Happen



          • #6
            I agree with Joe; you might just be having a problem doing this with POST because of the size of the file.
            I don't know of a practical or logical limit to the file size with POST, but it probably takes so long for a large file to upload via POST that the server shuts down the session for security reasons.

            I'd use FTP if you can use port 21.



            • #7
              Originally posted by Mike Trader:
              I just don't want to send 2MB to 12MB of data in one lump. If there is corruption, you won't know it for five minutes and then you've got to request the whole thing again. If I break it up into pieces and a single piece is corrupted, I can request just that piece again. Each piece has a CRC.
              Also, if the connection is lost I can handle it quickly.
              12MB is nothing these days; I've downloaded many a PDF or ZIP file much larger than this by clicking on a link. You should have absolutely no need for doing CRC checking or sending pieces of your data: TCP/IP will handle any error correction and resending of lost packets.

              Why not simply generate a zip file on the host from the original request, then when the file has been generated (should take a second or two), simply respond to the request with a redirect to the file, which will then prompt the user for a download location? Then launch a cleanup process that deletes the file after a minute or two. Way simpler, very little coding, and can easily be done within the confines of CGI.

              If it does take longer than a few seconds to generate the file, your response could simply include a link with the URL of the file, and directions for the end user to try downloading after 30 seconds or so.
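
              The redirect itself is one header line; a sketch (Python CGI for brevity; the download URL is made up):

              #!/usr/bin/env python3
              # After generating the zip under the web root, reply with a
              # Location header; the browser then fetches the file directly
              # and prompts the user for a save location.
              import sys, uuid

              # hypothetical location for the freshly generated zip
              url = "http://example.com/downloads/%s.zip" % uuid.uuid4().hex
              sys.stdout.write("Location: %s\r\n\r\n" % url)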
              --pdf



              • #8
                Great answers, let me explain better:
                All this has to happen in a worker thread in the background.
                The current implementation has the user surfing here and there, logging in, clicking links, choosing folders (which they forget), loading data, etc.
                I am trying to provide data transparently. They launch the client app and, boom bada bing, data starts appearing. As each chunk arrives, it is displayed in real time.

                Joe,
                I have used WinHTTP for years now in this way on a smaller scale. A small client app is run on their machine and, after gathering info, it uploads a binary string to my server, which responds with some binary data. This has worked great on a large number of user machines, including secure network servers. The firewall simply pops up a message saying "MyClient.exe wishes to connect to the internet, OK?"
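
                That round trip, sketched (Python's urllib standing in for WinHTTP; the URL and payload are placeholders):

                # One request/response cycle: POST a binary request body,
                # read back the binary reply the CGI produces.
                import urllib.request

                payload = b"\x01\x02\x03"   # encrypted binary request built by the client
                req = urllib.request.Request("http://example.com/cgi-bin/myapp.exe",
                                             data=payload, method="POST")
                with urllib.request.urlopen(req) as resp:
                    reply = resp.read()     # binary data returned by the CGI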

                >This is usually done with Javascript and/or PHP
                Yes, PHP has been a very real consideration. This would be a very good idea except that all my core functions are coded in PB. Years of work. I really don't want to convert all that to PHP, assuming PHP can handle the ASM etc.
                Apart from that, how fast is PHP at doing heavy lifting? With PB I can get 120MB/sec of data transfer out of an SQLite database. Try doing that with MySQL and PHP. I doubt it.

                Shawn,
                >the server shuts down the session for security reasons.
                Good point... hence the idea to send the data set in pieces.

                Paul,
                >tcp/ip will handle any error correction and resending of lost packets.
                I did not know that! Great.

                >Why not simply generate a zip file on the host from the original request,
                The records are stored compressed and encrypted. They are decompressed and decrypted on the fly at display time, for security. The data size is the minimum size it can be.

                >It sounds to me like you are trying to reinvent the wheel, though.
                Since I can only get one request/response cycle out of WinHTTP, I guess I am going to have to get a little smarter.
                I could create a result-set file on the server HD as a result of the initial request (as some of you suggested for my previous question about returning large result sets without storing the result set in memory), and return the filename, for example.
                Subsequent requests would then request pieces of this file, something like the sketch below.
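
                A sketch of what I have in mind (Python for brevity; the query parameters and the empty-reply end marker are invented for illustration):

                # First request creates a result-set file server-side and returns
                # its name; later requests each fetch one numbered piece of it.
                import urllib.request

                BASE = "http://example.com/cgi-bin/myapp.exe"   # placeholder

                def fetch(url, data=None):
                    with urllib.request.urlopen(url, data=data) as resp:
                        return resp.read()

                name = fetch(BASE, data=b"initial-query").decode()   # CGI replies with the filename

                pieces, seq = [], 0
                while True:
                    piece = fetch("%s?file=%s&piece=%d" % (BASE, name, seq))
                    if not piece:            # empty reply = hypothetical end-of-file marker
                        break
                    pieces.append(piece)
                    seq += 1
                result = b"".join(pieces)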
                Comments?
                Last edited by Mike Trader; 10 Jan 2008, 04:42 PM.



                • #9
                  Originally posted by Mike Trader:
                  They launch the client app and it boom bada bing data starts appearing. As each chunk arrives, it is displayed, real time.
                  Well, you might want to read this, though I doubt it can be done with CGI. It's quite a bit out of the mainstream...
                  --pdf



                  • #10
                    Mike,

                    Ok, I think I'm beginning to see the larger picture here. From what I gather, your data is already stored in an SQL database, correct?

                    If that's the case, then you're beating the horse for no good reason.

                    Most (all?) SQL servers have TCP/IP transport functionality built in. A client app can connect directly to the database and request as much data as it wants (and has rights to). That data can then be served up directly from the SQL database, and multiple connections are handled internally.

                    I've written a couple of client apps to grab data directly from a MySQL server. There is (or can be) a small delay when compared to LAN speeds, but overall the performance is quite usable, especially if the data is displayed and usable before the entire dataset is complete (filling a grid or listview control, for example).

                    Have you looked at SQLTools to see if you can convert your client app to access/read from the SQL server directly? If I'm understanding you properly, I think this would be a better solution than simply dumping the whole data file on the client and accessing it as local data.
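
                    As a sketch of the idea (shown with the third-party pymysql driver in Python; with PB this would be SQLTools, and the host/credentials here are placeholders):

                    # Client connects straight to the database over TCP/IP;
                    # the SQL server handles multiple connections internally.
                    import pymysql

                    conn = pymysql.connect(host="db.example.com", user="client",
                                           password="secret", database="mydata")
                    try:
                        with conn.cursor() as cur:
                            cur.execute("SELECT id, payload FROM records WHERE batch = %s", (42,))
                            for row in cur.fetchall():
                                print(row)   # display rows as they arrive client-side
                    finally:
                        conn.close()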
                    Software makes Hardware Happen



                    • #11
                      Great link! Thx Paul.

                      When the main page loads, an XHR (XMLHttpRequest) makes an "output conduit" request. If the server has collected any events between the main page rendering and the output conduit request rendering, it sends them immediately. If it has not, it waits until an event arrives and sends it over the output conduit. Any event from the server to the client causes the server to close the output conduit request. Any time the server closes the output conduit request, the client immediately reopens a new one. If the server hasn't received an event for the client in 30 seconds, it sends a noop (the javascript "null") and closes the request.
                      I could do that.
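
                      That is, roughly (a Python sketch of the long-poll loop; the URL and the handler are hypothetical, and the 30-second noop matches the quote above):

                      # Client side of the "output conduit": hold a request open
                      # until the server sends an event (or a noop), then reopen.
                      import urllib.request

                      CONDUIT = "http://example.com/cgi-bin/conduit.exe"   # placeholder

                      def handle(event: bytes):
                          print(event)   # stand-in for the real event consumer

                      while True:
                          with urllib.request.urlopen(CONDUIT, timeout=40) as resp:
                              event = resp.read()
                          if event != b"null":   # "null" is the noop from the quote
                              handle(event)
                          # loop immediately reopens the conduit, as described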

                      This could be integrated

                      And it's free!
                      Last edited by Mike Trader; 10 Jan 2008, 05:43 PM.



                      • #12
                        Originally posted by Mike Trader:
                        Assuming your hosting provider has the Java runtime installed:

                        Lightstreamer Server runs on any platform compatible with Java Platform, Standard Edition 1.4.2 or newer. Java SE 6 is recommended for best performance.
                        --pdf



                        • #13
                          Yes, the data resides in an SQL database.
                          >Most (all?) SQL servers have TCP/IP transport functionality built in.
                          Yes, and they are deathly slow, hugely bloated, and don't run on port 80, last I tried.
                          Many years ago I had to use MySQL as part of the spec of a project. Urgh. What a nightmare. It was horribly cumbersome and deadly slow. I had people logging into the server left and right and it took forever to return a result set.

                          I have heard things have changed in the last 8 years, but I prefer lightweight and fast, like SQLite for example. It does not have the scalability, but I doubt that will be an issue for this application. If it is, I would just point half the clients at a secondary server.

                          Apart from all that, there are a lot of moving parts to this on both ends. It is not as simple as a client to an SQL database server. There is no way I could retrieve information from the MySQL server needed for other processes at the server end, so I have to roll my own. Besides, it's more fun.


                          From the Lightstreamer forum (a discussion forum for Lightstreamer, the leading real-time web server):

                          Lightstreamer adopts a staged event-driven architecture instead of a thread-based one, making it possible to decouple the number of connections that the Server can sustain from the number of threads that are employed.

                          What does this mean?



                          • #14
                            I have heard things have changed in the last 8 years, but I prefer lightweight and fast, like SQLite for example.
                            What? Na, in this business, very little changes in 8 days.


                            Oops, I mean years

                            Basing any technology decision on 8-year-old specs is ludicrous. Even if not a single line of code had changed, the hardware 8 years ago would have been a whopping Pentium 66MHz (maybe 100MHz), a 100MB hard drive, and 32MB RAM.

                            8 years ago most people used 56k modems, with 24k actual speeds not uncommon. Heck, my Treo connects 3x faster than the best modems of 1999.

                            And of course, the software itself has been tweaked and optimized too.
                            ...and don't run on port 80
                            First, you can run any server on any port you want. There are no restrictions on what port TCP/IP uses; however, there are 'accepted use' ports, such as port 80 for HTTP, 21 for FTP, 53 for DNS, etc. If you want your server to be reachable by others without having to jump through hoops, then you stick to the standards. I get this feeling that you believe port 80 is magically "free" and unrestricted. That's simply not the case. Port 80 is not "open" on any firewall I know of. What a firewall does is look at requests going out, and match replies coming in. If inbound traffic doesn't have a corresponding request, it's dropped, period, whether it's on port 80 or 8000. A good admin might allow the HTTP protocol on port 80, but totally disallow any other protocol on port 80. Bottom line, I'd highly suggest you forget trying to use port 80 at all unless you are going to run this whole thing as a web application. If you don't, you're going to be very disappointed at the results.

                            Apart from all that, there are a lot of moving parts to this on both ends. It is not as simple as a client to an SQL database server. There is no way I could retrieve information from the MySQL server needed for other processes at the server end, so I have to roll my own. Besides, it's more fun.
                            Ok, I can accept that. It sounds to me, though, that you have a few unrealistic expectations and perhaps more thought needs to go into other methods of skinning this cat.
                            Software makes Hardware Happen



                            • #15
                              might allow the HTTP protocol on port 80, but totally disallow any other protocol on port 80. Bottom line, I'd highly suggest you forget trying to use port 80 at all unless you are going to run this whole thing as a web application.
                              Yup, absolutely. The only method I have had success with is HTTP over port 80... hence the thread.

                              more thought needs to go into other methods of skinning this cat.
                              Suggestions welcome...

                              I like the idea of data streaming but I think it might be overly complex for this.

                              Anyone else got any ideas?



                              • #16
                                Anyone else got any ideas?
                                Mike,

                                I'm not trying to discourage you, nor sound like I have the answer to all the world's problems, but what you are wanting to do is not unique, nor are you the first to try and think through it. AFAIK, the method I outlined above, using the technology built into the SQL foundation and accepting the (security) fact that some clients are going to need to modify their security protocols, is the best solution to date. You really can't judge its capabilities on experiences from the 1990s.

                                In any event, I wish you luck. I'm sure there are other ways, and probably better methods. As they say, necessity is the mother of invention.
                                Software makes Hardware Happen



                                • #17
                                  Sorry Joe, I did not mean to imply that. I have probably gone a little far in arguing for what I want to do, in response to the general trend these days on this board for people to give me 25 cast-iron reasons why I should not want to do what I want to do!

                                  I seem to spend 80% of my posts these days justifying what I want to do. I am interested to hear other approaches, of course, like Paul's excellent link in this thread, but it gets frustrating when responses tell me over and over not to even bother trying it.

                                  Several of my ideas went beyond what had been done before, even though they were unconventional. Some have bombed, of course, but that's how I learn.

                                  I don't know how this is going to play out in the big picture, but I have given this considerable thought and I am committed to this approach for now. I would love to learn PHP and Ajax and Ruby on Rails and play with the big tools, and I will, but right now I am just trying to solve the problem in front of me.

                                  What I love about PB is I can take baby steps. It might not be an ideal language for everything, but it DOES work. I am willing to bet that when I am done with this, it will still be way faster than a MySQL server. In fact, it might be interesting to do a direct comparison if someone is willing to provide a server with MySQL installed on it.

                                  I have never seen a direct comparison of SQLite/MySQL/Tsunami in all the years I have been working with these tools. I would be VERY interested to do it.



                                  • #18
                                    Mike,

                                    I think it's great, necessary in fact, to think about things in a different way. I tend to agree, far more often than not, that there is some way in which a PB app can do just about anything.
                                    I have never seen a direct comparison of SQLite/MySQL/Tsunami
                                    And you're not likely to. Even if someone did, the results wouldn't mean much. After all, an SUV, a limo, and a Porsche are all vehicles, but they serve totally different needs. SQLite is mainly a subset of the full-blown SQL server, and Tsunami (my favorite of all databases, by the way) doesn't "do" relational transactions. Speed is one thing, but functionality is another.

                                    Have you looked at what Paul Squires is doing with his SQLite Client/Server application? It's very, VERY impressive (as are all things Squires). That may give you a springboard for something bigger.
                                    Software makes Hardware Happen



                                    • #19
                                      After all, an SUV, a limo, and a Porsche are all vehicles, but they serve totally different needs
                                      True, but I think it would be very enlightening to see a speed comparison for simply executing a query and returning a large set of data. Like you, I suspect that Tsunami might be the fastest (but it doesn't do SQL queries and is not relational), but I think SQLite might give it a run for its money. I would also like to see how MySQL is doing these days by comparison. It is much more scalable and includes a server of course, but just what are the speed trade-offs for that?



                                      • #20
                                        On the SQLite site you can find a recent enough speed comparison between MySQL 5.0.18, PostgreSQL 8.1.2, FirebirdSQL 1.5.2 and, obviously, SQLite 3.3.3 & 2.8.17:

                                        SQLite CVSTrac - sqlite - Speed Comparison

                                        There's also a much older version of the same page here, if it can be of some historic interest.

                                        Bye!
                                        -- The universe tends toward maximum irony. Don't push it.

                                        File Extension Seeker - Metasearch engine for file extensions / file types
                                        Online TrID file identifier | TrIDLib - Identify thousands of file formats

