Multithreaded FastCGI APP Server Not Possible


  • Multithreaded FastCGI APP Server Not Possible

    Hello everyone.

    Correct me if i'm wrong, but ...

    It seems the web server (Abyss) spawns a new FastCGI process for each new request unless a previously created one is idle. Even if a process runs several threads that each call FCGX_Accept_r, only one thread ever receives a request (see the accept-loop sketch at the end of this post).

    I thought I could prevent the web server from spawning multiple copies of the FastCGI app server, but that does not seem possible: when the newly spawned process simply exits, the web server responds with 500 Internal Server Error.

    The objective is to have a single process centralize all requests, so I have to come up with another approach: a DLL that uses a shared memory block (CreateFileMapping). Every copy of the FastCGI app loads this DLL, so centralized processing might then be possible.

    Thank you for your comments.
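
    For reference, here is a minimal sketch of the usual multithreaded accept loop, assuming the FastCGI developer's kit (fcgiapp.h) and POSIX threads; the worker count and response text are illustrative only, and whether more than one thread actually receives requests still depends on how the web server dispatches them.

      /* Minimal multithreaded FastCGI worker: each thread owns one
         FCGX_Request and blocks in FCGX_Accept_r on the shared
         listening socket. */
      #include <fcgiapp.h>
      #include <pthread.h>
      #include <stdint.h>

      #define WORKER_COUNT 4                  /* illustrative pool size */

      static void *worker(void *arg)
      {
          int id = (int)(intptr_t)arg;        /* worker index, for the demo output */
          FCGX_Request request;

          FCGX_InitRequest(&request, 0, 0);   /* 0 = default listening socket */

          while (FCGX_Accept_r(&request) == 0) {
              FCGX_FPrintF(request.out,
                           "Content-Type: text/plain\r\n\r\n"
                           "Handled by worker %d\n", id);
              FCGX_Finish_r(&request);
          }
          return NULL;
      }

      int main(void)
      {
          pthread_t threads[WORKER_COUNT];

          FCGX_Init();                        /* initialize the library once */

          for (int i = 0; i < WORKER_COUNT; i++)
              pthread_create(&threads[i], NULL, worker, (void *)(intptr_t)i);

          for (int i = 0; i < WORKER_COUNT; i++)
              pthread_join(threads[i], NULL);

          return 0;
      }

    (Build with something like gcc app.c -lfcgi -lpthread.)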

  • #2
    The objective is to have only one process centralizing all requests. So have to come up with another approach.
    Why? What's wrong with multiple processes? If the server is designed to work that way, it's designed to work that way.

    I mean, if you want to do 'something' along those lines, I 'suppose' you could write your own server application, which does nothing but queue up requests and pass them on to the web server, one at a time.

    Then again, maybe this particular web server is the wrong product for this particular application.

    (I am not a "web guy" but isn't the basic idea behind CGI to "launch process, handle input, return output, exit process?" )

    MCM
    Michael Mattias
    Tal Systems (retired)
    Port Washington WI USA
    [email protected]
    http://www.talsystems.com

    • #3
      If you want one process to handle all requests, then use ISAPI. FastCGI is usually pooled - that is, a few of the FastCGI programs are started. They sit in a loop and wait for a connection from the web server. This eliminates the overhead of starting a new process each time a web request comes in - the process is already running, and the web server sends it the request information via a pipe. So, if you want to do CGI efficiently (more than one process, but without the overhead of starting a new process each time), use FastCGI. If you want it all in one process, use ISAPI (see the sketch below). Both work with Abyss.

      -don
      Don Dickinson
      www.greatwebdivide.com
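
      To make the contrast concrete, here is a hypothetical skeleton of an ISAPI extension: one DLL loaded into the web server's process, so every request arrives through the same entry point inside that single process. The response text is illustrative and error handling is omitted.

        /* Minimal ISAPI extension sketch (Win32). */
        #include <windows.h>
        #include <httpext.h>

        BOOL WINAPI GetExtensionVersion(HSE_VERSION_INFO *pVer)
        {
            pVer->dwExtensionVersion = MAKELONG(HSE_VERSION_MINOR, HSE_VERSION_MAJOR);
            lstrcpynA(pVer->lpszExtensionDesc, "Minimal ISAPI sketch",
                      HSE_MAX_EXT_DLL_NAME_LEN);
            return TRUE;
        }

        DWORD WINAPI HttpExtensionProc(EXTENSION_CONTROL_BLOCK *pECB)
        {
            static const char body[] = "Hello from a single-process ISAPI extension\n";
            DWORD len = sizeof(body) - 1;

            /* Send the response headers, then the body; every request is
               handled inside the web server's own process. */
            pECB->ServerSupportFunction(pECB->ConnID, HSE_REQ_SEND_RESPONSE_HEADER,
                                        "200 OK", NULL,
                                        (LPDWORD)"Content-Type: text/plain\r\n\r\n");
            pECB->WriteClient(pECB->ConnID, (LPVOID)body, &len, 0);
            return HSE_STATUS_SUCCESS;
        }

      Both GetExtensionVersion and HttpExtensionProc must be exported from the DLL (for example via a .def file).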

      • #4
        I mentioned a shared memory block because of speed, but it could also be done using disk files or a database table (a shared-memory sketch follows below).

        Although, someone posted a few weeks ago:
        "Gee, in today's world with disk caches, accessing a disk file is almost as fast as using in-memory arrays."
        Maybe he's right.
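
        For what it's worth, here is a hypothetical sketch of that shared memory block: several FastCGI processes opening the same named file mapping via CreateFileMapping. The mapping name and size are made up for illustration, and any real use would also need a named mutex (or similar) to guard the block.

          /* Hypothetical shared-memory block for multiple FastCGI processes
             (Win32 named file mapping, backed by the paging file). */
          #include <windows.h>
          #include <stdio.h>

          #define SHARED_NAME "Local\\FcgiSharedBlock"   /* illustrative name */
          #define SHARED_SIZE 4096                       /* illustrative size */

          int main(void)
          {
              /* Every process opens (or creates) the same named mapping. */
              HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                               PAGE_READWRITE, 0, SHARED_SIZE,
                                               SHARED_NAME);
              if (hMap == NULL) {
                  fprintf(stderr, "CreateFileMapping failed: %lu\n", GetLastError());
                  return 1;
              }

              /* Map the block into this process's address space. */
              void *view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, SHARED_SIZE);
              if (view == NULL) {
                  fprintf(stderr, "MapViewOfFile failed: %lu\n", GetLastError());
                  CloseHandle(hMap);
                  return 1;
              }

              /* ... read/write the shared state here, under a named mutex ... */

              UnmapViewOfFile(view);
              CloseHandle(hMap);
              return 0;
          }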

        • #5
          It will only be 'fast' if the server isn't too busy.
          In practice this loading and unloading won't help you much.
          It's avoidable, but that would be fixing another problem while there are alternatives.

          Oh, by the way, why don't I hear about ISAPI anymore?
          The best approach, in my opinion.

          Stupid CGI EXEs...
          hellobasic

          • #6
            Take a look at the lively discussion of multithreads (Java/ISAPI) vs. multiprocesses (RubyOnRails/FastCGI) on this blog.

            Also, from "The Wonders of Multiplexing" in "FastCGI: The Forgotten Treasure" (I would replace "multiplex" with "multithreads"):

            "An application that multiplexes internally can handle context switches far more efficiently than the operating system. The OS has to re-program the MMU; it has to restore the contents of the CPU registers; it has to reconstruct the memory image of the process; etc. None of these operations are necessary when multiple contexts are managed by the application itself.

            Applications capable of multiplexing can easily handle several hundred channels simultaneously. Hardly any operating system can cope with that number of active processes, though. The overhead of the context switches would become so large that the machine would waste most of its CPU time on administrative tasks rather than running the actual processes (thrashing)."

            Again, with today's CPUs this may not be so. (A small single-process multiplexing sketch follows below.)
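
            As an illustration of what "multiplexing internally" means in general (not FastCGI-specific), here is a minimal sketch of one process serving many channels with select(); the port and buffer size are made up, and error handling is mostly omitted.

              /* One process, many channels: a tiny select()-based echo server. */
              #include <sys/types.h>
              #include <sys/select.h>
              #include <sys/socket.h>
              #include <netinet/in.h>
              #include <arpa/inet.h>
              #include <unistd.h>

              int main(void)
              {
                  int listener = socket(AF_INET, SOCK_STREAM, 0);
                  struct sockaddr_in addr = {0};
                  addr.sin_family = AF_INET;
                  addr.sin_addr.s_addr = htonl(INADDR_ANY);
                  addr.sin_port = htons(9000);            /* illustrative port */

                  bind(listener, (struct sockaddr *)&addr, sizeof(addr));
                  listen(listener, 16);

                  fd_set master;
                  FD_ZERO(&master);
                  FD_SET(listener, &master);
                  int maxfd = listener;

                  for (;;) {
                      fd_set ready = master;              /* select() modifies its set */
                      if (select(maxfd + 1, &ready, NULL, NULL, NULL) < 0)
                          break;

                      for (int fd = 0; fd <= maxfd; fd++) {
                          if (!FD_ISSET(fd, &ready))
                              continue;
                          if (fd == listener) {           /* new channel: just track it */
                              int client = accept(listener, NULL, NULL);
                              FD_SET(client, &master);
                              if (client > maxfd)
                                  maxfd = client;
                          } else {                        /* data on an existing channel */
                              char buf[512];
                              ssize_t n = read(fd, buf, sizeof(buf));
                              if (n <= 0) {
                                  close(fd);              /* channel finished */
                                  FD_CLR(fd, &master);
                              } else {
                                  write(fd, buf, (size_t)n);   /* echo it back */
                              }
                          }
                      }
                  }
                  return 0;
              }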
