
File locking with OPEN


  • File locking with OPEN

    In the following function:
    Code:
    FUNCTION OpenDatFile(BYVAL fLock AS LONG) AS LONG

      LOCAL timeoutval AS LONG
      LOCAL u          AS UserInfoStruc

      timeoutval = 1024                ' number of open attempts before giving up

      DO

        ERRCLEAR                       ' clear any error left by a failed attempt

        IF fLock THEN                  ' exclusive: deny other readers and writers
          OPEN datfile FOR RANDOM LOCK READ WRITE AS #1 LEN = SIZEOF(u)
        ELSE                           ' shared access
          OPEN datfile FOR RANDOM SHARED AS #1 LEN = SIZEOF(u)
        END IF

        IF ISFALSE ERR THEN            ' the open succeeded
          EXIT DO
        END IF

        DECR timeoutval

      LOOP WHILE timeoutval

      IF ISFALSE timeoutval THEN       ' every attempt failed
        FUNCTION = -1
      END IF

    END FUNCTION
    What is a good value for "timeoutval" in PB/CC and PB/DLL? I can't seem to get a good feel for it.

    --Dave

    -------------
    PowerBASIC Support
    mailto:[email protected]
    Home of the BASIC Gurus
    www.basicguru.com

  • #2
    I see this often with database engines: when you try to open a file for exclusive use, you want the program to retry the open if someone else has it open, and keep retrying until you can get the file. I have similar code, but it uses a sleep in the loop and optionally displays a retry message if the open still fails after waiting the desired length of time. I use it specifically for a small shared file that is read from or written to and then closed right after the read/write.
    Best Regards,
    Don

    ------------------
    Don Dickinson
    www.greatwebdivide.com
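    As a sketch of that retry-with-sleep pattern (in Python here rather than PowerBASIC; the delay and timeout values are just assumptions, and a plain open() stands in for the locked OPEN, since POSIX opens don't enforce sharing):

    ```python
    import time

    def open_with_retry(path, timeout_s=5.0, retry_delay_s=0.01):
        """Retry an open until it succeeds or the timeout expires, sleeping between tries."""
        deadline = time.monotonic() + timeout_s
        while True:
            try:
                # Stand-in for OPEN ... LOCK READ WRITE: any OSError triggers a retry.
                return open(path, "r+b")
            except OSError:
                if time.monotonic() >= deadline:
                    return None              # timed out; caller may show a retry message
                time.sleep(retry_delay_s)    # yield the CPU instead of busy-spinning
    ```

    On failure the caller gets None, which plays the role of the -1 return in the PowerBASIC version.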



    • #3
      Thanks Don!

      SLEEP is a great idea. But in this case, I can't display a "retry" option since the code is in a DLL called by a CGI application.

      --Dave


      ------------------
      PowerBASIC Support
      mailto:[email protected]
      Home of the BASIC Gurus
      www.basicguru.com



      • #4
        Hello,

        This is just a side note that might be of some interest, regarding some information I read a short while back. If you are using an NTFS file system (or if the program might ever be put on one), having many file open and close statements can lead to a drastic performance loss.

        The example I read was PC Magazine stating that in one of their tests of Windows 2000, while using Word 97, the Windows 2000 machine was much slower than the Windows 98 machine. Microsoft’s comment on this was that the function used many open and close statements for the task, and that the extra security overhead of the NTFS file system greatly slowed the process down. They said that future versions of Word would handle the task in a different way so as to avoid the many open and close statements.

        What I have done in DOS apps running in Windows in the past is to use my own “file locking”, i.e. just a couple of bytes at the beginning of the file that state the current lock state. With a bunch of tweaking I was able to make it work smoothly on a single computer or on a NetBEUI network. What it did was create a “white board” where many computers could share and edit the same file and have all updates immediately visible across all computers.

        It didn’t work over TCP/IP, I’d imagine due to more advanced file buffering in the TCP/IP stack. I’d guess, though, that it could work using the same or similar techniques if tweaked some more.

        In DOS, I ran tests with many computers reading and writing the same byte in a file repeatedly for some time (good for the hard drive, I’m sure). I never had any bad writes and I never had any bad reads. Was Windows just protecting my DOS app from potential problems? Are there any concerns with this in an all-Windows app?

        Anyway, with all that background, what I’m trying to say is: is this a suitable workaround, or is there a much better way to do this in Windows that I have missed entirely? This will be used for the backend of a database that I want to be able to scale to any size (within reason, of course).

        Thanks,
        Colin Schmidt

        ------------------
        Colin Schmidt & James Duffy, Praxis Enterprises, Canada
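        Colin’s lock-byte scheme can be sketched like this (a Python sketch under an assumed layout where byte 0 of the file holds the flag; note that the read-then-write is not atomic, which is exactly the race his DOS tests were probing):

        ```python
        import os

        LOCK_OFFSET = 0  # assumed layout: first byte of the file is the lock flag

        def try_acquire(f):
            """Try to take the application-level lock stored in the file's first byte."""
            f.seek(LOCK_OFFSET)
            if f.read(1) == b"\x01":
                return False             # another station holds the lock
            f.seek(LOCK_OFFSET)
            f.write(b"\x01")
            f.flush()
            os.fsync(f.fileno())         # force the flag out so other machines see it
            return True

        def release(f):
            f.seek(LOCK_OFFSET)
            f.write(b"\x00")
            f.flush()
            os.fsync(f.fileno())
        ```

        Between the read and the write, two processes can both see \x00 and both “acquire” the lock, so on a modern OS a real lock (LOCK in PowerBASIC, LockFileEx on Win32) is the safer answer to the question above.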


        [This message has been edited by Colin Schmidt (edited March 22, 2000).]



        • #5
          In my experience, timeout values for file locking should represent “real time” instead of a number of iterations in a loop. For a CGI, the ideal value is a function of the number of currently active threads on the web server (if it can be obtained).

          This technique gives maximum efficiency, since on a busy web server more locking requests could be waiting for the particular file to become available (i.e. unlocked).

          Sleep is a good idea if coupled with a value derived from the number of active threads on the server.


          Siamack
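          The “real time instead of iterations” point can be sketched as follows (a Python sketch; the per-thread weight and base timeout are assumed numbers, and how the active-thread count is obtained is left open, as noted above):

          ```python
          import time

          def lock_deadline(base_timeout_s=2.0, active_threads=0, per_thread_s=0.05):
              """Wall-clock deadline for a lock wait, stretched as server load grows."""
              return time.monotonic() + base_timeout_s + active_threads * per_thread_s

          def wait_until(try_lock, deadline, retry_delay_s=0.01):
              """Call try_lock() until it succeeds or the wall-clock deadline passes."""
              while time.monotonic() < deadline:
                  if try_lock():
                      return True
                  time.sleep(retry_delay_s)
              return False
          ```

          Counting attempts ties the timeout to CPU speed; a wall-clock deadline like this behaves the same on any machine.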


          ------------------




          [This message has been edited by Siamack Yousofi (edited March 22, 2000).]



          • #6
            There is no way for a CGI application to know how many active threads there are on the web server. Nor do I believe that information is actually useful.

            I changed the timeoutval to 255 and added a "SLEEP 10" to the loop. I was able to execute 10,000 near simultaneous accesses to the CGI app and never had a single timeout occur, so I'm satisfied.

            --Dave

            ps. In a CGI application, there is no way to keep the file open from one instance to the next, so OPEN/READ/WRITE/CLOSE in a single session is required.


            ------------------
            PowerBASIC Support
            mailto:[email protected]
            Home of the BASIC Gurus
            www.basicguru.com



            • #7
              Since the data files are (probably) on the same machine as the CGI script, I would assume you wouldn't see much delay in the open/close anyway.
              Best Regards,
              Don

              ------------------
              Don Dickinson
              www.greatwebdivide.com



              • #8
                Sleep is a good idea; it lets the other process do its work faster.
                If you are doing this under IIS and use ISAPI, you can take advantage of the fact that IIS won't call FreeLibrary on your DLL until it shuts down. You can then keep the file open between requests and only open/close on process attach/detach, and wait on a mutex to control locking instead of trying the open over and over.
                If you are using CGI and plan on heavy use, it might be worth the effort to consider a multi-tier design with MSMQ or MQSeries so that your CGI stays small. That is probably better overall, because it keeps things pseudo-transactional and the back-end app can keep the file open or in memory.
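                The mutex idea can be sketched in-process like this (a Python sketch using threading.Lock as a stand-in; under real ISAPI on Win32 you would wait on a named mutex via CreateMutex/WaitForSingleObject, and the function and timeout names here are assumptions):

                ```python
                import threading

                # Because the ISAPI DLL stays loaded between requests, one shared lock
                # (and one already-open file handle) can replace the open-retry loop.
                _dat_lock = threading.Lock()

                def with_dat_file(operation, timeout_s=5.0):
                    """Run operation() while holding the lock; None if the wait times out."""
                    if not _dat_lock.acquire(timeout=timeout_s):
                        return None              # analogous to the OPEN retry timing out
                    try:
                        return operation()       # read/write the already-open file here
                    finally:
                        _dat_lock.release()
                ```

                Waiting on the lock blocks efficiently in the kernel, instead of burning attempts in an open/fail/sleep loop.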



                ------------------

