
PB/DLL 6 and networked environments.

    PB/DLL 6 and networked environments.

    In the PowerBASIC for DOS forum I was advised to get, and did, the BASICally Speaking issue for December 1995, for Michael Mattias's excellent article on Multi-User programming. An earlier query of mine regarding error 75 in PB/DLL 6.0, which does not occur with the identical code in PB/DLL 5.0, together with Michael's mention of LOGICAL locking of files/records, set me thinking and experimenting.

    The code below is compiled under PB/DLL 6.0 in order to test the result of
    reading the same file (or record) twice, absolutely simultaneously, without ANY network conditions being set - no ACCESS/LOCK clause, no locking and no unlocking.

    The Delphi code follows it. There are TWO Delphi modules containing exactly the same code, differing only in their "project" and "unit" names and
    in the screen position at which MessageDlgPos is displayed. Each calls an identical DLL function (ReadIt) in its own DLL (ReadArec.DLL and
    ReadBrec.DLL).
    Code:
     -----------------------------------------------------------------------
    #Compile Dll "ReadArec.DLL"
    #Debug Error On
    
    Function ReadIt Alias "ReadIt" Export As String
     On Error GoTo RdErr
    Ent:                                  ' wait until the seconds are NOT on a 15-second boundary
     x$=Str$(Timer)
     y$=x$
     x$=Left$(x$,Instr(x$,".")-1)         ' drop the fractional part of TIMER
     x$=Right$(x$,2)                      ' keep the last two digits (the seconds)
     x#=Val(x$)
     If x# Mod 15=0 Then Ent
    bgn:                                  ' now spin until the next 15-second boundary arrives
     x$=Str$(Timer)
     y$=x$
     x$=Left$(x$,Instr(x$,".")-1)
     x$=Right$(x$,2)
     x#=Val(x$)
     If x# Mod 15<>0 Then bgn
     frf&=FreeFile
    
     Open "\QX\TestFile" For Binary As frf& Base=0
    
     Seek frf&,0
     Get$ frf&,Lof(frf&),e$               ' read the whole file into e$
     Close frf&
     Function=e$
     Exit Function
    RdErr:
     MsgBox("Error"+Str$(Err)+" in ReadArec")
     End Function
    -----------------------------------------------------------------------
    Delphi/Pascal:
    
    var
      frmReadThree: TfrmReadThree;
      ipa: string;
    
      function ReadIt: PChar; stdcall; external 'ReadArec.dll';
    
    implementation
    
    {$R *.DFM}
    
    procedure TfrmReadThree.Button1Click(Sender: TObject);
    begin
     ipa := ReadIt;
     MessageDlgPos('Ipa: '+ipa, mtInformation, [mbOk], 0, 350, 248);
     Button1.SetFocus;
    end;
    -----------------------------------------------------------------------
    All of this is based on the not unreasonable assumption that the hardware (file server disk read/write heads etc.) CANNOT be in more than one place at any one instant in time.

    The two Delphi modules produce two little forms on screen, each with
    a button (Button1). By pressing these one after the other (with no more than a second's delay between them), the two ReadIt functions are launched in succession.

    Each first ensures that the timer is NOT at a quarter-minute boundary, and then waits until the end of the current 15-second interval to open, read and close the little file (plain text). Since both instances use the same timer, both should leap into action at the same instant (or very close to it).

    What happens after the buttons are clicked to start the sequence is that there is a delay (the remainder of the fifteen seconds) and then
    BOTH messages (MessageDlgPos) pop up simultaneously, as near as may be discerned. Repeating the experiment ad nauseam has so far failed to produce any errors.
    -----------------------------------------------------------------------
    Michael's article on LOGICAL file locking, then, seems to be the preferable way to go. My file structure is already established, so I work around the "flag" byte by instead using a small new file called (say) OPENFILE. This is a binary file, i.e., a simple string.

    Whenever a file needs to be READ FROM or WRITTEN TO, anywhere in the system, it is now possible to call a function (e.g., StatusOf(), returning a Long) as in --

    n&=StatusOf("Debtors,15,15) (random file, from/to record number)

    n&=StatusOf("ProdType,0,0) (sequential file from 0 to lof("ProdType))

    n&=StatusOf("EstIndex,78,156) (binary file, from/to byte)

    StatusOf checks for the presence of these parameters in OPENFILE; if they are absent, it adds them to OPENFILE, writes it to disk, and returns zero. If the parameters are present, it delays half a second,
    then reopens OPENFILE and checks again, repeating until the file should be free,
    at which point it may exit with a zero value. The number of retests may be restricted to a realistic (time) maximum, after which a message is displayed to the user with suitable options (e.g., retry some more, or abort - if feasible). In fact, if the logon name/password/ID of users were added to the OPENFILE entries, it would even be possible to produce a juicy character analysis of defaulters.

    When the file or record has been read/written, then, unless required otherwise, a very simple routine reverses the OPENFILE entry by removing the identical parameters and refiling to disk.
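    To make the idea concrete, below is a minimal sketch of how such a StatusOf/release pair might look in PB/DLL. The file name \QX\OPENFILE, the comma-delimited entry layout, the retry count and the companion ReleaseStatus function are all assumptions for illustration, not the real package; note too that the check-and-append on OPENFILE is itself unguarded here, which is exactly the sort of thing picked up further down the thread.
    Code:
     -----------------------------------------------------------------------
    ' Hypothetical sketch only: one comma-delimited entry per held lock,
    ' e.g. "Debtors,15,15" plus CR/LF, kept in \QX\OPENFILE.
    Function StatusOf(f$, ByVal fr&, ByVal t&) As Long
     Local e$, buf$
     Local h&, n&
     e$=f$+","+Format$(fr&)+","+Format$(t&)+Chr$(13,10)
     For n&=1 To 20                      ' roughly ten seconds at half a second per retest
      h&=FreeFile
      Open "\QX\OPENFILE" For Binary As h& Base=0
      Get$ h&,Lof(h&),buf$               ' read every current entry
      If Instr(buf$,e$)=0 Then           ' nobody holds this file/range
       Seek h&,Lof(h&)
       Put$ h&,e$                        ' record our claim at the end
       Close h&
       Function=0                        ' zero = go ahead
       Exit Function
      End If
      Close h&
      Sleep 500                          ' busy: wait half a second, then retest
     Next
     Function=1                          ' still busy after the retry limit
    End Function
    
    Function ReleaseStatus(f$, ByVal fr&, ByVal t&) As Long
     Local e$, buf$
     Local h&, p&
     e$=f$+","+Format$(fr&)+","+Format$(t&)+Chr$(13,10)
     h&=FreeFile
     Open "\QX\OPENFILE" For Binary As h& Base=0
     Get$ h&,Lof(h&),buf$
     Close h&
     p&=Instr(buf$,e$)
     If p& Then                          ' strip our entry and refile to disk
      buf$=Left$(buf$,p&-1)+Mid$(buf$,p&+Len(e$))
      Kill "\QX\OPENFILE"
      h&=FreeFile
      Open "\QX\OPENFILE" For Binary As h& Base=0
      Put$ h&,buf$
      Close h&
     End If
    End Function
     -----------------------------------------------------------------------
    In the calling code, n&=StatusOf("Debtors",15,15) would then guard the read/write and ReleaseStatus("Debtors",15,15) would undo it afterwards - again, only a sketch of the scheme described above.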

    Like binary files, this scheme frees one from predefined conditions and/or restrictions as well as from unknown differences in networking software from different vendors, and for very little extra work one gains more control over what needs to be done.

    Of course, that is, if there are no Black Holes in all of this.

    #2
    I think I'll play the devil's advocate here...

    Restricting access to a file based on a particular "time" has several nasty weaknesses:

    1. On networked systems, you may have no control over the hardware, therefore you cannot categorically "know" that the technique is safe... a busy server will queue your request and it could "conflict" with another request... ie, "race conditions".

    2. When the code runs on two or more workstations, you cannot guarantee the clocks will be synchronized, so you may still encounter a conflict when two workstation file requests hit the server at the same time.

    3. Local & server caching can mean that the concept of the "hardware CANNOT be in more than one place at any one instant in time" is not a reliable asumption.

    4. Multi-processor and/or "raid-array" file servers will completely trash the idea that the "hardware CANNOT be in more than one place at any one instant in time"

    5. Scalability issues. Your technique may work with a small number of applications accessing a file, but what happens when there are 500 or 10,000 simultaneous applications using the same file? Who gets served?

    IMHO, the best solution is a combination of both hard and soft locking. It requires careful application design to minimize the need to lock when there is no need.

    One technique that I see from time to time is to use a second file of which each byte represents the lock status of a record in the primary file... the individual bytes are LOCKed rather than the records in the primary file, therefore helping to minimize deadlock when just trying to read from the primary file (such as can occur when a workstation crashes while a record lock is in place).

    Records in the primary file are only locked when a write is occurring, and the lock applied to the corresponding byte in the second file can be used to determine if the primary file record is "reserved", "in-use", "locked for update", available, etc.
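    A rough PB sketch of that lock-byte technique, purely for illustration (the file names, the record length and the return values are invented): each record n of CUSTOMER.DAT has a matching byte n in CUSTOMER.LCK, and it is that byte which is LOCKed for the duration of the write.
    Code:
     -----------------------------------------------------------------------
    ' Illustrative sketch only - names and record length are assumptions.
    Function UpdateCustomer(ByVal rec&, d$) As Long
     Local hDat&, hLck&
     hLck&=FreeFile
     Open "CUSTOMER.LCK" For Binary As hLck& Base=1
    
     On Error Resume Next
     ErrClear
     Lock hLck&, rec& To rec&            ' claim the one status byte for this record
     If Err Then                         ' someone else holds it
      Close hLck&
      Function=0                         ' 0 = not written, try again later
      Exit Function
     End If
    
     hDat&=FreeFile
     Open "CUSTOMER.DAT" For Random As hDat& Len=128
     Put hDat&, rec&, d$                 ' write only while the status byte is held
     Close hDat&
    
     Unlock hLck&, rec& To rec&
     Close hLck&
     Function=1                          ' 1 = written
    End Function
     -----------------------------------------------------------------------
    A plain read of CUSTOMER.DAT need not touch CUSTOMER.LCK at all, which is how this arrangement keeps simple reads from being blocked by record locks in the primary file.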

    My $0.02.

    -------------
    Lance
    PowerBASIC Support
    mailto:[email protected]



      #3
      Lance,

      To clarify:

      >Restricting access to a file based on a particular "time" has several nasty weaknesses
      >
      >1. On networked systems, you may have no control over the hardware, therefore you cannot >categorically "know" that the technique is safe... a busy server will queue your request and it >could "conflict" with another request... ie, "race conditions".

      The only "time" which is relevant here is the suggested 0.5 second delay if file/record busy.
      But above you say "a busy server will queue your request". If I understand aright, that means
      the half-second falls away. Is this "queue" the same as or similar to a printer spool queue on a network? I am network-illiterate and would appreciate some elucidation of the term "race conditions" and what conflict could arise if requests are indeed queued in an orderly way.

      >2. when the code runs on two or more workstations, you cannot guarantee the clocks will be >synchronized, so you may still encounter a conflict when two workstation file requests hit the >server at the same time.

      The sample code includes this "timing" bit ONLY to simulate a possible Real World situation; in
      live code it falls away. My purpose is to create a condition whereby two workstations perform
      the identical file read/write at the same moment, since the test code runs on only ONE PC for the purposes of the experiment. A Real World network doesn't want this, but it WILL occasionally
      encounter the odd simultaneous read/write request, with no reference to clock settings. What it seems to prove to me is that, since I did not create an error/conflict situation by this, it does work.

      >3. Local & server caching can mean that the concept of the "hardware CANNOT be in more than one >place at any one instant in time" is not a reliable asumption.

      and

      >4. Multi-processor and/or "raid-array" file servers will completely trash the idea that the >"hardware CANNOT be in more than one place at any one instant in time"

      This, in conjunction with your point 1 above: if access to a file or file record is restricted to
      ONE user at a time by the logical locking method visualised in Michael's article, surely the
      physical whereabouts of the record is immaterial? That file or record must be a specific and definable entity or quantity, such as "record 12 of random access file INVOICES" - or is this not so? The word "virtual" this, that and the other may have a bearing here, e.g., "Yes, it is virtually definable, but not absolutely"! Which would rewrite the annals of computers in their heyday, when it was axiomatic that computers offer "only speed and accuracy" and that "the computer does what YOU tell it to do, even if it is wrong". Thus, record number 13 is either
      A. record number 13, or B. it is not record number 13.

      >5. Scaleability issues. While your technique works with a small number of applications acessing >a file, but what happens when there are 500 or 10,000 simultaneous applications using the same >file? Who gets served?

      Our package in its present form is very specifically for a very specific target market. In the DOS version, the largest user has a network of six or seven workstations. For "applications" here I am back to (over)simplification: I assume the server receives, from one or more workstations, requests to read a certain record or file and proceeds to perform this service, with scant concern for the shape or size of the calling module/application. To quote Michael (page 4, col. 3): "When using physical locks, a programme that attempts to access a locked record will generate a PowerBASIC runtime error which must be handled..." etc., without reference to LOCK READ or LOCK SHARED or whatever. The implication being that only one user may access a record at one instant. Which is precisely what I am about.

      >IMHO, the best solution is a combination of both hard and soft locking. It requires careful >application design to minimize the need to lock when there is no need.

      "No need to lock": I read this as referring to any read-only request, where no updating and refiling applies. Is that correct? However, this does reiterate the question: does the network "queue" requests and execute these individually, one at a time?

      >"to use a second file of which each byte represents the lock status of a record in the primary >file... "

      I understand the concept of LOCKING to mean "prevent access by others". Now, if this is achieved (by whatever means, i.e., hard or soft), does that not answer? I see my prototype file ("OPENFILE") as being this "second file" that you mention. This, it seems to me, would satisfy your paragraph ...
      >Records in the primary file are only locked when a write is occurring, and the lock applied to >the corresponding byte in the second file can be used to determine if the primary file record is >"reserved", "in-use", "locked for update" or available, etc.
      My file structure is almost always accessed as binary files, whereby "from/to byte" (soft) locking is simple.
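      For what it is worth, a minimal sketch of that from/to-byte idea in PB, using the 78-to-156 span from the earlier StatusOf example (the file name, and simply reporting "busy" to the caller, are assumptions for illustration):
      Code:
       -----------------------------------------------------------------------
      ' Sketch only: lock just the span being rewritten, leave the rest free.
      ' (Assumes Len(d$) matches the 79-byte span being replaced.)
      Function UpdateSpan(d$) As Long
       Local h&
       h&=FreeFile
       Open "\QX\EstIndex" For Binary As h& Base=0
       On Error Resume Next
       ErrClear
       Lock h&, 78 To 156                ' claim bytes 78 to 156 only
       If Err=0 Then
        Seek h&,78
        Put$ h&,d$                       ' rewrite the locked span
        Unlock h&, 78 To 156
        Function=0                       ' zero = written
       Else
        Function=Err                     ' busy - caller may retry or report it
       End If
       Close h&
      End Function
       -----------------------------------------------------------------------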

      Quite aside from all this, the following HELP, under "#OPTION" is disquieting ---

      >Use #OPTION VERSION3 to compile and make the output file compatible with Windows NT 3. It is the >programmer's responsibility to make sure features specific to Windows 95/98 and NT 4 are not >used.
      >
      >Use #OPTION VERSION4 (default) to compile and make the output file require the target system be >running Windows 95/98 or NT 4.
      >
      >Use #OPTION VERSION5 to compile and make the output file require the target system be running >Windows 2000 or NT 5.

      When one is distributing a package world-wide there is no way of knowing which of these may apply, especially the latter two. Does this mean one has to have THREE different versions at all times and leave the prospective end-user to select the one applicable?

      Awaiting proper invoice for $0.02.



        #4
        The devil's advocate replies...!

        There are no golden rules for network behaviour... what is true with one O/S is often completely different in another O/S. Configuration options can have a serious effect too. For example, most versions of Windows can be adjusted so that the CPU gives more time to network requests or local applications. There are millions of possible operating system behaviors and differences.

        A "race-condition" can be where two (or more) processes are running simultaneously and the final results depend on which process finishes first. For example, lets look at a hypothetical multi-user invoicing program: two users sell the same item simultaneously, but there is only one unit in stock... without a good file locking algorythm in place, both users may appear to successfully "sell" the same item. This is a fairly weak example, but may give you some idea.

        Software which is susceptible to race conditions is often unreliable and unpredictable! Race conditions can occur whenever there are two or more processes competing for a resource, be it a file, a communications port, etc.

        >What it seems to prove to me is that, since I did not create an error/conflict situation by this, it does work.
        Don't bank on it. It may be successful on your network server, but try it on a bunch of other servers... I'd be surprised if it was 100% bulletproof on all O/S's.

        If you are willing to restrict the file to one user at a time, then you will surely block out a lot of problems, but it also knocks out the "multi-user" mode of the software. As I noted, you must design the file access algorithm very carefully to provide the utmost flexibility while maintaining the integrity of the data. This is one reason why client/server is successful... rather than leave the data access control to "client" applications, the "server" application receives all data requests from the clients, and it alone performs access to the datafile - no client applications are permitted direct access.

        >Implication being that only one user may access a record at one instant. Which is precisely what I am about.
        Let's say someone locks a record, but goes to lunch before writing the data back to the datafile. Until they return and finish, the record in question is locked. If another user tries to read all the records to generate a report, their application gets stuck on the lock ad infinitum. I don't consider that to be a multi-user application. My concept of a well designed "locking algorithm" would cater to this possibility.
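        One hedged sketch of what "cater to this possibility" could look like - a bounded wait instead of an indefinite one (the quarter-second pause and the five-second deadline are arbitrary choices for illustration):
        Code:
         -----------------------------------------------------------------------
        ' Sketch only: try for the lock, but give up after about five seconds
        ' so a lock abandoned over lunch cannot stall a report forever.
        Function LockWithTimeout(ByVal h&, ByVal rec&) As Long
         Local t!
         t!=Timer
         On Error Resume Next
         Do
          ErrClear
          Lock h&, rec&
          If Err=0 Then                   ' got the lock
           Function=1
           Exit Function
          End If
          Sleep 250                       ' short pause, then retry
         Loop Until Timer-t!>5            ' give up after roughly five seconds
         Function=0                       ' timed out: skip the record, warn the user, etc.
        End Function
         -----------------------------------------------------------------------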

        "No need to lock": I read this as referring to any read-only request, where no updating and refiling applies. Is that correct? However, this does reiterate the question: does the network "queue" requests and execute these individually, one at a time?
        Yes, if it is a read-only data request, allow shared access. Whether the server queues the request is not predictable... if the server has several CPUs and utilizes a RAID array (drive mirroring, etc.), it may be quite possible to answer several data requests at the same instant. The network itself is also likely to be the bottleneck, so it can introduce even more problems.

        I guess what I'm trying to say is this: you simply cannot make assumptions about the network or server under any circumstances, or your application will not be reliable. Great news, eh?

        Finally, the concept behind #OPTION VERSIONx is simple: there are API differences between 3, 4 and 5. For example, if your app uses features only available on NT4, then permitting the app to run on NT 3.51 is likely to cause a crash, usually at app startup (which does not give the user a nice impression of your program).
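        If it helps, one way to combine the compile-time and run-time sides is to build with #OPTION VERSION3 so the EXE will load everywhere, and then guard the newer calls at run time. A minimal sketch, using only the standard GetVersion API (the helper name is an invention for the example):
        Code:
         -----------------------------------------------------------------------
        #Option Version3                  ' output stays loadable on NT 3.x
        Declare Function GetVersion Lib "KERNEL32.DLL" Alias "GetVersion" () As Dword
        
        ' Sketch: report whether the running Windows is major version 4 or
        ' later, so 95/NT4-only features can be skipped gracefully on NT 3.x.
        Function CanUseVersion4Features() As Long
         Local dwVer As Dword, nMajor As Long
         dwVer=GetVersion()
         nMajor=dwVer And &HFF            ' low byte of the low word = major version
         If nMajor>=4 Then Function=1 Else Function=0
        End Function
         -----------------------------------------------------------------------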

        Tools like "Boundschecker" can help with this problem: Boundschecker looks over your EXE and reports which platforms the code will run on. While it does not guarantee that your code is bug-free, it will quickly tell you which platforms are effectively out of reach. If you need your applications to run on all platforms, you have to write code that works on the lowest common denominator: API level 3.


        -------------
        Lance
        PowerBASIC Support
        mailto:[email protected]

