
Fast and very robust database server


    I like SQLite, but then I am a minimalist.

    My guess is that SQLite would handle the volume without difficulty, easily tested without cost - except for your time. Certainly creating 60 tables and populating each with a few thousand rows should not take very long.

    Whether SQLite's multiuser capabilities would suffice depends on the precise nature of the requirement, and again could be prototyped if that is indicated after reading the SQLite website and asking well-designed questions on the SQLite forum.
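    The test suggested above is cheap to run. A minimal sketch using Python's built-in sqlite3 module (table and column names are made up for the example): create 60 tables, load a few thousand rows into each, and time it.

```python
# Quick prototype: 60 tables, a few thousand rows each, timed.
import sqlite3
import time

conn = sqlite3.connect(":memory:")   # use a file path for a disk-based test
cur = conn.cursor()

start = time.perf_counter()
for t in range(60):
    cur.execute(f"CREATE TABLE table_{t} (id INTEGER PRIMARY KEY, payload TEXT)")
    cur.executemany(
        f"INSERT INTO table_{t} (payload) VALUES (?)",
        [(f"row {i}",) for i in range(5000)],
    )
conn.commit()
elapsed = time.perf_counter() - start

rows = cur.execute("SELECT COUNT(*) FROM table_0").fetchone()[0]
print(f"60 tables x {rows} rows in {elapsed:.2f}s")
```

    Switching the connection to a real file (and adding representative indexes) makes the numbers closer to what the application would actually see.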


      I think about databases this way.
      You should have a system written with your own database routines built in. If you cannot figure out how to do that, I would question your ability to program such a complex job.

      Your own design should run like a cheetah. If the program is not designed to be a web program, I do not see what all the fuss is about. I believe most of these database designs are, on the whole, for general-purpose and transaction-oriented processing, not for lean, mean, real-time processing. I would not be thinking this same way for a large financial institution.

      In your lean, mean system you should follow one rule: KEEP ALL DATA. Never throw any data away.

      An easy way to keep databases that way is to have two sets of databases. One is the current database and the other is the historical database. As time goes on, the current database will always be the smaller of the two; it holds all open and paid-out records starting at the beginning of the year, plus all records from the last three months of the previous year.

      The way I did that was to run a program at the end of each day that updated the second database from the first. After this batch run, every record in the first database is also in the second, so the second database is as current as the first. If your primary database gets corrupted, you can filter the current records out of the second database to rebuild the first.

      If the second database gets corrupted, you recover it by restoring your last good copy, maybe even the beginning-of-year copy, and then processing your small first database against it to bring it up to date.

      Also, by doing this update from the current database to the second database at the end of the day (or whenever), you can detect any changes as they happen within a day's time, record them, and build reports on them over time, like when a customer's phone number changed.
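      A minimal sketch of that end-of-day batch and change log, using Python's sqlite3 module; the customers table and phone column are illustrative, not from the post.

```python
# End-of-day batch: copy every current record into the historical database
# and log any field that changed (e.g. a customer's phone number).
import sqlite3

def end_of_day(current, history):
    """Bring history up to date with current; return a list of changes."""
    changes = []
    for cid, phone in current.execute("SELECT id, phone FROM customers"):
        old = history.execute(
            "SELECT phone FROM customers WHERE id = ?", (cid,)).fetchone()
        if old is None:
            history.execute(
                "INSERT INTO customers (id, phone) VALUES (?, ?)", (cid, phone))
        elif old[0] != phone:
            changes.append((cid, old[0], phone))   # what changed, old -> new
            history.execute(
                "UPDATE customers SET phone = ? WHERE id = ?", (phone, cid))
    history.commit()
    return changes

current = sqlite3.connect(":memory:")
history = sqlite3.connect(":memory:")
for db in (current, history):
    db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, phone TEXT)")
history.execute("INSERT INTO customers VALUES (1, '555-0100')")
current.execute("INSERT INTO customers VALUES (1, '555-0199')")   # phone changed today
current.execute("INSERT INTO customers VALUES (2, '555-0111')")   # new customer
for db in (current, history):
    db.commit()

log = end_of_day(current, history)
print(log)   # customer 1's phone changed; customer 2 is simply copied over
```

      Writing the returned change list to its own table, stamped with the date, gives exactly the "when did this change" reports described above.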

      By taking this approach, the client will always be able to use his computer to see things that happened years ago. And oh boy, does that ever happen.
      Your transactions file, basically new business and activity on current business, should have new files too. If you are handling loans: a new-loans file, a paid-out-loans file, and an all-payments file.

      Are those 50 to 60 items tables, fields, or indexes?
      Build customer records with the fields needed plus at least a third more room for new fields. When zipped down to archive the files, they will be small.
      Then give your program at least one more database for everything else, with plenty of room for growth for the goodies you do not perceive now but that will hit you in the future.

      Keep your indexes in a separate file so they can be rebuilt at a moment's notice. Indexes should never have to be backed up.

      If you are worried about fragmentation, when the last record of a database gets close to the end of the file, just grow the file by a chunk of empty space. This should only be a concern for the two large databases.

      If you are not being paid much to do this, just give the customer what he pays for, and let him live with the efficiency and backup problems.

      I really cannot see a problem with speed unless you want something else to take up the slack for a poorly designed database up front. As a matter of fact, you should probably make the record that is going to hold the loan information about twice as large as the grand total of the field lengths you have now, because the information given to you is not going to be all you need. Listen to Michael about customers' wishes.

      I am sure many people on this forum will disagree with me. But not too many years ago one very young guy wrote a financial program using DBase, and I just laughed at him and said, are you kidding me? Run your software on an older, slower machine; that will help you develop fast routines and sharpen your thinking on good database programming.

      If you need me to expand on the idea of the two-database system, one with current records and the other with current plus purged records, I can write more about it. With this approach I have yet to lose any information, by human or computer failure, without being able to discover what was lost or what had changed.

      Like I said, you purge your system only once a year, and you keep the purged information in a separate file that you should never need again, because your second database, with the current and paid-out customers, is updated first. Your program only has to make one hit for a customer's record: if the record does not exist in the current database, you look for it in the second. The second database has its own indexes, which are scarier in the beginning, but it all comes together once applied.
      By the way, I can find any loan, payment, change to the databases, or any other transaction that has taken place since September of 1984.
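      The one-hit lookup with a fallback to the second database can be sketched like this (sqlite3 again, with an illustrative loans table):

```python
# Lookup strategy from the post: hit the small current database first,
# and only on a miss look in the large historical database.
import sqlite3

def find_loan(current, history, loan_id):
    """Check the current database first; on a miss, fall back to history."""
    row = current.execute(
        "SELECT id, status FROM loans WHERE id = ?", (loan_id,)).fetchone()
    if row is None:
        row = history.execute(
            "SELECT id, status FROM loans WHERE id = ?", (loan_id,)).fetchone()
    return row

current = sqlite3.connect(":memory:")
history = sqlite3.connect(":memory:")
for db in (current, history):
    db.execute("CREATE TABLE loans (id INTEGER PRIMARY KEY, status TEXT)")
current.execute("INSERT INTO loans VALUES (1, 'open')")
history.execute("INSERT INTO loans VALUES (1, 'open')")
history.execute("INSERT INTO loans VALUES (99, 'paid out')")   # purged from current

print(find_loan(current, history, 1))    # found in current, one hit
print(find_loan(current, history, 99))   # falls back to history
```

      Because the current database stays small, the common case is a single fast lookup; only old, paid-out records pay for the second probe.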

      One of the things I added later was a program that compared all the databases for corrupted data before the daily update, since other software besides my own accessed the data files and you never know what failing equipment can do. Every field was checked for characters that should not be in it. Go ahead, put a number where a name goes, and we will see what happens; that will make the users more careful about entering bad data. But there can be computer failure too, and this check will find it in a New York second, and tell you where it is.
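      A minimal sketch of that daily field check; the validation rules and column names are assumptions for the example, not from the post.

```python
# Daily integrity scan: check every field against a rule for what characters
# belong in it, and report exactly which record and column are bad.
import re
import sqlite3

RULES = {
    "name":  re.compile(r"[A-Za-z .'\-]+"),   # no digits in a name field
    "phone": re.compile(r"[0-9\-]+"),
}

def audit(db):
    """Return (row id, column) for every field that fails its rule."""
    bad = []
    for rowid, name, phone in db.execute("SELECT id, name, phone FROM customers"):
        if not RULES["name"].fullmatch(name or ""):
            bad.append((rowid, "name"))
        if not RULES["phone"].fullmatch(phone or ""):
            bad.append((rowid, "phone"))
    return bad

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)")
db.execute("INSERT INTO customers VALUES (1, 'Smith', '555-0100')")
db.execute("INSERT INTO customers VALUES (2, '12345', '555-0101')")  # a number where a name goes

print(audit(db))   # pinpoints row 2's name field
```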

      Don't depend on the system to check for failure; it is the programmer's responsibility.

      You should also keep a beginning-of-the-month copy of the databases so you can do transaction checking. Simply run the month's transactions against the beginning-of-month copy, and you should end up with the current data in the databases. You might not get this part done in the beginning, but hopefully you will have it in the future. This will verify the accuracy of your databases and give you an audit report of any wrongdoing by workers trying to circumvent something.
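      The month-end replay check can be sketched with plain dictionaries standing in for the databases:

```python
# Replay audit: snapshot + month's transactions must reproduce the live data.
def replay_matches(snapshot, transactions, live):
    """True if applying the transactions to the snapshot yields the live balances."""
    balances = dict(snapshot)
    for account, amount in transactions:
        balances[account] = balances.get(account, 0) + amount
    return balances == live

snapshot = {"A100": 500, "B200": 0}          # beginning-of-month copy
transactions = [("A100", -200), ("B200", 150)]

# Matches: every balance is explained by a recorded transaction.
ok = replay_matches(snapshot, transactions, {"A100": 300, "B200": 150})
# A balance nobody posted a transaction for is exactly what this audit catches.
tampered = replay_matches(snapshot, transactions, {"A100": 300, "B200": 999})
print(ok, tampered)
```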

      So roll your own, it is usually a better product if networking issues do not get to be too great.
      p purvis


        Codebase option

        Originally posted by Marc van Breemen View Post

        I still haven't decided on which database I should use. One thing is for sure: 1) it has to be very reliable, 2) it has to be very fast. In that order.

        If possible it has to be cheap if not free. I have three choices left: MSSQL Express (disadv: memory usage, installation), MySQL (a bit too bloated) or SQLite (looks very good).

        No. 1 on my list is SQLite, but I have doubts. Is it really that good for my needs? Will development be likely continued on the long run?
        Not free, but it is cheap and meets your other requirements.


        A copy of the library must be purchased for each developer. Included is a royalty-free server for client/server use, free for up to 10 users; moving beyond 10 users carries additional but low costs.


          Link to Oracle-XE


            Originally posted by Edwin Knoppert View Post
            Was SQLite not poor on multiuser?
            I have also been looking at SQLite for a rewrite of a rather large Paradox-based application a client of mine has been using for about 7 years now, and I was wondering about its multiuser aspect myself. The application has about 100 tables, many in the 500k-row range. In 7 years, Paradox has not failed: no database restores, no damaged tables. I've gotten used to all the intrinsic multiuser support Paradox provides automatically. For instance, when I retrieve rows from one or more tables for some update function and the user makes changes, Paradox informs me automatically if some other user has changed the applicable rows since they were retrieved. I use this feature instead of setting any row or table locks. With SQLite, I think I will have to include some type of incremental update counter so I can check whether a row changed since the initial retrieval. These are the sort of issues I spend most of my time thinking about.
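            A minimal sketch of that incremental update counter in SQLite, i.e. optimistic locking with a version column; the schema and names are assumptions for the example.

```python
# Optimistic concurrency: a version column that every write bumps.
# A save succeeds only if nobody else changed the row since we read it.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, phone TEXT,"
           " version INTEGER NOT NULL DEFAULT 0)")
db.execute("INSERT INTO customers (id, phone) VALUES (1, '555-0100')")
db.commit()

def read(db, cid):
    return db.execute(
        "SELECT phone, version FROM customers WHERE id = ?", (cid,)).fetchone()

def save(db, cid, phone, seen_version):
    """Write only if the version is unchanged since the row was read."""
    cur = db.execute(
        "UPDATE customers SET phone = ?, version = version + 1 "
        "WHERE id = ? AND version = ?", (phone, cid, seen_version))
    db.commit()
    return cur.rowcount == 1   # False means another user got there first

phone, version = read(db, 1)                        # user A reads the row
db.execute("UPDATE customers SET phone = '555-0199', "
           "version = version + 1 WHERE id = 1")    # user B sneaks in a change
db.commit()

print(save(db, 1, '555-0123', version))   # False: stale read detected
```

            On a conflict the application re-reads the row, shows the user the newer data, and lets them retry, which is essentially what Paradox was doing behind the scenes.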

            It has come to my attention that certain dubious forces are interpolating their desires in my search for Mom, apple pie and the girl you left behind. Stop it or I'll scream...


              We have been discussing the SQLite multiuser locking stuff over on the SQLitening forum:
              Paul Squires
              FireFly Visual Designer (for PowerBASIC Windows 10+)
              Version 3 now available.


                Originally posted by Edwin Knoppert View Post
                Was SQLite not poor on multiuser?
                SQLite does not claim to be multi-user because it can only comfortably handle roughly 25 concurrent connections under normal circumstances. That is according to their wiki, so it may well do the job in this case; it just cannot handle hundreds of users or more.
                Scott Slater
                Summit Computer Networks, Inc.


                  SQLite does not claim to be multi-user because it can only comfortably handle roughly 25 concurrent connections
                  Just for the record..... and as kind of a 'gotcha watch' thing......

                  I ran into a similar but not the same problem a couple months ago with SQL/Server.

                  The number of concurrent statements allowed on a single connection is a SQL/Server configuration option. After nine years I finally got a customer who had configured it to one, and that exposed an oversight in my code (the way I had written it, I needed two or more).

                  You might encounter similar situations with SQLite or any other brand of DBMS... a little something to think about.

                  Michael Mattias
                  Tal Systems (retired)
                  Port Washington WI USA
                  [email protected]