
Fast and very robust database server


  • Michael Mattias
    replied
    SQLite does not claim to be multi-user because it can only comfortably handle roughly 25 concurrent connections
    Just for the record..... and as kind of a 'gotcha watch' thing......

    I ran into a similar, though not identical, problem a couple of months ago with SQL Server.

    The number of concurrent statements allowed on a single connection is some kind of SQL Server configuration option. After nine years I finally got a customer who had configured "one", and it exposed an oversight in my code. (The way I had written it, I needed two or more.)

    You might encounter similar situations with SQLite or any other brand of DBMS... a little something to think about.
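    As a sketch of that defensive habit, using Python's bundled sqlite3 purely as a stand-in DBMS (the table and names are invented): materialize the first result set before issuing a second statement on the same connection, so the code still works on a DBMS configured to allow only one active statement per connection.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO loans (balance) VALUES (?)",
                 [(100.0,), (250.0,), (75.0,)])

# Defensive pattern: fetch the whole first result set with fetchall()
# before running a second statement, instead of interleaving UPDATEs
# inside the still-open SELECT cursor. On a DBMS that allows only one
# active statement per connection, interleaving would fail.
rows = conn.execute("SELECT id, balance FROM loans").fetchall()
for loan_id, balance in rows:
    conn.execute("UPDATE loans SET balance = ? WHERE id = ?",
                 (balance * 1.05, loan_id))
conn.commit()
```

    The same pattern applies over ODBC or any client library: the buffering, not the SQL, is what removes the second concurrent statement.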

    MCM

    Leave a comment:


  • Scott Slater
    replied
    Originally posted by Edwin Knoppert View Post
    Was SQLite not poor on multiuser?
    SQLite does not claim to be multi-user because it can only comfortably handle roughly 25 concurrent connections under normal circumstances. This according to their Wiki, so it may well do the job in this case. It just cannot handle hundreds of users or more.
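    For illustration, here is roughly what that write contention looks like with Python's bundled sqlite3 module (file name and table invented): SQLite serializes writers with a file lock, so a second writer either waits out a busy timeout or gets a "database is locked" error it must handle and retry.

```python
import sqlite3
import os
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")

# isolation_level=None puts both connections in autocommit mode so we
# can manage transactions explicitly; timeout is the busy-wait limit.
w1 = sqlite3.connect(db_path, timeout=0.1, isolation_level=None)
w1.execute("CREATE TABLE log (msg TEXT)")
w1.execute("BEGIN IMMEDIATE")            # take the write lock now
w1.execute("INSERT INTO log VALUES ('first writer')")

# A second writer cannot get the lock; after the 0.1 s busy timeout
# sqlite3 raises OperationalError ("database is locked").
w2 = sqlite3.connect(db_path, timeout=0.1, isolation_level=None)
try:
    w2.execute("INSERT INTO log VALUES ('second writer')")
    blocked = False
except sqlite3.OperationalError:
    blocked = True

w1.execute("COMMIT")                     # release the lock...
w2.execute("INSERT INTO log VALUES ('second writer, retried')")  # ...retry succeeds
```

    With sensible busy timeouts and short transactions this is workable for a modest number of users, which matches the "roughly 25 connections" guidance above.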

    Leave a comment:


  • Paul Squires
    replied
    We have been discussing the SQLite multiuser locking stuff over on the SQLitening forum: http://planetsquires.com/support/index.php?topic=3104.0

    Leave a comment:


  • Rick Kelly
    replied
    Originally posted by Edwin Knoppert View Post
    Was SQLite not poor on multiuser?
    I have also been looking at SQLite for a rewrite of a rather large Paradox-based application a client of mine has been using for about 7 years now, and was wondering about the multiuser aspect of it myself. The application has about 100 tables, with many tables in the 500k-row range. In 7 years, Paradox has not failed, and no database restores or damaged tables have resulted. I've gotten used to all the intrinsic multiuser support Paradox provides automatically. For instance, I retrieve rows from one or more tables for some update function, the user makes changes, and Paradox informs me automatically if some other user has changed the applicable rows since they were retrieved. I use this feature instead of setting any row or table locks. With SQLite, I think I will have to include some type of incremental update counter so I can check whether a row has changed since the initial retrieval. These are the sorts of issues I spend most of my time thinking about.
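    That update-counter idea can be sketched as optimistic concurrency with a version column (Python's sqlite3 here; the schema and helper are invented for illustration): an UPDATE only succeeds if the version is still the one that was read, so a stale write is rejected instead of silently overwriting another user's change.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customer (
    id      INTEGER PRIMARY KEY,
    phone   TEXT,
    version INTEGER NOT NULL DEFAULT 0)""")
conn.execute("INSERT INTO customer (id, phone) VALUES (1, '555-0100')")
conn.commit()

def update_phone(conn, cust_id, new_phone, version_read):
    """Optimistic update: only succeeds if nobody changed the row since
    we read it (version still matches), and bumps the version counter."""
    cur = conn.execute(
        "UPDATE customer SET phone = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_phone, cust_id, version_read))
    conn.commit()
    return cur.rowcount == 1     # False => someone else got there first

# Users A and B both read the row at version 0...
_, v = conn.execute("SELECT phone, version FROM customer WHERE id = 1").fetchone()

ok_a = update_phone(conn, 1, '555-0199', v)   # A's update wins
ok_b = update_phone(conn, 1, '555-0142', v)   # B's stale update is rejected
```

    On a rejected update the application would re-read the row and show the user what changed, which is essentially what Paradox does automatically.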

    Leave a comment:


  • Martin Francom
    replied
    Link to Oracle-XE

    http://www.oracle.com/technology/pro.../xe/index.html

    Leave a comment:


  • Brian Chirgwin
    replied
    Codebase option

    Originally posted by Marc van Breemen View Post
    Hi,

    I still haven't decided on which database I should use. One thing is to be sure: 1) it has to be very reliable, 2) it has to be very fast. In that order.

    If possible it has to be cheap if not free. I have three choices left: MSSQL Express (disadv: memory usage, installation), MySQL (a bit too bloated) or SQLite (looks very good).

    No. 1 on my list is SQLite, but I have doubts. Is it really that good for my needs? Will development be likely continued on the long run?
    Not free, but it is cheap and it meets your other requirements.

    Codebase

    A copy of the library must be purchased for each developer. Included is a royalty-free server for client/server use, free for up to 10 users. Going beyond 10 users carries additional, but low, costs.

    Leave a comment:


  • Paul Purvis
    replied
    I think about databases this way.
    You should have a system written with your own database design routines built in. If you cannot figure out how to do that, I would question one's ability to program such a complex job.

    Your own design should run like a cheetah. If the program is not designed to be a web program, I do not see what all the fuss is about. I believe most of these database designs are, on the whole, for general processing and transaction-oriented processing, not for lean, mean, real-time processing. I would not think the same way for a large financial institution.

    In your lean, mean system you should follow one rule: KEEP ALL DATA; do not throw any data away.

    An easy way to keep databases that way is to have two sets of databases: one is the current database and the other is the historical database. As time goes on, the current database will always be the smaller of the two, holding all open and paid-out records starting at the beginning of a year, but also including all records from the last 3 months of the previous year.

    How I did that: at the end of the day a program was run that updated the second database from the first. After this batch process, any and all records in the first database will also be in the second; the second database will then be as current as the first. If your primary (first) database got corrupted, you could filter the current records out of the second database to rebuild the first.

    If the second database got corrupted, you recover it by getting your last good copy, maybe even the beginning-of-year copy, and processing your small first database against it to bring it up to date.

    Also, by doing this update from the current database to the second at the end of the day (or whenever), you can capture any differences, record changes as they happened within that day, and build reports on them over time, like when a customer's phone number changed.
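    A minimal sketch of that end-of-day update, assuming SQLite and an invented loans schema (Python's sqlite3; every row in the current database is upserted into the historical one, so the historical copy ends up just as current):

```python
import sqlite3

current = sqlite3.connect(":memory:")    # the small working database
history = sqlite3.connect(":memory:")    # the historical database

for db in (current, history):
    db.execute("""CREATE TABLE loans (
        id      INTEGER PRIMARY KEY,
        status  TEXT,
        balance REAL)""")

# Today's state in the working database.
current.executemany("INSERT INTO loans VALUES (?, ?, ?)",
                    [(1, 'open', 900.0), (2, 'paid', 0.0)])
# Yesterday's state in the historical database (loan 1 only, old balance).
history.execute("INSERT INTO loans VALUES (1, 'open', 1000.0)")

def end_of_day_sync(current, history):
    """Copy every row from the working database into the historical one:
    insert new rows, overwrite changed ones (UPSERT), so the historical
    copy is as current as the working copy."""
    for row in current.execute("SELECT id, status, balance FROM loans"):
        history.execute(
            """INSERT INTO loans (id, status, balance) VALUES (?, ?, ?)
               ON CONFLICT(id) DO UPDATE SET
                   status = excluded.status, balance = excluded.balance""",
            row)
    history.commit()

end_of_day_sync(current, history)
```

    Diffing old against new values inside the loop, before the upsert, is where the "when did the phone number change" audit trail would be written. (The ON CONFLICT upsert syntax needs SQLite 3.24 or later.)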

    By taking this approach, the client will always be able to use his computer to see things that happened years ago. And oh boy, does that ever happen.
    Your transaction files, basically new business and activity on current business, should be new files too: for loans, a new-loans record file, a paid-out-loans record file, and an all-payments record file.

    Is "50 to 60 tables" fields or indexes?
    Build customer records with the fields needed, plus at least 1/3 more room for creating new fields. When zipped down for archiving, the files will be small.
    Then prepare room for your program to have at least one more database for other junk, giving it plenty of room too, for growth and for goodies you do not perceive now but that will hit you in the future.

    Keep your indexes in a separate file, so that you are able to rebuild them at a moment's notice. Indexes should never have to be backed up.

    If you are worried about fragmentation: when the last record of a database gets close to the end of the file, just grow the file by some amount of empty space. This should only be a matter for the two large databases.

    If you are not being paid much to do this, just give the customer what he pays you for, and let him have the efficiency and backup problems.

    I really cannot see a problem with speed unless you want something else to take up the slack for a poorly designed database up front. As a matter of fact, you should probably make the record that is going to hold the loan info (or whatever it is) about twice as large as the grand total of the field lengths you have now, because the information given to you is not going to be all you need. Listen to Michael about customers' wishes.


    I am very sure there are many people on this forum who will disagree with me. But one guy, still very young, wrote a financial program using dBase not too many years ago, and I just laughed at him and said, "Are you kidding me?" Run your software on an older, slower machine; that will help you develop fast routines and get your thoughts on good database programming.

    If you need me to expand on the idea of the two-database system, one with current records and the other with current and purged records, I can write more about it. With this approach I have yet to lose any information, or to fail to discover what was lost or what had changed, whether through human or computer failure.

    Like I said, you purge your system only once a year, keeping the purged information in a separate file that you should never need again, because your second database, with the current and paid-out customers, will have been updated first. Your program only has to make one hit for a customer's record; if the record does not exist in the current database, you look for it in the second. The second database will have its own indexes, which look scary in the beginning, but it all comes together once applied.
    By the way, I can find any loan, or payment, or change to the databases, or any other transaction that has taken place since September of 1984.

    One of the things I added later was a program to compare all the databases for corrupted data; this check was run daily, before the update and comparison of databases. I did this because other software besides my own accessed the data files, and you never know what failures equipment can produce. Even fields containing characters that should not be there were checked: yeah, go ahead and put a number where a name goes and we will see what happens. That will make the users more careful about wrong input. But there could be computer failure too, and this will find it in a New York second, and show where it exists.

    Don't depend on a system to check for failure; it is the programmer's responsibility.

    You should also keep a beginning-of-the-month copy of the databases so you can do transaction checking. Simply run the month's transactions against the beginning-of-month copy and you should end up with the current data in the databases. You might not get this part done in the beginning, but hopefully you will have it in the future. This will prove the accuracy of your databases and give you an audit report of any wrongdoing by workers trying to circumvent something.

    So roll your own; it is usually the better product, as long as networking issues do not get to be too great.

    Leave a comment:


  • Chris Holbrook
    replied
    I like SQLite, but then I am a minimalist.

    My guess is that SQLite would handle the volume without difficulty, and that is easily tested at no cost except your time. Certainly creating 60 tables and populating each with a few thousand rows should not take very long.

    Whether SQLite's multiuser capabilities would suffice depends on the precise nature of the requirement, and again could be prototyped if that is indicated after reading the SQLite website and asking well-designed questions on the SQLite forum.
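    That prototype takes only a few lines to script; here is a sketch with Python's sqlite3 (table shape and row counts arbitrary) that creates 60 tables and bulk-loads a few thousand rows into each inside a single transaction:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")   # swap in a file path for a disk test
t0 = time.perf_counter()

with conn:                           # one transaction for the whole load
    for t in range(60):
        conn.execute(f"CREATE TABLE t{t} (id INTEGER PRIMARY KEY, val TEXT)")
        conn.executemany(f"INSERT INTO t{t} (val) VALUES (?)",
                         [(f"row {i}",) for i in range(2500)])

elapsed = time.perf_counter() - t0
total = sum(conn.execute(f"SELECT COUNT(*) FROM t{t}").fetchone()[0]
            for t in range(60))
print(f"{total} rows across 60 tables in {elapsed:.2f}s")
```

    Wrapping the load in one transaction matters: committing each INSERT separately forces a sync per row and makes SQLite look far slower than it is.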

    Leave a comment:


  • Michael Mattias
    replied
    It's good to invest a lot of time to make 'The Right Choice' of DBMS for this application.

    You need to appreciate this is a luxury.


    Most often your clients (who could be your employer!) will have "official" applications to be used for e-mail, database management, spreadsheets and a host of other applications and support services. Many, too, will have "blacklists" - applications or support software/utilities which may not be used... usually because of some Bad Experience in the past.


    Just don't forget to include in your criteria for selection:

    1. Performance and reliability are pretty much a given; but don't forget to think about what it might take to "restore from catastrophe." This application is described as being required to store data for at least a year; while today's hardware systems are so much more reliable than they used to be, <stuff> happens. If being without the application for the three days it will take to get it up and running again following a 'crash' would cripple the user's business, perhaps something a bit easier to restore is in order.

    1A. Related would be, "How easily can we move the application to a new computer?"

    2. Cost of the chosen DBMS: initial licence fee, annual maintenance, and - often overlooked - the cost of adding additional users ("seats") to the application.

    3. Maintainability. While, from the description provided, whoever must maintain the application will have to be at least familiar with the PowerBASIC compiler family, it will also be important that they have the skill set to use the ODBC or OLE/ADO interface to the database, or the proprietary client interface (e.g. SQLite) of whichever DBMS you choose.

    Yes, being allowed to pick like this is a luxury... but it's also a responsibility. You will be judged as much on your ability to 'choose wisely' as on your programming prowess.

    Leave a comment:


  • Stanley Durham
    replied
    Originally posted by Marc van Breemen View Post
    No. 1 on my list is SQLite, but I have doubts. Is it really that good for my needs? Will development be likely continued on the long run?
    I don't know if SQLite is the best choice, but you don't have to worry about speed, reliability or continued development.

    It's estimated to be the most deployed SQL database in the world.

    Used by Google, Apple, McAfee, Skype, Bloomberg and Adobe.
    Adobe supports it (along with Bloomberg and Symbian).
    You probably use it every day on your cellphone.
    If you have anything Apple, you're using it.

    You may have to do more roll-your-own to make it fit.
    A complete package might be a better choice.

    Leave a comment:


  • Michael Mattias
    replied
    >150K records per year spread over 60 tables is trivial for an Access database

    Jet ("Access") databases can get funny when the physical size of the *.mdb file approaches 1 GB.

    However, "Compacting the Database" is a nearly-universal fix when this happens.

    If you go to http://www.providerpaymentpartner.com/, then hit the "User Support" tab near the top and get to the support page, you will find a link to a "white paper" I created for users of the Provider Payment Partner(tm) system on this subject.

    That paper includes a link to the relevant KB article, as well as step-by-step instructions to compact the database.

    And unless I am mistaken, I posted a 'compact Jet Database' routine here (somewhere, not in source code forum, that much I recall) just last week.

    With this you can do the compacting from your application, which should eliminate the Susie User Effect which occurs when you ask a user to do anything with more than two steps.
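    For what it's worth, SQLite's rough analogue of compacting a Jet database is the VACUUM statement, which can likewise be run from inside the application rather than asking the user to do it; a sketch with Python's sqlite3 (paths and sizes illustrative):

```python
import sqlite3
import os
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(db_path, isolation_level=None)  # autocommit;
# VACUUM cannot run inside an open transaction.

conn.execute("CREATE TABLE log (msg TEXT)")
conn.executemany("INSERT INTO log VALUES (?)",
                 [("x" * 500,) for _ in range(5000)])
conn.execute("DELETE FROM log")      # pages are freed, but the file stays big
size_before = os.path.getsize(db_path)

conn.execute("VACUUM")               # rebuild the file, reclaiming the space
size_after = os.path.getsize(db_path)
```

    Running it on a schedule from the application sidesteps the same "more than two steps" user problem described above.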

    [ADDED]
    Link to code to compact Jet Database from your program:
    http://www.powerbasic.com/support/pb...33&postcount=9

    MCM
    Last edited by Michael Mattias; 18 Sep 2009, 08:47 AM.

    Leave a comment:


  • Robert Wallace
    replied
    Stuart:

    Totally off topic - I've been watching, every week, a BBC program, "Lost Land of the Volcano" - you live in a fantastic country.

    Leave a comment:


  • Stuart McLachlan
    replied
    Originally posted by Edwin Knoppert View Post
    I had different experiences; I think you are lucky and simply have not triggered some nasty feature.
    We were lucky we had a backup; we lost a great deal of our records at some point.
    This MDB was used for an (ASP.NET) website; we moved to SQL Server.
    I've been developing Access applications professionally for the last 16 years (ever since version 1.0). I've got literally hundreds of them deployed in a wide range of organisations. If they are designed and used properly, there aren't any "nasty features".

    However, Access is a very bad choice for website backends. Anyone who uses it for that deserves everything they get. Even Microsoft warns against it for anything likely to experience more than light traffic.

    Leave a comment:


  • Edwin Knoppert
    replied
    I had different experiences; I think you are lucky and simply have not triggered some nasty feature.
    We were lucky we had a backup; we lost a great deal of our records at some point.
    This MDB was used for an (ASP.NET) website; we moved to SQL Server.

    Leave a comment:


  • Stuart McLachlan
    replied
    Originally posted by Edwin Knoppert View Post
    The number of records is somewhat of a limit for MS Access databases.
    (The MDB type, I mean)
    Not so.

    150K records per year spread over 60 tables is trivial for an Access database. I regularly deal with several hundred thousand records per table in multi-user Access databases with no problems.

    I have one with 4 million records in the primary table and as long as I only do indexed seeks, it is still very fast.

    Leave a comment:


  • Edwin Knoppert
    replied
    We have customers using a FoxPro driver.
    Maybe this is a solution as well.
    Each folder represents a database and each DBF file a table.

    Maybe?
    http://msdn.microsoft.com/en-us/vfoxpro/bb190233.aspx

    Leave a comment:


  • Michael Mattias
    replied
    I will, thanks for the tip. Although, without looking at the product, the name Oracle doesn't give me a good feeling regarding easy installation / low memory profile .....
    Installation? Download the file and click on it.

    I have no problems with memory here... let me start the database (I have the server installed) and see what I am using... hmmm, all of 8476K (the service).

    I never got around to installing the client (on my laptop) , but surely that cannot take more than the server, can it?


    MCM

    Leave a comment:


  • Edwin Knoppert
    replied
    Originally posted by Marc van Breemen View Post
    For me that's not a problem as I will control the contents and will not allow direct external access (if possible to prevent that).
    I think I would get fired if I told them to abandon all the 'rules' a good DBMS provides.
    By the way, you were talking about 60 tables and over 100k records.
    That doesn't sound like a job for a single, silly DLL.
    There will come a time when you have to give up and convert your DBMS to something serious.
    Installation is not really an issue in practice.

    I think you'll have to examine your requirements about memory and being 'very low on system resources'... what does that mean?
    We have some issues with our webserver running MS SQL Server, but that is a server-side solution; it currently supports between 6 and 10 websites/webservices.
    Will your 'client/server' DBMS run locally or remotely?

    SQLite is at the opposite end of the scale from a serious DBMS; it may fulfill the needs of thousands of people, but I am sure it is not for bigger tasks.
    Your task is somewhat beyond intermediate (due to the number of tables).
    The number of records is somewhat of a limit for MS Access databases.
    (The MDB type, I mean)

    "back to the old drawing board"..

    Leave a comment:


  • Marc van Breemen
    replied
    Originally posted by Michael Mattias View Post
    Take a look at Oracle/XE.
    I will, thanks for the tip. Although, without looking at the product, the name Oracle doesn't give me a good feeling regarding easy installation / low memory profile .....

    Leave a comment:


  • Marc van Breemen
    replied
    Originally posted by Paul Squires View Post
    If you decide to use SQLite then check out our (well, Fred Meier mainly) client/server implementation. <snip>
    Yes, via your site I got to know SQLite. With 'client/server' I guess it's suitable for multi-user access?

    Originally posted by Paul Squires View Post
    <snip>It is pretty robust.<snip>
    Pretty or 'very' .... ?

    Leave a comment:
