Design options


  • StanHelton
    replied
    Originally posted by Rodney Hicks View Post
    Stan,

    A little off the topic, but take a look at your DATA LOAD TOOL and consider if every VB term had been assessed and the array full of the corresponding PB terms.

    All the program would need is a text box to input a VB term and search the file for that term.

    It would be a great tool to assist those VBers learning to program in PB as opposed to recreating old files. Albeit mostly using DDT.

A possibly useful by-product of this endeavor.

    Rod
    Good idea. I'll work on it!
    Stan



  • Rodney Hicks
    replied
    Stan,

    A little off the topic, but take a look at your DATA LOAD TOOL and consider if every VB term had been assessed and the array full of the corresponding PB terms.

    All the program would need is a text box to input a VB term and search the file for that term.

    It would be a great tool to assist those VBers learning to program in PB as opposed to recreating old files. Albeit mostly using DDT.

A possibly useful by-product of this endeavor.

    Rod
    Last edited by Rodney Hicks; 25 Jun 2008, 03:29 AM. Reason: Additional comment
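Rod's suggested tool is essentially a keyed lookup: type in a VB term, get back the corresponding PB term (or a note that none has been recorded yet). A minimal sketch in Python for illustration; the term pairs below are hypothetical examples, and the real tool would be written in PowerBASIC against Stan's data file:

```python
# Sketch of the VB->PB term lookup tool Rod describes.
# The term pairs here are illustrative, not the project's real data.
VB_TO_PB = {
    "MsgBox": "MSGBOX",
    "Len": "LEN",
    "InputBox": "INPUTBOX$",
}

def lookup(vb_term: str) -> str:
    """Return the PB equivalent of a VB term, or a placeholder note."""
    return VB_TO_PB.get(vb_term, "<no PB equivalent recorded yet>")

print(lookup("MsgBox"))   # MSGBOX
print(lookup("Foo"))      # <no PB equivalent recorded yet>
```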



  • StanHelton
    replied
    different take on DD Manager

    Take a look at Fred's last post in the New Submission thread.

    Stan



  • StanHelton
    replied
    Originally posted by Chris Holbrook View Post
    Rod, I liked the bit about celebration the best!

    Regarding the RDB, there is absolutely no doubt that SQLite can do it and save us time. There would be no point without saving us time - the only thing we are putting in to this project! The only challenge is me convincing you of that! The overhead is tiny, tiny, tiny. One #include statement. I quote from www.sqlite.org:
    To me, SQLite embodies the "smaller, faster" ethos espoused by PowerBASIC, but also has a HUGE userbase.

    I like the celebrate part too!

    Did I miss something when I d/l'd SQLite? Only 1 include file? I was thinking of using the DLL.

    BTW: v3 of the Flowchart is up. I think the DD manager is redundant now that I've had a look at SQLite.



  • Chris Holbrook
    replied
    Originally posted by Rodney Hicks View Post
    I also believe that we can do ourselves whatever any DB will do for us, with a few missteps (learning opportunities) along the way.
    Rod, I liked the bit about celebration the best!

    Regarding the RDB, there is absolutely no doubt that SQLite can do it and save us time. There would be no point without saving us time - the only thing we are putting in to this project! The only challenge is me convincing you of that! The overhead is tiny, tiny, tiny. One #include statement. I quote from www.sqlite.org:
    SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine.

    SQLite is the most widely deployed SQL database engine in the world. It is used in countless desktop computer applications as well as consumer electronic devices including cellphones, PDAs, and MP3 players. The source code for SQLite is in the public domain.
    To me, SQLite embodies the "smaller, faster" ethos espoused by PowerBASIC, but also has a HUGE userbase.
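To give a feel for how small "zero-configuration" really is: one call creates the whole database, with no server process and no setup. The sketch below uses Python's built-in sqlite3 module purely for illustration; the project itself would call the SQLite DLL from PowerBASIC instead.

```python
import sqlite3

# One connection call creates the entire database: no server, no config.
con = sqlite3.connect(":memory:")  # or a single .db file on disk

# A toy one-table data dictionary, just to show the round trip.
con.execute("CREATE TABLE dd (vb_term TEXT PRIMARY KEY, pb_term TEXT)")
con.execute("INSERT INTO dd VALUES (?, ?)", ("Len", "LEN"))

row = con.execute(
    "SELECT pb_term FROM dd WHERE vb_term = ?", ("Len",)).fetchone()
print(row[0])  # LEN
con.close()
```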



  • Rodney Hicks
    replied
    I'm designing some tables now and will post them later today
Sounds good. I'll wait until I see them before I do much more, since I have to sleep sometime.

    Very early on in discussions somewhere, I don't quite remember where, I asked something about whether or not we were going to convert everything to longs (perhaps as equates) to make the process easier. Is now the time to consider something along those lines?

    Rod
    Last edited by Rodney Hicks; 21 Jun 2008, 09:11 AM. Reason: correcting typos.



  • Chris Holbrook
    replied
    Originally posted by Rodney Hicks View Post
    Chris, just yesterday you were questioning my 'look ahead' comment. And today you post
    I know, it's maddening.

    Originally posted by Rodney Hicks View Post
    What will the data Dictionary contain?
the answer is - I don't know entirely, because I'm focussing on the detailed requirements of the "front end" of the translator, and the "back end" will have some detail requirements which I don't know. It will probably end up as more than one table. I'm designing some tables now and will post them later today.



  • StanHelton
    replied
    Good points. I've got to run out & do some physical labor, but I'll be back later this morning having had time to think about all this.

    My goodness, it's going faster than I expected!



  • Rodney Hicks
    replied
    With regards to using uCalc, I am in agreement with Chris on this one.

As it stands, without using any other product, we expect our first release to have certain areas, or certain features in certain areas, where the converter user will have to do some hard coding. By using a non-specific device such as uCalc, we may end up in the same place we are now: having to 'fill in the holes' with the same things we would have to use to fill in the holes in our own version.

Those that used VB to any extent will want this converter up and running ASAP, while someone like me would rather take a little extra time and 'fill in the holes' as we putter along.

    I personally think that we are doing better than we think we are, mostly because we are now at a bottleneck stage. That tells me that a little patience and a little celebration is in order.

I have always thought that this project would get slowed down at this stage, and we reached it a lot sooner than I expected, because this is the stage where the nuts and bolts have to fit through the slots.

    We require a leximacallit (read parser), analyzer, data dictionary, and data manipulators, and an unleximacallit (read code spewer).

Even at this stage of the project we are not sure of the data dictionary (DD) requirements, other than long, wide, and deep (terms that are not too specific). We may also find that any low-cost (free) database tool may not be up to the task, due to component size limitations alone.

    We might be wise to split the DD into two creatures at the present time.

I think that we should continue putting the Statements, Keywords, intrinsic Functions, and Operators into a Type, with the understanding that quite some number of the total are going to need some supplementary functions to produce the intended effect (as part of the analyzing and manipulation), and that the DB handle the more complicated Events, Objects, Properties, Elephants, and other assorted misdemeanors that VB perpetrated upon the VBers. However, we could do it ourselves. The first items are really only a small part, yet very necessary.

    I think the real issue at this point is blending our programming styles and our output, more than the Data Dictionary. No matter what we use for a DD, it's going to take a considerable amount of time to fill it.

    Whether we use any other product or not, the converter is going to rely on functions someone wrote. It's a question of whether we want to do it ourselves.

    I would like us to do it ourselves, just to say we did.

So, if we were to take Chris's lexidodad (I like that lexi prefix, Chris, I really, really do) pretty much as it is, it could parse a VB file, and with one supporting function take the Term array, search it, and spew out lines of code that simply showed the PB term, or the fact that a PB term is required.
At present there are only 62 terms (of 89 in the array so far) that would readily translate, but if someone were creating the necessary functions while someone else was adding to the array, the converter would be well on its way. Some of the functions for the terms in the array would not be too complicated, and should probably be written by someone with both VB and PB experience, for the sake of speed only.
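The "parse, search the Term array, spew out PB terms or flag the gaps" idea can be sketched in a few lines. Python here purely for illustration (the real converter would be PowerBASIC, and the TERMS fragment below is hypothetical, not the project's 89-entry array):

```python
import re

# Hypothetical fragment of the term array Rod mentions.
TERMS = {"Dim": "LOCAL", "Integer": "INTEGER", "Print": "PRINT"}

def translate_line(vb_line: str) -> str:
    """Replace each word that appears in the term array with its PB
    equivalent; words not in the array pass through unchanged, so a
    human can spot where a PB term is still required."""
    def sub(match):
        word = match.group(0)
        return TERMS.get(word, word)
    return re.sub(r"[A-Za-z_]\w*", sub, vb_line)

print(translate_line("Dim x As Integer"))  # LOCAL x As INTEGER
```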

Any program is ultimately just a collection of functions, procedures, and prescribed systems.

    I also believe that we can do ourselves whatever any DB will do for us, with a few missteps (learning opportunities) along the way.

    That said, and as Chris stated earlier, we'll keep plugging along the course we've begun and see where it leads us.

    Rod



  • Chris Holbrook
    replied
    re. ucalc and RDB

    Stan,
    thanks for the ucalc stuff. Daniel has a tremendous product there. I have skimmed it and my initial reaction is - it will take time to know whether ucalc can do the job or not. In "not" I include "nearly", which experience tells me is often the way with technology-led solutions. So to bottom it out, you will need resource, someone or ones will have to get up to speed with ucalc and customise it to do the conversion job, say VB to SDK. It might also be worthwhile consulting the VB2PB project lawyers and commercial department, as ucalc is a commercial product.

    As to whether you put the traditional approach on hold in the meantime, a) I'm glad it's not my decision b) I don't think that either Fred or I will stop worrying away at it anyway (citation required from Fred).

    Having re-read para #1 above, it looks as if I am damning ucalc with faint praise. That is not the case, but I simply don't know whether it will do the job. If it can't, we could lose. If it can, there will be a winner.

    RDB
What attracts me to SQLite, which of course uses the same - in 99% of cases - SQL as MySQL, is that it is SO lightweight and easily deployable. ONE file for the entire database (unless you keep standing data elsewhere and use a MEMORY DB!). ONE DLL for the native-mode API.
    Last edited by Chris Holbrook; 21 Jun 2008, 05:04 AM. Reason: forgot the RDB comments!



  • Rodney Hicks
    replied
    Chris, just yesterday you were questioning my 'look ahead' comment.

    And today you post:
    The look-ahead facility
    It shows growth on your part! Lack of clarity on mine.

    I've a question that needs a simple answer.

    What will the data Dictionary contain?

    Strings?
    Numbers?
    Both?


    Rod



  • StanHelton
    replied
    Sorry, forgot Daniel's uCalc files. Here's the zip.
    Last edited by StanHelton; 24 Jun 2008, 03:57 PM.



  • StanHelton
    replied
So... it looks like the Data Dictionary (DD) is the current bottleneck. SQLite sounds like a good choice, but I'm more familiar with MySQL. The more I dig into VB, the bigger the sTable gets in my mind. You've convinced me an RDB is the best way to approach this. It looks like all modules will be dependent on the DD. Conceptually the frontend Parser/Lexer depends only on the DD, while the backend (SDK & DDT & Linux & whatever) modules are dependent on both.

    Daniel completed a uCalc Language Builder example for this project based on Fred's original PoC code and my DDT target. He sent it to me yesterday. (Attached below.) I'm digging into it now, but it's a bit over my head. I recommend reading the files ReadMe_ucfmp295.txt and ReadMe_ucVBtoPB.txt first. His results have promise, but did not hit the DDT target (he doesn't use DDT himself). Everything he sent is in the .zip. Preserve directory structure and it should run.

    Questions:

    For the DD --
    I think you've identified the bottleneck properly and we should get to work on this. Can you give a little detail on what fields you think are necessary?

    Do you think DD design should allow for separate sets of tables for each target language?

    For the Lexer --
    Lexer implies more function than just parsing. I think I understand, but could you explain this a bit more?

    Thinking Out Loud:

    My first thought is that we need a set of translations specific to each target language, even separate sets of translations for SDK & DDT. Now you've got me thinking MySQL.
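One way to hold per-target translations without a separate table per language is a single table keyed on (term, target). A hedged sketch using Python's sqlite3 for illustration only; the column names and the SDK/DDT rows below are hypothetical, not an agreed schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# One translations table keyed by target, rather than a table per
# target: the same VB term can map differently for DDT, SDK, or a
# future Linux backend.
con.execute("""
    CREATE TABLE translation (
        vb_term TEXT NOT NULL,
        target  TEXT NOT NULL,      -- 'DDT', 'SDK', ...
        pb_text TEXT NOT NULL,
        PRIMARY KEY (vb_term, target)
    )""")
con.executemany("INSERT INTO translation VALUES (?, ?, ?)", [
    ("MsgBox", "DDT", "MSGBOX"),
    ("MsgBox", "SDK", "MessageBox"),
])

row = con.execute(
    "SELECT pb_text FROM translation WHERE vb_term = ? AND target = ?",
    ("MsgBox", "SDK")).fetchone()
print(row[0])  # MessageBox
```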



  • Chris Holbrook
    started a topic Design options

    Design options

    Fellow Translators,

Clearly, the design has not yet crystallized. Fred (mostly) and I (to a lesser extent) have been spending time developing code which doesn't match up - although it has been very useful, both in provoking thought and in revealing where the skills are - with Fred! Meanwhile there is some possibility that Daniel Corbier has something which we might or might not use for a front end, while the broad sweep of design has, it seems to me, not happened. Or, just as likely, it happened and I missed it!

So while nothing (delivery dates excepted!) is yet set in stone, may I outline my views, if only to stimulate discussion of the design?

Essentially, we are translating VB text, albeit of different types (forms, prjs, ...), and the destination is PB text. Great! Nothing complicated here. So the input text is analysed, working structures built, and the target text extracted. Let's take a look at these components.

    Analysing VB text
We have to do much of what VB itself does to achieve our goal, which is to build working data structures which will enable us to extract PB source code. This means reading the VB source code line by line, character by character, and deciding what each "atom" we encounter is, how to translate it, and whether it needs to go in our symbol table. Every line of VB source code would go through the same process (lexing): a single lexer for all code.
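The character-by-character "atom" scan described above can be sketched quite compactly. This is a toy illustration in Python of the single-lexer idea, not the project's actual lexer, and the token categories are assumptions for the example:

```python
def lex(line: str):
    """Split one line of source into atoms: identifiers/keywords,
    integer literals, string literals, and single-character symbols."""
    atoms, i = [], 0
    while i < len(line):
        c = line[i]
        if c.isspace():                      # skip whitespace
            i += 1
        elif c.isalpha() or c == "_":        # identifier or keyword
            j = i
            while j < len(line) and (line[j].isalnum() or line[j] == "_"):
                j += 1
            atoms.append(("word", line[i:j])); i = j
        elif c.isdigit():                    # integer literal
            j = i
            while j < len(line) and line[j].isdigit():
                j += 1
            atoms.append(("number", line[i:j])); i = j
        elif c == '"':                       # string literal
            j = line.index('"', i + 1)
            atoms.append(("string", line[i:j + 1])); i = j + 1
        else:                                # any other single character
            atoms.append(("symbol", c)); i += 1
    return atoms

print(lex('Dim x = 42'))
```

Each atom would then be looked up against the Data Dictionary to decide its translation and whether it belongs in the symbol table.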

    Working data structures
Assuming, gentle reader, that you have been following recent developments in this forum, you will know that the symbol table is looking rather - well, fat. More like a data dictionary, so let's call a spade a spade: it's a Data Dictionary. Now, let's look back at the task in hand, deciding the fate of atoms, like God or Niels Bohr. This process could have a single result, the construction of the data dictionary. If we now have a data dictionary which tells us so much about our declared environment, why can't we use it to drive the generation of declarations at global, local and form levels, rather than allowing the incoming VB source to drive it? One could also say the same for the procedural code, which but for a couple of sequence number columns could also reside in the Data Dictionary, or a related structure. The look-ahead facility which it would provide would certainly be of benefit when (note when, not if) optimised code was required, and would allow the code export phase to be independent of the lexing phase.

    A couple of comments have been made about the method of access for our symbol table - I mean data dictionary. It seems likely to me that it will end up as quite a big thing, both wide and high. Should we consider putting it into a RDB? SQLite would be the obvious choice, being fast, light, easy to use and free, with good PB wrappers. There are real advantages in having the intermediate form of the application in a form which can be examined, rather than spread around between numerous tables for which individual export functions would have to be developed to provide a comparable facility.

    PB source code export
Having designed the table(s), it would also allow the lexer (front end) and PB exporter (back end) to be developed concurrently - those delivery dates again! - and, you guessed it, would also enable separate DDT and SDK back ends to be developed independently, and, who knows, PB for Linux?

    I'll take questions now.