Fellow Translators,
Clearly, the design has not yet crystallized. Fred (mostly) and I (to a lesser extent) have been spending time developing code which doesn't match up - although it has been very useful, both in provoking thought and in revealing where the skills are: with Fred! Meanwhile there is some possibility that Daniel Corbier has something which we might or might not use for a front end, and meanwhile the broad sweep of design has, it seems to me, not happened. Or, just as likely, it happened and I missed it!
So while nothing (delivery dates excepted!) is yet set in stone, may I outline my views, if only to stimulate discussion of the design.
Essentially, we are translating VB text, albeit of different types (forms, prjs...), and the destination is PB text. Great! Nothing complicated here. So the input text is analysed, working structures are built, and the target text is extracted. Let's take a look at these components.
Analysing VB text
We have to do much of what VB itself does to achieve our goal, which is to build working data structures from which we can extract PB source code. This means reading the VB source code line by line, character by character, and deciding what each "atom" we encounter is, how to translate it, and whether it needs to go in our symbol table. Every line of VB source code would go through the same process (lexing): a single lexer for all code.
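To make the line-by-line, atom-by-atom idea concrete, here is a minimal lexing sketch in Python. The token categories and patterns are illustrative assumptions, not the real VB grammar; the actual lexer would need many more cases (line continuations, type suffixes, dates, and so on).

```python
import re

# Hypothetical token categories; a real lexer would follow VB's full grammar.
TOKEN_SPEC = [
    ("COMMENT", r"'[^\n]*"),                 # apostrophe comment to end of line
    ("STRING",  r'"[^"]*"'),                 # string literal
    ("NUMBER",  r"\d+(\.\d+)?"),             # integer or simple float
    ("IDENT",   r"[A-Za-z_][A-Za-z0-9_]*"),  # identifier or keyword
    ("OP",      r"[-+*/=<>(),.]"),           # single-character operators
    ("SKIP",    r"[ \t]+"),                  # whitespace, discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def lex_line(line):
    """Split one line of VB source into (kind, text) atoms."""
    atoms = []
    for m in MASTER.finditer(line):
        if m.lastgroup != "SKIP":
            atoms.append((m.lastgroup, m.group()))
    return atoms

print(lex_line('Dim Counter As Integer'))
# → [('IDENT', 'Dim'), ('IDENT', 'Counter'), ('IDENT', 'As'), ('IDENT', 'Integer')]
```

The point is only that each atom comes out classified, ready for the "what is it, how do we translate it, does it go in the symbol table" decision described above.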
Working data structures
Assuming, gentle reader, that you have been following recent developments in this forum, you will know that the symbol table is looking rather - well, fat. More like a data dictionary, so let's call a spade a spade: it's a Data Dictionary. Now let's look back at the task in hand, deciding the fate of atoms, like God or Niels Bohr. This process could have a single result: the construction of the data dictionary. If we then have a data dictionary which tells us so much about our declared environment, why can't we use it to drive the generation of declarations at global, local and form levels, rather than letting the incoming VB source drive it? One could say the same for the procedural code, which, but for a couple of sequence-number columns, could also reside in the Data Dictionary or a related structure. The look-ahead facility this would provide would certainly be of benefit when (note: when, not if) optimised code is required, and it would allow the code export phase to be independent of the lexing phase.
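A small sketch of what "the dictionary drives the declarations" might mean in practice. The row shape, the type mapping and the emitted PB keywords are all assumptions for illustration - the real dictionary would carry far more columns.

```python
# Hypothetical data dictionary rows: (name, vb_type, scope).
DATA_DICT = [
    ("Counter",  "Integer", "local"),
    ("UserName", "String",  "global"),
]

# Illustrative VB-to-PB type mapping; the real table would be much larger.
VB_TO_PB = {"Integer": "INTEGER", "String": "STRING", "Long": "LONG"}

def emit_declarations(scope):
    """Generate PB declarations for one scope from the dictionary,
    rather than from the order things happened to appear in the VB source."""
    keyword = "GLOBAL" if scope == "global" else "LOCAL"
    return [f"{keyword} {name} AS {VB_TO_PB[vb_type]}"
            for name, vb_type, s in DATA_DICT if s == scope]

print(emit_declarations("global"))  # → ['GLOBAL UserName AS STRING']
print(emit_declarations("local"))   # → ['LOCAL Counter AS INTEGER']
```

Once the dictionary is the single source of truth, the exporter can walk it in whatever order suits the PB output, which is exactly the independence from the lexing phase argued for above.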
A couple of comments have been made about the method of access for our symbol table - I mean data dictionary. It seems likely to me that it will end up as quite a big thing, both wide and high. Should we consider putting it into an RDB? SQLite would be the obvious choice, being fast, light, easy to use and free, with good PB wrappers. There are real advantages in having the intermediate form of the application in a form which can be examined directly, rather than spread around numerous in-memory tables for which individual export functions would have to be developed to provide a comparable facility.
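As a taste of what that buys us, here is one possible (entirely hypothetical) shape for the dictionary in SQLite, sketched in Python's built-in sqlite3 module; a PB wrapper would do the same thing through its own API. The column set is an assumption for illustration.

```python
import sqlite3

# A hypothetical schema for the data dictionary; the real one would be wider.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE symbols (
    name    TEXT,
    vb_type TEXT,
    scope   TEXT,
    module  TEXT)""")
db.executemany("INSERT INTO symbols VALUES (?, ?, ?, ?)", [
    ("Counter",  "Integer", "local",  "Form1"),
    ("UserName", "String",  "global", "Main"),
])

# The exporter - or a curious human - can examine the intermediate form
# with plain SQL, no bespoke export functions required.
rows = db.execute(
    "SELECT name, vb_type FROM symbols WHERE scope = 'global'").fetchall()
print(rows)  # → [('UserName', 'String')]
```

That "examine it with a query" property is the advantage claimed above: the intermediate form is inspectable for free.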
PB source code export
Designing the table(s) first would also allow the lexer (front end) and PB exporter (back end) to be developed concurrently - those delivery dates again! And, you guessed it, it would also enable separate DDT and SDK back ends to be developed independently, and - who knows - PB for Linux?
I'll take questions now.