New Life For Old BASICs?

  • New Life For Old BASICs?

    Good news. A Virtual Machine (VM) like VirtualBox can create a virtual environment (VE) in which you can install a guest OS and various applications. I have just finished using VirtualBox on top of Ubuntu (a Linux distro) to create a virtual DOS environment, then downloaded and installed MS-DOS 6.22 and QuickBasic 4.5, and reinstalled my old copy of PB/DOS 3.5. Everything works fine, but I had to use FDISK and FORMAT /S to set up a bootable DOS partition on the virtual drive, then create and use ISO images of the folders and files I wanted copied onto it.
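
    Roughly the sequence inside the guest once an ISO is attached as a second drive (the drive letters and folder names here are just illustrative, not a recipe):
    Code:
    A:\> FDISK                            (create and activate a primary DOS partition)
    A:\> FORMAT C: /S                     (format it and copy the system files over)
    C:\> XCOPY D:\PB35 C:\PB35 /S /E      (pull the tools in from the mounted ISO, here on D:)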

    The virtual environment eliminates the problem I had with now having all NTFS and Ext3 partitions, which DOS doesn't really know how to deal with. I'm limited to a half-gig virtual hard drive, but that should be more than adequate. If not, I can create additional half-gig virtual partitions as well. I haven't tested it yet, but I might also be able to get access to a USB hard drive, though that will likely mean another driver. And it won't be a massive drive unless I see a need to find NTFS drivers that work under DOS as well.

    I found it difficult to get good use out of PB/DOS now that I have gigabytes of hard drive space and am using a format other than FAT. But this is a way around all that, and I thought it worth mentioning.

    The host OS can be Windows or Linux, your choice. Some people might prefer to use a different VM package, but I really like VirtualBox myself.

  • #2
    That's really good news, Donald. On the other hand, PB/DOS has been running for years under DOSEMU on Linux and FreeBSD, and that is nothing else than a virtual machine. I understand all the DOS fans of the early PB days, the interesting discussions inside the CIS forum, etc. It is, of course, very nice to see the DOS compiler running under a modern operating system.

    On the other hand, we no longer live in the 8086 era; it seems that the 80386 is also history. We now have the Pentium II, III, or IV, the Centrino technology, the Athlon, the Opteron, etc., with deep instruction pipelines, branch prediction, out-of-order execution, concurrent processing, powerful MMX and SSE instructions, and, last but not least, multiple processor cores. That is the state of the art.

    It's not so easy to use those new features in our programs, especially with PB 3.x inside a VM. Modern processors can do 2, 3, or 4 operations at the same time if the things to do are independent of each other and do not use the same execution units. Most CPUs have at least 2 integer arithmetic logic units (ALUs), so they can do 2 or more integer additions per clock cycle. There is usually one floating-point add unit and one floating-point multiplication unit, so it is possible to do a floating-point addition and a multiplication at the same time. There is at least one memory read unit and one memory write unit, so we can read from and write to memory at the same time. It's possible, for example, to do an integer operation, a floating-point operation, and a memory operation in the same clock cycle.
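
    A tiny illustration (the declarations and values are mine, purely for the example): the first three statements below are independent of one another, an integer add, a floating-point multiply, and a memory store, so a superscalar CPU can in principle overlap them, while the last three form a dependency chain and must run one after another. Whether the 16-bit code PB 3.x generates actually gets paired that way is, of course, a separate question.
    Code:
    ' illustrative declarations only
    DIM a AS LONG, b AS LONG, c AS LONG
    DIM x AS DOUBLE
    DIM buf(1 TO 10) AS LONG

    a = 1 : b = 2 : x = 1.5

    ' independent: integer add, floating-point multiply, memory store
    a = a + 1
    x = x * 2.5
    buf(1) = b

    ' dependent chain: each line needs the result of the previous one
    b = a + 1
    c = b + 1
    c = c * 2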

    That's the theory; in practice my students have done a bit of research, and the results are interesting and surprising at the same time. It's a challenge to use the features described above with PB 3.x, but it's possible.

    Gunther



    • #3
      One does not need DOS 6.22. Win98's DOS is more functional, especially with a program like Norton Commander or, better yet, Volkov Commander loaded as the last call in AUTOEXEC.BAT. That hook never lets Windows get past the DOS stage. If you exit the Commander, Windows 98 completes its boot into the graphical interface, but you can execute DOS programs from the Commander prompt and exit without ever entering Win98.
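
      Something along these lines at the end of AUTOEXEC.BAT, as a sketch (the paths are just an example; VC.COM sits wherever Volkov Commander was installed):
      Code:
      @ECHO OFF
      PATH C:\WINDOWS;C:\WINDOWS\COMMAND;C:\VC
      REM ... the usual drivers and TSRs load here ...
      REM Last call: drop into Volkov Commander before the Windows GUI starts
      C:\VC\VC.COM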

      But I find almost all DOS programs work fine from a DOS window in Win98. I even use a CAD program that implements expanded memory under Himem.sys and supports VESA modes up to 1280 x 1024. I really find this the better method, as I can still multitask.



      • #4
        Backwards compatibility has been a key factor in PC architectural advances. It
        has created the means by which we can still run legacy code and attempt to
        retain the use of old compilers, interpreters, and assemblers.

        The biggest hurdles have been in the changes that have taken place on the surrounding edges of that architecture. Most notably, the introduction of massive hard drives, USB devices, new graphical standards, and the abandonment of standard parallel and serial port technology. Networking and dealing with a modern printer are also quite challenging in the post-DOS era for anyone trying to stay with legacy code and practices.

        But a modern programming language can hark back to the format and syntax of an earlier era, making it easier to migrate old code and old programmers into the present age. In their way, PB/CC 4.x and PB/Win 8.x have attempted to reach back in time a bit and make that process even easier, by reintroducing some concepts that will be familiar to QB coders.

        On the other hand, some would first prefer to see whether they can keep using the tools of the past in the present. A VM lets you keep everything old, which means using those tools in a virtual environment that purports to be where they once resided natively. My experience with VirtualBox is that it is not the best way to achieve an old DOS or early Windows (16-bit) environment, but it is virtually flawless with installs of 32-bit Windows, such as Windows 2000/XP/Vista.

        But even if you could achieve near perfection in a virtual environment or some alternate method of providing a DOS experience, the constraints imposed by the limits of that old compiler or interpreter are sure to thwart your efforts to expand your reach beyond the boundaries of that DOS emulation. You simply cannot expect to take full advantage of the capabilities of a modern system from such a confined area.

        The two techniques normally used to exceed these limits are (1) to employ embedded assembler code, whether inline or in binary form, or (2) to link to modules that add functionality, whether by static or dynamic methods. These options do exist, but few programmers are positioned to take advantage of them, as they require advanced coding expertise or access to modules suited to the task at hand.
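
        As a sketch of option (1): PB/DOS 3.x accepts inline assembly on lines prefixed with "!". The little routine below is purely hypothetical (the name and the need for it are mine), and it assumes local variables can be referenced by name from the assembly lines:
        Code:
        ' hypothetical helper, just to sketch the inline-assembly idea
        FUNCTION RotateLeft% (BYVAL v AS INTEGER, BYVAL n AS INTEGER)
          LOCAL r AS INTEGER
          ! mov ax, v        ; value to rotate
          ! mov cx, n        ; rotation count
          ! rol ax, cl       ; ROL r16, CL is plain 8086
          ! mov r, ax        ; hand the result back to BASIC
          RotateLeft% = r
        END FUNCTION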

        So while I did point out that you might be able to breathe new life into an old interpreter or compiler by a port to a VM or adding support for it under a
        different environment, I don't think that is going to be much of a revival in the long term.



        • #5
          The two techniques normally used to exceed these limits are (1) to employ embedded assembler code, whether inline or in binary form, or (2) to link to modules that add functionality, whether by static or dynamic methods. These options do exist, but few programmers are positioned to take advantage of them, as they require advanced coding expertise or access to modules suited to the task at hand.
          You're right. But there is a third way available. What about so-called intrinsic functions, as in C++? Most of those functions generate one machine instruction each. An intrinsic function is therefore equivalent to an assembly language instruction. Coding with intrinsic functions is a kind of high-level assembly, and it can easily be combined with high-level language constructs such as if-statements, loops, and functions.

          The main advantage is that there is no need to learn assembly language when using intrinsic functions. But the 64-thousand-dollar question is: who will write such functions for PowerBASIC?
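
          To make the idea concrete in PB terms (the arrays and the loop are mine, purely for illustration): the loop below does four single-precision additions one at a time; a single SSE ADDPS instruction, exposed through an intrinsic, would do all four at once.
          Code:
          ' purely illustrative arrays; a vectorizing intrinsic would replace the loop
          DIM x(0 TO 3) AS SINGLE, y(0 TO 3) AS SINGLE, z(0 TO 3) AS SINGLE
          DIM k AS INTEGER
          FOR k = 0 TO 3
            z(k) = x(k) + y(k)   ' one SSE ADDPS would do this whole loop in one go
          NEXT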

          Gunther



          • #6
            If there is a direct correlation between a statement in a higher-level language and the machine instruction produced, then you can expect the maximum possible execution speed. However, what language is constructed in such a fashion? Even if it contains a significant number of such statements, there are many situations that call for more action on the part of the computer than can be expressed, or carried out, with single instructions.

            An interesting example is the typical FOR loop. Say you create a FOR statement that looks like this:
            Code:
            b=0
            FOR a = 1 TO 2000
              INCR b
            NEXT
            A smart compiler would recognize that a and b are numeric, a smarter compiler would recognize that a and b can be handled as integer or long values, and an optimizing compiler would see that a and b can both be allocated to registers for the fastest execution. But PowerBASIC goes a step beyond, by recognizing that a is not referenced within the loop, so it can use the ECX register and count down rather than up with the LOOP decrement method, which is the fastest method possible.

            Does it make any difference whether a and b are integers or longs? Actually, yes, because PowerBASIC is optimized towards the use of longs in preference to integers, the reason being that a long can hold a significantly larger range of numeric values than the smaller integer form permits.
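
            If I remember the defaults correctly, bare names like a and b would otherwise fall back to the default floating-point type, so declaring them explicitly (this is just a restatement of the loop above, nothing new) removes any doubt about how they will be treated:
            Code:
            DIM a AS LONG, b AS LONG
            b = 0
            FOR a = 1 TO 2000
              INCR b
            NEXT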

            But PowerBASIC also has to handle a mixture of numeric types without preference to any one type, so it has to decide which form is best suited to representing all the numeric types that are present, then recast any types as necessary so that they can all be used together correctly. This is all done transparently to the user and programmer, and it greatly benefits both. But it does not permit the compiler the easy option of generating one-for-one, high-level to low-level statement conversions. So the advantage of so-called "intrinsic" functions, namely that they facilitate a high-level approach to assembly programming, does not really count for much if you can't find intrinsic functions that do exactly what you need done without resorting to a low-level method of coding.
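
            A trivial illustration of that recasting (the values are arbitrary): the integer is widened to a long before the multiply, and the product is widened to a double before the add, all inserted by the compiler behind the scenes, so this one BASIC statement is nowhere near a single machine instruction.
            Code:
            ' arbitrary values, just to show mixed-type promotion
            DIM i AS INTEGER, l AS LONG, d AS DOUBLE
            i = 300
            l = 100000
            d = 0.5
            PRINT i * l + d    ' i is widened to a long, the product to a double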



            • #7
              So the advantage of so-called "intrinsic" functions, namely that they facilitate a high-level approach to assembly programming, does not really count for much if you can't find intrinsic functions that do exactly what you need done without resorting to a low-level method of coding.
              Let me answer that point. Intrinsic functions aren't as new as it may sound; the old-fashioned ALGOL 60 language had such functions. Of course, that was on mainframes. Nowadays we have MMX instructions and new registers for SIMD (for example SSE, SSE2, SSE3). It's a kind of vectorization, and intrinsic functions would make sense as a way to use the new instructions; altogether there is a set of 70 new machine instructions.

              The other side of the coin is that it's not enough for the CPU to provide these features; the operating system has to support them too. The new SIMD registers must be saved during any task switch, while MMX does not need that (it reuses the FPU registers). But since the advent of modern operating systems, that no longer seems to be an important question.

              Gunther



              • #8
                Taking advantage of specific features of an AMD or Intel architecture is unlikely to be an objective of an OS that is intended to run on as many types of PCs as possible, because those features do not represent a common denominator. Even video standards are principally a matter of implementation and enforcement in the video drivers provided by the manufacturer. Given that you might have a $400 advanced video card in your PC, and I may only have the use of an integrated graphics chipset, our Windows displays are still going to look remarkably similar, because the OS is not trying to take full advantage of the advanced video capabilities.

                Vista introduces a new video model that does make a difference in how the OS performs, and it has drawn complaints for exactly that reason, as providing additional "eye candy" with no significant gain in productivity.

                It's games that really seek to bring out the advances in PC architecture, though some specialty programs, particularly those that need a boost in processing power or that handle intensive video tasks, can benefit from these as well. The problem then, though, is that our development tools may not give us full access to those features either, which makes it even more challenging to figure out how to exploit them.

