For the past four weeks, I have been busy working on the B5500 Algol cross-compiler we call ALGOLXEM. The "XEM" stands for "cross-compiled on E-mode" which is the technical name for the present Unisys MCP architecture that has descended from the B5500.
Background
The whole purpose of this exercise is to have an initial way to generate object code for the emulator. We presently have no machine-readable source or object code from the B5500 -- all that we have are scans of listings of the system software on bitsavers, and a few small paper listings. Thus we need to turn the listings into source code and the source code into object code -- hence the XEM portion of this project to bootstrap the creation of object code.
The B5500 Algol compiler is written in Algol, and the modern MCP has an Algol compiler, the dialect for which has evolved from and is still somewhat similar to that for the B5500. It therefore made sense to try to use the modern MCP as the host for a cross-compiler and to port the B5500 compiler to modern Algol. It now appears that this approach actually is going to work. That cross-compiler is starting to generate what looks to be valid B5500 object code in the "code file" format used by the B5500 for an executable file.
As discussed in the last post, Nigel manually transcribed the 12,700-line source of the Algol compiler from the listing on bitsavers, and I have taken it from there. The first steps were to get that source into a form that could be used by the current MCP development tools, proofread the source by comparing it to the listing, and then reformat it using a pretty-printing utility into something that would be easier to read and understand. In the process, we identified a number of major issues that would need to be addressed in order to port the B5500 dialect of Algol to modern E-mode Algol:
- Bit numbering for partial-word designators is reversed between the B5500 and modern Algol.
- The B5500 and modern E-mode use the same 48-bit word size and numeric formats, but the B5500 used 6-bit characters, while E-mode uses 8-bit ones. The ability to process 6-bit characters natively was removed from the product line 30 years ago.
- The I/O for the compiler would need to be reworked, partly to address 6/8-bit character differences, partly due to changes in the way that files are declared in a program, and partly to replace some constructs (principally the RELEASE statement) that are no longer supported.
- The B5500 operated in two modes, word and character, with word mode being the default and primary one, at least for Algol programs. To provide for character mode, the compiler supported an additional syntax termed stream procedures. You called a stream procedure much like a standard Algol procedure, but syntactically stream procedures were little more than a free-form assembler for the machine's character-mode instruction set.
Simplifying Source Input Management and Addressing
The memory addressing issue had me really worried. One of the precepts of the architectural line that began with the Burroughs B5000 was that user programs do not manipulate addresses. Instead they manipulate offsets from a base (or indexes into a vector of elements -- same thing), with the vector's base address maintained by the system in a protected control word called a descriptor. This approach allows the system to establish (and change) the base address without the program being aware of it. Descriptors are the basis for relocation, bounds protection, and what was originally referred to as automated overlay, but we now (somewhat erroneously) refer to as virtual memory. Alas, on the B5500, these addressing concepts were not yet fully developed, and in the implementation of character mode and stream procedures, not only could you manipulate absolute memory addresses, you didn't even need a descriptor in order to do so.
The addressing and descriptor concepts are much more fully developed in modern E-mode, and there simply isn't any way to manipulate and use arbitrary memory addresses. All addressing is based, in one way or another, on offsets from system-controlled descriptors. Modern Algol has POINTER and ARRAY REFERENCE variables, which can help to abstract a base address and pass it around within a program, but they are subject to up-level addressing restrictions, i.e., a reference in a more global scope is not permitted to point to an object in a more local scope, to avoid the possibility of dangling references.
There were a couple of hopeful signs in the addressing issue, however. First, in order to work reliably, any memory addresses that escaped from character mode routines had to point to so-called "save" memory -- areas that were fixed in position by the system and not subject to overlay paging. Such areas include physical I/O buffers, the program's PRT and stack area, and arrays explicitly declared with the SAVE keyword. Fortunately, the B5500 Algol compiler restricted its manipulation of addresses to these types of entities.
Second, it appeared that most of the address manipulation occurred between the compiler's token scanner and the source code input mechanism. This input mechanism was quite complex. The primary input to the compiler was from cards, but that could be redirected to a different medium, such as magnetic tape or disk. The compiler could also merge the primary input file with a secondary input file, based on sequence numbers in columns 73-80 of each record. In this mode, the primary file served as a patch against the secondary. There was also an "include" capability that would inject the contents of a file into the source stream, and something called a CAST file, which was essentially a library of source modules on a tape volume that was arranged and formatted for quick searching. If storing source libraries on tape seems odd today, remember that the idea of storing source code on disk was still new in the early/mid-1960s, because disks were relatively new, relatively small, and very expensive. Most source programs in those days were maintained exclusively on punched cards. The final piece of input complexity was the compiler's DEFINE mechanism, which is similar to the #define construct that showed up later in C.
Low-level scanning is carried out by a stream procedure named, not surprisingly, SCAN. Much of character mode processing was based on a source index (SI) and destination index (DI). Each index was represented physically in the processor by a triplet of registers holding the word address (15 bits), character offset within the word (0-7, 3 bits), and bit offset within the character (0-5, 3 bits). These indexes were established from descriptor and numeric parameters passed to the stream procedure. Some character mode instructions, such as those transferring characters or words from source to destination, implicitly updated these indexes. Other instructions allowed the programmer to increment or decrement the indexes directly. You could also store the index value back into a parameter or return it as the value of the procedure. In either case, this could make the address available to the word-mode procedure that called the stream procedure. These stream procedure addresses were 18-bit values composed of the word address in the low-order 15 bits and the character offset within that word in the high-order three bits. The bit offset within a character was not encoded by this format and could only be manipulated within a stream procedure.
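To make that encoding concrete, here is a little Python sketch of the 18-bit format (my own illustration, not anything from the compiler):

# Pack and unpack the B5500's 18-bit character-mode address: word
# address in the low-order 15 bits, character offset in the next-higher
# three bits. The bit-within-character offset is not representable.

def pack_cm_address(word_addr, char_off):
    assert 0 <= word_addr < 2**15 and 0 <= char_off < 8
    return (char_off << 15) | word_addr

def unpack_cm_address(addr):
    return addr & 0x7FFF, (addr >> 15) & 0o7

assert pack_cm_address(8, 7) == 0o700010    # word 8, character 7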
These character-mode address values were used by the scanning mechanism to keep track of the current location on the source line where scanning was taking place. The starting location was passed to SCAN, which returned an updated location at exit. So far, no problem, but when the source of input changed -- say from the primary to secondary file, or from one of the file sources to the DEFINE text stored in the symbol table -- that 18-bit character-mode address was stashed away while scanning was directed to the other source and then reinstated when the source reverted back. Note that those addresses reflected not only the offset within the record area, but also the identity of the record area itself.
Worse, these redirections could be nested -- a $INCLUDE card from the primary input could be inserted into the secondary input's stream, which would redirect to the specified file, which in turn could invoke DEFINEs, references to which could be nested about 30 levels deep. It's important to note here that the Algol compiler was strictly a one-pass affair -- there was no separate pre-processor phase, so all of these input redirections were taking place while all of the scanning, parsing, lexical analysis, and code emission were also being done.
It's impressive how all of this worked, but it was extremely complex, and there was no obvious way to emulate the way the character-mode address values were being saved and restored within a modern E-mode environment. Not wanting to redesign and graft a new input-handling mechanism into the compiler, I decided to punt and change the problem. For the purpose XEM would be put to, we did not need all of that input richness, so I ripped out all of the input files except for the primary input. That left only one case of input redirection -- between the primary input and DEFINE text in the symbol table -- and that proved to be relatively easy to handle as a special case.
Input scanning now works using an ARRAY REFERENCE variable named SBUFF to point to the input source record (either CBUFF, the primary file's record area, or DEFINEARRAY, an existing array that holds DEFINE text during processing). It uses a pseudo character-mode address that has the word offset from the SBUFF base in the low-order 15 bits and the character offset in the next-higher three bits. This facilitates some manipulation of the character-mode addresses that is done in word mode, e.g., adding 32768 to the address to advance to the next character within a word.
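As a sketch of that manipulation (the handling of the wrap into the next word is my assumption of how the boundary case would go, not a transcription of the compiler's code):

CHAR_UNIT = 32768                   # 2**15: one character position

def bump_char(addr):
    # Advance the scanner's pseudo character-mode address by one
    # character: word offset in the low 15 bits, character offset
    # in the next-higher three bits.
    if ((addr >> 15) & 0o7) < 7:
        return addr + CHAR_UNIT     # next character, same word
    return (addr & 0x7FFF) + 1      # first character of next word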
The compiler also had a number of output files that were not really needed in XEM, so those were ripped out as well, along with any dependent routines (including some stream procedures) that used those files. In the end, the input/output mechanism of the compiler was reduced to one file for source input (CARD), one for compiled code output (CODE), one for generating an optional listing (LINE), and two temporary disk files used to generate an optional cross-reference listing.
There were a few remaining cases of character-mode addresses being returned to and manipulated by word-mode procedures, but fortunately those were kept pointing within one array, so I was able to deal with the word portion of the address as just an offset within that array and not as a general memory address.
Understanding Stream Procedures
That simplification of the source input mechanism solved the most serious of the stream procedure issues, and actually eliminated a number of stream procedures from the source, but that still left the fact that stream procedures were used for most of the text manipulation and formatting in the compiler. What to do about those?
In looking at the compiler during the proofreading phase, I had come to the conclusion that it would be better to retain 6-bit character coding for the compiler's internal operations, even though modern Algol does not have any facilities for directly manipulating 6-bit units. A lot of the data structures within the compiler were designed for 6-bit characters, and the resulting object code was going to have to be in terms of 6-bit characters anyway, so this seemed like the better way to go. Experience has since demonstrated that this was the right decision, but it meant that I would not be able to use much of the current language and hardware's very capable character-manipulation facilities -- the stuff that replaced stream procedures starting with the B6500.
To understand what needed to be done, we need to look at some stream procedure examples to see how they work and what they can do. Here's a simple example from the original compiler source, except that I've changed the B5500 special characters to their digraph equivalents:
1: BOOLEAN STREAM PROCEDURE OCTIZE(S,D,SKP,CNT); VALUE SKP,CNT;
2: BEGIN
3: SI:=S; SI:=SI+3; DI:=D; SKP(DS:=3 RESET); % RIGHT JUSTIFY.
4: CNT(IF SC>="8"THEN TALLY:=1 ELSE IF SC<"0"THEN TALLY:=1; SKIP 3 SB;
5: 3(IF SB THEN DS:=SET ELSE DS:=RESET; SKIP SB));
6: OCTIZE:=TALLY; % "1" = NON OCTAL CHARACTER.
7: END OCTIZE;
This procedure converts a string of octal characters to its binary equivalent. The string of characters is addressed by S (actually, this routine is designed to work from the token structure built by the scanner -- the first three characters have length and control information, so the text of the token begins in the fourth character). The binary result will be placed at the address of D (typically a REAL variable) offset by SKP octades. The number of octal characters converted is determined by CNT. The parenthesized constructs are loops, each preceded by its repeat count. This presentation is a little obscure, so let's reformat it a bit and add some comments to show what is going on.
1: BOOLEAN STREAM PROCEDURE OCTIZE(S,D,SKP,CNT);
2: VALUE SKP,CNT;
3: BEGIN
4: SI:=S; % load source index from address in S
5: SI:=SI+3; % advance source index forward 3 chars to token text
6: DI:=D; % load destination index from address in D
7: SKP(DS:=3 RESET); % transfer 3-bit groups of zeros for (SKP mod 64) times to dest
8: CNT( % repeat the enclosed statements (CNT mod 64) times
9: IF SC>="8"THEN % if the source character (SC) pointed to by SI >= "8":
10: TALLY:=1 % set TALLY (a 6-bit hardware register) to binary 1
11: ELSE % otherwise:
12: IF SC<"0"THEN % if the source character is < "0"
13: TALLY:=1; % set TALLY to 1 (this is an error flag)
14:
15: SKIP 3 SB; % advance SI forward over the first 3 bits of the character
16: 3( % repeat the following 3 times
17: IF SB THEN % if the source bit pointed to by SI is a 1 (true):
18: DS:=SET % set the destination bit addressed by DI to 1, advance DI
19: ELSE % otherwise
20: DS:=RESET; % set the destination bit to zero, advance DI
21:
22: SKIP SB % advance SI by one bit
23: ) % end the repeat-3-times loop
24: ); % end the repeat-CNT-times loop
25: OCTIZE:=TALLY; % return the error flag (1 = non-octal character)
26: END OCTIZE;
This would have been called from word mode with a statement similar to:
1: IF OCTIZE(ACCUM[1],ACCUM[4],16-COUNT,COUNT) THEN
2: FLAG(521) % NON-OCTAL CHARACTER IN STRING.
OCTIZE is a good example of the character and bit manipulation that was possible with stream procedures. The character codes for the decimal digits were assigned to octal 00 through 11, so the low-order three bits of characters 0-7 were the corresponding octal value. TALLY (a six-bit register available for general use) was implicitly set to zero on entry to a stream procedure. Assignments to DS (the destination string) transferred data to the address specified by DI and incremented DI automatically by the units of data transferred (words, characters or bits). If at the start of a character or word transfer operation either of the indexes were not on a character or word boundary, respectively, the hardware automatically adjusted them forward to the appropriate boundary. Repeat values were limited to 63 (six bits), so nested repeat loops were often used to transfer larger quantities of data. Most stream statements compiled to a single 12-bit character-mode instruction.
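In modern terms, the net effect of OCTIZE is something like the following Python sketch (ignoring the right-justification that the SKP loop performs):

def octize(text, cnt):
    # Pack the low-order three bits of each of cnt characters into a
    # binary value; flag anything outside "0".."7" the way TALLY does.
    value, error = 0, False
    for c in text[:cnt]:
        if not ("0" <= c <= "7"):
            error = True                       # TALLY := 1
        value = (value << 3) | (ord(c) & 0o7)  # the 3-bit transfers
    return value, error

assert octize("17", 2) == (0o17, False)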
Here is a more typical, but somewhat trickier example. This transfers up to 63 characters (N) from a source address (SORCE) offset by SK characters to a destination address (DEST) offset by DK characters. Both offsets are limited in this procedure to 127 characters from their respective bases:
1: STREAM PROCEDURE MOVECHARACTERS(N,SORCE,SK,DEST,DK);
2: VALUE N, SK, DK ;
3: BEGIN
4: SI:=LOC SK; % point SI to the location of SK (to the parameter word itself)
5: SI:=SI+6; % advance 6 chars (to the low-order 12 bits of the offset value)
6: IF SC^="0" THEN % if the source char is not zero (i.e., offset > 63):
7: BEGIN
8: SI:=SORCE; % point SI to the address in SORCE
9: 2(SI:=SI+32); % advance SI by 64 characters
10: SORCE:=SI % save the updated index back into the SORCE parameter cell
11: END;
12:
13: SI:=LOC DK; % repeat the same process for DEST, DK, and DI
14: SI:=SI+6;
15: DI:=DEST;
16: IF SC^="0" THEN
17: 2(DI:=DI+32);
18:
19: SI:=SORCE; % restore the updated source index
20: SI:=SI+SK; % advance SI by (SK mod 64) characters
21: DI:=DI+DK; % advance DI by (DK mod 64) characters
22: DS:=N CHR; % transfer (N mod 64) characters from source to destination
23: END MOVECHARACTERS;
Notice the significance of the LOC keyword. It causes the source or destination index to point to the parameter word in the stack itself, instead of to the location the parameter word addresses (assuming what it holds is actually an address). Also notice how all index increments and repeat counts are modulo 64. This is because the registers in the processor that hold these values are only six bits wide.
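The net effect of that modulo-64 dance, in a Python sketch (assuming, as the procedure does, offsets under 128):

def apply_offset(index, offset):
    # Six-bit increments can't express an offset over 63 directly, so
    # test whether the offset's upper bits are nonzero, take a coarse
    # 64-character jump if so, then add the remainder modulo 64.
    assert 0 <= offset < 128
    if offset > 63:             # the SC ^= "0" test on the parameter word
        index += 64             # the 2(SI:=SI+32) jumps
    return index + offset % 64  # SI := SI + SK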
The hardware could distinguish a descriptor from an ordinary numeric operand, because all control words had their high-order ("flag") bit set. Descriptors only pointed to word boundaries, with the address of the word in the low-order 15 bits, but a numeric operand could represent both the word address and character offset using the 18-bit representation described above. When loading an address from a parameter word, the character-mode instructions dealt with the two types of words differently. In the case of MOVECHARACTERS, the SORCE parameter would normally be passed as a descriptor but be overwritten by the procedure at line 10 with an updated address in the 18-bit numeric-operand form. Note that what got overwritten was just the parameter word in the stack, not the original descriptor referenced by the caller.
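A hedged sketch of that distinction (the real hardware logic is certainly more involved than this):

def load_index(param_word):
    # Control words have the flag bit (bit 47) set and supply only a
    # word address; numeric operands use the 18-bit word+character form.
    if (param_word >> 47) & 1:                    # descriptor
        return param_word & 0x7FFF, 0             # word address, char 0
    return param_word & 0x7FFF, (param_word >> 15) & 0o7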
There is one more example I want to show, which terrified me when I came across it and started to understand what it did.
1: INTEGER STREAM PROCEDURE GETF(Q); VALUE Q;
2: BEGIN
3: SI:=LOC GETF; % point SI to return value cell
4: SI:=SI-7; % move SI to bit 6 of MSCW
5: DI:=LOC Q; % point DI to Q
6: DI:=DI+5; % move DI to bit 30 of Q
7: SKIP 3 DB; % move DI to bit 33 of Q [start of word addr field]
8: 9( % move 9-bit register R value (PRT location) from MSCW
9: IF SB THEN % into high-order 9 bits of Q's word addr field
10: DS:=SET
11: ELSE
12: DS:=RESET;
13:
14: SKIP SB
15: );
16: DI:=LOC Q; % point DI to Q
17: SI:=Q; % load SI from value of Q (R+Q = R+3 from only call)
18: DS:=WDS; % copy FPB descriptor word from M[R+3] to Q
19: SI:=Q; % load SI with addr of FPB
20: GETF:=SI % return FPB memory address as the result
21: END GETF;
What's worse, it was called from just one place in the compiler, during initialization, with this cryptic statement:
IF EXAMIN(RR11:=GETF(3)+"Y08" )^=12 THEN
Yikes! What is going on here?

EXAMIN is a stream procedure that extracts and returns the 6-bit character code at an absolute character address in memory. The implementation of this procedure is very straightforward, and I won't go into its details here.
Upon entry, GETF points SI to the stack cell allocated for its return value, and then decrements that address into the prior word in the stack (on the B5500, a stack push increments the top-of-stack pointer rather than decrementing it, as on some systems). That prior word is the MSCW (Mark Stack Control Word) that was pushed into the stack when the procedure was called. The primary function of the MSCW is to link this procedure's stack frame (activation record) to the prior one lower in the stack, but it also holds the value of the R register for the prior frame. The R register holds the address of the program's PRT (Program Reference Table, where the global variables and descriptors for the program are stored) -- actually just the high-order nine bits of that address; the low-order six bits are implicitly zero. Physically, the low-order six bits of the R register were used to implement the stream procedure TALLY register.
GETF then moves the nine-bit R register value into the high-order nine bits of the word address portion of the value passed in parameter Q. It uses that address in Q to move the corresponding word in memory to the procedure's return value word, and exits, leaving that word on the top of the stack.
Since GETF is called with a value of 3 for the Q parameter, what it effectively does is return the word at PRT[3]. That word is a data descriptor to the program's File Parameter Block, or FPB. The FPB holds "label equation" information for a program's files -- the actual file names and device types that will be used at run time. It is initially built by the compiler, but updated by the MCP at run time from information that may optionally have been entered on control cards associated with the program's execution. For those of an IBM OS/360 bent, those control cards provided a capability somewhat like DD statements in JCL.
The "Y08" had me stumped for quite a while, until I realized that value is octal 700010, and to a stream procedure, that looks like word address 8 plus character offset 7. Since that value gets added to the value returned from GETF, which is a data descriptor with the absolute address of the FPB in its low-order 15 bits, what effectively gets stored in RR11 and passed to EXAMIN is the address of character 7 in word 8 of the FPB. The FPB has five-word entries, so what we're really looking at is word 3 (0-relative) of the second file declared in the program, which is the primary input, CARD. Character 7 of that word (the low-order six bits) identifies the device type to which the file is assigned, and 12 is one of the values used for disk. All of this information is available in the B5500 Handbook.
Thus this whole rigmarole was gone through to determine if the compiler's primary input would be coming from disk instead of something else. Now we know why file attributes were invented for the B6500 (and in a limited form, were back-ported to late versions of the B5500 MCP).
What did the compiler do with this information? The answer is that it simply adjusted some buffering and blocking factors for the file. The address into CARD's FPB entry was saved in the variable RR11, and the compiler continued to use increments of that address in multiples of five to perform similar tests on other files in the program and make similar adjustments to their file declarations.
How did I deal with this in preparing XEM for E-mode? That turned out to be quite simple. None of this type of file manipulation is normally necessary in modern MCP systems, and if it were, file attributes would give us a much easier and safer way to get this type of information. I ended up deleting GETF and all of the references to its result, but I had to verify first that I could do that with impunity, and along the way it was quite a lesson in what people could (and did) do with stream procedures.
Converting Stream Procedures
Hairball case studies like GETF aside, I still had to deal with the 40-odd stream procedures that survived the simplification of the compiler's input and output handling. I wanted something that was modeled very closely on the stream procedure syntax and semantics, so that wherever possible, I could do one-for-one substitutions of the stream statements, and hopefully not worry very much about what those procedures were doing.
While modern E-mode systems no longer support 6-bit character codes natively, they have something that the B5500 didn't -- a much more capable facility for manipulating bit fields within words. In the Algol dialects of both the B5500 and modern E-mode, you can refer to a sub-field of a word in this way:
W.[31:4]
This is termed a partial-word designator. It is a form of arithmetic primary. Inside the brackets, the number before the colon specifies the first (high-order) bit of the field and the number after the colon specifies the number of bits in the field. The result is the specified bit field, right-justified in a word of zeroes. Note that binary values in both systems are big-endian. This partial-word notation accomplishes directly what you usually try to accomplish on other systems with shift and mask instructions. The B5500 had mask instructions (AND, OR, etc.) but no shift or rotate instructions -- it didn't need them. Isolation of a field of bits was quite efficient. On the B5500 it required three instructions, two to load the bit numbers into registers, and a third to extract and shift the field.
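For anyone more used to the shift-and-mask idiom, this Python sketch is the equivalent of W.[31:4] under E-mode's numbering (bit 0 is low order):

def partial_word(w, hi_bit, length):
    # Extract the field whose high-order bit is hi_bit, right-justified.
    return (w >> (hi_bit - length + 1)) & ((1 << length) - 1)

W = 0x123456789ABC                    # a 48-bit word
assert partial_word(W, 31, 4) == 0x5  # same as (W >> 28) & 0xF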
This syntax works on both systems, but there are two significant differences in it between the B5500 and E-mode:
- The bit numbering is reversed -- bit 0 is high order for the B5500 and low order for E-mode.
- On the B5500 the field specifications had to be constants, but with E-mode they can be arbitrary arithmetic expressions, albeit limited in range to 0-47 for the starting bit number and 1-48 for the field length. E-mode also allows a field to extend beyond the low-order bit and wrap around to the high-order end of the word. Further, when the bit numbers are constants, E-mode isolates the field using a single instruction.
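For example, with the field position computed at run time, character N of a 48-bit word is just W.[47-6*N : 6]. A Python rendering of that, as a sketch:

def char_of_word(w, n):
    # The Nth 6-bit character of a 48-bit word, N in 0..7; character 0
    # occupies the high-order bits 47..42.
    hi = 47 - 6 * n
    return (w >> (hi - 5)) & 0o77

W = 0o0102030405060710      # eight 6-bit characters: 01, 02, ... 10
assert [char_of_word(W, n) for n in range(8)] == [1, 2, 3, 4, 5, 6, 7, 8]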
The bit number reversal was a nuisance that I managed to resolve largely using some regular expressions and the text editor's find-and-replace capability, but E-mode's ability to have dynamic bit field specifications saved my bacon when it came to implementing 6-bit character handling.
There is a second form of partial word operation, termed bit concatenation, that was especially useful in emulating stream procedure behavior. It looks like this:
A & B[23:31:4]
This is another form of arithmetic primary. What it does is transfer a bit field from the value of B into the value of A and yield the updated value of A as the value of the primary. The first value inside the square brackets specifies the high-order bit in A where the destination field will start, the second value specifies the high-order bit in B where the source field will start, and the third value specifies the number of bits to transfer. On an E-mode system, all three values can also be arbitrary arithmetic expressions, limited in range the same way as for partial-word designators. The bits in A that are outside the specified destination field are not altered. A must be a primary, but B can be an arbitrary expression. Bit concatenation also has a shorthand form, A&B[23:4], where the first value inside the brackets is the starting destination bit number and the second value specifies the field length. The transfer is implicitly from the low-order bits of B.

I decided the easiest thing to do would be to model the emulation roughly on the way that character mode worked on the B5500. Rather than try to develop a separate module for this, I just embedded the emulation in the XEM compiler source. I defined global variables for the word, character, and bit index registers. The word indexes could not hold full addresses as on the B5500, so I also defined ARRAY REFERENCE variables to point to the memory areas to be operated on (separately allocated memory areas are generically referred to as "arrays" in Algol, although they are in fact one-dimensional vectors of memory words). The word address "registers" then held just a word offset within these arrays.
Character mode operations were implemented with a set of defines and procedures. Most stream procedure parameters that reflected addresses were converted into two parameters -- the array reference and the word offset within that array. For example, a stream procedure invocation P(ACCUM[1]) in the original source would be converted to P(ACCUM,1) in XEM. Internal to P, the stream statement SI:=A was converted to STREAMSETSI(A,AX), where A and AX are the formal parameters corresponding to actual parameters ACCUM and 1, with AX being the parameter introduced to carry the word offset. STREAMSETSI was implemented like this:
1: DEFINE WOFF = [14:15] #, COFF = [17:3] #;
2:
3: DEFINE STREAMSETSI(A, X) =
4: BEGIN
5: MBASE:= A;
6: MREG:= (X).WOFF;
7: GREG:= (X).COFF;
8: HREG:= 0;
9: END STREAMSETSI #;
MBASE is the array reference variable and the rest are ordinary REAL variables. In character mode, the source word address was held in the M register, the character offset within the word was held in the G register, and the bit offset within the character in the H register. The corresponding registers for the destination index were S, K, and V.

One of the most common things to do with a stream procedure is to transfer characters from a source address to a destination address. On the B5500, that would have been coded similar to this:
1: SI:=S;
2: DI:=D;
3: DS:=30 CHR;
In XEM, that now looks similar to this:
1: STREAMSETSI(S,0);
2: STREAMSETDI(D,0);
3: STREAMTRANSFERCHR(30);
The implementation of character transfer is this:
1: DEFINE STREAMTRANSFERCHR(N) =
2: STREAMTRANSFERCHRQQ(MBASE, SBASE, N) #;
3: PROCEDURE STREAMTRANSFERCHRQQ(MBASE, SBASE, N);
4: VALUE N;
5: ARRAY MBASE, SBASE[0];
6: REAL N;
7: BEGIN
8: REAL
9: NC; % CHARS LEFT TO TRANSFER
10:
11: STREAMADJUSTSICHAR;
12: AREG:= MBASE[MREG];
13: STREAMADJUSTDICHAR;
14: BREG:= SBASE[SREG];
15: NC:= N;
16: WHILE NC > 0 DO
17: BEGIN
18: BREG:= * & AREG[47-KREG*6 : 47-GREG*6 : 6];
19: NC:= *-1;
20: IF GREG < 7 THEN
21: GREG:= *+1
22: ELSE
23: BEGIN
24: GREG:= 0;
25: MREG:= *+1;
26: IF NC > 0 THEN
27: AREG:= MBASE[MREG];
28: END;
29:
30: IF KREG < 7 THEN
31: KREG:= *+1
32: ELSE
33: BEGIN
34: SBASE[SREG]:= BREG;
35: KREG:= 0;
36: SREG:= *+1;
37: IF NC > 0 THEN
38: BREG:= SBASE[SREG];
39: END;
40: END WHILE;
41:
42: IF KREG > 0 THEN
43: SBASE[SREG]:= BREG;
44: END STREAMTRANSFERCHR;
The source and destination array reference variables, MBASE and SBASE, had to be declared locally to the converted stream procedure body to avoid up-level addressing restrictions, so the define was used to pass these as hidden parameters to the "QQ" procedure that did the actual transfer. The STREAMADJUST... calls are used to ensure that the source and destination indexes are on a character boundary, as required by the character-mode semantics. When the processor was in character mode, the top-of-stack A and B registers were used as word buffers for the source and destination strings, respectively, and I used similarly-named global variables as word buffers. Note that I do not use these quite the same way the A and B registers were used in the B5500 -- those registers would have maintained their state between character-mode instructions, but I load and flush them inside each emulation procedure.
The actual data transfer takes place by means of the bit concatenation expression on line 18. Everything else is register state maintenance.
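In Python terms, line 18 amounts to this (a sketch of the one-character case, not the compiler's actual code):

def transfer_char(breg, areg, kreg, greg):
    # Copy the 6-bit character at offset GREG in AREG into character
    # position KREG of BREG, leaving BREG's other bits untouched.
    src_hi, dst_hi = 47 - greg * 6, 47 - kreg * 6
    char = (areg >> (src_hi - 5)) & 0o77
    mask = 0o77 << (dst_hi - 5)
    return (breg & ~mask) | (char << (dst_hi - 5))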
This approach is not the most faithful implementation of stream procedure semantics, it's not complete, and it's certainly not the most efficient one, but it doesn't need to be any of those. The goal in this phase of the project is simply to get the ALGOLXEM cross-compiler to work. Ultimately that compiler needs to do only three things: (a) generate test cases for the emulator, (b) compile the B5500 version of itself, and (c) compile the ESPOL compiler (yes, some day we plan to have an ESPOLXEM). Once we have the MCP bootstrapped to the emulator environment and the compilers working there, we won't need the XEM tools anymore.
I was successful enough in this approach that most stream statements indeed can be replaced one-for-one with emulation constructs, and in many cases the conversion of stream procedures in the compiler source was just mechanical substitution of one form of statement for another. I'm relatively happy with how it turned out.
The remaining piece of 6-bit character handling concerned getting data into and out of the compiler. Output to the CODE file and I/O for the two cross-reference temporary files is done in word-mode binary, so those were not sensitive to six- vs. eight-bit character coding considerations. Source input to the compiler was done using "array-row I/O," somewhat like fgets() and fputs() in C. Output to the listing file was a combination of array-row I/O and formatted I/O, which is similar to a FORTRAN FORMAT. It was easy to leave the formatted I/O more or less alone and have it operate natively in EBCDIC, but the array-row I/O constructs had to be converted.
I wrote a routine to translate the native EBCDIC character set used by E-mode to the B5500 character codes and pack those codes eight to the word. I wrote another routine to reverse the process and produce EBCDIC from 6-bit coded data. At each array-row I/O statement I inserted the appropriate translation routine after a read or before a write to allow the actual I/O to the file to remain in EBCDIC but the data used internal to the program to be in B5500 6-bit code. This was fairly easy to implement and works well.
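In rough outline, the input-side routine does something like this sketch (EBCDIC_TO_BIC is a stand-in name for the real translation table, and the space padding of short records is my assumption):

def to_bic_words(record, ebcdic_to_bic):
    # Translate each character to its 6-bit code and pack eight codes
    # per 48-bit word, first character in the high-order position.
    words = []
    for i in range(0, len(record), 8):
        w = 0
        for c in record[i:i+8].ljust(8, " "):
            w = (w << 6) | ebcdic_to_bic[c]
        words.append(w)
    return words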
Interestingly, the B5500 had three six-bit character sets. The first of these is BCL (Burroughs Common Language), which was a standard character code used to communicate with peripheral devices. The second is the internal code used to represent character data. This is commonly referred to as BCL, but it's actually not. Some documents refer to it as BCL-Internal, but in this project, we're referring to this character coding as BIC -- B5500 Internal Code. The third code was the collating sequence for character comparisons. This code was not physically represented, but was used implicitly by the hardware to determine the relative ordering of character codes. This collating sequence was decidedly not the same as the binary value of either the BCL or BIC codes. The stream procedure emulation had to take this separate collating sequence into account in the implementation of character-mode comparison operations.
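So a character comparison in the emulation looks less like comparing two codes and more like this sketch (COLLATE being a hypothetical 64-entry table giving each BIC code its rank in the collating sequence):

def bic_less_than(a, b, collate):
    # a and b are 6-bit BIC codes; their ordering is decided by their
    # positions in the collating sequence, not by their binary values.
    return collate[a] < collate[b]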
Current Status
Our pretty-printing utility reformatted the compiler source into almost 20,000 lines. Conversion work to date has modified over 4,000 of those. Much of that is in deletions resulting from the simplification of input and output, and additions associated with the new code to emulate stream procedure operations. Reversing bit numbers in partial-word expressions affected almost 4% of the remaining lines. Most of the other differences are changes to individual stream procedures and their invocation parameters, and corrections to typos in the original transcription.
At this point we have successfully compiled a small, but non-trivial program using ALGOLXEM. I had a listing from 1970 of a B5500 Algol program that included the generated object code. This provided a basis for comparing what XEM was generating against what the real compiler once generated, and we resolved a lot of problems in the process of getting XEM to produce comparable output. We are still finding typos in the transcription of source from the listings on bitsavers, and expect to continue to do so as we put the cross-compiler through other tests. Other than those typos, the only true bugs we've found are -- not surprisingly -- ones in my stream procedure emulation logic.
I have posted the output from this test to the project web site. There is a download file available on that site with the output of the XEM compilation, including the compiler listing and binary code file. I have also posted a PDF scan of the original B5500 listing from 1970.
With only one good test under our belts, ALGOLXEM is hardly finished, and we are nowhere near being ready to try compiling the B5500 compiler with it. Significant portions of the compiler have not yet been exercised, particularly defines and word-mode procedure calls. We are actively looking for more B5500 code listings we can use to assemble test cases and validate the output of the compiler, but we are probably going to have to write quite a few tests ourselves and determine on our own whether the generated code is correct -- or at least looks like it would have worked.
If you have any B5500 compiler output you think may be useful, particularly listings with generated code, please contact us through the project web site.
The XEM compiler in its current state has been committed back to the project's Subversion repository, along with a new utility, CODEDUMP55, I wrote to generate a semi-formatted dump of a B5500 code file. If you are interested, have a look.
Thus passeth the agony of stream procedures. The only ecstasy in this is that I'm done with them. Mostly. I think.