
History of Operating Systems



The 1940's

Computers were a new idea. Typical general purpose computer designs in this era were closely modeled on the paper design presented at the Princeton summer school in 1946. That meant a single-accumulator machine with 40 bits per word and two 20-bit instructions packed into each word. Memory addresses were typically 12 bits, allowing addressing of 4k words of 40 bits each. Halfword addressing was supported only minimally, through two distinct jump instructions: jump to the high halfword and jump to the low halfword.

Subroutine call instructions had not yet been invented. Calling a subroutine typically took three instructions:
  1. Load a constant (the return jump) into the accumulator.
  2. Store the accumulator (the return jump) into the subroutine's return location.
  3. Jump to the entry point of the subroutine.

The return location of the subroutine was typically its last instruction, and the constant loaded for each call was not the return address but rather a fully formed jump instruction that transferred control back to the return address.
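
To make the sequence concrete, here is a small sketch, in Python, of a toy single-accumulator machine executing exactly this calling sequence. The mnemonics, memory layout, and addresses are invented for the illustration (no real 1940's machine looked like this), but the planted return jump works the same way: the caller builds a complete jump-back instruction, stores it over the subroutine's last word, and then jumps to the entry point.

  # Toy single-accumulator machine; mnemonics and layout are hypothetical.
  LOAD_CONST, STORE, JUMP, HALT = range(4)

  def run(memory, pc=0):
      acc = None
      while True:
          op, arg = memory[pc]
          if op == LOAD_CONST:    # load a constant -- here, an entire instruction -- into the accumulator
              acc, pc = arg, pc + 1
          elif op == STORE:       # store the accumulator into the addressed word (self-modifying code)
              memory[arg], pc = acc, pc + 1
          elif op == JUMP:        # unconditional transfer of control
              pc = arg
          elif op == HALT:
              return acc

  memory = [
      (LOAD_CONST, (JUMP, 3)),    # 0: the return jump, held as a constant
      (STORE, 6),                 # 1: plant it in the subroutine's return location (word 6)
      (JUMP, 5),                  # 2: jump to the subroutine's entry point
      (HALT, None),               # 3: the return address -- execution resumes here
      (HALT, None),               # 4: unused
      (LOAD_CONST, 42),           # 5: subroutine entry; its "work" is just loading 42
      (HALT, None),               # 6: return location, overwritten at call time with (JUMP, 3)
  ]

  print(run(memory))              # prints 42 after the planted jump returns control to word 3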

With machines like this, it was hard to imagine any kind of operating system.

The 1950's

The decade of the 1950's saw a number of innovations in computer architecture. These included the invention of index registers and subroutine call instructions, both crucial to the development of modern ideas of programming. This was also the decade when the first compilers were developed, most notably FORTRAN (it was always capitalized back then).

In the 1950's, magnetic tape drives became commonplace. Typical tape drives for computers stored data as a sequence of records. Each record was typically 80 characters (based on the record format of punched cards) or 120 characters (the number of characters per line on many early line printers), although the hardware did not limit the record format.

Computers of the 1950's rapidly standardized on 6 bits per character after the IBM 701 computer introduced this standard. The 701 also set the word size at 36 bits; many other machines used this size, while some used 48-bit words. The standard magnetic tape format that emerged in this era, also introduced by IBM, recorded 7 tracks on 1/2 inch wide tape. Seven tracks were used so that the 6 bits of one character could be stored in parallel along with a parity bit.

Data was typically stored at just 100 characters per inch (higher-density tapes came later, first 200 characters per inch and then 400), and tape reels typically held 1200 feet of tape. At 100 characters per inch, a 1200-foot reel could therefore hold at most about 1.4 million characters, and rather less in practice because of the gaps between blocks.

The tape drive could not read or write fractional records, and the drive hardware automatically computed (on output) or checked (on input) the parity of each character. The drive also computed or checked the checksum of the entire block, which was stored at the end of the block.

Typical tape drive commands were:
  • Read block into memory buffer.
  • Write block from memory buffer.
  • Skip forward one block.
  • Skip backward one block.
  • Write end-of-file mark (conceptually, a special kind of block).
  • Skip forward to the next end-of-file mark.
  • Skip backward to the previous end-of-file mark.
  • Rewind tape.
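
As an illustration of this interface, the Python sketch below models the command set as methods on a hypothetical TapeDrive class. It models only the interface, not any real controller: parity and block checksums, which the hardware computed and checked automatically, are ignored, and a "memory buffer" is just a bytes object. The next few sketches reuse this class.

  # Hypothetical model of a tape drive's command set (for illustration only).
  EOF_MARK = object()              # the end-of-file mark, "a special kind of block"

  class TapeDrive:
      def __init__(self):
          self.blocks = []         # the recorded blocks and file marks, in order
          self.position = 0        # index of the next block to be read or written

      def read_block(self):        # read block into memory buffer
          block = self.blocks[self.position]
          self.position += 1
          return block

      def write_block(self, buffer):          # write block from memory buffer
          self.blocks[self.position:] = [bytes(buffer)]   # writing invalidates everything beyond
          self.position += 1

      def skip_forward(self):      # skip forward one block
          self.position += 1

      def skip_backward(self):     # skip backward one block
          self.position -= 1

      def write_eof(self):         # write end-of-file mark
          self.blocks[self.position:] = [EOF_MARK]
          self.position += 1

      def skip_to_next_eof(self):  # skip forward past the next end-of-file mark
          while self.blocks[self.position] is not EOF_MARK:
              self.position += 1
          self.position += 1

      def skip_to_previous_eof(self):         # skip backward to just after the previous mark
          self.position -= 1
          while self.position > 0 and self.blocks[self.position - 1] is not EOF_MARK:
              self.position -= 1

      def rewind(self):            # rewind tape
          self.position = 0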

By mid decade, computers with whole banks of tape drives became common. In such configurations, one tape drive was frequently designated the system drive, and the tape on that drive contained the system programs. Main memories were still small, but it was common for a small loader to be permanently resident in main memory, able to load and run the nth file from tape drive d (you'd jump to the loader with the parameters n and d placed in agreed-upon memory locations or registers).

Such a system was a sufficient foundation for something users would begin to call a tape operating system. At the end of execution, a typical program would exit by jumping to the loader, asking it to load and run the command language interpreter from the system tape. The command language interpreter would remember (in just a few words of dedicated memory) the drive number from which it was reading a command file. Commands in the command file would set up parameters for programs and then launch them.
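
Reusing the hypothetical TapeDrive sketch from above, the resident loader amounts to little more than the following: rewind drive d, skip over the first n files, and then read blocks until the next end-of-file mark. A real loader would copy the blocks into memory and jump to them; this sketch just returns them.

  def load_file(drives, d, n):
      """Fetch the nth file (counting from zero) from the tape on drive d."""
      tape = drives[d]
      tape.rewind()
      for _ in range(n):
          tape.skip_to_next_eof()              # skip over the first n files
      blocks = []
      while (block := tape.read_block()) is not EOF_MARK:
          blocks.append(block)                 # gather the file's blocks
      return blocks                            # a real loader would now jump to the loaded code

The command language interpreter was then just another program: it remembered, in a few dedicated words, which drive held its command file, read a command, set up parameters, and asked the loader to load and run the next program.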

These systems were very fragile. Any program that accidentally damaged the loader would force a complete system restart. Any program that accidentally damaged the memory location used to remember the current input file could lead to wild and unpredictable system actions.

Nonetheless, these systems were flexible enough to support assemblers and compilers, linkers and subroutine libraries. FORTRAN grew up in such an environment, and the developers thinking about the new languages of 1960, COBOL and Algol, had extensive experience with such systems.

Rotating magnetic memory was also important in this era -- not disk drives, but drums. Some low-performance computers used drums for main memory. Typically, drum main memory stored data in word-parallel form, so a computer with a 40-bit word would have a 40-track drum, or perhaps 44 tracks, so that it could include a parity bit and some tracks for addressing (typically, one track for counting words and one holding a start mark so the hardware could tell where on each revolution the word count should be reset to zero).

File systems on drum computers had yet to be developed, but subroutine libraries typically included routines to read a block of n words from drum address d into main memory address m, or to write a block back out to the drum. Clever programmers could use these to move subroutines or data structures out of main memory when they were not needed, reading them back in only when required. This was called overlay management, and it was very difficult to get right.
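
A sketch of what such library routines might have looked like, and of overlay management done by hand, follows. The routine names, sizes, and addresses are invented for the illustration; real libraries differed from machine to machine.

  DRUM_WORDS = 4096
  drum = [0] * DRUM_WORDS          # the drum, addressed word by word
  main_memory = [0] * 1024         # a small main memory

  def drum_read(n, d, m):
      """Read a block of n words from drum address d into main memory address m."""
      main_memory[m:m + n] = drum[d:d + n]

  def drum_write(n, d, m):
      """Write a block of n words from main memory address m out to drum address d."""
      drum[d:d + n] = main_memory[m:m + n]

  # Overlay management by hand: save a rarely used routine on the drum, reuse
  # its memory for something else, then bring the routine back before calling it.
  ROUTINE_ADDR, ROUTINE_SIZE, DRUM_SLOT = 512, 128, 2048
  drum_write(ROUTINE_SIZE, DRUM_SLOT, ROUTINE_ADDR)   # swap the routine out
  # ... use main_memory[512:640] for a data buffer ...
  drum_read(ROUTINE_SIZE, DRUM_SLOT, ROUTINE_ADDR)    # swap it back in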

The 1960's

During the 1960's, the major developments in computer architecture were condition codes, byte addressing, memory management units, and various forms of parallel processing. Parallel processing ranged from attaching multiple co-equal CPUs to a single memory, through dedicating small general-purpose processors to input-output, to adding special-purpose coprocessors. The first graphics coprocessors emerged in this era, but the most common use of coprocessors was to speed input-output to the newly developed high-performance moving-head disk drives.

The first real operating systems emerged in the 1960s. Some of these were very crude. The acronym DOS, standing for disk operating system, first emerged in this era.

A typical DOS involved just one change to the tape operating system described above: the system subroutine library now included a file system, so that programs could use a disk drive as if it were multiple tape drives, where each disk file had a textual name. The command language interpreter could read commands from a disk file, launch programs from disk files, and tell each program it launched what files to use.
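
The flavor of that one change can be sketched by reusing the hypothetical TapeDrive class from the 1950's examples above: the file system layer simply hands a program a tape-like object when it opens a named disk file. The class and file names here are invented for the illustration.

  class DiskFileSystem:
      """A toy file system: named files, each accessed through the tape-style interface."""
      def __init__(self):
          self.files = {}                      # file name -> list of blocks and file marks

      def open(self, name):
          """Return a tape-like object positioned at the start of the named file."""
          drive = TapeDrive()
          drive.blocks = self.files.setdefault(name, [])
          return drive                         # programs read and write it just like a tape

  fs = DiskFileSystem()
  out = fs.open("PAYROLL")                     # a name the command interpreter might pass in
  out.write_block(b"employee record 1")
  out.write_eof()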

In the DOS era, tapes were used for backups and for files that were too large to fit on the disk drives.

The first networking efforts emerged in this era, and the dial-up modem came into being -- typically, dial-up access was at 110 baud for electromechanical Teletype terminals, but the first generation of modems was designed to work at up to 300 baud.

Memory Management Units

Memory management units were introduced very early in the 1960s by the Ferranti corporation on their Atlas computer. The Atlas system had paged virtual memory, and the Atlas operating system used it both for memory protection and to create the illusion of a large address space implemented using a small main memory and what was, at the time, a large magnetic drum.

By the end of the decade, IBM would release a computer, the IBM 360 model 67, that supported this technology, but several manufacturers got there first, including Scientific Data Systems (the SDS 940), General Electric (the GE 600) and Digital Equipment Corporation (the PDP-10). All of the latter virtual memory systems were built by customers (the University of California at Berkeley, MIT, and Bolt, Beranek and Newman), but were then sold commercially.

Memory management units required real operating systems. The University of California at Berkeley developed the Berkeley Timesharing System for the SDS 940. General Electric, in conjunction with Bell Labs and MIT, developed Multics for the GE 600, and BBN developed TENEX for the PDP-10. IBM's first attempt at an operating system for the 360/67, TSS/360, was a failure. They never got it working (Carnegie Mellon University, which had a 360/67 on order, took delivery of the broken TSS/360 software and fixed it). IBM's followup, the System 370 with the VM operating system, however, was very successful in the 1970's.

Multics

Multics was, undoubtedly, the single most important operating system developed in the 1960's. After Honeywell bought GE's computer division, it became the flagship operating system of the H 6000 series of mainframes, the successor to the GE 600.

Multics introduced the following ideas:
  1. A hierarchic file system. Each user had a home directory, with their own subdirectories hanging from that home directory.
  2. Access control lists for files.
  3. The idea of a computer utility to which anyone could subscribe. Today, we would call such services ISPs.
  4. Multiple levels of protection, so that lower level parts of the operating system were protected from upper level parts, which were protected from user programs.
  5. The unification of files and memory: opening a file was the same as making its contents available as a memory segment.

Multics also built on some ideas that first came to market in the Berkeley Timesharing System, including:

  1. Separate virtual address spaces for each user.
  2. The ability to share memory segments between users.

Multics was definitely not modern in some ways: The GE 600 and the H 6000 had 36-bit words, and the machine supported two character sets, GE's 6-bit code, packed 6 characters per word, and 7-bit ASCII. Five 7-bit characters could be packed into 35 bits (leaving the sign bit unused) or each 7-bit character could be padded out to 9 bits, packing 4 characters per word. The former made more efficient use of memory, while the latter was easier for programmers.
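
The trade-off is easy to see in a short sketch. The two packing helpers below are made up for the illustration, but the bit layouts are the ones described above: five 7-bit characters in the low 35 bits of a 36-bit word, or four characters padded to 9 bits each.

  def pack_five_by_seven(chars):
      """Pack five 7-bit characters into the low 35 bits of a 36-bit word."""
      assert len(chars) == 5
      word = 0
      for c in chars:
          word = (word << 7) | (ord(c) & 0x7F)
      return word                              # bit 35, the sign position, is left unused

  def pack_four_by_nine(chars):
      """Pack four characters, each padded out to 9 bits, into a 36-bit word."""
      assert len(chars) == 4
      word = 0
      for c in chars:
          word = (word << 9) | (ord(c) & 0x7F) # two padding bits per character
      return word

  print(hex(pack_five_by_seven("MULTI")))      # five characters per word: denser
  print(hex(pack_four_by_nine("MULT")))        # four per word: each starts on a 9-bit boundary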

The 1970's

What most of the world saw in the 1970's computer market was a steep plunge in the price of computing. This actually began with the introduction of minicomputers in the 1960's, with the least expensive general purpose computer systems selling for under $10,000 by 1970, but the trend quickly accelerated until, by the mid 1970's, a fully functional microcomputer could be purchased in kit form for under $1000 (the Altair 8800).

When Bell Labs quit the Multics project, some of the programmers who had worked on it decided to take the best ideas they had encountered and scale them down, building a little operating system suitable for a departmental timesharing system running on a minicomputer. The result was Unix. Even the name is a pun on Multics.

It is fair to say that Unix had only one new idea -- the SUID and SGID bits on files. Everything else had been done before. What Unix did was do all of it better, integrating a number of really good ideas from multiple sources (mostly Multics and the Berkeley Timesharing System) into one system and doing it very well.

Another system from the 1970's is largely forgotten outside of corporate datacenters: VM/370. This is the system that IBM developed out of the ashes of the TSS/360 project on the IBM 360/67 -- specifically, out of the CP/CMS operating system developed by IBM in the very late 1960s. What VM did that had never been done before was virtualize everything, so that users could run any operating system they wanted as user programs under VM (originally CP). The idea of being able to run, say, the horrible old DOS/360 as a user program on a computer without threatening any other user of that computer was extraordinary. The IBM 360 was the first 32-bit computer of any consequence, and IBM's current 64-bit mainframe architecture remains compatible with the 360 and its successor, the 370.

Networking, introduced in the mid 1960s, became commonplace in the 1970s. Most of the larger computer science departments were linked by the ARPANET (an experimental defense department network linking research projects funded by ARPA, the Defense Department's Advanced Research Projects Agency). Later, most university computer centers would be linked by BITNET, and as Unix emerged from Bell Labs, most Unix sites joined an informal network of Unix sites originally linked by dial-up lines. E-mail became the "killer app" on all of these early networks.

The 1980's

Personal computers such as the Apple II and the first IBM PCs came with systems that were typical of the early disk operating systems. IBM even called its system PC-DOS, and many PC users just called it DOS, as if no other system had ever had that name. Eventually, it emerged that this was a Microsoft product, although IBM originally sold it without this identification.

In the 1980's, Unix was pried free from Bell Labs and AT&T. The University of California developed BSD Unix, originally under AT&T license, but reimplemented enough of it that, as time passed, BSD Unix was wrested free of AT&T. Linus Torvalds, a Finnish hacker, would develop another Unix clone, Linux, in the early 1990's, and many manufacturers, under AT&T license, commercialized their own UNIX variants. IBM developed AIX. HP developed HP-UX. Sun developed Solaris.

Unix was adapted to run on multiprocessors by two competing vendors in this decade, Sequent and Encore. In general, Unix worked very well on machines with on the order of 16 CPUs. Earlier operating systems from Burroughs Corporation, the University of Michigan and Carnegie Mellon University had demonstrated similar performance in earlier decades.

Independently of all this, Carnegie Mellon University developed a system called Mach that was supposed to be used as a replacement kernel under Unix, but was in fact far more. Mach would be seen as an academic curiosity for many years, but eventually BSD was rebuilt on top of a Mach kernel, and Apple would choose BSD/Mach as the foundation for Mac OS X.

Window managers, first developed in the 1970's at SRI and Xerox PARC, came into maturity in the 1980's. Window managers don't need to rest on sophisticated operating system technology. Prior to Windows NT and 95 from Microsoft and Mac OS X from Apple, the dominant commercial window managers sat on top of rather primitive disk operating systems. In contrast, however, the X window system from MIT, developed under BSD Unix, took complete advantage of the available operating system technology. X remains the dominant window system on Unix and related platforms.

In the early 1980's, mail gateways were installed between the existing computer networks, creating serious headaches that were resolved later in the decade by the creation of the Internet, a generalization of the old ARPANET. By this time, operating systems such as Unix provided a good suite of network access primitives, and for most of the decade, the most common Internet servers were Digital Equipment Corporation's VAX computers running BSD Unix.

The 1990's

In the 1990's, the personal computing field finally rose above the level of DOS. Windows NT emerged when Microsoft hired away the core of DEC's VAX VMS operating system development group. Windows 95 was a reaction to this, but both are real operating systems in the sense that emerged in the 1960's. Mac OS X is another solid operating system to reach the desktop in this era. These systems were finally mature enough to incorporate decent support for virtual memory and network connectivity.

By the end of the 1990s, the operating systems running on typical desktop computers were as complex as any mainframe operating system of the 1960's. Essentially all of the innovative ideas from systems such as Multics were to be found on desktop and laptop computers, complete with full support for networking and window management.

The 2000's

As with the minicomputer and microcomputer revolutions before it, the mobile communications revolution created a wide-open niche in which, at least initially, competition flourished. At first, each cellphone and PDA vendor based its products on proprietary systems, but as the market grew and the functions of PDAs and cellphones began to merge, two systems emerged as the primary competitors in this new niche: Windows CE and Android.

Neither of these represents any kind of revolutionary new approach to operating systems. Windows CE is a direct descendant of Windows, freed from dependency on the Intel x86 family and stripped of the baggage of "integration" with a full suite of office productivity tools. Android, in turn, is based on Linux, stripped of the assumption that the shell or a window manager will be the primary application launcher.

A third thread has woven itself into both of these systems: the desire of many major players to lock down the system, controlling what applications the user is permitted to launch and what files the user is permitted to store, and, where control is not possible, allowing for pervasive monitoring of the actions taken by users of mobile platforms. This has led to innovations such as trusted platform modules, but it also raises serious questions about the relationship between operating system developers and civil liberties. Is it ethical for programmers to write code that permits pervasive surveillance of cellphone users? Probably not.



