Excerpt Two

Nothing new under the sun

As I stated at the start of the book: I propose the industrialization of IT and application development. Now, I expect that at least some of you are asking: if this is such a great idea, how come nobody has thought of it before? 

The answer is, of course, that it has been thought of – and tried – many times before, by some very smart people. I have already written about some of those efforts, and I will introduce you to some new ones in this chapter.

I’ll start with this Wikipedia entry[1]:

“Ad hoc code reuse has been practiced from the earliest days of programming. Programmers have always reused sections of code, templates, functions, and procedures. Software reuse as a recognized area of study in software engineering, however, dates only from 1968 when Douglas McIlroy of Bell Laboratories proposed basing the software industry on reusable components.”

McIlroy made his proposal for “Mass Produced Software Components” at a conference sponsored by the NATO Science Committee, held in Garmisch, Germany, from 7 to 11 October 1968.

He opened his paper with this: 

“We undoubtedly produce software by backward techniques… Software production today appears in the scale of industrialization somewhere below the more backward construction industries. I think its proper place is considerably higher, and would like to investigate the prospects for mass-production techniques in software.”

He added: 

“Of course mass production, in the sense of limitless replication of a prototype, is trivial for software. But certain ideas from industrial technique I claim are relevant…The idea of interchangeable parts corresponds roughly to our term ‘modularity,’ and is fitfully respected. The idea of machine tools has an analogue in assembly programs and compilers. Yet this fragile analogy is belied when we seek for analogues of other tangible symbols of mass production. There do not exist manufacturers of standard parts, much less catalogues of standard parts. One may not order parts to individual specifications of size, ruggedness, speed, capacity, precision or character set.”

McIlroy was well ahead of his time. His paper did not trigger immediate industry action, so he went on to quietly lead the Unix team and to create Unix pipes, the part of Unix that makes it possible to connect small programs together. As I explained in Chapter Two, there is an underlying philosophy to Unix: “the idea that the power of a system comes more from the relationships among programs than from the programs themselves”. Pipes are one of the key mechanisms for doing this. In the words of Dennis Ritchie: 

“One of the most widely admired contributions of Unix to the culture of operating systems and command languages is the pipe…”[2]

Unix pipes are at the heart of McIlroy’s philosophy[3]:

1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.

2. Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don’t insist on interactive input.

3. Design and build software, even operating systems, to be tried early, ideally within weeks. Don’t hesitate to throw away the clumsy parts and rebuild them.[4]

4. Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you’ve finished using them.

Note the last item: he is talking about the transfer of skill to machinery (or, in this case, to code). 
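To make the pipe idea concrete, consider the shell pipeline “cat access.log | grep ERROR | sort | uniq -c”, which counts the distinct error lines in a log file. Below is a minimal sketch in Python (the language and the file name access.log are purely illustrative assumptions on my part, not anything from McIlroy) that performs the same composition explicitly: each stage is a small program that does one thing, and the pipe connects the standard output of one to the standard input of the next, exactly as items 1 and 2 above prescribe.

    import subprocess

    # Illustrative only: "access.log" is a hypothetical input file.
    # Each stage does one thing well; the output of each becomes the
    # input of the next, as-yet-unknown, program.
    p1 = subprocess.Popen(["cat", "access.log"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["grep", "ERROR"], stdin=p1.stdout, stdout=subprocess.PIPE)
    p3 = subprocess.Popen(["sort"], stdin=p2.stdout, stdout=subprocess.PIPE)
    p4 = subprocess.Popen(["uniq", "-c"], stdin=p3.stdout, stdout=subprocess.PIPE)

    # Close our copies of the upstream pipes so each stage sees
    # end-of-file when its neighbour finishes.
    p1.stdout.close()
    p2.stdout.close()
    p3.stdout.close()

    # Print the final stage's output: a count of each distinct error line.
    print(p4.communicate()[0].decode())

The point is not the Python; it is that cat, grep, sort, and uniq are small, single-purpose tools, and the pipe is the relationship that composes them into a new program without modifying any of them.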

Unix was the first implementation of McIlroy’s vision. Arguably, the next one was Brad Cox’s interchangeable software components (software ICs) in Objective-C, a little more than a decade later (the usual time required for the adoption of a new technology, it seems). They most certainly did transfer skill; the NeXTSTEP libraries packaged a considerable amount of programming skill in a form that could easily be reused by a programmer with considerably less skill (like me). However, they only worked in Objective-C, and not everybody wanted to use Objective-C. More importantly, it still required some skill to use software ICs. NeXTSTEP was not yet at the point where semi-skilled or unskilled labor could use it to assemble a program without help.

Next, in the early 1990s, the OMG proposed the CORBA (Common Object Request Broker Architecture) standard, and Microsoft introduced its OLE (Object Linking and Embedding) and COM (Component Object Model) models.

CORBA promised to provide out-of-the-box, multi-vendor interoperability. It may have been the first credible promise of Mass Produced Software Components, given the number of influential participants in the OMG. Yet despite significant investment by various vendors, it failed to establish a lasting presence, for a variety of reasons mostly centered on its bloated, design-by-committee feature set. It was just too much of too much. As a result, I would wager there are many more mission-critical COBOL-based systems running today than there are CORBA-based ones. 

Amusingly, at the time of this writing, the link to “Who is using CORBA already?” on the official OMG “CORBA® BASICS” page[5] leads to absolutely nothing. (Bonus points for the old school insistence on using the registered trademark symbol. AT&T would be proud.)

As for Microsoft’s Visual Basic component architecture, its language-independent successor OLE/DCOM, and its spiritual successor .NET, they were resisted by large swaths of the market because they were Microsoft “standards” that, in practical terms, only worked on Windows. In a world that was rapidly adopting the web as its preferred platform for applications, solutions that did not work on Unix (and later Linux) servers were not going to succeed. In fact, just as with Objective-C, any solution that depended on just one operating system or one language would never succeed across the board.

And I feel that I can very safely predict: the IT industry will never agree to pick just one language or operating system for everyone and everything.

Concurrently with McIlroy, Daniel Teichroew, Professor of Industrial and Operations Engineering at the University of Michigan, had started what he called the Information System Design and Optimization System project in 1968. Although very interested in programming and automation, he was a statistician and management information theorist, not a programmer. His approach was markedly different from McIlroy’s.

A well-known and oft-used cliché in IT is: “you got what you asked for, not what you needed.” 

Teichroew’s starting point was “the problem,” not the code that solved the problem, and he intended to ensure that what one “asked for” was indeed what one “needed”. 

To remove ambiguity in stating problems (now usually called “requirements”), he developed the Problem Statement Language, the idea being that a formal way of describing requirements would allow for the reliable, automated production of code by a Problem Statement Analyzer, a kind of compiler if you will. A traditional compiler converts a 3GL like C or Pascal into machine language (or sometimes an intermediate language). The Problem Statement Analyzer would convert the Problem Statement Language into a conventional programming language such as C.

Parts of his work, specifically the focus on problem analysis as a means of ensuring successful development, would be picked up by the object-oriented analysis and design researchers. It lives on in the “use case,” which is a direct outgrowth of the Problem Statement Language. Other parts of his work became the foundation of CASE, which stands for Computer-Aided Software Engineering. The idea (which was a good one) was to apply to software the same principles that were successfully used in CAD/CAM (computer-aided design and computer-aided manufacturing). 

CASE tools became popular and were widely used for the development of mainframe and minicomputer applications in the 1980s and early 1990s. With the decline of the mainframe, the CASE banner was taken up by the Object Management Group as part of the standardization of UML (see Chapter Six) and was embedded into a number of tools such as Rational Rose (which later went through a number of name changes) and Silverrun, both of which still exist.

In 2001 the OMG took another run at promoting these concepts, this time under the banner of Model Driven Architecture (MDA). Under MDA, a platform-independent model (PIM) was created first and then machine-translated into a platform-specific model (PSM). This was undoubtedly the right idea, but it went nowhere.

To a carpenter with a hammer, the world is a nail.

You may wonder: if the previous solutions were such great ideas, why did they never really succeed? Why is the problem of IT project failures as present as it ever was?

Because to a carpenter with a hammer, the whole world looks like a nail. 

NeXTSTEP, COM, .NET, CORBA, CASE, and MDA all had a fatal flaw stemming from a bias shared by almost all programmers, computer scientists, and software engineers: to them, writing code is the alpha and the omega. The beginning and the end. It is all that there is.

All of the above attempts at software industrialization focused on the “creative” part of development: writing code that “solves a problem” as expressed in a requirement or use case.

I expect that you are now asking yourself: “How could that be a flaw?” Isn’t that what programmers do? The answer is no, that is not what they spend most of their time doing. It will probably shock you to learn that enterprise developers are unlikely to spend more than 10% of their time actually writing new code, creating new functions, being creative.

Only developers working for a FANG (Facebook, Amazon, Netflix, and Google, remember?), writing compilers, building advanced image or language processing, or doing research, spend a lot of their time writing algorithms.

Alexander Stepanov, the primary designer and implementer of the C++ Standard Template Library, relays to us[6] that Scott Byer, the architect of Adobe Photoshop, estimates that 90% of developer effort is dedicated to “glue” and housekeeping tasks such as memory management, scripting, UI management, file I/O, and color management. Only 10% is spent on “substance”. 

Stepanov presents the following estimates of how much time is spent on what he deems “substance” for a variety of application types: 

·       Word processing – 3%

·       Presentation app – 1%

·       Databases – 10%

·       Technical/CAD – 30%

·       Operating system – 1%

·       Enterprise application software – 1%

From personal experience, these numbers sound just about right to me.



[1] https://en.wikipedia.org/wiki/Code_reuse

[2] Dennis M. Ritchie. “The Evolution of the Unix Time-Sharing System.” AT&T Bell Laboratories Technical Journal, vol. 63, no. 6, part 2, 1984, pp. 1577–93. – https://www.bell-labs.com/usr/dmr/www/hist.html

[3] Doug McIlroy, E. N. Pinson, B. A. Tague (8 July 1978). “Unix Time-Sharing System: Foreword” (PDF). The Bell System Technical Journal. Bell Laboratories. pp. 1902–1903. McIlroy was head of the Bell Labs Computing Sciences Research Center while Unix was being written.

[4] An example of “agile” philosophy long before the term became popular. Other examples abound, most notably the admonitions of C.A.R. Hoare in his address The Emperor’s Old Clothes. (Hoare, Charles Antony Richard. “The Emperor’s Old Clothes.” Communications of the ACM, vol. 24, no. 2, 1981, pp. 75–83.)

[5] http://www.omg.org/gettingstarted/corbafaq.htm

I am not sure why, but the page is available in two languages, English and Belorussian.

[6] Alexander Stepanov: Industrializing Software Development. A keynote address at The First International Conference on Embedded Software and System, Zhejiang University, Hangzhou, P. R. China, December 9, 2004.