If Ideas Had Shapes

A quoteblog ranging from philosophers in bathrobes to galaxy-rises

Category: Technology

Baldwin and Clark – Design Rules, vol. 1 (2000)

Fundamentally, augmenting is a “wild card” operator: it is difficult to place a value on things yet to be invented, or to predict when and where those inventions will occur.

Baldwin and Clark – Design Rules, vol. 1 (2000)

If the downside of “bad draws” in the design effort can be controlled by rejecting bad outcomes, technical risk and complexity may be good things. The reason is that modules with more technical risk or complexity may have wider distributions of outcomes than other modules. If the downside risk can be controlled, that leaves only the “upside risk”–the possibility that the experiments will uncover very good designs (high peaks in the value landscape).
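
Baldwin and Clark go on to formalize this as an option value: run k independent design experiments on a module whose outcomes have dispersion sigma, keep the best one, and fall back to the existing design if every draw is bad. A quick Monte Carlo sketch of that logic (my code, not theirs; names and numbers invented for illustration):

    import random

    def expected_best(sigma, k, trials=100_000):
        # Estimate E[max(0, X_1, ..., X_k)] with X_i ~ Normal(0, sigma):
        # k parallel experiments on one module, where any bad draw can be
        # rejected in favor of the existing design (the zero).
        total = 0.0
        for _ in range(trials):
            best = max(random.gauss(0.0, sigma) for _ in range(k))
            total += max(0.0, best)
        return total / trials

    for sigma in (0.5, 1.0, 2.0):
        print(f"sigma={sigma}:", [round(expected_best(sigma, k), 2) for k in (1, 2, 4, 8)])

The estimated payoff rises with both k and sigma: once rejection caps the downside, wider distributions strictly help, which is exactly the “upside risk” the passage describes.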

Baldwin and Clark – Design Rules, vol. 1 (2000)

One insidious aspect of first-time modularizations is that design rule flaws are revealed only very late in the process. Once the rules are in place, work on the individual modules proceeds independently and may appear, for a time, to be going very well. It is only when the pieces are brought together for final integration and testing that unforeseen interdependencies are brought to light.

Baldwin and Clark – Design Rules, vol. 1 (2000)

All the necessary design rules would be established in the first phase, which was projected to last about ninety days. (In fact it took ten months.)

IBM designing System/360

Baldwin and Clark – Design Rules, vol. 1 (2000)

However, as we said in chapter 3, to achieve true modularity in a design and corresponding task structure, the mental decomposition of the artifact is not enough. Designers must also have experience with many actual designs in order to understand precisely the nature of the underlying parameter interdependencies. Only when that knowledge is in place is it feasible to think about converting mental components into actual modules.[32]

[32] Attempts to modularize without sufficient knowledge result in the discovery of interdependencies, which may render the system inoperable. The real design and its task structure will remain stubbornly interdependent and interconnected until the designers know all the points of interaction and can address them via sensible design rules.

Baldwin and Clark – Design Rules, vol. 1 (2000)

Despite its elegance and flexibility, the concept of microprogramming did not take the industry by storm. The greatest criticism leveled against the idea was that it was inefficient. Wilkes’s approach was more complicated and circuitous than the usual way of building machines. There was also a performance disadvantage implied by the double decoding of instructions. Finally, fast memory, which was critical to implementation of the concept, was expensive. Thus many designers believed that they could build faster machines at lower cost by hardwiring the “right” set of instructions in the first place.

These criticisms were fair but missed the essential point. When Wilkes asserted that microprogramming was the “best way” to design a computer, he did not mean it was the “highest speed” or “lowest cost” approach. What Wilkes sought above all was flexibility and ease of improvement. More than anyone else of his generation, Wilkes expected all the components of the computer to get better over time.
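
The “double decoding” being complained about is easier to see in a toy sketch: a control store in fast memory maps each macro-instruction to a sequence of micro-operations, and each micro-op is dispatched in turn before it touches the datapath. Everything below is invented for illustration (real microcode drives control lines in hardware, not Python dictionaries):

    # Micro-operations: the primitive steps the hardware actually performs.
    def u_load_a(st, ops): st["A"] = st["mem"][ops[0]]
    def u_load_b(st, ops): st["B"] = st["mem"][ops[1]]
    def u_add(st, ops):    st["A"] = st["A"] + st["B"]
    def u_store(st, ops):  st["mem"][ops[2]] = st["A"]

    MICRO_OPS = {"load_a": u_load_a, "load_b": u_load_b,
                 "add": u_add, "store": u_store}

    # Control store: fast memory mapping each macro-instruction to its
    # microprogram. A new instruction set is a new table, not new hardware.
    CONTROL_STORE = {
        "ADD": ["load_a", "load_b", "add", "store"],
    }

    def execute(state, instr):
        opcode, operands = instr                 # first decode: macro level
        for name in CONTROL_STORE[opcode]:       # fetch the microprogram
            MICRO_OPS[name](state, operands)     # second decode: micro level

    state = {"A": 0, "B": 0, "mem": {0: 2, 1: 3, 2: 0}}
    execute(state, ("ADD", (0, 1, 2)))
    print(state["mem"][2])  # 5

The critics’ performance complaint is visible even here (two table lookups per step), but so is the flexibility Wilkes prized: adding a new instruction means writing a new control-store row, not rewiring the machine.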

Lewis Thomas – Late Night Thoughts on Listening to Mahler’s Ninth Symphony (1983)

…computers will not take over the world, they cannot replace us, because they are not designed, as we are, for ambiguity.

Imagine the predicament faced by a computer programmed to make language, not the interesting communication in sounds made by vervets or in symbols by brilliant chimpanzee prodigies, but real human talk. The grammar would not be too difficult, and there would be no problem in constructing a vocabulary of etymons, the original, pure, unambiguous words used to name real things. …

“The Corner of the Eye”

(so quaint!)

A. M. Turing – “Computing Machinery and Intelligence” (1950)

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.

excerpt appears in The Mind’s I, ed. Daniel Dennett and Douglas R. Hofstadter, 1981

Douglas Hofstadter – Gödel, Escher, Bach (1979)

The first human to conceive of the immense computing potential of machinery was the Londoner Charles Babbage (1792-1871). A character who could almost have stepped out of the pages of the Pickwick Papers, Babbage was most famous during his lifetime for his vigorous campaign to rid London of “street nuisances”—organ grinders above all. These pests, loving to get his goat, would come and serenade him at any time of day or night, and he would furiously chase them down the street.

Douglas Adams – The Salmon of Doubt (2002)

Now that we’ve built computers, first we made them room-size, then desk-size and in briefcases and pockets, soon they’ll be as plentiful as dust—you can sprinkle computers all over the place. Gradually, the whole environment will become something far more responsive and smart, and we’ll be living in a way that’s very hard for people living on the planet just now to understand.