There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now … reflect upon the extraordinary advance which machines have made in the last few hundred years, and note how slowly the animal and the vegetable kingdoms are advancing.
Samuel Butler, Erewhon
When a designer supplies a machine with step-by-step instructions for solving a specific problem, the resulting solution is unquestionably attributed to the designer’s ingenuity and labors. As soon as the designer furnishes the machine with instructions for finding a method of solution, the authorship of the results becomes ambiguous. Whenever a mechanism is equipped with a processor capable of finding a method “of finding a method of solution,” the authorship of the answer probably belongs to the machine. If we extrapolate this argument, eventually the machine’s creativity will be as separable from the designer’s initiative as our designs and actions are from the pedagogy of our grandparents.
For a machine to learn, it must have the impetus to make self-improving changes, to associate courses with goals, to be able to sample for success and failure, and to be ethical. We do not have such machine capabilities; the problem is still theoretical, still of interest primarily to mathematicians and cyberneticians.
A 1943 theorem of McCulloch and Pitts states that a machine constructed with regenerative loops of a certain formal character is capable of deducing any legitimate conclusion from a finite set of premises. One approach to such a faculty is to increase the probability of meaningfulness of the output (the design) generated from random or disorderly input (the criteria). Ross Ashby (1956) states, “It has often been remarked that any random sequence, if long enough, will contain all answers; nothing prevents a child from doodling: cos²X + sin²X = 1.” In the same spirit, to paraphrase the British Museum/chimpanzee argument, a group of monkeys, while randomly doodling, can draw plans, sections, and elevations of all the great works of architecture and do this in a finite period of time. As the limiting case, we would have a tabula rasa, realized as a network of uncommitted design components or uncommitted primates. Unfortunately, in this process our protagonists will also have built Levittown, Lincoln Center, and the New York Port Authority Towers.
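To put numbers on “a finite period of time,” a toy calculation shows how empty the guarantee is (the sizes and rates below are invented purely for illustration): even a design reduced to twenty choices among ten alternatives each admits 10²⁰ variants.

```python
# Why unconstrained randomness is hopeless on any human timescale.
# All figures here are invented for illustration only.
choices, alternatives = 20, 10
designs = alternatives ** choices            # 10**20 possible "doodles"
rate = 10**6                                 # one million random tries per second
years = designs / rate / (3600 * 24 * 365)   # seconds -> years
print(f"{designs:.0e} candidate designs; about {years:.0e} years at {rate:,} tries/s")
```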
Surely some constraint and discrimination are necessary if components are to converge on solutions within “reasonable” time. Components must assume some original commitment. As examples of such commitment, five particular subassemblies should be part of an architecture machine: (1) a heuristic mechanism, (2) a rote apparatus, (3) a conditioning device, (4) a reward selector, and (5) a forgetting convenience.
A heuristic is a method based on rules of thumb or strategies that drastically limit the search for a solution. A heuristic method does not guarantee a solution, let alone an optimal one. The payoff is in time and in the reduction of the search for alternatives. Heuristic learning is particularly relevant to evolutionary machines because it lends itself to personalization and change by talking to one specific designer, overviewing many designers, or viewing the real world. In an architecture machine, this heuristic element would probably be void of specific commitment when the package arrives at an office. Through architect-sponsored maturation, a resident mechanism would acquire broad rules to handle exceptional information. The first time a problem is encountered, the machine would attempt to apply procedures relevant to similar problems or contexts. Heuristics gained from analogous situations would be the machine’s first source of contribution to the solution of a new problem.
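A minimal sketch of such a mechanism in Python (every name below is invented for illustration; nothing here comes from the text): heuristics are filed under the context that produced them, and a new problem borrows from the most similar context on record.

```python
class HeuristicStore:
    """Rules of thumb indexed by the problem context that produced them."""

    def __init__(self):
        self.rules = {}  # frozenset of context tags -> list of rule strings

    def learn(self, tags, rule):
        # File a rule of thumb under its originating context.
        self.rules.setdefault(frozenset(tags), []).append(rule)

    def suggest(self, tags):
        # Fall back on the most similar known context (largest tag overlap).
        tags = set(tags)
        best = max(self.rules, key=lambda known: len(known & tags), default=None)
        return self.rules.get(best, [])

store = HeuristicStore()
store.learn({"housing", "narrow site"}, "orient rooms along the long axis")
store.learn({"office", "deep plan"}, "pull services into a central core")

# A never-before-seen problem draws on the closest analogous situation.
print(store.suggest({"housing", "sloped site"}))  # borrows the housing rule
```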
After repeated encounters, a rote apparatus would take charge. Rote learning is the elementary storing of an event, or a basic part of an event, and associating it with a response. When a situation is repeatedly encountered, a rote mechanism can retain the circumstance for use when similar events are next encountered. In architecture, this repetition of subproblems is extremely frequent: parking, elevators, plumbing, and so forth. And again, a rote mechanism lends itself to evolutionary expansion. But, unlike a heuristic mechanism, this device would probably come with a small original repertoire of situations it can readily handle.
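In its simplest reading, a rote apparatus is a lookup table: solve a subproblem once, file the response, and replay it verbatim on the next encounter. A hypothetical sketch, seeded with the kind of small original repertoire the text describes (the entries are invented):

```python
class RoteApparatus:
    """Store recurring subproblems with their worked responses."""

    def __init__(self, repertoire=None):
        self.memory = dict(repertoire or {})  # situation -> remembered response

    def respond(self, situation, solve):
        # Replay a stored response, or solve once and remember the result.
        if situation not in self.memory:
            self.memory[situation] = solve(situation)
        return self.memory[situation]

rote = RoteApparatus({"parking, 40 cars": "one double-loaded aisle, 90-degree stalls"})

# The first encounter does the work; the second is pure recall.
print(rote.respond("elevators, 12 floors", lambda s: "two cars on a grouped core"))
print(rote.respond("elevators, 12 floors", lambda s: "(solver is never called again)"))
```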
Eventually, simple repetitious responses become habits, some good and some bad. More specifically acclimatized than a rote apparatus, a conditioning mechanism is an enforcement device that handles all the nonexceptional information. Habits, not thought, assist humans in surmounting daily obstacles. Similarly, in a machine, beyond rote learning, design habitudes can respond to standard events while the designer, the heuristic mechanism, and the rote apparatus engage in the problem-solving and problem-worrying (Anderson, 1966) aspects of design. Each robot would develop its own conditioned reflexes (Uttley, 1956). As with Pavlov’s dog, the presence of habitual events will trigger predefined responses with little effort until a prediction fails, whereupon the response is faded out by frustration (evolution) and the event is handled elsewhere in the system.
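One plausible rendering of such a reflex (the thresholds and names are invented): a stimulus fires its habitual response for as long as the habit keeps succeeding, and repeated frustration weakens the association until the event falls back to the slower machinery.

```python
class ConditionedReflex:
    """Habitual responses that fade when their predictions fail."""

    def __init__(self, threshold=0.3, fade=0.5):
        self.strength = {}          # stimulus -> association strength in [0, 1]
        self.response = {}          # stimulus -> habitual response
        self.threshold = threshold  # below this, the habit no longer fires
        self.fade = fade            # strength kept after each frustration

    def condition(self, stimulus, response):
        self.strength[stimulus] = 1.0
        self.response[stimulus] = response

    def react(self, stimulus):
        # Fire the habit, or signal that the event must be handled elsewhere.
        if self.strength.get(stimulus, 0.0) >= self.threshold:
            return self.response[stimulus]
        return None

    def frustrate(self, stimulus):
        # A failed prediction fades the reflex.
        self.strength[stimulus] = self.strength.get(stimulus, 0.0) * self.fade

reflex = ConditionedReflex()
reflex.condition("door on corridor", "swing it clear of the circulation path")
print(reflex.react("door on corridor"))  # the habit fires
reflex.frustrate("door on corridor")     # the prediction failed once,
reflex.frustrate("door on corridor")     # and again
print(reflex.react("door on corridor"))  # None: faded below threshold
```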
A reward selector initiates no activities. In Skinnerian fashion (B. F. Skinner, 1953), the reward mechanism selects, from among the machine’s actions, whichever the “teacher” likes. The teachers (the designer, the overviewing apparatus, the inhabitants) must exhibit happiness or disappointment for the reward mechanism to operate. Or, to furnish this mechanism with direction, simulation techniques must evolve that implicitly pretest any environment. The design of this device is crucial; bad architecture could escalate as easily as good design. A reward selector must not make a machine the minion or bootlicker of bad architecture. It must evaluate, or at least observe, goals as well as results.
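One hedged interpretation in code (the weighting scheme is invented, not prescribed by the text): the machine proposes, the teacher signals approval or disappointment, and approval shifts the odds of future proposals. A fuller selector would also score the goal served, not merely the immediate result.

```python
import random

class RewardSelector:
    """Initiates nothing; reweights the machine's actions by teacher approval."""

    def __init__(self, actions):
        self.weights = {action: 1.0 for action in actions}

    def propose(self):
        # Sample a proposal in proportion to accumulated approval.
        actions = list(self.weights)
        return random.choices(actions, weights=[self.weights[a] for a in actions])[0]

    def feedback(self, action, approved):
        # Happiness strengthens an action; disappointment weakens it.
        self.weights[action] *= 1.5 if approved else 0.6

selector = RewardSelector(["courtyard parti", "slab block", "point towers"])
for _ in range(30):
    choice = selector.propose()
    # A stand-in teacher who happens to favor courtyards:
    selector.feedback(choice, approved=(choice == "courtyard parti"))
print(max(selector.weights, key=selector.weights.get))  # very likely "courtyard parti"
```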
Finally, unlearning is as important as learning (Brodey, 1969c). The idea of “its [the computer’s] inability to forget anything that has been put into it …” (A. Miller, 1967) is simply fallacious. Information can assume less significance over time and eventually disappear: exponential forgetting. Obsolescence can occur through the passage of time or through a loss of pertinence. A technological innovation in the construction industry, for example, can make entire bodies of knowledge obsolete (which, as humans, we tend to hate surrendering). Or past procedures might no longer satisfy environmental conditions that have changed over time, thus invalidating a heuristic, a rote response, or a conditioned reflex.
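Exponential forgetting admits a direct sketch (the decay rate and cutoff below are invented): every stored item loses significance at each time step unless reinforced by continued pertinence, and anything that falls below a floor disappears outright.

```python
class ForgettingStore:
    """Significance decays exponentially; unreinforced items vanish."""

    def __init__(self, decay=0.8, floor=0.05):
        self.items = {}      # fact -> significance in (0, 1]
        self.decay = decay   # fraction of significance kept per time step
        self.floor = floor   # below this, the fact is dropped entirely

    def remember(self, fact):
        self.items[fact] = 1.0

    def reinforce(self, fact):
        # Continued pertinence restores full significance.
        if fact in self.items:
            self.items[fact] = 1.0

    def tick(self):
        # One unit of time: decay everything, discard the obsolete.
        self.items = {fact: sig * self.decay
                      for fact, sig in self.items.items()
                      if sig * self.decay >= self.floor}

store = ForgettingStore()
store.remember("load-bearing masonry spans")
store.remember("steel-frame spans")
for _ in range(15):
    store.tick()
    store.reinforce("steel-frame spans")  # still pertinent, still in use
print(list(store.items))                  # the unused rule has decayed away
```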
These five items are only pieces of an architecture machine; the entire body will be an ever-changing group of mechanisms that will undergo structural mutations, bear offspring (Fogel et al., 1965), and evolve, all under the direction of a cybernetic device.