What if we designed and created machines that were able to design and create machines smarter than themselves? What sort of world would this usher into existence? How close are we already to such a world? Well, it would seem the answer depends on what you mean by ‘smarter.’ More precisely, what is the scope of intellectual abilities referred to by the word “smarter”? The possibilities get interesting even if the scope of abilities we focus on is relatively limited. Let me explain:
Human beings are very sophisticated machines. We are able to design and create machines much ‘smarter’ than ourselves in the sense that they are quite good at various narrow-scope ‘intelligent’ tasks. For example, a scientific calculator can compute the value of pi to many more decimal places, with greater accuracy, and much faster than any human being. So too, obviously, can computers more generally. Computer simulations can design and test millions of virtual items, such as cars, aircraft, and radio antennae, and the end result of such trial and error can then be manufactured using real materials. This greatly shortens the trial-and-error testing that is crucial to design and engineering.
An interesting recent example of this is NASA’s use of something called ‘evolutionary computing,’ an extremely rapid way to design by these sorts of simulations. Over the course of about 10 hours, such AI can produce an antenna design very finely tuned to the frequencies and power levels of its associated transmission device. What is more, it can optimize for energy use and size. The end result looks more like a twisted paper clip or a demented miniature tree than anything else, but it is the product of a simulated process of trial and error, a process whose “real world” equivalent would have taken years, perhaps decades or centuries, to accomplish, and which would have trashed a great many paper clips to boot.
To design the ST5 space antenna, the computers started with random antenna designs, and through the evolutionary process, refined them. The computer system took about 10 hours to complete the initial antenna design process.
"The AI software examined millions of potential antenna designs before settling on a final one," said Lohn. The software did this much faster than any human being could do so under the same circumstances, according to Lohn. "Through a process patterned after Darwin's 'survival of the fittest,' the strongest designs survive and the less capable do not."
"We told the computer program what performance the antenna should have, and the computer simulated evolution, keeping the best antenna designs that approached what we asked for. Eventually, it zeroed in on something that met the desired specifications for the mission," Lohn said.
"Not only can the software work fast, but it can adapt existing designs quickly to meet changing mission requirements," he said. Following the first design of the ST5 satellite antenna, NASA Ames scientists used the software to 're-invent' the antenna design in less than a month to meet new specifications – a very quick turn-around in the space hardware redesign process.
Scientists also can use the evolutionary AI software to invent and create new structures, computer chips and even machines, according to Lohn. "We are now using the software to design tiny microscopic machines, including gyroscopes, for spaceflight navigation," he ventured.
In fact, given certain end-state functions or parameters to shoot for, such human-guided evolutionary computing simulations are likely coming up with designs no human would have been able to think of. (I mean, really, take a look at the pictures of the end results):
"The software also may invent designs that no human designer would ever think of," Lohn asserted. In addition, the software can plan devices that are smaller, lighter, consume less power, are stronger and more robust among many other things – characteristics that spaceflight requires, according to Lohn.
We already live in a computer-assisted world, and we are partners in a ‘symbiotic’ relationship with our machines, such that we are much ‘smarter’ with them than we are without them. What is more, at least in the limited areas of design engineering for which they were built, evolutionary computing programs seem to be “smarter” than we humans are, in the sense that they are much better at fulfilling the goals engineers set for them, and at doing the research, much as the pocket calculator is ‘smarter’ than the bookkeeper (faster and more accurate at calculating).
This is an obvious boon to us. Cure for the common cold coming soon? But is it also anything to worry about (at least beyond the obvious worry that we may become so dependent upon these aids that we will lose the ability to function as mathematical calculators or trial-and-error engineers ourselves if the dreaded apocalypse comes and all the technological aids go on the fritz)?
[Cue older folks grumbling about young folks’ over-reliance on scientific calculators: “In our day, we used slide rules, and we liked it!” Cue even older folks grumbling about slide-rule dependence and calculator dependence alike: “In our day, we used pencil and paper, and we liked it!”]
In each of these cases we have created machines that create objects or machines suited to particular human purposes. But, as far as I am aware, we have yet to create (1) machines that create intelligent machines, and we have yet to create (2) intelligent machines that create intelligent machines that are, or can become, smarter than they are. For example, we have yet to build calculators that can, in turn, design and create calculators superior to themselves, nor have we created evolutionary computers or programs that can create evolutionary (or other) programs that are, or can become, smarter than they are. That, we have not managed.
The second possibility is one that has become of interest to Homo sapiens philosophicogeektituditas, and has long been a premise of science fiction. Recently, this possible future event has acquired a strange name.
When and if we reach that point of technological prowess, we will have arrived at (…dramatic pause…sharp breath... and now appropriately chopped Shatneresque delivery)...”The Singularity” (dramatic swell of music).
Futurists and philosophers believe that this technological watershed, analogous to the singularities at the centers of black holes, regions where our expectations about physical behavior break down, will usher in an age whose contours we cannot predict with much confidence. Nothing quite like it has ever occurred in the history of our planet, so we have little to rely on, other than fiction, in guessing what our status will be in that world. Will machines created via these routes eventually become smarter than us in a more global or inclusive sense, rather than in the narrow-scope sense of “smart” discussed above? Will these machines hatch “Pinky and the Brain” schemes to take over the world?
If we ever get to the point where we think we can bring about the singularity, will we in fact design the first generation of intelligent machines hoping that they, or their progeny, take over the technological aspects of our society? Will we welcome our machine overlords, or perhaps harness these machines in the hope of eventually turning their engineering prowess on ourselves, improving our biological inheritance? Can we envisage nano-technological marvels not only computing a trial-and-error improvement of the human genome, but actually carrying it out? Will we instead choose to forgo this sort of technology as being too risky? Assuming its development and use are inevitable, how exactly will we wield such technology, and in what sort of political and social context would it be wise to use it? Will we use it under the rubric of a free-market system, or in some sort of centralized command-and-control climate? How heavily should we regulate it? How should we determine the content of regulation? Who should be in on the conversation? Who determines the regulations? In either case, who will decide which design and engineering ‘goals’ are to be pursued by the evolutionary programming? If we decide to fiddle with the human genome, which genes should be eliminated and which should be preserved or changed? Will the machines ever be smart enough to make these sorts of decisions in concert with us, as equal partners in a sort of co-evolution, or perhaps even by themselves? Dare we give them control of our fate?