4
Survival machines began as passive receptacles for the genes, providing little more than walls to protect them from the chemical warfare of their rivals and the ravages of accidental molecular bombardment. In the early days they ‘fed’ on organic molecules freely available in the soup. This easy life came to an end when the organic food in the soup, which had been slowly built up under the energetic influence of centuries of sunlight, was all used up. A major branch of survival machines, now called plants, started to use sunlight directly themselves to build up complex molecules from simple ones, re-enacting at much higher speed the synthetic processes of the original soup. Another branch, now known as animals, ‘discovered’ how to exploit the chemical labours of the plants, either by eating them, or by eating other animals. Both main branches of survival machines evolved more and more ingenious tricks to increase their efficiency in their various ways of life, and new ways of life were continually being opened up. Sub-branches and sub-sub-branches evolved, each one excelling in a particular specialized way of making a living: in the sea, on the ground, in the air, underground, up trees, inside other living bodies. This sub-branching has given rise to the immense diversity of animals and plants which so impresses us today.
Both animals and plants evolved into many-celled bodies, complete copies of all the genes being distributed to every cell. We do not know when, why, or how many times independently, this happened. Some people use the metaphor of a colony, describing a body as a colony of cells. I prefer to think of the body as a colony of genes, and of the cell as a convenient working unit for the chemical industries of the genes.
Colonies of genes they may be but, in their behaviour, bodies have undeniably acquired an individuality of their own. An animal moves as a coordinated whole, as a unit. Subjectively I feel like a unit, not a colony. This is to be expected. Selection has favoured genes that cooperate with others. In the fierce competition for scarce resources, in the relentless struggle to eat other survival machines, and to avoid being eaten, there must have been a premium on central coordination rather than anarchy within the communal body. Nowadays the intricate mutual co-evolution of genes has proceeded to such an extent that the communal nature of an individual survival machine is virtually unrecognizable. Indeed many biologists do not recognize it, and will disagree with me.
Fortunately for what journalists would call the ‘credibility’ of the rest of this book, the disagreement is largely academic. Just as it is not convenient to talk about quanta and fundamental particles when we discuss the workings of a car, so it is often tedious and unnecessary to keep dragging genes in when we discuss the behaviour of survival machines. In practice it is usually convenient, as an approximation, to regard the individual body as an agent ‘trying’ to increase the numbers of all its genes in future generations. I shall use the language of convenience. Unless otherwise stated, ‘altruistic behaviour’ and ‘selfish behaviour’ will mean behaviour directed by one animal body toward another.
This chapter is about behaviour — the trick of rapid movement which has been largely exploited by the animal branch of survival machines. Animals became active go-getting gene vehicles: gene machines. The characteristic of behaviour, as biologists use the term, is that it is fast. Plants move, but very slowly. When seen in highly speeded-up film, climbing plants look like active animals. But most plant movement is really irreversible growth. Animals, on the other hand, have evolved ways of moving hundreds of thousands of times faster. Moreover, the movements they make are reversible, and repeatable an indefinite number of times.
The gadget that animals evolved to achieve rapid movement was the muscle. Muscles are engines which, like the steam engine and the internal combustion engine, use energy stored in chemical fuel to generate mechanical movement. The difference is that the immediate mechanical force of a muscle is generated in the form of tension, rather than gas pressure as in the case of the steam and internal combustion engines. But muscles are like engines in that they often exert their force on cords, and levers with hinges. In us the levers are known as bones, the cords as tendons, and the hinges as joints. Quite a lot is known about the exact molecular ways in which muscles work, but I find more interesting the question of how muscle contractions are timed.
Have you ever watched an artificial machine of some complexity, a knitting or sewing machine, a loom, an automatic bottling factory, or a hay baler? Motive power comes from somewhere, an electric motor say, or a tractor. But much more baffling is the intricate timing of the operations. Valves open and shut in the right order, steel fingers deftly tie a knot round a hay bale, and then at just the right moment a knife shoots out and cuts the string. In many artificial machines timing is achieved by that brilliant invention the cam. This translates simple rotary motion into a complex rhythmic pattern of operations by means of an eccentric or specially shaped wheel. The principle of the musical box is similar. Other machines such as the steam organ and the pianola use paper rolls or cards with holes punched in a pattern. Recently there has been a trend towards replacing such simple mechanical timers with electronic ones. Digital computers are examples of large and versatile electronic devices which can be used for generating complex timed patterns of movements. The basic component of a modern electronic machine like a computer is the semiconductor, of which a familiar form is the transistor.
Survival machines seem to have bypassed the cam and the punched card altogether. The apparatus they use for timing their movements has more in common with an electronic computer, although it is strictly different in fundamental operation. The basic unit of biological computers, the nerve cell or neurone, is really nothing like a transistor in its internal workings. Certainly the code in which neurones communicate with each other seems to be a little bit like the pulse codes of digital computers, but the individual neurone is a much more sophisticated data-processing unit than the transistor. Instead of just three connections with other components, a single neurone may have tens of thousands. The neurone is slower than the transistor, but it has gone much further in the direction of miniaturization, a trend which has dominated the electronics industry over the past two decades. This is brought home by the fact that there are some ten thousand million neurones in the human brain: you could pack only a few hundred transistors into a skull.
Plants have no need of the neurone, because they get their living without moving around, but it is found in the great majority of animal groups. It may have been ‘discovered’ early in animal evolution, and inherited by all groups, or it may have been rediscovered several times independently.
Neurones are basically just cells, with a nucleus and chromosomes like other cells. But their cell walls are drawn out in long, thin, wire-like projections. Often a neurone has one particularly long ‘wire’ called the axon. Although the width of an axon is microscopic, its length may be many feet: there are single axons which run the whole length of a giraffe's neck. The axons are usually bundled together in thick multi-stranded cables called nerves. These lead from one part of the body to another carrying messages, rather like trunk telephone cables. Other neurones have short axons, and are confined to dense concentrations of nervous tissue called ganglia, or, when they are very large, brains. Brains may be regarded as analogous in function to computers.(1) They are analogous in that both types of machine generate complex patterns of output, after analysis of complex patterns of input, and after reference to stored information.
The main way in which brains actually contribute to the success of survival machines is by controlling and coordinating the contractions of muscles. To do this they need cables leading to the muscles, and these are called motor nerves. But this leads to efficient preservation of genes only if the timing of muscle contractions bears some relation to the timing of events in the outside world. It is important to contract the jaw muscles only when the jaws contain something worth biting, and to contract the leg muscles in running patterns only when there is something worth running towards or away from. For this reason, natural selection favoured animals that became equipped with sense organs, devices which translate patterns of physical events in the outside world into the pulse code of the neurones. The brain is connected to the sense organs — eyes, ears, taste-buds, etc. — by means of cables called sensory nerves. The workings of the sensory systems are particularly baffling, because they can achieve far more sophisticated feats of pattern-recognition than the best and most expensive man-made machines; if this were not so, all typists would be redundant, superseded by speech-recognizing machines, or machines for reading handwriting. Human typists will be needed for many decades yet.
There may have been a time when sense organs communicated more or less directly with muscles; indeed, sea anemones are not far from this state today, since for their way of life it is efficient. But to achieve more complex and indirect relationships between the timing of events in the outside world and the timing of muscular contractions, some kind of brain was needed as an intermediary. A notable advance was the evolutionary ‘invention’ of memory. By this device, the timing of muscle contractions could be influenced not only by events in the immediate past, but by events in the distant past as well. The memory, or store, is an essential part of a digital computer too. Computer memories are more reliable than human ones, but they are less capacious, and enormously less sophisticated in their techniques of information-retrieval.
One of the most striking properties of survival-machine behaviour is its apparent purposiveness. By this I do not just mean that it seems to be well calculated to help the animal's genes to survive, although of course it is. I am talking about a closer analogy to human purposeful behaviour. When we watch an animal ‘searching’ for food, or for a mate, or for a lost child, we can hardly help imputing to it some of the subjective feelings we ourselves experience when we search. These may include ‘desire’ for some object, a ‘mental picture’ of the desired object, an ‘aim’ or ‘end in view’. Each one of us knows, from the evidence of our own introspection, that, at least in one modern survival machine, this purposiveness has evolved the property we call ‘consciousness’. I am not philosopher enough to discuss what this means, but fortunately it does not matter for our present purposes because it is easy to talk about machines that behave as if motivated by a purpose, and to leave open the question whether they actually are conscious. These machines are basically very simple, and the principles of unconscious purposive behaviour are among the commonplaces of engineering science. The classic example is the Watt steam governor.
The fundamental principle involved is called negative feedback, of which there are various different forms. In general what happens is this. The ‘purpose machine’, the machine or thing that behaves as if it had a conscious purpose, is equipped with some kind of measuring device which measures the discrepancy between the current state of things, and the ‘desired’ state. It is built in such a way that the larger this discrepancy is, the harder the machine works. In this way the machine will automatically tend to reduce the discrepancy — this is why it is called negative feedback — and it may actually come to rest if the ‘desired’ state is reached. The Watt governor consists of a pair of balls which are whirled round by a steam engine. Each ball is on the end of a hinged arm. The faster the balls fly round, the more does centrifugal force push the arms towards a horizontal position, this tendency being resisted by gravity. The arms are connected to the steam valve feeding the engine, in such a way that the steam tends to be shut off when the arms approach the horizontal position. So, if the engine goes too fast, some of its steam will be shut off, and it will tend to slow down. If it slows down too much, more steam will automatically be fed to it by the valve, and it will speed up again. Such purpose machines often oscillate due to over-shooting and time-lags, and it is part of the engineer's art to build in supplementary devices to reduce the oscillations.
The ‘desired’ state of the Watt governor is a particular speed of rotation. Obviously it does not consciously desire it. The ‘goal’ of a machine is simply defined as that state to which it tends to return. Modern purpose machines use extensions of basic principles like negative feedback to achieve much more complex ‘lifelike’ behaviour. Guided missiles, for example, appear to search actively for their target, and when they have it in range they seem to pursue it, taking account of its evasive twists and turns, and sometimes even ‘predicting’ or ‘anticipating’ them. The details of how this is done are not worth going into. They involve negative feedback of various kinds, ‘feed-forward’, and other principles well understood by engineers and now known to be extensively involved in the working of living bodies. Nothing remotely approaching consciousness needs to be postulated, even though a layman, watching its apparently deliberate and purposeful behaviour, finds it hard to believe that the missile is not under the direct control of a human pilot.
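The feedback loop just described can be sketched in a few lines of Python. This is only an illustration of the principle, not a physical model of the governor: the speeds and the gain are invented numbers, and the single `gain` parameter stands in for the whole linkage of arms, valve, and gravity.

```python
# A minimal sketch of negative feedback: the larger the discrepancy
# between the current state and the 'desired' state, the harder the
# correction works against it. All numbers are illustrative.

def governor_step(speed, target, gain=0.5):
    """Return the next speed after one round of correction."""
    discrepancy = speed - target
    return speed - gain * discrepancy  # feedback opposes the discrepancy

speed = 100.0   # engine starts too fast
target = 60.0   # the 'desired' speed of rotation
for _ in range(20):
    speed = governor_step(speed, target)
print(round(speed, 3))  # settles at the 'desired' state: 60.0
```

Over-shooting and oscillation, mentioned above, appear in such a loop if the gain is made too large (try `gain=1.9`): each correction then overshoots the target and the speed swings back and forth around it.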
It is a common misconception that because a machine such as a guided missile was originally designed and built by conscious man, then it must be truly under the immediate control of conscious man. Another variant of this fallacy is ‘computers do not really play chess, because they can only do what a human operator tells them’. It is important that we understand why this is fallacious, because it affects our understanding of the sense in which genes can be said to ‘control’ behaviour. Computer chess is quite a good example for making the point, so I will discuss it briefly.
Computers do not yet play chess as well as human grand masters, but they have reached the standard of a good amateur. More strictly, one should say programs have reached the standard of a good amateur, for a chess-playing program is not fussy which physical computer it uses to act out its skills. Now, what is the role of the human programmer? First, he is definitely not manipulating the computer from moment to moment, like a puppeteer pulling strings. That would be just cheating. He writes the program, puts it in the computer, and then the computer is on its own: there is no further human intervention, except for the opponent typing in his moves. Does the programmer perhaps anticipate all possible chess positions, and provide the computer with a long list of good moves, one for each possible contingency? Most certainly not, because the number of possible positions in chess is so great that the world would come to an end before the list had been completed. For the same reason, the computer cannot possibly be programmed to try out ‘in its head’ all possible moves, and all possible follow-ups, until it finds a winning strategy. There are more possible games of chess than there are atoms in the galaxy. So much for the trivial non-solutions to the problem of programming a computer to play chess. It is in fact an exceedingly difficult problem, and it is hardly surprising that the best programs have still not achieved grand master status.
The programmer's actual role is rather more like that of a father teaching his son to play chess. He tells the computer the basic moves of the game, not separately for every possible starting position, but in terms of more economically expressed rules. He does not literally say in plain English ‘bishops move in a diagonal’, but he does say something mathematically equivalent, such as, though more briefly: ‘New coordinates of bishop are obtained from old coordinates, by adding the same constant, though not necessarily with the same sign, to both old x coordinate and old y coordinate.’ Then he might program in some ‘advice’, written in the same sort of mathematical or logical language, but amounting in human terms to hints such as ‘don't leave your king unguarded’, or useful tricks such as ‘forking’ with the knight. The details are intriguing, but they would take us too far afield. The important point is this. When it is actually playing, the computer is on its own, and can expect no help from its master. All the programmer can do is to set the computer up beforehand in the best way possible, with a proper balance between lists of specific knowledge, and hints about strategies and techniques.
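The bishop rule quoted above — new coordinates from old, by adding the same constant to x and to y, though not necessarily with the same sign — can be written out directly. The code below is a sketch of that one rule in isolation (an empty board, no other pieces), not of a chess program:

```python
# The bishop rule as stated in the text: add the same constant k to
# both coordinates, with each sign chosen independently.

def bishop_moves(x, y, board_size=8):
    """All squares a bishop on (x, y) could reach on an empty board."""
    moves = []
    for k in range(1, board_size):    # the shared constant
        for sx in (1, -1):            # sign applied to x
            for sy in (1, -1):        # sign applied to y
                nx, ny = x + sx * k, y + sy * k
                if 0 <= nx < board_size and 0 <= ny < board_size:
                    moves.append((nx, ny))
    return moves

print(len(bishop_moves(3, 3)))  # a bishop near the centre reaches 13 squares
```

Notice how economical the rule is: four lines of logic stand in for what would otherwise be a separate list of destination squares for each of the 64 starting positions.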
The genes too control the behaviour of their survival machines, not directly with their fingers on puppet strings, but indirectly like the computer programmer. All they can do is to set it up beforehand; then the survival machine is on its own, and the genes can only sit passively inside. Why are they so passive? Why don't they grab the reins and take charge from moment to moment? The answer is that they cannot because of time-lag problems. This is best shown by another analogy, taken from science fiction. A for Andromeda by Fred Hoyle and John Elliot is an exciting story, and, like all good science fiction, it has some interesting scientific points lying behind it. Strangely, the book seems to lack explicit mention of the most important of these underlying points. It is left to the reader's imagination. I hope the authors will not mind if I spell it out here.
There is a civilization 200 light-years away, in the constellation of Andromeda.(2) They want to spread their culture to distant worlds. How best to do it? Direct travel is out of the question. The speed of light imposes a theoretical upper limit to the rate at which you can get from one place to another in the universe, and mechanical considerations impose a much lower limit in practice. Besides, there may not be all that many worlds worth going to, and how do you know which direction to go in? Radio is a better way of communicating with the rest of the universe, since, if you have enough power to broadcast your signals in all directions rather than beam them in one direction, you can reach a very large number of worlds (the number increasing as the square of the distance the signal travels). Radio waves travel at the speed of light, which means the signal takes 200 years to reach earth from Andromeda. The trouble with this sort of distance is that you can never hold a conversation. Even if you discount the fact that each successive message from earth would be transmitted by people separated from each other by twelve generations, it would be just plain wasteful to attempt to converse over such distances.
This problem will soon arise in earnest for us: it takes about four minutes for radio waves to travel between earth and Mars. There can be no doubt that spacemen will have to get out of the habit of conversing in short alternating sentences, and will have to use long soliloquies or monologues, more like letters than conversations. As another example, Roger Payne has pointed out that the acoustics of the sea have certain peculiar properties, which mean that the exceedingly loud ‘song’ of some whales could theoretically be heard all the way round the world, provided the whales swim at a certain depth. It is not known whether they actually do communicate with each other over very great distances, but if they do they must be in much the same predicament as an astronaut on Mars. The speed of sound in water is such that it would take nearly two hours for the song to travel across the Atlantic Ocean and for a reply to return. I suggest this as an explanation for the fact that some whales deliver a continuous soliloquy, without repeating themselves, for a full eight minutes. They then go back to the beginning of the song and repeat it all over again, many times over, each complete cycle lasting about eight minutes.
The Andromedans of the story did the same thing. Since there was no point in waiting for a reply, they assembled everything they wanted to say into one huge unbroken message, and then they broadcast it out into space, over and over again, with a cycle time of several months. Their message was very different from that of the whales, however. It consisted of coded instructions for the building and programming of a giant computer. Of course the instructions were in no human language, but almost any code can be broken by a skilled cryptographer, especially if the designers of the code intended it to be easily broken. Picked up by the Jodrell Bank radio telescope, the message was eventually decoded, the computer built, and the program run. The results were nearly disastrous for mankind, for the intentions of the Andromedans were not universally altruistic, and the computer was well on the way to dictatorship over the world before the hero eventually finished it off with an axe.
From our point of view, the interesting question is in what sense the Andromedans could be said to be manipulating events on Earth. They had no direct control over what the computer did from moment to moment; indeed they had no possible way of even knowing the computer had been built, since the information would have taken 200 years to get back to them. The decisions and actions of the computer were entirely its own. It could not even refer back to its masters for general policy instructions. All its instructions had to be built-in in advance, because of the inviolable 200 year barrier. In principle, it must have been programmed very much like a chess-playing computer, but with greater flexibility and capacity for absorbing local information. This was because the program had to be designed to work not just on earth, but on any world possessing an advanced technology, any of a set of worlds whose detailed conditions the Andromedans had no way of knowing.
Just as the Andromedans had to have a computer on earth to take day-to-day decisions for them, our genes have to build a brain. But the genes are not only the Andromedans who sent the coded instructions; they are also the instructions themselves. The reason why they cannot manipulate our puppet strings directly is the same: time-lags. Genes work by controlling protein synthesis. This is a powerful way of manipulating the world, but it is slow. It takes months of patiently pulling protein strings to build an embryo. The whole point about behaviour, on the other hand, is that it is fast. It works on a time-scale not of months but of seconds and fractions of seconds. Something happens in the world, an owl flashes overhead, a rustle in the long grass betrays prey, and in milliseconds nervous systems crackle into action, muscles leap, and someone's life is saved — or lost. Genes don't have reaction-times like that. Like the Andromedans, the genes can only do their best in advance by building a fast executive computer for themselves, and programming it in advance with rules and ‘advice’ to cope with as many eventualities as they can ‘anticipate’. But life, like the game of chess, offers too many different possible eventualities for all of them to be anticipated. Like the chess programmer, the genes have to ‘instruct’ their survival machines not in specifics, but in the general strategies and tricks of the living trade.(3)
As J. Z. Young has pointed out, the genes have to perform a task analogous to prediction. When an embryo survival machine is being built, the dangers and problems of its life lie in the future. Who can say what carnivores crouch waiting for it behind what bushes, or what fleet-footed prey will dart and zig-zag across its path? No human prophet, nor any gene. But some general predictions can be made. Polar bear genes can safely predict that the future of their unborn survival machine is going to be a cold one. They do not think of it as a prophecy, they do not think at all: they just build in a thick coat of hair, because that is what they have always done before in previous bodies, and that is why they still exist in the gene pool. They also predict that the ground is going to be snowy, and their prediction takes the form of making the coat of hair white and therefore camouflaged. If the climate of the Arctic changed so rapidly that the baby bear found itself born into a tropical desert, the predictions of the genes would be wrong, and they would pay the penalty. The young bear would die, and they inside it.
Prediction in a complex world is a chancy business. Every decision that a survival machine takes is a gamble, and it is the business of genes to program brains in advance so that on average they take decisions that pay off. The currency used in the casino of evolution is survival, strictly gene survival, but for many purposes individual survival is a reasonable approximation. If you go down to the water-hole to drink, you increase your risk of being eaten by predators who make their living lurking for prey by water-holes. If you do not go down to the water-hole you will eventually die of thirst. There are risks whichever way you turn, and you must take the decision that maximizes the long-term survival chances of your genes. Perhaps the best policy is to postpone drinking until you are very thirsty, then go and have one good long drink to last you a long time. That way you reduce the number of separate visits to the water-hole, but you have to spend a long time with your head down when you finally do drink. Alternatively the best gamble might be to drink little and often, snatching quick gulps of water while running past the water-hole. Which is the best gambling strategy depends on all sorts of complex things, not least the hunting habit of the predators, which itself is evolved to be maximally efficient from their point of view. Some form of weighing up of the odds has to be done. But of course we do not have to think of the animals as making the calculations consciously. All we have to believe is that those individuals whose genes build brains in such a way that they tend to gamble correctly are as a direct result more likely to survive, and therefore to propagate those same genes.
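The weighing up of the odds at the water-hole can be made concrete with a toy calculation. The probabilities below are invented for illustration; the point is only that the two gambles trade off the number of visits against the risk per visit, and that one of them can come out ahead:

```python
# A toy comparison of the two water-hole gambles. The per-visit risks
# and visit counts are made-up assumptions, not measurements.

def survival_chance(p_killed_per_visit, visits):
    """Chance of surviving every one of the visits."""
    return (1 - p_killed_per_visit) ** visits

# Strategy A: rare, long drinks — few visits, but each one risky
# (a long time with your head down).
rare_long = survival_chance(p_killed_per_visit=0.05, visits=10)

# Strategy B: little and often — many visits, each a quick, safer gulp.
little_often = survival_chance(p_killed_per_visit=0.01, visits=40)

print(rare_long < little_often)  # under these numbers, B pays off: True
```

Change the assumed risks — say, predators that only ambush fast-moving targets — and the ordering can reverse; which gamble genes should build into a brain depends on exactly such details.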
We can carry the metaphor of gambling a little further. A gambler must think of three main quantities, stake, odds, and prize. If the prize is very large, a gambler is prepared to risk a big stake. A gambler who risks his all on a single throw stands to gain a great deal. He also stands to lose a great deal, but on average high-stake gamblers are no better and no worse off than other players who play for low winnings with low stakes. An analogous comparison is that between speculative and safe investors on the stock market. In some ways the stock market is a better analogy than a casino, because casinos are deliberately rigged in the bank's favour (which means, strictly, that high-stake players will on average end up poorer than low-stake players; and low-stake players poorer than those who do not gamble at all. But this is for a reason not germane to our discussion). Ignoring this, both high-stake play and low-stake play seem reasonable. Are there animal gamblers who play for high stakes, and others with a more conservative game? In Chapter 9 we shall see that it is often possible to picture males as high-stake high-risk gamblers, and females as safe investors, especially in polygamous species in which males compete for females. Naturalists who read this book may be able to think of species that can be described as high-stake high-risk players, and other species that play a more conservative game. I now return to the more general theme of how genes make ‘predictions’ about the future.
One way for genes to solve the problem of making predictions in rather unpredictable environments is to build in a capacity for learning. Here the program may take the form of the following instructions to the survival machine: ‘Here is a list of things defined as rewarding: sweet taste in the mouth, orgasm, mild temperature, smiling child. And here is a list of nasty things: various sorts of pain, nausea, empty stomach, screaming child. If you should happen to do something that is followed by one of the nasty things, don't do it again, but on the other hand repeat anything that is followed by one of the nice things.’ The advantage of this sort of programming is that it greatly cuts down the number of detailed rules that have to be built into the original program; and it is also capable of coping with changes in the environment that could not have been predicted in detail. On the other hand, certain predictions have to be made still. In our example the genes are predicting that sweet taste in the mouth, and orgasm, are going to be ‘good’ in the sense that eating sugar and copulating are likely to be beneficial to gene survival. The possibilities of saccharine and masturbation are not anticipated according to this example; nor are the dangers of over-eating sugar in our environment where it exists in unnatural plenty.
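The ‘program’ just quoted can be sketched almost literally. The two lists are fixed in advance (they are the genes' predictions); only the tendencies attached to particular acts change with experience. The acts and outcomes below are invented examples:

```python
# A sketch of the learning rule described in the text: fixed lists of
# rewarding and nasty things, plus a rule that strengthens acts followed
# by nice things and weakens acts followed by nasty ones.

REWARDING = {"sweet taste", "mild temperature", "smiling child"}
NASTY = {"pain", "nausea", "empty stomach", "screaming child"}

def update_tendency(tendencies, act, outcome):
    """Adjust the tendency to repeat an act, given what followed it."""
    if outcome in REWARDING:
        tendencies[act] = tendencies.get(act, 0) + 1
    elif outcome in NASTY:
        tendencies[act] = tendencies.get(act, 0) - 1
    return tendencies

t = {}
update_tendency(t, "eat ripe fruit", "sweet taste")
update_tendency(t, "eat green berries", "nausea")
print(t)  # {'eat ripe fruit': 1, 'eat green berries': -1}
```

Note that the rule knows nothing about fruit or berries in advance: that is exactly the economy the text describes. The predictions are confined to the two lists, and the saccharine problem shows up here too, since any outcome that merely registers as ‘sweet taste’ is rewarded regardless of whether it really benefits gene survival.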
Learning-strategies have been used in some chess-playing computer programs. These programs actually get better as they play against human opponents or against other computers. Although they are equipped with a repertoire of rules and tactics, they also have a small random tendency built into their decision procedure. They record past decisions, and whenever they win a game they slightly increase the weighting given to the tactics that preceded the victory, so that next time they are a little bit more likely to choose those same tactics again.
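A minimal version of that weighting scheme might look like this. It is a sketch of the idea only — the tactic names are invented, and a real chess program's ‘tactics’ would of course be far more elaborate:

```python
# A sketch of learning by reweighting: tactics are chosen with a small
# random tendency, and tactics that preceded a victory get their
# weights nudged up, making them likelier to be chosen next time.

import random

def choose_tactic(weights):
    """Pick a tactic at random, biased by the current weights."""
    tactics = list(weights)
    return random.choices(tactics, [weights[t] for t in tactics])[0]

def reinforce(weights, tactics_used, bonus=0.1):
    """After a win, slightly increase the weighting of the tactics used."""
    for t in tactics_used:
        weights[t] += bonus
    return weights

weights = {"fork": 1.0, "pin": 1.0, "sacrifice": 1.0}
reinforce(weights, ["fork", "pin"])   # these tactics preceded a victory
print(weights["fork"] > weights["sacrifice"])  # True: 'fork' is now favoured
```

The random element matters: without it the program would settle permanently on whatever happened to win first, and could never discover that a currently low-weighted tactic is in fact better.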
One of the most interesting methods of predicting the future is simulation. If a general wishes to know whether a particular military plan will be better than alternatives, he has a problem in prediction. There are unknown quantities in the weather, in the morale of his own troops, and in the possible countermeasures of the enemy. One way of discovering whether it is a good plan is to try and see, but it is undesirable to use this test for all the tentative plans dreamed up, if only because the supply of young men prepared to die ‘for their country’ is exhaustible, and the supply of possible plans is very large. It is better to try the various plans out in dummy runs rather than in deadly earnest. This may take the form of full-scale exercises with ‘Northland’ fighting ‘Southland’ using blank ammunition, but even this is expensive in time and materials. Less wastefully, war games may be played, with tin soldiers and little toy tanks being shuffled around a large map.
Recently, computers have taken over large parts of the simulation function, not only in military strategy, but in all fields where prediction of the future is necessary, fields like economics, ecology, sociology, and many others. The technique works like this. A model of some aspect of the world is set up in the computer. This does not mean that if you unscrewed the lid you would see a little miniature dummy inside with the same shape as the object simulated. In the chess-playing computer there is no ‘mental picture’ inside the memory banks recognizable as a chess board with knights and pawns sitting on it. The chess board and its current position would be represented by lists of electronically coded numbers. To us a map is a miniature scale model of a part of the world, compressed into two dimensions. In a computer, a map might alternatively be represented as a list of towns and other spots, each with two numbers — its latitude and longitude. But it does not matter how the computer actually holds its model of the world in its head, provided that it holds it in a form in which it can operate on it, manipulate it, do experiments with it, and report back to the human operators in terms which they can understand. Through the technique of simulation, model battles can be won or lost, simulated airliners fly or crash, economic policies lead to prosperity or to ruin. In each case the whole process goes on inside the computer in a tiny fraction of the time it would take in real life. Of course there are good models of the world and bad ones, and even the good ones are only approximations. No amount of simulation can predict exactly what will happen in reality, but a good simulation is enormously preferable to blind trial and error. Simulation could be called vicarious trial and error, a term unfortunately preempted long ago by rat psychologists.
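Vicarious trial and error can itself be sketched in a few lines. The ‘model of the world’ below is deliberately trivial — each plan is reduced to a single assumed success probability — but the shape of the technique is there: run each plan many times inside the model, then commit in reality only to the one that did best:

```python
# Simulation as vicarious trial and error: try each plan thousands of
# times in a model before trying any of them in earnest. The plans and
# their success probabilities are invented for illustration.

import random

random.seed(1)  # make the dummy runs reproducible

def simulate(plan_success_prob, trials=10_000):
    """Run a plan many times in the model; report its success rate."""
    wins = sum(random.random() < plan_success_prob for _ in range(trials))
    return wins / trials

plans = {"attack at dawn": 0.55, "feint then flank": 0.70}
estimates = {name: simulate(p) for name, p in plans.items()}
best = max(estimates, key=estimates.get)
print(best)  # the model favours the flanking plan
```

The ten thousand dummy runs cost a fraction of a second; in each case the whole process goes on inside the computer, which is the entire point of the technique. And, as the text warns, the verdict is only as good as the model: if the assumed probabilities are wrong, the simulation will confidently recommend the wrong plan.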
If simulation is such a good idea, we might expect that survival machines would have discovered it first. After all, they invented many of the other techniques of human engineering long before we came on the scene: the focusing lens and the parabolic reflector, frequency analysis of sound waves, servo-control, sonar, buffer {59} storage of incoming information, and countless others with long names, whose details don't matter. What about simulation? Well, when you yourself have a difficult decision to make involving unknown quantities in the future, you do go in for a form of simulation. You imagine what would happen if you did each of the alternatives open to you. You set up a model in your head, not of everything in the world, but of the restricted set of entities which you think may be relevant. You may see them vividly in your mind's eye, or you may see and manipulate stylized abstractions of them. In either case it is unlikely that somewhere laid out in your brain is an actual spatial model of the events you are imagining. But, just as in the computer, the details of how your brain represents its model of the world are less important than the fact that it is able to use it to predict possible events. Survival machines that can simulate the future are one jump ahead of survival machines who can only learn on the basis of overt trial and error. The trouble with overt trial is that it takes time and energy. The trouble with overt error is that it is often fatal. Simulation is both safer and faster.
The evolution of the capacity to simulate seems to have culminated in subjective consciousness. Why this should have happened is, to me, the most profound mystery facing modern biology. There is no reason to suppose that electronic computers are conscious when they simulate, although we have to admit that in the future they may become so. Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself.(4) Obviously the limbs and body of a survival machine must constitute an important part of its simulated world; presumably for the same kind of reason, the simulation itself could be regarded as part of the world to be simulated. Another word for this might indeed be ‘self-awareness’, but I don't find this a fully satisfying explanation of the evolution of consciousness, and this is only partly because it involves an infinite regress — if there is a model of the model, why not a model of the model of the model. . .?
Whatever the philosophical problems raised by consciousness, for the purpose of this story it can be thought of as the culmination of an evolutionary trend towards the emancipation of survival machines as executive decision-takers from their ultimate masters, the genes. Not only are brains in charge of the day-to-day running of survival-machine affairs, they have also acquired the ability to predict the future and act accordingly. They even have the power to rebel {60} against the dictates of the genes, for instance in refusing to have as many children as they are able to. But in this respect man is a very special case, as we shall see.
What has all this to do with altruism and selfishness? I am trying to build up the idea that animal behaviour, altruistic or selfish, is under the control of genes in only an indirect, but still very powerful, sense. By dictating the way survival machines and their nervous systems are built, genes exert ultimate power over behaviour. But the moment-to-moment decisions about what to do next are taken by the nervous system. Genes are the primary policy-makers; brains are the executives. But as brains became more highly developed, they took over more and more of the actual policy decisions, using tricks like learning and simulation in doing so. The logical conclusion to this trend, not yet reached in any species, would be for the genes to give the survival machine a single overall policy instruction: do whatever you think best to keep us alive.
Analogies with computers and with human decision-taking are all very well. But now we must come down to earth and remember that evolution in fact occurs step-by-step, through the differential survival of genes in the gene pool. Therefore, in order for a behaviour pattern — altruistic or selfish — to evolve, it is necessary that a gene ‘for’ that behaviour should survive in the gene pool more successfully than a rival gene or allele ‘for’ some different behaviour. A gene for altruistic behaviour means any gene that influences the development of nervous systems in such a way as to make them likely to behave altruistically.(5) Is there any experimental evidence for the genetic inheritance of altruistic behaviour? No, but that is hardly surprising, since little work has been done on the genetics of any behaviour. Instead, let me tell you about one study of a behaviour pattern which does not happen to be obviously altruistic, but which is complex enough to be interesting. It serves as a model for how altruistic behaviour might be inherited.
Honey bees suffer from an infectious disease called foul brood. This attacks the grubs in their cells. Of the domestic breeds used by beekeepers, some are more at risk from foul brood than others, and it turns out that the difference between strains is, at least in some cases, a behavioural one. There are so-called hygienic strains which quickly stamp out epidemics by locating infected grubs, pulling them from their cells and throwing them out of the hive. The susceptible strains are susceptible because they do not practise this hygienic {61} infanticide. The behaviour actually involved in hygiene is quite complicated. The workers have to locate the cell of each diseased grub, remove the wax cap from the cell, pull out the larva, drag it through the door of the hive, and throw it on the rubbish tip.
Doing genetic experiments with bees is quite a complicated business for various reasons. Worker bees themselves do not ordinarily reproduce, and so you have to cross a queen of one strain with a drone (=male) of the other, and then look at the behaviour of the daughter workers. This is what W. G. Rothenbuhler did. He found that all first-generation hybrid daughter hives were non-hygienic: the behaviour of their hygienic parent seemed to have been lost, although as things turned out the hygienic genes were still there but were recessive, like human genes for blue eyes. When Rothenbuhler ‘back-crossed’ first-generation hybrids with a pure hygienic strain (again of course using queens and drones), he obtained a most beautiful result. The daughter hives fell into three groups. One group showed perfect hygienic behaviour, a second showed no hygienic behaviour at all, and the third went half way. This last group uncapped the wax cells of diseased grubs, but they did not follow through and throw out the larvae. Rothenbuhler surmised that there might be two separate genes, one gene for uncapping, and one gene for throwing-out. Normal hygienic strains possess both genes, susceptible strains possess the alleles — rivals — of both genes instead. The hybrids who only went half way presumably possessed the uncapping gene (in double dose) but not the throwing-out gene. Rothenbuhler guessed that his experimental group of apparently totally non-hygienic bees might conceal a sub-group possessing the throwing-out gene, but unable to show it because they lacked the uncapping gene. He confirmed this most elegantly by removing caps himself. Sure enough, half of the apparently non-hygienic bees thereupon showed perfectly normal throwing-out behaviour.(6)
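Rothenbuhler's back-cross can be mimicked as a toy Mendelian simulation. The gene symbols (U/u for uncapping, R/r for throwing-out) are illustrative labels, not standard nomenclature; the hygienic alleles u and r are recessive, as the text describes.

```python
import random

def backcross_offspring():
    """One daughter from an F1 hybrid (Uu Rr) crossed to a pure hygienic
    queen/drone line (uu rr). The F1 gamete carries U or u, and R or r,
    at random; the hygienic parent always contributes u and r."""
    uncap = random.choice("Uu") + "u"    # 'uu' = uncapping allele in double dose
    throw = random.choice("Rr") + "r"    # 'rr' = throwing-out allele in double dose
    return uncap == "uu", throw == "rr"  # (uncaps?, throws out?)

random.seed(1)
counts = {"hygienic": 0, "uncap only": 0, "concealed throw-out": 0, "neither": 0}
for _ in range(10000):
    uncaps, throws = backcross_offspring()
    if uncaps and throws:
        counts["hygienic"] += 1
    elif uncaps:
        counts["uncap only"] += 1
    elif throws:
        counts["concealed throw-out"] += 1  # shows only if caps are removed by hand
    else:
        counts["neither"] += 1
```

Each class comes out near 25 per cent. The observable phenotypes are the three groups Rothenbuhler saw (the last two classes both look non-hygienic), and the "apparently non-hygienic" hives split roughly half and half when caps are removed by hand, exactly as his elegant test revealed.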
This story illustrates a number of important points which came up in the previous chapter. It shows that it can be perfectly proper to speak of a ‘gene for behaviour so-and-so’ even if we haven't the faintest idea of the chemical chain of embryonic causes leading from gene to behaviour. The chain of causes could even turn out to involve learning. For example, it could be that the uncapping gene exerts its effect by giving bees a taste for infected wax. This means they will find the eating of the wax caps covering disease-victims rewarding, {62} and will therefore tend to repeat it. Even if this is how the gene works, it is still truly a gene ‘for uncapping’ provided that, other things being equal, bees possessing the gene end up by uncapping, and bees not possessing the gene do not uncap.
Secondly it illustrates the fact that genes ‘cooperate’ in their effects on the behaviour of the communal survival machine. The throwing-out gene is useless unless it is accompanied by the uncapping gene and vice versa. Yet the genetic experiments show equally clearly that the two genes are in principle quite separable in their journey through the generations. As far as their useful work is concerned you can think of them as a single cooperating unit, but as replicating genes they are two free and independent agents.
For purposes of argument it will be necessary to speculate about genes ‘for’ doing all sorts of improbable things. If I speak, for example, of a hypothetical gene ‘for saving companions from drowning’, and you find such a concept incredible, remember the story of the hygienic bees. Recall that we are not talking about the gene as the sole antecedent cause of all the complex muscular contractions, sensory integrations, and even conscious decisions, that are involved in saving somebody from drowning. We are saying nothing about the question of whether learning, experience, or environmental influences enter into the development of the behaviour. All you have to concede is that it is possible for a single gene, other things being equal and lots of other essential genes and environmental factors being present, to make a body more likely to save somebody from drowning than its allele would. The difference between the two genes may turn out at bottom to be a slight difference in some simple quantitative variable. The details of the embryonic developmental process, interesting as they may be, are irrelevant to evolutionary considerations. Konrad Lorenz has put this point well.
The genes are master programmers, and they are programming for their lives. They are judged according to the success of their programs in coping with all the hazards that life throws at their survival machines, and the judge is the ruthless judge of the court of survival. We shall come later to ways in which gene survival can be fostered by what appears to be altruistic behaviour. But the obvious first priorities of a survival machine, and of the brain that takes the decisions for it, are individual survival and reproduction. All the genes in the ‘colony’ would agree about these priorities. Animals therefore go to elaborate lengths to find and catch food; to avoid {63} being caught and eaten themselves; to avoid disease and accident; to protect themselves from unfavourable climatic conditions; to find members of the opposite sex and persuade them to mate; and to confer on their children advantages similar to those they enjoy themselves. I shall not give examples — if you want one just look carefully at the next wild animal that you see. But I do want to mention one particular kind of behaviour because we shall need to refer to it again when we come to speak of altruism and selfishness. This is the behaviour that can be broadly labelled communication.(7)
A survival machine may be said to have communicated with another one when it influences its behaviour or the state of its nervous system. This is not a definition I should like to have to defend for very long, but it is good enough for present purposes. By influence I mean direct causal influence. Examples of communication are numerous: song in birds, frogs, and crickets; tail-wagging and hackle-raising in dogs; ‘grinning’ in chimpanzees; human gestures and language. A great number of survival-machine actions promote their genes’ welfare indirectly by influencing the behaviour of other survival machines. Animals go to great lengths to make this communication effective. The songs of birds enchant and mystify successive generations of men. I have already referred to the even more elaborate and mysterious song of the humpback whale, with its prodigious range, its frequencies spanning the whole of human hearing from subsonic rumblings to ultrasonic squeaks. Mole-crickets amplify their song to stentorian loudness by singing down in a burrow which they carefully dig in the shape of a double exponential horn, or megaphone. Bees dance in the dark to give other bees accurate information about the direction and distance of food, a feat of communication rivalled only by human language itself.
The traditional story of ethologists is that communication signals evolve for the mutual benefit of both sender and recipient. For instance, baby chicks influence their mother's behaviour by giving high piercing cheeps when they are lost or cold. This usually has the immediate effect of summoning the mother, who leads the chick back to the main clutch. This behaviour could be said to have evolved for mutual benefit, in the sense that natural selection has favoured babies that cheep when they are lost, and also mothers that respond appropriately to the cheeping.
If we wish to (it is not really necessary), we can regard signals such as the cheep call as having a meaning, or as carrying information: in {64} this case ‘I am lost.’ The alarm call given by small birds, which I mentioned in Chapter 1, could be said to convey the information ‘There is a hawk.’ Animals who receive this information and act on it are benefited. Therefore the information can be said to be true. But do animals ever communicate false information; do they ever tell lies?
The notion of an animal telling a lie is open to misunderstanding, so I must try to forestall this. I remember attending a lecture given by Beatrice and Allen Gardner about their famous ‘talking’ chimpanzee Washoe (she uses American Sign Language, and her achievement is of great potential interest to students of language). There were some philosophers in the audience, and in the discussion after the lecture they were much exercised by the question of whether Washoe could tell a lie. I suspected that the Gardners thought there were more interesting things to talk about, and I agreed with them. In this book I am using words like ‘deceive’ and ‘lie’ in a much more straightforward sense than those philosophers. They were interested in conscious intention to deceive. I am talking simply about having an effect functionally equivalent to deception. If a bird used the ‘There is a hawk’ signal when there was no hawk, thereby frightening his colleagues away, leaving him to eat all their food, we might say he had told a lie. We would not mean he had deliberately intended consciously to deceive. All that is implied is that the liar gained food at the other birds’ expense, and the reason the other birds flew away was that they reacted to the liar's cry in a way appropriate to the presence of a hawk.
Many edible insects, like the butterflies of the previous chapter, derive protection by mimicking the external appearance of other distasteful or stinging insects. We ourselves are often fooled into thinking that yellow and black striped hover-flies are wasps. Some bee-mimicking flies are even more perfect in their deception. Predators too tell lies. Angler fish wait patiently on the bottom of the sea, blending in with the background. The only conspicuous part is a wriggling worm-like piece of flesh on the end of a long ‘fishing rod’, projecting from the top of the head. When a small prey fish comes near, the angler will dance its worm-like bait in front of the little fish, and lure it down to the region of the angler's own concealed mouth. Suddenly it opens its jaws, and the little fish is sucked in and eaten. The angler is telling a lie, exploiting the little fish's tendency to approach wriggling worm-like objects. He is {65} saying ‘Here is a worm’, and any little fish who ‘believes’ the lie is quickly eaten.
Some survival machines exploit the sexual desires of others. Bee orchids induce bees to copulate with their flowers, because of their strong resemblance to female bees. What the orchid has to gain from this deception is pollination, for a bee who is fooled by two orchids will incidentally carry pollen from one to the other. Fireflies (which are really beetles) attract their mates by flashing lights at them. Each species has its own particular dot-dash flashing pattern, which prevents confusion between species, and consequent harmful hybridization. Just as sailors look out for the flash patterns of particular lighthouses, so fireflies seek the coded flash patterns of their own species. Females of the genus Photuris have ‘discovered’ that they can lure males of the genus Photinus if they imitate the flashing code of a Photinus female. This they do, and when a Photinus male is fooled by the lie into approaching, he is summarily eaten by the Photuris female. Sirens and Lorelei spring to mind as analogies, but Cornishmen will prefer to think of the wreckers of the old days, who used lanterns to lure ships on to the rocks, and then plundered the cargoes that spilled out of the wrecks.
Whenever a system of communication evolves, there is always the danger that some will exploit the system for their own ends. Brought up as we have been on the ‘good of the species’ view of evolution, we naturally think first of liars and deceivers as belonging to different species: predators, prey, parasites, and so on. However, we must expect lies and deceit, and selfish exploitation of communication to arise whenever the interests of the genes of different individuals diverge. This will include individuals of the same species. As we shall see, we must even expect that children will deceive their parents, that husbands will cheat on wives, and that brother will lie to brother.
Even the belief that animal communication signals originally evolve to foster mutual benefit, and then afterwards become exploited by malevolent parties, is too simple. It may well be that all animal communication contains an element of deception right from the start, because all animal interactions involve at least some conflict of interest. The next chapter introduces a powerful way of thinking about conflicts of interest from an evolutionary point of view.
p. 49 Brains may be regarded as analogous in function to computers.
Statements like this worry literal-minded critics. They are right, of course, that brains differ in many respects from computers. Their internal methods of working, for instance, happen to be very different from the particular kind of computers that our technology has developed. This in no way reduces the truth of my statement about their being analogous in function. Functionally, the brain plays precisely the role of on-board computer — data processing, pattern recognition, short-term and long-term data storage, operation coordination, and so on.
Whilst we are on computers, my remarks about them have become gratifyingly — or frighteningly, depending on your view — dated. I wrote (p. 48) that ‘you could pack only a few hundred transistors into a skull.’ Transistors today are combined in integrated circuits. The number of transistor-equivalents that you could pack into a skull today must be up in the billions. I also stated (p. 51) that computers, playing chess, had reached the standard of a good amateur. Today, chess programs that beat all but very serious players are commonplace on cheap home computers, and the best programs in the world now present a serious challenge to grand masters. Here, for instance, is the Spectator's chess correspondent Raymond Keene, in the issue of 7 October 1988:
It is still something of a sensation when a titled player is beaten by a computer, but not, perhaps, for much longer. The most dangerous metal monster so far to challenge the human brain is the quaintly named ‘Deep Thought’, no doubt in homage to Douglas Adams. Deep Thought's latest exploit has been to terrorise human opponents in the US Open Championship, held in August in Boston. I still do {277} not have DT's overall rating performance to hand, which will be the acid test of its achievement in an open Swiss system competition, but I have seen a remarkably impressive win against the strong Canadian Igor Ivanov, a man who once defeated Karpov! Watch closely; this may be the future of chess.
There follows a move-by-move account of the game. This is Keene's reaction to Deep Thought's Move 22:
A wonderful move. . . The idea is to centralise the queen. . . and this concept leads to remarkably speedy success . . . The startling outcome . . . Black's queen's wing is now utterly demolished by the queen penetration.
Ivanov's reply to this is described as:
A desperate fling, which the computer contemptuously brushes aside. . . The ultimate humiliation. DT ignores the queen recapture, steering instead for a snap checkmate. . . Black resigns.
Not only is Deep Thought one of the world's top chess players. What I find almost more striking is the language of human consciousness that the commentator feels obliged to use. Deep Thought ‘contemptuously brushes aside’ Ivanov's ‘desperate fling’. Deep Thought is described as ‘aggressive’. Keene speaks of Ivanov as ‘hoping’ for some outcome, but his language shows that he would be equally happy using a word like ‘hope’ for Deep Thought. Personally I rather look forward to a computer program winning the world championship. Humanity needs a lesson in humility.
p. 53 There is a civilization 200 light-years away, in the constellation of Andromeda.
A for Andromeda and its sequel, Andromeda Breakthrough, are inconsistent about whether the alien civilization hails from the enormously distant Andromeda galaxy, or a nearer star in the constellation of Andromeda as I said. In the first novel the planet is placed 200 light-years away, well within our own galaxy. In the sequel, however, the same aliens are located in the Andromeda galaxy, which is about 2 million light-years away. Readers of my page 53 may replace ‘200’ with ‘2 million’ according to taste. For my purpose the relevance of the story remains undiminished.
Fred Hoyle, the senior author of both these novels, is an eminent astronomer and the author of my favourite of all science fiction stories, The Black Cloud. The superb scientific insight deployed in his novels makes a poignant contrast to his spate of more recent books written jointly with C. Wickramasinghe. Their misrepresenting of Darwinism (as a theory of pure chance) and their waspish attacks on Darwin himself do nothing to {278} assist their otherwise intriguing (though implausible) speculations on interstellar origins of life. Publishers should correct the misapprehension that a scholar's distinction in one field implies authority in another. And as long as that misapprehension exists, distinguished scholars should resist the temptation to abuse it.
p. 55 . . . strategies and tricks of the living trade. . .
This strategic way of talking about an animal or plant, or a gene, as if it were consciously working out how best to increase its success — for instance picturing ‘males as high-stake high-risk gamblers, and females as safe investors’ (p. 56) — has become commonplace among working biologists. It is a language of convenience which is harmless unless it happens to fall into the hands of those ill-equipped to understand it. Or over-equipped to misunderstand it? I can, for example, find no other way to make sense of an article criticizing The Selfish Gene in the journal Philosophy, by someone called Mary Midgley, which is typified by its first sentence: ‘Genes cannot be selfish or unselfish, any more than atoms can be jealous, elephants abstract or biscuits teleological.’ My own ‘In Defence of Selfish Genes’, in a subsequent issue of the same journal, is a full reply to this incidentally highly intemperate and vicious paper. It seems that some people, educationally over-endowed with the tools of philosophy, cannot resist poking in their scholarly apparatus where it isn't helpful. I am reminded of P. B. Medawar's remark about the attractions of ‘philosophy-fiction’ to ‘a large population of people, often with well-developed literary and scholarly tastes, who have been educated far beyond their capacity to undertake analytical thought’.
p. 59 Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself
I discuss the idea of brains simulating worlds in my 1988 Gifford Lecture, ‘Worlds in Microcosm’. I am still unclear whether it really can help us much with the deep problem of consciousness itself, but I confess to being pleased that it caught the attention of Sir Karl Popper in his Darwin Lecture. The philosopher Daniel Dennett has offered a theory of consciousness that takes the metaphor of computer simulation further. To understand his theory, we have to grasp two technical ideas from the world of computers: the idea of a virtual machine, and the distinction between serial and parallel processors. I'll have to get the explanation of these out of the way first.
A computer is a real machine, hardware in a box. But at any particular time it is running a program that makes it look like another machine, a virtual machine. This has long been true of all computers, but modern ‘user-friendly’ computers bring home the point especially vividly. At the time of writing, the market leader in user-friendliness is widely agreed to be the Apple Macintosh. Its success is due to a wired-in suite of programs that {279} make the real hardware machine — whose mechanisms are, as with any computer, forbiddingly complicated and not very compatible with human intuition — look like a different kind of machine: a virtual machine, specifically designed to mesh with the human brain and the human hand. The virtual machine known as the Macintosh User Interface is recognizably a machine. It has buttons to press, and slide controls like a hi-fi set. But it is a virtual machine. The buttons and sliders are not made of metal or plastic. They are pictures on the screen, and you press them or slide them by moving a virtual finger about the screen. As a human you feel in control, because you are accustomed to moving things around with your finger. I have been an intensive programmer and user of a wide variety of digital computers for twenty-five years, and I can testify that using the Macintosh (or its imitators) is a qualitatively different experience from using any earlier type of computer. There is an effortless, natural feel to it, almost as if the virtual machine were an extension of one's own body. To a remarkable extent the virtual machine allows you to use intuition instead of looking up the manual.
I now turn to the other background idea that we need to import from computer science, the idea of serial and parallel processors. Today's digital computers are mostly serial processors. They have one central calculating mill, a single electronic bottleneck through which all data have to pass when being manipulated. They can create an illusion of doing many things simultaneously because they are so fast. A serial computer is like a chess master ‘simultaneously’ playing twenty opponents but actually rotating around them. Unlike the chess master, the computer rotates so swiftly and quietly around its tasks that each human user has the illusion of enjoying the computer's exclusive attention. Fundamentally, however, the computer is attending to its users serially.
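The chess-master analogy can be sketched as a toy round-robin scheduler: a single serial loop visits each "opponent" in turn, advancing each task one small step per cycle. The task contents below are illustrative only.

```python
def round_robin(tasks):
    """Serially interleave several generator 'tasks', one step each per
    cycle, recording the order in which work is actually done."""
    trace = []
    pending = list(tasks.items())
    while pending:
        still_running = []
        for name, gen in pending:
            try:
                next(gen)          # one small step of this task
                trace.append(name)
                still_running.append((name, gen))
            except StopIteration:
                pass               # task finished; drop it from the rotation
        pending = still_running
    return trace

def count_to(n):
    for i in range(n):
        yield i

trace = round_robin({"A": count_to(3), "B": count_to(2)})
# trace interleaves A and B: one worker, rotating fast enough that each
# task seems to enjoy its exclusive attention.
```

Speed up the rotation by a few orders of magnitude and each "opponent" experiences the illusion of simultaneity, though fundamentally the machine attends to them serially.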
Recently, as part of the quest for ever more dizzying speeds of performance, engineers have made genuinely parallel-processing machines. One such is the Edinburgh Supercomputer, which I was recently privileged to visit. It consists of a parallel array of some hundreds of ‘transputers’, each one equivalent in power to a contemporary desktop computer. The supercomputer works by taking the problem it has been set, subdividing it into smaller tasks that can be tackled independently, and farming out the tasks to gangs of transputers. The transputers take the sub-problem away, solve it, hand in the answer and report for a new task. Meanwhile other gangs of transputers are reporting in with their solutions, so the whole supercomputer gets to the final answer orders of magnitude faster than a normal serial computer could.
I said that an ordinary serial computer can create an illusion of being a parallel processor, by rotating its ‘attention’ sufficiently fast around a number of tasks. We could say that there is a virtual parallel processor sitting {280} atop serial hardware. Dennett's idea is that the human brain has done exactly the reverse. The hardware of the brain is fundamentally parallel, like that of the Edinburgh machine. And it runs software designed to create an illusion of serial processing: a serially processing virtual machine riding on top of parallel architecture. The salient feature of the subjective experience of thinking, Dennett thinks, is the serial ‘one-thing-after-another’, ‘Joycean’ stream of consciousness. He believes that most animals lack this serial experience, and use brains directly in their native, parallel-processing mode. Doubtless the human brain, too, uses its parallel architecture directly for many of the routine tasks of keeping a complicated survival machine ticking over. But, in addition, the human brain evolved a software virtual machine to simulate the illusion of a serial processor. The mind, with its serial stream of consciousness, is a virtual machine, a ‘user-friendly’ way of experiencing the brain, just as the ‘Macintosh User Interface’ is a user-friendly way of experiencing the physical computer inside its grey box.
It is not obvious why we humans needed a serial virtual machine, when other species seem quite happy with their unadorned parallel machines. Perhaps there is something fundamentally serial about the more difficult tasks that a wild human is called upon to do, or perhaps Dennett is wrong to single us out. He further believes that the development of the serial software has been a largely cultural phenomenon, and again it is not obvious to me why this should be particularly likely. But I should add that, at the time of my writing, Dennett's paper is unpublished and my account is based on recollections of his 1988 Jacobsen Lecture in London. The reader is advised to consult Dennett's own account when it is published, rather than rely on my doubtless imperfect and impressionistic — maybe even embellished — one.
The psychologist Nicholas Humphrey, too, has developed a tempting hypothesis of how the evolution of a capacity to simulate might have led to consciousness. In his book, The Inner Eye, Humphrey makes a convincing case that highly social animals like us and chimpanzees have to become expert psychologists. Brains have to juggle with, and simulate, many aspects of the world. But most aspects of the world are pretty simple in comparison to brains themselves. A social animal lives in a world of others, a world of potential mates, rivals, partners, and enemies. To survive and prosper in such a world, you have to become good at predicting what these other individuals are going to do next. Predicting what is going to happen in the inanimate world is a piece of cake compared with predicting what is going to happen in the social world. Academic psychologists, working scientifically, aren't really very good at predicting human behaviour. Social companions, using minute movements of the facial muscles and other subtle cues, are often astonishingly good at reading minds and second-guessing behaviour. Humphrey believes that this ‘natural psychological’ skill has become highly {281} evolved in social animals, almost like an extra eye or other complicated organ. The ‘inner eye’ is the evolved social-psychological organ, just as the outer eye is the visual organ.
So far, I find Humphrey's reasoning convincing. He goes on to argue that the inner eye works by self-inspection. Each animal looks inwards to its own feelings and emotions, as a means of understanding the feelings and emotions of others. The psychological organ works by self-inspection. I am not so sure whether I agree that this helps us to understand consciousness, but Humphrey is a graceful writer and his book is persuasive.
p. 60 A gene for altruistic behaviour...
People sometimes get all upset about genes ‘for’ altruism, or other apparently complicated behaviour. They think (wrongly) that in some sense the complexity of the behaviour must be contained within the gene. How can there be a single gene for altruism, they ask, when all that a gene does is encode one protein chain? But to speak of a gene ‘for’ something only ever means that a change in the gene causes a change in the something. A single genetic difference, by changing some detail of the molecules in cells, causes a difference in the already complex embryonic processes, and hence in, say, behaviour.
For instance, a mutant gene in birds ‘for’ brotherly altruism will certainly not be solely responsible for an entirely new complicated behaviour pattern. Instead, it will alter some already existing, and probably already complicated, behaviour pattern. The most likely precursor in this case is parental behaviour. Birds routinely have the complicated nervous apparatus needed to feed and care for their own offspring. This has, in turn, been built up over many generations of slow, step-by-step evolution, from antecedents of its own. (Incidentally, sceptics about genes for fraternal care are often inconsistent: why aren't they just as sceptical about genes for equally complicated parental care?) The pre-existing behaviour pattern — parental care in this case — will be mediated by a convenient rule of thumb, such as ‘Feed all squawking, gaping things in the nest’. The gene ‘for feeding younger brothers and sisters’ could work, then, by accelerating the age at which this rule of thumb matures in development. A fledgling bearing the fraternal gene as a new mutation will simply activate its ‘parental’ rule of thumb a little earlier than a normal bird. It will treat the squawking, gaping things in its parents’ nest — its younger brothers and sisters — as if they were squawking, gaping things in its own nest — its children. Far from being a brand new, complicated behavioural innovation, ‘fraternal behaviour’ would originally arise as a slight variant in the developmental timing of already-existing behaviour. As so often, fallacies arise when we forget the essential gradualism of evolution, the fact that adaptive evolution proceeds by small, step-by-step alterations of pre-existing structures or behaviour.
p. 61 Hygienic bees
If the original book had had footnotes, one of them would have been devoted to explaining — as Rothenbuhler himself scrupulously did — that the bee results were not quite so neat and tidy. Out of the many colonies that, according to theory, should not have shown hygienic behaviour, one nevertheless did. In Rothenbuhler's own words, ‘We cannot disregard this result, regardless of how much we would like to, but we are basing the genetic hypothesis on the other data.’ A mutation in the anomalous colony is a possible explanation, though it is not very likely.
p. 63 This is the behaviour that can be broadly labelled communication.
I now find myself dissatisfied with this treatment of animal communication. John Krebs and I have argued in two articles that most animal signals are best seen as neither informative nor deceptive, but rather as manipulative. A signal is a means by which one animal makes use of another animal's muscle power. A nightingale's song is not information, not even deceitful information. It is persuasive, hypnotic, spellbinding oratory. This kind of argument is taken to its logical conclusion in The Extended Phenotype, part of which I have abridged in Chapter 13 of this book. Krebs and I argue that signals evolve from an interplay of what we call mind-reading and manipulation. A startlingly different approach to the whole matter of animal signals is that of Amotz Zahavi. In a note to Chapter 9, I discuss Zahavi's views far more sympathetically than in the first edition of this book.