Four entries in 30-Second Theories. Ed. Paul Parsons. Lewes, East Sussex: Ivy Press. pp. 78-9, 80-1, 90-1, 144-5.
Note: This is the version originally submitted and may have been edited before publication.
Behaviourism dominated the science of psychology for half a century, even though it eliminated all talk of mind or consciousness and refused to study feelings, thoughts, or desires. Instead, it stuck to behaviours that could be measured in the lab. Behaviourism began in 1913 when the American psychologist John B. Watson argued that human learning could be studied like that of Pavlov’s famous dogs, which were conditioned to salivate to a sound by pairing it with food. Appropriate conditioning, he said, could be used to turn young children into any kind of person he wanted. B.F. Skinner, another American psychologist, developed ‘operant conditioning’, in which rats or pigeons are rewarded for some behaviours and punished for others, and so learn to press levers, run through mazes, or peck at colours. Pigeon-guided missiles were successfully developed from this idea but were never used in war. Skinner also found that rewards are far more effective than punishments – a crucial principle for parents and teachers. He founded his own school of ‘radical behaviourism’, arguing that all our actions are determined, and speculated about a utopian society based on conditioning. Behaviourism is no longer dominant within experimental psychology, but the findings of learning theory are still widely applied in education and therapy.
Nothing counts in psychology except behaviours that can be measured and tested – don’t talk about mind or consciousness, they’re unscientific illusions.
By turning psychology into science, behaviourism was a great advance on the psychoanalytic speculations of Freud that preceded it. It finally fell out of favour in the 1960s for two reasons. First, lots of things happen inside our heads that can’t be measured in behaviourist experiments – even rats and pigeons have emotions, intentions and mental models of their environment. Second, much of our intelligence and personality is inherited, not learned. So we need an evolutionary psychology to understand human nature.
Cognitive psychology treats human beings as information-processing systems, and studies how we think, perceive, learn and remember. The word “cognitive” comes from the Latin “cognoscere” (to get to know), and the term “cognitive psychology” was coined by Ulric Neisser in 1967. This was an exciting change from the prevailing behaviourism, which had rejected all research into mental processes. The new cognitive psychology studied inner mental events, and tapped into the rapidly expanding science of artificial intelligence. In this new view, the mind was analogous to software and the brain to the hardware of a biological computer. Human actions, decisions and thoughts were all forms of information processing, depending on input information from the senses. So, for example, studies of the brain’s visual system revealed how information entering through the eyes is processed by layers of cells in the eye, the mid-brain, and the sensory cortex, and on through to areas of the brain that can recognise objects or control actions and speech. Cognitive psychology expanded dramatically to become the dominant paradigm in psychology towards the end of the twentieth century, and is now part of the interdisciplinary field of cognitive science, which includes parts of neuroscience, linguistics and philosophy.
We humans are information processing machines – our brain is a computer and our mind is the software that runs on it.
Cognitive psychology was so successful that, at the end of the twentieth century, most research psychologists called themselves “cognitive psychologists”. But it depended on computer analogies that did not always hold up and on the idea that the brain works by building complex representations of the world. This emphasis on the brain began to give way to “enactive” theories which take much more account of the rest of our bodies and of our dynamical interactions with the world around us.
Imagine you’re given a pill and told it will cure your headache (or ulcer, or depression), and your condition improves even though the pill contains nothing but chalk – that is the placebo effect. The word is Latin for ‘I shall please’, but in medicine it refers to treatments that have no direct medical effect and work through the power of suggestion. First described in the 1920s, placebos are now an indispensable part of “evidence-based medicine”, the research that finds out whether new drugs, treatments, or crazy machines that claim to improve your sex life really work. In clinical trials the treatment is compared with a placebo, such as a pill that looks exactly like the real thing but contains no medicine, or acupuncture needles that look and feel convincing but don’t actually pierce the skin. Typically, in a “randomised controlled trial”, half the patients get the real thing and half the placebo. If both groups show the same improvement then you know the new treatment is useless. Placebo effects are extraordinarily powerful, and can even be enhanced by giving bigger pills, pink pills rather than white ones, or by the perceived seniority of the doctor who gives them.
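The logic of a randomised controlled trial can be sketched in a few lines of code. This is a toy simulation, not a real trial: the patient numbers and improvement probabilities are invented for illustration, and the hypothetical “drug” is assumed to add nothing beyond the power of suggestion, so both arms improve at roughly the same rate.

```python
import random

random.seed(1)  # fixed seed so the toy simulation is repeatable

def run_trial(n_patients=1000, placebo_rate=0.30, drug_rate=0.30):
    """Simulate a randomised controlled trial.

    Each patient is randomly assigned to the treatment or placebo arm;
    the improvement probabilities are hypothetical illustration values.
    Because drug_rate equals placebo_rate here, the 'drug' is useless.
    """
    improved = {"treatment": 0, "placebo": 0}
    counts = {"treatment": 0, "placebo": 0}
    for _ in range(n_patients):
        arm = random.choice(["treatment", "placebo"])  # randomisation
        counts[arm] += 1
        p = drug_rate if arm == "treatment" else placebo_rate
        if random.random() < p:  # did this patient improve?
            improved[arm] += 1
    return {arm: improved[arm] / counts[arm] for arm in counts}

rates = run_trial()
# Both arms improve at similar rates: the drug adds nothing
# beyond the placebo effect, so the treatment is judged useless.
```

The point of the random assignment is that any genuine drug effect would show up as a gap between the two improvement rates; here the gap is just statistical noise.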
Without the power of the placebo effect alternative therapists would be out of business. Even a medicine-free pill will make you feel better if you believe it will.
Placebo effects can be so strong that until the twentieth century most medicines were actually useless: the rich, who could afford doctors, fared no better (and sometimes worse) than the poor who could not. By testing against placebos we can now judge which new drugs are worth paying for, and we know that acupuncture can reduce pain but not cure illnesses, and that homoeopathy has no real effect (so don’t waste your money on it unless your belief is exceptionally strong).
Whenever we copy habits, skills, stories, or songs (or any kind of information) from person to person, we are dealing in memes. The ‘meme’ meme emerged from the theory of Universal Darwinism: the idea that whenever information is copied, varied and selected, evolution must happen, and the information that is copied is called the replicator. Our most familiar replicator is the gene, but in 1976 Richard Dawkins argued that culture consists of a second replicator, which he called the ‘meme’. Humans copy memes (including ideas, skills and behaviours) through imitation and teaching; they vary what they copy by making errors, by deliberate alteration or by creative combining; and they select which memes to remember and pass on. The science of memetics studies how memes spread, why some thrive and others fail, and the consequences of this for the evolution of culture. Some memes spread because they are useful or advantageous for us, like parts of science and medicine, financial institutions, arts and music. Others spread like viruses even though they may be useless or even harmful to us, including Internet viruses, chain letters, religions and cults, and useless alternative therapies. We humans are meme machines, and the memes use us for their survival.
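The copy-vary-select loop can be shown working in a toy simulation (every name and number here is invented for illustration): strings stand in for memes, copying errors supply the variation, and “fitness” – closeness to a hypothetically catchy target string – decides which copies get passed on to the next generation.

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

TARGET = "MEME"  # a hypothetical maximally 'catchy' meme
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(meme):
    """Characters matching the target: our stand-in for catchiness."""
    return sum(a == b for a, b in zip(meme, TARGET))

def evolve(population_size=50, generations=200, error_rate=0.3):
    """Universal Darwinism in miniature: copy, vary, select."""
    # Start from random strings (memes with no selection history).
    pop = ["".join(random.choice(LETTERS) for _ in TARGET)
           for _ in range(population_size)]
    for _ in range(generations):
        # Selection: only the catchiest half survives to be copied.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:population_size // 2]
        # Copying with variation: each survivor is copied once,
        # sometimes with an error at a random position.
        pop = survivors[:]
        for meme in survivors:
            copy = list(meme)
            if random.random() < error_rate:
                i = random.randrange(len(copy))
                copy[i] = random.choice(LETTERS)
            pop.append("".join(copy))
    return max(pop, key=fitness)

best = evolve()
# After enough generations the population converges on the target:
# blind copying errors plus selection are enough to produce design.
```

Nothing in the loop “knows” the target; selection on imperfect copies is the whole mechanism, which is exactly the point of Universal Darwinism.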
Culture evolves because people select memes, just as biology evolves by the selection of genes. We are the meme machines.
Memes have been called an “empty analogy” and a “meaningless metaphor”. Most biologists deny that memetics is needed to explain the origin of the big human brain, or our peculiar delight in art and music, and argue that existing theories of gene-culture co-evolution are better. Perhaps memetics is too scary for some – we humans are meme machines, and now that techno-memes (or temes) are developing ever better technology, our role is becoming ever less relevant.