70 PROFICIENCY & CPE
ENGLISH READING
EXERCISES
Student Edition
Introduction
Thank you for buying "70 Proficiency and CPE English Reading Exercises:
Student Edition". I'm sure that you with find this collection of 70 different pieces
of writing not only very useful for improving your level of English but also interesting
to read too (I've specially chosen the topic of each text for this purpose). Below I will
explain a little about what you have bought, who it is for, how the exercises are
ordered, what parts of your English it will improve and what is the best way to do the
reading exercises in this eBook.
What you have bought
In the eBook, there are 70 different reading exercises. Each reading exercise has its
own text (mainly articles, but also a variety of other types of text like essays, reviews
etc...). The length of each text varies, although none is shorter than 1,000 words.
After each text is a vocabulary exercise for you to learn and remember 7 phrases and
words from the text. After this there is a page for you to write a sentence in your own
words for each of these words and phrases once you have learnt their meanings.
On the following pages you will find a list of the 70 different reading exercises and
the page on which you can find each one.
List of exercises
Below you will find the names of the 70 different reading exercises that are included in this eBook and
the page where you can find each.
In addition, for each exercise it tells you what topics are covered in the text. This is especially useful if you
want to improve your knowledge of specific topics and the associated English vocabulary used in them.
1. The intelligence of plants (Nature, plants & science) - page 13
2. Superstitions and their strange origins (Culture & history) - page 19
3. Jordan: A spectacular country with unfortunately too few tourists (Travel, culture & nature) - page 24
4. The myth of meritocracy in education (Education & society) - page 29
5. The comparative failure of online grocery shopping (Business, technology & society) - page 35
6. Can fashion be considered to be art? (The arts & fashion) - page 42
7. Ten inventions that radically changed our world (Inventions, history & society) - page 47
8. What makes solo endurance athletes keep going? (Sport & psychology) - page 53
9. Is human habitation on Mars possible? (Astronomy & science) - page 58
10. The questionable validity and morality of using IQ tests (Psychology & society) - page 64
11. Is wind power the answer for our future energy needs? (The environment & energy) - page 70
12. The history of astrology (Science, astronomy, history & psychology) - page 75
13. Our changing spending habits at Christmas (Economics & business) - page 81
14. The importance of stories for us (The arts, literature, culture & history) - page 86
15. Is lab-grown meat a good thing for us? (Food/drink, science, technology & business) - page 91
16. Is there any difference between men's and women's brains? (Science, psychology, health & society) - page 97
17. ASMR: Making money through making very soft sounds (Technology, culture & business) - page 102
18. Why many of us don't really work when at work (Work, business & society) - page 107
19. The worrying disappearance of the right to free speech at British universities (Society, education & philosophy) - page 113
20. Changing the image of classical music (The arts, music & culture) - page 118
21. The discovery which is reshaping the theory of our origins (Anthropology & archaeology) - page 122
22. Sleep and its importance (Science, human biology & health) - page 127
23. Wealth and happiness: Are the two connected? (Economics, society & life) - page 132
24. Punishing the parents for their kids' underage drinking (Society, health & law) - page 137
25. Reintroducing wolves and other lost species back into the wild in Britain (Nature, animals & the environment) - page 143
26. Everest and death: Why people are still willing to climb the mountain of the dead (Sport, society, physical geography & psychology) - page 150
27. The debate on whether being overweight is unhealthy (Medicine, science & health) - page 157
28. The importance of Conrad's 'The Heart of Darkness' (The arts, literature, culture & history) - page 163
29. The resistance to moving from steam power to electricity in manufacturing (Science, history & economics) - page 169
30. The significance of colour (Culture, design, business, psychology & history) - page 174
31. The work of artist Jackson Pollock (The arts, painting & culture) - page 182
32. Food imagery and manipulation (Psychology, food/drink & media) - page 190
33. The urgency of acting now to stop climate change (The environment & nature) - page 197
34. Trying to reverse the declining demand for humanities majors in America (Education & university) - page 205
35. How the tulip caused the world's first economic crash (Economics, history & painting) - page 210
36. A review of the film "Three Billboards Outside Ebbing, Missouri" (The arts & film) - page 215
37. Our personality traits appear to be mostly inherited (Psychology & science) - page 219
38. Is it right for people to go to Africa to hunt? (Nature, society & business) - page 225
39. The new palaces of the 21st century (Architecture, society & culture) - page 232
40. Understanding contemporary dance (The arts, dance & history) - page 238
41. America's six best hiking trails (Travel, nature & physical geography) - page 243
42. Why the disappearance of livestock farming is good for us all (The environment & farming) - page 250
43. Can we trust our memories? (Science, psychology & health) - page 255
44. Why Mary Shelley's Frankenstein is still relevant today (The arts, literature & culture) - page 260
45. Customer complaints are good for business (Business) - page 266
46. How the discovery of plate tectonics revolutionised our understanding of our planet (Science, geology & history) - page 271
47. Is there a conflict between saving the planet and reducing poverty? (Society & the environment) - page 275
48. No more rock stars anymore (The arts, music & technology) - page 280
49. Are the rich better than the rest of us? (Psychology & society) - page 286
50. Globalisation and how societies have always evolved (Culture, society & history) - page 291
51. The reasons for the decline in wine consumption in France (Food/drink, culture & society) - page 297
52. The merits of being a fair-weather sports fan (Sport, society & culture) - page 302
53. The reason why homeopathic treatment does work with patients (Medicine & science) - page 307
54. Is the gentrification of parts of cities a really bad thing? (Society, economics & human geography) - page 312
55. The origins of the metric system (Science & history) - page 318
56. Do prisons work? (Crime, society, history & law) - page 323
57. All you need to know about depression (Science, psychology & health) - page 329
58. The future of work (Work, technology & society) - page 334
59. The difficulties for us to colonise the galaxy (Astronomy, science, health & technology) - page 340
60. The shredding of Banksy's "Girl With Balloon" at auction: Stunt or statement? (The arts, painting & culture) - page 345
61. Why a decline in the planet's biodiversity is a threat to us all (Nature & the environment) - page 350
62. Should we profile people in society to predict behaviour? (Psychology, society & crime) - page 356
63. A review of Vincent LoBrutto's biography of Stanley Kubrick (The arts, film & reviews) - page 361
64. Is free trade between countries a good thing? (Economics) - page 366
65. Is freshwater the biggest challenge we will face this century? (The environment, society & physical geography) - page 372
66. The problems of relying on metrics to gauge performance (Statistics, business & work) - page 378
67. Should you use the carrot or the stick with your children? (Psychology & family) - page 382
68. Stefan Zweig: The life of a citizen of the world (The arts, literature, culture & society) - page 387
69. The evolution of science and thought throughout the ages part 1 (Science, history, innovation, society & philosophy) - page 392
70. The evolution of science and thought throughout the ages part 2 (Science, history, innovation, society & philosophy) - page 398
Who this eBook is for
The reading exercises (the texts and vocabulary exercises) in this eBook have been
specifically designed for people who have a proficiency level of English (higher C1 or
C2), are studying for the Cambridge Certificate of Proficiency in English (CPE) exam
or students who are looking to obtain an 8-9 score in the IELTS exam.
The texts have either been adapted or written for the needs of students learning
English at this level. What this means is that you should be able to easily understand
what the piece of text you are reading is about, but at the same time find parts of it
challenging (specifically with some of the vocabulary you are going to encounter in
the text). You are going to come across some vocabulary (but not so much that you
get frustrated) which you are very likely not to have encountered before in English:
vocabulary which you need to know and understand at this level.
The words and phrases in the vocabulary exercises have been specifically chosen
from each of the texts that you will read. The words and phrases in each vocabulary
exercise are ones which I have frequently noticed that students at this level don't
know, don't use or use incorrectly. In addition, the majority of these words and
phrases are ones which you will both see and be able to use in a variety of different
contexts.
The reading exercises in this eBook are not really appropriate for people who have
lower levels of English (upper-intermediate, advanced or CAE). People who have
these levels of English will find the majority of the texts too difficult and likely
become frustrated.
How the exercises are ordered
Unlike our other reading exercise books (for intermediate/FCE and advanced/CAE
levels) the texts in the reading exercises contained in this eBook don't get
progressively more difficult (i.e. the text in reading exercise 1 isn't necessarily less
challenging than the text in exercise 34 or 70). So you can do them in whichever
order you want.
You will probably find some of the reading exercises more challenging than others,
but this isn't necessarily down to the vocabulary used in them, but because of the
knowledge you have of the topic(s) discussed in them. But due to there being other
reading exercises on each of the topics throughout the eBook, you should improve
your knowledge of the topic and its vocabulary the more reading exercises you do.
By all means skip reading exercises that you don't think are necessary
for you (that's why I've included 70 different exercises in this eBook, so you have
a choice of which to use).
What it will help you improve
The reading exercises that you will find here have been designed to improve your
English in a number of different areas.
Reading
By reading each text in the exercises, you will become more comfortable at reading
and understanding a range of different complex pieces of writing in English.
Vocabulary
You'll not only learn the meaning and use of advanced vocabulary in the vocabulary
exercises, but you'll also broaden your overall knowledge of English vocabulary. The
reading exercises cover a wide variety of different topics (from science to art). So,
you'll read about many topics which you otherwise wouldn't and learn the vocabulary
used when talking about them. And although broadening vocabulary is important for
everyone who wants to improve their level of English, it is especially important for
people doing exams (e.g. CPE, IELTS etc...).
Grammar
Although there are no grammar exercises in this eBook, the reading of the different
texts will reinforce your knowledge and use of both complex and simple grammatical
structures. The more you see grammatical structures being used, the more likely you
are to use them correctly yourself.
Writing
Although the focus of these exercises is on improving reading skills and vocabulary,
you can use the different texts to improve your own writing. To do this, you should
look at the different texts and see how they are structured (what comes first, second,
third etc...), how the different paragraphs are linked together and how the texts flow
(how the writers order the different things they talk about, to keep the people reading
them both interested and not confused). In addition, some of the vocabulary you will
learn in the vocabulary exercises is commonly used in written English.
The best way to use this eBook
Although you can do the reading exercises in this eBook however you like, I'm going
to recommend a method to use which will help you to improve your English more
quickly and effectively.
Read the text
The first thing to do is to read the text of each reading exercise. But before you do,
make sure you read the summary (which explains what the text is about) at the
beginning of the text. The reason why you need to read this first is that it will make
reading the text both quicker and easier.
Then read the text. The first time you read the text, you are doing it to understand
what it is talking about. Don't worry if you don't understand what all the vocabulary
means. If there are words and phrases you don't know or are confused by,
underline or highlight them (so you remember what they are). But don't think too
long about what they mean when first reading the text. You can do that later (I'll
explain when below).
Do the vocabulary exercise
After you have finished reading the text, have a break of about 5 to 10 minutes. Then
look at the vocabulary exercise of the text that you have just read.
In this part you are going to learn 7 words or phrases from the text you have just
read. You are going to learn what each of these means and in what situations it is
used. You will find that each word or phrase is in a sentence (or sentences) taken
from the text. Guess from the context (the sentence(s) in which the word or phrase is
in) what the meaning of the word or phrase is. When you think that you know what it
means, check in a dictionary to make sure that you are right.
After you have done this, create your own sentence with the word/phrase in your
mind (don't write it down) and say it out loud.
Remember, when you do this, to use the same context in your own sentence as the
word or phrase has in the text.
Do the same process for all the words and phrases.
Write your own sentences
The following day (not on the same day), write a sentence for each of the 7 words
or phrases that you learnt from the vocabulary exercise. Write the sentences on the
page after the vocabulary exercise called "Write your own sentences with the
vocabulary". You can write the same sentence you created the day before or a
different sentence; it is your choice.
This eBook has been designed so that you can write the sentence you have created
directly in the eBook (in the coloured box below each word or phrase). This means
that you don't have to print out the reading exercise and it also makes it easier to find
and reread the sentences you have written in the future.
To write your sentences directly in the eBook, you need to open it with the Adobe
Acrobat application. Although you can open and read this eBook in many
different applications, most of them won't allow you to write directly into the eBook.
After you have written each sentence, read it out loud again.
Why do all of this?
The main reason to do all of this is that it will make sure that you remember both the
meaning of the words or phrases and when they are used. I have tested this method
with people learning English and the ones who use it remember and use the words
and phrases a lot more than those that don't.
Look at the other words you don't know
It is your choice if you do this or not, but after you have done the sentences in the
vocabulary exercise you can look at the words and phrases that you underlined or
highlighted when you were reading the text. Use a similar process to learning and
remembering their meanings as you did with the vocabulary you learnt in the
vocabulary exercise (guess their meaning from the context, check in a dictionary,
create your own sentence etc...).
And that's it
That's all you have to do. I appreciate that after reading all this, it may seem like
there's a lot you have to do. But when you get used to doing it, it won't take you long
at all to do.
Use of the content
Please remember it's taken me a lot of time and effort to produce this eBook and all
the material in it. So I would really appreciate it if you didn't give copies of it to other
people. If you choose to do so, it will mean fewer people will buy it from me. This will
make it harder for me to dedicate my time to producing more of this type of content
in the future.
Although I'm sure that the majority of you won't do this, I have to stipulate what you
are legally not able to do with the content in this eBook. You are not permitted to
rebrand the eBook or content (or any part of it) as your own or resell it. In addition,
you are not allowed to publish the content or provide a link to download the content
or eBook for free on a digital medium (e.g. on a website, social media network etc...).
I apologise for having to do this, but I have to legally protect my rights.
If you have any questions about this or about anything connected to this eBook,
please don't hesitate to contact me (Chris) at contact@blairenglish.com.
EXERCISE 1
The intelligence of plants
Summary
This article discusses the increasing evidence that plants have a form of intelligence. Talking to a
co-author of a book on the subject, it explains why this is the case and why this hasn't really been
researched in the past. It also explains the importance of plants to our survival and why we
really need to start paying them more attention.
Plants are intelligent. Plants deserve rights. Plants are like the Internet, or, more accurately, the
Internet is like plants. To most of us these statements may sound, at best, insupportable or, at
worst, crazy. But a new book, Amazing Plants: The Intelligence of Plants, by plant neurobiologist
(yes, plant neurobiologist), Stefano Rivili and journalist, Alessandra Vickers, makes a compelling
and fascinating case not only for plant sentience and smarts, but also plant rights.
For centuries Western philosophy and science largely viewed animals as unthinking automatons,
simple slaves to instinct. But research in recent decades has shattered that view. We now know
that not only are chimpanzees, dolphins and elephants thinking, feeling and personality-driven
beings, but many others are as well. Octopuses can use tools, whales sing, bees can count, crows
demonstrate complex reasoning, paper wasps can recognise faces and fish can differentiate types
of music. All these examples have one thing in common: they are animals with brains. But plants
don't have a brain. How can they solve problems, act intelligently or respond to stimuli without a
brain?
"Today's view of intelligence - as the product of the brain in the same way that urine is of the
kidneys - is a huge oversimplification. A brain without a body produces the same amount of
intelligence as the nut that it resembles," said Rivili, who, as well as co-writing Amazing Plants, is
the director of the Institute of Plant Neurobiology in Milan.
As radical as Rivili's ideas may seem, he's actually in good company. Charles Darwin, who studied
plants meticulously for decades, was one of the first scientists to break from the crowd and
recognise that plants move and respond to sensation, i.e. are sentient. Moreover, he observed
that the radicle, the root tip, "acts like the brain of one of the lower animals."
Plant problem solvers
Plants face many of the same problems as animals, though they differ significantly in their
approach. Plants have to find energy, reproduce and stave off predators. To do these things, Rivili
argues, plants have developed smarts and sentience. "Intelligence is the ability to solve problems
and plants are amazingly good at solving their problems," Rivili noted. To solve their energy needs,
most plants turn to the sun, in some cases literally. Plants are able to grow through shady areas to
locate light and many even turn their leaves during the day to capture the best light. Some plants
have taken a different route, however, supplying themselves with energy by preying on animals,
including everything from insects to mice to even birds. The Venus flytrap may be the most famous
of these, but there are at least 600 species of animal-eating flora. In order to do this, these plants
have evolved complex lures and rapid reactions to catch, hold and devour animal prey.
Plants also harness animals in order to reproduce. Many plants use complex trickery or provide
snacks and advertisements (colours) to lure in pollinators, communicating either through direct
deception or rewards. New research finds that some plants even distinguish between different
pollinators and only germinate their pollen for the best.
Finally, plants have evolved an incredible variety of toxic compounds to ward off predators. When
attacked by an insect, many plants release a specific chemical compound. They don't just throw
out compounds, but often release the precious chemical only in the leaf that's under attack. Plants
are both tricky and thrifty.
"Each choice a plant makes is based on this type of calculation: what is the smallest quantity of
resources that will serve to solve the problem?" Rivili and Vickers write in their book. In other
words, plants don't just react to threats or opportunities, but must decide how far to react.
The bottom of the plant may be the most sophisticated of all though. Scientists have observed that
roots do not flounder randomly but search for the best position to take in water, avoid competition
and garner chemicals. In some cases, roots will alter course before they hit an obstacle, showing
that plants are capable of "seeing" an obstacle through their many senses. Humans have five basic
senses. But scientists have discovered that plants have at least 20 different senses used to monitor
complex conditions in their environment. According to Rivili, they have senses that roughly
correspond to our five, but also have additional ones that can do such things as measure humidity,
detect gravity and sense electromagnetic fields.
Plants are also complex communicators. Today, scientists know that plants communicate in a wide
variety of ways. The most well known of these is chemical volatiles (why some plants smell so
good and others awful), but scientists have also discovered that plants communicate via
electrical signals and even vibrations. "Plants are wonderful communicators: they share a lot of
information with neighbouring plants or with other organisms such as insects or other animals.
The scent of a rose, or something less fascinating as the stench of rotting meat produced by some
flowers, is a message for pollinators."
Many plants will even warn others of their species when danger is near. If attacked by an insect, a
plant will send a chemical signal to its fellows, as if to say, "hey, I'm being eaten so prepare
your defences." Researchers have even discovered that plants recognize their close kin, reacting
differently to plants from the same parent as those from a different parent. "In the last several
decades science has been showing that plants are endowed with feeling, weave complex social
relations and can communicate with themselves and with animals," write Rivili and Vickers, who
also argue that plants show behaviours similar to sleeping and playing.
And it turns out Darwin was likely right all along. Rivili has found rising evidence that the key to
plant intelligence is in the radicle or root apex. Rivili and colleagues recorded the same signals
given off from this part of the plant as those from neurons in the animal brain. One root apex may
not be able to do much. But instead of having just one root, most plants have millions of individual
roots, each with a single radicle.
So, instead of a single powerful brain, Rivili argues that plants have a million tiny computing
structures that work together in a complex network, which he compares to the Internet. The
strength of this evolutionary choice is that it allows a plant to survive even after losing 90% or
more of its biomass. "The main driver of evolution in plants was to survive the massive removal of
a part of the body," said Rivili. "Thus, plants are built of a huge number of basic modules that
interact as nodes of a network. Without single organs or centralised functions plants may tolerate
predation without losing functionality. The Internet was born for the same reason and, inevitably,
reached the same solution."
Having a single brain, just like having a single heart or a pair of lungs, would make plants much
easier to kill. "This is why plants have no brain: not because they are not intelligent, but because
they would be vulnerable," Rivili said. "In this way, it may be better to think of a single plant as a
colony, rather than an individual. Just as the death of one ant doesn't mean the demise of the
colony, so the destruction of one leaf or one root means the plant still carries on."
The wide gulf
So, why has plant sentience (or, if you don't buy that yet, plant behaviour) been ignored for so
long? Rivili says this is because plants are so drastically different from us. For example, plants
largely live on a different timescale than animals, moving and acting so slowly that we hardly
notice they are, indeed, reacting to outside stimuli. Consequently, it is "impossible" for us to put
ourselves in the place of a plant. "We are too different; the fruit of two diverse evolutive
tracks...plants would appear very different to us," he says. "But at the same time, we have things in
common with them too. For example, we both have the same needs to survive and we evolved on
the same planet as well. We pretty much respond in the same way to the same impulses."
But due to these vast differences, Rivili says, plants fail to attract interest in the same way as, say, a
tiger or an elephant. "The love for plants is an adult love. It is almost impossible to find a baby
interested in plants; they love animals," he said. "No child thinks that a plant is funny. And for me
it was no different: I began to be interested in plants during my doctorate when I realised that they
were capable of surprising abilities."
This has resulted in very few researchers studying plant behaviour or intelligence, unlike queries
into animals. "Today the vast majority of the plant scientists are molecular biologists who know [as
much] about the behaviour of plants as I know of cricket," said Rivili. "We take plants for
granted. Expecting that they will always be there, not worrying or even considering the possibility
that one might not be."
Yet, humankind's disinterest and dispassion about plant behaviour and intelligence may be to our
detriment, and put our own very survival at stake.
Totally dependent on plants
Whilst plants are by no means as diverse as the world's animals (no one beats beetles for
diversity), they have truly conquered the world. Today, plants make up more than 99 percent of
biomass on the planet. Think about that: this means all the world's animals, including ants, blue
whales, and us, make up less than one percent. "So we depend on plants, thus plant conservation
is necessary for man's conservation," said Rivili.
Yet, human actions, including deforestation, habitat destruction, pollution, climate change, etc.,
have ushered in a mass extinction crisis. While plants in the past have fared better in previous
mass extinctions, there is no guarantee they will this time. "Every day a consistent number of plant
species that we have never encountered, disappears," noted Rivili.
At the same time, we don't even know for certain how many plant species exist on the planet.
Currently, scientists have described around 20,000 species of plants. But there are probably more
unknown than known. "We have no idea about the number of plant species living on the planet.
There are different estimates saying we know from 10 to 50% (no more) of the existing plants,"
said Rivili. Many of these could be wiped out without ever being described, especially as
unexplored rainforests and cloud forests (the most biodiverse communities on the planet)
continue to fall in places like Brazil, Indonesia, Malaysia, the Democratic Republic of the Congo
and Papua New Guinea, among others.
Yet, we depend on plants not only for many of our raw materials and our food, but also for the
oxygen we breathe and, increasingly it seems, the rain we require. Plants drive many of the
biophysical forces that make the Earth habitable for humans and all animals.
"Sentient or not sentient, intelligent or not, the life of the planet is green...The life on the Earth is
possible just because plants exist," said Rivili. "Is not a matter of preserving plants: plants will
survive. The conservation implications are for humans: fragile and dependent organisms."
Still, there are few big conservation groups working directly on plants; most target the bigger,
fluffier and more publicly appealing animals. Much like plant behaviour research, plant
conservation has been little-funded and long-ignored.
Rivili says the state of plant conservation and the rising evidence that plants are sentient beings
should make people consider something really radical: plants' rights. "It is my opinion that a
discussion about plants' rights is no longer deferrable. I know that the first reaction, even of the
more open-minded people, will be 'Jeez! He's exaggerating now. Plants' rights are nonsense,' but
should we not care? After all, the reaction of the Romans to the proposal of rights for women and
children was no different. The road to rights is always difficult, but it is necessary. Providing rights
to plants is a way to prevent our extinction."
Vocabulary exercise
1. lure (paragraph 6)
Plants also harness animals in order to reproduce. Many plants use complex trickery or provide
snacks and advertisements (colours) to lure in pollinators, communicating either through direct
deception or rewards.
2. ward off (paragraph 7)
Finally, plants have evolved an incredible variety of toxic compounds to ward off predators.
3. roughly (paragraph 9)
According to Rivili, they have senses that roughly correspond to our five, but also have additional
ones that can do such things as measure humidity, detect gravity and sense electromagnetic fields.
4. endowed with (paragraph 11)
In the last several decades science has been showing that plants are endowed with feeling, weave
complex social relations and can communicate with themselves and with animals
5. take plants for granted (paragraph 17)
We take plants for granted. Expecting that they will always be there, not worrying or even
considering the possibility that one might not be.
6. at stake (paragraph 18)
Yet, humankind's disinterest and dispassion about plant behaviour and intelligence may be to our
detriment, and put our own very survival at stake.
7. fared (paragraph 20)
While plants in the past have fared better in previous mass extinctions, there is no guarantee they
will this time.
Write your own sentences with the vocabulary
1. lure
2. ward off
3. roughly
4. endowed with
5. take plants for granted
6. at stake
7. fared
EXERCISE 2
Superstitions and their strange origins
Summary
This is an article which explains the historical origins of 9 commonly held superstitions (things
which it is considered lucky or unlucky to do) in the English-speaking world.
Some superstitions are so ingrained in modern English-speaking societies that everyone, even
scientists, succumb to them (or, at least, feel slightly uneasy about not doing so). So why don't we
walk under ladders? Why, after voicing optimism, do we knock on wood? Why do non-religious
people "God bless" a sneeze? And why do we avoid at all costs opening umbrellas indoors? Is there
a logical explanation for them?
If you know what these superstitions originally emanated from, you’ll find that there kind of is.
And this is what you’re going to learn below for both the aforementioned and five other common
superstitious customs we have.
It's bad luck to open an umbrella indoors
Though some historians tentatively trace this belief back to ancient Egyptian times, the
superstitions that surrounded pharaohs' sunshades were actually quite different and probably
unrelated to the modern-day one about rain gear. Most historians think the warning against
unfurling umbrellas inside originated much more recently, in Victorian England.
In "Extraordinary Origins of Everyday Things", the scientist and author Charles Panati wrote: "In
eighteenth-century London, when metal-spoked waterproof umbrellas began to become a common
rainy-day sight, their stiff, clumsy spring mechanism made them veritable hazards to open
indoors. A rigidly spoked umbrella, opening suddenly in a small room, could seriously injure an
adult or a child, or shatter a frangible object. Even a minor accident could provoke unpleasant
words or a minor quarrel, themselves strokes of bad luck in a family or among friends. Thus, the
superstition arose as a deterrent to opening an umbrella indoors."
It's bad luck to walk under a leaning ladder
This superstition really did originate 5,000 years ago in ancient Egypt. A ladder leaning against a
wall forms a triangle, and Egyptians regarded this shape as sacred (as exhibited, for example, by
their pyramids). To them, triangles represented the trinity of the gods, and to pass through a
triangle was to desecrate them.
This belief wended its way up through the ages. "Centuries later, followers of Jesus Christ usurped
the superstition, interpreting it in light of Christ's death," Panati explained. "Because a ladder had
rested against the crucifix, it became a symbol of wickedness, betrayal, and death. Walking under a
ladder courted misfortune."
In England in the 1600s, criminals were forced to walk under a ladder on their way to the gallows.
A broken mirror gives you seven years of bad luck
In ancient Greece, it was common for people to consult "mirror seers," who told their fortunes by
analyzing their reflections. As the historian Milton Goldsmith explained in his book "Signs, Omens
and Superstitions" (1918), "divination was performed by means of water and a looking glass. This
was called catoptromancy. The mirror was dipped into the water and a sick person was asked to
look into the glass. If their image appeared distorted, they were likely to die; if clear, they would
live."
In the first century A.D., the Romans added a caveat to the superstition. At that time, it was
believed that people's health changed in seven year cycles. A distorted image resulting from a
broken mirror therefore meant seven years of ill-health and misfortune, rather than outright
death.
When you spill salt, toss some over your left shoulder to avoid bad luck
Spilling salt has been considered unlucky for thousands of years. Around 3,500 B.C., the ancient
Sumerians first took to nullifying the bad luck of spilled salt by throwing a pinch of it over their left
shoulders. This ritual spread to the Egyptians, the Assyrians and later, the Greeks.
The superstition ultimately reflects how much people prized (and still prize) salt as a seasoning for
food. The etymology of the word "salary" shows how highly we value it. According to Panati: "The
Roman writer Petronius, in the Satyricon, originated 'not worth his salt' as opprobrium for Roman
soldiers, who were given special allowances for salt rations, called salarium, 'salt money', the origin
of our word 'salary.'"
Knock on wood to prevent disappointment
Though historians say this may be one of the most prevalent superstitious customs in the United
States, its origin is very much in doubt. "Some attribute it to the ancient religious rite of touching a
crucifix when taking an oath," Goldsmith wrote. Alternatively, "among the ignorant peasants of
Europe it may have had its beginning in the habit of knocking loudly to keep out evil spirits."
Always 'God bless' a sneeze
In most English-speaking countries, it is polite to respond to another person's sneeze by saying
"God bless you." Though incantations of good luck have accompanied sneezes across disparate
cultures for thousands of years (all largely tied to the belief that sneezes expelled evil spirits), our
particular custom began in the sixth century A.D. by explicit order of Pope Gregory the Great.
A terrible pestilence was spreading through Italy at the time. The first symptom was severe,
chronic sneezing, and this was often quickly followed by death.
Pope Gregory urged the healthy to pray for the sick, and ordered that light-hearted responses to
sneezes such as "May you enjoy good health" be replaced by the more urgent "God bless you!" If a
person sneezed when alone, the Pope recommended that they say a prayer for themselves in the
form of "God help me!"
Hang a horseshoe on your door open-end-up for good luck
The horseshoe is considered to be a good luck charm in a wide range of cultures. Belief in its
magical powers traces back to the Greeks, who thought the element iron had the ability to ward off
evil. Not only were horseshoes wrought of iron, they also took the shape of the crescent moon,
which in fourth-century Greece was a symbol of fertility and good fortune.
The belief in the talismanic powers of horseshoes passed from the Greeks to the Romans, and from
them to the Christians. In the British Isles in the Middle Ages, when fear of witchcraft was
rampant, people attached horseshoes open-end-up to the sides of their houses and doors. People
thought witches feared horses, and would shy away from any reminders of them.
A black cat crossing your path is lucky/unlucky
Many cultures agree that black cats are powerful omens, but do they signify good or evil?
The ancient Egyptians revered all cats, black and otherwise, and it was there that the belief began
that a black cat crossing your path brings good luck. Their positive reputation is recorded again
much later, in the early seventeenth century in England: King Charles I kept (and treasured) a
black cat as a pet. Upon its death, he is said to have lamented that his luck was gone. The supposed
truth of the superstition was reinforced when he was arrested the very next day and charged with
high treason.
During the Middle Ages, people in many other parts of Europe held quite the opposite belief. They
thought black cats were the "familiars," or companions, of witches, or even witches themselves in
disguise, and that a black cat crossing your path was an indication of bad luck, a sign that the devil
was watching you. This seems to have been the dominant belief held by the Pilgrims when they
came to America, perhaps explaining the strong association between black cats and witchcraft that
exists in the country to this day.
The number 13 is unlucky
Fear of the number 13, known as "triskaidekaphobia," has its origins in Norse mythology. In a
well-known tale, 12 gods were invited to dine at Valhalla, a magnificent banquet hall in Asgard, the
city of the gods. Loki, the god of strife and evil, crashed the party, raising the number of attendees
to 13. The other gods tried to kick Loki out, and in the struggle that ensued, Balder, the favorite
among them, was killed.
Scandinavian avoidance of 13-member dinner parties, and dislike of the number 13 itself, spread
south to the rest of Europe. It was reinforced in the Christian era by the story of the Last Supper, at
which Judas, the disciple who betrayed Jesus, was the thirteenth guest at the table.
Many people still shy away from the number, but there is no statistical evidence that 13 is unlucky.
Vocabulary exercise
1. ingrained (paragraph 1)
Some superstitions are so ingrained in modern English-speaking societies that everyone, even
scientists, succumb to them (or, at least, feel slightly uneasy about not doing so).
2. succumb to (paragraph 1)
Some superstitions are so ingrained in modern English-speaking societies that everyone, even
scientists, succumb to them (or, at least, feel slightly uneasy about not doing so).
3. a caveat (paragraph 9)
If their image appeared distorted, they were likely to die; if clear, they would live. In the first
century A.D., the Romans added a caveat to the superstition. At that time, it was believed that
people's health changed in seven year cycles.
4. disparate (paragraph 13)
Though incantations of good luck have accompanied sneezes across disparate cultures for
thousands of years (all largely tied to the belief that sneezes expelled evil spirits), our particular
custom began in the sixth century
5. rampant (paragraph 17)
In the British Isles in the Middle Ages, when fear of witchcraft was rampant, people attached
horseshoes open-end-up to the sides of their houses and doors.
6. revered (paragraph 19)
The ancient Egyptians revered all cats, black and otherwise, and it was there that the belief began
that a black cat crossing your path brings good luck.
7. ensued (paragraph 21)
The other gods tried to kick Loki out, and in the struggle that ensued, Balder, the favorite among
them, was killed.
Write your own sentences with the vocabulary
1. ingrained
2. succumb to
3. a caveat
4. disparate
5. rampant
6. revered
7. ensued
EXERCISE 3
Jordan: A spectacular country with
unfortunately too few tourists
Summary
This is a travel article written by a journalist who visited Jordan with her young daughter. She not
only talks about the places she visited whilst in the country, but also about the reasons why the
number of tourists visiting the country has drastically fallen in recent years and the consequences
this is having there.
"You are safe and sound here," the gift shop owner said, as he handed over some change. At
breakfast, the waiter had been similarly reassuring. "I always tell my guests they are in a very safe
place. There might be issues around the corner," he said, pouring out tea. "But here you are
perfectly safe."
After a while these repeated reassurances began to have the opposite of the desired effect, and
actually became rather disconcerting. I hadn't expected to find Jordan anything other than
peaceful, but since the bottom has fallen out of the tourism industry because of the conflict in
neighbouring Syria, most people you meet have an urge to emphasise how risk-free a trip here is.
It's easy to see why. Thanks to the widespread sense of unease about travelling to the region,
Jordan, as well as being safe, is now extremely empty. Some of the country's most extraordinary
sites are virtually deserted; tourism has fallen 66% since 2011. As a tourist, you can't help feeling
worried for the people who depend for their livelihood on the travel industry (which has
historically contributed about 20% of GDP), but at the same time there is an uneasy pleasure in
visiting places like Petra, one of the new seven wonders of the world, in near silence.
Nothing had prepared me for how spectacular Jordan is, and perhaps part of the intense
experience of visiting now is tied up with the unusually solitary feeling you have as you walk
through its ancient sites.
After a late-night arrival in Amman with Rose, my 12-year-old daughter, we set off early and drove
through the desert to Petra, arriving late morning. When tourism here was at its peak, there were
as many as 3,000 visitors every day. On the day we visited in late October, only 300 people went
through the gates. This meant that walking in the Siq, the natural gorge that leads through red
sandstone rocks to the vast classical Treasury building, carved into the rockface in the first century
BC, felt very peaceful. There were no crowds with selfie-sticks, no umbrella-waving tour guides. It
was the most unfrazzling experience, which allowed us to look at the scenery and see it as it has
been for centuries.
Conservationists' concerns, referred to in my now out-of-date Lonely Planet Guide, about mass
tourism in Petra (the pernicious effects of humidity and the damage wrought by thousands of feet
trampling up the steps cut into the rock) are no longer so acute.
We had only one day in Petra, but there was so much we wanted to see that we walked 12 miles,
racing around in the heat to pack everything in, overwhelmed and stupefied by the quantity of
beautiful tombs and facades. Guidebook photographs do no justice at all to the splendour of the
site, the monumental architectural talent of the Nabateans (the nomadic people who built Petra)
and the mesmerising way sunlight changes the colour of the rock as the day progresses, from
orange to pink and, with dusk, to shadowy grey. This vast settlement is truly extraordinary. The
canyon alone and the sudden, amazing reveal of the Treasury is enough in itself to justify a visit,
but this is only the beginning.
We hurriedly climbed 700 steps up to the Monastery, a temple or tomb carved into the mountain
summit, drinking hot, sugary mint tea at the top in a cafe offering a view over the whole site. The
cafe owner appeared bemused by the reluctance of tourists to come. "It's perfectly safe here; there
are no terrorists here. But people have stopped coming," he told us.
The Foreign Office travel advice notes that there is "a high threat from terrorism" in Jordan – but
it makes the same warning about Egypt, and also Germany and France.
Jordan's defensiveness has built up over the past 15 years, as its location, sandwiched between
Syria, Iraq, Palestine, Israel and Egypt, has conspired to discourage visitors. First there was the
Intifada of 2000, then 9/11, then the war in Iraq. Then, just as things were beginning to pick up, in
2008 there was the global recession and, later, the violent aftermath of the Arab Spring,
culminating in a civil war in Syria. The key thing to remember is that the Foreign Office website
does not advise against travel to anywhere except a two-mile strip along the Syrian border (which
is far from tourist sites); it also notes that 60,820 British nationals visited Jordan in 2015 and that
most visits are trouble-free.
We had lunch at a restaurant under a canopy of trees at the centre of the site. Towards the end of
the day, we walked around the back of the site, up another 670 steps, past tombs and Bedouin
houses, to the High Place of Sacrifice, the exposed mountain plateau where the Nabateans
performed religious rituals. A long twisting trail leads to the summit. We arrived at sunset having
passed only four other tourists on the way up. The cafe near the top was closed, last year's price
list faded and flapping in the breeze, with only a goat inside. Dotted everywhere were groups of
camels sitting on the ground, their legs tucked beneath them, with no customers.
There was some bitterness at the fickle nature of the global tourist market. "A bomb goes off in
Turkey and people think 'We shouldn't visit Jordan,' " a jewellery seller said. A man selling bottles
of sand with camel shapes formed from different coloured layers, said this was the worst year since
2002, mournfully displaying the blown sand vases that are no longer selling. His friend's hotel had
just closed and his business was very slow.
We walked back down at dusk, hurrying to make sure we were on flat ground before the light
disappeared completely. There was no one else in the courtyard in front of the Treasury, and we
walked silently up the Siq in the half light, watching the shadows creep up the rock, until it was
totally dark. We heard the sound of donkeys being led back to their stables, but barely saw them in
the darkness. As we reached the end of the gorge, we saw the beginnings of preparations for Petra
by Night, with candles being lit so tourists can walk along in the late evening.
We stayed at the lovely Petra Palace (doubles/twins from £57 B&B) in Wadi Musa, just a few
hundred metres from the entrance to the site, and ate in a cafe a few doors down, enjoying small
plates of hummus, kibbeh (fried lamb meatballs), baba ganoush (chargrilled aubergine), tabouleh
and stuffed vine leaves.
On our second day we drove for two hours down to Wadi Rum, the spectacular desert that T E
Lawrence described as "vast, echoing and godlike", with its rocks like melted wax emerging on the
skyline, their colours shifting in the light and alien shapes forming from the cavities. We hired a
guide, Abdullah, who drove us to sand dunes where we could climb the rocks for amazing views
and later we set up camp at the foot of a sandstone cliff. Abdullah made a fire and cooked meat-
and-vegetable stew and potatoes, which we ate by torchlight, and then rolled out carpets on the
sand so we could bed down in sleeping bags in the open. We woke up before dawn to watch the
sunrise over the rocks, and walked up one of the cliffs before a breakfast of sesame paste halva and
tea.
Later in the week we travelled to the Dead Sea for a night, where there were just a handful of
people floating in the salty water when we arrived at dusk, and then 30 miles north of Amman to
see Jerash, a huge Graeco-Roman settlement, with theatres, colonnades, a hippodrome, triumphal
arches, squares and mosaics depicting scenes of daily life, all well-preserved after an earthquake
in 749 buried the ruins in sand for centuries. It felt such a privilege to see this remarkable place so
empty, so unlike the jostling experience of walking through the Forum in Rome. In a time when
Peru has set a limit on the number of people walking the Inca Trail, and residents in Venice are
protesting against tourist numbers, visiting Jordan feels like being transported back to another
era, before charter flights and package holidays. When we arrived at 9.30am, there was one tour
bus in the car park. It got busier towards lunchtime, but most of the time we were alone among the
amphitheatres and plazas. We drank cardamom coffee in the Temple of Artemis, watching lizards
dart out from between the Corinthian columns, while a stray kitten tried to climb into my handbag.
Jordan clearly needs tourists to return. The big chain hotels are managing to weather the storm by
shifting marketing to locals, but the smaller businesses are suffering. "Before 2011, 70% of our
business came from Russia, Scandinavia, Germany and the UK. Now that has shifted to 70% of our
business coming from Jordan, Lebanon, Palestine and expat Iraqis," the manager of Dead Sea
Kempinski told me. "The bigger hotels can shift to weddings and the local market, but those who
are most affected are the people selling trinkets."
The Jordanian Tourism Board is fighting back in imaginative ways. It recently brought a group of
film producers and directors over from Los Angeles to show them the country's superb film
locations. It has encouraged Instagram stars to come and post picturesque scenes from the desert.
They like to remind visitors of the number of films made in Jordan, from Lawrence of Arabia to
Theeb, which won the Bafta for best foreign film last year, to The Martian and Indiana Jones and
the Last Crusade. The board is optimistic for 2017; UK visitors are already up 6% this year and the
Russian market is up 1,200%.
Jordan is home to 635,000 refugees from Syria, 80,000 of them in the Zaatari refugee camp in the
north of the country, and the World Bank has estimated that about a third of the country's nine
million population is made up of refugees: Palestinians and Iraqis as well as Syrians.
The country's attitude towards the crisis is in marked contrast to that of some other nations. "We
welcome refugees: they are our relatives," said our guide in Jerash, Talal Omar. "We have a long
history together and we speak the same language. You having a good holiday in Jordan is helping
Jordan tackle that issue. The money that tourists bring in to our country helps pay the overheads
we have from the refugees."
I'm not sure that going on holiday in Jordan can be presented as directly aiding the refugee crisis,
but certainly the reverse is true: the absence of tourism is hugely problematic for the country.
Vocabulary exercise
1. disconcerting (paragraph 2)
After a while these repeated reassurances began to have the opposite of the desired effect, and
actually became rather disconcerting.
2. livelihood (paragraph 3)
As a tourist, you can't help feeling worried for the people who depend for their livelihood on the
travel industry (which has historically contributed about 20% of GDP)
3. stupefied (paragraph 7)
but there was so much we wanted to see that we walked 12 miles, racing around in the heat to pack
everything in, overwhelmed and stupefied by the quantity of beautiful tombs and facades.
4. mesmerising (paragraph 7)
Guidebook photographs do no justice at all to the splendour of the site, the monumental
architectural talent of the Nabateans (the nomadic people who built Petra) and the mesmerising
way sunlight changes the colour of the rock
5. bemused (paragraph 8)
The cafe owner appeared bemused by the reluctance of tourists to come. "It's perfectly safe here;
there are no terrorists here. But people have stopped coming," he told us.
6. culminating in (paragraph 10)
Then, just as things were beginning to pick up, in 2008 there was the global recession and, later,
the violent aftermath of the Arab Spring, culminating in a civil war in Syria.
7. weather the storm (paragraph 17)
Jordan clearly needs tourists to return. The big chain hotels are managing to weather the storm
by shifting marketing to locals, but the smaller businesses are suffering.
Write your own sentences with the vocabulary
1. disconcerting
2. livelihood
3. stupefied
4. mesmerising
5. bemused
6. culminating in
7. weather the storm
EXERCISE 4
The myth of meritocracy in education
Summary
This article argues that the belief both that success in education is largely based on merit (i.e.
based on hard work and/or ability) and that it helps to make a fairer society is wrong. Through
examining a number of countries, it explains how the educational system in those countries
reinforces the existing social order (benefiting those from more affluent backgrounds and men). It
ends by suggesting some things that can be done to make education fairer to all members of
society.
The idea of meritocracy has long pervaded conversations about how economic growth occurs in the
United States. The concept is grounded in the belief that our economy rewards the most talented
and innovative, regardless of gender, race, socioeconomic status, and the like. Individuals who rise
to the top are supposed to be the most capable of driving organizational and economic
performance.
More recently, however, concerns about the actual effects of meritocracies are rising. In the case of
gender, research across disciplines shows that believing an organization or its policies are merit-
based makes it easier to overlook the subconscious operation of bias. People in such organizations
assume that everything is already meritocratic, and so there is no need for self-reflection or
scrutiny of organizational processes. In fact, psychologists have found that emphasizing the value
of merit can actually lead to more bias in favor of men.
Ironically, despite growing recognition of the pitfalls of meritocracy for women and minorities, the
concept has been exported to developing countries through economic policies, multilateral
development programs, and the globalization of media and curricula. In countries with deep social
divisions like India, where the number of women in the workforce dropped 11.4 percent between
1993 and 2012, the mantra of meritocracy has taken hold as a potential means to overcome these
divides and drive economic growth, especially in education.
Economist Claudia Goldin wrote in the Journal of Interdisciplinary History that when it comes to
education, historically, "Americans equate a meritocracy with equality of opportunity and an open,
forgiving, and publicly funded school system for all. To Americans, education has been the great
equalizer, and generator, of a ‘just' meritocracy." This idea also pervades economic development
and policy circles. India, a society once famous for its caste inequality and number of "missing
women," has embraced the value of meritocracy for the modern economy and now touts its success
in advancing merit over historic prejudices in education. Since 2010, the Right to Education Act
has guaranteed free schooling for all Indian children up to age 14, and by 2013, 92 percent of
children, nearly half of them girls, were enrolled in primary school, up from 79 percent in 2002.
And yet equal access to schools does not guarantee that the best and brightest will succeed. During
fieldwork in the Indian Himalayas between 2014 and 2015, we observed the poor quality of
education available to local children. Teachers were frequently absent from school, buildings
29
lacked proper sanitation, and parents often had to pay additional fees despite government
mandates. Rote memorization was a common teaching method, and many children had difficulty
answering questions that were not in the same format they learned. Studies by academics and the
United Nations Development Programme show similar problems in schools serving poor and rural
communities across India. Students at these schools do not receive the same quality of education
as their wealthy, urban peers, making it more difficult for them to succeed on merit.
Indians also hold up their exam-based university admission system as an example of meritocracy:
university acceptance is based only on exam scores. This belief in meritocracy may allow Indians to
overlook continuing disparities in acceptance rates and the underrepresentation of women in
STEM fields. To be accepted at elite Indian universities, students must score in the top percentiles
of national exams. But achieving a good exam score is not solely based on merit because of
differential access to resources. Liberalization in the education sector has created a boom in
private schools across India, as well as a thriving educational services industry. Private tutors,
after-school courses, test prep centers, and accelerated English programs abound, promising to
give children the extra edge they need to pass university entrance tests. Students from privileged
backgrounds with expensive private educations, highly educated parents, and the resources to
access test prep services consistently score higher on national exams than others.
Gender exacerbates these class differences, particularly in terms of admission to elite STEM
institutions – the Indian Institutes of Technology, or IITs. Only 8 percent of students at IITs are
women, though a much higher percentage of women study STEM subjects in high school. Fewer
women attend coaching classes in preparation for the IIT entrance exam, making them less likely
to receive sufficient admission scores. This underrepresentation seems to be due partly to the
belief, also common in the United States and elsewhere, that women are less suited to technical
jobs, and partly to parents' greater willingness to invest in a son's education. In India, sons are
expected to contribute to family income over the long-term. Daughters, on the other hand, are not
seen as long-term contributors, because they will marry into another family and are less likely to
enter the workforce. Because families do not invest as much in women's success in STEM fields,
female students are less likely to achieve the high exam scores required for IIT admission.
Believing in meritocracy has also allowed successful Indians to dismiss the continued presence of
bias. In 2006, the government announced plans to set aside additional places in federally-funded
universities for students from marginal caste groups. In response, medical students and doctors
demonstrated in cities across India, claiming that these quotas would "compromise the quality" of
health care and put patients at risk, because there would be fewer seats available to students with
the highest test scores. The demonstrators, mostly middle and upper-class people from urban
areas, also asserted that higher scores were a mark of higher intelligence. The supposedly objective
nature of admissions tests led these people to overlook how money, connections, and parental
involvement had ensured that they could do well on the entrance exam.
This story should sound familiar to readers regardless of their country of origin. Whether in the
United States or China, the mantra of meritocracy often helps divert attention from ongoing
inequalities.
In the United States, wealthier parents can also be more involved in children's education and
provide additional resources that ensure academic success. Studies by Harvard researchers have
demonstrated that SAT test questions are unconsciously biased in favor of white, middle class
students. Such inequalities have contributed to the growing educational achievement gap between
rich and poor Americans. But since success is widely believed to be the result of individual merit,
poor students are blamed for their failures.
30
While there is little gender bias in access to education in the United States, and more women than
men earn bachelor's degrees, gender bias continues at colleges in a surprising way. A smaller
percentage of female applicants are accepted to elite colleges than male applicants, because many
more qualified women apply to these schools. Universities accept a lower percentage of women to
maintain a gender ratio closer to 50-50, and since they are exempt from Title IX (the law banning
gender discrimination in the United States), these practices have gone unchallenged. Having
fewer women with elite degrees only compounds the well documented discrimination American
women continue to face in the labor market.
Like India, China relies on a national exam to determine admission to university. But rural,
migrant, and disabled children systematically receive lower-quality schooling than their urban
counterparts, resulting in lower exam scores. In fact, students from Beijing with access to better
schools are 41 times more likely to gain admission to top Chinese universities than students from
poor rural areas. Girls from rural areas are even more disadvantaged as they are less likely to
graduate from high school than boys and are consequently underrepresented in the college
population. The Chinese government has not made moves to address these issues, because the
school system is widely regarded as meritocratic, thus justifying any imbalances as the result of
differential ability and not differential access.
This is not to say that we should quit striving toward meritocracy. In its pure form, it is a worthy
ideal. But we must recognize that the idea of meritocracy has largely served to entrench the
privileges of the elite and justify their success. Claiming to be in a meritocracy is not the same as
achieving it, and policies created to improve opportunities, like exam-based admission, may not
work as expected. To move beyond the rhetoric of meritocracy to actual merit-based systems will
require significant changes to education and university admission, including:
1. Improving educational access for all in order to improve preparation for students of all
genders, races, and socioeconomic backgrounds. Governments could recruit and train
teachers from underserved communities, who will be more invested in student success, to
address imbalances in educational quality for girls and boys. This has been an important
recommendation for achieving better educational outcomes for indigenous peoples in Canada
who have been chronically under-served by the current education system. Governments
could also collaborate with local NGOs to train new teachers and improve school facilities
without dramatically increasing costs.
2. Changing admission processes to make university education more accessible to all
underrepresented groups. Admissions tests could be written by people of all genders from
many different backgrounds to ensure that one kind of life experience is not over-
represented. Alternatively, some schools are foregoing the use of entrance exams entirely in
order to engage in a more holistic assessment of university candidates. A recent study found
that high school grades are a better predictor of student success in college than standardized
test scores, and an increasing number of American universities are now "test-optional."
The advantage of exams is that they provide a score that can easily be compared across candidates.
The advantage of a holistic assessment is that it can account for multiple criteria of excellence.
Neither is a panacea for bias, which can creep in at different points. For exams, biased access to
resources or bias in the test may lead to differential test scores. For holistic assessments, the basis
of comparison is unclear and therefore prone to bias at the point of decision. Although the more
holistic admission system in the United States is also open to manipulation by elites, it can at least
31
attempt to account for the different advantages that individual applicants possess. Given the
challenges in education, acknowledging existing inequalities holds more promise for success than
believing we already know how to be meritocratic.
32
Vocabulary exercise
1. is grounded in (paragraph 1)
The concept is grounded in the belief that our economy rewards the most talented and
innovative, regardless of gender, race,
2. pitfalls of (paragraph 3)
Ironically, despite growing recognition of the pitfalls of meritocracy for women and minorities,
the concept has been exported to developing countries
3. disparities (paragraph 6)
This belief in meritocracy may allow Indians to overlook continuing disparities in acceptance
rates and the underrepresentation of women in STEM fields.
4. abound (paragraph 6)
Private tutors, after-school courses, test prep centers, and accelerated English programs abound,
promising to give children the extra edge they need to pass university entrance tests.
5. compounds (paragraph 11)
Having fewer women with elite degrees only compounds the well documented discrimination
American women continue to face in the labor market.
6. entrench (paragraph 13)
This is not to say that we should quit striving toward meritocracy. In its pure form, it is a worthy
ideal. But we must recognize that the idea of meritocracy has largely served to entrench the
privileges of the elite and justify their success.
7. are foregoing (paragraph 15)
Alternatively, some schools are foregoing the use of entrance exams entirely in order to engage
in a more holistic assessment of university candidates.
33
Write your own sentences with the vocabulary
1. is grounded in
2. pitfalls of
3. disparities
4. abound
5. compounds
6. entrench
7. are foregoing
34
EXERCISE 5
The comparative failure of online grocery
shopping
Summary
This article talks about the reasons why, unlike with other products (e.g. clothes, books, electronics
etc...), buying food/groceries online has not taken off in the United States. It explains the factors
which have caused this and what companies are now doing in order to get more consumers to do
their weekly food shopping online rather than in person in a supermarket.
Nearly 30 years ago, when just 15 percent of Americans had a computer, and even fewer had
internet access, Thomas Parkinson set up a rack of modems on a wine rack and started accepting
orders for the internet's first grocery delivery company, Peapod, which he founded with his brother
Andrew.
Back then, ordering groceries online was complicated: most customers connected through slow dial-up
modems, and Peapod's web graphics were so rudimentary that customers couldn't see images of what
they were buying. Delivery was complicated, too: The Parkinsons drove to grocery stores in the
Chicago area, bought what customers had ordered, and then delivered the goods from the backseat
of their beat-up Honda Civic. When people wanted to stock up on certain goods, such as strawberry
yogurt or bottles of Diet Coke, the Parkinsons would deplete the stocks of the requested items in
local grocery stores.
Peapod is still around today. But convincing customers to order groceries online is still nearly as
difficult now as it was in 1989. Whilst 22 percent of apparel sales and 30 percent of
computer and electronics sales happen online today, only 3 percent of grocery sales do, according
to a report from Deutsche Bank Securities. "My dream was for it to be ubiquitous, but getting that
first order can be a bit of a hurdle," Parkinson told me from Peapod's headquarters in downtown
Chicago. (He is now Peapod's Chief Technology Officer; his brother has since left the company.)
Until online grocery-delivery companies are delivering to hundreds of homes in the same
neighborhood, it will be very hard for them to make a profit. Though it is an $800 billion business,
grocery is famously low-margin; most grocery stores are barely profitable as it is. Add on the labor,
equipment, and gas costs of bringing food to people's doors quickly and cheaply, and you have a
business that seems all but guaranteed to fail. "No one has made any great amount of money
selling groceries online," Sucharita Kodali, an analyst with Forrester Research, told me. "In fact,
there have been a lot more people losing money."
This is not true in every country. In South Korea, 20 percent of consumers buy groceries online,
and both in the United Kingdom and Japan, 7.5 percent of consumers do, according to Kantar
Consulting. But those are countries with just a few large population centers, which makes it easier
for delivery companies to set up shop in just a few big cities and access a huge amount of
purchasing power. In the United States, by contrast, people are spread out around rural, urban,
35
and suburban areas, making it hard to reach a majority of shoppers from just a few physical
locations. In South Korea and Japan, customers are also more comfortable with shopping on their
phones than consumers are in countries like the United States.
Still, companies keep trying to make online grocery delivery work in the United States. Today,
Peapod is one of dozens of companies offering grocery delivery to customers in certain metro
areas. In June 2017, Amazon bought Whole Foods for $13.4 billion and started rolling out grocery
delivery for its Prime members in a number of cities across the country; analysts predicted at the
time that the company's logistics know-how would allow it to leverage Whole Foods stores to
dominate grocery delivery. Also in 2017, Walmart acquired Parcel, a same-day, last-mile delivery
company. Two months after that, Target said it was buying Shipt, a same-day delivery service.
Kroger announced last May it was partnering with Ocado, a British online grocer, to speed up
delivery with robotically operated warehouses. Companies like ALDI, Food Lion, and Publix have
started working with Instacart to deliver groceries from their stores. FreshDirect recently opened a
highly automated 400,000-square-foot delivery center and says it plans to expand to regions
beyond New York, New Jersey, and Washington D.C. in the coming year.
The story of Peapod, which has had 30 years to perfect the art of online grocery delivery, suggests
that making money will be a challenge for even deep-pocketed retailers like Amazon. Peapod has
more experience than any other online grocery delivery company. It outlasted Webvan, which
raised $800 million before crashing in 2001, and beat out other big bets of the dot-com boom such
as Kozmo, Home Grocer, and ShopLink.
Peapod itself nearly failed in 2000 before being rescued by the Dutch conglomerate Royal Ahold
NV, which first bought a controlling interest and then later the entire company. (After a recent
merger, Peapod's parent company is now called Ahold Delhaize; it owns supermarket chains like
Food Lion, Hannaford, and Stop & Shop.) In 2016, Peapod was only in the black in three markets,
while losing money in the rest, a Peapod executive told The Wall Street Journal that year. The
company has not been able to get enough people to buy groceries online to lower the costs of
delivering them. If a company with 30 years of experience in grocery delivery can't make it work,
can anyone?
Compared to groceries, where many of the items are perishable and/or bulky, clothes and
electronics and dog food are incredibly simple to deliver. A company like Amazon keeps those
products stored in a warehouse, packs them in a box, and sends them on their way through the
mail or through its delivery contractors.
Groceries, though, can't just be packed in a box and entrusted to mail carriers. Imagine fulfilling
an order that includes popsicles, avocados, a case of Coke and tortilla chips. The popsicles have to
be kept cold; the avocados have to be chosen carefully, the Coke is heavy and the tortilla chips can't
be crushed. Now consider that the average Peapod order has 52 items.
Because of these factors, it will always be cheaper for grocery stores to have customers come to
them, and do all the work of shopping themselves, than it will be for the stores to bring the
groceries to the customers, said Kodali, the Forrester analyst. "In the best case, you only make the
same as what you would make in stores," Kodali said. "It's not like it's a more profitable
distribution channel." One of Amazon's big innovations delivering packages was that it could cut
out the middleman (the store) and sell things directly to consumers. But food and beverage
retailers can't cut out the stores, since they don't have the infrastructure in place to get their
products, whether it be ice cream or avocados, directly to consumers.
36
Peapod has tried to lower its overheads in a few ways. In some markets, it keeps groceries in vast
warehouses outside of town, which saves money because the company doesn't have to buy or rent
expensive retail space in city centers. Peapod has figured out how to make the shopping part of
online grocery delivery relatively fast, which means one employee can process dozens of orders in
just a few hours. In "warerooms," which are essentially smaller stores on top of grocery stores,
aisles are much narrower than they are in regular grocery stores. Employees wear devices on their
wrists that tell them on what aisle and shelf a product is located, and they load food into baskets
efficiently, scanning barcodes. Workers get intimately familiar with where various items are
located, allowing them to shop quickly.
Despite Peapod's innovations, the whole process is very labor-intensive. Peapod's workers still
have to scan the groceries packed into orders with a temperature gun to make sure meat hasn't
gotten too warm; they also have to audit each customer order to make sure that items aren't broken
and that nothing is missing. (Ahold, Peapod's parent company, is already using robots to speed
some parts of packing customers' orders.)
Delivery can be slow-going, too. I tagged along with one Peapod driver, Ricardo Bernard, on a
Friday afternoon as he brought groceries to consumers' doorsteps in a wealthy neighborhood of
Chicago. We were assigned 19 stops in Chicago's South Loop, which was heavily congested and
included a number of apartment buildings; Bernard kept having to park the truck in narrow spots,
get out, unload the orders, call the tenant from an intercom (or get let in by a doorman), wait for
an elevator, ride the elevator, and then wait for tenants to open their doors and hand over the
goods, a process that can take more than 10 minutes for each delivery.
The most efficient grocery delivery companies are really logistics companies. Employees at
Peapod's headquarters tinker with routes and monitor weather and traffic in real time so they can
make changes if a storm is coming or a concert is causing congestion, all to shave seconds or
minutes off of delivery routes. The company times how long drivers are sitting in traffic, how long
they go between deliveries, how much time they spend with customers. It rewards drivers who get
deliveries completed faster than average but who maintain high scores from customers.
Grocery companies may have to spend more money opening more brick-and-mortar stores to
make logistics easier and to lessen the amount of time delivery drivers have to be on the road. A
DA Davidson analyst, Tom Forte, recently wrote that he thought Amazon should acquire
thousands of gas stations to "advance its delivery efforts."
Even though it's spent years shaving seconds off of deliveries, Peapod struggles to make the
financials work. The company charges a delivery fee ranging from $6.95 to $9.95. That might seem
steep to people accustomed to getting everything delivered for free, but does not come close to
covering the costs associated with bringing groceries to customers' doors. "Getting costs down is a
work in progress," Ken Fanaro, Peapod's senior director of transportation planning and
development, told me. Online grocery delivery is really only cost-efficient when companies can
spend the bulk of their time bringing groceries into homes from trucks, rather than driving miles
and then bringing groceries into homes. "In a perfect world, we'd be like a mailman, going down
the street, delivering at every home," he told me.
Today, the markets where Peapod is profitable are the densest ones, like New York City. Even
Amazon struggles in suburban markets, announcing last year that it was suspending its Amazon
Fresh delivery service in regions of New Jersey, Pennsylvania, and Maryland, while maintaining
service in cities like New York City, Chicago, and Boston.
37
Grocery stores are stuck in a tough place right now. They're facing challenges from big retailers like
Walmart and Target, which have started offering produce and fresh food, and from discount
chains like Aldi and Lidl, which recently started adding stores in the U.S. Now, as Amazon enters
more markets, it's forcing grocery stores to offer delivery, too, even though they'll lose money on it.
If they don't, customers may go somewhere else. Amazon is using its deep pockets to undercut its
competitors on price, taking a page from other tech startups like Uber who tried to corner the
market first and then make money after.
Some supermarkets have experimented with offering ways to make shopping easier for consumers
that are not as expensive as grocery delivery. Walmart, Kroger, Safeway, and a number of other
stores offer "click and collect," for example, which allows consumers to order their groceries online
and then drive to the store and pick them up. Click-and-collect represents nearly half of online
grocery sales, according to Nielsen data, up from 18 percent in 2016. Amazon is covering both of
these bases: in addition to its delivery options, the company has launched Go stores in Seattle,
Chicago, and San Francisco that allow customers to walk in, select items, and walk out without
waiting in line to pay.
But ubiquity remains the holy grail of grocery delivery, and all the stores know it. So they're
offering discounts and deals to get customers to sign up for delivery services, making thin margins
even thinner. Most online grocery delivery services offer free delivery on a customer's first order,
for example. According to Elley Symmes, a senior analyst on Kantar Consulting's grocery team, the
number-one reason many customers got groceries delivered was that they received an incentive to
do so. But when those promotions go away, so do the customers. "Delivery costs continue to be a
barrier to entry," Symmes told me.
To be able to offer those incentives without going bankrupt, some supermarkets are partnering
with brands to get the cost of delivery subsidized. Colgate may offer free delivery if a customer
buys a certain number of Colgate products, for instance.
Cost might not be the only reason customers aren't flocking to grocery delivery. I asked a few
shoppers in a Massachusetts Stop & Shop why they weren't getting their groceries delivered. Most
said they liked picking out their own meat and produce, and that they don't like planning their
shopping ahead of time. Mike Kolodziej, 37, told me he actually likes going to the grocery store.
"It's my quiet time," he said. He has five kids at home. And besides, unlike other industries tech
has disrupted going to the post office, taking cabs in certain cities going grocery shopping isn't
all that unpleasant. In the suburbs, people get in their cars and drive to spacious stores where they
can pick out the produce they like and also find out about new products on the shelves, said David
J. Livingston, a supermarket analyst for DJL Research. Some stores offer other services, like
prescription pick-up or wine bars, that make them an experience people enjoy. As a result, they're faced with
the daunting task of making stores more appealing to people while also making delivery appealing
too.
Still, analysts say that now is the time to convert more customers to online grocery delivery. About
41 percent of consumers neither like nor dislike shopping for products like beverages and
perishable goods in grocery stores, according to a Deloitte survey. Deloitte argues that there are
many consumers "who are not emotionally attached to the physical shopping process and might
consider online shopping options if they were offered." They include Jim Winnfield, who recently
got his first online grocery delivery order; he used to live in the Chicago suburbs, but recently
moved downtown, and decided to give Peapod a try. "I'm lazy enough that I want people to do as
much for me as possible," he told me. Winnfield's first delivery was free.
38
However they get customers to sign up, supermarkets are likely going to have to spend a lot of
money in promotions and deals as they try to make delivery more popular among consumers. This,
of course, advantages Amazon, which has deep pockets and has long been able to convince
shareholders that spending upfront on getting customers in the door has long-term dividends. This
has never been Peapod's strategy; it outlasted competitors like Webvan because it never spent a
lot of money it didn't have, Parkinson told me.
But even Peapod is now getting into the battle for customer share: it is launching self-driving
grocery delivery vehicles in Boston and now offers $20 off groceries and no delivery fees for the
first 60 days a customer uses the service. And even though it has outlasted its competitors over the
past few decades by being careful with money, Parkinson told me that they are now coming around
to the fact that customers are cheap, and whatever company makes its services the cheapest just
might win.
39
Vocabulary exercise
1. deplete (paragraph 2)
When people wanted to stock up on certain goods, such as strawberry yogurt or bottles of Diet Coke, the
Parkinsons would deplete the stocks of the requested items in local grocery stores.
2. rolling out (paragraph 6)
In June 2017, Amazon bought Whole Foods for $13.4 billion and started rolling out grocery
delivery for its Prime members in a number of cities across the country
3. perishable (paragraph 9)
Compared to groceries, where many of the items are perishable and/or bulky, clothes and
electronics and dog food are incredibly simple to deliver.
4. overheads (paragraph 12)
Peapod has tried to lower its overheads in a few ways. In some markets, it keeps groceries in vast
warehouses outside of town, which saves money because the company doesn't have to buy or rent
expensive retail space in city centers.
5. tinker with (paragraph 15)
Employees at Peapod's headquarters tinker with routes and monitor weather and traffic in real
time so they can make changes if a storm is coming or a concert is causing congestion, all to shave
seconds or minutes off of delivery routes.
6. corner the market (paragraph 19)
Amazon is using its deep pockets to undercut its competitors on price, taking a page from other
tech startups like Uber who tried to corner the market first and then make money after.
7. flocking to (paragraph 23)
Cost might not be the only reason customers aren't flocking to grocery delivery. I asked a few
shoppers in a Massachusetts Stop & Shop why they weren't getting their groceries delivered. Most
said they liked picking out their own meat and produce
40
Write your own sentences with the vocabulary
1. deplete
2. rolling out
3. perishable
4. overheads
5. tinker with
6. corner the market
7. flocking to
41
EXERCISE 6
Can fashion be considered to be art?
Summary
This article discusses whether fashion is a form of art. In it the author argues that although it may
contain aspects which could be considered artistic, overall the fashion industry and those who
form part of it cannot be considered art, due to their very nature.
Fashion entertains a wide variety of pretensions, not the least of which is a desire to be considered
as art of the highest order.
Every year sees a plethora of exhibitions (e.g. 100 Years of Art and Fashion at the Hayward Gallery
in London) and books (e.g. The Influence of Contemporary Clothing Design on Picasso) coming
out which all assert the industry's artistic claims. The upcoming CPD, the world's biggest
clothing trade fair, in Dusseldorf, is to include no fewer than three exhibitions based on the same
theme: fashion and art. All such events have faced the same challenge: to present fashion in a
manner which gives credence to the notion that this is a branch of creative art as worthy of serious
scrutiny as any other. All fail to realise their ambition because they overlook the simple fact that
commerce rather than creativity lies at the heart of fashion.
The past 150 years have been marked by clothes designers' efforts to raise their status, and other
art forms have been pressed into service whenever possible. The first acknowledged couturier (as
opposed to dress maker) was Charles Worth, whose clothes for the French Imperial court during
the 1860s were disseminated throughout Europe thanks to the paintings of Franz Winterhalter.
From Worth's point of view, these pictures might almost be regarded as an early and advantageous
form of advertising, another branch of commercial creativity which would later lure many other
visual artists. What consistently emerges from even a cursory study of the links between fashion
and art is that representatives of the latter tend to have a strong sense of self-marketing.
It is surely no accident that the 20th century's most commercially-minded artist, Andy Warhol,
should have begun his career in the late 1940s as an illustrator for New York fashion magazines
such as Glamour and Harper's Bazaar. The lessons he learnt during that period about the
importance of meeting the market's demands were later applied to his own art.
In his introduction to the catalogue accompanying Addressing the Century, Peter Wollen makes a
plea for fashion's cultural gravitas, noting that "the design and making of garments has
traditionally been viewed as artisanal rather than artistic, however, this perception has gradually
been changing over the last 200 years." But what Wollen fails to comprehend is that for the vast
majority of people clothing, unlike paintings or music, still retains its inherent character which is
primarily functional and only secondarily decorative. Although in Europe and the US, for example,
second-hand garments may now be sold as antiques by established auction houses, this should not
necessarily mean they enjoy parity of regard with other items such as paintings and furniture, also
offered for auction by the same establishments. And when items of clothing do fetch a high price at
42
auction, it often has more to do with the original owner than with the item of clothing
itself.
Wollen further remarks that "painters as great as Cezanne and Monet drew on fashion magazines
for their imagery" but again, this merely reflects a particular society's leisure interests and can
hardly be considered a serious argument in favour of fashion as art.
There have, of course, been many occasions during this century when art and fashion enjoyed a
particularly close association. During the early decades of the twentieth century, a number of art
movements, most notably Futurism in Italy, Vorticism in Britain and Constructivism in Russia,
showed an interest in clothing design. These tended, however, to take the form of uniforms
reminiscent of those devised by Jacques-Louis David in post-revolutionary France. The concern in
every instance was not with fashion as a creative form but with ridding clothes of their social and
cultural implications.
In Italy, for example, Ernesto Thayaht produced and promoted a unisex overall which he called the
Tuta, derived from the word tutta, meaning "all". Fashion's constant striving for the novel, based
on a commercial necessity to sell fresh product to its clients every season, is the very antithesis
the Tuta or the clothes created by Rodchenko and his fellow Constructivists.
More oblique and pervasive links between fashion and art last century include the role of
Diaghilev's Ballets Russes in helping to redefine how women dressed before the First World War.
The most important couturier of the period, Paul Poiret, based many of his designs on the clothes
produced for ballet dancers by Leon Bakst and Alexandre Benois; these were distinguished by their
ability to permit free movement in contrast to the restrictions which had defined late 19th-century
women's fashion. Poiret employed a number of artists, including Dufy and Derain, to create fabric
designs for him; perhaps inevitably, the most successful work came from painters whose work
contained the strongest decorative element. Similarly, around the same time in Italy, Mariano
Fortuny, whose Delphos dress epitomises fashion's first stirrings of interest in feminism, was
greatly influenced by his friendships with writer Gabriele D'Annunzio and actress Eleanora Duse.
Later, the connections between fashion and ballet were to become overt when, in 1922, Chanel
created the costumes for the Ballets Russes' Le Train Bleu, with music commissioned from Darius
Milhaud. Around the same time, she also costumed a contemporary production of Sophocles's
Antigone, rewritten by Jean Cocteau. Chanel's clothing, both on and off-stage, continued the
move initiated by Poiret towards greater ease and freedom. For her own collections and for Le
Train Bleu she used jersey, a fabric which was then the very essence of modernity and in its daring
might be considered fashion's equivalent of Cubism, especially since Chanel was a friend of Picasso
and other painters at the time.
But the ongoing divide between fashion and art is illustrated by another, less happy, instance from
the same period. In 1925, Man Ray was commissioned by Vogue to photograph for the magazine a
sequence of couture-clad mannequins at the Exhibition of Decorative and Industrial Arts in Paris.
These were due to appear in the August issue but before that date, the photographer allowed one
picture to appear on the cover of the periodical La Revolution Surrealiste accompanied by the
slogan "et guerre au travail" ("and war on work"). Ray's Vogue cover was immediately pulled
because this act of cultural subversion undermined fashion's commercial character. Fashion
literally cannot afford to be ironic, much less anarchic, because of the potential threat this poses to
sales.
Nonetheless, the most effective link between art and fashion to date was during the 1930s when
the designer Elsa Schiaparelli joined forces with a number of Surrealists, particularly Salvador
43
Dali. Might this not be because the Surrealists' greatest interest and abilities tended to lie in the
area of self-promotion and many of them, not least Dali, were strongly motivated by the possibility
of making money from their work? Schiaparelli's clothes are conservative in essence and surrealist
only in their decorative detail: a piece of witty trompe l'oeil beading on a jacket, for example, or a
hat in the form of a vegetable or item of household furniture.
Many surrealist motifs resurfaced in New York fashion circles during the 1950s, when they were
employed with particular success by window dressers such as Gene Moore and Tom Lee at
Tiffany's and Bonwit Teller. And the same elements regularly featured in fashion photography of
the period because many of those who worked for Vogue and Harper's Bazaar, especially Horst P.
Horst and Erwin Blumenfeld, spent the pre-war years in Paris where they had known many
members of the Surrealist movement.
Fashion photography probably remains the field with the highest claim to artistic credentials.
Traditionally, the primary purpose of such photography has been to sell the clothes featured.
However, in many contemporary style magazines (everything from Sleaze Nation to Attitude) the
clothes are intensely pedestrian. Unable to be creative with such basics as sweatshirts, jeans or
trainers, photographers such as Jurgen Teller and Wolfgang Tillmans have produced fashion
shoots in which the clothes are incidental. These pictures, reflecting the world in which T-shirt
consumers live, seem as much at home in a gallery as in the advertising pages of a style magazine.
Otherwise, the distance between fashion and art remains as great as ever. There continue to be
designers who cannibalise art history for inspiration, whether Yves Saint Laurent with his
Mondrian couture dresses for winter 1965 or Gianni Versace, whose silk evening dresses for
spring/summer 1991 were printed with Andy Warhol's polychrome portraits of James Dean and
Marilyn Monroe.
Other designers are less obvious about their borrowings, but these nonetheless eventually become
evident: in the past, both Georgina Godley and Rei Kawakubo of Comme des Garcons have made
clothes with distorted and padded sections which owe a strong debt to Oskar Schlemmer's designs
for the Bauhaus in the late 1920s. Fashion may continue to aspire to art, especially in its
presentation, but because it can never ignore the market's opinion it must remain artistically
flawed. The aspiration is unrealisable but the efforts to prove otherwise can often be highly
entertaining.
44
Vocabulary exercise
1. gives credence to (paragraph 2)
All such events have faced the same challenge: to present fashion in a manner which gives
credence to the notion that this is a branch of creative art
2. the notion (paragraph 2)
All such events have faced the same challenge: to present fashion in a manner which gives
credence to the notion that this is a branch of creative art
3. reminiscent of (paragraph 7)
These tended, however, to take the form of uniforms reminiscent of those devised by Jacques-
Louis David in post-revolutionary France.
4. devised (paragraph 7)
These tended, however, to take the form of uniforms reminiscent of those devised by Jacques-
Louis David in post-revolutionary France.
5. antithesis (paragraph 8)
Fashion's constant striving for the novel, based on a commercial necessity to sell fresh product to
its clients every season, is the very antithesis of the Tuta or the clothes created by Rodchenko
and his fellow Constructivists.
6. epitomises (paragraph 9)
around the same time in Italy, Mariano Fortuny, whose Delphos dress epitomises fashion's first
stirrings of interest in feminism, was greatly influenced by his friendships with writer Gabriele
D'Annunzio and actress Eleanora Duse.
7. incidental (paragraph 14)
Unable to be creative with such basics as sweatshirts, jeans or trainers, photographers such as
Jurgen Teller and Wolfgang Tillmans have produced fashion shoots in which the clothes are
incidental.
45
Write your own sentences with the vocabulary
1. gives credence to
2. the notion
3. reminiscent of
4. devised
5. antithesis
6. epitomises
7. incidental
46
EXERCISE 7
Ten inventions that radically changed our
world
Summary
This article gives the 10 most important inventions in the history of mankind. For each it gives a
little history about the invention and the impact that it had at the time of its creation and/or
subsequently.
Humans are an ingenious species. Though we've been on the planet for a relatively short amount of
time (Earth is 4.5 billion years old), modern Homo sapiens have dreamed up and created some
amazing, sometimes far-out, things. From the moment someone bashed a rock on the ground to
make the first sharp-edged tool, to the debut of the wheel to the development of Mars rovers and
the Internet, several key advancements stand out as particularly revolutionary. Here are our top
picks for the most important inventions of all time, along with the science behind each invention
and how it came about.
The Wheel
Before the invention of the wheel in 3500 B.C., humans were severely limited in how much stuff
we could transport over land, and how far. Apparently the wheel itself wasn't the most difficult
part of "inventing the wheel." When it came time to connect a non-moving platform to that rolling
cylinder, things got tricky, according to David Anthony, a professor of anthropology at Hartwick
College.
"The stroke of brilliance was the wheel-and-axle concept," Anthony previously told Live Science.
"But then making it was also difficult." For instance, the holes at the center of the wheels and the
ends of the fixed axles had to be nearly perfectly round and smooth, he said. The size of the axle
was also a critical factor, as was its snugness inside the hole (not too tight, but not too loose,
either).
The hard work paid off big time. Wheeled carts facilitated agriculture and commerce by enabling
the transportation of goods to and from markets, as well as easing the burdens of people traveling
great distances. Now, wheels are vital to our way of life, found in everything from clocks to vehicles
to turbines.
The Nail
Without nails, civilization would surely crumble. This key invention dates back more than 2,000
years to the Ancient Roman period, and became possible only after humans developed the ability
to cast and shape metal. Previously, wood structures had to be built by interlocking adjacent
boards geometrically, a much more arduous and time consuming construction process.
47
Until the 1790s and early 1800s, hand-wrought nails were the norm, with a blacksmith heating a
square iron rod and then hammering it on four sides to create a point, according to the University
of Vermont. Nail-making machines came online between the 1790s and the early 1800s.
Technology for crafting nails continued to advance; after Henry Bessemer developed a process to
mass-produce steel from iron, the iron nails of yesteryear slowly waned and by 1886, 10 percent of
U.S. nails were created from soft steel wire, according to the University of Vermont. By 1913, 90
percent of nails produced in the U.S. were steel wire.
Meanwhile, the screw, a stronger but harder-to-insert fastener, is thought to have been invented by
the Greek scholar Archimedes in the third century B.C.
The Compass
Ancient mariners navigated by the stars, but that method didn't work during the day or on cloudy
nights, and so it was unsafe to voyage far from land. The Chinese invented the first compass
sometime between the 9th and 11th century; it was made of lodestone, a naturally-magnetized iron
ore, the attractive properties of which they had been studying for centuries. Soon after, the
technology passed to Europeans and Arabs through nautical contact. The compass enabled
mariners to navigate safely far from land, increasing sea trade and contributing to the Age of
Discovery.
The Printing Press
The German Johannes Gutenberg invented the printing press around 1440. Key to its development
was the hand mold, a new molding technique that enabled the rapid creation of large quantities of
metal movable type. Though others before him, including inventors in China and Korea, had
developed movable type made from metal, Gutenberg was the first to create a mechanized process
that transferred the ink (which he made from linseed oil and soot) from the movable type to paper.
With this movable type process, printing presses exponentially increased the speed with which
book copies could be made, and thus they led to the rapid and widespread dissemination of
knowledge for the first time in history. Twenty million volumes had been printed in Western
Europe by 1500.
Among other things, the printing press permitted wider access to the Bible, which in turn led to
alternative interpretations, including that of Martin Luther, whose "95 Theses", a document
printed by the hundred thousand, sparked the Protestant Reformation.
The Internal Combustion Engine
In these engines, the combustion of a fuel releases a high-temperature gas, which, as it expands,
applies a force to a piston, moving it. Thus, combustion engines convert chemical energy into
mechanical work. Decades of engineering by many scientists went into designing the internal
combustion engine, which took its (essentially) modern form in the latter half of the 19th century.
The engine ushered in the Industrial Age, as well as enabling the invention of a huge variety of
machines, including modern cars and aircraft.
The Telephone
Though several inventors did pioneering work on electronic voice transmission (many of whom
later filed intellectual property lawsuits when telephone use exploded), Alexander Graham Bell
was the first to be awarded a patent for the electric telephone in 1876. He drew his inspiration
48
from teaching the deaf and also visits to his hearing-impaired mom, according to PBS. He called
the first telephone an "electrical speech machine," according to PBS.
The invention quickly took off, and revolutionized global business and communication. When Bell
died on Aug. 2, 1922, according to PBS, U.S. telephone service stopped for a minute to honor him.
The Light Bulb
When all you have is natural light, productivity is limited to daylight hours. Light bulbs changed
the world by allowing us to be active at night. According to historians, two dozen people were
instrumental in inventing incandescent lamps throughout the 1800s; Thomas Edison is credited as
the primary inventor because he created a completely functional lighting system, including a
generator and wiring as well as a carbon-filament bulb in 1879.
As well as initiating the introduction of electricity in homes throughout the Western world, this
invention also had a rather unexpected consequence of changing people's sleep patterns. Instead of
going to bed at nightfall (having nothing else to do) and sleeping in segments throughout the night
separated by periods of wakefulness, we now stay up except for the 7 to 8 hours we allot for sleep,
and, ideally, we sleep all in one go.
Penicillin
It's one of the most famous discovery stories in history. In 1928, the Scottish scientist Alexander
Fleming noticed a bacteria-filled Petri dish in his laboratory with its lid accidentally ajar. The
sample had become contaminated with a mold, and everywhere the mold was, the bacteria was
dead. That antibiotic mold turned out to be the fungus Penicillium, and over the next two decades,
chemists purified it and developed the drug Penicillin, which fights a huge number of bacterial
infections in humans without harming the humans themselves.
Without this discovery, modern medicine would be very, very different from what it is today. Whereas
before antibiotics (of which penicillin was the first), it was a lottery whether you would recover from
an operation (due to infections both during and after the procedure), with the advent of the drug
this is, in most cases, no longer an issue.
Contraceptives
Not only have birth control pills, condoms and other forms of contraception sparked a sexual
revolution in the developed world by allowing men and women to have sex for leisure rather than
procreation, they have also drastically reduced the average number of offspring per woman in
countries where they are used. With fewer mouths to feed, modern families have achieved higher
standards of living and can provide better for each child.
Meanwhile, on the global scale, contraceptives are helping the human population gradually level
off; our number will probably stabilize by the end of the century. Certain contraceptives, such as
condoms, also curb the spread of sexually transmitted diseases.
Natural and herbal contraception has been used for millennia. Condoms came into use in the 18th
century, while the earliest oral contraceptive, "the pill", was invented in the late 1930s by a chemist
named Russell Marker.
Scientists are continuing to make advancements in birth control, with some labs even pursuing a
male form of "the pill." A permanent birth-control implant called Essure was approved by the Food
49
and Drug Administration in 2002, though in 2016, the FDA warned the implant would need
stronger warnings to tell users about serious risks of using Essure.
The Internet
It really needs no introduction: The global system of interconnected computer networks known as
the Internet is used by billions of people worldwide and radically altered how we work, live and
communicate. Countless people helped develop it, but the person most often credited with its
invention is the computer scientist Lawrence Roberts.
In the 1960s, a team of computer scientists working for the U.S. Defense Department's ARPA
(Advanced Research Projects Agency) built a communications network to connect the computers
in the agency, called ARPANET. It used a method of data transmission called "packet switching"
which Roberts, a member of the team, developed based on prior work of other computer scientists.
ARPANET was the predecessor of the Internet.
50
Vocabulary exercise
1. facilitated (paragraph 4)
Wheeled carts facilitated agriculture and commerce by enabling the transportation of goods to
and from markets, as well as easing the burdens of people traveling great distances.
2. arduous (paragraph 5)
Previously, wood structures had to be built by interlocking adjacent boards geometrically, a much
more arduous and time consuming construction process.
3. dissemination (paragraph 10)
printing presses exponentially increased the speed with which book copies could be made, and
thus they led to the rapid and widespread dissemination of knowledge for the first time in
history.
4. ushered in (paragraph 12)
The engine ushered in the Industrial Age, as well as enabling the invention of a huge variety of
machines, including modern cars and aircraft.
5. drew (paragraph 13)
He drew his inspiration from teaching the deaf and also visits to his hearing-impaired mom,
according to PBS.
6. allot (paragraph 16)
Instead of going to bed at nightfall (having nothing else to do) and sleeping in segments
throughout the night separated by periods of wakefulness, we now stay up except for the 7 to 8
hours we allot for sleep, and, ideally, we sleep all in one go.
7. sparked (paragraph 19)
Not only have birth control pills, condoms and other forms of contraception sparked a sexual
revolution in the developed world by allowing men and women to have sex for leisure rather than
procreation,
51
Write your own sentences with the vocabulary
1. facilitated
2. arduous
3. dissemination
4. ushered in
5. drew
6. allot
7. sparked
52
EXERCISE 8
What makes solo endurance athletes keep
going?
Summary
This is an article which explains why people choose to do endurance sports (like running for 24
hours, sailing across oceans etc...) alone. It speaks to a variety of endurance athletes and academic
experts throughout the text and explains both what motivates them to do it and what they need to
do to ensure that they keep going and don't give up.
Few people have the will and the stamina needed to run across a continent or row across an ocean
and even fewer opt to do so alone and unsupported. What makes these endurance athletes
different from others who pursue similar challenges as part of a team, or while competing against
fellow athletes? Are they simply made of tougher stuff?
Across the spectrum of athletic pursuits, people have accomplished staggering feats solo. For
example, Alex Honnold was the first climber to "free solo" Yosemite's 3,000-foot El Capitan wall,
climbing without ropes or harnesses.
When Jessica Watson was 16, she became the youngest person to sail around the world solo and
nonstop. She wanted to challenge people's expectations of what young girls could do. Diana Nyad
was the first person to swim from Cuba to Florida without a shark cage, on her fifth attempt, at
age 64.
In August, Bryce Carlson broke the world record for the fastest west-to-east unsupported row
across the north Atlantic Ocean. The previous record belonged to a boat of four people. Carlson set
off solo from Newfoundland in a custom-made boat and landed at the Isles of Scilly a little more
than 38 days later. He's the first American to row this route solo and unsupported.
Endurance sports are not new to Carlson, who rowed at the University of Michigan, ran
ultramarathons and then ran across the US, from California to Maryland, with 11 other runners.
On that run, Carlson always had other runners and support staff around.
"I felt I did not really have the opportunity to sit with my own thoughts as much as I would have
liked," and he looked forward to that in his transatlantic row, he explained. "It became a challenge,
as well as an opportunity to explore myself and my psyche in ways that previous challenges of mine
have not."
Carlson, a high school biology teacher who holds a doctorate in biological anthropology, noted that
many ocean rowers are "athletes who have taken on challenges such as climbing Mount Everest,
and this adventure is an extension of that kind of wanderlust and desire for challenge." And going
solo makes it a different type of challenge. Carlson said he wondered, "Can I live in my own head
without being accountable to someone else and their standards?"
53
Carlson's boat capsized about a dozen times, and he got caught in bad weather and currents that
took him in the wrong direction. He learned "to remain patient with things I couldn't change, and
to remain focused on things I had control over," he said. He was able to avoid getting upset, which
he attributes to a decade of ultra running.
Kevin Bingham, PhD, is a psychologist at the University of Michigan who led the psychology
portion of a study that followed Carlson's team's run across the US, and he also gathered data from
Carlson on his row. He had Carlson complete a daily questionnaire while on the Atlantic, along
with a brief diary response asking him to describe the biggest challenge he faced that day and how
he responded to it. The 15 questions asked about effort, physical symptoms (pain, fatigue) and
psychological variables (confidence, anxiety, frustration).
Bingham and his team followed along in real time.
"We were looking at things that we think are positive and helpful, as well as those that we think are
negative or hurtful. In general, he was higher on the positive or helpful factors consistently, and he
was lower on the negative or unhelpful," he said. They also asked Carlson to rate these factors, and
his ratings showed "he really had the full range of experiences," even though he exhibited more
positive than negative variables. From a psychologist's perspective, Bingham said, "We're really
interested in athletes' abilities to be mindful or fully present in the moment, in what's happening in
their competition or their event."
Carlson said staying in the moment and getting the most out of that moment had been a big
challenge. On some days, he was less focused and more easily distracted.
"Modern sport psychology thought is that you want athletes who are driven by these goals and
these outcomes" but who can set the goals aside and "really immerse themselves in whatever step
they're on," Bingham said. Carlson "wanted to get across as quickly and as safely as he could, and
there was a record out there that he might be able to break, but each day was a day of rowing,"
Bingham explained. "He focused on what could he do to make the most of that day."
When Carlson was positioning himself to avoid the worst of a hurricane, he landed in an adverse
current that pulled him in the wrong direction. To cope, Carlson explained in his diary response, "I
focused on what was near at hand and controllable, navigated my boat as best I could. Hope the
current will weaken overnight, nothing else I can do." Bingham said this response "shows that
ability to be fully aware of what's going on, fully present with the challenge that's there, and yet
then redirect his response to what's in his control and let go of what's out of his control."
The Ocean Rowing Society keeps records of ocean crossings, and it has documented nearly 500
successful attempts and more than 250 unsuccessful ones. Coordinator Tatiana Rezvaya-
Crutchlow said many rowers throw in the towel on the first day, when they lose sight of the land,
and "you have 360 degrees of just horizon." Some people celebrate their "freedom at last," but for
others, fear sets in and they feel intensely out of their depth. "It's willpower over willpower," she
said.
From a sport psychology standpoint, goal-setting is important, said Christina Hemple, a principal
lecturer at the University of Texas. Goals need to be attainable and achievable, but that's a matter
of perspective, she said. If someone thinks they can achieve a goal, "that can be a very motivating
thing for them, whether you think from an outside perspective it's attainable or not." Along
with this intrinsic motivation and desire for a challenge, some athletes who pursue these grand
achievements may be adrenaline junkies who "have an intrinsic drive to experience this rush" that
they can't get with everyday athletic activities, she said.
54
Jessica Goldman is a massage therapist and an ultra runner who ran from San Francisco's City
Hall to New York City's City Hall in 2014, solo and unsupported. She is reportedly only the second
woman to have done so. She ran with a cart made from a retrofitted stroller, but no support. She
used the run to raise money for the Brain Injury Association of America.
Before her 91-day run, Goldman had completed two long, solo bike rides, as well as ultra races.
Like Carlson, she relished the challenge of a long, solo, unsupported feat after running ultras. "I'm
always curious to find my boundaries and limits, and it's meditative to a degree. You feel more
alive out in the elements; it's about surviving, pushing oneself and seeing what you can
accomplish," she said.
Goldman had lived in Africa and Asia, where she watched people doing daily activities that would
be considered impossible in the United States, like kids walking up mountains to go to school.
"Living in other cultures, you realize we don't push our boundaries in a physical sense," she said.
"We live with climate control, going from air conditioning to heating, and everything stays
comfortable. With food, everything is ready for you at the store."
While some ultra runners find comfort in the proximity of their fellow athletes, Goldman said it
adds a layer of pressure. If she ran with a partner, "having another person there would seem more
difficult – having to deal with your emotions and a second person's emotions," she said. "Being
alone gives you the ability to reinvent yourself at any time. When you are in the company of
another person, they have a certain perception of who you are, and I feel like we can sometimes
limit ourselves by playing the role of who we are expected to be rather than who we need to be."
Goldman also finds it easier to pace herself while solo. "Even when we try to run our own pace in a
group setting, I think it is easy to go too fast or too slow, because we are always comparing
ourselves to others," she said. "When you go solo, you are always the first one!"
Athletes taking on such daunting challenges also may deal with their emotions differently when
they're alone. "Endurance sports frequently involve ugly crying, swearing and moments of hysteria
that might be better done in privacy," Goldman noted. "If an athlete falls apart in the forest and no
one was around to hear it, did they make a sound?"
Goldman had to contend with some significant challenges, beyond the distance she covered every
day (sometimes as much as 55 miles) and the question of where to sleep each night. In the
beginning of her journey, she set up her route on a GPS device, and a couple of days in, a dust devil
struck, snatching the device and launching it somewhere she couldn't find it. She used paper maps
and Google to navigate after that. She did have a live tracking device on her, so brain injury
survivors who were following her progress often came out to meet her and offer her a place to
sleep. Otherwise, she would camp or find a hotel.
Now, Goldman is training for an 888 km race in Vermont. "I don't know if I can finish; it's sort of
thrilling," she said. "I don't know if I can do it."
Carlson echoed that sentiment. "When I sign up for a race, it's something I'm not capable of doing
in that moment, and I have to put together a plan to become the kind of person who can complete
that challenge," he said. "That's part of the fun for me."
Goldman said she was inspired by Nyad's approach of changing the impossible into the possible.
"It's good to get out of our comfort zone and know what we're capable of," she said. "There's some
kind of power that you pull out of that."
55
Vocabulary exercise
1. feats (paragraph 2)
Across the spectrum of athletic pursuits, people have accomplished staggering feats solo. For
example, Alex Honnold was the first climber to "free solo" Yosemite's 3,000-foot El Capitan wall, climbing without ropes or harnesses.
2. attributes (paragraph 8)
He learned "to remain patient with things I couldn't change, and to remain focused on things I had
control over," he said. He was able to avoid getting upset, which he attributes to a decade of ultra
running.
3. throw in the towel (paragraph 15)
it has documented nearly 500 successful attempts and more than 250 unsuccessful ones.
Coordinator Tatiana Rezvaya-Crutchlow said many rowers throw in the towel on the first day,
when they lose sight of the land,
4. out of their depth (paragraph 15)
when they lose sight of the land, and "you have 360 degrees of just horizon." Some people
celebrate their "freedom at last," but for others, fear sets in and they feel intensely out of their
depth.
5. relished (paragraph 18)
Like Carlson, she relished the challenge of a long, solo, unsupported feat after running ultras.
"I'm always curious to find my boundaries and limits, and it's meditative to a degree. You feel more
alive out in the elements
6. pace (paragraph 21)
Goldman also finds it easier to pace herself while solo. "Even when we try to run our own pace in a
group setting, I think it is easy to go too fast or too slow, because we are always comparing
ourselves to others,
7. echoed (paragraph 25)
"it's sort of thrilling," she said. "I don't know if I can do it." Carlson echoed that sentiment. "When
I sign up for a race, it's something I'm not capable of doing in that moment, and I have to put
together a plan to become the kind of person who can complete that challenge," he said. "That's
part of the fun for me."
56
Write your own sentences with the vocabulary
1. feats
2. attributes
3. throw in the towel
4. out of their depth
5. relished
6. pace
7. echoed
57
EXERCISE 9
Is human habitation on Mars possible?
Summary
This article talks about how we could survive on the planet Mars. It discusses what would be
involved in not only surviving on the planet, but on also landing on it safely. It also talks about the
difficulties and challenges which will be faced by any manned mission or colonisation of the
planet.
It seems like everyone has Mars on the mind these days. NASA wants to send humans to the red
planet by 2030, and SpaceX wants to get there even sooner, with plans to have people there by
2024.
Mars is a favorite theme in Hollywood, with movies like The Martian and Life exploring what we
might find once we finally reach our celestial neighbor, but most of them aren't addressing the biggest question: once we get there, how will we survive long-term?
The atmosphere of Mars is mostly carbon dioxide, the surface of the planet is too cold to sustain
human life, and the planet's gravity is a mere 38% of Earth's. Plus, the atmosphere on Mars is
equivalent to about 1% of the Earth's atmosphere at sea level. That makes getting to the surface
tricky. How will NASA get there? How can we hope to survive against such odds?
Landing ideas: Then and now
Traveling to Mars is just the first leg of the journey; when Earth and Mars are closest to each other, the trip will take a mere 260 days. Once we get there, the challenge becomes landing on the
planet's surface. What type of landing system will get our astronauts and colonists safely to the
surface?
Back in 2007, scientists considered three possible solutions to get astronauts to the surface. All of
them deploy a parachute when the craft is in Mars's atmosphere to initially slow down the craft
and then fire rockets to further reduce the speed when close to the actual surface. The first was a
Legged Landing System, based on the Lunar Lander. This system could provide the option to both land on and take off from the red planet. The second, the SLS System, or Sky-Crane Landing System, is one where the craft would not land on the surface but hover just above it (using rockets to maintain altitude) and then slowly lower its cargo down with a crane onto the surface, before taking off again. The third design discussed was an Air Bag Landing System, which would rely on inflating airbags around the entire part of the craft that was to land; the craft would then bounce onto the surface. However, this last option is far from suitable for landing people.
More than ten years later, scientists now question the feasibility of using any of these methods for
landing manned missions on Mars. According to Richard McGuire Davis from NASA, "The lander will be travelling at such fast speeds while it is in the atmosphere (thousands of metres per second) and be so heavy (approximately 20 metric tons) that the technologies discussed in 2007 will simply either not work or not be able to slow down sufficiently to land safely or without major damage. It is likely that the only thing that will be able to do this will be supersonic retro propulsion rockets. However, as this technology is still in its infancy, we are still a long way off coming up with, let alone manufacturing anything which is both powerful and light enough for doing this for a manned Mars mission."
But if we were able to successfully land on Mars, what comes next?
Habitation built to last
NASA is already considering what kind of habitation we'll need to survive on the surface of Mars.
Six companies have already begun designing possible habitat prototypes. All these habitats will likely have a few things in common: they have to be self-sustaining, sealed against the thin atmosphere, and capable of supporting life for extended periods without support from Earth. To get an idea of what to expect, think about the ISS. "The International Space Station has really
taught us a tremendous amount of what is needed in a deep space habitat," said Davis. "We'll need
things like environmental control and life support systems (ECLSS), power systems, docking ports,
[and] air locks so that crew can perform space walks to repair things that break or to add new
capabilities." Expect big robust equipment to travel across the stars to Mars during the first
manned mission. Whatever the astronauts use must be up for the long journey.
Davis also posed an interesting question: how much space is needed for each crewmember? Could
you imagine spending months in one location, surrounded by the same walls day in and day out?
How far apart would the walls have to be to keep claustrophobia at bay? "In the days of the Space
Shuttle, missions ran for 7-15 days, and there was not a lot of space for each crewmember. In a
space station, where crewmembers are onboard for a much longer time (typically 6 months), we
have found that crewmembers simply need more space." Based on this logic, it's possible that
habitable bases on Mars will require more square footage for inhabitants.
Science fiction also does a great job helping the public imagine what this future mission will look
like. The film The Martian portrayed the kind of habitats NASA is investigating for a Mars mission. Nine pieces of technology showcased in the movie are accurate representations of the kind of equipment astronauts on
the planet will use.
Growth
Keeping the food and medicine supplies stocked on Mars is the best way to make a habitat self-
sustaining, but with a thin atmosphere and reduced sunlight, it can be difficult to get anything to
grow. However, a recent scientific breakthrough, artificial leaves, could overcome this.
These leaves, designed to work in harsh conditions and made of silicone rubber, can harness the
reduced levels of sunlight that Mars receives and turn it into enough power to fuel the necessary
chemical reactions to make medicine and other compounds. Lead researcher Tim Noel, assistant
professor at Eindhoven University of Technology said, "The leaf harvests solar energy and re-emits
it to a wavelength region which is useful for the chemical reaction which is wanted. It has the
ability to make the reaction conditions…uniform wherever you are."
In other words, it can use sunlight during the day on Mars, even though it is potentially exposed to
more harmful UV rays. The channels inside the leaf are protected because it can re-emit the energy
it collects at a safer wavelength, which allows any chemical processes to take place. "This could be
helpful when the irradiation on a certain planet is too energetic. Since light is basically everywhere
… theoretically you can use that energy to start making the required molecules, whether they are
pharmaceuticals, agrochemicals or solar fuels."
Right now, methylene blue is being used as the photocatalyst to produce drugs. A catalyst's job is
to speed up a reaction, so the methylene blue allows the scientists to produce drugs faster than
they could without it. Tim and his team are working hard now to make a diverse set of reactors.
They hope to have the device onboard for the trip to Mars. Nature has given us the perfect tools to
survive nearly anywhere. They just need a little bit of tweaking to survive off Earth.
Terraforming: It won't be quite like the movies at first
When you think of astronauts on Mars, what comes to mind? Did you picture a red planet turning
green with time and continued human colonization? Unfortunately, those days are far in the
future, if they even happen at all. During the interview, Davis explained, "Terraforming has a
connotation of humans making another planetary body, like Mars, Earth-like. But really, it's about
humans changing their environment to make it more supportive of our needs." What does this
mean?
The first few trips to Mars will only include the essentials. One of NASA's first goals for its
astronauts is to learn how to live on the planet. Since it differs greatly from Earth, survival is an
important skill for astronauts to master. "The initial base will probably include a habitat and a
science lab. The inside of these modules will be much like the space station, but there will be
differences." One example Davis gave included preventing toxic dust from getting into the habitat
and lab. Microbial life is another threat to astronauts. Without more research on the planet, NASA
can't say for certain what dangers could threaten human life. With this in mind, all scientists
involved with the Mars mission will have to take these and other potential risks into
consideration.
After the NASA base is well established and the astronauts have learned survival basics, things get more
interesting. "Eventually, since it costs so much to send things from Earth, we will want to farm on
Mars. Such a farm will really be greenhouses to protect the plants against the challenging Martian environment," said Davis. However, we should keep in mind that Martian soil isn't like the soil on Earth. It lacks organics, "the rotting biological materials that plants need." But fortunately, it does contain the minerals plants require. Davis said that his team calls this soil regolith, and it will need
to be cleansed of some toxic materials. And NASA scientists can get the job done.
Detoxified soil isn't the only thing astronauts will need to grow plants. They'll also need to utilize
the water from Mars' ice-capped poles. Davis said, "Many anticipate that the first human base will
be located adjacent to these billion-year-old ice deposits, so that humans can easily produce the
volumes of water that they will need to support water-intensive activities like farming." As yet,
there is no word about which pole will be more beneficial, if there's a difference at all.
Before speaking to Davis, I believed that future Martian farms would be equivalent to greenhouses
here on Earth. It seemed logical. That's how people control plant growth here. However, while the
plants will need a higher pressure to grow, the plants "don't have to be at an Earth-like pressure. In
fact, we can pressurize the greenhouse with carbon dioxide, which is the main component of the
Martian atmosphere." This sounds like a win-win for both the scientists and the plants. Instead of
the astronauts having to wear cumbersome space suits, they could "just wear lightweight oxygen
masks" in the greenhouses. The key takeaway is that the planet doesn't have to transform into
Earth 2.0. Maybe one day it will, but for the time being, it just has to function for NASA scientists
to live and work.
Time will tell
Mars has captured the imagination of humans for centuries. These plans are just the next step in the process of getting the Mars Mission from the 'drawing board' to a funded mission with a launch date. NASA isn't the only one with its eyes on Mars. Others are already coming up with their own plans for the red planet. Scientists and enthusiasts have speculated on everything from nuking the ice caps on the planet to creating a magnetic shield around the planet to encourage it to 'grow' its own atmosphere.
Mars is hopefully just our first step into the universe. Once we've dipped our toes out into the solar
system, it will be easier to expand out into the asteroid belt and beyond. Mars' low gravity provides
the perfect platform for constructing and launching other deep space vehicles. After we've got that
foothold, the only thing holding us back is our technology. As is the case today, the Achilles heel for any mission (whether it be to Mars or further afield) is not the will, but the technology needed to make that will not only possible but also safe.
Those of us who have grown up watching the Apollo missions, space shuttle take-offs and now the
Falcon rockets climbing through the atmosphere likely won't see Mars colonized in our lifetimes,
but that doesn't negate the wonder we all feel every time one of those rockets soars into the sky. It's
not just a rocket, but a source of inspiration for generations to come, one of which will set foot
on Martian soil.
61
Vocabulary exercise
1. deploy (paragraph 5)
All of them deploy a parachute when the craft is in Mars's atmosphere to initially slow down the
craft and then fire rockets to further reduce the speed when close to the actual surface.
2. let alone (paragraph 6)
However, as this technology is still in its infancy, we are still a long way off coming up with, let
alone manufacturing anything which is both powerful and light enough for doing this for a
manned Mars mission.
3. harness (paragraph 12)
These leaves, designed to work in harsh conditions and made of silicone rubber, can harness the
reduced levels of sunlight that Mars receives and turn it into enough power to fuel the necessary
chemical reactions to make medicine and other compounds.
4. a connotation of (paragraph 15)
Terraforming has a connotation of humans making another planetary body, like Mars, Earth-
like. But really, it's about humans changing their environment to make it more supportive of our
needs.
5. keep in mind (paragraph 17)
However, we should keep in mind that Martian soil isn't like the soil on Earth. It lacks organics,
"the rotting biological materials that plants need." But fortunately, it does contain the minerals
plants require
6. adjacent to (paragraph 18)
Many anticipate that the first human base will be located adjacent to these billion-year-old ice
deposits, so that humans can easily produce the volumes of water
7. cumbersome (paragraph 19)
Instead of the astronauts having to wear cumbersome space suits, they could "just wear
lightweight oxygen masks" in the greenhouses.
62
Write your own sentences with the vocabulary
1. deploy
2. let alone
3. harness
4. a connotation of
5. keep in mind
6. adjacent to
7. cumbersome
63
EXERCISE 10
The questionable validity and morality of using
IQ tests
Summary
This article discusses whether IQ (Intelligence Quotient) tests can be trusted. It examines whether
they are a valid way of measuring intelligence and explains how these types of tests have been
misused (especially in the past) by individuals and organisations to support their beliefs. It also
explains some ways in which they can be used positively (both scientifically and morally).
John, 12 years old, is three times as old as his brother. How old will John be when he is
twice as old as his brother?
This is a question from an online Intelligence Quotient or IQ test. Such tests, which purport to measure your intelligence, can be verbal, meaning written, or non-verbal, focusing on abstract
reasoning independent of reading and writing skills. First created more than a century ago, the
tests are still widely used today to measure an individual's mental agility and ability.
Education systems use IQ tests to help identify children for special education and gifted education
programmes and to offer extra support. Researchers across the social and hard sciences also study IQ test results, looking at everything from their relation to genetics and socio-economic status to academic achievement and race.
Online IQ "quizzes" purport to be able to tell you whether or not "you have what it takes to be a
member of the world's most prestigious high IQ society".
If you want to boast about your high IQ, you should have been able to work out the answer to the question: when John is 16, he'll be twice as old as his brother.
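For anyone who wants to see where that answer comes from, here is a quick sketch of the arithmetic, using only the figures given in the question:

$$\text{brother's age now} = 12 \div 3 = 4, \qquad \text{age gap} = 12 - 4 = 8 \text{ years (and it never changes)}$$

$$\text{John is twice his brother's age when the brother is } 8, \text{ so John will be } 8 + 8 = 16.$$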
Despite the hype, the relevance, usefulness, and legitimacy of the IQ test are still hotly debated
among educators, social scientists, and hard scientists. To understand why, it's important to
understand the history underpinning the birth, development, and expansion of the IQ test, a
history that includes the use of IQ tests to further marginalise ethnic minorities and poor
communities.
Testing times
In the early 1900s, dozens of intelligence tests were developed in Europe and America claiming to
offer unbiased ways to measure a person's cognitive ability. The first of these tests was developed
by French psychologist Alfred Binet. He was commissioned by the French government to identify
students who would face the most difficulty in school. The resulting 1905 Binet-Simon Scale
became the basis for modern IQ testing. Ironically, Binet actually thought that IQ tests were
inadequate measures of intelligence, pointing to the test's inability to properly measure creativity
or emotional intelligence.
64
At its conception, the IQ test provided a relatively quick and simple way to identify and sort
individuals based on intelligence, which was, and still is, highly valued by society. In the US and
elsewhere, institutions such as the military and police used IQ tests to screen potential applicants.
They also implemented admission requirements based on the results.
The US Army Alpha and Beta Tests screened approximately 1.75m draftees in World War I in an
attempt to evaluate the intellectual and emotional temperament of soldiers. Results were used to
determine how capable a soldier was of serving in the armed forces and identify which job
classification or leadership position one was most suitable for. Starting in the early 1900s, the US
education system also began using IQ tests to identify "gifted and talented" students, as well as
those with special needs who required additional educational interventions and different academic
environments.
Ironically, some districts in the US have recently employed a maximum IQ score for admission
into the police force. The fear was that those who scored too highly would eventually find the work
boring and leave after significant time and resources had been put towards their training.
Alongside the widespread use of IQ tests in the 20th century was the argument that the level of a
person's intelligence was influenced by their biology. Ethnocentrics and eugenicists, who viewed
intelligence and other social behaviours as being determined by biology and race, consequently latched onto IQ tests in order to justify their prejudices. They held up the apparent
gaps these tests illuminated between ethnic minorities and whites or between low- and high-
income groups. Some maintained that these test results provided further evidence that
socioeconomic and racial groups were genetically different from each other and that systemic
inequalities were partly a byproduct of evolutionary processes.
Going to extremes
The US Army Alpha and Beta test results garnered widespread publicity and were analysed by Carl
Brigham, a Princeton University psychologist and early founder of psychometrics, in a 1922 book A
Study of American Intelligence. Brigham applied meticulous statistical analyses to demonstrate
that American intelligence was declining, claiming that increased immigration and racial
integration were to blame. To address the issue, he called for social policies to restrict immigration
and prohibit racial mixing.
A few years before, American psychologist and education researcher Lewis Terman had drawn
connections between intellectual ability and race. In 1916, he wrote:
High-grade or border-line deficiency … is very, very common among Spanish-Indian and
Mexican families of the Southwest and also among Negroes. Their dullness seems to be
racial, or at least inherent in the family stocks from which they come … Children of this
group should be segregated into separate classes … They cannot master abstractions but
they can often be made into efficient workers … from a eugenic point of view they constitute
a grave problem because of their unusually prolific breeding.
There has been considerable work from scientists refuting arguments such as Brigham's and
Terman's that racial differences in IQ scores are influenced by biology. Critiques of such
"hereditarian" hypotheses arguments that genetics can powerfully explain human character
traits and even human social and political problems cite a lack of evidence and weak statistical
analyses. This critique continues today, with many researchers resistant to and alarmed by
research that is still being conducted on race and IQ.
65
But in their darkest moments, IQ tests became a powerful way to exclude and control marginalised
communities using empirical and scientific language. Supporters of eugenic ideologies in the
1900s used IQ tests to identify "idiots", "imbeciles", and the "feebleminded". These were people,
eugenicists argued, who threatened to dilute the White Anglo-Saxon genetic stock of America. As a
result of such eugenic arguments, many American citizens were later sterilised. In 1927, an
infamous ruling by the US Supreme Court legalised forced sterilisation of citizens with
developmental disabilities and the "feebleminded," who were frequently identified by their low IQ
scores. The ruling, known as Buck v Bell, resulted in over 65,000 coerced sterilisations of
individuals thought to have low IQs. Those in the US who were forcibly sterilised in the aftermath
of Buck v Bell were disproportionately poor or of colour.
Compulsory sterilisation in the US on the basis of IQ, criminality, or sexual deviance continued
formally until the mid 1970s when organisations like the Southern Poverty Law Center began filing
lawsuits on behalf of people who had been sterilised. In 2015, the US Senate voted to compensate
living victims of government-sponsored sterilisation programmes.
IQ tests today
Debate over what it means to be "intelligent" and whether or not the IQ test is a robust tool of
measurement continues to elicit strong and often opposing reactions today. Some researchers say
that intelligence is a concept specific to a particular culture. They maintain that it appears
differently depending on the context in the same way that many cultural behaviours would. For
example, burping may be seen as an indicator of enjoyment of a meal or a sign of praise for the
host in some cultures, and as impolite in others.
What may be considered intelligent in one environment, therefore, might not be in others. For
example, knowledge about medicinal herbs is seen as a form of intelligence in certain communities
within Africa, but does not correlate with high performance on traditional Western academic
intelligence tests.
According to some researchers, the "cultural specificity" of intelligence makes IQ tests biased
towards the environments in which they were developed, namely white, Western society. This
makes them potentially problematic in culturally diverse settings. The application of the same test
among different communities would fail to recognise the different cultural values that shape what
each community values as intelligent behaviour. Going even further, given the IQ test's history of
being used to further questionable and sometimes racially-motivated beliefs about what different
groups of people are capable of, some researchers say such tests cannot objectively and equally
measure an individual's intelligence at all.
Used for good
At the same time, there are ongoing efforts to demonstrate how the IQ test can be used to help
those very communities who have been most harmed by them in the past. In 2002, the execution
across the US of criminally convicted individuals with intellectual disabilities, who are often
assessed using IQ tests, was ruled unconstitutional. This has meant IQ tests have actually
prevented individuals from facing "cruel and unusual punishment" in the US court of law.
In education, IQ tests may be a more objective way to identify children who could benefit from
special education services. This includes programmes known as "gifted education" for students
who have been identified as exceptionally or highly cognitively able. Ethnic minority children, and those whose parents have a low income, are under-represented in gifted education.
66
The way children are chosen for these programmes means that Black and Hispanic students are
often overlooked. Some US school districts employ admissions procedures for gifted education
programmes that rely on teacher observations and referrals or require a family to sign their child
up for an IQ test. But research suggests that teacher perceptions and expectations of a student,
which can be preconceived, have an impact upon a child's IQ scores, academic achievement, and
attitudes and behaviour. This means that teachers' perceptions can also have an impact on the
likelihood of a child being referred for gifted or special education.
The universal screening of students for gifted education using IQ tests could help to identify
children who otherwise would have gone unnoticed by parents and teachers. Research has found
that those school districts which have implemented screening measures for all children using IQ
tests have been able to identify more children from historically underrepresented groups to go into
gifted education.
IQ tests could also help identify structural inequalities that have affected a child's development.
These could include the impacts of environmental exposure to harmful substances such as lead
and arsenic or the effects of malnutrition on brain health. All these have been shown to have a
negative impact on an individual's mental ability and to disproportionately affect low-income and
ethnic minority communities.
Identifying these issues could then help those in charge of education and social policy to seek
solutions. Specific interventions could be designed to help children who have been affected by
these structural inequalities or exposed to harmful substances. In the long run, the effectiveness of
these interventions could be monitored by comparing IQ tests administered to the same children
before and after an intervention.
Since its invention, the IQ test has generated strong arguments in support of and against its use.
Both sides are focused on the communities that have been negatively impacted in the past by the
use of intelligence tests for eugenic purposes.
The use of IQ tests in a range of settings, and the continued disagreement over their validity and
even morality, highlights not only the immense value society places on intelligence but also our
desire to understand and measure it.
67
Vocabulary exercise
1. purport (paragraph 4)
Online IQ "quizzes" purport to be able to tell you whether or not "you have what it takes to be a
member of the world's most prestigious high IQ society".
2. underpinning (paragraph 6)
To understand why, it's important to understand the history underpinning the birth,
development, and expansion of the IQ test
3. commissioned (paragraph 7)
He was commissioned by the French government to identify students who would face the most
difficulty in school. The resulting 1905 Binet-Simon Scale became the basis for modern IQ testing.
4. screen (paragraph 8)
In the US and elsewhere, institutions such as the military and police used IQ tests to screen
potential applicants. They also implemented admission requirements based on the results.
5. latched onto (paragraph 11)
Ethnocentrics and eugenicists, who viewed intelligence and other social behaviours as being
determined by biology and race, consequently latched onto IQ tests in order to justify their
prejudices.
6. byproduct (paragraph 11)
Some maintained that these test results provided further evidence that socioeconomic and racial
groups were genetically different from each other and that systemic inequalities were partly a
byproduct of evolutionary processes.
7. elicit (paragraph 18)
Debate over what it means to be "intelligent" and whether or not the IQ test is a robust tool of
measurement continues to elicit strong and often opposing reactions today.
68
Write your own sentences with the vocabulary
1. purport
2. underpinning
3. commissioned
4. screen
5. latched onto
6. byproduct
7. elicit
69
EXERCISE 11
Is wind power the answer for our future energy
needs?
Summary
This article argues that wind power by itself is incapable of generating enough electricity to supply
our needs either now or in the future. It also states that it is not as environmentally friendly as we
may think.
The Global Wind Energy Council recently released its latest report, excitedly boasting that "the
proliferation of wind energy across the global continues at a furious pace, after it was revealed that
more than 54 gigawatts of electricity (more than double what it was 5 years ago) is now being
generated from clean renewable wind power".
You may have got the impression from announcements like that, and from the obligatory pictures
of wind turbines in any BBC story or airport advert about energy, that wind power is making a big
contribution to world energy today. You would be wrong. Its contribution is still, after decades, nay centuries, of development, trivial to the point of irrelevance.
Here's a question for you. To the nearest whole number, what percentage of the world's energy
consumption was supplied by wind power in 2014, the last year for which there are reliable
figures? Was it 20 per cent, 10 per cent or 5 per cent? None of the above: it was 0 per cent. That is
to say, to the nearest whole number, there is still no wind power on Earth.
Even put together, wind and photovoltaic solar are supplying less than 1 per cent of global energy
demand. From the International Energy Agency's 2016 Key Renewables Trends, we can see that
wind provided 0.46 per cent of global energy consumption in 2014, and solar and tide combined
provided 0.35 per cent. Remember this is total energy, not just electricity, which is less than a fifth
of all final energy, the rest being the solid, gaseous, and liquid fuels that do the heavy lifting for
heat, transport and industry.
Such numbers are not hard to find, but they don't figure prominently in reports on energy
emanating from the solar and wind energy lobbies. Their trick is to hide behind the statement that
close to 14 per cent of the world's energy is renewable, with the implication that this is wind and
solar. In fact the vast majority, three quarters, is biomass (mainly wood), and a very large part of that is 'traditional biomass': sticks and logs and dung burned by the poor in their homes to cook
with. Those people need that energy, but they pay a big price in health problems caused by smoke
inhalation.
Even in the rich countries whose governments have heavily subsidised the adoption of wind and
solar power, the majority of their renewable energy comes from wood and hydro, the reliable
renewables. Meanwhile, world energy demand has been growing at about 2 per cent a year for
nearly 40 years. Between 2013 and 2014, again using International Energy Agency data, it grew by
just under 2,000 terawatt-hours.
70
If wind turbines were to supply all of that growth but no more, how many would need to be built
each year? The answer is nearly 350,000, since a two-megawatt turbine can produce about 0.005
terawatt-hours per annum. That's one-and-a-half times as many as have been built in the world
since governments started pouring consumer funds into this so-called industry in the early 2000s.
At a density of, very roughly, 50 acres per megawatt, typical for wind farms, that many turbines
would require a land area greater than the British Isles, including Ireland. Every year. If we kept
this up for 50 years, we would have covered every square mile of a land area the size of Russia with
wind farms. And bear in mind that this would be just to fulfil the new demand for energy, not to
displace the vast existing supply of energy from fossil fuels, which currently supply 80 per cent of
global energy needs.
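As a rough check on the 'nearly 350,000' figure, here is a sketch of the arithmetic, assuming a typical onshore load factor of about one third (the load factor itself is not stated in the article):

$$2\ \text{MW} \times 8{,}760\ \text{h/yr} \times 0.33 \approx 5{,}800\ \text{MWh} \approx 0.0058\ \text{TWh per turbine per year}$$

$$2{,}000\ \text{TWh/yr} \div 0.0058\ \text{TWh} \approx 345{,}000\ \text{turbines per year}$$

This is consistent with the rounded per-turbine output of 'about 0.005 terawatt-hours per annum' quoted above.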
Do not take refuge in the idea that wind turbines could become more efficient. There is a limit to
how much energy you can extract from a moving fluid, the Betz limit, and wind turbines are
already close to it. Their effectiveness (the load factor, to use the engineering term) is determined
by the wind that is available, and that varies of its own accord from second to second, day to day,
year to year.
As machines, wind turbines are pretty good already; the problem is the wind resource itself, and
we cannot change that. It's a fluctuating stream of low-density energy. Mankind stopped using it
for mission-critical transport and mechanical power long ago, for sound reasons. It's just not very
good.
As for resource consumption and environmental impacts, the direct effects of wind turbines (killing birds and bats, sinking concrete foundations deep into wildlands) are bad enough. But out
of sight and out of mind is the dirty pollution generated in Inner Mongolia by the mining of rare-
earth metals for the magnets in the turbines. This generates toxic and radioactive waste on an epic
scale, which is why the phrase ‘clean energy' is such a sick joke and ministers should be ashamed
every time it passes their lips.
It gets worse. Wind turbines, apart from the fibreglass blades, are made mostly of steel, with
concrete bases. They need about 200 times as much material per unit of capacity as a modern
combined cycle gas turbine. Steel is made with coal, not just to provide the heat for smelting ore,
but to supply the carbon in the alloy. Cement is also often made using coal. The machinery of
'clean' renewables is the output of the fossil fuel economy, and largely the coal economy.
A two-megawatt wind turbine weighs about 250 tonnes, including the tower, nacelle, rotor and
blades. Globally, it takes about half a tonne of coal to make a tonne of steel. Add another 25 tonnes
of coal for making the cement and you're talking 150 tonnes of coal per turbine. Now if we are to
build 350,000 wind turbines a year (or a smaller number of bigger ones), just to keep up with
increasing energy demand, that will require 50 million tonnes of coal a year. That's about half the
EU's hard coal mining output.
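The 50 million tonne figure follows from the numbers in this paragraph, treating the 250-tonne turbine as essentially all steel, as the article does:

$$250\ \text{t} \times 0.5\ \text{t coal per t steel} = 125\ \text{t}, \qquad 125\ \text{t} + 25\ \text{t (cement)} = 150\ \text{t of coal per turbine}$$

$$350{,}000\ \text{turbines} \times 150\ \text{t} \approx 52\ \text{million tonnes of coal a year}$$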
The point of running through these numbers is to demonstrate that it is utterly futile to think that
wind power can make any significant contribution to world energy supply, let alone to emissions
reductions, without ruining the planet. As the late David MacKay pointed out years back, the
arithmetic is against such unreliable renewables.
The truth is, if you want to power civilisation with fewer greenhouse gas emissions, then you
should focus on shifting power generation, heat and transport to natural gas, the economically
recoverable reserves of which, thanks to horizontal drilling and hydraulic fracturing, are much
more abundant than we dreamed they ever could be. It is also the lowest-emitting of the fossil
fuels, so the emissions intensity of our wealth creation can actually fall while our wealth continues
to increase. Good.
71
And let's put some of that burgeoning wealth in nuclear, fission and fusion, so that it can take over
from gas in the second half of this century. That is an engineerable, clean future. Everything else is
a political displacement activity, one that is actually counterproductive as a climate policy and,
worst of all, shamefully robs the poor to make the rich even richer.
72
Vocabulary exercise
1. proliferation (paragraph 1)
the proliferation of wind energy across the globe continues at a furious pace, after it was
revealed that more than 54 gigawatts of electricity (more than double what it was 5 years ago) is
now being generated from clean renewable wind power
2. emanating from (paragraph 5)
Such numbers are not hard to find, but they don't figure prominently in reports on energy
emanating from the solar and wind energy lobbies.
3. heavily subsidised (paragraph 6)
Even in the rich countries whose governments have heavily subsidised the adoption of wind and
solar power, the majority of their renewable energy comes from wood and hydro, the reliable
renewables.
4. bear in mind (paragraph 8)
And bear in mind that this would be just to fulfil the new demand for energy, not to displace the
vast existing supply of energy from fossil fuels, which currently supply 80 per cent of global energy
needs.
5. displace (paragraph 8)
And bear in mind that this would be just to fulfil the new demand for energy, not to displace the
vast existing supply of energy from fossil fuels, which currently supply 80 per cent of global energy
needs.
6. its own accord (paragraph 9)
Their effectiveness (the load factor, to use the engineering term) is determined by the wind that is
available, and that varies of its own accord from second to second, day to day, year to year.
7. futile (paragraph 14)
The point of running through these numbers is to demonstrate that it is utterly futile to think that
wind power can make any significant contribution to world energy supply
73
Write your own sentences with the vocabulary
1. proliferation
2. emanating from
3. heavily subsidised
4. bear in mind
5. displace
6. its own accord
7. futile
74
EXERCISE 12
The history of astrology
Summary
This article talks about astrology (the use of the stars to make predictions for people) both in the
past and today. It explains the origins and the history of astrology and what exactly astrologers use
to base their predictions on. It ends by giving an opinion on why astrology continues to be popular
today.
As the summer officially begins, with the Summer Solstice occurring in the Northern Hemisphere
on Thursday, those who enjoy Western astrology will be checking out their Summer Solstice
horoscopes to try to use the stars to discern what the season might have in store.
While some horoscopes sites may promise predictions based on the "movement" of the stars, it's
important to remember that it's the Earth that's moving, not the stars. The reason why stars appear to be moving, both throughout the night and over the course of the year, is that the Earth rotates on its axis and orbits around the Sun. But, before our ancestors were
conscious of that, they spent a considerable amount of time gazing at the sky and ruminating about
what was happening up there.
So, though astrology (looking for answers, signs and predictions in the movements of the celestial bodies) isn't itself a science, there's a long history of humans looking up at the heavens to plan
their lives. Farmers used the skies as a calendar as far back as the Ancient Egyptians, when the
rising of Sirius, the Dog Star, around mid-July, was seen as a marker of the imminent annual
flooding of the Nile. Travelers used the skies as a compass, following and consulting the stars to
know where to go. And many people used the skies as a source of mystical direction, too.
But who first looked up at the sky to make sense of what was happening down on the ground and
why their fellow humans were acting in certain ways? Exactly who came up with this way of
thinking and when is unclear, but historians and astronomers do know a bit about how it got so
popular today.
Where did zodiac signs come from?
The stars are just one of the many things in the natural world that human beings have turned to for
answers over the years.
"We don't really know who first came up with the idea for looking at things in nature and divining
influences on humans," says astronomer Steve Oldfield, a lecturer in astronomical education at the
University of Florida. "There's some indication that cave art shows this idea that animals and
things can be imbued with some kind of spirit form that then has an influence on you, and if you
appease that spirit form, then you will have a successful hunt. That was taken over by the idea of
divination, where you can actually look at things in nature and study them carefully, such as tea-
leaf reading."
Some form of astrology shows up in various belief systems in ancient cultures.
75
In Ancient China, noblemen looked at eclipses or sunspots as portents of good or bad times for
their emperor, though it's thought that those signs had less application to the lives of other
individuals. (Oldfield points out that in societies where people in the lower classes had less control
over their lives, divination could seem pointless.) The Sumerians and Babylonians, by around the
middle of the second millennium BC, appeared to have had many divination practices (they looked at spots on the liver and the entrails of animals, for example), and their idea that watching
planets and stars was a way to keep track of where gods were in the sky can be traced to The Venus
tablet of Ammisaduqa. This tablet, which is dated to the first millennium BC and tracks the motion
of Venus, is one of the earliest pieces of what's been called Babylonian planetary omens. The
ancient Egyptians contributed the idea that patterns of stars made up constellations, through
which the sun appears to "move" at specific times during the year.
It's thought that all of these ideas came together when Alexander the Great conquered Egypt
around 330 BC.
"There must have been a lot of exchange that got the Greeks on-board with the idea of divination
using planets," says Oldfield, and because they were deep into mathematics and logic, they worked
out a lot of the rules for how this could work."
Here's how NASA has described how that logic led to the creation of the familiar zodiac signs
known today:
Imagine a straight line drawn from Earth through the Sun and out into space way beyond
our solar system where the stars are. Then, picture Earth following its orbit around the
Sun. This imaginary line would rotate, pointing to different stars throughout one complete
trip around the Sun, or one year. All the stars that lie close to the imaginary flat disk
swept out by this imaginary line are said to be in the zodiac. The constellations in the zodiac
are simply the constellations that this imaginary straight line points to in its year-long
journey.
What are the 12 signs of the zodiac?
It was during this Ancient Greek period that the 12 star signs of the zodiac with which many people are likely familiar today were set down: Aries (roughly March 21-April 19), Taurus (April 20-May 20), Gemini (May 21-June 20), Cancer (June 21-July 22), Leo (July 23-Aug. 22), Virgo (Aug. 23-Sept. 22), Libra (Sept. 23-Oct. 22), Scorpio (Oct. 23-Nov. 21), Sagittarius (Nov. 22-Dec. 21), Capricorn (Dec. 22-Jan. 19), Aquarius (Jan. 20-Feb. 18) and Pisces (Feb. 19-March 20). These Western, or tropical, zodiac signs were named after constellations and matched with dates based on the apparent relationship between their placement in the sky and the sun.
The Babylonians had already divided the zodiac into 12 equal signs by 1500 BC, boasting similar constellation names to the ones familiar today, such as The Great Twins, The Lion and The Scales,
and these were later incorporated into Greek divination. The astronomer Ptolemy, author of the
Tetrabiblos, which became a core book in the history of Western astrology, helped popularize these
12 signs.
"This whole idea that there were 12 signs along the zodiac that were 30° wide, and [that] the sun
moved through these signs regularly during the year, that was codified by Ptolemy," says Oldfield.
Even the word "zodiac" emanates from Greek, from a term for "sculpted animal figure," according
to the Oxford English Dictionary, and the order in which the signs are usually listed comes from
that period too.
76
"Back at the time of the Greeks," Oldfield explains, "the first day of spring started when the sun
appeared in the constellation Aries and then everything was marked from that time forward
around the circuit of the year."
However, the Earth has moved on its axis since then, a process known as precession, so now the
dates that are used to mark the signs don't really correspond to the background constellations that
give the signs their names. In fact, the chronology has really shifted one sign to the West. That
means zodiac sign dates, based on the mathematical division of the year, basically correspond
today to the presence of the sun in the constellations of the signs that come before them. (The set
nature of the signs is also why the Minnesota Planetarium Society's 2011 argument that there
should be a 13th zodiac sign now, Ophiuchus, didn't actually result in a big astrology change.)
"Before, astrologers looked at where the sun was relative to background constellations in general,
and that generally matched up almost exactly with the signs of the zodiac defined by Ptolemy," says Oldfield. "Now astrologers do their calculations and forecasting based on where the planets and the sun are relative to the 12 signs, which are fixed, and not based on where they are relative to
the constellations. Astrologers say if the sun is in the sign of Sagittarius on the day you were born,
then you're a Sagittarius."
What's the difference between astrology and astronomy?
For centuries, astrology (looking for signs based on the movement of the celestial bodies) was
considered basically the same thing as astronomy (the scientific study of those objects). For
example, revolutionary 17th-century astronomer Johannes Kepler, who studied the motion of the
planets, was at the time considered an astrologer. And if one were to have studied the
movement of the planets and stars at university before the 1700s, astrology would have been a core
component of the syllabus.
That changed with the dawn of the Enlightenment in the late 17th century, when there was a
paradigm shift in our understanding of what governed the trajectories of objects in the solar
system. Once Sir Isaac Newton basically turned the sky into a calculator, mathematizing the
motion of the planets and realizing that gravity controlled everything, Oldfield says, "that started a
whole new scientific approach to looking at the sky and the motion of planets and the earth."
That's the point at which astronomy and astrology diverged, with astronomy being regarded as a true scientific discipline, whilst astrology (in academic and scientific circles at least) became regarded
as merely a pseudo-science (on a par with alchemy or homoeopathy) whose theories possess no
scientific merit at all.
What accounts for astrology's continued popularity?
Although astrology may not be taken seriously by those in academia, it has endured through the
centuries since its theories were debunked. And far from waning, it has actually gained in popularity and adherents in recent years, with the number of members of the American Federation of Astrologers seeing a threefold increase since the turn of the century and a 2014 National Science Foundation poll finding that more than half of millennials think astrology is a true science.
And Oldfield argues that this resurgence in the belief in astrology is down to both the increasing uncertainty of modern life and a psychological phenomenon he calls the human tendency for "self-selection": the search for interpretations that match what we already hope to be true.
"Man has always looked for answers." he argues. "Whether it be why a volcano erupts or will I have
children in the future, we want to know. But it's immaterial whether the answer is true or not. As
77
long as it stops us asking the question or dwelling on something, we'll be happy with what we find
out or are told. And it is this that accounts for astrology's continued appeal, it's good at giving
answers."
78
Vocabulary exercise
1. discern (paragraph 1)
those who enjoy Western astrology will be checking out their Summer Solstice horoscopes to try to
use the stars to discern what the season might have in store.
2. ruminating (paragraph 2)
they spent a considerable amount of time gazing at the sky and ruminating about what was
happening up there.
3. imbued with (paragraph 6)
There's some indication that cave art shows this idea that animals and things can be imbued
with some kind of spirit form that then has an influence on you
4. emanates from (paragraph 15)
Even the word "zodiac" emanates from Greek, from a term for "sculpted animal figure,"
according to the Oxford English Dictionary
5. a paradigm shift (paragraph 20)
That changed with the dawn of the Enlightenment in the late 17th century, when there was a
paradigm shift in our understanding of what governed the trajectories of objects in the solar
system
6. on a par with (paragraph 21)
astrology (in academic and scientific circles at least) became regarded as merely a pseudo-science
(on a par with alchemy or homoeopathy) whose theories possess no scientific merit at all.
7. waning (paragraph 22)
it has endured through the centuries since its theories were debunked. And far from waning, it
has actually gained in popularity and adherents in recent years.
79
Write your own sentences with the vocabulary
1. discern
2. ruminating
3. imbued with
4. emanates from
5. a paradigm shift
6. on a par with
7. waning
80
EXERCISE 13
Our changing spending habits at Christmas
Summary
This article talks about people's spending habits in Britain during Christmas. It explains how much households spend and what they spend their money on. It also explains why people are changing what they spend their money on during this period.
All over the country, warehouse staff are digging out and dusting down their stocks of electric foot spas to be given their annual outing on the shelves of Britain's department stores. It is Christmas and shoppers want gifts for families and friends, no matter how useless or irrelevant. A
combination of shoppers being suckered and a weary nation desperate to escape the past few
months of gloom and doom will add up to what economists are confidently forecasting will be the
biggest Christmas on record.
Our spending on gifts for loved ones and the food and drink to wash it all down will add up to
£16.5bn of additional consumer spending this year, a 5 per cent increase on last year, according to
new figures from the Centre for Economic and Business Research. But while Christmas is getting
bigger, there are signs, particularly from the high street retailers, that its rate of growth is slowing. Even so, the overall business of Christmas is unlikely to start shrinking unless the economy takes a
surprising nose dive over the next few years.
At £16.5bn, the Yuletide economy accounts for 2.1 per cent of total annual consumer spending
which the CEBR's model expects to be £650bn for 2019. But is all this Christmas spending healthy
for the economy and why, as consumers, do we feel the need to spend ever greater amounts on
Christmas?
Doug McWilliams, chief executive of the CEBR, says: "Spending ultimately is determined by how
much income people have and the various incentives to spend or save. Christmas adds volatility to
purchasing patterns which makes life tricky for the likes of the retailers. There is an economic cost
in that so many retailers feel the need to stock up on goods which ultimately have no buyers and
have to be sold at discounts of 50 per cent or more in the January sales. Clearly they lose out from
that."
Most economic statistics which are published are seasonally adjusted to smooth the peaks and
troughs of people's spending habits. By removing the adjustment factors it is possible to measure
the spike in consumer spending attributable specifically to Christmas over and above usual day-to-
day spending. The £16.5bn translates to an average Christmas spend of £734 per household across
the country.
While it has been known for economic theory to be divorced from reality, the £16.5bn value for
Christmas is given further credibility by examining which sectors are the beneficiaries of the boom
and by how much. Analysis of recent trends would appear to indicate that the extra spending is
going on all the traditional Christmas goods and services you would expect. Toys and gifts, which include the inevitable paraphernalia of trees and decorations, are the biggest beneficiary. There is
also forecasted to be a big increase in food and alcohol purchases during this festive season. This is
unsurprising considering how we all tend to binge on them during the period.
Consider the fact that the average person over 18 will, this month alone, sink 23 pints of beer, five
measures of spirits and four bottles of wine. Indeed, of the £1.3bn of blended Scotch whisky drunk
in the country this year, 26 per cent will be consumed at Christmas. That figure rises to 40 per cent
for the considerably more expensive single malt market, worth a total of £44m. There are going to
be some big hangovers.
Clothing is still the most popular Christmas gift item, according to a survey by Mintel, the market
research company. All those socks, scarves and slippers that no one wants means clothing and
footwear is the second biggest beneficiary of the Christmas economy.
But traditional Christmas gift items are changing as the reasons why we give change. Judith
Pilkington, chief executive of MW Group, which owns Mappin & Webb, the jeweller, and Watches of Switzerland, says: "In the nine weeks up to Christmas we do 25-30 per cent of our sales. People
have become massively more discerning of late in what they buy. It is all about individual taste and
expression. They are buying things because they want them, not because of what other people
think."
Dr Hugh Phillips, a retail psychologist and senior lecturer in retailing at De Montfort University in
Leicester, says: "Where Christmas present buying between adults is concerned, the value of the
present is not the money it costs, but the care taken in choosing it. People are therefore giving the
right present irrespective of the cost." While this may mean that in some cases people spend more
than they intended, it also means people may spend considerably less to achieve the same result. It
also implies that people may not buy anything at all if they cannot find exactly what they want.
Phillips says "Ostentatious gift giving, where people spend to impress, has changed to giving a
token of esteem which may be something as simple as three chocolates perfectly wrapped. It is not
the value but the beauty of the offering which is important. This is reflected in retail sales of the
past few years, which have been static."
Indeed, the retailers are getting worried about changing Christmas shopping habits. According to
figures from Verdict, the retail research consultancy, the rate of growth in retail sales at this time
of year is in decline. Verdict predicts a 3.2 per cent year-on-year sales increase for December - the busiest shopping month of the year.
This is a marked fall from last year, which saw a 4.6 per cent rise in December sales. One of the
problems for retailers in the future will be the trend for people to spend more on leisure pursuits,
such as holidays and meals out, rather than updating their wardrobe or buying the newest gadget.
Richard Hyman, chairman of Verdict, says: "This is channelling money out of the retail market.
Retailers are seeing their piece of the consumer spending cake decline at a rate of knots. In 1979
nearly 50 per cent of consumer spending went through the retail trade. By 2024 we are forecasting
the figure will fall to 33 per cent."
He adds that just 36 per cent of people's total consumer spending allocation now ends up in retail
coffers. Coupled to this the continuing growth in the market share of Amazon, and many
traditional retailers are less than optimistic about the future.
But while retailers may see Christmas future as being less lucrative than Christmas past, a bundle
of traditional businesses are still busy making money from the festive period. The UK's largest
Christmas decoration manufacturer, Festive Productions, can barely keep pace with the £200m
demand for tinsel and baubles. Festive's South Wales factory manufactures 24 hours a day every
day of the year, except for a few days' grace at Christmas. The company offers 10,000 variations of
Christmas products, including more than 1,000 colour variations for tinsel.
Fashion and design are becoming increasingly influential in the greeting cards, crackers and
giftwrap marketplace which is estimated at £650m. Nick Fisher, joint chief executive of
International Greetings, the Aim-listed designer and manufacturer of cards, wrapping paper and
crackers, says: "Design is of great importance now, as well as quality and value. The whole industry
is becoming design driven and fashion led. Consumers are far more discerning and demanding in
what they want." He says that five years ago people were happy to pay £4.99 for a box of 12
crackers with tiny plastic toys, whereas they are now prepared to spend £15 for six stylish crackers
with designer toys.
Consumers are still happy to spare a thought for those less fortunate than themselves as they
prepare to gorge themselves on sherry, turkey and Christmas pudding. Charity Christmas cards,
now worth £114m, are becoming increasingly popular and Christmas remains one of the most
lucrative times of the year for most UK charities.
Charities are not the only ones relying on Christmas. It is hard to imagine where Britain's fir tree
growers would be without the Yuletide need for a tree. Roger Hay, secretary of the British
Christmas Tree Growers Association, says: "The industry has grown substantially over the last 20
years from selling 2 million trees 15 years ago, to 6 million now."
Sales of real Christmas trees are worth £150m. Hay believes people have wised up to the fact that
real trees are more environmentally friendly than the imitation plastic ones. "The growing cycle is
to the benefit of the plant," he explains. "When trees are growing they take carbon and convert it
into oxygen. They generate 2.5 tonnes of carbon per acre a year and this is enough to provide
oxygen for 15 or 16 people a year."
So while the future may not be so bright for many retailers, maybe it will be a little bit brighter for the planet and those less fortunate members of society. Which, in theory, is what Christmas is about.
83
Vocabulary exercise
1. shrinking (paragraph 2)
Even so the overall business of Christmas is unlikely to start shrinking unless the economy takes
a surprising nose dive over the next few years.
2. spike (paragraph 5)
By removing the adjustment factors it is possible to measure the spike in consumer spending
attributable specifically to Christmas over and above usual day-to-day spending.
3. paraphernalia (paragraph 6)
Toys and gifts, which include the inevitable paraphernalia of trees and decorations, are the
biggest beneficiary.
4. binge (paragraph 6)
There is also forecasted to be a big increase in food and alcohol purchases during this festive
season. This is unsurprising considering how we all tend to binge on them during the period.
5. discerning (paragraph 9)
People have become massively more discerning of late in what they buy. It is all about individual
taste and expression. They are buying things because they want them, not because of what other
people think.
6. irrespective (paragraph 10)
the value of the present is not the money it costs, but the care taken in choosing it. People are
therefore giving the right present irrespective of the cost.
7. coupled to (paragraph 15)
just 36 per cent of people's total consumer spending allocation now ends up in retail coffers.
Coupled to this the continuing growth in the market share of Amazon, and many traditional
retailers are less than optimistic about the future.
84
Write your own sentences with the vocabulary
1. shrinking
2. spike
3. paraphernalia
4. binge
5. discerning
6. irrespective
7. coupled to
85
EXERCISE 14
The importance of stories for us
Summary
This article talks about the importance that stories have (both now and in the past) on human
culture and civilisation. It explains that storytelling in various forms has been around for
thousands and thousands of years, and what stories are used for and the recurrent themes and
types of characters which are used in them.
It sounds like the perfect summer blockbuster. A handsome king is blessed with superhuman
strength, but his insufferable arrogance means that he threatens to wreak havoc on his kingdom.
Enter a down-to-earth wayfarer who challenges him to fight. The king ends the battle chastened,
and the two heroes become fast friends and embark on a series of dangerous quests across the
kingdom.
The fact that this tale is still being read today is itself remarkable. It is the Epic of Gilgamesh,
engraved on ancient Babylonian tablets 4,000 years ago, making it the oldest surviving work of
great literature. We can assume that the story was enormously popular at the time, given that later
iterations of the poem can be found over the next millennium.
What is even more astonishing is the fact that it is read and enjoyed today, and that so many of its
basic elements - including its heart-warming 'bromance' - can be found in so many of the popular
stories that have come since. Such common features are now a primary interest of scholars
specialising in 'literary Darwinism', who are asking what exactly makes a good story, and the
evolutionary reasons that certain narratives - from Homer's Odyssey to Harry Potter - have such
popular appeal.
Escapism?
Although we have no firm evidence of storytelling before the advent of writing, we can assume that
narratives have been central to human life for thousands of years. The cave paintings in sites like
Chauvet and Lascaux in France from 30,000 years ago appear to depict dramatic scenes that were
probably accompanied by oral storytelling. "If you look across the cave, there will be a swathe of
different images and there often seems to be a narration relating to a hunting expedition," says
Daniel Kruger at the University of Michigan - narratives that may have contained important lessons for the group. It is highly likely that tales told by our ancestors during the last Ice Age still linger in some form today.
Today, we may not gather around the campfire, but the average adult is still thought to spend at
least 6% of the waking day engrossed in fictional stories on our various screens.
From an evolutionary point of view, that would be an awful lot of time and energy to expend on
pure escapism, but psychologists and literary theorists have now identified many potential benefits
to this fiction addiction. One common idea is that storytelling is a form of cognitive play that hones
our minds, allowing us to simulate the world around us and imagine different strategies,
particularly in social situations. "It teaches us about other people and it's a practice in empathy
and theory of mind," says Joseph Carroll at the University of Missouri-St Louis.
Providing some evidence for this theory, brain scans have shown that reading or hearing stories
activates various areas of the cortex that are known to be involved in social and emotional
processing, and the more people read fiction, the easier they find it to empathise with other people.
Palaeolithic politics
Crucially, evolutionary psychologists believe that our prehistoric preoccupations still shape the
form of the stories we enjoy. As humans evolved to live in bigger societies, for instance, we needed
to learn how to cooperate, without being a 'free rider' who takes too much and gives nothing, or an overbearing individual who abuses their dominance to the detriment of the group's welfare. Our
capacity for storytelling and the tales we tell may have therefore also evolved as a way of
communicating the right social norms. "The lesson is to resist tyranny and don't become a tyrant
yourself," Kruger said.
Along these lines, various studies have identified cooperation as a core theme in popular narratives
across the world. The anthropologist Daniel Smith of University College London recently visited 18
groups of hunter-gatherers of the Philippines. He found nearly 80% of their tales concerned moral
decision making and social dilemmas (as opposed to stories about, say, nature). Crucially, this
then appeared to translate to their real-life behaviour; the groups that appeared to invest the most
in storytelling also proved to be the most cooperative during various experimental tasks – exactly
as the evolutionary theory would suggest.
The Epic of Gilgamesh provides one example from ancient literature. At the start of the tale the
King Gilgamesh may appear to be the perfect hero in terms of his physical strength and courage,
but he is also an arrogant tyrant who abuses his power, using his droit de seigneur to sleep with
any woman who takes his fancy, and it is only after he is challenged by the stranger Enkidu that he
ultimately learns the value of cooperation and friendship. The message for the audience should
have been loud and clear: if even the heroic king has to respect others, so do you.
In his book On the Origin of Stories, Brian Boyd of the University of Auckland describes how these
themes are also evident in Homer's Odyssey. As Penelope waits for Odysseus's return, her suitors
spend all day eating and drinking at her home. When he finally arrives in the guise of a poor
beggar, however, they begrudge offering him any shelter (in his own home!). They ultimately get
their comeuppance as Odysseus removes his disguise and wreaks a bloody revenge.
You might assume that our interest in cooperation would have dwindled with the increasing
individualism of the Industrial Revolution, but Kruger and Carroll have found that these themes
were still prevalent in some of the most beloved British novels from the 19th and early 20th
Centuries. Asking a panel of readers to rate the principal characters in more than 200 novels
(beginning with Jane Austen and ending with EM Forster), the researchers found that the
antagonists' major flaw was most often a quest for social dominance at the expense of others or an
abuse of their existing power, while the protagonists appeared to be less individualistic and
ambitious.
Consider Jane Austen's Pride and Prejudice. The conniving and catty Miss Bingley aims to increase
her station by cosying up to the rich-but-arrogant Mr Darcy and establishing a match between her
brother and Darcy's sister while also looking down on anyone of a lower social standing. The
heroine Elizabeth Bennet, in contrast, shows very little interest in climbing their society's
hierarchy in this way, and even rejects Mr Darcy on his first proposal.
William Thackeray's Vanity Fair, meanwhile, famously plays with our expectations of what to
expect in a protagonist by placing the ruthlessly ambitious (and possibly murderous) Becky Sharp
at the very centre of the novel, while her more amiable (but bland) friend Amelia is a secondary
character. It was, in Thackeray's own words, "a novel without a hero", but in evolutionary terms
Becky's comeuppance, as she is ultimately rejected by the society around her, still signals a stark
warning to people who might be tempted to put themselves before others.
Bonnets and bonobos
Evolutionary theory can also shed light on some of the things which constitute staples of romantic
fiction, like the heroines' preferences for stable 'dad' figures (such as Mr Darcy in Pride and
Prejudice or Edward Ferrars in Sense and Sensibility) or flighty 'cads' (like the dastardly
womanisers Mr Wickham or Willoughby). The 'dads' might be the better choice for the long-term
security and protection of your children, but according to an evolutionary theory known as the
'sexy son hypothesis', falling for an unfaithful cad can have its own advantages, since he can pass on his good looks, cunning and charm to his own children, who may then also enjoy greater
sexual success.
The result is a greater chance that your genes will be passed on to a greater number of
grandchildren even if your partner's philandering brought you heartbreak along the way. It is for
this reason that literature's bad boys may still get our pulses racing, even if we know their wicked
ways. In these ways, writers like Austen are intuitive evolutionary psychologists with a "stunningly
accurate" understanding of sexual dynamics that would preempt our recent theories, Kruger said.
"I think that's part of the key for these stories' longevity. Jane Austen wrote these novels 200 years
ago and there are still movies being made of them today."
There are many more insights to be gained from these readings, including, for instance, a recent
analysis of the truly evil figures in fantasy and horror stories such as Harry Potter's nemesis Lord
Voldemort and Leatherface in The Texas Chainsaw Massacre. A grotesque appearance, which is a
common feature of villains in stories, triggers our evolved fear of contagion and disease. And given
our innate tribalism, villains often carry signs that they are a member of an "out-group" – hence
the reason that so many Hollywood baddies have foreign accents. Once again, the idea is that a
brush with these evil beings ultimately reinforces our own sense of altruism and loyalty to the
group.
The novelist Ian McEwan is one of the most celebrated literary voices to have embraced these
evolutionary readings of literature, arguing that many common elements of plot can even be found
in the machinations of our primate cousins. "If one reads accounts of the systematic nonintrusive
observations of troops of bonobo," he wrote in a book of essays on the subject, The Literary
Animal, "one sees rehearsed all the major themes of the English 19th-Century novel: alliances
made and broken, individuals rising while others fall, plots hatched, revenge, gratitude, injured
pride, successful and unsuccessful courtship, bereavement and mourning."
McEwan argues we should celebrate these evolved tendencies as the very source of fiction's power
to cross the continents and the centuries. "It would not be possible to enjoy literature from a time
remote from our own, or from a culture that was profoundly different from our own, unless we
shared some common emotional ground, some deep reservoir of assumptions, with the writer," he
added.
By drawing on that deep reservoir, a story like the Epic of Gilgamesh is still as fresh as if it had been
written yesterday, and its timeless messages of loyal friendship remain a lesson to us all, 4,000
years after its author first put stylus to tablet.
88
Vocabulary exercise
1. blessed with (paragraph 1)
It sounds like the perfect summer blockbuster. A handsome king is blessed with superhuman
strength, but his insufferable arrogance means that he threatens to wreak havoc on his kingdom.
2. engrossed (paragraph 5)
Today, we may not gather around the campfire, but the average adult is still thought to spend at
least 6% of the waking day engrossed in fictional stories on our various screens.
3. dwindled (paragraph 12)
You might assume that our interest in cooperation would have dwindled with the increasing
individualism of the Industrial Revolution, but Kruger and Carroll have found that these themes
were still prevalent
4. stark (paragraph 14)
but in evolutionary terms Becky's comeuppance, as she is ultimately rejected by the society around
her, still signals a stark warning to people who might be tempted to put themselves before others.
5. shed light on (paragraph 15)
Evolutionary theory can also shed light on some of the things which constitute staples of
romantic fiction, like the heroines' preferences for stable 'dad' figures
6. staples (paragraph 15)
Evolutionary theory can also shed light on some of the things which constitute staples of
romantic fiction, like the heroines' preferences for stable 'dad' figures
7. preempt (paragraph 16)
In these ways, writers like Austen are intuitive evolutionary psychologists with a "stunningly
accurate" understanding of sexual dynamics that would preempt our recent theories
89
Write your own sentences with the vocabulary
1. blessed with
2. engrossed
3. dwindled
4. stark
5. shed light on
6. staples
7. preempt
90
EXERCISE 15
Is lab-grown meat a good thing for us?
Summary
This article talks about lab-grown meat (artificial meat which is grown by using cells from
animals). It talks about the number of new companies that are emerging to create this type of
product and what the impact could be if it becomes successful. It ends by discussing some issues
which some experts have about lab-grown meat.
"We built a lab with glass walls. That was on purpose," Ryan Bethencourt, program director for the
biotech accelerator IndieBio, told me as we sat in the company's wide open basement workspace in
the South of Market district of San Francisco. Glass walls - it's a design philosophy that many
animal rights activists have argued could turn the world vegan, if only people could see into the
slaughterhouses that produce their meat.
But IndieBio is taking a different approach. "If we put a lightning rod in the ground and say we are
going to fund the post-animal bioeconomy," Bethencourt, a self-described ethical vegan, explained,
"then we're going to create foods that remove animals from the food system." He pointed me to
two examples in the accelerator: NotCo, a Chilean startup using a mix of plant science and artificial
intelligence to create mayonnaise and dairy products, and Finless Foods, a two-man team using
"cellular agriculture" to create lab-grown or "cultured" seafood. The latter is just one of several new
products in development that create meat without relying on actual livestock, using only a few cells taken from animals instead.
While the number of alternatives to animal protein has been growing steadily over the last several
years, it remains a relatively niche market. Bethencourt and his colleagues at IndieBio are eager to
get their food into the hands of the masses. "If we don't see our products used by billions of people,
then we've failed," he told me. But it's not just altruism that drives this emerging industry. There's
big money betting on a future of animal products made without animals.
Just look at IndieBio alum Memphis Meats, a cultured meat company that announced late last
month that it had raised $17 million in Series A funding. High-profile investors have included Bill
Gates, Richard Branson and ag industry giant Cargill, none of whom seemed deterred by the fact
that no lab-grown meat product actually has been made available to consumers yet.
Major investment also has been pouring in for high-tech products made solely from plants.
Hampton Creek - best known for its eggless mayo and dressings, and for numerous controversies involving its embattled CEO Josh Tetrick - has been dubbed a "unicorn" for its billion-dollar
valuation. (The company recently announced that it's getting in on cultured meat innovation, too.)
Products from Beyond Meat are in over 11,000 stores across the United States, supported in part
by early investment from Gates and a 2016 deal with Tyson Foods. Gates is also a backer of
Impossible Foods, which has raised upwards of $300 million since it launched in 2011 and has the
capacity to churn out 1 million pounds of "plant meat" each month in its new Oakland production
facility.
All of this big money, of course, has followed big promises. According to the innovators and
investors involved, a sustainable, well-fed, economically thriving world that makes factory farming
obsolete is within our reach.
I've spent the last few months talking to scientists and entrepreneurs in the plant-based and
cultured meat landscape. As a vegan since my college days, it's been hard for me not to get excited
by the vision they present. But as someone who has spent the better part of the last decade working
as a food justice researcher, author and activist, lingering concerns have kept my enthusiasm in
check. The truth is, food scientists, corporations and philanthropists have made big promises
before, but the food system is still a mess. Farmers and workers continue to be marginalized,
environmentally irresponsible practices remain the norm, animals are mistreated on a massive
scale, rates of hunger and food insecurity are alarmingly high and chronic diet-related disease is on
the rise across the globe.
I find myself with mixed feelings about the whole enterprise. On one hand, I'm skeptical that these
technological fixes automatically will lead us to some sort of agricultural utopia. But I'm also
concerned that many who identify with the food movement might be missing out on the chance to
shape the future of food because they're turning their backs on food science altogether.
According to Cor van der Weele, a philosopher of biology at Wageningen University in the
Netherlands who studies public perception of animal protein alternatives and has a book
forthcoming on the topic, my reaction is far from unique. "Meat has, for a long time, led to a very
polarized debate - you were either a vegetarian or a staunch meat lover," she explained. "Cultured
meat has been very effective in undermining those polarities. It brings ambivalence more to the
foreground and it also makes possible the formation of new coalitions."
I'm interested in the possibilities these new coalitions present. But it's hard not to ponder whether
what's good for Silicon Valley would be good for eaters or farmers in general.
The future of farming
When I stepped into the El Segundo, California office of Ethan Brown, CEO of Beyond Meat, the
writing was literally on the wall. Four stylishly designed posters outlined the company's mission:
improving human health; positively impacting climate change; addressing global resource
constraints; and improving animal welfare. "We're lucky that for the first time in a long time,
profit-seeking behavior and what's good are aligning," Brown told me. "The whole genius of the
thesis of what we're doing is that you don't have to have the mission in mind for it to be the right
thing to do," added Emily Byrd, a senior communications specialist at the Good Food Institute, a
non-profit that promotes and supports alternatives to animal agriculture and works with
companies such as Beyond Meat. "That's why writing efficiency into the process is so important."
Food-tech proponents insist that animals are really poor bioreactors for converting plants into
protein. They suggest we simply skip that step - either by building meat directly from plant sources or using a laboratory bioreactor to grow meat cultures. It would be a clear win for animals,
and one that could mitigate the negative environmental impacts of factory farming at a moment of
growing global demand. But what would it mean for farmers?
For one, it would require a lot less corn and soybeans - the two crops that dominate this country's
farm landscape. Shifting the commodity system wouldn't be easy, but Brown argued, "If you were
to redesign the agricultural system with the end in mind of producing meat from plants, you would
have a flourishing regional agricultural economy."
By relying on protein from a wider range of raw ingredients - from lentils to cannellini and lupin - he said companies such as his have the potential to diversify what we grow on a mass scale. It
would be better for the soil and water, and farmers theoretically could benefit from having more
say in what they grow with more markets to sell their goods.
When it comes to putting this type of system into practice, however, a lot of details still need to be
worked out. Byrd pointed me to the writings of David Bronner, CEO of Dr. Bronner's soap
company, who envisions a world of plant-based meats and regenerative organic agriculture. He
suggests that the soil fertility-boosting power of diversified legume rotations, combined with a
modest amount of Allan Savory-inspired livestock management, could put an end to the factory
farm and the massive amounts of GMO corn and soy (and the herbicides) that feed it.
Even cultured meat advocates see a future that is better for farmers once we move away from
raising animals for food. "In my mind, farmers are the ultimate entrepreneurs," said Dutch
scientist Mark Post, who created the first cultured hamburger, at the recent Reducetarian Summit
in New York. "They will extract value from their land however they can. And if this is going to fly
and be scaled up, we need a lot of crops to feed those cells. And so the farmers will at some point
switch to those crops because there will be a demand for it."
What crops and what types of farms would feed those cells? Right now it's unclear, as up to this
point cultured meat has used a grisly product called fetal bovine serum to do the job. Along with
the continued use of animal testing, it's one of the few ways that these food-tech innovators have
been unable to move beyond using animals completely. Several companies claim they've begun to
find plant-based replacements for fetal bovine serum, assisted in the discovery process by complex
machine learning systems such as Hampton Creek's recently patented Blackbird platform. But
intellectual property keeps them tight-lipped on the particulars.
As for how those crops - and others used in the production of meat alternatives - would be
produced, there's not much more clarity. In my conversations with people in the food-tech world,
the opinions on organic and regenerative agriculture ranged from strongly opposed to agnostic to
personally supportive. But with the likes of Gates and Cargill playing an increasingly big role in the
sector, it's unlikely that a wholesale switch toward these practices is on the horizon.
It's not surprising, then, that some food activists are not buying what the alternative animal
product advocates are selling.
Are we rushing things?
"We want to see a food system in the hands of people and not in the hands of profit-driven
companies," said Dana Perls, senior food and technology campaigner for Friends of the Earth
(FOE). She expressed a set of misgivings about the role of genetic engineering and synthetic
biology in the plant-based and cultured meat space. Are these products really about sustainably
feeding the world or are they more about investor profit? Are we sure we know the long-term
health impacts?
Perls noted the U.S. Food & Drug Administration's (FDA) recent decision to stop short of declaring
that a key genetically modified ingredient in Impossible Foods' plant-based "bleeding" Impossible
Burger was safe for human consumption. That determination did not mean the burger was unsafe,
however, and Impossible Foods stands by its integrity.
Perls was encouraged by the fact that some plant-based products - such as those produced by Beyond Meat - do not use GMO ingredients. And she recognized that, from a technical perspective, cultured meat does not necessarily use genetic modification either - although it could
in the future. But she and others are still uneasy: "The fact that there is a lot of market-driven hype
propelling these genetically engineered ingredients ahead of safety assessments and fully
understanding the science is concerning."
Other concerns have been raised about the healthfulness of highly processed alternative meats,
which often lack a strong nutrient profile. But food-tech advocates maintain that conventional
meat products go through multiple layers of processing, too, even if the label doesn't always reflect
it. And they are quick to note that meat is a major source of foodborne illness and has been
associated with cardiovascular disease. "Our No. 1 driver is far and away human health," Beyond
Meat's Brown explained. "It's absolutely the No. 1 thing that brings people to this brand."
Plant-based and cultured meat producers see themselves promoting sustainability, promising
healthier options in a world that demands convenience and good taste. But it's not clear yet how
universally accessible these products will be. Plant-based burgers made by Beyond Meat are for
sale in a number of grocery stores (including Safeway), for instance. But at about $12 a pound,
they're still much more expensive than conventional ground beef, which costs around $3.50 a
pound, and even more than some higher-end ground grass-fed and organic ground beef, which
sells for around $10 a pound.
But as with all new products which are produced by a new technique or technology, the price should eventually fall as the processes and techniques used become more refined and the scale of production expands. However, how long it will take until its price is on a par with that of conventional meat is still something which even the companies creating it are reluctant to give a clear answer on.
Whilst there are some valid concerns about lab-grown meat and it is wise to be sceptical about all
the claims its proponents are making about it, from everything that I have seen and heard I feel it
has the potential to be one of the greatest inventions of the 21st century. This has little to do with the tech which it employs to create this artificial meat - although that is impressive enough - but with the positive ramifications it could have not only for the planet, but also for the health and welfare of humans and animals alike. And that for me is something we all should be
celebrating.
94
Vocabulary exercise
1. niche (paragraph 3)
While the number of alternatives to animal protein has been growing steadily over the last several
years, it remains a relatively niche market.
2. deterred (paragraph 4)
High-profile investors have included Bill Gates, Richard Branson and ag industry giant Cargill,
none of whom seemed deterred by the fact that no lab-grown meat product actually has been
made available to consumers yet.
3. churn out (paragraph 6)
Impossible Foods, which has raised upwards of $300 million since it launched in 2011 and has the
capacity to churn out 1 million pounds of "plant meat" each month in its new Oakland
production facility.
4. undermining (paragraph 10)
Cultured meat has been very effective in undermining those polarities. It brings ambivalence
more to the foreground and it also makes possible the formation of new coalitions.
5. ponder (paragraph 11)
But it's hard not to ponder whether what's good for Silicon Valley would be good for eaters or
farmers in general.
6. envisions (paragraph 16)
Byrd pointed me to the writings of David Bronner, CEO of Dr. Bronner's soap company, who
envisions a world of plant-based meats and regenerative organic agriculture.
7. misgivings (paragraph 21)
She expressed a set of misgivings about the role of genetic engineering and synthetic biology in
the plant-based and cultured meat space. Are these products really about sustainably feeding the
world or are they more about investor profit?
95
Write your own sentences with the vocabulary
1. niche
2. deterred
3. churn out
4. undermining
5. ponder
6. envisions
7. misgivings
96
EXERCISE 16
Is there any difference between men’s and
women’s brains?
Summary
This article discusses whether there is any difference in the composition and working of male and
female brains and whether this has any impact on behaviour, abilities or illnesses. It presents the
findings of numerous different academic studies on the topic and draws a conclusion from these at
the end.
In a world of equal rights, pay gaps, and gender-specific toys, one question remains central to our
understanding of the two biological sexes: are men's and women's brains wired differently? If so,
how, and how is that relevant?
There are many studies that aim to explore the question of underlying differences between the
brains of men and women. But the results seem to vary wildly, or the interpretations given to the
main findings are in disagreement.
In existing studies, researchers have looked at any physiological differences between the brains of
men and women. They then studied patterns of activation in the brains of participants of both
sexes to see if men and women relate to the same external stimuli and cognitive or motor tasks in
the same way. But if there are, do any of these differences exert an influence on the way in which
men and women perform the same tasks? And do such differences affect men versus women's
susceptibility to different brain disorders?
Are there 'hardwired differences?'
Increasingly, online articles and popular science books appeal to new scientific studies to deliver
quick and easy explanations of "why men are from Mars and women come from Venus," to
paraphrase a well-known bestseller about heterosexual relationship management. One such
example is a book from the Gurian Institute, which emphasizes that baby girls and boys should be
treated differently because of their underlying neurological differences. Non-differentiated child-
rearing, the authors suggest, may ultimately be unhealthy.
Cars for boys, teddies for girls?
Dr. Nirao Shah, who is a professor of psychiatry and behavioral sciences at Stanford University in
California, also suggests that there are some basic "behaviors that are essential for survival and
propagation," related to reproduction and self-preservation, that are different in men and women.
These, he adds, are "innate rather than learned" and "in animals they are hardwired into the
brain."
Some examples brought to bear on these "innate differences" often come from studies on different
primates, such as rhesus monkeys. One experiment offered male and female monkeys traditionally
"girly" ("plush") or "boyish" ("wheeled") toys and observed which kinds of toys each would prefer.
This team of researchers found that male rhesus monkeys appeared to naturally favor "wheeled"
toys, whereas the females played predominantly with "plush" toys. This, they argued, was a sign
that "boys and girls may prefer different physical activities with different types of behaviors and
different levels of energy expenditure."
Similar findings have been reported by researchers from the United Kingdom about boys and girls
between 9 and 32 months old - a period when, some researchers suggest, the children are too
young to form gender stereotypes. Apparent differences in preferences have been explained
through a differential hardwiring in the female versus male brain. Yet, criticisms of this
perspective also abound.
Refuting studies in monkeys, some specialists argue that, no matter how similar to human beings
from a biological point of view, monkeys and other animals are still not human, and guiding our
understanding of men and women by the instincts of male and female animals is erroneous.
As for studies on infants and young children, researchers often identify pitfalls. Boys and girls,
some argue, can already develop gender stereotypes by age 2, and their taste for "girly" or "boyish"
toys may be acquired by how their parents socialize them, even if the parents themselves are not
always aware of perpetuating stereotypes.
The perspective that "gendered" preferences can be explained through hormonal activity and
differences in the brains of men and women remains, therefore, controversial.
Different brain activation patterns
Still, there are a number of studies that pinpoint different patterns of activation in the brains of
men versus women given the same task, or exposed to the same stimuli.
Navigation
One such study evaluated sex-specific brain activity in the context of visuospatial navigation. The
researchers used functional MRI (fMRI) to monitor how men's and women's brains responded to a
maze task. In their given activity, participants of both sexes had to find their way out of a complex
virtual labyrinth. It was noted that in men, the left hippocampus - which has been associated with context-dependent memory - lit up preferentially. In women, however, the areas activated during
this task were the right posterior parietal cortex, which is associated with spatial perception, motor
control, and attention, and the right prefrontal cortex, which has been linked to episodic memory.
Another study discovered "rather robust differences" between resting brain activity in men and in
women. When the brain is in a resting state, it means that it is not responding to any direct tasks - but that doesn't mean it isn't active. Scanning a brain "at rest" is meant to reveal any activity that is
"intrinsic" to that brain, and which happens spontaneously. When looking at the differences
between male and female brains "at rest," the scientists saw a "complex pattern, suggesting that
several differences between males and females in behavior might have their sources in the activity
of the resting brain."
What those differences in behaviour might amount to, however, is a matter of debate.
Social cues
An experiment targeting men's and women's response to perceived threat, for instance,
highlighted a better evaluation of threat on the part of women. The study, which used fMRI to scan
the brain activity of teenagers and adults of both sexes, found that adult women had a strong
neural response to unambiguous visual threat signals, whereas adult men and adolescents of
both sexes exhibited a much weaker response.
Last year, Medical News Today also reported on a study that pointed to different patterns of
cooperation in men and women, with possible underlying neural explanations. Groups of male-
male, female-female, and female-male couples were observed as they performed the same simple
task involving cooperation and synchronization. Overall, same-sex pairs did better than opposite-sex pairs. But interbrain coherence - that is, the relative synchronization of neural activity in the brains of a pair performing a cooperative task - was observed in different locations in the brains of
male-male versus female-female subjects.
Another study using fMRI also emphasized significant differences between how the brains of men
and women organize their activity. There are different activation patterns in the brain networks of
males and females, the researchers explain, which correlate with substantial differences in the
behavior of men and of women.
Different activation patterns, but what does that mean?
A more recent study, however, disagrees that there are any fundamental functional differences,
though the methodology of this research has been questioned. The authors of this work analyzed
the MRI scans of more than 1,400 human brains, sourced from four different datasets. Their
findings suggest that, whatever physiological differences may exist between the brain of men and
of women, they do not indicate underlying, sex-specific patterns of behaviour and socialization.
The volumes of white and gray matter in the brains of people pertaining to both sexes do not differ
significantly, the study found.
Also, the scientists pointed out that "most humans possess a mosaic of personality traits, attitudes,
interests, and behaviors," consistent with individual physiological traits, and inconsistent with a
dualistic view of "maleness" and "femaleness." They added that "The lack of internal consistency in
human brain and gender characteristics undermines the dimorphic dualistic view of human brain
and behavior". And that "we should shift from thinking of brains as falling into two classes, one
typical of males and the other typical of females, to appreciating the variability of the human brain
mosaic."
Susceptibility to brain disorders
That being said, many scientists continue to point toward evidence that the distinct physiological
patterns of male and female brains lead to a differentiated susceptibility to neurocognitive
diseases, as well as other health-related problems.
One recent study covered by MNT, for instance, suggests that microglia - which are specialized cells that belong to the brain's immune system - are more active in women, meaning that women
are more exposed to chronic pain than men.
Yet another analysis of brain scans for both sexes suggested that women show higher brain activity
in more regions of the brain than men. According to the researchers, this heightened activation - especially of the prefrontal cortex and the limbic regions, tied with impulse control and mood regulation - means that women are more susceptible to mood disorders such as depression and
anxiety.
So, are brain differences fundamental to how men and women function? The answer is maybe.
While so many studies noted different activation patterns in the brain, these did not necessarily
amount to differences in the performance of given tasks.
99
Vocabulary exercise
1. exert an influence on (paragraph 3)
But if there are, do any of these differences exert an influence on the way in which men and
women perform the same tasks? And do such differences affect men versus women's susceptibility
to different brain disorders?
2. paraphrase (paragraph 4)
new scientific studies to deliver quick and easy explanations of "why men are from Mars and
women come from Venus," to paraphrase a well-known bestseller about heterosexual
relationship management.
3. refuting (paragraph 8)
Refuting studies in monkeys, some specialists argue that, no matter how similar to human beings
from a biological point of view, monkeys and other animals are still not human
4. perpetuating (paragraph 9)
and their taste for "girly" or "boyish" toys may be acquired by how their parents socialize them,
even if the parents themselves are not always aware of perpetuating stereotypes.
5. robust (paragraph 13)
Another study discovered "rather robust differences" between resting brain activity in men and in
women.
6. unambiguous (paragraph 15)
adult women had a strong neural response to unambiguous visual threat signals, whereas adult
men and adolescents of both sexes exhibited a much weaker response.
7. pertaining to (paragraph 18)
The volumes of white and gray matter in the brains of people pertaining to both sexes do not
differ significantly, the study found.
Write your own sentences with the vocabulary
1. exert an influence on
2. paraphrase
3. refuting
4. perpetuating
5. robust
6. unambiguous
7. pertaining to
EXERCISE 17
ASMR: Making money through making very soft
sounds
Summary
This article talks about the rise in popularity on YouTube of ASMR videos (where soft sounds are made to relax or even stimulate the viewer). It briefly explains who these types of videos appeal to and why. It also says how the makers of these videos are making money through them.
Olivia Kissper creates videos that give people the tingles. She whispers into the microphone, taps
her nails on objects, strokes brushes on the camera, crinkles packaging, and even eats on screen. It
is weird and, at times, even a little creepy to watch.
But it is also extraordinarily successful. The videos she has posted over the past five years have
regularly attracted more than one million views on her YouTube channel and she has more than
294,000 subscribers. Viewers come to her in the hope of experiencing a pleasurable sensation
known as autonomous sensory meridian response, or ASMR - the tingling feeling that can form on
the scalp and emanate down the body in response to certain stimuli. She is just one of a growing
community of filmmakers who are creating content for people who crave this sensation.
The videos are slow-paced and hypnotic, typically running from 25 minutes to an hour. They show footage of people performing a strange array of tasks - from caressing different objects to brushing their hair - that are designed to produce what are sometimes described as a "brain
massage" or "shivers" that lead to an intense sense of calm. They capture the sounds these actions
produce using high-quality recording equipment, often using a binaural microphone to create a 3-
dimensional sensation.
Kissper's own recent posts reveal a playful range of topics: one video bears the title "Eating Snake
Eggs" actually an exotic fruit and another one called "Shsssssh! It'll Be OK!" that is designed to
induce restful slumber. Ten years ago, whispering into a camera wasn't exactly a career move, but
now this kind of sound creation has become not only a social phenomenon but a way to make a
living.
What started out as niche content is poised to become big business as brands and the marketing
industry rush to tap into a cultural trend. For a content producer like Kissper, who plies her trade
from Costa Rica, this evolution isn't surprising. "I think it's inevitable," she says. "It's another
platform for people to promote things. When you look at some of the ASMR channels, they're
humongous, so of course big companies will jump in."
Type ASMR into YouTube's search engine and it will throw out over 12.7 million results. Some of
the most popular videos have been viewed more than 20 million times. With these sorts of
numbers, content creators can start to generate thousands of dollars from advertising that
YouTube places at the start of their videos.
But with the loyal community that has formed around ASMR, brands are looking to do more than
simply buy advertising space. Instead they want to get in on the act. Last year IKEA launched an
advertising series it called "Oddly IKEA", for which it developed six ASMR-style videos, including
one long-form video that ran to 25 minutes. The video presents a tactile range of items that
students might need for their university rooms, shown on screen as a narrator gently describes the
merits of each product. Bed sheets are gently stroked, pillows are squished, fabric is scratched and
hangers are delicately jangled as the hushed voice gives details about the thread counts and duvet
fibres of the products on show. Viewers are given detailed price information, colour options and
where they can buy the products.
"Since this content was such an untapped market for brands, we didn't know what to expect," says
Kerri Homsher, a media specialist for IKEA USA. "But we wanted to ensure that we were
approaching our videos as authentically as possible to attract the ASMR community."
The strategy paid off in a big way, according to Homsher. The video went viral and to date has had
1.8m views. IKEA says it saw a 4.5% increase in sales in store and a 5.1% increase online during the
advertising campaign. "Because the genre is a bit strange for the unaccustomed, there were quite a
few people who were baffled by what was happening," adds Della Mathew, creative director at
advertising agency Ogilvy and Mather New York, which collaborated with IKEA on the videos. "But
the ASMR community came to the rescue and we found a very natural community of advocates
who were explaining the genre to other people."
As YouTube searches for ASMR grow at a rapid rate - interest in ASMR has doubled from June 2016 to June 2018, according to Google data - other brands are also paying attention. Dove
Chocolate, KFC, and a Swedish Beer maker, Norrland Guld Ljus, are among the companies that
have leveraged the growing appeal of ASMR and incorporated it in their marketing campaigns to
promote their products.
Brands are also seeking out ASMR creators who have dedicated followers. Lily Whispers made her
first ASMR video five years ago at the age of 19 and was one of the pioneers of the form. In
addition to a full-time job in digital marketing, she produces two or three videos per week, in
which she may feature a brand promotion.
She sees her role as that of a big sister, especially because 80% of her followers are women under
30, with many aged between 14 and 17 years old. While Whispers is very aware that the interaction with her followers has the potential to run deep, she's cautious about the limitations of online
relationships too. "I do think that ASMR can come close to providing comfort and a safe haven but
I am a strong advocate of talk therapy and human connection and oftentimes urge my viewers to
seek help beyond watching videos on YouTube," she says.
Regardless, the powerful connection that ASMR creators forge with their audiences has left brands
eager to sponsor personalities like Whispers to promote their products. And they're willing to pay.
Depending on the views per video and the level of influence, brands might pay between $1,000
and $3,000 plus for a campaign, according to Savannah Newton, a senior talent manager for
Ritual Network, a digital talent agency. The firm manages a roster of ASMR creators - it calls them "ASMRtists" - including Lily Whispers and Olivia Kissper.
It helps its clients with direct monetisation on platforms like YouTube, distribution of audio to
Spotify and iTunes, and also helps secure branded content partnerships.
"Consumers of ASMR feel like they can relate to the creators on a personal level so they trust what
they say when it comes to the promotion of brands," says Newton.
The most successful ASMR creators can also monetise their talents via the Patreon app, which
offers creative people, from podcasters to musicians, a way of generating funding directly from
their fans, subscribers, and patrons. In exchange, artists can offer premium content for their fans.
Olivia Kissper lets these fans have early access to her videos, as well as behind-the-scenes exclusives and live Skype sessions. Some ASMR creators have even created their own apps, like
Ben Nicholls, a 20-year-old student at Liverpool University who goes by the online moniker of The
ASMR Gamer. He whispers to a predominantly male audience – about consumer technology,
video games, football, and beer.
"What's great about the ASMR subculture is that channels are generally smaller, but the audience
that it grows and retains is much more involved than other communities on YouTube, where it can
be more passive," Nicholls says.
According to Google data, searches for ASMR tend to peak at 10:30pm regardless of time zone,
which Nicholls suggests is because people use this content to help them sleep. He has even written
a book on the subject, titled ASMR: The Sleep Revolution.
Nicholls calls ASMR "the fastest growing genre for relaxation online, lulling millions to a peaceful
sleep each night whilst combating anxiety and insomnia".
So far there has been little scientific research into ASMR. "In fact, there is no scientific evidence
demonstrating that these videos produce consistent and reliable neurological responses," says
Tony Ro, a professor of neuroscience. But he concedes that "it may help some people sleep better
or reduce their anxieties or depression".
But for those able to induce this euphoric state with a video camera and a microphone, there's
ample anecdotal evidence that ASMR videos do offer a pleasant elixir. "I think people are starved
of close intimate personal attention, eye contact, comfort, being soothed by a human voice," says
Olivia Kissper. She jokes that the ability to induce tingles in other people is akin to having a
superpower. And right now, that superpower seems to have real value.
Vocabulary exercise
1. footage (paragraph 3)
The videos are slow-paced and hypnotic, typically running from 25 minutes to an hour. They show footage of people performing a strange array of tasks - from caressing different objects to brushing their hair
2. is poised to (paragraph 5)
What started out as niche content is poised to become big business as brands and the marketing
industry rush to tap into a cultural trend.
3. baffled (paragraph 9)
Because the genre is a bit strange for the unaccustomed, there were quite a few people who were
baffled by what was happening,
4. leveraged (paragraph 10)
Dove Chocolate, KFC, and a Swedish Beer maker, Norrland Guld Ljus, are among the companies
that have leveraged the growing appeal of ASMR and incorporated it in their marketing
campaigns to promote their products.
5. forge with (paragraph 13)
Regardless, the powerful connection that ASMR creators forge with their audiences has left
brands eager to sponsor personalities like Whispers to promote their products.
6. concedes (paragraph 20)
there is no scientific evidence demonstrating that these videos produce consistent and reliable
neurological responses," says Tony Ro, a professor of neuroscience. But he concedes that "it may
help some people sleep better
7. anecdotal (paragraph 21)
But for those able to induce this euphoric state with a video camera and a microphone, there's
ample anecdotal evidence that ASMR videos do offer a pleasant elixir. "I think people are starved
of close intimate personal attention, eye contact, comfort, being soothed by a human voice," says
Olivia Kissper.
Write your own sentences with the vocabulary
1. footage
2. is poised to
3. baffled
4. leveraged
5. forge with
6. concedes
7. anecdotal
EXERCISE 18
Why many of us don’t really work when at work
Summary
This article examines idleness (not doing anything) at work. It explains that this is more common
than thought and suggests some reasons why this is happening or being allowed to happen.
Two years ago a civil servant in the German town of Menden wrote a farewell message to his
colleagues on the day of his retirement stating that he had not done anything for 14 years. "Since
1998," he wrote, "I was present but not really there. So I'm going to be well prepared for retirement
Adieu." The e-mail was leaked to Germany's Westfalen-Post and quickly became world news.
The public work ethic had been wounded and in the days that followed the mayor of Menden
lamented the incident, saying he "felt a good dose of rage."
The municipality of Menden sent out a press release regretting that the employee never informed
his superiors of his inactivity. In a lesser-known interview with the German newspaper Bild a
month later, the former employee responded that his e-mail had been misconstrued. He had not
been avoiding work for 14 years; as his department grew, his assignments were simply handed over
to others. "There never was any frustration on my part, and I would have written the e-mail even
today. I have always offered my services, but it's not my problem if they don't want them," he said.
The story of this German bureaucrat raised some questions about modern-day slacking. Does
having a job necessarily entail work? If not, how and why does a job lose its substance? And what
can be done to make employees less lazy – or is that even the right question to ask in a system
that's set up in the way that ours is? After talking to 40 dedicated loafers, I think I can take a stab
at some answers.
Most work sociologists tend toward the view that non-work at work is a marginal, if not negligible,
phenomenon. What all statistics point towards is a general intensification of work with more and
more burnouts and other stress syndromes troubling us. Yet there are more-detailed surveys
reporting that the average time spent on private activities at work is between 1.5 and three hours a
day. By measuring the flows of audiences for certain websites, it has also been observed that, by
the turn of the century, 70 percent of the U.S. internet traffic passing through pornographic sites
did so during working hours, and that 60 percent of all online purchases were made between 9
a.m. and 5 p.m. What is sometimes called "cyberloafing" has, furthermore, not only been observed
in the U.S. (in which most work-time surveys are conducted), but also in nations such as
Singapore, Germany, and Finland.
Even if the percentage of workers who claim they are working at the pinnacle of their capacity all
the time is slowly increasing, the majority still remains unaffected. In fact, the proportion of people
who say they never work hard has long been far greater than those who say they always do. The
articles and books about the stressed-out fraction of humanity can be counted in the thousands,
but why has so little been written about this opposite extreme?
The few books that have been written on this topic were written by slackers themselves. In Bonjour
Paresse, French author Corinne Maier offers her own explanation for professional detachment.
Maier opens the book (which eventually cost her a job) by declaring that social science has
miserably failed to understand the mechanisms of office work: "Millions of people work in
business, but its world is opaque. This is because the people who talk about it the most – and I
mean the university professors – have never worked there; they aren't in the know." Having spent
years as a bureaucrat at the utility Électricité de France, Maier contends that work is increasingly
reduced to "make-believe," that at the office, "image counts more than product, seduction more
than production."
Under these circumstances, feigned obedience and fake commitment become so central to working
that a deviation from those acts can result in embarrassment for everyone. As she recalls: "One
day, in the middle of a meeting on motivation, I dared to say that my job was a means to an end
and that the only reason I came to work was to put food on the table. There were 15 seconds of
absolute silence, and everyone seemed uncomfortable. Even though the French word for work,
'travail,' etymologically derives from an instrument of torture, it's imperative to let it be known, no
matter the circumstance, that you are working because you are interested in your work."
The gap between image and substance is also a recurring theme in the comic Dilbert, whose
creator, Scott Adams, was inspired by his uninspiring stints in the working world. Again and again,
Adams questions not only the link between work and rationality, but also the relation between
work and productivity: "Work can be defined as 'anything you'd rather not be doing,'" he says.
"Productivity is a different matter."
In the preface to the Dilbert collection This Is the Part Where You Pretend to Add Value, Adams
openly gives his impressions of 16 years of employment at Crocker National Bank and Pacific Bell:
"If I had to describe my 16 years of corporate work with one phrase, it would be 'pretending
to add value.' … The key to career advancement is appearing valuable despite all hard
evidence to the contrary. … If you add any actual value to your company today, your career
is probably not moving in the right direction. Real work is for people at the bottom who
plan to stay there."
Other office workers have presented similar accounts. In The Living Dead, David Bolchover rues
"the dominance of image over reality, of obfuscation over clarity, of politics over performance,"
and in City Slackers, Steve McKevitt, a disillusioned "business and communications expert,"
gloomily declares: "In a society where presentation is everything, it's no longer about what you do,
it's about how you look like you're doing it."
The simulation, the glossing over, the loss of meaning, the jargon, the games, the office politics, the
crises, the boredom, the despair, and the sense of unreality – these are ingredients that often
reappear in popular accounts of working life. The risk when they only appear in popular culture is
that we begin regarding them as metaphors or exaggerations that may well apply to our own jobs
but not to work in general. But what would happen if we started taking these "unserious" accounts
of working life more seriously?
Consider the last novel by David Foster Wallace, The Pale King, in which an IRS worker dies by his
desk and remains there for days without anyone noticing that he is dead. This might be read as a
brilliant satire of how work drains liveliness such that no one notices whether you are dead or
alive. However, in the strict sense of the word, this was not fiction. In 2004, a tax-office official in
Finland died in exactly the same way while checking tax returns. Although there were about 100
other workers on the same floor and some 30 employees in the auditing department where he
worked, it took them two days to notice that he was dead. None of them seemed to feel the loss of
his labors; he was only found when a friend stopped by to have lunch with him.
How could no one notice? I talked with over 40 people who spent half of their working hours on
private activities – a phenomenon I call "empty labor." I wanted to know how they did it, and I
wanted to know why. "Why" turned out to be the easy part: For most people, work simply sucks.
We hate Mondays and we long for Fridays – it's not a coincidence that evidence points towards a
peak in cardiac mortality on Monday mornings.
There are, of course, exceptional cases. According to a Gallup report from last year, 13 percent of
employees from 142 countries are "engaged" in their jobs. However, twice as many are "actively
disengaged" they're negative and potentially hostile to their organizations. The majority of
workers, though, are simply "checked out," the report says.
Foot-dragging, shirking, loafing, and slacking are ways of avoiding work within the frames of wage
labor. In 1911, Frederick W. Taylor, the notorious founder of "scientific management," called work
avoidance "the greatest evil with which the working-people of both England and America are now
afflicted." His attempts to eradicate slacking set the course of a perpetual cat-and-mouse game,
between the time-study men and the worker collective, that would live much longer than the
industrial piece-work system.
For Taylor, the project of making the labor process transparent was an important step towards
efficiency not only because it made the optimization of each operation possible, but also because
it siphoned power from the worker collective, with its "natural" inclination towards "loafing," and
gave it to management, or as Taylor would have it, to Science. Today, now that the labor process
has become opaque in new ways, the "evil" of which Taylor once spoke may have returned for
good.
Something that would have surprised Taylor is that slacking is not always the product of
discontent, but also of having too few tasks to fill the hours. According to repeated surveys by
Salary.com, not having "enough work to do" is the most common reason for slacking off at work.
The service sector offers new types of work in which periods of downtime are long and tougher to
eliminate than on the assembly line: A florist watching over an empty flower shop, a logistics
manager who did all his work between 2 and 3 p.m., and a bank clerk responsible for a not-so-
popular insurance program are some examples of employees I talked with who never actively
strived to work less. Like the civil servant of Menden, they offered their services, but when the flow
of assignments petered out, they did not shout it from the rooftops.
Many would say that the underworked should talk to their bosses, but that doesn't always help. I
spoke with a Swedish bank clerk who said he was only doing 15 minutes' worth of work a day. He
asked his manager for more responsibilities, to no avail, then told his boss of his idleness. Did he
get more to do? Barely. When I spoke with him, he was working three-hour days – there were laws
that barred any workday shorter than that – and his intervention only added another 15 minutes to
his workload.
There's a widely held belief that more work always exists for those who want it. But is that true?
Everywhere we look, technology is replacing human labor. In OECD countries, productivity has
more than doubled since the '70s. Yet there has been no perceptible movement to reduce workers'
hours in relation to this increased productivity; instead, the virtues of "creating jobs" are
trumpeted by both Democrats and Republicans. The project of job creation hasn't been a complete
failure, but the fact of unemployment still looms.
What's more, the jobs that are created often come up short on providing fulfillment. Involuntary
slacking may first be conceived of as real bliss: "Hey, I don't have to work!" one of my interviewees
recalls. But as the years pass by, most of us will crave some type of meaningful activity. I
interviewed an archivist who wrote his master's thesis while at work and a subway-ticket collector
who composed music in his little booth. If you're lucky, these activities may be pursued within the
frame of wage labor – but that's very hard to come by. Our economy produces inequalities in
income and job security, but also, we should acknowledge, in stimulation and substance.
Vocabulary exercise
1. misconstrued (paragraph 2)
the former employee responded that his e-mail had been misconstrued. He had not been
avoiding work for 14 years; as his department grew, his assignments were simply handed over to
others.
2. entail (paragraph 3)
The story of this German bureaucrat raised some questions about modern-day slacking. Does
having a job necessarily entail work?
3. take a stab at (paragraph 3)
is that even the right question to ask in a system that's set up in the way that ours is? After talking
to 40 dedicated loafers, I think I can take a stab at some answers.
4. feigned (paragraph 7)
Under these circumstances, feigned obedience and fake commitment become so central to
working that a deviation from those acts can result in embarrassment for everyone.
5. a means to an end (paragraph 7)
One day, in the middle of a meeting on motivation, I dared to say that my job was a means to an
end and that the only reason I came to work was to put food on the table.
6. derives from (paragraph 7)
Even though the French word for work, 'travail,' etymologically derives from an instrument of
torture,
7. to no avail (paragraph 19)
He asked his manager for more responsibilities, to no avail, then told his boss of his idleness. Did
he get more to do? Barely.
Write your own sentences with the vocabulary
1. misconstrued
2. entail
3. take a stab at
4. feigned
5. a means to an end
6. derives from
7. to no avail
EXERCISE 19
The worrying disappearance of the right to free
speech at British universities
Summary
This opinionated article talks about how many students at UK universities are trying to censor
what can be openly discussed at these institutions. The author refers to them as the 'Stepford
students' (after the film 'The Stepford Wives' (where a group of women are converted into being
the stereotypical perfect wives for their husbands)). The author gives many examples of when this has
happened and why the students believe it is justified. It ends by saying what the consequences of this could be.
Have you met the Stepford students? They're everywhere. On campuses across the land. Sitting
stony-eyed in lecture halls or surreptitiously policing beer-fuelled banter in the uni bar. They look
like students, dress like students, smell like students. But their student brains have been replaced
by brains bereft of critical faculties and programmed to conform. To the untrained eye, they seem
like your average book-devouring, ideas-discussing, H&M-adorned youth, but anyone who's spent
more than five minutes in their company will know that these students are far more interested in
shutting debate down than opening it up.
I was attacked by a swarm of Stepford students this week. On Tuesday, I was supposed to take part
in a debate about abortion at Christ Church, Oxford. I was invited by the Oxford Students for Life
to put the pro-choice argument against the journalist Timothy Stanley, who is pro-life. But
apparently it is forbidden for men to talk about abortion. A mob of furious feminist Oxford
students, all robotically uttering the same stuff about feeling offended, set up a Facebook page
littered with expletives and demands for the debate to be called off. They said it was outrageous
that two human beings 'who do not have uteruses' should get to hold forth on abortion – identity
politics at its most basely biological – and claimed the debate would threaten the 'mental safety' of
Oxford students. Three hundred promised to turn up to the debate with 'instruments' – heaven
knows what – that would allow them to disrupt proceedings.
Incredibly, Christ Church capitulated, the college's censors living up to the modern meaning of
their name by announcing that they would refuse to host the debate on the basis that it now raised
'security and welfare issues'. So at one of the highest seats of learning on Earth, the democratic
principle of free and open debate, of allowing differing opinions to slog it out in full view of
discerning citizens, has been violated, and students have been rebranded as fragile creatures,
overgrown children who need to be guarded against any idea that might prick their souls or
challenge their prejudices. One of the censorious students actually boasted about her role in
shutting down the debate, wearing her intolerance like a badge of honour in an Independent
article in which she argued that, 'The idea that in a free society absolutely everything should be
open to debate has a detrimental effect on marginalised groups.'
This isn't the first time I've encountered the Stepford students. Last month, at Britain's other
famously prestigious university, Cambridge, I was circled by Stepfords after taking part in a debate
on faith schools. It wasn't my defence of parents' rights to send their children to religious schools
they wanted to harangue me for – much as they loathed that liberal position – it was my
suggestion, made in this magazine and elsewhere, that 'lad culture' doesn't turn men into rapists.
Their mechanical minds seemed incapable of computing that someone would say such a thing.
Their eyes glazed with moral certainty, they explained to me at length that culture warps minds
and shapes behaviour and that is why it is right for students to strive to keep such wicked,
misogynistic stuff as the Sun newspaper and sexist pop music off campus. 'We have the right to
feel comfortable,' they all said, like a mantra. One – a bloke – said that the compulsory sexual
consent classes recently introduced for freshers at Cambridge, to teach what is and what isn't rape,
were a great idea because they might weed out 'pre-rapists': men who haven't raped anyone but
might. The others nodded. I couldn't believe what I was hearing. Pre-rapists! Had any of them read
Philip K. Dick's dystopian novella about a wicked world that hunts down and punishes pre-
criminals, I asked? None had.
When I told them that at the end of the last millennium I had spent my student days arguing
against the very ideas they were now spouting – against the claim that gangsta rap turned black
men into murderers or that Tarantino flicks made teens go wild and criminal – not so much as a
flicker of reflection crossed their faces. 'Back then, the people who were making those censorious,
misanthropic arguments about culture determining behaviour weren't youngsters like you,' I said.
'They were older, more conservative people, with blue rinses.' A moment's silence. Then one of the
Stepfords piped up. 'Maybe those people were right,' he said. My mind filled with a vision of Mary
Whitehouse cackling to herself in some corner of the cosmos.
If your go-to image of a student is someone who's free-spirited and open-minded, who loves
having a pop at orthodoxies, then you urgently need to update your mind's picture bank. Students
are now pretty much the opposite of that. It's hard to think of any other section of society that has
undergone as epic a transformation as students have. From freewheelin' to ban-happy, from askers
of awkward questions to suppressors of offensive speech, in the space of a generation. My
showdown with the debate-banning Stepfords at Oxford and the pre-crime promoters at
Cambridge echoed other recent run-ins I've had with the intolerant students of the 21st century.
I've been jeered at by students at the University of Cork for criticising gay marriage; cornered and
branded a 'denier' by students at University College London for suggesting industrial development
in Africa should take precedence over combating climate change; lambasted by students at
Cambridge (again) for saying it's bad to boycott Israeli goods. In each case, it wasn't the fact the
students disagreed with me that I found alarming – disagreement is great! – it was that they were
so plainly shocked that I could have uttered such things, that I had failed to conform to what they
assume to be right, that I had sought to contaminate their campuses and their fragile grey matter
with offensive ideas.
Where once students might have allowed their eyes and ears to be bombarded by everything from
risqué political propaganda to raunchy rock, now they insulate themselves from anything that
might dent their self-esteem and, crime of crimes, make them feel 'uncomfortable'. Student groups
insist that online articles should have 'trigger warnings' in case their subject matter might cause
offence.
The 'no platform' policy of various student unions is forever being expanded to keep off campus
pretty much anyone whose views don't chime perfectly with the prevailing groupthink. Where once
it was only far-right rabble-rousers who were no-platformed, now everyone from Zionists to
feminists who hold the wrong opinions on transgender issues to 'rape deniers' (anyone who
questions the idea that modern Britain is in the grip of a 'rape culture') has found themselves
shunned from the uni-sphere. My Oxford experience suggests pro-life societies could be next. In
September the students' union at Dundee banned the Society for the Protection of Unborn
Children from the freshers' fair on the basis that its campaign material is 'highly offensive'.
Barely a week goes by without reports of something 'offensive' being banned by students. Robin
Thicke's rude pop ditty 'Blurred Lines' has been banned in more than 20 universities. Student
officials at Balliol College, Oxford, justified their ban as a means of 'prioritising the wellbeing of
our students'. Apparently a three-minute pop song can harm students' health. More than 30
student unions have banned the Sun, on the basis that Page Three could turn all those pre-rapists
into actual rapists. Radical feminist students once burned their bras – now they insist that models
put bras on. The union at UCL banned the Nietzsche Society on the grounds that its existence
threatened 'the safety of the UCL student body'.
Stepford concerns are over-amplified on social media. No sooner is a contentious subject raised
than a university 'campaign' group appears on Facebook, or a hashtag on Twitter, demanding that
the debate is shut down. Technology means that it has never been easier to whip up a false sense of
mass outrage and target that synthetic anger at those in charge. The authorities on the receiving
end feel so besieged that they succumb to the demands and threats.
Heaven help any student who doesn't bow before the Stepford mentality. The students' union at
Edinburgh recently passed a motion to 'End lad banter' on campus. Laddish students are being
forced to recant their bantering ways. Last month, the rugby club at the London School of
Economics was disbanded for a year after its members handed out leaflets advising rugby lads to
avoid 'mingers' (ugly girls) and 'homosexual debauchery'. Under pressure from LSE bigwigs, the
club publicly recanted its 'inexcusably offensive' behaviour and declared that its members have 'a
lot to learn about the pernicious effects of banter'. They're being made to take part in equality and
diversity training. At British unis in 2019, you don't just get education – you also get re-education,
Soviet style.
The censoriousness has reached its nadir in the rise of the 'safe space' policy. Loads of student
unions have colonised vast swaths of their campuses and declared them 'safe spaces' – that is,
places where no student should ever be made to feel threatened, unwelcome or belittled, whether
by banter, bad thinking or 'Blurred Lines'. Safety from physical assault is one thing – but safety
from words, ideas, Zionists, lads, pop music, Nietzsche? We seem to have nurtured a new
generation that believes its self-esteem is more important than everyone else's liberty.
This is what those censorious Cambridgers meant when they kept saying they have the 'right to be
comfortable'. They weren't talking about the freedom to lay down on a chaise longue – they meant
the right never to be challenged by disturbing ideas or mind-battered by offensiveness. At precisely
the time they should be leaping brain-first into the rough and tumble of grown-up, testy
discussion, students are cushioning themselves from anything that has the whiff of controversy.
We're witnessing the victory of political correctness by stealth. As the annoying 'PC gone mad!'
brigade banged on and on about extreme instances of PC – schools banning 'Baa Baa, Black
Sheep', etc. – nobody seems to have noticed that the key tenets of PC, from the desire to destroy
offensive lingo to the urge to re-educate apparently corrupted minds, have been swallowed whole
by a new generation. This is a disaster, for it means our universities are becoming breeding
grounds of dogmatism. As John Stuart Mill said, if we don't allow our opinion to be 'fully,
frequently, and fearlessly discussed', then that opinion will be 'held as a dead dogma, not a living
truth'.
One day, these Stepford students, with their lust to ban, their war on offensive lingo, and their
terrifying talk of pre-crime, will be running the country. And then it won't only be those of us who
occasionally have cause to visit a campus who have to suffer their dead dogmas.
Vocabulary exercise
1. spouting (paragraph 6)
When I told them that at the end of the last millennium I had spent my student days arguing
against the very ideas they were now spouting – against the claim that gangsta rap turned black
men into murderers
2. jeered (paragraph 7)
My showdown with the debate-banning Stepfords at Oxford and the pre-crime promoters at
Cambridge echoed other recent run-ins I've had with the intolerant students of the 21st century.
I've been jeered at by students at the University of Cork for criticising gay marriage;
3. uttered (paragraph 7)
it wasn't the fact the students disagreed with me that I found alarming – disagreement is great! – it
was that they were so plainly shocked that I could have uttered such things, that I had failed to
conform to what they assume to be right
4. besieged (paragraph 11)
and target that synthetic anger at those in charge. The authorities on the receiving end feel so
besieged that they succumb to the demands and threats.
5. nadir (paragraph 13)
The censoriousness has reached its nadir in the rise of the 'safe space' policy. Loads of student
unions have colonised vast swaths of their campuses and declared them 'safe spaces'
6. swaths (paragraph 13)
The censoriousness has reached its nadir in the rise of the 'safe space' policy. Loads of student
unions have colonised vast swaths of their campuses and declared them 'safe spaces'
7. tenets (paragraph 14)
nobody seems to have noticed that the key tenets of PC, from the desire to destroy offensive lingo
to the urge to re-educate apparently corrupted minds, have been swallowed whole by a new
generation.
Write your own sentences with the vocabulary
1. spouting
2. jeered
3. uttered
4. besieged
5. nadir
6. swaths
7. tenets
EXERCISE 20
Changing the image of classical music
Summary
This article is about how the classical music world is trying to attract a new audience. It talks about
the different things they are trying to get a younger and more diverse audience to both be
interested in the musical genre and to attend classical music performances.
Anna Goldsworthy, an Australian pianist and festival director, wrote recently about her fears for
her art-form as she played Chopin's funeral-march in B-flat minor. Though we are all headed
towards our own funerals, "it is difficult to escape the fact that my audience is several decades
further down the road than I am. And I am less and less confident that a new audience will come
marching in to replace them."
Her fears are not as outlandish as they might sound: a 2010 study by the Australian Bureau of
Statistics found the largest proportion of classical concert-goers are aged between 65 and 74, and
the same problem is bemoaned far beyond Australia. So promoters and classical-music venues are
keen to do anything that will lure in youngsters. There is some evidence that their initiatives are
bearing fruit: in 2015, more than 37,500 people bought their first tickets for the BBC Proms, a
concert series held in London's Royal Albert Hall every summer since 1895. Over 8,600
under-18s attended concerts across the entire season, many of them standing in a pit in the classic
"promming" experience, reminiscent of a rock concert. Carnegie Hall, a venerable New York
venue, has seen a decline in the age of those attending single concerts from 58 in 2006 to 48 in
2014 (though given the breadth of Carnegie Hall's offerings, it is hard to be sure whether it is
classical or contemporary fare pulling in the punters).
This may not be enough. There is scant evidence that a student attracted by cheap and cheerful
Prom tickets morphs into a paid-up attendee of full-fare concerts. Bringing in a new generation
will be hard without shaking off classical music's reputation for being elitist and uncool. Too many
people in the classical bubble – a cabal of concert-goers immersed in it from a young age – assume
that wayward youth will inevitably find their way to symphonies. This looks complacent. Classical
is competing not only with more modern styles, but with its own image of being the preserve of the
educated, the white, the middle-aged and the middle-class.
Once a popular art form with its own proverbial rock-stars, the medium now mostly consists of
recycling the same canonical works by European men from centuries past. The name "classical"
implies a historical past, yet it is much broader than this label, and recognising that is a first step to
broadening the audience. Clive Gillinson, Carnegie Hall's executive director, says "we try not to
package music…labels put people off, it is better if people don't know what they are listening to".
Classical includes a range of new artists and sounds: from the minimalism of Steve Reich to the
percussion of Inuksuit Ensemble.
Contemporary artists like Maya Beiser, who transforms our expectations of the soloist by using
technology to layer her own playing of different parts on the cello, and Max Richter, who merges
violin, orchestra and synthesiser, should be considered no different to the Stravinskys and
Schuberts of eras past. Such artists create a valuable entry point for new listeners, whatever their
age. And after entry comes exploration, which comes naturally to those who have grown up with
YouTube and Spotify.
Such novel work is best seen live: even the most high-tech speakers are no match, and
one of classical music's advantages over its genre rivals is the joy of watching the astonishing
virtuosity of its best musicians. Traditionally, most classical venues have been unwilling to
experiment with form, style and content. Rock and jazz offer no such fastidiousness, and so
naturally evolve with their audiences.
Many theatres and artists have recognised this: they offer last-minute tickets (appealing to
impulsive young folks), shorter concerts, later start times, unusual venues and "taster"
experiences, where people can drop in and out at their leisure. Embracing new technology has also
been instrumental: online streaming of live concerts offers at least the visual if not the perfect
auditory experience, allowing theatres to reach out to far-away audiences. Live transmissions from
the Metropolitan Opera House, New York, are now seen in more than 2,000 screens in 70
countries.
Creating a relaxing atmosphere that encourages rather than dissuades is essential for the curious-
but-unsure listener. La Philharmonie de Paris, a concert hall in the northeast of Paris, opened last
year with the ambition of becoming a new hub, offering family weekends, often structured around
a particular theme or genre, as well as unique and adaptable spaces. "Multi-Story", an orchestra
founded in 2011 by Kate Whitley, a composer, and Christopher Stark, a conductor, takes classical
music to unexpected places, the most popular being a car park in South London. "Nonclassical", a
record label and London club night, has established a loyal fan-base by showcasing its artists and
their blend of electronic classical at a range of underground clubs and bars for over a decade. Now
the label hopes to get people through the doors of more traditional venues, too. A club night held
at London's Royal Albert Hall suggested that this novel idea (for the classical music world at least)
may be working: of those that attended, 41% had never been to the concert hall before.
A recent festival held by the Barbican, a London venue, proved how effective this approach could
be. It was the Proms with a twist. Like the Proms, there was no dress code and the tickets were
dead cheap (£40 for the weekend), but festival-goers could grab a pint, wander from stage to stage,
and use their phone to take photos – all scandalous, by classical music standards. Artists were
encouraged to talk to their audience, provide a brief description of why they decided to play a
certain piece and to even make them laugh. It seemed to work: Huw Humphreys, head of Music at
the Barbican, noted that many who had bought one-day tickets for the Saturday returned for the
second day, too.
Puncturing the punctiliousness that many associate with classical music is a good start. It is often
more a product of the audience than the musicians. Gillian Moore, head of music at London's
Southbank Centre, recently recalled being tut-tutted by an audience member who objected to her
vigorous head movements during a performance. Seasoned concert-goers such as Ms Moore can
shake off such rebukes; many first-time attendees might have left it at that.
Vocabulary exercise
1. outlandish (paragraph 2)
Her fears are not as outlandish as they might sound: a 2010 study by the Australian Bureau of
Statistics found the largest proportion of classical concert-goers are aged between 65 and 74,
2. lure (paragraph 2)
So promoters and classical-music venues are keen to do anything that will lure in youngsters.
There is some evidence that their initiatives are bearing fruit: in 2015, more than 37,500 people
bought their first tickets for the BBC Proms
3. complacent (paragraph 3)
Too many people in the classical bubble – a cabal of concert-goers immersed in it from a young age –
assume that wayward youth will inevitably find their way to symphonies. This looks
complacent.
4. fastidiousness (paragraph 6)
Traditionally, most classical venues have been unwilling to experiment with form, style and
content. Rock and jazz offer no such fastidiousness, and so naturally evolve with their
audiences.
5. embracing (paragraph 7)
Embracing new technology has also been instrumental: online streaming of live concerts offers at
least the visual if not the perfect auditory experience, allowing theatres to reach out to far-away
audiences.
6. novel (paragraph 8)
A club night held at London's Royal Albert Hall suggested that this novel idea (for the classical
music world at least) may be working: of those that attended, 41% had never been to the concert
hall before.
7. punctiliousness (paragraph 10)
Puncturing the punctiliousness that many associate with classical music is a good start. It is
often more a product of the audience than the musicians.
Write your own sentences with the vocabulary
1. outlandish
2. lure
3. complacent
4. fastidiousness
5. embracing
6. novel
7. punctiliousness
EXERCISE 21
The discovery which is reshaping the theory of
our origins
Summary
This article is about how the discovery of human bones in Morocco has called into question the
existing theory of human evolution. It details what happened with the discovery and the
ramifications this may have on our understanding of how we evolved. It ends with a scientist being
sceptical about the importance of the discovery.
Fossils recovered from an old mine on a desolate mountain in Morocco have rocked one of the
most enduring foundations of the human story: that Homo sapiens arose in a cradle of humankind
in East Africa 200,000 years ago.
Archaeologists unearthed the bones of at least five people at Jebel Irhoud, a former barite mine
100 km west of Marrakesh, in excavations that lasted years. They knew the remains were old, but
were flabbergasted when dating tests revealed that a tooth and stone tools found with the bones
were about 300,000 years old.
"My reaction was a big 'wow'," said Jean-Jacques Hublin, a senior scientist on the team at the Max
Planck Institute for Evolutionary Anthropology in Leipzig. "I was expecting them to be old, but not
that old."
Hublin said the extreme age of the bones makes them the oldest known specimens of modern
humans and poses a major challenge to the idea that the earliest members of our species evolved in
a "Garden of Eden" in East Africa one hundred thousand years later. "This gives us a completely
different picture of the evolution of our species. It goes much further back in time, but also the very
process of evolution is different to what we thought," Hublin said. "It looks like our species was
already present probably all over Africa by 300,000 years ago. If there was a Garden of Eden, it
might have been the size of the continent."
Jebel Irhoud has thrown up puzzles for scientists since fossilised bones were first found at the site
in the 1960s. Remains found in 1961 and 1962, and stone tools recovered with them, were
attributed to Neanderthals and at first considered to be only 40,000 years old. At the time, the
view that modern humans evolved from Neanderthals held sway. But today, the Neanderthals are
considered a sister group that lived alongside, and even bred with, our modern human ancestors.
In fresh excavations at the Jebel Irhoud site, Hublin and others found more remains, including a
partial skull, a jawbone, teeth and limb bones belonging to three adults, a juvenile, and a child
aged about eight years old. The remains, which resemble modern humans more than any other
species, were recovered from the base of an old limestone cave that had its roof smashed in during
mining operations at the site. Alongside the bones, researchers found artefacts such as sharpened
flint tools, a good number of gazelle bones, and lumps of charcoal, perhaps left over from fires that
warmed those who once lived there.
"It's rather a desolate landscape, but on the horizon you have the Atlas mountains with snow on
top and it's very beautiful," said Hublin. "When we found the skull and mandible I was emotional.
They are only fossils, but they have been human beings and very quickly you make a connection
with these people who lived and died here 300,000 years ago."
Scientists have long looked to East Africa as the birthplace of modern humans. Until the latest
findings from Jebel Irhoud, the oldest known remnants of our species were found at Omo Kibish in
Ethiopia and dated to 195,000 years old. Other fossils and genetic evidence all point to an African
origin for modern humans.
In the first of two papers published in Nature on Wednesday, the researchers describe how they
compared the freshly-excavated fossils with those of modern humans, Neanderthals and ancient
human relatives that lived up to 1.8m years ago. Facially, the closest match was with modern
humans. The lower jaw was similar to modern Homo sapiens too, but much larger. The most
striking difference was the shape of the braincase which was more elongated than that of humans
today. It suggests, said Hublin, that the modern brain evolved in Homo sapiens and was not
inherited from a predecessor.
Apart from being more stout and muscular, the adults at Jebel Irhoud looked similar to people
alive today. "The face of the specimen we found is the face of someone you could come across on
the tube in London," Hublin said. In a second paper, the scientists lay out how they dated the stone
tools to between 280,000 and 350,000 years, and a lone tooth to 290,000 years old.
The remains of more individuals may yet be found at the site. But precisely what they were doing
there is unclear. Analysis of the flint tools shows that the stones came not from the local area, but
from a region 50km south of Jebel Irhoud. "Why did they come here? They brought their toolkit
with them and they exhausted it," Hublin said. "The tools they brought with them have been
resharpened, resharpened, and resharpened again. They did not produce new tools on the spot. It
might be that they did not stay that long, or maybe it was an area they would come to do
something specific. We think they were hunting gazelles, there are a lot of gazelle bones, and they
were making a lot of fires."
Hublin concedes that scientists have too few fossils to know whether modern humans had spread
to the four corners of Africa 300,000 years ago. The speculation is based on what the scientists see
as similar features in a 260,000-year-old skull found in Florisbad in South Africa. But he finds the
theory compelling. "The idea is that early Homo sapiens dispersed around the continent and
elements of human modernity appeared in different places, and so different parts of Africa
contributed to the emergence of what we call modern humans today," he said.
John McNabb, an archaeologist at the University of Southampton, said: "One of the big questions
about the emergence of anatomically modern humans has been: did our body plan evolve quickly or
slowly? This find seems to suggest the latter. It seems our faces became modern long before our
skulls took on the shape they have today."
"There are some intriguing possibilities here too. The tools the people at Jebel Irhoud were making
were based on a knapping technique called Levallois, a sophisticated way of shaping stone tools.
The date of 300,000 years ago adds to a growing realisation that Levallois originates a lot earlier
than we thought. Is Jebel Irhoud telling us that this new technology is linked to the emergence of
the hominin line that will lead to modern humans? Does the new find imply there was more than
one hominin lineage in Africa at this time? It really stirs the pot."
Lee Berger, whose team recently discovered the 300,000-year-old Homo naledi, an archaic-
looking human relative, near the Cradle of Humankind World Heritage site outside Johannesburg,
said dating the Jebel Irhoud bones was thrilling, but is unconvinced that modern humans lived all
over Africa so long ago. "They've taken two data points and not drawn a line between them, but a
giant map of Africa," he said.
John Shea, an archaeologist at Stony Brook University in New York who was not involved in the
study, said he was cautious whenever researchers claimed they had found the oldest of anything.
"It's best not to judge by the big splash they make when they are first announced, but rather to wait
and see some years down the line whether the waves from that splash have altered the shoreline,"
he said, adding that stone tools can move around in cave sediments and settle in layers of a
different age.
Shea was also uneasy with the scientists combining fossils from different individuals, and
comparing reconstructions of complete skulls from fragmentary remains. "Such 'chimeras' can
look very different from the individuals on which they are based," he said.
"For me, claiming these remains are Homo sapiens stretches the meaning of that term a bit," Shea
added. "These humans who lived between 50,000-300,000 years ago are a morphologically
diverse bunch. Whenever we find more than a couple of them from the same deposits, such as at
Omo Kibish and Herto in Ethiopia or Skhul and Qafzeh in Israel, their morphology is all over the
place both within and between samples."
But Jessica Thompson, an anthropologist at Emory University in Atlanta, said the new results
show just how incredible the Jebel Irhoud site is. "These fossils are the rarest of the rare because
the human fossil record from this time period in Africa is so poorly represented. They give us a
direct look at what early members of our species looked like, as well as their behaviour.
"You might also look twice at the brow ridges if you saw them on a living person. It might not be a
face you'd see every day, but you would definitely recognise it as human," she said. "It really does
look like in Africa especially, but also globally, our evolution was characterised by numerous
different species all living at the same time and possibly even in the same places."
Vocabulary exercise
1. unearthed (paragraph 2)
Archaeologists unearthed the bones of at least five people at Jebel Irhoud, a former barite mine
100 km west of Marrakesh
2. flabbergasted (paragraph 2)
They knew the remains were old, but were flabbergasted when dating tests revealed that a tooth
and stone tools found with the bones were about 300,000 years old.
3. poses (paragraph 4)
the extreme age of the bones makes them the oldest known specimens of modern humans and
poses a major challenge to the idea that the earliest members of our species evolved in a "Garden
of Eden" in East Africa one hundred thousand years later.
4. held sway (paragraph 5)
At the time, the view that modern humans evolved from Neanderthals held sway. But today, the
Neanderthals are considered a sister group that lived alongside
5. artefacts (paragraph 6)
Alongside the bones, researchers found artefacts such as sharpened flint tools, a good number of
gazelle bones, and lumps of charcoal, perhaps left over from fires that warmed those who once
lived there.
6. remnants (paragraph 8)
Until the latest findings from Jebel Irhoud, the oldest known remnants of our species were found
at Omo Kibish in Ethiopia and dated to 195,000 years old.
7. settle (paragraph 16)
adding that stone tools can move around in cave sediments and settle in layers of a different age.
Write your own sentences with the vocabulary
1. unearthed
2. flabbergasted
3. poses
4. held sway
5. artefacts
6. remnants
7. settle
EXERCISE 22
Sleep and its importance
Summary
This article talks about various aspects of sleep from what happens when we sleep to the
importance it has to our physical and emotional well-being.
"The only known function of sleep is to cure sleepiness," the Harvard sleep scientist Dr J Allan
Hobson once joked. This isn't quite true, but the questions of why we spend about a third of our
lives asleep and what goes on in our head during this time are far from being solved.
One big mystery is why sleep emerged as an evolutionary strategy. It must confer powerful benefits
to make up for the substantial risks which sleeping entails, such as being eaten or missing out on
food while lying dormant. The emerging picture from research is that sleep is not a luxury but
essential to both physical and mental health. But the complex and diverse functions of sleep are
only just starting to be uncovered.
What's going on in our brains while we sleep?
The brain doesn't just switch off. It generates two main types of sleep: slow-wave sleep (deep
sleep), or SWS, and rapid eye movement (dreaming), or REM. About 80% of our sleeping is of the
SWS variety, which is characterised by slow brain waves, relaxed muscles and slow, deep
breathing. There is strong evidence that deep sleep is important for the consolidation of memories,
with recent experiences being transferred to long-term storage. This doesn't happen
indiscriminately, though – a clearout of the less relevant experiences of the preceding day also
appears to take place. A study published last year revealed that the connections between neurons,
known as synapses, shrink during sleep, resulting in the weakest connections being pruned away
and those experiences forgotten.
Dreaming accounts for the other 20% of our sleeping time and the length of dreams can vary from
a few seconds to closer to an hour. Dreams tend to last longer as the night progresses and most are
quickly or immediately forgotten. During REM sleep, the brain is highly active, while the body's
muscles are paralysed and heart rate increases, and breathing can become erratic. Dreaming is
also thought to play some role in learning and memory – after new experiences we tend to dream
more. But it doesn't seem crucial either: doctors found that one 33-year-old man who had little or
no REM sleep due to a shrapnel injury in his brainstem had no significant memory problems.
How much sleep is enough?
Eight hours is often quoted, but the optimum sleeping time varies between people and at different
times of life. In a comprehensive review, in which 18 experts sifted through 320 existing research
articles, the US National Sleep Foundation concluded that the ideal amount to sleep is seven to
nine hours for adults, and eight to 10 hours for teenagers. Younger children require much more,
with newborn babies needing up to 17 hours each day (not always aligned with the parental sleep
cycle).
However, the experts did not consider quality of sleep or how much was SWS v REM. Some people
may survive on less sleep because they sleep well, but below seven hours there was compelling
evidence for negative impacts on health. According to experts, too much sleep is also bad, but few
people appear to be afflicted by this problem. In the UK the average sleep time is 6.8 hours.
What about shift work – does it matter when you sleep?
In the 1930s, an American scientist Nathaniel Kleitman spent 32 days 42m below ground level in
Mammoth Cave, Kentucky. The aim was to investigate the human body clock. Living in complete
isolation, with no external cues of night and day, he adopted a 28-hour day. Despite sticking rigidly
to a schedule of mealtimes, delivered in a bucket down a shaft, and bedtime, Kleitman failed to
adapt and continued to feel awake only when his assigned "daytime" happened to coincide roughly
with daytime in the outside world.
His body temperature also continued to follow a cycle of close to 24 hours. Many shift workers –
particularly those working irregular shifts – face similar problems. In recent years this issue has
been taken more seriously, with professional sports teams taking on consultants about schedules
for training and travel abroad. The US navy has altered its shift system to align it with the 24-hour
clock, rather than the 18-hour day used in the old British system.
Why are we stuck on this 24-hour cycle?
Over millions of years of evolution, life has become deeply synchronised with the day-night cycle
as our planet rotates. So-called circadian rhythms are evident in almost every life-form and are so
firmly imprinted on our biological machinery that they continue even in the absence of any
external input. Plants kept in a dark cupboard at a stable temperature open and close their leaves
as though they can sense the sun without seeing it.
In the 1970s, scientists uncovered a crucial piece of machinery for this internal molecular
timekeeping. In experiments using fruit flies, they found a gene, later given the name "period",
whose activity appeared to reliably rise and fall on a 24-hour cycle. Scientists, two of whom
received Nobel prizes last year, later showed that the period gene worked by releasing a protein
that built up in cells overnight, before being broken down in the daytime.
Later, humans were shown to have the same gene, expressed in a tiny brain area called the
suprachiasmatic nucleus (SCN). This serves as a conduit between the eye's retina and the brain's
pineal gland, which pumps out the sleep hormone melatonin. So when it gets dark, we get sleepy.
So is it just our brain that is affected?
The SCN clock is our body's master timekeeper, but in the past decade, scientists have discovered
clock genes are active in almost every cell type in the body, and the activity of roughly half our
genes appears to be under circadian control.
The activity of blood, liver, kidney and lung cells in a petri dish all rise and fall on a roughly 24-
hour cycle, and virtually everything in our body – from the secretion of hormones to the
preparation of digestive enzymes in the gut, from changes in blood pressure to body temperature –
is influenced in major ways by what time of day these things are normally needed.
Just how the ticking of each neuron is linked to the more complex brainwave rhythms that emerge
in our brain during sleep is not yet clear, but scientists are investigating. When brain cells are
grown in a dish in the lab they begin to self-organise and, somewhat unnervingly, start to show
patterns of activity similar to those seen during sleep.
Did we sleep more soundly in the past?
Poor sleep is often seen as a modern problem, a blight of sedentary lifestyles and being glued to
smartphones late into the night. However, research into the sleep patterns of modern-day hunter
gatherers suggests this may paint an overly romantic view of the past. One study, of the Hadza
people in northern Tanzania, found frequent night-time waking and widely differing sleep
schedules between individuals. Over a three-week period, there were only 18 minutes when all 33
tribe members were asleep simultaneously. The scientists behind the work concluded that fitful
sleep could be an ancient survival mechanism designed to guard against nocturnal threats.
The main difference appeared to be that tribe members were unburdened by paranoia and anxiety
about sleep problems, which are a common cause of concern in western countries.
What happens when you don't get enough sleep?
In extreme cases, sleep deprivation can be fatal. Rats that are completely deprived of sleep die
within two or three weeks. This experiment hasn't been replicated in humans, obviously – but
even a day or two of sleep deprivation can cause otherwise healthy people to suffer hallucinations
and physical symptoms. After a poor night's sleep, cognitive abilities take an immediate hit.
Concentration and memory are noticeably affected and people are more likely to be impulsive and
favour instant gratification over waiting for a better outcome. We are also worse people when we're
tired – one study found that sleep deprived people are more likely to cheat and lie.
What about physical health?
Cumulative lack of sleep can have long-term health consequences, and links are seen with obesity,
diabetes, heart disease and dementia. Last year, a review of 28 existing studies found that
permanent night-shift workers were 29% more likely to develop obesity or become overweight
than rotating shift workers. Findings based on more than 2 million individuals found that working
night shifts raised the risk of a heart attack or stroke by 41%.
The reasons for some of these associations are complex and hard to separate from other lifestyle
factors. The studies mentioned above attempted to filter out socioeconomic factors, for instance,
but factors like stress and social isolation can be harder to capture. That said, there is growing
evidence for a direct biological influence. Sleep deprivation has been shown to alter the body's
basic metabolism and the balance between fat and muscle mass.
Insomnia has long been known as a common symptom of dementia, but some scientists also
believe poor sleep could play a role in causing Alzheimer's. Research has shown that the brain
"cleanses" itself of beta-amyloid proteins linked to Alzheimer's during sleep and that sleep
deprivation causes the levels of these toxins to rise.
Vocabulary exercise
1. confer (paragraph 2)
It must confer powerful benefits to make up for the substantial risks which sleeping entails, such
as being eaten or missing out on food while lying dormant.
2. sifted through (paragraph 5)
In a comprehensive review, in which 18 experts sifted through 320 existing research articles, the
US National Sleep Foundation concluded that the ideal amount to sleep is seven to nine hours for
adults,
3. aligned with (paragraph 5)
Younger children require much more, with newborn babies needing up to 17 hours each day (not
always aligned with the parental sleep cycle).
4. afflicted (paragraph 6)
According to experts, too much sleep is also bad, but few people appear to be afflicted by this
problem. In the UK the average sleep time is 6.8 hours.
5. cues (paragraph 7)
Living in complete isolation, with no external cues of night and day, he adopted a 28-hour day.
6. conduit (paragraph 11)
This serves as a conduit between the eye's retina and the brain's pineal gland, which pumps out
the sleep hormone melatonin.
7. blight (paragraph 15)
Poor sleep is often seen as a modern problem, a blight of sedentary lifestyles and being glued to
smartphones late into the night.
Write your own sentences with the vocabulary
1. confer
2. sifted through
3. aligned with
4. afflicted
5. cues
6. conduit
7. blight
EXERCISE 23
Wealth and happiness: Are the two connected?
Summary
This article explains why being rich doesn't actually make a person happier. It details why this is
the case and what may make people who have less be more content with their life.
Imagine having a six-figure income, owning at least one home and sitting on a spare $1 million in
investable assets. Surely a sign that you've "made it" and are, by global standards, incredibly rich?
Apparently not.
A recent survey of affluent US investors by global financial services firm UBS found 70% of people
meeting these criteria don't consider themselves wealthy. Only those with $5 million or more in
assets thought they have enough set aside to feel secure about their future, while the majority of
the rest feared a single setback could have a major effect on their lifestyle.
So, if millionaires don't consider themselves wealthy, where does that leave the rest of us? If we're
unlikely to "feel" rich, no matter how much we earn, is it really worth aspiring to get there at all?
Getting off the treadmill
Decades of psychological research have already disproved the idea that money can buy long-term
happiness, with one study even suggesting that lottery winners ended up no more satisfied with
their lives after a big win. And The New York Times reported in February about a boom in bespoke
therapy for billionaires suffering personal struggles.
"As people get wealthier, they are more satisfied to start, but at some stage there is no additional
increase in satisfaction," explains Jolanda Jetten, a professor in social psychology at the University
of Queensland in Australia and co-author of The Wealth Paradox.
She says plenty of high earners can't get off the treadmill, even if they're aware that their happiness
or quality of life has flatlined, because they become too defined by their wealth. This, she explains,
is because rich people, just as the less well-off, make upwards comparisons, rating their income,
home, investments or possessions against those of even richer friends and colleagues, rather than
the rest of the population. "The more money you make, the more you also have a need for more
money – it's like an addiction," she says.
It's a pattern that's all too familiar for life and career coaches like Pia Webb, who focuses on
guiding top-tier managers in Europe. Even in her home country, Sweden, a social democracy
famed for work-life balance rather than excess, she says many still fall victim to benchmarking
themselves against those in higher income brackets. "Nobody looks up to you because you work a
lot in Sweden. But there is still a pressure to keep up with others, to show your wealth in other
ways, like going on holidays with your family, having a boat, a summer house," she says.
Webb asks her clients to reflect on the experiences or items they think they personally need to feel
satisfied, rather than striving to keep earning more to match societal or peer-group expectations.
"When it comes to wealth, many people think money is the key. But you don't need much if you
can be happy living in the moment," she argues. Webb, who was much more focused on wealth
before experiencing a burnout 10 years ago, now enjoys simple pleasures such as having a sauna,
taking a walk in the forest or enjoying time with friends and relatives.
Happy peasants and miserable millionaires
Jetten's research suggests people living in poverty are already accustomed to finding ways to boost
their life satisfaction and well-being that transcend what can be obtained through having large
quantities of money and material possessions. They are more likely to spend time with family and
volunteer in the community, for instance. "Well-being is related quite strongly to the extent to
which there is social capital in a country or society and the extent to which people feel connected to
others around them," she explains. "In developing nations, while much smaller amounts of money
can make a huge difference to a person's lifestyle – helping them move beyond very basic needs –
those who don't have much are a lot less frightened of what they've got to lose," she adds.
Carol Graham, professor in the school of public policy at the University of Maryland, has described
the paradox as the "happy peasant and miserable millionaire problem". "Wealthier countries are,
on average, happier than destitute ones, but after that, the story becomes more complicated," she
wrote in a 2010 paper, with her research suggesting people in Afghanistan enjoy a level of
happiness on a par with Latin Americans.
"Freedom and democracy make people happy, but they matter less when these goods are less
common. People can adapt to tremendous adversity and retain their natural cheerfulness, while
they can also have virtually everything… and be miserable."
Of course, this doesn't mean we should in any way conclude that it is better to live closer to the
poverty line (in the UN's latest World Happiness Report, richer countries still dominate the table).
But Graham's research suggests wealthier people may be more adaptable to negative shifts in their
income than they might think. And as Jetten argues, richer people, who often have more idealistic
lifestyles, could have a lot to learn from the "banding together and connecting with others" that is
more common in poorer groups and societies.
Krishna Prasad Timilsina, a mountain tour guide in Nepal, says he noted high levels of tenacity in
the aftermath of the worst earthquake in his country's history in 2015. It cost 8,000 lives and left
thousands more homeless. Yet by making downwards comparisons, many residents were able to
count their blessings. "In the earthquake a lot of things got destroyed but people were still happy
because if they had not lost their family...it could have been a lot worse," says the 36-year-old.
In fact, while Nepal's core industry, tourism, took a battering after the tragedy, the country
climbed eight spots to be ranked 99 out of 155 nations on the World Happiness Index in 2017,
ahead of South Africa, Egypt and even neighbouring India, one of the world's fastest-growing
economies. However, Timilsina doesn't believe his homeland is completely immune to the kind of
upwards comparisons that appear to be stressing out the rest of us. "In the city, more educated
people are more worried about life. My parents have no money but they are more happy than me,"
he laughs.
The future of wealth
As research into income and wellbeing becomes increasingly nuanced – due to the quantity and
type of data that can be analysed – growing numbers of experts are also speculating that
traditional symbols of wealth, such as owning a car or a house, are set to shift, as millennials in
many countries become the first generation to earn relatively less than their parents and struggle
to buy homes in tough property markets. Though frustrating for millennials, "it may mean that this
generation will show fewer of the negative effects of wealth such as selfishness, narcissism and a
high sense of entitlement," says Jetten.
There are even signs that high-earning young professionals who could choose to invest in
stocks or property are instead becoming increasingly focused on making memories rather than
money. In the US, since 1987, the share of consumer spending on live experiences and events
relative to total consumer expenditure has risen by 70%, according to figures from the US
Department of Commerce.
Paris-based American fashion photographer Eileen Cho, 25, for example, grew up in an affluent
neighbourhood in Seattle, but describes the idea of earning cash in order to save or invest it in
property as "like a jail sentence".
Despite being offered financial help from her parents, who wanted to help her buy a home, she's
opted to share a 30-square metre rented apartment with her boyfriend instead. "We pay 950 euro
($1030) a month in rent and still have enough for one international trip a month," she explains.
"For me it's about experiencing things and being happy. Tomorrow I am off to Spain; my next trip
is Marrakech."
It's an approach that coach Pia Webb supports, although she argues that young workers should be
mindful to avoid travel and other experience-based adventures simply becoming the new norm
against which they benchmark their "wealth".
"Travelling is a great way of learning about other cultures, getting to know yourself and finding
your place in the world, but it can also become an addiction. You're getting a hit from experiencing
new things - just like when people go shopping, for example. But this can mean you're not so
rooted, or you miss out on quality time with family," she says.
"My best advice is that you need to work out what's right for you as an individual and learn to be
happy with the really small things in life, wherever you are."
Vocabulary exercise
1. bespoke (paragraph 5)
And The New York Times reported in February about a boom in bespoke therapy for billionaires
suffering personal struggles.
2. famed for (paragraph 8)
Even in her home country, Sweden, a social democracy famed for work-life balance rather than
excess, she says many still fall victim to benchmarking themselves against those in higher income
brackets.
3. striving (paragraph 9)
Webb asks her clients to reflect on the experiences or items they think they personally need to feel
satisfied, rather than striving to keep earning more to match societal or peer-group expectations.
4. transcend (paragraph 10)
Jetten's research suggests people living in poverty are already accustomed to finding ways to boost
their life satisfaction and well-being that transcend what can be obtained through having large
quantities of money and material possessions.
5. on a par with (paragraph 11)
"Wealthier countries are, on average, happier than destitute ones, but after that, the story becomes
more complicated," she wrote in a 2010 paper, with her research suggesting people in Afghanistan
enjoy a level of happiness on a par with Latin Americans.
6. immune to (paragraph 15)
However, Timilsina doesn't believe his homeland is completely immune to the kind of upwards
comparisons that appear to be stressing out the rest of us.
7. benchmark (paragraph 20)
It's an approach that coach Pia Webb supports, although she argues that young workers should be
mindful to avoid travel and other experience-based adventures simply becoming the new norm
against which they benchmark their "wealth".
Write your own sentences with the vocabulary
1. bespoke
2. famed for
3. striving
4. transcend
5. on a par with
6. immune to
7. benchmark
EXERCISE 24
Punishing the parents for their kids' underage
drinking
Summary
In this article a number of experts express their opinion on whether it is right to punish parents
who allow their children and their friends to drink alcohol in their presence. They also give their
opinion on whether it is good or not to allow those under 21 to drink any alcohol at all.
It's graduation party season, which means social host laws that hold parents responsible for
teenage drinking are back in the news. Last week, two Harvard Medical School professors were
arrested because teenagers were found drinking at their daughter's graduation party,
though they said they did not see the alcohol.
How effective are these laws, which can impose fines or jail time for parents? Some parents believe
it is better to have teenagers party at home so that adults can monitor the event and take away the
car keys than have kids drinking elsewhere unsupervised. Is this a bad idea? Is there an alternative
to social host laws? We asked some experts to see what they thought:
Condoning Bad Behavior
William Damon is a professor of education and director of the Center on Adolescence at Stanford
University. His books include "The Moral Child," "The Youth Charter" and, most recently, "The
Path to Purpose."
Parents who sanction teenage drinking parties are making a huge mistake. These parents are
encouraging the very behavior they are attempting to control. Even worse, they are communicating
disrespect for legal authority to young people who are just forming their attitudes about how to
behave in society.
A parent's first message must be that we are all obliged to obey the law. Laws on underage drinking
in this country are clear. A parent certainly has the right to disagree with these laws; and
discussions about such disagreements with children can foster critical thinking and civic
awareness. But the parent's first message to a child must be that we are obliged to obey our
society's laws even when we disagree with them.
At the same time, legal enforcement of social host laws should be used sparingly and as a last
resort. It's heavy-handed, intrusive, and risks undermining relations between parents and
children.
Learn Safer Drinking Habits
Ruth C. Engs is professor emeritus at Indiana University. She has researched university student
drinking patterns for over 25 years. She is currently researching health reformers of the
Progressive Era.
"Social host" laws vary from state to state and on the whole they are largely unenforceable. High
school graduates drinking at graduation parties has been a "rite of passage" among youth in the
United States for decades. It is unlikely this behavior will change as it is ingrained in our culture.
It is better to have young adults consume alcohol within the confines of a home where they can be
monitored and driven home by parents or designated drivers, as opposed to having them go to
unsupervised parties to get drunk. In many cultures outside of the U.S. parents routinely serve
their children alcohol at home. Wine and beer are considered part of the diet.
In my opinion, the age of alcohol consumption across the United States should be lowered to age
18 (not 21 as it is in some states) in controlled environments. These include restaurants, anytime
or anyplace with parents, or in pubs where alcohol is consumed on the premises.
The Myth of How Europeans Drink
David Jernigan is an associate professor in the Department of Health, Behavior and Society at
the Johns Hopkins Bloomberg School of Public Health. He has worked for the World Health
Organization and the World Bank as an expert adviser on alcohol policies.
According to the Surgeon General, there are 5,000 deaths per year in the U.S. among young people
under 21 as a result of alcohol use. No parent wants their child to have an alcohol problem, be
involved in an alcohol-related crash or sexual assault, fall off a balcony during spring break, or
suffer from alcohol poisoning.
Young people who start drinking before age 15 are five times more likely to develop alcohol
problems. Yet parents are strikingly ignorant of what the research literature suggests will be
effective in keeping our children out of trouble with alcohol.
Many parents feel that young people will be safer if we keep them at home and supervise their
drinking, or teach them to drink by having them drink with us. They shore up this conviction with
a mental image of drinking patterns in European countries, where they assume that younger
drinking ages and drinking with parents decreases youth drinking problems. Unfortunately,
research indicates it does not.
Making Hosts Responsible
John Mills is a lecturer of social policy at the University of Birmingham.
Most minors obtain alcohol from adults of legal drinking age. Most underage drinkers typically
drink alcohol in their own or someone else's home. Social host liability laws (which penalize adults
facilitating under-aged drinking if that drinking damages a third party) for minors aim to stem this
access to alcohol and its accompanying drinking and driving. And supporters claim that this appears
to have substantially reduced drunk-driving fatality rates for minors.
Parents who throw parties for their children, however, cite safety reasons as part of their
motivation for hosting parties, preferring their teens and their teens' friends to drink in a
supervised and safe locale. And they too claim that this has reduced drunk-driving fatality rates
for minors.
Whether social host laws have had any effect on drunk-driving is debatable. But the one thing
these have done is make people ruminate on the effects that alcohol can have in society in general,
which in itself is not a bad thing.
Saying 'No' Is Not Enough
David S. Anderson is professor of education and human development at George Mason
University.
Social host laws are needed to communicate clearly that underage drinking is not acceptable.
While a parent may have the intention of limiting a teenager's (and his or her friends') exposure to
drunk driving by hosting a party, exposing teenagers to alcohol even in that setting can result in
harm, like alcohol poisoning, sexual abuse, violence, drunk driving and more.
Underage drinking, though decreasing in recent years, is still extensive, as over 25 percent of high
school seniors nationwide report drinking 5 or more drinks in a row at least once in the previous
two weeks.
While social host laws and other regulations make a difference, I believe we must also have a
comprehensive approach that emphasizes prevention, personal responsibility, skill-building, and
early intervention in addition to laws and policies. We also need to find out why adolescents drink
and then address the underlying reasons for their decisions about alcohol use or non-use.
Permit Drinking With Adults
David J. Hanson is a professor emeritus of sociology at the State University of New York,
Potsdam.
Parents can prohibit drinking in their home and unintentionally drive their high schoolers to drink
unsupervised in the woods, fields, older friends' apartments, and who-knows-where-else. The
results are sometimes driving while intoxicated and tragic alcohol-related crashes.
Or parents can host gatherings in which they supervise and control the behavior of the young
people who attend (as long as the parents of those attending give permission) to protect their
safety and well-being. Some states already permit parents to serve alcoholic beverages to their own
offspring under their direct supervision. Every state should do this. Federally-funded research has
shown that drinking with parents can reduce overall alcohol consumption and alcohol-related
problems.
It's Not the Drinking, It's the Driving
Marsha Rosenbaum is a medical sociologist and the founder of the Safety First project at the
Drug Policy Alliance.
Recently, a couple (both on the faculty of Harvard Medical School) were arrested under a "social
host" law in New Hampshire because teenagers were caught consuming alcohol at their daughter's
graduation party.
Such social host laws were created in a well-meaning effort to prevent teenage drinking by making
parents vulnerable to prosecution. But are they effective?
Most would agree that teenagers would be better off if they abstained. But annual surveys
consistently show that nearly 80 percent of high school students have consumed alcohol by the
time they graduate. And if they can't drink at home, they'll drink on the street, in the park, on the
beach. And they'll get there by car.
Shift Social Norms
James F. Mosher is a leading scholar in the field of alcohol policy and the law. He has provided
expert consultation to community groups, policy makers, and law enforcement on social host
laws.
Social host laws (sometimes referred to as house party laws) hold individuals responsible for
underage drinking on property they own, lease or control. They recognize that the problems
associated with underage drinking parties (a high-risk setting for binge drinking, drunken driving,
sexual assault, and other forms of violence) are community problems that require a multifaceted
public health approach.
Research is absolutely clear on this point: restricting the availability of alcohol to teens saves
young lives. So does increasing alcohol taxes. Educational programs are important, but on their
own have little or no effect on teen drinking, in part because of the massive advertising and
marketing budget of the alcohol industry undermining the pro-health educational messages.
Many parents and other adults don't realize how easily teen parties can get out of control. Even
with the best intentions, adults who allow teens to party on their property are not only setting a
poor example but are also endangering the safety of those attending as well as neighbors and
others in the community.
Vocabulary exercise
1. sanction (paragraph 4)
Parents who sanction teenage drinking parties are making a huge mistake. These parents are
encouraging the very behavior they are attempting to control.
2. is ingrained in (paragraph 8)
High school graduates drinking at graduation parties has been a "rite of passage" among youth in
the United States for decades. It is unlikely this behavior will change as it is ingrained in our
culture.
3. penalize (paragraph 15)
Social host liability laws (which penalize adults facilitating under-aged drinking if that drinking
damages a third party) for minors aim to stem this access to alcohol and its accompanying
drinking and driving.
4. stem (paragraph 15)
Social host liability laws (which penalize adults facilitating under-aged drinking if that drinking
damages a third party) for minors aim to stem this access to alcohol and its accompanying
drinking and driving.
5. ruminate on (paragraph 17)
Whether social host laws have had any effect on drunk-driving is debatable. But the one thing
these have done is make people ruminate on the effects that alcohol can have in society in
general, which in itself is not a bad thing.
6. offspring (paragraph 24)
Some states already permit parents to serve alcoholic beverages to their own offspring under
their direct supervision. Every state should do this.
7. multifaceted (paragraph 30)
They recognize that the problems associated with underage drinking parties (a high-risk setting for
binge drinking, drunken driving, sexual assault, and other forms of violence) are community
problems that require a multifaceted public health approach.
Write your own sentences with the vocabulary
1. sanction
2. is ingrained in
3. penalize
4. stem
5. ruminate on
6. offspring
7. multifaceted
EXERCISE 25
Reintroducing wolves and other lost species
back into the wild in Britain
Summary
This is an article on rewilding parts of the British countryside (reintroducing animal species
which have completely disappeared from the island back into the wild). It discusses some of the plans to do
so, the benefits which this can bring to the environment and the reservations that some have
concerning the reintroduction of certain species (e.g. wolves). It also talks about the way that the
reintroductions should be done.
A pair of highland ponies nibble grass as two kestrels swoop across the path. Up a rock face across
this windswept valley deep in the Scottish highlands, a golden eagle is hunting for prey, its
movements tracked by a GPS tag. Nearby are Scottish wildcats among the bracken – Europe's
rarest cat, with fewer than 400 left – plus red squirrels, black grouse, the occasional pine marten,
shaggy highland cattle adapted to the harsh environment here, and, like much of the highlands,
plenty of deer. Wild boar and moose roamed this corner of Sutherland until recently.
But if Paul Lister, the estate's multimillionaire owner and the heir to the MFI company fortune,
gets his way, two species not seen on this land for centuries could soon be added to the list: wolves
and bears. Alladale estate, which Lister prefers to call a "wilderness reserve", is one of the most
ambitious examples of so-called "rewilding", the banner under which a growing number of people
are calling for the reintroduction of locally extinct species to landscapes. Bringing back species
such as wolves, beavers and lynx, rewilding advocates say, can increase the diversity of other flora
and fauna, enable woodlands to expand and help reconnect people with nature.
The unofficial figurehead for this movement, the outlines of which will become clearer with the
formation of a new charity early next year called Rewilding Britain, is author George Monbiot. His
book Feral, published in 2013, has been reprinted over 30 times in hardback and has led to a
national debate over the merits of restoring the country to a wilder state. "For me, it's part of a
wider effort to develop a positive environmentalism, which we desperately need," says Monbiot.
"It's about creating a vision for a better world that is much more appealing than just laying out
what is wrong with the current one, of having a rather more inspiring one than saying, ‘Do as we
say and the world will be a bit less crap than it could be'."
While rewilding efforts on continental Europe have seen substantial progress – Eurasian beavers
are now found in 25 countries, European bison have returned across eastern Europe including one
of the biggest reintroductions in Romania this May, and wolves have spread across much of
Europe including Germany, France and last year one was even found in the Netherlands – in the
UK there has been more talk than action. It is a charge that even Monbiot admits is not unfair, but
he argues: "Talk precedes action."
One area where rewilding efforts in Britain have made some modest progress, albeit at very local
levels, is in native tree-planting. In a Cumbrian valley, the Wild Ennerdale project has seen
conifers for forestry replaced with native broadleaf species whose populations have dwindled.
Knepp Castle estate, in West Sussex, has been planting relatively rare native black poplars as part
of its rewilding efforts. In just over two decades, Trees for Life in Scotland has planted 1.2 million
trees, mostly Scots pine, and plans to reach its second million in the next five years while
diversifying into other species including aspen.
Trees could be helped further by returning wolves and other top predators to Britain, Monbiot
says, because of the knock-on effects of such "keystone" species. One of the most famous case
studies is the return of wolves to Yellowstone National Park in the 90s, which has been credited
with moving deer around, meaning less damage to new trees, allowing them and other vegetation
to grow and stabilising the soil along river banks.
In Scotland, deer still pose a threat to the 600,000-odd trees that Lister has planted in the glens at
his estate and the hundreds of thousands more planned, even though the management has already
culled deer numbers by 50% over a decade, to around 600. Wolves would not only reduce those
numbers further – they specialise in killing deer – but would be a tourist attraction too. "We've
managed to put a man on the moon, I don't see why we can't get wolves back in Scotland," says
Lister. Bears would also learn to specialise in killing deer, he believes, and would be an even more
dramatic pull for visitors than wolves.
But Lister's plan does not extend to allowing these carnivores completely off the leash. "I'm not an
advocate of reintroduction, I'm not a supporter of letting these big animals out in the freedom of
the countryside, because we've sanitised our landscape so much I don't think there's enough
tolerance of these animals for us to be coached through the whole process." Instead, Lister wants
to fence in land at Alladale and on neighbouring estates to release two packs of around five wolves
each, plus bears, which he says would be a huge draw for day visitors to the estate, generating jobs
for locals through increased demand for B&Bs, work on the fence and ecology roles.
But the idea of fencing-in such a large tract of land has raised concern from hikers, who have a
legal right to roam across the estate. "Our view is that it's not a reintroduction that he's trying to
do, he's trying to create a giant zoo," says Dave Morris, director of Ramblers Scotland. "We've
always resisted this, saying it would be inappropriate to fence in such a huge area of land, and it
would have big landscape impacts, as you'd have to have a road all around it."
Privately, some rewilding advocates express concern that Lister's uncompromising style could set
back support for rewilding. Some people living near Alladale are not convinced yet either. One
householder, who did not wish to be named, said: "Is he still on about that nonsense? What if the
wolves break out? We worry for our son [who has sheep]. We had a meeting about it. It was
pointed out to him [Lister] that if it was covered by the snow, the wolves would get over the fence.
We might get a wolf on our doorstep."
Finlay Collouch, a neighbour who said he supported the estate's tree-planting and outreach
education with local children, said of the wolves plan: "It doesn't put me up nor down if they do it,
as long as they keep them there. But I don't see how they're going to keep them there when the
snow drifts go over fences."
Alladale's man on the ground, Innes MacNeill, the reserve manager, says he cannot see how it
could happen without a fence, because farmers would shoot wolves if they were reintroduced
straight into the wild. "The fence is probably one of the things we need to overcome. Ultimately the
general public have to want this, they have to want something different, something that would
hopefully be really special."
The return of the wolf, however, could be eased by the reintroduction of a far less controversial
species. Jamie Wyver is a masters student at Imperial College London looking at public attitudes
towards the reintroduction of the lynx in the Scottish highlands and Forest of Dean. "The
interesting thing about the lynx is it's almost like we've forgotten about it. It doesn't feature in
nursery tales. It just gets missed off. It might be because they've been gone for a longer time than
wolves and bears but it's probably because they're not a threat to humans. There are no records
anywhere in Europe of anyone ever being attacked by a lynx," he says.
Wyver says most people he has spoken to know so little about the Eurasian lynx, which is still
present in much of northern and eastern Europe and some southern European countries, that they
often first think he is enquiring about the deodorant rather than the carnivore. Lynx could be back
in the UK as soon as 2025, thinks Alan Watson Featherstone, the founder of Trees for Life. "The
big picture is there are far too many deer in Scotland for the habitat. The next crucial step is to get
a predator back, because that ecological level of top predators is missing. The wolf is not the one to
begin with, because it comes with tremendous prejudice: the Three Little Pigs, Red Riding Hood; it
gets the works thrown at it.
"We're promoting the lynx as a more feasible candidate for reintroduction, it's a solitary animal, an
ambush hunter, it's quite secretive," says Featherstone, who believes that restoring enough habitat
in the shape of native woodland is crucial to help such species come back. The lynx, he argues,
would give people the experience of living again with a carnivore, and make a wolf reintroduction
many years later more realistic.
Hundreds of miles south, in a forest on the west coast of Scotland, one species is already getting its
teeth back into the UK landscape four centuries after being hunted to extinction for its fur. Four
families of European beavers (Castor fiber) have spent the last five years in an official captive trial
where they have successfully produced young (known as kits), built lodges and dams, in one case
causing a freshwater loch to grow up to five times in size as a result. "In some respects, it's no great
surprise – beavers do what we expected beavers to do," said Simon Jones, head of major projects
for Scottish Wildlife Trust, who oversaw the Scottish beaver trial at Knapdale, in Argyll and Bute.
"But the whole point is that it's not just about species reintroduction, it's about what beavers do.
Beavers create good habitat for other species – where you get beavers, you get good biodiversity.
That's not necessarily what our trial was about, but the wider drive in the wild for considering
them is that the science shows amphibians, otters, waterfowl do well [as a result], because beavers
are this keystone species that creates habitat that other species can use."
An unlicensed population of around 150 beavers has also established itself on the river Tay, near
Dundee. The Scottish government initially planned to trap them, but later decided against it. Next
year, Holyrood is expected to make a decision on what to do about both sets of beavers. Knapdale
also serves as an example that reintroductions rarely happen overnight. It took 11 years to become
reality, after the trial was first floated in 1998. Campaigners have been lobbying for a similar
amount of time to return the herbivores to England and Wales, but plans to bring them back in the
wild in Ceredigion in Wales this year have not yet come to fruition. In England, slow progress
appears to have prompted individuals to take matters into their own hands.
This February, Tom Buckley, a retired environmental scientist, photographed beavers on the river
Otter in Devon, the first in the wild in England for centuries. Local people attending a public
meeting this August at Ottery St Mary, a village along the river, say that the beavers have been out
in the area for several years longer, a secret known to some but until recently not broadcast more
widely, though it remains a mystery where they came from.
The Department for the Environment, Food and Rural Affairs said this summer that it would trap
the beavers, in part to test for a disease not currently in the UK (alveolar echinococcosis), but
officials will not say whether the family, which expanded with the addition of three kits in July, will
be allowed to return or will be rehoused elsewhere at a zoo or other site, even if they test all-clear
for the disease. People living near the river Otter are overwhelmingly behind the beavers being
returned, a poll by a major newspaper suggests. Local resident Pam Baker-Clare said: "Everyone
seemed very proud of the beavers. But if the government gets mixed up in this, they will
disappear."
Some visitors to the meeting, organised by the Devon Wildlife Trust, which is looking to submit a
bid for a licence for the beavers to return, sounded more wary. "I'm a bit cautious about the future.
I appreciate that reintroducing beavers means we don't have any predators other than man,
because the wolf has disappeared – so obviously the population increase [of beavers] and what
happens in 100 years' time has to be answered," said John Killingbeck, who lives nearby. "Our
landscape has changed since we had beavers. We are much more densely populated, we are trying
to farm, there are effects on rivers, on catchment zones, on fisheries."
About an hour away near Okehampton in north Devon, a three-hectare fenced enclosure
demonstrates dramatically why beavers are referred to as a keystone species. Hundreds of fallen
willow and birch trunks criss-cross the captive trial site, with distinctive pencil-shaped stubs
remaining amid a network of canals, paths, small dams and 10 ponds that a pair of beavers
introduced in 2011 have built, along with an increasingly elaborate lodge where they sleep during
the day before emerging at night to work.
"The impact they've had has been phenomenal, they've blown us away, they've done what we
hoped for and more. We've been surprised at how effective they've been," says Mark Elliott of the
Devon Wildlife Trust, which runs the project.
There was no static water here before, and just 10 clumps of frogspawn were counted in 2010. This
year, 370 clumps were spotted. Around the ponds, butterflies dance and dragonflies hover. Drawn
by the invertebrates that have appeared as the forest cover has thinned out, birds have arrived,
including herons feasting on the frogs, spotted flycatchers, snipe and woodcock. Vegetation has
sprouted up in the gaps created by the felled trees, including orchids, pond weeds and purple moor
grass, a "really good sign" of the habitat's health, Elliott says.
The University of Exeter is now measuring the height of water levels and collecting water samples
to see whether, as expected, the habitat the beavers create filters and cleans the water, removing
phosphates and other pollutants. The project could also generate data that proves beavers can
reduce flood risk – during this winter's floods there were calls by the Mammal Society to
reintroduce them for just that purpose.
"If we can provide evidence that beavers in the top of the catchments reduce floods downstream,
that's gold dust really," Elliott said. "If you can reduce the flood risk downstream by 10%, that
could in many cases be the difference between flooding and not flooding. It can mean the size of
your flood defences can be lower. It means the cost of that sort of work can be reduced. Potentially
it's of huge financial benefit to society."
Yet both the farming and angling lobbies in the UK are opposed to beavers returning to the wild.
The National Farmers' Union's countryside adviser, Claire Robinson, said: "We believe efforts, and
finances, would be better focused on retaining current biodiversity." If beavers were allowed out in
the wild, there would "rightly be concerns about them causing damage to the environment,
including farmland," she said.
Mark Owen, head of freshwater at the Angling Trust, said the landscape had changed so much
since beavers were last in Britain that it would be inappropriate to bring them back. "In the last
500-odd years, we've heavily straightened our rivers, we've caused pollution, so when beavers were
in this country, the river system would've looked completely different. Rather than a top-down
approach of introducing a water engineer like a beaver, we'd rather rivers were improved to a point
where we could look at reintroducing beavers." Owen cited a list of concerns, including half-
gnawed trees posing a threat to fishermen and the potential dangers posed when beaver dams
break.
Even among the most enthusiastic rewilding supporters, however, few believe that reintroduced
species should be allowed to run truly wild. None, even Monbiot, are arguing for a blanket, mass
return of farmland to nature. But advocates hope that even on this crowded island, there is still
room for more wildlife, and that people could learn to live alongside it.
Elliott, walking alongside a beaver canal, says: "If we do get beavers back [in the wild], we have to
accept we will have to manage conflicts, like they do in Europe. There's no point in reintroducing
an animal and not managing conflict."
Vocabulary exercise
1. heir (paragraph 2)
But if Paul Lister, the estate's multimillionaire owner and the heir to the MFI company fortune,
gets his way, two species not seen on this land for centuries could soon be added to the list: wolves
and bears.
2. figurehead (paragraph 3)
The unofficial figurehead for this movement, the outlines of which will become clearer with the
formation of a new charity early next year called Rewilding Britain, is author George Monbiot.
3. draw (paragraph 8)
Lister wants to fence in land at Alladale and on neighbouring estates to release two packs of
around five wolves each, plus bears, which he says would be a huge draw for day visitors to the
estate
4. tract (paragraph 9)
But the idea of fencing-in such a large tract of land has raised concern from hikers, who have a
legal right to roam across the estate. "Our view is that it's not a reintroduction that he's trying to
do, he's trying to create a giant zoo,"
5. come to fruition (paragraph 17)
Campaigners have been lobbying for a similar amount of time to return the herbivores to England
and Wales, but plans to bring them back in the wild in Ceredigion in Wales this year have not yet
come to fruition.
6. behind (paragraph 19)
People living near the river Otter are overwhelmingly behind the beavers being returned, a poll by
a major newspaper suggests. Local resident Pam Baker-Clare said: "Everyone seemed very proud
of the beavers. But if the government gets mixed up in this, they will disappear."
7. blown us away (paragraph 22)
The impact they've had has been phenomenal, they've blown us away, they've done what we
hoped for and more. We've been surprised at how effective they've been
Write your own sentences with the vocabulary
1. heir
2. figurehead
3. draw
4. tract
5. come to fruition
6. behind
7. blown us away
EXERCISE 26
Everest and death: Why people are still willing
to climb the mountain of the dead
Summary
This article primarily talks about what motivates people to keep climbing Mount Everest even
though many climbers die trying to do so every year. In addition to this it explains why the bodies
of many of those who died in their attempt to reach the summit of the mountain still remain on the
mountain.
No one knows exactly how many bodies remain on Mount Everest today, but there are certainly
more than 200. The bodies of climbers and Sherpas almost litter the upper reaches of the
mountain. They lie tucked into crevasses, buried under avalanche snow and exposed on catchment
basin slopes – their limbs sun-bleached and distorted. Many are concealed from view, but some
are familiar fixtures on the route to Everest's summit. Perhaps most well-known of all are the
remains of Tsewang Paljor, a young Indian climber who lost his life in the infamous 1996 blizzard.
For nearly 20 years, Paljor's body – popularly known as Green Boots, for the neon footwear he was
wearing when he died – has rested near the summit of Everest's north side. When snow cover is
light, climbers have had to step over Paljor's extended legs on their way to and from the peak.
Mountaineers largely view such matters as tragic but unavoidable. For the rest of us, however, the
idea that a corpse could remain in plain sight for nearly 20 years can seem mind-boggling. Will
bodies like Paljor's remain in their place forever, or can something be done? And will we ever
decide that Mount Everest is simply not worth it?
Before answering those questions, however, it is worth asking something more fundamental: when
death is all around, why do people gamble their lives on Everest at all?
Reaching the highest point on Earth once served as a symbol of "man's desire to conquer the
Universe," as British mountaineer George Mallory put it. When a reporter once asked him why he
wished to climb Everest's 8,848m (29,029ft)-high peak, Mallory snapped, "Because it's there!"
Everest, however, is no longer the romantic, unconquered place it once was. Since Tenzing Norgay
and Edmund Hillary became the first men to stand on its summit in 1953, the mountain has been
summited more than 7,000 times by more than 4,000 people, who have left a trail of garbage,
human waste and bodies in their wake. "Climbing Everest looks like a big joke today," says Captain
MS Kohli, a mountaineer who in 1965 led India's first successful expedition to summit Mount
Everest. "It absolutely does not resemble the old days when there were adventures, challenges and
exploration. It's just physically going up with the help of others."
For Sherpas and others hired to work on Everest, the reason they keep coming back is that it's a
high-paying job. For everyone else, however, motivations are often difficult to explain, even to
oneself. Professional climbers often insist that their drive differs from that of the majority of
clients who pay to climb Everest, a group that is frequently accused of the lowliest of motivations:
bagging the world's highest mountain for bragging rights. "Somebody once said that climbing
Everest is a challenge, but the bigger challenge would be to climb it and not tell anybody," says Billi
Bierling, a Kathmandu-based journalist and climber and personal assistant for Elizabeth Hawley, a
former journalist, now 91, who has been chronicling Himalayan expeditions since the 1960s.
But few would actually admit that they climb Everest only so they can boast about it later. Instead,
Everest tends to assume a symbolic importance for those who set their sights on it, who often
articulate the reason in terms of transformation, triumph over personal obstacles or the crown
jewel in a bucket list of lifelong goals. "Everyone has a different motivation," Bierling says.
"Someone wants to spread the ashes of their dead husband, another does it for their mother,
others want to kill a personal demon. In some cases, it's just ego. In fact you have to have a certain
amount of ego to get up the damn thing."
As for professional climbers, whose love of mountaineering extends well beyond Everest,
psychologists have tried to tease their motivations out for decades. Some concluded that high-risk
athletes – mountaineers included – are sensation-seekers who thrive off thrill. Yet think for a
moment about what climbing a mountain like Everest entails – weeks spent at various camps,
allowing the body to adapt to altitude; inching up the mountain, step by step; using sheer
willpower to push through unrelenting discomfort and exhaustion – and this explanation makes
less sense. High altitude climbing, in fact, is a slog. As Matthew Barlow, a postdoctoral researcher
in sports psychology at Bangor University, Wales, puts it: "Climbing something like Everest is
boring, toilsome and about as far from an adrenaline rush as you can get."
A climber himself, Barlow suspected that sensation-seeking theory has long been misapplied to
mountaineers. His research suggests that, compared to other athletes, mountaineers tend to
possess an exaggerated "expectancy of agency". In other words, they crave a feeling of control over
their lives. Because the complexities of modern life defy such control, they are forced to seek it
elsewhere. As Barlow explains: "To demonstrate that I have influence over my life, I might go into
an environment that is incredibly difficult to control like the high mountains."
Flirting with mortality, in other words, is part of the appeal. "If you can escape death or dodge fatal
accidents, it allows you the illusion of heroism, even though I don't think it's truly heroic," says
David Roberts, a mountaineer, journalist and author based in Massachusetts. "It's not like playing
poker where the worst that could happen is you lose some money. The stakes are ultimate ones."
Barlow and colleagues also found that mountaineers believe that they struggle emotionally,
especially when it comes to loving partner relationships. They may compensate for this by becoming
experts at dealing with emotions in another, more straightforwardly terrifying realm. "The
emotional anxiety of everyday life is confusing, ambiguous and diffuse, and you don't know the
source of it," Barlow says. "In the mountains, the emotion is fear, and the source is clear: if I fall, I
die." In her decades interviewing mountaineers, Hawley, too, has noticed this tendency. "In some
cases, climbers just want to get away from home and responsibilities," she says. "Let the mother
take care of the son that's sick, or deal with little Johnny who got in trouble at school."
Many of the climbers Barlow and his colleagues included in their study – especially professional
ones – also exhibited what psychologists refer to as counterphobia. Rather than avoid the things
they fear, they feel compelled to face off with those elements. "It's a misnomer that climbers are
fearless," Barlow says. "Instead, as a climber, I know I will be afraid, but the key bit is that I
approach that fear and try to overcome it."
Like a junkie who's got his fix, mountaineers usually report a transfer effect from their experience –
a feeling of satiation immediately after returning from a peak. "For me, coming back from a
climb physically exhausted but mentally relaxed is the dream," says Mark Jenkins, a journalist,
author and adventurer in Wyoming. To continue to sate that desire, mountaineers thus set their
sights on increasingly challenging peaks, routes or circumstances, and as the world's highest
mountain, Everest has a natural place in that progression. "You have to up the ante, which over
time leads to greater and greater risk taking," Barlow says. "If the transfer effect is never enough
for you to stop, then ultimately you likely die."
Given all this, climbers must decide for themselves if their passion is worth potentially losing their
lives and abandoning their loved ones for. "Of my own volition, I accept the risk and suffering,
and that there is no external benefit to society," says Conrad Anker, a mountaineer, author and
leader of the North Face climbing team. "But as long as one is clear and transparent with your
family and wife, then I don't think it's morally incorrect."
Some, however, do get their fill. Seaborn Beck Weathers, a pathologist in Dallas who lost his nose
and parts of his hands and feet – and very nearly his life – on Everest in 1996, was originally
attracted to climbing precisely because of a paralysing fear of heights. As he described in his book,
Left for Dead, facing off in the mountains with that fear proved to be an effective (albeit
temporary) antidote for his severe depression. Everest was his last mountaineering experience,
though, and that close call with death saved his marriage by causing him to realise what was truly
important in life. Because of that, he does not regret it. But at the same time, he would not
recommend anyone to climb Everest.
"My view has changed on this fairly dramatically," he says. "If you don't have anyone who cares
about you or is dependent on you, if you have no friends or colleagues, and if you're willing to put a
single round in the chamber of a revolver and put it in your mouth and pull the trigger, then yeah,
it's a pretty good idea to climb Everest."
War zones aside, the high mountains are the only places on Earth where it is expected and even
normal to encounter exposed human remains. And of all the mountains where climbers have lost
their lives, Everest likely carries the highest risk of coming across bodies simply because there are
so many. "You'll be walking along, it's a beautiful day, and all of a sudden there's someone there,"
says mountaineer Ed Viesturs. "It's like, wow – it's a wakeup call."
At times, the encounter is personal. Ang Dorjee Chhuldim Sherpa, a mountaineering guide at
Adventure Consultants who has summited Everest 17 times, was good friends with Scott Fischer, a
mountain guide who died in the 1996 disaster on Everest's south side. After his death, Fischer's
body remained in sight. "When you're passing by and you see your friend lying there, you know
exactly who it is," he says. "I try not to look, but my eyes always get drawn to it." He adds "I think
we are somehow able to walk by these bodies and continue climbing by rationalising to ourselves
that whatever happened to this person will not happen to me."
Some, however, are not able to continue climbing. In 2010, Geert van Hurck, an amateur climber
from Belgium, was making his way up Everest's north side when he came across a "coloured mass"
on the ground. Realising it was a climber, Van Hurck quickly approached, eager to offer any help
he could. That was when he saw the bag. Someone had placed a plastic bag over the man's face to
prevent birds from pecking out his eyes. "It just didn't feel right to climb any further and celebrate
at the summit," Van Hurck says. "I think maybe I was seeing myself lying there." He would almost
certainly have summited, but returned to camp, shaken and upset.
His decision to turn back, however, is rare. Hundreds of climbers have passed corpses en-route to
their summit, often without knowing who they are. Indeed, almost from the moment Paljor died,
uncertainty has surrounded his remains. Some even doubt that the body belongs to Paljor at all,
thinking it more likely to be his climbing partner, Dorje Morup. But for whatever reason, Paljor's
identity has largely stuck, even if most climbers today know the remains only as Green Boots, and
the place where he rests as Green Boots' Cave.
That enclave, located at about 8,500m high and sheltered from the wind, is a popular resting point
for climbers on their way back from the summit, who may sit down there to catch their breath or
have a snack. "It's pretty grisly that they named that cave after him," says amateur mountaineer
Bill Burke, the only person to have climbed the highest mountain on every continent after age 60.
"It's really become a landmark on the north side."
In 2006, the cave and Green Boots earned even more infamous renown when a British climber
named David Sharp was discovered huddled inside, on the brink of death. The story was widely
circulated by the media, which claimed that some 40 climbers passed Sharp by, who died later that
day, without offering aid. As is so often the case, however, much of the story's nuance was lost in
those reports; in fact, most climbers did not notice Sharp, or assumed that he was simply resting.
Others accused of ignoring his plight were not informed until it was much too late to help. Sharp's
body was removed from sight a year later at the request of his parents, but Paljor, whose moniker
was further solidified by the incident, remained.
What to do with bodies on the mountain depends on a number of factors, including the wishes of
the deceased and his or her families, and where the death took place. Some make arrangements for
their body to be returned to their family, if possible. Burke did not discuss those details with his
wife, but he did ensure that his body would be delivered to her, should the worst happen. "It's not
something you dwell on," he says. "I knew I needed repatriation insurance so I got it, but I didn't
give it a lot of thought."
Returning a body to a family costs thousands of dollars, however, and requires the efforts of six to
eight Sherpas – potentially putting those men's lives in danger. "Even picking up a candy wrapper
high up on the mountain is a lot of effort, because it's totally frozen and you have to dig around it,"
says Ang Tshering Sherpa, chairman and founder of Asian Trekking, a company based in
Kathmandu, and president of the Nepal Mountaineering Association. "A dead body that normally
weighs 80kg might weigh 150kg when frozen and dug out with the surrounding ice attached."
Typically, though, mountaineers who die on a mountain wish to remain there, a tradition co-opted
from seafarers more than a century ago. "But when we have 500 people stepping over a body every
year, that's no longer acceptable," says Jenkins, who had to navigate four bodies when he was last
on Everest. "That's disgraceful."
Funeral rites
To avoid this, the remains are usually "committed" to the mountain – that is, they are respectfully
pushed into a crevasse or off a steep slope, out of sight. When possible, they might also be covered
with rocks, forming a burial mound. But Dave Hahn, a mountain guide at RMI Expeditions who
has reached Everest's summit 15 times, emphasises "the time to move a body is when the accident
happens." Afterwards, "not to get grotesque, but they become attached to the hill."
But even for a fresh body, those respectful acts can take hours and require the effort of several fit
climbers. The question remains of whose responsibility that task should fall to, especially as more
bodies have built up over the years, and glacial melting due to climate change has caused others to
appear.
Some have stepped up. Since 2008, Dawa Steven Sherpa, managing director of Asian Trekking and
Ang Tshering's son, and his colleagues have led yearly clean-up efforts on the mountain, removing
more than 15,000kg of old garbage and more than 800kg of human waste. As such, whenever a
body or body parts emerge from the melting, ever-dynamic Khumbu glacier, his team is seen as the
de facto removal crew. So far, they have respectfully disposed of several bodies: four Sherpas – one
of whom they knew – and one Australian climber who had disappeared in 1975. "If at all possible,
human remains should get a burial," Dawa Steven says. "That's not always possible if a body is
frozen into the slope at 8,000m, but we can at least cover it and give it some dignity so people
don't take pictures."
Return to the mountain
Amid all the death, the pollution, the overcrowding and the increasingly questionable merit of
reaching the summit, will people ever decide the mountain simply is not worth it anymore?
Not likely, if the past is anything to go on.
Just as the 1996 tragedy did nothing to quell people's interest in Everest, the back-to-back horrors
of the past two years seem to have had little effect. After the 2014 avalanche, many Sherpas vowed
not to return to Everest until working conditions – including life insurance policies – were
improved. For most, either out of economic necessity or choice, the sentiment to stay away from
the mountain seems to have been short lived.
Ang Dorjee, for example, opted out of the 2015 season after losing three lifelong friends in the
avalanche, but he now plans to return. "I was a bit scared, so I skipped that season," he says. "But
time passes, and I've been doing this all my life."
"Nobody's ok with what happened," adds Dawa Steven. "The last few years have been very
traumatising for a lot of the Sherpas." But of the 63 Sherpas he has on payroll, none have tendered
their resignation. "No one has said 'I don't want to climb anymore,' although some have gotten
pressure from their wives and parents to stop," he says. The same dynamic is playing out among
Western guiding companies and leaders. Hahn has always defended Everest, but is now
considering a break from the mountain. "I used to see the media stories that came out and they'd
be only about death and destruction, and I'd say, 'Well, my mountain is not about death,'" he says.
"But the last two years have brought such a huge loss of life that it's become hard for me to
continue to make that argument."
Yet Everest has a way of drawing people back in. Seven years ago, Mountain Madness from Seattle
suspended its guided climbs on Everest for an indefinite period of time, citing overcrowding and a
surplus of inexperienced mountaineers. "We were trying to decide if we wanted to take a stance
and say, 'Hey, look, we just don't support what's happening on Everest,'" says Mark Gunlogson, the
company's president. Next year, however, Mountain Madness plans to return. "It's more due to
client demand as opposed to us trying to get back into the game," Gunlogson says.
"Everest hasn't lost its mystique for me, or for many others who go back year after year," Burke
says. "Even having been there six times, I love climbing that mountain. I love going there. I'm
almost addicted to it."
For years to come – perhaps forever – Everest will no doubt continue to do what it has for decades:
capture the imagination, provide the backdrop for dreams and personal triumphs, and take a few
lives in the process. Green Boots may at last be at rest, but there is no guarantee that his cave will
remain empty for long.
Vocabulary exercise
1. litter (paragraph 1)
No one knows exactly how many bodies remain on Mount Everest today, but there are certainly
more than 200. The bodies of climbers and Sherpas almost litter the upper reaches of the
mountain.
2. chronicling (paragraph 6)
says Billi Bierling, a Kathmandu-based journalist and climber and personal assistant for Elizabeth
Hawley, a former journalist, now 91, who has been chronicling Himalayan expeditions since the
1960s.
3. crave (paragraph 9)
In other words, they crave a feeling of control over their lives. Because the complexities of modern
life defy such control, they are forced to seek it elsewhere.
4. albeit (paragraph 15)
As he described in his book, Left for Dead, facing off in the mountains with that fear proved to be
an effective (albeit temporary) antidote for his severe depression.
5. quell (paragraph 31)
Just as the 1996 tragedy did nothing to quell people's interest in Everest, the back-to-back horrors
of the past two years seem to have had little effect.
6. vowed (paragraph 31)
After the 2014 avalanche, many Sherpas vowed not to return to Everest until working conditions – including life insurance policies – were improved.
7. take a stance (paragraph 34)
Mountain Madness from Seattle suspended its guided climbs on Everest for an indefinite period of
time, citing overcrowding and a surplus of inexperienced mountaineers. "We were trying to decide
if we wanted to take a stance and say, 'Hey, look, we just don't support what's happening on
Everest,'" says Mark Gunlogson,
Write your own sentences with the vocabulary
1. litter
2. chronicling
3. crave
4. albeit
5. quell
6. vowed
7. take a stance
EXERCISE 27
The debate on whether being overweight is
unhealthy
Summary
This article is on the controversy surrounding an academic study which concluded that being
overweight does not reduce life expectancy. In addition to talking about the controversy, it also
gives the arguments from both those who believe it doesn't and those who believe it does.
Is being a little bit overweight bad for you? Could it lead to an untimely death?
It's a question with real consequences. Many overweight people feel locked in a fruitless battle with
their size. If they do slim down, the process might distort their metabolisms forever. But if they
remain overweight, non-thin people may face intense prejudice and stigma, as the writer Taffy
Brodesser-Akner poignantly described in The New York Times Magazine recently:
I was in Iceland, for a story assignment, and the man who owned my hotel took me fishing
and said, ""I'm not going to insist you wear a life jacket, since I think you'd float, if you
know what I mean." I ignored him, and then afterward, back on land, after I fished cod like
a Viking, he said, "I call that survival of the fattest.""
The "health at every size" movement, though, has its own pitfalls, and not just because it can come
off as oddly objectifying. American life expectancy recently dipped slightly, and obesity might be
part of the cause. Telling people it's perfectly fine to be dozens of pounds overweight would be
terrible advice if it's wrong.
Most researchers agree that it's unhealthy for the average person to be, say, 300 pounds. They
don't really know why being very overweight is bad for you, but the thinking is that all those fat
cells disrupt how the body produces and uses insulin, leading to elevated glucose in the blood and,
eventually, diabetes. Extra weight also increases blood pressure, which can ultimately damage the
heart.
But whether just a few extra pounds raise the risk of death is a surprisingly controversial and
polarizing issue. Usually, nutrition scientists tell journalists hedgy things like, "this is just what my
study shows," followed by the dreaded disclaimer: "Further research is needed." But on this
question, the researchers involved are entrenched, having reached opposite conclusions and not
budging an inch. Like many internecine wars, the dispute mostly comes down to one small thing:
how you define the "overweight" population in the study.
Over the years, myriad side controversies – personal attacks, money from the Coca-Cola Company, and a debate over who is truly "overweight" – have deepened the divide. But they haven't clarified
things.
It all started in 2004, when scientists at the Centers for Disease Control and Prevention published a
study suggesting obesity was responsible for 400,000 deaths a year, making it almost as deadly as
smoking. It turned out to be a false alarm: The authors made methodological errors that skewed
their number too high. But a CDC senior scientist named Katherine Flegal was already working
with a small group of her colleagues to write a different obesity paper using better data and better
methods. In 2005, they published their results, and their estimate was substantially lower: Obesity
was only responsible for about 112,000 excess deaths. They also found something peculiar. Being
"overweight," but not obese, was not associated with an increased risk of death at all.
Millions of despairing dieters likely sighed with relief, perhaps celebratorily pouring a SlimFast
down the drain. But while Flegal's study was praised by some researchers, others were skeptical,
saying past research had already shown that the heavier you are, the greater your risk of dying.
"We can't afford to be complacent about the epidemic of obesity," JoAnn Manson, the chief of
preventive medicine at Brigham and Women's Hospital in Boston, told The New York Times after
Flegal's study came out.
Flegal pressed on, and in 2013 she and colleagues published a meta-analysis – a study of studies – that replicated her earlier findings. Even when adjusting for smoking, age, and sex, overweight people – those with a body mass index of between 25 and 30 – had a 6 percent lower risk of dying than normal-weight individuals. Body mass index, or BMI, is a measure of a person's weight in kilograms divided by the square of their height in metres. Her paper found that in terms of mortality, it's better for this number to be slightly elevated than to be normal. A 5-foot-6-inch woman, in other words, would be better off weighing 180 pounds than 120.
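To see the arithmetic behind that example, here is a minimal sketch in Python; the helper names are our own rather than anything from the article, and the cut-offs follow the standard definitions (including the post-1998 "overweight" threshold of 25 mentioned later in the text).

# Minimal illustration of the BMI arithmetic discussed above.
def bmi(weight_lb: float, height_in: float) -> float:
    """Convert imperial units to metric and return body mass index (kg/m^2)."""
    weight_kg = weight_lb * 0.453592
    height_m = height_in * 0.0254
    return weight_kg / height_m ** 2

def category(value: float) -> str:
    """Classify a BMI value against the conventional thresholds."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

# The 5-foot-6-inch (66-inch) woman from the example above:
for pounds in (120, 180):
    value = bmi(pounds, 66)
    print(f"{pounds} lb -> BMI {value:.1f} ({category(value)})")
# Prints roughly: 120 lb -> BMI 19.4 (normal); 180 lb -> BMI 29.1 (overweight)

On those standard definitions, the 180-pound woman counts as overweight and the 120-pound woman as normal weight, which is exactly the comparison Flegal's mortality finding turns on.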
A "pile of rubbish" is what Walter Willett, a Harvard University professor of epidemiology and
nutrition, deemed that paper. Willett has co-authored studies finding the opposite effect. He and
Andrew Stokes, a demographer at Boston University, say Flegal's work suffers from a problem they
call "reverse causality." They think that because she didn't examine her subjects' entire weight
history, her study didn't control for people who used to be overweight, but became normal-weight
because they got sick before they died. They argue her study conflates normal-weight, healthy
people with formerly overweight people who lost weight due to liver disease, cancer, or some other
illness. Having those individuals in the pool of normal-weight people makes the normal-weight
people seem sicker, and the overweight people seem healthier, than they actually are. "I think
Kathy Flegal just doesn't get it that people often lose weight before they die," Willett told me.
In 2016, Willett and dozens of other researchers from around the world published a paper in The
Lancet analyzing 239 studies and millions of study subjects. Their takeaway was clear: Above the
normal weight range, the fatter you are, the higher your risk of premature death. "On average,
overweight people lose about one year of life expectancy, and moderately obese people lose about
three years of life expectancy," said the paper's lead author, Emanuele Di Angelantonio.
Flegal takes issue with how Willett and his colleagues selected the studies for their review. "It
seems like they took studies they already knew about and that gave the answers that they
preferred," said Flegal, who is now a consulting professor at Stanford. Besides, other studies have
since implied there's a health benefit to heaviness. Last year researchers in Copenhagen looked at
three cohorts of Danes during the 1970s, '90s, and between 2003 and 2013. In the 1970s, the BMI
that was associated with the lowest risk of death was 23.7 – so-called normal weight. Surprisingly,
by the 2000s, the "healthiest" BMI had shifted up to 27, or technically overweight.
Børge G. Nordestgaard, a clinical professor at the University of Copenhagen and an author of that
study, speculated that this could be because over time, doctors have gotten better at treating some
of the side effects of excess weight, like high blood pressure and high triglycerides. Or, "it could
just be that as the population has become more overweight and obese, the people who are in the
middle of the BMI distribution, these are the most 'normal' people, they are the ones who do all
the most normal things," Nordestgaard said. "They are the ones who survive the best."
What's more, in 2014, New Orleans cardiologist Carl Lavie published the book The Obesity
Paradox: When Thinner Means Sicker and Heavier Means Healthier, based in part on his research
showing that overweight and mildly obese patients with cardiovascular disease have a better
prognosis than their leaner counterparts.
But when reporters found that Lavie had received money from the Coca-Cola Company for
speaking and consulting on obesity, it fueled speculation that junk-food companies are promoting
the supposed benefits of obesity in order to evade blame for causing it.
Andrew Stokes, the demographer at Boston University, says some of the most vocal supporters of
the "obesity paradox" are activists and people with vested interests. He's found that the paradox
disappears when "normal weight" is defined as only those people who have remained thin over
time, as opposed to those who entered the normal-weight category after losing weight due to an
illness. In a paper published this April, Stokes, Willett, and others found being overweight was
associated with mortality – but only if you looked at a person's maximum weight over the past 16
years. According to their findings, it's having ever been overweight that's risky.
That's not the end of the methodological gripes, though. Flegal and others say the self-report data
that Willett and Stokes use in some of their studies is not reliable. "It is well-known that
underreporting of body weight along with underreporting for females and overreporting for males
of height can result in biased BMI's," said Barry Graubard, a senior investigator with the National
Cancer Institute, which is part of the National Institutes of Health.
Stokes counters that not only has self-report data been found to correspond closely with measured weight, but also that not all of the data refuting the obesity paradox is self-reported. Flegal, meanwhile, thinks
Stokes and others haven't demonstrated that the weight loss was the result of a sickness, or that
the sickness-induced weight loss is a big enough problem to taint an entire study. She also thinks
his results are consistent with her 2013 meta-analysis, falling "pretty much in the middle of the
other studies that we found." Stokes disputes this.
If a little extra pudge is somehow good for you, it's not clear why. Some researchers suggest
overweight people might be better equipped to fight off certain diseases, with fat serving as a last-
ditch fuel for the ailing body. And they point to studies that failed to show that losing weight led to
less heart disease in overweight people. Stokes, meanwhile, thinks that explanation is speculative,
and it pales compared to the many ways obesity harms health. Even a BMI of 25, for example – just barely "overweight" – has been associated with an increased risk of diabetes.
There's also the idea that some people we now consider "overweight" – say, a 6-foot, 1-inch man who weighs 200 pounds – don't actually have too much fat. For one thing, athletes and other very
muscular people might be wrongly categorized as overweight, and some scientists now think it's
stomach fat, not hip fat, that's the dangerous kind. What's more, in 1998 the NIH revised down its
BMI threshold for "overweight" to 25, from 27.8 for men and 27.3 for women, in order to better
align with the rest of the world.
The new standard means that "if you showed someone with a 26 [BMI] had no excess mortality in
1996 there would be no question," Flegal said. She speculates the change was made to emphasize
the seriousness of the obesity epidemic, and she notes that her critics have expressed fears her
results might lull the public into complacency around obesity. "The problem with my research is
apparently just that I did it," she said. "That's not science."
But there's a big caveat to this theory. Medical advice urging heavy people to lose weight is based
on the premise that being overweight is unhealthy. If Flegal and Nordestgaard are right, and being
overweight is linked to less mortality, then should people whose BMIs fall in the normal range gain
weight? Should they be guzzling milkshakes in hopes of staving off death? Both Flegal and
Nordestgaard said "no."
"Weight is just one risk factor for most of these conditions, it's not the risk factor," Flegal said. She
points out that some studies show people with doctorate degrees live longer than those with
bachelor's degrees. "If someone tells me, "I have a bachelor's degree, but I know the risk is lower if
I have a doctoral degree,' should I tell them they should go get a Ph.D.?"
She reiterated something – perhaps the only thing – that epidemiologists who work on this issue
can still agree on: "It's associated. The causality is unclear."
Vocabulary exercise
1. poignantly (paragraph 2)
But if they remain overweight, non-thin people may face intense prejudice and stigma, as the
writer Taffy Brodesser-Akner poignantly described in The New York Times Magazine recently:
2. entrenched (paragraph 6)
But on this question, the researchers involved are entrenched, having reached opposite
conclusions and not budging an inch.
3. not budging an inch (paragraph 6)
But on this question, the researchers involved are entrenched, having reached opposite
conclusions and not budging an inch.
4. deemed (paragraph 11)
A "pile of rubbish" is what Walter Willett, a Harvard University professor of epidemiology and
nutrition, deemed that paper.
5. conflates (paragraph 11)
They argue her study conflates normal-weight, healthy people with formerly overweight people
who lost weight due to liver disease, cancer, or some other illness.
6. takes issue with (paragraph 13)
Flegal takes issue with how Willett and his colleagues selected the studies for their review. "It
seems like they took studies they already knew about and that gave the answers that they
preferred,"
7. vested interests (paragraph 17)
Andrew Stokes, the demographer at Boston University, says some of the most vocal supporters of
the "obesity paradox" are activists and people with vested interests.
Write your own sentences with the vocabulary
1. poignantly
2. entrenched
3. not budging an inch
4. deemed
5. conflates
6. takes issue with
7. vested interests
EXERCISE 28
The importance of Conrad's 'The Heart of
Darkness'
Summary
This article discusses Joseph Conrad's seminal novel 'The Heart of Darkness'. It not only describes the influence the novel has had on wider culture (and why), but also dissects the novel, trying to understand what ideas and opinions about society Conrad was trying to express in parts of it.
Joseph Conrad's Heart of Darkness - or "The Heart of Darkness", as it was known to its first
readers - was first published as a serial in 1899, in the popular monthly Blackwood's Magazine.
Few of that magazine's subscribers could have foreseen the fame that Conrad's story would
eventually garner, or the fierce debates it would later provoke.
Already, in 1922, the American poet T.S. Eliot thought the book was Zeitgeisty enough to provide
the epigraph for his epoch-defining poem, The Waste Land - although another American poet,
Ezra Pound, talked him out of using it. The same thought occurred to Francis Ford Coppola more
than 50 years later, when he used Conrad's story as the framework for his phantasmagoric
Vietnam War movie, Apocalypse Now. Echoes of Heart of Darkness can pop up almost anywhere:
the chorus to a Gang of Four song, the title of a Simpsons episode, a scene in Peter Jackson's 2005
King Kong remake.
Consider one final Heart of Darkness allusion, from Mohsin Hamid's 2017 Man Booker-shortlisted
novel, Exit West. In the novel's opening pages, a man with "dark skin and dark, woolly hair"
appears in a Sydney bedroom, transported there by one of the mysterious portals that have
appeared around the globe, connecting stable, prosperous countries with places that people need
to escape from. The "door", as these wormholes are called, is "a rectangle of complete darkness
the heart of darkness". This is a more complicated kind of Conrad reference. Here, "heart of
darkness" is a shorthand for European stereotypes of Africa, which Conrad's novel did its part to
reinforce.
Hamid's line plays on racist anxieties about immigration: the idea that certain places and peoples
are primitive, exotic, dangerous. For contemporary readers and writers, these questions have
become an unavoidable part of Conrad's legacy, too.
Up the river
Heart of Darkness is the story of an English seaman, Charles Marlow, who is hired by a Belgian
company to captain a river steamer in the recently established Congo Free State. Almost as soon as
he arrives in the Congo, Marlow begins to hear rumours about another company employee, Kurtz,
who is stationed deep in the interior of the country, hundreds of miles up the Congo River.
The second half of the novel – or novella, as it's often labelled – relates Marlow's journey upriver
and his meeting with Kurtz. His health destroyed by years in the jungle, Kurtz dies on the journey
back down to the coast, though not before Marlow has had a chance to glimpse "the barren
darkness of his heart". The coda to Marlow's Congo story takes place in Europe: questioned by
Kurtz's "Intended" about his last moments, Marlow decides to tell a comforting lie, rather than
reveal the truth about his descent into madness.
Although Conrad never met anyone quite like Kurtz in the Congo, the structure of Marlow's story is
based closely on his experiences as mate and, temporarily, captain of the Roi des Belges, a Congo
river steamer, in 1890. By this time, Conrad, born Józef Teodor Konrad Korzeniowski in the
Russian-ruled part of Poland in 1857, had been a seaman for about 15 years, rising to the rank of
master in the British merchant service. (The remains of the only sailing ship he ever commanded,
the Otago, have ended up in Hobart, a rusted, half-submerged shell on the banks of the Derwent.)
Sick with fever and disenchanted with his colleagues and superiors, he broke his contract after
only six months, and returned to London in early 1891. Three years and two ships later, Conrad
retired from the sea and embarked on a career as a writer, publishing the novel that he had been
working on since before he visited the Congo, Almayer's Folly, in 1895. A second novel, An Outcast
of the Islands, followed, along with several stories. Conrad's second career was humming along
when he finally set about transforming his Congo experience into fiction in 1898.
Darkness at home and abroad
Heart of Darkness opens on a ship, but not one of the commercial vessels that feature in Conrad's
sea stories. Rather, it's a private yacht, the Nellie, moored at Gravesend, about 20 miles east of the
City of London. The five male friends gathered on board were once sailors, but everyone except
Marlow has since changed careers, as Conrad himself had done. Like sail, which was rapidly being
displaced by steam-power, Marlow is introduced to us as an anachronism, still devoted to the
profession his companions have left behind. When, amidst the gathering "gloom", he begins to
reminisce about his stint as a "fresh-water sailor", his companions know they are in for one of his
"inconclusive experiences".
Setting the opening of Heart of Darkness on the Thames also allowed Conrad to foreshadow one of
the novel's central conceits: the lack of any absolute, essential difference between so-called
civilized societies and so-called primitive ones. "This, too", Marlow says, "has been one of the dark
places of the earth", imagining the impressions of an ancient Roman soldier, arriving in what was
then a remote, desolate corner of the empire.
During the second half of the 19th century, spurious theories of racial superiority were used to
legitimate empire-building, justifying European rule over native populations in places where they
had no other obvious right to be. Marlow, however, is too cynical to accept this convenient fiction.
The "conquest of the earth", he says, was not the manifest destiny of European peoples; rather, it
simply meant "the taking it away from those who have a different complexion or slightly flatter
noses than ourselves."
The idea that Africans and Europeans have more in common than the latter might care to admit
recurs later, when Marlow describes observing tribal ceremonies on the banks of the river.
Confronted with local villagers "stamping" and "swaying", their "eyes rolling", he is shaken by a
feeling of "remote kinship with this wild and passionate uproar".
Whereas most contemporary readers will be cheered by Marlow's scepticism about the project of
empire, this image of Congo's indigenous inhabitants is more problematic. "Going up that river",
Marlow says, "was like travelling back to the earliest beginnings of the world", and he accordingly
sees the dancing figures as remnants of "prehistoric man".
Heart of Darkness suggests that Europeans are not essentially more highly-evolved or enlightened
than the people whose territories they invade. To this extent, it punctures one of the myths of
imperialist race theory. But, as the critic Patrick Brantlinger has argued, it also portrays Congolese
villagers as primitiveness personified, inhabitants of a land that time forgot.
Kurtz is shown as the ultimate proof of this "kinship" between enlightened Europeans and the
"savages" they are supposed to be civilising. Kurtz had once written an idealistic "report" for an
organisation called the International Society for the Suppression of Savage Customs. When
Marlow finds this manuscript among Kurtz's papers, however, it bears a hastily-scrawled
addendum: "Exterminate all the brutes!" The Kurtz that Marlow finally encounters at the end of
the novel has been consumed by the same "forgotten and brutal instincts" he once intended to
suppress.
Adventure on acid
The European "gone native" on the fringes of empire was a stock trope, which Conrad himself had
already explored elsewhere in his writing, but Heart of Darkness takes this cliché of imperial
adventure fiction and sends it on an acid trip. The manic, emaciated Kurtz that Marlow finds at the
Inner Station is straight out of the pages of late-Victorian neo-Gothic, more Bram Stoker or
Sheridan Le Fanu than Henry Rider Haggard. The "wilderness" has possessed Kurtz, "loved him,
embraced him, got into his veins" – it is no wonder that Marlow feels "creepy all over" just
thinking about it.
Kurtz's famous last words are "The horror! The horror!" "Horror" is also the feeling that Kurtz and
his monstrous jungle compound, with its decorative display of human heads, are supposed to
evoke in the reader. Along with its various other generic affiliations – imperial romance, psychological novel, impressionist tour de force – Heart of Darkness is a horror story.
Conrad's Kurtz also channels turn-of-the-century anxieties about mass media and mass politics.
One of Kurtz's defining qualities in the novel is "eloquence": Marlow refers to him repeatedly as "A
voice!", and his report on Savage Customs is written in a rhetorical, highfalutin style, short on
practical details but long on sonorous abstractions. Marlow never discovers Kurtz's real
"profession", but he gets the impression that he was somehow connected with the press either a
"journalist who could paint" or a "painter who wrote for the papers".
This seems to be confirmed when a Belgian journalist turns up in Antwerp after Kurtz's death,
referring to him as his "dear colleague" and sniffing around for anything he can use as copy.
Marlow fobs him off with the bombastic report, which the journalist accepts happily enough. For
Conrad, implicitly, Kurtz's mendacious eloquence is just the kind of thing that unscrupulous
popular newspapers like to print.
If Kurtz's "colleague" is to be believed, moreover, his peculiar gifts might also have found an outlet
in populist politics: "He would have been a splendid leader of an extreme party." Had he returned
to Europe, that is, the same faculty that enabled Kurtz to impose his mad will on the tribespeople
of the upper Congo might have found a wider audience.
Politically, Conrad tended to be on the right, and this image of Kurtz as an extremist demagogue
expresses a habitual pessimism about mass democracy in 1899, still a relatively recent
phenomenon. Nonetheless, in the light of the totalitarian regimes that emerged in Italy, Germany
and Russia after 1918, Kurtz's combination of irresistible charisma with megalomaniacal brutality
seems prescient.
These concerns about political populism also resonate with recent democratic processes in the US
and the UK, among other places. Only Conrad's emphasis on "eloquence" now seems quaint: as the
2016 US Presidential Election demonstrated, an absence of rhetorical flair is no handicap in the
arena of contemporary populist debate.
Race and empire
Heart of Darkness contains a bitter critique of imperialism in the Congo, which Conrad condemns
as "rapacious and pitiless folly". The backlash against the systematic abuse and exploitation of
Congo's indigenous inhabitants did not really start in earnest until the first decade of the 20th
century, so that the anti-imperialist theme was ahead of its time, if only by a few years. Nor does
Conrad have any patience with complacent European beliefs about racial superiority.
Nonetheless, the novel also contains representations of Africans that would rightly be described as
racist if they were written today. In particular, Conrad shows little interest in the experience of
Marlow's "cannibal" shipmates, who come across as exotic caricatures. It is images like these that
led the Nigerian novelist Chinua Achebe to denounce Conrad as a "bloody racist", in an influential
1977 essay.
One response to this criticism is to argue, as Paul B. Armstrong does, that the lack of more
rounded Congolese characters is the point. By sticking to Marlow's limited perspective, Heart of
Darkness gives an authentic portrayal of how people see other cultures. But this doesn't necessarily
make the images themselves any less offensive.
If Achebe did not succeed in having Heart of Darkness struck from the canon, he did ensure that
academics writing about the novel could no longer ignore the question of race. For Urmila
Seshagiri, Heart of Darkness shows that race is not the stable, scientific category that many
Victorians thought it was. This kind of argument shifts the debate in a different direction, away
from the author's putative "racism", and onto the novel's complex portrayal of race itself.
Perhaps because he was himself an alien in Britain, whose first career had taken him to the
farthest corners of the globe, Conrad's novels and stories often seem more in tune with our
globalized world than those of some of his contemporaries. An émigré at 16, Conrad experienced to
a high degree the kind of dislocation that has become an increasingly typical modern condition. It
is entirely appropriate, in more ways than one, for Hamid to allude to Conrad in a novel about
global mobility.
The paradox of Heart of Darkness is that it seems at once so improbable and so necessary. It is
impossible not to be astonished, when you think of it, that a Polish ex-sailor, writing in his third
language, was ever in a position to author such a story, on such a subject. And yet, in another way,
Conrad's life seems more determined than most, in more direct contact with the great forces of
history. It is from this point of view that Heart of Darkness seems necessary, even inevitable, the
product of dark historical energies, which continue to shape our contemporary world.
Vocabulary exercise
1. relates (paragraph 6)
The second half of the novel – or novella, as it's often labelled – relates Marlow's journey upriver
and his meeting with Kurtz.
2. disenchanted (paragraph 8)
Sick with fever and disenchanted with his colleagues and superiors, he broke his contract after
only six months, and returned to London in early 1891.
3. displaced (paragraph 9)
Like sail, which was rapidly being displaced by steam-power, Marlow is introduced to us as an
anachronism, still devoted to the profession his companions have left behind.
4. reminisce (paragraph 9)
When, amidst the gathering "gloom", he begins to reminisce about his stint as a "fresh-water
sailor", his companions know they are in for one of his "inconclusive experiences".
5. recurs (paragraph 12)
The idea that Africans and Europeans have more in common than the latter might care to admit
recurs later, when Marlow describes observing tribal ceremonies on the banks of the river.
6. flair (paragraph 22)
as the 2016 US Presidential Election demonstrated, an absence of rhetorical flair is no handicap
in the arena of contemporary populist debate.
7. start in earnest (paragraph 23)
The backlash against the systematic abuse and exploitation of Congo's indigenous inhabitants did
not really start in earnest until the first decade of the 20th century, so that the anti-imperialist
theme was ahead of its time,
Write your own sentences with the vocabulary
1. relates
2. disenchanted
3. displaced
4. reminisce
5. recurs
6. flair
7. start in earnest
EXERCISE 29
The resistance to moving from steam power to
electricity in manufacturing
Summary
This article explains how it takes time for new technologies to deliver productivity gains to
companies. Focusing for the most part on the reasons for the slow transition from steam power to
electricity in factories in the past, it ends by explaining why these reasons are applicable to the
present day.
For investors in Boo.com, WebVan and eToys, the bursting of the dotcom bubble came as a bit of a
shock. Companies like these raised vast sums on the promise that the world wide web would change
everything. Then, in the spring of 2000, stock markets collapsed and these investors lost
everything as all three companies went into bankruptcy.
Some economists had long been sceptical about the promise of computers. In 1987, we didn't have
the web, but spreadsheets and databases were appearing in every workplace – and having, it
seemed, no impact whatsoever. The leading thinker on economic growth, Robert Solow, famously
quipped: "You can see the computer age everywhere but in the productivity statistics."
It's not easy to track the overall economic impact of innovation but the best measure we have is
something called "total factor productivity". When it's growing, that means the economy is
somehow squeezing more output out of inputs, such as machinery, human labour and education.
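As a brief aside for readers who want that idea made precise, textbook growth accounting (a standard formulation, not something spelled out in this article) defines total factor productivity growth as the output growth left over once the growth of the measured inputs has been accounted for:

\[ \frac{\Delta A}{A} \;\approx\; \frac{\Delta Y}{Y} \;-\; \alpha\,\frac{\Delta K}{K} \;-\; (1-\alpha)\,\frac{\Delta L}{L} \]

where \(Y\) is output, \(K\) is capital such as machinery, \(L\) is labour (adjusted for education and skills), \(\alpha\) is capital's share of income, and \(A\) is total factor productivity. If output grows faster than this weighted combination of inputs, measured TFP is rising.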
The productivity paradox
In the 1980s, when Robert Solow was writing, it was growing at the slowest rate for decades – slower even than during the Great Depression. Technology seemed to be booming but productivity
was almost stagnant.
Economists called it the "productivity paradox".
For a hint about what was going on, we have to rewind more than 100 years and witness another
remarkable new technology that was proving disappointing for productivity: electricity.
At that time some corporations were investing in electric dynamos and motors (technology which
at the time was cutting-edge) and installing them in the workplace. Thomas Edison and Joseph
Swan independently invented usable light bulbs in the late 1870s. And in 1881, Edison built
electricity generating stations at Pearl Street in Manhattan and Holborn in London. Within a year,
he was selling electricity as a commodity. By 1883, the first electric motors were driving
manufacturing machinery.
Yet by 1900, less than 5% of mechanical drive power in American factories was coming from
electric motors. The age of steam lingered.
The mechanical power still came, as it had since the turn of the 1800s, from a single massive steam engine (or sometimes several), which turned a central steel drive shaft that ran along the length of the factory. Subsidiary shafts, connected via belts and gears, drove hammers, punches, presses
and looms. The belts could even transfer power vertically through a hole in the ceiling to a second
or even third floor. Expensive "belt towers" enclosed them to prevent fires from spreading through
the gaps. Everything was continually lubricated by thousands of drip oilers. Steam engines rarely
stopped. If a single machine in the factory needed to run, the coal fires needed to be fed. The cogs
whirred, the shafts spun and the belts churned up the grease and the dust, and there was always
the risk that a worker might snag a sleeve or bootlace and be dragged into the relentless, all-
embracing machine.
Some factory owners did replace steam engines with electric motors, drawing clean and modern
power from a nearby generating station.
Revolutionary impact
But given the huge investment this involved, they were often disappointed with the savings. Until
about 1910, plenty of entrepreneurs considered using the new electrical drive system, but they
mostly opted for good old-fashioned steam.
Why? Because to take advantage of electricity, factory owners had to think in a very different way.
They could, of course, use an electric motor in the same way as they used steam engines. It would
slot right into their old systems. But electric motors could do much more. Electricity allowed
power to be delivered exactly where and when it was needed. Small steam engines were hopelessly
inefficient but small electric motors worked just fine. So a factory could contain several smaller
motors, each driving a small drive shaft.
As the technology developed, every workbench could have its own machine tool with its own little
electric motor. Power wasn't transmitted through a single, massive spinning drive shaft but
through wires.
And whilst a factory powered by steam needed to be sturdy enough to support the weight of the huge steel drive shafts, one powered by electricity could be light and airy. Steam-powered
factories had to be arranged on the logic of the driveshaft. Electricity meant you could organise
factories on the logic of a production line.
More efficient
Old factories were dark and dense, packed around the shafts. New factories could spread out, with
wings and windows allowing natural light and air. Factories could be cleaner and safer and more
efficient, because machines needed to run only when they were being used.
But you couldn't get these results simply by ripping out the steam engine and replacing it with an
electric motor. You needed to change everything: the architecture and the production process. And
as workers had more autonomy and flexibility, you even had to change the way they were
recruited, trained and paid. And it was largely because of this that many factory owners hesitated and
continued to use steam.
And then there was the cost. As with any emerging technology, the unit cost of electricity was prohibitively high and supply could be unreliable. It was still considerably cheaper for factory owners to use steam to power their factories. Added to this, scrapping existing equipment which was still working perfectly fine, and replacing it with something which in relative terms cost an arm and a leg not only to run but also to install, seemed illogical.
However, that changed as the century progressed and the technology matured. The adoption of
Alternating Current (AC) instead of the initial Direct Current (DC) electricity was key to this. This
change meant that the use of electricity was not only cheaper, but also far more reliable. On top of
this was a change in the workforce. American workers became more expensive to employ thanks to
a series of new laws that limited immigration from a war-torn Europe.
Average wages soared and hiring staff became more about quality and less about quantity. Trained
workers could use the autonomy that electricity gave them. And as more factory owners figured out
how to make the most of electric motors, new ideas about manufacturing spread. These were compelling reasons for most factory owners to finally take the plunge and move to electricity.
And as most manufacturers moved away from steam and embraced electricity, they achieved productivity gains which had previously been unobtainable. Come the 1920s, productivity in American
manufacturing soared in a way never seen before or since.
And the reason for these gains was that manufacturers were innovating on how they used the new
source of energy in their production processes. As economic historian Paul David said,
“manufacturers had finally figured out how to use the technology efficiently”. It was no longer
merely a substitute for steam, like it had been at the beginning, but something which
revolutionised the workplace and the means of production. This enabled not only new processes to be instigated, but also the development of new and better machinery.
Parallels to now
It took nearly 60 years after the first computer program was created for companies to finally reap
the benefits of what this new technology could offer. Research has shown that for the vast majority
of companies that had invested in computers in the 20th century, there was relatively little
financial reward for doing so. But as we moved into the new century, this began to change, thanks
especially to the rise of the world wide web and the internet. And this led to companies (either
willingly or unwillingly) starting to reorganise their operational and business models to take
advantage of what computers and the web had to offer.
And whilst in the last 10 years we have witnessed more changes in how companies operate than in the previous 50 (with companies decentralising and outsourcing their operations, streamlining supply chains and offering more choice to customers), just as with electricity in the past, we have yet to see what the really big changes to our lives and work from the computer revolution will be.
The thing about a revolutionary technology is that it changes everything – that's why we call it
revolutionary. And changing everything takes time and sometimes just a lot of hard work.
Vocabulary exercise
1. cutting-edge (paragraph 7)
At that time some corporations were investing in electric dynamos and motors (technology which
at the time was cutting-edge) and installing them in the workplace.
2. lingered (paragraph 8)
By 1883, the first electric motors were driving manufacturing machinery. Yet by 1900, less than 5%
of mechanical drive power in American factories was coming from electric motors. The age of
steam lingered.
3. sturdy (paragraph 14)
And whilst a factory powered by steam needed to be sturdy enough to support the weight of the huge steel drive shafts, one powered by electricity could be light and airy.
4. cost an arm and a leg (paragraph 17)
Added to this, scrapping existing equipment which was still working perfectly fine, and replacing it with something which in relative terms cost an arm and a leg not only to run but also to install, seemed illogical.
5. take the plunge (paragraph 19)
And as more factory owners figured out how to make the most of electric motors, new ideas about
manufacturing spread. These were compelling reasons for most factory owners to finally take the plunge and move to electricity.
6. merely (paragraph 21)
It was no longer merely a substitute for steam, like it had been at the beginning, but something
which revolutionised the workplace and the means of production.
7. reap the benefits (paragraph 22)
It took nearly 60 years after the first computer program was created for companies to finally reap
the benefits of what this new technology could offer.
Write your own sentences with the vocabulary
1. cutting-edge
2. lingered
3. sturdy
4. cost an arm and a leg
5. take the plunge
6. merely
7. reap the benefits
EXERCISE 30
The significance of colour
Summary
This article discusses the meanings, effects and uses of the 10 most common colours. For each colour, it starts by explaining the history of the colour before describing the different meanings, effects and uses it has and how the colour is used in branding for companies and products.
In today's society, colour is perhaps something we take for granted. With one simple click, we have
thousands of colours at our disposal. But, much like the science behind colour psychology, there's
also the history of colours. Below, we learn about how some of our favourite colours were created
and how they have evolved to form meaning in society and through design.
1. Blue
Blue is a colour that has long been associated with royalty, art, military, business and nature,
making it a colour with a lot of applications. US and UK public opinion surveys have found that
blue is the preferred colour of a majority of both men and women, making it quite a popular hue.
The first documented use of blue pigment involved azurite, a vivid deep blue naturally occurring mineral, which was used widely in ancient Egypt for decoration and jewellery. Later, in the
Renaissance, the mineral was crushed and used as the expensive paint pigment ultramarine. From
here blue would go on to live a long life in the world of art, from stained glass windows in the
Middle Ages, fine blue and white porcelain in China through to famous applications by artists such
as Renoir and Van Gogh.
Blue is also thought to promote trustworthiness, serenity, and productivity, amongst other positive traits, which is why the colour dominates tech, financial and medical branding.
2. Red
Red is a rich colour with an even richer history. Use of the pigment can be traced way back to
Ancient Egypt where it was considered both a colour of vitality and celebration, as well as evil and
destruction. From here on, red was a staple hue throughout history, used in ancient Grecian
murals, in Byzantine clothing to signal status and wealth, and heavily applied throughout art
movements ranging from the Renaissance through to modern day art.
Red is considered to be a colour of intense emotions, ranging from anger, sacrifice, danger, and
heat, through to love, passion, and sexuality.
In many Asian countries such as India and China, red is regarded as the colour of happiness,
wellbeing, and good fortune.
In the world of branding, red can signal a whole host of different ideas, depending on the specific
shade. For example, a darker red often signals luxury and professionalism. A bright, intense red
signals excitement, energy, and efficiency. A cooler, deeper burgundy is often more sophisticated
and serious whereas a brown-tinted maroon red is courageous and strong. Red is also a colour that
is thought to stimulate appetites and hunger, making it a popular choice when it comes to food
industry branding. Think of Fritos, Coca-Cola, McDonald's, etc.
When it comes to branding red is also often used to promote speed, energy, and efficiency. Netflix,
Suzuki, and RedBull are all brands that use red to promote life in the fast lane. Red is also often
seen being applied as a colour to promote creativity in electronic/software brands. Adobe, Canon,
Nintendo are all prime examples of this in action.
3. Yellow
Heralded as the colour of sunshine and gold, yellow is a vibrant, historic colour. With its pigment derived from clay, yellow is thought to be one of the first colours ever used as a paint in
prehistoric cave art, with the first application thought to be over 17,300 years old.
Ancient Egyptians were pretty prolific users of the colour too. Thanks to its close association with
gold, the colour was considered eternal and indestructible. Yellow has a longstanding relationship
with the world of art, with artists such as Van Gogh adopting it as a signature colour to signal
warmth and happiness. Beyond these associations, yellow is a colour that embodies many ideas depending on the shade and application. As previously mentioned, it can symbolise happiness,
sunshine, good energy, and joy.
However, it can also represent cowardice, betrayal, terror, and illness. Interestingly, the latter of
these associations is thought to be due to the fact that yellow pigments are often found in toxic
materials. Furthermore, as yellow is the most visible colour of the spectrum, it is often used for
emergency and cautionary signage, clothing, and applications. If you need to grab attention fast,
use a bit of yellow.
In Japan, yellow is thought to represent courage, and in some parts of Mexico certain shades are thought to represent death.
When it comes to branding, yellow can be applied to signal a range of ideas. For example, it is
often used to signal speed and efficiency, as we can see embodied in brands like Ferrari and Sprint.
It can also signal, of course, happiness and joy. Think of brands like Snapchat and McDonald's that
promote a culture of enjoyment and fun. And it is also a colour that promotes the idea of wisdom
and knowledge in certain brands. National Geographic, BIC and Commonwealth Bank are all prime
examples of this theory in action.
4. Green
Named after the Anglo-Saxon word grene meaning "grass" and "grow", green is a colour with close
and distinctive ties to nature, the environment, and all things to do with the great outdoors.
Historically, green was a pigment that did not appear as early in prehistoric art as other hues as it
was a hard pigment to reproduce. Due to this, many art and fabric applications of green either
turned out a dull brownish green, or eventually faded due to the temperamental pigments used.
So, it was only when synthetic green pigments and dyes were produced that green was seen more
prolifically throughout modern art.
In Western countries, green is seen as a colour of luck, freshness, the colour for 'go', jealousy, and
greed. In China and Japan, green is seen as the colour of new birth, youth, and hope. However, in China it is also a symbol of infidelity, as wearing a green hat is considered a sign that your partner is cheating on you.
Psychologically speaking, green is thought to help balance emotions, promote clarity, and create an
overall feeling of zen. Green is obviously the colour of nature and health, thus it also has close ties
with emotions of empathy, kindness, and compassion. Paler, softer mint greens often promote
ideas of youth, inexperience, and innocence, while deeper, darker greens draw out notions of
success, wealth, and money. Vibrant lime green shades promote energy and playfulness, and
deeper olive greens are seen as representing strength and endurance.
It can also signal prestige and wealth, as demonstrated in luxury car brand Jaguar's logo, or the
high-end fashion brand Lacoste. And it can also be used to promote health, the environment, and
all-natural products. Just check out Whole Foods or Animal Planet's logos to see this eco-
friendliness exemplified.
5. Orange
Sitting in between red and yellow in the colour spectrum is orange. Historically, orange was used
prolifically by the Ancient Egyptians and Medieval artists; the pigment was often made out of a highly toxic mineral called orpiment, which contained arsenic.
Before the late 15th century, Europeans simply referred to orange as yellow-red until they were
introduced to orange trees, when the pigment was finally awarded its true name. During the 16th
and 17th centuries, orange became a symbol of Protestantism and an important political colour in
Britain and Europe under William III's reign. Throughout art in the 18th and 19th centuries,
orange became a symbol of impressionism thanks to artists such as Renoir, Cezanne, and Van
Gogh.
Orange has different tones and shades, each with different meanings and effects. For example,
light pastel peach tones are seen as sweet, conversational, and affable, whereas more intense,
vibrant oranges are seen as representative of vitality, energy, and encouragement. Deeper amber
shades are seen as confident, a symbol of pride and self-assertion, and darker orange-brown tones
promote ambition, adventure, and opportunity.
In Western countries, orange is often linked to inexpensive/affordable products and is heavily
used in relation to Halloween. In Thailand, orange is the colour of Friday, whereas in the
Netherlands it is the colour of the Dutch Royal Family.
In terms of branding, orange has a few different applications. First of all, orange is often used to
communicate a product that is affordable and/or inexpensive. Think of Amazon, or Payless shoes'
logos and how they use orange to suggest this. It is also used to elicit feelings of adventure,
excitement and risk. We can see this in Harley Davidson or Rockstar Games' logos. Alternatively,
vibrant oranges are used to promote energy, enthusiasm, and fun. Think of Nickelodeon's
signature orange hue, or Fanta's bright, enthusiastic use of the colour.
6. Purple
Sitting in between red and blue on the colour spectrum is none other than purple. Purple has long
had a noble and regal history. Due to the fact that producing purple
pigments was expensive and difficult, the colour was often worn by those of high status and royal
descent throughout the Byzantine and Holy Roman Empires as well as Japanese aristocracy. From
here on out purple remained a colour to symbolize royalty and nobility throughout history until
1856 when the colour became more accessible to the man in the street and simply became a signal
of fashion and style instead. However, purple is still a colour used by the British royal family and
will forever remain the colour of the royals.
Purple is a colour that sits in an interesting place on the colour spectrum – right in between warm red and cool blue – making it a colour that can be both cool and warm depending on the specific shade.
Thanks to this, different shades of purple can have significantly different effects. On the lighter end
of the spectrum is lavender. This pale, soft shade communicates femininity, nostalgia, romance,
and tenderness. More vibrant purples promote royalty, nobility, extravagance, and luxury, while deeper, darker shades of purple such as mauve can promote ideas of seriousness and professionalism, as well as gloom and sadness in certain applications.
When it comes to branding purple is used in a multitude of ways. One common application is to
draw on the historic ties of the colour with royalty to help build a luxurious, expensive, high quality
brand. Think of the logos of Hallmark, or luxe whiskey brand Crown Royal. Purple is also a colour
that can promote fun, creativity, and play. Thanks to this, it is often used in child-oriented branding such as Wonka candy, or platforms that encourage play, such as the video game streaming service Twitch. In addition, purple is also frequently used in branding to promote knowledge,
innovation, and intelligence. Take a look at SYFY and NYU's logos and see how they each use
purple to evoke these ideas.
7. Pink
Named after the flower of the same name, pink is a vibrant, feminine colour with an interesting
history. Pink does not have as prolific a history in art and culture as some other colours, as more
intense shades of red and crimson were often preferred. However, during the Renaissance, pink pigments started to be applied more often, and from then on the colour worked its way into the
world of fashion, art, and design.
Pink is widely regarded in the western world as the colour of femininity. Because of this, it is used
to bring awareness to breast cancer, applied to many women's products, and considered a colour
predominately used and worn by women. However, this has not always been the case. Originally it
was considered to be a colour suited to little boys, as red was a man's colour, and pink its younger
sister hue. Similarly, in Japan pink is a colour associated with masculinity, with the pink cherry
blossom trees thought to be symbols of fallen Japanese warriors.
When it comes to shades of pink, softer, lighter tones often promote innocence, girlhood,
nurturing, love, and gentleness, and these soft shades are actually thought to increase female
physical strength. However, brighter, more intense shades of pink are instead thought to promote
sensuality, and passion, as well as creativity, energy, and, as studies have suggested, the colour is
thought to raise pulse rate and blood pressure.
When it comes to branding pink is used in a variety of ways. Arguably the most common
application is on brands that predominantly cater towards women. As pink is a symbol of
femininity in Western cultures, brands like Victoria's Secret and Cosmopolitan all use the colour to
get the attention of their demographic. Pink is also a colour used to promote creativity, artistic
expression, and innovation, as seen in Adobe InDesign's logo, as well as graphic design social
network Dribbble's branding. And as it is seen as the sweetest colour of all, naturally pink is also
used for sweets, candy, and dessert food brands such as Dunkin Donuts, Baskin Robbins, and
Trolli.
8. Brown
Branded by studies as "the least favorite colour of the public", brown is nevertheless not a
colour to turn a blind eye to. Brown is considered to be one of the first pigments ever used in
prehistoric times and has been a staple of art and culture ever since.
Brown has long been a symbol of the lower classes, this association stemming from Ancient Rome
when the colour was donned only by barbarians and people of low social and economic rankings. It
is also a colour that was worn by the monks of the Franciscan order as a sign of poverty and
humility. However, brown has had quite the revival in modern culture. Now a symbol of all things
organic, natural, healthy, and quality, it is a colour with many positive associations.
Brown is not a colour applied throughout branding as prolifically as other hues, but when it is
used, it has a few distinctive effects. As brown is a colour often seen in nature, it has become a
symbol of all things organic, authentic and/or natural. We can see this in the branding of companies
such as Ugg and Cotton. And the hue is also seen as one that promotes reliability, efficiency, and
high value service, as seen in the UPS and JP Morgan logos.
Deep, rich browns are often applied to signal high quality and luxe style. Two prime examples of
this are Louis Vuitton's signature brown, and Hollister's deep brown logo. And finally, brown is
also often used as a colour to communicate warmth, relaxation, and indulgence. It is often
chocolate and coffee brands such as Nespresso, Hersheys, and many small coffee chains that make
use of this application of the hue.
9. White
White has been a staple of art, history, and culture for many eras. In fact, it is recorded as the first
colour ever used in art, with Paleolithic artists using white calcite and chalk to draw. And
throughout much of history, white has been elected as a symbol of goodness, spirituality, purity,
godliness, and sacredness. Ancient Egyptian gods, Greek gods, Roman goddesses were all depicted
as clad in white to symbolize their deity.
White is considered the symbolic opposite of black, with the two colours together forming symbols
of good and evil, night and day, light and dark, etc... And while in Western cultures, white is the
classic colour of wedding dresses, symbolizing innocence and purity, in many Asian cultures white
is the colour of mourning, grief, and loss.
White is a tricky colour to work with when it comes to logos and brand applications as you can't
have a purely white logo. However, many brands choose to either use slight off-white, grey, or
silver tones, or combine the white with black to create striking, high-contrast logos. White and light
off-white are often used as a luxurious colour in branding. Brands like Swarovski use
it to promote elegance, sophistication, and to draw comparisons between their logo and
crystals/diamonds.
White is also the colour of modern day technology. It is seen as clean, sophisticated, streamlined,
and efficient. This association is likely in part thanks to Apple's prolific use of white throughout
their branding.
10. Black
Earning the title of darkest colour thanks to its total absorption of light is none other than black.
Similarly to white, it is one with a long history of use and importance that extends into modern
day.
Alongside white, black is one of the first recorded colours used in art, the pigment created by
Paleolithic people who used charcoal, burnt bones, or various crushed minerals.
Throughout much of history, black has been a symbol of evil (as in the underworld of Greek
mythology), mourning, sadness, and darkness. In Ancient Egypt, however, the colour had
positive connotations of protection and fertility. Black has gone through many shifts in
meaning, application, and perception from era to era and culture to culture. Eventually the colour
was revolutionised and given a prominent standing in the world of fashion, quickly becoming a
symbol of elegance and simplicity.
Black is seen as a sharp colour that can promote many ideas, ranging from sophistication, mystery,
sensuality, and confidence through to grief and misery, depending on the application.
In the world of branding, black is a staple. While colour is thought to increase brand recognition by
up to 80%, many brands opt for a sharp black logo thanks to the versatility of the hue. Black can
promote ideas of power, elitism, and strength. This is often seen in sports brands such as Nike,
Puma, and Adidas. The use of bold flat black logos creates striking, bold, punchy brand marks.
Conversely, black is also often used to promote elegance, luxury, and status. This is why we see
plain black logos used for beauty and fashion brands such as Schwarzkopf and Chanel, as the colour
is thought to be timeless and never out of style.
Vocabulary exercise
1. hue (paragraph 2)
US and UK public opinion surveys have found that blue is a majority of men and women's
preferred colour, making it quite a popular hue.
2. staple (paragraph 5)
From here on, red was a staple hue throughout history, used in ancient Grecian murals, in
Byzantine clothing to signal status and wealth, and heavily applied throughout art movements
ranging from the Renaissance through to modern day art.
3. shade (paragraph 8)
In the world of branding, red can signal a whole host of different ideas, depending on the specific
shade. For example, a darker red often signals luxury and professionalism. A bright, intense red
signals excitement, energy, and efficiency.
4. signal (paragraph 11)
Yellow has a longstanding relationship with the world of art, with artists such as Van Gogh
adopting it as a signature colour to signal warmth and happiness
5. exemplified (paragraph 18)
And it can also be used to promote health, the environment, and all-natural products. Just check
out Whole Foods or Animal Planet's logos to see this eco-friendliness exemplified.
6. suited to (paragraph 29)
However, this has not always been the case. Originally it was considered to be a colour suited to
little boys, as red was a man's colour, and pink its younger sister hue.
7. cater (paragraph 31)
Arguably the most common application is on brands that predominantly cater towards women.
As pink is a symbol of femininity in Western cultures, brands like Victoria's Secret and
Cosmopolitan all use
Write your own sentences with the vocabulary
1. hue
2. staple
3. shade
4. signal
5. exemplified
6. suited to
7. cater
EXERCISE 31
The work of artist Jackson Pollock
Summary
This is an article on the life and work of the celebrated American artist Jackson Pollock. It tries to
explain what made Pollock both a great and an influential artist, often by examining some of his
paintings in detail, but also by talking about his life and the period in which he was painting.
It seems as if the Museum of Modern Art in New York organizes a show of Pollock about every two
years or so. This may be an exaggeration, but the truth is that we can't let the man peaceably
subside into art history. It has something to do with the American moment: the short period just
after the Second World War, when we had saved the world from fascism and our hidden imperial
tactics were not yet exposed.
In New York, Abstract Expressionism has never really died. We still pray to Jackson Pollock on a
regular basis, not only because he is a major, and profoundly American, artist, but because his
mythology floats what can only be described as a bloated market for his art. There was a time when
Pollock and his colleagues Gorky and de Kooning were impossibly poor, but their
tenacity and creativity have been translated into immense amounts of hard cash. To this day we
have artists pursuing careers by maintaining styles that actively reference a movement that had its
high points in the middle of the last century! All this time has passed, and yet New Yorkers still
cling to the relevance of expressionist art, created mostly without a recognizable subject.
When trying to understand Pollock and his work, it is easy to succumb to the myth of a bad boy
artist who was tortured by his own genius. And whilst it is true there were bouts of depression,
womanizing and alcoholism (he famously urinated in the fireplace of the collector Peggy
Guggenheim in New York City), this does not necessarily explain either him or the
achievements of his creativity. Pollock quite tenaciously overcame his vulnerabilities and mood
swings to produce, for a bit less than ten years, a body of work that remains outstanding,
luminously inspired, and technically and stylistically new.
My feeling is that he is surely a major artist, but that his life has been surrounded by the gossip of
his transgressions: behavior that challenged the acceptable norm, but which was nonetheless
tolerated by a public fascinated with genius. The trouble is that the mythology of the person
damages our ability to judge objectively the quality of the art.
Certain artists have lives that cannot be separated from their artistic output; for example, we have
the case of Van Gogh. His instability, like that of Pollock's, is enmeshed in our experience of his
work, the late work especially. But stories of an artist's personal life cannot salvage achievement
that is not outstanding. In the case of Pollock, his work has been accorded critical acclaim from the
start. His intensity of purpose, experienced in the fixed stare he threw at everything he saw, was
seen early on as the accomplishment of someone destined for greatness.
The Museum of Modern Art's show, "Jackson Pollock: A Collection Survey, 1934-1954," comprised
roughly fifty works of art, ranging from raw, rough early efforts that are figurative and describe
mythologies, to the pioneering drip abstractions of the 1940s and ‘50s. All in all, it is a very good
exhibition that successfully covers the beginnings of Pollock's creativity to the masterful works that
came later, including the masterpiece entitled "One: Number 31, 1950" (1950), the center of the
show.
Yes, he is a preeminent painter, whose discipline within the expressionism of the drip resulted in
art of enduring importance; and yes, he represented in many ways the boldness and risk-taking of
American life during the period he worked. But I do not think we help his large audience develop
an accurate judgment by constructing a mythic reading of the man. The mythology must be
separated from the achievement.
The primary reason Americans find this so hard to do is that Pollock really did embody the
glory, and perhaps the bombast as well, of the culture at the time. It is much more interesting to
see how the exhibition charts the development of the artist, rather than setting the paintings up as
supports for a legendary reputation. Surely it is more useful to rationally conceive of the arc of
Pollock's career as a development over time rather than as the inspired exaltations of an alcoholic
genius.
At the same time, this restraint does not diminish the qualities of Pollock as a person. Clearly he
was a charismatic figure, someone in close relation with a tempestuous unconscious. It must have
cost a tremendous amount personally for him to access an unknown that enabled him to paint so
extraordinarily well and so differently from everyone else. To paint the way he did, directly as an
action and without premeditation, meant that he had to forge a straight path to his inner self. Even
in the early work, inspired as it is by mythological reference, both private and public, one senses
that Pollock began by trying to break through to the other side of his psyche, in ways that
accentuated the violence of the attempt.
Born in 1912 in Wyoming, Pollock came to live in New York with his older brother Charles at the
age of 18. Once there, he studied with the accomplished painter and muralist Thomas Hart Benton,
who provided a bit of a home for Pollock at this time. As the exhibition demonstrates, the early
paintings are dark, rough, and emotionally fraught attempts on the artist's part to harness
passions that he could not fully control. In "The Flame" (1934-38), Pollock paints white
and red flames over charred wood; the painting's intensity surely corresponds with the force of his
own personality, flame being a more than obvious metaphor for ardor and its place among human
feelings.
But it is his painting "One: Number 31, 1950," which is his masterpiece. A piece that fulfills the
promise of his earlier work. Begun by the artist in the late 1940s, it was created by using alkyd
paint, bought not from the art supply store but from a hardware store. "One," is a very large work,
269.5 by 530.8 cm in size, active in every quadrant of the picture but lacking a real center in either
the literal or figurative sense of the word.
Image of Pollock’s painting "One: Number 31, 1950,"
By emphasizing the action of the paint without direct reference to a representational image,
Pollock also stresses paint as paint, its simple existence as a material. The substance is used as it
is, not as a vehicle for an image based on illusory perspective. The attention paid to paint as a mere
substance was revolutionary at the time, being backed by the prominent critic Clement Greenberg,
who saw such a focus as greater honesty than the pretense of perspective, whose seemingly
extended depth was developed on what was actually a two-dimensional plane.
But "One," is more than a mere dismissal of a 500-year-long painting practice; it is a real triumph
of physical undertaking. And it looks like the work of no one else. The sheer independence of
Pollock's style makes it clear that he had found a way to present the movement of thought just as
much as the movement of materials. Maybe the word "thought" isn't quite enough to describe the
amalgam of attributes linked to such a work of art. It is tied deeply to expansive, nearly anarchic
feeling, which is a good way to describe the American Abstract Expressionist movement.
Yet there is marked esthetic intelligence in "One," which refuses to reside in any other category
than the one created by its own fracture. As a painting, it is oriented toward a high seriousness of
purpose, being deliberately epic in the sense that its dimensions are meant to overwhelm rather
than reassure the viewer. The work's condensed, complicated language of drips and skeins
elaborates a view of the unconscious and the roiling impulses that occur behind the forefront of
awareness.
The idea of the artist as a heroic protagonist was especially strong during the Abstract
Expressionist period, a time when emotional and physical stamina were both needed to survive
indifference and neglect. Today, we have a newly bureaucratized art world, in which one racks up
awards and residencies as a way of building a career. There is nothing wrong in doing so, especially
when we consider that the MFA program has become the mainstay of fine art education in
America.
But the professionalization of art as a vocation has had the unfortunate effect of turning art into a
career rather than a calling. In retrospect, Pollock's life hardly looks in any way like a series of
calculated decisions; rather it seems as if it is the record of one very gifted painter's attempts to put
some order in his life even though his art reflects a profoundly chaotic reality settled deep in his
mind. But the intense complexity of "One," completed in the summer and fall of 1950, not only
expresses personal anxiety in a major manner, it also relates to the angst of the time.
Just as Giacometti produced work that summed up the grim spirit of post-Second World War
society in Europe, so Pollock's art recognized the universal bias of a time when interest in
psychoanalysis was high (Pollock himself underwent Jungian analysis for his drinking and
personal problems). This brings up a couple of questions. First, is Pollock's direct exploration of the
unconscious inherently fraught, so that his art not only described a private journey but was
additionally archetypal, being addressed toward the inner lives of everyone and not just himself?
And second, just how effective is abstraction in rendering something so convoluted as the mind's
inner life?
In a way, Pollock is involved not only with art as physical activity, he is also asserting the
metaphorical value of painting in the broadest sense. When the subject of your art is the mind
itself, the work becomes a window into consciousness, and the abstraction becomes the way in
which that consciousness is universalized.
"One," matters as a painting not only because it represents a new methodology for making art; it
also reconnects us with primal feeling. This kind of expressionism is based on emotion, to the
point where it is impossible to analyze it on the basis of intellectual perception alone. At the same
time, the movement might be criticized for the theatricality of its art. Practiced without
constraints, abstract expressionism seems childlike or even decorative, lacking a structure to keep
it from excess. But when the practice is informed by an intuition that does not reject structure or
order for the sake of sentiment alone, as does in fact happen in "One," the style takes off into
substantive regions of expression.
The feelings become the glue holding the composition together, but even more than that, the
skeins of paint in "One," establish a cohesiveness that cannot be called entirely emotional. This is
the reason why the painting will last. A structure in a painting will remain a structure even if its
origins are intuitive, that is, lacking an obvious framework. "One," established the truth of its
materials; paint is what it is and is to be experienced as pigment rather than be submerged into a
representation. But that is only part of its ecstatic claim. On a larger level, the work is about
imaginative freedom, tied to the instant of its making.
By 1947, Pollock had begun the spattering of the canvas he is known for, a technique he would rely
on for approximately a decade. But the earlier mythical paintings already were pointing the way
toward a style predicated on extreme freedom, as well as a drive toward personal exploration. "The
She-Wolf," (1943) precedes the painter's command of the drip and splatter, but it is painted highly
expressively, with a crowded background of curling lines that also cross over the white outline of
the wolf. The title seems to refer to the legend of the wolf that suckled Romulus and
Remus, the founders of Rome.
Image of Pollock’s painting "The She-Wolf,"
Pollock himself refused to comment on the painting's significance, asserting in 1944 that "She-
Wolf came into existence because I had to paint it." Analysis would only demystify what Pollock
believed to be a picture about origins, perhaps applicable to his own development as a painter. He
commented: "Any attempt on my part to say something about it, to attempt explanation of the
inexplicable, could only destroy it."
One hesitates to go beyond making so generalized a statement because a particular reading, mostly
of a psychological kind, would box in the experience of the art. The violence of "The She-Wolf," of
course has something to do with Pollock's state of mind at the time; it is additionally a succinct
reprise of the violence of his esthetic. But at the same time it must be said that the picture depicts a
known mythology, one that can only be seen as metaphorical in regard to Pollock's personal life.
This relationship between painting and psyche is primarily indirect. The work is in no way
confessional. And the privacy Pollock demanded was part of the spirit of the time. Today we live in
a culture in which the particulars of an artist's life are worked over with absurd intensity. Pollock's
representations of a private self cannot be tacked on to specific events or trauma simply because
the work is abstract.
By comparison, the American poetry movement called the Confessional School, active just after
the end of Pollock's career, made the frank dissemination of their personal lives a point of pride.
Almost all the prominent writers in the movement, including Robert Lowell, John Berryman,
Sylvia Plath, and Anne Sexton, yielded to madness or suicide, I think likely because they made
their personal lives too transparent. One of the advantages of Pollock's technique is that it could be
read as a general statement of mind without being descriptive of the details of his circumstances.
Even with all this being true, it is clear that in America we continue to praise the artist who breaks
the rules. Or at least we do so when the artist is gone. Pollock broke the rules not only socially but
also methodologically with the drip. The element of performance in the creation of his art pushed
his creativity to a new level, in which the physical activity of the person became just as important
as the thoughts and emotions accompanying the project of art.
While it is not Pollock's fault that he would become an icon of critical and popular success, his
influence on following generations of painters became rigidly doctrinaire. This would result in a
stultifying imitation of true feeling, not the genuine emotion experienced in his art. Even now, in
the art school where I teach, I find myself regularly coming across expressionist styles. If we take
1950 as a general high point for Pollock's accomplishments, it becomes clear that the tenets of his
style have served as a model for a certain kind of art for more than 65 years, roughly three
generations!
As a writer, and to some extent as someone who agrees with Ezra Pound's injunction "Make it
new," I need to hope that art will move on to a succeeding insight. Because Pollock now sells for
such great sums of money, many writers, curators, and viewers agree to his apotheosis. But it is
much more complicated than that. We need to honestly contextualize him in a way that does
justice to what he has actually done, rather than praise his sometimes outrageous behavior. The
truth is that Pollock has become a god for the American art world. But there is another truth, a
greater one, that sees him for what he is: a deeply troubled major painter, who did the best that he
could to control the stormy emotions that consumed him.
No matter how he is seen, as a gifted rogue or a transcendent painter, no one can take away the
achievements of his art. Pollock is certainly among the finest painters of his generation, and he
remains profoundly important now. Only time can tell whether this importance will translate into
a permanent historical standing, free of the flourish and rhetoric of his personality.
Vocabulary exercise
1. cling to (paragraph 2)
All this time has passed, and yet New Yorkers still cling to the relevance of expressionist art,
created mostly without a recognizable subject.
2. succumb to (paragraph 3)
When trying to understand Pollock and his work, it is easy to succumb to the myth of a bad boy
artist who was tortured by his own genius. And whilst it is true there were bouts of depression,
womanizing and alcoholism
3. accorded (paragraph 5)
But stories of an artist's personal life cannot salvage achievement that is not outstanding. In the
case of Pollock, his work has been accorded critical acclaim from the start.
4. masterpiece (paragraph 11)
But it is his painting "One: Number 31, 1950," which is his masterpiece. A piece that fulfills the
promise of his earlier work.
5. dismissal (paragraph 13)
But "One," is more than a mere dismissal of a 500-year-long painting practice; it is a real
triumph of physical undertaking.
6. overwhelm (paragraph 14)
As a painting, it is oriented toward a high seriousness of purpose, being deliberately epic in the
sense that its dimensions are meant to overwhelm rather than reassure the viewer.
7. calling (paragraph 16)
But the professionalization of art as a vocation has had the unfortunate effect of turning art into a
career rather than a calling.
Write your own sentences with the vocabulary
1. cling to
2. succumb to
3. accorded
4. masterpiece
5. dismissal
6. overwhelm
7. calling
EXERCISE 32
Food imagery and manipulation
Summary
This is an article on the rise of food porn/gastroporn (where food is presented in such a way that
we desire it). It explains when the presentation of food started to become important, what
specific things make food appear more appealing to people and how this is used to manipulate
consumers. It also explains the problems which this can cause.
Your brain is your body's most blood-thirsty organ, using around 25% of total blood flow (or
energy) – despite the fact that it accounts for only 2% of body mass. Given that our brains have
evolved to find food, it should perhaps come as little surprise to discover that some of the largest
increases in cerebral blood flow occur when a hungry brain is exposed to images of desirable foods.
Adding delicious food aromas makes this effect even more pronounced. Within little more than the
blink of an eye, our brains make a judgment call about how much we like the foods we see and how
nutritious they might be. And so you might be starting to get the idea behind gastroporn.
No doubt we have all heard our stomachs rumbling when we contemplate a tasty meal. Viewing
food porn can induce salivation, not to mention the release of digestive juices as the gut prepares
for what is about to come. Simply reading about delicious food can have much the same effect. In
terms of the brain's response to images of palatable or highly desirable foods (food porn, in other
words), research shows widespread activation of a host of brain areas, including the taste and
reward areas. The magnitude of this increase in neural activity, not to mention the enhanced
connectivity between brain areas, typically depends on how hungry the viewer is, whether they are
dieting (ie, whether they are a restrained eater or not) and whether they are obese. (The latter, for
instance, tend to show a more pronounced brain response to food images even when full.)
Apicius, the first-century Roman gourmand and author, is credited with the aphorism: "The first
taste is always with the eyes." Nowadays, the visual appearance of a dish is just as important as, if
not more important than, the taste/flavour itself. We are bombarded by food images, from adverts
through to social media and TV cookery shows. Unfortunately, though, the foods that tend to look
best (or rather, that our brains are most attracted to) are generally not the healthiest. Quite the
reverse, in fact.
We may all face being led into less healthy food behaviours by the highly desirable images of foods
that increasingly surround us. In 2015, just as in the year before, food was the second most
searched-for category on the internet (after pornography). The blame, if any, doesn't reside solely
with the marketers, food companies and chefs; a growing number of us are actively seeking out
images of food "digital foraging", if you will. How long, I wonder, before food takes the top slot?
People have been preparing beautiful-looking dishes for feasts and celebrations for centuries.
However, for anything other than an extravagant feast, the likelihood is that meals in the past
would have been served without any real concern for how they looked. That they tasted good, or
even just that they provided some sustenance, was all that mattered. This was true even of famous
French chefs, as highlighted by the following quote from Sebastian Lepinoy, executive chef at
L'Atelier de Joel Robuchon, describing the state of affairs before the emergence of nouvelle
cuisine: "French presentation was virtually non-existent. If you ordered a coq au vin at a
restaurant, it would be served just as if you had made it at home. The dishes were what they were.
Presentation was very basic."
Everything changed, though, when east met west in the French kitchens of the 1960s. It was this
meeting of culinary minds that led to nouvelle cuisine, and with it, gastroporn, a term that dates
to a review in 1977 of Paul Bocuse's French Cookery. The name stuck.
These days, more and more chefs are becoming concerned (obsessed, even) by how their food
photographs. And not only for the pictures that will adorn the pages of their next cookbook. As one
restaurant consultant put it: "I'm sure some restaurants are preparing food now that is going to
look good on Instagram."
Some have been struggling with how to deal with the trend for diners sharing meals on social
media. Much publicised responses include everything from limiting diners' opportunities to
photograph the food during the meal through to banning photography inside the restaurant. It
would, however, seem as though the chefs have now, mostly, embraced the trend, acknowledging
that it is all part of "the experience". As Alain Ducasse, at London's three-Michelin-starred
Dorchester Hotel, says: "Cuisine is a feast for the eyes, and I understand that our guests wish to
share these instants of emotion through social media."
There is a sense in which the visual appeal of the meal has become an end in itself. Researchers
and food companies have begun to establish which tricks and techniques work best in terms of
increasing the eye-appeal of a dish, including, for instance, showing food, especially protein, in
motion (even if it is just implied motion) to attract the viewer's attention and convey freshness.
What do you get if you show protein (eg, oozing egg yolk) in motion? Answer: yolk-porn. I came
across an example recently in a London tube station. There were video advertising screens along
the wall as I ascended the escalator. All I could see, out of the corner of my eye, was a steaming
slice of lasagne being lifted slowly from a dish, dripping with hot melted cheese, on screen after
screen. As the marketers know only too well, such "protein in motion" shots are attention-
grabbing; our eyes (or rather our brains) find them almost irresistible. Images of food (or more
specifically, energy-dense foods) capture our visual awareness, as does anything that moves.
"Protein in motion" is therefore precisely the kind of energetic food stimulus that our brains have
evolved to detect, track and concentrate on visually.
Marks & Spencer has acquired something of a reputation for food porn with its highly stylised and
gorgeously presented advertising. Look closely and you will find plenty of protein in motion (both
implied and real). Its most famous ad, from 2005, was for a chocolate pudding with an extravagant
melting centre. A sultry voiceover came out with the now much parodied line: "This is not just
chocolate pudding, this is a Marks & Spencer chocolate pudding." Sales skyrocketed by around
3,500%. In M&S's 2014 campaign, all of the food was shown in motion. In fact, one of the most
widely commented on images was of a scotch egg being sliced in half, with the yolk oozing out.
Food in motion also looks more desirable, in part because it is perceived to be fresher. Studies by
food psychology researcher Brian Wansink and his colleagues at Cornell University show that we
rate a picture of a glass of orange juice as significantly more appealing when juice can be seen
being poured into the glass than when the image is of a glass that has already been filled. Both are
static images but one implies motion. That is enough to increase its appeal. (For those of you at
home, who may not be able to guarantee that your food moves, another strategy is simply to leave
the leaves and/or stems on fruit and vegetables, to help cue freshness.)
One of the strangest trends relating to food porn that I have come across in recent years is called
mukbang. A growing number of South Koreans are using their mobile phones and laptops to watch
other people consuming and talking about eating food online. Millions of viewers engage in this
voyeuristic habit, which first appeared back in 2011. Interestingly, the stars are not top chefs, TV
personalities or restaurateurs but rather regular (albeit generally photogenic) "online eaters". One
can think of this as yet another example of food in motion; it's just that the person interacting with
the food happens to be more visible than in many examples of dynamic food advertising in the
west, where all you see is the food moving. I also get the sense, though, that some people who eat
alone are tuning in for a dose of mukbang at mealtimes to get some virtual company.
It would be interesting to see whether those who eat while tuning in consume more than they
would were they really eating alone (ie, without any virtual dinner guests). One might also wonder
if mukbang is as distracting as regular television, which has been shown to dramatically increase
the amount consumed. If so, one might expect that the viewer's immediate intake of food will
increase and that the amount of time that passes before they get hungry again ought to be
reduced.
Food imagery is most visually appealing when the viewer's brain finds it easy to simulate the act of
eating, for example, when the food is seen from a first-person perspective. This is rated more
highly than viewing food from a third-person view (as is typically the case with mukbang).
Marketers, at least the smarter ones, know only too well that we will rate what we see in food
advertisements more highly if it's easier to mentally simulate the act of eating that which we see.
Imagine a packet of soup with a bowl of soup on the front of the packaging. Adding a spoon
approaching the bowl from the right will result in people being around 15% more willing to buy the
product than if the spoon approaches from the left. That's because most of us are right-handed,
and so we normally see ourselves holding a spoon in our right hand. Simply showing what looks
like a right-handed person's spoon approaching the soup makes it easier for our brains to imagine
eating. Now, for all those lefties out there saying, "What about me?" it may not be too long before
the food ads on your mobile device are reversed to show the left-handed perspective. The
idea is that this will help maximise the adverts' appeal (assuming, that is, that your technology can
figure out your handedness implicitly).
How worried should we be by the rise of food porn? Why shouldn't people indulge their desire to
view all those delectable gastroporn images? Surely there is no harm done? After all, food images
don't contain any calories, do they? Well, it turns out that there are a number of problems that I
think we should be concerned about:
1. Food porn increases hunger
One thing that we know for certain is that viewing images of desirable foods provokes appetite. For
example, in one study, simply watching a seven-minute restaurant review showing pancakes,
waffles, hamburgers, eggs, etc led to increased hunger ratings not only in participants who hadn't
eaten for a while but also in those who had just polished off a meal.
2. Food porn promotes unhealthy food
Many of the recipes that top chefs make on television shows are incredibly calorific or unhealthy.
Those who have systematically analysed TV chefs' recipes find that they tend to be much higher in
fat, saturated fat and sodium than is recommended by the World Health Organisation's nutritional
guidelines. This is not only a problem for those viewers who go on to make these foods. (Although
surprisingly few people actually do this: according to a 2015 survey of 2,000 foodies, fewer than
half had ever cooked even one of the dishes that they had seen prepared on food shows.) Rather,
the bigger concern here is that the foods we see being made, and the food portions we see being
served on TV, may set implicit norms for what we consider it appropriate to eat at home or in a
restaurant.
3. The more food porn you view, the higher your body mass index (BMI)
While the link is only correlational, not causal, the fact that people who watch more food TV have a
higher BMI might still cause you to raise your eyebrows. They might, of course, be watching more
television generally, not just food programmes; after all, the term "couch potato" has been around
for longer than the term "food porn". The key question, though, from the gastrophysics
perspective, is whether those who watch more food television have a higher BMI than those who
view an equivalent amount of non-food TV. That would certainly seem likely, given all the evidence
showing that food advertising biases subsequent consumption, especially in kids.
4. Food porn drains mental resources
Whenever we view images of food on the side of product packaging, in cookery books, TV shows
or social media, our brains can't help but engage in a spot of embodied mental simulation. That
is, they simulate what it would be like to eat the food. At some level, it is almost as if our brains
can't distinguish between images of food and real meals. We therefore need to expend some
mental resources, silly though it may sound, to resist all of these virtual temptations.
So what happens when we are subsequently faced with an actual food choice? Imagine yourself
watching a TV cookery show and then arriving at a train station; the smell of coffee wafting
through the air leads you by the nose into buying a cup. At the counter, you see the sugary snack
bars and fruit laid out in front of you. Should you go for a bar of chocolate, or pick a healthy
banana instead? In one laboratory study, participants who had been shown appealing food images
tended to make worse food choices afterwards than those who had been exposed to a smaller
number of food images.
All of this increased exposure to desirable food images results in involuntary embodied mental
simulations. Our brains imagine what it would be like to consume the foods we see, even if those
foods are only on the TV or our phones, and we then have to try to resist the temptation to eat. One
recent study, conducted in three snack shops in train stations, investigated whether people could
be nudged towards healthier food choices simply by moving the fruit closer to the till than the
snacks, the reverse normally being the case. The "nudge" worked in the sense that people were
indeed more likely to buy fruit or a muesli bar. Unfortunately, they continued to purchase crisps,
cookies and chocolate as well. In other words, an intervention that had been designed to cut
consumption resulted in people consuming more calories.
Our brains have evolved to find sources of nutrition in food-scarce environments. Unfortunately,
we are surrounded by more images of energy-dense, high-fat foods than ever before. While there is
an increasing desire to view images of food, not to mention take pictures of it, and more is now
known about what aspects of these images attract us, we should, I think, be concerned about just
what consequences such exposure is having on us all. I am increasingly concerned that all this
"digital grazing" of images of unhealthy energy-dense foods may be encouraging us to eat more
than we realise and nudging us all towards unhealthier food behaviours.
Describing desirable images of food as gastroporn, or food porn, is undoubtedly pejorative.
However, I am convinced that the link with actual pornography is more appropriate than we think.
So perhaps we really should be thinking about moving those food magazines bursting with images
of highly calorific and unhealthy food up on to the newsagents' top shelf? Or preventing cookery
shows from being aired on TV before the watershed? While such suggestions are, of course, a little
tongue in cheek, there is a very serious issue here. The explosion of mobile technologies means
that we are all being exposed to more images of food than ever before, presented with foods that
have been designed to look good, or photograph well, more than for their taste or balanced
nutritional content.
Max Ehrlich's 1971 book The Edict is set in a future where the strictly calorie-rationed populace
can go to the movies to see a "Foodie": "What they saw was almost unbearable, both in its pain and
ecstasy. Mouths dropped half open, saliva drooling at their corners. People licked their lips, staring
at the screen lasciviously, their eyes glazed, as though undergoing some kind of deep sexual
experience. The man in the film had finished his carving and now he held a thick slice of beef
pinioned on his fork … As his mouth engulfed it, the mouths of the entire audience opened and
closed in symbiotic unison with the man on the screen … What the audience saw now was not
simple greed. It was pornographic. Close-ups of mouths were shown, teeth grinding, juice
dribbling down chins."
I don't want to leave on a pessimistic note. In the coming years, gastrophysicists will continue to
examine the crucial part that the foods we are visually exposed to play in our eating behaviours. There
seems little chance that the impact of sight will decline any time soon, especially given how much
time we spend gazing at screens. My hope is that by understanding more about the importance of
sight to our perception of, and behaviour around, food and drink we will be in a better position to
optimise our food experiences in the future.
Vocabulary exercise
1. bombarded (paragraph 3)
We are bombarded by food images, from adverts through to social media and TV cookery shows.
2. reside (paragraph 4)
The blame, if any, doesn't reside solely with the marketers, food companies and chefs; a growing
number of us are actively seeking out images of food
3. adorn (paragraph 7)
These days, more and more chefs are becoming concerned (obsessed, even) by how their food
photographs. And not only for the pictures that will adorn the pages of their next cookbook.
4. embraced (paragraph 8)
It would, however, seem as though the chefs have now, mostly, embraced the trend,
acknowledging that it is all part of "the experience"
5. indulge (paragraph 17)
Why shouldn't people indulge their desire to view all those delectable gastroporn images? Surely
there is no harm done? After all, food images don't contain any calories, do they?
6. implicit (paragraph 19)
Rather, the bigger concern here is that the foods we see being made, and the food portions we see
being served on TV, may set implicit norms for what we consider it appropriate to eat at home or
in a restaurant.
7. nudged towards (paragraph 23)
One recent study, conducted in three snack shops in train stations, investigated whether people
could be nudged towards healthier food choices simply by moving the fruit closer to the till than
the snacks
Write your own sentences with the vocabulary
1. bombarded
2. reside
3. adorn
4. embraced
5. indulge
6. implicit
7. nudged towards
EXERCISE 33
The urgency of acting now to stop climate
change
Summary
This is a long article on how climate change is currently affecting the planet. Focusing
predominantly on the oceans, it explains and provides many examples of how the rise in
temperatures is negatively affecting them and the species that inhabit them. It ends by briefly
talking about some things that people are doing to try to stop this.
Historians may look to 2015 as the year when shit really started hitting the fan. Some snapshots:
In just the past few months, record-setting heat waves in Pakistan and India each killed more than
1,000 people. In Washington state's Olympic National Park, the rainforest caught fire for the first
time in living memory. In California, suffering from its worst drought in a millennium, a 50-acre
brush fire swelled seventyfold in a matter of hours, jumping across the I-15 freeway during rush-
hour traffic. Then, a few days later, the region was pounded by intense, virtually unheard-of
summer rains. Puerto Rico is under its strictest water rationing in history as a monster El Niño
forms in the tropical Pacific Ocean, shifting weather patterns worldwide.
On July 20th, James Hansen, the former NASA climatologist who brought climate change to the
public's attention in the summer of 1988, issued a bombshell: He and a team of climate scientists
had identified a newly important feedback mechanism off the coast of Antarctica that suggests
mean sea levels could rise 10 times faster than previously predicted: 10 feet by 2065. The authors
included this chilling warning: If emissions aren't cut, "We conclude that multi-meter sea-level rise
would become practically unavoidable. Social disruption and economic consequences of such large
sea-level rise could be devastating. It is not difficult to imagine that conflicts arising from forced
migrations and economic collapse might make the planet ungovernable, threatening the fabric of
civilization."
Eric Rignot, a climate scientist at NASA and the University of California-Irvine and a co-author on
Hansen's study, said their new research doesn't necessarily change the worst-case scenario on sea-
level rise, it just makes it much more pressing to think about and discuss, especially among world
leaders.
Hansen's new study also shows how complicated and unpredictable climate change can be. Even
as global ocean temperatures rise to their highest levels in recorded history, some parts of the
ocean, near where ice is melting exceptionally fast, are actually cooling, slowing ocean circulation
currents and sending weather patterns into a frenzy. Sure enough, a persistently cold patch of
ocean is starting to show up just south of Greenland, exactly where previous experimental
predictions of a sudden surge of freshwater from melting ice expected it to be. Michael Mann,
another prominent climate scientist, recently said of the unexpectedly sudden Atlantic slowdown,
"This is yet another example of where observations suggest that climate model predictions may be
too conservative when it comes to the pace at which certain aspects of climate change are
proceeding."
Since storm systems and jet streams in the United States and Europe partially draw their energy
from the difference in ocean temperatures, the implication of one patch of ocean cooling while the
rest of the ocean warms is profound. Storms will get stronger, and sea-level rise will accelerate.
Scientists like Hansen only expect extreme weather to get worse in the years to come, though
Mann said it was still "unclear" whether recent severe winters on the East Coast are connected to
the phenomenon.
And yet, these aren't even the most disturbing changes happening to the Earth's biosphere that
climate scientists are discovering this year. For that, you have to look not at the rising sea levels
but to what is actually happening within the oceans themselves.
Water temperatures this year in the North Pacific have never been this high for this long over such
a large area, and this is already having a profound effect on marine life.
Eighty-year-old Roger Thomas runs whale-watching trips out of San Francisco. On an excursion
earlier this year, Thomas spotted 25 humpbacks and three blue whales. During a survey on July
4th, federal officials spotted 115 whales in a single hour near the Farallon Islands, enough to issue
a boating warning. Humpbacks are occasionally seen offshore in California, but rarely so close to
the coast or in such numbers. Why are they coming so close to shore? Exceptionally warm water
has concentrated the krill and anchovies they feed on into a narrow band of relatively cool coastal
water. The whales are having a heyday. "It's unbelievable," Thomas told a local paper. "Whales are
all over the place."
Last fall, in northern Alaska, in the same part of the Arctic where Shell is planning to drill for oil,
federal scientists discovered 35,000 walruses congregating on a single beach. It was the largest-
ever documented "haul out" of walruses, and a sign that sea ice, their favored habitat, is becoming
harder and harder to find.
Marine life is moving north, adapting in real time to the warming ocean. Great white sharks have
been sighted breeding near Monterey Bay, California, the farthest north that's ever been known to
occur. A blue marlin was caught last summer near Catalina Island 1,000 miles north of its typical
range. Across California, there have been sightings of non-native animals moving north, such as
Mexican red crabs.
No species may be as uniquely endangered as the one most associated with the Pacific Northwest,
the salmon. Every two weeks, Bill Peterson, an oceanographer and senior scientist at the National
Oceanic and Atmospheric Administration's Northwest Fisheries Science Center in Oregon, takes to
the sea to collect data he uses to forecast the return of salmon. What he's been seeing this year is
deeply troubling.
Salmon are crucial to their coastal ecosystem like perhaps few other species on the planet. A
significant portion of the nitrogen in West Coast forests has been traced back to salmon, which can
travel hundreds of miles upstream to lay their eggs. The largest trees on Earth simply wouldn't
exist without salmon.
But their situation is precarious. This year, officials in California are bringing salmon downstream
in convoys of trucks, because river levels are too low and the temperatures too warm for them to
have a reasonable chance of surviving. One species, the winter-run Chinook salmon, is at a
particularly high risk of decline in the next few years, should the warm water persist offshore.
"You talk to fishermen, and they all say: 'We've never seen anything like this before,' " says
Peterson. "So when you have no experience with something like this, it gets like, 'What the hell's
going on?' "
Atmospheric scientists increasingly believe that the exceptionally warm waters over the past
months are the early indications of a phase shift in the Pacific Decadal Oscillation (PDO), a cyclical
warming of the North Pacific that happens a few times each century. Positive phases of the PDO
have been known to last for 15 to 20 years, during which global warming can increase at double the
rate seen during negative phases of the PDO. It also makes big El Niños, like this year's, more likely.
The nature of PDO phase shifts is unpredictable; climate scientists simply haven't yet figured out
precisely what's behind them and why they happen when they do. It's not a permanent change
(the ocean's temperature will likely drop from these record highs, at least temporarily, some time
over the next few years), but the impact on marine species will be lasting, and scientists have
pointed to the PDO as a global-warming preview.
"The climate [change] models predict this gentle, slow increase in temperature," says Peterson,
"but the main problem we've had for the last few years is the variability is so high. As scientists, we
can't keep up with it, and neither can the animals." Peterson likens it to a boxer getting pummeled
round after round: "At some point, you knock them down, and the fight is over."
Attendant with this weird wildlife behavior is a stunning drop in the number of plankton, the
basis of the ocean's food chain. In July, another major study concluded that acidifying oceans are
likely to have a "quite traumatic" impact on plankton diversity, with some species dying out while
others flourish. As the oceans absorb carbon dioxide from the atmosphere, it's converted into
carbonic acid and the pH of seawater declines. According to lead author Stephanie Dutkiewicz of
MIT, that trend means "the whole food chain is going to be different."
The Hansen study may have gotten more attention, but the Dutkiewicz study, and others like it,
could have even more dire implications for our future. The rapid changes Dutkiewicz and her
colleagues are observing have shocked some of their fellow scientists into thinking that yes,
actually, we're heading toward the worst-case scenario. Unlike a prediction of massive sea-level
rise just decades away, the warming and acidifying oceans represent a problem that seems to have
kick-started a mass extinction on the same time scale.
Jacquelyn Gill is a paleoecologist at the University of Maine. She knows a lot about extinction, and
her work is more relevant than ever. Essentially, she's trying to save the species that are alive right
now by learning more about what killed off the ones that aren't. The ancient data she studies
shows "really compelling evidence that there can be events of abrupt climate change that can
happen well within human life spans. We're talking less than a decade."
For the past year or two, a persistent change in winds over the North Pacific has given rise to what
meteorologists and oceanographers are calling "the blob": a highly anomalous patch of warm
water between Hawaii, Alaska and Baja California that's thrown the marine ecosystem into a
tailspin. Amid warmer temperatures, plankton numbers have plummeted, and the myriad species
that depend on them have migrated or seen their own numbers dwindle.
Significant northward surges of warm water have happened before, even frequently. El Niño, for
example, does this on a predictable basis. But what's happening this year appears to be something
new. Some climate scientists think that the wind shift is linked to the rapid decline in Arctic sea ice
over the past few years, which separate research has shown makes weather patterns more likely to
get stuck.
A similar shift in the behavior of the jet stream has also contributed to the California drought and
severe polar vortex winters in the Northeast over the past two years. An amplified jet-stream
pattern has produced an unusual doldrum off the West Coast that's persisted for most of the past
18 months. Daniel Swain, a Stanford University meteorologist, has called it the "Ridiculously
Resilient Ridge" weather patterns just aren't supposed to last this long.
What's increasingly uncontroversial among scientists is that in many ecosystems, the impacts of
the current off-the-charts temperatures in the North Pacific will linger for years, or longer. The
largest ocean on Earth, the Pacific is exhibiting cyclical variability to greater extremes than other
ocean basins. While the North Pacific is currently the most dramatic area of change in the world's
oceans, it's not alone: Globally, 2014 was a record-setting year for ocean temperatures, and 2015 is
on pace to beat it soundly, boosted by the El Niño in the Pacific. Six percent of the world's reefs
could disappear before the end of the decade, perhaps permanently, thanks to warming waters.
Since warmer oceans expand in volume, it's also leading to a surge in sea-level rise. One recent
study showed a slowdown in Atlantic Ocean currents, perhaps linked to glacial melt from
Greenland, that caused a four-inch rise in sea levels along the Northeast coast in just two years,
from 2009 to 2010. To be sure, it seems like this sudden and unpredicted surge was only
temporary, but scientists who studied the surge estimated it to be a 1-in-850-year event, and it's
been blamed for accelerated beach erosion "almost as significant as some hurricane events."
Possibly worse than rising ocean temperatures is the acidification of the waters. Acidification has a
direct effect on mollusks and other marine animals with hard outer bodies: A striking study last
year showed that, along the West Coast, the shells of tiny snails are already dissolving, with as-yet-
unknown consequences for the ecosystem. One of the study's authors, Nina Bednaršek, told
Science magazine that the snails' shells, pitted by the acidifying ocean, resembled "cauliflower" or
"sandpaper." A similarly striking study by more than a dozen of the world's top ocean scientists
this July said that the current pace of increasing carbon emissions would force an "effectively
irreversible" change on ocean ecosystems during this century. In as little as a decade, the study
suggested, chemical changes will rise significantly above background levels in nearly half of the
world's oceans.
"I used to think it was kind of hard to make things in the ocean go extinct," James Barry of the
Monterey Bay Aquarium Research Institute in California told the Seattle Times in 2013. "But this
change we're seeing is happening so fast it's almost instantaneous."
Thanks to the pressure we're putting on the planet's ecosystem (warming, acidification and good
old-fashioned pollution), the oceans are set up for several decades of rapid change. Here's what
could happen next.
The combination of excessive nutrients from agricultural runoff, abnormal wind patterns and the
warming oceans is already creating seasonal dead zones in coastal regions when algae blooms suck
up most of the available oxygen. The appearance of low-oxygen regions has doubled in frequency
every 10 years since 1960 and should continue to grow over the coming decades at an even greater
rate.
So far, dead zones have remained mostly close to the coasts, but in the 21st century, deep-ocean
dead zones could become common. These low-oxygen regions could gradually expand in size,
potentially thousands of miles across, which would force fish, whales, pretty much everything
upward. If this were to occur, large sections of the temperate deep oceans would suffer should the
oxygen-free layer grow so pronounced that it stratifies, pushing surface ocean warming into
overdrive and hindering upwelling of cooler, nutrient-rich deeper water.
Enhanced evaporation from the warmer oceans will create heavier downpours, perhaps
destabilizing the root systems of forests, and accelerated runoff will pour more excess nutrients
into coastal areas, further enhancing dead zones. In the past year, downpours have broken records
in Long Island, Phoenix, Detroit, Baltimore, Houston and Pensacola, Florida.
Evidence for the above scenario comes in large part from our best understanding of what
happened 250 million years ago, during the "Great Dying," when more than 90 percent of all
oceanic species perished after a pulse of carbon dioxide and methane from land-based sources
began a period of profound climate change. The conditions that triggered the "Great Dying" took
hundreds of thousands of years to develop. But humans have been emitting carbon dioxide at a
much quicker rate, so the current mass extinction only took 100 years or so to kick-start.
With all these stressors working against it, a hypoxic feedback loop could wind up destroying some
of the oceans' most species-rich ecosystems within our lifetime. A recent study by Sarah Moffitt of
the University of California-Davis said it could take the ocean thousands of years to recover.
"Looking forward for my kid, people in the future are not going to have the same ocean that I have
today," Moffitt said.
As you might expect, having tickets to the front row of a global environmental catastrophe is taking
an increasingly emotional toll on scientists, and in some cases pushing them toward advocacy. Of
the two dozen or so scientists I interviewed for this piece, virtually all drifted into apocalyptic
language at some point.
For Simone Alin, an oceanographer focusing on ocean acidification at NOAA's Pacific Marine
Environmental Laboratory in Seattle, the changes she's seeing hit close to home. The Puget Sound
is a natural laboratory for the coming decades of rapid change because its waters are naturally
more acidified than most of the world's marine ecosystems.
The local oyster industry here is already seeing serious impacts from acidifying waters and is going
to great lengths to avoid a total collapse. Alin calls oysters, which are non-native, the canary in the
coal mine for the Puget Sound: "A canary is also not native to a coal mine, but that doesn't mean
it's not a good indicator of change."
Though she works on fundamental oceanic changes every day, the Dutkiewicz study on the
impending large-scale changes to plankton caught her off-guard: "This was alarming to me
because if the basis of the food web changes, then . . . everything could change, right?"
Alin's frank discussion of the looming oceanic apocalypse is perhaps a product of studying
unfathomable change every day. But four years ago, the birth of her twins "heightened the whole
issue," she says. "I was worried enough about these problems before having kids that I wondered
whether it was a good idea. Now, it just makes me feel crushed."
James Hansen, the dean of climate scientists, retired from NASA in 2013 to become a climate
activist. But for all the gloom of the report he just put his name to, Hansen is actually somewhat
hopeful. That's because he knows that climate change has a straightforward solution: End fossil-
fuel use as quickly as possible. If the leaders of the United States and China agreed tomorrow to a sufficiently strong, coordinated carbon tax that was also applied to imports, the rest of the world
would have no choice but to sign up. This idea has already been pitched to Congress several times,
with tepid bipartisan support. Even though a carbon tax is probably a long shot, for Hansen, even
the slim possibility that bold action like this might happen is enough for him to devote the rest of
his life to working to achieve it.
One group Hansen is helping is Our Children's Trust, a legal advocacy organization that's filed a
number of novel challenges on behalf of minors under the idea that climate change is a violation of
intergenerational equity - children, the group argues, are lawfully entitled to inherit a healthy
planet.
A separate challenge to U.S. law is being brought by a former EPA scientist arguing that carbon
dioxide isn't just a pollutant (which, under the Clean Air Act, can dissipate on its own), it's also a
toxic substance. In general, these substances have exceptionally long life spans in the environment,
cause an unreasonable risk, and therefore require remediation. In this case, remediation may
involve planting vast numbers of trees or restoring wetlands to bury excess carbon underground.
Even if these novel challenges succeed, it will take years before a bend in the curve is noticeable.
But maybe that's enough. When all feels lost, saving a few species will feel like a triumph.
Vocabulary exercise
1. traced back to (paragraph 12)
A significant portion of the nitrogen in West Coast forests has been traced back to salmon, which
can travel hundreds of miles upstream to lay their eggs.
2. attendant with (paragraph 17)
Attendant with this weird wildlife behavior is a stunning drop in the number of plankton - the basis of the ocean's food chain.
3. flourish (paragraph 17)
another major study concluded that acidifying oceans are likely to have a "quite traumatic" impact
on plankton diversity, with some species dying out while others flourish.
4. has given rise to (paragraph 20)
For the past year or two, a persistent change in winds over the North Pacific has given rise to
what meteorologists and oceanographers are calling "the blob"
5. striking (paragraph 25)
A similarly striking study by more than a dozen of the world's top ocean scientists this July said
that the current pace of increasing carbon emissions would force an "effectively irreversible"
change on ocean ecosystems during this century.
6. triggered (paragraph 31)
The conditions that triggered the "Great Dying" took hundreds of thousands of years to develop.
But humans have been emitting carbon dioxide at a much quicker rate
7. wind up (paragraph 32)
With all these stressors working against it, a hypoxic feedback loop could wind up destroying
some of the oceans' most species-rich ecosystems within our lifetime.
Write your own sentences with the vocabulary
1. traced back to
2. attendant with
3. flourish
4. has given rise to
5. striking
6. triggered
7. wind up
EXERCISE 34
Trying to reverse the declining demand for
humanities majors in America
Summary
This is an article which talks about the ways colleges (as universities are often referred to in the US) are trying to encourage students to choose humanities subjects (English, history, philosophy etc...) as the main part of their degrees. It explains why there has been a fall in the number of students who take these subjects and why it is difficult for universities to adjust to this. It then explains what they are doing to encourage students to take humanities courses.
Even as college students on the whole began to shun humanities majors over the past decade in
favor of vocational majors in business and health, there was one group of holdouts:
undergraduates at elite colleges and universities. That's not the case anymore, and as a result,
many colleges have become cheerleaders for their own humanities programs, launching
promotional campaigns to make them more appealing to students.
As Benjamin Schmidt wrote recently, humanities majors - which traditionally made up one-third of all degrees awarded at top liberal-arts colleges as recently as 2011 - have fallen to well under a
quarter. Meanwhile, at elite research universities the share of humanities degrees has dropped
from 17 percent a decade ago to just 11 percent today.
"This wasn't a gradual decline; it was more like a tidal wave," says Brian C. Rosenberg, the
president of Macalester College. The Minnesota campus, which is well known for its international-
studies program, has "never been a science-first liberal-arts college," Rosenberg said. But now 41
percent of its graduates complete a major in a stem (science, technology, engineering, and
mathematics) field. That's up from 27 percent only a decade ago.
The reasons for this national shift are many, but most academics attribute it mostly to the
lingering effects of the Great Recession that occurred between 2008 and 2010. One of the earliest
memories for the generation entering college right now is of Americans losing their jobs and
sometimes their homes. Financial security still weighs heavily on the minds of these students.
Indeed, a long-running annual survey taken of new college freshmen has found in the past decade
that the No. 1 reason students say they go to college is to get a better job; for the 20 years before
the recession hit in 2008, the top reason was to learn about things that interested them.
Unlike automakers, which can swiftly switch production lines when consumers start buying SUVs
instead of sedans, colleges can't adjust their faculty ranks as quickly in response to public demand.
Often, schools wait for professors to retire to reassign those openings to disciplines with the
greatest need. Even then, small schools might only recruit a handful of new faculty staff every year.
When they hire, most colleges also need to keep a balance of professors across departments to
teach introductory classes that are part of a core curriculum. Macalester, for instance, hired 11 full-
time faculty members this year - four of them in computer science and statistics. "We have vacant
positions in history and English, and we decided not to fill them," Rosenberg says.
With that pace of hiring, it's nearly impossible for many colleges to keep up with increasing
enrollments in popular majors while maintaining small classes. What's more, faculty members
hired for tenure-track positions who eventually earn tenure are essentially promised lifetime
employment at the college. "When you put labor in position for 30 years, your ability to respond to
future trends becomes really challenging," says Raynard Kington, the president of Grinnell College
in Iowa. Grinnell expects 70 students to graduate with computer-science degrees this spring out of
a class of around 400; four years ago, it graduated just 15 computer-science majors.
To avoid further slippage in humanities majors, elite colleges and universities have resorted to an
all-out campaign to convince students that such degrees aren't just tickets to jobs as bartenders
and Starbucks baristas. Colleges are starting early with that push. Stanford University writes
letters and sends brochures to top-notch high-school students with an interest in the humanities to
encourage them to apply, says Debra Satz, the dean of Stanford's School of Humanities and
Sciences. Prospective students can also take humanities classes at Stanford while still in high
school.
What's puzzling to the college officials I spoke to is that they say students' interest in humanities
majors remains high during the college-search process, according to what students indicate on
their applications. Then something happens between when students apply and when they actually
declare a major, usually in their sophomore year. Perhaps students' intentions on their
applications weren't serious, but if they were, Satz says it's critical that humanities courses in the
freshman year capture their attention. At Stanford, she said introductory courses in the humanities
are focused on "big ideas," such as justice, ethics, and the environment, to appeal to students
trying to choose their major.
"We have to make the offerings really good, really enriching," Satz says. "Part of our challenge is
when students see so many of their peers going into computer science."
To help guide the course selection of incoming students, Grinnell sent a booklet to all freshmen
this past summer that outlined the importance of a broad liberal-arts education. The college also
added a session on the topic to orientation in advance of students meeting their academic advisers.
Both initiatives, Kington said, were intended to encourage students to select courses across a range
of academic disciplines, given that Grinnell lacks a traditional core curriculum with mandated
requirements.
Macalester's tactic has been to try to inject some humanities into stem classes and some practical
career training into the humanities. Last year, Rosenberg, the school's president, brought the
faculty together at a retreat to discuss the shifting balance of majors. One outcome was that faculty
members were encouraged to pair together courses across academic disciplines so that, for
example, a new class in social media might be a blend of computer science and philosophy.
Professors in the humanities were also encouraged to give their students more career guidance
than in the past, when many humanities students simply went to graduate school or law school
after college.
"The typical English major is designed to get students to go to graduate school," Rosenberg says.
"We need to rethink the curriculum so that it's more focused on what employers will immediately
find attractive."
Rosenberg was present when several presidents of elite colleges gathered last fall for a meeting in
New York City. At our table during lunch, there was a debate about whether the changing
distribution of majors was really a crisis. After all, at least at liberal-arts colleges, the humanities
remain a central part of the curriculum, including for stem majors. Indeed, Satz of Stanford says
she's less concerned about the 14 percent drop in humanities majors at the university over the past
decade, and more focused on the 20 percent increase in enrollment in humanities courses.
"There's only so much we can do to stem the tide of students choosing to do more vocational
majors," she says. "What I care about is that every student in engineering can think critically, can
read carefully, and they can listen empathetically. That happens by taking courses in the
humanities."
Rosenberg, an English professor and Charles Dickens scholar by training, agrees. He says he
doesn't blame students for flocking to computer science and applied mathematics. Mathematical
literacy and the ability to manipulate large data sets are becoming more critical in every job,
including those the humanities traditionally trained students for, from journalists to sociologists. "We're not
giving students enough credit," Rosenberg says. "They're picking something that's really
interesting to them."
Unless colleges in the United States want to follow the European model, where prospective
students apply to specific degree programs instead of a given university, the choices of American
students will likely always shift with the winds of employment. Some studies suggest that many of
the tasks done by humans in stem fields will be automated in the future; robots may well end up
writing most programming and intelligent algorithms. So if elite colleges just wait long enough,
perhaps the humanities will make a comeback as humans look for the kind of knowledge that helps
them complement rather than compete with technology.
Vocabulary exercise
1. shun (paragraph 1)
Even as college students on the whole began to shun humanities majors over the past decade in
favor of vocational majors in business and health
2. lingering (paragraph 4)
The reasons for this national shift are many, but most academics attribute it mostly to the
lingering effects of the Great Recession that occurred between 2008 and 2010.
3. swiftly (paragraph 5)
Unlike automakers, which can swiftly switch production lines when consumers start buying SUVs
instead of sedans, colleges can't adjust their faculty ranks as quickly in response to public demand.
4. resorted to (paragraph 7)
To avoid further slippage in humanities majors, elite colleges and universities have resorted to
an all-out campaign to convince students that such degrees aren't just tickets to jobs as bartenders
5. top-notch (paragraph 7)
Stanford University writes letters and sends brochures to top-notch high-school students with an
interest in the humanities to encourage them to apply
6. stem the tide (paragraph 14)
There's only so much we can do to stem the tide of students choosing to do more vocational
majors,
7. flocking to (paragraph 15)
He says he doesn't blame students for flocking to computer science and applied mathematics.
Mathematical literacy and the ability to manipulate large data sets are becoming more critical in
every job
Write your own sentences with the vocabulary
1. shun
2. lingering
3. swiftly
4. resorted to
5. top-notch
6. stem the tide
7. flocking to
EXERCISE 35
How the tulip caused the world’s first
economic crash
Summary
Although it speaks at length in parts about paintings, this article talks about what happened in the world's first economic crash. It details the history of how the speculative buying (buying something not to consume it but to sell it on for profit) in Holland of tulip bulbs (the parts from which the flowers grow) led to an economic boom which ended in a catastrophic crash.
Not long after the turn of the 17th Century, the Flemish painter Jan Brueghel the Elder began a
small but exquisite still life depicting a bunch of cut flowers in a glass vase. Painted in oils on
copper, which served to enhance the brightness and intensity of the hues, the picture showcased a
remarkably lush floral arrangement. In it, narcissi, chrysanthemums and various other flowers
emerge from an improbably small vessel, creating an extravagant spray of colour. Towards the top
of the composition, two rounded blooms, each seemingly as plump and soft as a piece of overripe
fruit, catch the eye. One is pale pink, the other a dramatic, streaky combination of yellow and red.
Both are tulips.
Brueghel's still life, on loan from a private collection in Hong Kong, is the first painting on view in
Dutch Flowers, a new, one-room display at the National Gallery in London. This small, free
exhibition charts the course over two centuries of the genre of Dutch flower painting, which
Brueghel originated. And tulips figure prominently in many of the 22 ravishing paintings in the
show.
"So what?", you might ask. After all, Dutch artists commonly depicted many other types of flowers,
including irises and roses. Yet within the Dutch Republic of the 17th Century, tulips, in particular,
were notorious. For this was the age of the so-called 'tulip mania', when speculators traded the
flower's bulbs for extraordinary sums of money, until, without warning, the market for them
spectacularly collapsed. Ever since, the cautionary tale of tulip mania has been held up as the first
example of an economic bubble.
Roots of the problem
When Brueghel was at work on his still life, between 1608 and 1610, this bubble was still decades
away from bursting. Brueghel and his specialist contemporaries, such as Ambrosius Bosschaert the
Elder, painted flowers in order to cater for the new and fashionable interest in horticulture that
was preoccupying gentlemen botanists and wealthy connoisseurs. One of their leaders was the
pioneering botanist Carolus Clusius, who established an important botanical garden at the
University of Leiden during the 1590s.
Clusius also had a private garden at Leiden, and it was here that he planted his own collection of
tulip bulbs. At that time, tulips, which originally hailed from the Pamir and Tien Shan mountain
ranges in central Asia, and had already been cultivated by besotted gardeners in the Ottoman
Empire for decades, were rare and exotic newcomers to Western Europe. They were hard to get
hold of, and quickly became desired by other scholars besides Clusius.
Clusius devoted a large proportion of his final years to studying tulips. He was especially interested
in understanding how and why, from one year to the next, a particular bulb could suddenly 'break'.
This meant that, inexplicably, it would go from producing blooms of a single colour to flowers
boasting beautiful feathery or flame-like patterns involving more than one hue.
Much later, during the 19th Century, it was discovered that this striated effect was actually the
result of a virus. But, in the 17th Century, this was still not understood, and so, strangely enough,
diseased tulips, emblazoned with distinctive patterns, became more prized than healthy ones in
the Dutch Republic. Dutch botanists competed to breed ever more beautiful hybrid varieties,
known as 'cultivars'.
In the early 17th Century, these cultivars began to be exchanged among a growing network of
gentlemen scholars, who swapped cuttings, seeds and bulbs both within the Netherlands and
internationally. "As that network grew," explains Betsy Wieseman, the curator of Dutch Flowers at
the National Gallery, "it became less of a friendship network, and the scholars started getting
requests from people they didn't know. So they started trading for money. And as that network
grew and grew, it became increasingly fragile."
Strange bunch
The expanding interest in tulips coincided with an especially prosperous period in the history of
the United Provinces, which, by the 17th Century, dominated world trade and had become the
richest country in Europe. As a result, not only aristocratic citizens but also wealthy merchants and
even middle-class artisans and tradesmen suddenly found that they had spare cash to spend on
luxuries such as expensive flowers.
Already by 1623, the sizeable sum of 12,000 guilders was offered to tempt one tulip connoisseur
into parting with only 10 bulbs of the beautiful, and extremely rare, Semper Augustus - the most
coveted tulip variety. It was not enough to secure a deal.
When word got out, during the 1630s, that tulip bulbs were being sold for ever-increasing prices,
more and more speculators piled into the market. The intricacies of this market, as well as its
frailties, are brilliantly outlined by the historian Mike Dash in Tulipomania: The Story of the
World's Most Coveted Flower and the Extraordinary Passions It Aroused (1999).
One of the curiosities of the 17th Century tulip market was that people did not trade the flowers
themselves but rather the bulbs of scarce and sought-after varieties. The result, as Dash points out,
was "what would today be called a futures market". Tulips even began to be used as a form of
money in their own right: in 1633, actual properties were sold for handfuls of bulbs.
As people heard stories of acquaintances making unheard-of profits simply by buying and selling
tulip bulbs, they decided to get in on the act and prices skyrocketed. In 1633, a single bulb of
Semper Augustus was already worth an astonishing 5,500 guilders. By the first month of 1637, this
had almost doubled, to 10,000 guilders. Dash puts this sum in context: "It was enough to feed,
clothe and house a whole Dutch family for half a lifetime, or sufficient to purchase one of the
grandest homes on the most fashionable canal in Amsterdam for cash, complete with a coach
house and an 80-ft (25-m) garden - and this at a time when homes in that city were as expensive
as property anywhere in the world."
Cut flowers
Things came to a head during the winter of 1636-37, when tulip mania reached its peak. By then,
thousands of people within the United Provinces, including cobblers, carpenters, bricklayers and
woodcutters, were indulging in frenzied trading, which often took place in smoky tavern
backrooms. (Drink was a significant factor in the generally intoxicated mood.) Some bulbs even
changed hands up to 10 times during the course of a single day.
And then, overnight, the tavern trade disappeared. In early February 1637, the market for tulips
collapsed. This was because most speculators could no longer afford to purchase even the cheapest
bulbs. Demand disappeared, and flowers tumbled to a tenth of their former values. The result was
the prospect of financial catastrophe for many. Disputes over debts rumbled on for years.
The extraordinary thing is that the collapse of the market for tulips didn't diminish the Dutch
appetite for flowers - in art, at least. Dutch flower painting persisted for the best part of two
centuries. It is possible to spot tulips in, for instance, Jan van Huysum's Flowers in a Terracotta
Vase of 1736-37. Yet, ironically, very few flower paintings of any sort survive from the 1630s, when
the Dutch Republic was in thrall to tulip mania: "There really is this break in production of flower
paintings in the 1630s and '40s," says Wieseman, "and I can't quite explain it." Perhaps, for a few
years at least, the excesses of tulip mania, and the traumatic memories of it that followed, were so
sickening for Dutch art collectors that they couldn't stomach the idea of looking at a picture of a
flower hanging on their wall.
Vocabulary exercise
1. charts (paragraph 2)
This small, free exhibition charts the course over two centuries of the genre of Dutch flower
painting, which Brueghel originated.
2. hailed from (paragraph 5)
At that time, tulips, which originally hailed from the Pamir and Tien Shan mountain ranges in
central Asia, and had already been cultivated by besotted gardeners in the Ottoman Empire for
decades,
3. coveted (paragraph 10)
Already by 1623, the sizeable sum of 12,000 guilders was offered to tempt one tulip connoisseur
into parting with only 10 bulbs of the beautiful, and extremely rare, Semper Augustus - the most
coveted tulip variety.
4. came to a head (paragraph 14)
Things came to a head during the winter of 1636-37, when tulip mania reached its peak.
5. rumbled on (paragraph 15)
Demand disappeared, and flowers tumbled to a tenth of their former values. The result was the
prospect of financial catastrophe for many. Disputes over debts rumbled on for years.
6. in thrall to (paragraph 16)
Yet, ironically, very few flower paintings of any sort survive from the 1630s, when the Dutch
Republic was in thrall to tulip mania:
7. couldn't stomach the idea (paragraph 16)
Perhaps, for a few years at least, the excesses of tulip mania, and the traumatic memories of it that
followed, were so sickening for Dutch art collectors that they couldn't stomach the idea of
looking at a picture of a flower hanging on their wall.
Write your own sentences with the vocabulary
1. charts
2. hailed from
3. coveted
4. came to a head
5. rumbled on
6. in thrall to
7. couldn't stomach the idea
EXERCISE 36
A review of the film "Three Billboards Outside
Ebbing, Missouri"
Summary
This is a review of the movie Three Billboards Outside Ebbing, Missouri. In the review the reviewer
talks about various aspects of the film and gives their opinion on those and on the film in general.
Life and death, heaven and hell, damnation and redemption collide in this blisteringly foul-
mouthed, yet surprisingly tender, tragicomedy from British-Irish writer-director Martin
McDonagh, lacing a western-tinged tale of outlaw justice with Jacobean themes of rape, murder and revenge.
In McDonagh's second American-set feature, we find a grieving mother naming and shaming the
lawmen who have failed to catch her daughter's killer. Unlike his previous work, the subject is no
laughing matter. But as with his 2008 debut feature, In Bruges, McDonagh's Chaucerian ear for
obscenity provokes giggles, guffaws and gasps in the most inappropriate circumstances. More
importantly, he underpins the anarchic nihilism of his narrative with a heartbreaking meditation
upon the toxic power of rage. When characters, struggling to make sense of all this chaos, utter
platitudes such as "anger just begets greater anger" and "through love comes calm", it seems less
like a joke than a weirdly sincere mission statement for the film.
Seven months after her daughter, Angela, was abducted and killed, Mildred Hayes (Frances
McDormand) emblazons the roadside billboards of the title with signs taunting police chief
Willoughby (Woody Harrelson) about the lack of arrests. For Mildred, the Ebbing police force is
"too busy going round torturing black folks" to solve crime. "I got issues with white folks too,"
declares bozo cop Jason Dixon (Sam Rockwell) after throwing someone out of a window - a
bravura one-shot sequence pointedly orchestrated to the lilting strains of His Master's Voice by
Monsters of Folk.
But beneath the caricatures of the characters, even Ebbing's most apparently
unsympathetic residents have complex lives. While ad man Red Welby (Caleb Landry Jones)
pointedly reads Flannery O'Connor's A Good Man Is Hard to Find, family man Willoughby looks
beyond his own mortality, attempting to find the best in everyone, including the aggressively
infantile Dixon. And the righteously angry Mildred has her own demons, torturing her bullied son,
Robbie (Lucas Hedges), with her guilt-driven vendetta, and wrestling with the awful possibility
that "there ain't no God, and the whole world's empty, and it doesn't matter what we do to each
other".
It's difficult not to see the references to Fred Zinnemann's seminal western High Noon (amplified
by composer Carter Burwell's spaghetti-tinged guitar themes in the score) in the film, and cheeky
references to the American gothic of Psycho (Sandy Martin's domineering Momma Dixon seems to
have walked straight out of the Bates Motel). This magical-realist parable finds McDonagh far
more in tune with the US landscape than in his disappointing Seven Psychopaths (2012). From the
opening morning-mist shots of those lonely billboards to the flames that evoke the burning crosses
of the KKK, cinematographer Ben Davis perfectly captures the film's knife-edge balance between
humour and horror, mayhem and melancholia.
While McDonagh's script contains a familiar carnivalesque litany of "fuckheads", "funny-eyed old
ladies" and "fat dentists", the excellent ensemble cast ensures that even peripheral characters have
depth and heft. Amanda Warren and Darrell Britt-Gibson work wonders with small but significant
roles as Mildred's accidental support network, both experiencing the sharp end of Ebbing's
retrograde law enforcement, while Clarke Peters exudes understated gravitas as incoming police
chief Abercrombie, viewing the unfolding idiocy with the same quiet astonishment that Cleavon
Little brought to Blazing Saddles. Peter Dinklage wrings pathos from the role of love-struck car
salesman James, while Abbie Cornish and John Hawkes are much more than mere foils for their
respective screen partners.
As for McDormand, the stoically unsentimental Mildred, who sports a blue-collar jumpsuit and
ready-for-action bandana, offers the best vehicle for her deadpan talents since Fargo's Marge
Gunderson. While McDonagh's dialogue is ripe and chewy, McDormand has the power to speak
volumes in silence. An early scene in which she gazes at the derelict billboards, fiercely chews a
fingernail, then lets her hand gently graze her chin as her head falls back in thought tells us all we
need to know about her dawning plan and her determination to follow it through.
Whether each of these characters is on a road to redemption or ruin is left open-ended.
McDonagh's rejection of clear-cut moral certainties has already provoked a backlash from some
commentators; a recent Huffington Post article, for example, argued that Dixon is essentially "the
racist uncle whom white liberals fear and love". Deserved awards attention (four Golden Globe
wins, umpteen Bafta nominations) for this well made and acted film has turned up the heat on
such highly charged debates. Yet I was not left contemplating the film's thorny racial politics, but
instead remembering the closing moments of Straw Dogs; of the chaos left in the wake of violence
and the wistful possibility (however remote) of transcending its awful legacy.
Vocabulary exercise
1. underpins (paragraph 2)
More importantly, he underpins the anarchic nihilism of his narrative with a heartbreaking
meditation upon the toxic power of rage.
2. taunting (paragraph 3)
Seven months after her daughter, Angela, was abducted and killed, Mildred Hayes (Frances
McDormand) emblazons the roadside billboards of the title with signs taunting police chief
Willoughby (Woody Harrelson) about the lack of arrests.
3. caricatures (paragraph 4)
But beneath the caricatures of the characters, even Ebbing's most apparently
unsympathetic residents have complex lives.
4. wrestling with (paragraph 4)
And the righteously angry Mildred has her own demons, torturing her bullied son, Robbie (Lucas
Hedges), with her guilt-driven vendetta, and wrestling with the awful possibility that "there ain't
no God, and the whole world's empty, and it doesn't matter what we do to each other".
5. seminal (paragraph 5)
It's difficult not to see the references to Fred Zinnemann's seminal western High Noon (amplified
by composer Carter Burwell's spaghetti-tinged guitar themes in the score) in the film,
6. exudes (paragraph 6)
while Clarke Peters exudes understated gravitas as incoming police chief Abercrombie, viewing
the unfolding idiocy with the same quiet astonishment that Cleavon Little brought to Blazing
Saddles.
7. sports (paragraph 7)
As for McDormand, the stoically unsentimental Mildred, who sports a blue-collar jumpsuit and
ready-for-action bandana, offers the best vehicle for her deadpan talents since Fargo's Marge
Gunderson.
Write your own sentences with the vocabulary
1. underpins
2. taunting
3. caricatures
4. wrestling with
5. seminal
6. exudes
7. sports
EXERCISE 37
Our personality traits appear to be mostly
inherited
Summary
This is an article which explains how our genes have much more influence on shaping who we are
than environmental factors do. Using the findings of a recent study of the personality types of
twins, it explains what types of personality traits seem to be inherited rather than learnt. It ends by
explaining how parents can use these findings for bringing up their own children.
The genetic makeup of a child is a stronger influence on personality than child rearing, according
to the first study to examine identical twins reared in different families. The findings shatter a
widespread belief among experts and laymen alike in the primacy of family influence and are sure
to engender fierce debate.
The findings are the first major results to emerge from a long term project at the University of
Minnesota in which, since 1979, more than 350 pairs of twins have gone through six days of
extensive testing that has included analysis of blood, brain waves, intelligence and allergies.
The results on personality are being reviewed for publication by the Journal of Personality and
Social Psychology. Although there has been wide press coverage of pairs of twins reared apart who
met for the first time in the course of the study, the personality results are the first significant
scientific data to be announced.
For most of the traits measured, more than half the variation was found to be due to heredity,
leaving less than half determined by the influence of parents, home environment and other
experiences in life. The Minnesota findings stand in sharp contradiction to standard wisdom on
nature versus nurture in forming adult personality. Virtually all major theories since Freud have
given far more importance to environment, or nurture, than to genes, or nature.
Even though the findings point to the strong influence of heredity, the family still shapes the broad
suggestion of personality offered by heredity; for example, a family might tend to make an innately
timid child either more timid or less so. But the inference from this study is that the family would
be unlikely to make the child brave.
The 350 pairs of twins studied included some who were raised apart. Among these separately
reared twins were 44 pairs of identical twins and 21 pairs of fraternal twins. Comparing twins
raised separately with those raised in the same home allows researchers to determine the relative
importance of heredity and environment in their development. Although some twins go out of
their way to emphasize differences between them, identical twins in general are very much alike in
personality.
But what accounts for that similarity? If environment were the overriding influence on personality,
then identical twins raised in the same home would be expected to show more similarity than
would the twins reared apart. But the study of 11 personality traits found that differences between the
kinds of twins were far smaller than had been assumed. "If in fact twins reared apart are that
similar, this study is extremely important for understanding how personality is shaped,"
commented Jerome Kagan, a developmental psychologist at Harvard University. "It implies that
some aspects of personality are under a great degree of genetic control."
The traits were measured using a personality questionnaire developed by Auke Tellegen, a
psychologist at the University of Minnesota who was one of the principal researchers. The
questionnaire assesses many major aspects of personality, including aggressiveness, striving for
achievement, and the need for personal intimacy. For example, agreement with the statement
"When I work with others, I like to take charge" is an indication of the trait called social potency,
or leadership, while agreement with the sentence "I often keep working on a problem, even if I am
very tired" indicates the need for achievement.
Among traits found most strongly determined by heredity were leadership and, surprisingly,
traditionalism or obedience to authority. "One would not expect the tendency to believe in
traditional values and the strict enforcement of rules to be more an inherited than learned trait,"
said David Lykken, a psychologist in the Minnesota project. "But we found that, in some
mysterious way, it is one of the traits with the strongest genetic influence." Other traits that the
study concludes were more than 50 percent determined by heredity included a sense of well-being
and zest for life; alienation; vulnerability or resistance to stress, and fearfulness or risk-seeking.
Another highly inherited trait, though one not commonly thought of as part of personality, was the
capacity for becoming rapt in an aesthetic experience, such as a concert.
Vulnerability to stress, as measured on the Tellegen test, reflects what is commonly thought of as
"neuroticism," according to Dr. Lykken. "People high in this trait are nervous and jumpy, easily
irritated, highly sensitive to stimuli, and generally dissatisfied with themselves, while those low on
the trait are resilient and see themselves in a positive light," he said. "Therapy may help vulnerable
people to some extent, but they seem to have a built-in susceptibility that may mean, in general,
they would be more content with a life low in stress."
The need to achieve, including ambition and an inclination to work hard toward goals, also was
found to be genetically influenced, but more than half of this trait seemed determined by life
experience. The same lower degree of hereditary influence was found for impulsiveness and its
opposite, caution.
The necessity for personal intimacy appeared the least determined by heredity among the traits
tested; about two-thirds of that tendency was found to depend on experience. People high in this
trait have a strong desire for emotionally intense relationships; those low in the trait tend to be
loners who keep their troubles to themselves.
"This is one trait that can be greatly strengthened by the quality of interactions in a family," Dr.
Lykken said. "The more physical and emotional intimacy, the more likely this trait will be
developed in children, and those children with the strongest inherited tendency will have the
greatest need for social closeness as adults."
No single gene is believed responsible for any one of these traits. Instead, each trait, the Minnesota
researchers propose, is determined by a great number of genes in combination, so that the pattern
of inheritance is complex and indirect.
No one believes, for instance, that there is a single gene for timidity but rather a host of genetic
influences. That may account, they say, for why previous studies have found little correlation
between the personality traits of parents and their children. Whereas identical twins would share
with each other the whole constellation of genes that might be responsible for a particular trait,
children might share only some part of that constellation with each parent. That is why, just as a
short parent may have a tall child, an achievement-oriented parent might have a child with little
ambition.
The Minnesota findings are sure to stir debate. Though most social scientists accept the careful
study of twins, particularly when it includes identical twins reared apart, as the best method of
assessing the degree to which a trait is inherited, some object to using these methods for assessing
the genetic component of complex behavior patterns or question the conclusions that are drawn
from it.
Further, some researchers deem paper-and-pencil tests of personality less reliable than
observations of how people act, since people's own reports of their behavior can be biased. "The
level of heritability they found is surprisingly high, considering that questionnaires are not the
most sensitive index of personality," said Dr. Kagan. "There is often a poor relationship between
how people respond on a questionnaire and what they actually do."
"Years ago, when the field was dominated by a psychodynamic view, you could not publish a study
like this," Dr. Kagan added. "Now the field is shifting to a greater acceptance of genetic
determinants, and there is the danger of being too uncritical of such results."
Seymour Epstein, a personality psychologist at the University of Massachusetts, said he was
skeptical of precise estimates of heritability. "The study compared people from a relatively narrow
range of cultures and environments," he said. "If the range had been much greater - say Pygmies
and Eskimos as well as middle-class Americans - then environment would certainly contribute
more to personality. The results might have shown environment to be a far more powerful
influence than heredity," he said.
Dr. Tellegen himself said: "Even though the differences between families do not account for much
of the unique attributes of their children, a family still exercises important influence. In cases of
extreme deprivation or abuse, for instance, the family would have a much larger impact - though a
negative one - than any found in the study. Although the twins studied came from widely different
environments, there were no extremely deprived families."
Gardner Lindzey, director of the Center for Advanced Studies in the Behavioral Sciences in Palo
Alto, Calif., said the Minnesota findings would no doubt produce impassioned rejoinders. "They do
not in and of themselves say what makes a given character trait emerge," he said, "and they can be
disputed and argued about, as have similar studies of intelligence."
For parents, the study points to the importance of treating each child in accord with his innate
temperament. "The message for parents is not that it does not matter how they treat their children,
but that it is a big mistake to treat all kids the same," said Dr. Lykken. "To guide and shape a child
you have to respect his individuality, adapt to it and cultivate those qualities that will help him in
life.
"If there are two brothers in the same family, one fearless and the other timid, a good parent will
help the timid one become less so by giving him experiences of doing well at risk-taking, and let
the other develop his fearlessness tempered with some intelligent caution. But if the parent
shelters the one who is naturally timid, he will likely become more so."
The Minnesota results lend weight to earlier work that pointed to the importance of a child's
temperament in development. For instance, the New York Longitudinal Study, conducted by
Alexander Thomas and Stella Chess, psychiatrists at New York University Medical Center,
identified three basic temperaments in children, each of which could lead to behavioral problems if
not handled well. "Good parenting now must be seen in terms of meeting the special needs of a
child's temperament, including dealing with whatever conflicts it creates," said Stanley Grossman,
a staff member of the medical center's Psychoanalytic Institute.
Vocabulary exercise
1. engender (paragraph 1)
The findings shatter a widespread belief among experts and laymen alike in the primacy of family
influence and are sure to engender fierce debate.
2. inference (paragraph 5)
a family might tend to make an innately timid child either more timid or less so. But the
inference from this study is that the family would be unlikely to make the child brave.
3. go out of their way to (paragraph 6)
Although some twins go out of their way to emphasize differences between them, identical
twins in general are very much alike in personality.
4. overriding (paragraph 7)
If environment were the overriding influence on personality, then identical twins raised in the
same home would be expected to show more similarity than would the twins reared apart.
5. deem (paragraph 17)
Further, some researchers deem paper-and-pencil tests of personality less reliable than
observations of how people act, since people's own reports of their behavior can be biased.
6. in accord with (paragraph 22)
For parents, the study points to the importance of treating each child in accord with his innate
temperament.
7. lend weight to (paragraph 24)
The Minnesota results lend weight to earlier work that pointed to the importance of a child's
temperament in development.
Write your own sentences with the vocabulary
1. engender
2. inference
3. go out of their way to
4. overriding
5. deem
6. in accord with
7. lend weight to
EXERCISE 38
Is it right for people to go to Africa to hunt?
Summary
This article looks at why people go and hunt exotic and sometimes dangerous species in Africa.
Speaking to mostly hunters, it explains what seems to motivate them to go to Africa to hunt big
game (e.g. lions, elephants etc...) and why they feel people are against this type of hunting. It also
talks about the benefits of allowing regulated and controlled big game hunting to continue for both
the wildlife and the local community.
The most elephants that Ron Thomson has ever killed by himself, in one go, is 32. It took him
about 15 minutes. Growing up in Rhodesia, now Zimbabwe, Thomson began hunting as a teenager
and quickly became expert. From 1959, he worked as a national parks ranger and was regularly
called on to kill animals that came into conflict with man. "It was a great thrill to me, to be very
honest," he says by phone from Kenton-on-Sea, the small coastal town in South Africa where he
lives. "Some people enjoy hunting just as much as other people abhor it. I happened to enjoy it."
Now 79, Thomson has not shot an elephant for decades, and he struggles to find an open-minded
audience for his stories of having, in his own words, "by far hunted more than any other man
alive". Today there are people who hunt, and many more people who feel a deep-seated aversion to
it; for whom the image of an animal slain by man - regardless of species, motive, legal status or even historical context - is nothing but repellent.
Today, these fault lines are most often exposed when a picture of a hunter grinning above their kill
goes viral, as it did for the US hunter and television presenter Larysa Switlyk. Photographs of her
posing with a goat and a sheep she had shot weeks earlier, and entirely legally, on the Scottish
island of Islay went well beyond hunting circles on social media to be met with widespread disgust.
Mike Russell, the local member of the Scottish parliament, said it was unacceptable "to see people
in camouflage … rejoicing at the killing of a goat".
Nicola Sturgeon publicly sympathised with the outcry and said the law would be reviewed. Switlyk
posted on Instagram that she would be heading out of internet access on her "next hunting
adventure". "Hopefully, that will give enough time for all the ignorant people out there sending me
death threats to get educated on hunting and conservation."
And that was a goat. In the case of species that people travel to glimpse in the wild, or just watch
on the Discovery Channel, the outrage can reverberate around the world. What would possess
someone to want to kill these animals, let alone pay tens of thousands of pounds for the
opportunity to do so?
"If you ask 100 hunters, you will get 100 different answers," says Jens Ulrik Høgh by phone from
woodlands in Sweden, where he has been escorting groups on hunts of wild boar. Høgh, who
works for Limpopo Travel & Diana Hunting Tours, a Danish hunting travel company, compares
the attraction to that of mountaineering, scuba diving or golf: a physical hobby through which you
can see the world. Hunters travel to experience different challenges. Zebras, for example, are tricky
because they gather in herds out in the open and are watchful for predators. "There are always eyes
looking in every direction - it typically takes a couple of days to get one." With baboons, numerous
but intelligent primates, "you need to be a good hunter, a good stalker and a good shot".
The demand is reflected in the price tag. It costs relatively little - about £3,000 - to legally hunt a
giraffe because doing so is widely considered easy by hunters and therefore not desirable. "A
giraffe is basically a very docile pile of meat. I could go shoot a cow in a field," says Høgh. (For the
same reason, he tells me, he is rolling his eyes at Switlyk, the self-styled "hardcore huntress",
posing with her trophies on Islay: "Who wants to kill a sheep?")
Although Høgh has made about 30 trips to Africa, he has never killed a lion, elephant or much
"super-big game" for a straightforward reason: it is very expensive, typically upwards of £20,000.
(And rightly so, he adds.) "I simply cannot afford to go lion hunting. But if I could, I would."
It is a tiny proportion of hunters who can, he says; he guesses fewer than 1%, although he is
upfront about the distinction between hunting a wild lion and a "canned" one, an animal raised for
slaughter. That comes much cheaper but to hear Høgh tell it, it is a price no hunter with integrity
would want to pay. "That's basically a farmed animal. You wouldn't even call it a hunt," he says.
One name keeps cropping up in conversations had about so-called trophy hunting: Cecil, the lion
killed by Minnesota dentist Walter Palmer in Zimbabwe in 2015. Although it was legal to shoot
him, he had been lured out of a national park where he was well-beloved, and Palmer, hunting
with a bow and arrow, did not kill Cecil outright, meaning the animal suffered. "It was an
outrageous and shabby thing," says David Quammen, a US science writer who has written
extensively about humanity's relationship with predators. But, he adds, there is a skill, even
nobility, to hunting when "old-fashioned" rules of fair chase are observed. "Anyone who is not
vegetarian is ill-advised to condescend to the people who do that."
It is undeniable that industrial meat production causes more global suffering than hunting. But
even charges of hypocrisy do not deter opponents of hunting, even those who eat meat themselves,
for whom the thrill of the chase could never justify taking a life.
This reflects the complexity of our often emotional, sometimes contradictory relationship with
animals, which Quammen explores in his 2003 book, Monster of God. Hunting is an ancient
impulse. From bringing down mammoths in the ice ages to the gladiatorial battles of Roman
times, whether for food or sport, humans have always pitted themselves against animals. Large
predators, such as big cats and bears, loom large in our collective consciousness, rendered either as
"man-eaters" or charismatic and cuddly, like Simba and Pooh. And hunters argue sentiment is
impeding our ability to protect them as a species.
"The 'Hollywoodification' of animals is, more than anything, the biggest threat to their survival,"
says Loodt Büchner. As director of Tootabi Hunting Safaris, outfitting mostly US clients on legal
hunts of "50-plus species" on ranch land in five southern African countries, he is now mostly
deskbound. But, growing up poor in South Africa, he came to hunting as a valuable source of
protein, joining his father hunting for antelope when he was as young as five or six. Today, the
philanthropic arm of his business provides schoolchildren in Eastern Cape province with 3,400
meals of trophy-hunted meat each month.
Büchner says most of the backlash to hunting comes from "very fragile people" who conflate
wildlife conservation - which can sometimes necessitate killing - with preservation, "shooting
with a Canon camera, not a rifle".
He makes no distinction between people who hunt for food and those who pay his company
$13,500 (£10,400) for a package of 10 "trophy animals" in 10 days. Regardless of circumstance, no
animal killed is ever wasted, with the meat either sold or consumed. In fact, he says, Tootabi has
seen a 54% increase in revenue that he credits to the killing of Cecil because, before then, people
didn't know legal hunting was possible. Conversely, Høgh suggests the increase might instead be
the result of the subsequent crackdown on canned lion hunting.
Much of the interest is from recent university graduates, says Büchner, which he ascribes to their
generation's desire to document unique experiences on social media. "It's amazing to see the
number of young people in Manhattan who all of a sudden realise there's a world out there, that
it's not just shares and stocks: 'We could actually go hunt animals, it sounds amazing.'"
The desire to hunt - especially for "ornaments" such as heads or horns - is often explained by
detractors as being driven by male ego. Craig Packer, an ecologist now resident in Minneapolis
after 35 years spent pushing back against unsustainable lion hunting in Tanzania, says many
hunters model themselves on the Marlboro Man, the stereotypical picture of masculinity - or,
more precisely, what he calls "toxic masculinity".
He recalls his encounters with Steven Chancellor, a Republican fundraiser, who has shot about 50
lions and displays them in his home in Indiana: "He dresses up in black - he's into that cowboy
thing." Chancellor is also one of many big-game hunters to sit on a new federal board tasked with
advising on US import laws.
Packer says hunters often share Republican politics, a rural background and religion, leading them
to reject evidence showing that species are under threat, because (Packer affects a redneck drawl),
"only God can be getting rid of the wildlife".
Unsurprisingly, hunters reject suggestions that they are motivated by their fragile masculinity.
Høgh says it is an ill-informed stereotype. Büchner says it is sexist. "As an accredited dangerous
game hunter, killing any animal isn't fun, nor about boosting my ego," he says. But earlier, he had
said hunters were driven by the adrenaline rush - surely that is fun?
For Høgh, the trouble is that non-hunters see the act of killing an "innocent" animal as
fundamentally dramatic or evil, as well as the primary goal of hunting. "It's not about those 0.5
seconds," he says. "I've met extremely few who took pleasure in killing animals, and the ones I
have met, I wish they would actually stop hunting. It's just perverse - they're sadists."
But it is entirely understandable people would assume otherwise, he adds, when that is what gets
documented. "Hunters have been very poor at communicating the complexity of the experience.
Often what we show the world is a picture of a dead animal and us sitting behind it with a big grin
on our face."
Høgh asks his groups not to post photos of their kills online, or at least to be mindful of how they
might strike a non-hunter. But there are individuals - he assumes an American accent - who
demand that it is "their right". "And it is their right. But it's still damn stupid."
To add to the complexity of the debate, some point out that many of the people who most oppose
the hunting of animals will never know the realities, and sometimes the costs, of living alongside
them. "Wildlife is a problem for many people," says Prof Adam Hart, a scientist at the University of
Gloucestershire. Elephants, for instance, raid crops and damage trees. Big cats kill livestock, while
impala compete with them for food.
There is an important distinction, Gonçalves says, between an individual who kills an animal
sensitively and skilfully for meat that they will consume themselves "and hunting for a plaque on
the wall and a selfie".
Many, including Hart and Büchner, believe that hunting even big game can be not only sustainable
but beneficial to species' survival. They call it the "if it pays, it stays" approach, meaning that, by
putting a monetary figure on animals, they become valuable and worthy of preservation. They say
that if hunting and ranging is more lucrative than farming livestock or ecotourism, landowners are
incentivised to buy more land - thereby conserving habitat - and ensure species' long-term
survival.
Hart says figures from Namibia and South Africa seem to bear this out, but cautions against
oversimplification. "Is it good where we've got a system whereby the only way we can conserve
wildlife is to actively use it? We [Britons] would say no, but we have a very privileged viewpoint.
The biggest problem is that people don't understand the complexities of the situation - there are
different species, different habitats, different countries, different economies, different societies."
Not even Packer is against lion hunting in principle. But in Tanzania he found that the industry
was resistant to reform and charging far too little per hunted animal to fund the conservation of
their land. "What's really infuriating to me is that they're posing like they are the great saviours of
wildlife, but they're putting in pennies where they should be putting in pounds."
A fair price, Packer says, would be about $1m a lion. "Steve Chancellor may be able to afford it. But
there aren't many Steve Chancellors."
In Botswana, home to the world's largest elephant population, Hart says that numbers have grown
so great that their habitat cannot support them and they are causing lasting damage. The
government there is considering lifting a 2014 ban on hunting elephants for sport, pointing to
figures that say there are 237,000 animals in an area able to support 50,000.
Last Monday, Sir Ranulph Fiennes, Bill Oddie, Peter Egan, a cross-party group of MPs and a
lifesize inflatable elephant delivered a 250,000-strong petition against the proposal to lift the ban
to the Botswana high commission in London, saying that allowing hunting could push the species
towards extinction. The protest marked the launch of the Campaign to Ban Trophy Hunting,
founded by Eduardo Gonçalves, the former CEO of the League Against Cruel Sports. He says
elephant numbers in Botswana have only increased as a result of the 2014 ban, while populations
in neighbouring countries have declined. "The ban on trophy hunting has been good for
conservation – there's no two ways about it," he says.
He is scathing of the justification that hunting benefits communities or conservation, pointing out
that if that was their motivation, hunters could donate direct to the cause. "They try to come up
with rational arguments to justify their bloodlust," he says. Moreover, he adds, there is an
"indivisible line" between trophy hunting and poaching, with laws against poachers having an
impact on poor African people while wealthy westerners lawfully carry out the same practice, for a
price.
Thomson says many species suffer when there is an excess of elephants: arboreal snakes and
chameleons, black hornbills, martial eagles, bush babies ("beautiful little monkey-like things").
Elephants' lives have been valued above theirs by "purely human sentiment", he says, now audibly
angry. "The people who think like that don't know anything about wildlife management. These are
people who have never left their armchairs, in London or New York or wherever they live. They
make these demands, and they haven't a clue what is going on. We are the ones looking after the
elephants in Africa. They are the ones that are causing all the problems."
In 1971, Thomson and two other hunters were called on to halve the elephant population in
Gonarezhou national park in Zimbabwe, killing 2,500 animals using semi-automatic rifles. "The
three of us were able to kill between 30 and 50 elephants stone dead with brain shots in less than
60 seconds. In some cases, we were almost touching the elephants when we pulled the triggers. We
did the job that had to be done, without any emotion and without any blood loss, and we did it
exceptionally well."
His own regret is that they hadn't started sooner. The elephants' numbers had already grown so
great as to have caused permanent devastation to the park's baobab trees – some so old, Thomson
says, that they would have stood during Tutankhamun's reign in Egypt. "To have these ancient
trees wiped out before me – it broke my heart."
Vocabulary exercise
1. abhor (paragraph 1)
Some people enjoy hunting just as much as other people abhor it. I happened to enjoy it.
2. aversion to (paragraph 2)
Today there are people who hunt, and many more people who feel a deep-seated aversion to it;
for whom the image of an animal slain by man – regardless of species, motive, legal status or even
historical context – is nothing but repellent.
3. rejoicing (paragraph 3)
Mike Russell, the local member of the Scottish parliament, said it was unacceptable "to see people
in camouflage … rejoicing at the killing of a goat".
4. outcry (paragraph 4)
Nicola Sturgeon publicly sympathised with the outcry and said the law would be reviewed.
5. cropping up (paragraph 10)
One name keeps cropping up in conversations had about so-called trophy hunting: Cecil, the lion
killed by Minnesota dentist Walter Palmer in Zimbabwe in 2015.
6. rendered (paragraph 12)
Large predators, such as big cats and bears, loom large in our collective consciousness, rendered
either as "man-eaters" or charismatic and cuddly, like Simba and Pooh.
7. scathing (paragraph 31)
He is scathing of the justification that hunting benefits communities or conservation, pointing
out that if that was their motivation, hunters could donate direct to the cause.
Write your own sentences with the vocabulary
1. abhor
2. aversion to
3. rejoicing
4. outcry
5. cropping up
6. rendered
7. scathing
EXERCISE 39
The new palaces of the 21st century
Summary
This article talks about three specific buildings built or being built for the tech companies Apple,
Facebook and Google. It talks predominantly about the design and scale of each of the three
buildings in turn. It ends by talking about where the inspiration for these buildings comes from
and what the buildings say about the companies themselves.
We know by now that the internet is a giant playpen, a landscape of toys, distractions and instant
gratification, of chirps and squeaks and bright, shiny things. It is a world, as Jonathan Franzen
once said, "so responsive to our wishes as to be, effectively, a mere extension of the self". Until we
chance on the bars of the playpen and find that there are places we can't go and that it is in the gift
of the grown-ups on the other side to set or move the limits to our freedom.
We're talking here of virtual space. But those grown-ups, the tech giants, Apple, Facebook, Google
and the rest, are also in the business of building physical billion-dollar enclaves for their thousands
of employees. Here too they create calibrated lands of fun, wherein staff offer their lives, body and
soul, day and night, in return for gyms, Olympic-sized swimming pools, climbing walls, basketball
courts, running tracks and hiking trails, indoor football pitches, massage rooms and hanging
gardens, performance venues, amiable art and lovable graphics. They have been doing this for a
while – what is changing is the sheer scale and extravagance of these places.
For the tech giants are now in the same position as great powers in the past – the bankers of the
Italian Renaissance, the skyscraper-builders of the 20th century, the Emperor Augustus, Victorian
railway companies – whereby, whether they want to or not, their size and wealth find expression in
spectacular architecture. As Deyan Sudjic, director of the Design Museum, wrote in his book The
Edifice Complex, the execution of architecture "has always been at the discretion of those with
their hands on the levers of power". Having as much sense of their own importance as those
previous powers, tech companies probably don't mind commissioning structures that define their
time.
Most – though not all – of these new structures are in the gathering of towns, suburbs and small cities
that goes by the name of Silicon Valley. There is the Foster project, Apple Park in Cupertino, 2.8m
sq ft in size and reportedly costing $5bn, at its centre a metal and glass circle a mile in
circumference, visible from space. There are the planned Google headquarters in Mountain View and
London by the high-ego, high-reputation pairing of Bjarke Ingels and Thomas Heatherwick.
Facebook has hired the New York office of OMA, the practice founded by Rem Koolhaas, to add to
the Frank Gehry-designed complex in Menlo Park that was completed in 2015.
The one that commands most attention, and has done since the designs were unveiled in 2011, is
the Apple/Foster circle, built on a site vacated by the waning empire of Hewlett Packard, which, as
it happens, was the company that gave the teenage Steve Jobs his first break. According to Wired
magazine, the building preoccupied Jobs in his last months, and he would spend his precious time
on five- or six-hour meetings on its design.
"We've had some great architects to work with," he said, "some of the best in the world I think, and
we've come up with a design that puts 12,000 people in one building." The audience gasped. He'd
seen "office parks with lots of buildings" but they "get boring pretty fast". So he proposed,
introducing a metaphor that has since stuck to the design like dust to a MacBook screen,
something "a little like a spaceship landed" with a "gorgeous courtyard in the middle". "It's a circle
and so it's curved all the way round," he said, which "as you know if you build things is not the
cheapest way to build something. There's not a straight piece of glass on this building." At the same
time the height would never exceed four storeys – "we want the whole place human-scale". There
would be 6,000 trees on the 150-acre site, selected with the help of a "senior arborist from
Stanford who's very good with indigenous trees around this area".
Jobs was in fact understating the circle's exceptionalness. Steven Levy, a journalist for Wired, was
let through Apple's PR palisades to look inside when the building was nearing completion. He
described a high-precision Xanadu, a feel-good Spectre base, on which Lord Foster and his team
were assisted by Apple's famed chief design officer – also, as it happens, British-born – Sir
Jonathan Ive. After a drive down a pristine 755-foot long tunnel, clad in specially designed and
patented tiles, he discovered a world of whiteness, greenery and silver, with a 100,000 sq ft fitness
centre and a cafe that can serve 4,000 at once, with the 1,000-seat Steve Jobs theatre, surmounted
by a 165ft-wide glass cylinder, for Apple's famous product launches, and with a landscape designed
to emulate a national park.
It is a place where trees have been transplanted from the Mojave desert, where the aluminium
door-handles have been through multiple prototypes to achieve their perfect form, where the stairs
use fire-control systems borrowed from yachts, where the extensive glass has been specially
treated to achieve exactly the desired level of transparency and whiteness, where even a new kind
of pizza box that stops the contents going soggy has been invented and patented for the company
cafe.
In life Jobs was ferocious about the detail; since his death his followers have striven to be true to
his spirit. He specified how the timber wall-linings should be cut and at what time of year, to
minimise its sap content. There is a yoga room, reports Levy, that is "covered in stone, from just
the right quarry in Kansas, that's been carefully distressed, like a pair of jeans, to make it look like
the stone at Jobs's favourite hotel in Yosemite". There are the sliding glass doors to the cafe, four
storeys or 85 feet high, each weighing 440,000 lbs – nearly 200 tonnes – that open and close with
the help of near-noiseless underground mechanisms. Apple Park uses the largest, heaviest single
pieces of glass ever installed on a building, with the added complication of being curved.
It is certainly a wonder of our age, though to what end is an open question. Jonathan Ive told
Wired that the main aims were the connection and collaboration it would allow between
employees. For Foster it is "a beautiful object descended on this verdant, luxurious landscape … a
true utopian vision". One of its aims is to inspire future Apple workers with its perfection and
attention to detail, to set a standard for them to follow in their work. Tim Cook, Apple's CEO,
called it a "100-year decision".
Ever since the design was unveiled, however, it has provoked scepticism. The architecture critic of
the LA Times called it a "retrograde cocoon", "doggedly old-fashioned". As a perfect and excluding
piece of modernist geometry, set within lush planting and dependent on large amounts of car
parking, it looks oddly like a corporate HQ of the 1950s or 60s. And a circle is a frozen form, hard
to modify or augment. At any given point, the relationship to the rest is much the same as at any
other point, which seems to work against Ive's hopes for communication and spontaneity. It is the
shape of infinity and eternity, of mausoleums and temples.
Many of the greatest inventions in modern technology have been made in rough-and-ready, easy-
to-adapt spaces – in the garages, front rooms and borrowed office desks where Apple, Google and
others were hatched – and in Building 20, the big wooden shed at the Massachusetts Institute of
Technology where major advances were made in linguistics, nuclear science, acoustics and
computing, to name but a few. And while it's impossible for a company the size of Apple to recreate
that exact spirit in its workplace, the big circle does look over-determined and too complete, as
well as expensive and slow to build. Foster's "beauty" and "utopia" may not make the best
environment for fast-moving invention. As for Cook's 100-year ambition, this seems strange and
hubristic – as the decline of Hewlett Packard shows, there is little reason to think that any tech
company can last that long, in which case the Apple circle will, like the crumbling art deco
skyscrapers of Detroit, be magnificently redundant.
It doesn't often pay to bet against Apple's judgment, and there may be intelligence in the project
that is not visible in the available information. The company's wealth and power may in any case
be enough to counteract any unhelpfulness in its architecture, but Apple Park looks like the sort of
splendid monument that empires build for themselves – Lutyens's buildings for the British Raj in
Delhi, the skyscrapers that went up on the cusp of the Wall Street crash – after they have passed
their supremacy. It may also be governed by excessive if understandable respect for Jobs. It is a
place imbued with his biography and his dreams. They call it "Steve's gift". It had better not be
Steve's millstone.
It is, at all events, the project against which other tech companies' proposals want to define
themselves. They want to be the things that it is not. The official story of the Facebook/Gehry
collaboration is that Mark Zuckerberg was wary of the architect's celebrity and the latter had to
convince him of his ability to deliver the project – with the help of Gehry's in-house software –
more cheaply and efficiently than his rivals. The finished version is from the rough-edged and
rumpus-room schools of tech HQ design, with a huge open-plan office containing 2,800 workers
and splashy, colourful works by local artists. "The building itself is pretty simple and isn't fancy.
That's on purpose," said Zuckerberg. "We want our space to feel like a work in progress. When you
enter our buildings, we want you to feel how much there is left to be done in our mission to
connect the world."
Shohei Shigematsu, the partner at OMA New York in charge of Facebook's latest expansion,
Willow Campus, says that "our mission was not only to provide iconic architecture but also regional and
social thinking". He and his client, he says, want to "integrate with community and provide
community amenity", to provide "the things that the community desperately wants" a grocery
store, open space, 1,500 homes of which 15% will be offered at below market rents, a hotel,
greenways, residential walks, shopping streets. "Facebook is the perfect company," Shigematsu
also says, "their mission is to connect people, and network is a word that is virtual but also
physical." So he wants to apply that mission to "urban ambitions for connectivity in the Bay Area".
He wants to re-activate a disused rail corridor at the edge of the site as a cycling track, a pedestrian
route and a possible line for a Facebook shuttle that can also be used by the public. He wants to
"undo the corporate fortress-like approach", although he acknowledges that a vast company will
always have secrets and that much of its territory will be out of bounds to the general public. The
imagery published so far shows generically pleasant parks and streets, of the kind that well-
mannered urbanists have been generating for more than three decades, with none of the
provocation, surprise and signature perversity that you usually get with OMA projects. Shigematsu
says he is happy to accept "a certain level of banality" in the appearance – it is the "large-scale
thinking" that matters to him.
Google want something else again. They're definitely not afraid of icons. After considering various
architects – Zaha Hadid, for example – they shotgunned Heatherwick and Bjarke Ingels's practice,
BIG, into a marriage. It's a striking idea, like a billionaire hiring Miley Cyrus and Britney Spears to
perform at his child's 18th birthday. Heatherwick and Ingels are among the younger recruits to the
ranks of iconists and unabashed showmen in their choice of designs. One of the two might be
considered ample for any one project, but then businesses like Google don't play by normal rules
when it comes to hiring designers, or indeed much else.
At Mountain View, where permission was recently granted to proceed, a huge roof is proposed,
both mountainous and tent-like, with upward-curving openings for viewing the sky. Beneath its
capacious shelter, on a raised open deck, hundreds if not thousands of Googlers will be doing their
stuff. The next level down, a publicly accessible route runs through, part of a programme of
engaging with the local community that also includes a "public plaza" for group tai chi and
whatever. It is framed by "oval oak thickets".
If Apple Park seems aloof and extraterrestrial – despite the fact that quite a lot of its landscape is
open to the public – then Facebook and Google want you to know how much, like street jugglers or
mime artists, they want to engage you. But there are also similarities between all these projects,
such as the all-embracing nature of their ambitions. Each campus is a self-contained universe
where everything – the species of vegetation, the graphics, the food in the cafe, the programming
of events, the architecture – is determined by the management. They make their own weather.
Under the Google tent or inside the Apple circle there is little but googleness or appleness. There is
nature but, despite the meticulous selection of native plants, it is of an abstract, managed kind.
There is art, but it is drained of the power to shock and subvert, leaving only diversion and
reassurance. There is architecture but, notwithstanding the high degree of invention that goes into
materials, it finds it hard to shed the quality of computer renderings, the sense that buildings are
made of a kind of digistuff, which could as well be one thing or another. Even when the
corporations reach out to their communities, to use the preferred PR terminology, the rest of the
world is a hazy, ill-defined entity, a mist in the background of the computer-generated images.
These panoptical worlds are a function of the sheer scale of the corporations, but they also reflect
their mindset. It has been pointed out that tech campuses resemble hippie communes of the 1960s
in their apparent egalitarianism, their illusion that you can go back to nature, make your own
rules, liberate yourself with science and share everything. Physically, Google's big roof echoes the
geodesic domes that hippies put up in their rural retreats.
While their sci-fi is strangely dated, culturally it makes sense. As the author Fred Turner has
argued in From Counterculture to Cyberculture, radical Californian ideas of the 1960s were, with
added profit motive, converted into radical Californian technologies of recent decades. And as has
been belatedly dawning, there are limits to the sharing, equality and freedom, particularly when
the intellectual property and business strategies of the tech giants are at stake.
Their architecture gives form to these contradictions, to the combinations of openness and control
and of freedom and barriers. They are perfect diagrams of the apparent equality and actual
inequality of the tech world, where impermeable septa divide those in the inner circles from the
rest. There is inequality everywhere, of course, but the tech trick is to pretend that there isn't.
Vocabulary exercise
1. have striven (paragraph 9)
In life Jobs was ferocious about the detail; since his death his followers have striven to be true to
his spirit. He specified how the timber wall-linings should be cut and at what time of year, to
minimise its sap content
2. provoked (paragraph 11)
Ever since the design was unveiled, however, it has provoked scepticism. The architecture critic
of the LA Times called it a "retrograde cocoon", "doggedly old-fashioned"
3. imbued with (paragraph 13)
It may also be governed by excessive if understandable respect for Jobs. It is a place imbued with
his biography and his dreams. They call it "Steve's gift". It had better not be Steve's millstone.
4. be out of bounds (paragraph 16)
He wants to "undo the corporate fortress-like approach", although he acknowledges that a vast
company will always have secrets and that much of its territory will be out of bounds to the
general public.
5. aloof (paragraph 19)
If Apple Park seems aloof and extraterrestrial – despite the fact that quite a lot of its landscape is
open to the public – then Facebook and Google want you to know how much, like street jugglers or
mime artists
6. shed (paragraph 20)
There is architecture but, notwithstanding the high degree of invention that goes into materials, it
finds it hard to shed the quality of computer renderings, the sense that buildings are made of a
kind of digistuff,
7. mindset (paragraph 21)
These panoptical worlds are a function of the sheer scale of the corporations, but they also reflect
their mindset.
Write your own sentences with the vocabulary
1. have striven
2. provoked
3. imbued with
4. be out of bounds
5. aloof
6. shed
7. mindset
EXERCISE 40
Understanding contemporary dance
Summary
This is a beginner's guide to understanding what contemporary dance is. It explains what the style
of dance is, where and how it evolved and what you need to know in order to understand a
performance of it.
It's safe to say the blueprint for dance in the United States and Europe has largely been upended
over the past century. Watching dance performances was once a predictable affair characterized
by narrative plots, ornate costumery, theatrical set design, a classical score, and movement that
either was (or followed the traditions of) ballet.
But these days, seeing dance can, at times, feel as though you need a personal translator to make
sense of what's going on. Yet it doesn't need to be an alienating experience. Below is a brief guide
to understanding contemporary dance, including insights from choreographers, dance historians,
and other experts to help demystify the art form's practices today.
What is contemporary dance?
So what are we talking about when we say "contemporary dance"? While the term is mainly used
in major dance hubs in the U.S. and Europe, it can be used to refer to anything from hip-hop to
non-western folk or tribal dance rituals. More broadly, it refers to modes of dance that began to
emerge in the mid-20th century which encompasses a myriad of cultural, economic, social, and
temporal influences. It is a form of dance without boundaries.
Attempts to classify what, exactly, contemporary dance is have been almost unanimously
abandoned, due in part to the inability to account for the distinct historical contexts that individual
dance practices emerge from. "Not every performance has the luxury of being ‘of its time' in the
current world," stresses dance scholar Noémie Solomon.
In today's study of movement, Solomon continues, dance makers are seeking solutions to
questions including: "What does it mean to move, or to be moved? How does movement intersect
with issues of labor and visibility?" Dance is also, by nature, an ephemeral art.
"Dance has often been hailed as an art that disappears, as if dance didn't leave any tangible trace,"
Solomon says. "Many contemporary dancers have really engaged with this question of what it is
that dance produces in time." We can see that dance evades categorical definition in the same way
that it won't be pinned down by the word "contemporary."
The history of contemporary dance in the U.S. and Europe
American and European traditions have largely shaped how dance is understood in the West
today, including the parameters that define what dance can be (e.g. for some, touching and moving
the torso of the body on the floor isn't a dance movement). However, since the early 20th century
such rigid guidelines started to be openly questioned by many dancers. It is from this questioning
that the precursor to contemporary dance, modern dance, formed.
In the U.S., contemporary dance is widely considered to be the outgrowth of explorations by the
Judson Dance Theater in New York, where the vision of dance went beyond "defining what dance
is" to "expanding what dance can be."
Discussions of the origins of contemporary dance in North America almost always trace back to
performances coming out of cultural hubs like New York, San Francisco, and Los Angeles from the
end of the 19th century into the present. These traditions stem from a strong matriarchal lineage of
modern dance, with Isadora Duncan at the helm.
At the turn of the 19th century, Duncan catalyzed a redefining of the discipline with her vision of
dance as a vehicle for expressing the soul, and contended with the popularity of vaudeville
performance. Her emphasis on making the inner life visible disrupted prevailing ideas around
dance that it should be strictly narrative based and entertainment driven. Martha Graham
articulated this idea further, prioritizing expressive movement over narrative elements. Driven by
an interest in gravity, she relinquished dancers from wearing ballet slippers, which were
ubiquitous up to that point, and instead had them perform barefoot.
The innovations of Duncan and Graham created space for other radical practices to take shape.
Sidestepping the teachings of Graham, her student Merce Cunningham was in direct dialogue with
the visual arts community of the 1950s. He and John Cage were lifelong collaborators, and in the
same way that the composer redefined music as sound, Cunningham redefined dance as
movement. He integrated seemingly ordinary movements into his works and played with elements
of chance, while stripping away the necessity of a musical backdrop. These explorations suggested
that dance could be anything.
In tandem with this largely white experimental dance community in the U.S., choreographer and
dancer Alvin Ailey forged a voice for African Americans, beginning in the late 1950s. His
innovative choreography drew together influences from modern dance, ballet, and black cultural
currents in the United States, such as jazz, blues, and gospel music.
Later, the artist-driven dance collective Judson Dance Theater formed in Greenwich Village in
1962. The reimagining of the body by the Judson Dance Theater freed dance to "be seen as a
means to incorporate non-dancers within choreographic practices, to relate dance's forms to
everyday movements and broad cultural issues," Solomon explains. Their influential work paved
the way for the multifarious forms that dance takes into the present.
In Europe, the move to contemporary dance took a different tack from the one it took in the United
States, and it is for this reason that, in many respects, it is very different from the form that sprang
up in America.
Modern dance in Europe was strongly influenced by the avant-garde theatre and the visual arts.
Consequently, it had more intellectual pretensions about what it aimed to do. The itinerant
Parisian dance troupe Ballets Russes, for instance, destabilized norms around content early on in
the 20th century and made strategic use of shock factor, incorporating erotic plotlines and tribal
references. Under the helm of Sergei Diaghilev, they too were known to commission costumes and
sets by leading artists, including Pablo Picasso and Henri Matisse.
At the Bauhaus in the 1920s, experiments with dance became an interdisciplinary affair and dance
was a medium for bringing the gesamtkunstwerk ("total work of art") to life, through an emphasis
on bridging the various performative elements of costume, lighting, music, and stage, again
emphasizing theatricality.
The European trajectory of dance saw a simplification of movement in tandem with a heightening
of emotion. This is evident in the work of German titan Pina Bausch who, as the director of the
Tanztheater from the early 1970s until her death, started intensifying the mood on the stage, with
highly dramatized performances where emotion transcended narrative.
Around the same time, as choreographer for the Stuttgart Ballet, William Forsythe was reworking
ballet, changing its contours to reflect new visions of the practice that could integrate traditional
and innovative ideas about choreography, costume, music, and set design to formulate something
original. Forsythe's career has extended into the visual arts, where he explores concepts of
choreography.
Since the '80s, dance maker Anne Teresa de Keersmaeker has continued these non-narrative
tendencies, imposing rigorous everyday movements into her works, employing pattern, repetition,
and speed. With a demand for discipline, her dances make fierce emotions visually palpable.
How does contemporary dance differ from performance art?
"Generally people think of dance as a kinetic art," says Susan Foster, a choreographer and dance
scholar at UCLA. "It's about bodies being articulate through movement." But how does it differ
from certain types of performance art?
As some dance has become more interdisciplinary, the overlap with performance art has made
distinguishing between the two a recurring conundrum. The confusion speaks to the fact that these
experimental practices share common concerns, including "a reflexive approach to the body as art
material," and "the ways in which it endures and generates experiences of time and effect,"
Solomon explains. "Performance art's focus on duration can be seen in close relation to dance's
questioning of movement, which often leads to a stretching (or deceleration) of time."
Circling back to the Judson Dance Theater, where speech, film, and experimental music were
interlaced with dance as a way to upset expectation, we can arguably identify an origin of the
murkiness. However, Foster would counter that we cannot draw lines between the two without
looking at individual practices. In the same vein, Evans would encourage an interdisciplinary
outlook, and doing away with the "identity-driven divisions" that confine practitioners to either
dance or performance art.
How to approach watching contemporary dance
Given that even trying to define what constitutes contemporary dance is difficult, you may think
that trying to interpret what is going on when viewing a performance is an impossible task. However,
you would be mistaken in thinking so. To interpret it, you first need to remember, as Foster says, that dance
reflects who we are. "We are moving bodies, and dance makes that more evident than ever," she
says.
Recognizing ourselves in dance that is visually challenging can help to bring down the barrier
between what we expect to see and what we actually see. To build on this, we, as audience
members, can work with the theory of kinesthetic empathy, which proposes that dance is a two-
way conversation that engages all of our senses and "heightens how we feel our body in space, in
relation to others," Solomon explains.
Moving beyond the visceral components of comprehension, viewing more dance will, in time,
allow us to draw formal connections and conclusions about what we've witnessed. It's important to
remember that dance is ostensibly a visually jarring experience.
If all else fails, take comfort in the fact that well-respected scholars, like Foster, are equally
interested in the dance that happens on a stage, as that which happens in a bar. "It can tell you
something completely illuminating about what the body is or can be, and about people's social
relations to each other," she says.
Vocabulary exercise
1. encompasses (paragraph 3)
More broadly, it refers to modes of dance that began to emerge in the mid-20th century which
encompasses a myriad of cultural, economic, social, and temporal influences.
2. precursor (paragraph 7)
However, since the early 20th century such rigid guidelines started to be openly questioned by
many dancers. It is from this questioning that the precursor to contemporary dance, modern
dance, formed.
3. at the helm (paragraph 9)
These traditions stem from a strong matriarchal lineage of modern dance, with Isadora Duncan at
the helm.
4. relinquished (paragraph 10)
Driven by an interest in gravity, she relinquished dancers from wearing ballet slippers, which
were ubiquitous up to that point, and instead had them perform barefoot.
5. in tandem with (paragraph 12)
In tandem with this largely white experimental dance community in the U.S., choreographer
and dancer Alvin Ailey forged a voice for African Americans
6. pretensions (paragraph 15)
Modern dance in Europe was strongly influenced by the avant-garde theatre and the visual arts.
Consequently, it had more intellectual pretensions about what it aimed to do.
7. confine (paragraph 22)
In the same vein, Evans would encourage an interdisciplinary outlook, and doing away with the
"identity-driven divisions" that confine practitioners to either dance or performance art.
Write your own sentences with the vocabulary
1. encompasses
2. precursor
3. at the helm
4. relinquished
5. in tandem with
6. pretensions
7. confine
EXERCISE 41
America’s six best hiking trails
Summary
This article lists the six best hiking trails/routes in America. In each, a writer gives their reasons
for what makes the particular trail special, describing the landscape and the flora and fauna (the
plants and animals) which can be found there. Each also gives a little advice for doing the route.
When one thinks of holidays in America, images of the skyscrapers of Manhattan or the beaches of
Miami or Los Angeles automatically spring to mind. But there's more (and a lot) to discover and
explore in this vast and incredibly diverse country. And for those who are more inclined to
enjoy the outdoors instead of the bustling cities or the sun-drenched beaches, there is a
plethora of trekking opportunities to take advantage of. However, with so many on offer, it
can actually be quite daunting to know which to choose.
To help you know which to choose, we spoke to six leading trekking bloggers and writers in the US
and got them to pick their own favourite trekking route in the continental US.
Lewis and Clark National Historic Trail
Length: 3,700 miles
Route: St Louis, Missouri, north-west over the Rockies to Oregon
Writer: John Thomas
The Lewis and Clark National Historic Trail follows the expedition of Captains Meriwether Lewis
and William Clark, sent to explore the West by President Thomas Jefferson in 1804. They began
near St Louis, Missouri, continued to the Pacific Ocean, then returned – a round trip that took
over two years.
In 1992, I began a quest to walk all 30 routes in the national trails system. After 25 years of
travelling these trails – "walking down a dream", as I have called it – this week I take my final steps
on the Lewis and Clark Trail in St Louis, Missouri. Lewis and Clark also ended their journey here
in 1806 and I will join in celebrating their achievement as well as the 50th anniversary of the trails
network.
The Upper Missouri River Breaks in Montana is one of the best stretches of the trail. There's a
sublime beauty there that captures the romantic view of the American West. I can still vividly
remember undertaking a long voyage downstream there in a canoe packed with supplies, passing
the same places Lewis and Clark saw on their trip home.
Having walked 34,000 miles across America, as I paddled down Class I rapids I was fascinated,
and felt humbled by the ever-changing landscape and geological features of the Upper Missouri.
On one afternoon, an eagle flew directly overhead, then an osprey circled above the eagle. As the
two birds circled ever higher into the great blue yonder, I sat in wonder, gazing up, watching the
osprey get ever smaller until it simply vanished.
The Pacific Crest Trail
Length: 2,650 miles
Route: Mexico to Canada, through California, Oregon, and Washington
Writer: Sally Schmidt
Described as a "wilderness path in our backyard", the Pacific Crest Trail (PCT), goes through 57
major mountain passes, dips into 19 major canyons and meanders alongside more than 1,000
lakes and tarns. For those who don't have the months or stamina needed to walk the entire
route, it is broken into sections for shorter, multi-night treks and within those sections are plenty
of options for day hikes along the national trail. One of my favourites of these is the seven-mile
High Trail with its views of the jagged peaks of the Minarets and other summits of the Ritter Range
in California. The river flowing through the polished glacial rock below is dramatic, but it's the void
between the high mountains and low valley that makes onlookers feel small.
"Everything is flowing," the naturalist John Muir wrote of the region, "going somewhere, animals
and so-called lifeless rocks as well as water."
Starting at the Agnew Meadows Trailhead near Mammoth Lakes, the High Trail contours the west
slope of the Sierra Nevada crest and stays above the river before meeting the headwaters of the San
Joaquin at Thousand Island Lake.
The High Trail-PCT is in the Ansel Adams Wilderness, which spans more than 230,000 acres
between the John Muir Wilderness to the south and Yosemite national park to the north. I often
join this trail, skiing in winter and backpacking in the halcyon days of summer, but mostly
running, stopping often. I watch the trail change through the seasons. Snow falling and melting,
wildflowers blooming and ultimately withering as fall begins. I see aspen leaves transition from
green to yellow, orange and red.
My dad, who was never an avid hiker, once went for a walk on the High Trail and returned at dusk
so tired that he went to bed without dinner. He told the story of that long walk for the rest of his
life.
At Thousand Island Lake, the PCT continues north to Yosemite, but the 3,000-metre alpine lake is
the final destination for most day hikers. Muir called it Islet, for its abundance of tiny granite
islands. Banner Peak's sharp near 4,000-metre summit ridge rises above the sapphire-coloured
water.
Hikers have the option to return the same way or via the River Trail for a 14- to 15-mile round-trip.
Road transport between trailheads and Mammoth Mountain in summer is by shuttle bus only ($8
adult) – cars are almost entirely prohibited.
The Appalachian Trail
Length: 2,190 miles
Route: Running from Springer Mountain in Georgia to Mount Katahdin in Maine, the
Appalachian Trail spans wilderness areas and several sub-Appalachian ranges, such as the Great
Smokies and the Blue Ridge Mountains, through 14 states. A quarter of the trail (550 miles) is in
Virginia. Its highest point is 2,025-metre Clingmans Dome in the Great Smoky Mountains in
Tennessee.
Writer: Terry Briggs
It is the camaraderie of hikers that makes the experience of walking this trail, established in 1937,
so special. The Appalachian Trail is a melting pot of nationalities and people of all ages,
occupations, social classes, races, and religions. People who hike the trail congregate at the lean-
tos each night, spaced every 10 miles or so along the trail, and tell their stories, comparing notes on
how they are coping physically.
An estimated three million visitors hike portions of the trail each year, which is within driving
range of major cities such as Atlanta, Richmond, Washington DC, New York, and Boston. Each
year about 2,000 "thru-hikers" complete the trail in one continuous trip lasting from five to seven
months. Others hike sections over years to complete the entire trail.
The many shorter sections include routes through the Shenandoah National Park, and the White
Mountain national forest.
There are no fees or permits required to hike the trail, which links existing trail systems through
numerous national parks, national forests, designated wilderness areas, state parks, and other
public lands. It is maintained by an army of volunteers who build shelters, repair eroded sections
of trail, and repaint the 150,000 white rectangular blazes that mark the route from one end to the
other.
My favourite section is the 100 Mile Wilderness in Maine, just before you reach the northern
terminus of the trail and climb Mount Katahdin (1,605 metres) in Baxter State Park. It's a wild and
rugged stretch of mountains, forest, and lakes with little vehicle access and takes 6-10 days to hike
end-to-end. I prefer hiking on the trail in mid- to late September when it is ablaze with autumn
colour, the nights are cool and the biting insects have disappeared.
Continental Divide Trail
Length: 3,100 miles
Route: The Canadian border in Glacier National Park, Montana, to the Mexican border west of El
Paso, through Idaho, Wyoming, Colorado and New Mexico
Writer: Diego Simpson
The Appalachian and Pacific Crest trails both get a lot of attention but the Continental Divide Trail
(CDT) offers at least as much adventure, with a fraction of the crowds. The CDT showcases one of
the most spectacular parts of the world: the snow-covered Rockies, alpine wildflowers and high-
elevation forests. This is a place of massive scale, where remote, rugged lands divide the
watersheds of the Pacific, Atlantic and Arctic Oceans.
Of the "Triple Crown" of long-distance routes (along with the Appalachian and Pacific Crest), the
CDT is by far the least-used of the three.
Yellowstone national park, on the Wyoming border, has grizzly bears, bison and wolves – but be
sure to give all wildlife a wide berth and research current conditions. Turning west, the trail
intersects with the Nez Perce historic trail, a path of untold stories from one of America's most
celebrated tribes.
When camping on the CDT, you can see the stars more vividly than in most places in the US (as
long as forest fires aren't smoking up the skies). Trail towns and gateway communities across the
country play an important role in the trail experience. These are places with unique culture, good
food, parks and activities appealing to different interests. Silver City in New Mexico and Lincoln,
Montana, are two towns that enrich the trail experience.
I have walked the many varied routes across the national trails system, but the CDT remains one of
the most special to me. It encapsulates everything that the American wilderness was and should
be: vast, untouched, awe-inspiring landscapes where the wildlife still thrives and dominates.
Ice Age Trail
Length: 1,200 miles
Route: From Interstate state park on the Wisconsin-Minnesota border to Potawatomi State Park
on Lake Michigan
Writer: Chloe Soprano
The US's most famous long-distance trails – the Appalachian, Continental Divide and Pacific Crest
– follow mountain ranges running from north to south. My favourite follows the edge of the last
Ice Age from east to west across Wisconsin.
Until 12,000 years ago, massive ice sheets blanketed all of Canada, with the southern fringe of the
glaciers dipping down past the Great Lakes. The Ice Age national scenic trail follows a zig-zagging
line of rock piles called moraines and other ice age relics – such as long ridge lines called eskers,
large boulders called glacial erratics and small basins left by melting ice chunks called kettle ponds
– for over a thousand miles. Many of the textbook-worthy geographical features along the trail are
protected in the Ice Age national scientific preserve, part of the US national park system.
Wisconsin's rolling landscape alternates between mature forests and open fields connected by
streams and rivers that flow into thousands of lakes. The trail meanders along the shore of Lake
Michigan, before plummeting south of Madison, the state capital, and then heading north again to
undulate across the western half of the state, linking numerous state parks and rural counties.
Large sections of the trail, marked by yellow blazes on trees, rocks and fence posts, are designated
hiking-only footpaths; others follow scenic backroads perfect for cyclists.
This is my favourite long-distance trail because its unique focus on the region's geological history
transports hikers back in time, to an era when the landscape was all wilderness. As you walk, ride
or drive in the footsteps of the woolly mammoths that once grazed the lush grasslands cultivated
by the retreating glaciers, keep your eyes peeled for porcupines, foxes and black bears. And at
night, your ears may ring with the howls of wolves: about 1,000 grey wolves live in Wisconsin,
about a third of the endangered Western Great Lakes population.
Wonderland Trail
Length: 93 miles
Route: A circuit around Mount Rainier, Washington, giving views of all sides of the Cascades'
highest volcano
Writer: Tom Lincoln
Rising from its lowland valleys like a vision, 4,392-metre Mount Rainier is the highest of all the
snow-clad volcanoes of the Cascade Range. And the 93-mile Wonderland trail offers a truly
intimate connection with Rainier, as the route makes a complete circuit of this magic mountain
through the moody, rugged wilderness at its feet.
The simple desire to climb Rainier was the very thing that brought me to the Pacific Northwest
decades ago. There I was, puking on the summit with the other pilgrims who had climbed too high,
too fast. Only later did I realise that time spent in the backcountry around the mountain can be
even more rewarding. The allure here is the tremendous variety of terrain, which makes for
fascinating backcountry travel.
To hike all 93 miles of the Wonderland, a designated Recreation Trail, is to take in all the majestic
nuances of Mount Rainier's domain. The 360-degree view of the mountain, under volatile weather
and changing light, is reason enough to come. The cathedral-like ancient forests of Douglas fir and
western hemlock, the expanses of lovely alpine meadows (locally called "parks"), the high volcanic
ridges, and the 35 cubic miles of ice draping the rocky flanks of the mountain all combine for a
landscape unique in the lower 48 states. At high points along the route, such as Panhandle Gap,
the hiker is taken deep into the alpine zone, into the realm of ice and snow far above the trees.
Just be prepared to do a little work. Distinctive radial ridges called "cleavers" reach from high on
Rainier right into the surrounding backcountry. These ridges create serious topography, a
successive series of obstructing ridgelines above valleys deeply dug by raging glacial torrents.
These require multiple climbs above 2,000 metres from deep green valleys, taking the hiker into a
high, austere wilderness of ice and rock. Going up and over these ridges means the backcountry
traveller who makes a complete circuit gains more than 6,000 metres of elevation in those 93
miles.
Not everyone has the time or inclination to hike for months on the longer-distance trails. In my
opinion, shorter trails such as the Wonderland bring the biggest reward for time and effort
applied.
Most hikers set out from the village of Longmire and do the Wonderland in 12-14 days, a period
that allows for a relaxed pace, time to appreciate the scenery, and a rain day or two. You can do it
all in a single push, or in sections over several seasons – even over a decade – in two- or three-day
stints. The trail is usually hikable from mid-July through September, but depending on the
previous winter's snowfall, trails above 1,800 metres may be covered in snow well into August.
Vocabulary exercise
1. daunting (paragraph 1)
there is a plethora of trekking opportunities to take advantage of. However, with so many on offer, it
can actually be quite daunting to know which to choose.
2. stretches (paragraph 5)
The Upper Missouri River Breaks in Montana is one of the best stretches of the trail. There's a
sublime beauty there that captures the romantic view of the American West.
3. spans (paragraph 10)
The High Trail-PCT is in the Ansel Adams Wilderness, which spans more than 230,000 acres
between the John Muir Wilderness to the south and Yosemite national park to the north.
4. withering (paragraph 10)
I watch the trail change through the seasons. Snow falling and melting, wildflowers blooming and
ultimately withering as fall begins.
5. avid (paragraph 11)
My dad, who was never an avid hiker, once went for a walk on the High Trail and returned at dusk
so tired that he went to bed without dinner.
6. encapsulates (paragraph 23)
It encapsulates everything that the American wilderness was and should be: vast, untouched,
awe-inspiring landscapes where the wildlife still thrives and dominates.
7. allure (paragraph 29)
Only later did I realise that time spent in the backcountry around the mountain can be even more
rewarding. The allure here is the tremendous variety of terrain, which makes for fascinating
backcountry travel.
Write your own sentences with the vocabulary
1. daunting
2. stretches
3. spans
4. withering
5. avid
6. encapsulates
7. allure
EXERCISE 42
Why the disappearance of livestock farming is
good for us all
Summary
This article argues for the end of livestock farming (the farming of animals). It gives a variety of
reasons why this would be a good thing and provides counter-arguments to those who argue that
this type of farming can actually be good for the planet.
What will future generations, looking back on our age, see as its monstrosities? We ourselves think
of slavery, the subjugation of women, judicial torture, the murder of heretics, imperial conquest
and genocide, the First World War and the rise of fascism, and ask ourselves how people could
have failed to see the horror of what they did. What madness of our times will revolt our
descendants?
There are a plethora to choose from. But one of them, I believe, will be the mass incarceration of
animals, to enable us to eat their flesh or eggs or drink their milk. While we call ourselves animal
lovers, and lavish kindness on our dogs and cats, we inflict brutal deprivations on billions of
animals that are just as capable of suffering. The hypocrisy is so rank that future generations will
marvel at how we could have failed to see it.
I personally feel that the shift away from the consumption of meat will occur with the advent of
cheap artificial meat (otherwise known as lab-grown meat). Technological change has often helped
to catalyse ethical change. The $300m deal China signed last month to buy lab-grown meat marks
the beginning of the end of livestock farming. But it won't happen quickly: the great suffering is
likely to continue for many years.
The answer, we are told by celebrity chefs and food writers, is to keep livestock outdoors: eat free-
range beef or lamb, not battery pork. But all this does is to swap one disaster – mass cruelty – for
another: mass destruction. Almost all forms of animal farming cause environmental damage, but
none more so than keeping them outdoors. The reason is inefficiency. Grazing is not just slightly
inefficient, it is stupendously wasteful. Roughly twice as much of the world's surface is used for
grazing as for growing crops, yet animals fed entirely on pasture produce just one gram out of the
81g of protein consumed per person per day.
A paper in Science of the Total Environment reports that "livestock production is the single largest
driver of habitat loss". Grazing livestock are a fully automated system for ecological destruction:
you need only release the animals on to the land and they do the rest, browsing out tree seedlings,
simplifying complex ecosystems. Their keepers augment this assault by slaughtering large
predators.
In the UK, for example, sheep supply around 1% of our diet in terms of calories. Yet they occupy
around 4m hectares of the uplands. This is more or less equivalent to all the land under crops in
this country, and more than twice the area of the built environment (1.7m hectares). The rich
mosaic of rainforest and other habitats that once covered our hills has been erased, the wildlife
reduced to a handful of hardy species. The damage caused is out of all proportion to the meat
produced.
Replacing the meat in our diets with soya spectacularly reduces the land area required per kilo of
protein: by 70% in the case of chicken, 89% in the case of pork and 97% in the case of beef. One
study suggests that if we were all to switch to a plant-based diet, 15m hectares of land in Britain
currently used for farming could be returned to nature. Alternatively, this country could feed 200
million people. An end to animal farming would be the salvation of the world's wildlife, our natural
wonders and magnificent habitats.
Understandably, those who keep animals have sought to refute such facts, using an ingenious argument.
Livestock grazing, they claim, can suck carbon out of the atmosphere and store it in the soil,
reducing or even reversing global warming. In a TED talk watched by 4 million people, the rancher
Allan Savory claims that his "holistic" grazing could absorb enough carbon to return the world's
atmosphere to pre-industrial levels. His inability, when I interviewed him, to substantiate his
claims has done nothing to dent the idea's popularity.
Similar statements have been made by Graham Harvey, the agricultural story editor of the BBC
Radio 4 serial The Archers – he claims that the prairies in the US could absorb all the carbon
"that's gone into the atmosphere for the whole planet since we industrialised" – and amplified by
the Campaign to Protect Rural England. Farmers' organisations all over the world now noisily
promote this view.
A report this week by the Food Climate Research Network, called Grazed and Confused, seeks to
resolve the question: can keeping livestock outdoors cause a net reduction in greenhouse gases?
The authors spent two years investigating the issue. They cite 300 sources. Their answer is
unequivocal. No.
It is true, they find, that some grazing systems are better than others. Under some circumstances,
plants growing on pastures will accumulate carbon under the ground, through the expansion of
their root systems and the laying down of leaf litter. But the claims of people such as Savory and
Harvey are "dangerously misleading". The evidence supporting additional carbon storage through
the special systems these livestock crusaders propose is weak and contradictory, and suggests that
if there's an effect at all, it is small.
The best that can be done is to remove between 20% and 60% of the greenhouse gas emissions
grazing livestock produce. Even this might be an overestimate: a paper published this week in the
journal Carbon Balance and Management suggests that the amount of methane (a potent
greenhouse gas) farm animals produce has been understated. In either case, carbon storage in
pastures cannot compensate for the animals' own climate impacts, let alone those of industrial
civilisation. I would like to see the TED team post a warning on Savory's video, before even more
people are misled.
As the final argument crumbles, we are left facing an uncomfortable fact: animal farming looks as
incompatible with a sustained future for humans and other species as mining coal.
That vast expanse of pastureland, from which we obtain so little at such great environmental cost,
would be better used for rewilding: the mass restoration of nature. Not only would this help to
reverse the catastrophic decline in habitats and the diversity and abundance of wildlife, but the
returning forests, wetlands and savannahs are likely to absorb far more carbon than even the most
sophisticated forms of grazing.
For many, the end of animal farming might be hard to swallow at the beginning, but we are a
resilient and adaptable species. We have undergone a series of astonishing changes: the adoption
of sedentarism, of agriculture, of cities, of industry.
Now it is time for a new revolution, almost as profound as those other great shifts: the switch to a
plant-based diet. The technology is – depending on how close an approximation to meat you
demand (Quorn seems almost indistinguishable from chicken or mince to me) – either here or just
around the corner. The ethical switch is happening already: even today, there are half a million
vegans in the land of roast beef. It's time to abandon the excuses, the fake facts and false comforts.
It is time to see our moral choices as our descendants will.
Vocabulary exercise
1. lavish (paragraph 2)
While we call ourselves animal lovers, and lavish kindness on our dogs and cats, we inflict brutal
deprivations on billions of animals that are just as capable of suffering.
2. the advent of (paragraph 3)
I personally feel that the shift away from the consumption of meat will occur with the advent of
cheap artificial meat (otherwise known as lab-grown meat).
3. augment (paragraph 5)
you need only release the animals on to the land and they do the rest, browsing out tree seedlings,
simplifying complex ecosystems. Their keepers augment this assault by slaughtering large
predators.
4. refute (paragraph 8)
Understandably, those who keep animals have sought to refute such facts, using an ingenious argument.
Livestock grazing, they claim, can suck carbon out of the atmosphere and store it in the soil,
reducing or even reversing global warming.
5. substantiate (paragraph 8)
rancher Allan Savory claims that his "holistic" grazing could absorb enough carbon to return the
world's atmosphere to pre-industrial levels. His inability, when I interviewed him, to
substantiate his claims has done nothing to dent the idea's popularity.
6. dent (paragraph 8)
rancher Allan Savory claims that his "holistic" grazing could absorb enough carbon to return the
world's atmosphere to pre-industrial levels. His inability, when I interviewed him, to substantiate
his claims has done nothing to dent the idea's popularity.
7. be hard to swallow (paragraph 15)
For many, the end of animal farming might be hard to swallow at the beginning, but we are a
resilient and adaptable species. We have undergone a series of astonishing changes: the adoption
of sedentarism, of agriculture, of cities, of industry.
Write your own sentences with the vocabulary
1. lavish
2. the advent of
3. augment
4. refute
5. substantiate
6. dent
7. be hard to swallow
EXERCISE 43
Can we trust our memories?
Summary
This article is about memory. In the article the author talks to two sisters who have released a book
on the topic. The article explains how our brains store and recall information and memories, and
how our memories are modified over time. It also discusses why we are more likely to remember
certain events over others.
Of all the mysteries of the mind, perhaps none is greater than memory. Why do we remember
some things and forget others? What is memory's relationship to consciousness and our identities?
Where and how is memory stored? How reliable are our memories? And why did our memory
evolve to be so rich and detailed?
In a sense there are two ways of looking at memory: the literary and the scientific. There is the
Proustian model in which memory is about meaning, an exploration of the self, a subjective
journey into the past. And then there is the analytical model, where memory is subjected to
neurological study, psychological experiments and magnetic resonance imaging.
A new book, or rather a recent translation of a two-year-old book, by a pair of Norwegian sisters seeks to marry the two approaches. The co-authors of Memories: The Science of Remembering and Forgetting are Ylva Eastby, a clinical neuropsychologist, and Hilde Eastby, an editor and novelist.
Their book begins in 1564, with Julius Caesar Arantius performing a dissection of a human brain.
Cutting deep into the temporal lobe, where it meets the brain stem, he encounters a small,
wormlike ridge of tissue that resembles a seahorse. He calls it hippocampus, or "horse sea monster", in Latin. The significance of this discovery would take almost 400 years to come to light.
As with so much to do with our understanding of the brain, the breakthrough came through a
malfunction. An American named Henry Molaison suffered from acute epilepsy, and in 1953 he
underwent an operation in which the hippocampi from both sides of his brain were removed. The
surgery succeeded in controlling his epilepsy but at the cost of putting an end to his memory.
For the remaining 55 years of his life, he was unable to form new memories, or rather, new explicit memories. Memory is divided up in various ways: first, between long-term and short-term memory. Explicit memory, which is part of long-term memory, is the product of conscious thought, while implicit memory enables unthinking rote actions. Explicit memory is itself subdivided between episodic memory (the autobiographical record of our experience) and semantic memory, which concerns general knowledge or "textbook learning".
Molaison's short-term (sometimes called working) memory remained intact, as did his procedural
(or implicit) memory. He could learn new motor skills. He just couldn't remember learning them.
Molaison's case had a profound effect on the study of memory, chiefly in placing the hippocampus
as a vital part of memory formation and retention.
As the Eastby sisters acknowledge, memory is an area of neuropsychology that is fraught with
dispute. While everyone agrees that the role of the hippocampus is central to memory, there is
division over whether it is simply the part of the brain that consolidates memory or whether it is also employed to overwrite the original memory with each new recollection of it.
In fact, almost everything about memory remains open to debate, speculation, theorising. It used
to be thought, for example, that memories were like files that we retrieved or, if we couldn't recall them, lost. Now it's much more widely understood that memories are created and then recreated,
each time in a slightly different way.
When I meet the Eastby sisters in a central London hotel, I tell them that this is a disturbing
insight because it suggests that our memories are inherently unreliable. If they are subject to
continual change they cannot represent reality with any degree of accuracy. What's more, as we
don't realise that a memory has changed, because it always feels "real" to us, it means it's a kind of
illusion. So does this mean we shouldn't give much credence to the accuracy of our memories?
"It's a very difficult question," says Ylva, who is the younger sister by four years. "To some extent
we can rely on our memory, but not on every little minute detail. In the court of law you have to
decide where the tipping point is. But philosophically speaking, we can say we can never fully trust
our memory."
The strange thing is, some people do recall things in extraordinary detail, while others (the depressed, for example) find it hard to bring to mind more than the basic raw facts of a memory.
But this does not necessarily mean that one memory is more reliable than the others. As Hilde
explains, it's a bit like writing autobiographical fiction. The writer takes the basic facts and then
fills in the gaps with their imagination. That's what we do with memory as well. It's a creative act
as much as it is an accurate representation of the past.
Exactly how a memory is created and then becomes something we can recollect at a later time is
not fully known. But current thinking divides the process into three stages: encoding (the
formation of a new memory), consolidation (its transformation into a long-term memory) and
recollection (the retrieval of contextual information about a given event).
Functional magnetic resonance imaging (fMRI) scans can show us the different locations of
activity in the brain while we remember an incident, but they don't tell us exactly how the memory
is formed, retained or retrieved.
"It's impossible to capture the whole essence of memory in one fMRI experiment," says Ylva. But
she says her colleagues in Oslo have conducted an fMRI experiment in which participants are
shown pictures in various contexts and 40 minutes later are rescanned while they recall whatever
they can about the images. "There is a strong suggestion," says Ylva, "that the hippocampus is
more active during the encoding of these long-term memories. So fMRI has its uses but they come
with many caveats."
Memory, as an observable process, remains shrouded in myth. One of the most insistent myths
stems from Freud's early understanding of repressed memories. The basic idea is that children
unconsciously bury traumatic memories that can sometimes later be retrieved under hypnosis or
psychoanalysis. Although there is some evidence to support a limited conception of repressed, or
more accurately dissociated memory, it's a field of research that has suffered from claims of false
memory manipulation. In the 1990s, the satanic child sex abuse cases that sprang up, and built on
recovered memories, were proven to be erroneous.
"Most traumatic memories are highly memorable," says Ylva. "It's much more common that bad
memories haunt people."
In the book, the sisters accompany Adrian Pracon, who was the last person to be shot by Anders
Behring Breivik during the terror attack of 2011 on Utøya, which left 69 of Pracon's fellow summer
campers dead. Pracon has suffered from post-traumatic stress disorder (PTSD) since the slaughter.
His memories, he says, are still affected by the fact that his initial response to Breivik's shooting
spree was that it wasn't real. So he had to first take in that what he saw as an act, an illusion, was actually lethal reality. And then afterwards he had to work to accept that his fears, dreams and
even visions were not real but hallucinations.
That's a heavy toll to place on the human mind. Interestingly, research done at the University of
Oxford suggests that PTSD can be relieved by playing a computer game in the immediate hours
following a traumatic event. The theory is that the game weakens the strong visual memories
associated with the original trauma by offering competing images.
It's unlikely that someone who had been through the horror that Pracon experienced would then
pull out a smartphone and start playing with it, even if he had not been wounded. In his own
case, Pracon chose to face his fears by returning to the scene of the trauma. "He wanted to go,"
says Hilde. "He's been several times. He's still fighting his memories but now he's studying in
Oxford and he wants to be a researcher in terrorism and peace work." And Ylva adds, "The
important message is that trauma can last a lifetime. People around you have to realise that it's not
simply going to go away."
Indeed, what's striking about memory is that the things we want to remember are often difficult to
recall, while the things we'd prefer not to remember are impossible to forget. We don't seem, in
other words, to have much control over our episodic memories. Most of us will recall the
emotionally charged events of our lives: the first kiss, a near-death experience. But it's only really
semantic memory that we can reliably hold on to by actively memorising.
In terms of semantic memory, Hilde says that when she was a student there was no internet or
social media and she was able to concentrate for long periods on what she had to read. "I don't
think students have that kind of concentration today," she says, "and that will affect what they get
into their long-term memory and what they remember."
Ylva also worries that the constant distraction of social media will undermine the flow of our
episodic memories and our ability to let our mind wander in creative, unpredictable ways. This
kind of flow, or stream of recollection, brings us back to the Proustian richness of memory, in
which a tea cake can summon up a whole world lost to the passage of time.
Although it's obvious why a squirrel would need to remember where it has hidden its nuts, or why
a wildebeest might want to recall the places where lions like to gather, it's not so clear why
evolution has left us with this cinematic or novelistic memory of times past. The theory that has
recently gained most traction is that memory supplies evolutionary advantage less because of its
focus on the past than as a means of preparing for the future.
"It makes sense to see memory as the other side of the future," says Ylva, "because it allows you to
imagine scenarios. Not just knowing what is likely and what is not likely, but you can feel it and
think what that scenario makes you feel. Would you act on it if that's how it makes you feel? It's a
kind of fantastic time machine."
Perhaps that in the end is what sets humans apart: our ability to place ourselves in the future, to
anticipate and appreciate what a situation or set of circumstances might entail. And the great
temporal irony is that we've attained this facility through being able to recall with such haunting
vividness what took place in the past.
All these centuries on, we can see that the seahorse that Arantius found lurking in the depths of the
brain wasn't just a memory bank but an investment bank, storing up information and emotional
understanding for tomorrow and, unless a nuclear arms race intervenes, lifetimes to come.
Vocabulary exercise
1. come to light (paragraph 4)
The significance of this discovery would take almost 400 years to come to light.
2. chiefly (paragraph 7)
Molaison's case had a profound effect on the study of memory, chiefly in placing the
hippocampus as a vital part of memory formation and retention.
3. fraught with (paragraph 8)
memory is an area of neuropsychology that is fraught with dispute. While everyone agrees that
the role of the hippocampus is central to memory, there is division over whether it is simply the
part of the brain that consolidates memory or whether it is also employed to overwrite the original memory
4. give much credence to (paragraph 10)
as we don't realise that a memory has changed, because it always feels "real" to us, it means it's a
kind of illusion. So does this mean we shouldn't give much credence to the accuracy of our
memories?
5. bring to mind (paragraph 12)
The strange thing is, some people do recall things in extraordinary detail, while others (the depressed, for example) find it hard to bring to mind more than the basic raw facts of a
memory.
6. shrouded in (paragraph 16)
Memory, as an observable process, remains shrouded in myth. One of the most insistent myths
stems from Freud's early understanding of repressed memories.
7. gained most traction (paragraph 24)
The theory that has recently gained most traction is that memory supplies evolutionary
advantage less because of its focus on the past than as a means of preparing for the future.
Write your own sentences with the vocabulary
1. come to light
2. chiefly
3. fraught with
4. give much credence to
5. bring to mind
6. shrouded in
7. gained most traction
EXERCISE 44
Why Mary Shelley’s Frankenstein is still relevant
today
Summary
This article talks about Mary Shelley's Frankenstein. It talks about the origin of the story (what
inspired and influenced it) and explains why it has been both so popular with the public and
influential.
One night during the strangely cool and wet summer of 1816, a group of friends gathered in the
Villa Diodati on the shores of Lake Geneva. "We will each write a ghost story," Lord Byron
announced to the others, who included Byron's doctor John Polidori, Percy Shelley and the 18-
year-old Mary Wollstonecraft Godwin.
"I busied myself to think of a story," Mary wrote. "One which would speak to the mysterious fears
of our nature and awaken thrilling horror." Her tale became a novel, published two years later as
'Frankenstein, or The Modern Prometheus', the story of a young natural philosophy student, who,
burning with crazed ambition, brings a body to life but rejects his horrifying 'creature' in fear and
disgust.
Frankenstein is simultaneously the first science-fiction novel, a Gothic horror, a tragic romance
and a parable, all sewn into one towering body. Its two central tragedies (one of overreaching and the dangers of 'playing God', the other of parental abandonment and societal rejection) are as relevant today as ever.
Are there any characters more powerfully cemented in the popular imagination? The two
archetypes Mary brought to life, the 'creature' and the overambitious or 'mad scientist', lurched
and ranted their way off the page and on to stage and screen, electrifying theatre and filmgoers as
two of the lynchpins, not just of the horror genre, but of cinema itself.
Frankenstein spawned interpretations and parodies that reach from the very origins of the moving
image in Thomas Edison's horrifying 1910 short film, through Hollywood's Universal Pictures and
Britain's Hammer series, to The Rocky Horror Picture Show and it foreshadowed others, such as
2001: A Space Odyssey. There are Italian and Japanese Frankensteins and a Blaxploitation film,
Blackenstein; Mel Brooks, Kenneth Branagh and Tim Burton all have their own takes. The
characters or themes appear in or have inspired comic books, video games, spin-off novels, TV
series and songs by artists as diverse as Ice Cube, Metallica and T'Pau: "It was a flight on the wings
of a young girl's dreams/ That flew too far away/ And we could make the monster live again…"
As a parable, the novel has been used as an argument both for and against slavery and revolution,
vivisection and the Empire, and as a dialogue between history and progress, religion and atheism.
The prefix 'Franken-' thrives in the modern lexicon as a byword for any anxiety about science,
scientists and the human body, and has been used to shape worries about the atomic bomb, GM
crops, strange foods, stem cell research and both to characterise and assuage fears about AI. In the
two centuries since she wrote it, Mary's tale, in the words of Bobby Pickett's comedy song, Monster
Mash, has truly been "a graveyard smash" that "caught on in a flash".
'Mysterious fears of our nature'
Why was Mary's vision of 'science gone wrong' so ripe a vessel to carry our fears? She certainly
captured the zeitgeist: the early 19th Century teetered on the brink of the modern age, and
although the term 'science' existed, a 'scientist' didn't. Great change brings fear, as Fiona Sampson,
author of a new biography of Mary Shelley, said in a recent interview: "With modernity, with the sense that humans are what there is, comes a sense of anxiety about what humans can do and particularly an anxiety about science and technology." Frankenstein fused these contemporary concerns about the possibilities of science with fiction for the very first time, with electrifying results. Far from an outrageous fantasy, the novel imagined what could happen if people, and in particular overreaching or unhinged scientists, went too far.
Several points of popular 19th Century intellectual discourse appear in the novel. We know from
Mary's pieces of writing that in that Villa Diodati tableau of 1816, Shelley and Byron discussed the
'principle of life'. Contemporary debates raged on the nature of humanity and whether it was
possible to raise the dead. In the book's 1831 preface, Mary Shelley noted 'galvanism' as an
influence, referring to Luigi Galvani's experiments using electric currents to make frogs' legs
twitch. Galvani's nephew Giovanni Aldini would go further in 1803, using a newly-dead murderer
as his subject. Many of the doctors and thinkers at the heart of these debates such as the chemist
Sir Humphry Davy were connected to Mary's father, the pre-eminent intellectual William
Godwin, who himself had developed principles warning of the dangers and moral implications of
'overreaching'.
Despite these nuggets of contemporary thought, though, there's little in the way of tangible theory,
method, or scientific paraphernalia in Frankenstein. The climactic moment of creation is described
simply: "With an anxiety that almost amounted to agony, I collected the instruments of life around
me, that I might infuse a spark of being into the lifeless thing that lay at my feet." The 'science' of
the book is rooted in its time and yet timeless. It is so vague, therefore, as to provide an immediate
linguistic and visual reference point for moments of great change and fear.
Monster mash-up
But surely the reason we turn to Frankenstein when expressing an anxiety about science is down to
the impression the 'monster' and 'mad scientist' have had on our collective brains. How did this
happen? Just as the science is vague in the book, so is the description of the creature as he comes
to life. The moment is distilled into a single, bloodcurdling image:
"It was already one in the morning; the rain pattered dismally against the panes, and my candle
was nearly burnt out, when, by the glimmer of the half-extinguished light, I saw the dull yellow eye
of the creature open; it breathed hard, and a convulsive motion agitated its limbs."
With his 'yellow skin', 'watery eyes', 'shrivelled complexion' and 'straight black lips', the creature is
far from the beautiful ideal Frankenstein intended. This spare but resonant prose proved
irresistible to theatre and later film-makers and their audiences, as Christopher Frayling notes in
his book, Frankenstein: The First Two Hundred Years. The shocking novel became a scandalous
play and, of course, a huge hit, first in Britain and then abroad. These early plays, Frayling
argues, "set the tone for future dramatisations". They condensed the story into basic archetypes,
adding many of the most memorable elements audiences would recognise today, including the
comical lab assistant, the line "It lives!" and a bad-brained monster who doesn't speak.
It's a double-edged sword that the monstrous success of Hollywood's vision (James Whale's 1931
film for Universal starring Boris Karloff as the creature) in many ways secured the story's longevity
but obscured Mary's version of it. "Frankenstein [the film] created the definitive movie image of
the mad scientist, and in the process launched a thousand imitations," Frayling writes. "It fused a
domesticated form of Expressionism, overacting, an irreverent adaptation of an acknowledged
classic, European actors and visualisers and the American carnival tradition to create an
American genre. It began to look as though Hollywood had actually invented Frankenstein."
Making a myth
And so, a movie legend was born. Although Hollywood may have cherry-picked from Mary Shelley
to cement its version of the story, it's clear she also borrowed from historical myths to create her
own. The subtitle of Frankenstein, 'The Modern Prometheus', namechecks the figure of ancient
Greek and Latin mythology who variously steals fire from the gods and gives it to man (or makes a
man out of clay) and represents the dangers of overreaching. But the other great myth of the novel
is of God and Adam, and a quote from Paradise Lost appears in the epigraph to Frankenstein: "Did
I request thee, Maker, from my clay / To mould me man?". And it is above all the creature's
tragedy, and his humanity, that, in his cinematic transformation into a mute but terrifying monster, has been forgotten.
Mary gave him a voice and a literary education in order to express his thoughts and desires (he is
one of three narrators in the book). Like The Tempest's Caliban, to whom Shakespeare gives a
poetic and poignant speech, the creature's lament is haunting: "Remember that I am thy creature;
I ought to be thy Adam, but I am rather the fallen angel, whom thou drivest from joy for no
misdeed. Everywhere I see bliss, from which I alone am irrevocably excluded. I was benevolent and
good; misery made me a fiend. Make me happy, and I shall again be virtuous."
If we think of the creature as a badly made and unattractive human, his tragedy deepens. His first,
catastrophic rejection is by his creator (man, God), which Christopher Frayling calls "that post-
partum moment", and is often identified as a parental abandonment. If you consider that Mary
Shelley had lost her mother Mary Wollstonecraft at her own birth, had just buried her baby girl
and was looking after her pregnant step-sister as she was writing the book, which took exactly nine months to complete, the relevance of birth (and death) makes even more sense. The
baby/creature is alienated further as society recoils from him; he is made good, but it is the
rejection that creates his murderous revenge. As an allegory of our responsibility to children,
outsiders, or those who don't conform to conventional ideals of beauty, there isn't a stronger one.
"The way that we sometimes identify with Frankenstein, as we've all taken risks, we've all had
hubristic moments, and partly with the creature; they are both aspects of ourselves all our
selves" say Fiona Sampson tells, "they both speak to us about being human. And that's incredibly
powerful."
Some modern interpretations, such as Nick Dear's 2011 play (directed by Danny Boyle for the
National Theatre), have highlighted the question of who is the monster and who is the victim, with
the lead actors Jonny Lee Miller and Benedict Cumberbatch alternating roles each night. And in
this shapeshifting context, it's fitting that the creature is widely mistaken as 'Frankenstein', rather
than his creator.
So could a new, cinematic version of Frankenstein be on the cards? One which brings together the
creature's humanity, the mirroring of man and monster and contemporary anxieties? Just like the
Romantics, we edge towards a new modern age, but this time one of AI, which brings its own raft of
fears and moral quandaries. A clutch of recent films and TV shows have channelled Frankenstein,
exploring what it means to be human in the context of robotics and AI: Blade Runner, Ex
Machina, AI, Her, Humans and Westworld among them. But there is one film director (rumoured
to have been developing the story for a while) who might be able to recapture the creature's lament
as a parable for our time.
Collecting a Bafta for a different sci-fi monster fable, The Shape of Water, this year, Guillermo del
Toro thanked Mary Shelley, because "she used the plight of Caliban in her story and she gave
weight to the burden of Prometheus, and she gave voice to the voiceless and presence to the
invisible, and she showed me that sometimes to talk about monsters, we need to fabricate
monsters of our own, and parables do that for us".
When the then-Mary Godwin thought up her chilling parable that summer of 1816, she couldn't
have imagined how far it would go to shape culture and society, science and fear, well into the 21st
Century. "And now, once again, I bid my hideous progeny go forth and prosper," she wrote in the
preface to the 1831 edition. The creator and creature, parent and child, the writer and her story:
they went forth, and did they prosper? Two hundred years since its publication, Mary Shelley's
Frankenstein is no longer just a tale of "thrilling horror" but its own myth, sent out into the world.
Vocabulary exercise
1. spawned (paragraph 5)
Frankenstein spawned interpretations and parodies that reach from the very origins of the
moving image in Thomas Edison's horrifying 1910 short film, through Hollywood's Universal
Pictures and Britain's Hammer series
2. a byword for (paragraph 6)
The prefix 'Franken-' thrives in the modern lexicon as a byword for any anxiety about science,
scientists and the human body, and has been used to shape worries about the atomic bomb,
3. zeitgeist (paragraph 7)
She certainly captured the zeitgeist: the early 19th Century teetered on the brink of the modern
age, and although the term 'science' existed, a 'scientist' didn't.
4. is down to (paragraph 10)
But surely the reason we turn to Frankenstein when expressing an anxiety about science is down
to the impression the 'monster' and 'mad scientist' have had on our collective brains.
5. cherry-picked (paragraph 14)
And so, a movie legend was born. Although Hollywood may have cherry-picked from Mary
Shelley to cement its version of the story, it's clear she also borrowed from historical myths to
create her own.
6. be on the cards (paragraph 19)
So could a new, cinematic version of Frankenstein be on the cards? One which brings together
the creature's humanity, the mirroring of man and monster and contemporary anxieties?
7. plight (paragraph 20)
she used the plight of Caliban in her story and she gave weight to the burden of Prometheus, and
she gave voice to the voiceless and presence to the invisible
Write your own sentences with the vocabulary
1. spawned
2. a byword for
3. zeitgeist
4. is down to
5. cherry-picked
6. be on the cards
7. plight
EXERCISE 45
Customer complaints are good for business
Summary
This article talks about complaints from customers. It argues that customer complaints can now be
damaging to companies and suggests how they should be dealt with to limit any potential harm. It
also talks of some benefits that dealing with them well can give a business.
A few months back a four-year-old boy called Robin was "a bit poorly and desperate for
beans-on-toast" so his dad opened the kitchen cupboards only to discover that all the tins of baked
beans he'd bought were damaged. "We didn't want to risk the beans," says Robin's father, Matthew
Campbell-Hill, who lives in Cornwall in the UK. So he posted a message complaining about their
plight on Twitter that evening. "No beans on toast 4 sad toddler," he tweeted, tagging on the
hashtag "#heartstrings" for added effect.
By the next morning, Heinz had contacted Campbell-Hill, a former international wheelchair
fencer. To make up for the dented tins, the company sent Robin a Play-Doh set and then for his
birthday he received a card emblazoned with the words "Robin: No1 Baked Beans Fan" that was
signed by the whole office.
The result at the end of the day of what had begun as a complaint from a disgruntled customer was
a very happy little boy and a father singing the praises of Heinz. This episode even afforded Heinz
some free publicity on social media, with the boy and father of their own accord making a
video to thank Heinz for their action.
Many of us deal with moaning and complaints every day. Social media has made publicising these
gripes easy and sites like TripAdvisor have given customers the opportunity to voice their
displeasure like never before. The majority of people under the age of 50 in the US now check
online reviews before buying new products, according to the Pew Research Center.
And all this whinging can have a significant financial impact. Every star a company receives on the
online ratings site Yelp, for instance, can affect its revenue by 5-9% on average, according to
research by Michael Luca, from Harvard Business School.
So what can experts tell us about the best ways of keeping people onside?
Playing to the audience
Complaining is something we all do, says Robin Kowalski, a professor of psychology at Clemson
University in South Carolina, but the reasons behind it can vary. They can be social, so complaints
can serve as unifying icebreakers in a waiting room or on a crowded train, for example. A good
whinge about bad service can do wonders to bring strangers together. Others might complain
about wine at a reception to "give the appearance to others that they are discriminating in their
tastes". However, the most effective complainers are strategic, according to Kowalski. "They select
their audience and they complain in moderation."
But regardless of why someone is moaning, a key secret to minimising the impact of their
complaint is to be fast.
‘I'll be right with you….'
"If you respond within 24 hours you have a 33% higher probability of that person going back and
upgrading the star rating of their review," says Darnell Halloway, director of local business
outreach at the ratings website Yelp. "That's a really easy thing to do, but I still see many
businesses not respond in either a sufficiently apologetic or timely fashion to negative reviews. And
there are others who don't respond at all."
And doing nothing at all could be a fatal mistake, agrees James Kay, from the reviews site
TripAdvisor. In a survey of 14,000 TripAdvisor users, 85% agreed "a thoughtful management
response" to a bad review improves their impression of a hotel, he says.
Don't fight
Another crucial tactic is to avoid turning a complaint into a confrontation. Kay says 69% of his
site's users find an aggressive or defensive response to a bad review will make them less likely to
visit a business.
Perhaps the worst way to respond, however, is to simply say "you are wrong and we are right",
adds Mikkel Svane, Danish chief executive of Zendesk, which provides customer services support
for other businesses. "The instances when people are trying to cheat you are actually very, very
rare," he explains. And if most of your sales come from repeat business, by "spending so much
time trying to avoid cheats, and damaging everybody else, just hurts you."
Making it easy is also important, says Svane. Insisting that customers contact you by telephone
after they initially complain by email, for example, is only likely to make matters worse, especially
if they have to wait on hold to speak to someone, he adds.
'The customer is always right' wasn't always a mantra. In 1926, the Soviet Commissariat of
Commerce ordered complaint books to be placed in all retail stores, including the Glávnyj
Universáľnyj Magazín (Main Universal Store), the famous department store in Red Square better
known as GUM. The GUM "seldom found in favour of consumers", notes Soviet historian Marjorie
Hilton. When one customer, named only as comrade Zhivotovskii, reported that he couldn't buy
products 15 minutes before closing time, the complaints book records that store workers
responded by calling him a hooligan.
And when another customer complained about being sold dirty sugar, a salesman in the food
department named Smirnov responded simply by saying that someone had to receive it.
Frequent feedback
Encouraging people to give frequent feedback can also help you head problems off sooner. Places
like Heathrow Airport or the San Francisco 49ers stadium have fitted terminals that feature four
buttons with smiley and unhappy faces on them. The terminals can produce alerts in real time,
allowing people to react to potential problems, says Ville Levaniemi, co-founder of HappyOrNot,
the Finnish company that makes these feedback terminals. Data gathered by these terminals have
helped to reveal that people in airports around the world tend to be happiest at around 9 am and
least happy at 10 pm. Tuesdays also appear to be the happiest days while Sundays are the saddest.
Listen and learn
Perhaps the best way to treat complaints is to see them as a source of crucial, valuable information
which can be used to identify potentially disastrous consequences early on. "There's good data
there," says Alex Gillespie, a professor of psychology at the London School of Economics who flags
two significant examples of mismanaging available information.
Both Wells Fargo's 2016 account fraud scandal and safety flaws leading to the 2017 Grenfell Tower
fire in London were first raised in complaints that were ignored, he says.
Kensington Council responded by threatening one Grenfell resident, who raised concerns about
the safety of the cladding put on the outside of the tower, with legal action, saying the dweller's
blog posts constituted "defamation and harassment".
But the number of organisations treating complaints as a useful source of intelligence is growing,
says Gillespie. "I know some hospital CEOs read through their complaints every morning," he says.
So in short, it may pay off to try treating your customers more like Heinz baked beans eaters and
less like those in the former Soviet Union.
Vocabulary exercise
1. disgruntled (paragraph 3)
The result at the end of the day of what had begun as a complaint from a disgruntled customer
was a very happy little boy and a father singing the praises of Heinz.
2. afforded (paragraph 3)
This episode even afforded Heinz some free publicity on social media, with the boy and
father of their own accord making a video to thank Heinz for their action.
3. of their own accord (paragraph 3)
This episode even afforded Heinz some free publicity on social media, with the boy and father
of their own accord making a video to thank Heinz for their action.
4. gripes (paragraph 4)
Many of us deal with moaning and complaints every day. Social media has made publicising these
gripes easy and sites like TripAdvisor have given customers the opportunity to voice their
displeasure like never before.
5. whinge (paragraph 7)
They can be social, so complaints can serve as unifying icebreakers in a waiting room or on a
crowded train, for example. A good whinge about bad service can do wonders to bring strangers
together.
6. fashion (paragraph 9)
That's a really easy thing to do, but I still see many businesses not respond in either a sufficiently
apologetic or timely fashion to negative reviews. And there are others who don't respond at all.
7. flaws (paragraph 18)
Both Wells Fargo's 2016 account fraud scandal and safety flaws leading to the 2017 Grenfell
Tower fire in London were first raised in complaints that were ignored,
Write your own sentences with the vocabulary
1. disgruntled
2. afforded
3. of their own accord
4. gripes
5. whinge
6. fashion
7. flaws
EXERCISE 46
How the discovery of plate tectonics
revolutionised our understanding of our planet
Summary
This article talks about the theory of plate tectonics (that the surface of the planet is on top of a
layer of constantly moving plates/slabs of rock). It explains how scientists came up with the theory
and what scientists today believe is causing this process to happen.
What would you put on your list of the great scientific breakthroughs of the 20th Century? The
things which led to a paradigm shift in terms of our understanding of how the world (and the universe) functions. Would it be the theory of general relativity? Quantum mechanics? Or something
to do with genetics, perhaps?
One discovery that ought to be on everyone's list is plate tectonics: the description of how the
rigid outer shell of our planet (its lithosphere) moves and is recycled.
All the truly great ideas in science not only seem brilliantly simple and intuitive when they come
into focus, they also then have this extraordinary power to answer so many other questions in
Nature. And plate tectonics is no exception.
It tells us why the Himalayas are so tall; why Mexico experiences damaging earthquakes; why
Australia developed a diverse group of marsupials; and why Antarctica went into a deep freeze.
But when you are in the actual process of devising a theory, trying to make all the pieces of
evidence fit into a coherent narrative, the solution seems very far from obvious. "We had no idea what was the cause of earthquakes and volcanoes and things like that," recalls Dan McKenzie.
"It's extraordinarily difficult now to put yourself back into the state of mind that we had when I
was an undergraduate. And of course, the ideas I came up with are now taught in primary school."
McKenzie is regarded as one of the architects of modern plate tectonics theory. In 1967, he
published a paper in the journal Nature called "The North Pacific: An Example of Tectonics on a
Sphere" with Robert Parker, another Cambridge University graduate. It drew on a series of post-
war discoveries to paint a compelling picture of how the sea-floor in that part of the globe was able
to move, much like a curved paving stone, initiating earthquakes where it interacted with the other
great slabs of solid rock covering the Earth.
Although seen as an "aha!" moment, it was actually a long run-up to that point with a group of
committed scientists all sprinting and dipping for the line in 1966/67/68.
The story goes back to 1915 to Alfred Wegener, the German polar explorer and meteorologist, who
we most associate with the idea of continental drift. Wegener could see that the continents were
not static, that they must have shifted over time, and that the coastlines of South America and
Africa looked a suspiciously snug fit, as if they were once joined together. But he couldn't devise a
convincing mechanism to drive the motion.
Things really had to wait for WWII and the technologies it spawned, such as echosounders and
magnetometers. Developed to hunt down submarines and to find mines, these capabilities were
put to work in peacetime to investigate the properties of the sea floor. And it was these
investigations that revealed how plates are made at mid-ocean ridges and destroyed at their
margins where they underthrust the continents.
"The theory of plate tectonics really comes from the oceans. It was where we discovered things that
were fundamental for helping our understanding, like the oceanic ridges, subduction zones,
transform faults, and so forth," said John Dewey from Oxford University, another of those
sprinting scientists. "In the sixties, there was this massively increased knowledge through
oceanographic expeditions. Until that time we'd been peering down microscopes at thin sections of
rock, looking at faults and outcrops on land. And every now and then we'd be lucky enough to find
some component of plate tectonics, but we didn't know it was plate tectonics because we didn't
have the oceans. Without the oceans, you have nothing."
One of the key observations was that of sea-floor spreading: the process that creates new crust at
the ridges from upwelling magma. As the rock cools and moves away from a ridge, it locks into its
minerals the direction of Earth's magnetic field. And as the field reverses, as it does every few
hundred thousand years, so does the polarity in the rocks, presenting a zebra-like, striped pattern
to traversing research ships and their magnetometers.
In 1967, all roads led to the spring meeting of the American Geophysical Union. Some 70 abstracts
(summaries of research) were submitted on sea-floor spreading alone. A heady time, it must have
been. The coherent narrative of plate tectonics was about to fall rapidly into place. McKenzie's
paper was published in December that year. Concurrently, other researchers were extending the
model to describe all the other plates.
But one mechanism of plate tectonics eluded Wegener and his peers at the time. It is only recently
that scientists have seen how the weight of underthrusting plates plays such a major role in driving
the whole system. Much as the slinky dog needs no encouragement once it has started its journey
downstairs, so the descending rock appears to have an unstoppable momentum.
Tony Watts, an Oxford geologist explains: "We know that the fastest moving plates, the ones
spreading the fastest, have very long slabs, long pieces of lithosphere, that are going under at ocean
trenches. So, it looks as though something called 'trench pull' is a very important force and it's
generally agreed to be larger than 'ridge push'. Of course, everything is connected in the deep
mantle through convection, but trench pull does seem to be key."
Nothing is ever done and dusted in science. There is still a lively debate, for example, about
precisely when and how plate tectonics got going on Earth. More than four billion years ago as the
result of asteroid impacts, argued one recent Nature Geoscience paper.
Today, we have extraordinary tools such as GPS and satellite radar interferometry that allow us to
watch the march of the plates, millimetre by millimetre. Even more remarkable is the technique of
seismic tomography, which uses the signals of earthquakes to build 3D visualisations of sunken
rock slabs.
"Plate tectonics was a revolution. I'm a geologist, so I would say that," Tony Watts says. "Looking
back, the history of geology is very long. The Geological Society was founded in 1807, so plate
tectonics came really late in its history. But it needed the right technologies and a relatively small
group of scientists from strongly led institutions to make it happen.
"The other thing to remember is how young some of these scientists were: Dan McKenzie had only
just finished his PhD thesis."
Vocabulary exercise
1. paradigm shift (paragraph 1)
The things which led to a paradigm shift in terms of our understanding of how the world (and the universe) functions. Would it be the theory of general relativity? Quantum mechanics?
2. devising (paragraph 5)
But when you are in the actual process of devising a theory, trying to make all the pieces of
evidence fit into a coherent narrative, the solution seems very far from obvious.
3. drew on (paragraph 6)
It drew on a series of post-war discoveries to paint a compelling picture of how the sea-floor in
that part of the globe was able to move, much like a curved paving stone
4. spawned (paragraph 9)
Things really had to wait for WWII and the technologies it spawned, such as echosounders and
magnetometers.
5. and so forth (paragraph 10)
The theory of plate tectonics really comes from the oceans. It was where we discovered things that
were fundamental for helping our understanding, like the oceanic ridges, subduction zones,
transform faults, and so forth
6. eluded (paragraph 13)
But one mechanism of plate tectonics eluded Wegener and his peers at the time. It is only recently
that scientists
7. done and dusted (paragraph 15)
Nothing is ever done and dusted in science. There is still a lively debate, for example, about
precisely when and how plate tectonics got going on Earth.
Write your own sentences with the vocabulary
1. paradigm shift
2. devising
3. drew on
4. spawned
5. and so forth
6. eluded
7. done and dusted
EXERCISE 47
Is there a conflict between saving the planet
and reducing poverty?
Summary
This article argues that policies aimed at protecting the environment do not inherently have
negative consequences on the poor in the world. In it, it justifies this opinion and argues that such
policies (if done well) are overwhelmingly in the poor's long-term interest.
It is the stick with which the greens are beaten daily: if we spend money on protecting the
environment, the poor will starve, or freeze to death, or will go without shoes and education. Most
of those making the argument that saving the planet hurts the poor do so disingenuously: they
support the conservative or libertarian politics that keep the poor in their place and ensure that the
1% harvest the lion's share of the world's resources.
Journalists writing for the corporate press, with views somewhere to the right of Vlad the Impaler
and no prior record of concern for the poor, suddenly become their champions when the interests
of the proprietorial class are threatened. If tar sands cannot be extracted in Canada, they maintain,
subsistence farmers in Africa will starve. If Tesco's profits are threatened, children will die of
malaria. When it is done cleverly, promoting the interests of corporations and the ultra-rich under
the guise of concern for the poor is an effective public relations strategy.
Even so, it is true that there is sometimes a clash between environmental policies and social
justice, especially when the policies have been poorly designed or implemented.
But whilst individual policies can be bad for the poor, is the protection of the environment
inherently incompatible with social justice? This was the question that was addressed in a recent
discussion paper published by Oxfam.
Oxfam, remember, exists to defend the world's poorest people and help them to escape from
poverty. Unlike the right wing bloggers, it is motivated by genuine concern for social justice. So
when it investigates the question of whether concern for the environment conflicts with
development, we should take notice. Kate Raworth, who wrote the report, has created an essential
template for deciding whether economic activity will help or harm humanity and the biosphere.
She points out that in rough terms we already know how to identify the social justice line below
which no one should fall, and the destruction line above which human impacts should not rise.
The social justice line is set by the eleven priorities listed by the participating governments in the
1992 Rio summit (often referred to as the “Earth Summit”). These are:
Food security, adequate income, clean water and good sanitation, effective healthcare,
access to education, decent work, modern energy services, resilience to shocks, gender
equality, social equity, and a voice in democratic politics.
The destruction line is set by the nine planetary boundaries identified in Stockholm in 2009 by a
group of earth system scientists. They identified the levels beyond which we endanger the earth's
living systems of:
Climate change, biodiversity loss, nitrogen and phosphate use, ozone depletion, ocean
acidification, freshwater use, changes in land use, particles in the atmosphere, and chemical
pollution.
We are already living above the line on the first three indicators, and close to it on several others.
The space between these two lines is the "safe and just space for humanity to thrive in". So what
happens if everyone below the social justice line rises above it? Does that push us irrevocably over
the destruction line? The answer, she shows, is no.
For example, providing enough food for the 13% of the world's people who suffer from hunger
means raising world supplies by just 1%. Providing electricity to the 19% of people who currently
have none would raise global carbon emissions by just 1%. Bringing everyone above the global
absolute poverty line ($1.25 a day) would need just 0.2% of global income.
In other words, it is not the needs of the poor that threaten the biosphere, but the demands of the
rich. Raworth points out that half the world's carbon emissions are produced by just 11% of its
people, while, with grim symmetry, 50% of the world's people produce just 11% of its emissions.
Animal feed used in the EU alone, which accounts for just 7% of the world's people, uses up 33% of
the planet's sustainable nitrogen budget. "Excessive resource use by the world's richest 10% of
consumers," she notes, "crowds out much-needed resource use by billions of other people."
The politically easy way to tackle poverty is to try to raise the living standards of the poor while
doing nothing to curb the consumption of the rich. This is the strategy almost all governments
follow. It is a formula for environmental disaster, which, in turn, spreads poverty and deprivation.
As Oxfam's paper says, social justice is impossible without "far greater global equity in the use of
natural resources, with the greatest reductions coming from the world's richest consumers".
This is not to suggest that all measures intended to protect the environment are socially just.
Raworth identifies the evictions by biofuels companies and plantation firms harvesting carbon
credits as examples of the pursuit of supposedly green policies which harm the poor. But before
the sneering starts, remember that the fight against both these blights has been led by
environmentalists, who recognised their destructive potential long before the libertarians now using them as evidence of the perfidy of the green movement did.
But there are far more cases in which poverty has been exacerbated by the lack of environmental
policies. The Oxfam paper points out that crossing any of the nine planetary boundaries can
"severely undermine human development, first and foremost for women and men living in
poverty." Climate change, for example, is already hammering the lives of some of the world's
poorest people. You can see the consequences of crossing another planetary boundary in the report
just published by the New Economics Foundation, which shows that overfishing has destroyed
around 100,000 jobs.
Just as mistaken green policies can damage the poor, mistaken poverty relief policies can damage
the environment. For example, where fertiliser subsidies encourage farmers to use more than they
need, as they do in China, money supposed to relieve poverty serves only to pollute the water
supply. Development which has no regard for whom or what it harms is not development. It is the
opposite of progress, damaging the Earth's capacity to support us and the rest of its living systems.
But extreme poverty, just like extreme wealth, can also damage the environment. People without
access to clean energy sources, for example, are often forced to use wood for cooking. This
shortens their lives as they inhale the smoke, destroys forests and exacerbates global warming by
producing black carbon.
With a few exceptions, none of which should be hard to remedy, delivering social justice and
protecting the environment are not only compatible: they are each indispensable to the other. Only
through social justice, which must include the redistribution of the world's ridiculously
concentrated wealth, can the environment and the lives of the world's poorest be defended.
Those who consume far more resources than they require destroy the life chances of those whose
survival depends upon consuming more. As Gandhi said, the Earth provides enough to satisfy
everyone's need but not everyone's greed.
Vocabulary exercise
1. disingenuously (paragraph 1)
Most of those making the argument that saving the planet hurts the poor do so disingenuously:
they support the conservative or libertarian politics that keep the poor in their place
2. under the guise of (paragraph 2)
When it is done cleverly, promoting the interests of corporations and the ultra-rich under the
guise of concern for the poor is an effective public relations strategy.
3. irrevocably (paragraph 11)
So what happens if everyone below the social justice line rises above it? Does that push us
irrevocably over the destruction line?
4. in turn (paragraph 14)
This is the strategy almost all governments follow. It is a formula for environmental disaster,
which, in turn, spreads poverty and deprivation.
5. the pursuit of (paragraph 15)
This is not to suggest that all measures intended to protect the environment are socially just.
Raworth identifies the evictions by biofuels companies and plantation firms harvesting carbon
credits as examples of the pursuit of supposedly green policies which harm the poor.
6. relieve (paragraph 17)
For example, where fertiliser subsidies encourage farmers to use more than they need, as they do
in China, money supposed to relieve poverty serves only to pollute the water supply.
7. no regard for (paragraph 17)
Development which has no regard for whom or what it harms is not development. It is the
opposite of progress, damaging the Earth's capacity to support us and the rest of its living systems.
Write your own sentences with the vocabulary
1. disingenuously
2. under the guise of
3. irrevocably
4. in turn
5. the pursuit of
6. relieve
7. no regard for
EXERCISE 48
No more rock stars anymore
Summary
This article explains why people are still listening to and being influenced by music from the 60s
and 70s. Focusing on the 70s rock group "Led Zeppelin" it gives a number of reasons why there
appears to be no modern equivalents of the great bands of the past today. It ends by speculating
what music people will be listening to in the future.
Last week I found myself involved in passionate discussions with several people, all centered
around the fact that there seems to be no Led Zeppelin for the current generation of music fans...
or, in a way, Led Zeppelin is this generation's Led Zeppelin.
Think of it this way: somewhere in suburbia, there's a twenty-year-old sitting in a cold garage with
a guitar, practicing riffs from songs that are older than he is. All his friends know the songs. In fact,
two of them are wearing shirts sporting the band's name. When the session is over, they get into a
car where they listen to Led Zeppelin's Houses of the Holy. They know all the words. They know all
the solos. They know this band as if they had grown up with them. And, in a way, they did, even if
Led Zeppelin was their parents' favorite band and they inherited the albums from them. Even if
they're too young to have experienced the band first hand or seen the original band live.
Thirty years later, Led Zeppelin is still a household word. Much like the Rolling Stones and The
Who of the same era, the band's appeal has withstood the test of time and their impact is still being felt by young musicians today.
But this begs a serious question: where is this generation's Led Zeppelin? Why, when asked what
their favorite rock and roll bands are, do young people still point to artists that had their heyday
before they were even born? Is there no band, no artist around today who epitomizes the standard
of rock and roll that was set by Led Zeppelin and their contemporaries? Is there no band whose
name will be sported on the tattered t-shirts of kids 30 years from now?
There's really not, not in the broad, sweeping sense of the impact that Led Zeppelin has had on the music scene
for so many years. It's not the fault of the artists and bands out there that there's no one with that
kind of staying power; we just live in a different time. The future has arrived, and this future does
not favor longevity.
From the 1980s, when remixes and longer versions of songs allowed for people to become attached
to singular songs rather than albums, right up through today, where technology plays a pivotal role
in how we find music, we have moved past the days when a radio station was the only means of
discovering new bands. And the way we now discover new bands has dramatically changed the way
we listen to music: we no longer just listen to music, we consume music.
Look at how we discover music now. It's as easy as turning on your computer. There are literally
hundreds of sites where you can make your own radio stations, create playlists, listen to streams of
music built around your mood, your favorite song, a single word. We put the name of a band into a
search bar and get a list of similar bands within seconds. A minute later you're listening to a string
of songs from bands you never heard before. It's the drive-through of music; pull up, order and
leave. There's less of a tendency to linger on one band, less of a chance to become attached or
devoted to a performer the way it happened when listening options were more limited.
Says producer/mixer/engineer Jonathan Wyman: "The sheer amount of music that comes out now
makes it difficult to keep up with new music. I'm 100% for the democratization of music recording
and am not a major label sympathizer, but keeping up with and separating the wheat from the
chaff in new music is daunting. The listener is faced with a constant barrage of new stimulus, and
it's hard to wade through it. I'm sometimes paralyzed in a pre-caffeinated haze in the morning,
trying to figure out something to listen to while I walk my dog."
It used to be that you'd listen to whatever the local rock radio station was throwing at you. In the
glory days of rock - in the prime time of Zeppelin, The Who, The Stones - you only knew what you
heard. Sure, there were magazines like Creem and Rolling Stone, but the only place to actually
hear any new music was on the radio, so the chance of discovering something new and exciting was
limited. Unless you were turned onto college radio or low-powered community stations, you ended
up buying what the major rock stations sold you - and back then, you were sold not only the
superbands, but their personalities as well. And those personalities were the key to hooking you
into their music.
"In the Zeppelin-era music was inextricably tied to an entire cultural zeitgeist," says George
Howard, COO of Daytrotter, Concert Vault, and Paste Magazine. "We can’t really imagine the 60's
and 70’s - in all of their cultural significance stripped of a musical component. For young people
of the 60’s-70’s, music held a similar power as the Internet holds in today’s young people's lives.
The Internet, specifically, and technology, generally, is clearly the dominant cultural signifier for
today’s younger generation. Certainly, music as accessed through technological innovations is
part of it, but it’s just a part, and not the whole."
Timothy Rosenberg, Course Director of Critical Listening for Music Professionals at Full Sail
University in Orlando, agrees. "The vast majority of the public has become detached from artists
because they no longer buy albums and instead buy single tracks. If the public won't be fans for
twelve tracks in a row, it's not surprising that they won't coalesce around a band, such as Led
Zeppelin. Believe it or not, I think this is healthy. Fans are becoming more attached to the music
rather than an artist's persona."
Perhaps he's right: there is a certain detachment now between rock bands and their music. Gone
are the days when a band was sold to you as a whole package; the music and the makers of the
music. Profiles in music magazines are no longer of the tabloid variety; it's more about the
making of the music now and less about them trashing their hotel rooms or their excesses and
eccentricities. And because we have the means to listen to so many different genres of music today,
because we tend to buy singles rather than albums, perhaps we don't become attached to bands the
way it happened in Zeppelin's day. We don't make rock superstars anymore. The superstars of
today, the ones who are sold as personalities, are in pop or country music.
So, the broadening of our horizons has meant the demise of the superstar rock band. The widening
of the definition of rock and the creation of so many genres has diluted rock music to the point
where one can't even be sure there's such a thing as straight rock 'n roll anymore, in the vein of a
Zeppelin or Stones. There are certainly no rock stars that command the cover of magazines the way
Robert Plant did. There are no bands that demand attention from the media the way the super
bands of the late 70s did.
But is that a bad thing?
The dilution of the genre has meant more exposure for a larger number of bands. Musicians no
longer need to get their demos into the hands of a major label record exec to be heard and
noticed; it's a matter of getting on the Internet and selling yourself and your band to thousands of
people at once by way of modern technology.
The more bands that introduce themselves to us, the more ways we have to discover and hear new
music, the more we listen to. Instead of an album collection that consists mainly of five or six
bands, we have playlists with 100 different artists on them. Here, dilution of the market has both
good and bad consequences: "The market caters to the consumers who demand convenience and
quantity and the product reflects that," says Wyman. "It induces a kind of vicious cycle where people
don't want to pay for music, so the people who make music have less means with which to make
the music, which can result in less than stellar results, which makes people less inclined to buy it.
Lather, rinse, repeat."
There are those who disagree with that assessment, however: perennial outsider-band another
cultural landslide agrees that "music has become a disposable commodity, more so than at any
other time in the history of music;" then follows that statement with "but it's not because there are
too many bands, nor is it because music is so readily available on demand. It's not because of a
drop in the quality of recordings, nor is it caused by the scapegoats of 'free' music, or 'amateur
musicians' or 'piracy'. It's because so much of the heavily-promoted music you see on mass-media
is created specifically for that commodification, to be something as disposable as a plastic wrapper
on a bar of soap. Why would you want to pay for that?"
"The truth is: people want to support artists & musicians. Look at the various Kickstarter
campaigns that have been well thought out. And people will gladly pay for music, and not just
singles; for example, Bandcamp has become a huge success for musicians - and hell, on Bandcamp,
albums outsell single cuts by a 5 to 1 ratio. But now it's absolutely vital for any musician starting
out to forge a personal connection with their listener. Connect with them. Be Real. That's what
your listener wants. And, most importantly: the musician must put their music first. Your need to
make music must be the sole reason you make music. Being real works. Being a rockstar doesn't."
Perhaps we no longer have a need for superbands and rock stars. We are happy with the wide
variety of music we get to hear; and the worshipping at the altar of rock gods no longer fits in our
society. Rather, we worship at the altar of the technology that connects us with the music we
choose to listen to.
And that's the main thing right there: We choose. We're no longer told what to listen to by a radio
station; we create our own stations. We find our own way. We discover on our own and that is
what keeps any one band from domination of the genre. Says Rosenberg, "Despite this great
interconnectedness the Internet has brought, I think it is a lot more difficult for artists to reach
new fans."
The dearth of superstar rock bands and the reason so many younger people are listening to
bands of the 70s is perhaps more about what is not available than what is. It's about the music
itself. It's that raw rock sound that came out of the era of the superbands that seems to be missing.
With technology comes new ways to produce, mix and create music. Everything seems sleeker and
smoother now. Perhaps the reason high school kids are listening to older music is because there's a
certain raw feeling to it. Rock music back then felt like the unleashing of an animal. That's not to
say there are no bands producing something with that kind of power now, but that it's presented in
a different manner.
The days of guitar and drum solos are long gone; that kind of indulgence is almost frowned upon
in today's tightly packaged music. Maybe the reason people who didn't grow up on Zeppelin and
The Who listen to those bands is because they're finding something that's missing from the current
rock scene. That twenty year old in his garage playing "Living Loving Maid" is looking for the
decadence inherent in the rock genre of the late 70s. "There's no band today that has that kind of
power," said Danny, a 19 year old guitarist. "We still listen to today's music, we listen to everything
from Radiohead to The Black Keys, but when we play, there's nothing like Led Zeppelin. Nothing
today comparable to that sound."
When asked what music they think would hold up twenty or thirty years from now the way
Zeppelin has, the members of Danny's band mention artists like Nirvana, Metallica, Pearl Jam and
Radiohead. "And anything Jack White does," said Danny.
So twenty or thirty years from now, will the kids be wearing Nirvana t-shirts? Will they be sitting
in their garages practicing Radiohead riffs? When you walk into the equivalent of a Guitar Center
in the year 2033, will the guy trying out a new guitar be playing a Black Keys song or will we still
hear the strains of the first notes of "Stairway to Heaven?"
Wyman thinks there may be something on the horizon. "The last time we saw something like [the
late 70s], where the charts were filled with records that would eventually stand the test of time,
was the early 90's with Pearl Jam and Nirvana. Interestingly enough, today we're almost
equidistant in time from the release of the debut albums of Pearl Jam and Nirvana as the releases
of those albums were to the debut of Led Zeppelin and the demise of the Beatles. Now maybe we're
due for the next big shift."
Maybe he's right and thirty years from now, a band we've yet to discover will be the one everyone is
still listening to. While wearing their Nirvana shirts and learning the chords to "Rock and Roll."
Vocabulary exercise
1. sporting (paragraph 2)
All his friends know the songs. In fact, two of them are wearing shirts sporting the band's name.
2. epitomizes (paragraph 4)
Is there no band, no artist around today who epitomizes the standard of rock and roll that was
set by Led Zeppelin and their contemporaries?
3. pivotal (paragraph 6)
right up through today, where technology plays a pivotal role in how we find music, we have
moved past the days when a radio station was the only means of discovering new bands.
4. separating the wheat from the chaff (paragraph 8)
I'm 100% for the democratization of music recording and am not a major label sympathizer, but
keeping up with and separating the wheat from the chaff in new music is daunting. The
listener is faced with a constant barrage of new stimulus, and it's hard to wade through it.
5. demise (paragraph 13)
So, the broadening of our horizons has meant the demise of the superstar rock band.
6. caters (paragraph 15)
The market caters to the consumers who demand convenience and quantity and the product
reflects that
7. frowned upon (paragraph 21)
The days of guitar and drum solos are long gone; that kind of indulgence is almost frowned upon
in today's tightly packaged music.
Write your own sentences with the vocabulary
1. sporting
2. epitomizes
3. pivotal
4. separating the wheat from the chaff
5. demise
6. caters
7. frowned upon
EXERCISE 49
Are the rich better than the rest of us?
Summary
This article discusses whether being successful and wealthy is down to intelligence and genes, and
why this belief is popular amongst many of the wealthy in society today. It argues that it isn't, and
that a number of different factors determine why some people end up more successful than others.
It also discusses some of the problems that this belief can cause in society.
London's mayor, Boris Johnson, drew heavy criticism from many quarters for stating that
economic inequality can be attributed, in part, to IQ. "I am afraid that the violent economic
centrifuge of competition is operating on human beings who are already very far from equal in raw
ability," he told an audience at the Centre for Policy Studies.
That's a satisfying worldview for someone who is successful and considers himself unusually
bright. But a quick look at the data shows the limitations of raw intelligence and hard work as an
explanation for inequality. The income distribution in the United States provides a good example.
In 2012 the top 0.01 percent of households earned an average of $10.25 million, while the mean
household income for the country overall was $51,000. Are top earners 200 times as smart as
everybody else? Doubtful. Do they have the capacity to work 200 times more hours in the week?
Even more doubtful. Many forces out of their control, including sheer luck, are at play.
But say you're in that top 0.01 percent or even the top 50 percent. Would you really want to
admit that happenstance is the principal factor for your wealth or position? Wouldn't you rather
believe that it is down to your own innate abilities and work ethic, and that you truly deserve it?
Wouldn't you like to think that any resources you inherited are rightfully yours, as the descendant
of fundamentally exceptional people? Of course you would. New research indicates that in order to
justify your lifestyle, you might even adjust your ideas about the power of genes. The lower classes
are not merely unfortunate, according to the upper classes; they are genetically inferior.
In several experiments published in the Journal of Personality and Social Psychology, Michael
Kraus of the University of Illinois at Urbana-Champaign and Dacher Keltner of the University of
California at Berkeley explored what they call social class essentialism.
Essentialism is a belief that you can group together either things or people based on some
attributes which they all share. For example, researchers have found that people generally hold
essentialist beliefs about biological categories such as gender, race, and sexuality, as well as about
more cultural ones such as nationality, religion, and political orientation. What this does is to form
distinct groups within society who believe that they have a distinct identity. Unfortunately,
essentialism does lead to stereotyping, prejudice, and a disinclination to mingle with outsiders
from our own group. In their research, Kraus and Keltner wanted to know if we also see social class
as such an essential category.
They started by developing a scale for measuring essentialistic beliefs about class. A diverse group
of American adults rated their endorsement of such statements as "I think even if everyone wore
the same clothing, people would still be able to tell your social class," and "It is possible to
determine one's social class by examining their genes." On average, people rated the items a 3.43,
where 1 means completely disagree and 7 means completely agree.
Participants also gave a subjective rating, from 1 to 10, of their own social class rank within their
community, based on education, income, and occupational status. The researchers found that the
higher the participants' social class, the stronger their belief in social class essentialism.
Kraus and Keltner delved deeper into the connection between social class and social class
essentialism by testing participants' belief in a just world, asking them to evaluate such statements
as "I feel that people get what they are entitled to have." The psychologist Melvin Lerner came up
with the "Just World Theory" in the 1960s, arguing that we're both motivated and conditioned to
believe that the world is a fair place. The alternative a universe where bad things happen to good
people is too upsetting. So we engage defense mechanisms such as blaming the victim "She
shouldn't have dressed that way" or trusting that positive and negative events will be balanced
out by karma, a form of magical thinking.
Kraus and Keltner found that the higher people perceived their social class to be, the more strongly
they endorsed just world beliefs, and that this difference explained their increased social class
essentialism: Apparently if you feel that you're doing well, you want to believe success comes to
those who deserve it, and therefore those of lower status must not deserve it. (Incidentally, the
argument that you "deserve" anything because of your genes is philosophically contentious; none
of us did anything to earn our genes.)
Higher-class Americans may well believe life is fair because they're motivated to defend their egos
and lifestyle, but there's an additional twist to their greater belief in a just world. Numerous
researchers have found that upper-class people are more likely to explain other people's behavior
by appealing to innate traits and abilities, whereas lower-class individuals note circumstances and
environmental forces. This matches reality in many ways for these respective groups. The rich do
generally have the freedom to pursue their desires and strengths, while for the poor, external
limitations often outnumber their opportunities. The poor realize they could have the best genes in
the world and still end up working at McDonald's. The wealthy might not merely be turning a blind
eye to such realities; due to their personal experience, they might actually have a blind spot.
There is a grain of truth to social class essentialism; the few studies on the subject estimate that
income, educational attainment, and occupational status are perhaps at least 10 percent genetic
(and maybe much more). It makes sense that talent and drive, some portion of which are related to
genetic variation, contribute to success. But that is a far cry from saying "It is possible to determine
one's social class by examining his or her genes." Such a statement ignores the role of wealth
inheritance, the social connections one shares with one's parents, or the educational opportunities
family money can buy - not to mention strokes of good or bad luck (that are not tied to karma).
One repercussion of social class essentialism is a lack of forgiveness for criminals and cheaters. In
one of Kraus and Keltner's experiments, subjects read one of two fake scientific articles: One
reported that we genetically inherit our work ethic, intelligence, and ultimately our socioeconomic
status; the other held that socioeconomic status has no genetic basis. Then the participants read
scenarios about someone cheating on an academic exam and rated how much they endorsed
various punishments, including "restorative" ones such as community service and ethics training.
Those who read the essay supporting essentialism showed more resistance to restorative
punishments. "When people cheat the academic system they unfairly ascend the social class
hierarchy," Kraus says. Some of us might attribute a cheater's seeming subpar intelligence or
preparation or integrity to upbringing and see room for improvement. An essentialist will see bad
genes. And if you think people can't change, then there's no use in trying to help them.
Kraus and Keltner think social class essentialism (and the historically even more harmful race
essentialism) might push our justice system toward giving certain people long prison sentences
instead of chances at rehabilitation. Spreading the notion that social categories are constructed
could counteract the belief that lower-class people's behavior is genetically determined, and it
could also lead to greater support for drug treatment programs, affirmative action, Head Start, an
increased minimum wage, and multiple other causes benefiting the less affluent.
Social class essentialism is basically inciting social Darwinism. This distortion of Darwin's theory
of evolution, in one interpretation, is the belief that only the fit survive and thrive and, further,
that this process should be accepted or even accelerated by public policy. It's an example of the
logical fallacy known as the "appeal to nature" - what is natural is good. (If that were true,
technology and medicine would be moral abominations.) Social class essentialism entails belief in
economic survival of the fittest as a fact. It might also entail belief in survival of the fittest as a
desired end, given the results linking it to reduced support for restorative interventions. It's one
thing to say, "Those people can't change, so let's not waste our time." It's another to say, "Those
people can't change, so let's lock them away." Or eradicate them: Only four years ago, the then Lt.
Gov. of South Carolina Andre Bauer told a town hall meeting that poor people, like "stray animals,"
should not be fed, "because they breed."
Kraus' even more recent work, not yet published, goes beyond what high-status individuals believe
in order to maintain the status hierarchy and explores what they do. Consider Congress. Members'
median net worth, in 2011, was $966,000. "They're quite wealthy individuals," Kraus says. "And
because they're wealthy they're not only more likely to believe in the tenets of essentialism, but they
also actually have power to enact laws to maintain inequality." A top adviser to the U.K.'s
education secretary just produced a report arguing that "discussions on issues such as social
mobility entirely ignore genetics." He claimed that school performance is as much as 70 percent
genetic and criticized England's Sure Start program as a waste of money. (As Scott Barry Kaufman,
an intelligence researcher at NYU and the author of Ungifted, points out, "Since genes are always
interacting with environmental triggers, there is simply no way to parse how much of an individual
child's performance is due to nature or nurture.")
It may be easy to demonize upper-class politicians as out of touch. But given how easily Kraus and
Keltner triggered social class essentialism in everyday Americans, and given the frequency with
which we toss around terms like white trash, redneck, welfare queen, and (across the pond) chav,
we might want to question the degree to which we all see status as a marker of a deeper identity. If
you were born under other circumstances, your résumé might look very different. Privilege is often
invisible, especially one's own.
Vocabulary exercise
1. disinclination to (paragraph 5)
Unfortunately, essentialism does lead to stereotyping, prejudice, and a disinclination to mingle
with outsiders from our own group.
2. mingle (paragraph 5)
Unfortunately, essentialism does lead to stereotyping, prejudice, and a disinclination to mingle
with outsiders from our own group.
3. endorsed (paragraph 9)
Kraus and Keltner found that the higher people perceived their social class to be, the more strongly
they endorsed just world beliefs, and that this difference explained their increased social class
essentialism
4. contentious (paragraph 9)
Incidentally, the argument that you "deserve" anything because of your genes is philosophically
contentious; none of us did anything to earn our genes.
5. turning a blind eye (paragraph 10)
The poor realize they could have the best genes in the world and still end up working at
McDonald's. The wealthy might not merely be turning a blind eye to such realities; due to their
personal experience, they might actually have a blind spot.
6. is a far cry from (paragraph 11)
It makes sense that talent and drive, some portion of which are related to genetic variation,
contribute to success. But that is a far cry from saying "It is possible to determine one's social
class by examining his or her genes."
7. fallacy (paragraph 14)
It's an example of the logical fallacy known as the "appeal to nature" - what is natural is good. (If
that were true, technology and medicine would be moral abominations.)
Write your own sentences with the vocabulary
1. disinclination to
2. mingle
3. endorsed
4. contentious
5. turning a blind eye
6. is a far cry from
7. fallacy
(;(5&,6( 50
Globalisation and how societies have always
evolved
Summary
This is an article on the evolution of human societies. In it the author explains how human
societies and their cultures have progressively changed and grown in scale (and why) throughout
history and that today's globalisation is simply the latest phase of this. It goes on to discuss some
major challenges to social cohesion we might face in the future, and how globalisation may in fact
be able to help us overcome some of the problems they may cause.
Stroll into your local Starbucks and you will find yourself part of a cultural experiment on a scale
never seen before on this planet. In less than half a century, the coffee chain has grown from a
single outlet in Seattle to nearly 20,000 shops in around 60 countries. Each year, its near identical
stores serve cups of near identical coffee in near identical cups to hundreds of thousands of people.
For the first time in history, your morning cappuccino is the same no matter whether you are
sipping it in Tokyo, New York, Bangkok or Buenos Aires.
Of course, it is not just Starbucks. Select any global brand from Coca Cola to Facebook and the
chances are you will see or feel their presence in most countries around the world. It is easy to see
this homogenization in terms of loss of diversity, identity or the westernization of society. But, the
rapid pace of change also raises the more interesting question of why - over our relatively short
history - humans have had so many distinct cultures in the first place. And, if diversity is a part of
our psychological make-up, how will we fare in a world that is increasingly bringing together
people from different cultural backgrounds and traditions?
To get at this question, I argue that we need to understand what I call our unique 'capacity for
culture'. This is the trait that makes us stand alone amongst all other animals. Put simply, we can pick up
where others have left off, not having to re-learn our cultural knowledge each generation, as good
ideas build successively upon others that came before them, or are combined with other ideas
giving rise to new inventions.
Take the axe as an example. At first we built simple objects like hand axes chipped or "flaked" from
larger stones. But these would give way to more sophisticated axes, and when someone had the
idea to combine a shaped club with one of these hand axes, the first "hafted axe" was born.
Similarly when someone had the idea to stretch a vine between the ends of a bent stick the first
bow was born and you can be sure the first arrow soon followed.
Life savers
In more recent history, this 'cumulative cultural adaptation' that our capacity for culture grants
has been accelerated by the rise of archiving technology. Papyrus scrolls, books and the internet
allow us to even more effectively share knowledge with successive generations, opening up an
unbridgeable gap in the evolutionary potential between humans and all other animals.
Chimpanzees, for example, are renowned for their "tool use" and we think this is evidence of their
intelligence. But you could go away for a million years and upon your return the chimpanzees
would still be using the same sticks to 'fish' for termites and the same rocks to crack open nuts -
their "cultures" do not cumulatively adapt. Rather than picking up where others have left off, they
start over every generation. Just think if you had to re-discover how to make fire, tan leather,
extract bronze or iron from earth, or build a smart phone from scratch. That is what it is like to be
the other animals.
Not so for humans. Around 60,000 years ago, cumulative cultural adaptation was what propelled
modern humans out of Africa in small tribal groups, by enabling us to acquire knowledge and
produce technologies suitable to different environments. Eventually these tribes would occupy
nearly every environment on Earth - from living on ice to surviving in deserts or steaming jungles,
even becoming sea-going mariners as the Polynesians did. And amongst each one we see distinct
sets of beliefs, customs, language and religion.
The importance of the tribe in our evolutionary history has meant that natural selection has
favoured in us a suite of psychological dispositions for making our cultures work and for defending
them against competitors. These traits include cooperation, seeking affiliations, a predilection to
coordinating our activities, and tendencies to trade and exchange goods and services. Thus, we
have taken cooperation and sociality beyond the good relations among family members that
dominate the rest of the animal kingdom, to making cooperation work among wider groups of
people.
In fact, we have evolved a set of dispositions that allow us to treat other members of our tribe or
society as "honorary relatives", thereby unlocking a range of emotions that we would normally
reserve for other family members. A good example of this so-called cultural nepotism is the
visceral feeling you have when one of your nation's soldiers is lost in battle - just compare that
feeling to how you react to the news of a similar loss of a soldier from another nation. We also see
our cultural nepotism in the dispositions we have to hold doors for people, give up our seats on
trains, contribute to charities or when we fight for our countries in a war.
Of course, this nepotism is not just a positive force. It is also a trait that can be exploited by
propagandists to produce Kamikaze-like or other suicidal behaviors. But the success of
cooperation as a strategy has seen our species for at least the last 10,000 years on a long
evolutionary trajectory towards living in larger and larger social groupings that bring together
people from different tribal origins. The economies of scale that we realize even in a small
grouping 'scale up' in larger groups, so much so that larger groups can often afford to have armies,
to build defensive walls around their settlements. Large groups also benefit from the efficiencies
that flow from a division of labour, and from access to a vast shared store of information, skills,
technology and good luck.
'One world'
And so in a surprising turn, the very psychology that allows us to form and cooperate in small
tribal groups, makes it possible for us to form into the larger social groupings of the modern world.
Thus, early in our history most of us lived in small bands of maybe 50 to 200 people. At some point
tribes formed that were essentially coalitions or bands of bands. Collections of tribes later formed
into chiefdoms in which for the first time in our history a single ruler emerged.
Eventually several chiefdoms would come together in nascent city-states such as Catal-Huyuk in
present day Turkey or Jericho in the Palestinian West-Bank, both around 10,000 years old. City-
states gave way to nation states, and eventually to collections of states such as the United
Kingdom. At each step formerly competing entities discovered that cooperation could return better
outcomes than endless cycles of betrayal and revenge.
This is not to say that cooperation is easy, or that it is never subject to reversals. Just look at the
outpouring of cultural diversity that sprang up with the collapse of the Soviet Union. Despite being
suppressed for decades, almost overnight Turkmenistan, Uzbekistan, Kazakhstan, Chechnya,
Tajikistan, Moldova, Kyrgyzstan, and Dagestan reappeared, all differentiated by culture, ethnicity,
and language.
So how will these two competing tendencies that comprise our evolved tribal psychology - one an
ancient disposition to produce lots of different cultures, the other an ability to extend honorary
relative status to others even in large groupings - play out in our modern, interconnected and
globalised world? There is in principle no reason to rule out a "one world" culture, and in some
respects, as Starbucks vividly illustrates, we are already well on the way.
Thus, it seems our tribal psychology can extend to groups of seemingly nearly any size. In large
countries such as the United Kingdom, Japan, the United States, Brazil, India and China hundreds
of millions and even over a billion people can all be united around a single tribal identity as British
or Japanese, American, Indian or Chinese and they will have a tendency to direct their cultural
nepotism towards these other members of their now highly extended tribe. If you take this
behaviour for granted, just imagine 100,000 dogs or hyenas packed into a sporting arena - not a
pretty sight.
'Bumpy road'
But two factors looming on the horizon are likely to slow the rate at which cultural unification will
happen. One is resources, the other is demography. Cooperation has worked throughout history
because large collections of people have been able to use resources more effectively and provide
greater prosperity and protection than smaller groups. But that could change as resources become
scarce.
This must be one of the most pressing social questions we can ask because if people begin to think
they have reached what we might call 'peak standard of living' then they will naturally become
more self-interested as the returns from cooperation begin to leak away. After all, why cooperate
when there are no spoils to divide?
Related to this, the dominant demographic trend of the next century will be the movement of
people from poorer to richer regions of the world. Diverse people will be brought together who
have little common cultural identity of the sort that historically has prompted our cultural
nepotism, and this will happen at rates that exceed those at which they can be culturally
integrated.
At first, I believe, these factors will cause people to pull back from whatever level of cultural
'scaling' they have achieved to the previous level. An example is the nations of the European Union
squabbling over national versus EU rights and privileges. A more troubling example might be the
rise of nationalist groups and political parties, such as Marine le Pen's Front National in France, or
similar far right groups in Britain and several European nations.
Then, if the success of modern societies up to this point is anything to go by, new and ever more
heterogeneous and resource-scarce societies will increasingly depend upon clear enforcement of
cultural or democratically derived rules to maintain stability, and will creak under the strain of
smaller social groupings seeking to disengage further from the whole.
One early harbinger of a sense of decline in the sense of 'social relatedness' might be the increasing
tendencies of people to avoid risk, to expect safety, to be vigilant about fairness, to require and to
be granted "rights". These might all be symptoms of a greater sense of self-interest, springing from
perhaps by declines in the average amount of "togetherness" we feel. When this happens, we
naturally turn inwards, effectively reverting to our earlier evolutionary instincts, to a time when we
relied on kin selection or cooperation among families for our needs to be met.
Against this backdrop the seemingly unstoppable and ever accelerating cultural homogenization
around the world brought about by travel, the internet and social networking, although often
decried, is probably a good thing even if it means the loss of cultural diversity: it increases our
sense of togetherness via the sense of a shared culture. In fact, the breaking down of cultural barriers -
unfashionable as this can sound - is probably one of the few things that societies can do to
increase harmony among ever more heterogeneous peoples.
So, to my mind, there is little doubt that the next century is going to be a time of great uncertainty
and upheaval as resources, money and space become ever more scarce. It is going to be a bumpy
road with many setbacks and conflicts. But if there was ever a species that could tackle these
challenges it is our own. It might be surprising, but our genes, in the form of our capacity for
culture, have created in us a machine capable of greater cooperation, inventiveness and common
good than any other on Earth. And of course it means you can always find a cappuccino just the
way you like it no matter where you wake up.
Vocabulary exercise
1. renowned for (paragraph 6)
Chimpanzees, for example, are renowned for their "tool use" and we think this is evidence of
their intelligence.
2. propelled (paragraph 7)
Around 60,000 years ago, cumulative cultural adaptation was what propelled modern humans
out of Africa in small tribal groups, by enabling us to acquire knowledge and produce technologies
suitable to different environments.
3. predilection (paragraph 8)
These traits include cooperation, seeking affiliations, a predilection to coordinating our
activities, and tendencies to trade and exchange goods and services.
4. looming (paragraph 16)
But two factors looming on the horizon are likely to slow the rate at which cultural unification
will happen. One is resources, the other is demography.
5. harbinger of (paragraph 21)
One early harbinger of a sense of decline in the sense of 'social relatedness' might be the
increasing tendencies of people to avoid risk,
6. springing from (paragraph 21)
These might all be symptoms of a greater sense of self-interest, springing perhaps from declines
in the average amount of "togetherness" we feel.
7. upheaval (paragraph 23)
So, to my mind, there is little doubt that the next century is going to be a time of great uncertainty
and upheaval as resources, money and space become ever more scarce.
Write your own sentences with the vocabulary
1. renowned for
2. propelled
3. predilection
4. looming
5. harbinger of
6. springing from
7. upheaval
(;(5&,6( 51
The reasons for the decline in wine
consumption in France
Summary
This article talks about the falling consumption of wine in recent years in France. Explaining the
reasons for this trend, it says why some people are worried about this and why there may be some
optimism about the future of wine drinking in the country.
Does the seemingly perpetual decline in consumption of France's national drink symbolise a
corresponding decline in French civilisation?
The question worries a lot of people - oenophiles, cultural commentators, flag-wavers for French
exceptionalism - all of whom have watched with consternation the gradual disappearance of wine
from the national dinner table. Recent figures merely confirm what has been observed for years,
that the number of regular drinkers of wine in France is in freefall. In 1980 more than half of
adults were consuming wine on a near-daily basis. Today that figure has slumped to 17%.
Meanwhile, the proportion of French people who never drink wine at all has doubled to 38%.
In 1965, the amount of wine consumed per head of population was 160 litres a year. In 2010 that
had fallen to 57 litres, and will most likely dip to no more than 30 litres in the years ahead.
At dinner, wine is the third most popular drink after tap and bottled water. Sodas and fruit juices
are catching up fast and are now just a short way behind. According to a recent study in the
International Journal of Entrepreneurship, changes in French drinking habits are clearly visible
through the attitudes of successive generations.
People in their 60s and 70s grew up with wine on the table at every meal. For them, wine remains
an essential part of their patrimoine, or cultural heritage. The middle generation - now in their 40s
and 50s - sees wine as a more occasional indulgence. They compensate for declining consumption
by spending more money. They like to think they drink less but better. And members of the third
generation - the internet generation - do not even start taking an interest in wine until their mid-
to-late 20s. For them, wine is a product like any other, and they need persuading that it is worth
their money.
"What has happened is a progressive erosion of wine's identity, and of its sacred and imaginary
representations," say the report's authors, Thierry Lorey and Pascal Poutet. "Over three
generations, this has led to the changes in France's habits of consumption and the steep declines in
the volume of wine that is drunk."
The fall in consumption is mirrored in other countries such as Italy and Spain which are also
historic producers of wine. However, it has not dented the prospects for exports of French wine,
which continues to hold its own abroad.
But what worries people are the effects of the change on life inside France, on French civilisation.
They fear that time-honoured French values - conviviality, tradition and appreciation of the good
things in life - are on the way out. Taking their place is a utilitarian, "hygieno-moralistic" new
order, cynically purveyed by an alliance of politics, media and global business.
"Wine is not some trophy product that we roll out to celebrate the grand occasions or to show off
our social status. It is a table drink intended to accompany the meal and provide a complement to
whatever is on our plate," says food writer Perico Legasse. "Wine is an element in the meal. But
what has happened is that it's gone from being popular to elitist. It is totally ridiculous. It should
be perfectly possible to drink a moderate amount of good quality wine on a daily basis."
For Legasse, part of the fault is a changing national approach to food and gastronomy as a whole.
"For many years people have been steadily abandoning what in our French sociology we referred
to as the repas, or meal, by which I mean a convivial gathering around a table, and not the
individualised, accelerated version we see today," he says. "The traditional family meal is withering
away. Instead we have a purely technical form of nourishment, whose aim is to make sure we fuel
up as effectively and as quickly as possible."
Wine drinking in France is certainly part of a long-standing way of life, but it would be wrong to
suppose that the French have always drunk as much as they did, say, 50 years ago. In the Middle
Ages, wine was commonly drunk (at least in wine-growing areas), but it was a weak concoction and
popular mainly because - unlike water - it was safe. The Revolution of 1789 dispelled the
aristocratic image that wine had, by then, acquired, and the economic changes of the 19th Century
helped it permeate society as a whole.
Denis Saverot, editor of La Revue des Vins de France magazine, says the rise of wine mirrored the
rise of the working class. But it was the war of 1914-18 that really secured its position in the hearts
of the French. "Basically the soldiers went over the top pickled on pinard, the strong, low-quality
wine which was supplied in bulk. Up until then the Normans, the Bretons, the people of Picardy
and the north, they had never touched wine. But they learned in the trenches," says Saverot. "After
that in France we generalised the consumption of cheap wine so that by the early 1960’s there were
drinking outlets, cafes and bars, everywhere. Tiny villages would have five or six. But that was the
zenith of consumption. Since then consumption has been in decline."
Everyone agrees on the main factors. Fewer people work outdoors, so the fortifying qualities of
wine are less in demand. Offices require people to stay awake, so lunchtimes are, by and large, dry.
Then there is the rise of the car ("wine's worst enemy" for Saverot), changing demographics, with
France's large Muslim minority, and the growing popularity of beers and mixers.
But for Saverot there is another factor which he ascribes as being fundamental for this change. "It
is our bourgeois, technocratic elite with their campaigns against drink-driving and alcoholism,
lumping wine in with every other type of alcohol, even though it should be regarded as totally
different," he says. "Recently I heard one senior health official saying that wine causes cancer 'from
the very first glass'. That coming from a Frenchman. I was flabbergasted. In hock with the health
lobby and the politically correct, our elites prefer to keep the country on chemical anti-depressants
and wean us off wine.” And if one were to look at the figures, it would appear, on the surface at
least, that such an assertion may have some validity. "In the 1960s, we were drinking 160 litres each
a year and weren't taking any pills," he adds. "Today we consume 80 million packets of anti-
depressants, and wine sales are collapsing. Wine is the subtlest, most civilised, most noble of anti-
depressants. But look at our villages. The village bar has gone, replaced by a pharmacy."
Veteran observer of his nation's way of life, Oxford-based French writer Theodore Zeldin agrees
that over the last 10 years a business-style corporate culture has made huge inroads into France -
the bane of all those who prefer to take the time to savour things. "Companionship has been
replaced by networking. Business means busy-ness, and in that way we are becoming like
everywhere else," he says.
But Zeldin refuses to abandon hope. "The old French art de vivre is still there. It's an ideal. It's a bit
like the ideal of an English gentleman. You don't often find an English gentleman, but the ideal is
there and it informs society as a whole," he says. "It is the same with our art de vivre. Of course
times have changed, but it still survives. It is that feeling you get in France that in human relations
we need to do more than just conduct business. We have a duty to entertain, to converse. And in
France thanks to our education system we still have that ability to converse in a general,
universalist way that has been lost elsewhere.
"That is the art de vivre. It is about taking your time. And wine is part of it, because with wine you
have to take your time and savour the taste of every mouthful. After all, that is one of the great
things about wine. You can't swig it like you do with beer."
Vocabulary exercise
1. consternation (paragraph 2)
The question worries a lot of people - oenophiles, cultural commentators, flag-wavers for French
exceptionalism - all of whom have watched with consternation the gradual disappearance of
wine from the national dinner table.
2. slumped (paragraph 2)
In 1980 more than half of adults were consuming wine on a near-daily basis. Today that figure has
slumped to 17%.
3. indulgence (paragraph 5)
The middle generation - now in their 40s and 50s - sees wine as a more occasional indulgence.
They compensate for declining consumption by spending more money.
4. the zenith (paragraph 12)
so that by the early 1960’s there were drinking outlets, cafes and bars, everywhere. Tiny villages
would have five or six. But that was the zenith of consumption. Since then consumption has been
in decline."
5. ascribes (paragraph 14)
But for Saverot there is another factor which he ascribes as being fundamental for this change. "It
is our bourgeois,
6. flabbergasted (paragraph 14)
Recently I heard one senior health official saying that wine causes cancer 'from the very first glass'.
That coming from a Frenchman. I was flabbergasted.
7. made huge inroads (paragraph 15)
Veteran observer of his nation's way of life, Oxford-based French writer Theodore Zeldin agrees
that over the last 10 years a business-style corporate culture has made huge inroads into France
Write your own sentences with the vocabulary
1. consternation
2. slumped
3. indulgence
4. the zenith
5. ascribes
6. flabbergasted
7. made huge inroads
EXERCISE 52
The merits of being a fair-weather sports fan
Summary
This article is an attack on the culture of only ever supporting one particular sports team and why
there is nothing wrong with being a fair-weather fan (one who supports a team when they are
doing well). In it, the writer justifies his position, ridicules those who spend their lives supporting
the same team and argues that doing so has negative consequences for sport in general.
When I was 10 years old, I was brainwashed. It was a perfectly legal maneuver. My uncle, who
lived in New York City, observed that I liked to play baseball and took great care to impress upon
me the superiority of the Yankees. This was the mid-1990s, an auspicious time to be hypnotized by
the team who wore pinstripes - a team that conquered all who stood in front of them.
I grew up in McLean, Virginia, some 300 miles from the Bronx, but my parents stood by and
allowed the indoctrination. My mom regarded sports the way a vegan looks at a porterhouse steak,
while my dad's appraisal of the Washington, D.C., sports scene was straightforward: The Redskins
were sinfully bad, the Bullets were worse, and hockey was too boring to merit an opinion. Lacking
an appealing hometown team, I became a kind of free-agent fan, seeking out teams - the Yankees,
the Miami Heat, the Indianapolis Colts - with likable stars who won.
And so, without intending to adopt any sort of triumphalist attitude toward sports, I became that
most despised of figures in the eyes of the diehard: a fair-weather fan. For most of my life, this has
been a heavy shame. I have muttered shy apologies to friends for not standing by the hometown
teams, even as most of them failed to escape the vortex of mediocrity.
But I'm done apologizing. In fact, I'm pretty sure that I'm right and everybody else is wrong.
Rooting for winners to win is more than acceptable - it's commendable. Fans shouldn't put up
with awfully managed teams for decades just because their parents liked those teams, as if sports
were governed by the same rules and customs as medieval inheritance. Fans should feel free to
shop for teams the way they do for any other product.
What I'm proposing here is a theory of fluid fandom that would encourage, as opposed to
stigmatize, promiscuous sports allegiances. By permanently anchoring themselves to teams from
their hometown or even an adopted town, sports fans consign themselves to needless misery. They
also distort the marketplace by sending a signal to team owners that winning is not an essential
component of fans' long-term interests. Fluid fandom, I submit, is the emotionally, civically, and
maybe even morally superior way to consume sports.
Fair-weather fan is a slur I have long endured but misunderstood. (Only in a country with tens of
millions of citizens rooting for regular losers based north of the 40th parallel - in such climes as
Buffalo, Milwaukee, and Minneapolis - could one sneer at the idea of "fair weather.") When
Charles Darwin wrote of man bearing "the indelible stamp of his lowly origins," he referred to all
Homo sapiens, but the phrase might better fit its most sorrowful subspecies: people who still watch
the Cleveland Browns.
I can hear the critics now: What of loyalty? What of the ecstatic, once-in-a-lifetime feeling of
having endured decades of failure only to be present at the championship moment?
When I hear "once in a lifetime," I think: Only once? Why is fleeting happiness a worthwhile trade-
off for decades of agony? The belief that eventual victory will bring lasting happiness is a classic
delusion; behavioral psychologists chalk it up to the "durability bias." People assume that all sorts
of positive events - a promotion, a wedding, a championship - will punch a ticket to permanent
happiness. But no such ticket exists. All life is suffering, as Buddhists and Buffalo Bills fans will
attest, and the suffering of sports fans is a biological fact. Studies led by researchers at the
University of Utah and Indiana University have found that self-esteem, mood, and even
testosterone levels plummet in male fans after a loss. Why relegate yourself to such misery?
One of the stronger arguments for unconditionally supporting even bad local teams is that doing
so fosters a civic union that transcends class, politics, and other divisions, making small talk
possible across otherwise unbridgeable divides. But this serves more aptly as an argument for the
unifying power of sports than for any particular allegiance; many of my most entertaining sports
conversations have been colorful exchanges with Red Sox fans. If anything, being a Yankees fan is
a boon to sports banter; try getting somebody outside of Maryland to talk with you about the 2018
Baltimore Orioles.
Rather than fostering civic bonds, blind loyalty makes our cities worse. Millions of people pledging
unconditional devotion to any business threatens to imbue it with monopoly power. This power is
plainly corrupting.
America's professional athletic leagues are essentially cartels. By prohibiting cities from having too
many competing teams, they obligate fans to support local powerhouses owned by billionaires who
use fans' undying loyalty as emotional leverage for municipal extortion. Many of these owners
make ludicrous financial demands of city and state governments. London, by contrast, has more
than a dozen professional soccer clubs that move up and down through various leagues; as such,
none are viewed as civically essential, and local governments tend not to subsidize their stadiums.
According to Andrew Zimbalist, an economics professor at Smith College who has written
prolifically about the financial aspects of sports, state and local governments in the United States
have spent up to $24 billion on professional, amateur, and college stadiums since 1990. That is far
beyond any sum the community value of a team could justify. In fact, some studies suggest that
sports franchises might have a small negative effect on economic activity and employment. Not
only do their stadiums tend to offer mostly part-time or temporary work, but they funnel local
spending - which might otherwise go to an array of restaurants, bars, bowling alleys, and cinemas -
into the pockets of millionaire athletes and billionaire owners.
When fat-cat owners aren't begging cities for money, they're raking it in from you, the ever-loyal
fan who is willing to continue buying their overpriced merchandise. The New York Knicks haven't
made the conference finals in the 21st century. But what urgency does James Dolan, the Knicks
owner, feel to build a more competitive squad when he knows that you and Spike Lee will be there
through thick and thin?
Of course, a devoted fan base doesn't necessarily give rise to a crappy team; the Yankees are almost
never bad. The point is that unconditional devotion permits incompetent management to go
unpunished. In a world of more-fluid fandom, Dolan couldn't count on inelastic demand for his
terrible product. Last summer, on draft night, with the Knicks holding the eighth pick, Dolan
skipped the event to play a gig with his blues band. This man is not loyal to you. Why are you loyal
to him?
Whether or not traditionalists approve of it, however, a new age of fandom may be emerging, one
that is less arbitrary and shifts the balance of power to fans. For starters, fantasy sports allow
anyone to build teams comprising athletes from across a given league. Inevitably, fantasy sports
muddy hometown alliances by encouraging people to root for stars outside their media market,
stars whose success might well come at the expense of the local team. What's more, because
fantasy teams draw players from all over a given league, they require users to follow an entire
sport, pulling attention away from any one team and its traditional rivals. And abandoning the
hometown team is hardly a sacrifice in an age of sports-viewing packages - MLB at Bat, for
example, or NBA League Pass - that allow people to pay a flat fee and see most games around the
country. These packages make it easier to root for far-flung teams.
Seismic shifts in the leagues themselves are also accelerating change. Savvy coaches and team
owners, many supported by advanced analytics, trade players when their value peaks, even if that
means shipping off hometown heroes still in their prime. "It's a business," goes the standard defense
of such moves. In the past, players have been pilloried for having the gall to pursue their own self-
interest, financial or otherwise, in choosing which team to play for. But that stigma is
disappearing. Stars now routinely use their free agency to maximize their salary or shop for
championship-contending teams, challenging the assumption that they owe a vassal's fealty to
owners who themselves display no such loyalty. In turn, fans are further encouraged toward a
more Marxist view of the sporting world, where allegiances flow through labor (individual players)
rather than through capital (franchises).
Some might fear that my proposal will lead to a new age of inequality in which everyone roots for
the winners, leaving struggling franchises to wallow in the basement. I'm not too concerned.
American sports have a Marxist tradition of their own, in which leagues redistribute the wealth of
successful big-market teams and otherwise encourage parity. The poorest teams receive the
equivalent of welfare paid out of shared television revenue, and the poorest-performing teams
typically get the top draft picks. What none of these teams will get, if fluid fandom catches on, is a
guarantee that local fans will watch and attend games should teams fail to invest those handouts
wisely.
After the New York Yankees, my most embarrassing fair-weather allegiance is to LeBron James. In
2016, James won the NBA championship with the Cleveland Cavaliers in a dramatic seven-game
series. With blanched knuckles, I watched the final game from a New York City bar with two
brothers of Iranian descent whose parents lived in Puerto Rico. None of us had the slightest
connection, ancestral or autobiographical, to northeast Ohio. But when time expired, we screamed
in joy, jumped up and down, and embraced total strangers. A traditionalist would argue that we'd
cheated by cutting to the front of some imaginary line snaking toward such championship glories.
Sure, maybe I enjoyed the victory a bit less than some long-suffering Clevelander. But at the end of
the day, it's only a game.
Vocabulary exercise
1. rooting for (paragraph 4)
In fact, I'm pretty sure that I'm right and everybody else is wrong. Rooting for winners to win is
more than acceptable; it's commendable.
2. consign (paragraph 5)
By permanently anchoring themselves to teams from their hometown or even an adopted town,
sports fans consign themselves to needless misery.
3. trade-off (paragraph 8)
When I hear "once in a lifetime," I think: Only once? Why is fleeting happiness a worthwhile
trade-off for decades of agony?
4. attest (paragraph 8)
All life is suffering, as Buddhists and Buffalo Bills fans will attest, and the suffering of sports fans
is a biological fact.
5. imbue it with (paragraph 10)
Millions of people pledging unconditional devotion to any business threatens to imbue it with
monopoly power.
6. raking it in (paragraph 13)
When fat-cat owners aren't begging cities for money, they're raking it in from you, the ever-loyal
fan who is willing to continue buying their overpriced merchandise.
7. at the expense of (paragraph 15)
Inevitably, fantasy sports muddy hometown alliances by encouraging people to root for stars
outside their media market, stars whose success might well come at the expense of the local
team.
Write your own sentences with the vocabulary
1. rooting for
2. consign
3. trade-off
4. attest
5. imbue it with
6. raking it in
7. at the expense of
EXERCISE 53
The reason why homeopathic treatment does
work with patients
Summary
This is an article by a doctor who supports the use of homeopathic treatment. Despite stating in the article that there is no scientific evidence to support the use of this type of treatment, the doctor says that they have witnessed good results when using it with patients and explains what they feel accounts for this apparent contradiction. The article ends by raising some ethical issues around using homeopathy with patients.
Homeopathy has intrigued me for many years; in a way, I grew up with it. Our family doctor was a
homeopath, and my very first job as a junior doctor was in a German homeopathic hospital. For
the last two decades, I have investigated homeopathy scientifically. During this period, the
evidence has become more and more negative, and it is now quite clear that highly diluted
homeopathic remedies are pure placebos.
Two main axioms constitute the core principles of homeopathy. The "like cures like" principle
holds that, if a substance causes a symptom (e.g. onion makes my nose run), then that substance
can cure a disease that is characterised by a runny nose (e.g. hayfever or a common cold). The
second principle assumes that the serial dilution process used for homeopathic remedies renders
them not less but more potent (hence homeopaths call this process "potentiation").
Both of these axioms fly in the face of science. If they were true, much of what we learned in
physics and chemistry would be wrong. If anyone shows the concepts of homeopathy to be correct,
he or she becomes a serious contender for one or two Nobel prizes. Homeopaths often say that we
simply have not yet discovered how homeopathy works. The truth is that we know there is no
conceivable scientific explanation that could possibly explain it.
Yet as a clinician almost 30 years ago, I was astonished by the results achieved by
homeopathy. Many of my patients seemed to improve dramatically after receiving homeopathic
treatment. How was this possible?
In order to understand this apparent contradiction, we have to take a step back and consider the
complexities of the therapeutic response. Whenever a patient or a group of patients receive a
medical treatment and subsequently experience improvements, we automatically assume that the
improvement was caused by the intervention. This logical fallacy can be very misleading and has
hindered progress in medicine for hundreds of years. Of course, it could be the treatment – but
there are many other possibilities as well.
For instance, the condition could have improved on its own. Or the encounter between the
therapist and the patient could have been therapeutic without any meaningful contribution from
the treatment itself. Or the patient could have had high expectations of the treatment that prompted a powerful placebo response. Or the patient could have self-administered some other treatments concomitantly that caused the improvements. In other words, it is not the effect of the remedy per
se, but the non-specific effect of the context in which it is given that benefits the patient.
Because of these complexities, we must conduct clinical trials that differentiate between the
specific and non-specific effects of a treatment. In such studies, one group of patients receives the
experimental treatment (e.g. a homeopathic therapy) and another group receives a placebo. If well
designed, these studies expose the experimental group to the specific effect plus all the non-
specific effects of an intervention, while the control group is exposed to precisely the same range
and amount of non-specific effects but not to the specific effect of the treatment that is being
tested. In this situation, any difference in outcome between the groups must be caused by the
specific effects.
About 200 clinical studies of homeopathic remedies are available to date. With that sort of
number, one cannot be surprised that the results are not entirely uniform. It would be easy to
cherry-pick and select those findings that one happens to like (and some homeopaths do exactly
that). Yet, if we want to know the truth, we need to consider the totality of this evidence and weigh
it according to its scientific rigour. This approach is called a systematic review. Over a dozen
systematic reviews of homeopathy have been published. Almost uniformly, they come to the
conclusion that homeopathic remedies are not different from placebo.
Many homeopaths reluctantly accept this state of affairs but claim that their clinical experience is
more important than the evidence from clinical trials. And there is plenty of positive experience in
homeopathy. Patients who consult homeopaths do get better, and observational studies have
shown this ad nauseam. Homeopaths insist that this amounts to evidence which is more relevant
than that from clinical trials. But is there really a contradiction?
Experience is real, of course, but it does not establish causality. If observational data show
improvements while clinical trials tell us that homeopathic remedies are placebos, the conclusion
that fits all of these facts comfortably is straightforward: patients get better, not because of the
homeopathic remedy but because of a placebo-effect and the lengthy consultation with a
compassionate clinician. This conclusion is not just logical, it is also supported by data.
Homeopaths from Southampton recently demonstrated that the consultation, not the remedy, is the element that improves clinical outcomes of patients after seeing a homeopath.
One of my teachers at medical school kept telling us: "Any treatment that does not harm patients
cannot be all bad". As they contain no active ingredient, highly dilute homeopathic remedies are
devoid of side effects. So, from this perspective, homeopathy might still be OK. This is perhaps the
most difficult issue in the debate about homeopathy; there are obviously reasonably good
arguments either way. But before you make up your mind, consider the following points:
• Placebo effects are notoriously unreliable; the patient who benefits today might not do so
tomorrow. Placebo effects also tend to be small and short-lived.
• Knowingly giving a placebo to patients would be unethical in most instances. Either
clinicians tell the truth (i.e. "this is a placebo"), in which case the effect is likely to disappear,
or they do not, in which case they are liars.
• Giving a placebo to a patient with a serious condition that would be otherwise treatable does
seriously endanger the health of that patient.
• In order to generate a placebo response in a patient, we do not need to administer a
placebo. All treatments come with the free bonus of a placebo effect as long as clinicians
administer them with compassion and empathy. So why only rely on part of the total
therapeutic response? Is this not short-changing the patient?
I apologise if you find my logic for supporting homeopathy convoluted (I have to admit it confuses
me at times). I know that the homeopathic principles fly in the face of science, yet I have seen
positive results and in the past thought that maybe there was some fundamental phenomenon to
discover. What I did discover was perhaps not fundamental but nevertheless important: patients
can experience significant improvement from non-specific effects. This is why they get better after
seeing a homeopath but this has nothing to do with the homeopathic sugar pills.
Vocabulary exercise
1. intrigued (paragraph 1)
Homeopathy has intrigued me for many years; in a way, I grew up with it. Our family doctor was
a homeopath, and my very first job as a junior doctor was in a German homeopathic hospital.
2. renders (paragraph 2)
The second principle assumes that the serial dilution process used for homeopathic remedies
renders them not less but more potent (hence homeopaths call this process "potentiation").
3. fly in the face of (paragraph 3)
Both of these axioms fly in the face of science. If they were true, much of what we learned in
physics and chemistry would be wrong.
4. astonished (paragraph 4)
Yet as a clinician almost 30 years ago, I was astonished by the results achieved by
homeopathy. Many of my patients seemed to improve dramatically after receiving homeopathic
treatment.
5. hindered (paragraph 5)
This logical fallacy can be very misleading and has hindered progress in medicine for hundreds of
years. Of course, it could be the treatment – but there are many other possibilities as well.
6. devoid of (paragraph 11)
As they contain no active ingredient, highly dilute homeopathic remedies are devoid of side
effects. So, from this perspective, homeopathy might still be OK.
7. convoluted (paragraph 16)
I apologise if you find my logic for supporting homeopathy convoluted (I have to admit it
confuses me at times). I know that the homeopathic principles fly in the face of science, yet
Write your own sentences with the vocabulary
1. intrigued
2. renders
3. fly in the face of
4. astonished
5. hindered
6. devoid of
7. convoluted
EXERCISE 54
Is the gentrification of parts of cities a really
bad thing?
Summary
This is an article on gentrification (where affluent people move into poor areas in a city and change
them). It explains why it isn't as bad a problem, or as widespread, as some believe it is, and how it can actually be beneficial to the existing residents. It also explains where the fear of it stems from
and what more serious urban housing issues we should be focusing more of our attention on
instead.
It started in Soho, then moved to Chelsea and the East Village. Riots in Tompkins Square in 1988
earned it some headlines but didn't stop its creeping advance. It moved on to lower Harlem, then
jumped the river to Park Slope. Williamsburg and Fort Greene followed; today, it threatens even
Bedford-Stuyvesant. New York isn't the only city where it spreads. San Francisco, Washington, and
Boston have arguably been even more affected by it. Seattle, Atlanta, and Chicago have
experienced it on a large scale, too.
The "it," as you may have guessed, is gentrification. If you live in one of these cities, you probably
think you know how it works. Artists, bohemians, and gay couples come first. They move into run-
down but charming and historic homes and loft spaces close to the urban core. Houses are
restored. Funky coffee shops appear. Public safety improves. Then rents and home prices start to
go up. The open-minded, diversity-loving creative types who were the first wave of gentrifiers give
way to lawyers, bankers, and techies. As rents and home prices continue to rise, the earlier residents, often lower-income people of color, are forced out.
That's the story, at least. And liberals and conservatives alike agree that it is bad (although liberals blame developers, and conservatives blame onerous regulations that limit development). That gentrification displaces poor people of color in favor of well-off white people is a claim so commonplace that most people accept it as a widespread fact of urban life. It's not. Gentrification
of this sort is actually exceedingly rare. The socio-economic status of most neighborhoods is
strikingly stable over time. When the ethnic compositions of low-income black neighborhoods do
change, it's typically because Latinos and other immigrants move into a neighborhood, and such in-migration is probably more beneficial than harmful. As for displacement, the most objectionable feature of gentrification, there's actually very little evidence it happens. In fact, so-
called gentrifying neighborhoods appear to experience less displacement than non-gentrifying
neighborhoods.
It's time to retire the term gentrification altogether. Fourteen years ago, Maureen Kennedy and
Paul Leonard of the Brookings Institution wrote that gentrification "is a politically loaded concept
that generally has not been useful in resolving growth and community change debates because its
meaning is unclear." That's even truer today. Some U.S. cities do have serious affordability
problems, but they're not the problems critics of gentrification think they are. Worse, the media
focus on gentrification has obscured problems that actually are serious: the increasing isolation of
poor, minority neighborhoods and the startling spread of extreme poverty.
Gentrification, as it is commonly understood, is about more than rising housing prices. It's about
neighborhoods changing from lower-income, predominantly black or Latino neighborhoods to
high-income, predominantly white neighborhoods. Demographers and sociologists have identified
neighborhoods where this kind of displacement has occurred. Wicker Park in Chicago, Harlem and
Chelsea in Manhattan, Williamsburg in Brooklyn: these places really did gentrify. Sociologists
and demographers captured these changes in case studies and ethnographies. But starting a
decade ago, economists began to ask more nuanced questions about this displacement. They found that the reasons for it were far more varied than had previously been thought. Simply
documenting that low-income people were being forced out of a neighborhood whose housing
prices were rising didn't mean in and of itself that gentrification was causing displacement, they
noted. Poor people often move away from non-gentrifying neighborhoods, too. Indeed, low-
income people move frequently for many reasons. The real question was whether low-income
residents moved away from "gentrifying" neighborhoods at a higher rate than they did from non-
gentrifying neighborhoods.
One of the first people to explore this question in a sophisticated way was University of
Washington economist Jacob Vigdor. In 2002, Vigdor examined what had happened in Boston
between 1974 and 1997, a period of supposedly intense gentrification. But Vigdor found no
evidence that poor people moved out of gentrifying neighborhoods at a higher than normal rate. In
fact, rates of departure from gentrifying neighborhoods were actually lower.
It wasn't just Boston. In 2004, Columbia University economists Lance Freeman and Frank Braconi
conducted a similar study of gentrification in New York City in the 1990s. They too found that low-
income residents of "gentrifying" neighborhoods were less likely to move out of the neighborhood
than low-income residents of neighborhoods that had none of the typical hallmarks of
gentrification.
Of course, displacement is not the only way in which gentrification could harm the poor. Residents
of gentrifying neighborhoods might stay put but suffer from rising rents. Freeman and Braconi
found that rents did rise in gentrifying neighborhoods in New York. But rising rents had an
unexpected effect: As rents rose, residents moved less.
"The most plausible interpretation," the authors concluded, "may be the simplest: As
neighborhoods gentrify, they also improve in many ways that may be as appreciated by their
disadvantaged residents as by their more affluent ones."
In 2010, University of Colorado Boulder economist Terra McKinnish, along with Randall Walsh
and Kirk White, examined gentrification across the nation as a whole over the course of the 1990s.
McKinnish and her colleagues found that gentrification created neighborhoods that were attractive
to minority households, particularly households with children or elderly homeowners. They found
no evidence of displacement or harm. While most of the income gains in these neighborhoods
went to white college graduates under the age of 40 (the archetypical gentrifiers), black high school
graduates also saw their incomes rise. They also were more likely to stay put. In short, black
households with high school degrees seemed to benefit from gentrification.
McKinnish, White, and Walsh aren't the only researchers whose work suggests that blacks often
benefit from gentrification. In his book, Nowhere to Go, sociologist Peter Sharpe took a close look
at black neighborhoods that saw significant changes to their ethnic composition between 1970 and
1990. He found that when the composition of black neighborhoods changed, it wasn't because
whites moved in. That rarely happens. For black communities, neighborhood change happens
when Latinos begin to arrive. Sometimes these changes can be difficult, resulting as they often do
in new political leaders and changes to the character of the communities. But Sharpe's research
suggests they also bring real benefits. Black residents, particularly black youth, living in more
diverse neighborhoods fare significantly better economically than their peers with the same skill
sets who live in less diverse neighborhoods. In short, writes Sharpe, "There is strong evidence that
when neighborhood disadvantage declines, the economic fortunes of black youth improve, and
improve rather substantially."
In other words, the problem isn't so much that gentrification hurts black neighborhoods; it's that it
too often bypasses them. Harvard sociologists Robert Sampson and Jackelyn Hwang have shown
that neighborhoods that are more than 40 percent black gentrify much more slowly than other
neighborhoods. The apparent unwillingness of other ethnic groups to move into and invest in
predominantly black communities in turn perpetuates segregation and inequality in American
society.
While critics of gentrification decry a process that is largely imaginary, they've missed a far more
serious problem: the spread of extreme poverty. Last year, economists Joseph Cortright of the Portland-based Impresa Consulting and Dillon Mahmoudi of Portland State University set out to
examine how America's poorest urban neighborhoods had changed over time. They started by
going back to 1970 and identifying 1,100 census tracts (the county subregions that demographers use as a basic unit of analysis) located within 10 miles of the central business districts in the 51
largest cities with high levels of poverty. They then asked a simple question: How did the socio-
economic status of these places change during the next 40 years?
The answer: Most had not. Two-thirds of high-poverty neighborhoods in 1970 were still high-
poverty neighborhoods in 2010. Only about 100 neighborhoods saw their poverty rates decline to
below the national average. The typical metropolitan area had one or two high-poverty
neighborhoods that could conceivably be described as gentrifying. However, Cortright and
Mahmoudi did find another, more significant change. Whereas in 1970, 1,100 census tracts within
10 miles of central business districts had poverty rates of 30 percent or higher, by 2010, the
number of poor census tracts had jumped to 3,100. In other words, the number of high-poverty
areas close to central business districts had nearly tripled. To make matters worse, the number of
people living in extreme poverty in those areas had doubled. The residents of these neighborhoods
are disproportionately black.
If gentrification occurs so infrequently, and if it may help rather than hurt existing residents, why are so many people so upset about it? There are at least a couple of reasons. The first has to do
with where it happens. According to Cortright and Mahmoudi, just three cities (New York, Chicago, and Washington) accounted for one-third of all census tracts that saw poverty rates
decline from above 30 percent in 1970 to below 15 percent in 2010. Half of all the areas in the
nation that "gentrified" (if we still want to call it that) were located in those three cities. No wonder
New Yorkers and Washingtonians think gentrification is a big deal.
The other reason we continue to dwell on gentrification probably has more to do with middle-class
fears. Housing prices in America's most expensive coastal cities have risen sharply since the end of
the Great Recession. Expressing concern about "gentrification" in those cities may simply be
another way of expressing concern about rising housing prices. But in fact, different types of cities
have very different kinds of affordability problems. In coastal cities, the cost of housing is often far
higher than the cost of construction. That is primarily because supply is constrained. Builders in
Washington can't turn Adams Morgan's row houses into a high-rise apartment district, so row
house prices rise. High demand plays a role too, of course. Some of that demand reflects a
preference for older, close-in housing stock. The fact that global cities deliver high wages to the
most skilled workers is almost certainly more important though. Gentrification isn't the cause of
these cities' affordable housing problem. It's a symptom.
There's a large group of cities with a very different affordability problem. These are Rust Belt cities
such as Detroit, where housing sells at or below the cost of construction. These are cities with an
income problem. Cities where the cost of housing is far higher than the cost of construction require
different policy solutions than cities where the situation is reversed. Coastal cities can benefit from
requirements that developers set aside a portion of new units as affordable housing, although some
economists argue that such zoning requirements can actually backfire by raising the cost of new
housing even more, and all agree that the effect of such set-asides will be minimal. It certainly
won't reverse the transformation of these cities into enclaves for the rich.
In contrast, many residents of Rust Belt cities would benefit from rent subsidies (or cash subsidies,
period), not set-asides. Yet policymakers all too often fail to fit remedies to the circumstances. Rust
Belt cities adopt set-asides just like San Francisco, while Bay Area institutions such as Stanford
University hand out generous housing subsidies to new faculty members, a measure that only
serves to drive up housing prices, instead of searching for ways to increase supply.
Retiring the term gentrification won't do anything to address these problems, of course. But it will
remove a distraction. Let's examine how neighborhoods really change and why some don't. Let's
debate supply constraints (in addition to providing affordable housing) in the San Franciscos of
America and figure out how to provide rent subsidies in the Rust Belt. It won't be as fun as
decrying or defending gentrification, but at least it will be directed at problems that are real.
Vocabulary exercise
1. nuanced (paragraph 6)
But starting a decade ago, economists began to ask more nuanced questions about this
displacement. They found that the reasons for it were far more varied than had previously been thought.
2. hallmarks (paragraph 8)
residents of "gentrifying" neighborhoods were less likely to move out of the neighborhood than
low-income residents of neighborhoods that had none of the typical hallmarks of gentrification.
3. perpetuates (paragraph 13)
The apparent unwillingness of other ethnic groups to move into and invest in predominantly black
communities in turn perpetuates segregation and inequality in American society.
4. decry (paragraph 14)
While critics of gentrification decry a process that is largely imaginary, they've missed a far more
serious problem
5. backfire (paragraph 18)
can benefit from requirements that developers set aside a portion of new units as affordable
housing, although some economists argue that such zoning requirements can actually backfire by
raising the cost of new housing even more,
6. serves (paragraph 19)
Stanford University hand out generous housing subsidies to new faculty members, a measure that
only serves to drive up housing prices,
7. address (paragraph 20)
Retiring the term gentrification won't do anything to address these problems, of course. But it
will remove a distraction.
Write your own sentences with the vocabulary
1. nuanced
2. hallmarks
3. perpetuates
4. decry
5. backfire
6. serves
7. address
EXERCISE 55
The origins of the metric system
Summary
This article explains how and when the metric system of measurement (metres and kilometres) was invented. It describes how things were measured before its introduction and how the metric system has been fundamental in shaping our world.
On the facade of the Ministry of Justice in Paris, just below a ground-floor window, is a marble
shelf engraved with a horizontal line and the word 'MÈTRE'. It is hardly noticeable in the grand
Place Vendôme: in fact, out of all the tourists in the square, I was the only person to stop and
consider it. But this shelf is one of the last remaining 'mètre étalons' (standard metre bars) that
were placed all over the city more than 200 years ago in an attempt to introduce a new, universal
system of measurement. And it is just one of many sites in Paris that point to the long and
fascinating history of the metric system.
"Measurement is one of the most banal and ordinary things, but it's actually the things we take for
granted that are the most interesting and have such contentious histories," said Dr Ken Alder,
history professor at Northwestern University and author of The Measure of All Things, a book
about the creation of the metre.
We don't generally notice measurement because it's pretty much the same everywhere we go.
Today, the metric system, which was created in France, is the official system of measurement for
every country in the world except three: the United States, Liberia and Myanmar. And even in those countries, the metric system is still used for purposes such as global trade. But imagine a world where every
time you travelled you had to use different conversions for measurements, as we do for currency.
This was the case before the French Revolution in the late 18th Century, when weights and
measures varied not only from nation to nation, but also within nations. In France alone, it was
estimated at that time that at least 250,000 different units of weights and measures were in use
during the Ancien Régime.
The French Revolution changed all that. During the volatile years between 1789 and 1799, the
revolutionaries sought not only to overturn politics by taking power away from the monarchy and
the church, but also to fundamentally alter society by overthrowing old traditions and habits. To
this end, they introduced, among other things, the Republican Calendar in 1793, which consisted
of 10-hour days, with 100 minutes per hour and 100 seconds per minute. Aside from removing
religious influence from the calendar, making it difficult for Catholics to keep track of Sundays and
saints' days, this fit with the new government's aim of introducing decimalisation to France. But
while decimal time did not stick, the new decimal system of measurement, which is the basis of the
metre and the kilogram, remains with us today.
The task of coming up with a new system of measurement was given to the nation's preeminent
scientific thinkers of the Enlightenment. These scientists were keen to create a new, uniform set of measures based on reason rather than local authorities and traditions. Therefore, it was determined that the
metre was to be based purely on nature. It was to be one 10-millionth of the distance from the
North Pole to the equator.
The line of longitude running from the pole to the equator that would be used to determine the
length of the new standard was the Paris meridian. This line bisects the centre of the Paris
Observatory building in the 14th arrondissement, and is marked by a brass strip laid into the white
marble floor of its high-ceilinged Meridian Room, or Cassini Room.
Although the Paris Observatory is not currently open to the public, you can trace the meridian line
through the city by looking out for small bronze disks on the ground with the word ARAGO on
them, installed by Dutch artist Jan Dibbets in 1994 as a memorial to the French astronomer
François Arago. This is the line that two astronomers (Jean-Baptiste-Joseph Delambre and Pierre
Méchain) set out from Paris to measure in 1792.
While Delambre travelled north to Dunkirk, his colleague Méchain travelled south to Barcelona.
Using the latest equipment and the mathematical process of triangulation to measure the meridian
arc between these two sea-level locations, and then extrapolating the distance between the North
Pole and the equator by extending the arc to an ellipse, the two astronomers aimed to meet back in
Paris to come up with the new, universal standard of measurement within one year. It ended up
taking seven.
As Dr Alder details in his book, measuring this meridian arc during a time of great political and
social upheaval proved to be an epic undertaking. The two astronomers were frequently met with
suspicion and animosity by those they met; they fell in and out of favour with the state; and were
even injured on the job, which involved climbing to high points such as the tops of churches.
The Pantheon, which was originally commissioned by Louis XV to be a church, became the central
geodetic station in Paris from whose dome Delambre triangulated all the points around the city.
Today, it serves as a mausoleum to heroes of the Republic, such as Voltaire, René Descartes and
Victor Hugo. But during Delambre's time, it served as another kind of mausoleum: a warehouse
for all the old weights and measures that had been sent in by towns from all over France in
anticipation of the new system.
But despite all the technical mastery and labour that had gone into defining the new measurement,
nobody wanted to use it. People were reluctant to give up the old ways of measuring since these
were inextricably bound with local rituals, customs and economies. For example, an ell, a measure
of cloth, generally equalled the width of local looms, while arable land was often measured in days,
referencing the amount of land that a peasant could work during this time.
The Paris authorities were so exasperated at the public's refusal to give up their old measure that
they even sent police inspectors to marketplaces to enforce the new system. Eventually, in 1812,
Napoleon abandoned the metric system; although it was still taught in school, he largely let people
use whichever measures they liked until it was reinstated in 1840. According to Dr Alder, "It took a
span of roughly 100 years before almost all French people started using it."
This was not just due to perseverance on the part of the state. France was quickly advancing into
the industrial revolution; mapping required more accuracy for military purposes; and, in 1851, the
first of the great World's Fairs took place, where nations would showcase and compare industrial
and scientific knowledge. Of course, it was tricky to do this unless you had clear, standard
measures, such as the metre and the kilogram. For example, the Eiffel Tower was built for the 1889
World's Fair in Paris, and at 324m, was at that time the world's tallest man-made structure.
All of this came together to produce one of the world's oldest international institutions: The
International Bureau of Weights and Measures (BIPM). Located in the quiet Paris suburb of
Sèvres, the BIPM is surrounded by landscaped gardens and a park. Its lack of ostentatiousness
reminded me again of the mètre étalon in the Place Vendôme; it might be tucked away, but it is
fundamental to the world we live in today.
Originally established to preserve international standards, the BIPM promotes the uniformity of
seven international units of measurement: the metre, the kilogram, the second, the ampere, the
kelvin, the mole and the candela. It is the home of the master platinum standard metre bar that
was used to carefully calibrate copies, which were then sent out to various other national capitals.
In the 1960s, the BIPM redefined the metre in terms of light, making it more precise than ever.
And now, defined by universal laws of physics, it was finally a measure truly based on nature.
The building in Sèvres is also home to the original kilogram, which sits under three bell jars in an
underground vault and can only be accessed using three different keys, held by three different
individuals. The small, cylindrical weight cast in platinum-iridium alloy is also, like the metre, due
to be redefined in terms of nature (specifically, the quantum-mechanical quantity known as the Planck constant) by the BIPM this November.
"Establishing a new basis for a new definition of the kilogram is a very big technological challenge.
It was described at one point as the second most difficult experiment in the whole world, the first
being discovering the Higgs Boson," said Dr Martin Milton, director of the BIPM, who showed me
the lab where the research is being conducted.
As he explained to me how they were doing this, I marvelled at the latest scientific engineering before me, and at the precision and personal effort of all the people who have been working on the kilogram project since it began in 2005 and are now very close to achieving their goal.
As with the 18th-Century meridian project, defining measurement continues to be one of our most
important and difficult challenges. As I walked further up the hill of the public park that surrounds
the BIPM and looked out at the view of Paris, I thought about the structure of measurement
underlying the whole city. The machinery used for construction; the trade and commerce
happening in the city; the exact quantities of drugs, or radiation for cancer therapy, being
delivered in the hospitals.
What started with the metre formed the basis of our modern economy and led to globalisation. It
enabled high-precision engineering and continues to be essential for science and research,
progressing our understanding of the universe.
Vocabulary exercise
1. preeminent (paragraph 5)
The task of coming up with a new system of measurement was given to the nation's preeminent
scientific thinkers of the Enlightenment.
2. upheaval (paragraph 9)
As Dr Alder details in his book, measuring this meridian arc during a time of great political and
social upheaval proved to be an epic undertaking
3. undertaking (paragraph 9)
As Dr Alder details in his book, measuring this meridian arc during a time of great political and
social upheaval proved to be an epic undertaking
4. bound with (paragraph 11)
People were reluctant to give up the old ways of measuring since these were inextricably bound
with local rituals, customs and economies.
5. exasperated (paragraph 12)
The Paris authorities were so exasperated at the public's refusal to give up their old measure that
they even sent police inspectors to marketplaces to enforce the new system.
6. showcase (paragraph 13)
France was quickly advancing into the industrial revolution; mapping required more accuracy for
military purposes; and, in 1851, the first of the great World's Fairs took place, where nations would
showcase and compare industrial and scientific knowledge.
7. marvelled (paragraph 18)
As he explained to me how they were doing this, I marvelled at the latest scientific
engineering before me, the precision and personal effort of all the people who have been working
on the kilogram project
Write your own sentences with the vocabulary
1. preeminent
2. upheaval
3. undertaking
4. bound with
5. exasperated
6. showcase
7. marvelled
EXERCISE 56
Do prisons work?
Summary
This article argues that the current prison system is failing society in general. It starts by saying
what the functions of prisons are, before explaining how they are failing to fulfil some of these functions well. It then suggests some things which could be done to correct this.
What do you do with people who have broken the law? If you were to ask the average person in the street, they would more than likely retort (depending on the severity of the crime committed, of course): put them in prison!
But if you were to then ask them why, the majority (if recent surveys are anything to go by) are likely to come back saying: to punish those who commit crime and to reduce the amount of crime being committed (as they'll be unable to commit criminal acts whilst locked up and away from civil society).
Although prison is about punishing those who transgress or break the laws set down by society and keeping villains off the streets, it is also about more than this. And to understand whether the current prison system actually works for the benefit of society as a whole, we need to know what, in theory, the prison system is there to do.
We can do this by learning a little about the history of prisons.
The history of modern prisons
Although prisons and imprisonment have existed throughout recorded history (a form of prison
has been shown to have existed in the times of the pharaohs), the prisons as we know them today
are a relatively new innovation. Even in the 16th and 17th centuries, prisons were predominantly
used as secure units to house criminals before acts of punishment and humiliation were carried
out, in Britain usually hanging for the most serious of crimes.
The conditions in the holding cells were harsh, often unsanitary and poorly managed, meaning
many criminals died from disease before facing their punishment. During these times,
punishments were public displays to show the consequences of criminal activity in order to deter
others.
However, it was during the 18th century that the idea of what a prison is and what it is there to do changed. And much of this was down to the ideas and changes in society and morality brought
about by the Enlightenment in Europe. Up to this time public executions or flogging had been
common features of punishment for criminal acts across the continent. However, as the century
progressed the upper echelons of society in the more advanced countries on the continent began to
struggle with the very notion of public executions. Juries (who were predominantly made up of the
well-to-do, and hence educated, members of society) became increasingly reluctant to convict
individuals of the types of crimes which would result in such a draconian form of punishment.
Over the century there was a gradual shift towards prisons housing criminals for a prolonged
period of time and punishing them through hard labor rather than physical punishment.
It was also at this time that the concept of criminal 'correction' (turning criminals into law-abiding
citizens) grew in popularity. Although it started to be taken seriously during this period, the actual
idea of using imprisonment as a vehicle for reform rather than solely punishment and a deterrent
is not a new one. Rehabilitation was actually talked about at length in some of the writings of
philosophers from ancient Greece (who were increasingly being read by the intelligentsia, political
and religious elites). And it was the enlightened members of the various religious institutions in
Britain (who were well versed in both the works of the Greek philosophers and the bible) who were
responsible for bringing about and campaigning for prison reform. They saw that crime was merely a sin and prison should be the place to teach proper and acceptable behavior. As a result,
prisons became (relatively) humane places and focused on both moral learning and the modifying
of behavior to ensure prisoners didn't reoffend on their release back into society.
And thus we have the form of the modern prison system: punishment, deterrent and reform.
Reform as well as punish
Although the conditions and methods employed with inmates may have dramatically changed
since the 18th century, the reform of individuals is still one of the central tenets of correctional
facilities today, just as it was back then.
One need only look at the mission statement on the UK Prison Service's own website to see this idea:
"We keep those sentenced to prison in custody, helping them lead law-abiding and useful
lives, both while they are in prison and after they are released." HM Prison Service
However, the question remains: do they actually do this? Unfortunately, the answer is no.
Prisons don't work that well
Whilst the existence of prisons does act as an effective deterrent for the vast majority of people in
society, the assumption that the simple imprisonment of an individual away from their community
will automatically produce a level of reform in them is unfortunately a little naive. Although there
are cases of people becoming reformed during their incarceration, for many it leads to them
becoming career criminals.
Humans, by our very nature, are remarkably good at adaptation. Even the feeblest individual will over
time develop a method of dealing with their environment when they have no choice. While prison
life is generally an extremely tedious and mundane existence, prisoners join networks of other
inmates, contacts are made, new criminal skills are gained and lessons are learned.
Add to this the difficulty of obtaining employment upon release and the result is an individual with
a battle on their hands to stay on the straight and narrow. There are many instances where
individuals have spent so much time in prison they have become institutionalized. They find life on
the 'outside' very difficult to manage. Back within society, they have to make their own decisions
and fend for themselves. There is no set routine, no three meals a day and no one to tell them what
to do and when. For this reason, many long-term offenders actively seek to return to prison life
soon after release, where it is familiar and guaranteed.
Prison, therefore, by its very nature does not result in the reforming of many of those who have
entered it.
Prison sentences
Figures from the Prison Reform Trust in 2013 state that in the UK 47% of adults who were released from prison re-offended and were re-convicted within one year, and that re-conviction figures were even higher for those who had been serving short sentences of less than 12 months (58%) and those under the age of 18 (73%).
Research studies, such as those reported by Hedderman in the Handbook of Probation (2007),
have indicated that reoffending rates have increased alongside the increase in prison population,
suggesting the continued and increasing use of custodial sentences is becoming less effective.
Moreover, a large-scale study published by Marsh et al in 2009 found no evidence for prison alone
reducing reoffending. They found that alternative strategies, including substance misuse treatment
and monitoring, were more effective. Evidence to date is not conclusive; however, data is steadily accumulating which indicates that simply incarcerating offenders in prison neither substantially reduces levels of crime nor acts as a deterrent to existing or new offenders.
This, of course, doesn't mean that prisons do not have an important role to play in criminal justice.
If an individual commits a crime, breaks the law that has been set by this country, there should
indeed be consequences. Difficulties and hardships faced by offenders upon their release could be
said to be of their own making, and some will have little sympathy. Furthermore, there is the issue of justice for the victims of the crime and of suitable punishment for those who have committed that crime, and prison does work for that purpose.
The UK does not have sentences which commit an individual to prison for the rest of their natural
lives. An average 'life' sentence in the UK means 13 to 16 years; therefore, most offenders will be
released at some point and be expected to reintegrate back into society. The problem is whether
current practices are simply creating a cycle for many offenders which is of no benefit to them or to
the communities they are released back into.
The cost
Coupled with this is the cost required to continue to house, feed and guard those held in prisons. In 2018 it
was calculated that the cost on average to do this per annum per prisoner was £42,648 in Britain.
And as there was a prison population of 83,618 in the same year this translates to a yearly outlay of
over £3.5 billion.
On top of this, is the cost to the economy and judicial system of those reoffending once released.
The Prison Reform Trust estimated that this could cost to the tune of £15 billion for 2017-2028.
Alternatives to prison
While not an alternative to prison, some initiatives have been tried to address the reoffending of
prisoners upon release. One such initiative is community-based service programmes. Such initiatives
aim to provide effective guidance, support, training and employment programmes both before and
after prison inmate release.
A true alternative to prison is community sentences (where offenders are not given a custodial
sentence, but remain in the community and have to work for free in certain community projects
for a set period as a penance for their crimes). Whilst only appropriate for those convicted of petty
crimes (such as drug possession, minor theft etc...), they are considerably cheaper than
maintaining someone in prison and reduce the rate of reoffending. The National Offender
Management Service has reported that those engaged on community sentence programmes are 8%
less likely to reoffend within a year than those sent to prison.
Yet another idea which is being touted more and more in judicial and political circles these days as a means to reduce the prison population is to stop people from committing criminal acts in the first place. One way to do this is a greater police presence on the streets. Studies conducted in New
York found that when the city doubled the number of police officers patrolling the streets in the
1990s, there was a large fall in the number of petty crimes being committed. One explanation for
this has been that the measure increased the perceived risk of being caught in the minds of those considering committing a criminal act. Hence, fewer people committed crimes and fewer people were sent to prison.
Even though the modern prison system is failing society in many respects, it still has (and will
always have) a place in society. However, it and the judicial system in general are in dire need of
change. The soaring costs of operating prisons, particularly with such increasing prison populations and such high rates of reoffending among those released, demand that changes be made urgently. Although the ramifications of the changes that are needed will be hard to swallow for many (especially for the victims of crime), in the long run it will be to all of our
detriment if something is not done soon.
Vocabulary exercise
1. a vehicle for (paragraph 8)
the actual idea of using imprisonment as a vehicle for reform rather than solely punishment and
a deterrent is not a new one. Rehabilitation was actually talked about at length in some of the
writings of philosophers from ancient Greece
2. were well versed in (paragraph 8)
And it was the enlightened members of the various religious institutions in Britain (who were
well versed in both the works of the Greek philosophers and the bible) who were responsible for
bringing about
3. fend for themselves (paragraph 16)
There are many instances where individuals have spent so much time in prison they have become
institutionalized. They find life on the 'outside' very difficult to manage. Back within society, they
have to make their own decisions and fend for themselves.
4. to the tune of (paragraph 23)
On top of this, is the cost to the economy and judicial system of those reoffending once released.
The Prison Reform Trust estimated that this could cost to the tune of £15 billion for 2017-
2028.
5. penance (paragraph 25)
where offenders are not given a custodial sentence, but remain in the community and have to work
for free in certain community projects for a set period as a penance for their crimes
6. touted (paragraph 26)
Yet another idea which is being touted more and more in judicial and political circles these days as a means to reduce the prison population is to stop people from committing criminal acts in the first place.
7. be hard to swallow (paragraph 27)
Although the ramifications of the changes that are needed will be hard to swallow for many
(especially for the victims of crime), in the long-run it will be to all of our detriment if something is
not done soon.
Write your own sentences with the vocabulary
1. a vehicle for
2. were well versed in
3. fend for themselves
4. to the tune of
5. penance
6. touted
7. be hard to swallow
EXERCISE 57
All you need to know about depression
Summary
This article gives a variety of information and facts about depression (from describing the
symptoms of the condition through to how it is or can be treated).
If we are to judge by what we see on the news or the web these days, there seems to be an epidemic of depression. Once regarded as something unseemly to talk about openly, it now seems that every day a different celebrity or public figure comes out and confesses to battling with depression.
Although it is a good thing that the illness now receives so much publicity (and that it is no longer
seen as taboo to talk about or that it reflects a character flaw in the individual who suffers from
bouts of depression), aside from in academic or medical journals, surprisingly few facts about
depression (e.g. what it is, where it occurs etc...) are ever published in the mass media.
In this article, we will try to remedy that.
What is depression?
Depressed people don't all shuffle around with a long face, or cry at any provocation.
MentalHealth.gov, a US government website, defines it as "losing interest in important parts of
life". Symptoms include eating or sleeping too much or too little; pulling away from people and
usual activities; having low or no energy; feeling numb or like nothing matters; feeling unusually
confused, forgetful, on edge, angry, upset, worried or scared; and thinking of harming yourself or
others.
A visceral description is quoted by the UK campaign group Mind: "It starts as sadness, then I feel
myself shutting down, becoming less capable of coping. Eventually, I just feel numb and empty."
Depression is also often mixed with other health problems: long-term illness, anxiety, obsessive
compulsive disorder or schizophrenia, for example.
The term dysthymia is also used for mild, long-term depression, usually lasting two years or
more.
How many people have depression?
According to figures recently published by the World Health Organization (WHO), at any one time
it is estimated that more than 300 million people have depression (about 4% of the world's population). Women are more likely to be depressed than men.
Depression is the leading global disability, and unipolar (as opposed to bipolar) depression is the
10th leading cause of early death, it calculates. The link between suicide, the second leading cause
of death for young people aged 15-29, and depression is clear, and around the world two people kill
themselves every minute.
While rates for depression and other common mental health conditions vary considerably, the US
is the "most depressed" country in the world, followed closely by Colombia, Ukraine, the
Netherlands and France. At the other end of the scale are Japan, Nigeria and China.
Why are there such wide variations?
The stark contrasts between countries have led some to dub depression as a "first world problem"
or a "luxury". The logic is that if you are staring down the barrel of a gun or you don't know where
the next meal is coming from, you have no time for such introspection.
But recent research casts doubt on such an assertion. It points to a myriad of reasons which account for the perception that depression is predominantly a first world problem. In particular, less developed countries often lack the infrastructure to collect data on depression, and are less
likely to recognise it as an illness. Also, people in these countries are more likely to feel a social
stigma against talking about how they feel, making them reticent to ask for professional help.
Statistics are also less simplistic than rich = depressed and poor = not depressed. A paper in the
journal Plos Medicine argues that, extremes aside, the majority of countries have similar rates of
depression. It also found that the most depressed regions are eastern Europe, and north Africa and
the Middle East; and that, by country, the highest rate of years lost to disability for depression is in
Afghanistan, and the lowest in Japan.
What causes depression?
Things have improved since people with mental illness were believed to be possessed by the devil
and cast out of their communities, or hanged as witches. But there remains a widespread
misunderstanding of the illness, particularly the persistent trope that people with depression
should just "buck up", or "get out more".
A contrasting opinion is provided by the psychiatrist Dr Tim Cantopher's book Depressive Illness:
The Curse of the Strong. He argues there is a part of the brain called the limbic system that acts
like a thermostat, controlling various functions of the body including mood and restoring
equilibrium after the normal ups and downs of life. The limbic system is a circuit of nerves,
transmitting signals to each other via two chemicals, serotonin and noradrenaline, of which people
with depression have a deficit. According to this description, depressive illness is predominantly a
physical, not mental, illness.
Cantopher says that, when under stress, weak or lazy people give in quickly; strong people keep
going, redouble their efforts, fight any pressure to give up and so push the limbic system to
breaking point. However, there is no scientific evidence to support this theory, as it is impossible
to experiment on live brains.
Other commonly agreed causes or triggers are past trauma or abuse; a genetic predisposition to
depression, which may or may not be the same as a family history; life stresses, including financial
problems or bereavement; chronic pain or illness; and taking drugs, including cannabis, ecstasy
and heroin.
The subject of much debate, there is a school of thought that severe stress or certain illnesses can
engender an excessive response from the immune system, causing inflammation in the brain,
which in turn causes depression.
Treatments
The WHO estimates that fewer than half of people with depression are receiving treatment. Many
more will be getting inadequate help, often focused on medication, with too little investment in
talking therapies, which are regarded as a crucial ally.
Among pharmacological treatments for depression, the most commonly prescribed
antidepressants are selective serotonin re-uptake inhibitors (SSRIs), which reduce the re-absorption of serotonin, increasing overall levels. Another popular class of drugs is serotonin-norepinephrine re-uptake inhibitors (SNRIs), which work on both serotonin and noradrenaline.
The most common talking therapy is cognitive behavioural therapy, which breaks down
overwhelming problems into situations, thoughts, emotions, physical feelings and actions to try to
break a cycle of negative thoughts. Other types are interpersonal therapy, behavioural activation,
psychodynamic psychotherapy and couples therapy. All talking therapies can be used on their own,
or with medication.
Away from the medical approach, doctors can prescribe physical activity or arts therapy, while
some patients opt for alternative or complementary therapies, most popularly St John's Wort
herbal pills, mindfulness and yoga.
Trends
While there are more and more treatments for depression, the problem is rising, not falling. From
2005-15, cases of depressive illness increased by nearly a fifth. People born after 1945 are 10 times more likely to have depression than those born before. This reflects both population growth and a proportional increase in
the rate of depression among the most at-risk ages, the WHO said.
Suicide rates, however, have declined globally, by about a quarter. In 1990, the rate was 14.55 per
100,000 people; in 2016 it was 11.16 per 100,000. A key reason for the continuing rise in
depressive illness is that drugs do not necessarily "cure" the patient, and other therapies that can
make the crucial difference are usually not in sufficient supply.
Other reasons given for the continuing rise in depressive illness include an ageing population (60-
to 74-year-olds are more likely to suffer than other age groups), and rising stress and isolation.
What next?
No new antidepressant drugs have been developed in the last 25 years, forcing psychiatrists to look
elsewhere for help. There have been positive experiments with both ketamine and psilocybin, the
active ingredient in magic mushrooms. Further hopes for a new generation of treatments have
been raised by recent discoveries of 44 gene variants that scientists believe raise the risk of
depression. Another controversial area of research is treatment for low immunity and mooted
links between depression and inflammation (although with the latter, no correlation between the
two has ever been established).
Countries are increasingly recognising the need to train more psychologists to replace or
complement drug treatments. And perhaps most importantly, there is a cultural movement to
make it easier for people to ask for help and speak out about their illness.
Some of the most visible leaders of this shift are the UK's princes William and Harry, who set up
the charity Heads Together and have talked publicly about their own problems. Others are
celebrities; most recently the wrestler and actor Dwayne "The Rock" Johnson has spoken about his
depression, and the singer Mariah Carey has talked about having bipolar disorder.
Vocabulary exercise
1. bouts (paragraph 2)
Although it is a good thing that the illness now receives so much publicity (and that it is no longer
seen as taboo to talk about or that it reflects a character flaw in the individual who suffers from
bouts of depression)
2. remedy (paragraph 3)
surprisingly few facts about depression (e.g. what it is, where it occurs etc...) are ever published in
the mass media. In this article, we will try to remedy that.
3. casts in doubt (paragraph 12)
The stark contrasts between countries have led some to dub depression as a "first world problem"
or a "luxury". The logic is that if you are staring down the barrel of a gun or you don't know where
the next meal is coming from, you have no time for such introspection. But recent research casts
in doubt such an assertion
4. reticent to (paragraph 12)
Also, people in these countries are more likely to feel a social stigma against talking about how
they feel, making them reticent to ask for professional help.
5. trope (paragraph 14)
But there remains a widespread misunderstanding of the illness, particularly the persistent trope
that people with depression should just "buck up", or "get out more".
6. engender (paragraph 18)
The subject of much debate, there is a school of thought that severe stress or certain illnesses can
engender an excessive response from the immune system
7. mooted (paragraph 26)
Another controversial area of research is treatment for low immunity and mooted links between
depression and inflammation (although with the latter, no correlation between the two has ever
been established).
Write your own sentences with the vocabulary
1. bouts
2. remedy
3. casts in doubt
4. reticent to
5. trope
6. engender
7. mooted
EXERCISE 58
The future of work
Summary
This article lists 3 things which are likely to change in how we work in the future. For each, it
explains what the change will be, why it will happen and what its implications will be for both
workers and society in general.
1. Workplace structures
Browse the business section of any bookshop and you'll find dozens of titles promising to share the
secret to climbing the corporate ladder. But the day is not far off when such books will seem as
quaint and outmoded as a housekeeping manual from the 1950s.
One of the key workplace trends of the 21st century has been the collapse of the corporate ladder,
whereby loyal employees climbed towards the higher echelons of management one promotion at a
time. Cathy Benko, vice-chairman of Deloitte in San Francisco and co-author of The Corporate
Lattice, says that the ladder model dates back to the industrial revolution, when successful
businesses were built on economies of scale, standardisation and a strict hierarchy. "But we don't
live in an industrial age, we live in a digital age. And if you look at all the shifts taking place, one [of
the biggest] is the composition of the workforce, which is far more diverse in every way," she says.
This new diversity, twinned with technological advances, has fed demand for a more collaborative
and flexible working environment. Benko estimates that companies have "flattened out" by about
25% over the past 25 years, losing several layers of management in favour of a more grid-like
structure, where ideas flow along horizontal, vertical and diagonal paths.
Career paths are becoming similarly fluid, with many following a zigzag rather than a straight path.
"I would argue that the lattice model provides more opportunity and more possibilities to be
successful," says Benko. "In the ladder model, you're looking in one direction, which is up. In the
lattice organisation you can find growth by doing different roles, so you have new experiences, you
acquire new skills, you tap into new networks. The world is less predictable than it was in the
industrial age, so you stay relevant by acquiring a portfolio of transferable skills."
A recent report, The Future Workplace, commissioned by financial protection specialist Unum and
authored by The Future Laboratory, reveals how the workplace is evolving and what employers
need to do successfully to manage employee wellbeing over the next 15 years. One of the key
findings of the survey was that in order to attract and retain high-calibre employees, companies
need to foster a more collaborative environment. This might involve hot-desking, ideas workshops
and regularly switching teams. Not only do employees respond well to this style of working, but
corporations benefit too as it better equips them to compete with the startups that are disrupting
their business.
"Large organisations have a huge challenge in attracting the millennial generation to come and
work for them. Those people expect much more entrepreneurial environments: more freedom to
operate, less control," says Philippe De Ridder, co-founder of the Board of Innovation, a
consultancy firm whose mission statement is to "help corporations innovate like start-ups". One of
the ways they do this is through a series of "intrapreneurship" programmes, which encourage
employees to think and act like entrepreneurs within the confines of their company. What this
means in practical terms is individuals having the freedom to take full ownership of particular
domains or projects, with minimal supervision or bureaucracy, and to be able to pitch directly to
the CEO without having to go through several layers of management.
The principles of intrapreneurship can apply at every level of an organisation, not just
management or creative roles. De Ridder gives the example of online shoe shop Zappos.com,
which abolished scripts from its call centre a year ago and gave customer service staff the freedom
to deal with complaints however they saw fit. He says they are now outperforming most of their
competitors in terms of customer satisfaction ratings.
"It's a common assumption that this level of freedom only works for managers or people who work
remotely," says De Ridder. "But there are plenty of real-life examples that [show] people are more
motivated if they themselves make a decision rather than having a decision forced on them."
2. Artificial Intelligence
In recent years, automation has become increasingly prevalent. We think nothing of paying for
groceries at a scanner or transferring money on a screen without going into a bank. We've grown
accustomed to the idea of self-driving cars and computers that can talk to us.
As marvellous as these innovations may seem, they can also be destructive, rendering entire
professions obsolete even as they boost productivity and convenience. And now, if widespread
predictions are correct, automation in the workplace is set to increase at an unprecedented rate.
"There's going to be a huge change, comparable to the industrial revolution," says Jerry Kaplan, a
Silicon Valley entrepreneur who teaches a class in artificial intelligence at Stanford. Robots and
intelligent computer systems, he says, "are going to have a far more dramatic impact on the
workplace than the internet has".
Kaplan isn't alone in this belief. A 2013 study by the Oxford Martin School estimated that 47% of
jobs in the US could be susceptible to computerisation over the next two decades. A study by the
McKinsey Global Institute predicted that, by 2025, robots could jeopardise between 40m and 75m
jobs worldwide.
"There have been two major developments over the past 10 years," says Kaplan. "The first relates
to advances in machine learning: the ability to organise large volumes of data so you can get
actionable intelligence. The second is the availability of data of all kinds, coming from
smartphones and other low-cost sensors out there in the environment. When you add those two
things up (the availability of the data along with the ability to interpret it), it enables a whole lot
of things that you couldn't do before."
Many areas of manual work are being affected. Robots in factories and warehouses are becoming
more mobile, versatile and affordable. A US-designed robot called Baxter, which can handle a wide
variety of tasks from loading to packaging, currently costs £19,000. "If you're digging a ditch or
painting a house, laying pipes or setting bricks (anything that involves basic hand-eye
co-ordination), there will be low-cost, efficient mechanical devices that can do that work," says
Kaplan.
It's not just manual labour that is ripe for automation: white-collar jobs are also at risk as software
becomes more sophisticated. One example is Quill, a program developed by US company Narrative
Science that crunches data and generates reports in a journalistic style.
Data analysis work in areas such as advertising and finance is being outsourced to computers and
even the authority of medical experts is being challenged: IBM's Watson computer, which won the
American TV quiz Jeopardy in 2011, is being used to diagnose cancer patients in the US.
Watson can sift through symptoms, medical histories and the latest research to deliver diagnoses
and suggest potential treatments, but there are limits to its diagnostic abilities and, unlike a
human doctor, it cannot treat patients with empathy and understanding.
By absorbing the most routine aspects of our jobs, optimists argue, machines are freeing us up to
concentrate on more creative, thoughtful activities. This may be true for some, but, as the Silicon
Valley entrepreneur and author Martin Ford says: "The reality is that a very large fraction of our
workforce is engaged in activities that are on some level routine, repetitive and predictable." If this
is the case, retraining a large portion of the workforce to engage in more creative activity beyond
the reach of automation will pose an enormous challenge.
Not all jobs are at risk. "A lot of work involving personal interaction won't be affected," says
Kaplan. "Nobody wants to go to a robotic undertaker who says ‘I'm sorry for your loss'; it's just not
meaningful. But it depends on the activity: the more transactional it is, the more likely it is to be
automated. If you go to a fancy restaurant, you don't want a robotic waiter. On the other hand if
you go to McDonald's, you won't have a problem with punching buttons and having a burger come
out of a chute somewhere."
One issue that will loom ever larger as the incidence of automation increases, according to Kaplan,
is inequality. "Automation is fundamentally the substitution of capital for labour. The problem is
that the people who already have the capital are the ones who will benefit most, because they're the
ones who will invest in the new automation."
In other words, the rich will get richer and the rest of us will suffer.
3. The human cloud
In the past decade cloud computing has radically altered the way we work, but it's the growth of
the "human cloud" a vast global pool of freelancers who are available to work on demand from
remote locations on a mind-boggling array of digital tasks which is really set to shake up the
world of work.
The past five years have seen a proliferation of online platforms that match employers (known in
cloud-speak as "requesters") with freelancers (often referred to as "taskers"), inviting them to bid
for each task. Two of the biggest sites are Amazon's Mechanical Turk, which lays claim to 500,000
"turkers" from 190 countries at any given time, and Upwork, which estimates that it has 10 million
freelancers from 180 countries on its database. They compete for approximately 3m tasks or
projects each year, which can range from tagging photos to writing code. The market is evolving so
quickly that it's hard to pin down exactly how many people are using these sites worldwide, but
management consultants McKinsey estimate that by 2025 some 540 million workers will have
used one of these platforms to find work.
The benefits for companies using these sites are obvious: instant access to a pool of cheap, willing
talent, without having to go through lengthy recruitment processes. And no need to pay overheads
and holiday or sick pay. For the "taskers" the benefits are less clear cut. Champions of the
crowdsourcing model claim that it's a powerful force for the redistribution of wealth, bringing a
fresh stream of income and flexible work into emerging economies such as India and the
Philippines (two of the biggest markets for these platforms). But herein lies the problem, as far as
critics are concerned. By inviting people to bid for work, sites such as Upwork inevitably trigger a
"race to the bottom", with workers in Mumbai or Manila able to undercut their peers in Geneva or
London thanks to their lower living costs.
"It's a factor in driving down real wages and increasing inequality," says Guy Standing, professor of
economics at SOAS, University of London. He has written two books on the "precariat", which he
defines as an emerging global class with no financial security, job stability or prospect of career
progression. He points to falling wages in this sector, with workers often willing to complete
tasks for as little as $1 an hour. "How can anybody making their living from such types of work
make ends meet on so low an hourly rate? Especially if you have a house or a family to support," says
Standing. He fears that this will eventually have a knock-on effect on the wages of traditional
employees and contribute to the growth of the precariat. "And it's not just unskilled labour that's
being done online," says Standing. "It goes all the way up: legal services, medical diagnosis,
architectural services, accounting; it's affecting the whole spectrum."
Love it or loathe it, the human cloud is here to stay. "People don't necessarily want to work from
9am to 5pm in an office anymore. They want more flexible work, both in terms of the hours and
the location," says Vassili van der Mersch, founder of Sevendays, a new platform which specialises
in matching established freelancers with startups and digital agencies. Unlike the auction model
favoured by sites like Upwork, Sevendays invites a carefully selected number of jobseekers to apply
for each job and does not take a cut of their earnings. Freelancers can also specify the minimum
rate they are prepared to work for.
Van der Mersch argues that there are career development opportunities for cloud workers, with
many startups using the site as a way of testing out freelancers to see if they're a good cultural fit
before offering them a permanent job and vice versa. "Typically these remote freelancers are very
entrepreneurial, which is one of the mindsets that startups are looking for," he says. "They are self-
starters and they don't need someone looking over their shoulder."
For now this sector of the labour market is largely unregulated but Standing says there is an urgent
need for an industry code of ethics to limit possible abuses that the workers involved in them could
face, and the implementation of processes that the workers can use to quickly and effectively
redress any that do occur. "It's going to become a very big, explosive issue. In some sectors the use
of cloud labour is doubling each year and so far the policymakers haven't addressed it."
Vocabulary exercise
1. retain (paragraph 5)
One of the key findings of the survey was that in order to attract and retain high-calibre
employees, companies need to foster a more collaborative environment.
2. foster (paragraph 5)
One of the key findings of the survey was that in order to attract and retain high-calibre employees,
companies need to foster a more collaborative environment.
3. is ripe for (paragraph 15)
It's not just manual labour that is ripe for automation: white-collar jobs are also at risk as
software becomes more sophisticated
4. proliferation of (paragraph 23)
The past five years have seen a proliferation of online platforms that match employers (known
in cloud-speak as "requesters") with freelancers (often referred to as "taskers"),
5. undercut (paragraph 24)
By inviting people to bid for work, sites such as Upwork inevitably trigger a "race to the bottom",
with workers in Mumbai or Manila able to undercut their peers in Geneva or London
6. loathe (paragraph 26)
Love it or loathe it, the human cloud is here to stay. "People don't necessarily want to work from
9am to 5pm in an office anymore. They want more flexible work, both in terms of the hours and
the location,"
7. redress (paragraph 28)
this sector of the labour market is largely unregulated but Standing says there is an urgent need for
an industry code of ethics to limit possible abuses that the workers involved in them could face,
and the implementation of processes that the workers can use to quickly and effectively redress
any that do occur.
Write your own sentences with the vocabulary
1. retain
2. foster
3. is ripe for
4. proliferation of
5. undercut
6. loathe
7. redress
EXERCISE 59
The difficulties for us to colonise the galaxy
Summary
This article talks about the difficulties we would encounter in trying to colonise planets in our
galaxy. It explains what the potential problems would be and makes some suggestions about what
could be done to overcome these and what we should do with regard to our own planet.
The idea that humans will eventually travel to and inhabit other parts of our galaxy was well
expressed by the early Russian rocket scientist Konstantin Tsiolkovsky, who wrote, "Earth is
humanity's cradle, but you're not meant to stay in your cradle forever." Since then the idea has
been a staple of science fiction, and thus become part of a consensus image of humanity's future.
Going to the stars is often regarded as humanity's destiny, even a measure of its success as a
species. But in the century since this vision was proposed, things we have learned about the
universe and ourselves combine to suggest that any endeavour to colonise the galaxy would be far
from plain sailing. In fact we may have to face the reality that such colonisation may not be
humanity's destiny after all.
The problem that tends to underlie all the other problems with the idea is the sheer size of the
universe, which was not known when people first imagined we would go to the stars. Tau Ceti, one
of the closest stars to us at around 12 light-years away, is 100 billion times farther from Earth than
our moon. A quantitative difference that large turns into a qualitative difference; we can't simply
send people over such immense distances in a spaceship, because a spaceship is too impoverished
an environment to support humans for the time it would take, which is on the order of centuries.
Instead of a spaceship, we would have to create some kind of space-traveling ark, big enough to
support a community of humans and other plants and animals in a fully recycling ecological
system.
On the other hand it would have to be small enough to accelerate to a fairly high speed, to shorten
the voyagers' time of exposure to cosmic radiation, and to breakdowns in the ark. Regarded from
some angles bigger is better, but the bigger the ark is, the proportionally more fuel it would have to
carry along to slow itself down on reaching its destination; this is a vicious circle that would appear
difficult to overcome. For that reason and others, smaller is better, but smallness creates problems
for resource metabolic flow and ecologic balance. Island biogeography suggests the kinds of
problems that would result from this miniaturization, but a space ark's isolation would be far more
complete than that of any island on Earth. The design imperatives for bigness and smallness may
cross each other, leaving any viable craft in a non-existent middle.
The biological problems that could result from the radical miniaturization, simplification and
isolation of an ark, no matter what size it is, now must include possible impacts on our
microbiomes. We are not autonomous units; about eighty percent of the DNA in our bodies is not
human DNA, but the DNA of a vast array of smaller creatures. That array of living beings has to
function in a dynamic balance for us to be healthy, and the entire complex system co-evolved on
this planet's surface in a particular set of physical influences, including Earth's gravity, magnetic
field, chemical makeup, atmosphere, insolation, and bacterial load. Traveling to the stars means
leaving all these influences, and trying to replace them artificially. It would be impossible to be
sure in advance what the viable parameters for these replacements would be, as the situation is too complex
to model. Any starfaring ark would therefore be an experiment, its inhabitants lab animals. The
first generation of the humans aboard might have volunteered to be experimental subjects, but
their descendants would not have. These generations of descendants would be born into a set of
rooms a trillion times smaller than Earth, with no chance of escape.
In this radically diminished environment, rules would have to be enforced to keep all aspects of the
experiment functioning. Reproduction would not be a matter of free choice, as the population in
the ark would have to maintain minimum and maximum numbers. Many jobs would be
mandatory to keep the ark functioning, so work too would not be a matter of choices freely made.
In the end, sharp constraints would force the social structure in the ark to enforce various norms
and behaviors. The situation itself would require the establishment of something akin to a
totalitarian state.
Of course sociology and psychology are harder fields to make predictions in, as humans are highly
adaptable. But history has shown that people tend to react poorly in rigid states and social systems.
Add to these social constraints permanent enclosure, exile from the planetary surface we evolved
on, and the probability of health problems, and the possibility for psychological difficulties and
mental illnesses seems quite high. Over several generations, it's hard to imagine any such society
staying stable.
Still, humans are adaptable, and ingenious. It's conceivable that all the problems outlined so far
might be solved, and that people enclosed in an ark might cross space successfully to a nearby
planetary system. But if so, their problems will have just begun.
Any planetary body the voyagers try to inhabit will be either alive or dead. If there is indigenous
life, the problems of living in contact with an alien biology could range from innocuous to fatal, but
will surely require careful investigation. On the other hand, if the planetary body is inert, then the
newcomers will have to terraform it using only local resources and the power they have brought
with them. This means the process will have a slow start, and take on the order of centuries, during
which time the ark, or its equivalent on the alien planet, would have to continue to function
without failures.
It's also quite possible the newcomers won't be able to tell whether the planet is alive or dead, as is
true for us now with Mars. They would still face one problem or the other, but would not know
which one it was, a complication that could slow any choices or actions.
So, to conclude: an interstellar voyage would present one set of extremely difficult problems, and
the arrival in another system, a different set of problems. All the problems together create not an
outright impossibility, but a project of extreme difficulty, with very poor chances of success. The
unavoidable uncertainties suggest that an ethical pursuit of the project would require many
preconditions before it was undertaken. Among them are these: first, a demonstrably sustainable
human civilization on Earth itself, the achievement of which would teach us many of the things we
would need to know to construct a viable mesocosm in an ark; second, a great deal of practice in an
ark orbiting our sun, where we could make repairs and study practices in an ongoing feedback
loop, until we had in effect built a successful proof of concept; third, extensive robotic explorations
of nearby planetary systems, to see if any are suitable candidates for inhabitation.
Unless all these steps are taken, humans cannot successfully travel to and inhabit other star
systems. The preparation itself is a multi-century project, and one that relies crucially on its first
step succeeding, which is the creation of a sustainable long-term civilization on Earth. This
achievement is the necessary, although not sufficient, precondition for any success in interstellar
voyaging. If we don't create sustainability on our own world, there is no Planet B.
Vocabulary exercise
1. endeavour (paragraph 1)
But in the century since this vision was proposed, things we have learned about the universe and
ourselves combine to suggest that any endeavour to colonise the galaxy would be far from plain
sailing.
2. plain sailing (paragraph 1)
But in the century since this vision was proposed, things we have learned about the universe and
ourselves combine to suggest that any endeavour to colonise the galaxy would be far from plain
sailing.
3. underlie (paragraph 2)
The problem that tends to underlie all the other problems with the idea is the sheer size of the
universe, which was not known when people first imagined we would go to the stars
4. a vicious circle (paragraph 3)
Regarded from some angles bigger is better, but the bigger the ark is, the proportionally more fuel
it would have to carry along to slow itself down on reaching its destination; this is a vicious
circle that would appear difficult to overcome.
5. enforced (paragraph 5)
In this radically diminished environment, rules would have to be enforced to keep all aspects of
the experiment functioning. Reproduction would not be a matter of free choice,
6. akin to (paragraph 5)
would force the social structure in the ark to enforce various norms and behaviors. The situation
itself would require the establishment of something akin to a totalitarian state.
7. outlined (paragraph 7)
Still, humans are adaptable, and ingenious. It's conceivable that all the problems outlined so far
might be solved, and that people enclosed in an ark might cross space successfully
Write your own sentences with the vocabulary
1. endeavour
2. plain sailing
3. underlie
4. a vicious circle
5. enforced
6. akin to
7. outlined
EXERCISE 60
The shredding of Banksy’s “Girl With Balloon”
at auction: Stunt or statement?
Summary
This article talks about the planned self-destruction of a piece of art by the artist Banksy directly
after it was sold at auction. It explains what happened and gives the possible reasons for why it was
done. It also explains that such acts are nothing new in the art world and asks whether such
actions are necessary.
The moment I love most in the video the graffiti artist Banksy has released of his latest art stunt is
when a bespectacled man with the well-groomed air of an art-world professional puts his hand to
his forehead in apparent disbelief at what he is seeing: a million quid being shredded. He looks
genuinely distraught that the revolution has reached Mayfair and that activists are about to storm
Sotheby's, where Banksy's framed picture Girl With Balloon has just mechanically self-destructed
shortly after going under the hammer for a little more than £1m.
If this moment of artistic terrorism really had been, as at least one member of the audience
appeared to think, the sign for all the collectors and dealers assembled at yet another big-selling
night in the art industry to be dragged out of the auctioneer's and shot, there would be VIP blood
in the gutters of New Bond Street. For Banksy put his artwork through the shredder at the climax
of the busiest week in the London art market, when international collectors fly in for the Frieze art
fair and its satellite parties, private views and purchases. "In the spirit of Frieze week, the October
contemporary art evening auction is led by a selection of outstanding works," enthused Sotheby's
about its sale. Apparently, it had no idea that one of these modern treasures was booby-trapped.
Yet, by the next morning, self-proclaimed market insiders were claiming to be the first to get
Banksy's joke. (Others say the stunt is a hoax.) One, Joey Syer, an online art broker, was offering
bullish "insight" to the media: "The auction result will only propel [Banksy's prices] further and,
given the media attention this stunt has received, the lucky buyer would see a great return on the
[£1.042m] they paid last night. This is now part of art history in its shredded state and we'd
estimate Banksy has added, at a minimum, 50% to its value, possibly as high as being worth £2m-
plus." Despite Syer offering no evidence for this claim, it got into the media. After all, such cynical
savoir-faire sounds plausible if you have followed the freakonomics of art. Of course Banksy
doubled the value of Girl With Balloon by destroying it. The art market always wins.
I beg to differ. I am not exactly Banksy's biggest fan. I walked around his anti-theme-park
Dismaland with a frown on my face, not because I was part of the performance, like the grim and
sulky greeters, but because I found it truly dismal. But the rush for knowing insiders to say
Banksy's art is more valuable now is beside the point. For once, an artist has genuinely pissed all
over the system that reduces art to nothing but a commodity. What happened at Sotheby's is
Banksy's greatest work. He has said something that needed to be said: art is being choked to death
by money. The market turns imagination into an investment and protest into decor for some
oligarch's house. The only real rebellion left is for works of art to destroy themselves the moment
they are sold.
Banksy's Million Quid Artwork Destroying Itself (as perhaps we should call this masterpiece of
radical performance) belongs to a tradition of destruction in art that is 100 years old. In 1917, a
porcelain urinal, titled Fountain and bearing the signature "R Mutt" in crudely daubed black paint,
was submitted to a New York art exhibition. Marcel Duchamp, the man behind the stunt, is often
seen as a dry, ironic wit whose "readymades" are dissected reverently as philosophical
conundrums, but that does an injustice to the anger and contempt in his gesture. To call a pissoir
Fountain was to urinate on high culture and that could not be a neutral gesture in 1917.
Duchamp was part of the dada movement. This deliberately reductive and primal movement (the
name imitates baby talk) was begun by pacifist German draft dodgers in exile in Switzerland in
1916 and spread to Berlin, Paris and more cities by the end of the first world war.
The dadaists hated the European culture of fine art and self-conscious sensitivity that could
slaughter its youth by putting them through the giant human shredder that was the western front.
All sides in the First World War claimed to be defending "civilisation". The dada generation spat
on that civilisation. In his 1919 work LHOOQ (which sounds like the French for "she's got a hot
arse"), Duchamp drew a moustache and small goatee on a reproduction of the Mona Lisa. He also
said he wanted to "use a Rembrandt as an ironing board".
The problem with the anti-art tradition that started with dada's violence against the very idea of
culture is that, over the past 100 years or so, it has been assimilated into the mainstream of
modern art. The sassy and slick smart alecs who are claiming that all Banksy has done is add value
to his work are the latest in a long line of art-world insiders who have turned dissidence into art
history. As an older man in the 60s, Duchamp was embraced by the establishment. Replicas were
made of his lost Fountain. There is one in Tate Modern today.
The Young British Artists saw the commercial potential of Duchamp's ideas. You could put a shark
in a tank and call it art, then turn it into money. The result is what Banksy's video of the crowd at
Sotheby's shows: an international community of the well-heeled spending their money on art that
has an aura of dadaist danger. As well as Banksy's Girl With Balloon, the auction included a
slashed white canvas by Lucio Fontana and a painting by Piero Manzoni, whose most notorious
masterpiece is a can labelled Merda d'Artista (Artist's Shit).
Looking at these lots, the game being played at the auction is not that subtle. On the one hand, the
collectors were being offered the thrill of anti-art. On the other, they were being sold nice paintings
to hang at home. Beauty with just a whiff of the urinal, the artist's merda, the street. Banksy's Girl
With Balloon seemed to fit this shallow Janus-faced aesthetic perfectly. He is an artist famous for
working in the street, painting, with the help of stencils, images of kissing coppers, cheeky rats and
flower-throwing anarchists on walls all over the world. Yet Sotheby's was selling a finely framed
version of one of his most famous graffiti images, literally domesticating his art by turning it into
a luxury painting, a modern Mona Lisa.
Perhaps they should have wondered why an artist known for rebellion would create something so
posh-looking. But no one guessed that Banksy had been studying modern art history. For hidden
inside his painting was a device inspired by one of the most stubbornly subversive offshoots of the
dada tradition. The Britain-based artist Gustav Metzger invented auto-destructive art at the start
of the 60s, just as Duchamp was being assimilated as glossy pop. In his most spectacular
demonstration of the idea, he "painted" a sheet of white nylon with acid in front of an audience on
the South Bank in London. The act of making this artwork also destroyed it.
Metzger had reasons to be angry. As a Jewish child in Nuremberg in the 30s, he witnessed some of
the Nazis' biggest rallies. He was saved by the Kindertransport, but lost his family in the
Holocaust. For him, auto-destructive art was part of a lifelong refusal to be part of the capitalist
system. One of his keenest disciples was a young art student who met him at Hornsey School of Art
and went on to become a rock star. Pete Townshend saw his regular smashing of guitars in Who
concerts as auto-destructive art.
Auto-destruction is one of the hardest acts for the commercial art world to assimilate. When
Michael Landy destroyed everything he owned as an artwork, it was a conscious farewell to the
commodity art of the Hirst generation to which he belongs. When the K Foundation burned a
million pounds, their gesture was so far outside the ethos of mainstream art that it was barely
recognised as art at all, more like a bloody stupid waste of money.
When the auctioneer's hammer came down at Sotheby's and a "modern masterpiece" began to eat
itself, the stage was set perfectly. Here was the art world's moment of truth, however theatrical
and multifaceted it may prove to be. Of course, this revolt will be assimilated. Of course, the
market will smile and the cash tills will go on ringing. Yet for once the commodity bit back. Art
turned on the hands that feed it.
In principle, all artists should do the same until the market is cut down to size and stops defining
the art of our time. Most won't, of course, for good reasons such as the need to make a living. Yet
Banksy has let a little light into a very claustrophobic room and proved he is the artist who
matters most right now.
Vocabulary exercise
1. distraught (paragraph 1)
a bespectacled man with the well-groomed air of an art-world professional puts his hand to his
forehead in apparent disbelief at what he is seeing: a million quid being shredded. He looks
genuinely distraught
2. stunt (paragraph 3)
The auction result will only propel [Banksy's prices] further and, given the media attention this
stunt has received, the lucky buyer would see a great return on the [£1.042m] they paid last night.
3. dismal (paragraph 4)
I walked around his anti-theme-park Dismaland with a frown on my face, not because I was part of
the performance, like the grim and sulky greeters, but because I found it truly dismal.
4. is beside the point (paragraph 4)
But the rush for knowing insiders to say Banksy's art is more valuable now is beside the point.
For once, an artist has genuinely pissed all over the system that reduces art to nothing but a
commodity.
5. assimilated (paragraph 7)
The problem with the anti-art tradition that started with dada's violence against the very idea of
culture is that, over the past 100 years or so, it has been assimilated into the mainstream of
modern art.
6. mainstream (paragraph 12)
When the K Foundation burned a million pounds, their gesture was so far outside the ethos of
mainstream art that it was barely recognised as art at all, more like a bloody stupid waste of
money.
7. multifaceted (paragraph 13)
When the auctioneer's hammer came down at Sotheby's and a "modern masterpiece" began to eat
itself, the stage was set perfectly. Here was the art world's moment of truth, however theatrical
and multifaceted it may prove to be.
Write your own sentences with the vocabulary
1. distraught
2. stunt
3. dismal
4. is beside the point
5. assimilated
6. mainstream
7. multifaceted
EXERCISE 61
Why a decline in the planet’s biodiversity is a
threat to us all
Summary
This article talks about the falling level of biodiversity (variety of different animal and plant life) on
Earth. It explains biodiversity's importance, details where the decline is happening and discusses
what impact this could have. It then briefly talks about what the causes of this fall are before
saying what can be done (or is being done) to stop it.
What is biodiversity?
It is the variety of life on Earth, in all its forms and all its interactions. If that sounds bewilderingly
broad, that's because it is. Biodiversity is the most complex feature of our planet and it is the most
vital. "Without biodiversity, there is no future for humanity," says Prof David Macdonald, at
Oxford University.
The term "biodiversity" was originally coined in 1985 and is a contraction of "biological diversity".
But the huge global biodiversity losses now becoming apparent represent a crisis equalling, or
quite possibly surpassing, climate change.
More formally, biodiversity comprises several levels, starting with genes, then individual
species, then communities of creatures and finally entire ecosystems, such as forests or coral reefs,
where life interplays with the physical environment. These myriad interactions have made Earth
habitable for billions of years.
A more philosophical way of viewing biodiversity is this: it represents the knowledge learned by
evolving species over millions of years about how to survive through the vastly varying
environmental conditions Earth has experienced. Seen like that, experts warn, humanity is
currently "burning the library of life".
Do animals and bugs really matter to me?
For many people living in towns and cities, wildlife is often something you watch on television. But
the reality is that the air you breathe, the water you drink and the food you eat all ultimately rely
on biodiversity. Some examples are obvious: without plants there would be no oxygen and without
bees to pollinate there would be no fruit or nuts.
Others are less obvious: coral reefs and mangrove swamps provide invaluable protection from
cyclones and tsunamis for those living on coasts, while trees can absorb air pollution in urban
areas. Others appear bizarre: tropical tortoises and spider monkeys seemingly have little to do
with maintaining a stable climate. But the dense, hardwood trees that are most effective in
removing carbon dioxide from the atmosphere rely on their seeds being dispersed by these large
fruit-eaters.
When scientists explore each ecosystem, they find countless such interactions, all honed by
millions of years of evolution. If undamaged, this produces a finely balanced, healthy system which
contributes to a healthy sustainable planet.
The sheer richness of biodiversity also has human benefits. Many new medicines are harvested
from nature, such as a fungus that grows on the fur of sloths and can fight cancer. Wild varieties of
domesticated animals and crops are also crucial as some will have already solved the challenge of,
for example, coping with drought or salty soils.
If money is a measure, the services provided by ecosystems are estimated to be worth trillions of
dollars, double the world's GDP. Biodiversity loss in Europe alone costs the continent about 3%
of its GDP, or €450m (£400m), a year.
From an aesthetic point of view, every one of the millions of species is unique, a natural work of art
that cannot be recreated once lost. "Each higher organism is richer in information than a
Caravaggio painting, a Bach fugue, or any other great work," wrote Prof Edward O Wilson, often
called the "father of biodiversity", in a seminal paper in 1985.
Just how diverse is biodiversity?
Mind-bogglingly diverse. The simplest aspect to consider is species. About 1.7 million species of
animals, plants and fungi have been recorded, but there are likely to be 8-9 million and possibly up
to 100 million. The heartland of biodiversity is the tropics, which teems with species. In 15
hectares (37 acres) of Borneo forest, for example, there are 700 species of tree, the same number
as the whole of North America.
Recent work considering diversity at a genetic level has suggested that creatures thought to be a
single species could in some cases actually be dozens. Then add in bacteria and viruses, and the
number of distinct organisms may well be in the billions. A single spoonful of soil (which
ultimately provides 90% of all food) contains 10,000 to 50,000 different types of bacteria.
The concern is that many species are being lost before we are even aware of them, or the role they
play in the circle of life.
How bad is it?
Very. The best studied creatures are the ones like us: large mammals. Tiger numbers, for
example, have plunged by 97% in the last century. In many places, bigger animals have already
been wiped out by humans; think dodos or woolly mammoths.
The extinction rate of species is now thought to be about 1,000 times higher than before humans
dominated the planet, which may be even faster than the losses after a giant meteorite wiped out
the dinosaurs 65m years ago. The sixth mass extinction in geological history has already begun,
according to some scientists. Lack of data means the "red list", produced by the International
Union for Conservation of Nature, has only assessed 5% of known species. But for the best known
groups it finds many are threatened: 25% of mammals, 41% of amphibians and 13% of birds.
Species extinction provides a clear but narrow window on the destruction of biodiversity: it is the
disappearance of the last member of a group that is by definition rare. But new studies are
examining the drop in the total number of animals, capturing the plight of the world's most
common creatures.
The results are scary. Billions of individual populations have been lost all over the planet, with the
number of animals living on Earth having plunged by half since 1970. Abandoning the normally
sober tone of scientific papers, researchers call the massive loss of wildlife a "biological
annihilation" representing a "frightening assault on the foundations of human civilisation".
What about under the sea?
Humans may lack gills but that has not protected marine life. The situation is no better and
perhaps even less understood in the two-thirds of the planet covered by oceans. Seafood is the
critical source of protein for more than 2.5 billion people but rampant overfishing has caused
catches to fall steadily since their peak in 1996 and now more than half the ocean is industrially
fished.
What about bugs? Don't cockroaches survive anything?
More than 95% of known species lack a backbone: there are about as many species in the
Staphylinidae family of beetles alone as there are total vertebrates, such as mammals, fish and
birds. Altogether, there are at least a million species of insect and another 300,000 spiders,
molluscs and crustaceans.
But the recent revelation that 75% of flying insects were lost in the last 25 years in Germany (and
likely elsewhere) indicates the massacre of biodiversity is not sparing creepy crawlies. And insects
really matter, not just as pollinators but as predators of pests, decomposers of waste and, crucially,
as the base of the many wild food chains that support ecosystems. "If we lose the insects then
everything is going to collapse," says Prof Dave Goulson of Sussex University, UK. "We are
currently on course for ecological Armageddon."
Even much-loathed parasites are important. One-third could be wiped out by climate change,
making them among the most threatened groups on Earth. But scientists warn this could
destabilise ecosystems, unleashing unpredictable invasions of surviving parasites into new areas.
What's destroying biodiversity?
We are, particularly as the human population rises and wild areas are razed to create farmland,
housing and industrial sites. The felling of forests is often the first step and 30m hectares - the area
of Britain and Ireland - were lost globally in 2016.
Poaching and unsustainable hunting for food is another major factor. More than 300 mammal
species, from chimpanzees to hippos to bats, are being eaten into extinction.
Pollution is a killer too, with orcas and dolphins being seriously harmed by long-lived industrial
pollutants. Global trade contributes further harm: amphibians have suffered one of the greatest
declines of all animals due to a fungal disease thought to be spread around the world by the pet
trade. Global shipping has also spread highly damaging invasive species around the planet,
particularly rats.
The hardest hit of all habitats may be rivers and lakes, with freshwater animal populations in these
collapsing by 81% since 1970, following huge water extraction for farms and people, plus pollution
and dams.
Could the loss of biodiversity be a greater threat to humanity than
climate change?
Yes: nothing on Earth is experiencing more dramatic change at the hands of human activity.
Changes to the climate are reversible, even if that takes centuries or millennia. But once species
become extinct, particularly those unknown to science, there's no going back.
At the moment, we don't know how much biodiversity the planet can lose without prompting
widespread ecological collapse. But one approach has assessed so-called "planetary boundaries",
thresholds in Earth systems that define a "safe operating space for humanity". Of the nine
considered, just biodiversity loss and nitrogen pollution are estimated to have been crossed, unlike
CO2 levels, freshwater use and ozone losses.
What can be done?
Giving nature the space and protection it needs is the only answer. Wildlife reserves are the
obvious solution, and the world currently protects 15% of land and 7% of the oceans. But some
argue that half the land surface must be set aside for nature. However, the human population is
rising and wildlife reserves don't work if they hinder local people making a living. The poaching
crisis for elephants and rhinos in Africa is an extreme example. Making the animals worth more
alive than dead is the key, for example by supporting tourism or compensating farmers for
livestock killed by wild predators.
But it can lead to tough choices. "Trophy hunting" for big game is anathema for many. But if the
shoots are done sustainably (only killing old lions, for example) and the money raised protects a
large swath of land, should it be permitted?
We can all help. Most wildlife is destroyed by land being cleared for cattle, soy, palm oil, timber
and leather. Most of us consume these products every day, with palm oil being found in many
foods and toiletries. Choosing only sustainable options helps, as does eating less meat, particularly
beef, which has an outsized environmental hoofprint.
Another approach is to highlight the value of biodiversity by estimating the financial value of the
ecosystem services provided as "natural capital". Sometimes this can lead to real savings. Over the
last 20 years, New York has spent $2bn protecting the natural watershed that supplies the city
with clean water. It has worked so well that 90% of the water needs no further filtering: building a
water treatment plant instead would have cost $10bn.
Locating the tipping point at which biodiversity loss moves from being a concern to an actual
ecological collapse is an urgent priority. Biodiversity is vast and research funds are small, but
speeding up analysis might help, from automatically identifying creatures using machine learning
to real-time DNA sequencing.
There is even an initiative that aims to create an open-source genetic database for all plants,
animals and single-cell organisms on the planet. It argues that by creating commercial
opportunities, such as self-driving car algorithms inspired by Amazonian ants, it could provide
the incentive to preserve Earth's biodiversity. However, some researchers say the dire state of
biodiversity is already clear enough and that the missing ingredient is political will.
A global treaty, the Convention on Biological Diversity (CBD), has set many targets. Some are
likely to be reached, for example protecting 17% of all land and 10% of the oceans by 2020. Others,
such as making all fishing sustainable by the same date, are not. The 196 nations that are members
of the CBD next meet in Egypt in November.
In his 1985 text, Prof E O Wilson concluded: "This being the only living world we are ever likely to
know, let us join to make the most of it." That call is more urgent than ever.
Vocabulary exercise
1. coined (paragraph 2)
The term "biodiversity" was originally coined in 1985 and is a contraction of "biological diversity".
2. teems (paragraph 11)
The heartland of biodiversity is the tropics, which teems with species. In 15 hectares (37 acres) of
Borneo forest, for example, there are 700 species of tree
3. plunged (paragraph 14)
Tiger numbers, for example, have plunged by 97% in the last century. In many places, bigger
animals have already been wiped out by humans
4. rampant (paragraph 18)
Seafood is the critical source of protein for more than 2.5 billion people but rampant overfishing
has caused catches to fall steadily since their peak in 1996 and now more than half the ocean is
industrially fished.
5. unleashing (paragraph 21)
But scientists warn this could destabilise ecosystems, unleashing unpredictable invasions of
surviving parasites into new areas.
6. tipping point (paragraph 32)
Locating the tipping point at which biodiversity loss moves from being a concern to an actual
ecological collapse is an urgent priority
7. dire (paragraph 33)
However, some researchers say the dire state of biodiversity is already clear enough and that the
missing ingredient is political will.
Write your own sentences with the vocabulary
1. coined
2. teems
3. plunged
4. rampant
5. unleashing
6. tipping point
7. dire
EXERCISE 62
Should we profile people in society to predict
behaviour?
Summary
This article discusses whether profiling people (assessing how likely people are to do, or not do,
certain types of actions based on characteristics they have) is a good or bad thing. It
explains how profiling is done, the impact it can have and whether it is ethical or effective to do.
Whether we know it or not, our lives are influenced by profiling in many ways. You may think it's
sensible, or that it's unfair… you may even be tempted to think that it's both at once.
Imagine you're a police superintendent in charge of security at a political rally at which the
president is speaking. You have information that someone may attempt to assassinate her. You
know nothing about the potential killer and as usual you're stretched for resources. Should the few
officers you have at your disposal give equal attention to all members of the crowd? Or would it
make more sense for them to concentrate more on men than on women? Might it be reasonable to
conclude that those who appear to be over 75 years old pose less of a threat?
Profiling is always in the news. Racial profiling in particular has been held partially responsible for
riots from the UK to the US to France.
Profiling is the practice of categorising people and predicting their behaviour on the basis of
particular characteristics. We're profiled all the time by businesses and insurance companies, for
example. Companies that agree to give us car insurance want to know what we do for a job, where
we live, our age and marital status. This information is a proxy, a clue to our lifestyle and
behaviour. It helps them assess the likelihood that we will be involved in accidents. A proxy is a
stand-in, a trait such as race, or sex, or religion, used as a short cut to judge something else.
A case in point where profiling is commonly used is car insurance. Insurers use the sex of the
driver to calculate the premium they will be charged (their analysis has identified that women are
generally safer drivers than men). But in the EU at least, that's no longer allowed (not that it seems
to have reduced the gap between male and female premiums.) The puzzle is that profiling with
certain proxies can seem at one and the same time both rational and unfair.
Of course, the belief that individuals within one group are more likely than others to have a certain
characteristic or have a predisposition to do or have a particular type of behaviour, may not always
be grounded in sound evidence. The view that one group is on average meaner with money, or
richer, or more disposed to dishonesty, may be based on ignorance or prejudice. But where there
are statistical differences between groups, it seems logical to act upon them. Is it really worth the
police stopping octogenarian women if they're hunting for criminals carrying knives?
The appeal of profiling is that it saves time and resources, says Tarun Khaitan, associate professor
in law at Oxford and Melbourne universities. Take an airline that wants to make sure its pilots
have 20-20 vision. "There is some statistical evidence that the eyesight of elderly people
deteriorates," he says. "So instead of the airline having to figure whether their pilots retain good
eyesight by testing everyone over 65, it may be cheaper to have a mandatory retirement age." Here
age is a proxy for good vision.
Some proxies will be tougher than others to access. A genetic test may be an accurate proxy for
predicting whether people will develop a certain disease, but it may be easier and cheaper to gather
information on less precise proxies, such as diet or smoking habits.
It's always important to interrogate the numbers, especially when using proxies such as sex and
religion. First, how big is the statistical difference? If 50.1% of women are linked to behaviour X,
and 49.9% of men, using sex as a proxy for X is going to be pretty useless. Second, how many false
negatives and false positives will there be? That is to say, how many threats will you miss if you
target only one group, and how many innocent people will come under suspicion?
Suppose it is overwhelmingly the case that a particular crime is committed by people from a
particular religious background. If nonetheless only 1% of people from that background are
implicated in that crime, the 99% end up being tarred with the same brush, despite being innocent.
The effect on those being profiled
Which brings us to the impact of profiling on the individuals being profiled. Tarun Khaitan says
that groups in a "socially and politically and economically vulnerable position" will perceive
profiling as "not just unfair but humiliating". He offers this example. If a person is profiled based
on their star sign (a Virgo or a Sagittarius and so on), they may regard that as eccentric and even
unjust. They probably won't feel it's demeaning (probably most would see it as ridiculous). But we
identify ourselves more closely with our ethnicity, religion, and sex, so when disadvantaged people
are profiled on the basis of these characteristics it tends to have a far more noxious effect.
Obviously the impact of profiling will depend upon what is at stake. If a person's job prospects are
affected by profiling, that really matters. If profiling only alters the likelihood of facing additional
scrutiny at airport security on your annual holiday, that matters a bit less. Frequency is a relevant
consideration too. Innocent African-American males who are constantly stopped and questioned
by police naturally feel a powerful sense of injustice.
Profilers should bear in mind that the policy may have one of two unintended consequences. It
could generate a vicious circle, entrenching the very pattern upon which it is based. For example,
members of one race may become alienated at constantly being stopped and searched, and some
innocent people within this racial group may be tempted into crime. If one group comes to believe
it is being targeted by the state, that's almost bound to undermine its commitment to abiding by
the state's rules.
A different effect is also possible. If would-be terrorists become aware that young men of Middle-
Eastern appearance are more closely inspected, then they could try to plant bombs or weapons on
those arousing the least suspicion: children or old women. Targeting individuals in particular
groups then becomes self-defeating.
Despite the pitfalls, profiling can work
Criminologists such as Bryanna Fox of the University of South Florida have used statistical
techniques to investigate property and violent crimes. An ex-FBI special agent, Fox subdivided
burglaries into various categories and analysed the characteristics of those convicted of
committing these crimes. For example, where burglaries were clearly sophisticated and
premeditated, the criminals tended to be older, male, white and with a long criminal history but
few arrests. Police departments that experimented with using her profiles solved over 300% more
burglaries compared to the departments that did not.
With that kind of success, profiling is not going to disappear. Indeed, in the digital age, as more
and more data becomes available for analysis, profiling in its myriad forms is likely to become ever
more prevalent.
But Tarun Khaitan warns us that "we should calculate the costs that racial profiling imposes on
already vulnerable groups alongside the efficiency savings that might accrue". He believes that the
benefits outweigh the costs only under exceptional circumstances.
Vocabulary exercise
1. disposal (paragraph 2)
You know nothing about the potential killer and as usual you're stretched for resources. Should the
few officers you have at your disposal give equal attention to all members of the crowd?
2. a case in point (paragraph 5)
A case in point (where profiling is commonly used) is car insurance. Insurers use the sex of
the driver to calculate the premium they will be charged (their analysis has identified that women
are generally safer drivers than men).
3. predisposition (paragraph 6)
the belief that individuals within one group are more likely than others to have a certain
characteristic or have a predisposition to do or have a particular type of behaviour
4. tarred with the same brush (paragraph 10)
If nonetheless only 1% of people from that background are implicated in that crime, the 99% end
up being tarred with the same brush, despite being innocent.
5. demeaning (paragraph 11)
If a person is profiled based on their star sign (a Virgo or a Sagittarius and so on), they may regard
that as eccentric and even unjust. They probably won't feel it's demeaning (probably most would
see it as ridiculous).
6. at stake (paragraph 12)
Obviously the impact of profiling will depend upon what is at stake. If a person's job prospects
are affected by profiling, that really matters. If profiling only alters the likelihood of facing
additional scrutiny at airport security on your annual holiday, that matters a bit less.
7. entrenching (paragraph 13)
It could generate a vicious circle, entrenching the very pattern upon which it is based. For
example, members of one race may become alienated at constantly being stopped and searched,
and some innocent people within this racial group may be tempted into crime.
Write your own sentences with the vocabulary
1. disposal
2. a case in point
3. predisposition
4. tarred with the same brush
5. demeaning
6. at stake
7. entrenching
EXERCISE 63
A review of Vincent LoBrutto’s biography of
Stanley Kubrick
Summary
This is a review of a biography of the late American film director Stanley Kubrick. Throughout this
negative review, the reviewer provides details of some of the things which are revealed in the book.
Whilst doing this, they also give their opinion about its different aspects and then about the book as
a whole.
There is probably no other director in the history of cinema who has received as many accolades
from his peers as Stanley Kubrick. But for a director so highly regarded, aside from anecdotes
from those who worked with him, there is surprisingly little really known about this notoriously
private man who rarely gave interviews and shunned the limelight. That’s why when I was made
aware of a new biography on this great man, I was eager to get my hands on a copy. But perhaps
the most telling revelations in this long, turgid and not very illuminating biography occur early on
in the volume, when the author Vincent LoBrutto tells us about the artists that Stanley Kubrick
admired or studied when he was young: the photographer Weegee, the jazz musician Gene Krupa,
the Russian cineaste Sergei Eisenstein and the writers Dostoyevsky, Kafka, Sartre and Camus.
In these artists' work can be discerned the roots of the mature Kubrick's films: his observant,
unforgiving eye, his virtuosic use of music, his innovative mastery of cinematic technique and his
Hobbesian vision of life as nasty, brutish and short.
The emotional sources for Kubrick's artistic vision, however, are not examined in this volume, nor
are his movies analyzed in a meaningful way. Instead, LoBrutto, a film historian and editor who
teaches at the School of Visual Arts in Manhattan, gives us reams and reams of stories and
reminiscences about Kubrick, culled from magazine articles, newspaper stories and interviews
with people who knew or worked with him.
Although it's clear that LoBrutto has done a prodigious amount of research, there is no critical
intelligence at work in this volume. In fact, the author writes as a film buff: someone who's
interested in all the minutiae of the filmmaker's life, from the type of camera lens he used to shoot
a particular scene to the clothes he wore on a particular day.
LoBrutto cites a numerologist's analysis of Kubrick's signature (she found him "to be a
perfectionist, with fears and anxieties"), and suggests a parallel between the famous bone, turning
end over end, in the prologue to "2001" and a pickle that the young Kubrick once flung in the air in
a fit of youthful exasperation.
There is little effort on LoBrutto's part to sort out rumor and speculation from fact. Instead, we are
given lengthy accounts of the experiences that various actors, writers and technicians had with
Kubrick – accounts that range from the flattering (Matthew Modine: "He's probably the most
heartfelt person I ever met") to the disgruntled (Kirk Douglas: "You don't have to be a nice person
to be extremely talented").
Although LoBrutto writes that his book is meant to "both shatter and inform the myths" that have
grown up around the reclusive director, this biography sheds little new light on "the legend of
Stanley Kubrick," that is, the popular image of him, in LoBrutto's words, as "an intense, cool,
misanthropic cinematic genius who obsesses over every detail, a man who lives a hermetic
existence, doesn't travel and is consumed with phobic neuroses."
If anything, the book actually fuels this simplistic myth. LoBrutto not only cites numerous
examples of Kubrick's eccentric behavior (like flying his own New York dentist over to England so
he would not have to see a new one) but also dwells, at enormous length, on the director's
obsessive perfectionism.
One of his assistants is quoted as saying that the director has precise requirements for everything
(like wanting memo pads to be exactly six inches by four), while one of his editors is quoted on his
callous disregard for anything not directly connected with his work. (When the editor's finger got
caught in the editing machine, Kubrick is said to have ignored the incident, arguing, "There's no
point in giving you sympathy after it's done.")
We're told about Kubrick's penchant for ordering actors to do take after take after take (up to 70 to
80 for each setup), and the fanatical research he does before making a film. In the case of "2001,"
he supposedly ordered up "every science fiction book ever written."
In the case of an unrealized project on Napoleon, several hundred books on the subject, including
19th-century English and French accounts, were read and dissected; the material was then broken
down into categories on everything from his food preferences to the weather on the day of a
specific battle.
For all of LoBrutto's similarly obsessive research methods (his apparent determination to give the
reader every fact and piece of gossip he can find on Kubrick), there is no wide-angle take on what
all of this might mean. What palpable effect does Kubrick's mania for detail have on his movies?
Where does his need for control come from, and why has his paranoia escalated, as LoBrutto
suggests, with each passing year?
For that matter, what are the roots of the dark, pessimistic view of mankind evinced in nearly all
his films, from "Paths of Glory" through "Dr. Strangelove" and "A Clockwork Orange"? Is it simply
a philosophical notion rooted in the director's youthful taste for existential novels and film noir, or
does it spring from some more personal experience of the world?
LoBrutto does not grapple with such questions in his book. His account of Kubrick's youth and
apprenticeship reads like a dry resume: a recitation of the Kubrick family's moves from one
address in the Bronx to another, repeated mentions of young Stanley's poor attendance record in
school and a plodding summary of the assignments he took as a still photographer for Look
magazine.
When it comes to Kubrick's films, LoBrutto proves an equally flat-footed tour guide, though the
die-hard movie buff can glean some interesting tidbits from this volume. One learns that Kubrick
removed a farcical pie fight from "Dr. Strangelove" because it did not jibe with the satiric tone of
the rest of the movie, and that the background voices in the "Hail, Crassus!" scene in "Spartacus" were
taken from a Michigan State-Notre Dame football game.
LoBrutto also makes some interesting comparisons between Kubrick's use of specific cinematic
devices (like long tracking shots, gliding camera movements and severely angled shots) and the
pioneering work of Max Ophuls and Orson Welles.
Such technical discussions, however, do not communicate the overall achievement of Kubrick's
movies. There is no real assessment of the themes that animate the filmmaker's work, no real
appreciation of his cinematic artistry. In the end, LoBrutto's book is less a full-fledged biography of
Kubrick than a biographical compendium of trivia about his life and work.
Vocabulary exercise
1. accolades (paragraph 1)
There is probably no other director in the history of cinema who has received as many accolades
from his peers as Stanley Kubrick. But for a director so highly regarded,
2. flattering (paragraph 6)
accounts that range from the flattering (Matthew Modine: "He's probably the most heartfelt
person I ever met") to the disgruntled (Kirk Douglas: "You don't have to be a nice person to be
extremely talented").
3. disgruntled (paragraph 6)
accounts that range from the flattering (Matthew Modine: "He's probably the most heartfelt
person I ever met") to the disgruntled (Kirk Douglas: "You don't have to be a nice person to be
extremely talented").
4. shatter (paragraph 7)
Although LoBrutto writes that his book is meant to "both shatter and inform the myths" that have
grown up around the reclusive director, this biography sheds little new light on "the legend of
Stanley Kubrick,"
5. sheds little new light on (paragraph 7)
Although LoBrutto writes that his book is meant to "both shatter and inform the myths" that have
grown up around the reclusive director, this biography sheds little new light on "the legend of
Stanley Kubrick,"
6. penchant for (paragraph 10)
We're told about Kubrick's penchant for ordering actors to do take after take after take (up to 70
to 80 for each setup), and the fanatical research he does before making a film.
7. grapple with (paragraph 14)
LoBrutto does not grapple with such questions in his book. His account of Kubrick's youth and
apprenticeship reads like a dry resume: a recitation of the Kubrick family's moves from one
address in the Bronx to another,
Write your own sentences with the vocabulary
1. accolades
2. flattering
3. disgruntled
4. shatter
5. sheds little new light on
6. penchant for
7. grapple with
EXERCISE 64
Is free trade between countries a good thing?
Summary
This article explains how free trade (where goods between countries are bought and sold freely) is
beneficial. It explains what its theoretical benefits are and some of the ways in which it can be
impeded. It also explains why protectionism (which aims to restrict the flow of
goods between countries) has become more common in recent years.
When I was growing up in the 70s, I would frequently see my grandmother sitting in her armchair
with a needle and thread, mending socks and other items of clothing which were in a state of
disrepair. Fast forward to the present day and the idea that anybody would spend their time
darning socks seems ludicrous. Why would you, when you can pop down to Marks and Spencer's
and buy a six-pack of new socks for 12 pounds?
Although there are many reasons to account for this change in behaviour, by far the most
important is cost. Over the last 30 years we have seen the relative cost of most products plummet.
And whilst advances in technology have indeed contributed to this fall, it is the free trade of goods
and services between countries which is arguably the most important reason.
Why is trade good?
Economists argue about a lot of things, yet many would probably agree on the benefits of free
trade, which generates wealth by allowing the free flow of goods across international borders,
without taxes and other such barriers. The argument runs that billions of people around the world
have been lifted out of poverty by the combined power of capitalism and free trade. We are taught
that the world’s most powerful nations spurred their advance by tearing down the castle walls of
protectionism during the latter half of the 19th century, ending centuries of beggar-thy-neighbour
economic nationalism and opening up new markets to boost the industrial revolution and drive
forward the development of the middle class.
In his seminal work The Wealth of Nations, Adam Smith taught countries to concentrate their
efforts on producing and selling goods in which they have an "absolute advantage" over their
trading partners. Cheaper labour costs give modern-day China an absolute advantage over most
western nations for manufacturing.
In the 19th century, David Ricardo took the idea further with the notion of comparative advantage.
Even if Portugal could produce both wine and cloth with less effort than England, he argued, it
would still pay each country to specialise in the good it produced relatively most efficiently,
handing Portugal a comparative advantage in wine and England one in cloth.
Both theories teach nations to focus their time on making and selling goods in which they have an
advantage over their rivals. And if there is free trade, all countries and consumers will benefit as
products are made and sold cheaper.
Classical economic theory does not, however, always work in practice, and the rules require all
nations to sing from the same hymn sheet. Unfettered capitalism across borders still creates
winners and losers, posing thorny social and political questions for policymakers where dry
economic models cannot help. There are thousands of ways nations can distort the playing field.
The most extreme might be the use of military intervention, although far more common are
subsidies for particular industries, government spending, the use of the legal system, bureaucracy
or tax. And it is this last category of which import tariffs are an example.
How do import tariffs work?
Tariffs are border taxes charged on foreign imports. Importers pay the charges at the point of entry
to the customs agency of the country or economic bloc imposing them. Rather than being used to
raise revenue, they are predominantly imposed to increase the price of foreign goods in order to
make domestic products comparatively cheaper, with the aim of encouraging domestic production
and protecting firms from global competition.
Economists mostly agree higher tariffs are counterproductive. While they can protect jobs, they
also tend to raise the price of goods for consumers and stifle innovation that could benefit the
economy.
In order to ensure a relatively level playing field, a large number of countries signed up to the
World Trade Organisation (WTO), where membership requires that the country agrees to keep
its tariffs within certain limits. The WTO also acts as the ultimate arbiter in any international
tariff disputes. There are currently about 160 nations (including the UK, US, Japan and Germany)
in the organisation. China joined in 2001 in a major moment for world trade and Russia became
the world’s last major economy to become a member, in 2012. At present, the member states
account for 96% of all world trade.
Besides using tariffs to protect domestic industries, countries often provide support to certain
sectors through state subsidies, or impose quotas restricting the volume of goods imported from
overseas. There are also non-tariff barriers; such as patent rules, health and safety regulations,
labour and environmental standards, and rules of origin (for example, parmesan cheese can only
come from northern Italy).
As a consequence of the WTO, not only have tariffs fallen, but most countries’ domestic rules and
regulations have become more aligned in recent decades, enabling greater levels of international
trade.
Do trade deficits matter?
Many economists would argue trade deficits are in this day and age an irrelevance. They cite the
examples of the US, the UK and Canada as countries which have consistently imported far more
than they have exported over the last 25 years or so. That being said, how a trade deficit is viewed
depends largely on the country which has it. Whilst a deficit is regarded as relatively harmless for
developed countries, this is not the case for nations in the developing world, where persistent
deficits are seen as a sign of structural weaknesses in their economies. Consequently, this reduces
economic confidence in those countries, making it both harder for them to attract inward
investment and more expensive to borrow money from international lenders.
There are other risks from reliance on imports over domestic production. National security is one:
should a country sacrifice the ability to produce the steel required for making tanks, for example?
America in recent years has used national security legislation to justify many of its tariffs.
The other risk is that imports support jobs overseas, rather than at home. Jobs! Jobs! Jobs! was
President Trump's rallying cry ahead of his election. Workers in industries competing most with
imports (typically in manufacturing) do tend to lose out, economists have found, while
employment shifts towards sectors less exposed to trade.
Without smooth transitions for struggling industries, or the safety net of the welfare state
(through jobseeker benefits, education and training), whole communities can be left bereft of work.
Britain during the 1980s is a classic example. Embracing the mantra of free trade, the government
of Margaret Thatcher chose to shut down the internationally uncompetitive UK mines and instead
import cheaper coal from South Africa and Argentina.
Is free trade always the answer?
Trade deals always create winners and losers. But while the choice is a matter for politics, these
decisions often come amid an onslaught of lobbying from powerful vested interests. The failed
Transatlantic Trade and Investment Partnership between the US and the EU is one recent
example, where corporate interests including the US private health industry wanted to expand to
new markets in Europe. The deal ultimately failed amid widespread public opposition.
There are fears trade deals benefit larger corporations already operating across international
borders, rather than smaller firms. Domestic producers can be squeezed out by global rivals with
huge economies of scale. The argument was perhaps best put by the political theorist Isaiah Berlin,
who noted that "freedom for the pike is death for the minnows".
However, international firms often support networks of smaller companies in their supply chains.
Greater trade barriers can make it more difficult for multinationals to operate across borders,
meaning they could relocate elsewhere, where it is easier for them to do so, directly and indirectly
affecting jobs and economic growth. After the gradual advance of globalisation in recent years,
rapidly unpicking the progress may cause severe short-term pain.
Economists contend that international competition stimulates greater innovation and
productivity, while warning protectionism can impede progress. The quantity and quality of Soviet
cars and other eastern bloc goods serves as one example, while the poor reputation of cars made by
the British motor industry during the 1970s might be another. Consumers have benefited as the
quality of goods has improved and prices have fallen.
Now and in the future
Given the benefits that free trade can in theory give, it would seem lunacy for any
country not to opt for this model. However, as already pointed out, in order to ensure that it works
it needs to be done on a relatively level playing field where all countries abide by the same rules.
And this is the crux of the problem, because this has never happened and probably never will.
But, very much like lying, when protectionism is done on a relatively minor scale, it
generally poses no major problem for the global economy. However, when it isn't, it does.
And this is one of the two main reasons (the other being a structural change in the global economy
with manufacturing output increasingly coming from the emerging economies) why we have seen
economic protectionism so blatantly resurface in recent years. And a large part of the blame for
this resurgence is down to one country: China. Even after entering the WTO, the country has
continued to engage in certain practices (like manipulating the value of its own currency to keep
its exports artificially cheap, and dumping steel onto the international market at loss-making
prices to put the competition out of business) in order to ensure its own industries flourish
at the expense of those in other countries, which it has done on a truly impressive scale.
Consequently, both of these have drawn a response from countries (predominantly America) which
find themselves in some ways negatively affected by them. However, every imposition of a new or
higher tariff invariably leads to retaliatory measures in the countries affected, leading to a
vicious circle where tariffs spiral ever higher and wider.
Whether this continues to escalate further in the future or some form of agreement can be found
by all countries to move back to a freer and fairer trade model is still debatable. It may require
something very dramatic to happen before it finally dawns on all the countries that an eye for an
eye leaves the world blind.
Vocabulary exercise
1. stifle (paragraph 9)
Economists mostly agree higher tariffs are counterproductive. While they can protect jobs, they
also tend to raise the price of goods for consumers and stifle innovation that could benefit the
economy.
2. aligned (paragraph 12)
As a consequence of the WTO, not only have tariffs fallen, but most countries’ domestic rules and
regulations have become more aligned in recent decades, enabling greater levels of international
trade.
3. bereft (paragraph 16)
Without smooth transitions for struggling industries, or the safety net of the welfare state
(through jobseeker benefits, education and training), whole communities can be left bereft of
work.
4. impede (paragraph 20)
Economists contend that international competition stimulates greater innovation and
productivity, while warning protectionism can impede progress.
5. abide by (paragraph 21)
in order to ensure that it works it needs to be done on a relatively level playing field where all
countries abide by the same rules. And this is the crux of the problem, because this has never
happened
6. the crux of (paragraph 21)
in order to ensure that it works it needs to be done on a relatively level playing field where all
countries abide by the same rules. And this is the crux of the problem, because this has never
happened
7. flourish (paragraph 23)
in order to ensure its own industries flourish at the expense of those in other countries, which
it has done on a truly impressive scale.
Write your own sentences with the vocabulary
1. stifle
2. aligned
3. bereft
4. impede
5. abide by
6. the crux of
7. flourish
EXERCISE 65
Is freshwater the biggest challenge we will
face this century?
Summary
This article is about the problems which we will encounter in the future due to reduced supplies of,
and access to, freshwater (water contained in ice, rivers, lakes etc...). It explains both what these
problems are and which factors are contributing to them. It also describes what can be done to
ensure that their impact is limited.
Water seems the most renewable of all the Earth's resources. It falls from the sky as rain, it
surrounds us in the oceans that cover nearly three-quarters of the planet's surface, and in the polar
ice caps and mountain glaciers. It is the source of life on Earth and quite possibly beyond: the
discovery of traces of water on Mars aroused excitement because it was the first indication that life
may have existed there.
The problem is that most of the Earth's water resources are as inaccessible as if they were on Mars,
and those that are accessible are unevenly distributed across the planet. Water is hard to transport
over long distances, and our needs are growing, both for food and industry. Everything we do
requires water, for drinking, washing, growing food, and for industry, construction and
manufacturing. With more than 7.5 billion people on the planet, and the population projected to
top 10 billion by 2050, the situation is set to grow more urgent.
Currently, 844 million people, about one in nine of the planet's population, lack access to clean,
affordable water within half an hour of their homes, and every year nearly 300,000 children under
five die of diarrhoea, linked to dirty water and poor sanitation. Providing water to those who need
it is not only vital to human safety and security, but has huge social and economic benefits too.
Children lose out on education and adults on work when they are sick from easily preventable
diseases. Girls in developing countries are worst off, as they frequently stop going to school at
puberty because of a lack of sanitation, and girls and women travelling miles to fetch water or
forced to defecate in the open are vulnerable to violence. Providing affordable water saves lives and
reduces the burden on healthcare, as well as freeing up economic resources. Every £1 invested in
clean water yields at least £4 in economic returns, according to the charity WaterAid.
It would cost just over £21bn a year to 2030, or 0.1% of global GDP, to provide water and hygiene
to all those who need it, but the World Bank estimates that the economic benefits would be $60bn
a year.
Is climate change making things worse?
Climate change is bringing droughts and heatwaves across the globe, as well as floods and sea level
rises. Pollution is growing, both of freshwater supplies and underground aquifers. The depletion of
those aquifers can also make the remaining water more saline. Fertilisers leaching nitrates into the
supplies can also make water unsuitable for drinking or irrigation.
Cape Town in South Africa provided a stark example of what can happen when water supplies
come under threat. For years the city was using more water than it could sustainably supply, and
attempts to curb wastage and distribute water supplies more equitably to rich and poor had fallen
short of what was needed. By late last year, a crisis point had been reached. The city's government
warned of an imminent day zero, when the water supply would simply run out. Taps would run
dry. There would be no more water.
In the event, day zero was narrowly averted, in part by public exhortations to use water more
efficiently, rationing, changes in practices such as irrigating by night and reusing "grey" water from
washing machines or showers, and eventually a new desalination plant.
Who is most at risk?
The poor are worst hit. Jonathan Farr, senior policy analyst at WaterAid, says: "Competing
demands for water means that those who are poorer or marginalised find it more difficult to get
water than the rich and powerful." Many governments and privatised water companies concentrate
their provision on wealthy districts, and prioritise agriculture and industry over poorer people,
while turning a blind eye to polluters and those who over-extract water from underground sources.
Sharing access to water equitably requires good governance, tight regulation, investment and
enforcement, all qualities in short supply in some of the world's poorest and most water-scarce
areas.
The number of water-scarce areas is increasing: Cape Town is just the beginning. A ground-
breaking new study, based on data from the Nasa Grace (Gravity Recovery and Climate
Experiment) satellites over a 14-year period, discovered 19 hotspots around the world where water
resources are being rapidly depleted, with potentially disastrous results. They include areas of
California, north-western China, northern and eastern India, and the Middle East. Overall, as
climate change scientists had predicted, areas of the world already prone to drought were found to
be getting drier, and areas that were already wet getting wetter.
The authors were uncompromising: the results showed that "water is the key environmental issue
of the century," they said.
Who controls water?
There is no global governance system for water. Water is managed at a local level, and often poorly
managed. The technology needed to help us use water efficiently and equitably exists, but often is
not implemented. "In many instances, proper management of known technology [such as pumps,
rainwater collectors, storage cisterns and latrines] rather than new technological solutions is
sufficient to ensure users receive adequate services," says Farr. "We have been solving the problem
of getting access to water resources since civilisation began. We know how to do it. We just need to
manage it."
For instance, he notes, in many remote parts of sub-Saharan Africa, "there may be sufficient
supplies of groundwater but there has not been enough investment in service delivery and service
management to ensure that people can access this water".
How can freshwater resources be better managed?
Some of the most effective ways of managing water resources are also the simplest. Plugging leaks
in pipes is a good example: ageing or poorly maintained infrastructure wastes vast quantities of
water. A dripping tap can leak 300 litres a year. In the UK, the Environment Agency has warned of
water shortages across the south-east of the country within a few years, if the 3bn litres a day
wasted through leaks (enough for the needs of 20 million people) continues.
Water meters for domestic users in developed countries have been controversial, because they can
penalise large families which have greater needs. But they provide a readily recognisable gauge to
give households more information on their usage, and encourage them not to waste water,
particularly as there are readily available technical fixes, from short flush toilets to spray taps and
shower heads.
Irrigation has enabled farmers even in arid regions to grow a wider variety of crops. Some methods
of irrigation are highly inefficient: in hot countries, water sprayed on crops evaporates before it
can reach the roots. An alternative is drip irrigation, a system of pipes that delivers water directly
to the roots of each plant, but this is also prone to wastage.
Traditional methods can also be usefully restored in many regions, adds Marc Stutter, of the James
Hutton Institute. He notes that in Rajasthan, in India, restoring traditional small dams called
johads enabled the periodic rains to be held before they dissipated across the land. The johads led
to "the miraculous revitalisation to a green landscape and the surface water returning".
Advances in sensor technology offer a new way forward. Field sensors, available for as little as $2 a
year, can monitor the moisture content in soil, letting farmers know whether irrigation is needed
and allowing them to calibrate the irrigation more finely than has previously been possible.
Science is also being brought to bear on the crops themselves. Plant biologists are breeding
varieties less prone to drought, through selective breeding, and in some cases using genetic
modification.
But science and technology can only go so far. As with most water issues, the biggest problem is
still governance and equity. Farmers will grow what they can to turn a profit, and many have little
alternative than to use scarce groundwater resources. Without strong governance, this can lead to
disaster as the depletion has a widespread effect on the whole local community.
What about floods?
Climate change will not only mean more droughts, but also more frequent floods. These can be
devastating to agriculture and cities, especially coastal cities already under threat from rising sea
levels and stronger storm surges. The World Bank estimates that the damage to cities from
flooding will top $1tn by 2050 if strong action is not taken to equip cities to cope with the
consequences.
Making the world more resilient to flooding involves more than just building walls and barriers
such as London's Thames Barrier, though these are still used. Increasingly, planners are finding
ways to "make space for water", and return to natural protections. For instance, in tropical areas
more than a fifth of the mangrove swamps that used to cling to the coastline have been destroyed,
cut down to make way for agriculture and aquaculture. Restoring mangroves produces many
benefits: they protect inland areas from sea level rises and storm flooding, and provide nurseries
for fish, increasing fishing yields. Mangrove restoration projects are now operating in countries
from Bangladesh and Indonesia, to Cote d'Ivoire and Suriname.
Flood plains and water meadows also provide natural water storage, with land that acts like a
sponge to soak up water, releasing it gradually over time. This can prove unpopular with farmers
who want to grow crops on such land, but payments from the public purse can offset the cost to
them. In the UK, for instance, projects are underway from Historic England and the National
Trust.
Floating houses are another idea that is taking off, from the Netherlands to south-east Asia. The
houses are built on floating platforms instead of foundations, but anchored to the sea or river bed,
and a wide variety of modern designs are now available. Projects are already under way as far
afield as Lagos and London's Docklands.
What next?
Sustainable development goal six from the UN concerns water, stating that safe water and
sanitation should be provided to all by 2030. But WaterAid's Farr notes that at current rates, some
countries will miss the deadline by centuries. World governments will meet at the UN this summer
to discuss the progress.
According to James Famiglietti, co-author of the Nasa Grace study, some of the areas most
vulnerable are "already past sustainability tipping points" as their major aquifers are being rapidly
depleted, in particular the Arabian peninsula, the north China plain, the Ogallala aquifer under the
great plains of the US, the Guarani aquifer in South America, the north-west Sahara aquifer system
and others. "When those aquifers can no longer supply water and some, like the southern half of
the Ogallala, may run out by 2050 where will we be producing our food and where will the water
come from?" he asks.
Vocabulary exercise
1. aroused (paragraph 1)
the discovery of traces of water on Mars aroused excitement because it was the first indication
that life may have existed there.
2. depletion (paragraph 5)
Pollution is growing, both of freshwater supplies and underground aquifers. The depletion of
those aquifers can also make the remaining water more saline.
3. curb (paragraph 6)
For years the city was using more water than it could sustainably supply, and attempts to curb
wastage and distribute water supplies more equitably to rich and poor had fallen short of what was
needed.
4. turning a blind eye (paragraph 8)
Many governments and privatised water companies concentrate their provision on wealthy
districts, and prioritise agriculture and industry over poorer people, while turning a blind eye to
polluters and those who over-extract water from underground sources.
5. prone to (paragraph 9)
Overall, as climate change scientists had predicted, areas of the world already prone to drought
were found to be getting drier, and areas that were already wet getting wetter.
6. yields (paragraph 21)
Restoring mangroves produces many benefits: they protect inland areas from sea level rises and
storm flooding, and provide nurseries for fish, increasing fishing yields.
7. anchored (paragraph 23)
The houses are built on floating platforms instead of foundations, but anchored to the sea or
river bed, and a wide variety of modern designs are now available.
Write your own sentences with the vocabulary
1. aroused
2. depletion
3. curb
4. turning a blind eye
5. prone to
6. yields
7. anchored
EXERCISE 66
The problems of relying on metrics to gauge
performance
Summary
This is an article which discusses the problems of relying solely on metrics/statistics to measure
performance and how this can damage organisations. In it, the author argues that metrics can be
easily manipulated and explains how this can lead to a lack of innovation within organisations.
More and more companies, government agencies, educational institutions and philanthropic
organisations are today in the grip of a new phenomenon. I've termed it 'metric fixation'. The key
components of metric fixation are the belief that it is possible and desirable to replace
professional judgment (acquired through personal experience and talent) with numerical
indicators of comparative performance based upon standardised data (metrics); and that the best
way to motivate people within these organisations is by attaching rewards and penalties to their
measured performance.
The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form
of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic
negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging
professionals to maximise the metrics in ways that are at odds with the larger purpose of the
organisation. If the rate of major crimes in a district becomes the metric according to which police
officers are promoted, then some officers will respond by simply not recording crimes or
downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the
metrics of success and failure are made public, affecting their reputation and income, some
surgeons will improve their metric scores by refusing to operate on patients with more complex
problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who
don't get operated upon.
When reward is tied to measured performance, metric fixation invites just this sort of gaming. But
metric fixation also leads to a variety of more subtle unintended negative consequences. These
include goal displacement, which comes in many varieties: when performance is judged by a few
measures, and the stakes are high (keeping one's job or getting a pay rise), people focus on
satisfying those measures, often at the expense of other, more important organisational goals that
are not measured. The best-known example is 'teaching to the test', a widespread phenomenon
that has distorted primary and secondary education in the United States since the adoption of the
No Child Left Behind Act of 2001.
Short-termism is another negative. Measured performance encourages what the US sociologist
Robert K Merton in 1936 called 'the imperious immediacy of interests… where the actor's
paramount concern with the foreseen immediate consequences excludes consideration of further
or other consequences'. In short, advancing short-term goals at the expense of long-range
considerations. This problem is endemic to publicly traded corporations that sacrifice long-term
research and development, and the development of their staff, to the perceived imperatives of the
quarterly report.
To the debit side of the ledger must also be added the transactional costs of metrics: the
expenditure of employee time by those tasked with compiling and processing the metrics in the
first place, not to mention the time required to actually read them. As the heterodox management
consultants Yves Morieux and Peter Tollman note in Six Simple Rules (2014), employees end up
working longer and harder at activities that add little to the real productiveness of their
organisation, while sapping their enthusiasm. In an attempt to staunch the flow of faulty metrics
through gaming, cheating and goal diversion, organisations often institute a cascade of rules, even
as complying with them further slows down the institution's functioning and diminishes its
efficiency.
Contrary to common sense belief, attempts to measure productivity through performance metrics
discourage initiative, innovation and risk-taking. The intelligence analysts who ultimately located
Osama bin Laden worked on the problem for years. If measured at any point, the productivity of
those analysts would have been zero. Month after month, their failure rate was 100 per cent, until
they achieved success. From the perspective of the superiors, allowing the analysts to work on the
project for years involved a high degree of risk: the investment in time might not pan out. Yet
really great achievements often depend on such risks.
The source of the trouble is that when people are judged by performance metrics they are
incentivised to do what the metrics measure, and what the metrics measure will be some
established goal. But that impedes innovation, which means doing something not yet established,
indeed that hasn't even been tried out. Innovation involves experimentation. And experimentation
includes the possibility, perhaps probability, of failure. At the same time, rewarding individuals for
measured performance diminishes a sense of common purpose, as well as the social relationships
that motivate co-operation and effectiveness. Instead, such rewards promote competition.
Compelling people in an organisation to focus their efforts on a narrow range of measurable
features degrades the experience of work. Subject to performance metrics, people are forced to
focus on limited goals, imposed by others who might not understand the work that they do. Mental
stimulation is dulled when people don't decide the problems to be solved or how to solve them,
and there is no excitement of venturing into the unknown because the unknown is beyond the
measurable. The entrepreneurial element of human nature is stifled by metric fixation.
Organisations in thrall to metrics end up motivating those members of staff with greater initiative
to move out of the mainstream, where the culture of accountable performance prevails. Teachers
move out of public schools to private and charter schools. Engineers move out of large
corporations to boutique firms. Enterprising government employees become consultants. There is
a healthy element to this, of course. But surely the large-scale organisations of our society are the
poorer for driving out staff most likely to innovate and initiate. The more that work becomes a
matter of filling in the boxes by which performance is to be measured and rewarded, the more it
will repel those who think outside the box.
Economists such as Dale Jorgenson of Harvard University, who specialise in measuring economic
productivity, report that in recent years the only increase in total-factor productivity in the US
economy has been in the information technology-producing industries. The question that ought to
be asked next, then, is to what extent the culture of metrics (with its costs in employee time,
morale and initiative, and its promotion of short-termism) has itself contributed to economic
stagnation?
Vocabulary exercise
1. in the grip of (paragraph 1)
More and more companies, government agencies, educational institutions and philanthropic
organisations are today in the grip of a new phenomenon.
2. at odds with (paragraph 2)
encouraging professionals to maximise the metrics in ways that are at odds with the larger
purpose of the organisation.
3. is tied to (paragraph 3)
When reward is tied to measured performance, metric fixation invites just this sort of gaming.
4. fixation (paragraph 3)
But metric fixation also leads to a variety of more subtle unintended negative consequences.
5. paramount (paragraph 4)
what the US sociologist Robert K Merton in 1936 called 'the imperious immediacy of interests…
where the actor's paramount concern with the foreseen immediate consequences excludes
consideration of further or other consequences'
6. pan out (paragraph 6)
From the perspective of the superiors, allowing the analysts to work on the project for years
involved a high degree of risk: the investment in time might not pan out.
7. repel (paragraph 9)
The more that work becomes a matter of filling in the boxes by which performance is to be
measured and rewarded, the more it will repel those who think outside the box.
Write your own sentences with the vocabulary
1. in the grip of
2. at odds with
3. is tied to
4. fixation
5. paramount
6. pan out
7. repel
EXERCISE 67
Should you use the carrot or the stick with your
children?
Summary
This article provides advice to parents on how to deal with their children and whether it is right to
use the carrot or the stick (rewards or punishments) to ensure they do or don't do things. It
explains why children do what they do and what are the best things to say and do in order to
connect with them and get them to behave well.
"I feel a sense of dread as bedtime rolls around. Here we go again."
A dad said this in our family therapy office one day, describing his son's pre-bed antics. The child
would go wild as bedtime approached, stubbornly ignoring his parents' directions and melting
down at the mention of pajamas. The parents felt frustrated and stumped.
They asked us a question we hear a lot: Should they set up a system to entice him with stickers and
prizes for good behavior (rewards)? Or sternly send him to time out and take away his screen time
when he acted this way (punishments)?
Many parents grew up with punishments, and it's understandable that they rely on them. But
punishments tend to escalate conflict and shut down learning. They elicit a fight or flight response,
which means that sophisticated thinking in the frontal cortex goes dark and basic defense
mechanisms kick in. Punishments make us either rebel, feel shamed or angry, repress our feelings,
or figure out how not to get caught. In this case, full-fledged 4-year-old resistance would be at its
peak.
So rewards are the positive choice then, right?
Not so fast. Rewards are more like punishment's sneaky twin. Families find them alluring
(understandably), because rewards can control a child momentarily. But the effect can wear off, or
even backfire: "How much do I get?" a client told us her daughter said one day when asked to pick
up her room.
Over decades, psychologists have suggested that rewards can decrease our natural motivation and
enjoyment. For example, kids who like to draw and are, under experimental conditions, paid to do
so, draw less than those who aren't paid. Kids who are rewarded for sharing do so less, and so
forth. This is what psychologists call the "overjustification effect": the external reward
overshadows the child's internal motivation.
Rewards have also been associated with lowering creativity. In one classic series of studies, people
were given a set of materials (a box of thumbtacks, a candle and book of matches) and asked to
figure out how to attach the candle to the wall. The solution requires innovative thinking: seeing
the materials in a way unrelated to their purpose (the box as a candle holder). People who were
told they'd be rewarded to solve this dilemma took longer, on average, to figure it out. Rewards
narrow our field of view. Our brains stop puzzling freely. We stop thinking deeply and seeing the
possibilities.
The whole concept of punishments and rewards is based on negative assumptions about children:
that they need to be controlled and shaped by us, and that they don't have good intentions. But we
can flip this around to see kids as capable, wired for empathy, cooperation, team spirit and hard
work. That perspective changes how we talk to children in powerful ways.
Rewards and punishments are conditional, but our love and positive regard for our kids should be
unconditional. In fact, when we lead with empathy and truly listen to our kids, they're more likely
to listen to us. Following are suggestions for how to change the conversation and change the
behavior.
Look Underneath
Kids don't hit their siblings, ignore their parents or have tantrums in the grocery store for no
reason. When we address what's really going on, our help is meaningful and longer lasting. Even
trying to see what's underneath makes kids less defensive, more open to listening to limits and
rules, and more creative in solving problems.
Instead of saying: Be nice to your friend and share, or no screen time later.
Say: Hmm, you're still working on sharing your new building set. I get it. Sharing is hard at
first, and you're feeling a little angry. Can you think of a plan for how to play with them
together? Let me know if you need help.
Crying, resistance and physical aggression may be the tip of the iceberg. Underneath could be
hunger, sleep deprivation, overstimulation, having big feelings, working on a developmental skill
or being in a new environment. If you think this way, it makes you a partner there to guide, rather
than an adversary there to control.
Motivate Instead of Reward
Motivation is great, when it has the underlying message: "I trust you and believe you want to
cooperate and help. We are a team." This is a subtle difference from dangling rewards, but it's a
powerful one.
Instead of saying: If you clean your room we can go to the park. You better do it, though, or no
park.
Say: When your room is clean, we'll go to the park. I can't wait. Let me know if you need some
help.
Help Instead of Punish
The idea of a punishment conveys the message: "I need to make you suffer for what you did." Many
parents don't really want to communicate this, but they also don't want to come off as permissive.
The good news is that you can hold limits and guide children, without punishments.
Instead of saying: You're not playing nicely on this slide so you're going to time out. How
many times do I have to tell you?
Say: You're feeling kind of wild, I can see that! I'm going to lift you off this slide because it's not
safe to play this way. Let's calm down somewhere.
Instead of saying: You were rude to me and used swear words. That's unacceptable. I'm taking
your phone away.
Say: Wow, you're really angry. I hear that. It's not O.K. with me that you use those words. We're
putting your phone away for now so you can have some space in your mind. When you're ready,
tell me more about what's bothering you. We'll figure out what to do together.
Engage the Natural Hard Worker
Humans are not naturally lazy (it's not a trait which we are born with), and especially not kids. We
like to work hard, if we feel like we're part of a team. Little kids want to be capable members of the
family, and they like to help if they know their contribution matters and isn't just for show. Let
them help in a real way from the time they are toddlers, rather than assuming they need to be
otherwise distracted while we do the work.
Have a family meeting to brainstorm all the daily tasks the family needs to get done. Ask for ideas
from each family member. Make a chart for the kids (or have them make their own), with a place
to note when tasks are completed.
In the case of the bedtime-averse child, when the parents looked under the surface, they made
progress. It turned out that he was overtired, so they let go of some scheduled activities and
protected more wind-down time in the evenings. When he started to get wound up, his mom
wrapped him in his bath towel and said he was her favorite burrito. She acknowledged that it was
hard for him when she had to work late: "Maybe you've felt sad I missed bedtime the last few
weeks. I know I have. Hey, can we read our favorite book tonight?" They made a chart listing each
step of his routine and asked for his input. Over time, he stopped resisting, and the tone at bedtime
went from dread to true connection and enjoyment.
No matter how irrational or difficult a moment might seem, we can respond in a way that says: "I
see you. I'm here to understand and help. I'm on your side. We'll figure this out together."
Vocabulary exercise
1. entice (paragraph 3)
Should they set up a system to entice him with stickers and prizes for good behavior (rewards)?
Or sternly send him to time out
2. elicit (paragraph 4)
But punishments tend to escalate conflict and shut down learning. They elicit a fight or flight
response
3. wear off (paragraph 6)
because rewards can control a child momentarily. But the effect can wear off, or even backfire
4. overshadows (paragraph 7)
This is what psychologists call the "overjustification effect": the external reward overshadows
the child's internal motivation.
5. the tip of the iceberg (paragraph 14)
Crying, resistance and physical aggression may be the tip of the iceberg. Underneath could be
hunger, sleep deprivation, overstimulation
6. conveys (paragraph 18)
The idea of a punishment conveys the message: "I need to make you suffer for what you did."
7. trait (paragraph 23)
Humans are not naturally lazy (it's not a trait which we are born with), and especially not kids. We
like to work hard, if we feel like we're part of a team.
Write your own sentences with the vocabulary
1. entice
2. elicit
3. wear off
4. overshadows
5. the tip of the iceberg
6. conveys
7. trait
EXERCISE 68
Stefan Zweig: The life of a citizen of the world
Summary
This article is a short biography of the Austrian author Stefan Zweig. Whilst talking about what
happened at key points of his life, it also explains how these shaped his beliefs about the world and
his writing.
Seventy-seven years ago, in February 1942, Europe's most popular author committed suicide in a
bungalow in the Brazilian town of Petrópolis, 10,000 km (6,200 miles) from his birthplace in
Vienna. In the year before his death, Stefan Zweig completed two contrasting studies: The World
of Yesterday: Memoirs of a European, an elegy for a civilisation now consumed by war, and Brazil:
Land of the Future, an optimistic portrait of a new world. The story of these two books, and of the
refugee who wrote them, offers a guide to the trap of nationalism and the trauma of exile.
Zweig was born in 1881 into a prosperous and cultured Jewish family in Vienna, capital of the
multi-ethnic Habsburg empire, where Austrians, Hungarians, Slavs and Jews, among many others,
co-existed. Their ruler was the polyglot Franz-Joseph I, who decreed in 1867 that "All races of the
empire have equal rights, and every race has an inviolable right to the
preservation and use of its own nationality and language".
Franz-Joseph was a stiff-necked autocrat, and his reign should not be romanticised, but it
provided Zweig with a template of cultural plurality at a time when Europe was consuming itself in
nationalism. His biographer George Prochnik notes that Zweig called for the foundation of an
international university, with branches in every major European capital and a rotating exchange
programme that would expose young people to other ethnicities and religions.
Zweig began to write The World of Yesterday after leaving Austria in 1934, anticipating the
Nazification of his homeland. He completed the first draft in New York in summer 1941, and
posted the final version, typed by his second wife Lotte Altmann, to his publisher the day before
their joint suicide. By then, the Habsburg empire had "vanished without trace", he writes, and
Vienna was "demoted to the status of a German provincial town". Zweig became stateless: "So I
belong nowhere now, I am a stranger or at the most a guest everywhere".
Zweig's memoir is illuminating in its portrait of the disorienting nature of exile. In the cities in
which Zweig had been celebrated, his books were now burnt; the golden era of "security and
prosperity and comfort" had given way to revolution, economic instability and nationalism, "the
ultimate pestilence that has poisoned the flower of our European culture". Time itself was
ruptured: "all the bridges are broken between today, yesterday and the day before yesterday".
Without a trace
One of Zweig's greatest anxieties was the loss of his linguistic home. He expressed "a secret and
tormenting shame" that Nazi ideology was "conceived and drafted in the German language". Like
the poet Paul Celan, who committed suicide in Paris, Zweig felt that the language of Schiller,
Goethe and Rilke had been occupied by Nazism, and irredeemably deformed. After moving to
England, he felt "imprisoned in a language, which I cannot use".
In The World of Yesterday, Zweig describes the ease of borderless travel prior to 1914, when one could visit
India and the US without the need for a passport or visa, a situation inconceivable to the interwar
generation. Now he, like all refugees, faced the humiliation of negotiating an unwieldy
bureaucracy. Zweig described his intense "Bureauphobia" as immigration officials demanded ever
more proof of identity, and he joked to a fellow refugee that his job description was "Formerly
writer, now expert in visas".
As Hitler's forces spread across Europe, Zweig moved from his lodging in Bath in the UK to
Ossining, New York. There he was almost unknown to all but his fellow refugees, who lacked his
connections and material comforts, and frequently appealed to his legendary generosity. Zweig
never felt at home in the US (he regarded Americanisation as the second destruction of European
culture, after World War One) and hoped to return to Brazil, which enchanted him during a
lecture tour in 1936.
Brazil: Land of the Future is a lyrical celebration of a nation whose beauty and generosity
profoundly impressed Zweig. He was surprised and humbled by the country, and admonished
himself for his ignorance and "European arrogance". Zweig outlines Brazil's history, economy,
culture and geography, but the real insight of the book comes from the perspective he gains about
his own continent.
Brazil becomes, in Zweig's description, everything he would like Europe to be: sensual, intellectual,
tranquil and averse to militarism and materialism. (He even claims that Brazilians lack the
European passion for sport, a bizarre assertion, even back in 1941). Brazil is free of Europe's "race
fanatics", its "frenzied scenes and mad ecstasies of hero-worship", its "foolish nationalism and
imperialism", its "suicidal fury".
In its cadences and colours, Brazil was radically different from Zweig's repressed image of
Habsburg Vienna, but the beauty of its hybrid identity seemed to vindicate his outlook. In Brazil,
the descendants of African, Portuguese, German, Italian, Syrian and Japanese immigrants mixed
freely: "all these different races live in fullest harmony with each other". Brazil teaches 'civilised'
Europe how to be civilised: "Whereas our old world is more than ever ruled by the insane attempt
to breed people racially pure, like race-horses and dogs, the Brazilian nation for centuries has been
built upon the principle of a free and unsuppressed miscegenation... It is moving to see children of
all colours (chocolate, milk, and coffee) come out of their schools arm-in-arm… There is no
colour-bar, no segregation, no arrogant classification... for who here would boast of absolute racial
purity?"
'Paradise'
This paean proved hugely popular with the public, and thousands of Brazilians attended Zweig's
lectures, while his daily itinerary was printed in every major newspaper. But the book was
lambasted by critics: Prochnik notes that, for three days in a row, Brazil's leading newspaper
published withering reviews, rebuking Zweig for ignoring the country's industrial and modernist
innovations.
More controversial was Zweig's fulsome praise for Brazil's dictator, Getúlio Vargas. In 1937, Vargas
had declared the Estado Novo (New State), inspired by authoritarian rule in Portugal and Italy.
Vargas shut down Brazil's congress and imprisoned left-wing intellectuals, some of whom assumed
that Zweig had been paid for his praise, or at least offered a visa. Vargas' government had curtailed
Jewish immigration on racial grounds but made an exception for Zweig, due to his fame.
This troubling episode reveals Zweig's political naivety. A pacifist and conciliator by nature, Zweig
feared inciting hostility at a crucial moment (Vargas finally sided with the Allies in January 1942).
Seeking seclusion, Stefan and Lotte ensconced themselves in the elegant former German
settlement of Petrópolis, 40 miles (64 km) outside Rio.
"It is Paradise", wrote Zweig of the lush Alpine landscape, which "seems to be translated from the
Austrian into a tropical language". Zweig sought to forget his old books and friendships, and seek
"inner freedom". But at Carnival in Rio, he learned of Nazi advances in the Middle East and Asia,
and it filled him with both dread and a sense of impending doom. Zweig felt he could never be free,
or free from fear. "Do you honestly believe that the Nazis will not come here?" he wrote. "Nothing
can stop them now."
Zweig believed in a world beyond borders, but he became defined by them: "My inner crisis
consists in that I am not able to identify myself with the me of my passport, the self of exile". This
haunted Zweig ("We are just ghosts or memories"), and he wrote in his suicide note of being
"exhausted by long years of homeless wandering". Stefan and Lotte shared this resignation: "We
have no present and no future… We decided, bound in love, not to leave each other".
In Petrópolis, I visited Zweig's bungalow, which now serves as an "active museum", according to
Tristan Strobl, who works there on national service as an Austrian Holocaust Memorial Servant.
He showed me an interactive display of all the refugees that came to Brazil between 1933 and 1945,
highlighting their contributions. "This period was such a loss for the intellectual life of Europe",
says Tristan, "but for Brazil and the other countries that received these exiles, it was hugely
positive". The darkest decade of the old world brought light to the new.
Vocabulary exercise
1. decreed (paragraph 2)
Their ruler was the polyglot Franz-Joseph I, who decreed in 1867 that "All
races of the empire have equal rights, and every race has an inviolable right to the preservation and
use of its own nationality and language".
2. demoted (paragraph 4)
By then, the Habsburg empire had "vanished without trace", he writes, and Vienna was "demoted
to the status of a German provincial town"
3. enchanted (paragraph 8)
Zweig never felt at home in the US (he regarded Americanisation as the second destruction of
European culture, after World War One) and hoped to return to Brazil, which enchanted him
during a lecture tour in 1936.
4. lambasted (paragraph 12)
But the book was lambasted by critics: Prochnik notes that, for three days in a row, Brazil's
leading newspaper published withering reviews, rebuking Zweig for ignoring the country's
industrial and modernist innovations.
5. rebuking (paragraph 12)
But the book was lambasted by critics: Prochnik notes that, for three days in a row, Brazil's leading
newspaper published withering reviews, rebuking Zweig for ignoring the country's industrial and
modernist innovations.
6. curtailed (paragraph 13)
Vargas' government had curtailed Jewish immigration on racial grounds but made an
exception for Zweig, due to his fame.
7. dread (paragraph 15)
But at Carnival in Rio, he learned of Nazi advances in the Middle East and Asia, and it filled him
with both dread and a sense of impending doom. Zweig felt he could never be free, or free from
fear.
Write your own sentences with the vocabulary
1. decreed
2. demoted
3. enchanted
4. lambasted
5. rebuking
6. curtailed
7. dread
EXERCISE 69
The evolution of science and thought
throughout the ages part 1
Summary
This is the first of two parts of an article on science and rational thought. This part explains the
emergence of the two. It goes through the factors and people in history which led this to happen. It
also states what they replaced and explains how both our knowledge and understanding of the
world has changed progressively throughout history through new discoveries and theories and
what the consequences of these have been on society. It ends by defining what science is.
When Einstein made the great conceptual leap that changed physics and with it the understanding
of the fundamental nature of matter and the way the universe worked, he said that it came to him
as if in a dream. He saw himself riding on a beam of light and concluded that if he were to do so,
light would appear to be static. This concept was against all the laws of physics at the time, and it
brought Einstein to the realisation that light was the one phenomenon whose speed was constant
under all conditions and for all observers. This led him directly to the concept of relativity.
Einstein's dreamlike experience is echoed by other descriptions of the same kind of event. August
Kekulé, the discoverer of the benzene ring, which typifies the mechanism or structure by which
groups of atoms join to form molecules that can be added to other molecules, wrote of gazing into
the fire and seeing in the flames a ring of atoms like a serpent eating its own tail. Newton is
supposed to have had his revelation when watching an apple fall to earth. Archimedes, so the story
goes, leapt out of his bath crying 'Eureka!' as he realised the meaning of displacement. Gutenberg
described the idea of the printing press as 'coming like a ray of light'. Each of them experienced the
flash of insight that comes at the moment of discovery.
This act of mystical significance in which man uncovers yet another secret of nature is at the very
heart of science. Through discovery man has broadened and deepened his control over the
elements, explored the far reaches of the solar system, laid bare the forces holding together the
building-blocks of existence. With each discovery the condition of the human race has changed in
some way for the better, as new understanding has brought more enlightened modes of thought
and action, and new techniques have enhanced the material quality of life. Each step forward has
been characterised by an addition or refinement to the body of knowledge which has changed the
view of society regarding the universe as a whole. As the knowledge changed, so did the view.
With the arrival in northern Europe in the twelfth century of the Greek and Arab sciences and the
logical system of thought contained in the writings of Aristotle, saved from loss in the Muslim
texts, the mould in which life had been cast for at least seven hundred years broke. Before the texts
arrived man's view of life and the universe was unquestioning, mystical, passive. Nature was
transient, full of decay, ephemeral, not worth investigation. The truth lay not in the world around,
which decomposed, but in the sky, where the stars which wheeled in eternal perfection were the
divine plan written in light. If man looked for inspiration at all he looked backwards, to the past, to
the work of giants. The new Arab knowledge changed all this.
Whereas with St Augustine man had said, 'Credo ut intelligam' (I come to understanding only
through belief), he now began to say, 'Intelligo ut credam' (belief can come only through
understanding). New skills in the logical analysis of legal texts led to a rational, scholastic system
of thought which subjected nature to examination.
The new, logical approach encouraged empiricism. Man's individual experience of the world was
now considered valuable. As the questioning grew, stimulated by the flood of information arriving
from the Arab world, knowledge became institutionalized with the establishment of the European
universities, where students were taught to think investigatively, measure what they saw and
record it. The first tentative steps towards science were taken by Theodoric of Freiburg and Roger
Bacon. Man had become a rational thinker, confident and above all forward-looking.
In the middle of the fifteenth century a German goldsmith called Gutenberg superseded memory
with the printing press. In the earlier, oral world which the press helped to destroy, daily life had
been intensely parochial. Knowledge and awareness of the continuity of social institutions had
rested on the ability of the old to recall past events and customs. Elders were the source of
authority. The need for extensive use of memory made poetry the carrier of most information, for
merchants as much as for university students. In this world all experience was personal: horizons
were small, the community was inward-looking. What existed in the outside world was a matter of
hearsay.
Printing brought a new kind of isolation, as the communal experience diminished. But the
technology also brought greater contact with the world outside. The rate of change accelerated.
With printing came the opportunity to exchange information without the need for physical
encounter. Above all, indexing permitted cross-reference, a prime source of change. The 'fact' was
born, and with it came specialisation.
The Copernican revolution brought a fundamental change in the attitude to nature. The
Aristotelian cosmos it supplanted had consisted of a series of concentric crystal spheres, each
carrying a planet, while the outermost carried the fixed stars. Observation had shown that the
heavenly bodies appeared to circle the earth unceasingly and unchangingly, so Aristotle made
them perfect and incorruptible, in contrast to earth, where things decayed and died. Natural
terrestrial motion was rectilinear, because objects fell straight to earth. In the sky all motion was
circular.
The two forms of existence, earthly and celestial, were incommensurable. Everything that
happened in the cosmos was initiated by the Prime Mover, God, whose direct intervention was
necessary to maintain the system. At the centre of it all was the earth and man, fashioned by God
in His own image.
Copernicus shattered this view of the cosmos. He placed the earth in a solar orbit and opened the
way to an infinite universe. Man was no longer the centre of all. The cosmic hierarchy that had
given validity to the social structure was gone. Nature was open to examination and was
discovered to operate according to mathematical laws. Planets and apples obeyed the same force of
gravity; Newton wrote equations that could be used to predict behaviour. Modern science was
born, and with it the confident individualism of the modern world. In a clockwork universe we now
held the key.
In the eighteenth century the world found a new form of energy which gave us the ability to change
the physical shape of the environment and released us from reliance on the weather. Until then, all
life had been dependent on agricultural output. Land was the fundamental means of exchange and
source of power. Society was divided into small agricultural or fishing communities in which the
relationship between worker and master was patriarchal. Workers owed labour to their master,
who was in turn responsible for their welfare. People consumed what they produced. Most
communities were self-sufficient, while political power lay in the hands of those who owned the
most land. Populations rose and fell according to the effect of weather on crops, and life took the
form of cycles of feast and growth alternating with starvation and high death rates.
This self-balancing structure was radically changed by the introduction of steam power. Society
became predominantly urban. Relationships were defined in terms of cash. The emergence of
industrial capitalism brought the first forms of class struggle as the new means of production
generated material wealth and concentrated it in the hands of the entrepreneurial few.
Consumerism was born of mass-production, as were the major ideological and political divisions
of the modern world.
Before the early years of the nineteenth century the nature of disease was unknown, except as a list
of symptoms, each of which was the manifestation of the single 'disease' that attacked each body
separately and produced individual effects. In this situation the doctor treated the patient as the
patient dictated. Each practitioner used idiosyncratic remedies, all of which were claimed to be the
panacea for all forms of the disease.
The rise of surgeons to positions of responsibility during the wars of the French Revolution and the
use of recently developed probability theory combined to produce a new concept of disease as a
localised phenomenon. Statistical surveys established the nature and course of the disease and the
efficacy of treatment. In the new medical practice the bedside manner gave way to hospital
techniques and a consequent loss of involvement on the part of the patient in the diagnosis and
treatment of his ailment.
As medical technology advanced it became unnecessary to consult the patient at all. Information
on the nature of his illness was collected at first without his active participation, and later without
his knowledge or understanding. Along with these changes came the great medical discoveries of
the nineteenth century and dramatic improvements in personal and public health. By the end of
the century the doctor had assumed his modern role of unquestioned and objective arbiter.
Patients had become numbers.
The biblical version of history reigned until the middle of the nineteenth century. The six days of
Creation and the garden of Eden were regarded as matters of historical fact. The age of the earth
was established by biblical chronology at approximately six thousand years. The Bible was also the
definitive text of geological history. The flood was an event which accounted for the discovery of
extinct organisms. The purpose of natural history was only to elaborate God's Grand Design.
Taxonomy, the listing and naming of all parts of nature, was the principal aim of this endeavour.
The patterns which these lists revealed would form God's original plan, unchanged since Creation.
The discovery of more fossils as well as geological evidence of a hitherto unsuspected span of
history led to the theory of evolution. The cosmic view became a materialist one. Man, it seemed,
was made of the same stuff as the rest of nature. It was an accident of circumstance, rather than
purposeful design, which ensured survival. The universe was in constant change. Progress and
optimism became the new watchwords. Man, like the rest of nature, could be improved because
society obeyed biological evolutionary laws. The new discipline of sociology would study and apply
these laws.
From the Middle Ages to the end of the nineteenth century the cosmological view had changed
only once, as the Aristotelian system gave way to Newton's clockwork universe. All objects were
now seen to obey the law of gravity. Time and space were universal and absolute. All matter moved
in straight lines, affected only by gravity or impact.
With the investigation of the electromagnetic phenomenon, Newton's world fell apart. The new
force curved; it took time to propagate through space. The universe was a structure based on
probability and statistics, an uncertain cosmos. Absolutes no longer existed. Quantum mechanics,
relativity, electronics and nuclear physics emerged from the new view.
In the light of the above we would appear to have made progress. We have advanced from magic
and ritual to reason and logic; from superstitious awe to instrumental confidence; from localised
ignorance to generalised knowledge; from faith to science; from subsistence to comfort; from
disease to health; from mysticism to materialism; from mechanistic determinism to optimistic
uncertainty. We live in the best of all possible worlds, at this latest stage in the ascent of man. Each
of us has more power at a fingertip than any Roman emperor. Of the scientists who gave us that
power, more are alive today than in the whole of history. It seems that, barring mishaps and
temporary setbacks, the way ahead lies inevitably onward and upward towards even further
discovery and innovation, as we draw closer to the ultimate truths of the universe that science can
reveal.
The generator of this accumulation of knowledge over the centuries, science, seems at first glance
to be unique among mankind's activities. It is objective, making use of methods of investigation
and proof that are impartial and exacting. Theories are constructed and then tested by experiment.
If the results are repeatable and cannot be falsified in any way, they survive. If not, they are
discarded. The rules are rigidly applied. The standards by which science judges its work are
universal. There can be no special pleading in the search for the truth: the aim is simply to discover
how nature works and to use that information to enhance our intellectual and physical lives. The
logic that directs the search is rational and ineluctable at all times and in all circumstances. This
quality of science transcends the differences which in other fields of endeavour make one period
incommensurate with another, or one cultural expression untranslatable in another context.
Science knows no contextual limitations. It merely seeks the truth.
Vocabulary exercise
1. tentative (paragraph 6)
The first tentative steps towards science were taken by Theodoric of Freiburg and Roger Bacon.
Man had become a rational thinker, confident and above all forward-looking.
2. rested on (paragraph 7)
Knowledge and awareness of the continuity of social institutions had rested on the ability of the
old to recall past events and customs. Elders were the source of authority.
3. supplanted (paragraph 9)
The Copernican revolution brought a fundamental change in the attitude to nature. The
Aristotelian cosmos it supplanted had consisted of a series of concentric crystal spheres, each
carrying a planet
4. emergence (paragraph 13)
The emergence of industrial capitalism brought the first forms of class struggle as the new means
of production generated material wealth and concentrated it in the hands of the entrepreneurial
few.
5. hitherto (paragraph 18)
The discovery of more fossils as well as geological evidence of a hitherto unsuspected span of
history led to the theory of evolution.
6. awe (paragraph 21)
In the light of the above we would appear to have made progress. We have advanced from magic
and ritual to reason and logic; from superstitious awe to instrumental confidence; from localised
ignorance to generalised knowledge;
7. barring (paragraph 21)
It seems that, barring mishaps and temporary setbacks, the way ahead lies inevitably onward and
upward towards even further discovery and innovation, as we draw closer to the ultimate truths of
the universe that science can reveal.
Write your own sentences with the vocabulary
1. tentative
2. rested on
3. supplanted
4. emergence
5. hitherto
6. awe
7. barring
EXERCISE 70
The evolution of science and thought
throughout the ages part 2
Summary
This second part of a two part article on the evolution of science and thought focuses
predominantly on what shapes scientific thought. It explains how we perceive and interpret the
world around us and how this and beliefs have an impact on the practice of science. The author
illustrates this by showing how scientific understanding has changed throughout the ages. It ends
by saying what we can learn from history about science.
But which truth? At different times in the past, reality was observed differently. And different
societies coexisting even in the modern world also have different structures of reality. Within those
structures, past and present, forms of behaviour reveal the cultural idiosyncrasy of a particular
geographical or social environment. Eskimos have a large number of words for 'snow'. South
American gauchos describe horse-hides in more subtle ways than any other nationality can. The
personal space of an Arab, the closest distance he will permit between himself and a stranger, is
much smaller than that of a Scandinavian.
Even at the individual level, perceptions of reality are unique and autonomous. Each one of us has
our own mental structure of the world by which we may recognise new experiences. In a world
today so full of new experiences, this ability is necessary for survival. But by definition, the
structure also provides the user with hypotheses about events before they are experienced. The
events then fit the hypothesis, or are discarded as being unrecognisable and without meaning.
Without the structure, in other words, there can be no reality.
This is true at the basic neurophysiological level. Visual perception consists of energetic particles
bouncing off an object or coming from a light source to strike the rods and cones in the retina of
the eye. The impact releases a chemical which starts a wave of depolarisation along the neurons
that form graduated networks behind the eye. The signal is routed along the optic nerve to the
brain. At this point it consists merely of a complex series of changes in electrical potential.
A very large number of these signals arrive in the visual field of the brain, where the object is 'seen'.
It is at this point that the object first takes on an identity for the brain. It is the brain which sees,
not the eye. The pattern of signals activates neurons whose job it is to recognise each specific
signal. The cognition or comprehending of the signal pattern as an object occurs because the
pattern fits an already existing structure. Reality, in one sense, is in the brain before it is
experienced, or else the signals would make no sense.
The brain imposes visual order on chaos by grouping sets of signals, rearranging them, or rejecting
them. Reality is what the brain makes it. The same basic mechanism functions for the other senses.
This imposition of the hypothesis on an experience is what causes optical illusions. It also modifies
all forms of perception at all levels of complexity. To quote Wittgenstein once more, "You see what
you want to see".
All observation of the external world is, therefore, theory-laden. The world would be chaos if this
were not so.
In all cases of perception, from the most basic to the most sophisticated, the meaning of the
experience is recognised by the observer according to a horizon of expectation within which the
experience will be expected to fall. Anything which does not do so will be rejected as meaningless
or irrelevant. If you were to believe that the universe was made of omelette, you’d design
instruments to find traces of intergalactic eggs. In such a structure, phenomena such as planets or
black holes would be rejected. This is not as far-fetched as it may seem. The structure, or Gestalt,
controls all perceptions and all actions. It is a complete version of what reality is supposed to be. It
must be so if the individual or group is to function as a decision-making entity. Each must have a
valid structure of reality by which to live.
The structure therefore sets the values, bestows meaning, determines the morals, ethics, aims,
limitations and purpose of life. It imposes on the external world the contemporary version of
reality. The answer therefore to the question, "Which truth does science seek?" can only be, "The
truth defined by the contemporary structure."
The structure represents a comprehensive view of the entire environment within which all human
activity takes place. It thus directs the efforts of science in every detail. In all areas of research,
from the cosmic to the sub-atomic, the structure indicates the best means of solving the puzzles
which are themselves designated by the structure as being in need of solution. It provides a belief
system, a guide and, above all, an explanation of everything in existence. It places the unknown in
an area defined by expectation and therefore more accessible to exploration. It offers a set of
routines and procedures for every possible eventuality during the course of investigation. Science
progresses by means of these guidelines at all times, in every case, everywhere.
The first of the guidelines is the most general. It defines what the cosmos is and how it functions.
All cultures in history have had their own cosmogonies. In pre-Greek times these were
predominantly mythological in nature, dealing with the origins of the universe, usually in
anthropomorphic terms, with gods and animals of supernatural power.
The Aristotelian cosmology held longest sway in Western culture, lasting over two thousand years.
Aristotle based his system on common-sense observations. The stars were seen to circle the earth
regularly and unchangingly every night. Five planets moved against this general wheeling
movement of the stars, as did the moon. During the day the sun circled the earth in the same
direction. Aristotle placed these celestial objects on a series of concentric spheres circling the
earth.
These observations served as the basis for an overview of all existence. God had set the spheres in
motion. Each object, like the planets, had its natural place. On earth this place was as low as the
object could get. Everything in existence, therefore, had its preferred position in an immense,
complex and unchanging hierarchy that ranged from inanimate rocks up through plants and
animals to man, heavenly beings and finally God, the Prime Mover.
The cosmic order dictated that the universal hierarchy be mirrored in the social order in which
every member of society had a designated place. The cosmology conditioned science in various
ways. Astronomy was expected to account for the phenomena, not seek unnecessary explanations.
It was for this reason that the Chinese, whose structure had no block concerning the possibility of
change in the sky, made regular observations and developed sophisticated astronomy centuries
before those in the West.
The static nature of Aristotle's universe precluded change and transformation, so the science of
dynamics was unnecessary. Since each object was unique in its 'essence' and desires, there could
be no common forms of behaviour or natural laws which applied equally to all objects.
By the middle of the nineteenth century a different cosmology reigned. The Anglican Church was
committed to the biblical record, the Mosaic version of the history of the earth involving six days of
Creation, the garden of Eden and an extremely young planet. The Church strongly opposed the
new geological speculation by James Hutton and Charles Lyell regarding the extreme age of the
earth. This opposition took various forms including support for a professorial chair in geology at
Oxford, initially given to the diluvialist William Buckland in an effort to promote views more in
step with ecclesiastical sentiment. It was ultimately this clerical interference which was to cause a
split in the geological ranks. The breakaway group, keen to remove the study of the evolutionary
implications of geology from the influence of the Church, established the new and independent
scientific discipline of biology.
In our own day, the opposing 'big bang' and 'steady state' theories of cosmic origin influence
scientific effort because they have generated sub-disciplines within physics and chemistry which
are dedicated to finding supportive evidence for each view.
All cosmologies by their very form dictate the nature, purpose and, if any, the direction of
movement of the universe. The epic work of Linnaeus in the middle of the eighteenth century to
create a taxonomic structure in which all plants and animals would fit was spurred by a Newtonian
desire to discover the Grand Design he believed was in the mind of God when He had started a
clockwork universe at the time of Creation. By identifying and naming all forms of plants and
animals in this unchanging and harmonious universe, thus laying bare the totality of God's work,
Linnaeus considered that he had completed the work of science.
By the middle of the nineteenth century the view had changed. According to the cosmic theory
implicit in Darwin's Origin of Species, the universe was dynamic and evolutionary, and contained
organisms capable of change from one form to another. Some Darwinists, such as the German
Ernst Haeckel, were of the opinion that organic forms of life had evolved from inorganic material
early in the earth's history.
In the third quarter of the century the eminent biologist Thomas Huxley found what he took to be
a fossil in a mud sample taken from the sea-bed ten years earlier by the crew of the Challenger
during the first round-the-world oceanographic survey. Obedient to Haeckel's theory that at some
time in the past there had been a life form which was half-organic, half-inorganic, Huxley
identified the fossil as the missing organism and named it Bathybius haeckelii. Some years later,
Bathybius was revealed to be an artifact created by the effect of the preservative fluid on the mud
in the sample. In the interim, however, it had served to confirm a key element in a wide-ranging
cosmic theory.
Whilst we may now mock those in the past who regarded the stars as holes through which the light of
heaven shone, or a volcanic eruption as a display of the wrath of God or Gods, we should
not be too hasty in judging them. One should bear in mind that they were making the best sense they
could of the world with the knowledge and prevailing intellectual structure they had. And it is certain
that many of the theories (because that's what everything is) which we hold to be true and irrefutable
today will be viewed in the same light by generations to come.
If history teaches us anything, it is that our understanding of our world and the universe has always
been, and will always be, in a state of continual flux. All it requires is one discovery, one new theory or one
invention to turn our understanding of the world and our perception of it completely on its head.
Vocabulary exercise
1. discarded (paragraph 2)
The events then fit the hypothesis, or are discarded as being unrecognisable and without
meaning. Without the structure, in other words, there can be no reality.
2. phenomena (paragraph 7)
If you were to believe that the universe was made of omelette, you’d design instruments to find
traces of intergalactic eggs. In such a structure, phenomena such as planets or black holes would
be rejected.
3. bestows (paragraph 8)
The structure therefore sets the values, bestows meaning, determines the morals, ethics, aims,
limitations and purpose of life. It imposes on the external world the contemporary version of
reality.
4. eventuality (paragraph 9)
It offers a set of routines and procedures for every possible eventuality during the course of
investigation. Science progresses by means of these guidelines at all times, in every case,
everywhere.
5. in step with (paragraph 15)
This opposition took various forms including support for a professorial chair in geology at Oxford,
initially given to the diluvialist William Buckland in an effort to promote views more in step with
ecclesiastical sentiment.
6. hasty (paragraph 20)
Whilst we may now mock those in the past who regarded the stars as holes through which the light of
heaven shone, or a volcanic eruption as a display of the wrath of God or Gods, we should
not be too hasty in judging them.
7. irrefutable (paragraph 20)
And it is certain that many of the theories (because that's what everything is) which we hold to be
true and irrefutable today will be viewed in the same light by generations to come.
Write your own sentences with the vocabulary
1. discarded
2. phenomena
3. bestows
4. eventuality
5. in step with
6. hasty
7. irrefutable