mm076: Why the disappearance of the honeybees isn’t the end of the world. – By Heather Smith – Slate Magazine

July 19, 2007

MUDGE’S Musings

Thursday is apparently Science Day here at Left-Handed Complement. Spotted this story at Slate Magazine (blogroll2) last weekend, and it finally bubbled to the top of the stack.

Bee Not Afraid – The disappearance of the honeybees isn’t the end of the world.

By Heather Smith
Posted Friday, July 13, 2007, at 3:55 PM ET

When the honeybees disappeared this winter, the thought of losing such a fuzzy and adorable animal inspired dismay. The fact that bees might also be useful drove us to despair. The first official reports of “colony collapse disorder” began to surface in October of 2006; seven months later, USDA officials were calling CCD “the biggest general threat to our food supply,” and newspaper columnists nervously joked about the impending “bloody wars not for oil or land or God but over asparagus and avocados.” Experts pointed to the $14.6 billion worth of free labor honeybees provide every year, pollinating our crops. With a full quarter of them AWOL, presumed dead, who would make sweet love to the $1.6 billion California almond harvest? More precisely, who would help the almond harvest make sweet love to itself?

Few people realized that the honeybee apocalypse was already over. We may continue to associate them with childhood sugar rushes and chubby-cheeked fertility metaphors, but in real life honeybees have been virtually extinct in North America for more than 10 years, their absence concealed by a rogue’s gallery of look-alikes. The stragglers have been kept alive only by the continued ministrations of the agricultural giga-industry that needs them.

It used to be that it was hard to eat a peanut-butter-and-jelly sandwich without a honeybee showing up and doing a little dance around your head. Hives (literally) grew on trees until 1987, when a mite called Varroa destructor turned up in a honeybee colony in Wisconsin. Even for a parasite, varroa is less than charming. It looks like a microscopic baked bean, with sharp fangs used to slurp tiny droplets of blood from the abdomens of unsuspecting honeybees. Since these bites also transmit disease, like deformed wing virus and acute bee paralysis virus, an infested colony is kaput within four years. By 1994, an estimated 98 percent of the wild, free-range honeybees in the United States were gone. The number of managed colonies—those maintained by beekeepers—dropped by half.

The honeybees may have been especially vulnerable to the varroa epidemic. When the honeybee genome was sequenced a few years ago, researchers discovered fewer immune-system genes than you’d find in other insects. This despite the fact that the honeybee lives in tenementlike conditions, anywhere between 15,000 and 30,000 of them crammed into a hive the size of a filing cabinet. To make matters worse, a weakened hive often becomes the target of honey-raiders from healthier colonies, which only helps the parasites to spread.

It’s possible that if the American honeybees had been left to their own devices, they would have died off in epic numbers and then evolved natural defenses against varroa (like more effective grooming), as they did in Asia. But crops had to be pollinated and no one had the time to sit around and wait.

Beekeepers opted to keep their colonies on life support with selective breeding, and by sprinkling them with medicine and insecticides aimed at the invading mites. This was no longer a hobby for amateurs. The only honeybees left—i.e., the ones that started disappearing in October—had become the cows of the insect world: virtually extinct in the wild, hopped up on antibiotics, and more likely to reproduce via artificial insemination than by their own recognizance.

If anything, it’s impressive that the honeybee has hung on in America for as long as it has. The commercial hives spend half the year sealed and stacked in the back of 18-wheelers, as they’re schlepped down miles of interstate to pollinate crops around the country. During this time, they get pumped up with high fructose corn syrup, which keeps the bees buzzing and lively, but it’s no pollen. And if a bee happens to get sick on the road, it can’t self-quarantine by flying away from the colony to die. (In the wild, a bee rarely dies in the hive.) Add to the above the reduced genetic diversity resulting from the die-offs in the 1990s, and you have an insect living in a very precarious situation—where a new pathogen, even a mild one, could spell honeybee doom.

So what brought on this recent scourge of colony collapse disorder? Early news reports on CCD listed a plethora of suspects: pesticides, parasites, global warming, chilly larvae, ultraviolet light, not enough pollen, not enough rain, cell phones, and alien spaceships. Given the present state of the honeybee, any or all of these could have been the culprit. (Well, except for the cell phones and spaceships.)

It’s even possible the mystery disease has already shown up in years past. An 1897 issue of Bee Culture magazine mentions the symptoms of something that sounds remarkably like CCD, as do a few case studies from the ’60s and ’70s. Before bees fell victim to varroa and the ensuing stresses of modern life, these afflictions would have been easy to bounce back from. Today, the same causal agent could have more serious effects.

But is CCD such a tragedy? The honeybee may be the only insect ever extended charismatic megafauna status, but it’s already gone from the wild (and it wasn’t even native to North America to begin with). Sure, it makes honey, but we already get most of that from overseas. What about the $14.6 billion in “free labor”? It’s more expensive than ever: In the last three years, the cost to rent a hive during the California almond bloom has tripled, from $50 to $150.

Good thing the honeybee isn’t the only insect that can pollinate our crops. In the last decade, research labs have gotten serious about cultivating other insects for mass pollination. They aren’t at the point yet where they can provide all of the country’s pollination needs, but they’re getting there. This year the California Almond Board two-timed the honeybee with Osmia lignaria—the blue orchard bee: Despite CCD, they had a record harvest.

But these newly domesticated species are likely to follow in the tiny footsteps of the honeybee, if they’re treated the same way. Varroa mites have already been found on bumblebees, though for the time being they seem not to be able to reproduce without honeybee hosts. And bumblebees used in greenhouse pollination have escaped on several occasions to spread novel, antibiotic-resistant diseases to their wild counterparts. If things keep going like this, we may soon be blaming spaceships all over again.

Now, doesn’t that just turn your preconceived notions upside down? Who knew that the honeybee has been virtually gone from the wild for a decade?

If I weren’t such a confirmed and level-headed “man of business,” I’d probably be more frightened than ever of the devastating impact of agribusiness on our planet. Instead, like the almond growers, I’m certain that we’ll develop new ways, press new creatures into service, or perhaps create brand-new, disease-resistant honeybees out of silicon (per mm075?).

It’s it for now. Thanks,

–MUDGE

mm075: Economist.com | Computing and biology

July 19, 2007

MUDGE’S Musings

From the best publication, print and/or web, on our planet…

SCIENCE & TECHNOLOGY
Computing and biology
Arresting developments

Jul 12th 2007 | CAMBRIDGE
From The Economist print edition

Computer science and biological science have a lot to teach each other

There is, at the moment, a lot of interest in the idea of artificial life. The ability to synthesise huge screeds of DNA at will means the genomes of viruses can be replicated already, and replicating those of bacteria is not far off. But that merely copies what nature already manages routinely. David Harel of the Weizmann Institute in Israel reckons he can do better. He proposes to recreate living organisms inside a computer.

As with many biologists, his creature of choice is a worm called Caenorhabditis elegans. This tiny nematode (it is just a millimetre long) was the first organism to have its developmental pathway worked out cell by cell and the first multicellular one to have its genome sequenced completely. It is probably, therefore, the best understood animal in biology.

As he told “The next 10 years”, a conference organised by Microsoft Research in Cambridge, England, Dr Harel has been working on a computer model of C. elegans. He hopes this will reveal exactly how pluripotent stem cells—those capable of becoming any sort of mature cell—decide which speciality they will take on. He thinks that a true understanding of the processes involved will be demonstrated only when it is possible to build a simulation that does exactly—but artificially—what happens in nature. With colleagues at New York University and Yale University in America, he is modelling and testing the possibilities.

Indeed, he proposes to evaluate the result using an updated version of the Turing test. This was devised by Alan Turing, an early computer scientist, to identify whether a machine is capable of thought. The original test proposes that a person be presented with a suitable interface—say, a keyboard and a screen—through which to communicate. If the operator cannot tell the difference between talking to another person through this interface and talking to a computer, then the computer can be argued to be thinking. Dr Harel’s version is a little more challenging. He wants to test whether scientists well versed in the ways of C. elegans could tell his computerised version from the real thing. So far, the distinction is obvious, but it may not always remain so.
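
To make Dr Harel’s yardstick a bit more concrete, here is a sketch from this desk (mine, not The Economist’s or Dr Harel’s): present a judge with an unlabelled pair of read-outs, one recorded from a real worm and one produced by the simulation, and see whether the judge can spot the fake better than chance. Everything named below (the traces, the judge) is a placeholder I’ve invented for illustration.

```python
import random

def turing_style_trial(real_trace, simulated_trace, judge):
    """Show the judge two unlabelled traces; return True if the judge
    correctly picks out the simulated one."""
    traces = [("real", real_trace), ("sim", simulated_trace)]
    random.shuffle(traces)                         # hide which is which
    guess = judge([trace for _, trace in traces])  # judge returns 0 or 1
    return traces[guess][0] == "sim"

def indistinguishability_score(real_traces, sim_traces, judge, n_trials=200):
    """Fraction of trials in which the judge spots the simulation.
    A score near 0.5 (chance) is the success criterion described above."""
    hits = 0
    for _ in range(n_trials):
        hits += turing_style_trial(random.choice(real_traces),
                                   random.choice(sim_traces), judge)
    return hits / n_trials

# Placeholder judge: a real test would use C. elegans experts (or a
# trained classifier) examining cell-lineage or behavioural read-outs.
def naive_judge(pair_of_traces):
    return random.randrange(2)
```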

Silicon biology

Stephen Emmott, who works for Microsoft Research, wonders whether to turn the whole approach on its head. Instead of looking at how computers can mimic creatures, he wants to build computers from biological components. People—and other creatures—are notoriously forgetful and not much good at number crunching compared with their silicon counterparts. But they do have compensating advantages. People excel at reasoning and make much better learning machines than do computers. Dr Emmott reckons that a biological computer might find it easier to cope with problems that have foxed the traditional, silicon variety for decades—such as how to recognise what it is that they see.

Working with Stephen Muggleton of Imperial College, London, he is developing an “artificial scientist” that would be capable of combining inductive logic with probabilistic reasoning. Such a computer would be able to design experiments, collect the results and then integrate those results with theory. Indeed, it should be possible, the pair think, for the artificial scientist to build hypotheses directly from the data, spotting relationships that the humble graduate student or even his supervisor might miss.
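
The article doesn’t describe the innards of that artificial scientist, so consider the following nothing more than my own toy illustration of the pairing it mentions: enumerate simple logical hypotheses (the inductive-logic half) and score each against noisy observations (the probabilistic half). The data, rule names and noise level are all invented.

```python
import math

# Invented toy data: (gene_a_on, gene_b_on) -> did the cell become a neuron?
observations = [((1, 0), 1), ((1, 1), 1), ((0, 1), 0), ((0, 0), 0), ((1, 0), 1)]

def candidate_rules():
    """The 'inductive logic' side: enumerate simple logical hypotheses."""
    yield "a_on",    lambda a, b: a
    yield "b_on",    lambda a, b: b
    yield "a_and_b", lambda a, b: a and b
    yield "a_or_b",  lambda a, b: a or b

def log_likelihood(rule, data, noise=0.1):
    """The probabilistic side: each observation matches the rule's
    prediction with probability 1 - noise."""
    total = 0.0
    for (a, b), outcome in data:
        predicted = 1 if rule(a, b) else 0
        total += math.log((1 - noise) if predicted == outcome else noise)
    return total

name, rule = max(candidate_rules(),
                 key=lambda nr: log_likelihood(nr[1], observations))
print("best-supported hypothesis:", name)
```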

Indeed, Luca Cardelli, who also works for Microsoft Research, likens biological cells to computers. He points out that creatures are programmed to find food and to avoid predators. But exactly how the “wetware” of molecular biology works remains a mystery. Dr Cardelli is trying to discover whether it is more like the hardware of electronic circuits or the software of programming languages. He is using statistical techniques—in particular, a method called stochastic pi-calculus—to model how biological systems appear to change with time.
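
The piece doesn’t say how Dr Cardelli’s models are actually executed, but stochastic models of this kind are commonly run with a Gillespie-style simulation: draw a random waiting time from the current reaction rates, apply one event, repeat. A minimal sketch, with rates and species invented purely for illustration:

```python
import random

def gillespie_birth_death(k_produce=10.0, k_degrade=0.5, x0=0, t_end=20.0):
    """One stochastic run of a protein production/degradation system:
    exponential waiting times drawn from the current total rate,
    then a single production or degradation event."""
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        rate_produce = k_produce           # production at a constant rate
        rate_degrade = k_degrade * x       # degradation scales with copy number
        total_rate = rate_produce + rate_degrade
        t += random.expovariate(total_rate)
        if random.random() < rate_produce / total_rate:
            x += 1                         # production event
        else:
            x -= 1                         # degradation event
        trajectory.append((t, x))
    return trajectory

# Each run is one possible time course; averaging many runs recovers the
# smooth behaviour a deterministic rate-equation model would predict.
one_run = gillespie_birth_death()
```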

His colleagues, meanwhile, are examining how the spread of diseases such as malaria and AIDS can be thought of as information systems. They are using what used to be called artificial intelligence and is now referred to as machine learning to explore the relationships between the two. All of which raises some interesting philosophical points. If, say, a computer were used to diagnose a patient’s symptoms and recommend treatment, and the result was flawed, could the computer be held responsible? Peter Lipton of the University of Cambridge, who ponders such matters, suggests that such expert systems could indeed be held morally responsible for the consequences of their actions (although the designers of such systems would not necessarily be off the hook). If so, then it is hard to see why computers should not be recognised for good work as well. If Dr Lipton is correct, the race has now begun to see whether the first artificial scientist to win a Nobel prize is based on silicon or biological material.

Copyright © 2007 The Economist Newspaper and The Economist Group. All rights reserved.

Computer versions of simple organisms, impossible to differentiate from the real thing. A computer that’s a scientist. AIDS as an information system. Easily written; difficult to reflect upon without getting dizzy…

Stories such as this one reinforce my belief that I was born ‘way too soon! But I’m grateful to The Economist for widening my perspectives, as they do every time I click on their site or crack open the magazine.

It’s it for now. Thanks,

–MUDGE