Time 1
SPIDERS are known for many things. Sociability is not one of them. Most spiders are more likely to try to eat their neighbours than befriend them. Given that there are at least 43,678 species of the critters, though, it is not too surprising that a few have overcome their natural grumpiness and teamed up to form societies. So far, about two dozen such social spiders have been identified. And among them, something really strange has just been found. For one type of spider society turns out to involve two different but closely related species. It is as though anthropologists had discovered villages populated both by human beings and chimpanzees. This was discovered by a team led by Lena Grinsted of Aarhus University in Denmark. They were studying a social species of spider called Chikunia nigra, living near Beratan Lake in Bali. Later, as they looked in more detail at their specimens, they realised that its genes and genitalia revealed it was actually two species, according to their findings just published in Naturwissenschaften. Exactly what the spiders get out of being social is not clear. They do not hunt together. One explanation may be that the colony is acting like a giant crèche. Ms Grinsted discovered this possibility by experiment. First, she identified 19 females who were looking after recently hatched young, and another 20 who had eggs. In each case she introduced an intruder, in the form of a spider from the same colony. Both mothers and mothers-to-be were surprisingly tolerant of what would, in most spider species, be a serious threat. Only 40% of the time did they attempt to chase the intruder away or bite it. (278)
Time 2 Ms Grinsted then took another 40 spiders and swapped some of their broods (though always to a female from the same colony). The upshot, she found, was that a female was as likely to look after and guard another’s brood as she was her own. Which is intriguing, but not all that extraordinary in social groups which are composed of closely related individuals. Except that Ms Grinsted now knows that this cannot always be the case for her spiders, since two different species are involved. The species in question are pretty similar, which would seem to rule out another common cause of collaboration: that different creatures bring different adaptations to the party, thus dividing the labour of staying alive into specialisms. Because Ms Grinsted did not know at the time of her experiment that two species were involved, she cannot be sure how many of the fosterings she induced were cross-specific. The two species seem more or less equally abundant, so the chances are it was about half of them. If colony members are acting as foster mothers in the wild (which has yet to be established), something most odd is going on. Altruism is not a concept often associated with spiders. Xenophilic altruism is truly bizarre. (207)
Time 3 The waste heat generated by car engines, power plants, home furnaces and other fossil fuel-burning machinery plays an unappreciated role in influencing regional climates, new computer simulations suggest. By altering atmospheric circulation, human-made heat may raise temperatures by as much as 1 degree Celsius during winter in the northernmost parts of the world. The finding may help explain why current climate simulations, which account for the heat-trapping effects of greenhouse gases but not the heat directly produced by energy consumption, have failed to replicate some winter warming observed in the northern latitudes, researchers report online January 27 in Nature Climate Change. “The magnitude of their result is quite surprising,” says Mark McCarthy, a climate scientist at the Met Office Hadley Centre in Exeter, England. It’s well-known that the heat from human energy consumption makes cities hotter than sparsely populated areas nearby, a phenomenon known as the urban heat island effect. But worldwide, waste heat represents only a tiny fraction of the heat produced naturally by incoming solar energy. Previous studies hadn’t found evidence that waste heat significantly influences global average temperatures. Energy consumption’s global warming effect, those studies have suggested, is no more than around 3 percent of that due to carbon dioxide emissions. (205)
Time 4 The new study suggests that waste heat coming from urban areas is sufficient to influence climate on a regional scale. Climate scientist Ming Cai of Florida State University in Tallahassee and his colleagues ran global climate simulations that took into account energy use in 2006 from 86 of the world’s largest metropolitan areas. Together, these cities — located along the coasts of North America, Europe and East Asia — cover less than 2 percent of Earth’s surface but are responsible for about 42 percent of world energy consumption. The researchers assumed that all energy used in these areas is converted to waste heat — an overestimate, but not an unrealistic one. The simulations incorporating waste heat found that temperatures in December, January and February were 1 degree warmer in Russia and northern Asia than in simulations that ignored the heat. Parts of the United States, Canada and China experienced winter temperature increases of as much as 0.5 to 0.8 degrees. “The largest warming is not in the places where the energy is consumed,” Cai notes. That’s because the heat itself doesn’t cause the temperature spikes, according to the simulations. Instead, the heat disrupts normal atmospheric circulation, widening the jet stream and strengthening other circulation patterns in the mid-latitudes. These changes warm some regions in winter and bring cooler air to others, such as Western Europe, the simulations show. The results demonstrate that climate researchers shouldn’t ignore waste heat, says Mark Flanner, an atmospheric scientist at the University of Michigan in Ann Arbor. The next step is to improve estimates of waste heat, says David Sailor, a mechanical engineer at Portland State University in Oregon. Not all energy use dissipates as heat, as the new simulations assume. Sailor also calls for adding to the simulations the daily, seasonal and spatial variations in energy use. (300)
Time 5 Farmers in California help make it rain in the American Southwest, a new computer simulation suggests. Water that evaporates from irrigated fields in California’s Central Valley travels to the Four Corners region, where it boosts summer rain and increases runoff to the Colorado River, researchers report online January 12 in Geophysical Research Letters. This climate link may be crucial to the 40 million people who depend on the Colorado River for drinking water. That number could nearly double in the next 50 years at the same time that droughts are projected to become more common in the Southwest. Since the Central Valley’s supply of irrigation water faces an uncertain future, it’s important to examine how shortfalls in California might affect climate change in the region, says study coauthor Jay Famiglietti, a hydrologist at the University of California, Irvine. “We have to understand these connections better to deal with changes in water availability,” he says. The Central Valley is one of the world’s most productive agricultural regions. More than 50,000 square kilometers of the valley are irrigated, equaling one-sixth of all irrigated land in the United States. A study in 2011 showed that watering the area’s crops cools local temperatures and increases humidity. But the work didn’t find any larger climate ties outside the region, because it relied on a regional climate simulation, which has trouble estimating conditions along the boundaries of a study area, Famiglietti says. To overcome this problem, Famiglietti and Min-Hui Lo, now at the National Taiwan University in Taipei, simulated global climate over a 90-year period. They added in 350 millimeters of water — coming from groundwater and surface reservoirs — to the Central Valley between May and October each year. The researchers say that’s a realistic amount of irrigation based on published agriculture and climate data. (297)
Obstacle: Storing information in DNA LIKE all the best ideas, this one was born in a pub. Nick Goldman and Ewan Birney of the European Bioinformatics Institute (EBI) near Cambridge were pondering what they could do with the torrent of genomic data their research group generates, all of which has to be archived.
The volume of data is growing faster than the capacity of the hard drives used to hold it. “That means the cost of storage is rising, but our budgets are not,” says Dr Goldman. Over a few beers, the pair began wondering if artificially constructed DNA might be one way to store the data torrent generated by the natural stuff. After a few more drinks and much scribbling on beer mats, what started out as a bit of amusing speculation had turned into the bones of a workable scheme. After some fleshing out and a successful test run, the full details were published this week in Nature.
The idea is not new. DNA is, after all, already used to store information in the form of genomes by every living organism on Earth. Its prowess at that job is the reason that information scientists have been trying to co-opt it for their own uses. But this has not been without problems.
Dr Goldman’s new scheme is significant in several ways. He and his team have managed to set a record (739.3 kilobytes) for the amount of unique information encoded. But it has been designed to do far more than that. It should, think the researchers, be easily capable of swallowing the roughly 3 zettabytes (a zettabyte is one billion trillion or 10²¹ bytes) of digital data thought presently to exist in the world and still have room for plenty more. It would do so with a density of around 2.2 petabytes (a petabyte is 10¹⁵ bytes) per gram; enough, in other words, to fit all the world’s digital information into the back of a lorry. Moreover, their method dramatically reduces the copying errors to which many previous DNA storage attempts have been prone.
Faithful reproduction
The trick to this fidelity lies in the way the researchers translate their files from the hard drive to the test tube. DNA uses four chemical “bases”—adenine (A), thymine (T), cytosine (C) and guanine (G)—to encode information. Previous approaches have often mapped the binary 1s and 0s used by computers directly onto these bases. For instance, A and C might represent 0, while G and T signify 1. The problem is that sequences of 1s or 0s in the source code can generate repetition of a single base in the DNA (say, TTTT). Such repetitions are more likely to be misread by DNA-sequencing machines, leading to errors when reading the information back.
The team’s solution was to translate the binary computer information into ternary (a system that uses three numerals: 0, 1 and 2) and then encode that information into the DNA. Instead of a direct link between a given number and a particular base, the encoding scheme depends on which base has been used most recently (see table). For instance, if the previous base was A, then a 2 would be represented by T. But if the previous base was G, then 2 would be represented by C. Similar substitution rules cover every possible combination of letters and numbers, ensuring that a sequence of identical digits in the data is not represented by a sequence of identical bases in the DNA, helping to avoid mistakes.
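The substitution idea above can be sketched in a few lines of Python. This is a minimal illustration, not the team's published code: the real scheme uses a Huffman code to turn bytes into base-3 digits, whereas here each byte is simply written as six trits (3⁶ = 729 ≥ 256), and the rotation rule is chosen so that it reproduces the two examples in the text (previous base A, digit 2 → T; previous base G, digit 2 → C).

```python
BASES = "ACGT"

def trit_to_base(prev: str, trit: int) -> str:
    # Step past the previous base by (trit + 1) positions, so the
    # next base can never equal the previous one -- no repeats.
    return BASES[(BASES.index(prev) + trit + 1) % 4]

def base_to_trit(prev: str, base: str) -> int:
    # Inverse of trit_to_base.
    return (BASES.index(base) - BASES.index(prev) - 1) % 4

def encode(data: bytes, start: str = "A") -> str:
    # Toy byte-to-trit conversion: six base-3 digits per byte
    # (the real scheme uses a Huffman code instead).
    trits = []
    for byte in data:
        for shift in range(5, -1, -1):
            trits.append((byte // 3**shift) % 3)
    out, prev = [], start
    for t in trits:
        prev = trit_to_base(prev, t)
        out.append(prev)
    return "".join(out)

def decode(dna: str, start: str = "A") -> bytes:
    trits, prev = [], start
    for base in dna:
        trits.append(base_to_trit(prev, base))
        prev = base
    data = bytearray()
    for i in range(0, len(trits), 6):
        byte = 0
        for t in trits[i:i + 6]:
            byte = byte * 3 + t
        data.append(byte)
    return bytes(data)
```

Because the rotation always skips the previous base, a run of identical digits in the data (say, 2,2,2,2) comes out as a run of different bases, which is exactly the property the sequencing machines need.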
The code then had to be created in artificial DNA. The simplest approach would be to synthesise one long DNA string for every file to be stored. But DNA-synthesis machines are not yet able to do that reliably. So the researchers decided to chop their files into thousands of individual chunks, each 117 bases long. In each chunk, 100 bases are devoted to the file data themselves, and the remainder used for indexing information that records where in the completed file a specific chunk belongs. The process also contains the DNA equivalent of the error-detecting “parity bit” found in most computer systems.
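The chunk layout described above can be sketched as follows. The field widths and the parity calculation here are illustrative stand-ins, not the paper's exact format (in the real scheme the index and parity information are themselves encoded into bases within the 117-base chunk):

```python
def make_chunks(strand: str, data_len: int = 100):
    # Split an encoded strand into indexed pieces: each carries up to
    # data_len bases of file data, its position in the file, and a toy
    # parity value for error detection.
    chunks = []
    for n, i in enumerate(range(0, len(strand), data_len)):
        payload = strand[i:i + data_len]
        index = f"{n:05d}"                        # position within the file
        parity = str(sum(map(ord, payload)) % 2)  # illustrative parity check
        chunks.append((index, payload, parity))
    return chunks

def reassemble(chunks) -> str:
    # Sort by index and stitch the payloads back together.
    return "".join(payload for _, payload, _ in sorted(chunks))
```

The point of the index field is that the chunks can be sequenced in any order and still be put back in the right places; the parity lets a corrupted chunk be flagged before it is used.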
To provide yet more tolerance for mistakes, the researchers chopped up the source files a further three times, each in a slightly different, overlapping way. The idea is to ensure that each 25-base quarter of a 100-base chunk was also represented in three other chunks of DNA. If any copying errors did occur in a particular chunk, it could be compared against its three counterparts, and a majority vote used to decide which was correct. Reading the chunks back is simply a matter of generating multiple copies of the fragments using a standard chemical reaction, feeding these into a DNA-sequencing machine and stitching the files back together.
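The majority vote over the four overlapping copies can be sketched as a per-position poll, assuming the recovered copies of a segment have already been aligned (a simplification of the actual decoding, which must first match up segments from differently-chopped chunks):

```python
from collections import Counter

def majority_vote(copies: list[str]) -> str:
    # Reconstruct one segment from several (possibly corrupted) reads
    # by taking the most common base at each position.
    assert len(set(map(len, copies))) == 1, "copies must be aligned"
    return "".join(
        Counter(column).most_common(1)[0][0]
        for column in zip(*copies)
    )
```

With four copies, any single copying error at a position is outvoted three to one; for example, `majority_vote(["ACGT", "ACGA", "ACGT", "TCGT"])` recovers `"ACGT"`.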
When the scheme was tested, it worked almost as planned. The researchers were able to encode and decode five computer files, including an MP3 recording of part of Martin Luther King’s “I have a dream” speech and a PDF version of the 1953 paper by Francis Crick and James Watson describing the structure of DNA. The one glitch was that, despite all the precautions, two 25-base segments of the DNA paper went missing. The problem was eventually traced to a combination of a quirk of DNA chemistry and another quirk in the machines used to do the synthesis. Dr Goldman is confident that a tweak to their code will avoid the problem in future.
There are downsides to DNA as a data-storage medium. One is the relatively slow speed at which data can be read back. It took the researchers two weeks to reconstruct their five files, although with better equipment it could be done in a day. Beyond that, the process can be sped up by adding more sequencing machines.
Ironically, then, the method is not suitable for the EBI’s need to serve up its genome data over the internet at a moment’s notice. But for less intensively used archives, that might not be a problem. One example given is that of CERN, Europe’s biggest particle-physics lab, which maintains a big archive of data from the Large Hadron Collider.
Store out of direct sunlight
The other disadvantage is cost. Dr Goldman estimates that, at commercial rates, their method costs around $12,400 per megabyte stored. That is millions of times more than the cost of writing the same data to the magnetic tape currently used to archive digital information. But magnetic tapes degrade and must be replaced every few years, whereas DNA remains readable for tens of thousands of years so long as it is kept somewhere cool, dark and dry—as proved by the recovery of DNA from woolly mammoths and Neanderthals.
The longer you want to store information, then, the more attractive DNA becomes. And the cost of sequencing and synthesising DNA is falling fast. The researchers reckon that, within a decade, that could make DNA competitive with other methods for (infrequently-used) archives designed to last fifty years or more.
There is one final advantage in using DNA. Modern, digital storage technologies tend to come and go: just think of the fate of the laser disc, for example. In the early 2000s NASA, America’s space agency, was reduced to trawling around internet auction sites in order to find old-style eight-inch floppy drives to get at the data it had laid down in the 1960s and 1970s. But, says Dr Goldman, DNA has endured for more than 3 billion years. So long as life—and biologists—endure, someone should know how to read it. (1204)