Views: 3768 | Replies: 103

[Reading Group] [Native Speaker Daily Training Plan, Series 89] [89-01] Science & Technology

Posted on 2017-6-2 00:22:42
This post was last edited by 喵了个咪hh on 2017-6-2 00:25

Content: 枣糕兔 (Gina Wu)   |   Editor: Lily Shi

Wechat ID: NativeStudy   |   Weibo: http://weibo.com/u/3476904471

Part I: Speaker

Trees Beat Lawns for Water-Hungry L.A.
Christopher Intagliata   |   May 27, 2017

When California was strangled by drought, the city of Los Angeles offered homeowners cash to replace their lawns with less thirsty landscaping. Because water just evaporates from overwatered lawns. But how much?

"So that turned out to be a lot of water." Diane Pataki, an ecologist at the University of Utah. "It turned out to be 70 billion gallons of water a year."

Pataki and her team got that number using a combination of real-world sensor data and modeling. And they found that, of water wasted specifically in urban landscaping, lawns were to blame for three quarters, with L.A.'s six million trees accounting for the rest.

The study also uncovered something these ecologists were not expecting to study: economic disparity. "The amount of vegetation is really closely related to affluence. So in L.A. that means wealthy neighborhoods actually have twice the evapotranspiration of poorer neighborhoods." Meaning low-income neighborhoods not only miss out on that greenery, but also on the natural, built-in cooling effect of evapotranspiration. The findings are in the journal Water Resources Research. [E. Litvak et al., Evapotranspiration of urban landscapes in Los Angeles, California at the municipal scale]

Finally: if you think native trees are the solution to water waste, think again, Pataki says. "Some of the highest water users in L.A. are those species, including the native California sycamore, which is a very, very popular tree." The reason being that southern California doesn't have a lot of native trees, except alongside rivers—meaning they're water guzzlers by nature.

Better, she says, to plant other species that thrive in Mediterranean climates, like water-thrifty pines and palms. Because even if the drought comes back, she says, L.A.'s secret to staying green may be its trees. "It doesn't take a lot of water in terms of absolute gallons to keep them alive. So moving forward L.A. could be very water efficient and maintain a very extensive tree canopy, which I think is good news."

[Rephrase 1, 02’02]
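The lawn-versus-tree split quoted above can be checked with quick arithmetic. This is just a sketch using the two figures from the transcript: the 70-billion-gallon annual total and the three-quarters share attributed to lawns.

```python
# Rough check of the reported split of L.A.'s landscape water loss.
TOTAL_GALLONS = 70e9  # annual evapotranspiration loss quoted in the study
LAWN_SHARE = 0.75     # lawns blamed for three quarters

lawn_loss = TOTAL_GALLONS * LAWN_SHARE
tree_loss = TOTAL_GALLONS - lawn_loss

print(f"Lawns: {lawn_loss / 1e9:.1f} billion gallons/year")  # 52.5
print(f"Trees: {tree_loss / 1e9:.1f} billion gallons/year")  # 17.5
```

So L.A.'s six million trees account for roughly 17.5 billion gallons a year, less than a third of what lawns lose.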

Source: Scientific American

Part II: Speed

Pay-to-view blacklist of predatory journals set to launch
——Private firm says its watchlist of untrustworthy journals will be objective and transparent — but not free.
Andrew Silver   |   May 31, 2017

[Time 2]
The blacklist is dead; long live the blacklist. Five months after a widely read blog listing possible ‘predatory’ scholarly journals and publishers was shut down, another index of untrustworthy titles is appearing — although this version will be available only to paying subscribers.

Scholarly-services firm Cabell’s International in Beaumont, Texas, says that on 15 June it will launch its own list of predatory journals: those that deceive their authors or readers, for example by charging fees to publish papers without conducting peer review. The firm described its work on 31 May, at the annual meeting of the Society for Scholarly Publishing in Boston, Massachusetts.

The previous, now-defunct, list was run by academic librarian Jeffrey Beall of the University of Colorado Denver. Since 2010, he had tracked what he called “potential, possible or probable predatory scholarly open-access publishers” and journals on his blog, attracting huge attention and some legal threats from publishers unhappy at their inclusion. But in January this year, Beall deleted the site, without saying why. Cached copies have been posted elsewhere online.

By January, Cabell’s was well into the process of designing its own blacklist, says project manager Kathleen Berryman. The company already publishes a ‘whitelist’ of trustworthy journals, to which about 800 institutions subscribe; websites such as the Directory of Open Access Journals provide other whitelists. But Berryman says there’s also value in having a list to monitor for journals with bad business practices.

As of 26 May, the blacklist contains about 3,900 journals, she says, with more to come. It will be provided as an add-on to subscribers to the company's whitelist.

Clear criteria

Berryman says the firm was aware of complaints that Beall’s list wasn’t objective and that his criteria for including journals weren’t transparent. So Cabell’s uses some 65 criteria — which will be reviewed quarterly — to check whether a journal should be on its blacklist, adding points for each suspect finding. Examples include fake editors, plagiarized articles and unclear peer-review policies, says Berryman, although she declined to provide all criteria, saying that the firm would present them later in the year.  A team of four employees checks for evidence that journals meet the criteria by searching online or contacting authors and journals for verification.

“It’s pretty much as scientific as we can get at this point,” she says.
[383 words]
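The mechanism Berryman describes, a fixed set of criteria with points added for each suspect finding, can be sketched as a simple scoring routine. The criterion names, weights, and threshold below are invented for illustration; Cabell's has not published its full list.

```python
# Hypothetical sketch of a points-based blacklist check in the style
# Berryman describes. All names, weights, and the threshold are invented.
CRITERIA_WEIGHTS = {
    "fake_editors": 10,
    "plagiarized_articles": 10,
    "unclear_peer_review_policy": 5,
}
BLACKLIST_THRESHOLD = 10  # invented cutoff

def blacklist_score(findings):
    """Sum the weights of every criterion confirmed for a journal."""
    return sum(CRITERIA_WEIGHTS[f] for f in findings)

def should_blacklist(findings):
    """Flag a journal once its accumulated points cross the threshold."""
    return blacklist_score(findings) >= BLACKLIST_THRESHOLD

# A journal with fake editors and a vague review policy crosses the bar;
# a vague policy alone does not.
print(should_blacklist(["fake_editors", "unclear_peer_review_policy"]))  # True
print(should_blacklist(["unclear_peer_review_policy"]))                  # False
```

One design point this makes visible: weighting criteria differently means no single minor offence lands a journal on the list, which matters for the appeals process described below.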

[Time 3]
Some of the publishers and journals listed by Beall aren’t on Cabell’s list, says Berryman. And Cabell’s has added new journals, including some that aren’t open access. The firm declined to provide details of the differences between its list and Beall's, but says that it will clearly state all the reasons that a journal is on its list. Berryman hopes that will limit libel suits. Publishers or journals will be able to contact Cabell’s to find out whether they are indexed, and will have the opportunity to appeal their status once a year.

Black and white

Some researchers say there’s little value in a blacklist. Cameron Neylon, who studies research communications at Curtin University in Perth, Australia, says such lists require a lot of work and will always miss some journals. He thinks that researchers should rely on whitelists of trustworthy journals, and that their training should cover how to judge journal quality.

But Natalia Zinovyeva, an economist at Aalto University in Helsinki who is studying the editorial processes of some of the journals that Beall once tracked, thinks Cabell’s list will be “extremely valuable” to funding or hiring committees without a wide level of expertise, who could use it as a tool to help evaluate researcher CVs.

And Beall, who was once an informal consultant for Cabell’s, says he thinks blacklists are still useful as a timesaving tool for authors who are deciding where to publish. Cabell's will probably find managing its appeals process one of its most difficult tasks, he says.

It’s unclear how many institutions or people will sign up once the list is released, but pricing will vary by institution. Cabell’s had originally planned to make the list free — and still hopes to do so eventually, Berryman says — but is charging to cover the costs involved in creating it.  

Anyone wanting to produce their own watch list as a competing service will “quickly realize how much work and time goes into this”, says Berryman. “I don’t foresee us having competition.”
[334 words]

Source: Nature

Conservative party leader Theresa May, seen visiting an industrial facility, has promised more money for research.

UK election: science spending pledges overshadowed by Brexit
——Parties promise more money for research, but scientists fear impact of split with European Union.
Daniel Cressey   |   May 31, 2017

[Time 4]
Ahead of a UK election that will decide who leads the country’s exit negotiations with the European Union, a remarkable consensus has emerged among the main national parties. All three have pledged in their manifestos to spend more money on science. Each is “putting science at the heart of their programme for the future of Britain”, says Sarah Main, director of the London-based Campaign for Science and Engineering.

Yet as the uncertainty of Brexit hangs over the future of British science, for many researchers the promises may seem more like consolation prizes. Almost a year after the country voted to leave the EU, scientists still don’t know what the split will mean for funding and collaborations with EU colleagues — while non-British EU scientists remain unclear about their future residency status.

Compared to many other major science nations, Britain spends relatively little on research and development (R&D): just 1.7% of its gross domestic product (GDP). By contrast, Germany spends 2.9%, and the United States 2.8%. But the governing Conservative Party — which polls suggest will win the 8 June election — has said it wants to raise UK spending to 2.4% within 10 years, with a longer-term goal of 3%. Labour, the main rival party, has promised 3% by 2030 and the Liberal Democrats, third in national polls, have set a “long-term goal” to double science spending.

In theory, these targets would mean billions more for research. But the proportion of GDP spent isn’t necessarily an indicator of a nation’s research health, says Kieron Flanagan, a science-policy researcher at the Alliance Manchester Business School. “Some wags have pointed out that the easiest way to achieve the 3% is by crashing the economy,” he says. “Which is true, and it actually points to the meaninglessness of that kind of ratio.” Much R&D spending comes through the private sector, over which the government has no direct control, he adds.
[314 words]
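To see what those percentage targets imply in cash terms, here is a back-of-the-envelope sketch. The roughly £2-trillion GDP figure is an assumption for illustration (approximately the UK's 2017 GDP), not a number from the article.

```python
# Back-of-the-envelope: what each party's R&D-to-GDP target implies,
# holding GDP fixed at an assumed ~£2 trillion (not from the article).
UK_GDP = 2.0e12  # pounds, assumed

current = 0.017 * UK_GDP               # 1.7% of GDP today
conservative_target = 0.024 * UK_GDP   # 2.4% within 10 years
labour_target = 0.030 * UK_GDP         # 3% by 2030

print(f"Current spend:    £{current / 1e9:.0f} billion")
print(f"2.4% target adds: £{(conservative_target - current) / 1e9:.0f} billion")
print(f"3.0% target adds: £{(labour_target - current) / 1e9:.0f} billion")
```

The sketch also illustrates Flanagan's quip: because the target is a ratio, a shrinking GDP raises the percentage without a pound of new research spending.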

[Time 5]
Research rhetoric

In recent decades, Labour has been a champion of research and innovation, and academia has been seen as a bastion of its support. The party set the country’s most recent long-term research target: while in government in 2004, it promised to push R&D spending to 2.5% of GDP by 2014, although it failed to do so.

But the Conservatives have gradually increased their focus on science. The words ‘science’ and ‘research’ — or variations of these — didn’t appear in their manifesto in 2005, but garnered 28 mentions this year (see ‘Political patter’). And in the past year, the party — led by Prime Minister Theresa May, who took over after the Brexit vote — has announced an extra £2 billion (US$2.6 billion) per year in government spending on research by 2020. “The Conservative Party put its money where its mouth is. There is clearly an upwards trajectory of support for science and research,” says David Willetts, who was UK science minister from 2010 to 2014.

Yet at the same time, May is pledging to slash immigration, raising concerns about the country’s attractiveness to overseas researchers and students. And the Conservatives are taking a hard line on Brexit negotiations. Whereas Labour has promised to immediately guarantee residency and other rights for EU citizens in the United Kingdom, and has said that it will seek to stay part of EU research programmes and the popular Erasmus student-exchange scheme, the Conservatives have made no such promises. They have, however, said that they want to maintain collaborations with European partners. John Unsworth, an energy research consultant who chairs the network Scientists for Labour, thinks the Conservative commitment to science is “up and down”, and more focused on its commercial aspects than on basic science.

Of the three parties, only the Liberal Democrats explicitly oppose leaving the EU — a stance that is bringing them scientists’ support, says Julian Huppert, a former biochemist who was Liberal Democrat Member of Parliament for Cambridge until he lost his seat in 2015. Huppert, who now works on science and technology policy at the University of Cambridge, is campaigning for re-election in that city. Many scientists are volunteering for his campaign, he says.

With the Conservatives seemingly on course for victory, some scientists say that concerns over Brexit trump any other promises. “I would not consider voting for any party hell-bent on pursuing Brexit whatever the cost, and without providing any analysis on the possible scenarios that we have for a Brexit future,” says Anne Glover, a prominent biologist who was formerly the European Commission’s chief science adviser, and is now dean for Europe at the University of Aberdeen. “It seems to me we had an evidence-free EU referendum and we are heading towards an evidence-free Brexit,” she says.
[458 words]

Source: Nature

Simply counting the number of species may not be the best measure of biodiversity. (Westend61/Getty)

Why function is catching on in conservation
——Counting what species do is becoming as important as counting how many there are.
May 31, 2017

[Time 6]
The high-street coffee shop has long been used as a measure of urban gentrification. But are all coffee shops the same? Not so, claimed the London edition of Time Out in 2014. In fact, it said, there are eight types in London just in the independent sector, away from the global mega-chains. These separate species of capital brew house could be distinguished by the presence of table service, for instance, and whether the barista could remember your name and favourite order.

Time Out, then, would see a high street with one of each of these individual outlets as diverse. But most of us, especially tea drinkers, would probably prefer to swap a few of them for, say, a butcher, a baker and, if not a candlestick maker, then perhaps a newsagent. Despite their differences, all coffee shops provide essentially the same service. In those terms, a street of different types of coffee shop is anything but diverse. It doesn’t offer as good a service, and so it’s not such a great place to live.

What does the supply of caffeine have to do with this week’s special issue of Nature that discusses biodiversity, the extinction of species and how to conserve them? Everything. For, as some biologists argue, too much current thinking on conservation agrees with Time Out. The standard definition of biodiversity focuses too heavily on counting the number of different species, when perhaps it should concentrate on what each of those species contributes to the ecosystem.

Carry your coffee to drink at the rocky seashore, for example. Within a square metre or so you might find four species — a mussel and three different species of barnacle. A bit farther along, in another square metre, you find another four species, but this time the mussel is joined by a starfish, an anemone and a seagrass (see go.nature.com/2qmbfah). Under current conservation measures, each community has equal biodiversity and deserves equal attention. That’s because a thatched barnacle is considered to be as different from an acorn barnacle as it is from the seagrass. Just as a barista who remembers your name is as different from a forgetful one as he or she is from a librarian.

To see and designate the second seashore community as different from the first, some biologists argue that we should consider what these species do, individually and collectively. The idea is called functional diversity, and it’s catching on. Many biologists have felt for decades that the starfish, anemone and seagrass make up a more diverse community than the barnacle trio. But as a News Feature explores this week, the concept is gaining ground in policy circles. And it’s being used to set priorities and to determine how conservation resources are allocated.
[455 words]
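The seashore example can be made concrete with a toy calculation: species richness treats both square metres alike, while counting functional groups separates them. The trait assignments below are illustrative, not taken from the article (the third barnacle species is invented).

```python
# Toy comparison of species richness vs. functional-group diversity for the
# two seashore communities in the text. Functional-group labels are
# illustrative assignments, not from the article.
community_a = {  # species -> functional group
    "mussel": "filter-feeder",
    "thatched barnacle": "filter-feeder",
    "acorn barnacle": "filter-feeder",
    "gooseneck barnacle": "filter-feeder",  # invented third barnacle
}
community_b = {
    "mussel": "filter-feeder",
    "starfish": "predator",
    "anemone": "predator",
    "seagrass": "primary producer",
}

def richness(community):
    return len(community)                # number of species

def functional_diversity(community):
    return len(set(community.values()))  # number of distinct roles

print(richness(community_a), richness(community_b))  # 4 4
print(functional_diversity(community_a), functional_diversity(community_b))  # 1 3
```

By the richness measure the communities are equal; by the functional measure the second square metre is three times as diverse, which is the intuition the News Feature says is now reaching policy circles.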

[The Rest]
Intuition is not evidence, and there are already concerns that proponents of functional diversity are trying to run before they have worked out if they want to walk. Which functional traits should be considered and how can they be compared? How can biologists ensure that all functions of a species are accounted for, and not just those that are the most obvious? Do we have sufficient data to link diversity of traits to the health of an ecosystem? What if table service at a coffee shop is the only reason that a rich couple visit, and spend money in other shops while there?

To consider the utility of creatures in a habitat and not just their number can certainly throw up counter-intuitive findings. Some measures of functional diversity, for example, judge degraded post-logging secondary forests in the tropics to be as healthy as the primary forests they replace (see C. A. Sayer et al. Biol. Conserv. 211 (A), 1–9; 2017). (That is not an argument to stop protecting primary forest, but it might be a reason to give the degraded areas equal status.)

What is clear — and laid out in much detail in a series of other articles this week — is that existing attitudes and measures are failing to halt the global loss of habitats, species and ecosystems. To address the decline and stem the damage to the natural world, new approaches and new thinking are needed. Functional diversity, properly applied, could be a pragmatic and necessary step. All species are equal. But perhaps some are more equal than others.
[261 words]

Source: Nature

Part III: Obstacle

The Neutrality Delusion
——Net neutrality will never be anything more than a vague aspiration with no clear definition.
Richard Bennett   |   May 31, 2017

[Paraphrase 7]
The most surprising thing about net neutrality is that the Internet policy community is still debating it 15 years after the idea was born. If it were the magic bullet it’s alleged to be, by now we all would have seen the light, embraced it, and moved on to more pressing issues such as privacy and cybersecurity.

Net neutrality is essentially the belief that intelligence inside the Internet is detrimental to innovation at the network’s edge. This misguided faith leads advocates to demand lobotomies for Internet service providers in the vain hope of maximizing the Internet’s potential.

While most people who are aware of net neutrality believe it to be a good thing—even if they can’t define it—it delivers the opposite of its promise.

Net neutrality hasn’t extended high-speed broadband networks to all corners of the nation and the globe, for example. In fact, it hasn’t even made the networks we have any faster or more reliable. Networks have indeed improved at an awesome rate since the 1990s, but net neutrality has had nothing to do with this progress.

Likewise, net neutrality has not made the Internet safer or more secure, nor can it. Progress toward greater security depends on technical and regulatory enhancements that we haven’t even had time to discuss because net neutrality has sucked all the oxygen out of the room.

And net neutrality has not made networks any less expensive—nor can it, because it requires costly investments in network infrastructure to deal with trivial engineering issues such as fleeting moments of network overload.

Worst of all, net neutrality is not actually enforceable. This question was studied by British computer scientist Neil Davies for Ofcom, the U.K.’s FCC, in 2015. Davies looked at the six best methods of “traffic management detection” described in the academic literature and found all of them wanting in some important way.

Net neutrality enforcers need the ability to detect unfair treatment of Internet sites; without this capability regulations banning such conduct are meaningless. But Davies declares “no tool or combination of tools currently available is suitable for practical use” in this endeavor.

So what gives with the snarky blog posts, the sloganeering, the clever formulations about innovation, and the shrieking of cable TV comedians about this wonky notion? While the Internet certainly appears to be developing nicely, we can neither prove nor disprove its alleged neutrality.

Lawmakers love increasing criminal penalties. It’s hard to catch the perpetrators of many crimes, but easy to crack down on those who are caught.

A similar dynamic has taken place in Internet regulation in the era of net neutrality. Former FCC chairman Michael Powell introduced the concept of net neutrality to the FCC in the form of a 2004 policy statement on the Four Freedoms of the Internet.

The statement recognized the freedoms to access content, run chosen applications, attach devices, and obtain service plan information. But it didn’t create specific regulations to enforce them because they had already come to exist in the absence of regulatory mandates.

Powell didn’t play bad cop with Internet Freedom because there was no need. Market forces alone were enough to encourage the Internet to keep doing what it had always done, only better.

Powell’s restraint hasn’t been mimicked by subsequent FCC chairmen: Kevin Martin, Julius Genachowski, and Tom Wheeler all tried to transform Internet freedom ideals into ever more aggressive legalisms. They did this in the context of an Internet that was continuing to improve and expand without notable issues.

This is not to say that the advocacy community hasn’t fabricated dubious crises at every opportunity. Free Press, the brainchild of socialist academic Robert McChesney, claims no fewer than a dozen offenses against net neutrality have taken place since Powell’s Four Freedoms speech.

Upon examination, it’s evident that all claimed debacles are grossly exaggerated incidents of brief duration that were resolved without regulatory intervention. And in every net neutrality case brought to the FCC, the agency has failed to properly determine the facts.

This was especially evident in the 2007 complaint by public interest advocates against a network management system used briefly by Comcast. The company stopped using the system—a crude way of limiting bandwidth appropriation by digital piracy programs—long before the Martin FCC’s investigation was complete.

Despite three failed attempts to undergird Powell’s aspirations with regulation, the freedoms have generally proved to be self-executing. The Internet is open and relatively neutral by design, and remains so in practice because the costs to Internet business of deviating from essential neutrality are too high.

Career telecom regulators don’t like the Internet. Its development in the absence of significant rulemaking is in fact an affront to the entire enterprise of telecom regulation. In other countries, we see endless attempts to censor content, restrict carrier business models, and to shut down Internet connections on the thinnest of pretexts.

We’re fortunate to have avoided this sort of thing in the U.S. But if regulators and political partisans continue to promote their value based on their abilities to tame an unruly Internet, they can’t be far away.

The greatest danger in the net neutrality debate is the propagation of the myth that regulators alone are responsible for the Internet’s success. This simply takes undeserved credit for the work of technologists, entrepreneurs, and investors.

There’s no overlooking the fact that the genuine problems the Internet faces today—the plodding pace of innovation, safety, security, privacy, and the consolidation of services—cannot be resolved by open Internet regulation. Internet engineers need the freedom to tinker with ways of making the Internet better by departing from tradition.

A rulebook of ever-increasing complexity does not take us where we need to go, and neither does an unenforceable “simple rule” that gives rise to nothing but endless litigation and ever more dubious regulations.

Imperfect, frustrating, and sometimes scary, the Internet is also marvelous, amazing, and even dazzling in a way that no product of the bureaucracy has ever been.

Let’s accept the fact that net neutrality will never be anything more than a vague aspiration that will always escape regulatory definition.
[1017 words]

Source: MIT Technology Review



Posted on 2017-6-2 01:13:18
T1 2.23
T2 3.12
Posted on 2017-6-2 11:27:46
T2 02'11
T3 01'45
T4 01'39
T5 02'18
T6 02'32
Posted on 2017-6-2 14:42:33
OB 7'18"91
Posted on 2017-6-2 19:29:24
T2: 5:44.89
T3: 3:11.75
T4: 4:35.01
T5: 3:15.35
Posted on 2017-6-2 20:17:55
T2 3:28
T3 2:56
T4 2:25
T5 2:50
T6 2:52
Posted on 2017-6-2 21:08:07
T2 02:44.15
T3 02:07.35
T4 02:18.36
T5 02:48.67
T6 02:23.36
The rest 01:42.56


ChaseDream Forum

© 2003-2017 ChaseDream.com. All Rights Reserved.