[Reading Group] [Native Speaker Daily Comprehensive Training, Series 41] [41-16] Tech: Artificial Intelligence

#1 | OP | Posted 2014-9-15 22:51:45
Content: cherry6891, 伊蔓达 | Editor: 伊蔓达

Stay tuned to our latest post! Follow us here ---> http://weibo.com/u/3476904471

Today is my debut post on the reading group's editorial team. With the group's quality resources and cherry's guidance, I tinkered for half a day and finally finished editing. Applause!!!

Today's topic is Artificial Intelligence, the fancy-sounding AI. When AI comes up, many people's first thought is probably: robots. My earliest impression came from watching The X-Files; one Season 1 episode about a killer supercomputer left a very deep impression on me. More recently, there is the powerful Machine in Person of Interest. But real-world AI goes far beyond these superficial impressions, so be sure to finish all of today's articles!

PS: The Obstacle article is an interview with the author of Artificial Intelligence and the End of the Human Era, the book covered in the fourth Speed passage. It is not especially difficult, so it makes up for that in length. Please enjoy~


Part I: Speaker

We are all hawking products now

Software startups are getting big bucks to write code that can identify, find and link logos and brands in the billions of images posted daily. Larry Greenemeier reports.

You post photos on social media sites for the enjoyment of your family and friends. But your snapshots are also a potential gold mine of information, for those sites and the companies that advertise on them, about what you spend money on.

Artificial intelligence software is on the horizon that can spot brands like Nike or Coke even in images without text or tags. Google, Facebook and other deep-pocketed investors are accelerating this software’s development. Even startup photo-sharing service Pinterest got in on the action earlier this year by buying the even more “start-uppy” Visual Graph, whose software could be used to find and link photos with similar content.

In general the software winning these investments uses machine vision, image recognition and/or visual search algorithms to identify objects and shapes as well as textures. One startup called Ditto Labs makes a search engine that specifically examines digital images for logos and brands.
So next time you’re chowing down in McDonald’s, smile for that selfie. You’re feeding that company’s bottom line and probably helping a tech startup earn its next million. On second thought, why are YOU smiling?
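The brand-spotting the piece describes boils down to searching an image for regions that resemble a known logo. Commercial systems like Ditto Labs use learned visual features, but the core idea can be illustrated with naive normalized-cross-correlation template matching. This is a toy sketch for intuition only, not any vendor's actual method; `find_logo` and its threshold are invented names.

```python
import numpy as np

def find_logo(image, template, threshold=0.9):
    """Return the (row, col) of the best normalized-cross-correlation
    match of `template` inside `image`, or None if no window scores at
    or above `threshold`. Brute-force sliding window, grayscale arrays."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()        # zero-mean window
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:                  # flat window or template: no signal
                continue
            score = float((p * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos if best_score >= threshold else None
```

In practice, learned convolutional features replace raw templates, since real logos appear rotated, scaled and partially occluded in user photos.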


Source: Scientific American
http://www.scientificamerican.com/podcast/episode/we-re-all-hawking-products-now/

[Rephrase 1, 1:22]



#2 | OP | Posted 2014-9-15 22:52:58
Part II: Speed


An AI Pal That Is Better Than “Her”

The charming automated assistant in Spike Jonze’s new movie isn’t realistic. But if they were designed thoughtfully, computerized interlocutors could make us better people.
By Greg Egan |  January 24, 2014

[Time 2]
In the movie Her, which was nominated for the Oscar for Best Picture this year, a middle-aged writer named Theodore Twombly installs and rapidly falls in love with an artificially intelligent operating system who christens herself Samantha.

Samantha lies far beyond the faux “artificial intelligence” of Google Now or Siri: she is as fully and unambiguously conscious as any human. The film’s director and writer, Spike Jonze, employs this premise for limited and prosaic ends, so the film limps along in an uncanny valley, neither believable as near-future reality nor philosophically daring enough to merit suspension of disbelief. Nonetheless, Her raises questions about how humans might relate to computers. Twombly is suffering a painful separation from his wife; can Samantha make him feel better?

Samantha’s self-awareness does not echo real-world trends for automated assistants, which are heading in a very different direction. Making personal assistants chatty, let alone flirtatious, would be a huge waste of resources, and most people would find them as irritating as the infamous Microsoft Clippy.

But it doesn’t necessarily follow that these qualities would be unwelcome in a different context. When dementia sufferers in nursing homes are invited to bond with robot seal pups, and a growing list of psychiatric conditions are being addressed with automated dialogues and therapy sessions, it can only be a matter of time before someone tries to create an app that helps people overcome ordinary loneliness. Suppose we do reach the point where it’s possible to feel genuinely engaged by repartee with a piece of software. What would that mean for the human participants?

Perhaps this prospect sounds absurd or repugnant. But some people already take comfort from immersion in the lives of fictional characters. And much as I wince when I hear someone say that “my best friend growing up was Elizabeth Bennet,” no one would treat it as evidence of psychotic delusion. Over the last two centuries, the mainstream perceptions of novel reading have traversed a full spectrum: once seen as a threat to public morality, it has become a badge of empathy and emotional sophistication. It’s rare now to hear claims that fiction is sapping its readers of time, energy, and emotional resources that they ought to be devoting to actual human relationships.
[375 words]


[Time 3]
Of course, characters in Jane Austen novels cannot banter with the reader—and it’s another question whether it would be a travesty if they could—but what I’m envisaging are not characters from fiction “brought to life,” or even characters in a game world who can conduct more realistic dialogue with human players. A software interlocutor—an “SI”—would require some kind of invented back story and an ongoing “life” of its own, but these elements need not have been chosen as part of any great dramatic arc. Gripping as it is to watch an egotistical drug baron in a death spiral, or Raskolnikov dragged unwillingly toward his creator’s idea of redemption, the ideal SI would be more like a pen pal, living an ordinary life untouched by grand authorial schemes but ready to discuss anything, from the mundane to the metaphysical.

There are some obvious pitfalls to be avoided. It would be disastrous if the user really fell for the illusion of personhood, but then, most of us manage to keep the distinction clear in other forms of fiction. An SI that could be used to rehearse pathological fantasies of abusive relationships would be a poisonous thing—but conversely, one that stood its ground against attempts to manipulate or cower it might even do some good.

The art of conversation, of listening attentively and weighing each response, is not a universal gift, any more than any other skill. If it becomes possible to hone one’s conversational skills with a computer—discovering your strengths and weaknesses while enjoying a chat with a character that is no less interesting for failing to exist—that might well lead to better conversations with fellow humans.

But perhaps this is an overoptimistic view of where the market lies; self-knowledge might not make the strongest selling point. The dark side that Her never really contemplates, despite a brief, desultory feint in its direction, is that one day we might give our hearts to a charming voice in an earpiece, only to be brought crashing down by the truth that we’ve been emoting into the void.
[350 words]

Source: MIT Technology review
http://www.technologyreview.com/review/523826/an-ai-pal-that-is-better-than-her/

Our Final Invention
Artificial Intelligence and the End of the Human Era, by James Barrat
By Sid Perkins  | October 22, 2013

[Time 4]
Computers already make all sorts of decisions for you. With little or no human guidance, they deduce what books you would like to buy, trade your stocks and distribute electrical power. They do all this quickly and efficiently using a simple form of artificial intelligence. Now, imagine if computers controlled even more aspects of life and could truly think for themselves.

Barrat, a documentary filmmaker and author, chronicles his discussions with scientists and engineers who are developing ever more complex artificial intelligence, or AI. The goal of many in the field is to make a mechanical brain as intelligent (creative, flexible and capable of learning) as the human mind. But an increasing number of AI visionaries have misgivings.

Science fiction has long explored the implications of humanlike machines (think of Asimov’s I, Robot), but Barrat’s thoughtful treatment adds a dose of reality. Through his conversations with experts, he argues that the perils of AI can easily, even inevitably, outweigh its promise.

By mid-century — maybe within a decade, some researchers say — a computer may achieve human-scale artificial intelligence, an admittedly fuzzy milestone. (The Turing test provides one definition: a computer would pass the test by fooling humans into thinking it’s human.) AI could then quickly evolve to the point where it is thousands of times smarter than a human. But long before that, an AI robot or computer would become self-aware and would not be interested in remaining under human control, Barrat argues.

One AI researcher notes that self-aware, self-improving systems will have three motivations: efficiency, self-protection and acquisition of resources, primarily energy. Some people hesitate to even acknowledge the possible perils of this situation, believing that computers programmed to be super intelligent can also be programmed to be “friendly.” But others, including Barrat, fear that humans and AI are headed toward a mortal struggle. Intelligence isn’t unpredictable merely some of the time or in special cases, he writes. “Computer systems advanced enough to act with human-level intelligence will likely be unpredictable and inscrutable all of the time.”

Humans, he says, need to figure out now, at the early stages of AI’s creation, how to coexist with hyper intelligent machines. Otherwise, Barrat worries, we could end up with a planet (eventually a galaxy) populated by self-serving, self-replicating AI entities that act ruthlessly toward their creators.
[382 words]


Source: Science News
https://www.sciencenews.org/article/our-final-invention


The computer will see you now
A virtual shrink may sometimes be better than the real thing
Aug 16th 2014 | From the print edition

[Time 5]
ELLIE is a psychologist, and a damned good one at that. Smile in a certain way, and she knows precisely what your smile means. Develop a nervous tic or tension in an eye, and she instantly picks up on it. She listens to what you say, processes every word, works out the meaning of your pitch, your tone, your posture, everything. She is at the top of her game but, according to a new study, her greatest asset is that she is not human.

When faced with tough or potentially embarrassing questions, people often do not tell doctors what they need to hear. Yet the researchers behind Ellie, led by Jonathan Gratch at the Institute for Creative Technologies, in Los Angeles, suspected from their years of monitoring human interactions with computers that people might be more willing to talk if presented with an avatar. To test this idea, they put 239 people in front of Ellie (pictured above) to have a chat with her about their lives. Half were told (truthfully) they would be interacting with an artificially intelligent virtual human; the others were told (falsely) that Ellie was a bit like a puppet, and was having her strings pulled remotely by a person.

Designed to search for psychological problems, Ellie worked with each participant in the study in the same manner. She started every interview with rapport-building questions, such as, “Where are you from?” She followed these with more clinical ones, like, “How easy is it for you to get a good night’s sleep?” She finished with questions intended to boost the participant’s mood, for instance, “What are you most proud of?” Throughout the experience she asked relevant follow-up questions—“Can you tell me more about that?” for example—while providing the appropriate nods and facial expressions.
[336 words]

[Time 6]
Lie on the couch, please
During their time with Ellie, all participants had their faces scanned for signs of sadness, and were given a score ranging from zero (indicating none) to one (indicating a great degree of sadness). Also, three real, human psychologists, who were ignorant of the purpose of the study, analyzed transcripts of the sessions, to rate how willingly the participants disclosed personal information.

These observers were asked to look at responses to sensitive and intimate questions, such as, “How close are you to your family?” and, “Tell me about the last time you felt really happy.” They rated the responses to these on a seven-point scale ranging from -3 (indicating a complete unwillingness to disclose information) to +3 (indicating a complete willingness). All participants were also asked to fill out questionnaires intended to probe how they felt about the interview.

Dr Gratch and his colleagues report in Computers in Human Behavior that, though everyone interacted with the same avatar, their experiences differed markedly based on what they believed they were dealing with. Those who thought Ellie was under the control of a human operator reported greater fear of disclosing personal information, and said they managed more carefully what they expressed during the session, than did those who believed they were simply interacting with a computer.

Crucially, the psychologists observing the subjects found that those who thought they were dealing with a human were indeed less forthcoming, averaging 0.56 compared with the other group’s average score of 1.11. The first group also betrayed fewer signs of sadness, averaging 0.08 compared with the other group’s 0.12 sadness score.
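To make the scoring concrete: each participant carries three raters' disclosure scores (the -3 to +3 scale) and one facial-sadness score (0 to 1), and group averages like 0.56 vs 1.11 are means over participants. A toy sketch with invented numbers, not the study's actual data or code (`summarize` is a hypothetical helper):

```python
from statistics import mean

def summarize(group):
    """Return (mean disclosure, mean sadness) for one condition.

    Each participant dict holds three raters' disclosure scores
    (-3..+3) and one facial-sadness score (0..1); per-participant
    disclosure is the mean of the three raters."""
    disclosure = mean(mean(p["ratings"]) for p in group)
    sadness = mean(p["sadness"] for p in group)
    return round(disclosure, 2), round(sadness, 2)
```

With such per-group summaries, the "thought human" and "thought computer" conditions can be compared directly, as in the study's 0.56 vs 1.11 contrast.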

This quality of encouraging openness and honesty, Dr Gratch believes, will be of particular value in assessing the psychological problems of soldiers—a view shared by America’s Defence Advanced Research Projects Agency, which is helping to pay for the project.

Soldiers place a premium on being tough, and many avoid seeing psychologists at all costs. That means conditions such as post-traumatic stress disorder (PTSD), to which military men and women are particularly prone, often get dangerous before they are caught. Ellie could change things for the better by confidentially informing soldiers with PTSD that she feels they could be a risk to themselves and others, and advising them about how to seek treatment.

If, that is, a cynical trooper can be persuaded that Ellie really isn’t a human psychologist in disguise. Because if Ellie can pass for human, presumably a human can pass for Ellie.
[414 words]

Source: The Economist
http://www.economist.com/news/science-and-technology/21612114-virtual-shrink-may-sometimes-be-better-real-thing-computer-will-see

#3 | OP | Posted 2014-9-15 22:55:30
Part III: Obstacle


What Happens When Artificial Intelligence Turns On Us? [Excerpt]

In a new book, James Barrat warns that artificial intelligence will one day outsmart humans, and there is no guarantee that it will be benevolent
By Erica R. Hendry | January 21, 2014

[Paraphrase7]
Artificial intelligence has come a long way since R2-D2. These days, most millennials would be lost without smart GPS systems. Robots are already navigating battlefields, and drones may soon be delivering Amazon packages to our doorsteps.

Siri can solve complicated equations and tell you how to cook rice. She has even proven she can respond to questions with a sense of humor.

But all of these advances depend on a user giving the A.I. direction. What would happen if GPS units decided they didn’t want to go to the dry cleaners, or worse, Siri decided she could become smarter without you around?



These are just the tamest of outcomes James Barrat, an author and documentary filmmaker, forecasts in his new book, Our Final Invention: Artificial Intelligence and the End of the Human Era.

Before long, Barrat says, artificial intelligence—from Siri to drones and data mining systems—will stop looking to humans for upgrades and start seeking improvements on their own. And unlike the R2-D2s and HALs of science fiction, the A.I. of our future won’t necessarily be friendly, he says: they could actually be what destroy us.

In a nutshell, can you explain your big idea?   

In this century, scientists will create machines with intelligence that equals and then surpasses our own. But before we share the planet with super-intelligent machines, we must develop a science for understanding them. Otherwise, they’ll take control. And no, this isn’t science fiction.

Scientists have already created machines that are better than humans at chess, Jeopardy!, navigation, data mining, search, theorem proving and countless other tasks. Eventually, machines will be created that are better than humans at A.I. research.

At that point, they will be able to improve their own capabilities very quickly. These self-improving machines will pursue the goals they’re created with, whether they be space exploration, playing chess or picking stocks. To succeed they’ll seek and expend resources, be it energy or money. They’ll seek to avoid the failure modes, like being switched off or unplugged. In short, they’ll develop drives, including self-protection and resource acquisition—drives much like our own. They won’t hesitate to beg, borrow, steal and worse to get what they need.

How did you get interested in this topic?
I’m a documentary filmmaker. In 2000, I interviewed inventor Ray Kurzweil, roboticist Rodney Brooks and sci-fi legend Arthur C. Clarke for a TLC film about the making of the novel and film, 2001: A Space Odyssey. The interviews explored the idea of the HAL 9000, and malevolent computers. Kurzweil’s books have portrayed the A.I. future as a rapturous “singularity,” a period in which technological advances outpace humans’ ability to understand them. Yet he anticipated only good things emerging from A.I. that is strong enough to match and then surpass human intelligence. He predicts that we’ll be able to reprogram the cells of our bodies to defeat disease and aging. We’ll develop super endurance with nanobots that deliver more oxygen than red blood cells. We’ll supercharge our brains with computer implants so that we’ll become super intelligent. And we’ll port our brains to a more durable medium than our present “wetware” and live forever if we want to. Brooks was optimistic, insisting that A.I.-enhanced robots would be allies, not threats.

Scientist-turned-author Clarke, on the other hand, was pessimistic. He told me intelligence will win out, and humans would likely compete for survival with super-intelligent machines. He wasn’t specific about what would happen when we share the planet with super-intelligent machines, but he felt it’d be a struggle for mankind that we wouldn’t win.

That went against everything I had thought about A.I., so I began interviewing artificial intelligence experts.

What evidence do you have to support your idea?

Advanced artificial intelligence is a dual-use technology, like nuclear fission, capable of great good or great harm. We’re just starting to see the harm.
The NSA privacy scandal came about because the NSA developed very sophisticated data-mining tools. The agency used its power to plumb the metadata of millions of phone calls and the entirety of the Internet—critically, all email. Seduced by the power of data-mining A.I., an agency entrusted to protect the Constitution instead abused it. They developed tools too powerful for them to use responsibly.

Today, another ethical battle is brewing about making fully autonomous killer drones and battlefield robots powered by advanced A.I.—human-killers without humans in the loop. It’s brewing between the Department of Defense and the drone and robot makers who are paid by the DOD, and people who think it’s foolhardy and immoral to create intelligent killing machines. Those in favor of autonomous drones and battlefield robots argue that they’ll be more moral—that is, less emotional, will target better and be more disciplined than human operators. Those against taking humans out of the loop are looking at drones’ miserable history of killing civilians, and involvement in extralegal assassinations. Who shoulders the moral culpability when a robot kills? The robot makers, the robot users, or no one? Never mind the technical hurdles of telling friend from foe.

In the longer term, as experts in my book argue, A.I. approaching human-level intelligence won’t be easily controlled; unfortunately, super-intelligence doesn’t imply benevolence. As A.I. theorist Eliezer Yudkowsky of MIRI [the Machine Intelligence Research Institute] puts it, “The A.I. does not love you, nor does it hate you, but you are made of atoms it can use for something else.” If ethics can’t be built into a machine, then we’ll be creating super-intelligent psychopaths, creatures without moral compasses, and we won’t be their masters for long.

What is new about your thinking?

Individuals and groups as diverse as American computer scientist Bill Joy and MIRI have long warned that we have much to fear from machines whose intelligence eclipses our own. In Our Final Invention, I argue that A.I. will also be misused on the development path to human-level intelligence. Between today and the day when scientists create human-level intelligence, we’ll have A.I.-related mistakes and criminal applications.

Why hasn’t more been done, or, what is being done to stop AI from turning on us?

There’s not one reason, but many. Some experts don’t believe we’re close enough to creating human-level artificial intelligence and beyond to worry about its risks. Many A.I. makers win contracts with the Defense Advanced Research Projects Agency [DARPA] and don’t want to raise issues they consider political. The normalcy bias is a cognitive bias that prevents people from reacting to disasters and disasters in the making—that’s definitely part of it. But a lot of A.I. makers are doing something. Check out the scientists who advise MIRI. And, a lot more will get involved once the dangers of advanced A.I. enter mainstream dialogue.

Who will be most affected by this idea?

Everyone on the planet has much to fear from the unregulated development of super-intelligent machines. An intelligence race is going on right now. Achieving A.G.I. is job number one for Google, IBM and many smaller companies like Vicarious and Deep Thought, as well as DARPA, the NSA and governments and companies abroad. Profit is the main motivation for that race. Imagine one likely goal: a virtual human brain at the price of a computer. It would be the most lucrative commodity in history. Imagine banks of thousands of PhD quality brains working 24/7 on pharmaceutical development, cancer research, weapons development and much more. Who wouldn’t want to buy that technology?

Meanwhile, 56 nations are developing battlefield robots, and the drive is to make them, and drones, autonomous. They will be machines that kill, unsupervised by humans. Impoverished nations will be hurt most by autonomous drones and battlefield robots. Initially, only rich countries will be able to afford autonomous kill bots, so rich nations will wield these weapons against human soldiers from impoverished nations.

How might it change life, as we know it?

Imagine: in as little as a decade, a half-dozen companies and nations field computers that rival or surpass human intelligence. Imagine what happens when those computers become expert at programming smart computers. Soon we’ll be sharing the planet with machines thousands or millions of times more intelligent than we are. And, all the while, each generation of this technology will be weaponized. Unregulated, it will be catastrophic.
[1377 words]

Source: Smithsonianmag
http://www.smithsonianmag.com/innovation/what-happens-when-artificial-intelligence-turns-us-180949415/?no-ist

本帖子中包含更多资源

您需要 登录 才可以下载或查看,没有帐号?立即注册

x
#4 | Posted 2014-9-15 23:05:23
Perfect timing~ THX 伊蔓达 & Cherry
----------
speaker:
app: link similar things together
detect the icon

time7:
eg. of AI
how get interested
some challenges
fear
changes to life


My reading has been off lately... such a sloppy check-in.
#5 | Posted 2014-9-16 07:27:20
Time2 2'50''
Time3 2'00''
Her is a movie about AI, and it reflects some human emotional dependence on fictional characters
Some pitfalls and concerns about AI

Time4 1'59''
There are many concerns about increasingly sophisticated AI, including moral and safety concerns.


Time5 1'28''
Time6 2'00
Ellie is a new AI invention; she can scan for signs in facial expressions.
Ellie can encourage honesty, a quality that can be used in psychological diagnosis

Obstacle: 5'06''
AI robots can be self-improving; this quality raises controversy in society.
We should consider the listed questions when judging AI
#6 | Posted 2014-9-16 07:55:39
time2+time3 5:10
the famous movie HER illustrates the story between a writer and an intelligence, possibly because of his painful separation from his wife.
some people take comfort from software, such as conversational apps.
it would be disastrous if they got too used to it.

time4 3:14
there may be some dangers in the relationship between AI and humans. moral problems.

time5+time6 4:40
a study conducted to see how willing people are to disclose to a computer system.
the results show that people are more likely to open up to computers than to real humans.
Such a finding could be used to treat soldiers’ PTSD.

obstacle 9:12
Artificial intelligence means a lot to humans.
through the interview, there are two kinds of considerations: humans will use tech to live better, or tech will turn on humans.
the dangers if AI turns on humans. how it changes our lives.
#7 | Posted 2014-9-16 09:14:25
T2
A recent movie, Her, raises some questions about the relationship between humans and computers.
Although Samantha diverges from the modern idea of AI, which doesn't waste resources on being chatty, such AI could help people with illness or loneliness.
Also, it is now common to hear of someone immersed in a relationship with fictional characters.

T3
An SI would have some back story and could evolve through time.
An SI may be poisonous, but it can train people's communicative skills.
Still, the author worries that people could be crushed by an SI.
T4 3:12
Computers can do a lot of things for humans now.
Recently, Barrat talked with scientists to show his concern that one day AI would subjugate humans.
He recommends that humans find a way to coexist with AI in the future.
T5 1:55
E is a robot who can understand every facial expression.
Scientists wondered whether people would tell their authentic feelings to an avatar.
Thus, they conveyed different messages to 2 groups and let E ask them questions.
T6 3:23
The results show that the group who were told E is a computer opened up more than the group who were told E is controlled by a human.
This method could help soldiers.
Obstacle: 11:29
AI is smarter than humans and not friendly.
In order to achieve their goals, AIs become greedy in collecting resources.
The book is optimistic, but I am not.
For example, the NSA tool and robots that kill people on the battlefield.
AI will also be misused by humans.
2 reasons:
1. scientists think we can't make AI yet; 2. AI makers don't want to raise issues.
Big companies' profits would be affected, and so would rich countries.
AI would become weaponized and catastrophic.
I think this Barrat is just talking nonsense...
#8 | Posted 2014-9-16 09:37:46
long introduction about the latest status of AI, thank you
#9 | Posted 2014-9-16 10:11:33
2+3time
The first article was rough to read; many words are run together, e.g. "butthen". OP, please fix.
By introducing a film named Her, the author suggests that AI may be able to help people overcome loneliness. Even though it sounds absurd that people take comfort from fictional things, such things can actually help people feel better. The author then raises some issues about the use of AI, worrying that maybe one day we will give our hearts to a charming voice in an earpiece.
7m48s

4time
The second article also has some run-together words, though far fewer than the first...
When mechanical brains become as intelligent as human brains, human beings may end up on a planet populated by AI entities.
2m37s

5+6time
Researchers ran an experiment to see whether people are more honest facing a computer than facing a human being. The answer is YES. Researchers also found that this quality of encouraging openness and honesty will help in evaluating the psychological problems of soldiers.
5m51s

This is an interview with James Barrat. In this interview, he answers some questions and shares his concerns about high-level AI with the world.
9m32s
Hope the run-together-words issue can be fixed.
#10 | Posted 2014-9-16 10:14:56
41-16. Thanks yimanda ~~
Time2
A writer named T fell in love with artificial intelligence S.
S does not echo real-world trends, and it is a waste of resources to make assistants chatty.
So AI persons could be used in dementia nursing homes to accompany patients and hold automated dialogues.
Time3
Time4
Computers have done a lot for humans, and the writer worries whether one day AI will act ruthlessly toward its creator.
Time5
E is a psychologist who processes the words you say and works out your pitch, even though she is just an AI.
People tend to open up more to E than to a doctor or another human.
Time6
The researchers rated responders' sadness scores and compared conversations with an AI versus a human.
People are less forthcoming when communicating with a human.
E could help soldiers with PTSD.

Obstacle
AI is developing quickly, and AIs can provide us services.
But the writer and documentary maker J puts forward his worries about smarter AIs.
He explains his big idea: AI will outsmart humans and get out of control.
Why he got interested in this field: the interview with the sci-fi legend.
The evidence to support his idea.
How to avoid AI turning on us: AI is a dual-use technology.
© 2003-2023 ChaseDream.com. All Rights Reserved.