How will AI change the world?_TED-Ed transcript
In the coming years, artificial intelligence is probably going to change your life, and likely the entire world. But people have a hard time agreeing on exactly how. The following are excerpts from a World Economic Forum interview where renowned computer science professor and AI expert Stuart Russell helps separate the sense from the nonsense.
在将来的岁月里,人工智能 极有可能会改变 你的生活,甚至全世界。 但人们对于这种改变的 呈现方式结论不一。 以下的采访摘录自 为我们辟谣的著名计算机科学教授 兼人工智能专家, 斯图尔特·罗素(Stuart Russell)。
There’s a big difference between asking a human to do something and giving that as the objective to an AI system. When you ask a human to get you a cup of coffee, you don’t mean this should be their life’s mission, and nothing else in the universe matters. Even if they have to kill everybody else in Starbucks to get you the coffee before it closes— they should do that. No, that’s not what you mean. All the other things that we mutually care about, they should factor into your behavior as well.
要求一个人做某件事与 将其作为目标交给人工智能系统 是有很大的区别的。 当你拜托一个人帮你拿杯咖啡时, 你并不在命令这个人 奉它为人生使命, 以致宇宙里再也没有更重要的事了。 就算除掉了星巴克里的其他人 都得在店铺关门之前买到你的咖啡。 不,你不是那个意思。 其他我们共同在意的事物, 都应当影响你的所作所为。
And the problem with the way we build AI systems now is we give them a fixed objective. The algorithms require us to specify everything in the objective. And if you say, can we fix the acidification of the oceans? Yeah, you could have a catalytic reaction that does that extremely efficiently, but it consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours.
而我们现在建造的 人工智能系统的问题在于, 我们给了它们一个固定目标。 算法要求我们规定目标里的一切。 比如说,“能修正 海洋的酸化问题吗?” 没问题,可以形成一道 非常有效率的催化反应, 但这将会吞噬大气层里 四分之一的氧气, 好像会导致我们 全都慢慢地、不愉快地 在几个小时后死去。
So, how do we avoid this problem? You might say, okay, well, just be more careful about specifying the objective— don’t forget the atmospheric oxygen. And then, of course, some side effect of the reaction in the ocean poisons all the fish. Okay, well I meant don’t kill the fish either. And then, well, what about the seaweed? Don’t do anything that’s going to cause all the seaweed to die. And on and on and on.
那,我们该如何避免这种问题呢? 你可能会说,好吧,那我们 就对目标更具体地说明一下—— 别忘了大气层里的氧气。 然后海洋里某种效应的副作用 将会毒死鱼儿们。 好吧,那我就再定义一下, 也别杀了鱼。 那么,海藻呢? 也别做什么会让海藻都死光的事。 以此类推。
And the reason that we don’t have to do that with humans is that humans often know that they don’t know all the things that we care about. If you ask a human to get you a cup of coffee, and you happen to be in the Hotel George Sand in Paris, where the coffee is 13 euros a cup, it’s entirely reasonable to come back and say, well, it’s 13 euros, are you sure you want it, or I could go next door and get one? And it’s a perfectly normal thing for a person to do. To ask, I’m going to repaint your house— is it okay if I take off the drainpipes and then put them back? We don't think of this as a terribly sophisticated capability, but AI systems don’t have it because the way we build them now, they have to know the full objective. If we build systems that know that they don’t know what the objective is, then they start to exhibit these behaviors, like asking permission before getting rid of all the oxygen in the atmosphere.
而我们之所以对于人类 不需要这样做 是因为人们大都明白自己并不可能 对每个人的爱好无不知晓。 如果拜托一个人去帮你买咖啡, 而你刚好人在一杯咖啡 13 欧元的 巴黎乔治圣德酒店, 你很有可能会再回去问一下: “喂,这里咖啡得 13 欧元, 你还要吗?要不我去隔壁店里 帮你买杯去?” 这完全合情合理。 这对于人类来讲再也正常不过。 在帮人重涂房子时问起: “我先把排水管拆了 再装回去,可以吗?” 我们并不觉得 这是个特别复杂厉害的能力, 但人工智能系统没有个能力, 因为在我们当下的建造方法里, 它们必须熟知全面目标。 如果我们建造的系统 明白它们并不了解目标, 它们就会开始展示此类行动: 比如在除掉大气层里的氧气 之前先征求许可。
In all these senses, control over the AI system comes from the machine’s uncertainty about what the true objective is. And it’s when you build machines that believe with certainty that they have the objective, that’s when you get this sort of psychopathic behavior. And I think we see the same thing in humans.
在这种意义上, 对于人工智能系统的控制 源于机器对真正目标的不确定性。 而只有在建造对目标自以为有着 绝对肯定性的机器时, 才会产生这种精神错乱的行为。 而我觉得对于人类, 也是相同的理念。
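Russell's contrast between an agent that is certain of its objective and one that knows its objective is incomplete can be sketched as a toy program. This is entirely illustrative: the plans, scores, and helper names below are invented for this sketch and do not come from the talk.

```python
# Toy sketch: a fixed-objective agent optimizes the stated goal blindly,
# while an agent that knows its objective is incomplete treats any
# unvalued side effect as a reason to ask permission first.

plans = [
    {"name": "catalytic reaction", "objective_gain": 100,
     "side_effects": ["consumes 25% of atmospheric oxygen"]},
    {"name": "slow remediation", "objective_gain": 60,
     "side_effects": []},
]

def certain_agent(plans):
    """Fixed objective: maximize the stated gain; nothing else matters."""
    return max(plans, key=lambda p: p["objective_gain"])

def uncertain_agent(plans, ask_human):
    """Knows the stated objective is incomplete: a plan with side effects
    it was never told how to value triggers a request for permission."""
    best = max(plans, key=lambda p: p["objective_gain"])
    if best["side_effects"] and not ask_human(best):
        # Permission denied: fall back to the best side-effect-free plan.
        safe = [p for p in plans if not p["side_effects"]]
        return max(safe, key=lambda p: p["objective_gain"]) if safe else None
    return best

print(certain_agent(plans)["name"])                     # catalytic reaction
print(uncertain_agent(plans, lambda p: False)["name"])  # slow remediation
```

The design choice mirrors the passage above: control comes not from enumerating every forbidden side effect, but from the agent deferring whenever a plan touches something outside its known preferences.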
What happens when general purpose AI hits the real economy? How do things change? Can we adapt? This is a very old point. Amazingly, Aristotle actually has a passage where he says, look, if we had fully automated weaving machines and plectrums that could pluck the lyre and produce music without any humans, then we wouldn’t need any workers.
通用型人工智能闯进实体经济后, 会发生什么? 事态会如何改变? 我们能够调整适应吗? 这是个非常古老的讨论点。 令人惊讶的是,据记载, 亚里士多德(Aristotle)就曾说过: “看吧,如果我们有了 完全自动化的织布机 与无需人们撩动 就能弹琴奏乐的弦拨, 那我们就不需要工人了。”
That idea, which I think it was Keynes who called it technological unemployment in 1930, is very obvious to people. They think, yeah, of course, if the machine does the work, then I'm going to be unemployed.
这个主意,也就是 1930 年 在凯恩斯(Keynes) 口中的技术性失业, 对于人们来说,这显而易见。 想着,嗯,当然了, 如果机器做了我的工, 那我就要失业了。
You can think about the warehouses that companies are currently operating for e-commerce; they are half automated. The way it works is that, instead of an old warehouse where you’ve got tons of stuff piled up all over the place and humans go and rummage around and then bring it back and send it off, there’s a robot that goes and gets the shelving unit that contains the thing that you need, but the human has to pick the object out of the bin or off the shelf, because that’s still too difficult. But, at the same time, would you make a robot that is accurate enough to be able to pick pretty much any object within a very wide variety of objects that you can buy? That would, at a stroke, eliminate 3 or 4 million jobs.
我们能想到,如今半自动化运行的 公司电子商务仓库。 运行方法如下—— 比之让人们在堆积如山的 旧形仓库里到处寻找货物, 再取回送出, 现在由机器人行驶到 你需要的物件的货架单元上, 但仍然需要人把物品 从货架取出, 因为那对机器人还是太难了。 但与此同时, 我们有可能创造出 一个能够在众多可购类别中 精准挑选出 任何一件目标物品的机器人吗? 以此一举,或会削减 三、四百万有余的工作岗位?
There's an interesting story that E.M. Forster wrote, where everyone is entirely machine dependent. The story is really about the fact that if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it. You can see “WALL-E” actually as a modern version, where everyone is enfeebled and infantilized by the machine, and that hasn’t been possible up to now.
E·M·福斯特(E.M. Forster) 写过一篇引人深思的故事: 故事里的人们都完全依赖机器。 其中寓意是, 如果你把文明的管理权 交给了机器, 那你将会失去自身了解文明、 把文明传承于下一代的动力。 我们可以将《机器人总动员》 视为现代版: 由于机器,人们变得衰弱与幼儿化, 到目前为止,这还不可能。
We put a lot of our civilization into books, but the books can’t run it for us. And so we always have to teach the next generation. If you work it out, it’s about a trillion person years of teaching and learning and an unbroken chain that goes back tens of thousands of generations. What happens if that chain breaks?
我们把大量文明注入书籍, 但书籍无法为我们管理文明。 所以我们必须一直指导下一代。 计算下来,这是一个往回一万亿年、 数以万计的世代之间 绵延不绝的教导与学习的链条。 这条链如果断了,将会如何?
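The trillion-person-years figure can be sanity-checked with a rough back-of-envelope calculation. The population and schooling numbers below are my own assumptions for the sketch, not figures Russell gives:

```python
# Back-of-envelope check of "about a trillion person years of teaching
# and learning": roughly 100 billion humans are estimated to have ever
# lived; if each spends on the order of 10 years being taught, the
# running total is about a trillion person-years.
humans_ever = 100e9          # assumed: ~100 billion humans ever born
years_learning_each = 10     # assumed: ~10 years of teaching per person
total_person_years = humans_ever * years_learning_each
print(total_person_years)    # 1e12, i.e. about a trillion
```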
I think that’s something we have to understand as AI moves forward. The actual date of arrival of general purpose AI— you’re not going to be able to pinpoint, it isn’t a single day. It’s also not the case that it’s all or nothing. The impact is going to be increasing. So with every advance in AI, it significantly expands the range of tasks.
随着人工智能的进展, 我认为这是我们必须了解的事情。 我们将无法精准地确认 通用型人工智真正来临的时日, 因为那并不会是一日之劳。 也并不是存在或不存在的两项极端。 这方面的影响力将是与日俱进的。 所以随着人工智能的进步, 它所能完成的任务将显著地扩展。
So in that sense, I think most experts say by the end of the century, we’re very, very likely to have general purpose AI. The median is something around 2045. I'm a little more on the conservative side. I think the problem is harder than we think.
这样一看,我觉得大部分的专家都说 我们极有可能在世纪之末前 生产通用型人工智能。 中位数位置在 2045 年左右。 我对此偏于保守派。 我认为问题比我们想象的还要难。
I like what John McCarthy, one of the founders of AI, said when he was asked this question: somewhere between five and 500 years. And we're going to need, I think, several Einsteins to make it happen.
我喜欢人工智能的创始人之一 约翰·麦卡锡(John McCarthy) 对这个问题的答案: 他说,应该在 5 到 500 年之间。 而我觉得,这得要 几位爱因斯坦才能实现。
Source:
https://www.ted.com/talks/ted_ed_how_will_ai_change_the_world/c/transcript