
[TED Study] The incredible inventions of intuitive AI

by 학생교활 (School life, the cunning way!) 2021. 7. 19.


 

Hello, this is the TED coach at 학생교활.

Originally, 학생교활 made materials mainly for Suwon Foreign Language High School's internal exams.

But it turns out that many schools besides Suwon Foreign Language High School also put TED talks on their exams.

So 학생교활 brought on a new TED coach, and that's me. ^^

While deciding which talk to start with, I found one that is on Suwon Foreign Language High School's 2021 first-year, second-semester exam syllabus:

"The incredible inventions of intuitive AI."

https://youtu.be/aR5N2Jl8k14

The talk is meaningful and fun, so I thought it was well worth making a worksheet for.

 

The 학생교활 TED worksheet consists of:

1. Vocabulary

2. Main text (English on the left, translation on the right)

3. Choosing the contextually appropriate word

4. Reading through the Korean translation

 

1. Study the vocabulary first. Memorize the words on your own; for the trickier ones, there are practice exercises below the vocabulary page so you can try using them. (I didn't make exercises for every word; it was a bit of a chore. :)) Make use of those.

2. Read the main text. Each sentence is numbered, so read it sentence by sentence.

Mark any words you don't know, and for now use the translation on the right to piece together their meaning. That usually gives you a rough sense of an unknown word, right?

Also mark any sentence structures you don't understand.

*Never write the Korean meaning on the left-hand (English) side. You need to keep practicing recalling the Korean meaning from the English alone; if the Korean is written right underneath, it gets in the way of translation practice. (During translation practice, fold the paper so the right-hand side is hidden.)

After reading to the end, look up the words you didn't know. Again, never write or mark the Korean meaning on the English side.

3. Find the word that fits the context. This can be a little tricky; it may often seem like either word could work. When that happens, always go back to the main text, find which word was actually used, and remind yourself of it.

If an unfamiliar word appears in the brackets, look that word up online as well.

4. Read through the Korean translation attached on the last page. Underline the important parts and organize the content in your own way.

 

To be honest, grammar questions aren't usually set from TED talks.

But you never know. Word is that Suwon Foreign Language High School's first-year, first-semester exam this year included the long passage from mock-exam questions 43-45...

So the possibility of TED-based grammar questions can't be ruled out entirely,

but TED talks themselves often contain grammatically shaky sentences, so I don't expect grammar questions from them. (No guarantees, though. :))

Study TED with a focus on vocabulary and content.

There's absolutely no need to feel guilty about reading the Korean translation.

Grasping the content comes first; moving on to analyzing the English text afterwards is not too late at all, and it won't hurt your score.

Put content comprehension first.

Here's the file!

 

[학생교활] [TED학습] The incredible inventions of intuitive AI.pdf
0.26MB

----------------------------

From here on is the context-appropriate word exercise from the worksheet.

It's quite dense and may be hard to read here, so refer to the attached file if you can.

Check it out! ^^

 

How many of you are creatives, designers, engineers, entrepreneurs, artists, or maybe you just have a really [big/small] imagination? Show of hands? That's most of you. I have some news for us creatives. Over the course of the next 20 years, [more/less] will change around the way we do our work than has happened in the last 2,000. In fact, I think we're at the dawn of a new age in human history. Now, there have been four major historical eras [defined/refined] by the way we work. The Hunter-Gatherer Age [lasted/lated] several million years. And then the Agricultural Age lasted several thousand years. The Industrial Age lasted a couple of centuries. And now the Information Age has lasted just a few decades. And now today, we're on the [bathroom/cusp] of our next great era as a species. Welcome to the [Decreased/Augmented] Age. In this new era, your natural human capabilities are going to be [augmented/decreased] by computational systems that help you think, robotic systems that help you make, and a digital nervous system that [disconnects/connects] you to the world far beyond your [natural/artificial] senses. Let's [start/stop] with cognitive augmentation. How many of you are augmented cyborgs? I would actually argue that we're already augmented. Imagine you're at a party, and somebody asks you a question that you [know/don't know] the answer to. If you have one of these, in a few seconds, you can know the answer. But this is just a [primary/primitive] beginning. Even Siri is just a passive tool. In fact, for the last three-and-a-half million years, the tools that we've had have been completely [proactive/passive]. They do exactly what we tell them and nothing more. Our very first tool only cut where we struck it. The [adhesive/chisel] only carves where the artist points it. And even our most advanced tools do nothing without our [implicit/explicit] direction.
In fact, to date, and this is something that [gladdens/frustrates] me, we've always been limited by this need to manually push our wills into our tools -- like, manual, [literally/literarily] using our hands, even with computers. But I'm more like Scotty in "Star Trek." I want to have a [conflict/conversation] with a computer. I want to say, "Computer, let's design a car," and the computer [sells/shows] me a car. And I say, "No, more fast-looking, and less German," and bang, the computer shows me an option. That conversation might be a little ways off, probably less than many of us think, but right now, we're working on it. Tools are making this leap from being passive to being [valueless/generative]. [Valueless/Generative] design tools use a computer and algorithms to [degrade/synthesize] geometry to come up with new designs all by themselves. All it needs are your goals and your [onlooking/constraints]. I'll give you an example. In the case of this aerial drone chassis, all you would need to do is tell it something like, it has four propellers, you want it to be as lightweight as possible, and you need it to be aerodynamically [effortful/efficient]. Then what the computer does is it explores the entire solution space: every single possibility that solves and meets your criteria -- millions of them. It takes big computers to do this. But it comes back to us with designs that we, [between ourselves/by ourselves], never could've imagined. And the computer's coming up with this stuff all by itself -- no one ever drew anything, and it started completely from [completion/scratch]. And by the way, it's no accident that the drone body looks just like the pelvis of a flying squirrel. It's because the algorithms are designed to work the same way evolution does. What's exciting is we're starting to see this technology out in the [virtual/real] world. We've been working with Airbus for a couple of years on this concept plane for the future. It's a ways out still. 
But just recently we used a generative-design AI to come up with this. This is a 3D-printed [cabin/carbon] partition that's been designed by a computer. It's stronger than the original yet half the weight, and it will be flying in the Airbus A320 later this year. So computers can now generate; they can come up with their own solutions to our well-defined problems. But they're not [logical/intuitive]. They still have to start from scratch every single time, and that's because they never learn. Unlike Maggie. Maggie's actually smarter than our most advanced design tools. What do I mean by that? If her owner picks up that leash, Maggie knows with a fair degree of certainty it's time to go for a walk. And how did she learn?
Well, every time the owner picked up the [trash/leash], they went for a walk. And Maggie did three things: she had to pay attention, she had to remember what happened and she had to retain and create a pattern in her mind. Interestingly, that's exactly what computer scientists have been trying to get AIs to do for the last 60 or so years. Back in 1952, they built this computer that could play Tic-Tac-Toe. Big deal. Then 45 years later, in 1997, Deep Blue beats Kasparov at chess. 2011, Watson beats these two humans at Jeopardy, which is much harder for a computer to play than chess is. In fact, rather than working from predefined recipes, Watson had to use [irrationality/reasoning] to overcome his human opponents. And then a couple of weeks ago, DeepMind's AlphaGo beats the world's best human at Go, which is the most difficult game that we have. In fact, in Go, there are more possible moves than there are atoms in the universe. So in order to win, what AlphaGo had to do was develop intuition. And in fact, at some points, AlphaGo's programmers didn't understand why AlphaGo was doing what it was doing. And things are moving really fast. I mean, consider -- in the space of a human lifetime, computers have gone from a child's game to what's recognized as the pinnacle of strategic thought. What's basically happening is computers are going from being like Spock to being a lot more like Kirk. Right? From pure logic to intuition. Would you cross this bridge? Most of you are saying, "Oh, hell no!" And you arrived at that decision in a split second. You just sort of knew that bridge was [safe/unsafe]. And that's exactly the kind of [logic/intuition] that our deep-learning systems are starting to develop right now. Very soon, you'll [literarily/literally] be able to show something you've made, you've designed, to a computer, and it will look at it and say, "Sorry, homie, that'll never work. You have to try again." 
Or you could ask it if people are going to like your next song, or your next flavor of ice cream. Or, much more importantly, you [could/couldn't] work with a computer to solve a problem that we've never faced before. For instance, climate change. We're not doing a very good job on our own, we could certainly use all the help we can get. That's [why/what] I'm talking about, technology amplifying our cognitive abilities so we can imagine and design things that were simply out of our reach as plain old un-augmented humans. So what about making all of this crazy new stuff that we're going to invent and design? I think the era of human augmentation is as much about the [physical/psychological] world as it is about the virtual, intellectual realm. How will technology augment us? In the [psychological/physical] world, robotic systems. OK, there's certainly a [confidence/fear] that robots are going to take jobs away from humans, and that is true in certain sectors. But I'm much more interested in this idea that humans and robots working together are going to augment each other, and start to inhabit a new space. This is our applied research lab in San Francisco, where one of our areas of focus is advanced robotics, specifically, human-robot collaboration. And this is Bishop, one of our robots. As an experiment, we set it up to help a person working in construction doing repetitive tasks -- tasks like cutting out holes for outlets or light switches in drywall. So, Bishop's human partner can tell it what to do in plain English and with simple gestures, kind of like talking to a dog, and then Bishop executes on those instructions with perfect [precision/imprecision]. We're using the human for what the human is good at: awareness, perception and decision making. And we're using the robot for what it's good at: [imprecision/precision] and repetitiveness. Here's another cool project that Bishop worked on.
The goal of this project, which we called the HIVE, was to prototype the experience of humans, computers and robots all working together to solve a highly complex design problem. The humans acted as labor. They cruised around the construction site, they manipulated the bamboo -- which, by the way, because it's a non-isomorphic material, is super hard for robots to deal with. But then the robots did this fiber winding, which was almost impossible for a human to do. And then we had an AI that was controlling everything. It was telling the humans what to do, telling the robots what to do and keeping track of thousands of individual components. What's interesting is, building this pavilion was simply not possible without human, robot and AI augmenting each other. OK, I'll share one more project. This one's a little bit crazy. We're working with Amsterdam-based artist Joris Laarman and his team at MX3D to generatively design and robotically print the world's first autonomously manufactured bridge.
So, Joris and an AI are designing this thing right now, as we speak, in Amsterdam. And when they're done, we're going to hit "Go," and robots will start 3D printing in stainless steel, and then they're going to keep printing, without human [intervention/stillness], until the bridge is finished. So, as computers are going to augment our ability to imagine and design new stuff, robotic systems are going to help us build and make things that we've never been able to make before. But what about our ability to sense and control these things? What about a nervous system for the things that we make? Our nervous system, the human nervous system, tells us everything that's going on around us. But the nervous system of the things we make is [advanced/rudimentary] at best. For instance, a car doesn't tell the city's public works department that it just hit a pothole at the corner of Broadway and Morrison. A building doesn't tell its designers whether or not the people [outside/inside] like being there, and the toy manufacturer doesn't know if a toy is actually being played with -- how and where and whether or not it's any fun. Look, I'm sure that the designers imagined this lifestyle for Barbie when they designed her. But what if it turns out that Barbie's actually really lonely? If the designers had known what was really happening in the real world with their designs -- the road, the building, Barbie -- they could've used that knowledge to create an experience that was better for the user. What's missing is a nervous system connecting us to all of the things that we design, make and use. What if all of you had that kind of information flowing to you from the things you create in the real world? With all of the stuff we make, we spend a [slight/tremendous] amount of money and energy -- in fact, last year, about two trillion dollars -- convincing people to buy the things we've made. 
But if you had this connection to the things that you design and create after they're out in the real world, after they've been sold or launched or whatever, we could actually change that, and go from making people want our stuff, to just making stuff that people want in the first place. The good news is, we're working on digital nervous systems that connect us to the things we design. We're working on one project with a couple of guys down in Los Angeles called the Bandito Brothers and their team. And one of the things these guys do is build [sane/insane] cars that do absolutely [sane/insane] things. These guys are crazy -- in the best way. And what we're doing with them is taking a traditional race-car chassis and giving it a nervous system. So we instrumented it with dozens of sensors, put a world-class driver behind the wheel, took it out to the desert and drove the hell out of it for a week. And the car's nervous system captured everything that was happening to the car. We [overlooked/captured] four billion data points; all of the forces that it was subjected to. And then we did something crazy. We took all of that data, and plugged it into a generative-design AI we call "Dreamcatcher." So what do you get when you give a design tool a nervous system, and you ask it to build you the [intimate/ultimate] car chassis? You get this. This is something that a human could never have designed. Except a human did design this, but it was a human that was augmented by a generative-design AI, a digital nervous system and robots that can actually fabricate something like this. So if this is the future, the Augmented Age, and we're going to be augmented cognitively, physically and perceptually, what will that look like? What is this wonderland going to be like? I think we're going to see a world where we're moving from things that are fabricated to things that are farmed. Where we're moving from things that are constructed to that which is grown.
We're going to move from being isolated to being connected. And we'll move away from extraction to embrace aggregation. I also think we'll shift from craving [autonomy/obedience] from our things to valuing [obedience/autonomy]. Thanks to our augmented capabilities, our world is going to change [subtly/dramatically]. We're going to have a world with more variety, more connectedness, more dynamism, more complexity, more adaptability and, of course, more beauty. The shape of things to come will be unlike anything we've [never/ever] seen before. Why? Because what will be shaping those things is this new partnership between technology, nature and humanity. That, to me, is a future well worth looking forward to. Thank you all so much.

// *Note: intimate = familiar; sane = of sound mind; subtly = in a subtle way; literarily = in a literary manner; stillness = quietness; between ourselves = just between us //

 
