
Reveal Invisible Motion With This Clever Video Trick

Jun 04, 2021
- Let's say you run a factory that makes widgets. You have a thing that rotates there, you have a thing that goes up and down here; it's a complex setup that produces a lot of vibrations, so you're quite reasonably worried that one day your factory might shake itself to pieces. A small vibration here can cause a bolt to loosen, which causes even more vibration, leading to metal fatigue there, which leads to even larger vibrations. It's a snowball effect. So you want to catch the vibrations early, before they become a problem, before they're even visible to the human eye. So what do you do?
Well, you could put some sensors on the things that you're worried about. So maybe you put an accelerometer on this tube here that you're worried is moving up and down too much, and that sensor gives you data and plots a graph, and you can see it move up and down, and it's within limits, and you can monitor it, and it's fine. But what if there's a part of the factory you forget to put a sensor on because you don't think it's a problem? What if you could take millions of sensors and just blanket your factory with them and capture everything?
It turns out that you can, and in fact you carry a set of these sensors with you at all times. I'm talking about your phone, specifically your phone's camera: that chip that captures the light for the image, the CMOS chip, is made of millions of individual sensors, one for each pixel (well, I guess actually three for each pixel). In reality, you'll probably want to use something better than your phone, like an industrial camera with a global shutter. So you grab a camera and take some videos of your factory, some very wide shots to start with, to make sure you capture absolutely everything. But if what you're looking for are vibrations so small that they're not perceptible with human vision, then how can a camera see these vibrations?
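To make the "millions of sensors" idea concrete, here's a minimal sketch (my own illustration, not RDI's software) of treating a single pixel of a video as a vibration sensor by reading out its brightness frame by frame. It assumes OpenCV and NumPy are available; the file name and pixel coordinates are placeholders.

```python
# Minimal sketch: treat a single pixel of a video as a vibration "sensor".
# Assumes OpenCV (cv2) and NumPy are installed; "factory.mp4" and the
# pixel coordinates are placeholders for your own footage.
import cv2
import numpy as np

cap = cv2.VideoCapture("factory.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)

row, col = 240, 320          # the pixel you want to watch
samples = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    samples.append(float(gray[row, col]))   # brightness at that pixel

cap.release()
signal = np.array(samples)        # one "sensor" reading per frame
t = np.arange(len(signal)) / fps  # timestamps in seconds
```

Every pixel in the frame gives you a time series like this, which is what makes a camera behave like a grid of millions of simultaneous sensors.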
That's where motion amplification comes in. In fact, all the footage I've shown you so far is motion amplified. For example, here you can see the original footage and the motion amplified footage side by side. The original footage doesn't appear to move at all, but there is subtle motion information in there that has been amplified on the other side. In fact, an original video can contain sub-pixel motions and still have that information amplified. So you don't even need a hard edge moving across several pixels; instead you have a line of pixels that darken while neighboring pixels get brighter, and you can interpret that information as motion and amplify it. It's really incredible, especially when you consider that you're extracting this information against a background of noise: all digital images and videos contain a little bit of noise, which is a fancy way of saying random fluctuations in the brightness of individual pixels.
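Here's a toy illustration (not the actual algorithm used by RDI or MIT) of why sub-pixel motion is recoverable at all: shift a soft edge by a twentieth of a pixel and nothing jumps from one pixel to the next, but a row of pixels darkens slightly while its neighbors brighten, and under a first-order brightness-constancy approximation that intensity change divided by the spatial gradient gives you back the shift.

```python
# Toy illustration (not RDI's or MIT's actual method): a sub-pixel shift of a
# smooth edge shows up only as small brightness changes, yet the shift can be
# estimated from them via the first-order (brightness-constancy) approximation
#   I(x, t+1) ~ I(x, t) - d * dI/dx   =>   d ~ -dI_t / (dI/dx)
import numpy as np

x = np.arange(100, dtype=float)
edge = lambda shift: 1.0 / (1.0 + np.exp(-(x - 50.0 - shift)))  # soft edge

frame0 = edge(0.0)
frame1 = edge(0.05)                 # edge moved by 1/20 of a pixel

delta_i = frame1 - frame0           # tiny intensity changes, no pixel "jumps"
grad = np.gradient(frame0)          # spatial brightness gradient dI/dx

mask = np.abs(grad) > 1e-3          # only estimate where the edge actually is
est_shift = np.mean(-delta_i[mask] / grad[mask])
print(f"true shift: 0.05 px, estimated: {est_shift:.3f} px")
```

In real footage an estimate like this sits on top of sensor noise, which is why the practical systems do a great deal more work than this sketch.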
I spoke with Jeff Hay of RDI Technologies, who created a motion amplification system used in the industrial sector. - I started in astronomy, I thought I was going to be an astronomer, but then I had a conversation with my advisor about how to get paid employment, you know. Surprisingly, the problems are more or less the same, you know what I mean? You're using a camera to discover something about some object that's, you know, at a distance. The other hobby I had was being the photo editor for my college newspaper, so I have a background in photography.
So all these things started coming together, and, you know, it's no secret where I ended up. Even a megapixel camera, which by today's standards is a fairly small resolution, has a million sensors, so you can cover an object with sensors and measure everywhere. But when you have something like a motor and a pump, you know, they're connected; the shafts are connected by something they call a coupling, which essentially connects those two pieces of equipment. And if, for example, the pump side goes up while the motor side goes down, that's misalignment, and that's bad, you know. So a lot of the time, when things move together, that's an indication that that's okay, and when they oppose each other, that's where the problems come in.
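As a hypothetical illustration of that "moving together versus opposing each other" point (my own simplification, not RDI's analysis), here's a small sketch that compares the phase of two vibration signals, say brightness traces taken from pixels on the motor side and the pump side of the coupling, at a chosen frequency.

```python
# Hypothetical sketch: given two vibration signals (e.g. brightness traces from
# pixels on the motor side and the pump side of a coupling), check whether they
# move together (in phase) or in opposition (out of phase) at a chosen frequency.
import numpy as np

def relative_phase_deg(sig_a, sig_b, fps, freq_hz):
    """Phase of sig_b relative to sig_a at freq_hz, in degrees."""
    n = len(sig_a)
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    k = np.argmin(np.abs(freqs - freq_hz))          # FFT bin nearest freq_hz
    fa = np.fft.rfft(sig_a - np.mean(sig_a))[k]
    fb = np.fft.rfft(sig_b - np.mean(sig_b))[k]
    phase = np.degrees(np.angle(fb) - np.angle(fa))
    return (phase + 180.0) % 360.0 - 180.0          # wrap into [-180, 180)

# near 0 degrees   -> the two sides move together (generally fine)
# near 180 degrees -> they oppose each other (worth a closer look)
```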
- If you have a 50 frames per second camera and something that vibrates 15 times a second, you might not catch it. Do you have to worry about that kind of thing? - Yes, normally the general rule is that you have to sample, you have to take data, at twice the frequency that interests you. - With this system you can also look at the raw data. You can say, look, tell me what's going on with these pixels here, I'm interested in the vibrations at that point, and you get a nice graph, and that's really important. But I would also say that this summarized visual representation, which is so easily digested by the human visual cortex, is very important too, because human brains are better than computers at certain tasks. Maybe not forever, but it's true for now, which is why we use human brains to train artificial intelligence in things like computer vision.
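The rule of thumb Jeff mentions is the Nyquist criterion: to measure a vibration at f hertz you need to sample at more than 2f samples per second, otherwise the vibration shows up aliased at a wrong, lower frequency. Here's a small sketch of what that looks like for a 50 fps camera (the frequencies are just example numbers).

```python
# Sketch of the sampling rule of thumb (the Nyquist criterion): to measure a
# vibration at f Hz you need to sample at more than 2*f samples per second,
# otherwise the vibration shows up "aliased" at a wrong, lower frequency.
import numpy as np

def aliased_frequency(true_hz, fps):
    """Frequency a pure true_hz vibration appears at when sampled at fps."""
    folded = true_hz % fps
    return min(folded, fps - folded)

fps = 50
for f in (15, 30, 45):
    ok = fps > 2 * f
    print(f"{f:2d} Hz sampled at {fps} fps -> appears at "
          f"{aliased_frequency(f, fps):4.1f} Hz "
          f"({'captured faithfully' if ok else 'aliased!'})")
```

So a 15 Hz vibration is still within reach of a 50 fps camera, but a 30 Hz one would masquerade as 20 Hz.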
This is kind of funny, but when you log into a website and have to prove that you're human and not a robot, you complete a CAPTCHA, and in the old days it used to be two words that were distorted in some way, and you had to type out what the words were: an easy task for human vision, a difficult task for computers. But really only one of those words was there to test whether you're a human; the other word was something that Google was struggling with as part of its book digitization project. So here's a word that we don't recognize in a book, we'll outsource it to a human through the CAPTCHA process, and not only does that help with that word in that book, it also trains the AI to be able to recognize that type of word in the future.
By outsourcing word recognition to humans in this way, Google was able to boost its book digitization project, which is why you no longer see those CAPTCHA words. So what do you see now? Well, for a while it was numbers, if you remember, and it was a picture of a number that looked suspiciously like a blurry photo of a front door. And guess what? That was to help Google Maps identify addresses from imagery taken from Google Street View. Today's CAPTCHA task tends to involve something like clicking on all the images that contain a road sign, or clicking on all the images that contain a bus.
And guess what, that's because Google's parent company has an autonomous vehicle project, and they need human brains to help them train their artificial intelligence to recognize things like traffic signs and buses. Interestingly, those CAPTCHAs are real time, so if you're asked to identify a traffic sign, for example, it's because there's a car on the street at that moment that needs to know whether it's a traffic sign, so if you see one you have to act quickly, otherwise someone could get hurt. That last bit isn't true, by the way. It isn't. But anyway, the point is, if you can take your data and turn it into something visual, you might be able to outsource some of the processing of that data to a human brain, which can produce remarkable insights without even breaking a sweat.
And by the way, having this visual representation really helps when you're trying to persuade your superiors to spend some money. - So if you go to someone and you show them a graph and you say, man, that machine is going to fall apart, you know, that's hard, you know. But if you can show them that it's rising off the ground or it's cracking, or, you know, that's kind of an element too. - You talked about having, you have some footage of engines and things like that. - We have car engines, we have some more common, you know, everyday things that we could do. I think it would be a little more universal for people. - Jeff is understandably reluctant to talk about the finer technical details of how the technology works, but I really wanted to see if I could create motion amplified video on my own.
I found some research from MIT scientists. They call it video magnification; it's the same kind of idea. You can amplify movement in a video, and you can also amplify color change in a video. Let me show you what I mean. This is my wrist; you can see a pulse there, but look here, it has motion amplification applied. I'll put links to the research in the description. They have software you can download, they even have an online system where you can upload videos and have them processed by the algorithm, it's really cool. You can even detect a pulse using color amplification.
The color of my face changes very, very slightly with each beat of my heart. You can't see it with the naked eye, but here it is amplified (and then I blink). You can actually detect a pulse just from me standing still here. I'm trying to be as still as possible, but every time blood pumps through my neck, my head wobbles slightly; here it is amplified. There was a lot more movement in my body than just the wobble of my head due to my pulse, but the algorithm that generates the resulting video is really smart, and you can filter by frequency.
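As a rough sketch of how that color-based pulse detection might be reproduced (this is my own simplification, not the MIT code), you can average the green channel over a patch of skin, band-pass the resulting signal around plausible heart rates, and read off the dominant frequency. The file name and patch coordinates are placeholders, and it assumes OpenCV, NumPy and SciPy are installed.

```python
# Hedged sketch (not the MIT code): estimate a pulse from the tiny color change
# of skin in a video. Average the green channel over a patch of skin, band-pass
# the resulting signal around plausible heart rates, then read off the dominant
# frequency. "face.mp4" and the patch coordinates are placeholders.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

cap = cv2.VideoCapture("face.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)

trace = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    patch = frame[100:200, 150:250, 1]      # green channel of a skin patch
    trace.append(patch.mean())
cap.release()

sig = np.asarray(trace) - np.mean(trace)

# Band-pass 0.8-2.0 Hz (roughly 48-120 beats per minute).
b, a = butter(3, [0.8 / (fps / 2), 2.0 / (fps / 2)], btype="band")
filtered = filtfilt(b, a, sig)

freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
spectrum = np.abs(np.fft.rfft(filtered))
bpm = 60.0 * freqs[np.argmax(spectrum)]
print(f"estimated pulse: {bpm:.0f} beats per minute")
```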
So I can say that I'm only interested in movement that has a frequency between one hertz and one and a half hertz, because I know my pulse was around that, and it will amplify only those movements. The same goes for the factory footage, by the way: you can filter by frequency to pick out different things you're interested in. This method even works if I'm wearing a mask. I'd love to talk in detail about how the algorithms that generate these videos work, but honestly, I have a feeling I'd need to study this stuff for a long time before I could begin to describe how it works.
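For a sense of how frequency-selective amplification can work in its simplest possible form, here's a heavily simplified sketch in the spirit of the linear Eulerian approach: band-pass each pixel's time series (1 to 1.5 Hz here, matching the pulse example), scale the band up, and add it back to the frames. This is intensity-based and deliberately naive; it is not the phase-based steerable-pyramid method the MIT research describes.

```python
# Hedged, much-simplified sketch of Eulerian-style band amplification (linear,
# intensity-based; NOT the phase-based steerable-pyramid method in the MIT
# papers). Each pixel's time series is band-pass filtered (1-1.5 Hz here, to
# match the pulse example), scaled up, and added back to the original frames.
import numpy as np
from scipy.signal import butter, filtfilt

def amplify_band(frames, fps, lo_hz=1.0, hi_hz=1.5, gain=25.0):
    """frames: float array of shape (time, height, width), values in [0, 1]."""
    nyq = fps / 2.0
    b, a = butter(2, [lo_hz / nyq, hi_hz / nyq], btype="band")
    # Filter along the time axis for every pixel at once.
    band = filtfilt(b, a, frames, axis=0)
    return np.clip(frames + gain * band, 0.0, 1.0)
```

Changing lo_hz and hi_hz is exactly that "filter by frequency" step: the same footage can be reprocessed to emphasize whatever vibration band you care about.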
Here's a phrase picked from the research: it has to do with the phase variation of a complex steerable pyramid. What does that mean? Well, I'm going to declare that beyond the scope of this video. You may already be imagining medical applications for this, like taking someone's pulse non-invasively, without having any physical contact with them. If you've ever taken care of very, very young children, you've probably watched them sleep, and you've probably started to worry at some point: have they stopped breathing? So you poke them and they move and you think, oh okay, phew, they were breathing. I mean, now they're awake and crying, but at least I know they're breathing.
Imagine if, instead, you could point a camera at them and amplify the up and down movement of their chest and know for sure that they're breathing. I'll put a link in the card and on the end screen to a Veritasium video that's somewhat related: it shows how you can extract audio information from a video by observing the subtle movement of objects in the video that have sound energy impinging on them. It's really nice. Thanks again to Jeff Hay at RDI Technologies for all the motion amplification material. Details about his company are in the description.
Segue: I can snowboard. Maybe that's not that interesting, but what is interesting is the way I learned. I basically taught myself, with a little help from a couple of people who were with me who already knew how to snowboard. The thing with that approach is, what I've learned over the years is that I have some habits that are now very ingrained, and if I want to improve at this point, I really need to unlearn those bad habits, which is a total pain. And that was my approach to life back then in general: if there's something you need to learn, just muddle through, figure it out. But I'm increasingly of the opinion that that's a false economy in many situations. It's often better to invest up front, spend some time on a proper course or, you know, take a class, and that way all the time you invest afterwards is really quality time. Like, you're building on a framework of good practices that will really accelerate your learning and improvement.
That's why I'm a real advocate for online video learning today, like the sponsor of this video, Skillshare. Skillshare has courses in things like web design, photography, interior design, animation and illustration. I recently took the interior design course on designing your space, because we're remodeling our kitchen. Generally speaking, the categories are creative things, artistic things, but there are also business-related things, like leadership or productivity; there's really a huge spectrum of stuff there. And, you know, maybe you find yourself with a little more time on your hands in the current situation; maybe it's time for some well-directed self-improvement, or maybe you've been tinkering with an idea and it's time to kick-start it with a proper chunk of formal learning. Whatever your story is, I recommend you take a look at Skillshare, because you might get inspired to start something new.
Everything is only $10 a month, you get access to absolutely everything for that, and since they're sponsoring this video, the first 500 people to visit skl.sh/stevemould8 will get two months absolutely free, no strings attached, so check out Skillshare today. I hope you enjoyed this video. If you did, don't forget to subscribe, and I'll see you next time. (upbeat music)
