Let's say if we have this many of something, we'll call it "one" and represent it with this symbol. If we have this many, we'll call it "two" and use this symbol. If there's a certain amount of something and that amount is… none, we'll call it "zero" and use this symbol. This many, call it "three". Duh duh duh duh duh. If there's this many, we'll call it "ten". And we're all out of symbols, so we'll just start reusing symbols. And so on. This is a way we can represent quantities with words and symbols.

And we represent lots of properties with words. Like redness. Things can have shapes. Things can be cold. Scattered or patterned. They can be wooden or wet. Even though two things are shaped differently and are made of different materials, we may still call them by the same name because of other characteristics. Things can have movements or behaviours. And we can have descriptions that only come up when we're comparing or looking at multiple things. Any characteristic, property, or concept that we can think about something, we always also have a word for it. …I think. That might not be right. Try to think of something that exists in a way that you can't describe with words. If you're not talking... we'll know that idea was wrong.

When light bounces off an object, the light can be directed by a lens to form an image. A lens in the eye does this and creates an image at the back of the eye, where we have an array of neurons that detect the light and send the information in the image on to the brain. And it's a similar story for your other sensory cells that pick up other stuff. From there your brain tries to make sense of the signals, classifying concepts and trying to build a model of what it's observing. I don't know how that works exactly, how neurons connecting to neurons becomes conscious concepts like "white duck jumping". Should probably ask, like, a brainologist. But for now, let's say our goal is that we want the concepts or pictures we build in our mind to be the same as the world outside. We want to accurately "recreate" the universe (or at least a part of it) in our brains. We don't want to be wrong. How do we do it?

Well, observations are the start. If we want to know what the world is like, looking directly at it is really going to give us the best idea. But at the same time there are a lot of signals that our sensory cells and brains have trouble with. Like certain wavelengths of light and sound. Things that are too small. Or too far away. Or too fast. It can be hard to see an individual part if there's too much stuff going on in the background. If there's too much noise. We can have trouble processing things even when they're right in front of us, like a face with the eyes and mouth upside down. Goes from "happy birthday, Mr. President" to "my sister and mother are the same person". The haphazard way our brains and senses evolved to be wired didn't give us perfect accuracy, perception, or memory.

But we can use tools to detect things we can't detect ourselves. Microscopes for things that are too small, telescopes for things that are too far away. A tool that detects bits of radiation and plays a fun noise. And instead of trying to remember and communicate properties by feel, we can have this thing we call a centimeter. Just count how many centimeters are the same length as this other thing. And we can use whatever standard of comparison we need. And we can try to always go slowly and systematically.
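That "we're all out of symbols, so we start reusing them" idea is just ordinary base-ten positional notation, written out as counting. A minimal sketch in Python (the function name write_quantity is made up for illustration, not something from the video):

```python
# The ten symbols we allow ourselves.
SYMBOLS = "0123456789"

def write_quantity(count: int) -> str:
    """Spell out a quantity using only the ten symbols, reusing them by position."""
    if count == 0:
        return SYMBOLS[0]
    digits = []
    while count > 0:
        digits.append(SYMBOLS[count % 10])  # which symbol goes in this position
        count //= 10                        # move on to the next position
    return "".join(reversed(digits))

print(write_quantity(9))    # 9   -> the last unused symbol
print(write_quantity(10))   # 10  -> out of symbols, so "1" and "0" get reused
print(write_quantity(253))  # 253
```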
But some people, after the neurons in a part of their brain die, may no longer perceive faces. They may not be able to recognize their friends and family, celebrities, or themselves. They can still see eyes and noses and mouths, and describe their layout. The signal about the light is still coming in. But the brain no longer classifies this arrangement we call a face. In the end there may be a lot of things like this… useful classifications about the universe that our brains just can't conceptualize. But the point is: because those signals coming into the brain from the sensory cells are the only way information can get into the brain, observations are the only way of really knowing what the universe is like.

But we can also come to new ideas by playing around with old ones. For example, we call this many six, and this many four. But we can also frame them together. Then what do we get? Well, we already have a word for that amount; we call that ten. Split it in half, how do we describe that? Make a sort of square array out of it, can we describe that quantity? OK, "five times five equals twenty-five" isn't an "observation". It's more a play on our "definitions". It's a statement that, according to our naming scheme, "five times five" and "twenty-five" refer to the same thing. Or let's say we notice that a parallelogram always has the same area as a square, as long as the parallelogram has the same base length and height. Then we can look at the squares and play around. Go bushoomp, bushoomp, bushoomp... OK, taking what we knew (about the parallelogram and the square) we can come to a new useful model without having "observed" it first. I don't think this is how they actually found this equation, but they could have.

We can do something like: everything that we call a Flaggle is blue, and everything that we call a Beener is a Flaggle. If that's the information that we know, then we should also be able to know that every Beener is blue. Even when they (Flaggle and Beener) are nonsense words, the new ideas we come to make sense. Because it's creating a world in our mind and seeing what we would absolutely have to observe in that world, because of the rules that we set. So if we build our rules and definitions and ideas based on observations, we can form new ideas and models that actually describe and match the real world. OK, this is deduction.

But we can't always describe the world with this much certainty. For example, if we start rolling this die with our eyes closed or something, there's no way we could ever know, ahead of time, what face will turn up when we stop rolling it. What do we know? There are six sides, so only six possibilities for what will be turned up. It's perfectly cubed, perfectly balanced. Considering these factors, we don't have any reason to think one of the sides is going to turn up more or less than the others. Let's represent the likelihood of each event occurring as a certain proportion of all the possibilities. Added together they will equal 100%. In this case we think each one has an equal probability of occurring, so they're each just one sixth of one hundred percent. So we might say rolling a 3 has a probability of 0.167. Or, of all the things that could happen, rolling a 3 is 16.7% of those possibilities. Or we would expect to see a 3 about one sixth of the time. This is all we can do: we don't know what's going to happen, so we describe the possibilities. What are the odds of rolling a total of 3 when rolling two dice?
Each die has six possibilities, their outcomes are independent of one another, and we can get any combination between them, each combination having an equal probability of occurring. These are all the possible, mutually exclusive dice rolls. So, we've got these two ways of rolling a total of 3, out of 36 possible rolls. There is a 5.6% chance of rolling a 3.

Anyways. We've got observation and deduction to form an idea about the world. And they're good, you know, they're pretty good. But this other way we can form an idea is by guessing. Just imagine the way the world is, with all its possibilities. Great! Why would we do that? I mean, there are a lot of possibilities for things that could be in this mystery box, and only one of those possibilities actually is in there. So why don't we just look? If we just open it and look, we can verify the thing that's in there, and falsify all the other possibilities. It's because sometimes it's useful not to wait for an observation. Hear thunder? The last time we heard thunder it rained. So we guess: maybe thunder always comes before rain, and it's going to rain. We better put away any horse meat we don't want to get wet. We just saw Frank eat these mushrooms, and now he's bleeding out the eyes. So we guess: these mushrooms must cause bleeding out the eyes. Let's stay away from them. We do it because it seems faster and safer.

Our guesses aren't always accurate; in fact you could say they're often not accurate. For example, the idea that the stars and the sun circle the Earth while the Earth remains stationary (a.k.a. the geocentric model). Sure, it looks like they're swirling around us and it doesn't feel like we're "moving". But at the time, I think a lot of people were very opposed to other ways of modelling the system. Or the idea that you can sweat out toxins. I don't know the observation that led to this idea, maybe that you smell after eating certain foods. But no matter the substance, cyanide, sugar, or water, you can take a certain amount and it's not going to hurt you. It's when you take too much that it starts to cause damage. If we define "toxin" as a substance that hurts you, then "toxin" isn't a class of chemical. A toxin is any substance you have too much of. So "detoxification" through sweat would sort of require the body to recognize when there's too much of something and have it leak out with the sweat. But unlike urine and stool, sweat doesn't have a lot of these substances in it. It's almost entirely water. And there are no specialized cells at the skin sorting chemicals or making bodily wastes easier to excrete through the skin. Skin cells function mostly as a barrier. How about the idea that there's this God named Thor behind those loud lights in the sky? So scary. We better sacrifice another horse. Frank? Frank eats everything he sees. It may very well have been something else he ate when we weren't watching.

OK, guessing is fine. It's us wondering about the world. It's not the problem. The problem is assuming that we know the truth. Not recognizing that we're guessing. We'll often take the first idea that pops into our head and treat it as though it were true. Or treat an idea as true because someone told it to us. We all tend to do it. We all have trouble saying "I don't know". Me, I'm no exception. I'm pretty sure I'm wrong more than I'm right. I'm probably wrong in this video. But let's see what we can do.
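Since describing the possibilities here is just counting, those numbers can be checked by brute force. A minimal sketch in Python (not from the video; it just enumerates the 36 equally likely rolls):

```python
from itertools import product

# All 36 equally likely, mutually exclusive rolls of two dice.
rolls = list(product(range(1, 7), repeat=2))

# One die: each face is 1 of 6 possibilities.
print(1 / 6)  # ~0.167, the 16.7% chance of rolling a 3 on a single die

# Two dice totalling 3: only the rolls (1, 2) and (2, 1).
total_three = [r for r in rolls if sum(r) == 3]
print(len(total_three), "of", len(rolls))   # 2 of 36
print(len(total_three) / len(rolls))        # ~0.056, the 5.6% chance
```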
Let's call a guess or hypothetical idea about the world a hypothesis. It will either match the world outside, or not match the world outside. For us to know whether an idea is true, for us to verify it, we have to observe it directly out there. Turn the made-up idea into an observation. For us to know that it's false, to falsify it, we have to distinctly see the real world being inconsistent with the hypothesis. But just because we can imagine something doesn't mean we can see it.

Some ideas are unverifiable. For example, the idea that "every time we drop this pen, it will fall". It's falsifiable: if we see the pen float or go up or something, just one time, we'll see that no, the pen doesn't always fall, and the hypothesis is wrong. But it's not verifiable. That is, there's nothing we can see that will let us know that this idea is true. Even if every time we've ever seen the pen dropped, it fell. The hypothesis wasn't "every time we've seen it"; the hypothesis was "every time". "Every time" will always include the next time, in the future, where we can't observe it. So this specific idea will never be able to become an observation in our mind. Seems like stupid, overly strict semantics. But the point of it is that we never want to confuse the feeling that we're right with making the observations to actually know something. If the hypothesis was different, say "every time we drop this pen in this room today, it will fall", the boundaries of the idea have been set and we can see within those boundaries. But a universal idea about the way the world is has no boundaries, and we can never see it entirely. But at the same time, the pen falling seems to be very consistent, and we've never seen something else happen. Maybe we treat it as though it were true, since it's so universally predictive. But we remember, in the back of our mind, that observations are the way we know stuff, and we can't literally see all of this idea.

Along these same lines, an idea can be unfalsifiable. For example, the idea that a squirrel that looks exactly like this exists somewhere. Somewhere on Earth, let's say. It's verifiable. What would we have to see to know it? We'd just have to see the squirrel. We'll know it exists. But it's hard to falsify. We would have to see every inch of the planet, simultaneously in case it moves around, and see no squirrel in all those places, to be able to have observed the absence of the animal. Which, let's say, is possible. Although maybe this is a bad example. If we've never seen one, and we've never seen any signs of it, and we know that animals almost never have two tails... you know, the squirrel is mostly just a made-up idea. We can talk about the low probability of its existence and ignore the idea until there's some sort of observational basis for it… and we probably should. But it's just, this isn't entirely falsifying the idea. We haven't truly observed the squirrel's absence. Unfalsifiable ideas can be tricky. Even if an idea is unverifiable, if it's falsifiable you can at least eliminate stuff as you make observations, and the hypotheses that are left are maybe left because they're true. Maybe. With unfalsifiable ideas we can't eliminate stuff, and the idea can stick around with little to no observational basis. An idea isn't automatically right or automatically wrong just because we can't see it. Saying it's unverifiable or unfalsifiable is about the disconnect between being able to imagine something and being able to observe it. OK, some ideas are both unverifiable and unfalsifiable.
There's nothing we can see to know that they're true, and there's nothing we can see to know that they're false. For example, since your experience of the world is all controlled by your brain, it's possible your brain is really attached by wires to a computer or something, and all the reality you perceive is fake. Verifiable? Nope. You could even wake up in your vat, wires coming out of your nips, but that could still just be a part of the simulation. Falsifiable? Nope. If it's not true, everything would look exactly the same. OK, it's like a hypothesis about that squirrel we had no observational basis for, except we think the squirrel is also invisible. It's like a hypothesis about a God with supreme power who could manipulate the world and our lives, but who only exerts its will in mysterious ways that are indistinguishable from regular ways. Or a hypothesis that the universe popped into existence 10 seconds ago, and the only reason we didn't notice was because all the atoms and light and our neurons and memory and everything came to be in the exact shape and position they are now. It also may have happened a year ago, or six thousand years ago. Again, not automatically right or wrong, it's just that we can't see it entirely. Our bodies may be being harvested for energy while we're kept subservient within a simulation. But at least there's pie. To summarize so far: we're wrong a lot… and learning is real hard.

OK. Let's say there's this new disease where you get a big lump. But people have been saying that eating carrots can make the lumps smaller, and we want to know if that's true or not true. Basically we've got two incompatible hypotheses and we want to know which one matches the world. You know, if we're in a world where carrots do shrink lumps, how would that world look? What observations could we expect to make? The problem is that these are hard to verify or falsify even on an individual level, because of the noise. Kind of like with Frank. Just because we observed someone eating carrots, and then observed some change in their lumps, doesn't mean the carrots are causing it. It could be something else they're eating, or something else going on in their life, that's causing this. Even comparing them against somebody who didn't eat carrots might not be much more help. Because again, we don't know what else is important, and if carrots only had a small effect it may be lost among the noise.

So what else? Well, we don't know how important the other factors are. But maybe if we sample lots and lots of people and put them into two different groups, only feeding carrots to one of the groups, maybe all this other stuff will average out? And then if we see a difference in the average lump size between the groups, maybe we can attribute it to the main difference between them: the carrots. It would be nice if we recorded or surveyed these other factors, so we can check to see if there's a relationship between this other stuff and lump size. And check to see if there are any interactions. You know, maybe carrots only shrink lumps when the person also eats broccoli. Or something. Or better yet, have both groups eat the same things, have the same lifestyles… and be genetically identical clones, so that we can be very sure that any changes we see are from the carrots. Although that could be really hard. OK: two groups, measure lump sizes before, measure again after some amount of time, feeding carrots to one group the whole time but not the other.
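A minimal sketch in Python of that random-assignment step (my own illustration, not the actual study; the participant names are made up, and the group of 14 just matches the sample size mentioned later). Shuffle the participants, then split them into a carrot group and a control group:

```python
import random

# Hypothetical participants; the names are invented for illustration.
participants = ["Frank", "Ada", "Bo", "Cleo", "Dev", "Eli", "Fay",
                "Gus", "Hana", "Ivo", "Jun", "Kit", "Lena", "Mo"]

random.shuffle(participants)            # randomize so the other stuff averages out
half = len(participants) // 2
carrot_group = participants[:half]      # these people eat carrots the whole time
control_group = participants[half:]     # these people don't

print("Carrots:", carrot_group)
print("Control:", control_group)
# Measure everyone's lumps now, wait some amount of time,
# then measure again and compare the average changes between the groups.
```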
Let's say these are how much each person's lumps have changed in size over the course of the experiment. And these are the average lump size changes of the groups. On average, the carrot eaters' lumps shrank more or grew by less. So carrots shrink the lumps? Carrots can help? Maybe… But it could also be the noise from all the other stuff. Maybe carrots did nothing, and all these people's lumps were going to shrink and grow for other reasons, and it was just the random way we put them into groups that created these results. What we can do to investigate that is to look at every possible combination we could have made with these people, assuming nothing we did mattered. Kind of like what we did with the dice. What we're looking for is the probability of seeing what we saw, assuming carrots do nothing.

It's like if we rolled ten apparently normal dice, and we rolled a total of 10: we rolled all 1s. What are the odds of that happening? Pretty low. There's only one way of rolling a total of 10, and with ten dice there are about 60 million combinations we could have got. If we were testing the hypothesis that these are normal dice, you would expect to see rolls in the 30 to 40 range. There are loads and loads of ways of rolling those. Rolling all 1s is so unlikely that the hypothesis that these are normal dice is probably wrong. Maybe these are loaded dice. Or maybe the dice only have 1s on them, or maybe somebody used a camera trick.

So, same over here. This is the range of possibilities for the differences between the groups, assuming carrots weren't a factor. This would be if we had randomly placed people into the groups like this. And over here, if we had randomly put people into the groups like this. And everything in between. And this is how likely each difference was of being rolled. If we see a difference between the groups somewhere over in the middle, it looks like it was just random. But if the difference between the groups was somewhere over in this region, where under random conditions there's only a 5% chance of seeing it, or better yet a 1% chance of seeing it, then maybe it wasn't random. Chances are good that there's actually something going on. Where does the experiment we did fall on here? Here. If carrots did nothing and this was just random, then we shouldn't be that surprised to see the results that we saw in our study. The hypothesis that it was random is relatively likely. It's not that this difference between the groups was so small that it's just not important. It's about all this variance we're seeing. If our results, instead of looking like this, looked like this: same average difference, but what are the odds of randomly putting everybody whose lumps shrank into one group, and everybody whose lumps grew into the other? Very low. It's like rolling all 1s; carrots would almost certainly be an important factor. Actually, they'd look like the only factor. But in our experiment, is it just the other stuff affecting lump size, or do carrots have some small effect getting drowned out by the noise? We can't tell for sure from the observations we made. And unfortunately we don't know which hypothesis is true. But there's not a strong reason to think that carrots shrink lumps. Lumps seem to shrink and grow more for other reasons. Or maybe not. 14 people isn't really enough to get a good result.

That was a lot of work. Definitely too much work. Can we just trust what other people say about the world? Like if the all-time greatest physicist says the universe was born some 14 billion years ago after something called the Big Bang.
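What's being described here is essentially a permutation (randomization) test: regroup the same people at random, over and over, and ask how often chance alone produces a difference at least as big as the one we saw. A minimal sketch in Python (the lump-change numbers are invented just to show the mechanics, not real results):

```python
import random

# Hypothetical lump-size changes over the experiment (negative = shrank) -- invented numbers.
carrot  = [-0.4, -0.1, -0.6, 0.2, -0.3, -0.5, 0.1]   # carrot group
control = [ 0.3, -0.2,  0.4, -0.1, 0.5,  0.0, 0.2]   # control group

observed = sum(carrot) / len(carrot) - sum(control) / len(control)

# Assume carrots did nothing: shuffle everyone into two random groups many times
# and count how often random grouping alone gives a difference at least this extreme.
everyone = carrot + control
more_extreme = 0
trials = 100_000
for _ in range(trials):
    random.shuffle(everyone)
    fake_carrot = everyone[:len(carrot)]
    fake_control = everyone[len(carrot):]
    diff = sum(fake_carrot) / len(fake_carrot) - sum(fake_control) / len(fake_control)
    if abs(diff) >= abs(observed):
        more_extreme += 1

print("observed difference in average change:", round(observed, 3))
print("chance of a difference this big from random grouping:", more_extreme / trials)

# For comparison: rolling all 1s with ten dice has probability 1 / 6**10,
# i.e. 1 in 60,466,176 -- the roughly "60 million combinations" in the transcript.
print(6 ** 10)
```

If that printed chance is well above 5%, the "it was just the random grouping" hypothesis stays plausible, which is the situation the transcript describes.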
Can we just go with that? Maybe… But not if that's all they tell us. We still need to hear the observations behind the idea. A hypothesis doesn't suddenly become true because someone really smart says it. We still need to go through it in our heads as well, even if it's just communicated to us. And if we think they've made a mistake, or lied in such a way that we could never find out by looking at their work (you know, they recorded an observation wrong or something), then we would want to see some other people able to make the same observations. And I've never heard of someone who was always right, so even if we trust them we should be double-checking.

It can be hard and slow, and full of uncertainty. But this is the process of learning new things, the process of science. Maybe not a proper definition, but it's about building ideas from good and thorough observations, acknowledging what we have and haven't seen, and what we can and can't see, to try to get what we build in our mind to match the universe we observe. Our ideas are always changing, even the tools and equations we use to build ideas might change, entire areas of study might end up being wrong. But the goal is the same.