Before you watch this video, to make sure you're not a robot, please look at these pictures and identify the street signs, the cars, the fire hydrants, the shop fronts, the numbers on the front doors, and the ever-growing sense of frustration.

In the late 90s the most popular search engine was AltaVista, and they had a spam problem. People were writing automated scripts to send in spam and malicious links to their finely crafted database of the web, and they needed to add a test that only humans could pass. Their solution was to add a question to the submission form that, back then, only a human could answer. In their case it was to identify a warped string of letters and numbers. Image processing wasn't at the point where a computer could easily identify the squiggly letters, but it was trivial for a human. Or at least, trivial for a human with good eyesight. Accessible versions didn't come along for a while. But that was one of the first public versions of what became known as a CAPTCHA: a Completely Automated Public Turing Test to Tell Computers and Humans Apart.

A version called reCAPTCHA came along a few years later: it was used to help scan old books and newspapers. When a CAPTCHA was needed, the team would send one scanned word that they knew was right, deliberately distorted, and another word that their scanning systems weren't sure about. The user would have to type in both. The known word was there to check it was a human answering, so they had to get that right; the unknown word – after maybe a dozen people had agreed on what it was – would be logged as part of the book scan. Google ended up buying reCAPTCHA.

But by then, the arms race was well underway. The bot makers looked at reCAPTCHA as a challenge, and they rose to it admirably. First, they could train computers to read those messed-up words. This was before the recent breakthroughs in machine learning and artificial intelligence, but even a fairly rudimentary system could solve reCAPTCHA well enough, some of the time, to let bot-makers create fake accounts and send spam. If the test was still too difficult, though, they could just pay humans. The bot makers set up systems where automated bots would fill in all the details, ready to send spam, and then, when the CAPTCHA appeared, the bots would show it to human operators, hired from countries where the average income is low. Those humans got paid to sit there and solve CAPTCHA after CAPTCHA after CAPTCHA. After all, a CAPTCHA only tests that there's a human in the loop somewhere. You could even outsource your CAPTCHA-solving needs to any one of dozens of companies who all competed on price and accuracy. Actually, you still can. Or you could get unsuspecting members of the public to solve CAPTCHAs for you. You could set up a web site with, er, some images that some people might want to see(!), and before they could visit, they'd have to prove they were human by solving a CAPTCHA, which was copied straight from whatever site your bots were trying to get into.

So then Google released reCAPTCHA version 2, which is where you're presented with a single check box that you have to click on to prove you are not a robot. And clearly, any bots presented with that box would be honest and not click it. It's not really about clicking the box. When you complete one of these new CAPTCHAs, extra data is sent, and Google is very cagey about what that data is, because everything they reveal is a clue for the people trying to break it.
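Whatever signals the widget collects, the site that embeds it never sees them: all it gets back is an opaque token, which its own server forwards to Google's siteverify endpoint for a verdict. Here's a minimal sketch of that server-side check, assuming a Python backend with the requests library; the secret key is a placeholder:

```python
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "your-secret-key-here"  # placeholder: issued when the site is registered

def is_human(recaptcha_token, client_ip=None):
    """Forward the token produced by the checkbox widget to Google and read the verdict."""
    payload = {"secret": SECRET_KEY, "response": recaptcha_token}
    if client_ip:
        payload["remoteip"] = client_ip
    result = requests.post(VERIFY_URL, data=payload, timeout=5).json()
    # The JSON reply contains "success" plus details like "hostname" and "challenge_ts".
    return result.get("success", False)
```

What goes into Google's side of that yes-or-no answer is exactly the part they won't talk about.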
But that box is loaded into your browser from google.com, which means it can look at any login cookies that Google already have on your browser. Certainly, if you clear your cookies, you are way more likely to get that secondary check that asks you to identify buses or fire hydrants. And maybe it checks how your mouse moves in the moments before clicking the box? And the exact position and length of time your finger tapped the phone screen? Plus a bunch of other things, all of which Google feeds into their giant machine-learning system. The only people that know for sure are the designers, and they aren't telling.

The CAPTCHA-solving services, of course, are already offering a cost per thousand to solve these. It may be harder, but it's not unbreakable. Using machine learning, bots can be trained to pass those secondary checks themselves and to hide as humans; identifying the correct sections of the presented images is something that you can throw cloud machine learning at. And given that Google Cloud sells machine learning systems, it's very likely that some of their servers are creating CAPTCHAs, and others are trying to break them. And, even then, you can just have humans on standby instead.

So at the end of 2018, Google released reCAPTCHA version 3. And you might have already passed, or failed, one of those without knowing it. There's no box to tick, no puzzles to solve: when you browse round a site, version 3 works in the background and watches what you do. By the time you're posting a comment or signing up, it's already assigned you a score based on how likely you are to be human. And again, Google is being very careful about saying how they're working that out, but the answer is very likely “it's a machine learning system they're throwing everything into, and they don't know how it works either”. Hopefully they're taking account of incognito mode, and accessibility tools. Because if you get a low score, maybe your comment will get sent to a moderator to check… or maybe it'll just disappear into nothing and you'll never know. (There's a sketch of that server-side decision after the next paragraph.) The bot makers, of course, are already working on the challenge.

I signed up for a new bank account online a few months ago, and I had to send in a photo of my ID and a video of me holding that ID and waving. That's checking not just that I'm a human, but that I'm the one specific human I claim to be. Bots are becoming more and more indistinguishable from humans. Successful CAPTCHA methods are having to be more and more intrusive. The arms race continues, as it has done for twenty years or more.
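That score is all a site gets back from version 3: the verification response includes a number between 0.0 and 1.0, and it's up to the site to pick a threshold and decide what a low score means: hold the comment for a moderator, or drop it. Here's a minimal sketch of that decision, assuming a hypothetical comment-posting handler; the threshold and helper names are illustrative, not anything Google prescribes:

```python
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "your-secret-key-here"   # placeholder: issued when the site is registered
HUMAN_THRESHOLD = 0.5                 # illustrative cut-off; every site tunes its own

def publish(comment_text):
    print("published:", comment_text)              # stand-in for actually posting it

def queue_for_moderation(comment_text):
    print("held for a moderator:", comment_text)   # stand-in for a review queue

def handle_comment(comment_text, recaptcha_token):
    """Route a comment based on the reCAPTCHA v3 verdict and score."""
    result = requests.post(
        VERIFY_URL,
        data={"secret": SECRET_KEY, "response": recaptcha_token},
        timeout=5,
    ).json()
    if not result.get("success"):
        return "rejected"                          # token invalid, expired, or reused
    if result.get("score", 0.0) >= HUMAN_THRESHOLD:
        publish(comment_text)
        return "published"
    queue_for_moderation(comment_text)             # or silently drop it: the site's call
    return "held"
```

Whether a low score goes to a human moderator or just quietly vanishes is entirely down to how each site wires up that last branch.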
I'm going to do something very strange for a sponsorship segment now: I'm going to tell you about limits. I've said “you should use a password manager”, and it's true. I use one, and I recommend you do too. Which is why this video is sponsored by Dashlane, the password manager. If you go to dashlane.com/tomscott, you can get a free 30-day trial of Dashlane Premium, and I recommend you try it out for yourself.

But there are some things that a password manager cannot help with. If you are worried that a major government is trying to steal your secrets, or industrial spies with millions of dollars of funding are targeting you specifically: well, you probably already have professional advice. After all, if someone actually gets to your computer's hardware and installs stuff that pulls things right out of the physical memory… well, at some point, even a password manager has to decrypt the password, put it in memory, and pass it on to wherever you're actually trying to log in to. If an attacker has physical access to the hardware, it's over. And of course, the weak link in almost every security system is human. All the security in the world cannot protect you from someone threatening you with violence unless you type in your password. And in the UK, if the police have a court order requiring you to give them your password, it is a criminal offence to refuse. People have been jailed for it. Dashlane cannot help you with that.

But using Dashlane means that every password is different, so when some online service that you signed up to years ago and forgot about gets breached, it's no big deal. Actually, Dashlane will also warn you about data breaches, with instant alerts for websites where you have accounts. Using Dashlane also means that every password is long, complicated and secure – but you don't have to try and type them in on your phone, because Dashlane will autofill them for you everywhere you go. It can even auto-change passwords for you on a lot of sites, with the click of a button. In short, using something like Dashlane means that passwords stop being this worrying pain in the ass and start being something that's really easy to deal with.

I thought long and hard before accepting sponsorship for this series, but honestly: if you are techie enough to watch these videos, you should use a password manager. So: dashlane.com/tomscott for a free 30-day trial of Dashlane Premium, and if you like it, you can use the code “tomscott” for 10% off at purchase.
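As a footnote to that “long, complicated and secure” point: roughly speaking, a password generator is just drawing a fresh random string for every site. Here's a minimal sketch of that idea using Python's standard secrets module; the site names are made up, and this is the general principle, not Dashlane's actual implementation:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    """Return a random password drawn character-by-character from a mixed alphabet."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One fresh password per site, so a breach at one service exposes nothing else.
for site in ("example-shop.com", "example-forum.net"):
    print(site, generate_password())
```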