  • [MUSIC PLAYING]

  • MATT GAUNT: So you do testing for all your projects, right?

  • ADDY OSMANI: I usually create a new issue, assign it to you.

  • And then magically, overnight, my tests get written.

  • It's the most wonderful thing.

  • MATT GAUNT: Mm-hm.

  • Yeah, for you.

  • So we did a previous episode on unit testing

  • with things like Mocha.

  • And we looked at a whole host of other ones.

  • What do you think of when someone

  • says unit testing to you?

  • ADDY OSMANI: Oh, there's like in-browser testing, CI testing.

  • There's usability testing.

  • There's an entire minefield.

  • MATT GAUNT: Yeah, so this is, I think, the biggest issue that I

  • have with a lot of this stuff.

  • When I first got into unit testing, it was like Mocha.

  • And it was in the terminal.

  • And you get used to this thing of like, this test passed, this

  • failed.

  • And that was all cool, but it's JavaScript.

  • It's not in the browser.

  • You don't have the browser API, so it was a bit-- I mean,

  • I got confused.

  • I was like, well, OK, do I have to edit my code

  • to be separate so it can run in Node, even though it's going

  • to run in a browser, or what?

  • That's been cleared up.

  • Because obviously, with Mocha, you

  • can run in the browser, which makes a bit more sense

  • if you're building a browser library

  • and you need those APIs.

  • But there was always this weird tension.

  • Because a lot of the work I've been doing super recently,

  • I actually got to a point where, in Node, I was mocking out

  • a lot of the browser APIs.

  • It allows you to make the tests super fast and super reliable,

  • because basically, nothing was real.

  • Everything was fake and a facade.

  • ADDY OSMANI: So you don't have access to a window.

  • You don't have access to any of those APIs, or these things.

  • MATT GAUNT: Yes.

  • So you end up doing one of two things.

  • Either you run it inside the browser with the Mocha stuff,

  • and it will actually use those APIs, which, for the most part,

  • I find is good, because if there's any browser

  • differences, you get them.

  • Or you mock all of them out, which

  • means it's completely fake.

  • But it's normally super fast, because it's not actually

  • doing any work.

  • You normally go, in this case, pass.

  • In this case, throw an error.
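
For the curious, a minimal sketch of that "mock everything out" approach in Node might look like the following. The fake `caches` object is purely hypothetical; it just stands in for whatever browser API the code under test touches.

```js
// Sketch: faking a browser API in Node so unit tests stay fast and deterministic.
// The 'caches' fake below is hypothetical, not a real polyfill.
const assert = require('assert');

global.caches = {
  open: (name) => {
    if (!name) {
      // "In this case, throw an error."
      return Promise.reject(new Error('A cache name is required'));
    }
    // "In this case, pass." Hand back a canned fake cache object.
    return Promise.resolve({put: () => Promise.resolve()});
  },
};

// Code under test can now call caches.open() as if it were in a browser,
// but nothing real happens, so the test does almost no work.
caches.open('demo-cache').then((cache) => assert.ok(cache));
```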

  • But it still kind of always confused me.

  • Like OK, so I have these two things,

  • but it doesn't feel like this is good on an ongoing basis,

  • because if I don't run the tests, nothing happens.

  • So that's when we get into the Selenium world.

  • And it gets super interesting, slash, weird,

  • depending on how you do it.

  • So for anyone who doesn't know, Selenium WebDriver is

  • a way of launching a browser and controlling it,

  • is the most generic way of putting it.

  • They have a ton of different versions,

  • but this is the JavaScript Node version

  • I've come to know and love.

  • Let's look at super basic, bare bones, Selenium script.

  • And this is it.

  • So this is a Node version.

  • Selenium has a ton of different libraries, which

  • also doesn't help, because documentation-- just so much

  • stuff.

  • But in the most bare bones thing,

  • you create what's known as a WebDriver builder.

  • And at that point, you can basically

  • say, OK, with the builder, I want it to be a Chrome browser,

  • or I want to do this.

  • And then you can bolt on additional things.

  • You'd say, build me a driver.

  • And from that point, that is basically--

  • you can kind of think of it as like an instance of a browser.

  • So that's what you've got here.

  • At that point, you can just start using it.

  • So if we call get, in this case I'm

  • going to launch my Mocha test, because we want

  • to automate all of that stuff.

  • We don't want to worry about it.

  • And then, we are going to say, wait until something happens.

  • In this case, we're going to say,

  • wait until the Mocha results object becomes available

  • on the window.

  • And then, get the Mocha results and do

  • something useful with them.

  • So this is like a really basic Selenium thing,

  • but it's already super helpful.
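
For reference, a bare-bones version of the script being described might look roughly like this. The test page URL and the window.mochaResults global (assumed to be set by the page once the in-page Mocha run finishes) are illustrative assumptions, not details taken from the video.

```js
const webdriver = require('selenium-webdriver');

async function run() {
  // Build a driver -- think of it as an instance of a browser.
  const driver = new webdriver.Builder()
    .forBrowser('chrome')
    .build();

  // Load the page that runs the Mocha tests (URL is hypothetical).
  await driver.get('http://localhost:8080/test/');

  // Wait until the in-page test run has published its results
  // (assumes the page assigns them to window.mochaResults when done).
  await driver.wait(() => {
    return driver.executeScript('return typeof window.mochaResults !== "undefined";');
  }, 60 * 1000);

  // Pull the results back into Node and do something useful with them.
  const results = await driver.executeScript('return window.mochaResults;');
  console.log(results);

  await driver.quit();
}

run();
```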

  • So if you're going to run this-- I'm just going to run it with

  • Node, ~/test/selenium.

  • You're just going to nod along, while I'm just

  • talking to myself.

  • ADDY OSMANI: I'm just going to nod along,

  • assume you know what you're doing.

  • MATT GAUNT: Yeah, that's the safest way of doing it.

  • So all it's doing-- you can kind of see it-- is Chrome pops off,

  • and then it closes.

  • And you briefly see the Mocha test running.

  • This is super basic, but it's super nice,

  • because it means that you could put this-- because it's just

  • a Node script, you can put it in npm run,

  • or you could add it to Travis.

  • You can do whatever you want.

  • It's super nice and clean.

  • The problem I always had with this approach

  • is, it gets super manual really quick when

  • you run Chrome and Firefox.

  • So I created this thing called selenium-assistant.

  • And this entire thing was like, just tell me

  • what browsers are available, and then just give me

  • all of those in a WebDriver thingy, and then run it.

  • Yeah.

  • So it's kind of weird, because you get available browsers.

  • And it can give you stable, beta, and unstable.

  • And we've done so much stuff with service workers and brand

  • new APIs that you wanted to test on all those ones,

  • because you knew breaking changes were coming in.

  • ADDY OSMANI: Oh, yeah.

  • It's almost impossible to manually stay

  • on top of testing that stuff.

  • MATT GAUNT: Yeah.

  • So I built this thing.

  • And that's all it does is just simplify all those things.

  • And in this case, I've kind of merged the two.

  • So if we do it super raw with just selenium-assistant, what

  • it's going to do is we're going to get

  • all the available browsers, open them

  • up, go through the same steps we did with Chrome,

  • and then return the results.
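
A rough sketch of that raw selenium-assistant loop is below. The method names (getLocalBrowsers, getSeleniumDriver, getPrettyName) are recalled from the library's documentation and may differ between selenium-assistant versions, so treat them as assumptions rather than gospel.

```js
const seleniumAssistant = require('selenium-assistant');

async function runInBrowser(browser) {
  // Each browser entry covers an installed release channel (stable, beta, unstable).
  console.log(`Running tests in ${browser.getPrettyName()}`);

  const driver = await browser.getSeleniumDriver();
  try {
    // Same steps as the single-browser Chrome script from earlier.
    await driver.get('http://localhost:8080/test/');
    await driver.wait(() => {
      return driver.executeScript('return typeof window.mochaResults !== "undefined";');
    }, 60 * 1000);
    return await driver.executeScript('return window.mochaResults;');
  } finally {
    // Close this browser before moving on to the next one.
    await driver.quit();
  }
}

async function runAll() {
  // Ask selenium-assistant which browsers are installed on this machine.
  for (const browser of seleniumAssistant.getLocalBrowsers()) {
    console.log(await runInBrowser(browser));
  }
}

runAll();
```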

  • And that's super cool, because you

  • get into this super weird world where suddenly browsers open

  • and close, open and close, open and close.

  • And then you see it go into Firefox,

  • and then it's open and close, open and close.

  • There's still one more big issue.

  • ADDY OSMANI: It's almost as if you enjoy chaos.

  • MATT GAUNT: Well, it's great when it works.

  • It's getting to that point.

  • You have this Mocha environment where everything builds, runs,

  • and it either passes or fails.

  • Selenium just gives you the piece

  • that opens up the browser.

  • ADDY OSMANI: Right.

  • MATT GAUNT: It doesn't actually give you the pass-fail.

  • Selenium can open, do all those things.

  • Things can go horribly wrong and it just goes,

  • well, I don't care.

  • And the worst thing with that is, if you have a Selenium test

  • and something goes wrong, Selenium doesn't quit.

  • It doesn't close that browser, so you end up

  • with a desktop full of 20 different browsers opened

  • simultaneously.

  • The way around that is you kind of end

  • up mashing Mocha and Selenium together.

  • And then you get super trippy inception

  • where you've got Mocha running a Selenium test, which

  • is running Mocha in the browser, to then report back

  • to your Mocha/Selenium thing, which is what I'm doing here.

  • Because you know what?

  • That's true chaos.

  • And that's what you need.

  • And it's kind of interesting.

  • So you get the available browsers,

  • and you say, add Mocha test for each individual browser.

  • So we've got forEach, addMochaTest.

  • Inside of this, we're then saying,

  • describe a new test suite.

  • And then, before each one, get the Selenium driver.

  • So that's doing the builder stuff for me.

  • And then, after each, I just want

  • to make sure I call quit on that browser.

  • Because without that, like I say, especially when you start

  • working on this, if you don't exit at the end,

  • it's going to suck.
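
Pieced together from that description, the Mocha-wrapping-Selenium setup might look something like this sketch, reusing the same selenium-assistant and window.mochaResults assumptions as the earlier examples.

```js
const seleniumAssistant = require('selenium-assistant');

// One Mocha suite per installed browser, per the forEach / addMochaTest pattern.
function addMochaTest(browser) {
  describe(`In-browser tests: ${browser.getPrettyName()}`, function() {
    // Selenium tests are slow, so raise Mocha's default two-second timeout.
    this.timeout(60 * 1000);

    let driver;

    beforeEach(async function() {
      // Handles the builder step and hands back a driver for this browser.
      driver = await browser.getSeleniumDriver();
    });

    afterEach(async function() {
      // Always quit, otherwise a failing run leaves browsers littering the desktop.
      if (driver) {
        await driver.quit();
        driver = null;
      }
    });

    it('passes the in-page Mocha unit tests', async function() {
      await driver.get('http://localhost:8080/test/');
      await driver.wait(() => {
        return driver.executeScript('return typeof window.mochaResults !== "undefined";');
      }, 60 * 1000);
      const results = await driver.executeScript('return window.mochaResults;');
      if (results.failures > 0) {
        throw new Error(`${results.failures} test(s) failed in ${browser.getPrettyName()}`);
      }
    });
  });
}

seleniumAssistant.getLocalBrowsers().forEach(addMochaTest);
```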

  • So mocha ~/test/selenium/mocha-assistant.

  • And this time, it's going to open the browser,

  • listen for the results, and come back whether it works or not.

  • But I've also added an additional step

  • to this one where, because we're now in Mocha,

  • we can do individual tests, so I have that bunch of tests.

  • So it's like open the browser with Mocha,

  • run through all the tests that are unit tests.

  • And then I've added one at the end, which is like,

  • OK, load another page, click on the About link,

  • check that it actually went to the About page.

  • And this is kind of the interesting thing

  • with Mocha is you start getting into more-- which I think

  • you'd class as an integration test-- but it's like, make sure

  • the behavior is the actual end thing that I'm expecting.

  • And I call them integration tests,

  • so I've kind of labeled them differently

  • to highlight the fact they're going to be long,

  • because I think that's part of the point is unit

  • testing's going to be super fast,

  • super reliable, which is why I'm kind of like,

  • you could mock it out, and it makes sense.

  • Integration tests are the big, long, scary,

  • can take a while kind of things.

  • And there's a ton of ways you can skin both of these cats.

  • But I think it makes a lot of sense.

  • So the super basic integration test-- in this case,

  • we have my website.

  • I'm looking for a particular link, which I'm executing

  • in the page, clicking it.

  • And then, at the very end, I'm just

  • saying, OK, wait until the document title is actually

  • what I expect it to be.
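
A sketch of that kind of integration test, with a placeholder URL and link text standing in for the real site, and driver coming from the surrounding suite's beforeEach:

```js
const {By, until} = require('selenium-webdriver');

it('navigates to the About page when the About link is clicked', async function() {
  // Placeholder URL standing in for the real site under test.
  await driver.get('http://localhost:8080/');

  // Find the particular link in the page and click it.
  const aboutLink = await driver.findElement(By.linkText('About'));
  await aboutLink.click();

  // Wait until the document title is actually what we expect it to be.
  await driver.wait(until.titleContains('About'), 10 * 1000);
});
```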

  • ADDY OSMANI: How well does all this stuff

  • play with service worker?

  • MATT GAUNT: So that's where I ended up

  • getting into the Selenium stuff, because I was writing

  • Mocha tests in the browser.

  • And that was great because it meant

  • a really repeatable way of running tests

  • against service worker APIs.

  • And it was the only way of doing it

  • in a sane way, especially when you're

  • learning, because there's lots of edge cases.

  • It was then running them in each individual browser

  • and repeatedly doing it, as well as then running it in the CI.

  • You need all of that in a Node script,

  • which is why I landed on Mocha afterwards,

  • because you kick it off on the Node side,

  • and then it's just JavaScript.

  • And you're not testing any Node stuff.

  • You're only using Selenium to launch the browser,

  • do certain things.

  • And plus, having that step where you can then launch it,

  • perform certain tasks on a demo page,

  • and then check what the response is from the browser

  • is insanely helpful, especially for figuring out

  • any possible browser differences or bugs in each one, of which

  • there were many back in the day.

  • So that's where I'm at.

  • And I think I'm now starting to try and delineate

  • between unit testing and integration tests,

  • because with Selenium, you are instantly in flaky territory.

  • Like, WebDriver just does random stuff,

  • because of lulz and funzies.

  • ADDY OSMANI: Define random stuff.

  • MATT GAUNT: It would just-- so the worst

  • thing with a lot of this stuff is you will randomly

  • get one browser updating, and it will not

  • work with the current version of its WebDriver equivalent.

  • Your CI kind of goes out the window at that point.

  • But a fix will often come along, sooner or later.

  • And it's very rare that that will land on stable releases.

  • But there are just times where, let's say,

  • you're using Express, or some other Node

  • module to start a local server.

  • If for whatever reason the browser talking to Express

  • causes an issue and something breaks, it breaks.

  • And that may be a once in a lifetime opportunity on the CI

  • that you will never see again.

  • So I'm finding I'm adding retries a lot

  • for Selenium WebDriver tests.

  • And generally, that weeds out a lot of issues

  • to the point where you're like, this

  • is actually a bug in my code.

  • ADDY OSMANI: What's the typical number of retries you--

  • MATT GAUNT: I've been landing on, like, three.

  • ADDY OSMANI: Three?

  • OK.
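
One way to express that roughly-three-retries habit is Mocha's built-in retry support; a minimal sketch, with the test body elided:

```js
describe('integration tests (flaky by nature)', function() {
  // Retry Selenium / network-dependent tests up to three times before
  // treating a failure as a real bug in the code under test.
  this.retries(3);
  this.timeout(60 * 1000);

  it('completes a round trip against a real push service', async function() {
    // ...Selenium steps that depend on a live network would go here.
  });
});
```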

  • MATT GAUNT: And that's just largely

  • because, nine times out of 10, it's

  • because I'm using like-- I'm even

  • going so far with integration tests as testing

  • against real live networks.

  • You think of push.

  • You need the browser to talk to whatever

  • push service they're going to talk to, and then

  • get a network response.

  • And if for whatever reason the CI's internet connection

  • is broken, that means my test is going to fail.

  • It's going to get so far, and then die.

  • ADDY OSMANI: I think you've been tackling

  • some of the sort of bleeding edge around unit testing,

  • dealing with service worker, dealing with push,

  • and all of those other newer platform APIs.

  • MATT GAUNT: I don't think it's bleeding edge testing,

  • I think it's just I've been testing a lot of bleeding edge

  • APIs, which is why I'm reaching a lot

  • of these weird situations.

  • And it's also the reason why I'm finding it so useful.

  • Because the main thing is, if I build these things,

  • I also don't want to babysit them.

  • ADDY OSMANI: Right.

  • Right.

  • MATT GAUNT: Because you just lumber me with an issue

  • that I then don't want to have to keep on looking after.

  • I just want you to deal with it.

  • ADDY OSMANI: That's not going to happen.

  • MATT GAUNT: Yeah, I know.

  • ADDY OSMANI: That's just not.

  • MATT GAUNT: But it's interesting.

  • I think everyone should be looking at this stuff

  • and playing around with it.

  • Like the next step for me is looking at services,

  • like BrowserStack and SauceLabs, which I think,

  • now that I understand Selenium, I'm

  • at that point where I can appreciate

  • what SauceLabs does for me.

  • But I think that's the interesting thing

  • is I feel like everyone who does this starts off with raw Selenium,

  • goes through the pain, realizes there's abstractions that exist

  • and they want to use.

  • ADDY OSMANI: And then they get to this point

  • where they could do a follow up "Totally Tooling Tips"

  • episode on that topic.

  • MATT GAUNT: Boom.

  • [MUSIC PLAYING]
