  • Hello, everyone.

  • So our story begins with a humble await fetch.

  • These two lines of code make an HTTP request.

  • They wait for a response and print out the status code.

  • While this code is waiting, a cascade of activity is taking place in your JavaScript runtime, your operating system and your hardware.

  • In this talk, we're going to look at what happens between these two lines of code and what waiting can teach us about software design.

  • Now, you may be thinking: hold up.

  • Why do I need to think about this stuff while writing JavaScript code, and more so, 100 feet from a beautiful beach?

  • Well, I can sympathize.

  • I've wrestled with motivating myself to dig into these lower-level details before, and I'd like to offer two motivating ideas, maybe enough to get you over the hump. First:

  • This isn't trivia.

  • These are real systems we use every day while writing JavaScript code. Understanding systems teaches us examples of how problems can be solved, and most importantly, it gives us the opportunity to identify patterns that guide how new systems can be built.

  • Second, computers are really complex and powerful, and that can sometimes make them feel magical.

  • But much of that complexity comes from layer upon layer of simple solutions to simple problems.

  • As JavaScript programmers, these problems are relatable to us, because we live in an asynchronous world.

  • By understanding the problems the lower layers solved, we can gain a greater confidence in the face of this complexity.

  • As you'll see, a lot of that complexity serves a simple task.

  • How computers wait.

  • My name's Max.

  • I'm visiting from San Francisco, and I'm a developer experience engineer at Patreon.

  • On the first Wednesday of every month, I co-organize a local meetup called WaffleJS.

  • You can see the logo on screen.

  • It's a really fun time.

  • I work with a team of some amazingly talented people I've learned a ton from, and if you're ever passing through, I'd invite you to come join us.

  • It's a really fun night.

  • Or come give a talk; talk to me after this. So, back to waiting.

  • Waiting's kind of a strange concept, and it was difficult for me to nail down in writing this talk, because we rarely do it.

  • We spend as little time waiting as humanly possible.

  • If I ask you to wait for something, you're gonna switch to something else to do.

  • So our use of the word wait is usually in the context of a set of instructions to do something.

  • And when there's some dependency in that process, we have to wait for it, until our progress is no longer blocked.

  • By the way, thanks, Andres, for the cute spaghetti image on the slide.

  • Good recipes take advantage of the idle time by interleaving processes: while you're blocked on one thing, you can start on another.

  • This is a great analogy to how computers multitask, except computers have thousands of different processes they're carrying out at any given time.

  • And unlike most recipes, the order in which things happen is unpredictable.

  • As programmers, we like to think in terms of sequential steps, because it makes our code simple to reason about: you make an HTTP request, then you wait for the response, then you print out the status.

  • This is why async/await-style programming is so useful.

  • It lets us express steps in a clear, linear order; waiting makes that possible.

  • It's the glue that allows us to express a series of separate events linearly.

  • But when we say to wait, we expect that the human or computer on the other side is gonna find something else to do in the meantime.

  • This is a behavior we have to design into our computers.

  • Let's dive into the mechanisms of how this works.

  • Starting with a simple microprocessor. For simplicity, I'm gonna focus on a single-core CPU.

  • That means it can only do one thing at a time.

  • Rich's talk, which is immediately after this at around 1:30, is gonna cover threading, which takes advantage of modern CPUs having multiple concurrent cores.

  • So back to the CPU: we're gonna start with a simple subcomponent called the clock.

  • The clock periodically pulses an electrical signal on and off.

  • And this is what drives instruction execution in your CPU.

  • In modern CPUs, we measure the frequency of the clock in gigahertz, meaning the clock is oscillating billions of times per second.

  • From a reductionist point of view, every single thing a computer does or waits for, begins at one clock cycle and ends with another.

  • By programming a microprocessor, we can implement a simple kind of waiting called busy waiting.

  • This is a loop where, each cycle, we check the condition we're waiting for.

  • If we want to do other things while we wait, though, we kind of have to intersperse them in this wait loop.

  • And that increases the time between checks.

  • As the number of things we want to wait for increases, this loop becomes less and less efficient, because you're checking more things each time through the loop.

  • That's where interrupts come in.

  • Interrupts signal the CPU when something happens, such as your network interface receiving data, a keyboard key being pressed, or a timer elapsing.

  • When a processor receives an interrupt, it pauses what's currently running, saves the execution state, and switches to different code called an interrupt handler.

  • The interrupt handler takes any immediate actions necessary and then determines how to proceed forward.

  • The code that implements interrupt handlers is part of your operating system.

  • The OS makes it possible to write higher level programs that don't have to worry about interrupt handling and communication with hardware.

  • The OS governs switching between which programs are running so that multiple programs can take turns sharing a CPU.

  • This is called scheduling.

  • The operating system also provides APIs for IO, called system calls, for things like writing to files and sending packets.

  • In Linux, most kinds of IO are represented as operations on streams of bytes.

  • Here's a couple of examples on screen. IO takes time, though: disks take time to perform operations.

  • Network devices take time to transfer data.

  • When a program is performing IO, it often wants to wait until that IO has completed.

  • A simple model for this is called blocking IO.

  • While the program's waiting for an IO operation, we say that the calling program blocks until that IO completes.

  • Once the OS receives an interrupt that the IO has completed, the process is queued to be resumed by the scheduler.

  • While that process is blocked, the OS scheduler can run something else in the meantime.

  • Simple blocking IO system calls only wait for one thing at a time, though.

  • If we want to wait for multiple things, we're gonna need some more tools.

  • Operating systems provide non blocking versions of many Io calls for this.

  • When you open a file or network connection, the OS returns a number that identifies the stream, called a file descriptor; you can use that fd to reference the stream in other system calls.

  • So, in this example, if an operation would block, the non-blocking read returns an error instead.

  • If we get an error indicating the read would block, that means there's no data left in the buffer for us to read.

  • So we can find something else to do instead of pausing execution.

  • This leaves our process running, so we get to decide what to do instead of blocking.

  • Non-blocking IO calls can also be used to wait for multiple things at a time.

  • For example, we can loop over a set of file descriptors and try to read from each of them.

  • If there's no data to read, we continue on to the next file descriptor.

  • However, now we're back to essentially busy waiting.

  • What we really want to do is block on a set of things that could happen, resuming when any one of them does.

  • Operating systems provide event-based system calls for this.

  • A simple one is called Select.

  • Select is given three sets of file descriptors, one for each kind of event: streams ready to read, streams ready to write, and streams with errors.

  • The select call then blocks until an event happens or a specified amount of time elapses.

  • It then returns to the program which file descriptors can be read from, written to, or have errors.

  • So here's a really simplified example of how select works: we pass in a set of file descriptors we're interested in reading from, and then it blocks until one or many of them become readable.

  • When select returns, it gives us the list of file descriptors that now have data available, and we can loop over them and read from them without blocking.

  • Each operating system provides a slightly different modern implementation of event-driven IO, though: in Linux this is called epoll, in macOS and BSD it's kqueue, and Windows has IOCP.

  • To write cross-platform software, we have to implement different IO code for each one.

  • Some programs do that, but many others use libraries that abstract over the differences between these APIs.

  • This is where libuv comes in: libuv abstracts over the varying implementations of event-driven IO in different operating systems and provides a common interface for them.

  • libuv is used by Node, which we'll be using as our example JavaScript runtime in the rest of the slides. I'm guessing there may be a few libuv developers in the audience today.

  • If you can find one of them, give them a high five for me.

  • Okay. libuv lets you perform network and file operations and register callbacks for when events happen.

  • libuv uses the operating system's event-driven IO constructs to wait for events, and when something happens, libuv then executes the registered callbacks and resumes waiting.

  • This is called an event loop, and here's an example.

  • To make a network connection, we initialize what libuv calls a handle, which represents some kind of IO object we can perform operations on.

  • When we open the network connection, we pass libuv a callback to run when the connection's established.

  • This is very familiar if you're used to running JavaScript code and assigning callbacks for when things complete.

  • To read data from the connection, we need to tell libuv to track when the connection becomes readable.

  • We provide a read callback, which will be called with the data as it becomes available, and then the last thing we need to do is run libuv's event loop.

  • This is gonna block until the next IO event we're interested in happens, and call the related callback for us.

  • Node.js is implemented on top of this libuv event loop.

  • When JS uses a Node API to perform IO, under the hood Node is calling libuv to perform it.

  • And then, when IO events happen, Node runs the callbacks or triggers events in JS.

  • So that was a lot.

  • Now that we have our cast of characters, let's return to our code.

  • In order to make an HTTP request, we have to perform several network operations,

  • including looking up the hostname, opening a socket, sending an HTTP request, and receiving data.

  • I'm gonna gloss over many of those steps so we can just look at a single walk through the layers as we establish a socket connection.

  • So let's dive in and break this down.

  • Let's start with the fetch call.

  • Fetch uses Node's HTTPS module to start the request.

  • It then returns a promise that represents the pending value of the fetch response.

  • Our JavaScript awaits the response from the fetch.

  • This tells Node to pause and save the JavaScript state, and then switch to running other queued-up JavaScript code.

  • We're going to skip over looking up the IP address here and go straight to opening a connection.

  • So Node uses libuv to open a connection to jsconfhi.com and queues up a callback to run when the connection's established.

  • To accomplish this, libuv tells Linux to make the connection.

  • At this point, if Node has no more JavaScript to run, it then calls into libuv's event loop, which will wait for the next IO event; now Node.js is waiting.

  • Under the hood, libuv is using the Linux epoll API to track these IO events.

  • libuv tells Linux it's interested in when the socket becomes writable; then libuv waits as well, blocking until a timeout or the next event happens.

  • In the meantime, Linux's network stack is busy making the connection.

  • This involves the operating system's network drivers sending data, and the CPU transferring that data to the network device.

  • While we wait for a response, the operating system is going to switch to running other things.

  • Eventually, though, the CPU will receive an interrupt we're interested in: the network device has received data.

  • This interrupt will cause the operating system network driver to read the data from the network interface.

  • And then this is gonna continue back and forth for a while until the connection is totally established.

  • At this point, the connection socket becomes ready to write, which is what libuv is waiting for.

  • libuv executes any callbacks waiting on file descriptors that became ready.

  • And it just so happens that we have one from Node, waiting for the connection to be established.

  • Once we've finished processing all waiting callbacks, we've finished our first iteration of the libuv event loop.

  • There are a couple of things that happen here that I'm glossing over.

  • There are several similar back-and-forths, like starting a secure TLS connection handshake and actually making the HTTP request.

  • We're gonna skip over them here.

  • But when that's all finished, Node's HTTPS module is going to emit a response event.

  • This is what the fetch promise that we initially made is waiting for; when the promise resolves, the await is ready to resume.

  • So node executes our next line of JavaScript code printing out the response code.

  • That was a lot to digest, huh?

  • Let's look at the key parts in one picture.

  • So your JavaScript code awaits a promise.

  • Node.js accomplishes this by pausing execution until the promise resolves.

  • Under the hood, Node uses libuv to queue up network operations.

  • libuv uses the Linux epoll API to listen for events when the network connection is ready.

  • So, I told you I was gonna explain how to wait.

  • But like, where in this process are we actually waiting?

  • And how does the waiting happen?

  • Let's break this down a little bit more.

  • The first three steps are essentially recording information: when the request finishes, resume my JavaScript code; when the connection's established, run my libuv callback.

  • We're not waiting here.

  • We're really just defining relationships between what we're waiting for and what to do when it's ready.

  • In contrast, the operating system and CPU are responding to real-world events and executing the code that's queued up waiting for them.

  • Let's recap and summarize what's happening when our JavaScript code waits.

  • Together, our JavaScript code, Node.js, and libuv are performing IO and queuing up handlers for the responses: when this IO event happens, run this JavaScript code.

  • The OS juggles incoming IO events and runs processes as soon as they become unblocked.

  • The CPU is busy executing these instructions, and then it gets interrupted as real-world events happen.

  • Intuitively, this is similar to how humans wait.

  • We keep track of something we're waiting for, and do things that we're not blocked on, until something interrupts us and there's something new to do.

  • So far, we've thought of CPUs in terms of being a black box that just does IO for us.

  • But modern CPUs are really interesting.

  • They can actually perform many async operations themselves that will be pretty familiar to you as a JavaScript programmer.

  • Modern CPUs offload data transfers using something called DMA: direct memory access.

  • This is a subsystem that transfers data between devices and memory in the background.

  • When a transfer finishes, the CPU gets notified via an interrupt.

  • Think of it like a callback in JavaScript: the CPU makes a DMA request, it transfers in the background, and then later the CPU is notified when it completes.

  • CPUs can also set timers to trigger an interrupt at a specified time, using the High Precision Event Timer.

  • This is really analogous to setTimeout and setInterval in JavaScript: the CPU can set background triggers to fire after some time has passed.

  • So as you can see, as JavaScript programmers we're really familiar with asynchronous callbacks and setTimeout and setInterval, and the CPU can do all of this too.

  • So when work is happening in the background, sometimes the CPU actually doesn't have anything to do.

  • There's no instruction that needs to be run before the next interrupt.

  • So how does the CPU wait?

  • The first thing CPUs can do is lower their clock speed.

  • This reduces the amount of power the CPU consumes and the amount of heat it generates, at the cost of slower execution.

  • The second thing CPUs can do is progressively turn themselves off.

  • In a modern Intel or AMD CPU, a core can issue a halt instruction.

  • This will cause the CPU to stop executing instructions until an interrupt is received.

  • This allows the CPU to power parts of itself off, resulting in a significant energy savings.

  • As an aside, this is one reason why requestAnimationFrame is useful.

  • requestAnimationFrame is used by front-end JavaScript code to schedule animation timers.

  • As you can probably see, I think animations are awesome personally, but to save battery, we should use them sparingly.

  • requestAnimationFrame can reduce the frequency of animations or skip them entirely to save battery, which enables CPUs to enter powered-off sleep states.

  • If we look at how waiting is implemented at each layer of the stack, what we're trying to accomplish and how we accomplish it bears a resemblance.

  • Each of Node, libuv, and the OS scheduler consumes incoming notifications that something happened.

  • They then dispatch handlers to respond to each incoming event.

  • When there's nothing left to do, the loop sleeps until new events are available.

  • In a nutshell: we're running in a loop, consuming a queue of incoming events, and then waiting until more are ready for us.

  • This is where the name event loop comes from.

  • So, for example, Node.js's event loop is consuming libuv callbacks.

  • It runs a queue of JS callbacks, and then it pauses and blocks the process, essentially, to wait for the next libuv callback.

  • Contrast the Linux scheduler: it's not exactly the same, because the Linux scheduler is actually always executing something and getting interrupted.

  • But those interrupts come in, they get stored and queued up, and then the Linux scheduler runs the queue of unblocked processes, and then it'll continue on its way until the next interrupt comes along.

  • And if there's absolutely nothing to do, the Linux scheduler will issue a halt instruction, which will turn off the CPU, or turn off parts of it.

  • If you squint, both the operating system and Node.js appear to be doing kind of similar things here: the operating system enables programmers to write linear blocking IO, whereas Node.js enables developers to write sequences of IO using callbacks or async/await code.

  • Both the operating system scheduler and the Node.js event loop allow multiple sequences of code to be interleaved.

  • One big difference between these two systems is that the operating system implements preemptive multitasking.

  • If a CPU interrupt occurs while a process is running, the OS may pause the process and switch to another.

  • In addition, the operating system scheduler uses a timer so it can regularly interrupt processes, so that each gets its fair share of CPU time.

  • In contrast, JavaScript runtimes implement cooperative multitasking.

  • This means that the runtime will not interrupt any running JavaScript code.

  • The event loop only runs in the spaces between JavaScript execution.

  • Your JS can and will delay the event loop from handling events.

  • This is actually a deliberate design choice, and it provides some advantages to JavaScript programmers.

  • While JavaScript code is running, incoming IO can't interrupt or change the state of the JavaScript world.

  • This also gives JavaScript code the ability to choose exactly when it wants to yield to the event loop.

  • Then why do we need the OS layer at all?

  • Wouldn't our code be more efficient without it?

  • Potentially.

  • You could do that, but what you lose is generality.

  • Our computers are typically doing much more than just running our JavaScript code, and not all software is written in JavaScript.

  • However, it is possible to eliminate the OS layer entirely and build runtimes that operate directly on the CPU and hardware.

  • These are called unikernels: some are research projects, some are actually used in production.

  • In general, though, we want our computers to be able to run many different kinds of software at once, and that's where the extra layer of an operating system comes in handy.

  • What if, instead, we eliminated the event loop and just let the operating system schedule blocking Io?

  • Why do we need the event loop at all?

  • Couldn't we just run a lot of processes?

  • The answer lies in economies of scale.

  • Each operating system process has overhead: processes take up memory.

  • It also takes time to switch between them while we save and restore the execution state.

  • By using event-driven IO in an event loop, a single process can manage a vast number of async IO operations in bulk.

  • Overhead is lower because the single process can use more compact data structures for keeping track of state.

  • The event loop can also exert finer-grained control over the priority in which it handles different events, compared to the operating system's more generic scheduler; the operating system knows much less about what we're doing.

  • But this is a JavaScript talk.

  • And what does this all have to do with JavaScript?

  • And what does it have to do with waiting?

  • Well, most interactive applications can actually be thought of as a tapestry of event-driven code.

  • We weave together a sparse graph of code that runs in response to external IO events happening, and in between those short bursts of activity, our code isn't doing very much.

  • Our code is actually just waiting.

  • A great abstraction layer breaks down a complex problem into multiple similar pieces.

  • Blocking-style IO enables us to think about this tapestry as linear flows of events.

  • This is what makes Node.js and async/await such powerful tools.

  • They give us leverage over complexity.

  • Under the hood, the event loop ingests an unpredictable stream of incoming events and advances the linear flows we've defined.

  • Great abstractions also tend to make both sides of the problems they face more efficient.

  • Blocking-style IO not only helps programmers understand their data flows; it makes explicit what processes are waiting for, so that our runtime can focus its attention elsewhere.

  • And yet abstractions always leak.

  • What we do in our JavaScript code has impacts across the entire system, even though sometimes we have limited control over it.

  • As our stack of abstractions gets deeper, the distance between what we intend to happen and how it happens widens.

  • Before we wrap up, I'd like to point out two newish layers of the stack we haven't really gotten to yet, which are pretty interesting: one's on the back end and another's on the front end.

  • If your back-end code is running in the cloud, it's not running on a real CPU.

  • There's an additional virtualization layer called a hypervisor, which provides a virtual CPU to our operating system.

  • The hypervisor has its own scheduler, which allocates CPU and IO time based on how much you're paying; you're actually running on even more schedulers than you thought.

  • However, if your front-end code is running on React 16 or later, your DOM renders are subject to an additional scheduler, too, called React Fiber, because the event loop doesn't have a way to preempt control from JavaScript when it's taking too long.

  • React Fiber schedules itself: it saves state and pauses when it's out of time, and resumes after the browser renders the next frame.

  • The Fiber scheduler splits up large UI re-renders into smaller pieces of work that can execute incrementally.

  • It also prioritizes time sensitive work like updating input elements.

  • So here's a broad overview of the many layers of our stack when we're working both in back-end JavaScript and in front-end JavaScript.

  • There's a lot of layers here, and there's a tendency for us to think of high-level programming languages like JavaScript as less efficient, because they introduce additional layers of overhead and give us less control over the execution.

  • I want to leave you with a more nuanced idea than that.

  • As we've seen with waiting, working at a high level of abstraction often leaves you with a clearer expression of what you're writing.

  • The more clearly programmers can express their intentions, the more information we can weave into our tapestry of events, and the more latitude we provide our execution layers to optimize them.

  • The real cost of adding too many layers is mismatching abstractions, where we waste energy translating one problem into another.

  • As you've seen, many layers of modern computers provide means of waiting in similar ways.

  • Over time, we're gonna simplify away some of those layers and specialize them to better serve the needs of modern applications.

  • At the same time, we can expect that the future will demand even higher level languages or high level abstractions built in those languages like react.

  • As you've seen, JavaScript gives us tremendous leverage over the CPU and OS to weave together flows in a way that's intuitive to us as programmers.

  • But programming is still a young medium, and we still have much to learn about how to structure our ideas.

  • If you're interested in diving in deeper on these topics, I'd recommend three awesome resources.

  • Sam Roberts' talk shows, up close, how libuv is used to create the JS event loop, and exposes the trade-offs that are made in designing it.

  • Cindy Sridharan's talk on non-blocking IO dives deeper into how the operating system implements event-driven IO internally.

  • And finally, Marek Majkowski's blog series on select walks through the history of how IO multiplexing evolved, starting in early computer systems, in a time when people were just figuring out how to network computers, at the dawn of the Internet.

  • It's really fascinating stuff, and I highly recommend diving in.

  • And with that, I'm gonna yield back to the conference loop.

  • Thanks for scheduling your attention for the past 25 minutes.
