Ashley Williams

JavaScript's Journey to the Edge

In September of 2008, Google’s Chromium Project released V8, a JavaScript engine, as part of a browser optimization wave that heralded the era of JavaScript browser applications that we both love, and love to hate. Less than a year later, in 2009, Ryan Dahl announced (at this very conference!) a way to run the V8 engine outside of the browser: Node.js, a platform that held the promise of unifying web application development, where both client- and server-side development could happen in the same language: JavaScript.

A decade later, V8, JavaScript, and its new buddy WebAssembly, have expanded to lands charted only a few years after Node.js debuted, known (confusingly) as the “Edge”. In this talk, we’ll introduce what the “Edge” is and why we are excited for it to revolutionize computation on the web. We’ll explore how this adventurous JavaScript engine, V8, is so well suited to tasks previously limited to Virtual Machines, Containers, or even simply Operating Systems. Finally, we’ll talk about security, Spectre, and ask ourselves the age-old question, “You can do it, but should you?”.

In true JSConf EU tradition, this talk itself is going to be an exciting announcement. You should come if you want to be there for the beginning of a new era of the Internet.

Portrait photo of Ashley Williams

Transcript

Ashley Williams - JavaScript's Journey to the Edge

>> All right, so we're ready for the next talk, and our next speaker is already standing on the stage, and this is Ashley, so Ashley is a manager and engineer working on Rust and WebAssembly technologies. Also, she cares a lot about inclusive and open source communities. Please a big round of applause for Ashley! [Cheering and applause].

>> Hello, everyone, welcome to my awesome talk. My name is Ashley Williams. You may know me as ag_dubs from the internet, I'm sorry! What is this talk about? This talk is about a couple of things. The first thing it is about is performance. More importantly, performance that makes things more accessible, and, unlike a lot of the talks that we've seen at this conference, this talk is also about infrastructure.

Can I get a shout out from any of the ops people in the room? Come on! Who is on pager duty right now? Someone, I'm so sorry. That sucks! I want this talk to be a little bit about how the internet works, and potentially, how the internet could work. And so this talk is called "JavaScript's journey to the edge" and so there's a little bit of journeying. Perhaps you're more familiar with these journeys if you're from the United States, but I just wanted to say a small thing, as this is JSConf, about how important this conference has been to me.

I spoke at the last Reject JS in 2015, and I was the second-to-last reject only to Marika, who was the last reject. It was one of the most amazing conferences. The next day I went to JSConf EU and saw someone wearing a shirt with my face on it, which was a fascinating surprise. This was from a musical number they had done using quotes from previous talks of mine, and they did that again in 2017, or 2016, with the classic "people got mad", which was an auto-tune of my voice talking about how people get mad if you put all of your code in one file! They do get mad! But then I spoke at JSConf EU in 2017 and I wore an Antifa shirt. It was a super fun talk.

Man, I have so many friends at this conference. The last time I was here was last year where I did an impromptu Rust and WebAssembly workshop for 100 people from the Mozilla booth, and it's cool. This conference is super awesome, so can we give it a round of applause? I love this place.

But this talk is obviously not about my journey, this talk is about JavaScript's journey, so I'm about to show you some very scientific timelines that I made using Wikipedia and Keynote. JavaScript has had a fascinating history, and people talk about history, about this one being the tenth one, and Node being announced in 2009. We've seen a lot of development from JavaScript, and we've seen it develop really, really quickly, and I think it's kind of developed in one particular way, so here we see the first website in 1991, and then we end with Wasm in 2017. Wasm was only born in 2017? Amazing. But, we saw the appearance of a fair number of things, including a lot of browser engines, and a lot of frameworks.

And I think one of the most pivotal things in this timeline that people don't usually see is the emergence of Google Maps in 2004. It really motivated people to see what you could do inside of a website, and made it so that we started developing all of these engines that could do browser computations so much faster. So if we put these graphs together and take a look at this, what is happening is that the speed of computation in the browser is just exponentially growing, and that is so awesome, and I'm a big WebAssembly fan, I'm super here for this. However, because the browser has become such a computationally awesome agent, we've run into some costs.

How much does doing this cost? And fundamentally, this comes back to the idea of accessibility, and it's spelled wrong here, but it really comes down to the fact that what we are talking about is the ability for people even to access content. Addy Osmani, in his 2018 talk The Cost of JavaScript, said the web is bloated by user experience. And he's genuinely completely right. So how many people here have ever checked out HTTP Archive? If you haven't, it's amazing, and you can look at these numbers.

This is just one of the graphs. What this graph is showing is the median size of desktop and mobile applications, in the JavaScript bytes that are being downloaded to the device, and we have seen a 353 per cent increase for desktop, and it's worse for mobile: 577 per cent growth in how much we're sending to the browser. It's cool because the browser can take that stuff and use it really fast.

Moving those bytes over the wire takes a lot of time. So, on average, remember, this is on average, there are people who are on the really bad end of this, mobile loading time for an average website takes nine seconds. All right? That's unacceptable. So this is going to be part of the problem we are going to solve in my talk today. So, my intro did not entirely say this but I'm a systems engineer for a big orange cloud, not to be confused with a big orange website which I'm not a fan of, or SoundCloud which has a surprisingly spectacular logo.

I work for Cloudflare, I know it doesn't look super big here. What does it do? Cloudflare is an infrastructure company. Cloudflare is not super good at actually defining what it does, but the thing you definitely don't call it is a DNS company, because also, I mean, no-one likes DNS. I can't figure that out.

I work at a DNS company now. But we call ourselves an infrastructure company, and sometimes, I describe Cloudflare as a hardware company, and that is because our primary asset is this, and this is a map of 180 data centres, and growing, all over the world. And so, this set of data centres contains something which is called "the Edge", and this is a terrible name. It doesn't make any sense to most people.

Someone said there's a wrestler called the Edge. I don't know who that is, whatever. To talk to you about what the Edge is, we will talk about the classic dichotomies in web programming: client and server.

To do so, let's talk about pizza. Who likes pizza? All right, there we go! I'm originally from New York, so pizza is cool, right? We are going to talk about pizza delivery, and I want you to view that in the eyes of pizza accessibility, because it would be terrible to deny people pizza, right? Especially warm, fresh pizza. So obviously, you guys smoosh them together for pizza accessibility, all one word. Here is an animation. Here, your JavaScript programmer will be represented by this lovely chef. The JavaScript's generated output is a pizza.

Your end user is a superhero, because this is how we should think about our users, and we are going to have this interesting thing that is a basket, and so there you can think of that as a data centre, a point of presence, or a cache node. Let's take a look at what client-side rendering looks like. With client-side rendering, what we do is we have our chef in New York, and we have our person who wants to eat pizza in Australia.

When we render on the client, we send the chef to Australia. That's a lot, right? And then, what the chef has to do is then the chef has to cook. And then, at that point, you have delivered your pizza in Australia. But that's a little concerning, right? Like, maybe the person in Australia doesn't have like a whole room with an oven for a chef to move in and start cooking.

Like maybe they have a flip phone that literally can't do that. This is a little bit of a complicated situation. So, of course, what do we do? Let's throw a cache on it.

That should make it better, right? Instead, with our cache, we send our chef to a basket, you know, in the Pacific Ocean, the South Pacific. We send the whole chef, and then we still have to send the whole chef to Australia, where they cook and they make their pizza. Again, we still have this situation, even with the cache, where the chef is travelling from maybe a closer location, but they still have to go into that person's house and make that pizza. That's pretty invasive, I think. All right? So we can keep sending those chefs, but, yes.

Again, we have another option, right? We have server-side rendering, and of course, as everyone said, ten years ago at JSConf EU, Ryan Dahl announced Node.js, and it was cool because it promised to unify web development on the client side and the server. It's like, why do it in two languages? Let's do it in one. That made the server accessible to JavaScript developers. Maybe not for the first time, but for the first time for folks who didn't want to learn a new language.

So let's take a look at what server-side rendering looks like. We got our chef in New York and our super hero in Australia. They're going to cook in New York. No more moving into the house to make pizza. They're going to send that right on over.

Not bad, but that's a pretty long trip for some pizza, right? I'm not sure that's going to hold up. Fresh pizza's very important, right? So maybe that's not the freshest of pizzas. Let's throw a cache on it, right? Cool, we will throw a cache on it.

So cool. The chef can still make their pizza in New York, and now they can just send it on over to that cache and the pizza can hang out there, and then our Australia person can happily eat all the pizza they would like. But that pizza maybe, it's a little old. I don't know. What if they wanted an extra topping on something? They would have to go all the way back, and that's really not efficient.

So, when we talk about the client and server, and we've heard other people talk about this at this conference already, we are seeing frameworks realise they have to negotiate this boundary better. But that's really only just starting, and we are still genuinely talking about these trade-offs between the client and the server, and they're tricky trade-offs. Now, it wouldn't be an Ashley Williams talk if, when I said "trade-off", I didn't immediately start talking about dialectics.

Who here knows what dialectics is? Dia-what? It is super foreign. There's an idea in formal logic where you say A equals A. This is a thing, and then that thing is always equal to this thing. We use this a lot in science.

We say when we reach a contradiction, maybe where A doesn't equal A, we are wrong, right? A does not equal A. However, I think that this is completely backwards. In fact, the dialectical method, in contrast to formal logic, embraces contradictions.

Fundamentally, the idea of dialectics is that the motor of history is these oppositions, and that resolving those oppositions and pushing them forward as the synthesis of the oppositions is what makes things happen, and I get so excited about this because I am a giant philosophy nerd, so this is the philosophy nerd version of the diagram, but this one will go over a little bit better. Just wait for it. This will explain everything. There we go! Dialectics, right? What I want to say here is we've been kind of assuming this trade-off between client and server for a very long time, and maybe the dichotomy is the problem. All right, so, we've got this problem, stuff is slow, we've got the client, we've got the server, they seem to be in opposition, we have no way to resolve them.

What is to be done? As I told you, I work for a big orange cloud, and this big orange cloud has a ton of baskets all over the earth. One day, they were like, what should we do with all these baskets? We've been storing static assets in them, but maybe we could do something cooler. This is a direct quote. Just kidding, it's not a direct quote at all! What this became was: we have the client and the server, let's kind of take those oppositions, synthesise them, and create the Edge.

So let's take a look at what that looks like. We are back to pizza, y'all. I hope you're into it. So with Edge-side rendering, we have the basket and the chef.

The chef lives in the basket. In our previous examples, no-one could cook in the basket. But with Edge-side rendering, you can cook in the basket! And so, now the chef doesn't have to move into your house and make pizza, they can hang out in the basket that's near your house, make you pizza, and then deliver it to you.

That's pretty sweet. But, this kind of looks a lot like server-side rendering, a little bit, right? Like, it's just one. But the real trick is, remember, we don't just have one basket, we have a lot of baskets. So, at any point in time, these chefs can be making pizza and sending them to people all over the world so everybody can have pizza that is nice and fresh without someone messing up their kitchen. That's what the Edge is.

You might be asking: this is a talk about performance, so how fast is it? Right? So benchmarks were done, and I'm going to show you some. Your mileage may vary.

These were done yesterday, and who knows what will happen tomorrow? These are good representative benchmarks of what we have. There is something called serverlessbenchmark.com, but these are competitive numbers. I don't want to do a product talk where I compete against other products. I want to skip over that one and do what I call: big numbers are big, small numbers are small, and our numbers are small.

These are numbers for response times. In Cape Town, a Worker responds in about 143 milliseconds, and GitHub Pages is going to respond in 591. In Doha, a Worker will respond in 44 milliseconds, and GitHub Pages in 497 milliseconds.

What about Australia? A Worker, 208 milliseconds; GitHub Pages, 624. Those are some big numbers. Remember, these are people accessing the internet. Maybe you'll call me a millennial, but if I don't have access to the internet, I actually get nervous. Emotional health.

Also, it's all of this information. All of the ability to prosper currently on earth is largely driven by the internet, and so this access matters. Could you imagine having to wait that much more time just to get, I don't know, you probably read Reddit. You would have to wait so long, it would be terrible.

Reykjavik, Iceland: 170 milliseconds. Now you're probably asking me, how do you do that? That is very interesting, or hopefully you think it's interesting. Let's talk a little bit about it. We have all of these baskets, and it turns out that trying to cook pizza in these baskets actually has a lot of fascinating constraints, and so the first thing we can think about is scalability. So, for Cloudflare, scalability of traffic, or requests, is super easy, no matter how huge it gets.

I think it can take over 30 terabits of traffic per second. It's a lot, and it's continuing to grow. However, for this model to work, this idea of tenants, or how many apps we can put in the basket, is super hard.

Every app needs to be in some location, and some places are very small. We are looking at a need for 100 times more efficiency than what you usually see in a server-side offering. We came up with a set of constraints. The first was for the code footprint, the base amount of what the app needs to be: a VM requires around ten gigabytes, a container around 100 megabytes, and what we needed was less than one megabyte. Less than one.

That's a pretty big deal. All right? And then for memory usage, a VM is going to require at least one gig; a container, again, around the same ratio that it had for the footprint; but we needed to do under five megabytes. All right? And this is fundamentally because the Edge is not a large place, or, like the kids call it, the Edge is not thick at all.

It's very small. We have to get everyone's apps on here because we want everyone to be super fast, all right? Additionally, because of the needs that we have, context-switching is a very interesting thing. For a VM, very little context-switching is needed. Maybe for a container, a little bit more.

We need context switching because apps only run when requests for them come in locally. We don't need to be running that app all the time. We need to be able to switch back and forth between apps incredibly quickly, like to the point where switching processes would be too much overhead.

That's a pretty big constraint, all right? Start-up time is a fascinating constraint. A VM is going to take around ten seconds, a container 500 milliseconds. We need it to be less than someone will notice: around one-fiftieth of the blink of an eye. Why do we need that? If people start using too many resources on our Edge, we need to be able to kick them out quick. But, if they get another request in, we need to be able to start them back up quickly again in a way that the user would never notice, all right? These are some pretty serious constraints, right? But, there are other things that also have these same constraints.

For example, certain APIs may need to run client code directly on the server, because it needs to be more efficient. Similar with big-data processing. With big data, you can't bring the data to the app, you bring the app to the data. It's similar constraints to what we have.

Also, web browsers have the exact same constraints. They need to be constantly running all of that code from all of those websites that you go to, that all of you are writing. This is where I say web browsers are freaking awesome. We have actually already solved this server-side need with a client-side technology.

Server technology is actually too slow, and in building out server-side technology, have we overlooked the fact that we have this beautiful technology that is our client for the web? Browsers are optimised for exactly the types of things that we need in a serverless platform. They've got fast start-ups; someone is literally there waiting for it. They're going to be moving quick.

Remember, we talked about computation in the browser and how optimised it has been getting, all right? I don't know about you, but I usually have around 100 tabs open at once, and that's a lot of processes. You also have to remember that a single page is not always a single website. There are plenty of iframes. That like button is its own context, and the browser is orchestrating all of that. Additionally, and I know people will fight me on this, browsers are optimised for secure isolation.

When you go to a website, if it was able to leak all that stuff out to your other websites, we would have a big, big problem. So, the architect of this runtime said web browsers have been the most hostile security environment for quite some time. And I'm inclined to agree with him on this.

I think this makes sense. So, in deciding what we were going to do for a serverless runtime, we picked V8. Come on! We chose to run the serverless runtime on V8 using a class in V8 called an isolate. You could traditionally think of this more like a VM, but the word "VM" has kind of changed; this is more like a JVM, which is not the same thing as a hardware VM.

You can understand it as a lightweight context sandbox. So why is this better than VMs or containers? It is basically because you get to share more. It's like a little bit more communist, right? With VMs, you get the hardware virtualised, and then you've got to bring everything else on your own. With containers, all right, you get the operating system, but again, you bring everything else on your own.

With isolates, you get the web platform APIs, the JS runtime, the operating system, and the hardware, and you just show up with your application and maybe a couple of weird libraries you wrote, and that's freaking awesome, and, because we can share so much, we can very efficiently use resources. And so just to take a kind of look at what this is: in a virtual machine, you can see that the process overhead is one-to-one with the user code, whereas with the isolate model, you can see that it is definitely many-to-one. This is how we are getting those benchmarks that we were talking about. So, in addition to using V8, we have built a coding environment for you that uses the Fetch API and the Service Worker API. You can build it out in a UI that looks a little bit like this.

You literally just have to have something for fetch events, and then you write a function that can handle requests. You can do plenty of other things, but this is the bare minimum worker and what it looks like. Because we're using V8, we get ... for free.
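That bare-minimum shape can be sketched like this. This is a hedged illustration of the Service Worker style described above, not Cloudflare's exact sample; the handler name and greeting text are made up for the example.

```javascript
// Minimal worker sketch: register a "fetch" listener, answer each request.
// In the Workers runtime, addEventListener is provided as a global; the
// fallback stub below only exists so this sketch also runs in plain Node.
const listen = globalThis.addEventListener || (() => {});

// Hypothetical handler: reply with a plain-text greeting naming the path.
async function handleRequest(request) {
  const url = new URL(request.url);
  return new Response(`Hello from the edge! You asked for ${url.pathname}`, {
    headers: { 'content-type': 'text/plain' },
  });
}

listen('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});
```

Deployed to the runtime, every incoming request would fire the fetch event and get the handler's Response back from the nearest data centre.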

If JavaScript is not your jam, feel free to use a language that targets WebAssembly, and we can use that. You might be thinking this looks a lot like an operating system, and, man, I would like to talk about how that is true, but I have literally no time left. I would recommend that you check this out if you want to learn about how we've tried to tame the OOM killer; that's really fun. Before I finish up, I do want to ask, is this a good idea? Right, like, oh, cool, you can do it, but, like, should you? So a lot of people will be like, yes, there is a spectre haunting this architecture.

Right? Wrong talk, sorry! This one! Spectre! It's a fascinating memory bug. I don't have time to go into explaining it. What I will tell you is that we've made some mitigations to attempt to avoid it.

Spectre is a bug that goes all the way down to the depths of the stack, and many of us are just trying to cope as you pop up the stack. This is what we've been doing so far. Primarily, we are removing any sort of timer. If you use Date.now() in a Worker, it will tell you the same time every time. We don't allow local concurrency, because that's a timer in disguise.
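The frozen-clock behaviour can be illustrated with a toy monkey-patch. This is only a simulation of the idea described above, not Cloudflare's actual mitigation code:

```javascript
// Toy simulation of the timer mitigation: freeze Date.now() so that code
// running inside a request cannot measure its own elapsed time, removing
// the high-resolution timer a Spectre-style exploit would need.
const frozen = Date.now();
const realNow = Date.now;
Date.now = () => frozen; // every call now reports the same instant

const before = Date.now();
let sink = 0;
for (let i = 0; i < 1e6; i++) sink += i; // busy work that takes real time
const after = Date.now();

// With the clock frozen, the busy loop is invisible: elapsed is always 0.
const elapsed = after - before;

Date.now = realNow; // restore the real clock
```

The work still runs; it just can't be timed from inside, which is the point.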

But one of the things that we are also able to do is we have the freedom to reschedule. We can take a look at someone who is kind of looking a little weird, a worker that is behaving funny, and we can say, "Hey, let's kick you out, keep an eye on you, stick you in your own little box." Hopefully, I have encouraged you to think that it is a somewhat cool technology, and that it's fast, and maybe you want to use it because you want to get fresh pizza to everybody in the world. But, let's talk about how you can actually use it, because accessibility isn't just about receiving content, it's about building content that other people can receive.

I joined Cloudflare two months ago and it was like, oh, shoot, we've got to make this developer experience way better. People do not like curl commands with API keys in them. That's not cool, all right? So, you may have seen me on a couch yesterday, but I've been working on this tool called Wrangler. It was originally released for building Rust WebAssembly workers on Cloudflare, but it is now a fully fledged CLI for uploading any type of worker you like.

I found this picture of a crab with a cowboy hat on it. You can npm install it, and it will just work for you. It looks a little bit like this.

I couldn't speed this gif up. Boom. So, if you like emojis, there are a lot of them. If your terminal doesn't support them, we have fallbacks. We've set up a template gallery, because maybe you don't know what you want to build with something like this, so you can just generate one of these templates and get going with it.

Each one will create a functional worker for you, and so that is pretty awesome. So you can run this command, and just publish, and it will be successfully published at this fascinating URL. How many people here signed up for a workers.dev subdomain? We've got three. Here's the thing: remember that fucking DNS tweet? I hate setting up DNS.

Maybe you do too. To get started playing with this kind of thing, you do not need to set up any sort of DNS. We will give you a free subdomain on workers.dev. You can get it by running the wrangler subdomain command, which will snag that for you, and you can put the apps and workers you would like on it. Finally, we've got cool new docs.

They are also written as a worker. So, that is good. And the big announcement here is, you are probably like, why do I care about this? This is a corporate product announcement. But today we are making it free.

[Cheering and applause]. I'm really excited because Cloudflare Workers is actually the first free Edge serverless platform. Up until now, you could not put something on the Edge, be it our Edge or someone else's Edge, without paying money. I'm excited to get people playing in this awesome spot because I think it will change how we think about applications, particularly as a more distributed thing across the world, and not having to think about server and client. So we have a free tier; this is some stuff about it.

Again, go forth and build, but I have one final message for you, because I'm not done, and I am almost certainly over time, but it's an Ashley talk, so we are going to go over. I want us to do some thinking, like, it's cool, this Edge thing. It's free, and I want you to go and use it and build awesome accessible apps, but the real point of this talk is that I want people to think more radically, and with more people who don't look like us, about how the internet works, and how it could work.

Right? When we encounter trade-offs like the client and the server, I think a lot of people think that's how it is. That's what we've got to do, right? We have the client and server, we can't change that. What I want to encourage you to do is find oppositions like that and challenge them, because those oppositions are opportunities for making strides in accessibility for the web, like what we are doing today. And so, again, we've talked about journeys, we've talked about performance, we've talked about accessibility; the web is the primary place that new developers go to learn, so let's make sure we can get there fast.

The future, with a JS community which is representative of the whole world, literally cannot come soon enough. Thanks! [Cheering and applause].

>> Thank you so much, Ashley. So, we are having a 15-minute break right now. Please make sure to check out the community lounge and the BIPoCiT space, and there's a Mozilla workshop right now. See you at 11:45. [Break].