112: Guillermo Rauch - Building Serverless Applications with Now
Full Stack Radio

Full episode transcript -


In this episode of Full Stack Radio, I talk to Guillermo Rauch about building and deploying serverless applications with Now. This is Full Stack Radio episode 112. Hey, everyone, welcome to another episode of the Full Stack Radio podcast. I'm your host, Adam Wathan, and today it's my pleasure to be speaking with Guillermo Rauch. How's it going, Guillermo?

0:31

Great. Thanks for having me.

0:33

So for anyone who isn't familiar with you, do you mind briefly introducing yourself and talking a little bit about the sorts of things you work on?

0:41

Sure. So my name is Guillermo Rauch. I'm on Twitter at twitter.com/rauchg. I started my career thanks to open source, really, at a very young age. I was hacking with Linux and all this sort of fun stuff. But my breakthrough moment was two things: starting with open source, contributing to open source, learning from open source, and joining online communities of supportive individuals. I would hang out all the time on IRC channels and all kinds of websites for programming communities. That led to me becoming very interested in PHP, specifically CMS and forum software that was sort of turnkey, like phpBB and WordPress. But then I became really, really interested in this idea of using JavaScript to create very real-time experiences,

experiences where the end user had a direct connection with your website or application. So I started using this framework called MooTools, which got started around the time that jQuery also did, in the era of script.aculo.us and Prototype.js. That framework led to a lot of the early ideas that made it into frameworks like React today, and a lot of the people that were involved in MooTools at the time are now working at Facebook or other awesome places, where they craft the JavaScript experiences of today. Another way in which I was interested in making snappy, fast interactions was serving the user with data in real time before they even requested it, this idea of eliminating the refresh button as a thing that people needed to click. So I created a framework called Socket.IO, which made WebSockets a lot more accessible, both because it made them easier to use and because it made them completely cross-browser, cross-platform, and so on. So, yeah, my bio is basically working on open source, but also making it a lot more accessible to a lot of people and creating the best developer tools.

And that's what our company does today. So I started a company about three years ago called Zeit, which is behind the React framework called Next.js that many have probably heard of. And we build the serverless infrastructure for deploying frameworks like Next.js and Create React App and Vue, but also programming languages like Go and PHP, and basically anything. So the idea is that it's not just about giving the developer the client-side tools, but also giving them a platform on top of which they can deploy applications and websites very, very easily and have them scale automatically.

3:45

Awesome. So the thing I was most excited to talk to you about today is Now, which you kind of alluded to, this really interesting deployment platform that you guys have been working on for a while now. I don't even know what you think of it as, or how you categorize it, or what the elevator pitch for it is. But do you mind talking a bit about what Now is, what makes it different, and what problems you're trying to solve versus a traditional deployment strategy?

4:20

Yeah, for sure. So many have probably heard of this idea of serverless. The idea of serverless comes down, fundamentally, to: you never again manage a server, or write a server as a piece of code, really. So Now could be best described as a CDN that is also capable of executing code. So bear with me on this. When you deploy to typical cloud infrastructure, you select a region, you boot up a VM, you put some code inside, and then you're in charge of the uptime of that: you have to monitor it and service it, because it can go down. That's kind of one spectrum of servers today. But there's another spectrum of how people have used the cloud ever since the days of Akamai, and later on Fastly and Cloudflare and so on, which is: you leverage cloud infrastructure that is truly cloud-like. It's just there and you never worry about it.

So when you put a static file on a CDN, you don't monitor that static file. You don't worry that it's not going to reach every customer on Earth, or that you could get paged in the middle of the night. So there's kind of a different programming model there. If you really squint hard here, you realize that CDNs are fundamentally a different way of orchestrating resources, and that's what Now, as a platform, enables. So when you deploy with Now (the easiest way is by installing our GitHub app, which is kind of like CircleCI: you install it into a repo and it deploys that repo; or you use the command-line utility called Now, which you can install with npm i -g now), we build your artifacts first, and I'll get into that later,

and then we put them into the CDN edge. But it's not just static files that we're putting at that CDN edge. We're also putting code that is capable of executing on demand. So we're talking about, say, a PHP file, and this might sound very familiar to people that have used PHP, because that's kind of how it works: these magical files that look like static files but are capable of doing very awesome, dynamic stuff when a request comes in. So Now brings back this programming model and this way of using the cloud, and actually makes it work not just for PHP but for Node.js, TypeScript, all kinds of frameworks like Next.js and Create React App and Vue and Nuxt, and so on, in a way that you, the developer,

you don't do any work to set it up. You don't do any work to configure the builds. You don't have to log into any cloud provider. And the result is that when you deploy, you instantly get a URL to your deployment that you can share with anyone, your co-workers and so on, and then you can put it into production by aliasing it to a domain. So you take your deployment and say, hey, now adam.com is pointing to my deployment.

7:33

Yeah, so there's a lot to unpack there, I think. And sometimes I worry that some of this stuff is hard to see from the perspective of someone who's completely new to these ideas, once you've been so deep in it for so long. So I think it might be worth comparing it to the traditional approach that a lot of my audience would use, like PHP developers deploying full stack PHP frameworks.

8:2

For sure, yes. So typically, a lot of those people are getting a VPS, and they're probably SSHing into it, and they're configuring a server like Apache, and then they're coming up with some way of synchronizing files to that server, setting this all up by themselves. And then let's leave aside the fact that you're going to have to monitor and maintain and upgrade that Apache server and make sure it doesn't go down, and that it's only going to run in the one region of the world where you first put it up. Let's leave that aside for a second. Just from a developer experience standpoint, with Now all we care about is your source code, literally the code that powers your customer experiences. So you create a directory, you put your files inside, you run now, and we give you your deployment at a URL. It will scale infinitely, it will only charge you per hit, it's completely on demand, it scales automatically,

so there's no server in the picture. You never configured Apache, for example. You never SSHed into anything. There isn't even an SSH server that you could SSH into. All we did was synchronize the files, and we run them at the edge. So it's a tremendous difference. And the cool thing about Now is that this is not coming at the expense of lock-in, because what we did is we allow you to say, hey, please use @now/php for this, and that builder is a little module that builds your project, which is completely open source as well. So it's not that we're removing the server by introducing some sort of lock-in or custom API. You kind of get the best of both worlds: the no-setup experience, but also the ability to navigate the entire spectrum of static and dynamic applications with different frameworks and technologies.
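For reference, on the Now v2 platform this builder selection lived in a now.json file. The sketch below is hedged: @now/php matches the builder mentioned in the conversation, but treat the exact module names and fields as assumptions to verify against the platform docs.

```json
{
  "version": 2,
  "builds": [
    { "src": "*.php", "use": "@now/php" },
    { "src": "static/**", "use": "@now/static" }
  ]
}
```

With a config like this, running now would turn the PHP files into serverless functions and serve everything under static/ straight from the CDN edge.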

10:0

Yeah, awesome. So under the hood right now, Now is powered by serverless functions, right? So everything is using something like Google Cloud Functions or Azure Functions or AWS Lambda. And if I understand it correctly, you kind of can use all of those with Now, depending on what regions you select and stuff, and that's meant to be abstracted away from you, in a sense, where you're not really worried about how things are working under the hood.

10:29

Yeah. So a lot of you are excited by this idea of serverless functions, right? But they've always been kind of hard to approach, because when you write, for example, PHP codebases using Laravel or Symfony or whatever, you don't think, oh, I'm going to define a function here, I'm going to deploy my function. You just literally write your PHP files, you start using your framework, and you start either returning some JSON or returning some HTML and so on. So you don't want to have to unlearn all of that in order to leverage the benefits of serverless functions, which are, for example, infinite horizontal scaling capability for your code.

You're not paying for your code when the functions are not being invoked. All the functions run in isolation, so you have a lot of security benefits in there as well. So what Now does is, at build time, we package your code into these different serverless functions, and we deploy them with the providers on your behalf. So let's say that you're in Hong Kong, where Google Cloud has a region but, I believe, no one else does. AWS certainly doesn't have a region in Hong Kong, and Azure might not either, someone can fact-check me on this later. The key there is that when you run now in Hong Kong, and our Hong Kong region is in preview mode but will be live very soon, we're going to deploy your code to Google Cloud Functions without you even noticing.

So you never change the code. You just benefit from deploying to your closest edge, with no extra configuration or supervision, paying the exact same amount of money that you would pay the original cloud provider, with the same scalability benefits. So it's all about this idea that you should never have to lock yourself into one specific provider. But also, the key question here is: should the developer even have to worry about configuring cloud infrastructure at all? That's the mission we've set out to prove and commit to, the idea that you should worry mostly about the code that will be executed to serve your customers. You're probably not in the business of configuring load balancers, configuring servers, operating Apache, knowing when to upgrade, for example, when OpenSSL gets breached or there's a vulnerability, or operating your server so that you get the latest TLS. All of that is handled by Now automatically. And that's why I'm saying the best way to think about it is like a CDN, where, you know,

you don't worry about configuring TLS, you don't worry about configuring the servers. And I think this has already happened once, right? Take S3. When it came out, we used to literally create HTTP servers backed by hard drives, with all these APIs for synchronizing the files and so on, and we replicated the hard drives with RAID. But no one does that anymore, right? We all say, oh, just use, you know, a file API. So you can think of Now as the missing deployment API for the world, which kind of makes it so magical and yet so correct that you don't think, I'd rather go back and, you know, configure

13:51

my own servers. Yeah. So the mission is sort of: how can we take the experience that everyone is already accustomed to at this point for serving static files, and bring that same experience to deploying dynamic code that needs to run in a server environment?

14:5

Correct. And there are so many use cases for this, right? Like if you're running an e-commerce website, if you're running a dynamic blog or something where people submit their comments, or any kind of web-centered application that serves dynamic data, it can fit perfectly into this model. And obviously there are always some, you know, constraints that the model imposes. But one thing that I always remind my friends and customers is that some constraints are in there specifically to enable greater scalability, right? So, like, the beauty of this model is that,

like I said, it's not that we don't let you SSH into the server because we haven't gotten around to it yet. It's more that if we were to add that feature, we would undo a lot of these great other benefits. So in a lot of cases, the constraints come with tremendous benefits.

15:4

Yeah, so I think maybe the most interesting thing to get into first, especially for my audience, is this: the idea of being able to take your server-side code and deploy it using serverless functions, in a way where you don't ever have to worry about the server or think about any of that stuff, is really exciting. And then you start thinking about it deeper, and all of a sudden you have all these questions about how to do the stuff that seems so easy, the stuff that's required all the time in server-side applications, like talk to a database, send an email, grab something from a cache, stuff like that. Because, at least the way I understand it, with serverless functions everything has to sort of be stateless,

15:48

right? Yeah, that's a very good question. So this is kind of what trips people up the most, because a lot of frameworks or systems do sometimes make the assumption that they can, for example, write to the file system. So, funny enough, an example: we have WordPress deployed with Now, you can go to wordpress.now.sh, and the only thing that didn't work out of the box was image uploads, because the images were trying to be written to the file system. And this is, again, what I'm saying about how the constraints are sometimes so wonderful. Because if the functions could only execute on one specific machine, where one specific hard drive

maintains your uploads, it would severely limit the scalability of the system. But if instead you're enabling one of the many WordPress plugins that let you upload images to providers like S3 or Google Cloud Storage and so on, then now you have all these awesome scalability benefits, and even things like backups and a lot of concerns you kind of never worry about with your uploads anymore, right? And it's also so cheap. So the system didn't let you, in this case, write to the file system. It seemed like it wasn't working. But then you realize, oh, this is how I actually solve this problem. And by applying the correct solution, it's likely that you'll never worry about, you know, that side of the operation again. So, an example, I used to see this way back in the day, where, like,

oh, and sometimes you'll see companies have outages because of this: you run out of hard drive space. You go through the postmortems of different startups, and, and I understand, I'm not mocking, because I understand it. When you're monitoring servers, you're not just monitoring one thing like uptime or a ping. You have to monitor so many dimensions that it's so easy for us human beings to entirely forget about certain dimensions of the things we have to monitor, and one of them is hard drive space, for example. You'll hear it countless times: oh, everything was working fine until one day our database ran out of hard drive space. So that's, funny enough, a category of error that you could have if you were writing to a file system. So if you're operating a file system, one day you'll get paged, or your customers will call you saying

they're getting errors on upload. However, with serverless, sure, initially the assumption the framework was making, that it could write to the file system, was proven incorrect, and that caused a little bit of a readjustment on me. I had to go and research what the right plugin to install was, so that my files could actually use an HTTP API to be uploaded instead of just being written to the file system. But then, as you can imagine, I do a lot of things, I don't just set up WordPress demos on now.sh, and I will probably never again in my life have to second-guess whether that website is going to work. Um,

short of WordPress getting hacked, or, well, actually, funny enough, every time we redeploy, our WordPress builder fetches the latest version. So when you redeploy, you automatically upgrade as well. But, you know, short of very specific scenarios, serverless gives me the confidence that I'll never have to monitor that

19:14

thing again. Yeah, yeah, that's awesome. So in the case of a file upload, that seems pretty straightforward. You're using some sort of tool where maybe your serverless function is responsible for getting an upload signature from S3 or whatever, and that's all it's really doing, which is a stateless operation. You provide that signature back to the client, the client uploads the stuff to S3 directly, which is probably better for everyone because there's not double the bandwidth happening and all that stuff. So that's a perfect example of a situation where the constraint is guiding you towards a better solution anyways, which is really awesome. But in the case of something like a database,

like, what are your go-to services for some sort of hosted database, if you need to be able to store persistent data and fetch it, and you're building an API that you're deploying on Now, for example?

20:11

So in general, you want to have this mindset: every time you've been doing something locally, or making assumptions about the local computer, you want to move that into an API, and in general that's going to be some HTTPS- or TCP-based API. So the categories of persistence needs that our customers have are everything from a cache, in which case you can use services like Redis Labs or Memcached or AWS ElastiCache, making sure that the connection is encrypted and that the size of the cache satisfies your needs, etcetera. That's for the situation where you know you have a very expensive backend process or a very expensive query and you want to cache it somewhere. Then, in terms of the actual application state, the durability of the application, I kind of like to divide things into SQL and NoSQL. So for SQL,

I think MySQL is still a legitimately great option. It works very, very well with serverless. There are a few caveats to keep in mind, though. One of them is that serverless is actually giving you so much scalability power that in a lot of cases databases cannot keep up. One dimension of that is the connection count. So, for example, say you have 2,000, or let's say hundreds, of concurrent API calls, and you're opening a MySQL connection for each, and there's nothing closing those connections quickly thereafter. In the world of serverless you don't know whether the underlying function instance was already running or not, which is what gives you all these awesome benefits,

but because of that, you can't just say, oh, I have at most 10 connections open to MySQL. So, as a result, you could overwhelm MySQL. But, again, I'm not trying to say that every negative is a positive, but this one kind of is, because of a few things. So first of all, um,

you're going to find that with things like PostgreSQL especially, sometimes the connection limits are very, very low compared to realistic load. And then people start scaling, their systems become more popular, and they go down. So it's kind of nice that serverless makes you think about this early on. It makes you think, oh, will my database be able to handle the connection load I'm going to get if my service becomes popular and gets a lot of traffic? So with MySQL, what you do is, you can actually be fairly okay by just garbage-collecting connections every once in a while.

So there are modules that do this. For example, the Node.js serverless-mysql module does it automatically; it's a little bit more careful than the default about the connection count of your MySQL server. So, in general, you can use any hosted MySQL service and it's going to work really, really well, like Aurora MySQL by AWS. We used ScaleGrid as well for the WordPress demo that we did, and it works tremendously well. And for things that are not meant to receive lots and lots and lots of traffic, you can actually open the connection and close it in the same invocation, and, provided the latency between your function and the database server is low, you're going to be really, really effective. And keep in mind that in a lot of cases, because we're merging the CDN together with the code execution,
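The "open the connection and close it in the same invocation" pattern can be sketched generically. To be clear, this is not the serverless-mysql API; the connection factory below is a hypothetical stand-in so the shape of the idea is visible.

```javascript
// Sketch of the "open and close within one invocation" pattern.
// `createConnection` is an injected stand-in factory (hypothetical),
// not the real serverless-mysql API; any client with query()/end() fits.
async function withConnection(createConnection, handler) {
  const conn = await createConnection();
  try {
    // Run the invocation's queries while the connection is open...
    return await handler(conn);
  } finally {
    // ...and always close before returning, so a paused serverless
    // instance never keeps an idle connection pinned on the database.
    await conn.end();
  }
}

// Example with a fake connection, standing in for a real MySQL client:
let openCount = 0;
const fakeFactory = async () => {
  openCount++;
  return {
    query: async (sql) => `rows for: ${sql}`,
    end: async () => { openCount--; },
  };
};
```

The finally block is the important part: even if the handler throws, the connection is released before the function instance goes idle.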

you can take advantage of HTTP Cache-Control headers to avoid hitting your database all the time. If people here are familiar with WordPress, you'll know the blue screen of death of WordPress is "error establishing a database connection." So the solution there, I think, is not always just hammering your database; you should take advantage of caching. And the beauty of Now is that when you deploy to it, you're basically deploying to a CDN, so you don't need to set up CloudFront or Cloudflare in front. So imagine you're writing a user-facing page that you know is going to get a lot of traffic. You're actually better off taking advantage of the CDN cache rather than, you know, paying for a tremendously expensive MySQL database

instance that can handle that kind of load. That's why I was mentioning that in a lot of cases people tend to think they need to scale the database instance, and in reality what they need to do is think harder about the right CDN caching mechanism to use. So, on the other side of the spectrum, with NoSQL, I would say you tend to have to worry less about load, because a lot of the providers, like MongoDB and Cosmos DB and DynamoDB, or Firebase, for example, have really awesome scalability benefits built in. And in a lot of cases these more modern databases already support connecting over HTTP to make the queries, and so does the Aurora Serverless MySQL offering by Amazon.
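The edge-caching advice above comes down to setting HTTP Cache-Control headers that shared caches honor. s-maxage and stale-while-revalidate are real directives from the HTTP caching specs; the helper function itself is just an illustrative sketch.

```javascript
// Illustrative helper for the strategy described above: let the CDN edge
// absorb traffic instead of forwarding every request to the database.
// `s-maxage` applies to shared caches (like a CDN edge) and is ignored
// by browsers, so the edge can cache aggressively on its own.
function edgeCacheControl({ edgeSeconds, staleSeconds = 0 }) {
  let value = `s-maxage=${edgeSeconds}`;
  if (staleSeconds > 0) {
    // Keep serving a stale copy while a fresh one is fetched in background.
    value += `, stale-while-revalidate=${staleSeconds}`;
  }
  return value;
}

// In a request handler you might do something like:
//   res.setHeader("Cache-Control", edgeCacheControl({ edgeSeconds: 60 }));
console.log(edgeCacheControl({ edgeSeconds: 60, staleSeconds: 300 }));
// -> s-maxage=60, stale-while-revalidate=300
```

With a header like that, a popular page hits the database at most once per minute per edge, no matter how much traffic arrives.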

So when you're connecting via HTTP to issue your queries, and you're not doing stateful transactions and so on, you probably never worry about connections to your database or any kind of issue like that. So we're partnering with MongoDB Atlas to soon provide one-click install of MongoDB through our UI, which means you'll sign up for Zeit, go to now.sh, click "install MongoDB," and we'll provision your secrets automatically, provision everything automatically, and you'll be able to get at your MongoDB data from any sort of language that you use. And I think that's a great solution for a ton of people, because a lot of what we do is just shuffle documents around, right? Like, we have an API that receives some body of a document, we save it to the database, kind of in the JSON form that we received it in, with some minimal validation, and then we make another API available to expose a series of documents. So MongoDB

DB and Fire Base like are really, really good in this space. One thing I would recommend to people is it's appealing that serverless allows you to deploy everywhere in the world. In fact, when you use now, you can literally say Now, dash, dash regions all it's actually I recommend everyone run this so like, do mpm and still now, yeah, and being so that's G now Then do, for example now, innit? No J as, for example. And then it's gonna created the now we didn't know,

nodejs example directory, so cd into it. And then if you run now --regions all, it will deploy that function to every available edge in our network. It's super cool to watch, because the output of the command shows, in real time, how it goes to all our regions, which are coded after airports. So it's going to SFO and IAD, and then it's going to Paris. But you actually have to step back sometimes and wonder: is this an actually good idea? Because if my database is far away from my function, then, imagine you make two database calls: one to get the user row from the session identifier, so you have a cookie with a session and you do a SELECT with a WHERE on that session ID,
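For reference, the region choice in this walkthrough could also be expressed in the deployment's now.json rather than the --regions flag. The field names below reflect the Now v2 config as I understand it, so treat the exact keys and the airport-code region IDs (like sfo1 or iad1) as assumptions to verify against the docs.

```json
{
  "version": 2,
  "builds": [{ "src": "index.js", "use": "@now/node" }],
  "regions": ["all"]
}
```

Replacing "all" with a specific list like ["iad1"] is how you would pin a deployment next to, say, an East Coast API instead of spreading it everywhere.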

and then you want to fetch all their, let's say, likes from another table or something like that. So you're going to make two serial SQL calls. So what happens if your function is running at, first of all, our most distant edge? Sydney to San Francisco, or Mumbai to San Francisco, is, like, at most 250 milliseconds. Then you're just waiting: your function is running in India, which sounds awesome, but the database is running in San Francisco. So it's 250 milliseconds to get the user and then another 250 milliseconds to get all the likes. And then,

you know, routing is never as efficient as photons of light travelling through a vacuum. So you have all these routing hops, and sometimes problems in the routing, because, you know, Mumbai kind of optimizes for routing within India, for Indian websites and providers and so on. So you end up having issues in terms of connectivity as well. That's why a lot of companies are promoting this idea that serverless allows you to run your code everywhere, and that's true, but you have to be smart about it, because what's going to make for the best possible experience for the end user is always minimizing the number of hops and making sure the hops are very, very quick. And this is why,

like, so many of our customers love Now for doing things like server rendering, because it's all about minimizing the hops. When you do SPAs, single-page applications, what you're doing is you're giving people JS and CSS, then they load it on their computer, then they make an API call to your API, and then they make another API call. You're doing all these hops just to render a few items on the screen. So the golden rule of performance still applies to serverless: you want to minimize hops between your function and your database, and you also want to make those hops very, very fast, and Now kind of puts you in control of that. I'll give you another cool example. We're working on this demo using the Unsplash API, for getting stock images, and our functions were executing there, calling the Unsplash API. It's really awesome, it worked perfectly. But then I realized, hey, like, to me,

it felt a little slow, just to get a few items of metadata from the Unsplash API. It seemed like it was fast, but, you know, I'm kind of obsessed with this, right? And then I realized, oh, the Unsplash API is running on the East Coast. I literally pinged the Unsplash API's host, and it replied from us-east. So I was like, OK, I'm going to go and edit my now.json file and deploy my code to IAD,

which is our region on the East Coast, and I made this really simple demo three times faster. So I was sort of rejoicing, because, sure, initially it worked, but it wasn't as fast, and Now gave me the power to say, hey, actually, I need my code to be running here, and I redeployed with one command and made the demo three times faster. Yeah, so that's what I want people to be mindful of when they think about this new era of serverless: the basic laws of physics are still very important.
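The arithmetic behind that kind of speedup is worth making concrete. Here is a rough model using the round-trip figure from the conversation; the 20 ms and 5 ms local-hop numbers are my own illustrative guesses, not measurements.

```javascript
// Two *serial* SQL queries: total time is the user-to-function hop plus
// one function-to-database round trip per query.
function totalLatencyMs(userToFnMs, fnToDbMs, serialQueries) {
  return userToFnMs + serialQueries * fnToDbMs;
}

// Function near the user (Mumbai) but far from the database
// (San Francisco, ~250 ms round trip):
const farFromDb = totalLatencyMs(20, 250, 2); // 520 ms
// Function colocated with the database; the user pays the long hop once:
const nearDb = totalLatencyMs(250, 5, 2);     // 260 ms

console.log(farFromDb, nearDb); // 520 260
```

The long hop gets paid once either way; what matters is keeping the serial database round trips short, which is exactly the "minimize and shorten the hops" rule above.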

31:41

Just wanted to take a quick break to thank one of this week's sponsors, and that is Digital Ocean. So Digital Ocean is a simple, developer-friendly cloud platform, optimized to make managing and scaling apps easy, with an intuitive API, multiple storage options, integrated firewalls, load balancers, and more. I've personally been a customer of Digital Ocean for about five years, and I use them to host all of my server-side projects, like my custom course platform, for example, which is built with Laravel. A lot of the guests I've had on the show in the past are Digital Ocean customers as well. For example, Taylor Otwell, the creator of Laravel, uses Digital Ocean to host all of his products, like Envoyer and Laravel Forge, and Jeffrey Way actually uses Digital Ocean to host Laracasts as well. One of Digital Ocean's

newest features that I'm personally really excited about is managed databases, which lets you spin up a completely managed database server, so you don't have to worry about anything like backups, managing read-only replicas, or just general server maintenance. Now, Digital Ocean is already an extremely affordable service. You can spin up a server for as little as $5 a month, but they've been kind enough to offer a free $100 credit to Full Stack Radio listeners. So if you want to give Digital Ocean a spin, head over to do.co/fullstack, all one word, to claim your $100 credit. Thanks so much to Digital Ocean for sponsoring the podcast. Back to the show. I think it might be worth reiterating some of that with another example, because I want to make sure that I understand it, and, while you're explaining it, that the audience does too.

So I'm going to try to explain back what I understand, and you tell me what I got wrong, if I got anything wrong. Basically, what you're saying is: just because you can run the code in the house next door to where the person lives, if that code needs to make five SQL queries to the other side of the world, it probably makes a lot more sense for that person to request the code from the other side of the world, so that the five SQL queries can happen very close to each other, physically, and there's only one slow request instead of one fast request and five slow queries. It's one slow request from the user to the code, and then a bunch of fast queries. Okay, so, sorry, go ahead.

33:58

Yes. We should slice up the show and clip this part out, because that is a really awesome explanation.

34:05

So, um, you gave an example about Next.js and server rendering, which I thought was kind of interesting to think about, because I haven't thought about it in enough detail yet to fully grok it. But I'm thinking in my head, the way that Next works is it lets you basically render a React app on the server, sort of on demand, so that you don't get just the static code back and then see a spinner until you get the first load of data. So then, if you're rendering a Next app on Now, is it just like all the endpoints in your web app go to some single function that does the server-side rendering for every route?

34:51

No. So that's a benefit of using Next in general, or any framework that allows you to slice up what's called a single-page application into a multi-page application. So the premise of Next, for those that are not familiar with it, is we wanted to make using React very, very simple, but also very, very scalable. So a Next project at its simplest is simply a pages directory, where every JS file you put inside becomes a URL that is accessible through the web browser, and that returns some server-rendered React, basically some rendered JSX code. Very similar to how PHP works, where, you know, all your PHP files become entry points. For security reasons, we decided that instead of rendering any JS file,

it's only JS files that are inside of the pages directory. But what's cool is that when you deploy to our platform, each of these pages becomes an independent serverless function. They're all independently scalable, and they all kind of have their own isolation model. And this makes server rendering extremely fast, because, imagine that you have your e-commerce item page. So you create pages/item.js, and that page is your most visited page in the system, and all the others are not, really. So you wouldn't want the code execution of that particular page to have to load all the assets of all the other pages in the system. So this is kind of why Next.js has become so popular in the React community. The fundamental insight is the same one that applies to the web browser, which is when you load a page, you want to load as little JS as possible, right? Because, as you know, the Google Chrome developer team always reminds us: the more JS you load, the slower you make that page load.
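A sketch of the layout being described (file names are hypothetical): each JS file under the pages directory maps to one URL, and on Now each one is built into its own independently scalable serverless function.

```
pages/
  index.js    →  /
  about.js    →  /about
  item.js     →  /item    (the e-commerce item page from the example)
```

Requesting /item only loads and executes the code that item.js actually needs, not the assets of the other pages.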

Yeah, so I think they always quote a figure that is like, you know, 100 kilobytes of JavaScript can add one entire second, even from a cache, one entire second of just evaluation time. Or I think it's even less; it's probably, depending on how old the device is, tens of kilobytes of JavaScript can be one full second just parsing through the

37:22

code. Yeah, so that's interesting. It's not just the time to download the file that you have to worry about; at that point it hasn't even executed yet to actually make the website work.

37:31

Absolutely, and WebAssembly does make that better. It's much faster to crunch through one megabyte of WebAssembly than it is to crunch through one megabyte of JS. But the point stands, which is: you kind of want to take advantage of the fact that your URLs already have a granularity to them. When I go to amazon.com/product, I'm not going to amazon.com/settings. So that insight that we kind of learned from the web frontend side applies perfectly to serverless, because you package your functions in such a way that you're only putting the code that's necessary for each entry point of your application in there. And Now does this automatically for you, so you don't have to worry about all this. But as a result, what happens is your functions execute very fast, and they all scale independently. Whereas traditional servers only scale by becoming bigger and fatter and fatter. If you're familiar with containers, or, you know, any way of packaging server technology, every time you add something,

you're contributing to this pile of code that only gets bigger. There's just no technology that will make you wake up one day like, oh, my container's size became smaller. No, because you're constantly adding code. You're adding new routes; if you work on an API, you're constantly pushing out new versions, but you don't want to immediately deprecate the previous versions, right? Yeah, so your server only becomes bigger. Whereas with serverless, each page, each entry point, each API call is always going to an independent function. So you kind of never become slower, and you're never paying for the weight of things that you're not even supporting, or that are not even that popular anymore. So it's a really nice technology in the sense that it's not just that you're fast when you're getting started. The testimonials we get mentioned with Now are like, oh,

my first deploy was so fast. But what we're in the business of is making sure that your deployment number 1,000 is just as fast, and that the whole thing scales well. And that's a surprisingly hard thing to

39:53

do. Yeah, so, okay, I think this is a really good segue into probably the topic that really pushed me to bug you about coming on the podcast, which is that my understanding from reading about Now and reading through the documentation is that you guys are sort of opinionated about the way that you should be structuring your backend projects, even the ones that are being deployed in this serverless fashion. Where instead of having kind of one single entry point into the application that does some server-side parsing of the URL to determine which code to kick it off to, you sort of want to deploy every endpoint as its own function. So your routing is handled at the now.json configuration level, not in application code.

40:48

Yes, and that's a necessary step in order to take advantage of this granularity model that I described. Because you can perfectly well do your routing inside one function. For example, right now, for Node.js you can use Express, or with Symfony they support their own router system. But what you're doing there is you're fundamentally introducing a bottleneck, because instead of letting your customers access each function kind of directly, you're making everything go through a huge bottleneck, and you're creating this mega-function that contains all the possible routes. Again, it's like: if what we want on the client side is to have one bundle of code per section that the user is requesting, why would we want otherwise on the server? The example that I always give with Next that makes it click for people is: if I'm going to buy a product, why am I downloading the terms of service? And you know

how common this is, because there are a ton of frameworks and technologies out there where the router works like: you go into the router config, you add /tos, and then you point it at, you import, a module with webpack or whatever. So what you end up doing is that your bundle now contains even the terms of service of your application, ten kilobytes of, like, English legal text, when I'm going to a completely different section. So that's what we encourage people to avoid on the backend as well. And the easiest way to fall into this trap is: oh, I'm going to deploy a server to serverless. That's kind of the trap a lot of people can fall into, because serverless is such a powerful technology, nothing is technically stopping you from it.

You know, I create a sort of server abstraction, and I define every single route of my system in there, and then package it all into one function. There's nothing technically stopping you from doing that, but you're not going to fully realize the scalability benefits of serverless if you force yourself into that
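A minimal sketch of what routing at the configuration level can look like in a Now 2.0 now.json (the file paths here are hypothetical; the `builds`/`routes` shape and the `@now/node` builder are from Now's documented configuration):

```json
{
  "version": 2,
  "builds": [
    { "src": "api/*.js", "use": "@now/node" }
  ],
  "routes": [
    { "src": "/api/users", "dest": "/api/users.js" },
    { "src": "/api/sessions", "dest": "/api/sessions.js" }
  ]
}
```

Each matched file becomes its own serverless function, and the platform, not application code, maps incoming URLs onto them, which is the granularity model being described.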

43:09

corner. It's sort of like the wrong mental model of the whole thing. So the way that you guys encourage you to work is: if you're going to deploy a Node.js API, for example, with Now, there would be no Express involved, or any sort of routing-level framework like that. Anything like that is sort of handled at the Now configuration level. You can almost think of Now as sort of a framework for you, in some ways, in that sense.

43:38

Yeah. Yeah, that's a good description. One interesting way that you can use Express, and we have an example of this, is you can create lots of independent entry points, each for your API or your pages, and then you can actually export one Express instance from each and take advantage of Express's middleware and the API that you already know. But what we would not encourage you to do is to do all your app.get, app.post routing in one place. You probably want to handle one URL entry point per Express instance. Or you might want to do, say, one function that handles the DELETE, GET, and POST methods of one API endpoint, and then you create a separate one that handles the different methods for your other API endpoint, separately. And you're actually going to get a sense of this, because when you run now, or when you install our GitHub integration, you're going to see that as we build your project, we show you whether we're able to parallelize your build, and we show you the outputs of each of these processes. So we show you what functions were created. So in an ideal world,

as you continue to evolve your project, you're going to see lots of outputs: /api/users, /api/sessions, /api/files, and all of them will be independent. And then you see the weight of each, which is super cool, because weight is a great way of understanding how the complexity of your project evolves over time. So you can kind of see, oh, look, this endpoint is becoming really big, or, why did this API jump so much in size? Did I include the wrong node module, for example?

Because these are things that happen a lot in the Node community. Like, one day you include a new module, and all of a sudden you realize you're embedding a lot of extra code that you don't want. So serverless is also helping with this a lot. With servers, your node_modules directory might contain everything in the world, because no one actually gave you any limits or guidance; with servers, things just grow and grow and grow. So in a lot of cases, we've seen people including, you know, webpack in node_modules for production, and things kind of get out of control.

46:02

So, okay, that gets me to another topic that I think is interesting. But before we get into that, I want to ask one more question about this idea of every route being its own entry point from a file perspective, right? Like every endpoint is its own file, if we think about it in Node terms: a JavaScript file for every page. Is it still possible to share code between all that stuff? Like, what is the right mental model? Is it like every endpoint is its own microservice? Or, if I have database-querying code in a file, can I import that file into every page I want, so that I can still think of it as one sort of monolithic app, in a lot of

46:45

ways? I think that's, by the way, the key to all this. Monoliths have always allowed people to move really fast, because all the code lives in one repo, in one place. If you change one API, all the other files have to abide by the new API, and you find out at compile time instead of when your microservices fail to talk to each other. So monoliths are amazing from a productivity standpoint, but they have the growth problem and the scalability problem that we were talking about, which is they become this massive bundle of code. But from a code organization perspective they're freaking awesome, because, in the case that you were talking about, you want your APIs to have common methods and utilities. Of course; we do this all the time. But when they get built, they turn into several functions that get deployed to the cloud, and then each one will include only its own dependencies, and not all the dependencies of the system. So think of it this way:

you might have an API that does something really, really elaborate. Like, imagine that you have an API call that resizes a file and then uploads it to some other system. So sure, that process might bring in ImageMagick and things like that. But none of your other endpoints are going to do that. So when you run now, you're going to look at your outputs and, oh, look, this API function is definitely bigger than the others, but it makes sense; it's doing some extra stuff. But the other important part to think about is that you're never deploying these functions independently.

So you're always running now for the entirety of your project, or you run git push and then we build the entire project. So this is kind of the magic of this system. With microservices you always think about, oh, I'm going to go and redeploy this one microservice. Now almost doesn't let you do that. With Now, you're saying: I'm deploying my entire project as if it was a monolith. And that has a tremendously liberating effect, in that the entire thing works in cohesion, or the entire thing does not work in cohesion. So there's no sort of middle point.

49:11

It's sort of like the React mental model taken to deployments, where it's like: here's what I need you to have deployed, ultimately, at the end of the day, and I do not care how you do that. Please optimize it as much as you can; I don't want to think about the individual pieces. This is the source of truth, now make that a reality.

49:32

Yep. You're rendering your entire project.

49:35

Yeah, that's awesome. So, okay, with the idea of creating all these separate artifacts that are deployed as separate serverless functions, how is that happening from the user's perspective with Now? Does Now sort of have its own bundling system built into it that resolves dependencies and figures out how to create the artifacts? Or is that something you have to worry about setting up with some build tool?

50:01

This is a really good question. So we have a system that we call builders. They're very similar to Heroku buildpacks: what Heroku buildpacks were to servers, Now builders are to serverless. Okay, so these builders are open-source npm modules, actually, that will take your code and package it accordingly. So we built a bunch ourselves that are super useful for most people. Like, now/node supports Node.js functions and TypeScript; it actually uses ncc under the hood to bundle all the code with its dependencies together. If people in the audience have used

Go, yeah, the mental model of functions that I really like to think about is that when you run go build, you get a static binary of your project that is completely self-contained, and it only contains the dependencies that the Go code needed, right? Nothing else. In JavaScript we've never had this technology before, so the closest approximation of that is webpacking everything into one JS file. So that's exactly what the Node builder does, and we created a little wrapper for that. It works really, really well. But the key insight here is that when people have been packaging Node, they've been including the entirety of node_modules, which can even contain, like, testing dependencies, development dependencies.

Things get published to npm all the time with their tests, with huge README files, and, you know, you really don't want to package that into production. You don't want to do it from a security perspective, you don't want to do it from a common-sense perspective, and you ultimately don't want to do it because it's going to slow down your functions. And that's why all these builders take this constraint into consideration. And anyone can write their own builders; we have lots and lots of builders contributed by the community. What's cool about builders is that they can be as low-level or as high-level as you want. So, for example, we have a now/wordpress builder that will take a wp-config.php and then embed WordPress into it, like, bring in the entire framework at build time. So you can create these super minimalistic project layouts, right? You're not committing the entirety of the framework; you're only committing that wp-config.php and the now.json. And then you run now, and it just works.

52:46

That's pretty awesome. Yeah, that's really exciting. So one thing that kind of leads us into, that I think is a really exciting thing about Now, that is not obvious until you see someone point it out, is that, because Now supports tons of different languages, right? It supports Node, PHP, Go, Rust... I think it even supports Bash, which is kind of ridiculous. You can create one monolithic project where every API endpoint that you write, or every serverless function that you write, can be written in a different language,

which may or may not be the best thing to do for your project. But it's really fascinating that, if you really needed to optimize some operation and you needed to write it in Rust, you could do all of that in this big sort of monolithic multi-language project, where it's not like a separate service; it still feels like one project.

53:43

And there are many interesting applications of that, by the way. So there are a lot of scenarios where you actually don't even want a different runtime language, but you want a different build-time language. So an example is, there are a lot of people that want to build their static projects with Hugo. So they can use a builder that, at build time, creates and executes Hugo and generates their website, only in a specific part of their project. So you can say: this part of my project I want to build with Hugo, this part of my project uses Gatsby, and this part of my project uses Node.js functions. So you have the sort of freedom to combine and mix and match these technologies. And, like you said, it might not always be the best idea to just randomly start introducing languages into your stack and your company. But what's important here is that what we've noticed over decades of experience in open-source communities is that languages and frameworks tend to come and go, right? Like, literally, Go:

you know, the language was incubated at Google. They invested a lot of amazing resources into it. It has a tremendous standard library, and it's becoming more and more popular. But no one could have predicted that, I think. You know, like, oh, this language is going to come out of nowhere and it's going to take the world by storm. Yeah, or even running JS on the server side. So what we've noticed is languages and frameworks tend to change. What doesn't change is the fundamental primitives, the protocols, and the programming models.

So the interesting thing about Now is that the programming model is the exact same one that people had with PHP 20 years ago. And arguably, and this is my personal opinion, that is the most successful programming model for creating anything, on the web especially, and even for native apps. Once we start server-rendering native apps, which a lot of people are working on, that is the winning model, in my opinion, for how you want to respond to user traffic: just-in-time processing, and give them back some dynamic results. And that's never going to change. What's going to change is: oh, today I use PHP

7, tomorrow I use Laravel, the day after, Symfony, then I go to Node.js. So Now is sort of giving you this freedom, with the builder system, to sort of translate all these communities and languages and trends, but we're giving you this no-nonsense model that's going to make sure that your projects work and scale correctly. I
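The Hugo/Gatsby/Node mix described above could be sketched in a single now.json along these lines (the directory names are illustrative assumptions, and the exact builder package for each static-site generator should be checked against Now's docs):

```json
{
  "version": 2,
  "builds": [
    { "src": "blog/package.json", "use": "@now/static-build" },
    { "src": "marketing/package.json", "use": "@now/static-build" },
    { "src": "api/*.js", "use": "@now/node" }
  ]
}
```

Each `builds` entry applies a different builder to its own slice of the repo, so static-site generators and serverless functions in different languages coexist in one deployable project.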

56:31

just want to take a quick break to thank one of this week's sponsors, and that is Cloudinary. So if I had to describe Cloudinary myself, it's basically just the best way to store and serve images that I've ever seen. In the past, I used to use generic storage services like Amazon S3 to serve images, but after switching to Cloudinary, I genuinely cannot believe I ever did this stuff any other way. So here's one example of how Cloudinary has made my life easier. You probably know that typically images are the heaviest resource your users have to download when they visit your site, right? Usually way more than your JavaScript or CSS. So in the past, I would spend a lot of time tweaking settings in tools like ImageAlpha and ImageOptim to try and optimize my image files so they weren't so large. With Cloudinary, I can upload the full-resolution file without even really thinking about it, and then, by just adding a parameter to the image URL that I get back when I go to serve it on my site,

Cloudinary will automatically optimize the image as best as it can, usually resulting in file sizes that are actually lower than what I was seeing when trying to optimize the images by hand. This is even more useful for user-uploaded images, because instead of trying to do some fancy automatic image optimization in a background job on my own server or something, I can just send those images directly to Cloudinary from the browser, request the optimized version back by adding that URL parameter, and bam, I've got an optimized image at a really small file size. So there's an enormous amount of other cool stuff that you can do through the URL-based API; that's really just scratching the surface. You can do stuff like request images at different sizes, so you can serve smaller images on mobile devices and you're not wasting bandwidth. You can crop images to different dimensions. You can crop images using face detection, so it just crops to the faces in an image. You can automatically add watermarks or text overlays or tons of different effects and stuff like that. It's a seriously impressive service. So Cloudinary has an amazing free plan where you can store 300,000 images and videos.

And like I mentioned, you can do all this crazy stuff not just with images but also with videos. You get 10 gigabytes of storage and 20 gigabytes of monthly bandwidth on this free plan. So if you're not already using them, definitely head over to cloudinary.com and check it out. It really is one of my absolute favorite services that I use on my own projects. Thanks a ton to Cloudinary for sponsoring this episode. Back to the show. So I've got two more questions for you; I don't want to take up too much of your time. But the first one is: what is the local development story like, using Now as kind of the central tool that you're using for development, especially if you're doing all this crazy stuff with multiple languages?

59:11

Actually, it's all designed to work perfectly on localhost, and the key word there is "designed," because not every builder has yet been fully prepared to work locally. But we're soon introducing, it's already in preview mode, this now dev command that will take these builders, which are, like I said, npm packages, and run them locally, and give you the exact same experience that you get in the cloud, on localhost. And interestingly enough, and we're about to announce this, you get the exact same semantics as well. So the scalability model, if you send a lot of concurrent requests, and the shutdown model of the functions, every little detail mimics the model that you would experience in production.

So it's kind of like, it's not just running the code; it's also running what we call the scheduler that all these different providers use in production. And we see it as our mission to actually extract out these details from them, because a lot of them are not well documented, and we open-source them, and we make them work on localhost. So that's kind of been our mission from the beginning. It's like: hey, this serverless stack that's being sort of rediscovered is really cool, but really, what matters the most is developer experience. So we're extracting out all these details from the different clouds, and we're sort of reproducing them in a local environment, to give you the best possible, like, no-latency developer experience.

60:50

Awesome. That's really exciting; really looking forward to checking that out. So the last question that I have for you, which I think would kind of be a good place to close things off on a really practical note, and maybe answer some questions for people: a lot of the stuff that we've talked about has been sort of, I don't want to say abstract, I think we've been talking about things in very concrete terms, but I think it's always great to come to a real application example and talk about some of the stuff that we've been discussing, so people can relate to it a little more closely. So I thought it'd be interesting to just briefly talk about the Now dashboard itself, because I think a lot of listeners especially could look at the Now dashboard and click around in it, and they could think in their head, like, how I would build this as a Rails app or a Laravel app, for example.

Um, so I'm curious, first of all: is the Now dashboard built following all the sort of practices that you would recognize building stuff on Now? So is that deployed on Now too, with all separate functions,

61:53

stuff like that? Yeah, absolutely. And it's hundreds of functions, actually, which is really funny, but it kind of shows you the amazing scale of this system. Every single push that we make to our website goes through the same build process that our customers go through. We have some optimizations for taking advantage of our CDN. So when you're logged out, if you go to zeit.co, that's z-e-i-t dot co, you're going to feel that it's pretty damn fast. Every click, every transition, everything is always rendered from a cache closest to your location, automatically.

So we're routing you to the CDN cache that's closest to you automatically, and then we're probably returning from the cache. Now, when you log in, things change, because then we can server-render your dashboard; we can do anything that we need to do to give you the data that you need as fast as possible. So yeah, each of these pages is a Next.js page that is being built into serverless functions automatically. And it's funny that you mention that, because we're open-sourcing more and more of this, naturally, because we realize that most of what most people in the world need is: they need an authentication system, then they need a dashboard overview of events and projects, and, you know, they want all that to be fast and easily deployable. We actually follow the same sort of deployment model that we advocate for,

which is, we use the GitHub integration. So every PR that gets merged into master goes out to zeit.co. It's quite nice; it's a joy to use. And this project has grown quite dramatically; it has close to a dozen people committing to it. So it puts the stress on our system of the demands of a high-performance team that is constantly pushing out new code and that wants the builds to be really fast. So it's a great experience to build our tool while building this tool for everyone else. So I highly recommend that.

64:05

Yeah, awesome. So I'm just looking at it now, kind of clicking around. So what's the backend built in? Are there multiple languages, or is it just like a Node.js backend?

64:16

It's all Next.js for the actual rendering of the pages, and we call out to Node APIs occasionally.

64:27

Okay, yeah, to do stuff like, like I was clicking around, for example, if I want to change my name on my profile or something, that makes, like, a...

64:35

That's also all Node.js

64:38

APIs. And what services are you guys using for this particular app, for persistent data storage and stuff like that?

64:45

Great question. So for this we use Cosmos DB. It's a database by Microsoft that is geo-replicated; it allows you to sort of click on a map and replicate your data. So we currently replicate to, I believe, at least North America and Europe. And this is actually why we decided to deploy the functions to different locations, because we know that we have the data there, and it will make the experience really fast when we're doing dynamic stuff. And then we use services like, we actually use Redis Labs for some stuff, we use Sentry for error reporting, we use S3 and Google Cloud Storage quite extensively, and we use MongoDB Atlas for a few things as well. So actually, we've really eaten our own dog food, because over time we've gone through a lot of different cloud providers and services. This idea of not being locked in has paid major dividends to us, because we've been able to try it all and always go with the solution that works best.

65:57

Very cool. So one last question about the services. I think this is something that, for anyone who's trying to build a single-page app for the first time, is always a stressful topic, and that's authentication. So I was doing a little bit of network-request peeking to kind of get an idea of what you guys are doing, and it looks like you're doing a token-authentication sort of approach, where you just send a token back to the API. What are you doing to look up that token and figure out the user identity? Is that stuff stored in, like, Cosmos DB too, or... yeah?

66:28

That's what Cosmos is for, yeah. Um, if I were to do it all over again, I think at the time when we started, Auth0 wasn't as mature, but now it's kind of a no-brainer. I would totally use Auth0 or something like it

66:44

and just offload authentication entirely?

66:47

Yeah, because they're going to do a much better job. For example, say we want to add features that touch Active Directory authentication; we could use their SDK and give our customers, like, Active Directory session resumption overnight. But I think the experience of rolling it out ourselves hasn't been bad, specifically because we kind of wanted to push the boundary of passwordless authentication. Um, so building it ourselves gave us the flexibility to kind of decide what the ideal onboarding experience would be for customers. And it actually worked really well, and I'm really happy with it, all things considered.

67:32

Awesome. Well, I think that would maybe be a good place to start wrapping things up, because I've taken up most of your afternoon here, I think. So what is the best place for people to kind of keep up with you and the new things that are going on at Zeit? And is there anything else that you wanted to leave the listeners with before we wrap things up?

67:53

Just follow us on Twitter at twitter.com/zeithq, that's z-e-i-t-h-q, and we have plenty of examples and awesome documentation about how to get started, all linked on our website. And yeah, don't hesitate to reach out to us with questions, especially after having listened to the podcast. If you let us know that you listened to it and you have a follow-up question, we'll definitely give you a personalized response.

68:22

So there you have it, folks. I hope you enjoyed this conversation with Guillermo. If you're interested in the show notes for the podcast, they will be at fullstackradio.com/112. Thanks to DigitalOcean and Cloudinary for sponsoring the podcast this week, and we'll see you next time.

powered by SmashNotes