Is “serverless” yet another buzzword?
Geeks and engineers love to describe technology using fancy words. Nowadays, we don’t talk about servers – everything’s “in the cloud”. We don’t mention databases, as everyone’s focused on more sophisticated terms, like “big data” or “blockchain”. Marketing also loves such jargon – just add a “crypto” prefix to your company’s name and watch your sales grow.
The main problem with such buzzwords is that they can sometimes hide the true meaning of what we’re using. And “serverless” is a perfect example of that. One could imagine some strange technology with no servers at all, but we all know that’s not the case. So what does “serverless” actually mean?
In short, going “serverless” means that you don’t need to take care of the servers running your app.
Your only concern is the app itself. Server management and provisioning are outsourced, so you can focus on writing your business logic. It saves you time and lets you introduce new features more frequently.
Well, at least in theory… We know that the “hello world” example in every technology always looks brilliant, but things tend to get complicated soon after, depending on how complex our solution is. I hope that after reading this article you’ll be more aware of both the pros and the cons of adopting serverless in Node.js.
Can I get rid of DevOps by going serverless?
Because there’s no need to maintain servers by yourself, you might think of saying goodbye to your beloved DevOps team. However, that would be a disastrous choice.
For development purposes, it’s quite an efficient way to let developers handle their staging environments, and serverless computing makes it even easier (though it still depends on the type of product they’re creating).
However, things get more serious when we deploy to production. There are many aspects to consider: monitoring the application, having a proper deployment strategy, taking care of security, networking, debugging, scaling the whole system and more. This is the perfect place for DevOps specialists. Let’s not forget about them.
Do I need serverless at all?
You’ll soon be able to answer it for yourself. Let’s take a look at the example of a company which used the serverless approach to its advantage.
There was once a tool called Readability, made by Postlight. It allowed to get the website’s content, transform it into some ebook reader’s format and then send it to the device. It was so popular that the monthly cost of maintaining it grew up to $10,000. This is how much you pay when you have around 39 million requests per month (which gives around 15 requests per second).
Postlight had to reduce the cost. Fortunately, they found out how to make use of serverless computing.
Their new solution, called Mercury Web Parser, costs around $370 per month with similar traffic. How exactly did they achieve that? By using the FaaS service model, hosted on AWS Lambda.
It’s possible that you’re considering using such technology yourself right now. Therefore, it’d be wise to have a deeper understanding of various cloud computing service models first. Let’s dig into the details then.
Cloud computing service models
It’s quite important to understand the characteristics of cloud computing service models related to the serverless computing. There are some significant differences between the said models and you need to be aware of it before starting your project. Let’s take a look at a few of them:
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
- Software as a Service (SaaS)
- Function as a Service (FaaS)
Infrastructure as a Service
Unlike bare-metal solutions, where you own a machine or rent dedicated servers, adopting this approach lets you forget about everything physical – from networking to storage to servers and their virtualization. However, while the IaaS model reduces the need to maintain servers, you still have plenty of concerns on other levels. They include the operating system’s security updates, third-party software, the application runtime and data storage. What’s more, application scaling might be troublesome because there’s no ready-made solution provided by the vendor. On the other hand, there are no major restrictions on system architecture, so you can set everything up as you wish. The IaaS approach can be found in Amazon EC2 and Google Compute Engine.
Platform as a Service
Providers adopting the PaaS model take responsibility for even more aspects than the IaaS ones. You no longer need to maintain an operating system or application runtime, and providers usually offer automatic application scaling. It’s worth mentioning that, in this case, the system runs servers as long-running processes and the user is charged for running time, even if there’s no traffic at all. Additionally, scaling up usually means adding more processes and load balancers. Therefore, when using PaaS, you need to beware of excessive scaling to avoid high bills – traffic monitoring and capacity planning are a must. If you’re interested in this particular cloud computing model, take a look at AWS Elastic Beanstalk, Heroku or Google App Engine.
Software as a Service
In the SaaS approach, everything’s controlled and hosted by a cloud vendor. You can only use software provided in that model. Depending on the type of software, some integrations are possible later through various APIs. Popular examples are Microsoft Office 365, Salesforce, Google Apps or Dropbox.
Function as a Service
You might have noticed that there’s a niche between the PaaS and SaaS models. This is where the FaaS solution comes in. Unlike in the PaaS approach, here we’re charged per function execution, and functions are executed per request. Scaling is automatic and as simple as executing more functions concurrently. Thanks to that, we don’t need to plan capacity, and the system scales up effortlessly, even during high spikes in traffic. The main drawback of FaaS is its inability to handle long-running processes.
Choosing the right one
In our case, we’d like to outsource as much as possible, but keep the possibility to write our custom code at the same time. Therefore, the only valid options are PaaS and FaaS. And, if we add the next requirement – that is cost savings – then the FaaS model suits us the most. Its restrictions regarding long-running processes and various other aspects might be troublesome at the start, but we can handle that. Now, it’s time to find out where our application can be hosted.
There are plenty of providers on the market, which makes it quite hard to choose “the right one”. Let’s mention just the most popular ones:
- Amazon Web Services
- Google Cloud Platform
- Microsoft Azure
- IBM OpenWhisk
For those who are just getting started, choosing a serverless provider might be a little overwhelming, as there are a lot of aspects to compare and consider.
Let’s start with the pricing. Almost every provider has something called a “free tier”, which lets you use their services almost for free for some time. During that period, you can verify various aspects of a service to make sure it satisfies your needs. I strongly encourage you to seize that opportunity.
You should also consider the maturity of the services provided by these companies. AWS has been on the market the longest and has had plenty of time to develop various top-notch features. You can combine them into a perfect ecosystem for your application. You can also find related technologies and tools that might be helpful during development – if you’re interested in a particular service, such as Amazon Alexa, it’ll be much easier to build the whole solution on the services of a single provider.
On the other hand, there’s Google Cloud Platform with its sophisticated big-data services and infrastructure tools, such as BigQuery or Kubernetes. It excels at large-scale data processing and, according to some sources, its global infrastructure is one of the fastest growing. It’s also worth mentioning that it works very well with all of Google’s services and APIs: Maps, Docs, Search and many others.
However, regardless of your choice, most aspects of serverless computing (FaaS specifically) should remain almost the same. For the sake of this guide, we’ll choose Amazon Web Services – because of the ease of use and the possibility of combining it with other services. Let’s take a look at its FaaS offering, AWS Lambda.
Closer look at AWS Lambda for Node.js
There are various runtimes you can use, such as Python, Java, .NET Core or Go, but we’ll focus on Node.js. The only available Node.js runtime versions are 4.3.2, 6.10.3 and 8.10. The last one was introduced not so long ago, and with this version it’s possible to write asynchronous function handlers. Unfortunately, newer Node.js versions are not available right now, and it usually takes some time before they’re introduced. Therefore, if you’re using newer Node.js features, migrating your code to AWS Lambda might be more complicated. Nevertheless, let’s take a closer look at the two crucial characteristics you’re probably wondering about – limits and pricing.
The most interesting part is that you pay only for the execution time (multiplied by the amount of resources used during that time). When your functions are not being invoked, there are no costs at all, even though they’re available for execution all the time. This makes it a perfect solution for services which are not executed frequently or have low requirements for execution time and memory consumption.
With AWS Lambda, 1 million function executions per month are free. Beyond that, you’re charged $0.20 for every subsequent million requests – which is still fairly cheap. However, you need to consider memory consumption too. Every month, the first 400,000 GB-seconds are free; then you pay around $0.00001667 per GB-second. A GB-second is a unit of memory used over time, so the lower your memory consumption, the longer your functions can run for the same price, and vice versa.
It’s very important that, unlike in other service models, every code optimisation in the FaaS model reduces costs – whether it lowers memory usage or execution time.
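To get a feel for the pricing formula, the numbers above can be put into a short script. The prices and free-tier limits are the ones quoted in this article (they may have changed since), and the example memory size and duration are purely illustrative:

```javascript
// Rough AWS Lambda monthly cost estimate based on the figures above.
const FREE_REQUESTS = 1e6;            // first million executions are free
const REQUEST_PRICE = 0.20 / 1e6;     // $0.20 per million requests after that
const FREE_GB_SECONDS = 400000;       // free compute allowance per month
const GB_SECOND_PRICE = 0.00001667;   // $ per GB-second after the free tier

function monthlyCost(requests, memoryMb, avgDurationSec) {
  // GB-seconds = executions × memory (in GB) × average duration (in seconds)
  const gbSeconds = requests * (memoryMb / 1024) * avgDurationSec;
  const requestCost = Math.max(0, requests - FREE_REQUESTS) * REQUEST_PRICE;
  const computeCost = Math.max(0, gbSeconds - FREE_GB_SECONDS) * GB_SECOND_PRICE;
  return requestCost + computeCost;
}

// e.g. Readability-like traffic: 39 million requests/month,
// assuming 128 MB of memory and 200 ms average duration
console.log(monthlyCost(39e6, 128, 0.2).toFixed(2));
```

Note that real Lambda billing rounds each invocation’s duration up to the nearest 100 ms, which this sketch ignores – it only illustrates how request count, memory and duration interact.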
Also, the free tiers make it a perfect solution for prototyping or small applications with low traffic. In a perfect scenario, you won’t even pay a single penny for it. As a result, companies may be more eager to test new solutions based on FaaS or to use it as a base for a startup.
Like elsewhere, there are some limitations and it’s wise to be aware of them at the beginning of your journey with AWS Lambda.
Functions can use no more than 3008 MB of memory, and their maximum execution time is 15 minutes. That’s why AWS Lambda is best suited for running small programs. Secondly, because there are no long-running processes, it’s not possible to use WebSockets directly – additional third-party solutions (e.g. AWS IoT or Fanout) must be used to achieve that functionality. Our functions are then simply invoked by an external source and serve as the backend containing the logic.
There’s also a default limit of 1000 concurrent executions; however, it can be increased if needed. Additionally, every function is executed in a context with 512 MB of ephemeral disk capacity, which can be used as a cache. We’ll focus on how to use that in the second part of the article.
We know the basics, so it’s a perfect time to run some code. Let’s write our first serverless “hello world” application.
Quick start with AWS Lambda and Serverless Framework
The simplest possible function is just a few lines of code. Let’s take a closer look at what’s behind the scenes.
Each function receives arguments such as event and context, and returns an object. It must be exported and given a name – in this example, handler. The returned object’s data may vary, but if you want to use the function with AWS API Gateway, don’t forget to return a status code and body so it can produce a correct response.
An event is an object containing data from various other AWS services – depending on the type of infrastructure used. For example, if functions are called through API Gateway, the event contains information about request parameters and headers. On the other hand, if a function is invoked in response to a file being added to AWS S3, further details about that file are stored there.
Context has some additional information about the execution, like remaining time, used memory and so on. It might be useful, depending on the type of functions you are creating.
The presented function is just an example of a contract you need to fulfil – your entire web application logic could live inside. Of course, you can still organise your code as you like, but eventually it needs to be wrapped in a handler.
That’s not the end of our example, as uploading it to AWS Lambda requires some manual steps, and it might become tedious to update and upload those functions over and over again. Therefore, automatic provisioning scripts and tools can come in handy – our weapon of choice is called the Serverless Framework. Let’s take a look at its simplest configuration file.
Configuration consists of a service name, details about the provider and the declaration of functions. The latter part is the most interesting for us. Each function needs to be described with the proper handler name matching our implementation and with the events which can invoke it. In this particular example, we used an http event, which will also result in the creation of an AWS API Gateway service. It’ll be responsible for invoking our function and passing any request parameters to it.
But what to do with these two files? Just install the Serverless Framework’s NPM package and, after configuring your AWS credentials, you’re ready to go. Then, run the serverless deploy command and the deployment should finish in a couple of minutes.
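The steps above boil down to two commands; this is a sketch of a session, with the output heavily abridged and the generated endpoint URL being a placeholder that will differ per deployment:

```shell
$ npm install -g serverless
$ serverless deploy
...
Service Information
service: hello-world-service
endpoints:
  GET - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/hello
functions:
  hello: hello-world-service-dev-hello
```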
The framework outputs a lot of logs describing the whole process. The most interesting part is at the end, in the endpoints section: there’s the URL at which our function is available. Let’s call it with curl to check how it works.
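Such a call could look like this (the URL is a placeholder for whatever the deploy step printed):

```shell
$ curl https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/hello
{"message":"Hello world!"}
```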
It works as expected, returning our “hello world” message.
There are many more commands you can use to interact with your functions. The most interesting one is serverless invoke, which allows you to invoke a function directly from the console. Moreover, with serverless logs, you can check details about execution times and billed durations. Take a look at the Serverless Framework documentation for more – there are lots of powerful commands out there.
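For a function called hello, as in our example, those two commands might be used like this (the payload shown is illustrative):

```shell
$ serverless invoke --function hello
{
    "statusCode": 200,
    "body": "{\"message\":\"Hello world!\"}"
}
$ serverless logs --function hello
```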
Serverless technologies are trending nowadays, and we can make use of them in perfect collaboration with Node.js. Although there are a lot of differences from the traditional approach, getting started with serverless is not that hard at all. Knowing the various cloud computing service models gives you the ability to choose the best option for you. The perfect choice may vary, but if you decide on FaaS for your project, going with AWS is quite a safe bet. Just be aware of the pricing model and the limits, because they won’t fit every kind of application. If you believe it suits your needs, start with some experimentation and a simple example. It’ll be a good base for further development.