The phrase “serverless” doesn’t mean servers are no longer required. It simply means developers no longer have to think much about them. Going serverless lets developers shift their focus from the server level to the task level: writing code.
What does it mean to have servers?
First, let’s talk about what it means to have servers (virtual servers) providing the computing power your application requires. Owning servers comes with responsibilities:
- Managing how application primitives (functions in the case of applications, or objects when it comes to storage) map to server resources (CPU, memory, disk, etc.).
- Provisioning (and therefore paying for) the capacity to handle your application’s projected traffic, whether or not there is actual traffic.
- Managing reliability and availability mechanisms such as redundancy, failover, and retries.
Advantages of going Serverless
Why one should move to a serverless architecture is best described through its benefits.
- PaaS and Serverless – A user of a traditional PaaS has to specify the amount of resources (such as dynos for Heroku or gears for OpenShift) for the application. A serverless platform takes care of finding a server to run the code on and scaling up when necessary.
- Lower operational and development costs – The containers used to run these functions are decommissioned as soon as execution ends, and execution is metered in units of 100 ms, so you pay nothing when your code isn’t running.
- Fits with microservices, which can be implemented as functions.
Serverless architectures refer to applications that depend significantly on third-party services (known as Backend as a Service, or “BaaS”) or on custom code run in ephemeral containers (Function as a Service, or “FaaS”). There are also downsides to moving your application to FaaS, which are discussed in our next post: Building Serverless Microservices with Python
The simplest way of thinking about FaaS is that it changes the model from “build a framework to sit on a server and react to multiple events” to “build/use micro-functionality to react to a single event.”
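As a sketch of this single-event model, a FaaS handler can be written as a plain function: one event in, one response out, with no server loop or routing framework. The event shape, handler, and greeting logic below are illustrative, not any specific platform’s API:

```java
import java.util.Map;
import java.util.function.Function;

public class GreetFunction {
    // A FaaS-style handler: the platform invokes it once per event.
    // Here the "event" is just a key-value map, for illustration.
    public static final Function<Map<String, String>, String> handler =
            event -> "Hello, " + event.getOrDefault("name", "world") + "!";

    public static void main(String[] args) {
        // Simulate the platform delivering a single event.
        System.out.println(handler.apply(Map.of("name", "Nexastack")));
    }
}
```

The function holds no state between invocations, which is what lets the platform spin containers up and down freely.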
How to migrate to a Microservices Architecture?
In a simple definition, microservices are independently scalable, independently deployable systems that communicate over protocols such as HTTP (with XML or JSON payloads), Thrift, or Protocol Buffers. Microservices are the Single Responsibility Principle applied at the codebase level.
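To make the “communicate over HTTP (JSON)” part concrete, here is a minimal sketch of a single-responsibility service using only the JDK’s built-in `com.sun.net.httpserver` package; the endpoint path and the hard-coded JSON payload are illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class EmployeeEndpoint {
    // Starts a tiny HTTP service with one JSON endpoint and returns the
    // port it bound to (passing 0 asks the OS for a free port).
    public static int start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/employee", exchange -> {
            byte[] body = "{\"id\":1,\"name\":\"Alice\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server.getAddress().getPort();
    }

    public static void main(String[] args) throws Exception {
        int port = start(8080);
        System.out.println("Employee service listening on port " + port);
    }
}
```

A real service would serialize domain objects instead of a literal string, but the shape is the same: one small process, one responsibility, a plain protocol.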
Below are some of the factors that can be followed to build Microservices:
- One codebase per app/service: There is always a one-to-one correlation between the codebase and the service.
- Explicitly declare and isolate dependencies: This can be done by using packaging systems.
- Use environment variables to store configurations.
- Strictly separate build, release and run stages.
- Treat logs as event streams. Route the log event stream to an analysis system such as Splunk for log analysis.
- Keep development, staging, and production as similar as possible.
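The environment-variable factor above can be sketched in a few lines of Java. The variable name `EMPLOYEE_DB_URI` and its development fallback are assumptions for illustration:

```java
public class AppConfig {
    // Read configuration from the environment, falling back to a local
    // development default when the variable is not set. The variable name
    // and default value are illustrative.
    public static String dbUri() {
        return System.getenv().getOrDefault("EMPLOYEE_DB_URI",
                "mongodb://localhost:27017/EmployeeDB");
    }

    public static void main(String[] args) {
        System.out.println("Using database: " + dbUri());
    }
}
```

Because the value comes from the environment, the same build artifact can run unchanged in development, staging, and production, which also supports the “keep environments similar” factor.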
Microservices Architecture: Benefits
Microservices Architectures have lots of very real and significant benefits:
- Systems built in this way are inherently loosely coupled
- The services themselves are very simple, focusing on doing one thing well
- Multiple developers and teams can deliver independently under this model
- They are a great enabler for continuous delivery, allowing frequent releases whilst keeping the rest of the system available and stable
In this post, we will implement a Nexastack function that integrates with a database (MongoDB is used here). We are going to implement this function in Java using the Spring Framework. So, let’s get started –
We are going to build an Employee Service consisting of a function that returns employee information from the database. For demo purposes, we implement one function, “GetEmployee”.
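Before wiring in MongoDB and Spring, the shape of “GetEmployee” can be sketched with an in-memory stand-in for the data store. The `Employee` fields and sample data are illustrative assumptions:

```java
import java.util.Map;
import java.util.Optional;

public class GetEmployee {
    // Minimal Employee record; in the full service these fields would be
    // loaded from the Employee collection in MongoDB.
    public record Employee(int id, String name, String designation) {}

    // Stand-in data store; the real function queries MongoDB instead.
    private static final Map<Integer, Employee> STORE = Map.of(
            1, new Employee(1, "Alice", "Engineer"),
            2, new Employee(2, "Bob", "Analyst"));

    // The GetEmployee function: one request (an employee id) in,
    // one result out, empty when no such employee exists.
    public static Optional<Employee> handle(int id) {
        return Optional.ofNullable(STORE.get(id));
    }

    public static void main(String[] args) {
        handle(1).ifPresent(e -> System.out.println(e.name()));
    }
}
```

Swapping `STORE` for a MongoDB query is the only change the function body needs, which is what keeps the function small and testable.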
1. Setting up MongoDB Instance
- Install MongoDB and configure it to get started.
- Create Database EmployeeDB
- Create a collection Employee (MongoDB stores documents in collections, not tables)
- Insert some records into the collection for the demo.
- Write a file “config.properties” to set up configuration for the serverless platform.
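A minimal config.properties might look like the following. The property keys and values are illustrative, since the exact names depend on how the function reads its configuration:

```properties
# MongoDB connection settings (illustrative keys and values)
mongodb.host=localhost
mongodb.port=27017
mongodb.database=EmployeeDB
mongodb.collection=Employee
```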