
How To Build a Serverless React.js Application with AWS Lambda, API Gateway, & DynamoDB – Part 4

Part 4: Code Review, Deploy, & Configure an effective CI/CD Pipeline on AWS

This is a continuation of our multi-part series on building a simple web application on AWS using AWS Lambda and the ServerlessFramework. You can review the first, second, and third parts of this series starting with the setup of your local environment at:

You can also clone a sample of the application we will be using in this tutorial here: Serverless-Starter-Service

Please refer to this repo as you follow along with this tutorial.

Code Review Agility

We have covered a lot of ground by now, so it may be best to take a minute and work through a Code Review of sorts that we can use to absorb all of the information we have reviewed and implemented together, before merging our code and deploying our services into production. It is definitely a good idea to take this time to think about the workflow we will put in place to manage our day-to-day development and implementation activities as a team of engineers. In my case it’s just Wilson and I, remember?

Collaborating with other team members can be tricky; that is, it can be tricky if you don’t have a plan in place to work in an organized manner!!! We recommend that the documentation for any project repository include a set of ContributionGuidelines.md that dictate how each member of your team shall interact with the organization’s source code. There are numerous examples out in the ether that you can look to as a set of guiding principles, and yet, you know I have a suggestion for you anyway. As someone who has published OpenSource work for NASA as a contractor, I suggest you model any ContributionGuidelines that you and your team adopt after the NASA OpenMCT project guidelines to start off. It is pretty straightforward and really gets to the point in terms of what a good OpenSource contribution policy should look like.

Pull Request Checklists: Examples & Best Practices

In an Agile development environment, we just want a process that we can use to iterate over a predefined Code Review workflow that will help us implement and merge new updates to our source code efficiently, transparently, and with close to zero downtime in the Wild. When a team member writes and implements a set of features, there should be someone, again, in my case Wilson, who will review the code implemented on a topic-branch in git after you create a Pull Request for your project lead to review.

The Code Review process is an important part of your team workflow for a few reasons: it allows you to share the knowledge you gained from implementing the logic and functionality that defines the feature you will deploy; it gives you a layer of quality assurance that lets your peers contribute and provide insight into that feature; and it allows new team members to learn from the rest of the team by taking ownership of a feature and implementing the logic the new feature needs to deploy.

The Agile framework requires that every Code Review complete a Pull Request in phases. The first phase requires your Tech Lead to look at your implementation and any changes you made, and compare them to the code that you are refactoring. The second phase requires the Reviewer to add feedback and comments to your implementation that critique your work. The Reviewer should get you to question all scenarios and edge-cases that you have to consider before approving your Pull Request.

You should complete and attach these checklists to any pull requests you create or file as an author or reviewer of a new feature in your project’s repository. When you decide to merge a pull request, please complete a checklist similar to the versions provided below:

Author Checklist

  • Do the changes implement the correct feature?
  • Did you implement and update your Unit Tests?
  • Does a build from the Cmd Line Pass?
  • Has the author tested changes?

Reviewer Checklist

  • Do the changes implement the correct functionality?
  • Are there a sufficient amount of Unit Tests included?
  • Does the file’s code-style and inline comments meet standards?
  • Are the commit messages appropriate?

Collaborating with your team throughout the Code Review process is the most important part of the Pull Request workflow. When you create a Pull Request you initialize a collaborative review process that iterates between the updating and reviewing of your code. The process finishes with the Reviewer executing a merge of your code onto the next branch in the deployment stage that your team defined.

The Pull Request & Agile Code Review

[Image: Agile Code Review workflow]

What have you done!?!?

If you have gotten this far I am impressed. You might still have a chance at completing your first production-ready application soon enough. We still need to customize the logic in our starter template so that it matches the list of Lambda functions that I promised we would eventually get around to implementing. We are not there yet, but we are really close. Let’s take a second to put it all down into a sort of OnePager that we can use to refer back to these commands throughout future projects.

Serverless MicroService Implementation Best Practices
  1. Setup local Serverless Environment:
    • Install nvm and Node.js v8.10.0
    • Setup Editor & Install ESLint
    • Configure SublimeText3
  2. Configure your Serverless Backend on AWS:
    • Decrease Application Latency with Warm Starts
    • Understand the AWS Lambda Runtime Environment
    • Register with AWS and configure your AWS-CLI
    • Setup the ServerlessFramework for local development
  3. Infrastructure As Code, Mock Services, & Unit Testing:
    • Configuring your Infrastructure As Code
    • Configure an API Endpoint
    • Mocking Services before Deployment
    • Unit Testing & Code Coverage
    • Run Tests before Deployment

Deploy the ServerlessStarterService Template

We have finally gotten to the coolest part of this tutorial. The buildup is enormous, I know. Do not let your head explode on me yet, I promise there is a point to all of this. We have already run npm install, lint, and test on the Serverless-Starter-Service that we cloned to our local machine; we also mocked the Serverless-Starter-Service locally with sls invoke local, and our local environment responded with all of the appropriate Response: 200 OK messages as expected. Now it is time to deploy our serverless + microservice to see what we can do. Are you ready?!?!

Navigate into the Serverless-Starter-Service project directory on your local machine and execute the following command in your terminal:

  • $ serverless deploy
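
By default, serverless deploy uses the stage and region declared in the provider block of your serverless.yml. If you need to target a different stage, or just want to push updated code for a single function without a full CloudFormation stack update, the ServerlessFramework supports a couple of useful variations; a quick sketch (displayService is the function name from our starter template):

  • $ serverless deploy --stage dev --region us-east-1
  • $ serverless deploy function --function displayService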

Here is the result of the deployment; the output also includes the Service Information you need to consume the resources from the API you have just implemented on AWS!

Serverless-Starter-Service Output

MyDocs (master) ServerlessStarterService
$ serverless deploy
Serverless: WarmUp: setting 1 lambdas to be warm
Serverless: WarmUp: serverless-starter-service-node-dev-displayService
Serverless: Bundling with Webpack...
Time: 1008ms
Built at: 2019-03-29 19:19:19
               Asset      Size  Chunks             Chunk Names
    _warmup/index.js  8.94 KiB       0  [emitted]  _warmup/index
_warmup/index.js.map  7.42 KiB       0  [emitted]  _warmup/index
          handler.js  6.84 KiB       1  [emitted]  handler
      handler.js.map  5.82 KiB       1  [emitted]  handler
Entrypoint handler = handler.js handler.js.map
Entrypoint _warmup/index = _warmup/index.js _warmup/index.js.map
[0] external "babel-runtime/regenerator" 42 bytes {0} {1} [built]
[1] external "babel-runtime/helpers/asyncToGenerator" 42 bytes {0} {1} [built]
[2] external "babel-runtime/core-js/promise" 42 bytes {0} {1} [built]
[3] external "source-map-support/register" 42 bytes {0} {1} [built]
[4] ./handler.js 2.58 KiB {1} [built]
[5] external "babel-runtime/core-js/json/stringify" 42 bytes {1} [built]
[6] external "babel-runtime/helpers/objectWithoutProperties" 42 bytes {1} [built]
[7] ./_warmup/index.js 4.75 KiB {0} [built]
[8] external "aws-sdk" 42 bytes {0} [built]
Serverless: Package lock found - Using locked versions
Serverless: Packing external modules: babel-runtime@^6.26.0, source-map-support@^0.4.18
Serverless: Packaging service...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
.....
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service serverless-starter-service-node.zip file to S3 (1.4 MB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
................................................
Serverless: Stack update finished...
Service Information
service: serverless-starter-service-node
stage: dev
region: us-east-1
stack: serverless-starter-service-node-dev
resources: 16
api keys:
  None
endpoints:
  GET - https://g0xd40o7wd.execute-api.us-east-1.amazonaws.com/dev/starterService
functions:
  displayService: serverless-starter-service-node-dev-displayService
  warmUpPlugin: serverless-starter-service-node-dev-warmUpPlugin
layers:
  None
MyDocs (master) ServerlessStarterService
$

Serverless Starter Service Template API Endpoint

This is the endpoint for our Lambda. If you have followed along, you should have something similar that produces a similar result:

  • Template Endpoint: https://g0xd40o7wd.execute-api.us-east-1.amazonaws.com/dev/starterService
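
You can also trigger the endpoint from your terminal instead of the browser; a quick sketch with curl, substituting the API ID from your own deploy output:

$ curl https://g0xd40o7wd.execute-api.us-east-1.amazonaws.com/dev/starterService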

Deployment Output to terminal

[Image: Terminal output showing the template endpoint]

Triggering the new resource we deployed on API Gateway, from the address bar in our browser, at our new Template Endpoint https://g0xd40o7wd.execute-api.us-east-1.amazonaws.com/dev/starterService, will produce the following output on our device screen:

{
	"message":"You are now Serverless on AWS! Your serverless lambda has executed as it should! (with a delay)",
	"input": {
		"resource":"/starterService",
		"path":"/starterService",
		"httpMethod":"GET",
		"headers": {
			"Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
			"Accept-Encoding":"br, gzip, deflate",
			"Accept-Language":"en-us",
			"CloudFront-Forwarded-Proto":"http",
			"CloudFront-Is-Desktop-Viewer":"true",
			"CloudFront-Is-Mobile-Viewer":"false",
			"CloudFront-Is-SmartTV-Viewer":"false",
			"CloudFront-Is-Tablet-Viewer":"false",
			"CloudFront-Viewer-Country":"US",
			"Host":"g0xd40o7wd.execute-api.us-east-1.amazonaws.com",
			"User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1 Safari/605.1.15",
			"Via":"2.0 6ba5553fa41dafcdc0e74d152f3a7a75.cloudfront.net (CloudFront)",
			"X-Amz-Cf-Id":"20__-h2k2APyiG8_1wFfAVbJm--W1nsOjH1m0la_Emdaft0DxqzW7A==",
			"X-Amzn-Trace-Id":"Root=1-5c9eac1d-58cadfb397aea186074bd6ab",
			"X-Forwarded-For":"134.56.130.56, 54.239.140.19",
			"X-Forwarded-Port":"443",
			"X-Forwarded-Proto":"http"
		},
		"multiValueHeaders": {
			"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],
			"Accept-Encoding":["br, gzip, deflate"],
			"Accept-Language":["en-us"],"CloudFront-Forwarded-Proto":["http"],"CloudFront-Is-Desktop-Viewer":["true"],
			"CloudFront-Is-Mobile-Viewer":[
				"false"
			],
			"CloudFront-Is-SmartTV-Viewer":[
				"false"
			],
			"CloudFront-Is-Tablet-Viewer":[
				"false"
			],
			"CloudFront-Viewer-Country":[
				"US"
			],
			"Host":[
				"g0xd40o7wd.execute-api.us-east-1.amazonaws.com"
			],
			"User-Agent":[
				"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1 Safari/605.1.15"
			],
			"Via":[
				"2.0 6ba5553fa41dafcdc0e74d152f3a7a75.cloudfront.net (CloudFront)"
			],
			"X-Amz-Cf-Id":[
				"20__-h2k2APyiG8_1wFfAVbJm--W1nsOjH1m0la_Emdaft0DxqzW7A=="
			],
			"X-Amzn-Trace-Id":[
				"Root=1-5c9eac1d-58cadfb397aea186074bd6ab"
			],
			"X-Forwarded-For":[
				"134.56.130.56, 54.239.140.19"
			],
			"X-Forwarded-Port":[
				"443"
			],
			"X-Forwarded-Proto":[
				"http"
			]
		},
		"queryStringParameters":null,
		"multiValueQueryStringParameters":null,
		"pathParameters":null,
		"stageVariables":null,
		"requestContext": {
			"resourceId":"bzr3wo",
			"resourcePath":"/starterService",
			"httpMethod":"GET",
			"extendedRequestId":"XU_UiEVSIAMFnyw=",
			"requestTime":"29/Mar/2019:23:37:01 +0000",
			"path":"/dev/starterService",
			"accountId":"968256005255",
			"protocol":"HTTP/1.1",
			"stage":"dev",
			"domainPrefix":"g0xd40o7wd",
			"requestTimeEpoch":1553902621048,
			"requestId":"8cef7894-527b-11e9-b360-f339559a98bd",
			"identity": {
				"cognitoIdentityPoolId":null,
				"accountId":null,
				"cognitoIdentityId":null,
				"caller":null,
				"sourceIp":"134.56.130.56",
				"accessKey":null,
				"cognitoAuthenticationType":null,
				"cognitoAuthenticationProvider":null,
				"userArn":null,
				"userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1 Safari/605.1.15",
				"user":null
			},
			"domainName":"g0xd40o7wd.execute-api.us-east-1.amazonaws.com",
			"apiId":"g0xd40o7wd"
		},
		"body":null,
		"isBase64Encoded":false
	}
}

With the completion of this review and the deployment of our ServerlessStarterTemplate, we are ready to proceed with the configuration of a proper Continuous Integration & Continuous Deployment pipeline that will allow us to abstract the deployment of our services a bit, so that we can focus on implementing new features instead of maintaining infrastructure.

You have successfully deployed your first serverless + microservice on AWS with the ServerlessFramework!

Implementing Continuous Integration & Continuous Deployment on AWS

Moving forward with our project and this tutorial, we can now take some time to discuss and understand the principles, practices, and benefits of adopting a DevOps mentality. We will also study and review concepts in Continuous Integration and Continuous Delivery to really start getting comfortable deploying enterprise-ready software to the AWS Cloud. Just to make sure you are ready, we will review and get you comfortable with committing your code to a Version Control repository on something like GitHub, and I’ll show you how to set up a continuous integration server and integrate it with AWS DevOps tools like CodeDeploy and CodePipeline.

“Opportunity is missed by most people because it is dressed in overalls and looks like work.” – Thomas A. Edison

DevOps definitely conjures up the feeling of mysticism and confusion amongst those who discuss it in the cryptographic and vitriolic corners of the Dark Web and Tor Browser drum circles. If there was anything that you can call The Force, it would definitely be DevOps.

DevOps is really a benevolent force for cultural good that promotes collaborative working relationships between development teams and their operational counterparts, so they can deploy and deliver software and infrastructure at a pace that allows the business units to cash in on the monetization of new features. The biggest benefit of this philosophy is that the business teams can be sure that their efforts enforce the reliability and stability of the production environment holistically, because all of the key stakeholders are involved in the success of the product.

Implementing a DevOps mindset allows you to completely automate your deployment process, testing and validating every new feature that you implement using a consistent process-framework, starting with every developer on your team, all the way until the feature finds itself in use by your users, in production. This process is designed to eliminate IT silos of information across distributed teams of developers and operations engineers. The idea is to break down the barriers that prevent teams from accessing information transparently, so that you can deploy infrastructure programmatically, using a standard resource template, which allows you and your team to focus on developing new features and launching your product faster.

The idea is to apply software development practices like quality control, testing, and code reviews to infrastructure and feature deployment that can be rolled into production with little intervention and minimal risk. Transparency is prioritized so that every team member has a clear view of every stage of the development and deployment process, from its implementation by the dev team, all the way to the operations team that monitors and measures your application’s resources and infrastructure deployed in production.

Understanding Continuous Integration (CI)

Continuous Integration started with this evolution of collaborative ideas that we now call DevOps. In a perfect world, you want you and your team of engineers implementing and integrating new features into your platform continuously, i.e. Continuous Integration.

As you continuously integrate new features into your platform that will make the world a better place, hopefully you and your group of code monkeys have defined some kind of git commit style guide in those ContributionGuidelines.md that I told you about earlier. What you need to be doing is continuously using $ git commit and $ git push origin new-feature-branch-1 to share the new features that you implement on some god-forsaken form of central version control repository that you and your team use to share and review code.

When you or someone on your team implements a new change to the source code saved in your central, version-controlled repository, which is hopefully on something free, like GitHub, the changes you $ git push must complete a build process that includes an automated unit testing phase, which we used jest.js to implement for our use case in a previous discussion in this tutorial series. The architecture that we decided to implement is supposed to give you immediate feedback about any changes that you introduce into the source code files that you save on GitHub.

Being able to obtain instant feedback from the implementation paradigm that we have tried to show you throughout this tutorial series enables you and the rest of your team to fix and correct any mistakes, errors, or bugs that you find along the way, quickly, so that you and your team may continue moving forward and iterating over the development of your product and take it to market ASAP! The whole point of Continuous Integration is to optimize the $ git merge topic-branch of new features into your source code so that you can stay focused on launching new products that help your business and operations teams measure bottom-line growth. When you deliver quality software very fast, your business will make more money and will be able to afford to pay you your salary. Play ball, because there is no crying in Baseball!!!
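
Concretely, the topic-branch loop described above looks something like this from your terminal; the branch name is just a placeholder:

$ git checkout -b new-feature-branch-1      # start a topic branch for the feature
$ git add .
$ git commit -m "Implement new feature"     # commit work in small, reviewable increments
$ git push origin new-feature-branch-1      # push the branch and open a Pull Request
# After the Code Review is approved, the Reviewer merges the branch into master.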

Continuous Integration (CI) Workflow

[Image: CI workflow on the development side]

Development

Visually, on the development side of the team, this would look like the image above, with a developer making a change to a business feature deployed in production. The developer commits the new code changes to the project’s GitHub repository onto the master branch. The master branch is where all of the changes that the engineers on the team make on their topic branches flow into, once each Pull Request is reviewed and merged into production.

The code changes that your engineers $ git merge into production after the Code Reviews are complete for each feature implementation will automatically trigger a system Build. A Build that your Continuous Integration pipeline triggers automatically when your CI Server detects your newest changes will verify that your code compiles and executes successfully at runtime. The Unit Tests that we implement will execute at this time, running against the new code changes that our CI Server detects when they are merged onto the master branch by your team. The Best Practice goal in these scenarios is to have our CI Servers complete the Build and Test process quickly, so that you and your development team get the immediate feedback you need to quickly iterate your way to a solution. The name of the game is speed: launch your product ASAP.

[Image: CI workflow on the operations side]

Operations

When the idea of Continuous Integration evolved into this paradigm that we now call DevOps, it was originally only put into practice by engineering and development teams within their Agile framework to deliver quality code faster. Database Administrators, IT Operators, and Network Admins didn’t pay any attention to these models, and they went on provisioning and autoscaling servers and load balancers on their own, in secret. Over time however, as the DevOps mentality and its transparent culture of collaboration amongst teams spread throughout the business realm, more teams adopted its philosophy. At some point, someone wrote a book about all of the headaches these companies were facing with their distributed teams and information silos, and one day, when CEOs were convinced of its ability to improve their bottom line, all of these software development patterns and practices percolated through the industry, which led infrastructure and operations teams straight to their adoption.

Graphically, this means that you and your operations engineers can write all of your infrastructure, as code, in a declarative YAML or JSON-formatted template and commit it to the same GitHub repository that your engineering and development teams are using to commit code that implements new features. Following in the footsteps of the development teams, we then have our CI Server pull any of the infrastructure changes that we merge onto our Version Control Repositories to automate our build and unit testing processes. In the case of infrastructure we won’t be compiling any code or triggering any unit tests. Instead, we will use the build process to spin up any Linux Images (AMIs) or resources that we need deployed to the cloud. Furthermore, instead of testing code, we will use the test phase to validate our YAML-formatted templates used by CloudFormation, and to run infrastructure tests to be sure our application is listening on the expected network ports and that our endpoints are returning the expected HTTP responses.
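
As a sketch of what that build and test phase can look like for infrastructure, the CI Server might validate the template and then smoke-test a deployed endpoint; the file name and URL below are placeholders:

$ aws cloudformation validate-template --template-body file://serverless-output.yml
$ curl --silent --output /dev/null --write-out "%{http_code}" \
    https://g0xd40o7wd.execute-api.us-east-1.amazonaws.com/dev/starterService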

The goal here is the same as in development: to get feedback quickly. We want to give your infrastructure engineers immediate information pertaining to the status of the resources deployed. Your operations teams need to be able to anticipate any issues and respond to feedback immediately to correct any problems that may appear in production.

Understanding Continuous Delivery (CD)

Continuous Delivery simply builds on the concept and process of Continuous Integration and is intended to automate the entire release of the application, all the way through to the production environment. To execute this continuous integration and delivery lifecycle, your team has to commit to practicing the DevOps culture transparently, and in a consistent and Agile manner.

Development and operations teams shall implement the same model for versioning both the application and the infrastructure implemented as code, to allow your Continuous Integration (CI) Servers to run automated builds and tests that will trigger the deployment of new versions of our application to development branches, before we decide to promote them into production ourselves. This brings up an important distinction between the different variations we can choose for our CI/CD pipelines, which we will discuss shortly. For now, please take a second to review a simple example of what a CI/CD Pipeline will look like:

This is a CI/CD Pipeline

[Image: Continuous Delivery pipeline]

We will need to follow a workflow similar to the pipeline described above on both the development and operations sides of our implementation teams. Both teams will have to have the discipline to follow a standard set of ContributionGuidelines.md, as we have discussed in previous parts of this series, to hold team members accountable and committed to pushing all of their changes to the GitHub repositories that we are using as our Version Control Framework. With you and the rest of the team committing changes to the repository that stores our project’s source code, our Continuous Integration (CI) Server will initiate the builds and tests that will then trigger a deployment to either a new or existing environment in production. This is our CI/CD Pipeline, as shown in the image above.

The image does in fact simplify what we will be implementing over the course of this tutorial, and we will also touch upon a few more complex examples throughout the rest of it. You can customize your own CI/CD Pipeline to suit your project’s needs, including as many stages as you may require to deploy your application with minimal risk of failure.

Continuous DELIVERY Stages

[Image: Continuous Delivery stages with a manual release step]

Looking at this implementation from the high-level view of the Systems Architect, the above is what a Continuous Delivery variation of a CI/CD Pipeline looks like. As we have discussed, the first step in the process is a source stage, where you and your team commit your changes to the source, and any new feature implementations, to your repositories on GitHub.

The next stage runs a build process that will compile your source code and spin up any AMIs, Lambdas, or other infrastructure that your Operations team has declared as code, and which your application needs to function at scale. This stage will also run unit tests and infrastructure template validations on the resources declared as code that you will deploy with CloudFormation. Unit Tests are executed within this build step to mitigate against any errors or bugs introduced by changes to the source at this stage of the process. Your Unit Tests SHALL trigger a deployment into the staging environment next, INSTEAD OF deploying into production automatically. When the Final Release of the application to the production environment is NOT executed automatically, this is known as Continuous DELIVERY. Typically there is a Business Rule or Manual Activity completed before the final decision is made to release, and promote, the new version into production.

Furthermore, you can add a stage, run after the Unit Tests are completed on your application’s source code files, that executes load tests against your infrastructure to make sure that your application’s performance is acceptable. The objective of all of this is to make sure that everything is validated and shown to work as designed at each stage before it moves on to the next step of the delivery or deployment process.
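
On AWS, that Manual Activity is typically modeled as an approval action in CodePipeline. Here is a hedged CloudFormation fragment of what such a stage can look like inside a pipeline definition; the stage and action names are illustrative:

# One stage from the Stages list of an AWS::CodePipeline::Pipeline resource
- Name: ApproveRelease
  Actions:
    - Name: ManualApproval
      ActionTypeId:
        Category: Approval
        Owner: AWS
        Provider: Manual
        Version: "1"
      RunOrder: 1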

Continuous DEPLOYMENT Stages

[Image: Continuous Deployment stages, fully automated]

Here is where things get tricky, and where everyone confuses the differences between Continuous Delivery and Continuous Deployment. Be careful, there are always decisions to make, and this is one of those times when a decision must be made. In having to choose a path, this is the exact point where the road begins to diverge for many DevOps Teams. In a Continuous DEPLOYMENT pipeline on AWS, the last step of moving into production is automatic. As long as each stage of the pipeline on AWS was successful, the code that you commit to GitHub will ALWAYS go through the pipeline and into production AUTOMATICALLY, assuming that all stages have completed and passed all testing successfully.

In summary, in a Continuous DEPLOYMENT pipeline on AWS, the goal is to completely automate everything from end-to-end. The commits that we push to our GitHub repositories will automatically trigger a build phase and any appropriate Unit Testing on your application’s source code and infrastructure as code. The workflow will culminate with the deployment of the application into either a development or staging environment, from which we will eventually push our application into production, only after having completed a MANUAL process or BUSINESS RULE validation with the project’s principals.

In our specific use case, as we work through the implementation of the PayMyInvoice application throughout the rest of this tutorial, we will eliminate the manual tasks that provision and configure our infrastructure, while making sure to define all of our application and infrastructure source code declaratively, optimizing our workflow with a Continuous Deployment pipeline on AWS for our CI/CD implementation.

By pushing all of our application and infrastructure source code to a repository where we $ git commit all of our changes, you and your team will have the benefit and ability to see exactly what each team member has changed and added to the source code. This transparency and versioning, controlled by a central repository, also allows you to roll back to previous versions of your software as needed and in case of emergency.

The implied goal of all of these processes is to unify the software delivery process, so that our application and its infrastructure are treated as one object that we can run through our end-to-end automated testing and build phase, to validate that all of our application, infrastructure, and configuration logic and provisioning is correct and conforms to the project’s requirements and specifications.

AWS CodeBuild and AWS CodePipeline are a few of the tools we will be using to implement our CI/CD pipeline. CodeBuild allows us to easily AUTOMATE our build and DEPLOYMENT to rapidly release new features and services. CodePipeline is a Continuous Delivery service that lets us model and visualize our software releases.

How To Build a Serverless React.js Application with AWS Lambda, API Gateway, & DynamoDB – Part 3

How To Configure Infrastructure As Code, Mock Services, & Unit Testing

This is a continuation of our multi-part series on building a simple web application on AWS using AWS Lambda and the ServerlessFramework. You can review the first and second parts of this series starting with the setup of your local environment at:

You can also clone a sample of the application we will be using in this tutorial here: Serverless-Starter-Service

Please refer to this repo as you follow along with this tutorial.

Configuring Infrastructure As Code

The ServerlessFramework lets you describe the infrastructure that you want configured for your serverless + microservice based application logic. You can use template files in .yml or .json format to tell AWS CloudFormation exactly what resources you need deployed on AWS to correctly run your application. The YAML or JSON-formatted files are the blueprints you design and architect to build your services with AWS resources. By using the AWS Template Anatomy to describe our infrastructure, our templates on AWS CloudFormation will include a few major sections, described in the template fragments shown below:

YAML-formatted

This is a YAML-formatted template fragment. You can find more information on JSON-formatted templates at: AWS Template Anatomy.

---
AWSTemplateFormatVersion: "version date"

Description:
  String

Metadata:
  template metadata

Parameters:
  define any parameters

Mappings:
  configure your mappings

Conditions:
  declare any conditions

Transform:
  define any transforms

Resources:
  application resources

Outputs:
  define your outputs

The Resources section is the only section that is required in your YAML or JSON-formatted template. When describing the services you need deployed to AWS in your CloudFormation templates, it will be helpful to use the order described below, because some sections may refer to values in a previous section; however, you can include these sections in your template in any order that you feel is appropriate.

  1. Format Version (optional)
  • This is the AWS CloudFormation template version that the YAML or JSON-formatted file abides by. This is a versioning format internal to AWS and can change independently of any API or WSDL versions published.
  2. Description (optional)
  • This is simply a description of the template on AWS CloudFormation. If used, you SHALL include this section after the Format Version section listed above.
  3. Metadata (optional)
  • You can use this object to give CloudFormation more information about the application and the template you are using to deploy AWS infrastructure.
  4. Parameters (optional)
  • These are values that you can declare and pass to your serverless.yml file at runtime when you update or create a stack on CloudFormation. You can use these parameters and call them from the Resources or Outputs sections defined in your serverless.yml file.
  5. Mappings (optional)
  • These are lookup tables that match keys to a set of named values, letting you select a value (such as an AMI ID) based on a key like the region or stage you are deploying to.
  6. Conditions (optional)
  • You can also control the creation of a resource, or whether resource properties have values assigned, that you declare when you update or create a stack in AWS. Depending on whether a stack is for a development or production environment, you can, for example, conditionally create a resource for each stage as needed by your application.
  7. Transform (optional)
  • This section declares any macros that CloudFormation should use to process your template, such as the AWS Serverless Application Model (SAM) transform.
  8. Resources (REQUIRED)
  • This is the only required Template Section that you MUST include in your serverless.yml file in your serverless backend. You SHALL use this section to specify the precise stack resources and their properties that you need AWS CloudFormation to create for you on AWS. You can use the ServerlessFramework to define the infrastructure you need to deploy with your serverless.yml file.
  9. Outputs (optional)
  • This section will let you view your properties on the CloudFormation stack and the values that it returns to you. You can easily use the AWS CLI to display the values returned by your stack for the outputs that you declare in your serverless.yml file.
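
To make the relationship between these sections concrete, here is a minimal, hypothetical template that declares a Parameter, references it in the Resources section, and surfaces a value through Outputs; the bucket and parameter names are placeholders:

AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example tying Parameters, Resources, and Outputs together

Parameters:
  Stage:
    Type: String
    Default: dev

Resources:
  UploadsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "my-app-uploads-${Stage}"

Outputs:
  UploadsBucketName:
    Value: !Ref UploadsBucket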

In the CloudFormation Template called serverless.yml, found in each of the serverless + microservices that we implement for our API, we describe the AWS Lambda functions, API Gateway endpoints, DynamoDB tables, Cognito User & Identity Pools, and S3 Buckets that we need deployed to run our serverless + microservice properly. This is what we call Infrastructure As Code. The goal in using an IAC Architecture is to reduce or prevent errors by avoiding the AWS Management Console. When we describe our Infrastructure As Code we can quickly and easily create multiple environments with minimal development time and effort.

Transpiling ES6 code for the Node v8.10 runtime is the responsibility of the serverless-webpack plugin that we include with the ServerlessFramework.

AWS Resources & Property Types Reference

This is the AWS Resource & Property Types Reference information for the resources and properties that AWS CloudFormation supports as Infrastructure As Code. Resource type identifiers SHALL always look like this:

service-provider::service-name::data-type-name

We will use a few of the services that you can find in the AWS Resource & Property Types Reference. Below you will find a list of the resources in our demo application that we have implemented for this tutorial:

For now, please open the serverless.yml file found in the ServerlessStarterService repository. Please look at the sections below for an explanation of each declaration in our ServerlessFramework CloudFormation template:

# TODO: https://serverless.com/framework/docs/providers/aws/guide/intro/

# New service names create new projects in aws once deployed
service: serverless-starter-service-node

# Use the serverless-webpack plugin to transpile ES6
# Using offline to mock & test locally
# Use warmup to prevent Cold Starts
plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-plugin-warmup

# configure plugins declared above
custom:
  # Stages are based on the stage value-params you pass into the CLI when running
  # serverless commands. Or fallback to settings in provider section.
  stage: ${opt:stage, self:provider.stage}

  # Load webpack config
  # Enable auto-packing of external modules
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true

  # ServerlessWarmup Configuration
  # See configuration Options at:
  # https://github.com/FidelLimited/serverless-plugin-warmup
  warmup:
    enabled: true # defaults to false
    folderName: '_warmup' # Name of folder generated for warmup
    memorySize: 256
    events:
      # Run WarmUp every 720 minutes
      - schedule: rate(720 minutes)
    timeout: 20

  # Load secret environment variables based on the current stage
  # Fallback to default if it is not in PROD
  environment: ${file(env.yml):${self:custom.stage}, file(env.yml):default}

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1

  # Environment variables made available through process.env
  environment:
    #### EXAMPLES
    # tableName: ${self:custom.tableName}
    # stripePrivateKey: ${self:custom.environment.stripePrivatekey}

Taking a closer look at the YAML-formatted template above, the service block is where you will need to declare the name of your serverless + microservice with CloudFormation. The ServerlessFramework will use this as the name of the stack to create on AWS. If you change this name and redeploy it to AWS, then CloudFormation will simply create a new project for you in your AWS account.

The environment block is where we load secrets saved in our env.yml file. You must remember that AWS Lambda only gives you 4KB of space for your environment variables, which should be more than enough for our needs. The important thing is to keep your logic modular and not put all of your application secrets in one file. Use an env.yml file for each serverless + microservice that you implement. The secrets and custom variables that we load from our env.yml file are based on the stage that we are deploying to at any given point in the development lifecycle, using file(env.yml):${self:custom.stage}. If you do not define your stage, then our application will fall back to load everything under the default: block with file(env.yml):default. The ServerlessFramework will check if custom.stage is available before falling back to default.
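
As an illustration, a hypothetical env.yml for this mechanism could look like the following, with one block per stage and a default fallback; the key and values are placeholders:

default:
  stripePrivatekey: "sk_test_placeholder"

prod:
  stripePrivatekey: "sk_live_placeholder"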

As shown in the example, we can also use this mechanism to add other custom variables that we may need loaded at any given time. You may add any environment variable to the environment block using something like:

${self:custom.environment.DECLARE_YOUR_VARIABLE}

Any custom variable that we declare in the manner shown above is available to our Lambda function with something like this:

process.env.DECLARE_YOUR_VARIABLE

Configure an API Endpoint

In the ServerlessStarterService serverless.yml AWS CloudFormation template that describes our infrastructure, we define the functions that we have implemented as Lambdas that we want exposed on AWS. You must remember that each serverless + microservice will have its own serverless.yml template in the project’s root directory that will define the endpoints associated with each serverless + microservice. Below is an example of a Lambda function defined in our ServerlessStarterService CloudFormation template:

functions:
  # Defines an HTTP API endpoint that calls the microServerless function in handler.js
  # - path: url path is /microServerless
  # - method: GET request
  # - cors: enabled CORS (Cross-Origin Resource Sharing) for browser cross
  #     domain api call
  # - authorizer: authenticate using an AWS IAM Role
  displayService:
    handler: handler.starterService
    events:
      - http:
          path: starterService
          method: get
          cors: true
          authorizer: aws_iam
    # Apply Warmup to each lambda to override
    # settings in custom.warmup block globally.
    warmup:
      enabled: true

Do not think that you must memorize this information. Use the Force and read the documentation and ask questions on Stack Overflow to discuss with the larger community of Jedi. As needed, do not forget to refer to the AWS Resource & Property Types Reference information for the resources and properties that AWS CloudFormation supports as Infrastructure As Code.

You configured your new serverless + microservice using the Infrastructure As Code architecture.

Mocking Serverless + MicroServices before Deploying to AWS

We have to mock, or fake, the input parameters for a specific event needed by our Lambdas with a *.json file, stored in a directory within the serverless + microservice project, that we will use by executing the ServerlessFramework’s invoke command. The invoke command will run your serverless + microservice code locally by emulating the AWS Lambda environment. As the old saying goes, however:

“If everything were candy and nuts, everyday would be Christmas…” – unknown

You just must remember that this is not a 100% perfect emulation of the AWS Lambda environment. There will be some differences between your cloud environment and your local machine, but this will do for most use-cases. There are a lot of discussions online, and tools available, that promote different approaches you can take to perfect your local environment for development outside of the Cloud. We will be mocking the context of our serverless + microservices within this tutorial with simple mock data only, and will leave the study of these tools for a future tutorial. We do, however, find resources like LocalStack, a tool that supplies an easy testing and mocking framework for developing Cloud applications on AWS, remarkably interesting, to say the least. Please feel free to experiment with LocalStack and let us know how it works; in the future we will extend this tutorial to include a guide on its implementation also.

When saving a *.json-formatted event in a /<microservice-project-directory>/mocks directory that we will use with the ServerlessFramework invoke command, we will execute the local serverless event as shown below:

Example Usage of invoke

$ serverless invoke local --function functionName --path mocks/testFile.json

invoke local Options

  • --function or -f: The name of the function in your microservice that you want to invoke locally. Required.
  • --path or -p: The path to a json file storing the input data that you need to pass to the invoked Lambda, which an event in the queue will trigger at a specific state in the application. This path is relative to the root directory of the microservice.
  • --data or -d: This is the String data you need to pass as an event to your Lambda function. Be aware that if you pass both --path and --data, the information included in the --path file will overwrite the data that you passed with the --data flag.
  • --raw: This flag will allow you to pass data as a raw string, even if the string you are working with is a .json-formatted string. If you do not set this flag, then any .json-formatted data that you pass into the CLI when you invoke local will be parsed and passed to your function as an object.
  • --contextPath or -x: This is the path to the .json-formatted file storing the input that you will pass as the context parameter value to the function that you decide to invoke locally. This path is relative to the root directory of your serverless + microservice.
  • --context or -c: This is string data that you want to pass as the context to the function that you want to invoke locally. The information included with the --contextPath flag will overwrite the data that you passed with the --context flag. A combined example of these flags follows this list.
  • --env or -e: This is a string that represents an environment variable that you want to declare when you invoke your function locally. The environment variable SHALL be declared with the following syntax: <name>=<value>. You can repeat this flag for each additional environment variable that you may need declared in your application.
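
Putting a few of these flags together, a hypothetical invocation that passes inline event data and an environment variable to the starter function would look like this:

$ serverless invoke local --function displayService \
    --data '{"httpMethod": "GET", "path": "/starterService"}' \
    --env DEBUG=true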

The invoke local command gives us the ability to fake it until we make it while implementing the serverless architecture for our applications. The ServerlessFramework will set reasonable environment configuration parameters for us that allow us to successfully test the functions that we trigger with invoke local. It configures local AWS-specific values that are like those we find in the actual AWS Cloud, where our Lambda functions will execute when we deploy our application. It is also important to remember that the ServerlessFramework will define the IS_LOCAL variable when using invoke local. This is important because it will prevent you from accidentally executing a request against services in production, to help you safeguard your application while developing and extending new features. The ServerlessFramework will do its best to keep you and I from reliving any Dilbert cartoons.
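
As a small sketch of how you might lean on that IS_LOCAL flag inside a handler to avoid touching production services while mocking (the table names here are placeholders):

// handler.js (sketch)
// When running under `serverless invoke local`, IS_LOCAL is defined,
// so we can point at a safe, local-only resource instead of production.
const tableName = process.env.IS_LOCAL
  ? "invoices-dev-mock"
  : process.env.tableName;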

invoke local serverless function example with --path

$ serverless invoke local --function functionName --path mocks/data.json

In the example you can see that we are using the --function option, as described, to tell invoke local which Lambda we want to mock on our local machine. When using the -f flag we must also provide the name of the Lambda function that we want to trigger with invoke local on our development machine. We are using functionName as a placeholder for the name of the Lambda function that we want to invoke.

Moving on, you will notice that we have also used the --path option to pass the .json-formatted data that we have saved in the mocks/data.json file, which is relative to our project directory. We will use this information when we trigger the invoke local command for the functionName that we have declared as our Lambda in this example.

Example data.json File

{
  "resource": "/",
  "path": "/",
  "httpMethod": "GET",
  // etc  //
}

Limitations

We have tried to condense the most important topics and fundamentals that you need to understand to correctly mock and implement your application as a serverless + microservice. Please refer to the invoke local documentation to better understand how the ServerlessFramework helps you emulate the AWS Cloud on your local machine, to expedite the development of the application that will help make the world a better place.

Node.js, Python, Java, and Ruby are the only runtime environments that currently support the invoke local emulation environment. To obtain the correct output when using the Java runtime locally, your response class will need to implement the toString() method.

invoke local Mock example with ServerlessStarterService

Earlier, we asked you to create a project structure for the PayMyInvoice demo application that we will build once we complete the review of the fundamentals of serverless application development. If you recall, we asked you to complete a few steps to Setup the Serverless Framework locally. After installing and renaming the ServerlessStarterService as instructed, you should have ended up with a project structure for the PayMyInvoice application that looks a little something like this:

PayMyInvoice
    |__ services
       |__ invoice-log-api (renamed from template)
       |__ FutureServerlessMicroService (TBD)

To show you how to Mock your services locally, we would like to step back from this example for a second to walk you through a specific example we have set up for you in the ServerlessStarterService repository.

Please keep both of these repositories close. We will be performing exercises in both to better relate the material in this tutorial.

Navigate to your home directory from your terminal and clone the ServerlessStarterService so that you can deploy it locally after we perform our invoke local testing and mocking.

  1. $ git clone https://github.com/lopezdp/ServerlessStarterService.git
  2. Run $ npm install
  3. Run $ serverless invoke local --function displayService
  • The displayService function is the name of the lambda function that we have declared in the serverless.yml file. The lambda functions in the ServerlessStarterService template should look like this:
functions:
# Defines an HTTP API endpoint that calls the microServerless function in handler.js
# - path: url path is /microServerless
# - method: GET request
# - cors: enabled CORS (Cross-Origin Resource Sharing) for browser cross
#     domain api call
# - authorizer: authenticate using an AWS IAM Role
displayService: # THIS IS THE NAME OF THE LAMBDA TO USE WITH INVOKE LOCAL!!!
  handler: handler.starterService
  events:
    - http:
        path: starterService
        method: get
        cors: true
  # Warmup can be applied to each lambda to override 
  # settings in custom.warmup block globally.
  warmup:
    enabled: true
  4. After executing invoke local, your output should look something like this:

Invoke Local Output

[Image: invoke local terminal output]

More precisely your terminal will give you some .json output that will look like the following bit of text:

{
    "statusCode": 200,
    "body": "{\"message\":\"You are now Serverless on AWS! Your serverless lambda has executed as it should! (with a delay)\",\"input\":\"\"}"
}

Don’t worry, I will walk you through this some more in a future section of this tutorial while we build the demo application. Keep reading, young padawan, and may the force be with you.

Resources and IAM Permissions

When an event defined by your application triggers a Lambda function on the AWS Cloud, the ServerlessFramework creates an IAM role for the execution of the logic in your serverless + microservice. This sets all the permissions to the settings that we provided during the implementation of our infrastructure, which you can see in the iamRoleStatements block in the serverless.yml file for the serverless + microservice in question. Every call your application makes to the aws-sdk in this Lambda function will use the IAM role that the ServerlessFramework created for us. If you do not explicitly declare this role, then AWS will perform this task by creating a key pair and secret as environment variables like:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
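
For reference, the iamRoleStatements block mentioned above lives under the provider section of your serverless.yml; a hedged sketch granting a Lambda read access to a hypothetical DynamoDB table would look like this:

provider:
  name: aws
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:Query
      Resource: "arn:aws:dynamodb:us-east-1:*:table/invoices"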

The problem with mocking your services on your local machine with invoke local is that your machine does not have Jeff Bezos’ magic money inside of it. The roles needed to invoke these Lambda functions are not available!!! You are not AWS! Trust me, I wish I was Jeff Bezos too (or at least his bank account). As a Cuban, I can wholeheartedly tell you that he is the hero of an Island of Caribeños who came to the USA in search of the ever so elusive Sueño Americano, or what some call the American Dream. #Learn2Code.

“Don’t trust, don’t fear, don’t beg.” (не верь, не бойся, не проси)

Russian Trained Programmers

[Image: bezosFloats]

Now you know why I am an AWS #fanboy but I still must explain what is wrong with this picture…

It is different when you use the ServerlessFramework and its invoke local function, because the role just is not available on your local machine, and the aws-sdk is going to default to using the default profile specified inside your AWS credentials file. You can control which profile is used, or not, by hard coding another user directly in your code (not ideal), or with a key pair of environment variables (preferred).
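
In practice, the environment variable approach means exporting a key pair before you run invoke local, or pointing the aws-sdk at a named profile; both are standard AWS credential mechanisms (dev-profile is a placeholder):

$ export AWS_ACCESS_KEY_ID=<your-access-key-id>
$ export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

# Or, using a named profile from ~/.aws/credentials:
$ export AWS_PROFILE=dev-profile
$ serverless invoke local --function displayService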

Please take some time to review the official AWS documentation to better understand what you need to achieve in a secure manner. In this tutorial, the JavaScript SDK is our primary concern, but this process should be similar for all SDKs:

The point of all of this is to make you aware that the set of permissions used will be different, regardless of the approach you decide to implement. You will have to play around with different tools, because you will not be able to precisely emulate the actual AWS IAM Policy in place. Not to sound repetitive, but this is why we recommend resources like LocalStack, which supplies an easy testing and mocking framework for developing cloud native applications on the AWS Cloud. Future tutorial in the making… (Not anytime soon though! –> Unless you send Bitcoin!)

You can now Mock and FAKE all your Serverless + MicroServices locally before Deploying to AWS.

Serverless Unit Testing & Code Coverage

We are taking an automated testing approach to achieving our ideal Code Coverage goals. Automated tests SHALL execute whenever a new change to the source code is merged into the main development branch of the project repository. We think that using a tool like GitHub is really the best approach here in terms of your project repositories; just keep things simple and, unbelievably, you will change your life!

You can implement Automated Testing with typical unit tests that execute individual software modules or functions; in our case, our unit tests will execute our Lambda functions on the AWS Cloud. When implementing your tests, you really want to try to make them useful, or at least relevant to the goal of your application’s business logic. Take some time to think of any edge cases that your users may be inputting into your application, to ensure that your application’s user experience meets your users’ needs and expectations. If you are working as part of a team, you really should collaborate with them on the different test cases you should implement to mitigate any potential errors that your users may confront out in the wild.

Examples of useful test cases should consider the following:

  • Tests which reproduce any bugs and their causes, to verify that errors have been resolved by the latest updates to the application.
  • Tests that return the details that confirm the correct and precise implementation of the system requirements and specifications. Tests that return the proper data types as specified in the system’s data dictionary would be an appropriate metric.
  • Tests that confirm that your software correctly handles expected and unexpected edge and corner cases.
  • Tests that confirm any expected state changes and interactions with loosely coupled modules and system components are a Best Practice.

Throughout our automated testing process and workflow, code coverage KPIs and metrics are a source of data to record and keep safe. It would be ideal to support Linear Code Sequence Coverage above 80%.
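
Jest, which we introduce below, can both report and enforce that target. A sketch of a package.json fragment with a hypothetical 80% line-coverage threshold:

"scripts": {
  "test": "jest --coverage"
},
"jest": {
  "coverageThreshold": {
    "global": {
      "lines": 80
    }
  }
}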

Jest.js Implementation and Configuration

This tutorial and its author have decided to use Jest.js for this “lecture”. We chose this automated testing framework to test our codebase and the correctness of our logic and our application’s functionality because Jest.js is very simple; it works with the libraries and frontend frameworks that we will be implementing in future parts of this series; it takes minimal configuration; it is easy to determine the percentage of code coverage we have implemented; and, most importantly, Jest.js is well documented.

For this tutorial and the demo application that we will be walking through together shortly, you will notice a directory within each serverless + microservice labeled something like:

/<serverless-microservice-project>/tests/<component>.test.js

Jest.js is already installed as a dependency in your package.json file, which means that every time you npm install a new serverless + microservice project based on our Serverless-Starter-Service project, you will automatically have Jest.js installed in your local environment. Easy peasy my friend; working smart is the name of the game… For all the rest of you out there who simply refuse to follow along because you absolutely must do everything from scratch yourself, go ahead and get it done with the following cmd:

$ npm install --save-dev jest

You still must push forward; there is always more to accomplish. You also need to add a line of .json to update the scripts block in your package.json file. You need to add the following:

"scripts": {
  "test": ":jest"
}

With the above configuration complete, you will be able to use the $ npm test command to run your unit testing with the Jest.js framework from your terminal.

Add Unit Tests

Ideally you will want to try to keep your project files organized, you know, unlike the way you keep your bedroom. Show this to your mother, go ahead, let her know that you are out here getting an education and not wasting away watching porn. The internet is not all trolls and Dark Web, you can find a good example or two out here in this digital desert wasteland my friends. Either way, keep yourself organized!!! Anytime you add or implement any new unit tests for each serverless + microservice endpoint you define in each of your Lambda functions, make sure to create a new file for the endpoint that you are implementing. Proceed as such:

$ touch ~/<ApplicationProjectName>/service/<serverless-microservice-project>/tests/<serverless-endpoint>.test.js

Pay attention to the relative structure of the path I have chosen to use in the example above, as it will make more sense once we start digging into the implementation details of our demo application in this tutorial series. The crucial point to remember is that <test>.js files live in the /tests directory. Easy enough, no???

We can use the example starter.test.js file that we have implemented in our Serverless-Starter-Service project to better understand how we will be implementing our test cases with the Jest.js framework:


/* eslint-env jest */
// Declaring the Jest environment above resolves the eslint `no-undef` errors
// that the `test` and `expect` globals would otherwise trigger.
import * as handler from "../handler.js";

test("starterService Execution", async () => {
  // The starter handler ignores the event and context, so simple
  // placeholder strings are enough for this unit test.
  const event = "event";
  const context = "context";
  const callback = (error, response) => {
    expect(response.statusCode).toEqual(200);
    expect(typeof response.body).toBe("string");
    expect(response.body).toMatch(/executed/);
  };

  await handler.starterService(event, context, callback);
});

The implementation of our tests is straightforward and easy once you understand how to assert values and data types from our serverless + microservice responses. Here we add a single test that asserts that the response we get back from our service has a 200 status code, that its body is a string, and that the body contains the word “executed”; that last assertion is carried out with the .toMatch() matcher on the final line of the test. We will explain this further as we implement the functionality in our demo application. In the meantime, be sure to read more about using Jest.js as a testing tool in their documentation.
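As your services grow, you will also want assertions against the parsed response body, not just the raw string. Here is a hypothetical second test in the same style; it assumes the starter handler returns a JSON string containing a message field, so adjust the field name to match your own response shape:

/* eslint-env jest */
import * as handler from "../handler.js";

test("starterService returns a parseable JSON body", async () => {
  const callback = (error, response) => {
    expect(error).toBeNull(); // the happy path should not produce an error
    const body = JSON.parse(response.body); // throws, failing the test, if the body is not JSON
    expect(body.message).toMatch(/executed/); // assumes a `message` field in the body
  };

  await handler.starterService("event", "context", callback);
});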

Run Tests

Once you have your unit tests implemented, you can run them from your command line interface at the root of your project using the following command:

$ npm test

The terminal will supply an output for a successful result that will look like this:

Testing Output

$ npm test

> serverless-starter-service(node.js)@1.1.16 test /Users/MyDocs/DevOps/ServerlessStarterService
> jest

 PASS  tests/starter.test.js
  ✓ starterService Execution (1007ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        2.044s
Ran all test suites.


And that is it. We have discussed the fundamentals of building out the backend logic of an application using the serverless + microservice architecture with AWS Lambda. You know what Lambda functions are, and how to configure and implement the logic you need in each Lambda. You know how to Mock, or fake, your serverless + microservices on your local machine using the invoke local command provided to us by the ServerlessFramework. And you know how to implement the Unit Tests we will need to create an automated testing pipeline, which we can eventually use to Continuously Integrate & Continuously Deploy our application to the different stages we define for our development and production environments later on.

All services should implement the proper logic within the correct project structure as described. You will then mock and test each service before deploying any resources or infrastructure to the AWS Cloud as a software development Best Practice.

You configured your serverless backend as IaC to Mock & execute automated Unit Tests. Good Luck!

Part 4: Configuring an effective Continuous Integration & Continuous Deployment Pipeline

How To Build a Serverless React.js Application with AWS Lambda, API Gateway, & DynamoDB – Part 2

How To Configure your Serverless Backend on AWS

This is a continuation of our multi-part series on building a simple web application on AWS using AWS Lambda and the ServerlessFramework. You can review the first part of this series starting with the setup of your local environment at:

You can also clone a sample of the application we will be using in this tutorial here: Serverless-Starter-Service

Please refer to this repo as you follow along with this tutorial.

The Serverless Architecture

Serverless programming and computing, or serverless for short, is a software architecture that enables an execution paradigm where the cloud service provider (AWS, GoogleCloud, Azure) is the entity responsible for running a piece of backend logic that you write in the form of a stateless function. In our case, we are using AWS Lambda. The cloud provider you choose to run your stateless function is responsible for the execution of your code in the cloud, and will dynamically allocate the resources needed to run your backend logic, abstracting the deployment infrastructure for you so that you can focus on developing your product instead of auto-scaling servers. This paradigm is also known as Functions as a Service (FaaS), and the code runs inside of stateless containers that may be triggered by a number of events, including: cron jobs, http requests, queuing services, database events, alerts, file uploads, etc.

Serverless Considerations

Since the serverless paradigm abstracts away the need for an engineer to configure the underlying physical infrastructure typical to the deployment of a modern day application, in what is known as the new Functions as a Service (FaaS) reality, there are a few considerations that should be kept in mind while we proceed through the development of our Single Page Application:

Stateless Computing

When we deploy a serverless + microservice architecture, the functions we declare as part of our application API execute our application’s logic inside of stateless containers managed for us by our cloud service provider, in our case AWS. The significance of this is that our code does not run the way it typically would on a long-lived server that keeps executing after an event has completed. There is no prior execution context available to serve a request to the users of your application when running AWS Lambda. Throughout the development of your serverless applications, you MUST assume that AWS Lambda will invoke your function as if it is in its first application state every time, with no contextual data to work with, because a single instance of your function will not handle concurrent requests. Your backend is stateless, and each function should strive to return an idempotent response.

Serverless + Microservices

When developing a serverless application, we need to make sure that the application is structured in a way in which the functions we declare as part of the backend logic are defined in the form of individual services that mimic a microservice based architecture, so that we can reduce the size of our functions. The goal is to loosely couple the functionality between the services deployed as part of the serverless backend, so that each function handles an independent piece of functionality and provides a user with a response that does not rely on any other service.

Cold Starts

Because our functions execute inside of a stateless container that our cloud service provider manages for us, in our case AWS, there is a bit of latency associated with each http request to our serverless “backend”. Our stateless infrastructure is dynamically allocated to respond to the events triggered by our application, and although a container is typically kept “alive” for a short time after the completion of a Lambda’s execution, its resources will eventually be deallocated, which leads to slower than expected/desired responses for new requests. We refer to these situations as Cold Starts.

Cold Start durations will typically last from a couple of hundred milliseconds up to a few seconds. The size of your functions and the runtime language used can vary the Cold Start duration time in AWS Lambda. It is important to understand that our serverless containers can remain in an Active state in AWS Lambda after the first request to our serverless endpoint has completed its execution routine. If our application triggers a subsequent request to our Lambda immediately after the completed execution of a previous request, the Lambda will respond to this next request almost immediately, with little to no latency; this is called a Warm Start, and it is what we want to maintain to keep our application running optimally.
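You can observe the cold/warm distinction for yourself with a throwaway handler. The sketch below is only an illustration, not part of the Serverless-Starter-Service; it relies on the fact that module-level variables survive between invocations while the container stays warm:

// coldWarmDemo.js - a hypothetical handler to observe cold vs. warm starts.
// `invocationCount` lives at module scope, so it resets on every cold start
// but is preserved between invocations while the container stays warm.
let invocationCount = 0;

exports.handler = async () => {
  invocationCount += 1;
  const start = invocationCount === 1
    ? "COLD start: new container"
    : `WARM start: invocation #${invocationCount} in this container`;

  return { statusCode: 200, body: JSON.stringify({ start }) };
};

Invoke it twice in quick succession and the second response reports a warm start; wait long enough between calls and the counter resets, which is exactly the latency problem we are about to work around.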

The wonderful thing about working with the ServerlessFramework library is the robust community that contributes to its development and evolution as Serverless becomes more of a thing. Dynamically distributing resources programmatically to deploy applications to the cloud just feels more practical from where I am sitting as an engineer working quietly on the cyber front. Take a second to bookmark this List of plugins developed by the community for the ServerlessFramework. We are going to use the serverless-plugin-warmup package on NPM, to make sure that our functions keep warm and purring like my dad’s old 1967 Camaro. I personally had a thing for Lane Meyer’s (John Cusack) ’67 Camaro in the 1980’s Cult Classic Better Off Dead.

I want my $2!


Configure AWS Lambda & Decrease your Application’s Latency with Warm Starts

As discussed above, we will be keeping our Lambdas warm during the hibernation season with serverless-plugin-warmup. In this next section, we will walk you through each step of the installation of this plugin. Remember, you can also refer to the sample application we will be using in this tutorial to follow along, here: Serverless-Starter-Service.

ServerlessWarmup eliminates Cold Start latency by creating a Lambda that will schedule and invoke all the services you select from your API at a time interval of your choice; the default is set at 5 minutes. This forces your Lambdas to stay warm. From within the root of your serverless project directory, continue to install ServerlessWarmup as follows:

  • Run: $ npm install --save-dev serverless-plugin-warmup

Add the following line to the plugins block in your serverless.yml file, which you can find in the root of your service’s directory:

plugins:
  - serverless-plugin-warmup

If you need a reference point, the plugins block in the Serverless-Starter-Service serverless.yml file looks exactly like the snippet above.

Moving along now, you will want to configure serverless-plugin-warmup in the custom: block of your service’s serverless.yml file. Remember, each service will always have its own serverless.yml file that defines the AWS Lambda endpoints for each serverless + microservice implemented for each path defined in the functions: block of the file. There are a few configuration settings you can read more about in the serverless-plugin-warmup documentation repository. Here we will go over what we think are the most important settings you should at least be familiar with for now. The custom: block in your serverless.yml file should look something like this:

custom:
  # Stages are based on what we pass into the CLI when running
  # serverless commands. Or fallback to settings in provider section.
  stage: ${opt:stage, self:provider.stage}

  # Load webpack config
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true

  # ServerlessWarmup Configuration
  # See configuration Options at:
  # http://github.com/FidelLimited/serverless-plugin-warmup
  warmup:
    enabled: true # defaults to false
    folderName: '_warmup' # Name of folder generated for warmup
    memorySize: 256
    events:
      # Run WarmUp every 60 minutes
      - schedule: rate(60 minutes)
    timeout: 20

Inside of our custom: block, you need to declare a warmup: resource that the ServerlessFramework will use to create a new Lambda function on your behalf on AWS. Again, that Lambda is going to use serverless-plugin-warmup to keep your Serverless + Microservices warm and latency-free. The primary setting we need to configure is enabled: true. By default, this attribute is set to false because warming does have an impact on your serverless costs on AWS. Warming up your Lambdas means that they are computing for a longer time, and costing you more money. We will publish another article in the future that will show you how to determine these costs for you and your business. For now, please look at this calculator to help you estimate your monthly compute OPEX costs. We do like this calculator at Servers.LOL because it gives you a tool that will let you compare your current EC2 costs against your proposed AWS Lambda Serverless costs.

The next property we think you need to know about is the - schedule: rate(60 minutes) attribute in the events: block. By default, the rate is set to 5 minutes. We think that for the purpose of this demo application we can leave it at once an hour to minimize our AWS costs. You can also customize this setting on a more granular level, setting it for certain times within certain days of the week to make sure your users can expect lower levels of latency at peak hours. For example, you can set your Lambda to Run WarmUp every 5 minutes on Monday to Friday between 8:00 am and 5:55 pm (UTC) with this setting: - schedule: 'cron(0/5 8-17 ? * MON-FRI *)'
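One more practical note from the plugin’s documentation: the scheduled warmup invocations hit your handlers just like real events, so each function should short-circuit them instead of running its actual business logic. Here is a sketch of that guard, assuming the plugin’s default event payload; the handler name and response body are our placeholders:

// Hypothetical guard at the top of a handler: return early on warmup pings
// so the scheduled invocations never execute your real business logic.
exports.starterService = (event, context, callback) => {
  if (event.source === "serverless-plugin-warmup") {
    console.log("WarmUp event - keeping the Lambda warm.");
    return callback(null, "Lambda is warm!");
  }

  // ...your real business logic continues below...
  callback(null, { statusCode: 200, body: JSON.stringify({ ok: true }) });
};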

If you are being perceptive right now, you will notice that this serverless.yml file is really letting us complete a lot of interesting tasks quickly, and without having to think too much about the impact of the resources we are conjuring up out of thin air. As you can see, Young Padawan, we are slowly but surely making our way through a concept known as Infrastructure As Code, and we are, albeit moderately for now, programmatically allocating and spinning up the cloud-based servers we need to keep our Lambdas warm with serverless-plugin-warmup.

We will get back to Infrastructure As Code in a bit, but this idea of serverless…servers??

Irony in the Cloud


Understanding AWS Lambda

We really must discuss how AWS Lambda executes the logic within our functions, to better understand a few of the important properties that make up our AWS FaaS paradigm. Below are a few details you should know about how Lambda works with you:

AWS Lambda Specs

Lambda will support the runtime environments listed below:

  • Node.js: v8.10 & v6.10
  • Java 8
  • Python v3.6 & v2.7
  • .NET Core: v1.0.1 & v2.0
  • Go v1.x
  • Ruby v2.5
  • Rust

Each Lambda will execute and compute inside of a container running a 64-bit Amazon Linux AMI. AWS will distribute our Lambda’s computational needs to each function according to the following system limits:

  • Memory Allocation: 128 MB – 3008 MB (allocated in 64 MB increments)
  • Ephemeral Disk Space: 512 MB
  • Max execution time (timeout): 15 minutes (900s)
  • Function Environment Variables: 4 KB
  • Function Layers: 5 layers
  • Deployment Package Size (unzipped): 250 MB
  • Deployment Package Size (zipped): 50 MB
  • Execution Threads (Processes): 1024

Lambda puts the brakes on the amount of resources that you can use to run and store your functions on the AWS cloud. The following default limits are set per-region by AWS and can be increased by special request only:

  • Concurrent Executions: 1000
  • Function & Layer Storage: 75 GB

The serverless design paradigm is a language agnostic approach that is meant to give engineers the ability to leverage AWS resources and infrastructure to better scale their products, i.e. your products, to a global marketplace, and to more quickly put your innovation into the hands of the users that need it the most.

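The original diagram is not reproduced here, so below is a minimal sketch of the handler structure it illustrated; the function body is our placeholder, not the diagram’s exact code:

// A minimal sketch of a Node.js Lambda handler named myLambda.
exports.myLambda = function(event, context, callback) {
  // event:    data about whatever triggered this invocation
  // context:  details about the runtime environment executing this code
  // callback: returns a result (or an error) to the caller
  callback(null, { statusCode: 200, body: JSON.stringify({ ok: true }) });
};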

In the sketch above, myLambda is the name of the Lambda function, written for the Node.js runtime environment. The event object has all the information about the event that triggered this Lambda for an async response; in the case of an http-request, it will contain the information you need about the specific request made to your application and its serverless-backend. The context object has information about the runtime environment that will execute our Lambda on AWS. When AWS completes the execution of the logic within our Lambda function, the callback function executes and provides you with the corresponding result or error needed to respond to the http-request.

The Stateless nature of AWS Lambda

Because our Lambda functions are stateless and execute inside of containers in the cloud, all of the code in the program’s file is executed and cached when the container first starts; while the container stays warm, only the code in the Lambda function handler runs on subsequent invocations. In the example below, the let sns = new aws.SNS(); statement, and the code above it, runs the first time your container is instantiated in the cloud; it is not run every time your Lambda is invoked. On the other hand, the myLambdaTopic handler function, shown as a module export in the example, runs every time we invoke the Lambda.

let aws = require('aws-sdk');
aws.config.update({region: 'us-east-1'});

// Everything above the handler runs once per container, on a cold start,
// and is then reused by every warm invocation.
let sns = new aws.SNS();

// The handler itself runs on every single invocation.
exports.myLambdaTopic = function(event, context, callback) {

  const params = {
    Message: `Message Posted to Topic.`,
    TopicArn: process.env.SNSTopicARN
  };

  sns.publish(params, (err, data) => {
    // Pass the error, or the publish result, back through the callback.
    callback(err, data);
  });
};

There is a /tmp directory inside of the 512 MB of Ephemeral Disk Space that the 64-bit Amazon Linux AMI gives to your Lambda, and its contents are effectively cached between invocations while your container stays warm. Using this directory is not a recommended approach to achieving stateful Lambda functions. There is no way to govern what happens to the storage given to this directory, because the cloud provider handles this abstraction of work. When your containers go Cold and are no longer cached, you will lose everything in the /tmp directory.
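To make that caveat concrete, here is a hypothetical handler that caches a file in /tmp. On a warm start the file is still there; after a cold start it is gone, which is exactly why /tmp should only ever be treated as a disposable cache, never as a store of record:

// tmpCacheDemo.js - a hypothetical sketch; do NOT treat /tmp as durable storage.
const fs = require("fs");

const CACHE_FILE = "/tmp/lookup-table.json";

exports.handler = async () => {
  // Rebuild the cache only when it is missing, i.e. after a cold start.
  if (!fs.existsSync(CACHE_FILE)) {
    const expensiveResult = { builtAt: new Date().toISOString() };
    fs.writeFileSync(CACHE_FILE, JSON.stringify(expensiveResult));
  }

  // Warm invocations skip the rebuild and read the cached copy instead.
  const data = JSON.parse(fs.readFileSync(CACHE_FILE, "utf8"));
  return { statusCode: 200, body: JSON.stringify(data) };
};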

Choose AWS as your Cloud Provider & Register

I am going to have to assume that you have an AWS Account and are a registered user with AWS for the sake of getting through this article in a reasonable amount of time. I hope that your curiosity has driven you to the wonders of the AWS Console, only to leave you in a state of despair, if not paralysis. Do not be ashamed; I really believe that I am not the only person on this planet terrified by the AWS Console when first starting out as a cloud professional. The first thing that ran through my head was, “with what time am I going to figure all of this out, now that I have learned how to master the art of the reversed Linked-List interview question?”. I terrified myself, to say the least. Don’t you worry though, young Silicon Valley Stallions, we will be here to walk you through every step of each of those services. One day, we will even show you how to launch a Machine Learning application on your very own Serverless backend! I am just not going to show you how to register with AWS.


Create AWS Developer Access Keys

To deploy your application’s resources using an Infrastructure As Code paradigm, you need to connect your development environment to AWS by authenticating your local machine with your AWS Access Keys that you can find within the IAM Service from your AWS Console.


When you get into your AWS Console, you need to click on the Services link on the top left side of the AWS navigation bar in your browser. Your AWS Console will continue to bombard you with everything it has. Take a deep breath, and look for the Security, Identity, & Compliance section inside of the Services menu. The first service in that section is the AWS IAM service, short for Identity & Access Management.

From your IAM Dashboard continue to click on the Users button from the navigation frame on the left side of your browser, and then click on the user that you will use to create a new Access Key. When the user‘s information renders itself within the browser for you, click on the tab called Security credentials and create a new Access Key as shown below.


Once you create a new Access Key, record both your Secret Access Key and your Access Key ID so you can configure the AWS-CLI locally from your terminal.


AWS will only let you create two Access Keys per user. It is a best practice to rotate these keys often and to store them securely. AWS will not allow you to view your Secret Access Key again after you initially create it, so be sure to record it in a safe place as soon as AWS displays it.

Install the AWS Command Line Interface (CLI)

The demo application we are using today to discuss and teach you these skills is all part of the ServerlessFramework technology stack. The awscli needs Python v2.7 or Python v3.4+ and pip to support our application environment. Below are links to the Python documentation repository to help you familiarize yourself with these installations:

With Python installed, use pip to install the awscli on (Linux, macOS, or Unix) and run:

  • $ sudo pip install awscli

Add your Access Key to your AWS CLI

Obtain your Access Key ID and your Secret Access Key from your AWS Console via the AWS IAM services console and run:

  • $ aws configure

With the credentials obtained from your IAM userID, enter the following information into the terminal prompts:

AWS Access Key ID [****************91RX]: <paste your data here>
AWS Secret Access Key [****************ADsM]: <paste your data here>
Default region name [us-east-1]: us-east-1
Default output format [json]: json
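If you want to sanity-check that those keys actually work before moving on, one quick option (our own verification idea, assuming you have the aws-sdk package installed with npm) is to ask AWS STS who you are from a short Node.js script:

// checkCredentials.js - a hypothetical sanity check for your configured keys.
const aws = require("aws-sdk");

const sts = new aws.STS();

sts.getCallerIdentity({}, (err, data) => {
  if (err) {
    console.error("Credentials are not configured correctly:", err.message);
  } else {
    console.log("Authenticated as:", data.Arn); // the IAM identity behind your keys
  }
});

Run it with $ node checkCredentials.js; if your aws configure step worked, it prints the ARN of the IAM user whose keys you pasted.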

SetUp Serverless Framework locally

To deploy our demo application with a serverless backend that handles our business logic with independent functions deployed to AWS Lambda, we will need to configure Lambda and APIGateway through the ServerlessFramework. The ServerlessFramework handles the configuration of our Lambda functions to use our code to respond to http requests triggered by APIGateway. The ServerlessFramework lets us use easy template files to programmatically describe the resources and infrastructure that we need AWS to provision for us, and on deployment, AWS CloudFormation does the job of instantiating the cloud based infrastructure that we call the serverless architecture on AWS. The serverless.yml file is where we declare the explicit resources that the ServerlessFramework will tell AWS CloudFormation to provision so we can run our application. Please make sure to install NPM Package Manager to complete this installation:

  • Install the ServerlessFramework globally and run:
    1. $ npm install serverless -g
  • Create a project structure that considers the Serverless + MicroService approach and clone the ServerlessStarterService demo application as follows:
    1. $ mkdir PayMyInvoice
    2. $ cd PayMyInvoice
    3. $ mkdir services
    4. $ cd services

    5. Critical Step: Install the repo and name it accordingly: $ serverless install --url http://github.com/lopezdp/ServerlessStarterService --name my-project

    In the command above, change the generic my-project name passed as an argument and call your serverless + microservice whatever you want. In our case, we decided to call ours invoice-log-api. We will be creating invoices and displaying a log of the invoices that our users send to their customers for payment. Try to use a logical name that describes your service when coming up with your project’s naming conventions. After using the ServerlessStarterService as a template and renaming it from your terminal, the ServerlessFramework will confirm the successful installation of your renamed service in its terminal output.

  • Your project structure should now look like this after you rename your template:
    PayMyInvoice
    |__ services
       |__ invoice-log-api (renamed from template)
       |__ FutureServerlessMicroService (TBD)
  • From here, navigate to your invoice-log-api project. Currently, you have a directory each for mocks and tests, a serverless.yml file, and a handler.js file that was part of the original template. We will be refactoring this directory with the files shown below, which we will review later as part of what you need to complete in your local environment to deploy this demo application. For now, just take a look at what we are proposing and try to understand the logic behind each Lambda. We will implement these a bit later.
    1. billing.js: This is our serverless Lambda function that will deploy our Stripe billing functionality to allow users to accept payment for the invoices that they create in the application.
    2. createInvoice.js: This serverless function will deploy the Lambda needed to let users of the PayMyInvoice application create new invoices in the application.
    3. getInvoice.js: This serverless function will deploy the Lambda needed to let users of the PayMyInvoice application obtain a specific invoice stored in the application.
    4. listInvoices.js: This serverless function will deploy the Lambda needed to let users of the PayMyInvoice application obtain a list of invoices stored in the application.
    5. updateInvoice.js: This serverless function will deploy the Lambda needed to let users of the PayMyInvoice application update a specific invoice stored in the application.
    6. deleteInvoice.js: This serverless function will deploy the Lambda needed to let users of the PayMyInvoice application delete a specific invoice stored in the application.
    7. serverless.yml: This is the configuration template used by the ServerlessFramework to tell AWS CloudFormation which resources we need provisioned for our application and how to configure them on AWS.
    8. /mocks: This is where we save the json files that we use in development, to mock http-request events to our serverless backend locally.
    9. /resources: This is the directory that we use to organize our programmatic resources and files.
    10. /tests: This is the directory where we save our Unit Tests. Typically, we will want to try to achieve at least 80% (or better) Unit Testing coverage.
  • This service relies on the dependencies that we list in the package.json file found in the root of the serverless project directory.
    1. Navigate to the root project directory: $ cd ~/PATH/PayMyInvoice/services/invoice-log-api
    2. Run: $ npm install

We will continue the review of these resources and their deployment to AWS in the chapters that follow. For now, we have to make sure you understand what is happening behind the scenes of this new Serverless Paradigm.

Your localhost serverless backend will now work and you can extend it for any feature you need to implement in the future. Good Luck!

Part 3: Configuring Infrastructure As Code, Mock Services, & Unit Testing

How To Build a Serverless React.js Application with AWS Lambda, API Gateway, & DynamoDB – Part 1

We did not hurt, injure, or maim any animals or ponies throughout the making of this tutorial. We completed all our heroic action sequences in CGI. Furthermore, we are not liable for what you build after learning how to be a Ninja. Never go Full Ulbricht.

How To SetUp Your local Serverless Environment

You can clone a sample of the application we will be using in this tutorial here: Serverless-Starter-Service

Please refer to this repo as you follow along with this tutorial.

Introduction

It is amazing how quickly the technology community iterates over innovative technology frameworks and architectural paradigms to solve the never-ending series of problems and work-arounds that a new tool or solution inevitably brings to the table. Even more enjoyable to watch is the relentless chatter and online commentary that spreads like wildfire all over the internet and the Twittersphere, discussing how to use the latest toolset and framework to build the next software that will Change the World for the Better. I must admit, although I sound cynical about the whole thing, I too am a part of the problem; I am an AWS Serverless Fanboy.


The aim of this tutorial is to deploy a simple application on AWS to share with you, our favorite readers, the joys of developing applications on the Serverless Framework with AWS. The application we will be walking you through today, includes a backend API service to handle basic CRUD operations built on something we like to call the DARN Technology Stack. Yes, we created a new acronym for your recruiter to get excited about. They will have no idea what you are talking about, but they will get you an interview with Facebook if you tell them that you completed this tutorial to become a fully-fledged and certified Application Developer on the DARN Cloud. (That is an outright lie. Please do not believe any guarantees the maniac who authored this article promises you.) The DARN Stack includes the following tool set:

  • DynamoDB
  • AWS Serverless Lambda
  • React.js
  • Node.js

Technology Stack

For a more precise list of tools that we will be implementing throughout the development lifecycle of this multi-part series, below I have highlighted the technologies that we will be using to deploy our application on AWS using the ServerlessFramework:

  • AWS Lambda & API Gateway to expose the API endpoints
  • DynamoDB is our NoSQL database
  • Cognito supplies user authentication and secures our APIs
  • S3 will host our application & file uploads
  • CloudFront will serve our application to the world
  • Route53 is our domain registry
  • Certificate Manager provides us with SSL/TLS certificates
  • React.js is our single page application
  • React Router to develop our application routing functionality
  • Bootstrap for the development of our React UI components
  • Stripe will process our credit card payments
  • GitHub will host our project repositories

Local Development Environment Setup & Configuration: Part 1

One of the more difficult activities I faced as a junior developer a very long, long time ago was understanding that the type of project I would be working on very much dictated how I would eventually have to configure my machine locally, to be ready to develop ground breaking and world changing software. Anytime you read a mention of world changing software from here on out, please refer to episodes of HBO’s Silicon Valley to understand the thick sarcasm sprinkled throughout this article. My point is that I have always felt that most tutorials seem to skim over the idea of setting up your local development environment, as if it were a given that developers were born with the inherent understanding of the differences between npm, yarn, bower, and the never ending list of package managers and slick tools available to ‘help’ you succeed at this life in SoftwareDevelopment… For a historical perspective with a holistic take on this matter, please brush up on your knowledge of a topic some people refer to as RPM Hell.

To avoid Dependency Hell, we have decided to codify and create a series of Best Practices you can take with you for the development of the application in this tutorial and any other projects you work on in the future. For the goals of completing the application in this tutorial please make sure to configure all local development machines using the tools, dependencies, and configuration parameters described in this article. This list is not definitive and is only meant as a baseline from which to begin the development of your applications as quickly and as easily as possible.

JavaScript Toolkit

The idea of a JavaScript Toolkit is to more easily onboard new engineers onto a fast-moving team of Software Stallions. I prefer the title of Ninja, but that’s not important right now. The important thing is to get the job and to act like you know what you are doing so that you can keep said job, no?

Anyway, there is a file you will be making use of called the package.json file that will eventually make this process easy for you every time you want to start a new project. What I mean is that once you understand the significance of all of the project dependencies declared in any random package.json file you find out there on the Internets, all you will really ever need to do is run $ npm install from the project directory of an application with a package.json file to install all of the dependencies declared. We are not going to do that in this Part 1 of our Serverless + React.js Mini-Series of a tutorial. Instead, I am going to hold your hand and walk you through each command, one, simple, step, at a time.

For those of you not interested in starting at the bottom because you are already Level-12 Ninja Assassins, then please… continue to Part 2: How To Configure Your Serverless Backend API – Not yet Published!. This section is for those who prefer to do things right.

Seriously though, this is what you will be working with throughout the course of this multi-part tutorial:

Install Node.js with NVM (Node Version Manager)

On 7 November 2018, AWS announced to the world that AWS Lambda will now officially support Node.js v8.10.0. This project will use NVM (Node Version Manager) to work with different versions of Node.js between projects, and to mitigate against any potential environment upgrades implemented in the future by any 3rd party vendors. To ensure that we are working with the correct version of Node.js for this project, please install nvm and node as follows:

  • Refer to the Node Version Manager Documentation if this information is out of date
  • Installation (Choose ONE based on your system OS):
    1. cURL: curl -o- http://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
    2. Wget: wget -qO- http://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
    3. The installation will add the following to your .bashrc or .bash_profile in your home directory:
    export NVM_DIR="$HOME/.nvm"
    [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
    [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
    
  • Run the following command to verify the installation of nvm:
    1. run from Terminal: $ command -v nvm
    2. Expected Output: nvm
  • On macOS, if you receive an error of nvm: command not found, then you need to add the following line to your .bash_profile found in your home directory, as shown below:
    1. source ~/.bashrc
  • To download, compile, and install a specific version of Node.js, please run the following command to obtain Node.js v8.10.0 as needed to work with AWS Lambda:
    1. $ nvm install 8.10.0
    2. You can list available versions to install using: $ nvm ls-remote
    3. The completed output should look like this, showing that you are now ready to start building services in AWS Lambda!


    Give yourself a pat on the back! Being able to hit the ground running as a new developer on a new team is a skill that many employers would pay a few bucks extra to have more of in their organizations. Always refer to this page, and tell your friends that this is where you can get the low-down on how to get it done without having to ask 10 questions on our favorite technology forums out in the ether.

Setup Editor and Install Linting

For JavaScript, React.js, and the build process that we are using in SublimeText3, we will continue by configuring linting and formatting, making use of OpenSource libraries and tools to help make this implementation of React.js more efficient. Please take the following steps to complete the installation of ESLint:

  • Install ESLint: Must install BOTH Globally & Locally
    1. Global Install: $ npm install -g eslint
    2. Local Install (must complete from the root of the project directory!!!): $ npm install --save-dev eslint
    3. Local Install of the babel-eslint library also (again, from the root of the project directory!!!): $ npm install --save-dev babel-eslint
  • Verify that you can run eslint from within the local project directory:
    1. Run from Terminal: $ ./node_modules/.bin/eslint -v
    2. Expected Output: v5.15.0 (Or Latest)
    3. Here is what the output should look like right now:


From this point forward, you can run $ eslint . and the linter will help you and your development team check syntax, find problems, and enforce code style across your entire organization. In my case, that means just team Wilson and I hammering away at the keyboard, debating the intricacies of JavaScript Memory Leaks and the best approach to efficient string concatenation. I am sure this is all very relatable. Here is what my code review process looks like when Wilson does not use the eslint settings I have specifically laid out for you today:

Wilson: The SCRUM Master


Confirm ESLint Configuration Settings & Attributes

From the project root directory, run $ ls -a and confirm that you can see the file called ~/MyApp/Backend/services/ServerlessStarterService/.eslintrc.js within the serverless project directory for the service you are implementing. Please look at the structure I have chosen to use as the $PATH for this example. Take this as a hint for now; we will go over this in detail a bit later. For now, simply open the file from within the ServerlessStarterService directory by running $ subl .eslintrc.js from your terminal to confirm that the following information is in the file:

module.exports = {
  "extends": ["eslint:recommended", "plugin:react/recommended"],
  "parser": "babel-eslint",
  "plugins": ["react"],
  "parserOptions": {
    "ecmaVersion": 8,
    "sourceType": "module",
    "ecmaFeatures": {
      "jsx": true
    }
  },
  "env": {
    "node": true,
    "es6": true,
    "es7": true
  },
  "rules": {
    "react/jsx-uses-react": 2,
    "react/jsx-uses-vars": 2,
    "react/react-in-jsx-scope": 2,
    "no-alert": 2,
    "no-array-constructor": 2,
    "no-caller": 2,
    "no-catch-shadow": 2,
    "no-labels": 2,
    "no-eval": 2,
    "no-extend-native": 2,
    "no-extra-bind": 2,
    "no-implied-eval": 2,
    "no-iterator": 2,
    "no-label-var": 2,
    "no-labels": 2,
    "no-lone-blocks": 2,
    "no-loop-func": 2,
    "no-multi-spaces": 2,
    "no-multi-str": 2,
    "no-native-reassign": 2,
    "no-new": 2,
    "no-new-func": 2,
    "no-new-object": 2,
    "no-new-wrappers": 2,
    "no-octal-escape": 2,
    "no-process-exit": 2,
    "no-proto": 2,
    "no-return-assign": 2,
    "no-script-url": 2,
    "no-sequences": 2,
    "no-shadow": 2,
    "no-shadow-restricted-names": 2,
    "no-spaced-func": 2,
    "no-trailing-spaces": 2,
    "no-undef-init": 2,
    "no-underscore-dangle": 2,
    "no-unused-expressions": 2,
    "no-use-before-define": 2,
    "no-with": 2,
    "camelcase": 2,
    "comma-spacing": 2,
    "consistent-return": 2,
    "curly": [2, "all"],
    "dot-notation": [2, {
      "allowKeywords": true
    }],
    "eol-last": 2,
    "no-extra-parens": [2, "functions"],
    "eqeqeq": 2,
    "key-spacing": [2, {
      "beforeColon": false,
      "afterColon": true
    }],
    "new-cap": 2,
    "new-parens": 2,
    "quotes": [2, "double"],
    "semi": 2,
    "semi-spacing": [2, {
      "before": false,
      "after": true
    }],
    "space-infix-ops": 2,
    "keyword-spacing": 2,
    "space-unary-ops": [2, {
      "words": true,
      "nonwords": false
    }],
    "strict": [2, "global"],
    "yoda": [2, "never"]
  }
};

Please add the .eslintrc.js file with the information above if you do not have it in your directory as shown.

Moving forward, we can continue to install the ESLint plugins that we will need to use for our React.js implementations also. In the root of the project directory, run the following two commands to install the React.js plugin for ESLint:

  • $ npm install eslint-plugin-react --save-dev
  • $ npm install -g eslint-plugin-react (install globally also)

Running $ ./node_modules/.bin/eslint src (with src being the path to your JavaScript files) will now parse your sources with Babel and provide you with linting feedback in the command line.

We think it is going to be best (and just easiest, quite frankly) to alias that last command with something like $ npm run lint as part of your automation approach, by adding the following script to the package.json file that you can find in the root directory of your application:

"scripts": {
    "lint": "eslint .",
    ...
  },

  • Run: $ npm run lint to lint your project from the terminal.

The linter will now supply feedback on syntax, bugs, and problems, while enforcing code style, all from within the comfort of your terminal. Do not fool yourself, this will work in vim also! For all those :wq! fans out there, hope is on our side… #UseTheForce.

A typical output running ESLint from the terminal as instructed would look something like this (depending on your project and the silly errors you make):

npmRunLint

One more thing: from within the project root directory, please make sure that a file called .esformatter exists. You are going to need that $ ls -a command to find it. If it does not exist, go ahead and create it with a simple $ touch .esformatter and :wq! the following bit of information into the file (if it already exists, just make sure this is in there!):

{
  "preset": "default",

  "plugins": [
    "esformatter-quotes",
    "esformatter-semicolons",
    "esformatter-literal-notation",
    "esformatter-parseint",
    "esformatter-spaced-lined-comment",
    "esformatter-var-each",
    "esformatter-collapse-objects",
    "esformatter-remove-trailing-commas",
    "esformatter-quote-props"
  ],

  "quotes": {
    "type": "double"
  },

  "collapseObjects": {
    "ObjectExpression": {
      "maxLineLength": 79,
      "maxKeys": 1
    }
  },

  "indent": {
    "value": "  ",
    "AlignComments": false
  },

  "whiteSpace": {
    "before": {
      "ObjectExpressionClosingBrace": 0,
      "ModuleSpecifierClosingBrace": 0,
      "PropertyName": 1
    },
    "after": {
      "ObjectExpressionOpeningBrace": 0,
      "ModuleSpecifierOpeningBrace": 0
    }
  }
}

From here on out, every time you save your JavaScript files, all the formatting completes automatically for you. Sort of; we just have to configure all of this in our trusty SublimeText3 text editor now!

Before we get into it, yes, you can use vim. I promise you, you will find people who will die by vim throughout your career. My belief: keep it simple, buy the SublimeText3 license, and just use that over every other IDE that will bombard the spam folder in your email. SublimeText3 provides the simplicity of vim without having to memorize abstract commands like :wq! that will inevitably leave you with a mess of *.swp files, because I guarantee that most of you will never clean them out, and it becomes a hassle. Keep it simple, do not over complicate your life in an already all too complicated environment, and get the tool that is really cheap, easy on the eyes, and provides enough of the fancy functionality of a really expensive IDE, and the simplicity of vim, without having to turn your anonymous functions into ES6 syntax with an esoteric regex-replace like this: $ :%s/function \?(\(.*\)) {/(\1) => {/.

SublimeText3 Configuration

To complete the configuration in SublimeText3 please install PackageControl and the following packages from within SublimeText3:

  • The easiest way to achieve this is to navigate to the SublimeText3 View –> ShowConsole menu, and continue to paste the following code provided by PackageControl:
    # For SublimeText3 ONLY!
    
    import urllib.request,os,hashlib;
    h = '6f4c264a24d933ce70df5dedcf1dcaee' + 'ebe013ee18cced0ef93d5f746d80ef60';
    pf = 'Package Control.sublime-package';
    ipp = sublime.installed_packages_path();
    urllib.request.install_opener( urllib.request.build_opener( urllib.request.ProxyHandler()) );
    by = urllib.request.urlopen( 'http://packagecontrol.io/' + pf.replace(' ', '%20')).read();
    dh = hashlib.sha256(by).hexdigest();
    print('Error validating download (got %s instead of %s), please try manual install' % (dh, h)) if dh != h else open(os.path.join( ipp, pf), 'wb' ).write(by)
    

    The configuration parameters supplied above will generate an Installed Packages directory on your local machine (if needed). The command downloads the Package Control.sublime-package over HTTP instead of HTTPS because of known Python standard library constraints, and a SHA-256 hash is applied to confirm that the download is in fact a valid file.

    WARNING: Please do not copy or install this code via this tutorial or our website. It will change with every release of PackageControl. Please make sure to view the Official PackageControl Release Documentation page and installation instructions to get the most recent version of the code shared in this tutorial. You can obtain the most up-to-date information needed directly from: PackageControl.

  • Next, please continue to navigate through to SublimeText –> Preferences –> PackageControl –> InstallPackages and install the following tools:
    1. Babel
    2. ESLint
    3. JSFMT
    4. SublimeLinter
    5. SublimeLinter-eslint (Connector)
    6. SublimeLinter-annotations
    7. Oceanic Next Color Scheme
    8. Fix Mac Path (If using MacOS ONLY)
  • Once you have completed the above, go ahead and navigate to View –> Syntax –> Open all with current extension as ... –> Babel –> JavaScript (Babel) with any file.js open, to configure the editor so that JS and JSX files use this syntax.
  • It is helpful to set up a JSX formatter for your text editor also. In our case, we are using esformatter-jsx. You will have to figure out where your SublimeText3 Packages directory is to complete your jsfmt configuration. Run the following commands to complete this step:
    1. $ cd ~/Library/Application\ Support/Sublime\ Text\ 3/Packages/jsfmt
    2. $ npm install jsfmt
    3. Try to run $ npm ls esformatter before continuing. If esformatter is already installed, you only need to run: $ npm install esformatter-jsx. Otherwise, install both with: $ npm install esformatter esformatter-jsx
    4. Test the installation of each package and run: $ npm ls <package> (See the expected output below)


  • From the jsfmt package directory (see Step #1 above if you do not remember!), run $ ls -a, then open the file called jsfmt.sublime-settings in SublimeText3 with $ subl jsfmt.sublime-settings and paste the following configuration settings:
{
  // autoformat on file save events
  "autoformat": false,

  // This is an array of extensions for autoformatting
  "extensions": ["js",
    "jsx",
    "sublime-settings"
  ],

  // options for jsfmt
  "options": {
    "preset": "jquery",
    // plugins included
    "plugins": [
      "esformatter-jsx"
    // "esformatter-quotes",
    // "esformatter-semicolons",
    // "esformatter-braces",
    // "esformatter-dot-notation"
    ],
    "jsx": {
      "formatJSX": true, // Default value
      "attrsOnSameLineAsTag": false, // move each attribute to its own line
      "maxAttrsOnTag": 3, // if lower or equal than 3 attributes, they will be on a single line
      "firstAttributeOnSameLine": true, // keep the first attribute in the same line as the tag
      "formatJSXExpressions": true, // default is true, if false, then jsxExpressions does not apply formatting recursively
      "JSXExpressionsSingleLine": true, // default is true, if false the JSXExpressions may span several lines
      "alignWithFirstAttribute": false, // do not align attributes with the first tag
      "spaceInJSXExpressionContainers": " ", // default to one space. Make it empty if you do not like spaces between JSXExpressionContainers
      "removeSpaceBeforeClosingJSX": false, // default false. if true <React.Component /> => <React.Component />
      "closingTagOnNewLine": false, // default false. if true attributes on multiple lines will close the tag on a new line
      "JSXAttributeQuotes": "", // values "single" or "double". Leave it as empty string if you do not want to change the attributes' quotes
      "htmlOptions": {
        // put here the options for js-beautify.html
      }
    }
  },
  "options-JSON": {
    "plugins": [
      "esformatter-quotes"
    ],
    "quotes": {
      "type": "double"
    }
  },
  "node-path": "node",
  "alert-errors": true,
  "ignore-selection": false
}

Install and Activate Color Theme optimized for React.js

OceanicNext is a color scheme and syntax highlighter for SublimeText3 that is perfect for babel-sublime JavaScript and React.js. Please go ahead and review the Oceanic Next Documentation and Installation Instructions. However, it would be great if you could complete the instructions below so we can get through the rest of this tutorial. We may even start building something now that we have a legitimate local environment to work on.

  • Navigate to SublimeText –> Preferences –> Settings –> User menu and add the following configuration parameters:
    "color_scheme": "Package/Oceanic Next Color Scheme/Oceanic Next.tmTheme",
    "theme": "Oceanic Next.sublime-theme",
    
  • Navigate to SublimeText –> Preferences –> PackageControl –> InstallPackages and install the following:
    1. Oceanic Next Color Scheme
  • Select the correct theme from: SublimeText –> Preferences –> ColorScheme –> Oceanic Next

From this point forward, your local development environment is ready to work with, and will be able to provide you and your Scrum Master Wilson with the right kind of automated feedback you need as a professional Jedi, I mean… Engineer, so that you can debug and extend the most complex of Mobile Applications on the DARN Cloud.

If you can figure out a better way to make the technicalities of setting up linting and formatting on a Linux machine, without ever having known the terminal, a tad bit more entertaining and easier to digest than this little tutorial we put you through today, then please enlighten my friends and me, pretending to be the masters of the Dark Side. UNTIL THEN:

Your local environment is ready for work and is configured correctly… Good Luck!

 

Part 2: localhost Serverless + Microservice & The NEW Backend Paradigm