
How To Build a Serverless React.js Application with AWS Lambda, API Gateway, & DynamoDB – Part 2

How To Configure your Serverless Backend on AWS

This is a continuation of our multi-part series on building a simple web application on AWS using AWS Lambda and the ServerlessFramework. You can review the first part of this series, which covers the setup of your local environment, here:

You can also clone a sample of the application we will be using in this tutorial here: Serverless-Starter-Service

Please refer to this repo as you follow along with this tutorial.

The Serverless Architecture

Serverless programming and computing, or serverless for short, is a software architecture and execution paradigm in which the cloud service provider (AWS, Google Cloud, Azure) is the entity responsible for running a piece of backend logic that you write in the form of a stateless function. In our case we are using AWS Lambda. The cloud provider you choose to run your stateless functions is responsible for executing your code in the cloud, and will dynamically allocate the resources needed to run your backend logic, abstracting the deployment infrastructure away so that you can focus on developing your product instead of auto-scaling servers. This paradigm is also known as Functions as a Service (FaaS), and the code runs inside of stateless containers that may be triggered by a number of events, including cron jobs, http requests, queuing services, database events, alerts, file uploads, etc.

Serverless Considerations

Since the serverless paradigm abstracts away the need for an engineer to configure the underlying physical infrastructure typically required to deploy a modern application, there are a few considerations you should keep in mind in this new Functions as a Service (FaaS) reality as we proceed through the development of our Single Page Application:

Stateless Computing

When we deploy a serverless + microservice architecture, the functions that we declare as part of our application API execute our application’s logic inside of stateless containers managed for us by our cloud service provider, in our case AWS. The significance of this is that our code does not run the way it typically would on a server that keeps executing long after a request has completed. There is no prior execution context available to serve a request to the users of your application when running AWS Lambda. Throughout the development of your serverless applications, you MUST assume that AWS Lambda will invoke your function as if it were in its initial state every time, with no contextual data to work with, because a given container will not be handling concurrent requests. Your backend is stateless, and each function should strive to return an idempotent response.

Serverless + Microservices

When developing a serverless application, we need to structure it so that the functions we declare as part of the backend logic are defined as individual services that mimic a microservice-based architecture, which helps us keep the size of each function small. The goal is to loosely couple the functionality between the services deployed as part of the serverless backend, so that each function handles an independent piece of functionality and can return a response to the user without relying on any other service.

Cold Starts

Because our functions execute inside of stateless containers that our cloud service provider (in our case AWS) manages for us, there is a bit of latency associated with each http request to our serverless “backend”. Our stateless infrastructure is dynamically allocated to respond to the events triggered by our application, and although a container is typically kept “alive” for a short time after the Lambda finishes executing, the resources will eventually be deallocated, which leads to slower-than-desired responses for new requests. We refer to these situations as Cold Starts.

Cold Start durations typically last from a couple of hundred milliseconds up to a few seconds. The size of your functions and the runtime language you use can both affect the Cold Start duration in AWS Lambda. It is important to understand that our containers can remain in an Active state in AWS Lambda after the first request to our serverless endpoint has completed its execution routine. If our application triggers a subsequent request to that Lambda immediately after the previous request has completed, the stateless container that backs our endpoint on AWS will respond to this next request almost immediately, with little to no latency; this is called a Warm Start, and it is what we want to maintain to keep our application running optimally.

The wonderful thing about working with the ServerlessFramework library is the robust community that contributes to its development and evolution as Serverless becomes more of a thing. Dynamically distributing resources programmatically to deploy applications to the cloud just feels more practical from where I am sitting as an engineer working quietly on the cyber front. Take a second to bookmark this List of plugins developed by the community for the ServerlessFramework. We are going to use the serverless-plugin-warmup package on NPM, to make sure that our functions keep warm and purring like my dad’s old 1967 Camaro. I personally had a thing for Lane Meyer’s (John Cusack) ’67 Camaro in the 1980’s Cult Classic Better Off Dead.

I want my $2!

Configure AWS Lambda & Decrease your Application’s Latency with Warm Starts

As discussed above, we will be keeping our Lambdas warm during the hibernation season with serverless-plugin-warmup. In this next section, we will walk you through each step of the installation of this plugin. Remember, you can also refer to the sample application we will be using in this tutorial to follow along, here: Serverless-Starter-Service.

ServerlessWarmup eliminates Cold Start latency by creating a separate Lambda that schedules and invokes all the services you select from your API at a time interval of your choice (the default is every 5 minutes), thereby forcing your Lambdas to stay warm. From within the root of your serverless project directory, continue to install ServerlessWarmup as follows:

  • Run: $ npm install --save-dev serverless-plugin-warmup

Add the following line to the plugins block in your serverless.yml file, which you can find in the root of your service’s directory:

plugins:
  - serverless-plugin-warmup

Take a look at my file if you need a reference point:

serverless.yml plugin block

Moving along now, you will want to configure serverless-plugin-warmup in the custom: block of your service’s serverless.yml file. Remember, each service will always have its own serverless.yml file that defines the AWS Lambda endpoint for each serverless + microservice implemented for each path defined in the functions: block of the file. There are a few configuration settings you can read more about in the serverless-plugin-warmup documentation repository. Here we will go over what we think are the most important settings you should at least be familiar with for now. Your custom: block in your serverless.yml file should look something like this:

custom:
  # Stages are based on what we pass into the CLI when running
  # serverless commands. Or fallback to settings in provider section.
  stage: ${opt:stage, self:provider.stage}

  # Load webpack config
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true

  # ServerlessWarmup Configuration
  # See configuration Options at:
  # http://github.com/FidelLimited/serverless-plugin-warmup
  warmup:
    enabled: true # defaults to false
    folderName: '_warmup' # Name of folder generated for warmup
    memorySize: 256
    events:
      # Run WarmUp every 60 minutes
      - schedule: rate(60 minutes)
    timeout: 20

Inside of our custom: block you need to declare a warmup: resource that the ServerlessFramework will use to create a new Lambda function on your behalf on AWS. Again, this Lambda is going to use serverless-plugin-warmup to keep your Serverless + Microservices warm and latency-free. The primary setting that we need to configure is enabled: true. By default, this attribute is set to false because warming up your functions does have an impact on your serverless costs on AWS. Warming up your Lambdas means that they are computing for a longer time, and costing you more money. We will publish another article in the future that will show you how to determine these costs for you and your business. For now, please take a look at this calculator to help you estimate your monthly compute OPEX costs. We do like this calculator at Servers.LOL because it gives you a tool that will let you compare your current EC2 costs against your proposed AWS Lambda Serverless costs.

The next property we think you need to know about is the - schedule: rate(60 minutes) attribute in the events: block. By default, the rate is set to 5 minutes. For the purposes of this demo application, we can leave it at once an hour to minimize our AWS costs. You can also customize this setting on a more granular level, scheduling it for certain times on certain days of the week to make sure your users see lower levels of latency at peak hours. For example, you can set your Lambda to run WarmUp every 5 minutes, Monday to Friday between 8:00 am and 5:55 pm (UTC), with this setting: - schedule: 'cron(0/5 8-17 ? * MON-FRI *)'
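To make that concrete, here is a hedged sketch of how that cron expression would slot into the warmup: block we configured above (the other values are simply carried over from our earlier example):

warmup:
  enabled: true
  folderName: '_warmup'
  memorySize: 256
  events:
    # Keep our Lambdas warm every 5 minutes, Monday to Friday, 8:00am-5:55pm UTC
    - schedule: 'cron(0/5 8-17 ? * MON-FRI *)'
  timeout: 20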

If you are being perceptive right now, you will notice that this serverless.yml file is letting us complete a lot of interesting tasks quickly, without having to think too much about the impact of the resources we are conjuring up out of thin air. As you can see, Young Padawan, we are slowly but surely making our way toward a concept known as Infrastructure As Code, and we are, albeit moderately for now, programmatically allocating and spinning up the cloud-based servers we need to keep our Lambdas warm with serverless-plugin-warmup.

We will get back to Infrastructure As Code in a bit, but this idea of serverless…servers??

Irony in the Cloud

Understanding AWS Lambda

To better understand a few of the important properties that make up the AWS FaaS paradigm, we really must discuss how AWS Lambda executes the logic within our functions. Below are a few details you should know about how Lambda works for you:

AWS Lambda Specs

Lambda will support the runtime environments listed below:

  • Node.js: v8.10 & v6.10
  • Java 8
  • Python v3.6 & v2.7
  • .NET Core: v1.0.1 & v2.0
  • Go v1.x
  • Ruby v2.5
  • Rust (via custom runtimes)

Each Lambda executes inside of a container running a 64-bit Amazon Linux AMI. AWS allocates computational resources to each Lambda according to the following limits:

  • Memory Allocation: 128 MB – 3008 MB (allocated in 64 MB increments)
  • Ephemeral Disk Space: 512 MB
  • Max Execution Time (timeout): 15 minutes (900s)
  • Function Environment Variables: 4 KB
  • Function Layers: 5 layers
  • Deployment Package Size (unzipped): 250 MB
  • Deployment Package Size (zipped): 50 MB
  • Execution Threads (Processes): 1024

Lambda puts the brakes on the amount of compute and storage resources that you can use to run and store your functions on the AWS cloud. The following default limits are set per region by AWS and can be increased by special request only:

  • Concurrent Executions: 1000
  • Function & Layer Storage: 75 GB

The serverless design paradigm is a language-agnostic approach that is meant to give engineers the ability to leverage AWS resources and infrastructure to better scale their products, i.e. your products, to a global marketplace and to more quickly put your innovation into the hands of the users that need it the most.

In the image, myLambda is the name of the Lambda function written for the Node.js runtime environment shown above. The event object has all the information about the event that triggered this Lambda; in the case of an http request, it will contain the information you need about the specific request made to your application and its serverless backend. The context object has information about the runtime environment that will execute our Lambda on AWS. When AWS completes the execution of the logic within our Lambda function, the callback function executes and provides you with the corresponding result or error needed to respond to the http request.
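In case the image does not render for you, here is a minimal sketch of what a handler like myLambda could look like in Node.js (the response shape is an illustrative assumption for an API Gateway http request, not code from the demo application):

exports.myLambda = (event, context, callback) => {
  // event: data about whatever triggered this invocation,
  // e.g. the http request forwarded by API Gateway
  console.log("Received event: ", JSON.stringify(event));

  // context: details about the runtime environment executing this invocation
  console.log("Time remaining (ms): ", context.getRemainingTimeInMillis());

  // callback(error, result): ends the invocation and returns the result (or error) to the caller
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from myLambda!" })
  });
};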

The Stateless nature of AWS Lambda

Because our Lambda functions are stateless and execute inside of containers in the cloud, all of the code in the function’s file is executed and cached when the container first starts, and only the code inside the Lambda handler itself runs on subsequent invocations while the container stays warm. In the example below, the let sns = new aws.SNS(); call runs the first time your container is instantiated in the cloud. That call, and the code above it, is not run every time your Lambda is invoked. The myLambdaTopic handler function, shown as a module export in the example, does run every time we invoke the Lambda.

let aws = require('aws-sdk');
aws.config.update({region: 'us-east-1'});

// Runs once per container, at Cold Start, and is then cached for Warm Starts.
let sns = new aws.SNS();

// Runs on every invocation of the Lambda.
exports.myLambdaTopic = function(event, context, callback) {

  const params = {
    Message: `Message Posted to Topic.`,
    TopicArn: process.env.SNSTopicARN
  };

  sns.publish(params, (err, data) => {
    if (err) {
      // Hand the error back to Lambda so the invocation fails visibly.
      return callback(err);
    }
    callback(null, data);
  });
};

There is a /tmp directory inside of the 512 MB of Ephemeral Disk Space that your 64-bit Amazon Linux AMI gives to your Lambda, and its contents are effectively cached for as long as the container that AWS Lambda spins up for your application stays warm. Using this directory is not a recommended approach to achieving stateful Lambda functions. There is no way to govern what happens to the storage backing this directory because the cloud provider handles this abstraction for you. When your containers go Cold and are no longer cached, you will lose everything in the /tmp directory.
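If you are curious, here is a hedged sketch of what treating /tmp as a best-effort cache might look like (the file name and payload are hypothetical); nothing written here is guaranteed to survive beyond the current warm container:

const fs = require("fs");

// Hypothetical cache location inside the 512 MB of ephemeral disk space
const CACHE_FILE = "/tmp/expensive-result.json";

exports.handler = (event, context, callback) => {
  let result;

  if (fs.existsSync(CACHE_FILE)) {
    // Warm container: reuse the value cached by a previous invocation
    result = JSON.parse(fs.readFileSync(CACHE_FILE, "utf8"));
  } else {
    // Cold container: recompute the value and cache it for as long as this container lives
    result = { computedAt: new Date().toISOString() };
    fs.writeFileSync(CACHE_FILE, JSON.stringify(result));
  }

  callback(null, result);
};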

Choose AWS as your Cloud Provider & Register

I am going to have to assume that you have an AWS account and are a registered user with AWS for the sake of getting through this article in a reasonable amount of time. I hope that your curiosity has already driven you to the wonders of the AWS Console and left you in a state of despair, if not paralysis. Do not be ashamed; I really believe that I am not the only person on this planet terrified by the AWS Console when first starting out as a cloud professional. The first thing that ran through my head was, “with what time am I going to figure all of this out, now that I have learned how to master the art of the reversed Linked-List interview question?” I terrified myself, to say the least. Do not worry though, young Silicon Valley Stallions, we will be here to walk you through every step of each of those services. One day, we will even show you how to launch a Machine Learning application on your very own Serverless backend! I am just not going to show you how to register with AWS.

Create AWS Developer Access Keys

To deploy your application’s resources using an Infrastructure As Code paradigm, you need to connect your development environment to AWS by authenticating your local machine with your AWS Access Keys that you can find within the IAM Service from your AWS Console.

When you get into your AWS Console, you need to click on the Services link on the top left side of the AWS navigation bar in your browser. Your AWS Console will continue to bombard you with everything it has. Take a deep breath, and look for the Security, Identity, & Compliance section inside of the Services menu. The first service in that section is the AWS IAM service, short for Identity & Access Management.

From your IAM Dashboard, click on the Users button in the navigation frame on the left side of your browser, and then click on the user that you will use to create a new Access Key. When the user‘s information renders in the browser, click on the tab called Security credentials and create a new Access Key as shown below.

Once you create a new Access Key, record both your Secret Access Key and your Access Key ID so you can configure the AWS CLI locally from your terminal.

AWS will only let you create two Access Keys per user. It is a best practice to rotate these keys often and to store them securely. AWS will not allow you to view your Secret Access Key again after you initially create it, so be sure to record it in a safe place as soon as you see the screen above.

Install the AWS Command Line Interface (CLI)

The demo application we are using today to discuss and teach you these skills is built on the Serverless Framework technology stack. The awscli needs Python v2.7 or Python v3.4+ and pip to support our application environment. Below are links to the Python documentation to help you familiarize yourself with these installations:

With Python installed, use pip to install the awscli (on Linux, macOS, or Unix) and run:

  • $ sudo pip install awscli

Add your Access Key to your AWS CLI

Obtain your Access Key ID and your Secret Access Key from your AWS Console via the AWS IAM services console and run:

  • $ aws configure

With the credentials obtained from your IAM userID, enter the following information into the terminal prompts:

AWS Access Key ID [****************91RX]: <paste your data here>
AWS Secret Access Key [****************ADsM]: <paste your data here>
Default region name [us-east-1]: us-east-1
Default output format [json]: json

SetUp Serverless Framework locally

To deploy our demo application with a serverless backend that handles our business logic with independent functions deployed to AWS Lambda, we will need to configure Lambda and API Gateway using the ServerlessFramework. The ServerlessFramework handles the configuration of our Lambda functions so that our code responds to http requests triggered by API Gateway. The ServerlessFramework lets us use simple template files to programmatically describe the resources and infrastructure that we need AWS to provision for us, and on deployment, AWS CloudFormation does the job of instantiating the cloud-based infrastructure that we call the serverless architecture on AWS. The serverless.yml file is where we declare those resources within the ServerlessFramework, to tell AWS CloudFormation what we need from AWS to run our application. Please make sure you have the NPM package manager installed to complete this installation:

  • Install the ServerlessFramework globally and run:
    1. $ npm install serverless -g
  • Create a project structure that considers the Serverless + MicroService approach and clone the ServerlessStarterService demo application as follows:
    1. $ mkdir PayMyInvoice
    2. $ cd PayMyInvoice
    3. $ mkdir services
    4. $ cd services

    5. Critical Step: Install the repo and name it accordingly: $ serverless install --url http://github.com/lopezdp/ServerlessStarterService --name my-project

    6. In the command above, change the generic my-project name passed as an argument and call your serverless + microservice whatever you want. In our case, we decided to call ours invoice-log-api. We will be creating invoices and displaying a log of the invoices that our users send to their customers for payment. Try to use a logical name that describes your service when coming up with your project’s naming conventions. After using the ServerlessStarterService as a template and renaming it from your terminal, your output should look something like this:

  • Your project structure should now look like this after you rename your template:
    PayMyInvoice
    |__ services
       |__ invoice-log-api (renamed from template)
       |__ FutureServerlessMicroService (TBD)
  • From here, navigate to your invoice-log-api project. Currently, you have a directory each for mocks and tests, a serverless.yml file, and a handler.js file that was part of the original template. We will be refactoring this directory with the files shown below, which we will review later as part of what you need to complete in your local environment to deploy this demo application. For now, just take a look at what we are proposing and try to understand the logic behind each Lambda. We will implement these a bit later.
    1. billing.js: This is our serverless Lambda function that will deploy our Stripe billing functionality to allow users to accept payment for the invoices that they create in the application.
    2. createInvoice.js: This serverless function will deploy the Lambda needed to let users of the PayMyInvoice application create new invoices in the application.
    3. getInvoice.js: This serverless function will deploy the Lambda needed to let users of the PayMyInvoice application obtain a specific invoice stored in the application.
    4. listInvoices.js: This serverless function will deploy the Lambda needed to let users of the PayMyInvoice application obtain a list of invoices stored in the application.
    5. updateInvoice.js: This serverless function will deploy the Lambda needed to let users of the PayMyInvoice application update a specific invoice stored in the application.
    6. deleteInvoice.js: This serverless function will deploy the Lambda needed to let users of the PayMyInvoice application delete a specific invoice stored in the application.
    7. serverless.yml: This is the configuration template used by the ServerlessFramework to tell AWS CloudFormation which resources we need provisioned for our application and how to configure them on AWS (we sketch an example functions: block just after this list).
    8. /mocks: This is where we save the json files that we use in development, to mock http-request events to our serverless backend locally.
    9. /resources: This is the directory that we use to organize our programmatic resources and files.
    10. /tests: This is the directory where we save our Unit Tests. Typically, we will want to try to achieve at least 80% (or better) Unit Testing coverage.
  • This service relies on the dependencies that we list in the package.json file found in the root of the serverless project directory.
    1. Navigate to the root project directory: $ cd ~/PATH/PayMyInvoice/services/invoice-log-api
    2. Run: $ npm install
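To give you a rough idea of where these handlers will eventually be wired up, here is a hedged sketch of what the functions: block of the invoice-log-api service’s serverless.yml might look like (the handler exports, paths, and methods are illustrative assumptions, not the final code we will write later):

functions:
  createInvoice:
    handler: createInvoice.main
    events:
      - http:
          path: invoices
          method: post
  getInvoice:
    handler: getInvoice.main
    events:
      - http:
          path: invoices/{id}
          method: get
  listInvoices:
    handler: listInvoices.main
    events:
      - http:
          path: invoices
          method: get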

We will continue the review of these resources and their deployment to AWS in the chapters that follow. For now, we have to make sure you understand what is happening behind the scenes of this new Serverless Paradigm.

Your localhost serverless backend will now work and you can extend it for any feature you need to implement in the future. Good Luck!

Part 3: Configuring Infrastructure As Code, Mock Services, & Unit Testing

How To Build a Serverless React.js Application with AWS Lambda, API Gateway, & DynamoDB – Part 1

We did not hurt, injure, or maim any animals or ponies throughout the making of this tutorial. We completed all our heroic action sequences in CGI. Furthermore, we are not liable for what you build after learning how to be a Ninja. Never go Full Ulbricht.

How To SetUp Your local Serverless Environment

You can clone a sample of the application we will be using in this tutorial here: Serverless-Starter-Service

Please refer to this repo as you follow along with this tutorial.

Introduction

It is amazing how quickly the technology community iterates over innovative technology frameworks and architectural paradigms to solve the never-ending series of problems and work-arounds that a new tool or solution inevitably brings to the table. Even more enjoyable to watch is the relentless chatter and commentary that spreads like wildfire all over the internet and the Twittersphere, discussing how to use the latest toolset and framework to build the next software that will Change the World for the Better. I must admit, although I sound cynical about the whole thing, I too am a part of the problem; I am an AWS Serverless Fanboy.

The aim of this tutorial is to deploy a simple application on AWS to share with you, our favorite readers, the joys of developing applications on the Serverless Framework with AWS. The application we will be walking you through today, includes a backend API service to handle basic CRUD operations built on something we like to call the DARN Technology Stack. Yes, we created a new acronym for your recruiter to get excited about. They will have no idea what you are talking about, but they will get you an interview with Facebook if you tell them that you completed this tutorial to become a fully-fledged and certified Application Developer on the DARN Cloud. (That is an outright lie. Please do not believe any guarantees the maniac who authored this article promises you.) The DARN Stack includes the following tool set:

  • DynamoDB
  • AWS Lambda
  • React.js
  • Node.js

Technology Stack

For a more precise list of tools that we will be implementing throughout the development lifecycle of this multi-part series, below I have highlighted the technologies that we will be using to deploy our application on AWS using the ServerlessFramework:

  • AWS Lambda & API Gateway to expose the API endpoints
  • DynamoDB is our NoSQL database
  • Cognito supplies user authentication and secures our APIs
  • S3 will host our application & file uploads
  • CloudFront will serve our application to the world
  • Route53 is our domain registry
  • Certificate Manager provides us with SSL/TLS certificates
  • React.js is our single page application
  • React Router to develop our application routing functionality
  • Bootstrap for the development of our React UI components
  • Stripe will process our credit card payments
  • GitHub will host our project repositories

Local Development Environment Setup & Configuration: Part 1

One of the more difficult things I faced as a junior developer a very long, long time ago was understanding that the type of project I would be working on very much dictated how I would eventually have to configure my machine locally to be ready to develop ground breaking and world changing software. Anytime you read a mention of world changing software from here on out, please refer to episodes of HBO’s Silicon Valley to understand the thick sarcasm sprinkled throughout this article. My point is that I have always felt that most tutorials seem to skim over the idea of setting up your local development environment, as if it were a given that developers were born with the inherent understanding of the differences between npm, yarn, bower, and the never-ending list of package managers and slick tools available to ‘help’ you succeed at this life in Software Development… For a historical perspective with a holistic take on this matter, please brush up on your knowledge of a topic some people refer to as RPM Hell.

To avoid Dependency Hell, we have decided to codify and create a series of Best Practices you can take with you for the development of the application in this tutorial and any other projects you work on in the future. For the goals of completing the application in this tutorial please make sure to configure all local development machines using the tools, dependencies, and configuration parameters described in this article. This list is not definitive and is only meant as a baseline from which to begin the development of your applications as quickly and as easily as possible.

JavaScript Toolkit

The idea of a JavaScript Toolkit is to more easily onboard new engineers onto a fast-moving team of Software Stallions. I prefer the title of Ninja, but that’s not important right now. The important thing is to get the job and to act like you know what you are doing so that you can keep said job, no?

Anyway, there is a file you will be making use of, called package.json, that will eventually make this process easy for you every time you want to start a new project. What I mean is that once you understand the significance of all of the project dependencies declared in any random package.json file you find out there on the Internets, all you will really ever need to do is run $ npm install from the project directory of an application with a package.json file to install all of the declared dependencies. We are not going to do that in this Part 1 of our Serverless + React.js Mini-Series. Instead, I am going to hold your hand and walk you through each command, one, simple, step, at a time.

For those of you not interested in starting at the bottom because you are already Level-12 Ninja Assassins, please… continue to Part 2: How To Configure Your Serverless Backend API – Not yet Published! This section is for those who prefer to do things right.

Seriously though, this is what you will be working with throughout the course of this multi-part tutorial:

Install Node.js with NVM (Node Version Manager)

On 7 November 2018, AWS announced that AWS Lambda now officially supports Node.js v8.10.0. This project will use NVM (Node Version Manager) to work with different versions of Node.js between projects, and to mitigate against any potential environment upgrades implemented in the future by any 3rd-party vendors. To ensure that we are working with the correct version of Node.js for this project, please install nvm and node as follows:

  • Refer to the Node Version Manager Documentation if this information is out of date
  • Installation (Choose ONE based on your system OS):
    1. cURL: curl -o- http://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
    2. Wget: wget -qO- http://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
    3. The installation will add the following to your .bashrc or .bash_profile in your home directory:
    export NVM_DIR="$HOME/.nvm"
    [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
    [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
    
  • Run the following command to verify the installation of nvm:
    1. run from Terminal: $ command -v nvm
    2. Expected Output: nvm
  • On macOS, if you receive an error of nvm: command not found, then you need to add the following line to your .bash_profile found in your home directory as shown below:
    1. source ~/.bashrc
  • To download, compile, and install a specific version of Node.js, run the following command to obtain the Node.js v8.10.0 release needed to work with AWS Lambda (see the short recap after this list):
    1. $ nvm install 8.10.0
    2. You can list the available versions to install using:
    • $ nvm ls-remote
    3. The completed output should look like this, showing that you are now ready to start building services in AWS Lambda!

    4. Give yourself a pat on the back! Being able to hit the ground running as a new developer on a new team is a skill that many employers would pay a few bucks extra to have more of in their organizations. Always refer to this page and tell your friends that this is where you can get the low-down on how to get it done without having to ask 10 questions on our favorite technology forums out in the ether.
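If it helps, here is a quick recap of those commands in one place (the .nvmrc step is just our suggestion for pinning the version per project, not something the installer creates for you):

$ nvm install 8.10.0      # download, compile, and install Node.js v8.10.0
$ nvm use 8.10.0          # switch the current shell to that version
$ node -v                 # should print v8.10.0
$ echo "8.10.0" > .nvmrc  # optional: pin the version for this project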

Setup Editor and Install Linting

For JavaScript, React.js, and the build process that we are using in SublimeText3 we will continue to configure linting, and formatting, while making use of OpenSource libraries and tools to help make this implementation of React.js more efficient. Please take the following steps to complete the installation of ESLint:

  • Install ESLint: Must install BOTH Globally & Locally
    1. Global Install: $ npm install -g eslint
    2. Local Install: (Must complete from the root of the project directory!!!)
    • $ npm install --save-dev eslint
    3. Local Install of the babel-eslint library also: (Must complete from the root of the project directory!!!)
    • $ npm install --save-dev babel-eslint
  • Verify that you can run eslint from within the local project directory:
    1. run from Terminal: $ ./node_modules/.bin/eslint -v
    2. Expected Output: v5.15.0 (Or Latest)
    3. Here is what the output should look like right now:

From this point forward, you can run $ eslint . and the linter will help you and your development team check syntax, find problems, and enforce code style across your entire organization. In my case, that means just team Wilson and I hammering away at the keyboard, debating the intricacies of JavaScript Memory Leaks and the best approach to efficient string concatenation. I am sure this is all very relatable. Here is what my code review process looks like when Wilson does not use the eslint settings I have specifically laid out for you today:

Wilson: The SCRUM Master

Confirm ESLint Configuration Settings & Attributes

From the project root directory run $ ls -a, and confirm that you can see the file called ~/MyApp/Backend/services/ServerlessStarterService/.eslintrc.js within the serverless project directory for the service you are implementing. Please look at the structure I have chosen to use as the $PATH for this example. Take this as a hint for now and we will go over it in detail a bit later. For now, simply open the file from within the ServerlessStarterService directory by running $ subl .eslintrc.js from your terminal to confirm that the following information is in the file:

module.exports = {
  "extends": ["eslint:recommended", "plugin:react/recommended"],
  "parser": "babel-eslint",
  "plugins": ["react"],
  "parserOptions": {
    "ecmaVersion": 8,
    "sourceType": "module",
    "ecmaFeatures": {
      "jsx": true
    }
  },
  "env": {
    "node": true,
    "es6": true,
    "es7": true
  },
  "rules": {
    "react/jsx-uses-react": 2,
    "react/jsx-uses-vars": 2,
    "react/react-in-jsx-scope": 2,
    "no-alert": 2,
    "no-array-constructor": 2,
    "no-caller": 2,
    "no-catch-shadow": 2,
    "no-labels": 2,
    "no-eval": 2,
    "no-extend-native": 2,
    "no-extra-bind": 2,
    "no-implied-eval": 2,
    "no-iterator": 2,
    "no-label-var": 2,
    "no-labels": 2,
    "no-lone-blocks": 2,
    "no-loop-func": 2,
    "no-multi-spaces": 2,
    "no-multi-str": 2,
    "no-native-reassign": 2,
    "no-new": 2,
    "no-new-func": 2,
    "no-new-object": 2,
    "no-new-wrappers": 2,
    "no-octal-escape": 2,
    "no-process-exit": 2,
    "no-proto": 2,
    "no-return-assign": 2,
    "no-script-url": 2,
    "no-sequences": 2,
    "no-shadow": 2,
    "no-shadow-restricted-names": 2,
    "no-spaced-func": 2,
    "no-trailing-spaces": 2,
    "no-undef-init": 2,
    "no-underscore-dangle": 2,
    "no-unused-expressions": 2,
    "no-use-before-define": 2,
    "no-with": 2,
    "camelcase": 2,
    "comma-spacing": 2,
    "consistent-return": 2,
    "curly": [2, "all"],
    "dot-notation": [2, {
      "allowKeywords": true
    }],
    "eol-last": 2,
    "no-extra-parens": [2, "functions"],
    "eqeqeq": 2,
    "key-spacing": [2, {
      "beforeColon": false,
      "afterColon": true
    }],
    "new-cap": 2,
    "new-parens": 2,
    "quotes": [2, "double"],
    "semi": 2,
    "semi-spacing": [2, {
      "before": false,
      "after": true
    }],
    "space-infix-ops": 2,
    "keyword-spacing": 2,
    "space-unary-ops": [2, {
      "words": true,
      "nonwords": false
    }],
    "strict": [2, "global"],
    "yoda": [2, "never"]
  }
};

Please create the .eslintrc.js file with the information above if you do not already have it in your directory as shown.

Moving forward, we can continue to install the ESLint plugins that we will also need for our React.js implementation. In the root of the project directory, run the following two commands to install the React.js plugin for ESLint:

  • $ npm install eslint-plugin-react --save-dev
  • $ npm install -g eslint-plugin-react --save-dev (Install globally also)

Running $ ./node_modules/.bin/eslint src (with src being the path to your JavaScript files), will now parse your sources with Babel and will provide you with linting feedback to the command line.

We think it is going to be best (and just easiest, quite frankly) to alias that last command as something like $ npm run lint as part of your automation approach, by adding the following script to the package.json file that you can find in the root directory of your application:

"scripts": {
    "lint": "eslint .",
    ...
  },

  • Run: $ npm run lint to lint your project from the terminal.

The linter will now supply feedback on syntax, bugs, and problems, while enforcing code style, all from within the comfort of your terminal. Do not fool yourself, this will work in vim also! For all those :wq! fans out there, hope is on our side… #UseTheForce.

A typical output running ESLint from the terminal as instructed would look something like this (depending on your project and the silly errors you make):

One more thing: from within the project root directory, please make sure that you have a file called .esformatter. You are going to need that $ ls -a command to find it. If it does not exist, go ahead and create it with a simple $ touch .esformatter and :wq! the following bit of information into the file (if it already exists, just make sure this is in there!):

{
  "preset": "default",

  "plugins": [
    "esformatter-quotes",
    "esformatter-semicolons",
    "esformatter-literal-notation",
    "esformatter-parseint",
    "esformatter-spaced-lined-comment",
    "esformatter-var-each",
    "esformatter-collapse-objects",
    "esformatter-remove-trailing-commas",
    "esformatter-quote-props"
  ],

  "quotes": {
    "type": "double"
  },

  "collapseObjects": {
    "ObjectExpression": {
      "maxLineLength": 79,
      "maxKeys": 1
    }
  },

  "indent": {
    "value": "  ",
    "AlignComments": false
  },

  "whiteSpace": {
    "before": {
      "ObjectExpressionClosingBrace": 0,
      "ModuleSpecifierClosingBrace": 0,
      "PropertyName": 1
    },
    "after": {
      "ObjectExpressionOpeningBrace": 0,
      "ModuleSpecifierOpeningBrace": 0
    }
  }
}

From here on out, every time you save your JavaScript files, all the formatting completes automatically for you. Sort of; we just need to configure all of this in our trusty SublimeText3 text editor now!

Before we get into it, yes, you can use vim. I promise you, you will find people who will die by vim throughout your career. My belief: keep it simple, buy the SublimeText3 license, and just use that over every other IDE that will bombard the spam folder in your email. SublimeText3 provides the simplicity of vim without having to memorize abstract commands like :wq! that will inevitably leave you with a mess of *.swp files, because I guarantee that most of you will never clean them out and it becomes a hassle. Keep it simple, do not overcomplicate your life in an already all too complicated environment, and get the tool that is really cheap, easy on the eyes, and provides enough of the fancy functionality of a really expensive IDE and the simplicity of vim, without having to turn your anonymous functions into ES6 syntax with an esoteric regex-replace like this: :%s/function \?(\(.*\)) {/(\1) => {/

SublimeText3 Configuration

To complete the configuration in SublimeText3 please install PackageControl and the following packages from within SublimeText3:

  • The easiest way to achieve this is to navigate to the SublimeText3 View –> Show Console menu, and continue to paste the following code provided by PackageControl:
    # For SublimeText3 ONLY!
    
    import urllib.request,os,hashlib;
    h = '6f4c264a24d933ce70df5dedcf1dcaee' + 'ebe013ee18cced0ef93d5f746d80ef60';
    pf = 'Package Control.sublime-package';
    ipp = sublime.installed_packages_path();
    urllib.request.install_opener( urllib.request.build_opener( urllib.request.ProxyHandler()) );
    by = urllib.request.urlopen( 'http://packagecontrol.io/' + pf.replace(' ', '%20')).read();
    dh = hashlib.sha256(by).hexdigest();
    print('Error validating download (got %s instead of %s), please try manual install' % (dh, h)) if dh != h else open(os.path.join( ipp, pf), 'wb' ).write(by)
    

    The configuration parameters supplied above will generate an Installed Packages directory on your local machine (if needed). It will download the Package Control.sublime-package over HTTP instead of HTTPS because of known Python standard library constraints, and SHA-256 is applied to the downloaded file to confirm that it is in fact a valid file.

    WARNING: Please do not copy or install this code via this tutorial or our website. It will change with every release of PackageControl. Please make sure to view the Official PackageControl Release Documentation page and installation instructions to get the most recent version of the code shared in this tutorial. You can obtain the most up-to-date information needed directly from: PackageControl.

  • Next, please continue to navigate through SublimeText –> Preferences –> PackageControl –> InstallPackages and install the following tools:
    1. Babel
    2. ESLint
    3. JSFMT
    4. SublimeLinter
    5. SublimeLinter-eslint (Connector)
    6. SublimeLinter-annotations
    7. Oceanic Next Color Scheme
    8. Fix Mac Path (If using MacOS ONLY)
  • Once you have completed the above, go ahead and navigate to: View –> Syntax –> Open all with current extension as ... –> Babel –> JavaScript (Babel), with any file.js open, to configure Babel as the syntax for your JS and JSX files.
  • It is helpful to set up a JSX formatter for your text editor also. In our case we are using esformatter-jsx. You will have to figure out where your SublimeText3 Packages directory is to complete your jsfmt configuration. Run the following commands to complete this step:
    1. $ cd ~/Library/Application\ Support/Sublime\ Text\ 3/Packages/jsfmt
    2. $ npm install jsfmt
    • Try to run: $ npm ls esformatter before continuing to step #3
    • If you already have esformatter installed, then skip step #3 and run:
      $ npm install esformatter-jsx
    3. $ npm install esformatter esformatter-jsx (See the notes above before going ahead with this step!!!)
    4. Test the installation of each package and run: $ npm ls <package> (See the expected output below)

  • From the jsfmt package directory (see above if you do not remember! Hint: see step #1 above), run $ ls -a, then open a file in SublimeText3 called jsfmt.sublime-settings with $ subl jsfmt.sublime-settings and paste the following configuration settings:
{
  // autoformat on file save events
  "autoformat": false,

  // This is an array of extensions for autoformatting
  "extensions": ["js",
    "jsx",
    "sublime-settings"
  ],

  // options for jsfmt
  "options": {
    "preset": "jquery",
    // plugins included
    "plugins": [
      "esformatter-jsx"
    // "esformatter-quotes",
    // "esformatter-semicolons",
    // "esformatter-braces",
    // "esformatter-dot-notation"
    ],
    "jsx": {
      "formatJSX": true, // Default value
      "attrsOnSameLineAsTag": false, // move each attribute to its own line
      "maxAttrsOnTag": 3, // if lower or equal than 3 attributes, they will be on a single line
      "firstAttributeOnSameLine": true, // keep the first attribute in the same line as the tag
      "formatJSXExpressions": true, // default is true, if false, then jsxExpressions does not apply formatting recursively
      "JSXExpressionsSingleLine": true, // default is true, if false the JSXExpressions may span several lines
      "alignWithFirstAttribute": false, // do not align attributes with the first tag
      "spaceInJSXExpressionContainers": " ", // default to one space. Make it empty if you do not like spaces between JSXExpressionContainers
      "removeSpaceBeforeClosingJSX": false, // default false. if true <React.Component /> => <React.Component />
      "closingTagOnNewLine": false, // default false. if true attributes on multiple lines will close the tag on a new line
      "JSXAttributeQuotes": "", // values "single" or "double". Leave it as empty string if you do not want to change the attributes' quotes
      "htmlOptions": {
        // put here the options for js-beautify.html
      }
    }
  },
  "options-JSON": {
    "plugins": [
      "esformatter-quotes"
    ],
    "quotes": {
      "type": "double"
    }
  },
  "node-path": "node",
  "alert-errors": true,
  "ignore-selection": false
}

Install and Activate Color Theme optimized for React.js

OceanicNext is a color scheme and syntax highlighter on SublimeText3 that is perfect for babel-sublime JavaScript and React.js. Please go ahead and review the Oceanic Next Documentation and Installation Instructions. However, it would be great if you could complete the instructions below so we can get through the rest of this tutorial. We may start building something now that we have a legitimate local environment to work on.

  • Navigate to SublimeText –> Preferences –> Settings –> User menu and add the following configuration parameters:
    "color_scheme": "Package/Oceanic Next Color Scheme/Oceanic Next.tmTheme",
    "theme": "Oceanic Next.sublime-theme",
    
  • Navigate to SublimeText –> Preferences –> PackageControl –> InstallPackages and install the following:
    1. Oceanic Next Color Scheme
  • Select the correct theme from: SublimeText –> Preferences –> ColorScheme –> Oceanic Next

From this point forward, your local development environment is ready to work with, and it will provide you and your Scrum Master Wilson with the right kind of automated feedback you need as a professional Jedi, I mean… Engineer, so that you can debug and extend the most complex of Mobile Applications on the DARN Cloud.

If you can figure out a better way to make the technicalities of setting up linting and formatting on a Linux machine, without ever having known the terminal, a tad bit more entertaining and easier to digest than this little tutorial we put you through today, then please enlighten my friends and me pretending to be the masters of the Dark Side. UNTIL THEN:

Your local environment is ready for work and is configured correctly… Good Luck!

 

Part 2: localhost Serverless + Microservice & The NEW Backend Paradigm

Cardiac Activity Monitor

For a little fun, we decided to tinker with cardiac monitoring and simple prototyping of wave morphology via common components. After we finished, we wanted to post our little project to share with the world… so if you’ve got a chipKIT lying around and some free time, see if you can build your own pulse meter!

In order to build this project, you’ll need the following things.

Pulse1

The OLED display on the Basic I/O Shield is driven through an SPI interface. This requires that the JP4 jumper on the chipKIT Uno32 board be placed in the RG9 position so that the SPI SS function is available on Pin 10. Read our tutorial Exploring the chipKIT Uno32 for more details on jumper functions and settings on the Uno32 board. The following picture shows the required jumper settings on the Uno32 board for this project. The firmware of this project is based on the same algorithm used in our previous project, the PC-based heart rate monitor; the algorithm that the PC application implemented before is now implemented in the firmware of the Uno32. The following flowchart shows the algorithm used to compute the pulse rate. Now plug the Basic I/O Shield on top of the Uno32 board and connect the Easy Pulse sensor power supply and the analog PPG output pins to the I/O shield as shown in the following figure.

Pulse2

The Easy Pulse board is powered with 3.3V power supply from the I/O board header pins. The analog pulse output from Easy Pulse is connected to analog pin A1 of Uno32 through headers available on the I/O shield. The analog pin A0 of Uno32 may not be used for feeding Easy Pulse output because it is already hard-wired to the potentiometer output on the I/O shield. The following picture shows the complete setup of this project.

Pulse4

The PPG waveform and the pulse rate (in beats per minute, BPM) are both displayed on the OLED screen of the I/O shield. The Uno32 sketch uses the chipKIT I/O Shield library to display data on the OLED. Unzip the downloaded sketch folder and upload the sketch named P3_Uno32_PulseMeter.pde to the Uno32 board.

After the sketch is uploaded, power the Uno32 board through a DC wall adapter or any other external power supply that can drive the Uno32 board. Using a USB power source is not recommended, as some USB ports do not supply enough current for the Easy Pulse sensor, in which case the performance of the sensor would be poor. When the sensor is unplugged, the range of the ADC will not exceed 50 counts (described in the algorithm) and the pulse meter displays a “No Pulse Signal” message on the OLED screen as shown below.

Pulse3

Now wear the Easy Pulse sensor on a fingertip and adjust the P1 potentiometer on the Easy Pulse to the 1/4 position from the left. The PPG waveform and the pulse rate should now display on the screen. The screen is refreshed every 3 seconds. If the PPG signal appears clipped at the peak values, consider decreasing the gain of the filter circuit on the Easy Pulse board. This can be done by rotating the P1 potentiometer counterclockwise. Similarly, if you see the “No Pulse Signal” message even after wearing the sensor, you should consider increasing the gain by slightly turning the potentiometer clockwise. Read the Easy Pulse document for more details on the function of the P1 and P2 potentiometers of Easy Pulse.

Hovalin: 3D Printed Violin

These days, we can make virtually anything out of a few grams of PLA. However, it’s still fairly difficult to print full musical instruments (or even instrument parts) that produce high-quality sound comparable to the kind created by instruments manufactured in a more traditional manner. But that’s exactly what makers Kaitlyn and Matt Hova have done. The Hovalin is a completely 3D printed violin that’s proof that there’s no reason for an instrument’s acoustic quality to suffer just because it was made with a MakerBot.

 

hovalin iterative

Once the model was designed, it had to be divided in a way that accounted for the constraints of a commercial FDM printer’s build volume. “It’s a weird game that’s most parallel to the idea of cutting up a violin made out of jello so that each piece can fit in a small box and be oriented so that it would not collapse on itself,” say the Hovas. It took months of trial and error before they successfully printed the Hovalin 1.0.

 

With Kaitlyn’s experience as a professional violinist, neuroscientist, and software engineer at 3D Robotics –as well as Matt’s diverse careers in record production and electrical engineering– the husband and wife team have all the bases covered for this project. The Hovalin is a fully functional 3D printed violin that sounds nearly indistinguishable from a world-class wooden version.

 

What’s more, the Hovalin team designed the violin in such a way that it requires less than one kilogram of PLA to make. This helps keep production costs low; the cost of raw materials comes in at around $70, making it a great fit for STEAM (Science, Technology, Engineering, Arts and Math) programs.

The EggBot Challenge


Following the Greek tradition of cracking eggs in celebration of Easter with the game of tsougrisma (τσουγκρισμα in Greek), we decided to build an EggBot that could robotically draw intricate designs that we fielded from our client partners and staff.

 

So what’s an EggBot? It’s a machine capable of drawing on spherical or ellipsoidal surfaces. You might say a pen plotter for spherical coordinates. Or a simple but unusual CNC machine, ripe for hacking. Or an educational robot – and you’d be right on all counts. It’s worthwhile to point out that there’s actually quite a bit of history in this project. The EggBot kit is the result of our collaboration with Bruce Shapiro, Ben Trombley, and Brian Schmalz. Bruce designed the first Eggbot back in 1990, and there’s been a process of continuous evolution ever since.

 

EggBot III

The Eggbot kit includes Brian Schmalz’s EiBotBoard (EBB), which includes a USB interface and dual microstepping bipolar stepper driver. With its 16X microstepping and the 200 step/rev motors, we get a resolution of 3200 steps/revolution in each axis.  The EBB also controls the little servo motor that raises and lowers the pen arm. In previous versions of the EggBot kit, raising and lowering the pen has usually been done with a solenoid. But the servo motor allows very small, precise motion to raise and lower the pen over the surface. The end of the pen arm is hinged with an acetal flexure for precise bending, and the pen arm clamp fits ultra fine Sharpie pens and many others.

 

EggBot I

And the results? Pretty good. The objects shown here include golf balls, eggs, Christmas ornaments, and light bulbs… besides we had an egg-cellent time in the process.

Augmented Reality Sandbox


For a fun project to spruce up our office and interact with our local STEAM (Science, Technology, Engineering, Arts and Math) school programs, we decided to build our own Augmented Reality Sandbox.  Originally developed by researchers at UC Davis, the open source AR Sandbox lets people sculpt mountains, canyons and rivers, then fill them with water or even create erupting volcanoes. The UCLA device was built by Glesener and others at the Modeling and Educational Demonstrations Laboratory in the Department of Earth, Planetary, and Space Sciences, using off-the-shelf parts and regular playground sand… but after calling our local woodworking friend Kyle Jenkins, we decided to give our container a modern look that would match our office.

 


 

Any shape made in the sandbox is detected by an Xbox Kinect sensor. Raw depth frames arrive from the Kinect camera at 30 frames per second and are fed into a statistical evaluation filter with a fixed, configurable per-pixel buffer size (currently defaulting to 30 frames, corresponding to a 1 second delay). This filter serves the triple purpose of filtering out moving objects such as hands or tools, reducing the noise inherent in the Kinect’s depth data stream, and filling in missing data in the depth stream. The resulting topographic surface is then rendered from the point of view of the data projector suspended above the sandbox, with the effect that the projected topography exactly matches the real sand topography. The software uses a combination of several GLSL shaders to color the surface by elevation using customizable color maps and to add real-time topographic contour lines.

 

At the same time, a water flow simulation based on the Saint-Venant set of shallow water equations, which are a depth-integrated version of the set of Navier-Stokes equations governing fluid flow, is run in the background using another set of GLSL shaders. The simulation is an explicit second-order accurate time evolution of the hyperbolic system of partial differential equations, using the virtual sand surface as boundary conditions. The implementation of this method follows the paper “A Second-Order Well-Balanced Positivity Preserving Central-Upwind Scheme for the Saint-Venant System” by A. Kurganov and G. Petrova, using a simple viscosity term, open boundary conditions at the edges of the grid, and a second-order strong stability-preserving Runge-Kutta temporal integration step. The simulation is run such that the water flows exactly at real speed assuming a 1:100 scale factor, unless turbulence in the flow forces too many integration steps for the driving graphics card to handle.

 

Essentially, this crazy AR sandbox allows people to literally move mountains with their hands. Check it out in the video above.

 

By utilizing common components and a little ingenuity, we built an incredibly responsive program capable of not only reading changes made in the sand, but also reacting to them in real time. By simply altering the layout of the sandbox, people can create towering mountains and volcanoes, or water-filled valleys and rivers. When a person creates valleys and peaks, the Kinect sensor quickly detects the modifications and applies real-time topographical changes. Moreover, our recent iterations of the software allow people to create falling rain on the map by just raising their hands over the sandbox. As it falls, the rain erodes some of the landscape and pools into pits and valleys.

ROBOTIC BIRD FLIGHT

We’ve always been fascinated by the mechanics and properties of flapping-wing flight… watching birds soar gracefully above us brings out that childlike wish of gliding through clouds from high above. Seeing them fly is amazing, but it’s not that easy to build an ornithopter.

 

The first thing anyone asks when they see an ornithopter fly is, “How does it work?”. We’ve all wondered the same thing about birds too at one time or another. Since the ornithopter flies like a bird, we can answer both questions at the same time. All flying creatures, and ornithopters too, have a stiff structure that forms the leading edge or front part of the wing. Birds have their wing bones at the leading edge. For insects, the veins of the wing are concentrated there. Ornithopters have a stiff spar at the leading edge. The rest of the wing is more flexible. It needs to be flexible so the wing can change shape as it flaps.

 

An airplane wing produces lift by moving forward through the air; lift is the force that keeps the airplane from falling to the ground. The special shape of the wing, combined with its slight upward angle, deflects air downward, so there is more pressure on the bottom of the wing than on the top. This pressure difference produces lift. Birds flap their wings up and down, and this motion is added to the forward motion of the bird's body, so the wings really move diagonally: down and forward during the downstroke, and up and forward during the upstroke.

 


The outer part of the wing has a lot of downward movement, but the inner part near the bird’s body simply moves forward along with the bird. Since the downward motion is greater toward the wingtips, the wing has to twist so that each part of the wing is aligned correctly with the local movement of the wing. The upstroke is different from the downstroke, and its function can vary. In general, the upstroke produces lift by relying on the forward motion of the bird through the air. The inner part of the wing, near the body, produces most of the lift. The outer part of the wing, because of its sharp upward motion, can only hinder the bird’s flight. Birds solve this problem by partially folding their wings. Most ornithopters take a less subtle approach: more power!

 

Our Robotic Bird goes a step further than most traditional RC ornithopters. We built an onboard computer that acts like the brain of a real bird, giving complete control over the wing movements. We added two powerful servos to move the wings, which are also used for steering, so there is no need for a separate mechanism to move the tail. Finally, we integrated tilt mechanics that can vary the flapping angle of the wings and allow a smooth transition between flapping and gliding flight.

 

The "brain" of the Robotic Bird is a tiny electronic device called the Axon-1b servo controller, weighing only 4.7 grams. It uses independent motion of the left and right wings for steering, and it features a glide mode and variable flapping amplitude. The servo controller reads the throttle, aileron, and elevator commands from your radio receiver, then outputs new signals to drive the servos in the RC bird.
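As a rough illustration of that mixing (not the actual Axon-1b firmware), the Python sketch below turns throttle, aileron, and elevator inputs into left and right wing servo angles: throttle sets the flapping amplitude, aileron flaps one wing harder than the other to steer, and elevator biases both wings to shift the flapping angle. All of the scaling constants are assumptions chosen for readability.

import math

def wing_mix(throttle, aileron, elevator, t, flap_hz=4.0):
    """Map RC inputs to wing servo angles (degrees around a 90-degree center).

    throttle is 0..1; aileron and elevator are -1..1; t is time in seconds.
    Returns (left_deg, right_deg).
    """
    amplitude = 45.0 * throttle                   # flapping amplitude in degrees
    flap = math.sin(2.0 * math.pi * flap_hz * t)  # shared flapping waveform

    # Elevator tilts the whole flapping envelope; aileron makes the wings
    # flap asymmetrically so the bird banks into a turn.
    left = 90.0 + 15.0 * elevator + flap * amplitude * (1.0 + 0.5 * aileron)
    right = 90.0 + 15.0 * elevator + flap * amplitude * (1.0 - 0.5 * aileron)

    # Glide mode: with the throttle closed, the wings simply hold a fixed angle.
    if throttle <= 0.05:
        left = right = 90.0 + 15.0 * elevator
    return left, right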

INMOOV LEAP MOTION CONTROL

To have a little more fun with our InMoov Robot we started to investigate immersion control via Leap Motion.  From a hardware perspective, the Leap Motion Controller is actually pretty darn simple. The heart of the device consists of two cameras and three infrared LEDs. These track infrared light with a wavelength of ~850 nanometers, which is outside the visible light spectrum.  The data representation takes the form of a grayscale stereo image of the near-infrared light spectrum, separated into the left and right cameras. Typically, the only objects you’ll see are those directly illuminated by the Leap Motion Controller’s LEDs. However, incandescent light bulbs, halogens, and daylight will also light up the scene in infrared.

Once the image data is streamed to the application, it's time for some heavy mathematical lifting. Despite what you might think, the Leap Motion Controller doesn't generate a depth map; instead, it applies advanced algorithms to the raw sensor data. The Leap Motion Service, the software running on the host computer that you connect to via the SDK, processes the images. After compensating for background objects (such as heads) and ambient environmental lighting, the images are analyzed to reconstruct a 3D representation of what the device sees.


Next, the tracking layer matches the data to extract tracking information such as fingers and tools. The Leap Motion tracking algorithms interpret the 3D data and infer the positions of occluded objects. Filtering techniques are applied to ensure smooth temporal coherence of the data. The Leap Motion Service then feeds the results – expressed as a series of frames, or snapshots, containing all of the tracking data – into a transport protocol.

Through this protocol, the service communicates with the Leap Motion Control Panel, as well as native and web client libraries, through a local socket connection (TCP for native, WebSocket for web). The client library organizes the data into an object-oriented API structure, manages frame history, and provides helper functions and classes.  From there, the application logic ties into the Leap Motion input, allowing a motion-controlled interactive experience.
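To get a feel for that WebSocket feed, here is a small Python sketch that listens to the local Leap Motion service and prints fingertip positions. The port (6437) and the JSON field names (pointables, tipPosition) reflect the classic v2-era WebSocket protocol and should be treated as assumptions; check them against the SDK version you install.

# pip install websocket-client
import json
import websocket  # websocket-client package

LEAP_WS_URL = "ws://127.0.0.1:6437/v6.json"  # assumed local Leap service endpoint

def on_message(ws, message):
    frame = json.loads(message)
    for pointable in frame.get("pointables", []):
        x, y, z = pointable.get("tipPosition", (0.0, 0.0, 0.0))
        print("finger id=%s tip=(%.0f, %.0f, %.0f) mm"
              % (pointable.get("id"), x, y, z))

ws = websocket.WebSocketApp(LEAP_WS_URL, on_message=on_message)
ws.run_forever()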

Now, in order to get Leap Motion and InMoov working properly you need to:

1) Install the Leap Motion software from here: http://www.leapmotion.com/setup

2) Install the LeapMotion2 service from the MyRobotLab Runtime tab (MRL version 1.0.95 or higher)

Once you've got your libraries and SDKs set up, you can run this simple script to make the Leap Motion track your fingers and then move the InMoov fingers accordingly. This will get your basic controls running and give you a 1-to-1 kinetic reflection from the InMoov robot. Now, instead of just programming control-based reactions (via the Kinect camera and Hercules webcams), you can build immersive projects like the Robots for Good Project at http://www.robotsforgood.com/.

# Start the InMoov service and attach the right hand on serial port COM5
inmoov = Runtime.createAndStart("inmoov", "InMoov")
inmoov.startRightHand("COM5")

# Limit each finger servo to a safe range of motion (min, max degrees)
inmoov.rightHand.index.setMinMax(30, 89)
inmoov.rightHand.thumb.setMinMax(30, 60)
inmoov.rightHand.majeure.setMinMax(30, 89)
inmoov.rightHand.ringFinger.setMinMax(30, 90)
inmoov.rightHand.pinky.setMinMax(30, 90)

sleep(1)

# Start mirroring Leap Motion finger tracking onto the InMoov hand
inmoov.rightHand.startLeapTracking()
# inmoov.rightHand.stopLeapTracking()

 


 

And there ya go, your very own real-life avatar… now, once we get the leg apparatuses figured out, we'll be able to send ol' InMoov on a walk around the block as an augmented reality experience!

BRAIN WAVE MONITOR

To get ready to make your own Portable Brain Wave Monitor, all you need is about 40 bucks and a bit of time and ingenuity. Take a look at our work and see if you can't make one of your own!

Materials:

1 MindFlex – $25.00: eBay is one of the best places to find one!
1 Arduino Uno R3 – $15.00: Find it on the Arduino site here!
1 Mini Breadboard: The "bread and butter" of electrical engineering.
1 SainSmart 1.8" Color TFT Display – Now you can see what you're doing!
Some Wires: Go "Office Space" on your printer and salvage the wires.
A PC or Laptop: It's 2016, everyone has one. Right?

Software & Files:

Arduino GUI <= Download Right Here!
Arduino Brain Example Code <= Download at the bottom!

Tools:

Soldering Iron: Every maker should have one.
Hot Glue Gun: Your mom should have one you can borrow from Arts and Crafts time!
Drill: Not really needed, but who doesn’t love to play with power tools?
Other Tools: Pliers, Wire Cutters, Wire Strippers, etc.

MONITOR SETUP


Use the mini breadboard to connect the display to the Arduino. Use jumper wires to connect the Arduino pins to the breadboard in the following order:

Arduino 5V        -> TFT display VCC
Arduino GND       -> TFT display GND
Arduino digital 4 -> TFT display SCL
Arduino digital 5 -> TFT display SDA
Arduino digital 6 -> TFT display CS
Arduino digital 7 -> TFT display DC
Arduino digital 8 -> TFT display RES
In order to test the display, you will need to download and install two libraries into your Arduino IDE. Although the display is sold by SainSmart, I had trouble using their libraries and instead downloaded the libraries from Adafruit, who sell a similar display and have much better documentation and support. I'd normally feel a little guilty about ordering from SainSmart and using the Adafruit instructions and libraries, but I purchase enough other stuff from Adafruit to appease my conscience. If you end up taking the same route I did, I encourage you to at least look at the other things they offer.

Two libraries need to be downloaded and installed: the first is the ST7735 library (which contains the low-level code specific to this device), and the second is the Adafruit GFX library (which handles graphics operations common to many displays that Adafruit sells). Download both ZIP files, uncompress them, rename the folders to 'Adafruit_ST7735' and 'Adafruit_GFX' respectively, place them inside your Arduino libraries folder, and restart the Arduino IDE.

You should now be able to find the library and examples under File > Examples > Adafruit_ST7735 > graphicstest. Load the sketch to your Arduino.

If you were successful at installing the libraries and loading the graphicstest sketch, click on the verify button to compile the sketch and make sure there are no errors.

It's time to connect your Arduino to your PC using the USB cable, and click on the upload button to upload the sketch to the Arduino.

Once uploaded, the Arduino should run through all the test display procedures. If you're not seeing anything, check whether the backlight is on: if the backlight is not lit, something is wrong with the power wiring; if the backlight is lit but you see nothing on the display, recheck the digital signal wiring.

If everything is working as expected, we are ready to wire up the DHT sensor.


As with the display, I used jumper wires on the mini breadboard to connect the required power and data pins for the DHT11. Line up the pins and then plug in the sensor.

Looking at the front of the sensor, from left to right: connect pin 1 of the DHT11 to 5V, pin 2 to digital pin 2 of the Arduino, pin 3 to nothing (no connection), and pin 4 to Arduino GND.

You need to download another library to get the Arduino to talk with the DHT11 sensor. The sensor I got didn't come with any documentation, so I Googled around until I found a library that works: I found it on the Virtuabotix website.

As with the display libraries, download the library, unzip it, and install it in the Arduino IDE: place it inside your Arduino libraries folder, rename it DHT11, and restart the Arduino IDE.

You should now be able to find the library and examples under File > Examples > DHT11 > dht11_functions. Load the sketch to your Arduino.

If you were successful at installing the library and loading the dht11_functions sketch, compile the sketch by clicking on the verify button and make sure there are no errors.

It's time to connect your Arduino to your PC using the USB cable. Click on the upload button to upload the sketch to the Arduino.

Once uploaded to the Arduino, open the serial monitor, and you should see a data stream with information coming from the sensor.

If you got the sensor working, we're now ready to display the data on the TFT screen.

// Copy the sketch below and paste it into the Arduino IDE, then compile and run the program.
// This sketch was created using code from both the Adafruit and the Virtuabotix sample sketches.
// You can use any (4 or) 5 pins
#define sclk 4
#define mosi 5
#define cs   6
#define dc   7
#define rst  8  // you can also connect this to the Arduino reset
#define ANALOG_IN 0 // for cds light sensor

#include <Adafruit_GFX.h>    // Core graphics library
#include <Adafruit_ST7735.h> // Hardware-specific library
#include <SPI.h>
#include <dht11.h> // dht temp humidity sensor library

dht11 DHT11;
Adafruit_ST7735 tft = Adafruit_ST7735(cs, dc, mosi, sclk, rst);

void setup(void) {
  DHT11.attach(2); // set digital port 2 to sense dht input
  Serial.begin(9600);
  Serial.print("hello!");

  tft.initR(INITR_BLACKTAB);   // initialize a ST7735S chip, black tab

  Serial.println("init");
  //tft.setRotation(tft.getRotation()+1); // uncomment to rotate display

  // get time to display "sensor up time"
  uint16_t time = millis();
  tft.fillScreen(ST7735_BLACK);
  time = millis() - time;

  Serial.println(time, DEC);
  delay(500);

  Serial.println("done");
  delay(1000);

  tftPrintTest();
  delay(500);
  tft.fillScreen(ST7735_BLACK);
  // Splash screen for esthetic purposes only
  // optimized lines
  testfastlines(ST7735_RED, ST7735_BLUE);
  delay(500);

  testdrawrects(ST7735_GREEN);
  delay(500);
  tft.fillScreen(ST7735_BLACK);
}

void loop() {
  // tft.invertDisplay(true);
  // delay(500);
  // tft.invertDisplay(false);
  tft.setTextColor(ST7735_WHITE);
  tft.setCursor(0,0);
  tft.println("Sketch has been");
  tft.println("running for: ");
  tft.setCursor(50, 20);
  tft.setTextSize(2);
  tft.setTextColor(ST7735_BLUE);
  tft.print(millis() / 1000);
  tft.setTextSize(1);
  tft.setCursor(40, 40);
  tft.setTextColor(ST7735_WHITE);
  tft.println("seconds");
  tft.setCursor(0, 60);
  tft.drawLine(0, 50, tft.width()-1, 50, ST7735_WHITE); // draw line separator
  tft.setTextColor(ST7735_YELLOW);
  tft.print("Temperature (C): ");
  tft.setTextColor(ST7735_GREEN);
  tft.println((float)DHT11.temperature, 1);
  tft.setTextColor(ST7735_WHITE);
  tft.print("Humidity    (%): ");
  tft.setTextColor(ST7735_RED);
  tft.println((float)DHT11.humidity, 1);
  tft.setTextColor(ST7735_YELLOW);
  tft.print("Temperature (F): ");
  tft.setTextColor(ST7735_GREEN);
  tft.println(DHT11.fahrenheit(), 1);
  tft.setTextColor(ST7735_YELLOW);
  tft.print("Temperature (K): ");
  // tft.print(" ");
  tft.setTextColor(ST7735_GREEN);
  tft.println(DHT11.kelvin(), 1);

  tft.setTextColor(ST7735_WHITE);
  tft.print("Dew Point   (C): ");
  tft.setTextColor(ST7735_RED);
  tft.println(DHT11.dewPoint(), 1);
  tft.setTextColor(ST7735_WHITE);
  tft.print("DewPointFast(C): ");
  tft.setTextColor(ST7735_RED);
  tft.println(DHT11.dewPointFast(), 1);
  tft.drawLine(0, 110, tft.width()-1, 110, ST7735_WHITE);
  tft.setCursor(0,115);
  tft.print("Light intensity ");
  int val = analogRead(ANALOG_IN);
  tft.setCursor(60, 130);
  tft.setTextColor(ST7735_YELLOW);
  tft.println(val, 1);
  delay(2000);
  tft.fillScreen(ST7735_BLACK);
}

void tftPrintTest() {
  tft.setTextWrap(false);
  tft.fillScreen(ST7735_BLACK);
  tft.setCursor(0, 60);
  tft.setTextColor(ST7735_RED);
  tft.setTextSize(2);
  tft.println("temperature");
  tft.setTextColor(ST7735_YELLOW);
  tft.setTextSize(2);
  tft.println("humidity");
  tft.setTextColor(ST7735_GREEN);
  tft.setTextSize(2);
  tft.println("monitor");
  tft.setTextColor(ST7735_BLUE);
  //tft.setTextSize(3);
  //tft.print(3598865);
  delay(500);
}

void testfastlines(uint16_t color1, uint16_t color2) {
  tft.fillScreen(ST7735_BLACK);
  for (int16_t y=0; y < tft.height(); y+=5) {
    tft.drawFastHLine(0, y, tft.width(), color1);
  }
  for (int16_t x=0; x < tft.width(); x+=5) {
    tft.drawFastVLine(x, 0, tft.height(), color2);
  }
}

void testdrawrects(uint16_t color) {
  tft.fillScreen(ST7735_BLACK);
  for (int16_t x=0; x < tft.width(); x+=6) {
    tft.drawRect(tft.width()/2 - x/2, tft.height()/2 - x/2, x, x, color);
  }
}

HARDWARE (MINDFLEX) SIDE

Let's start with the EEG monitor. Before attempting this phase, I recommend you visit the Frontier Nerds website, where they do a better job than I could of explaining how to mod the MindFlex headset so you can interface it with the Arduino.

1 – Remove the four screws from the back cover of the left pod of the MindFlex headset. (The right pod holds the batteries.)

2 – Open the case to expose the electronics.

3 – Identify the NeuroSky board. It is the small daughterboard towards the bottom of the left pod.

4 – If you look closely, you will see pins labeled T and R; these are the pins the EEG board uses to communicate serially with the microcontroller on the main circuit board.

5 – Carefully solder a length of wire to the "T" pin. I used a pair of wires that came from an old PC. Be careful not to short the neighboring pins.

6 – Solder another length of wire to ground, using the large solder pad where the battery's ground connection is.

7 – Drill a hole in the case for the two wires to poke through once the case is closed.

8 – Guide the wires through the hole and recheck your soldering. I recommend putting a dab of glue in the hole to secure the wires in place (I used my hot glue gun for this).

9 – Carefully put the back of the case back on and re-secure the screws.

10 – Connect the MindFlex to the Arduino: connect the wire coming from the T pin to the Arduino RX pin, and connect the other wire to an Arduino GND pin.

SOFTWARE SIDE

As an initial software test to make sure your MindFlex is talking to the Arduino, run the example BrainSerialOut sketch.

Note: You will not need the mini display for this test, and if you have it connected, nothing will display on it yet.

1 – You will need to download and install the Brain library from the Frontier Nerds website. Decompress the zip file and drag the "Brain" folder into the Arduino "libraries" folder inside your sketch folder, then restart the Arduino IDE.

You should now be able to find the library and examples under File > Examples > Brain > BrainSerialOut.

If you were successful at installing the library and loading the BrainSerialOut sketch, click on the verify button to compile the sketch and make sure there are no errors.

It's time to connect your Arduino to your PC using the USB cable, and click on the upload button to upload the sketch to the Arduino.

Plug the two wires you added to the MindFlex headset into the Arduino: the T signal wire from the MindFlex goes to the RX pin on the Arduino, and the ground wire from the MindFlex headset goes to the Arduino GND pin.

Once the sketch is uploaded to the Arduino, make sure your MindFlex headset is connected to the Arduino and turn it on. Open the serial monitor; you should see a stream of comma-separated numbers scrolling by.

NOTE: If the sketch doesn’t upload and you get a message like the one below,

avrdude: stk500_getsync(): not in sync: resp=0x00

disconnect the wire from the Arduino RX pin; it sometimes interferes with the upload process.

Note that the connection to the NeuroSky headset is half-duplex: it will use up the RX pin on your Arduino, but you will still be able to send data back to a PC via USB. (Although in this case you won't be able to send data to the Arduino from the PC.)

Once disconnected, click on the upload button again and, if successful, reconnect the wire to the RX pin.
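If you would rather watch that comma-separated stream from the PC side, here is a small Python sketch using pyserial that reads the BrainSerialOut output and pulls out a few values. The serial port name and the field order (signal quality, attention, meditation, then the EEG band powers) are assumptions based on the Brain library's CSV packet; adjust them to match what you actually see scrolling by.

# pip install pyserial
import serial

PORT = "COM3"  # assumed; use something like /dev/ttyUSB0 on Linux
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=2) as link:
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        fields = line.split(",")
        if len(fields) < 3:
            continue  # partial packet, skip it
        signal_quality, attention, meditation = (int(v) for v in fields[:3])
        if signal_quality == 0:  # 0 means good headset contact
            print("attention=%3d  meditation=%3d" % (attention, meditation))
            if attention > 70:   # arbitrary threshold for "focused"
                print("  -> this is where you could trigger something")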

READING BRAINWAVES

Before we proceed to the next sketch, here are a couple of hints if you're running into trouble:
1 – If the signal quality is anything but 0, you will not get a meditation or attention value.
2 – The values for the brain waves (Alpha, Beta, Gamma, etc.) are kind of nonsensical. They still change even when the signal quality is greater than zero! Also, if you place a finger on the forehead sensor and another on the ear sensor on the left pad, you still get readings for all the brain wave functions. I mention this because I'm not quite sure how reliable those values actually are. In any case, the only values that are really usable if you want to control something with your brain are Attention and Meditation.

Alright, so here’s the code:

// Copy and paste the sketch below into your Arduino IDE.
#define sclk 4
#define mosi 5
#define cs   6
#define dc   7
#define rst  8

#include <Adafruit_GFX.h>    // Core graphics library
#include <Adafruit_ST7735.h> // Hardware-specific library
#include <SPI.h>
#include <Brain.h>

Adafruit_ST7735 tft = Adafruit_ST7735(cs, dc, mosi, sclk, rst);

Brain brain(Serial);

void setup(void) {

  tft.initR(INITR_BLACKTAB);   // initialize a ST7735S chip, black tab

  tftPrintTest(); // initial introduction text
  delay(1000);

  tft.fillScreen(ST7735_BLACK); // clear screen
  tft.setTextColor(ST7735_WHITE);
  tft.setTextSize(1);
  tft.setCursor(30,0);
  tft.println("EEG Monitor");

  Serial.begin(9600);
}

void loop() {

  if (brain.update()) {
    if (brain.readSignalQuality() > 100) {
      tft.fillScreen(ST7735_BLACK);
      tft.setCursor(0,30);
      tft.setTextColor(ST7735_RED,ST7735_BLACK);
      tft.println("signal quality low");
    }
    else {
      tft.setCursor(30,0);
      tft.println("EEG Monitor");
      tft.drawLine(0, 20, tft.width()-1, 20, ST7735_WHITE);
      tft.drawLine(0, 130, tft.width()-1, 130, ST7735_WHITE);

      tft.setCursor(0, 30);
      tft.setTextColor(ST7735_YELLOW,ST7735_BLACK);
      tft.print("signal quality :");
      tft.print(brain.readSignalQuality());
      tft.println(" ");
      tft.setTextColor(ST7735_RED,ST7735_BLACK);
      tft.print("Attention      :");
      tft.print(brain.readAttention());
      tft.println(" ");
      tft.setTextColor(ST7735_WHITE,ST7735_BLACK);
      tft.print("Meditation     :");
      tft.print(brain.readMeditation());
      tft.println(" ");
      tft.setTextColor(ST7735_GREEN,ST7735_BLACK);
      tft.print("Delta      : ");
      tft.print(brain.readDelta());
      tft.println(" ");
      tft.print("Theta      : ");
      tft.print(brain.readTheta());
      tft.println(" ");
      tft.print("Low Alpha  : ");
      tft.print(brain.readLowAlpha());
      tft.println(" ");
      tft.print("High Alpha : ");
      tft.print(brain.readHighAlpha());
      tft.println(" ");
      tft.print("Low Beta   : ");
      tft.print(brain.readLowBeta());
      tft.println(" ");
      tft.print("High Beta  : ");
      tft.println(brain.readHighBeta());
      tft.print("Low Gamma  : ");
      tft.print(brain.readLowGamma());
      tft.println(" ");
      tft.print("Mid Gamma  : ");
      tft.print(brain.readMidGamma());
      tft.println(" ");
    }
  }
}

void tftPrintTest() {
  tft.setTextWrap(false);
  tft.fillScreen(ST7735_BLACK);
  tft.setCursor(0, 10);
  tft.setTextColor(ST7735_WHITE);
  tft.setTextSize(1);
  tft.println("INSTRUCTABLES.COM");
  delay(500);
  tft.setCursor(40, 60);
  tft.setTextColor(ST7735_RED);
  tft.setTextSize(2);
  tft.println("EEG");
  tft.setTextColor(ST7735_YELLOW);
  tft.setCursor(20, 80);
  tft.println("Monitor");
  tft.setTextColor(ST7735_BLUE);
  delay(50);
}

 


FARMBOT 1.0

FarmBot Genesis is the first FarmBot to be designed, prototyped, and manufactured. Genesis is designed to be a flexible FarmBot foundation for experimentation, prototyping, and hacking. The driving factors behind the design are simplicity, manufacturability, scalability, and hackability.

 

Genesis is a small-scale FarmBot primarily constructed from V-Slot aluminum extrusions and aluminum plates and brackets. Genesis is driven by NEMA 17 stepper motors, an Arduino MEGA with a RAMPS shield, and a Raspberry Pi 3 computer. These electronics were chosen for their wide availability, support, and usage in the DIY 3D printer world. Genesis can vary in size from a planting area as small as 1 m² to a maximum of 4.5 m², while accommodating a maximum plant height of about 1 m. With additional hardware and modifications, we anticipate that the Genesis concept will scale to approximately 50 m² and a maximum plant height of 1.5 m.

 

Precision farming has been hailed as the future of agriculture, sustainability, and the food industry. That's why a company called FarmBot is working to bring precision agriculture technology to environmentally conscious individuals for the first time. Its first product, the FarmBot Genesis, is a do-it-yourself precision farming solution that (theoretically) anyone can figure out. The system is already up to its ninth iteration, and the open-source robot improves in each version thanks to input from the FarmBot community.

 

The FarmBot is designed to be customizable and accessible to the maker community. Its design is based on generic devices such as the NEMA 17 stepper motors, Arduino MEGA with RAMPS shield, and Raspberry Pi that are already common in the DIY 3D printer world.

Agriculture is an expensive and wildly wasteful industry. The precision farming movement may not solve every problem the industry faces, but it does have a lot of potential to improve sustainability and efficiency. Before FarmBot, precision agriculture equipment was only available in the form of massive heavy machinery. Precision farming tractors used to cost more than $1 million each when FarmBot creator Rory Aronson first had the idea for his solution in 2011.

 

The FarmBot robot kit ships with an Arduino Mega 2560, Raspberry Pi 2 Model B, disassembled hardware packages and access to the open-source software community. FarmBot Genesis runs on custom built tracks and supporting infrastructure, all of which you need to assemble yourself. The online FarmBot community makes it easy to find step-by-step instructions for every single assembly process. There are even forums to troubleshoot installing a FarmBot in your own backyard. The robot relies on a software platform that users access through FarmBot’s web app, all of which looks a whole lot like Farmville, the infamous mobile game.

The physical FarmBot system is aligned with the crops you plot out in your virtual version on the web app. That's how FarmBot can reliably dispense water, fertilizer, and other resources to keep plants healthy and thriving. Since it doesn't require any delicate sensor technology, FarmBot is a cheaper solution than the industrial precision farming equipment on the market. And with its universal tool mount, you can adapt FarmBot to do pretty much any garden task you desire.

 

If you're looking to build your own FarmBot, grab the build schematics and bill of materials (BOM).