What are React Hooks and why you should care about them – Part 2

Outline

  1. Context Hook
  2. Custom Hooks
  3. Important Rules with Hooks
  4. Sample application

In the first part of this blog, we discussed the main concepts behind React Hooks and the reasons for using them, and we compared the traditional ways of creating components with the new, hook-based ways. In this second part, we will continue exploring the different types of hooks, learn the important rules, and create a sample application. We have a lot to cover. Let’s get started!

Context Hook

What is a context?

First of all, what is a context in React?

Context is a way to share global data across components without passing props. Usually, data in a React application is passed from parent to child through props. Sometimes, though, a piece of data needs to be delivered to a component deep inside the component tree, and it is tedious to manually pass the same data through every component along the way. Instead, we can create a central store that can be injected into any component, just like in Redux.

Let’s look at some example code without the Context API first and identify the need for it.

Let’s say there is a banana plantation called “App” and it sets the prices for all bananas in the world. Before bananas reach the end customers, they need to go through wholesalers and supermarkets first. After that, we can go to the stores and buy them. But since wholesalers and supermarkets add their profit margins along the way, the cost of the bananas goes up.

Without Context API

// App.js
import React, { useState } from 'react';

const App = () => {
  const [bananaPrice, setBananaPrice] = useState(2); // Original price: $2
  return <Wholesaler price={bananaPrice}/>;
}

const Wholesaler = (props) => {
  const price = props.price + 2;
  return <Supermarket price={price}/>;
}

const Supermarket = (props) => {
  const price = props.price + 3;
  return <Customer price={price}/>;
}

const Customer = (props) => {
  return (
	<div>
		<p>Plantation price: $2</p>
		<p>Final price: ${props.price} :)</p>
	</div>
  );
}

[Image: output without the Context API]

What if we want to buy the bananas straight from the plantation and avoid the middlemen?

[Image: Context API diagram]

Now, here is the same code with the Context API.

With Context API

// App.js
import React, { useState } from 'react';

// First we need to create a Context
const BananaContext = React.createContext();

// Second we need to create a Provider Component
const BananaProvider = (props) => {
  const [bananaPrice, setBananaPrice] = useState(2); // Original price: $2
  return (
    <BananaContext.Provider value={bananaPrice}>
      {props.children}
    </BananaContext.Provider>
  );
}

const App = () => {
  return ( // Wrap the component in Provider, just like in Redux
    <BananaProvider>
      <Wholesaler/>
    </BananaProvider>
  );
}

const Wholesaler = () => {
  return <Supermarket/>;
}

const Supermarket = () => {
  return <Customer/>;
}

const Customer = () => {
  return (
    <BananaContext.Consumer>
      {(context) => (
        <div>
          <p>Plantation price: $2</p>
          <p>Final price: ${context} :)</p>
        </div>
      )}
    </BananaContext.Consumer>
  );
}

[Image: output with the Context API]

useContext()

So how can we use the Context API with hooks? With useContext!

import React, { useState, useContext } from 'react';

// First we need to create a Context
const BananaContext = React.createContext();

// Then we need to create a Provider Component
const BananaProvider = (props) => {
  const [bananaPrice, setBananaPrice] = useState(2); // Original price: $2
  return (
    <BananaContext.Provider value={bananaPrice}>
      {props.children}
    </BananaContext.Provider>
  );
}

const App = () => {
  return ( // Wrap the component in Provider, just like in Redux
    <BananaProvider>
      <Wholesaler/>
    </BananaProvider>
  );
}

const Wholesaler = () => {
  return <Supermarket/>;
}

const Supermarket = () => {
  return <Customer/>;
}

const Customer = () => {
  const context = useContext(BananaContext);
  return (
    <div>
      <p>Plantation price: $2</p>
      <p>Final price: ${context} :)</p>
    </div>
  );
}

[Image: output with the Context API]

I know you could remove the lines where the middlemen add to the price and still get $2 at the end. But that is not the point. The point is that without Context you have to do prop drilling. Incrementing the price as it passes through the components represents the cost of that prop drilling.

Custom Hooks

Why?

Why would we want to create our own hooks? For one simple reason: they make component logic reusable.

  • Custom hook = Reusable logic

How?

How do we create a custom hook? Since hooks are just JavaScript functions, we can create a custom hook simply by writing a function.

  • The only difference is that the function name must start with the word use. For example: useFunctionName, useHungerStatus, etc.
  • Custom hooks can call other hooks

Example without custom hook

Say we want to create an application with multiple Stopwatch timers on a single page. How would we do that?

This is what I mean by multiple timers:

[Image: multiple stopwatch timers]

Here is code that implements hooks but does not reuse the logic for the timers:

import React, { useEffect, useState } from 'react';

const App = () => {
  const [timerOneStatus, setTimerOneStatus] = useState(false);
  const [timerOneElapsed, setTimerOneElapsed] = useState(0);

  const [timerTwoStatus, setTimerTwoStatus] = useState(false);
  const [timerTwoElapsed, setTimerTwoElapsed] = useState(0);

  useEffect(() => {
    let intervalOne;
    if (timerOneStatus) {
      intervalOne = setInterval(
        () => setTimerOneElapsed(prevTimerOneElapsed => prevTimerOneElapsed + 0.1),
        100
      );
    }
    return () => clearInterval(intervalOne);
  }, [timerOneStatus]);

  useEffect(() => {
    let intervalTwo;
    if (timerTwoStatus) {
      intervalTwo = setInterval(
        () => setTimerTwoElapsed(prevtimerTwoElapsed => prevtimerTwoElapsed + 0.1),
        100
      );
    }
    return () => clearInterval(intervalTwo);
   }, [timerTwoStatus]);

  const handleReset1 = () => {
    setTimerOneStatus(false);
    setTimerOneElapsed(0);
  };

  const handleReset2 = () => {
    setTimerTwoStatus(false);
    setTimerTwoElapsed(0);
  };

  return (
    <div>
      <div>
        <h2>Stopwatch 1</h2>
        <h1>{timerOneElapsed.toFixed(1)} s</h1>
        <button onClick={() => setTimerOneStatus(!timerOneStatus)}>
          {timerOneStatus ? "Stop" : "Start"}</button>
        <button onClick={handleReset1}>Reset</button>
      </div>
      <div>
        <h2>Stopwatch 2</h2>
        <h1>{timerTwoElapsed.toFixed(1)} s</h1>
        <button onClick={() => setTimerTwoStatus(!timerTwoStatus)}>
          {timerTwoStatus ? "Stop" : "Start"}</button>
        <button onClick={handleReset2}>Reset</button>
      </div>
    </div>
  );
}

As we can see, we are not writing DRY code here. The logic for the timers needs to be repeated every time we want to create a new timer. Imagine if we had 10 timers on one page.

Example with custom hook

Now what we could do is separate the timer logic into a custom hook and use that one hook to create any number of timers. Each timer would have its own state and actions. In the main component, we use the custom hook just like useState or useEffect and destructure the parameters returned from the hook.

import React, { useEffect, useState } from 'react';

const App = () => {
  const [timerOneStatus, setTimerOneStatus, timerOneElapsed, resetTimerOne] = useTimer();
  const [timerTwoStatus, setTimerTwoStatus, timerTwoElapsed, resetTimerTwo] = useTimer();

  return (
    <div>
      <div>
        <h2>Stopwatch 1</h2>
        <h1>{timerOneElapsed.toFixed(1)} s</h1>
        <button onClick={() => setTimerOneStatus(!timerOneStatus)}>
          {timerOneStatus ? "Stop" : "Start"}</button>
        <button onClick={() => resetTimerOne()}>Reset</button>
      </div>
      <div>
        <h2>Stopwatch 2</h2>
        <h1>{timerTwoElapsed.toFixed(1)} s</h1>
        <button onClick={() => setTimerTwoStatus(!timerTwoStatus)}>
          {timerTwoStatus ? "Stop" : "Start"}</button>
        <button onClick={() => resetTimerTwo()}>Reset</button>
      </div>
    </div>
  );
}

const useTimer = () => {
  const [status, setStatus] = useState(false);
  const [elapsedTime, setElapsedTime] = useState(0);

  useEffect(() => {
      let interval;
      if (status) {
        interval = setInterval(() =>
          setElapsedTime(prevElapsedTime => prevElapsedTime + 0.1),
          100
        );
      }
      return () => clearInterval(interval);
    },[status]);

  const handleReset = () => {
    setStatus(false);
    setElapsedTime(0);
  };

  return [status, setStatus, elapsedTime, handleReset];
}

[Image: stopwatches built with the custom hook]

In this case, we can place the hook in a separate file with our other custom hooks and call it from anywhere in our project. Much cleaner and more reusable!
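
For instance, here is a minimal sketch of that layout (the file and folder names are just examples):

// customHooks.js: add "export" in front of the hook definition
export const useTimer = () => {
  // ...the timer logic shown above...
};

// App.js: import the hook wherever timers are needed
import { useTimer } from './customHooks';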

Rules with Hooks

Even though hooks are just functions, the React team recommends following certain rules when using them. If you are lazy or just want to follow the rules automagically, you can install this linter. Still, it is important to have some common knowledge of the rules.

Rule 1 – Call hooks only at the top level

What does that mean?

It means we should not call hooks inside conditions, loops, or nested functions. Instead, use them at the top level of your React functions.
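
For example, here is a minimal sketch of what breaking and following this rule looks like (the component and endpoint are made up for illustration):

import React, { useState, useEffect } from 'react';

function Profile({ userId }) {
  // Wrong: wrapping a hook call in a condition can change the order of hook
  // calls between renders, and React loses track of which state is which.
  // if (userId) { const [profile, setProfile] = useState(null); }

  // Right: call hooks unconditionally at the top level and move the
  // condition inside the logic instead.
  const [profile, setProfile] = useState(null);
  useEffect(() => {
    if (userId) {
      fetch(`/api/users/${userId}`) // hypothetical endpoint
        .then(res => res.json())
        .then(setProfile);
    }
  }, [userId]);

  return <p>{profile ? profile.name : 'No user selected'}</p>;
}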

Rule 2 – Don’t call hooks from regular JavaScript functions. Call them from React functions

You can call hooks from the following functions:

  • Custom hooks
  • React function components
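
As a quick illustration (reusing the useHungerStatus name from above; the helper function is made up), calling a hook from a custom hook is fine, but calling one from a plain utility function is not:

import { useState } from 'react';

// OK: a custom hook may call other hooks
const useHungerStatus = () => {
  const [hungry, setHungry] = useState(true);
  return [hungry, setHungry];
};

// Not OK: a regular JavaScript function must not call hooks
const getHungerLabel = () => {
  // const [hungry] = useState(true); // would break Rule 2
  return 'unknown';
};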

Rule 3 – Always start your custom hooks’ name with the word ‘use’

Sample application

Now let us build an application that takes advantage of most of the hooks we have covered so far.

We will be building an application called Caturday that fetches pictures of random cats from the internet and lets us vote on them. It will keep count of our likes and dislikes. We will also add a button that turns the page into dark mode (for simplicity, it just changes the background color of the page).

Here is what the final result will look like: Link to Demo | Github

[Image: Caturday demo]

Step 1

We start building our app by running

$ create-react-app caturday

(If you don’t have the create-react-app module installed globally, run npx create-react-app caturday instead.)

After navigating into the project folder, run

$ npm install semantic-ui-react --save

to install the Semantic UI design tool that will make dealing with CSS much easier.

Step 2

Create 3 files in the /src folder:

  • Caturday.js
  • Caturday.css
  • customHooks.js

[Image: project folder structure]

Step 3

Open the Caturday.css file and copy paste the following:

@media only screen and (max-width: 600px) {
  #image-container {
    width: 100%;
    padding-top: 50px;
  }
}
@media only screen and (min-width: 600px) {
  #image-container {
    width: 50%;
    margin: auto;
    padding-top: 50px;
  }
}
.dark-mode {
  background-color: #3b3939;
}
.main-img {
  margin: auto;
  height: 30em !important;
}
.main-placeholder {
  height: 30em !important;
}

Step 4

We create two custom hooks to use in our application. Open the customHooks.js file and add these hooks:

import { useState, useEffect } from "react";

export const useFetchImage = (url, likeCount, mehCount) => {
  const [data, setData] = useState(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    setLoading(true);
    fetch(url)
      .then(j => j.json())
      .then(data => {
        console.log(JSON.stringify(data))
        setData(data);
        setLoading(false);
      })
      .catch(err => {
        console.log(err);
      });
  }, [likeCount, mehCount]); // Re-run the fetch whenever a vote is cast
  return { data, loading };
};

export const useDarkMode = () => {
  const [enabled, setEnabled] = useState(false);
  useEffect(
    () => {
      enabled
        ? document.body.classList.add("dark-mode")
        : document.body.classList.remove("dark-mode");
    },
    [enabled] // Only re-runs when 'enabled' changes
  );
  return [enabled, setEnabled]; // Return state and setter
};

Step 5

We start constructing the primary file, Caturday.js, by importing the CSS file and our custom hooks, and creating a container for the app. We also define the component’s state and the custom hooks that update the image URL when the Like/Meh buttons are clicked.

import React, { useState, useEffect } from "react";
import { Header, Segment, Image, Placeholder, Button, Container, Label, Icon, Checkbox } from "semantic-ui-react";
import './Caturday.css';
import { useDarkMode, useFetchImage } from './customHooks';

const Caturday = () => {
  const [likeCount, setLikeCount] = useState(0);
  const [mehCount, setMehCount] = useState(0);
  const [darkMode, setDarkMode] = useDarkMode();
  const {data, loading} = useFetchImage(
    "http://api.thecatapi.com/v1/images/search",
    likeCount,
    mehCount
  );
  
  return (
    <Container id="image-container">
      <Segment raised>
        {/* Header */}
        {/* Dark mode toggle */}
        {/* Image container */}
        {/* Like and Meh buttons */}
      </Segment>
    </Container>
  );
};

export default Caturday;

Step 6

Now, when we define each element of our Caturday component and put all the pieces together, our Caturday.js file should look like this:

import React, { useState, useEffect } from "react";
import { Header, Segment, Image, Placeholder, Button, Container, Label, Icon, Checkbox } from "semantic-ui-react";
import './Caturday.css';
import {useDarkMode, useFetchImage} from './customHooks';

const Caturday = () => {
  const [likeCount, setLikeCount] = useState(0);
  const [mehCount, setMehCount] = useState(0);

  const [darkMode, setDarkMode] = useDarkMode();
  const { data, loading } = useFetchImage(
    "http://api.thecatapi.com/v1/images/search",
    likeCount,
    mehCount
  );

  return (
    <Container id="image-container">
      <Segment raised>
      <Header className="ui basic segment centered">Caturday</Header>
        <Segment>
          <Checkbox onChange={() => setDarkMode(!darkMode)} toggle floated='right' label="Dark mode"/>
        </Segment>
        {
          loading ?
          <Placeholder fluid><Placeholder.Image className="main-placeholder"/></Placeholder> :
          <Image src={data[0] ? data[0].url : ''} className="main-img"/>
        }
        <Segment textAlign="center">
          <Button as="div" labelPosition="right">
            <Button onClick={() => setLikeCount(likeCount+1)} icon color="green">
              <Icon name="heart" />
              Like
            </Button>
            <Label as="a" basic pointing="left">
              {likeCount}
            </Label>
          </Button>
          <Button as="div" labelPosition="left">
            <Label as="a" basic pointing="right">
              {mehCount}
            </Label>
            <Button onClick={() => setMehCount(mehCount+1)} color="red" icon>
              <Icon name="thumbs down" />
              Meh
            </Button>
          </Button>
        </Segment>
      </Segment>
    </Container>
  );
};

export default Caturday;

Step 7

Open the App.js file and replace the returned content with the Caturday component:

import React from 'react';
import Caturday from './Caturday';

const App = () => {
	return (
      <Caturday/>
    );
}

export default App;

Conclusion

We have covered most of the concepts around hooks, and that should be enough for you to get started. If the project you are working on right now implements state and components in the traditional way, that is absolutely fine; there is no need to convert everything to hooks. However, when you are about to create new components, give hooks a try. They are 100% compatible with regular class-based components. You will be surprised how many lines of code you can avoid while accomplishing the same functionality. If you need more information about hooks, please check out the official documentation by Facebook.

Thanks for taking the time to read this article!

What are React Hooks and why you should care about them – Part 1

Outline

  1. Intro
  2. What is wrong with React Components now?
  3. Hooks overview
  4. useState
  5. useEffect
  6. TLDR

Intro

There is a new kid on the block. React introduced a new feature called Hooks, which improves code quality even further and makes creating user interfaces easier. From now on, if you are starting a new project, you should definitely take advantage of this addition and keep your projects lean and clean. Hooks were actually released a while ago, but a production-ready stable version came out only recently, so now is the time to really start using them. In this article, we will cover the main concepts and look at some examples. By the end of the article, you will have a fair idea of what React Hooks are, and you can start implementing them in your applications.

Before we dive into the details of hooks, let us take a step back to understand the motivation behind them.


What is wrong with React components now?

3 things:

  1. Classes
  2. Hierarchical abyss (Not reusing stateful logic)
  3. Big components

1. Classes

Currently, there are two main ways to create components in React. The first is by using stateless functions:

function Greet(props){
	return <h1>Hello there, {props.name}!</h1>;
}

Second, using ES6 Classes:

class Greet extends React.Component {
	render(){
		return <h1>Hello there, {this.props.name}!</h1>;
	}
}

Right, so why are you saying there is something wrong with those two methods, you ask?

Well, first of all, there are no real classes in JavaScript. A class is syntactic sugar over JavaScript’s prototype-based inheritance. In other words, it is just a function with special features, which creates extra work for the browser to process. But that is not the problem here. The problem is that classes are harder to understand, do not play well with minification, and cause issues with hot reloading. Also, people often struggle when deciding whether to use classes or functions for their components, which results in inconsistency.

So why not just use functions, then? Because functions are stateless: we cannot manage state in them. We can pass props back and forth, but that makes it hard to keep track of the changes.

2. Hierarchical Abyss

Just take a look at the picture below.

[Image: deeply nested React component tree]

An extreme level of nesting in the component tree makes it difficult to follow the data flow through the app.

3. Big components

Whether we like it or not, at some point in development our application grows and requires more complex components. When that happens, our components end up implementing multiple React lifecycle methods that might contain unrelated data, and a single task can be split across different lifecycles. That creates an opportunity for bugs in the application.


Hooks to the rescue!

Hooks solve all of the issues mentioned above by allowing us to add state management to functional components and use other React features.

Say what?

See, it is usually preferable to use plain functions to create components. But as we mentioned above, we cannot manage state or use lifecycles in functions. With hooks, we can!

(If you are thinking, why not use Redux for state management, hold on to that thought. That’s a discussion for another time.)

State Hook

Let’s look at an example code that changes a text when we click a button.

import React, { useState } from 'react';

function FruitMaster(){
	const [fruit, setFruit] = useState('Banana');
	return (
		<div>
			<p>Selected:{fruit}</p>
			<button onClick={
				() => setFruit(fruit=='Banana'?'Apple':'Banana')
			}>Change</button>
		</div>
	);
}

This is what is supposed to happen:

[Image: text toggling between Banana and Apple]

We have a selected-fruit variable, which is set to Banana by default. When we click the Change button, the text changes from Banana to Apple, or vice versa.

Now, let us break down the new elements in the component.

[Image: breakdown of the new elements in the component]

What are those things in the state variable?

[Image: anatomy of the state variable declaration]

In this case, setFruit() is the equivalent of this.setState(). There is one important difference, though: this.setState() merges the changes into the state object, while the state hook completely replaces the state with the given value.

We used a state hook called useState in this example. There are other hooks too. We will see them soon.

So a hook is actually a function that hooks into React features; useState, for example, returns a pair of values: the current state value and a function to update it. We can name those values whatever we want. We can set a default value by passing it to the useState function.

Note that we are using destructuring assignment to retrieve the pair of values. If you are not familiar with this syntax, take a look here. That said, we could also get the two values this way:

const stateVariable = useState('Cherry'); //Returns an array with 2 values
const fruit = stateVariable[0];
const setFruit = stateVariable[1]; //function

Converting

Now, let’s convert our functional component to a class-based component:

import React from 'react';

export default class FruitMaster extends React.Component {
  constructor(props){
    super(props);
    this.state = {
      fruit: 'Banana'
    };
    this.setFruit = this.setFruit.bind(this);
  }

  setFruit(value){
    this.setState({fruit: value});
  }

  render(){
    return (
      <div>
        <p>Selected: {this.state.fruit}</p>
        <button onClick={
          () => this.setFruit(this.state.fruit == 'Banana' ? 'Apple' : 'Banana')
          }>Change</button>
      </div>
    );
  }
}

Comparison

[Image: side-by-side comparison of the hook and class versions]

We can see from the picture that using hooks reduces code volume almost by half!

Now, let us address the elephant in the room: what do we do when we have more than one variable in state?

Simple! Create more state variables.

const [fruit, setFruit] = useState('Banana');
const [food, setFood] = useState('Taco');

Multiple state variables

import React, { useState } from 'react';

function FruitMaster(){
	const [fruit, setFruit] = useState('Banana');
	const [food, setFood] = useState('Taco');
	return (
		<div>
			<p>Fruit: {fruit}</p>
			<p>Lunch: {food}</p>
			<button onClick={
				() => setFruit(fruit=='Banana'?'Apple':'Banana')
			}>Change Fruit</button>
			<button onClick={
				() => setFood(food=='Taco'?'Burger':'Taco')
			}>Change Lunch</button>
		</div>
	);
}

[Image: fruit and lunch toggles]

What if I want to store all my variables in one object, you might ask? Well, go ahead. But one thing to remember is that the state hook function replaces the state rather than merging into it: this.setState() merges the given values into the state object, but the hook function does not. There is a way to deal with that, though. Let us see how:

import React, { useState } from 'react';

function MealMaster(){
	  const [myState, replaceState] = useState({
	    fruit: 'Apple',
	    food: 'Taco'
	  });
	  return (
	    <div>
	      <p>Fruit: {myState.fruit}</p>
	      <p>Lunch: {myState.food}</p>
	      <button onClick={
	        () => replaceState(myState => ({
	          ...myState,
	          fruit: myState.fruit=='Banana'?'Apple':'Banana'
	        }))
	      }>Change Fruit</button>
	      <button onClick={
	        () => replaceState(myState => ({
	          ...myState,
	          food: myState.food=='Taco'?'Burger':'Taco'
	        }))
	      }>Change Lunch</button>
	    </div>
	 );
}

We have to use the spread operator to change only the part of the state we need and keep the rest of it as it is.

Effect Hook

What about the lifecycle methods of React? We could use those with classes, but now they are gone…

Not really.

There is another hook called useEffect and we can use it instead of the lifecycles. In other words, we can handle side effects in our applications with hooks. (What is a side effect?)

Here is an example of a component that uses familiar lifecycles:

Old method

import React from 'react';

class TitleMaster extends React.Component {
	constructor(props){
		super(props);
		this.state = {
			title: 'Tuna'
		};
	}
	componentDidMount(){
		document.title = this.state.title; // Changes tab title
	}
	componentDidUpdate(){
		document.title = this.state.title;
	}
	updateTitle(value){
		this.setState({title: value});
	}
	
	render(){
		return (
			<div>
		        <button onClick={
		          () => this.updateTitle(this.state.title =='Tuna'?'Donut':'Tuna')
		        }>Update Title</button>
		    </div>
		);
	}
}

Our component in action.
[Image: TitleMaster component in action]

The component is bulky even for such a small piece of functionality, and the code in componentDidMount and componentDidUpdate is repeated.

With hooks

Now let’s create the same component with hooks!

import React, { useState, useEffect } from 'react';

function TitleMaster(){
  const [title, updateTitle] = useState('Tuna');
  useEffect(() => {
    document.title = title;
  });
  return (
      <div>
          <button onClick={
            () => updateTitle(title =='Tuna'?'Donut':'Tuna')
          }>Update Title</button>
      </div>
    );
}

export default TitleMaster;

Much cleaner. Less code. Easier to understand.

As we mentioned before, a hook is a function. useEffect is also a function: one that accepts another function and an array. Don’t worry about the array part for now.

So inside the function we pass to useEffect, we can perform our side-effect logic. In the example above, we are updating the tab title in the browser. Another common practice is to do data fetching inside useEffect hooks.
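
For example, here is a small sketch of fetching data in an effect (the endpoint is a placeholder, and the array argument is explained in the next section):

import React, { useState, useEffect } from 'react';

function CatFact() {
  const [fact, setFact] = useState(null);
  useEffect(() => {
    fetch('https://example.com/api/fact') // hypothetical endpoint
      .then(res => res.json())
      .then(data => setFact(data.text));
  }, []); // empty array: fetch only once, after the first render
  return <p>{fact ? fact : 'Loading...'}</p>;
}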

We can also use multiple useEffect hooks in one component.

Note that we are placing the hooks inside of our component functions. That way they will have access to the state variables.

Infinite loop

By default, the effect re-runs every time the component re-renders. Sometimes, implementing the hook incorrectly can cause an infinite loop. Remember we said that useEffect takes two arguments? The second argument is an array of values, which tells React: “Hey React, re-run the effect only when these state values change.”

const [user, setUser] = useState({ id: 1 }); // useState, not userState; the initial value here is illustrative
useEffect(() => {
	document.title = user.id;
}, [user.id]); // Re-run the effect only when user.id changes

Cleanup logic

The useEffect hook can also handle cleanup logic. What does that mean? Well, sometimes we subscribe to an API and, once we are done, we need to unsubscribe from it to prevent leaks. Or when we create an event listener, we need a way to remove it.

useEffect can do it by returning a function.

import React, { useState, useEffect } from 'react';

function TitleMaster(){
  const [title, updateTitle] = useState('Tuna');
  useEffect(() => {
    document.title = title;
    return () => {
		// Perform clean up logic here
	}
  });
  return (
      <div>
          <button onClick={
            () => updateTitle(title =='Tuna'?'Donut':'Tuna')
          }>Update Title</button>
      </div>
    );
}
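
To make the cleanup concrete, here is a small sketch (the component is illustrative): it subscribes to window resize events, and the function returned from the effect removes the listener when the component unmounts or before the effect re-runs.

import React, { useState, useEffect } from 'react';

function WindowWidth() {
  const [width, setWidth] = useState(window.innerWidth);
  useEffect(() => {
    const handleResize = () => setWidth(window.innerWidth);
    window.addEventListener('resize', handleResize);
    // Cleanup: remove the listener so it does not leak
    return () => window.removeEventListener('resize', handleResize);
  }, []); // subscribe once, clean up on unmount
  return <p>Window width: {width}px</p>;
}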

TLDR:

React hooks are special functions that can be used in stateless components. They allow us to hook into React features and add state management to the components. There are three main hooks: useState, useEffect, and useContext. useState replaces the current way of declaring a state object in the constructor and manipulating values in it with this.setState(). useEffect can be used instead of the React lifecycles. These hooks are not meant to replace the current way of creating components, and they are backwards compatible, so there is no need to rewrite your existing components using hooks. They can make your projects much cleaner by letting you write less code, though.

Part 2: What are React Hooks and why you should care about them

In Part 2 we will cover the following:

  • Context Hook
  • How to make custom hooks!
  • Important Rules with Hooks
  • Real life application example

Cardiac Activity Monitor

For a little fun, we decided to tinker with cardiac monitoring and simple prototyping of wave morphology using common components. After we finished, we wanted to post our little project to share with the world… so if you’ve got a chipKIT lying around and some free time, see if you can build your own pulse meter!

In order to build this project, you’ll need the following things.

[Image: required components]

The OLED display on the Basic I/O Shield is driven through an SPI interface. This requires that the JP4 jumper on the chipKIT Uno32 board be placed in the RG9 position so that the SPI SS function is available on Pin 10. Read our tutorial Exploring the chipKIT Uno32 for more details on jumper functions and settings on the Uno32 board. The following picture shows the required jumper settings on the Uno32 board for this project. The firmware of this project is based on the same algorithm as our previous project, the PC-based heart rate monitor: the algorithm previously implemented in the PC application is now implemented in the firmware of the Uno32. The following flowchart shows the algorithm used to compute the pulse rate. Now plug the Basic I/O Shield on top of the Uno32 board and connect the Easy Pulse sensor’s power supply and analog PPG output pins to the I/O shield as shown in the following figure.

[Image: Uno32 jumper settings and Easy Pulse wiring]

The Easy Pulse board is powered with a 3.3V supply from the I/O board header pins. The analog pulse output from Easy Pulse is connected to analog pin A1 of the Uno32 through headers available on the I/O shield. Analog pin A0 of the Uno32 may not be used for feeding the Easy Pulse output because it is already hard-wired to the potentiometer output on the I/O shield. The following picture shows the complete setup of this project.

[Image: complete project setup]

The PPG waveform and the pulse rate (in beats per minute, BPM) are both displayed on the OLED screen of the I/O shield. The Uno32 sketch uses the chipKIT I/O Shield library to display data on the OLED. Unzip the downloaded sketch folder and upload the sketch named P3_Uno32_PulseMeter.pde to the Uno32 board.

After the sketch is uploaded, power the Uno32 board through a DC wall adapter or any other external power supply. Using a USB power source is not recommended, as some USB ports do not supply enough current for the Easy Pulse sensor, in which case the sensor’s performance will be poor. When the sensor is unplugged, the ADC range will not exceed 50 counts (described in the algorithm) and the pulse meter displays a “No Pulse Signal” message on the OLED screen, as shown below.

[Image: “No Pulse Signal” message on the OLED]

Now wear the Easy Pulse sensor on a fingertip and adjust the P1 potentiometer on the Easy Pulse board to about a quarter turn from the left. The PPG waveform and the pulse rate should now be displayed on the screen, refreshed every 3 seconds. If the PPG signal appears clipped at the peaks, consider decreasing the gain of the filter circuit on the Easy Pulse board by rotating the P1 potentiometer counterclockwise. Similarly, if you see the “No Pulse Signal” message even while wearing the sensor, consider increasing the gain by slightly turning the potentiometer clockwise. Read the Easy Pulse documentation for more details on the function of the P1 and P2 potentiometers.

Hovalin: 3D Printed Violin

These days, we can make virtually anything out of a few grams of PLA. However, it’s still fairly difficult to print full musical instruments (or even instrument parts) that produce high-quality sound comparable to the kind created by instruments manufactured in a more traditional manner. But that’s exactly what makers Kaitlyn and Matt Hova have done. The Hovalin is a completely 3D printed violin, proof that there’s no reason for an instrument’s acoustic quality to suffer just because it was made with a MakerBot.

[Image: Hovalin design iterations]

Once the model was designed, it had to be divided in a way that accounted for the constraints of a commercial FDM printer’s build volume. “It’s a weird game that’s most parallel to the idea of cutting up a violin made out of jello so that each piece can fit in a small box and be oriented so that it would not collapse on itself,” say the Hovas. It took months of trial and error before they successfully printed the Hovalin 1.0.

With Kaitlyn’s experience as a professional violinist, neuroscientist, and software engineer at 3D Robotics, as well as Matt’s diverse careers in record production and electrical engineering, the husband-and-wife team have all the bases covered for this project. The Hovalin is a fully functional 3D printed violin that sounds nearly indistinguishable from a world-class wooden version.

What’s more, the Hovalin team designed the violin in such a way that it requires less than one kilogram of PLA to make. This helps keep production costs low: the raw materials come in at around $70, which makes the instrument attractive for STEAM (Science, Technology, Arts, Engineering, and Math) programs.

The EggBot Challenge

Following the tradition of cracking eggs in celebration of Easter in Greece with the game of tsougrisma (τσουγκρισμα in Greek), we decided to build an EggBot that could robotically draw intricate designs fielded from our client partners and staff.

So what’s an EggBot? It’s a machine capable of drawing on spherical or ellipsoidal surfaces. You might call it a pen plotter for spherical coordinates, or a simple but unusual CNC machine ripe for hacking, or an educational robot, and you’d be right on all counts. It’s worth pointing out that there’s quite a bit of history in this project: the EggBot kit is the result of our collaboration with Bruce Shapiro, Ben Trombley, and Brian Schmalz. Bruce designed the first EggBot back in 1990, and there’s been a process of continuous evolution ever since.

[Image: EggBot]

The EggBot kit includes Brian Schmalz’s EiBotBoard (EBB), which provides a USB interface and a dual microstepping bipolar stepper driver. With its 16X microstepping and 200 step/rev motors, we get a resolution of 16 × 200 = 3200 steps per revolution in each axis. The EBB also controls the little servo motor that raises and lowers the pen arm. In previous versions of the EggBot kit, raising and lowering the pen was usually done with a solenoid, but the servo motor allows very small, precise motions to raise and lower the pen over the surface. The end of the pen arm is hinged with an acetal flexure for precise bending, and the pen arm clamp fits ultra-fine Sharpie pens and many others.

[Image: objects plotted by the EggBot]

And the results? Pretty good. The objects shown here include golf balls, eggs, Christmas ornaments, and light bulbs… and besides, we had an egg-cellent time in the process.

Augmented Reality Sandbox

For a fun project to spruce up our office and interact with our local STEAM (Science, Technology, Engineering, Arts and Math) school programs, we decided to build our own Augmented Reality Sandbox. Originally developed by researchers at UC Davis, the open source AR Sandbox lets people sculpt mountains, canyons, and rivers, then fill them with water or even create erupting volcanoes. The UCLA device was built by Glesener and others at the Modeling and Educational Demonstrations Laboratory in the Department of Earth, Planetary, and Space Sciences, using off-the-shelf parts and regular playground sand… but after calling our local woodworking friend Kyle Jenkins, we decided to give our container a modern look to match our office.


Any shape made in the sandbox is detected by an Xbox Kinect sensor. Raw depth frames arrive from the Kinect camera at 30 frames per second and are fed into a statistical evaluation filter with a fixed, configurable per-pixel buffer size (currently defaulting to 30 frames, corresponding to a 1 second delay). The filter serves a triple purpose: filtering out moving objects such as hands or tools, reducing the noise inherent in the Kinect’s depth data stream, and filling in missing data in the depth stream. The resulting topographic surface is then rendered from the point of view of the data projector suspended above the sandbox, so that the projected topography exactly matches the real sand topography. The software uses a combination of several GLSL shaders to color the surface by elevation using customizable color maps and to add real-time topographic contour lines.
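
As a toy sketch of the idea behind that filter (this is not the UC Davis implementation, just an illustration of per-pixel averaging over a frame buffer):

// Keep the last N depth frames and average them per pixel, which suppresses
// sensor noise and transient objects such as hands or tools.
const BUFFER_FRAMES = 30;  // ~1 second of history at 30 fps
const history = [];        // queue of Float32Array depth frames

function filterDepthFrame(frame) {
  history.push(frame);
  if (history.length > BUFFER_FRAMES) history.shift();

  const filtered = new Float32Array(frame.length);
  for (let px = 0; px < frame.length; px++) {
    let sum = 0;
    for (const f of history) sum += f[px];
    filtered[px] = sum / history.length; // stable per-pixel depth estimate
  }
  return filtered;
}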

At the same time, a water flow simulation based on the Saint-Venant set of shallow water equations, which are a depth-integrated version of the Navier-Stokes equations governing fluid flow, runs in the background using another set of GLSL shaders. The simulation is an explicit, second-order accurate time evolution of the hyperbolic system of partial differential equations, using the virtual sand surface as boundary conditions. The implementation follows the paper “A Second-Order Well-Balanced Positivity Preserving Central-Upwind Scheme for the Saint-Venant System” by A. Kurganov and G. Petrova, using a simple viscosity term, open boundary conditions at the edges of the grid, and a second-order strong stability-preserving Runge-Kutta temporal integration step. The simulation runs such that the water flows at exactly real speed assuming a 1:100 scale factor, unless turbulence in the flow forces too many integration steps for the driving graphics card to handle.

Essentially, this crazy AR sandbox allows people to literally move mountains with their hands. Check it out in the video above.

By utilizing common components and a little ingenuity, we built an incredibly responsive program capable of not only reading changes made in the sand but also reacting to them in real time. By simply altering the layout of the sandbox, people can create towering mountains and volcanoes, or water-filled valleys and rivers. When a person creates valleys and peaks, the Kinect sensor quickly detects the modifications and applies real-time topographical changes. Moreover, recent iterations of the software allow people to make rain fall onto the map by just raising their hands over the sandbox. As it falls, the rain erodes some of the landscape and pools into pits and valleys.

Robotic Bird Flight

We’ve always been fascinated by the mechanics and properties of flapping-wing flight… watching birds soar gracefully above us brings out that childlike wish of gliding through clouds from high above. Seeing them fly is amazing, but it’s not that easy to build an ornithopter.

The first thing anyone asks when they see an ornithopter fly is, “How does it work?”. We’ve all wondered the same thing about birds too at one time or another. Since the ornithopter flies like a bird, we can answer both questions at the same time. All flying creatures, and ornithopters too, have a stiff structure that forms the leading edge or front part of the wing. Birds have their wing bones at the leading edge. For insects, the veins of the wing are concentrated there. Ornithopters have a stiff spar at the leading edge. The rest of the wing is more flexible. It needs to be flexible so the wing can change shape as it flaps.

An airplane wing produces lift by moving forward through the air; lift is the force that keeps the airplane from falling to the ground. The special shape of the wing, combined with its slight upward angle, deflects air downward, so there is more pressure on the bottom of the wing than on the top. This difference in pressure produces lift. Birds flap their wings up and down, and this motion is added to the forward motion of the bird’s body, so the wings really move diagonally: down and forward during the downstroke, up and forward during the upstroke.

[Image: robotic bird]

The outer part of the wing has a lot of downward movement, but the inner part near the bird’s body simply moves forward along with the bird. Since the downward motion is greater toward the wingtips, the wing has to twist so that each part of the wing is aligned correctly with the local movement of the wing. The upstroke is different from the downstroke, and its function can vary. In general, the upstroke produces lift by relying on the forward motion of the bird through the air. The inner part of the wing, near the body, produces most of the lift. The outer part of the wing, because of its sharp upward motion, can only hinder the bird’s flight. Birds solve this problem by partially folding their wings. Most ornithopters take a less subtle approach: more power!

Our Robotic Bird goes a step further than most traditional RC ornithopters. We built an onboard computer that acts like the brain of a real bird, giving complete control over the wing movements. We attached two powerful servos to move the wings, which are also used for steering, so there is no need for a separate mechanism to move the tail. Finally, we integrated tilt mechanics that can vary the flapping angle of the wings and allow a smooth transition between flapping and gliding flight.

The “brain” of the Robotic Bird is a tiny electronic device called the Axon-1b servo controller, which weighs only 4.7 grams. It uses independent motion of the left and right wings for steering, and it features a glide mode and variable flapping amplitude. The servo controller reads the throttle, aileron, and elevator commands from your radio receiver, then outputs a new signal to drive the servos in the RC bird.

InMoov Leap Motion Control

To have a little more fun with our InMoov robot, we started to investigate immersive control via Leap Motion. From a hardware perspective, the Leap Motion Controller is actually pretty darn simple: the heart of the device consists of two cameras and three infrared LEDs, which track infrared light with a wavelength of about 850 nanometers, outside the visible light spectrum. The data takes the form of a grayscale stereo image of the near-infrared light spectrum, separated into the left and right cameras. Typically, the only objects you’ll see are those directly illuminated by the Leap Motion Controller’s LEDs, though incandescent light bulbs, halogens, and daylight will also light up the scene in infrared.

Once the image data is streamed to the application, it’s time for some heavy mathematical lifting. Despite what you might think, the Leap Motion Controller doesn’t generate a depth map; instead it applies advanced algorithms to the raw sensor data. The Leap Motion Service is the software that processes the images, and it can be connected to via the SDK. After compensating for background objects (such as heads) and ambient environmental lighting, the images are analyzed to reconstruct a 3D representation of what the device sees.

Next, the tracking layer matches the data to extract tracking information such as fingers and tools. The Leap Motion tracking algorithms interpret the 3D data and infer the positions of occluded objects. Filtering techniques are applied to ensure smooth temporal coherence of the data. The Leap Motion Service then feeds the results – expressed as a series of frames, or snapshots, containing all of the tracking data – into a transport protocol.

Through this protocol, the service communicates with the Leap Motion Control Panel, as well as native and web client libraries, over a local socket connection (TCP for native, WebSocket for web). The client library organizes the data into an object-oriented API structure, manages frame history, and provides helper functions and classes. From there, the application logic ties into the Leap Motion input, allowing a motion-controlled interactive experience.
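
To make that concrete, here is a hedged JavaScript sketch of reading frames from the Leap Motion Service’s local WebSocket (the port, path, and frame fields are assumptions based on the v2-era protocol; check your SDK version’s documentation):

// Connect to the local Leap Motion Service (assumed default port and path)
const ws = new WebSocket('ws://127.0.0.1:6437/v6.json');

ws.onopen = () => {
  // Assumed flag: keep streaming tracking data while the page is unfocused
  ws.send(JSON.stringify({ background: true }));
};

ws.onmessage = (event) => {
  const frame = JSON.parse(event.data);
  if (frame.hands && frame.hands.length > 0) {
    // palmPosition is [x, y, z] in millimeters relative to the controller;
    // this is the kind of tracking data MyRobotLab maps onto InMoov's servos.
    console.log('Palm position:', frame.hands[0].palmPosition);
  }
};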

Now, in order to get Leap Motion and InMoov working properly you need to:

1) Install the Leap Motion software from here: http://www.leapmotion.com/setup

2) Install the LeapMotion2 service from the MyRobotLab Runtime tab (MRL version 1.0.95 or higher)

Once you’ve got your libraries and SDKs set up, you can run this simple script to make the Leap Motion track your fingers and then move the InMoov fingers accordingly. This will get your basic controls running and give you a 1-to-1 kinetic reflection from the InMoov robot. Now, instead of just programming control-based reactions (via the Kinect camera and Hercules webcams), you can build immersive projects like the Robots for Good Project at http://www.robotsforgood.com/.

inmoov = Runtime.createAndStart("inmoov", "InMoov")
inmoov.startRightHand("COM5")
inmoov.rightHand.index.setMinMax(30,89)
inmoov.rightHand.thumb.setMinMax(30,60)
inmoov.rightHand.majeure.setMinMax(30,89)
inmoov.rightHand.ringFinger.setMinMax(30,90)
inmoov.rightHand.pinky.setMinMax(30,90)
sleep(1)
inmoov.rightHand.startLeapTracking()
# inmoov.rightHand.stopLeapTracking()


And there ya go, your very own real-life avatar… Now, once we get the leg apparatus figured out, we’ll be able to send ‘ole InMoov out on a walk around the block as an augmented reality experience!

Brain Wave Monitor

To get ready to make your own portable brain wave monitor, all you need is about 40 bucks and a bit of time and ingenuity. Take a look at our work and see if you can’t make one of your own!

Materials:

1 MindFlex – $25.00: eBay is one of the best places to find one!
1 Arduino Uno R3 – $15.00: Find it on the Arduino site here!
1 Mini Breadboard: The “bread and butter” of electrical engineering.
1 SainSmart 1.8" Color TFT Display – Now you can see what you’re doing!
Some Wires: Go “Office Space” on your printer and salvage the wires.
A PC or Laptop: It’s 2016, everyone has one. Right?

Software & Files:

Arduino GUI <= Download Right Here!
Arduino Brain Example Code <= Download at the bottom!

Tools:

Soldering Iron: Every maker should have one.
Hot Glue Gun: Your mom should have one you can borrow from Arts and Crafts time!
Drill: Not really needed, but who doesn’t love to play with power tools?
Other Tools: Pliers, Wire Cutters, Wire Strippers, etc.

MONITOR SETUP

Use the mini breadboard to connect the display to the Arduino. Use jumper wires to connect the Arduino pins to the breadboard as follows:

Arduino 5V        -> TFT display VCC
Arduino GND       -> TFT display GND
Arduino digital 4 -> TFT display SCL
Arduino digital 5 -> TFT display SDA
Arduino digital 6 -> TFT display CS
Arduino digital 7 -> TFT display DC
Arduino digital 8 -> TFT display RES

In order to test the display, you will need to download and install two libraries into your Arduino IDE. Although the display is sold by SainSmart, I had trouble using their libraries, so I downloaded the libraries from Adafruit, who sell a similar display and have much better documentation and support. I’d normally feel a little guilty about ordering from SainSmart and using the Adafruit instructions and libraries, but I purchase enough other stuff from Adafruit to appease my conscience. If you end up taking the same route I did, I encourage you to at least look at the other stuff they offer.

Two libraries need to be downloaded and installed: the first is the ST7735 library (which contains the low-level code specific to this device), and the second is the Adafruit GFX library (which handles graphics operations common to many displays Adafruit sells). Download both ZIP files, uncompress them, rename the folders to ‘Adafruit_ST7735’ and ‘Adafruit_GFX’ respectively, place them inside your Arduino libraries folder, and restart the Arduino IDE.

You should now be able to see the library and examples under File > Examples > Adafruit_ST7735 > graphicstest. Load the sketch to your Arduino.

If you were successful at installing the libraries and loading the graphicstest sketch, click on the verify button to compile the sketch and make sure there are no errors.

It’s time to connect your Arduino to your PC using the USB cable and click on the upload button to upload the sketch to the Arduino.

Once uploaded, the Arduino should run through all the display test procedures! If you’re not seeing anything, check whether the backlight is on: if the backlight is not lit, something is wrong with the power wiring; if the backlight is lit but you see nothing on the display, recheck the digital signal wiring.

If everything is working as expected, we are ready to wire up the DHT sensor.

As with the display, I used jumper wires to connect the required power and data pins for the DHT11 using the mini breadboard. Line up the pins and then plug in the sensor.

Looking at the front of the sensor, from left to right: connect pin 1 of the DHT11 to 5V, pin 2 to digital pin 2 of the Arduino, pin 3 to nothing (no connection), and pin 4 to Arduino GND.

You need to download another library to get the Arduino to talk to the DHT11 sensor. The sensor I got didn’t come with any documentation, so I Googled around until I found a library that works; I found it on the Virtuabotix website.

As with the display libraries, download the library, unzip it, and install it in the Arduino IDE: place it inside your Arduino libraries folder, rename it DHT11, and restart the Arduino IDE.

You should now be able to see the library and examples under File > Examples > DHT11 > dht11_functions. Load the sketch to your Arduino.

If you were successful at installing the library and loading the dht11_functions sketch, compile the sketch by clicking on the verify button and make sure there are no errors.

It’s time to connect your Arduino to your PC using the USB cable. Click on the upload button to upload the sketch to the Arduino.

Once it is uploaded to the Arduino, open the serial monitor and you should see the data stream with information coming from the sensor.

If you got the sensor working, we’re now ready to display the data on the TFT screen.

// Copy the sketch below and paste it into the Arduino IDE, then compile and run the program.
// This sketch was created using code from both the Adafruit and the Virtuabotix sample sketches.
// You can use any (4 or) 5 pins
#define sclk 4
#define mosi 5
#define cs   6
#define dc   7
#define rst  8  // you can also connect this to the Arduino reset
#define ANALOG_IN 0 // for CdS light sensor

#include <Adafruit_GFX.h>    // Core graphics library
#include <Adafruit_ST7735.h> // Hardware-specific library
#include <SPI.h>
#include <dht11.h> // DHT temperature/humidity sensor library

dht11 DHT11;
Adafruit_ST7735 tft = Adafruit_ST7735(cs, dc, mosi, sclk, rst);

void setup(void) {
  DHT11.attach(2); // set digital port 2 to sense DHT input
  Serial.begin(9600);
  Serial.print("hello!");

  tft.initR(INITR_BLACKTAB); // initialize a ST7735S chip, black tab

  Serial.println("init");
  //tft.setRotation(tft.getRotation()+1); // uncomment to rotate display

  // get time to display "sensor up time"
  uint16_t time = millis();
  tft.fillScreen(ST7735_BLACK);
  time = millis() - time;

  Serial.println(time, DEC);
  delay(500);

  Serial.println("done");
  delay(1000);

  tftPrintTest();
  delay(500);
  tft.fillScreen(ST7735_BLACK);

  // Splash screen for esthetic purposes only
  // optimized lines
  testfastlines(ST7735_RED, ST7735_BLUE);
  delay(500);

  testdrawrects(ST7735_GREEN);
  delay(500);
  tft.fillScreen(ST7735_BLACK);
}

void loop() {
  // tft.invertDisplay(true);
  // delay(500);
  // tft.invertDisplay(false);
  tft.setTextColor(ST7735_WHITE);
  tft.setCursor(0, 0);
  tft.println("Sketch has been");
  tft.println("running for: ");
  tft.setCursor(50, 20);
  tft.setTextSize(2);
  tft.setTextColor(ST7735_BLUE);
  tft.print(millis() / 1000);
  tft.setTextSize(1);
  tft.setCursor(40, 40);
  tft.setTextColor(ST7735_WHITE);
  tft.println("seconds");
  tft.setCursor(0, 60);
  tft.drawLine(0, 50, tft.width()-1, 50, ST7735_WHITE); // draw line separator
  tft.setTextColor(ST7735_YELLOW);
  tft.print("Temperature (C): ");
  tft.setTextColor(ST7735_GREEN);
  tft.println((float)DHT11.temperature, 1);
  tft.setTextColor(ST7735_WHITE);
  tft.print("Humidity    (%): ");
  tft.setTextColor(ST7735_RED);
  tft.println((float)DHT11.humidity, 1);
  tft.setTextColor(ST7735_YELLOW);
  tft.print("Temperature (F): ");
  tft.setTextColor(ST7735_GREEN);
  tft.println(DHT11.fahrenheit(), 1);
  tft.setTextColor(ST7735_YELLOW);
  tft.print("Temperature (K): ");
  // tft.print(" ");
  tft.setTextColor(ST7735_GREEN);
  tft.println(DHT11.kelvin(), 1);

  tft.setTextColor(ST7735_WHITE);
  tft.print("Dew Point   (C): ");
  tft.setTextColor(ST7735_RED);
  tft.println(DHT11.dewPoint(), 1);
  tft.setTextColor(ST7735_WHITE);
  tft.print("DewPointFast(C): ");
  tft.setTextColor(ST7735_RED);
  tft.println(DHT11.dewPointFast(), 1);
  tft.drawLine(0, 110, tft.width()-1, 110, ST7735_WHITE);
  tft.setCursor(0, 115);
  tft.print("Light intensity ");
  int val = analogRead(ANALOG_IN);
  tft.setCursor(60, 130);
  tft.setTextColor(ST7735_YELLOW);
  tft.println(val, 1);
  delay(2000);
  tft.fillScreen(ST7735_BLACK);
}

void tftPrintTest() {
  tft.setTextWrap(false);
  tft.fillScreen(ST7735_BLACK);
  tft.setCursor(0, 60);
  tft.setTextColor(ST7735_RED);
  tft.setTextSize(2);
  tft.println("temperature");
  tft.setTextColor(ST7735_YELLOW);
  tft.setTextSize(2);
  tft.println("humidity");
  tft.setTextColor(ST7735_GREEN);
  tft.setTextSize(2);
  tft.println("monitor");
  tft.setTextColor(ST7735_BLUE);
  //tft.setTextSize(3);
  //tft.print(3598865);
  delay(500);
}

void testfastlines(uint16_t color1, uint16_t color2) {
  tft.fillScreen(ST7735_BLACK);
  for (int16_t y = 0; y < tft.height(); y += 5) {
    tft.drawFastHLine(0, y, tft.width(), color1);
  }
  for (int16_t x = 0; x < tft.width(); x += 5) {
    tft.drawFastVLine(x, 0, tft.height(), color2);
  }
}

void testdrawrects(uint16_t color) {
  tft.fillScreen(ST7735_BLACK);
  for (int16_t x = 0; x < tft.width(); x += 6) {
    tft.drawRect(tft.width()/2 - x/2, tft.height()/2 - x/2, x, x, color);
  }
}

HARDWARE (MINDFLEX) SIDE

Let’s start with the EEG monitor. Before attempting this phase, I recommend you visit the Frontier Nerds web site, where they do a better job than I could of explaining how to mod the MindFlex headset so you can interface it with the Arduino.

1 – Remove the 4 screws from the back cover of the left pod of the Mind Flex headset. (The right pod holds the batteries.)

2 – Open the case to expose the electronics.

3 – Identify the NeuroSky board. It is the small daughterboard towards the bottom of the left pod.

4 – If you look closely, you will see pins labeled T and R; these are the pins the EEG board uses to communicate serially with the microcontroller on the main circuit board.

5 – Carefully solder a length of wire to the “T” pin. I used a pair of wires that came from an old PC. Be careful not to short the neighboring pins.

6 – Solder another length of wire to ground using the large solder pad where the battery’s ground connection is.

7 – Drill a hole in the case for the two wires to poke through after the case is closed.

8 – Guide the wires through the hole and recheck your soldering. I recommend putting a dab of glue in the hole to secure the wires in place (I used my hot glue gun for this).

9 – Carefully put the back case back on and re-secure the screws.

10 – Connect the Mind Flex to the Arduino: connect the wire coming from the T pin to the Arduino RX pin, and connect the other wire to an Arduino GND pin.

SOFTWARE SIDE

An initial software test to make sure your Mind Flex is talking to the Arduino: run the example BrainSerialOut sketch.

Note: you will not need the mini display for this test; if you have it connected, nothing will display on it yet.

1 – You will need to download and install the Brain library from the Frontier Nerds web site. Decompress the zip file and drag the “Brain” folder to the Arduino “libraries” folder inside your sketch folder. Restart the Arduino IDE.

You should now be able to see the library and examples under File > Examples > Brain > BrainSerialOut.

If you were successful at installing the library and loading the BrainSerialOut sketch, click on the verify button to compile the sketch and make sure there are no errors.

It’s time to connect your Arduino to your PC using the USB cable and click on the upload button to upload the sketch to the Arduino.

Plug the two wires that you put in the Mind Flex headset into the Arduino: the T signal wire goes to the RX pin, and the ground wire goes to an Arduino GND pin.

Once the sketch is uploaded to the Arduino, make sure your Mind Flex headset is connected to the Arduino and turn it on. Open the serial monitor; you should see a stream of comma-separated numbers scrolling by.

NOTE: If the sketch doesn’t upload and you get a message like the one below,

avrdude: stk500_getsync(): not in sync: resp=0x00

Disconnect the wire from the Arduino RX pin; it sometimes interferes with the upload process.

Note that the connection to the NeuroSky headset is half-duplex: it will use up the RX pin on your Arduino, but you will still be able to send data back to a PC via USB. (Although in this case you won’t be able to send data to the Arduino from the PC.)

Once disconnected, click on the upload button again and, if successful, reconnect the wire to the RX pin.

READING BRAINWAVES

Before we proceed to the next sketch, here are a couple of hints in case you run into trouble:
1 – If the signal quality is anything but 0, you will not get a meditation or attention value.
2 – The values for the brain waves (Alpha, Beta, Gamma, etc.) are kind of nonsensical: they still change even when the signal quality is greater than zero! Also, if you place a finger on the forehead sensor and another on the ear sensor of the left pad, you still get readings for all the brain wave functions. I mention this because I’m not quite sure the values are actually very reliable. In any case, the only values that are usable if you want to control something with your brain are Attention and Meditation.

Alright, so here’s the code:

// Copy and paste the sketch below into your Arduino IDE.
#define sclk 4
#define mosi 5
#define cs   6
#define dc   7
#define rst  8

#include <Adafruit_GFX.h>    // Core graphics library
#include <Adafruit_ST7735.h> // Hardware-specific library
#include <SPI.h>
#include <Brain.h>

Adafruit_ST7735 tft = Adafruit_ST7735(cs, dc, mosi, sclk, rst);

Brain brain(Serial);

void setup(void) {
  tft.initR(INITR_BLACKTAB); // initialize a ST7735S chip, black tab

  tftPrintTest(); // Initial introduction text
  delay(1000);

  tft.fillScreen(ST7735_BLACK); // clear screen
  tft.setTextColor(ST7735_WHITE);
  tft.setTextSize(1);
  tft.setCursor(30, 0);
  tft.println("EEG Monitor");

  Serial.begin(9600);
}

void loop() {
  if (brain.update()) {
    if (brain.readSignalQuality() > 100) {
      tft.fillScreen(ST7735_BLACK);
      tft.setCursor(0, 30);
      tft.setTextColor(ST7735_RED, ST7735_BLACK);
      tft.println("signal quality low");
    }
    else {
      tft.setCursor(30, 0);
      tft.println("EEG Monitor");
      tft.drawLine(0, 20, tft.width()-1, 20, ST7735_WHITE);
      tft.drawLine(0, 130, tft.width()-1, 130, ST7735_WHITE);

      tft.setCursor(0, 30);
      tft.setTextColor(ST7735_YELLOW, ST7735_BLACK);
      tft.print("signal quality :");
      tft.print(brain.readSignalQuality());
      tft.println(" ");
      tft.setTextColor(ST7735_RED, ST7735_BLACK);
      tft.print("Attention      :");
      tft.print(brain.readAttention());
      tft.println(" ");
      tft.setTextColor(ST7735_WHITE, ST7735_BLACK);
      tft.print("Meditation     :");
      tft.print(brain.readMeditation());
      tft.println(" ");
      tft.setTextColor(ST7735_GREEN, ST7735_BLACK);
      tft.print("Delta      : ");
      tft.print(brain.readDelta());
      tft.println(" ");
      tft.print("Theta      : ");
      tft.print(brain.readTheta());
      tft.println(" ");
      tft.print("Low Alpha  : ");
      tft.print(brain.readLowAlpha());
      tft.println(" ");
      tft.print("High Alpha : ");
      tft.print(brain.readHighAlpha());
      tft.println(" ");
      tft.print("Low Beta   : ");
      tft.print(brain.readLowBeta());
      tft.println(" ");
      tft.print("High Beta  : ");
      tft.println(brain.readHighBeta());
      tft.print("Low Gamma  : ");
      tft.print(brain.readLowGamma());
      tft.println(" ");
      tft.print("Mid Gamma  : ");
      tft.print(brain.readMidGamma());
      tft.println(" ");
    }
  }
}

void tftPrintTest() {
  tft.setTextWrap(false);
  tft.fillScreen(ST7735_BLACK);
  tft.setCursor(0, 10);
  tft.setTextColor(ST7735_WHITE);
  tft.setTextSize(1);
  tft.println("INSTRUCTABLES.COM");
  delay(500);
  tft.setCursor(40, 60);
  tft.setTextColor(ST7735_RED);
  tft.setTextSize(2);
  tft.println("EEG");
  tft.setTextColor(ST7735_YELLOW);
  tft.setCursor(20, 80);
  tft.println("Monitor");
  tft.setTextColor(ST7735_BLUE);
  delay(50);
}


FarmBot 1.0

FarmBot Genesis is the first FarmBot to be designed, prototyped, and manufactured. Genesis is designed to be a flexible FarmBot foundation for experimentation, prototyping, and hacking. The driving factors behind the design are simplicity, manufacturability, scalability, and hackability.

Genesis is a small-scale FarmBot primarily constructed from V-Slot aluminum extrusions and aluminum plates and brackets. Genesis is driven by NEMA 17 stepper motors, an Arduino MEGA with a RAMPS shield, and a Raspberry Pi 3 computer. These electronics were chosen for their great availability, support, and usage in the DIY 3D printer world. Genesis can vary in size from a planting area as little as 1 m² to a maximum of 4.5 m², while accommodating a maximum plant height of about 1 m. With additional hardware and modifications, we anticipate the Genesis concept will be able to scale to approximately 50 m² and a maximum plant height of 1.5 m.

[Image: FarmBot Genesis v1.2]

Precision farming has been hailed as the future of agriculture, sustainability, and the food industry. That’s why a company called FarmBot is working to bring precision agriculture technology to environmentally conscious individuals for the first time. Its first product, the FarmBot Genesis, is a do-it-yourself precision farming solution that (theoretically) anyone can figure out. The system is already up to its ninth iteration, and the open source robot improves with each version thanks to input from the FarmBot community.

The FarmBot is designed to be customizable and accessible to the maker community. Its design is based on generic devices such as:

Agriculture is an expensive and wildly wasteful industry. The precision farming movement may not solve every problem the industry faces, but it does have a lot of potential to improve sustainability and efficiency. Before FarmBot, precision agriculture equipment was only available in the form of massive heavy machinery. Precision farming tractors used to cost more than $1 million each when FarmBot creator Rory Aronson first had the idea for his solution in 2011.

The FarmBot robot kit ships with an Arduino Mega 2560, Raspberry Pi 2 Model B, disassembled hardware packages and access to the open-source software community. FarmBot Genesis runs on custom built tracks and supporting infrastructure, all of which you need to assemble yourself. The online FarmBot community makes it easy to find step-by-step instructions for every single assembly process. There are even forums to troubleshoot installing a FarmBot in your own backyard. The robot relies on a software platform that users access through FarmBot’s web app, all of which looks a whole lot like Farmville, the infamous mobile game.

The physical FarmBot system is aligned with the crops you plot out in your virtual version on the web app. That’s how Farmbot can reliably dispense water, fertilizer, and other resources to keep plants healthy and thriving. Since it doesn’t require any delicate sensor technology, FarmBot is a cheaper solution than the industrial precision farming equipment on the market. And with its universal tool mount, you can adapt FarmBot to do pretty much any garden task you desire.

If you’re looking to build your own FarmBot, grab the build schematics and bill of materials (BOM).