TFM IoT – Home Automation. Part 1: Introduction

A project with simple requirements is beautiful. However, simple requirements can turn into hard-to-achieve implementations, and that is the case with this home automation project.
The requirement for this project is simple: automate everything in the house:

  • Lights
  • Shades
  • Music and video
  • Heating / ventilation
  • Air quality
  • Security

What exactly are we trying to achieve:

  • Control any device in the house remotely
  • Monitor and graph all sensors
  • Take action when events happen in the house
  • Have a secure, simple and beautiful user interface that anyone can use
  • Wi-Fi everything
  • Take action by using scenarios

So in this article series we will try to come up with a complete solution that fulfills all of these requirements, one device at a time.

Create a maintainable Centos 7 box for web hosting

Goal: create a CentOS 7 box for web hosting (LAMP stack, monitoring software, code versioning software) that is easy to install, maintainable over time, and easy to extend.

First things first: install CentOS 7 Minimal.

After installation:

update OS

yum update
yum upgrade

generic tools

yum install perl perl-core ntp nmap sudo libidn gmp libaio libstdc++ unzip sysstat sqlite net-tools mc bind-utils telnet

Remove iptables wrapper and install plain iptables service

yum remove firewalld

yum install iptables-services
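With firewalld gone, the iptables service loads its ruleset from /etc/sysconfig/iptables. A minimal sketch for a web-hosting box (the open ports are assumptions; adjust to your setup):

```
# /etc/sysconfig/iptables -- minimal ruleset sketch for a LAMP box
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# keep loopback and established flows working
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# ssh, http, https
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
COMMIT
```

Enable it with `systemctl enable iptables` and load the rules with `systemctl start iptables`.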

Adding more useful software repositories

yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install http://rpms.famillecollet.com/enterprise/remi-release-7.rpm
yum install https://www.percona.com/redir/downloads/percona-release/redhat/latest/percona-release-0.1-3.noarch.rpm

Installing php 5.6

yum --enablerepo=remi-php56 install php php-gd php-mysql php-mcrypt

Installing mysql DB and backup tools

yum install Percona-Server-server-57 percona-xtrabackup
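Percona Server 5.7 locks down the initial root account: on first start it generates a temporary password in the error log. A hedged sketch of the first-boot steps (service name and log path are the RHEL/CentOS defaults; verify on your install):

```shell
# start the server and make it boot-persistent
systemctl enable mysqld
systemctl start mysqld
# 5.7 writes a one-time root password into the error log on first start
grep 'temporary password' /var/log/mysqld.log | awk '{print $NF}'
# log in with that password and replace it before doing anything else
mysql -u root -p -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'ChangeMe-1!';"
```

xtrabackup can then take hot backups, e.g. `innobackupex /backups` (the destination path is an example).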

git

For CentOS 7 we haven't found a decent software repository for Git 2.7 yet. Most probably we will build one and try to maintain it (to be covered in another article).

Monitoring

yum install collectd collectd-apache
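collectd does nothing useful until plugins are enabled in /etc/collectd.conf. A minimal sketch that stores local metrics as RRD files and scrapes Apache (assumes mod_status with `ExtendedStatus On` is serving /server-status on localhost):

```
# /etc/collectd.conf -- minimal sketch: system metrics + Apache scraping
LoadPlugin cpu
LoadPlugin memory
LoadPlugin load
LoadPlugin rrdtool
LoadPlugin apache
<Plugin apache>
  <Instance "localhost">
    URL "http://localhost/server-status?auto"
  </Instance>
</Plugin>
```

Restart with `systemctl restart collectd` and the RRD files show up under /var/lib/collectd/.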

Installing Ricoh Aficio SG3100SNw in linux

So, you have decided to purchase a Ricoh Aficio SG3100SNw printer. Good. It's a good choice for small and medium companies. What we like about it is that it comes with a wireless interface, which is quite nice: fewer cables, more fun. I'm not going to get into details on how to unbox it and do the initial setup; I leave that as an exercise for the reader (it's not easy, but up to a point it's fun).

Docker cluster swarm with consul

A quick post / howto on installing a Swarm/Consul cluster of Docker hosts and how to play with it. I will not go deep into details, and much of this post should (and will) be detailed or changed for a production environment.

What it will look like:

[Diagram: Docker cluster with Swarm and Consul]

For the proof of concept we will use 3 VMs in VirtualBox.

Initially node1, node2 and node3 will be configured identically. After the cluster is set up, node1 will also be the management node.

Installing Node1:

[root@docker-01 server]# cat /etc/consul.d/server/config.json
{
    "bootstrap_expect": 3,
    "server": true,
    "data_dir": "/var/consul",
    "log_level": "INFO",
    "enable_syslog": false,
    "retry_join": ["172.17.0.30", "172.17.0.31", "172.17.0.32"],
    "client_addr": "0.0.0.0"
}
  • run the consul agent
/usr/local/bin/consul agent -config-dir="/etc/consul.d/server" -ui-dir="/var/www/html" >>/var/log/consul.log 2>&1 &
  • join the node to the swarm cluster:

docker run -d swarm join --addr=172.17.0.30:2375 consul://172.17.0.30:8500/swarm

  • run registrator to register containers in Consul

docker run -d --name regdock01 -h docker01 -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://172.17.0.30:8500

  • run the swarm management:

docker run -d -p 3333:2375 swarm manage consul://172.17.0.30:8500/swarm

Installing Node2:

  • Install CentOS 7
  • Disable firewalld
  • Install docker from repo. Then update docker to 1.6.2 ( manually override /usr/bin/docker with the one from docker.io )  ( wget  https://get.docker.com/builds/Linux/x86_64/docker-latest )
  • Download and install consul (  wget https://dl.bintray.com/mitchellh/consul/0.5.2_linux_amd64.zip )
  • Add consul configuration file
    [root@docker-02 server]# cat /etc/consul.d/server/config.json
    {
        "bootstrap_expect": 3,
        "server": true,
        "data_dir": "/var/consul",
        "log_level": "INFO",
        "enable_syslog": false,
        "retry_join": ["172.17.0.30", "172.17.0.31", "172.17.0.32"],
        "client_addr": "0.0.0.0"
    }
  • run the consul agent
    /usr/local/bin/consul agent -config-dir="/etc/consul.d/server" >>/var/log/consul.log 2>&1 &
  • join the node to the swarm cluster
    docker run -d swarm join --addr=172.17.0.31:2375 consul://172.17.0.31:8500/swarm
  • run registrator to register containers in Consul
    docker run -d --name regdock02 -h docker02 -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://172.17.0.31:8500

Installing Node3:

  • Install CentOS 7
  • Disable firewalld
  • Install docker from repo. Then update docker to 1.6.2 ( manually override /usr/bin/docker with the one from docker.io )  ( wget  https://get.docker.com/builds/Linux/x86_64/docker-latest )
  • Download and install consul (  wget https://dl.bintray.com/mitchellh/consul/0.5.2_linux_amd64.zip )
  • Add consul configuration file
    [root@docker-03 server]# cat /etc/consul.d/server/config.json
    {
        "bootstrap_expect": 3,
        "server": true,
        "data_dir": "/var/consul",
        "log_level": "INFO",
        "enable_syslog": false,
        "retry_join": ["172.17.0.30", "172.17.0.31", "172.17.0.32"],
        "client_addr": "0.0.0.0"
    }
  • Run the consul agent
    /usr/local/bin/consul agent -config-dir="/etc/consul.d/server" >>/var/log/consul.log 2>&1 &
  • Join the node to the swarm cluster
    docker run -d swarm join --addr=172.17.0.32:2375 consul://172.17.0.32:8500/swarm
  • Run registrator to register containers in Consul
    docker run -d --name regdock03 -h docker03 -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://172.17.0.32:8500

Managing / viewing / running containers in the cluster:

# show running containers in the cluster
docker -H tcp://172.17.0.30:3333 ps
# show cluster node info
docker -H tcp://172.17.0.30:3333 info
# run containers in the cluster (all three publish host port 81, so Swarm's
# port filter schedules each one on a different node):
docker -H tcp://172.17.0.30:3333 run -d --name www1 -p 81:80 nginx
docker -H tcp://172.17.0.30:3333 run -d --name www2 -p 81:80 nginx
docker -H tcp://172.17.0.30:3333 run -d --name www3 -p 81:80 nginx

Consul web interface:

[Screenshot: Consul web UI after install and deployment of 3 nginx containers]

Ubuntu set local hostname via DHCP

Sometimes, for automation, you need to be able to set the Ubuntu hostname at boot (or at network restart) via DHCP/DNS.

To do that, add a file named hostname to /etc/dhcp/dhclient-exit-hooks.d with the following content:

if [ "$reason" != BOUND ] && [ "$reason" != RENEW ] \
&& [ "$reason" != REBIND ] && [ "$reason" != REBOOT ]
then
return
fi

host=$(host $new_ip_address | cut -d ' ' -f 5)
# strip the trailing dot from the FQDN (POSIX-safe; the hook is sourced by /bin/sh)
host=${host%.}
echo $host > /etc/hostname
hostname $host
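The fragile part of the hook is parsing the output of host(1), which prints `A.B.C.D.in-addr.arpa domain name pointer fqdn.`, where field 5 is the FQDN with a trailing dot. The extraction can be checked in isolation against canned output (the names below are made up):

```shell
# simulated reverse-lookup output from host(1); hypothetical host name
line="30.0.17.172.in-addr.arpa domain name pointer docker-01.example.com."
name=$(echo "$line" | cut -d ' ' -f 5)
# strip the trailing dot that DNS answers carry
name=${name%.}
echo "$name"    # docker-01.example.com
```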

What does it do? Simple: it hooks into dhclient, and once the client receives its new IP from DHCP it performs a reverse lookup on that IP and sets the hostname accordingly.