TFM IOT – Home automation. Part 1. Introduction

A project with simple requirements is beautiful. However, simple requirements can turn into hard-to-achieve implementations, and this is the case in this home automation project.
The requirement for this project is simple: automate everything in the house:

  • Lights
  • Shades
  • Music and video
  • Heating / ventilation
  • Air quality
  • Security

What exactly are we trying to achieve:

  • Control any device in the house remotely
  • Monitor and graph all sensors
  • Take action when events happen in the house
  • Have a secure, simple and beautiful user interface that anyone can use
  • Wi-Fi everything
  • Take action by using scenarios

So, in this article series, we will try to come up with a complete solution that fulfills all the requirements, and we will take it one device at a time.

Create a maintainable CentOS 7 box for web hosting

Goal: create a CentOS 7 box for web hosting (LAMP stack, monitoring software, code versioning software) that is easy to install, maintainable over time, and easy to extend with new functionality.


First things first: install CentOS 7 minimal.

After installation:

update OS

yum update
yum upgrade

generic tools

yum install perl perl-core ntp nmap sudo libidn gmp libaio libstdc++ unzip sysstat sqlite net-tools mc bind-utils telnet

Remove the firewalld wrapper and install the plain iptables service

yum remove firewalld

yum install iptables-services
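
With firewalld removed, the iptables service still has to be enabled and started so that the rules in /etc/sysconfig/iptables are loaded at boot; a minimal follow-up, assuming the default ruleset shipped by iptables-services is a good enough starting point:

# enable the classic iptables service so rules persist across reboots
systemctl enable iptables
systemctl start iptables
# quick sanity check of the active rules
iptables -L -n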

Adding more useful software repositories

yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install http://rpms.famillecollet.com/enterprise/remi-release-7.rpm
yum install https://www.percona.com/redir/downloads/percona-release/redhat/latest/percona-release-0.1-3.noarch.rpm

Installing PHP 5.6

yum --enablerepo=remi-php56 install php php-gd php-mysql php-mcrypt
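
After the install it is worth checking that the Remi packages actually landed; something along these lines (module names matching the packages installed above):

php -v
php -m | grep -E 'gd|mysql|mcrypt'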

Installing the MySQL database (Percona Server) and backup tools

yum install Percona-Server-server-57 percona-xtrabackup
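
Percona Server does not start automatically after the install. A rough post-install sketch, assuming the systemd unit is named mysqld and that, like stock MySQL 5.7, a temporary root password is written to /var/log/mysqld.log on first start:

systemctl enable mysqld
systemctl start mysqld
# fish out the temporary root password generated on first start
grep 'temporary password' /var/log/mysqld.log
# set a proper root password and drop the test accounts
mysql_secure_installation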

git

For CentOS 7 we didn't (yet) find a decent software repository for Git 2.7. Most probably we will make one and try to maintain it (this will be covered in another article).
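
Until that repository exists, a quick stopgap is to build Git from source; a rough sketch (the 2.7.x tarball name below is illustrative):

# build dependencies
yum install gcc make autoconf curl-devel expat-devel gettext-devel openssl-devel zlib-devel perl-ExtUtils-MakeMaker
# fetch, build and install under /usr/local
curl -LO https://www.kernel.org/pub/software/scm/git/git-2.7.4.tar.gz
tar xzf git-2.7.4.tar.gz
cd git-2.7.4
make prefix=/usr/local all
make prefix=/usr/local install
git --version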

Monitoring

yum install collectd collectd-apache
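
collectd also needs to be enabled and started; on the EPEL packaging the main config is /etc/collectd.conf, which includes the plugin snippets under /etc/collectd.d/ (paths assumed from the EPEL defaults):

systemctl enable collectd
systemctl start collectd
# confirm it is collecting data (default RRD data directory)
ls /var/lib/collectd/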


Installing Ricoh Aficio SG3100SNw in linux

So, you have decided to purchase a Ricoh Aficio SG3100SNw printer. Good. It's a good choice for small/medium companies. What we like about it is that it comes with a wireless interface, and that is quite nice. Fewer cables, more fun. I'm not going to get into details on how to unbox it and do the initial setup; I leave that as an exercise for the reader (it's not easy, but up to a point it's fun). Continue reading Installing Ricoh Aficio SG3100SNw in linux

Docker cluster swarm with consul

A quick post / howto on installing a Swarm / Consul cluster of Docker hosts and how to play with it. I will not go deep into details, and a lot of what is in this post should (and will) be detailed or changed for a production environment.

How it will look:

[Diagram: Docker cluster with Swarm and Consul]

For the proof of concept we will use 3 VMs in VirtualBox.

Initially node1, node2 and node3 will be configured identically (see the prep sketch below). After the cluster is set up, node1 will also be the management node.
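
All three VMs get the same base prep, mirroring the bullet lists in the node sections below; a rough sketch of those shared steps (package names and paths are the CentOS 7 defaults, and the docker daemon also has to listen on TCP 2375 for the swarm join/manage commands used later):

# disable the default firewall for this proof of concept
systemctl stop firewalld && systemctl disable firewalld
# install docker from the repo, then override the binary with 1.6.2 from docker.io
yum install docker
wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O /usr/bin/docker
chmod +x /usr/bin/docker
# expose the docker API on TCP 2375 in addition to the local socket,
# e.g. by adding "-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock" to OPTIONS in /etc/sysconfig/docker
systemctl start docker
# install the consul binary and create its directories
yum install unzip
wget https://dl.bintray.com/mitchellh/consul/0.5.2_linux_amd64.zip
unzip 0.5.2_linux_amd64.zip -d /usr/local/bin/
mkdir -p /var/consul /etc/consul.d/server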

Installing Node1:

[root@docker-01 server]# cat /etc/consul.d/server/config.json
{
    "bootstrap_expect": 3,
    "server": true,
    "data_dir": "/var/consul",
    "log_level": "INFO",
    "enable_syslog": false,
    "retry_join": ["172.17.0.30", "172.17.0.31", "172.17.0.32"],
    "client_addr": "0.0.0.0"
}
  • run the consul agent
    /usr/local/bin/consul agent -config-dir="/etc/consul.d/server/config.json" -ui-dir="/var/www/html" >>/var/log/consul.log 2>&1 &
  • join the node in the swarm cluster
    docker run -d swarm join --addr=172.17.0.30:2375 consul://172.17.0.30:8500/swarm
  • run the registrator container that registers services in consul
    docker run -d --name regdock01 -h docker01 -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://172.17.0.30:8500
  • run the swarm management container (node1 only)
    docker run -d -p 3333:2375 swarm manage consul://172.17.0.30:8500/swarm

Installing Node2:

  • Install CentOS 7
  • Disable firewalld
  • Install docker from the repo, then update it to 1.6.2 (manually override /usr/bin/docker with the binary from docker.io: wget https://get.docker.com/builds/Linux/x86_64/docker-latest)
  • Download and install consul (wget https://dl.bintray.com/mitchellh/consul/0.5.2_linux_amd64.zip)
  • Add the consul configuration file
    [root@docker-02 server]# cat /etc/consul.d/server/config.json
    {
        "bootstrap_expect": 3,
        "server": true,
        "data_dir": "/var/consul",
        "log_level": "INFO",
        "enable_syslog": false,
        "retry_join": ["172.17.0.30", "172.17.0.31", "172.17.0.32"],
        "client_addr": "0.0.0.0"
    }
  • run consul agent
    /usr/local/bin/consul agent -config-dir="/etc/consul.d/server/config.json" >>/var/log/consul.log 2>&1 &
  • join the node in the swarm cluster
    docker run -d swarm join --addr=172.17.0.31:2375 consul://172.17.0.31:8500/swarm
  • run the registrator container that registers services in consul
    docker run -d --name regdock02 -h docker02 -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://172.17.0.31:8500

Installing Node3:

  • Install CentOS 7
  • Disable firewalld
  • Install docker from the repo, then update it to 1.6.2 (manually override /usr/bin/docker with the binary from docker.io: wget https://get.docker.com/builds/Linux/x86_64/docker-latest)
  • Download and install consul (wget https://dl.bintray.com/mitchellh/consul/0.5.2_linux_amd64.zip)
  • Add the consul configuration file
    [root@docker-03 server]# cat /etc/consul.d/server/config.json
    {
        "bootstrap_expect": 3,
        "server": true,
        "data_dir": "/var/consul",
        "log_level": "INFO",
        "enable_syslog": false,
        "retry_join": ["172.17.0.30", "172.17.0.31", "172.17.0.32"],
        "client_addr": "0.0.0.0"
    }
  • Run consul agent
    /usr/local/bin/consul agent -config-dir="/etc/consul.d/server/config.json" >>/var/log/consul.log 2>&1 &
  • Join the node in the swarm cluster
    docker run -d swarm join --addr=172.17.0.32:2375 consul://172.17.0.32:8500/swarm
  • Run the registrator container that registers services in consul
    docker run -d --name regdock03 -h docker03 -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://172.17.0.32:8500
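
Once all three nodes are up, it is worth checking that the consul servers found each other and elected a leader before moving on; two quick checks from any node (standard consul CLI and HTTP API):

# all three servers should show up as alive
/usr/local/bin/consul members
# a non-empty answer here means a leader has been elected
curl http://172.17.0.30:8500/v1/status/leader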

Managing / viewing / running containers in the cluster:


# show running containers in the cluster
docker -H tcp://172.17.0.30:3333 ps
# show info about the cluster nodes
docker -H tcp://172.17.0.30:3333 info
# run containers in the cluster:
docker -H tcp://172.17.0.30:3333 run -d --name www1 -p 81:80 nginx
docker -H tcp://172.17.0.30:3333 run -d --name www2 -p 81:80 nginx
docker -H tcp://172.17.0.30:3333 run -d --name www3 -p 81:80 nginx
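
Because all three nginx containers publish the same host port (81), the swarm scheduler has to place them on different nodes. Registrator pushes each of them into consul, so you can also query the service catalog directly; note that the service name is derived from the image name, so it may show up as nginx or nginx-80 depending on the registrator version:

# list every service registered in consul
curl http://172.17.0.30:8500/v1/catalog/services
# show which node and port each nginx instance landed on
curl http://172.17.0.30:8500/v1/catalog/service/nginx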


Consul web interface:


[Screenshot: Consul web UI after install and deploy of 3 nginx containers]


Ubuntu set local hostname via DHCP

Sometimes, for automation, you need to be able to set the Ubuntu hostname at boot (or at network restart) via DHCP / DNS.

To do that you only have to add, in /etc/dhcp/dhclient-exit-hooks.d, a file named hostname with the following content:


# only act when the lease actually gives us an address
if [ "$reason" != BOUND ] && [ "$reason" != RENEW ] \
&& [ "$reason" != REBIND ] && [ "$reason" != REBOOT ]
then
return
fi

# reverse-lookup the address we just received; field 5 of the
# "domain name pointer" answer is the FQDN, with a trailing dot
host=$(host $new_ip_address | cut -d ' ' -f 5)
# strip the trailing dot from the reverse lookup answer
host=${host%.}
echo $host > /etc/hostname
hostname $host

What does it do? Simple: it hooks into dhclient, and after the client receives the new IP from DHCP it makes a reverse lookup for that IP and sets the hostname accordingly.
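
To test the hook without rebooting, force a fresh lease and check the result (eth0 below is just an example interface name):

sudo dhclient -r eth0   # release the current lease
sudo dhclient eth0      # request a new one, which triggers the exit hook
hostname                # should now show the name from the reverse DNS record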

Synology wake it up on LAN

Synology, like any other recent and decent device, accepts Wake-on-LAN. This comes in very handy when you don't want to keep it always on.
Setting up the Synology is pretty easy: just go to Control Panel, open the Hardware & Power menu in the Synology web interface, and check Enable WOL on LAN1.
[Screenshot: Enable WOL on LAN1 in the Synology Hardware & Power panel]

However, in order to wake it up you need to send the magic packet onto the network.

I found that etherwake does the job right. Since I have eth1 connected to the internal network, I'm using it like this:


etherwake -i eth1 00:11:33:22:bb:aa

where 00:11:33:22:bb:aa is the MAC address of the Synology network card.
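
If you want to script it, a small helper that wakes the NAS and then waits until it answers ping can be handy; a sketch, with a made-up MAC and IP that you would replace with your own:

#!/bin/sh
# wake the Synology and wait until it is reachable
MAC="00:11:33:22:bb:aa"   # example MAC from above, replace with yours
IP="192.168.1.50"         # hypothetical NAS address, replace with yours
etherwake -i eth1 "$MAC"
until ping -c1 -W1 "$IP" >/dev/null 2>&1; do
    sleep 5
done
echo "NAS is up"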

Ubuntu 14.04: adding the HP ProLiant support pack (hpacucli problem solved)

In the beginning there was nothing but a few bare-metal servers. After a while someone delivers a whole bunch of bare metal on your doorstep and says: "I need them installed by tomorrow". Same configuration, with the hard disks in RAID and Ubuntu on all of them. What do you do?

It's a big problem. Like most big problems, you split it into lots of little problems that are easier to manage.
What you need to do for one bare-metal server:

  1. Update the iLO firmware and BIOS (if necessary). This will come in handy: https://play.google.com/store/apps/details?id=com.hp.essn.iss.ilo.iec.spa&feature=search_result . I'm not going into details about it in this post.
  2. Create the disk arrays
  3. Install the operating system on it
  4. Configure it and deploy it.

For one server, let's say you can do it in a few hours, a few beers and some pizzas. But wait a minute… there are a LOT of bare-metal servers to be installed. One option is to call some friends and have them do it while you watch a movie.

OR you can be smart and automate the tasks. How? What do I need?

You need a bare-metal installer server, a laptop, or a virtual something (VirtualBox / VMware / you choose) image that will do the job for you while you sit back and relax.

The idea is simple:

  • The bare metal will boot from the network (a minimal dnsmasq PXE sketch follows this list)
  • The TFTP server will deliver the boot image; the server boots it, gets an IP address from the DHCP server, registers itself in the bare-metal installer and fills in its hardware configuration there.
  • Then you can (using Ansible) actually do the RAID configuration, BIOS updates, firmware updates, and operating system install.
  • Add the necessary configurations.
  • Once complete, the system will boot from RAID and you have a system up and running, ready to be deployed.
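
For the network-boot part, one lightweight way to serve DHCP, PXE and TFTP from the installer box is dnsmasq. A minimal sketch, assuming the pxelinux.0 loader from the syslinux package and a TFTP root at /var/lib/tftpboot (the address range and paths are illustrative, adjust them to your install network):

apt-get install dnsmasq syslinux
cat > /etc/dnsmasq.d/pxe.conf <<'EOF'
# hand out addresses to the new baremetals on the install network
dhcp-range=192.168.100.50,192.168.100.150,12h
# tell PXE clients which boot loader to fetch
dhcp-boot=pxelinux.0
# serve it over dnsmasq's built-in TFTP server
enable-tftp
tftp-root=/var/lib/tftpboot
EOF
mkdir -p /var/lib/tftpboot
cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/
service dnsmasq restart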

Now… back to the subject of the post.

How can you configure HP RAID from inside Ubuntu, so that the network boot image carries the proper tools to actually do the configuration?

First we install hpacucli:


# add the HP MCP repository (sudo has to apply to the redirect as well, hence tee)
echo "deb http://downloads.linux.hp.com/SDR/downloads/MCP/ubuntu precise/current non-free" | sudo tee -a /etc/apt/sources.list
wget http://downloads.linux.hp.com/SDR/downloads/MCP/GPG-KEY-mcp
sudo apt-key add GPG-KEY-mcp
sudo apt-get update
sudo apt-get install cpqacuxe hp-ams hp-health hpacucli hponcfg
# we don't need the System Management Homepage daemon on the boot image
sudo service hpsmhd stop
sudo update-rc.d hpsmhd disable
# quick check that the HP tools can talk to the hardware
sudo hpasmcli -s "show server"

Then, when we boot the new bare metal to be installed, we can gather the information about the RAID controllers:


hpacucli ctrl all show config

That will produce an output like this (in this case I had already configured the RAID):


Smart Array E200i in Slot 0 (Embedded) (sn: VX9AMP1927 )

   array A (SAS, Unused Space: 0 MB)

      logicaldrive 1 (136.7 GB, RAID 1, OK)

      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)

At this point you can create some scripts that will create the arrays and logical drives the way the stakeholders want them.

for example:


hpacucli ctrl slot=9 create type=logicaldrive drives=1I:1:3,1I:1:4 raid=1
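
In a provisioning script you will usually want this to be idempotent, so re-running it on an already-configured controller does not fail. A rough sketch of that pattern, reusing the slot number and drives from the example above:

#!/bin/sh
# create the RAID 1 logical drive only if the controller in slot 9 has none yet
if ! hpacucli ctrl slot=9 logicaldrive all show 2>/dev/null | grep -q logicaldrive; then
    hpacucli ctrl slot=9 create type=logicaldrive drives=1I:1:3,1I:1:4 raid=1
fi
hpacucli ctrl slot=9 logicaldrive all show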