Resize VM images in Proxmox

When you need to resize a VM disk image (and have the extra space allocated to the partitions inside the VM), you might want to read this:

Step-by-step guide

  1. Power off your VM from Proxmox
  2. Grow the disk allocation from the Proxmox GUI
  3. Open a shell on the Proxmox host and use parted to resize the partitions (for qcow2 images you may need to map the image to an NBD device first)
    root@proxmox:~# qemu-nbd -c /dev/nbd0 /mnt/pve/gogu/images/108/vm-108-disk-1.qcow2
    root@proxmox:~# parted /dev/nbd0
    (parted) p
    Model: Unknown (unknown)
    Disk /dev/nbd0: 82.9GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  525MB   524MB   primary  ext4         boot
     2      525MB   42.9GB  42.4GB  primary               lvm
    (parted) resizepart 2 82.8G
    root@proxmox:~# nbd-client -d /dev/nbd0
  4. Power on the VM and log in to it. Time to put the free space to good use.
    pvdisplay                                                # inspect the physical volume before resizing
    pvresize /dev/vda2                                       # grow the PV to the new partition size
    lvextend -l +100%FREE /dev/mapper/vg_webtest01-lv_root   # give all free extents to the root LV
    resize2fs /dev/mapper/vg_webtest01-lv_root               # grow the ext4 filesystem
    # In case of xfs use:
    # xfs_growfs /dev/mapper/centos_template-root
  5. Job done. Enjoy. (A quick verification sketch follows below.)
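A minimal way to double-check the result from inside the VM, assuming the same LVM layout as above (volume group vg_webtest01 with lv_root):

df -h /        # the root filesystem should now show the extra space
vgs            # volume group summary; VFree should be back to (almost) zero after lvextend
lvs            # logical volume sizes after the extend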


Docker Swarm cluster with Consul

A quick post / howto on installing a Swarm / Consul cluster of Docker hosts and how to play with it. I will not go deep into details, and a lot of what is in this post should (and will) be detailed or changed for a production environment.

How it will look:

[Diagram: Docker cluster with Swarm and Consul]


For the proof of concept we will use 3 VMs in VirtualBox.

Initially node1, node2 and node3 will be identically configured. After the cluster is set up, node1 will also be the management node.
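One assumption throughout this howto is that the Docker daemon on each node listens on TCP port 2375, since that is the port Swarm joins the nodes on (--addr=<ip>:2375). A minimal sketch of how that might look on CentOS 7, assuming the stock /etc/sysconfig/docker file is used by the docker systemd unit (do not expose 2375 like this outside a lab, it has no authentication):

# /etc/sysconfig/docker - keep the unix socket and add a TCP listener
OPTIONS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375"

# restart the daemon to pick up the new options
systemctl restart docker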

Installing Node1:

  • Install CentOS 7, disable firewalld, and install Docker 1.6.2 and Consul 0.5.2 exactly as described for Node2 and Node3 below
  • Add consul configuration file
    [root@docker-01 server]# cat /etc/consul.d/server/config.json
    {
        "bootstrap_expect": 3,
        "server": true,
        "data_dir": "/var/consul",
        "log_level": "INFO",
        "enable_syslog": false,
        "retry_join": ["172.17.0.30", "172.17.0.31", "172.17.0.32"],
        "client_addr": "0.0.0.0"
    }
  • run consul agent
    /usr/local/bin/consul agent -config-dir="/etc/consul.d/server" -ui-dir="/var/www/html" >>/var/log/consul.log 2>&1 &
  • join the node in swarm cluster
    docker run -d swarm join --addr=172.17.0.30:2375 consul://172.17.0.30:8500/swarm
  • run the registration in consul service
    docker run -d --name regdock01 -h docker01 -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://172.17.0.30:8500
  • run the swarm management (node1 only; this is what makes it the management node)
    docker run -d -p 3333:2375 swarm manage consul://172.17.0.30:8500/swarm

Installing Node2:

  • Install CentOS 7
  • Disable firewalld
  • Install docker from the repo, then update it to 1.6.2 (manually override /usr/bin/docker with the binary from docker.io: wget https://get.docker.com/builds/Linux/x86_64/docker-latest)
  • Download and install consul (wget https://dl.bintray.com/mitchellh/consul/0.5.2_linux_amd64.zip)
  • Add consul configuration file
    [root@docker-02 server]# cat /etc/consul.d/server/config.json
    {
        "bootstrap_expect": 3,
        "server": true,
        "data_dir": "/var/consul",
        "log_level": "INFO",
        "enable_syslog": false,
        "retry_join": ["172.17.0.30", "172.17.0.31", "172.17.0.32"],
        "client_addr": "0.0.0.0"
    }
  • run consul agent
    /usr/local/bin/consul agent -config-dir="/etc/consul.d/server" >>/var/log/consul.log 2>&1 &
  • join the node in swarm cluster
    docker run -d swarm join --addr=172.17.0.31:2375 consul://172.17.0.31:8500/swarm
  • run the registration in consul service
    docker run -d --name regdock02 -h docker02 -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://172.17.0.31:8500

Installing Node3:

  • Install CentOS 7
  • Disable firewalld
  • Install docker from the repo, then update it to 1.6.2 (manually override /usr/bin/docker with the binary from docker.io: wget https://get.docker.com/builds/Linux/x86_64/docker-latest)
  • Download and install consul (wget https://dl.bintray.com/mitchellh/consul/0.5.2_linux_amd64.zip)
  • Add consul configuration file
    [root@docker-03 server]# cat /etc/consul.d/server/config.json
    {
        "bootstrap_expect": 3,
        "server": true,
        "data_dir": "/var/consul",
        "log_level": "INFO",
        "enable_syslog": false,
        "retry_join": ["172.17.0.30", "172.17.0.31", "172.17.0.32"],
        "client_addr": "0.0.0.0"
    }
  • Run consul agent
    /usr/local/bin/consul agent -config-dir="/etc/consul.d/server" >>/var/log/consul.log 2>&1 &
  • Join the node in swarm cluster
    docker run -d swarm join --addr=172.17.0.32:2375 consul://172.17.0.32:8500/swarm
  • Run the registration in consul service
    docker run -d --name regdock03 -h docker03 -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://172.17.0.32:8500
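At this point all three Consul servers should have found each other (bootstrap_expect is set to 3 on every node). A quick sanity check, runnable on any of the nodes:

/usr/local/bin/consul members
# expect three alive entries of type "server", one per node (172.17.0.30, .31, .32)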

Managing / viewing / running containers in the cluster:


# show running containers in the cluster
docker -H tcp://172.17.0.30:3333 ps
# show cluster nodes info
docker -H tcp://172.17.0.30:3333 info
# run containers in the cluster:
docker -H tcp://172.17.0.30:3333 run -d --name www1 -p 81:80 nginx
docker -H tcp://172.17.0.30:3333 run -d --name www2 -p 81:80 nginx
docker -H tcp://172.17.0.30:3333 run -d --name www3 -p 81:80 nginx
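Because every container publishes host port 81, Swarm has no choice but to place www1, www2 and www3 on three different nodes. Since registrator watches the Docker socket on each node, the containers should also show up in Consul; assuming registrator registers them under the default service name nginx (derived from the image name), they can be listed through the Consul HTTP API:

curl http://172.17.0.30:8500/v1/catalog/service/nginx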


Consul web interface:

[Screenshot: Consul web UI after install and deploy of the 3 nginx containers]

How to clean up your /boot in Ubuntu

First check your running kernel version, so you won’t delete the in-use kernel image, by running:

uname -a

Now run this command for a list of installed kernels:

sudo dpkg --list 'linux-image*'

and delete the kernels you don’t want/need anymore by running this:

sudo apt-get remove linux-image-VERSION linux-image-VERSION

Replace VERSION with the version of the kernel you want to remove.
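For example, assuming uname -r reports 3.13.0-100-generic and the dpkg list still shows the older 3.13.0-96 and 3.13.0-98 images (hypothetical version numbers, yours will differ), the command would be:

sudo apt-get remove linux-image-3.13.0-96-generic linux-image-3.13.0-98-generic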

When you’re done removing the older kernels, you can run this to remove any packages you no longer need:

sudo apt-get autoremove

And finally you can run this to update the GRUB kernel list:

sudo update-grub
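If you want to confirm that the cleanup actually freed space:

df -h /boot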

Ubuntu set local hostname via DHCP

Sometimes, for automation, you need to be able to set the Ubuntu hostname at boot (or at network restart) via DHCP / DNS.

To do that you only have to add a file named hostname in /etc/dhcp/dhclient-exit-hooks.d with the following content:


if [ "$reason" != BOUND ] && [ "$reason" != RENEW ] \
&& [ "$reason" != REBIND ] && [ "$reason" != REBOOT ]
then
return
fi

host=$(host $new_ip_address | cut -d ' ' -f 5)
host=${hostname:0:-1}
echo $host > /etc/hostname
hostname $host

What does it do? Simple: it hooks into dhclient, and after the client receives its new IP from DHCP it does a reverse lookup on that IP and sets the hostname accordingly.
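A quick way to test the hook without rebooting, assuming the interface is eth0 and your DNS server has a PTR record for the address:

sudo dhclient -r eth0    # release the current lease
sudo dhclient eth0       # request a new one, which runs the exit hooks
hostname                 # should now print the name from the reverse lookup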

Synology wake it up on LAN

Synology, like any other recent and decent device, supports Wake-on-LAN. This comes in very handy when you don’t want to keep it always on.
Setting it up is pretty easy: just go to Control Panel, open the Hardware and Power menu in the Synology web interface and check Enable WOL on LAN 1.
[Screenshot: Enable WOL on LAN 1 in the Synology control panel]

However, in order to wake it up you need to send the magic packet onto the network.

I found that etherwake does the job right. Since I have eth1 connected to the internal network, I’m using it like this:


etherwake -i eth1 00:11:33:22:bb:aa

where 00:11:33:22:bb:aa is the MAC address of the Synology network card.
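If etherwake is not available, the wakeonlan utility (assuming your distribution packages it) does the same job; instead of an interface it takes the broadcast address of the LAN the Synology sits on (192.168.1.255 is just an example here):

wakeonlan -i 192.168.1.255 00:11:33:22:bb:aa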