PracticalWeb Ltd

Websites that work for you.

Ubuntu Mysql Root Password Reset (Init File Not Found)

If you don’t use the MySQL root account very often, and change passwords reasonably frequently, you may (like me) find that you no longer know the MySQL root password on a dev box.

The MySQL site has a reasonable guide here: https://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html#resetting-permissions-unix

But there always seems to be something extra; this time, for me, it was AppArmor preventing MySQL from reading my reset file.

To reset the password I had to look in /etc/apparmor.d/usr.sbin.mysqld and identify /etc/mysql/conf.d/ as a path MySQL is allowed to read, which makes it a good place for a reset init file.

sudo su
service mysql stop
echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyNewPass');" > /etc/mysql/conf.d/mysql-init
mysqld_safe --init-file=/etc/mysql/conf.d/mysql-init &
# check the new password works
# stop the mysqld_safe instance (e.g. mysqladmin -u root -p shutdown)
service mysql start
rm /etc/mysql/conf.d/mysql-init

Add Annotations to Grafana via Elasticsearch

It took me a while to figure out how to set up Elasticsearch for Grafana annotations, and I wanted a small page to let me add annotations.

Mostly I figure we’ll add a line to deploy scripts to track versions - but I’d like to be able to manually annotate outages
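
As a first cut of that deploy-script line, something like this might work (the index name, type, and field names are assumptions - they have to match whatever the Grafana annotation query is configured to read):

```shell
# Build an annotation document (field names are assumptions - match them to
# the Grafana annotation query) and POST it to Elasticsearch.
PAYLOAD=$(printf '{"@timestamp":"%s","tags":"deploy","text":"Deployed version 1.2.3"}' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)")
echo "$PAYLOAD"
# Uncomment to actually send it (URL is an assumption):
# curl -XPOST 'http://localhost:9200/annotations/annotation' -d "$PAYLOAD"
```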

Git Stash Save Message

git stash is a great way to save work, switch branch, and then get back your half-completed work later.

But I work on many projects, am often playing around with something, get pulled onto the next thing - and so I often have stashed work kicking around.

By default git stash saves the work, but git stash list just gives an id for each stash, like:

stash@{0}: WIP on master: 2166e26 foo
stash@{1}: WIP on master: 2166e26 foo
stash@{2}: WIP on master: 2166e26 foo
stash@{3}: WIP on master: 2166e26 foo
stash@{4}: WIP on master: 2166e26 foo

It is better to do:

git stash save "some description of the work"

Then later git stash list can tell you what each stash is:

stash@{0}: On master: bugfix #123
stash@{1}: On master: feature foo
stash@{2}: On master: feature abc
stash@{3}: On master: feature xyz

Much better for reminding me what these things were.
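
In newer git versions git stash save is deprecated in favour of git stash push -m, which records the description the same way. A quick demo in a throwaway repo:

```shell
# Demo in a throwaway repo: stash with a description and read it back.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
git commit --allow-empty -q -m "initial commit"
echo "half-finished work" > notes.txt
git add notes.txt
git stash save "some description of the work"   # newer git: git stash push -m "..."
git stash list
```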

Puppet vs Ansible

Some thoughts - I’ve used puppet for a while and ansible more recently.

Ansible is easier to get started with

Puppet sequencing can be hard - and occasionally you get bugs appearing in odd places where there was a dependency that you hadn’t noticed, but things happened to work OK until some change.

Puppet is faster - especially for a long set of config with a single change to apply.

Much nicer audit trail pushing puppet code to a branch - nice having a branch per env, hiera allows easy separation of data and code

Puppet has good logs - which can show the diff of any config change and are easy to centralise.

Puppet requires software on the server and a service or cronjob; Ansible needs ssh and full sudo access. It doesn’t run commands in quite the regular way, and the only way to make it work seems to be to allow unrestricted sudo access (a password may be used).

I really don’t like running ansible from a local dev env: it’s too easy to run uncommitted code that others can’t see, there’s no central audit trail, and it’s easy to accidentally use the wrong inventory and push changes to the wrong place.

I get the impression that it is harder to reuse code with puppet than with ansible - though I feel like I’m going to be bitten by Ansible’s strict sequencing at some point - but so far it’s OK.

Ansible gives you the ability to run a multi-machine deploy, which puppet doesn’t (you’d use mcollective for this).

Much richer ecosystem of puppet modules than ansible ones at the time of writing.

I haven’t used Ansible Tower.

SSL Problems in Jmeter and Java 1.7

When using jmeter on an ssl enabled site I was seeing an error

SSL handshake alert: unrecognized_name error

But I’d read that since JMeter version 2.4 SSL should work fine.

It turns out that Java 7 introduced SNI support, which can trigger this error in some circumstances.

As a workaround you can disable this feature by setting the property jsse.enableSNIExtension to false,

and run jmeter like

jmeter -Djsse.enableSNIExtension=false -t mytest.jmx
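
Alternatively the property can go in jmeter’s bin/system.properties file, which jmeter reads at startup, so it applies on every run:

```shell
# Append the property to jmeter's bin/system.properties (run this from the
# jmeter bin directory - adjust the path to your install).
echo "jsse.enableSNIExtension=false" >> system.properties
```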

Local Yum Cache Repo

When I’m working on ansible or docker with machine images that get rebuilt regularly, it’s a pain waiting for slow downloads.

It also helps if I want to work on the train.

I’m working on a process to setup a local repo with the stuff that I need

This script downloads the rpms I have installed

sudo yum -y install yum-utils && sudo yumdownloader --destdir=./ $(rpm -qa --qf "%{NAME} ")

Next I need to drop them in a web dir, run createrepo and finally make sure this repo is enabled and prioritised in my test envs
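
The remaining steps might look roughly like this (paths, hostname, and the priorities plugin are assumptions in this sketch):

```shell
# Serve the downloaded rpms as a repo (sketch - needs a web server exporting
# /var/www/html and the yum-plugin-priorities package on the clients):
#   sudo yum -y install createrepo
#   sudo mkdir -p /var/www/html/localrepo
#   sudo cp ./*.rpm /var/www/html/localrepo/
#   sudo createrepo /var/www/html/localrepo

# Repo definition to drop into /etc/yum.repos.d/ on the test envs:
cat > local.repo <<'EOF'
[local]
name=Local yum cache
baseurl=http://repo-host/localrepo
enabled=1
priority=1
EOF
```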

Writing this part up now even though I haven’t finished - as I don’t want to lose this info.

First Steps With Ansible and Docker

I’ve been using puppet and vagrant for a while, due to client choices we’re switching to ansible which I’m less familiar with - and Docker has been on my to learn list for a while.

I love vagrant - being able to bring up a VM locally that matches the production servers to a good degree is just brilliant, and being able to repeat deploys is invaluable in testing the process.

The limitation with VMs is that each one takes a lot of resources, and is slow to build.

Linux containers are much lighter weight - being faster to create and using much less system resource to run; I’m hoping to be able to run more servers at once on my laptop.

There is a nice quick demo of docker https://www.docker.com/tryit/

The steps below cover installing ansible and docker, building a simple docker image, and then using an ansible playbook to both create a container and then connect to that container. I just run a hello world at that point - but from there running any ansible code should be simple.


On Ubuntu I installed the latest version from Docker’s apt repository, following the notes from https://docs.docker.com/installation/ubuntulinux/

# add the Docker repository key to your local keychain.
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
# Add the Docker repository to your apt sources list, update and install the lxc-docker package.
# You may receive a warning that the package isn't trusted. Answer yes to continue installation.

sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
sudo apt-get update
sudo apt-get install lxc-docker


# To verify that everything has worked as expected:

sudo docker run -i -t ubuntu /bin/bash

I also installed docker-py https://github.com/docker/docker-py

sudo pip install docker-py

This is required for the docker module in ansible, which allows running docker commands from an ansible playbook. I probably don’t need this right away - but later I want to be able to manage multiple docker containers on remote servers.

I installed ansible from git, following the instructions at http://docs.ansible.com/intro_installation.html#getting-ansible

git clone git://github.com/ansible/ansible.git --recursive
cd ./ansible
source ./hacking/env-setup

#Ansible also uses the following Python modules that need to be installed:

sudo pip install paramiko PyYAML Jinja2 httplib2

Set up an inventory file

echo "localhost ansible_connection=local" > ~/ansible_hosts
export ANSIBLE_HOSTS=~/ansible_hosts

test with a ping command:

ansible all -m ping

Those last steps are a slight variation on the official docs: I didn’t want to use ssh locally, preferring the local connection which doesn’t need passwords or keys to work. I’ll come to that later.

I created a Dockerfile which builds me a base image that just has

  • sshd running
  • the ssh port exposed
  • ansible user added
  • passwordless sudo for ansible user
  • an authorised key for ansible user

The file is

# base
#
# VERSION               0.0.1

FROM centos:6
MAINTAINER Sean Burlington <sean@practicalweb.co.uk>

# setup sshd,  ensuring it runs through regular configs once - this does some initial setup
RUN yum -y update && yum -y install openssh-server
RUN service sshd start && service sshd stop

# create ansible user with sudo and public key
RUN yum -y install sudo
RUN useradd ansible
RUN echo 'ansible:D\N5Vlucg7,JlUhiDb<N' | chpasswd
ADD etc/sudoers/ansible /etc/sudoers.d/
RUN mkdir -p /home/ansible/.ssh/
ADD home/ansible/.ssh/authorized_keys /home/ansible/.ssh/
RUN chown -R ansible:ansible /home/ansible/ && chmod 700 /home/ansible/.ssh/ && chmod 600 /home/ansible/.ssh/authorized_keys


# set example env variable that will be visible in users shell
RUN echo "export VISIBLE=now" >> /etc/profile

# run sshd and expose it 
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
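
The sudoers drop-in that the Dockerfile ADDs isn’t shown; the usual form for passwordless sudo would be something like this (an assumption - check the repo’s etc/sudoers/ansible for the real content):

```shell
# Write a sudoers drop-in granting the ansible user passwordless sudo
# (content is an assumption - this is the conventional one-liner).
cat > ansible-sudoers <<'EOF'
ansible ALL=(ALL) NOPASSWD:ALL
EOF
cat ansible-sudoers
```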

Get this file and supporting code by running:

git clone https://github.com/practicalweb/docker-ansible.git
cd docker-ansible

Note that I don’t include my public key there (it is gitignored)

Copy yours in with something like:

cp ~/.ssh/id_rsa.pub base-docker/home/ansible/.ssh/authorized_keys

build the image

sudo docker build -t base base-docker/

I wrote a playbook that creates a container from the image, adds this host to the ansible inventory, and then runs an ansible command against it.

# Docker / ansible hello world


# Create a docker container on localhost (this has an ssh server in the image)
- hosts: localhost
  sudo: yes
  tasks:
  - name: start container
    docker: image=base hostname=test name=test detach=False

# add container(s) to the hosts inventory
  - name: add new hosts to inventory
    add_host: hostname="{{ item.Name }}"
      groups=docker
      ansible_ssh_host="{{ item.NetworkSettings.IPAddress }}"
      ansible_ssh_port=22
    when: item.State.Running == True
    with_items: docker_containers


# Now we can run ansible on the docker container(s)
- hosts: docker
  sudo: yes
  remote_user: ansible
  tasks:
    - name: hello
      shell: echo "hello world"

run it like

export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook  --ask-sudo-pass docker1.playbook

I turn off host key checking since this is a new host and the check would always fail.
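
If you’d rather not export that variable every time, the equivalent setting can go in ansible.cfg:

```shell
# Turn off host key checking in ansible.cfg instead of the environment.
cat >> ansible.cfg <<'EOF'
[defaults]
host_key_checking = False
EOF
```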

To clean up after creating containers (or you end up with lots of mess):

To stop all docker containers

sudo docker stop $(sudo docker ps -a -q)

To remove all docker containers

sudo docker rm $(sudo docker ps -a -q)

Well now I can build a base image, create an instance of it as a container, and run ansible there.

Next steps are to run a more meaningful ansible playbook to setup my applications, and to link containers together so that apps can connect.

Ubuntu Disk Encryption Password

To reset the password on an encrypted disk in Ubuntu use the disks tool

The cogs icon for the partition has a change password option


Git Integration Branch Based Workflow

The branching strategy I’ve found most effective and flexible is to use one branch per release version for integration, with feature branches off these for everything else.

It’s hard to visualise and I’ve tried drawing graphs but once I build in enough features to make the graph meaningful it is no longer easy to interpret.

Inevitably there is a degree of complexity in a branching process suited to large teams and large projects - but it does come down to a few basic principles.

If you follow these principles you can maintain maximum release readiness, minimise conflicts, maximise flexibility, minimise bugs.

At the end of the day the goal is to try and make life easier for both the development team and the business team while delivering as much functionality as possible.

1 Each release has an Integration branch

When we start work on a new version the first thing we do is create a new integration branch based on the latest version of the previous release’s integration branch.

Releases are made from tags on this branch; no code gets merged to the integration branch until it has passed initial testing.

In theory the integration branch is always release ready - in real life we find some bugs later than we’d like, also sometimes we integrate early when we know a release is still a way off.

2 Every feature or bugfix has a branch

Developers (almost) never work directly on the integration branch, but create a branch for every change, forked from the earliest integration branch the change might be merged back into.

3 Merge forwards

Newer integration branches are forks of older integration branches so merging changes made to version x into version x+1 (or x+2 etc) will usually be trivial.

Feature branches are forks of integration branches - again this makes merging the integration code to the feature usually trivial.

I say the merge is usually trivial - occasionally two changes will have a conflict - for example if two developers have edited the same line in different ways. But by maintaining a branch hierarchy and merging frequently these conflicts are amazingly rare. Even when they do happen they will involve recently edited code which makes understanding the conflict much easier.

The thing that git is really really good at is merging two branches with a shared history.

If you do find you make a fix from a later integration branch (maybe you make a bugfix which you only later realise affects older releases too) then you may need to cherry pick commits back to an older branch - but this should be the exception.

4 Work in the oldest branch you can

If you don’t know for sure which release a feature will be in, start it from the oldest one it might be in - since integration branches share heritage you can always merge it forward later.

You can always merge a feature branch to a later integration branch (since the later integration branch contains the older one). But the reverse is tricky and you would need to cherry pick.

5 Always merge the Integration branch to the feature branch before merging a feature

Before merging a feature to the integration branch we first make sure that the feature branch has all the latest code for that release, then we test it.

This avoids the risk that our code works OK in isolation but conflicts in some way with another feature that was merged to integration after we started work.

It means that we will deal with any conflicts in the feature branch and that merging to integration will always be a simple fast forward merge with no conflicts.
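
A sketch of principles 3 and 5 in a throwaway repo (branch and file names are just examples):

```shell
# Demonstrate that merging integration into the feature branch first makes
# the merge back to integration a simple fast-forward.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
git commit --allow-empty -q -m "initial"
git branch integration-1.2

# fork a feature branch from the integration branch and commit to it
git checkout -q -b feature/foo integration-1.2
echo foo > foo.txt && git add foo.txt && git commit -q -m "feature foo"

# meanwhile another change lands on integration
git checkout -q integration-1.2
echo bar > bar.txt && git add bar.txt && git commit -q -m "feature bar"

# principle 5: merge integration INTO the feature first, and test there
git checkout -q feature/foo
git merge -q -m "merge integration-1.2" integration-1.2

# the merge back to integration is now a conflict-free fast-forward
git checkout -q integration-1.2
git merge --ff-only feature/foo
```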

6 Never commit what you can build

Don’t put binary files or things like minified css/js in git, also don’t commit version numbers. These sorts of files tend to change in multiple branches at once and are very prone to generating conflicts.

The better solution is to have a build script that pulls in any binaries needed from a separate store, builds minified versions of files, does any compilation etc.

The build script could take the release number as a parameter or read it from a git tag and insert the version number into whatever file is needed.
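
For example (a sketch - the artefact file name and tag are made up), a build script might read the version from the latest tag with git describe and write it into an artefact:

```shell
# Read the release number from a git tag and bake it into a build artefact.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
git commit --allow-empty -q -m "initial"
git tag v1.2.3

version=$(git describe --tags)
printf 'release: %s\n' "$version" > version.txt
cat version.txt
```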

Side note on rebasing/squashing commits on feature branches

Take a look at google https://www.google.co.uk/webhp?q=git+to+squash+or+not+to+squash

There are a lot of opinions out there on whether or not to squash commits before merging a feature branch

Personally I prefer when people don’t because I find that when they do the commit messages are quite terse “added feature foo”

I prefer to see the individual commits, and which lines were written at 5:30 on a Friday night when maybe someone was in a hurry to get home.

But if people do rebase then doing this on their feature branch just before merging to integration is a good time to do it.

Never rebase on integration.

ELK Gotchas

Syslog import assumes the current year - and I can’t seem to change that (it’s early January and I’m working on logs that include the end of December).

Even adding a year to the log data in sed didn’t seem to help

Selecting time periods via the histogram I had inadvertently set up conflicting time periods, or times outside my data - if “filtering” is collapsed it’s very easy for the dashboard to have a filter that leaves no data shown, and as a beginner not to realise it.

The histogram panel view exposes an “interval” setting that was wrong in my case (set to an interval of a year, which wasn’t useful for a few days’ worth of data).