Grafana + InfluxDB

TL;DR – go to data.cyplo.net

Some, rather long, time ago I added custom Python-driven data acquisition and graphing to my sun-powered RaspberryPi installation on the balcony. Since then I've upgraded it to a RaspberryPi 2 and ported the data thingy to InfluxDB + Grafana.
I've been very positively surprised by all three of those things.

RaspberryPi2 – definitely worth the upgrade – it's a speed daemon now. A small caveat – I recommend installing Raspbian from scratch, especially if you had some custom overclocking config, as these do not seem to be compatible between 1 and 2. Also, the RasPi2 needs a microSD card instead of a full-sized one.

As for the software – since everything went surprisingly smoothly, this post is not much of a tutorial. Just go to InfluxDB and Grafana and go through the respective installation documentation. You need an x86-64 server to host this, so unfortunately no self-hosting on the RaspberryPi – at least I wasn't able to compile the software there.

I've changed the original Python scripts slightly, to upload the data to InfluxDB instead of graphing it directly via matplotlib. Then I configured Grafana to display some cool graphs and that was pretty much it – you can see the result at data.cyplo.net.
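The upload itself boils down to one HTTP request per batch of readings. Below is a minimal sketch of what such a write looks like, assuming an InfluxDB 0.9+ instance with its HTTP line-protocol endpoint; the database name, measurement and tag are made up for illustration and are not necessarily what my scripts use:

# write a single hypothetical reading into an example "solar" database;
# "battery_voltage", the "panel" tag and the value are illustrative only
curl -i -XPOST 'http://localhost:8086/write?db=solar' \
    --data-binary 'battery_voltage,panel=small value=12.64'

Grafana then only needs that same database configured as a data source to draw the graphs.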

Right now I'm testing 2 different sizes of solar panels and batteries, hooked up at the same time. The ADC is connected as it was before though, so a TODO is to add more measurements, to see how the individual panels' output changes during the day and how it affects each of the batteries.


CNC router arrives

After 2 months of waiting, my CNC router has arrived. 8 weeks lead time they said – 7 weeks and 4 days it was! Who are they? The TanieCNC people [CheapCNC in Polish :]. It may look like they don't know how to make websites AND their name does not instill a lot of confidence – but girl, they certainly know how to weld and make precise machinery!

The size of the package caught me off guard, and I spent an hour disassembling the crate in the full sun. After that I wasn't able to get it up the stairs myself; fortunately a friendly neighbour lent me a pair of hands. Lifting the machine with 2 people is okay – it's still not lightweight, but bearable. Putting it on the table was a different affair entirely. Careful not to damage anything, especially the motor assemblies, we put it on an impromptu wood ramp and, using heavy duty straps, lifted it up little by little. Then some inspection – the quality is really superb, especially of the metal frame!

After that I got an old PC with Windows XP and a parallel port running the Mach3 software – I wanted to start out the same way as any other shop would. Later on I'm planning on moving to LinuxCNC and then gradually off the parallel port onto a USB stack – something more like an Arduino parsing gcode and driving the motors instead of relying on the accurate timing of the PC.

TODOs:

  • add an MDF bed layer on top of existing bed
  • get better clamps
  • get more router bits
  • get a vacuum attachment for the spindle
  • move to LinuxCNC
  • move off parallel-port driving


Tools: PCB holder

I thought it would be cool to share with you the tools I find surprisingly useful.

Behold the first in the series: the PCB holder!

I cannot overstate how much of a difference it makes compared to the ‘third hand’ type of holders. The grip is very firm, but it won't scratch the surface nor short anything, because the jaws are made of soft plastic.
And the whole thing ROTATES!


Backing up and restoring whole block devices

SD cards are not really reliable storage, especially when used constantly, e.g. while sitting in an always powered-on Raspberry Pi. Because of that I've recently needed to perform lots of backup/restore operations 😉

I wrote this script for backing up:

#!/bin/bash

if [[ -z $1 ]]; then
    echo "usage: $0 device_to_clone"
    exit
fi

device=$1

timestamp=`date +%Y%m%d`
dest_file="/tmp/$timestamp.dd.xz"

echo "about to clone $device to $dest_file"
echo "ctrl-c or [enter]"
read

# unmount all the partitions (e.g. /dev/sdc1, /dev/sdc2) and the device itself;
# it's fine if some of these are not mounted
sudo umount $device?
sudo umount $device

sudo sync
# read the device with a progress bar, pass it through dd in 4M blocks
# and compress in parallel with pixz into the destination file
sudo pv -tpreb $device | dd bs=4M | pixz > $dest_file
sudo sync

And this one for restoring:

#!/bin/bash

if [[ -z $1 ]] || [[ -z $2 ]]; then
    echo "usage: $0 restore_file.xz device_to_restore_to"
    exit
fi

source_file=$1
if [[ ! -f $source_file ]]; then
    echo "cannot open $source_file"
    exit
fi

device=$2

echo "about to restore $source_file onto $device"
echo "ctrl-c or [enter]"
read

# unmount all the partitions and the device itself before overwriting it
sudo umount $device?
sudo umount $device

# decompress in parallel with pixz and write the image back onto the device
pv -tpreb $source_file | pixz -d | sudo dd bs=4M of=$device
sudo sync
sudo eject $device

Some of the more fun features include progress bars and making sure you've unmounted the device properly beforehand 😉
This also uses parallel threads to compress and decompress the data (that's what pixz is for), so the XZ compression should not be a bottleneck on any modern machine.
The scripts above were used to back up and restore SD cards, but they will work for any block device, be it an external or internal disk drive, etc.

Usage example [remember to use the whole device as the argument, not just one of its partitions]:

./backup_sdcard /dev/sdc
about to clone /dev/sdc to /tmp/20150214.dd.xz
ctrl-c or [enter]

[sudo] password for cyryl:
umount: /dev/sdc1: not mounted
umount: /dev/sdc2: not mounted
umount: /dev/sdc: not mounted
19,6MiB 0:00:02 [9,72MiB/s] [>                                                                                                                                               ]  0% ETA 0:52:26
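Restoring works the same way; assuming the second script is saved as, say, restore_sdcard [the name here is just an example], it would look like this:

./restore_sdcard /tmp/20150214.dd.xz /dev/sdc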


Standing desk

It has been some time since the last photo story, so please accept these pictures of my standing desk.

On the actual desk there is a laptop stand serving as a keyboard and mouse rest. The laptop itself is flipped on its back, with the motherboard attached to the back of what once was the lid. The whole thing is flying on a standard monitor desk mount, using a custom VESA-to-acrylic mounting system 😉


GUI Vagrant box

Recently I've started working on changing my default development workflow. I'm evaluating Vagrant as the main env manager, with Docker on top for extra speed. In short, my vagrant up boots up a new dev box and then a couple of Docker containers. What I've found is that there is not really a plethora of GUI-enabled Vagrant boxes, so I've created one!

If you want to use it, go:

vagrant init cyplo/ubuntu-gnome-utopic-gui
vagrant up

I will write about the whole setup later, as I’m not yet sure what approach is best for me.


Tor talk

I gave a talk this Monday, an important one I think – the kind that spreads knowledge about safe internet usage to people not necessarily from a tech background.

This was my first one given to such an audience and, to add to it all, it was given in Polish. The biggest challenge? Finding good equivalents for the English tech terms.

I think the talk went quite okay and the discussion afterwards was quite lively. I said a bit about how the internet works and what's wrong with that, then transitioned to what problems Tor addresses and which ones it does not. I tried to emphasize that using Tor does not automatically make you immune to the dangers of the internet.

Big thanks to the organizers, the Praxis student group from the Wroclaw University of Economics.

You can find my slides’ sources here, along with speaker notes.


Running Eagle on Ubuntu 14.10 64bit

Eagle is still the first choice when it comes to Open Hardware electronics design. That's a bit unfortunate, because the software itself is proprietary. Sometimes you need to run it though – for example, to migrate projects over to non-proprietary software!

Say you'd like to run the new Eagle 7.1 under Ubuntu?
Try the repos.
The repos only have the old major version 6.
The harder proprietary software is to get, the better, I suppose.

Download the blob then:

$ wget -c http://web.cadsoft.de/ftp/eagle/program/7.1/eagle-lin-7.1.0.run
$ chmod a+x eagle-lin-7.1.0.run

Inspect and run the stuff:

$ vim eagle-lin-7.1.0.run 
$ ./eagle-lin-7.1.0.run 
Ensure the following 32 bit libraries are available:
	libXrender.so.1 => not found
	libXrandr.so.2 => not found
	libXcursor.so.1 => not found
	libfreetype.so.6 => not found
	libfontconfig.so.1 => not found
	libXi.so.6 => not found
	libssl.so.1.0.0 => not found
	libcrypto.so.1.0.0 => not found

32bit craziness, you say.
New Ubuntu does not have ia32 libs prepackaged, you say?

Here, have this handy list of all of the dependencies then:

$ sudo apt-get install libxrandr2:i386 libxrender1:i386 libxcursor1:i386 libfreetype6:i386 libfontconfig:i386 libxi6:i386 libssl1.0.0:i386 libcrypto++9:i386
# should show you the installation wizard [sic !]
$ ./eagle-lin-7.1.0.run 


Poor man’s secrets storage

I'm a bit cautious when it comes to storing my passwords and other secrets. I do not use any web or desktop applications to do this for me. How do I remember those passphrases then?

I have a central file server, accessible via a tunnel. There I store a gpg-encrypted file containing a tar archive of a directory with various files containing secrets. Syncing these files across computers has become a bit cumbersome lately. I'm using git to version them, but because I do not want the sync server to contain unencrypted secrets, I needed to bake a custom solution.

Bash to the rescue!
There are still some assumptions made here about permissions, directory layout and some stuff not failing, but I'm sure you'll be able to figure this out and tweak it to your needs.

#!/bin/bash

TUNNEL_CREDS="user@tunnelhost"
TUNNEL_PORT=123
STORAGE_CREDS="storage_user@localhost"
STORAGE_ADDRESS="storagehost.example.org"
SOCKET="/tmp/black_socket"
REMOTE_VAULT_PATH="/somepath/.vault.tar.gpg"
TMP_VAULT="/tmp/.vault.tar.gpg"
TMP_VAULT_TAR="/tmp/.vault.tar"
TMP_VAULT_DIR="/tmp/.vault"

TMP_LOCAL_PORT=10022
LOCAL_VAULT_DIR="$HOME/.vault"
LOCAL_VAULT_BACKUP_DIR="$LOCAL_VAULT_DIR.bak"

pushd `pwd`

echo "removing old vault backup at $LOCAL_VAULT_BACKUP_DIR"
rm -rI "$LOCAL_VAULT_BACKUP_DIR"

set -e

echo "backing up local vault..."
cp -r "$LOCAL_VAULT_DIR" "$LOCAL_VAULT_BACKUP_DIR"

echo "establishing tunnel ..."
ssh -L $TMP_LOCAL_PORT:$STORAGE_ADDRESS:22 $TUNNEL_CREDS -p $TUNNEL_PORT -N -f -M -S "$SOCKET"

echo "tunnel ready, copying remote version of the vault..."
rsync --progress -avz -e "ssh -p $TMP_LOCAL_PORT" "$STORAGE_CREDS:$REMOTE_VAULT_PATH" "$TMP_VAULT"

echo "decrypting new vault..."
gpg -d "$TMP_VAULT" > "$TMP_VAULT_TAR"

echo "unpacking new vault..."
mkdir -p "$TMP_VAULT_DIR"
tar xf "$TMP_VAULT_TAR" -C "$TMP_VAULT_DIR"

echo "pulling from remote vault..."
cd "$LOCAL_VAULT_DIR"
git pull "$TMP_VAULT_DIR"

echo "pulling to remote vault..."
cd "$TMP_VAULT_DIR"
git pull "$LOCAL_VAULT_DIR"

echo "cleaning up a bit..."
rm -fr "$TMP_VAULT_TAR"
rm -fr "$TMP_VAULT"

echo "packing refreshed remote vault..."
tar pcf "$TMP_VAULT_TAR" -C "$TMP_VAULT_DIR" .

echo "encrypting refreshed remote vault..."
gpg -c "$TMP_VAULT_TAR"

echo "sending out updated vault"
rsync --progress -avz "$TMP_VAULT" -e "ssh -p $TMP_LOCAL_PORT" "$STORAGE_CREDS:$REMOTE_VAULT_PATH"

echo "cleaning up.. "
rm -fr "$TMP_VAULT_DIR"
rm -fr "$TMP_VAULT_TAR"
rm -fr "$TMP_VAULT"

echo "closing tunnel.."
ssh -S "$SOCKET" -O exit $TUNNEL_CREDS

popd
