LVM recovery

A few days ago I made a mistake and forced fsck to check the partition that holds the LVM physical volume instead of a logical volume; as a result I got broken LVM metadata and was unable to see the volume group and logical volumes.
The pvs output looked like this:

# pvs -v

Scanning for physical volume names
Incorrect metadata area header checksum

I tried to run pvck, but it did not help: it found the corrupted metadata but did not repair the LVM:

# pvck -d -v /dev/md5
Scanning /dev/md5
Incorrect metadata area header checksum
Found label on /dev/md5, sector 1, type=LVM2 
Found text metadata area: offset=4096, size=193024
Incorrect metadata area header checksum

Finally I found out that it is possible to make backups of LVM metadata and restore them when needed, but I thought all I had was a broken LVM with broken metadata.
It's hard to describe how happy I was when I discovered that by default LVM creates a backup of the metadata every time you change it. I found it in the /etc/lvm/backup directory, and after that recovery became an easy task. First I recreated the physical volume:

pvcreate -u b3Lk2a-pydG-Vhf3-DSEJ-9b84-RLm9-UEr6r3 --restorefile /etc/lvm/backup/vg-320 /dev/md5

The UUID can be found in the pv section of the metadata file:

 physical_volumes {
 
 pv0 {
 id = "<strong>b3Lk2a-pydG-Vhf3-DSEJ-9b84-RLm9-UEr6r3</strong>"
 device = "/dev/md5" # Hint only

Next I restored the volume group:

vgcfgrestore -f /etc/lvm/backup/vg-320 vg-320

After that the logical volumes became visible:

# lvs
 LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
 root vg-320 -wi-a--- 15.00g 
 swap vg-320 -wi-a--- 1.00g 
 var vg-320 -wi-ao-- 200.00g 
 zoneminder vg-320 -wi-a--- 15.00g

After reinitializing with vgscan -v && vgchange -ay, the volume group was ready for fsck.
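As a side note, you do not have to rely only on the automatic backups: vgcfgbackup can dump the current metadata to a file of your choosing (the path below is just an example), and vgcfgrestore can read it back the same way as above:

# write a manual metadata backup of the vg-320 volume group
vgcfgbackup -f /root/vg-320.metadata vg-320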

Simple OpenVPN profile generator

A few months ago I learned that OpenVPN supports profiles. Before that I generated a config for every client, created keys and certs with easy-rsa, tarred it all together and put it on the client. Now I can create a profile that contains all the necessary keys, certs and config in one file, so I wrote a simple script that generates an .ovpn profile for a new client.
The generated .ovpn profile can be imported from an SD card on Android, via iTunes or email on iOS, or used directly on a PC with `openvpn your_new_profile.ovpn`.
Prerequisites: a configured easy-rsa (`pkitool clientname` must produce a cert and key for the client).
You must customize the config part for your server; it would be possible to pull this data from the server config file, but I'm too lazy to modify the script for that.
Here it is:

#!/bin/bash
#Dir where easy-rsa is placed
EASY_RSA_DIR="/etc/ssl/easy-rsa"
KEYS_DIR="$EASY_RSA_DIR/keys"
# Dir where profiles will be placed
OVPN_PATH="/root/ovpn"
REMOTE="your.server port"
 
 
if [ -z "$1" ]
then 
        echo -n "Enter new client common name (CN): "
        read -e CN
else
        CN=$1
fi
 
 
if [ -z "$CN" ]
        then echo "You must provide a CN."
        exit
fi
 
cd $EASY_RSA_DIR
if [ -f $KEYS_DIR/$CN.crt ]
then 
        echo "Certificate with the CN $CN already exists!"
        echo " $KEYS_DIR/$CN.crt"
else
source ./vars > /dev/null
./pkitool $CN
fi
 
cat > $OVPN_PATH/${CN}.ovpn << END
client
dev tun
resolv-retry infinite
nobind
persist-key
persist-tun
verb 1
comp-lzo
proto tcp
remote $REMOTE
 
<ca>
`cat $KEYS_DIR/ca.crt`
</ca>
 
<cert>
`sed -n '/BEGIN/,$p' $KEYS_DIR/${CN}.crt`
</cert>
 
<key>
`cat $KEYS_DIR/${CN}.key`
</key>
END
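Usage is trivial; assuming you saved the script as make_profile.sh (the name is arbitrary), pass the client CN as the first argument or let it prompt you:

# generate keys, cert and an inline profile for a new client
./make_profile.sh laptop-alice
# the result ends up in /root/ovpn/laptop-alice.ovpn, ready to copy to the device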

Libvirt + vnc + sasl

Yesterday I wanted to configure libvirt with KVM virtualization. While reading the comments in the config file I noticed that qemu can share credentials with libvirt via SASL. I also found a few how-tos that said 'just copy /etc/sasl2/libvirt.conf to /etc/sasl2/qemu.conf'.
I did that, but when I tried to open the console of a VM I got "Error: connection to hypervisor host got refused or disconnected!".
Maybe you think you can find something interesting in the log? Nope. Maybe you think you can run virt-manager in debug mode and see something useful? Nope. The reason it happens is that libvirt runs as root but starts the VMs as the libvirt-qemu user, while the sasl2 database is owned by root:root with 640 permissions. I changed the owner of /etc/libvirt/passwd.db to libvirt-qemu:root and the problem was gone.
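For reference, the fix itself is a one-liner (the database path is the one my SASL config pointed to; it may differ on other distributions):

# let the qemu process read the SASL credentials database
chown libvirt-qemu:root /etc/libvirt/passwd.db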

Pulseview compilation

Half a year ago I wanted to make a device that could be used to clone a ski pass. I thought that ski passes use 125 kHz RFID. First I bought the ITead RDM6300 module, but it turned out that it can only read tags, so I bought an EM4095 chip. Around that time I also noticed that most ski passes use MIFARE tags, which operate at 13.56 MHz.
Anyway, I want to complete this project and build a device that can read and write 125 kHz tags (there are really too many different tags that operate at 125 kHz and use different protocols, so I want to start with EM4100 tags). Those tags use Manchester encoding to transfer data, and they can use different bitrates. Encoding data into Manchester is easy, but decoding it when you do not know the bitrate is a real pain in the ass.
I have a clone of a Saleae logic analyzer, so I decided to practice decoding Manchester with libsigrokdecode. Sigrok has an 'official' GUI for libsigrok and libsigrokdecode called PulseView.
I found that Debian wheezy ships an old libsigrok and does not have PulseView at all, so I decided to build sigrok and PulseView from scratch. It is not an easy quest, because in addition to libsigrok and libsigrokdecode you need to compile an old libusb and libvisa.
Finally, when I had compiled all that stuff, I ran into errors while trying to compile PulseView with decoder support.

First, libsigrokdecode needs Python >= 3.0, and on wheezy Python.h lives in python3.2/Python.h, so you need to change the include in libsigrokdecode.h:

./include/libsigrokdecode/libsigrokdecode.h:#include <python3.2/Python.h> /* First, so we avoid a _POSIX_C_SOURCE warning. */
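If you do not want to edit the header by hand, a sed one-liner does the trick, assuming the original line reads #include <Python.h>:

sed -i 's|#include <Python.h>|#include <python3.2/Python.h>|' ./include/libsigrokdecode/libsigrokdecode.h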

Second, if you get an error like this:

[ 40%] Building CXX object CMakeFiles/pulseview.dir/pv/view/decodetrace.cpp.o
/var/tmp/sigrok/pulseview/pv/view/decodetrace.cpp: In member function ‘virtual void pv::view::DecodeTrace::paint_mid(QPainter&, int, int)’:
/var/tmp/sigrok/pulseview/pv/view/decodetrace.cpp:203:3: error: ‘hash_combine’ is not a member of ‘boost’
/var/tmp/sigrok/pulseview/pv/view/decodetrace.cpp:204:3: error: ‘hash_combine’ is not a member of ‘boost’
/var/tmp/sigrok/pulseview/pv/view/decodetrace.cpp:205:3: error: ‘hash_combine’ is not a member of ‘boost’
make[2]: *** [CMakeFiles/pulseview.dir/pv/view/decodetrace.cpp.o] Error 1

then you need to add "#include <boost/functional/hash.hpp>" to /var/tmp/sigrok/pulseview/pv/view/decodetrace.cpp.
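One blunt way to do that from the shell is to let sed insert the include at the very top of the file (anywhere before the offending code works):

sed -i '1i #include <boost/functional/hash.hpp>' /var/tmp/sigrok/pulseview/pv/view/decodetrace.cpp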

Third, if you get this:

CMakeFiles/pulseview.dir/pv/data/decoderstack.cpp.o: In function `pv::data::DecoderStack::decode_proc(boost::shared_ptr<pv::data::Logic>)':
/var/tmp/sigrok/pulseview/pv/data/decoderstack.cpp:267: undefined reference to `srd_session_new'
/var/tmp/sigrok/pulseview/pv/data/decoderstack.cpp:283: undefined reference to `srd_inst_stack'

then you need to add -lsigrokdecode to CMakeFiles/pulseview.dir/link.txt.
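Again, sed can handle it; link.txt normally contains a single link command line, and this simply appends the library to it:

sed -i 's|$| -lsigrokdecode|' CMakeFiles/pulseview.dir/link.txt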

I spent too much time compiling all that stuff, so I decided to put up an archive here with the complete libsigrok, libsigrokdecode, libvisa, libusb, sigrok and PulseView. I compiled it with the prefix /opt/sigrok, so if you want to use it, put the contents into /opt and run it like this:

LD_LIBRARY_PATH=/opt/sigrok/lib /opt/sigrok/bin/pulseview

Enjoy: sigrok.tar
md5: 7bbb1d434959848c741230fe90a590c5 /tmp/sigrok.tar.gz
PS
Also, you must install libboost-thread.

Zram

A few months ago I tried a very cool feature called 'zram'. It is a Linux kernel module that lets you create compressed block devices in memory; they can be used for a compressed filesystem in RAM (/tmp, for example) or for swap.
Maybe you think that, in the context of swap, it is dumb to keep memory pages in memory when the OS needs that memory. =) But compressing and storing pages in memory is faster than writing them to disk, and in most cases memory pages compress very well, which helps the OS free RAM. If you have an SSD it will extend the life of your disk, and you can keep using your on-disk swap as well: just give the zram swap a higher priority, and when zram is full the OS will start using the swap on disk.

I used this init.d script for Debian, but I changed it to use not the whole RAM for zram devices, but half of it (in the worst case, when pages cannot be compressed at all, zram will use only half of my memory). If you want to make the same modification, just change echo $((mem_total / num_cpus )) to echo $((mem_total / num_cpus / 2)) in that script.
Without modifications the script slices your memory by the number of CPU cores in your system, creates swap on those slices and attaches them with priority 100 (usually swap partitions have priority -1).
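For illustration, here is a rough sketch of what such a script does; the variable names follow the description above and are not a copy of the original init.d script:

#!/bin/bash
# number of CPU cores and total memory in kB
num_cpus=$(grep -c '^processor' /proc/cpuinfo)
mem_total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
# give each zram device an equal share of half the RAM (the modified variant)
mem_per_device=$((mem_total / num_cpus / 2))

modprobe zram num_devices="$num_cpus"
for i in $(seq 0 $((num_cpus - 1))); do
        # disksize is set in bytes, mem_per_device is in kB
        echo $((mem_per_device * 1024)) > /sys/block/zram$i/disksize
        mkswap /dev/zram$i
        swapon -p 100 /dev/zram$i   # higher priority than the on-disk swap (-1)
done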
I made a simple test of the compression ratio for zram.
First I detached one of my swap devices:

$ sudo swapoff /dev/zram3

Then I created a core file of the iceweasel process and wrote it to the zram device:

$ pgrep -lf icewea
3375 sh -c /usr/bin/iceweasel
3376 /usr/bin/iceweasel
$ gcore 3376
[Thread debugging using libthread_db enabled]
[New Thread 0x7f7c0b9fd700 (LWP 8455)]
...blablabla...
0x00007f7c62a57c13 in poll () from /lib/libc.so.6
Saved corefile core.3376
$ sudo dd if=./core.3376 of=/dev/zram3
dd: writing to `/dev/zram3': No space left on device
2027297+0 records in
2027296+0 records out
1037975552 bytes (1,0 GB) copied, 6,79003 s, 153 MB/s

The core file did not fit completely into the zram device, but that does not matter; let's look at the compression ratio:

$ cd /sys/block
$ echo `cat ./zram3/orig_data_size`/`cat ./zram3/compr_data_size` | bc -l
2.68475926964795164955

So, at least on this kind of data, zram achieves a compression ratio of more than 2.5.
Huh, I think that is pretty cool.

Zoneminder jitter

After several years of torture with an EasyCAP I realized that it was time to change my capture device. Other USB capture devices supported by Linux cost too much, and I cannot use PCI or full-height PCIe cards because of the mATX form factor of my server. Then I found the Hauppauge ImpactVCB 1381: it is exactly what I was looking for, it is supported by Linux, it is PCIe and it has a half-height bracket.
Before I tried this card I did not think there could be such a big difference in image quality between two cards. Unfortunately I do not have a sample from the EasyCAP, but trust me, the difference is big enough to throw the EasyCAP away.
As always there is a fly in the ointment: either ZoneMinder or the Hauppauge driver has a bug and the captured image sometimes jitters. It looks like this:
Zoneminder jitter

I prefer to think that it is a bug in ZoneMinder, because I did not see the same issue when I captured video with mencoder.
Let's hope it will be fixed in future releases.

Your IP

Since the idiots in the Russian government passed a law similar to 'SOPA', I have started modifying the routing scheme at home. Many times I used internet.ya.ru to determine my current outgoing IP address, but I wanted a more minimalistic tool for this purpose. So I created my own tool with blackjack and hookers, here it is: https://ivanbayan.com/uip.php. I use different routes for TLS and plain HTTP traffic, so it is also available at https://ivanbayan.com/uip.php.
The script produces a simple image with your outgoing IP:

You ip
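Since the script just returns an image, it is easy to use from a shell as well; for example, to see which outgoing IP a particular interface gets (eth1 here is only an example):

curl -s --interface eth1 https://ivanbayan.com/uip.php -o current_ip.png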

Addresses of TOR relays

Some time ago I wrote about how to block access from the TOR network.
A few days ago I noticed that the file with the addresses of TOR relays (http://exitlist.torproject.org/exit-addresses) is not available anymore. I did not find a similar list, so I wrote my own script with blackjack and hookers. The output of this script is not compatible with the "exit-addresses" format; it is a plain list of IPv4 addresses of TOR relays.

Enjoy: https://ivanbayan.com/tor_exitnodes_v4.txt
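If you want to feed the list straight into a firewall, a minimal sketch could look like this (the ipset name is arbitrary):

# load the list into an ipset and drop incoming traffic from TOR exit nodes
ipset create tor_exit hash:ip -exist
curl -s https://ivanbayan.com/tor_exitnodes_v4.txt | while read -r ip; do
        ipset add tor_exit "$ip" -exist
done
iptables -I INPUT -m set --match-set tor_exit src -j DROP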

How to fix ‘Timezone database is corrupt – this should *never* happen!’

Today I upgraded to wheezy. When I opened my blog I noticed that it did not work anymore. I looked into the logs and found the following errors:

[26-Jun-2013 12:59:41 UTC] PHP Fatal error:  date(): Timezone database is corrupt - 
this should *never* happen! in ...
[26-Jun-2013 12:59:49 UTC] PHP Fatal error:  strtotime(): Timezone database is 
corrupt - this should *never* happen! in /html/wp-includes/functions.php on line 33...

After a little investigation I discovered that this happens when you use PHP in a chroot environment (I am using php5-fpm with chroot, so that is my case). I tried to copy /usr/share/zoneinfo into the chroot with the parent directory structure and correct permissions, but nothing changed. Somewhere I read that this problem can occur on Debian because the maintainers of the PHP packages patch the source files; the solution is to install the timezonedb extension:

apt-get install php-pear php5-dev
pecl install timezonedb
echo 'extension=timezonedb.so'> /etc/php5/mods-available/timezonedb.ini
ln -sf /etc/php5/mods-available/timezonedb.ini /etc/php5/conf.d/30-timezonedb.ini
service php5-fpm restart

After that everything works like a charm.
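To double-check that the extension is really loaded by the FPM binary, you can list its modules:

/usr/sbin/php5-fpm -m | grep -i timezonedb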

PS

strings /usr/sbin/php5-fpm|grep Quake| head -n8
Quake I save: ddm4 East side invertationa
Quake I save: d7 The incinerator plant
Quake I save: d12 Takahiro laboratories
Quake I save: e1m1 The slipgate complex
Quake I save: e1m2 Castle of the damned
Quake I save: e2m6 The dismal oubliette
Quake I save: e3m4 Satan's dark delight
Quake I save: e4m2 The tower of despair

Easter egg?

Kali linux on LiveUSB with working persistent partition

A few days ago I wanted to make a live USB with Kali Linux (I had BackTrack before). I used this guide to install Kali, but I noticed that the persistence partition did not work. These are the partitions on my USB drive:
Kali - old partition table
When the init script found the persistence partition and tried to mount it, it returned the error "mount: mounting /dev/sdaX on /root/lib/live/mount/persistence/sdaX failed: Device or resource busy".
I think this happens because the official guide suggests writing the ISO 9660 image directly to the USB drive, so the init script treats it as a CD drive and mounts the whole USB device rather than the partition where the ISO is placed. This is what I found in boot.log:

There you can see that after Kali boots, the USB drive (sda) is still mounted, so I cannot mount the second partition:
Kali1
After that the whole USB drive is busy. I attached the boot.log with debug enabled to this post; maybe it will help someone fix this.

I decided to make a bootable USB disk instead of flashing the ISO onto it. To do that I used extlinux and the original Kali ISO file.
First I created two partitions on the USB drive, one for Kali and the second for persistent files:

Kali2

Do not forget to set the bootable flag on the first partition and the correct label on the persistence partition (live-boot looks for a partition labeled "persistence").
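For completeness, here is roughly how the partitioning can be done from the shell; the device name, sizes and filesystems are examples (in my case the USB drive was /dev/sda), the important parts are the boot flag and the persistence label:

parted --script /dev/sda \
        mklabel msdos \
        mkpart primary ext2 1MiB 4GiB \
        mkpart primary ext2 4GiB 100% \
        set 1 boot on
mkfs.ext4 /dev/sda1
mkfs.ext4 -L persistence /dev/sda2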
After that, install the MBR from extlinux:

$ dd if=/usr/lib/extlinux/mbr.bin of=/dev/sda
0+1 records in
0+1 records out
440 bytes (440 B) copied, 0.00126658 s, 347 kB/s

Copy Kali Linux to the first partition:

$ mkdir /mnt/sr0 /mnt/kali
$ mount /dev/sr0 /mnt/sr0/
mount: block device /dev/sr0 is write-protected, mounting read-only
$ mount /dev/sda1 /mnt/kali/
$ rsync -a /mnt/sr0/* /mnt/kali

I also modified the boot menu and added an entry with the persistence boot option to live.cfg:

label live-686-pae-persistence
menu label ^Live persistence (686-pae)
menu default
linux /live/vmlinuz
initrd /live/initrd.img
append boot=live noconfig=sudo username=root hostname=android-53f31a089339194f persistence

After that you need to copy isolinux.cfg to extlinux.conf and install extlinux:

$ cp /mnt/kali/isolinux/isolinux.cfg /mnt/kali/isolinux/extlinux.conf
$ extlinux --install /mnt/kali/isolinux/
/mnt/kali/isolinux/ is device /dev/sda1

Mount the persistence partition and create the config:

$ mkdir /mnt/persist
$ mount /dev/sda2 /mnt/persist/
$ echo "/ union" > /mnt/persist/persistence.conf

After that you can reboot and check that the persistence partition works.
Kali3
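A quick way to check is to create a marker file, reboot again and make sure it is still there; you can also verify that live-boot mounted the partition:

mount | grep persistence
touch /root/persistence-test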