How to block IP ranges of a specified autonomous system

If you want to prohibit access to your host for a specified AS, you can use the solution below. I made it some time ago, when I found out that mail.ru was hunting for hosts that help to bypass the Telegram censorship. It's not perfect, because I didn't put much effort into it: whois can return sub-networks and the networks they belong to in the same response, so the ipset set can contain duplicated ranges. Change 'AS47764' to the AS you want to block; 'input_drop' is the ipset set name.

ipset create input_drop hash:net comment
for i in $(whois -h whois.radb.net -- '-i origin AS47764' | grep 'route:' | cut -d : -f 2)
do
    ipset add input_drop $i comment mail.ru
done
iptables -A INPUT -m set --match-set input_drop src -m comment --comment "DROP INPUT packets for AS47764" -j DROP
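
Since whois can return overlapping networks, you can also deduplicate and aggregate the list before adding it. A sketch, assuming the aggregate tool (Debian package 'aggregate') is installed:

whois -h whois.radb.net -- '-i origin AS47764' \
    | awk '/^route:/ {print $2}' \
    | sort -u | aggregate \
    | while read -r net
do
    ipset add input_drop "$net" comment mail.ru
done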

Also, to make the ipset rules persistent, I would recommend this solution: https://github.com/BroHui/systemd-ipset-service

Galaxy S3: /efs/prox_cal doesn’t affect calibration settings under LineageOS

A few days ago I replaced the front glass on a Samsung i9300 and flashed LineageOS 14.1. After that I found that the proximity sensor stayed in the triggered state. It may have happened because of my lack of experience (I used too much UV glue, so it was everywhere) or because of the additional screen protector that was installed. Anyway, the always-triggered proximity sensor made the phone only partially usable (you can't cancel a call without pushing the power button a few times). I found a lot of articles on how to calibrate the proximity sensor, like this one. Moreover, I found out that I shouldn't do any calculations to update /efs/prox_cal: after auto-calibration /efs/prox_cal is updated automatically (at least with the kernel shipped by default). But it didn't help me; on every reboot the calibration was reset to zero.

At first I used the proximity threshold value to work around the sensor, but later I saw that the kernel driver reads the calibration directly from a file, and that SELinux could be the reason why /efs/prox_cal had no effect.

The part that reads the calibration value looks like this:

#define CANCELATION_FILE_PATH "/efs/prox_cal"
...
int proximity_open_calibration(struct ssp_data *data)
{
    int iRet = 0;
    mm_segment_t old_fs;
    struct file *cancel_filp = NULL;

    old_fs = get_fs();
    set_fs(KERNEL_DS);

    cancel_filp = filp_open(CANCELATION_FILE_PATH, O_RDONLY, 0666);
    if (IS_ERR(cancel_filp)) {
        iRet = PTR_ERR(cancel_filp);
        if (iRet != -ENOENT)
            pr_err("[SSP]: %s - Can't open cancelation file\n",
                __func__);
        set_fs(old_fs);
        goto exit;
    }

I checked logcat, and here it is:

05-06 21:29:12.916 3219 3219 W Binder:2377_A: type=1400 audit(0.0:39): avc: denied { read } for name="prox_cal" dev=mmcblk0p3 ino=46 scontext=u:r:system_server:s0 tcontext=u:object_r:efs_device_file:s0 tclass=file permissive=0

So SELinux definitely forbade reading the calibration file. I was surprised that SELinux is capable of forbidding a read call made by the kernel, and now I feel a bit ashamed, because usually I just disable it.

At first I wanted to create a new policy to allow the kernel to read that file, but then I noticed that the /efs partition contains other calibration files, for example /efs/gyro_cal_data. I checked the security context of those files and found that it differs from the one on /efs/prox_cal: it was u:object_r:sensors_data_file:s0, while prox_cal had been created with the default context of the /efs partition, u:object_r:efs_file:s0. So I changed the context:

# chcon u:object_r:sensors_data_file:s0 /efs/prox_cal
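
To verify the change, compare the contexts (a quick check, assuming a root shell on the device); both files should now show u:object_r:sensors_data_file:s0:

# ls -Z /efs/prox_cal /efs/gyro_cal_data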

After that, the kernel started to load the calibration value on every boot. It looks like instructions like the one mentioned above work for everyone who modifies a factory-shipped prox_cal file that already has the right security context; I didn't have /efs/prox_cal before, so it was created with the wrong context.
I hope this story helps someone.

Dirty hack to add value mappings in Zabbix

“I’ll be brief.” ©
Here are two things about the script published in ZBXNEXT-1424: first, it can help you automate the creation of large mappings (and that's cool); second, it will break your DB (not so cool, maaan).
When you try to add a mapping in the broken DB, you will see something like this:

[Image: poorzabbix]

The “Error in query [INSERT INTO valuemaps (name,valuemapid) VALUES ('Test mapping','50')] [Duplicate entry '50' for key 'PRIMARY']” means that the valuemaps table already contains an entry with valuemapid = 50. Why this happened I will tell later, after we fix the DB.

To fix the DB, you need to update a few entries in the 'ids' table. First, update nextid where table_name = 'valuemaps':

mysql> update ids set nextid = (select max(valuemaps.valuemapid)+1 from valuemaps) where table_name = 'valuemaps';
Query OK, 1 row affected (0.22 sec)
Rows matched: 1 Changed: 1 Warnings: 0

Second, update nextid for mappings:

mysql> update ids set nextid = (select max(mappings.mappingid)+1 from mappings) where table_name = 'mappings';
Query OK, 1 row affected (0.22 sec)
Rows matched: 1 Changed: 1 Warnings: 0
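
You can double-check that both counters now point just past the highest used IDs:

mysql> select table_name, nextid from ids where table_name in ('valuemaps', 'mappings');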

That's it!

This happened because the script does not update the ids table. Maybe that is OK for Zabbix 2.0, which is mentioned in the feature request, but it breaks the database for Zabbix 2.2 and newer. Unfortunately, Zabbix prior to version 3.0 does not have an API or the ability to import mappings, so the script is still useful.

Here is the fixed script; I hope the author won't be offended:

#!/usr/bin/perl

use warnings;
use strict;

my $usage = "$0 valueMapName number newvalue [number2 newvalue2 [...]]
E.g.:
  $0 'Alarm Status' 1 ok 2 unknown 3 stale 4 problem
  $0 'Aliveness' 0 dead 1 alive
";

my $valueMapName = shift() || die "No new valuemap name";
my @mapList = @ARGV;
die "No mappings given. Usage: $usage\n" if scalar(@mapList) == 0;

my $isEvenNumber = scalar(@mapList) % 2 == 0;
die "Must give mapping->value pairs. Usage: $usage\n" if not $isEvenNumber;
my %mappings = @mapList;

# Fetch the last used IDs from the ids table and reserve the next ones.
my $newValueMapId = int(qx/mysql -N -s -e 'select nextid from zabbix.ids where field_name = "valuemapid"'/)
    || die("Can't fetch max valuemapid\nUsage: $usage\n");
$newValueMapId++;
my $newMappingId = int(qx/mysql -N -s -e 'select nextid from zabbix.ids where field_name = "mappingid"'/)
    || die("Can't fetch max mappingid\nUsage: $usage\n");
$newMappingId++;

eval {
    my $valueMapCmd = qq/mysql -e "insert into zabbix.valuemaps (valuemapid, name) values ('$newValueMapId', '$valueMapName');"/;
    print "$valueMapCmd\n";
    system $valueMapCmd;
    eval {
        for my $from (keys %mappings) {
            my $to = $mappings{$from};
            my $mappingCmd = qq/mysql -e "insert into zabbix.mappings (mappingid, valuemapid, value, newvalue) values ('$newMappingId', '$newValueMapId', '$from', '$to');"/;
            print "$mappingCmd\n";
            system $mappingCmd;
            $newMappingId++;
        }
    };
    if ($@) {
        die "something went wrong inserting into mappings $@";
    }
};
if ($@) {
    die "something went wrong inserting into valuemaps $@";
}

# Write the last used IDs back into the ids table so Zabbix stays consistent.
my $valueMapUpdCmd = qq/mysql -e 'update zabbix.ids set nextid = "$newValueMapId" where field_name = "valuemapid";'/;
print "$valueMapUpdCmd\n";
system $valueMapUpdCmd;
$newMappingId--;
my $mappingUpdCmd = qq/mysql -e 'update zabbix.ids set nextid = "$newMappingId" where field_name = "mappingid";'/;
print "$mappingUpdCmd\n";
system $mappingUpdCmd;

 

LVM recovery

A few days ago I made a mistake and forced fsck to check the partition that contains LVM instead of a logical volume; as a result I got broken LVM metadata. I was unable to see the volume group and logical volumes.
pvs output looked like this:

# pvs -v

Scanning for physical volume names
Incorrect metadata area header checksum

I tried to run pvck, but it did not help: it found the corrupted metadata but did not repair the LVM:

# pvck -d -v /dev/md5
Scanning /dev/md5
Incorrect metadata area header checksum
Found label on /dev/md5, sector 1, type=LVM2 
Found text metadata area: offset=4096, size=193024
Incorrect metadata area header checksum

Finally I found out that it is possible to make backups of LVM metadata and restore them when needed, but I thought I had nothing except the broken LVM with broken metadata.
It is hard to describe how happy I was when I found out that by default LVM backs up the metadata whenever you make any changes. I found the backups in the /etc/lvm/backup dir, and after that recovery became an easy task. First I recreated the physical volume:

pvcreate -u b3Lk2a-pydG-Vhf3-DSEJ-9b84-RLm9-UEr6r3 --restorefile /etc/lvm/backup/vg-320 /dev/md5

The UUID can be found in the pv section of the metadata file:

physical_volumes {

    pv0 {
        id = "b3Lk2a-pydG-Vhf3-DSEJ-9b84-RLm9-UEr6r3"
        device = "/dev/md5"   # Hint only
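
By the way, LVM also archives older metadata versions in /etc/lvm/archive; if you are not sure which backup file matches, vgcfgrestore can list everything it knows about the volume group:

# vgcfgrestore --list vg-320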

Next I restored the volume group:

vgcfgrestore -f /etc/lvm/backup/vg-320 vg-320

After that, the logical volumes became visible:

# lvs
  LV         VG     Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  root       vg-320 -wi-a---  15.00g
  swap       vg-320 -wi-a---   1.00g
  var        vg-320 -wi-ao-- 200.00g
  zoneminder vg-320 -wi-a---  15.00g

After reinitialization with vgscan -v && vgchange -ay, the volume group was ready for fsck.
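
For completeness, the final steps looked roughly like this (LV paths follow the usual /dev/<vg>/<lv> naming; adjust to your volume group):

# vgscan -v && vgchange -ay
# fsck /dev/vg-320/root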

Simple OpenVPN profile generator

A few months ago I learned that OpenVPN supports profiles. Before that I generated a config for every client, created keys and certs with easy-rsa, tarred it all together, and put it on the client. Now I can create a profile that contains all the necessary keys, certs, and config in one file, so I wrote a simple script that generates an .ovpn profile for a new client.
The generated .ovpn profile can be imported from an SD card on Android, via iTunes or email on iOS, or just used with `openvpn your_new_profile.ovpn` on a PC.
Prerequisites: configured easy-rsa (`pkitool clientname` must produce a cert and key for the client).
You must customize the config part for your server; it would be possible to fetch the data from the server config file, but I'm too lazy to modify the script for that.
Here it is:

#!/bin/bash
# Dir where easy-rsa is placed
EASY_RSA_DIR="/etc/ssl/easy-rsa"
KEYS_DIR="$EASY_RSA_DIR/keys"
# Dir where profiles will be placed
OVPN_PATH="/root/ovpn"
REMOTE="your.server port"

if [ -z "$1" ]
then
        echo -n "Enter new client common name (CN): "
        read -e CN
else
        CN="$1"
fi

if [ -z "$CN" ]
then
        echo "You must provide a CN."
        exit 1
fi

cd "$EASY_RSA_DIR"
if [ -f "$KEYS_DIR/$CN.crt" ]
then
        echo "Certificate with the CN $CN already exists!"
        echo " $KEYS_DIR/$CN.crt"
else
        # Generate a key and certificate for the new client
        source ./vars > /dev/null
        ./pkitool "$CN"
fi

# Embed the config, CA cert, client cert and key into a single .ovpn profile
cat > "$OVPN_PATH/${CN}.ovpn" << END
client
dev tun
resolv-retry infinite
nobind
persist-key
persist-tun
verb 1
comp-lzo
proto tcp
remote $REMOTE

<ca>
`cat $KEYS_DIR/ca.crt`
</ca>

<cert>
`sed -n '/BEGIN/,$p' $KEYS_DIR/${CN}.crt`
</cert>

<key>
`cat $KEYS_DIR/${CN}.key`
</key>
END
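
Assuming the script is saved as gen-profile.sh (the name is arbitrary), creating and using a profile for a new client looks like this:

./gen-profile.sh phone1
openvpn /root/ovpn/phone1.ovpn   # on a PC; on Android/iOS just import the file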

Zram

A few months ago I tried a very cool feature called 'zram'. It is a Linux kernel module that allows you to create compressed block devices in memory; they can be used for a compressed fs in RAM (/tmp, for example) or for swap.
You may think that, in the context of swap, it is dumb to keep memory pages in memory when the OS needs that memory. =) But compressing pages and storing them in memory is faster than writing them to disk, and in most cases memory pages compress well, which helps the OS free RAM. If you have an SSD, it will also save your disk's life, and you can keep using on-disk swap at the same time: give the swap in zram a higher priority, and when zram is full, the OS will start using the swap on disk.

I used this init.d script for Debian, but I changed it to use not the whole RAM for zram devices but half of all memory (in the worst case, when pages cannot be compressed, zram will use only half of my memory). If you want to make the same modification, just change echo $((mem_total / num_cpus )) to echo $((mem_total / num_cpus / 2)) in that script.
Without modifications, this script slices your memory by the number of CPU cores in your system, creates swap on those slices, and attaches them to your system with priority 100 (usually swap partitions have priority -1).
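
For reference, what such a script does boils down to roughly the following (a sketch for a single zram device sized to half of RAM; the device name and the disksize sysfs attribute are as provided by the zram module of that time):

modprobe zram num_devices=1
mem_total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo $((mem_total_kb / 2 * 1024)) > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0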
I made a simple test of the zram compression ratio.
First I detached one of my swaps:

$ sudo swapoff /dev/zram3

Then I created a core file of the iceweasel process and wrote it to zram:

$ pgrep -lf icewea
3375 sh -c /usr/bin/iceweasel
3376 /usr/bin/iceweasel
$ gcore 3376
[Thread debugging using libthread_db enabled]
[New Thread 0x7f7c0b9fd700 (LWP 8455)]
...blablabla...
0x00007f7c62a57c13 in poll () from /lib/libc.so.6
Saved corefile core.3376
$ sudo dd if=./core.3376 of=/dev/zram3
dd: writing to `/dev/zram3': No space left on device
2027297+0 records in
2027296+0 records out
1037975552 bytes (1,0 GB) copied, 6,79003 s, 153 MB/s

The core file does not fit completely into the zram device, but that does not matter; let's look at the compression ratio:

$ cd /sys/block
$ echo `cat ./zram3/orig_data_size`/`cat ./zram3/compr_data_size` | bc -l
2.68475926964795164955

So in this test zram reached a compression ratio of more than 2.5.
Huh, I think that is pretty cool.

Zoneminder jitter

After several years of torture with an EasyCAP I realized that it was time to change the capture device. I found that other USB capture devices supported by Linux cost too much, and I could not use PCI or full-height PCI-e devices because of the mATX form factor of my server. Suddenly I found the ImpactVCB 1381: it is exactly what I wanted to find, supported by Linux, PCI-e, and with a half-height bracket.
Before I tried this card I did not think there could be so much difference in image quality between two cards. Unfortunately, I do not have a sample from the EasyCAP, but you can trust me, the difference is enough to throw the EasyCAP away.
As always, there is a fly in the ointment: ZoneMinder or the Hauppauge driver has a bug, and the captured image sometimes jitters. It looks like this:
[Image: Zoneminder jitter]

I prefer to think that it is a bug in ZoneMinder, because I have not seen the same issue when capturing video with mencoder.
Let's hope it will be fixed in future releases.

Your IP

Since the idiots in the Russian government passed a law similar to 'SOPA', I started to modify the routing scheme at my home. Many times I used internet.ya.ru to determine my current outgoing IP address, but I wanted a more minimalistic tool for this purpose. So I created my own tool, with blackjack and hookers. Here it is: https://ivanbayan.com/uip.php. I use different routes for TLS and HTTP traffic, so it is also available at https://ivanbayan.com/uip.php.
The script produces a simple image with your outgoing IP:

[Image: You ip]

How to fix ‘Timezone database is corrupt – this should *never* happen!’

Today I upgraded to wheezy. When I opened my blog, I saw that it does not work anymore. I looked into the logs and found the following errors:

[26-Jun-2013 12:59:41 UTC] PHP Fatal error:  date(): Timezone database is corrupt - 
this should *never* happen! in ...
[26-Jun-2013 12:59:49 UTC] PHP Fatal error:  strtotime(): Timezone database is 
corrupt - this should *never* happen! in /html/wp-includes/functions.php on line 33...

After a little investigation I discovered that this happens when you use PHP in a chroot environment (I use php5-fpm with chroot, so that is my case). I tried to copy /usr/share/zoneinfo into the chroot environment with the parent dir structure and correct permissions, but nothing changed. Somewhere I read that this problem can happen on Debian because the maintainers of the PHP packages patch the source files; the solution is to install the timezonedb extension:

apt-get install php-pear php5-dev
pecl install timezonedb
echo 'extension=timezonedb.so'> /etc/php5/mods-available/timezonedb.ini
ln -sf /etc/php5/mods-available/timezonedb.ini /etc/php5/conf.d/30-timezonedb.ini
service php5-fpm restart
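
To confirm the extension was actually picked up, check the module list (php5-fpm shares /etc/php5/conf.d on wheezy, so -m should show it):

php5-fpm -m | grep -i timezonedb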

After that, everything worked like a charm.

PS

strings /usr/sbin/php5-fpm | grep Quake | head -n8
Quake I save: ddm4 East side invertationa
Quake I save: d7 The incinerator plant
Quake I save: d12 Takahiro laboratories
Quake I save: e1m1 The slipgate complex
Quake I save: e1m2 Castle of the damned
Quake I save: e2m6 The dismal oubliette
Quake I save: e3m4 Satan's dark delight
Quake I save: e4m2 The tower of despair

Easter egg?

Kali Linux on a LiveUSB with a working persistent partition

A few days ago I wanted to make a LiveUSB with Kali Linux (I had BackTrack before). I used this guide to install Kali, but I noticed that the persistence partition did not work. Here are the partitions on my USB drive:
[Image: Kali - old partition table]
When the init script found the persistent partition and tried to mount it, it returned the error “mount: mounting /dev/sdaX on /root/lib/live/mount/persistence/sdaX failed: Device or resource busy”.
I think this happened because the official guide suggests writing an iso9660 image to the USB drive, so the init script thinks it is a CD drive and mounts the whole USB device, not the partition where the ISO is placed. This is what I found in boot.log:

There you can see that after Kali boots, the USB drive (sda) is still mounted, and I cannot mount the second partition:
[Image: Kali1]
After that, the whole USB drive is busy. I attached boot.log (with debug enabled) to this post; maybe it will help someone fix that.

So I decided to make a bootable USB disk instead of flashing the ISO onto it. For that I used extlinux and the original Kali ISO file.
First I created two partitions on the USB drive, one for Kali and a second one for persistent files:

[Image: Kali2]

Do not forget to set the bootable flag on the first partition and the correct label on the persistence partition.
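If you prefer the console over a GUI partitioner, the same layout can be reproduced roughly like this (a sketch; I assume /dev/sda is the USB stick, so double-check the device name first):

$ fdisk /dev/sda                       # create sda1 (bootable) and sda2
$ mkfs.ext4 /dev/sda1
$ mkfs.ext4 -L persistence /dev/sda2   # live-boot finds the partition by this label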
After that, install the MBR from extlinux:

$ dd if=/usr/lib/extlinux/mbr.bin of=/dev/sda
0+1 records in
0+1 records out
440 bytes (440 B) copied, 0.00126658 s, 347 kB/s

Copy Kali Linux to the first partition:

$ mkdir /mnt/sr0 /mnt/kali
$ mount /dev/sr0 /mnt/sr0/
mount: block device /dev/sr0 is write-protected, mounting read-only
$ mount /dev/sda1 /mnt/kali/
$ rsync -a /mnt/sr0/* /mnt/kali

I also modified the boot menu and added an entry with the persistence boot option to live.cfg:

label live-686-pae-persistence
        menu label ^Live persistence (686-pae)
        menu default
        linux /live/vmlinuz
        initrd /live/initrd.img
        append boot=live noconfig=sudo username=root hostname=android-53f31a089339194f persistence

After that you need to copy isolinux.cfg to extlinux.conf and install extlinux:

$ cp /mnt/kali/isolinux/isolinux.cfg /mnt/kali/isolinux/extlinux.conf
$ extlinux --install /mnt/kali/isolinux/
/mnt/kali/isolinux/ is device /dev/sda1

Mount the persistence partition and create its config:

$ mkdir /mnt/persist
$ mount /dev/sda2 /mnt/persist/
$ echo "/ union" > /mnt/persist/persistence.conf

After that you can reboot and check that the persistence partition works.
[Image: Kali3]