How to configure a Redmine service via Terraform with persistent storage on Amazon ECS

First of all, I have very little experience with AWS and Terraform, so this may be obvious to those who have enough experience, but an article like this would definitely have saved me a lot of time had I found it earlier.

It wasn't simple to figure out how to run a Redmine container on ECS.
The main problem was persistent storage. Redmine assumes that it has persistent disk storage which stays the same between service restarts. If you run your own Docker host, it's simple to map a host directory inside the container, but when your Docker nodes can be added and removed dynamically, you can lose the data the app has written to disk.
Amazon provides a few options for persistent storage, such as S3, EBS or EFS.

By nature, S3 is storage accessible over HTTP, so if your app has no integration with the S3 API, it can't be used (unless you mount S3 via a FUSE filesystem, for example).
EBS is remote block storage, so you need to attach the block device to the Docker host, mount it and map it inside the container before you are able to use it.
EFS, by nature, is just NFS.

I wanted to find a solution that would be as natural as possible and to keep Docker and the Redmine image untouched (i.e. avoid installing additional plugins/scripts/packages). So I decided not to use S3, because it needs something like s3fs to make S3 storage available to Redmine.
I decided not to use EBS, because I found reports of EBS volumes getting stuck attached to a host and refusing to re-attach to another host until the initial host reboots.
EFS looked perfect: it can be mounted from different hosts and it keeps data across application/hypervisor life cycles. Moreover, even if I hadn't found a simple way to use EFS, the only thing I would have needed was the nfs-common package.

I was lucky, because in August 2018 Amazon announced support for Docker volumes and Docker volume plugins, and Docker itself has been able to mount NFS inside containers since version 17.06 (I couldn't find it in the changelog, but if you google it, you will find a lot of references to that). So it was exactly what I wanted; the only downside was the lack of documentation. I needed to use Terraform for the Redmine configuration and its documentation didn't specify how exactly to pass driver_opts to the Docker volume configuration, so here is the solution.

First you need to specify the mount point in task-definition.json:

"mountPoints":[
     {
       "sourceVolume": "redmine_storage",
       "containerPath": "/usr/src/redmine/files"
     }
 ]

And here is the volume block from the Terraform code for the volume specification:

volume {
  name = "redmine_storage"

  docker_volume_configuration {
    scope       = "task"
    driver      = "local"
    driver_opts = {
      "type"   = "nfs"
      "device" = "${var.efs_dns}:/"
      "o"      = "addr=${var.efs_dns},nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    }
  }
}

That’s all.
The code above is part of a Redmine module which has an input variable efs_dns, so you can put your EFS address there if you configured it manually.
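If you want a quick sanity check before wiring EFS into ECS, you can mount the same target by hand from an instance in the same VPC/security group. This is just a sketch; the DNS name below is a placeholder, use the value you would pass as var.efs_dns:

# Manual NFSv4.1 mount with the same options used in driver_opts above
sudo mkdir -p /mnt/efs-test
sudo mount -t nfs4 \
    -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.eu-west-1.amazonaws.com:/ /mnt/efs-test
df -h /mnt/efs-test    # should show the EFS filesystem mounted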

PS
Here you can find a Redmine S3 plugin, but I wanted to migrate an existing Redmine, and that looked like a lot of work: I would have needed to modify Redmine's DB and put the files on S3 in the manner that plugin expects, so I decided that S3 was not an option.

How to block IP ranges of a specified autonomous system

If you want to prohibit access to your host from a specified AS, you can use the solution below. I made it some time ago, when I found out that mail.ru was hunting for hosts which help to bypass the Telegram censorship. It's not perfect, because I didn't put much effort into it: whois can return sub-networks and the networks to which they belong in the same response, so the ipset set can contain duplicated ranges. Change 'AS47764' to the AS which you want to block; 'input_drop' is the ipset set name.

ipset create input_drop hash:net comment
for i in $(whois -h whois.radb.net -- '-i origin AS47764' | grep 'route:'|cut -d : -f 2)
do
ipset add input_drop $i comment mail.ru
done
iptables -A INPUT -m set --match-set input_drop src -m comment --comment "DROP INPUT packets for AS47764" -j DROP

Also, I would recommend this solution to make the ipset rules persistent: https://github.com/BroHui/systemd-ipset-service
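If you don't want a separate service, a minimal alternative (just a sketch, the file path is arbitrary) is to dump the set to a file and restore it on boot before the iptables rules are loaded:

# Save all current sets to a file
ipset save > /etc/ipset.rules
# Restore them at boot, e.g. from rc.local or a oneshot systemd unit
ipset restore < /etc/ipset.rules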

Galaxy S3: /efs/prox_cal doesn’t affect calibration settings under LineageOS

A few days ago I replaced the front glass on a Samsung i9300 and flashed LineageOS 14.1. After that I found that the proximity sensor stayed in the triggered state. It may have happened because of my lack of experience (I used too much UV glue, so it was everywhere) or because of the additional screen protector that was installed. Anyway, an always-triggered proximity sensor made the phone only partially usable (you can't cancel a call without pushing the power button a few times). I found a lot of articles on how to calibrate the proximity sensor, like this one. Moreover, I found that I shouldn't do any calculations to update /efs/prox_cal: after auto-calibration, /efs/prox_cal is updated automatically (at least with the kernel that is shipped by default). But it didn't help me: on every reboot the calibration was reset to zero.

At first I used the proximity threshold value to work around the sensor, but later I saw that the kernel driver reads the calibration directly from the file, and that SELinux could be the reason why /efs/prox_cal had no effect.

The part that reads the calibration value looks like this:

#define CANCELATION_FILE_PATH "/efs/prox_cal"
...
int proximity_open_calibration(struct ssp_data *data)
{
        int iRet = 0;
        mm_segment_t old_fs;
        struct file *cancel_filp = NULL;

        old_fs = get_fs();
        set_fs(KERNEL_DS);

        cancel_filp = filp_open(CANCELATION_FILE_PATH, O_RDONLY, 0666);
        if (IS_ERR(cancel_filp)) {
                iRet = PTR_ERR(cancel_filp);
                if (iRet != -ENOENT)
                        pr_err("[SSP]: %s - Can't open cancelation file\n",
                                __func__);
                set_fs(old_fs);
                goto exit;
        }

I checked logcat and here it is:

05-06 21:29:12.916 3219 3219 W Binder:2377_A: type=1400 audit(0.0:39): avc: denied { read } for name="prox_cal" dev=mmcblk0p3 ino=46 scontext=u:r:system_server:s0 tcontext=u:object_r:efs_device_file:s0 tclass=file permissive=0

SELinux definitely forbids reading the calibration file. I was surprised that SELinux is capable of forbidding a read call made by the kernel, and now I feel a bit of shame, because usually I just disable it.

First I wanted to create a new policy to allow the kernel to read that file, but then I found that the /efs partition contains other calibration files, for example /efs/gyro_cal_data. I checked the security context of those files and found that it differs from /efs/prox_cal: it was u:object_r:sensors_data_file:s0, while prox_cal had been created with the default context for the /efs partition, u:object_r:efs_file:s0. So I changed the context:

# chcon u:object_r:sensors_data_file:s0 /efs/prox_cal
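To make sure the new label took effect, you can compare the contexts of both calibration files (just a sanity check, paths as on my device); both should now show u:object_r:sensors_data_file:s0:

# ls -Z /efs/prox_cal /efs/gyro_cal_data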

After that, the kernel started to load the calibration value on every boot. It looks like instructions like the one mentioned above work for everyone who modified a factory-shipped prox_cal file that already had the right security context, but I didn't have /efs/prox_cal before, and it was created with the wrong context.
I hope this story may help someone.

Unraveling an unknown thermistor

Recently I made a mistake: I designed a PCB for an Arduino module where the temperature sensor is connected to the A7 pin. I had envisaged that the sensor could be analog (a diode) or digital. Soon I learned that a diode doesn't provide enough accuracy even for ±5℃ (2 mV/℃), and, surprise-surprise, the A7 pin is an analog-only input, so I can't use a DS18B20.
I didn't have any other temperature sensors; fortunately, I remembered that I had a broken battery controller from a laptop, and it should have some sort of temperature sensor. Here it is:
I poked it with a multimeter a few times to make sure that it isn't a semiconductor sensor, but an NTC with about 10 kOhm resistance at 25℃. I decided to use it, but I didn't know how many Ohm/℃ it has. I planned to use a linear approximation to convert resistance to temperature, so I measured a few points and here is what I got:

Here the ADC value is on the X axis and the temperature on the Y axis. Practically perfect: I could use a single pair of a and b coefficients over the whole temperature range I care about.
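For the curious, here is a minimal sketch of how two measured points turn into the a and b coefficients of T = a*ADC + b (the numbers below are made up for illustration, substitute your own measurements):

# Linear fit through two measured (ADC, temperature) points
awk -v adc1=512 -v t1=25 -v adc2=380 -v t2=60 'BEGIN {
    a = (t2 - t1) / (adc2 - adc1)   # slope, degrees C per ADC count
    b = t1 - a * adc1               # intercept
    printf "T = %.4f * ADC + %.2f\n", a, b
    printf "example: ADC=450 -> %.1f C\n", a * 450 + b
}'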

How to update Puppet 3 to Puppet 4 on Ubuntu 16

I spent nearly a month figuring out why I couldn't update Puppet on Ubuntu 16 with the specially designed puppet_agent module. It was a task full of confusing experiences.

So, let's start. To begin with, you shouldn't debug the update process from a console, because one of the bugs is related to the puppet service. You can solve every problem you find with 'puppet agent -t', but when you try to upgrade Puppet while it is daemonized, it will fail. So set 'log_level=info' in your puppet.conf and use kill to trigger the puppet daemon:

 sudo kill -SIGUSR1 $(cat /var/run/puppet/agent.pid);

Next you should set 'stringify_facts=false' in puppet.conf. The puppet_agent developers now declare that they provide an additional class '::puppet_agent::prepare::stringify_facts' for that, but when I started the upgrade procedure it wasn't available (or I missed it), so here is a custom fact that provides the stringify_facts setting and the puppet.conf path:

require 'puppet'

Facter.add('puppet_config') do
  setcode do
    Puppet.settings['config']
  end
end

Facter.add('puppet_stringify_facts') do
  setcode do
    Puppet.settings['stringify_facts'] || false
  end
end

Call it something like puppet.rb and put it into <YOURMODULEDIR>/lib/facter.
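Once pluginsync has delivered the fact to an agent, you can check that it resolves (the -p flag makes Facter 2.x load Puppet-distributed facts; it should print the puppet.conf path and true/false):

facter -p puppet_config puppet_stringify_facts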
The following Puppet code disables stringify_facts before doing the upgrade:

if versioncmp($::clientversion, '4') < 0 {
  if $::puppet_stringify_facts {
    augeas { 'puppet.conf.stringify_facts':
      context => "/files${::puppet_config}/main",
      changes => [
        'set stringify_facts false',
      ],
    }
  } else {
    <Do puppet upgrade here>
  }
}

If you have the puppet service defined somewhere else, you will face a duplicate service declaration:

Feb 16 09:09:59 localhost puppet-agent[10026]: Could not retrieve catalog from remote server: Error 500 on SERVER: {"message":"Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: Service[puppet] is already declared in file CUT:47; cannot redeclare at /etc/puppetlabs/code/environments/production_puppet4/modules/puppet_agent/manifests/service.pp:31 at /etc/puppetlabs/code/environments/production_puppet4/modules/puppet_agent/manifests/service.pp:31:7 on node llocalhost","issue_kind":"RUNTIME_ERROR","stacktrace":["Warning: The 'stacktrace' property is deprecated and will be removed in a future version of Puppet. For security reasons, stacktraces are not returned with Puppet HTTP Error responses."]}

So you should declare the puppet_agent class in the following manner:

class { '::puppet_agent':
  collection    => 'PC1',
  service_names => [],
  notify        => Service['puppet'],
}

Curious what happens now if you try to update Puppet?

Feb 16 09:16:32 localhost puppet-agent[10474]: Caught TERM; exiting
Feb 16 09:16:32 localhost puppet-agent[8171]: Caught TERM; exiting
Feb 16 09:16:32 localhost systemd[1]: Stopping Puppet agent...
Feb 16 09:16:36 localhost systemd[1]: Stopped Puppet agent.

Tadaaam. Now you have a half-installed puppet-agent package, the previous puppet package removed and a killed puppet daemon:

ichurkin@localhost:~$ pgrep -f puppet
ichurkin@localhost:~$ dpkg -l|grep puppet
rF puppet 3.8.5-2 all configuration management system, agent
ii puppet-common 3.8.5-2 all configuration management system

This happens because during the puppet-agent package installation systemd kills the puppet daemon and all its children. So you need to fix the unit file first:

[Service]
KillMode=process

Call it something like puppet.service.override (the name referenced in the source parameter below) and put it into <YOURMODULEDIR>/files; here is the Puppet code to apply it:

if $::os['name'] == 'Ubuntu' and versioncmp($::os['release']['major'], '16') >= 0 {
  notify { 'Creating systemd override file': }
  file { '/etc/systemd/system/puppet.service.d/':
    ensure => directory,
  }~>
  file { '/etc/systemd/system/puppet.service.d/override.conf':
    mode   => '0644',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/puppet/puppet.service.override',
  }~>
  exec { 'systemd_reload':
    command     => 'systemctl daemon-reload',
    path        => [ '/usr/bin', '/bin', '/sbin', '/usr/sbin' ],
    refreshonly => true,
    before      => Class['::puppet_agent'],
  }
}

I tried to use the ${::service_provider} fact instead of the ugly os/release condition, but at least Puppet 3.8 on Ubuntu 16 returns 'debian' instead of 'systemd'.
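Once the drop-in is in place, you can double-check that systemd actually picked it up before starting the upgrade (plain systemd commands, nothing module-specific):

systemctl cat puppet.service                # the override.conf drop-in must be listed
systemctl show puppet.service -p KillMode   # should print KillMode=process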

Let's try to update Puppet again:

Feb 16 04:49:14 localhost puppet-agent[10021]: Could not start Service[puppet]: Execution of '/usr/sbin/service puppet start' returned 1: Failed to start puppet.service: Unit puppet.service is masked.
Feb 16 04:49:14 localhost puppet-agent[10021]: (/Stage[main]/Puppet_agent::Service/Service[puppet]/ensure) change from stopped to running failed: Could not start Service[puppet]: Execution of '/usr/sbin/service puppet start' returned 1: Failed to start puppet.service: Unit puppet.service is masked.

Once again Puppet renders itself stopped. I think it may be caused by the service provider being debian instead of systemd; I was too exhausted to search for the right solution, so here is another dirty hack:

exec { 'puppetagent_transition_restart':
  path    => '/bin:/sbin:/usr/bin:/usr/sbin',
  command => '/opt/puppetlabs/bin/puppet resource service puppet enable=true ensure=running',
  require => Class['::puppet_agent'],
}

That’s all.

PS

List of related bugs below:
https://tickets.puppetlabs.com/browse/MODULES-3453
https://tickets.puppetlabs.com/browse/PUP-5637
https://tickets.puppetlabs.com/browse/PUP-3931
https://github.com/puppetlabs/puppet/pull/3699
https://github.com/puppetlabs/puppet/pull/3700
https://tickets.puppetlabs.com/browse/PUP-4512

 

Converting SNMP enumerations to Zabbix value mappings

Many of those who have tried to use Zabbix for monitoring SNMP-capable devices have faced the need to create value mappings. It's OK to create them by hand if a mapping contains only a few values and you don't have many metrics that use 'named numbers'.
For those who have not had the fortune to face this, I will explain. Enumerations are a sort of agreement on how to encode different states, types or similar things using only integer values. For example, let's look at SNMPv2-MIB::snmpEnableAuthenTraps:

% snmptranslate -Td SNMPv2-MIB::snmpEnableAuthenTraps
SNMPv2-MIB::snmpEnableAuthenTraps
snmpEnableAuthenTraps OBJECT-TYPE
 -- FROM SNMPv2-MIB
 SYNTAX INTEGER {enabled(1), disabled(2)} 
 MAX-ACCESS read-write
 STATUS current
 DESCRIPTION "Indicates whether the SNMP entity is permitted to
 generate authenticationFailure traps. The value of this
 object overrides any configuration information; as such,
 it provides a means whereby all authenticationFailure
 traps may be disabled.
 
Note that it is strongly recommended that this object
 be stored in non-volatile memory so that it remains
 constant across re-initializations of the network
 management system."
::= { iso(1) org(3) dod(6) internet(1) mgmt(2) mib-2(1) snmp(11) 30 }

Here you can see that the integer '1' is used to encode 'enabled' and '2' for 'disabled', so if you want to see a human-friendly 'enabled/disabled' in your Zabbix, you need to create the value mapping in Zabbix first. It's not a difficult task if your mapping is small like this one, but it's a pain in the ass if your mapping consists of many values; for example, IF-MIB::ifType consists of 254 values. For completeness I need to say that prior to Zabbix 3.0 you had no legal way to automate it.

When I first searched for a solution, I found the script in feature request ZBXNEXT-1424.
Unfortunately it will break your DB; you can read about that here. In Zabbix 3.0 a value mapping API was introduced, so now you are able to import/export mappings in XML format or do it via RPC.
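For reference, here is a minimal sketch of the RPC way (the URL and auth token are placeholders; the valuemap.create method is available since Zabbix 3.0, and the token is obtained via user.login):

curl -s -H 'Content-Type: application/json-rpc' -d '{
  "jsonrpc": "2.0",
  "method": "valuemap.create",
  "params": {
    "name": "snmpEnableAuthenTraps",
    "mappings": [
      { "value": "1", "newvalue": "enabled" },
      { "value": "2", "newvalue": "disabled" }
    ]
  },
  "auth": "PUT-YOUR-AUTH-TOKEN-HERE",
  "id": 1
}' http://zabbix.example.com/api_jsonrpc.php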

Looks like it's time for some Perl magic. Tadaam! A script that generates a value mapping in XML format for a specified OID. I placed it on GitHub: https://github.com/IvanBayan/Zabbix-oid2valuemapping — there you will find the requirements and usage examples. In short, you type something like this in the console:

% perl ./oid2valuemapping.pl --oid SNMPv2-MIB::snmpEnableAuthenTraps

And it will generate something like this:

<?xml version='1.0' standalone='yes'?>
<zabbix_export>
  <date>2016-08-26T14:51:09Z</date>
  <value_maps>
    <value_map>
      <name>snmpEnableAuthenTraps</name>
      <mappings>
        <mapping>
          <newvalue>disabled</newvalue>
          <value>2</value>
        </mapping>
        <mapping>
          <newvalue>enabled</newvalue>
          <value>1</value>
        </mapping>
      </mappings>
    </value_map>
  </value_maps>
  <version>3.0</version>
</zabbix_export>

You need only a few additional Perl modules and a configured SNMP setup.

Dirty hack to add value mappings in Zabbix

“I’ll be brief.” ©
There are two things to say about the script published in ZBXNEXT-1424: first, it can help you automate the creation of large mappings (and that's cool); second, it will break your DB (not so cool, maaan).
When you try to add a mapping in the broken DB, you will see something like this:

(screenshot: Zabbix "Error in query" message)

The error "Error in query [INSERT INTO valuemaps (name,valuemapid) VALUES ('Test mapping','50')] [Duplicate entry '50' for key 'PRIMARY']" means that the valuemaps table already has an entry with valuemapid = 50. I'll explain why that happened later, after we fix the DB.

To fix the DB, you need to update a few entries in the 'ids' table. First, update nextid where table_name = 'valuemaps':

mysql> update ids set nextid = (select max(valuemaps.valuemapid)+1 from valuemaps) where table_name = 'valuemaps';
Query OK, 1 row affected (0.22 sec)
Rows matched: 1 Changed: 1 Warnings: 0

Second, update nextid for mappings:

mysql> update ids set nextid = (select max(mappings.mappingid)+1 from mappings) where table_name = 'mappings';
Query OK, 1 row affected (0.22 sec)
Rows matched: 1 Changed: 1 Warnings: 0

Here it is!

This happened because the script does not update the ids table. Maybe that's OK for Zabbix 2.0, which is mentioned in the feature request, but it breaks the database for Zabbix 2.2 and newer. Unfortunately, Zabbix prior to version 3.0 does not have an API or the ability to import mappings, so the script is still useful.

Here is the fixed script; I hope the author will not be offended:

#!/usr/bin/perl
 
use warnings;
use strict;
 
my $usage = "$0 valueMapName number newvalue [number2 newvalue2 [...]]
E.g.: 
 $0 'Alarm Status' 1 ok 2 unknown 3 stale 4 problem
 $0 'Aliveness' 0 dead 1 alive
";
 
my $valueMapName = shift() || die "No new valuemap name";
my @mapList = @ARGV;
die "No mappings given. Usage: $usage\n" if scalar(@mapList) == 0;
 
 
my $isEvenNumber = scalar(@mapList) % 2 == 0;
die "Must give mapping->value pairs. Usage: $usage\n" if not $isEvenNumber;
my %mappings = @mapList;
 
my $newValueMapId = int(qx/mysql -N -s -e 'select nextid from zabbix.ids where field_name = "valuemapid"'/) ||
die("Can't fetch max valuemapid\nUsage: $usage\n");
$newValueMapId++;
my $newMappingId = int(qx/mysql -N -s -e 'select nextid from zabbix.ids where field_name = "mappingid"'/) ||
die("Can't fetch max mappingid\nUsage: $usage\n");
$newMappingId++;
 
eval {
 my $valueMapCmd = qq/mysql -e "insert into zabbix.valuemaps (valuemapid, name) values ('$newValueMapId', '$valueMapName');"/;
 print "$valueMapCmd\n";
 system $valueMapCmd;
 eval {
 for my $from (keys %mappings) {
 my $to = $mappings{$from};
 my $mappingCmd= qq/mysql -e "insert into zabbix.mappings (mappingid, valuemapid, value, newvalue) values ('$newMappingId', '$newValueMapId', '$from', '$to');"/;
 print "$mappingCmd\n";
 system $mappingCmd;
 $newMappingId++;
 }
 };
 if ($@) {
 die "something went wrong inserting into mappings $@";
 }
};
if ($@) {
 die "something went wrong inserting into valuemaps $@";
}
 
my $valueMapUpdCmd = qq/mysql -e 'update zabbix.ids set nextid = "$newValueMapId" where field_name = "valuemapid";'/;
print "$valueMapUpdCmd\n";
system $valueMapUpdCmd;
$newMappingId--;
my $mappingUpdCmd = qq/mysql -e 'update zabbix.ids set nextid = "$newMappingId" where field_name = "mappingid";'/;
print "$mappingUpdCmd\n";
system $mappingUpdCmd;

 

LVM recovery

A few days ago I made a mistake and forced fsck to check the partition that contains the LVM physical volume instead of the logical volume; as a result I got broken LVM metadata. I was unable to see the volume group and logical volumes.
The pvs output looked like this:

# pvs -v

Scanning for physical volume names
Incorrect metadata area header checksum

I tried to run pvck, but it did not help me: it found the corrupted metadata but did not repair the LVM:

# pvck -d -v /dev/md5
Scanning /dev/md5
Incorrect metadata area header checksum
Found label on /dev/md5, sector 1, type=LVM2 
Found text metadata area: offset=4096, size=193024
Incorrect metadata area header checksum

Finally I found out that it's possible to make backups of LVM metadata and restore them when needed, but I thought that all I had was a broken LVM with broken metadata.
It's hard to describe how happy I was when I found that by default LVM creates backups of the metadata whenever you make any changes. I found them in the /etc/lvm/backup directory; after that, recovery became an easy task. First I recreated the physical volume:

pvcreate -u b3Lk2a-pydG-Vhf3-DSEJ-9b84-RLm9-UEr6r3 --restorefile /etc/lvm/backup/vg-320 /dev/md5

The UUID can be found in the pv section of the metadata file:

physical_volumes {

    pv0 {
        id = "b3Lk2a-pydG-Vhf3-DSEJ-9b84-RLm9-UEr6r3"
        device = "/dev/md5"    # Hint only

Next I restored the volume group:

vgcfgrestore -f /etc/lvm/backup/vg-320 vg-320

After that, the logical volumes became visible:

# lvs
 LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
 root vg-320 -wi-a--- 15.00g 
 swap vg-320 -wi-a--- 1.00g 
 var vg-320 -wi-ao-- 200.00g 
 zoneminder vg-320 -wi-a--- 15.00g

After reinitialization with the vgscan -v && vgchange -ay commands, the volume group was ready for fsck.
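By the way, you can also take an explicit metadata backup before any risky operation (a small sketch; vg-320 is the volume group from this post and the output file name is arbitrary):

vgcfgbackup -f /root/vg-320.metadata.backup vg-320
# automatic copies are kept in /etc/lvm/backup and /etc/lvm/archive
ls /etc/lvm/backup /etc/lvm/archive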

Simple OpenVPN profile generator

A few months ago I learned that OpenVPN supports profiles. Before that I generated a config for every client, created keys and certs with easy-rsa, tarred it all together and put it on the client. Now I can create a profile that contains all the necessary keys, certs and config in one file, so I wrote a simple script that generates an .ovpn profile for a new client.
The generated .ovpn profile can be imported from an SD card on Android, via iTunes or email on iOS, or you can just type `openvpn your_new_profile.ovpn` on a PC.
Prerequisites: a configured easy-rsa (`pkitool clientname` must produce a cert and key for the client).
You must customize the config part for your server; it would be possible to fetch the data from the server config file, but I'm too lazy to modify the script for that.
Here it is:

#!/bin/bash
#Dir where easy-rsa is placed
EASY_RSA_DIR="/etc/ssl/easy-rsa"
KEYS_DIR="$EASY_RSA_DIR/keys"
# Dir where profiles will be placed
OVPN_PATH="/root/ovpn"
REMOTE="your.server port"
 
 
if [ -z "$1" ]
then 
        echo -n "Enter new client common name (CN): "
        read -e CN
else
        CN=$1
fi
 
 
if [ -z "$CN" ]
        then echo "You must provide a CN."
        exit
fi
 
cd $EASY_RSA_DIR
if [ -f $KEYS_DIR/$CN.crt ]
then 
        echo "Certificate with the CN $CN already exists!"
        echo " $KEYS_DIR/$CN.crt"
else
source ./vars > /dev/null
./pkitool $CN
fi
 
cat > $OVPN_PATH/${CN}.ovpn << END
client
dev tun
resolv-retry infinite
nobind
persist-key
persist-tun
verb 1
comp-lzo
proto tcp
remote $REMOTE
 
<ca>
`cat $KEYS_DIR/ca.crt`
</ca>
 
<cert>
`sed -n '/BEGIN/,$p' $KEYS_DIR/${CN}.crt`
</cert>
 
<key>
`cat $KEYS_DIR/${CN}.key`
</key>
END
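Assuming you saved the script as make_ovpn.sh (the name is arbitrary), usage looks like this:

./make_ovpn.sh alice
openvpn --config /root/ovpn/alice.ovpn    # test the generated profile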

KIS-3R33 calculator

Half a year ago I wanted to get an in-car PSU to charge my smartphone or to power devices like a camcorder. I wanted to build a powerful power supply; first I tried to build it on the NCP3155 chip, but failed. I looked around for another chip and found the complete KIS-3R33 module based on the MP2307 chip. It looked very interesting and cheap, and after I got a few I started searching for a way to change the output voltage from 3.3V to 5V. I found many guides on how to change Vout by replacing internal components, and all of them ignore the fact that the module has an Adjust pin. I wanted to find a way to change Vout without replacing internal components.
The KIS-3R33 is very similar to the typical application circuit:
(MP2307 typical application schematic)
The datasheet gives a formula for calculating Vout depending on R1 and R2:
Vout = 0.925 * (R1 + R2) / R2
According to the module schematic that I found on the internet, the Adjust pin is connected to FB in series with a 3.3 kOhm resistor:
(KIS-3R33S module schematic: Kis-3r33s_Diag.jpg)

R1 = 25.5 kOhm (two 51 kOhm resistors in parallel), so it is possible to vary R1 in the range 25.5 kOhm – 2.9 kOhm and R2 in the range 10 kOhm – 2.48 kOhm, which gives an output range of 1.19V – 10.43V. If you need a voltage higher than 5V, you need to remove the zener diode first (D2 on the schematic), because this diode limits Vout to 5.1V. Also, the voltage rating of the output capacitor (C2 on the schematic) is not known, so it is a good idea to replace it with a capacitor that can handle your output voltage.
I wrote a simple calculator for the KIS-3R33 that computes the resistor you must connect between the Adj pin and GND or Vout to get the desired voltage. Don't forget that the result will be correct only for a KIS-3R33 that has a 3.3V output (I've seen a version with a 2.7V output, so be careful).
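The script below is a minimal sketch of the same math, assuming the values from the schematic above (Vref = 0.925V, internal R1 = 25.5 kOhm, R2 = 10 kOhm, 3.3 kOhm in series with the Adj pin). Save it and pass the desired Vout in volts; it is only meaningful inside the 1.19V – 10.43V range mentioned above:

#!/bin/bash
# Usage: ./kis3r33_calc.sh 5.0
awk -v vout="$1" 'BEGIN {
    vref = 0.925; r1 = 25500; r2 = 10000; radj = 3300
    vnom = vref * (r1 + r2) / r2                  # ~3.28V, stock output
    if (vout > vnom) {
        r2eff = vref * r1 / (vout - vref)         # required effective R2
        rext  = r2 * r2eff / (r2 - r2eff) - radj  # parallel resistor minus the 3.3k
        printf "Connect %.0f Ohm between Adj and GND\n", rext
    } else {
        r1eff = r2 * (vout - vref) / vref         # required effective R1
        rext  = r1 * r1eff / (r1 - r1eff) - radj
        printf "Connect %.0f Ohm between Adj and Vout\n", rext
    }
}'

For example, for 5V it suggests roughly a 10.4 kOhm resistor between Adj and GND.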
