How to proxy a 3x-ui subscription service via Nginx on a different host

I have a feeling that I might be creating a problem only to heroically solve it, but here’s the issue: Xray and the subscription service listen on different ports. If VLESS-Reality is used, the host should mimic a legitimate third-party host as closely as possible. Therefore, having an HTTPS service with a different certificate on a non-standard port looks suspicious. Possible options include:

  1. SNI Shunting: (I haven’t tried this yet. I’ve seen examples using Nginx, but it’s unclear how to make Nginx appear as a legitimate third-party host: it would have to present the same certificate, and more importantly, where would you get the key for it? So at some point it has to be transport-level proxying; see the rough sketch after this list.)
  2. Using a Second IP on the Server
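
For reference, here is a minimal, untested sketch of what transport-level SNI shunting might look like with nginx’s stream module and ssl_preread. The domain and ports are placeholders, and TLS is never terminated by nginx, so no certificate or key is needed on this box:

# nginx.conf (top level, not inside http {}): hypothetical SNI shunting
stream {
    map $ssl_preread_server_name $backend {
        sub.example.com   127.0.0.1:2096;   # subscription service (placeholder)
        default           127.0.0.1:8443;   # Xray VLESS-Reality inbound (placeholder)
    }

    server {
        listen 443;
        ssl_preread on;         # read the SNI without terminating TLS
        proxy_pass $backend;
    }
}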

I’ve tried searching for examples or documentation, but without much success.

The most promising idea that came to mind: why not use a second server to proxy requests for the subscription service? The downside is the cost of maintaining a second VPS. However, the advantage is that the services are decoupled. If the proxy host gets blocked, the subscription service host remains functional and can be repointed to another server.

In this setup, the subscription service would listen on a port opened in the firewall only for the second server (or the servers could connect via a VPN using a private address). On the second server (assuming Nginx is used), you’d expect to set up a location block with proxy_pass, right? Wrong.

The subscription link looks like this: https://<subscription host>:<subscription port>/<uri>/<subscription>
When 3x-ui generates the link, it uses the Listen Domain, Listen Port, or Reverse Proxy URI. The solution seems obvious: set the Reverse Proxy URI, and… here’s where things start getting strange.
The subscription service returns a list of URLs with parameters for connecting to inbounds. For example, in the case of VLESS, the format is as follows: vless://<someuuid>@<address>:<subscription port>?parameters

The issue is that if the Listen Domain is not set, the subscription service substitutes the IP address from the X-Real-IP header in place of the connection address. As a result, the URL ends up looking like this: vless://<someuuid>@<client IP address>:<subscription port>?parameters
This, of course, breaks everything.
I found two solutions to this issue. Initially, I didn’t want to create a domain for 3x-ui, so I simply hardcoded the X-Real-IP in the Nginx configuration like this:

location ~ ^(/<subscription_location>/|/<subscription_json_location>/) {
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP <3x-ui public ip address>;
    proxy_set_header Range $http_range;
    proxy_set_header If-Range $http_if_range;
    proxy_redirect off;
    proxy_pass http://<3x-ui public/vpn ip address>:<filtered port>;
}

The second solution is to create a domain that points to the Xray public IP, set it as the Listen Domain, and configure Nginx as follows:

location ~ ^(/<subscription_location>/|/<subscription_json_location>/) {
    proxy_set_header Host 3x-ui.domain.name;  # Same as configured in Listen Domain
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Range $http_range;
    proxy_set_header If-Range $http_if_range;
    proxy_redirect off;
    proxy_pass http://<3x-ui public/private ip address>:<filtered port>;
}
If the Xray public IP is different from the 3x-ui address, the Host header and proxy_pass should be adjusted accordingly. However, if they are the same, and the subscription service is listening on an address filtered by the firewall, the configuration will follow a standard setup for reverse proxying a location, like this:
location ~ ^(/<subscription_location>/|/<subscription_json_location>/) {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Range $http_range;
    proxy_set_header If-Range $http_if_range;
    proxy_redirect off;
    proxy_pass http://<3x-ui public address>:<filtered port>;
}
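
Whichever variant you use, it’s easy to verify through the proxy that the generated links are correct (placeholders as above):

curl -s https://<proxy host>/<subscription_location>/<subscription>
# each returned line should look like vless://<someuuid>@<xray address>:<port>?...
# if your own client IP shows up as the address, the X-Real-IP/Host headers are wrong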

[Screenshot: example of the subscription configuration]

Nvmlfan – a thinkfan-like daemon to control NVIDIA GPU fans

I recently upgraded my home server, and as an indirect result, I now have three more PCIe slots than before. So, I decided to add a GPU for video transcoding and machine learning acceleration (though I’m still debating whether I really need it).

I bought an NVIDIA T600 but noticed some strange behavior: under load, it heats up to over 80°C. Even then, nvidia-smi reports that the fan is set below maximum speed. I tried adjusting the GPU’s target temperature and encountered two issues:

  1. The temperature setting immediately resets to default. (It turns out that persistence mode needs to be enabled to prevent this.)
  2. Even with persistence mode, the adjustment doesn’t work.

The GPU (or its drivers) ignores the target temperature setting, which is a problem. First, I don’t want anything running hot in my server. Second, and more importantly, it causes thermal stress.

I tried searching for something like thinkfan for GPUs but didn’t find anything useful. Most solutions for controlling the fan speed on NVIDIA GPUs seem to rely on nvidia-settings, which requires a functioning X server with NVIDIA drivers. That feels like overkill for me.

Fortunately, I discovered that it’s still possible to control the fans without an X server by using libnvidia-ml. After spending 15 minutes with ChatGPT and half a day making it work (half a day and 15 minutes of hating Go in total), I finally got it running.

The key difference from thinkfan is the following: nvmlfan has two modes. The first is the standard curve mode, which is defined like this:

cards:
  0:
    mode: curve
    curve:
      - [ 60, 30 ]
      - [ 65, 50 ]
      - [ 75, 100 ]

This maps the GPU’s temperature to a corresponding fan speed. The curve specifies anchor points in the format [ temperature, fan_speed ], with values in between approximated linearly.

  • Temperatures below the first anchor point are mapped to the fan speed of the first point or the minimum speed allowed by the GPU BIOS/driver.
  • If the last anchor point’s temperature is below the GPU’s maximum threshold, the fan speed is linearly approximated from the last point’s value to 100%.

Note: Fan speeds are controlled as percentages, not RPM values.
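
For example, with the curve above, 70°C falls between the [ 65, 50 ] and [ 75, 100 ] anchors, so the fan speed is set to 50 + (70 − 65)/(75 − 65) × (100 − 50) = 75%.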

The second mode does what nvidia-smi’s GPU target temperature should do (shame on you, NVIDIA): in this mode, nvmlfan tries to maintain a constant temperature.

cards:
  0:
    mode: target
    target: 65
    pid: [ 20, 0.1, 0 ]

Of course, it won’t heat up the GPU if the temperature is below the target. However, it will actively counteract the heat generated under load, which helps minimize thermal stress.

Unfortunately, there are two drawbacks:

  1. I haven’t found a better way to achieve this than using a PID controller.
  2. Tuning PID parameters is notoriously difficult—it’s practically rocket science.

There are countless articles and entire books on how to tune PID parameters, and even a whole fucking discipline called control theory. Master PID tuning on your GPU, and it will help when you try to build a rocket.

There are no universal PID parameters that work for everyone (though, when properly tuned, they should be the same for identical models of GPUs). Fortunately, controlling GPU temperatures with a fan creates a relatively inertial system, so it’s less prone to oscillation.

In the [ 20, 0.1, 0 ] array:

  1. The first number (P) is the proportional parameter. It controls how much the fan speed changes when the error (the difference between the target temperature and the actual temperature) equals one. For example, with a target temperature of 65°C and an actual temperature of 70°C, the fan speed would be set to 100%. However, setting the fan to 100% counters the heat, causing the temperature to decrease. This, in turn, reduces the fan speed, which can lead to the temperature rising again, and so on. If the P parameter is too high, this cycle can cause the system to oscillate.
  2. The second number (I) is the integral parameter. Since the proportional component is quite rough, the integral component adjusts slowly over time to ensure the fan speed perfectly matches the load, maintaining the target temperature.
  3. The third number (D) is the derivative parameter. It reacts to the rate at which the temperature changes. For systems with significant inertia, such as this one, the derivative component can often be omitted. If you’re curious about how to use it effectively, you’ll need to dive into some control theory books.
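
For the curious, here is a minimal sketch of the PID update step described above, in Go since that’s what nvmlfan is written in. The names and structure are illustrative, not nvmlfan’s actual code:

package main

import "fmt"

// pid holds the [ P, I, D ] parameters from the config plus the controller state.
type pid struct {
    kp, ki, kd float64
    integral   float64
    prevErr    float64
}

// step takes the target and actual temperatures plus the sampling interval dt
// (in seconds) and returns a fan speed in percent.
func (c *pid) step(target, actual, dt float64) float64 {
    err := actual - target          // positive when the GPU runs too hot
    c.integral += err * dt          // slowly accumulates residual error
    deriv := (err - c.prevErr) / dt // reacts to how fast the temperature changes
    c.prevErr = err

    out := c.kp*err + c.ki*c.integral + c.kd*deriv

    // Clamp to the percentage range the driver accepts.
    if out < 0 {
        out = 0
    }
    if out > 100 {
        out = 100
    }
    return out
}

func main() {
    c := pid{kp: 20, ki: 0.1, kd: 0}
    // One second after the GPU hit 70°C with a 65°C target: the P term alone
    // already asks for 100%, matching the example from point 1 above.
    fmt.Printf("fan: %.0f%%\n", c.step(65, 70, 1))
}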

The second drawback is that target mode can create additional noise by frequently changing the fan speed. Since the target speed is recalculated every second, the variations might be noticeable—especially if the system begins to oscillate.

With that said, here’s the repository for the project: https://github.com/IvanBayan/nvmlfan

PS
I take no responsibility if someone damages their GPU using this. Use it at your own risk.

Debian dual boot with full encryption on LVM and secure boot enabled

Just a short note on how to install Debian in a dual-boot setup with full encryption (without a separate unencrypted /boot).

I needed to preserve the installed Windows and didn’t want to touch BIOS settings, so the first thing you need is unallocated disk space.
During disk partitioning, the free space should be dedicated to an encrypted partition.
After the key is provided and the partition is initialized, a volume group and the corresponding logical volumes should be created on the encrypted partition (it will be listed as something like /dev/nvme0n1p3_crypt).
At the last stage of installation GRUB will fail; to make GRUB appear installed, you need to switch to the second terminal and add GRUB_ENABLE_CRYPTODISK=y to /target/etc/default/grub:

echo GRUB_ENABLE_CRYPTODISK=y >> /target/etc/default/grub

Then repeat the GRUB installation from the installer menu.

The next steps I did in recovery mode, because GRUB kept insulting me with the messages "error: Invalid passphrase." and "error: no such cryptodisk found."

The most important step (and probably the only one required) is to convert the LUKS key from argon2i (which, as it turned out, is not supported by GRUB) to pbkdf2 with the command:

cryptsetup luksConvertKey --pbkdf pbkdf2 /dev/nvme0n1p3

/dev/nvme0n1p3 should be the path to the actual encrypted partition.
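
To check which KDF the keyslots use before and after the conversion:

cryptsetup luksDump /dev/nvme0n1p3 | grep PBKDF
# each keyslot should report pbkdf2 once the conversion is done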

The last two steps are listed in decreasing order of importance; they are probably not needed, but I did them before the key conversion, so I’m not 100% sure.
Add cryptdevice=/dev/nvme0n1p3:lvm to the end of GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub.
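
After this change, the line in /etc/default/grub might look like this (the quiet flag is just an example of what may already be there):

GRUB_CMDLINE_LINUX_DEFAULT="quiet cryptdevice=/dev/nvme0n1p3:lvm"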

Re-install and update grub:

mount -t efivarfs none /sys/firmware/efi/efivars/ && \
grub-install --target=x86_64-efi --uefi-secure-boot --force-extra-removable /dev/nvme0n1 && \
update-grub

I’m most skeptical about the last step, since it’s usually needed when the PC can’t load GRUB because of enabled secure boot and an incorrect installation. But I saw the message that it doesn’t accept the LUKS password, so GRUB was definitely loaded.

How to pass a host device to a podman container in user mode using podman-compose

One of the main ideas of containerization is achieving repeatable, hassle-free results. So, when I see headlines saying that podman gives the same experience and supports docker-compose-like deployments (or even provides a socket for native docker-compose), I expect that I can grab someone else’s docker-compose file, run $ podman-compose up, and enjoy a working installation of something. But it’s a trap.

In theory, if you leave service state management out of the equation, it works, but only if the payload doesn’t need access to devices or privileged ports, and you don’t need to organize file sharing between pods or between the host and pods.

Nevertheless, from time to time when I install a new host intending to use containers, I give podman a try as the “new shiny better” alternative to docker. I allocate time (like one evening), and if I’m unable to make it work the way I need, I purge it and fall back to docker.

The last time I wanted to run prind in podman, my idea was:

  1. Create separate user
  2. Add it to dialout and video groups on the host system
  3. Run services in user-mode with podman-compose

Looks simple, right? But not in the case of podman.
First of all, device pass-through doesn’t work in user mode, so if you want to use podman’s --device option, or the devices: section of a compose file, you have to run it as root, which immediately wipes out a major part of podman’s advantages.
Second, podman uses user namespaces: the uids:gids you see in a container are not the same as on the host system. Because of that, you may have the root user in a container started by a regular user, but in fact the uids:gids are offset relative to the host system by the values configured in /etc/{subuid,subgid}. This means that even if you have a video group in the container, it will be different from the video group the host system user may be a member of.
Because of that, the first two steps of my plan don’t work out of the box.

To counter that, quite recently podman gained an elegant and beautiful solution: you add a special group named keep-groups to the container with the --group-add option. So, that’s it? You just add keep-groups to the groups section of the compose file? End of story? Everything works as expected and everyone is happy? Nope!

For some reason it doesn’t work (at least in Debian’s podman-compose 1.0.6-1~bpo12+1). But before this masterpiece was born, the same effect could be achieved with --annotation run.oci.keep_original_groups=1, and that works with compose. You need to mount /dev as a volume and add the following section:

services:
  klipper:
    ...
    volumes:
      - /dev:/dev
    annotations:
      run.oci.keep_original_groups: 1
    ...
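
The plain podman equivalent of the compose snippet above (the image name is whatever you want to test with; the annotation is honored by the crun runtime):

podman run --annotation run.oci.keep_original_groups=1 -v /dev:/dev -it <image>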

It makes groups look weird and counter-intuitive inside containers and prevents you from adding new groups with the `groups:` section, but it’s better than nothing:

print3d@aurora:~$ id
uid=1001(print3d) gid=1001(print3d) groups=1001(print3d),20(dialout),44(video),100(users)
print3d@aurora:~$ podman run --group-add keep-groups -v /dev:/dev -it m5p3nc3r/v4l-utils
✔ docker.io/m5p3nc3r/v4l-utils:latest
Trying to pull docker.io/m5p3nc3r/v4l-utils:latest...
Getting image source signatures
Copying blob 366c4c59e228 done
Copying blob 5843afab3874 done
Copying config 13e86697f5 done
Writing manifest to image destination
Storing signatures
/ # id
uid=0(root) gid=0(root) groups=65534(nobody),65534(nobody),65534(nobody),0(root)
/ # ls -l /dev/{ttyUSB1,video0}
ls: /dev/{ttyUSB1,video0}: No such file or directory
/ # ls -ln /dev/ttyUSB1
crw-rw---- 1 65534 65534 188, 1 Dec 26 15:26 /dev/ttyUSB1
/ # ls -ln /dev/video0
crw-rw---- 1 65534 65534 81, 0 Dec 17 15:01 /dev/video0
/ # v4l2-ctl --list-devices
UVC Camera (046d:0825) (usb-0000:04:00.3-5):
/dev/video0
/dev/video1
/dev/media0

Make Xerox B215 work with Samba 4 again

Recently I bought a Xerox B215 (if you can, buy something other than Xerox or HP) and wanted to make it scan to an SMB share. I already had Samba configured in a container using the servercontainers/samba image.
So, it’s just a matter of adding another share and configuring a user for the scanner, right? Wrong!
It just didn’t work. Thanks to Xerox’s engineers, who decided not to burden the end user with diagnostic messages: it started scanning and, after a second, returned back to the scan screen. Samba with log level 10 didn’t help me either; I just saw that the client tried to connect, and that’s all.
The tool that helped me was Wireshark: I found that after the NTLMSSP_AUTH request from the scanner, Samba sends STATUS_LOGON_FAILURE.

A little bit of “letsgoogleit” and voila: ntlm auth = ntlmv1-permitted allowed me not to configure FTP for that lovely Xerox.
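
For reference, the option goes into the [global] section of smb.conf; how exactly you get it there depends on how your Samba is deployed:

[global]
    # Allow NTLMv1, which the scanner's SMB client still speaks
    ntlm auth = ntlmv1-permitted

Keep in mind that NTLMv1 is cryptographically weak, so this is a trade-off you make for the scanner.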

Lenovo battery hack and whitelist at the same time

Recently I got an x230 laptop, and I plan to change the buggy Intel Centrino 6205 adapter to something like an Atheros one; I also decided it’s worth having the ability to use x220-like batteries, just in case.
To achieve that, I needed to flash patched firmware for the EC controller (the thinkpad-ec project) and a modified BIOS (the 1vyrain project), but it was confusing: which should go first? At first I didn’t realise that thinkpad-ec flashes only the EC firmware; it looked like the EC mod would update the BIOS to a version newer than supported by 1vyrain, while at the same time 1vyrain would update the BIOS to a version newer than supported by thinkpad-ec.
Finally, here is how to get the EC mod together with the patched BIOS on an x230 laptop:
1. The BIOS should be old enough to be compatible with both 1vyrain and thinkpad-ec; as of 2020-03-22 that means no newer than 2.60 (1vyrain requires an older BIOS than thinkpad-ec; the requirements for the 1vyrain patch can be found here), otherwise it should be downgraded as described here.
2. Make a bootable device with the thinkpad-ec image, set the boot mode to ‘Legacy’ in the BIOS, and update the EC firmware.
3. Make a bootable device with the 1vyrain image, set the boot mode to “UEFI only” in the BIOS, disable “Secure boot”, and update the BIOS.

In my case, I ended up with BIOS version 2.77 and EC version 1.14.

How to just send logs from files to graylog2

This solution allows you to read logs from files and just send them to a remote syslog/graylog server. The logs will not influence your current syslog settings, and you won’t need to filter them out of any syslog facility (like local7); all you need is rsyslog (I used v8).

My task was to send logs written by a Java application (if I remember right, log4j was used). They were rotated by logrotate with truncation, so a few specific options were added.
I replaced %APP-NAME% in rsyslog’s template (RSYSLOG_SyslogProtocol23Format) to be able to differentiate which files the log messages were read from.

In my opinion, it’s better to write logs in a format that allows them to be parsed easily, or to send them directly to the remote location, but if you need to do it quickly without modifying the application, this is an appropriate solution. Just copy the config below into a file like /etc/rsyslog.d/99-graylog.conf and modify TARGET.ADDRESS, TARGET.PORT, the app_ tag, and the File setting according to your environment.

module(load="imfile")

template(
    name="SyslogProtocol23Format_modified" type="string"
    string="<%PRI%>1 %TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag%%$.suffix% %PROCID% %MSGID% %STRUCTURED-DATA% %msg%\n"
)

ruleset(name="sendToLogserver") {
    action(type="omfwd" Target="TARGET.ADDRESS" Port="TARGET.PORT" Template="SyslogProtocol23Format_modified")
}

ruleset(name="app_logs") {
    set $.suffix=re_extract($!metadata!filename, "(.*)/([^/]*)", 0, 2, "unknown.log");
    call sendToLogserver
    stop
}

input(
    type="imfile"
    File="/var/log/app_logs/*.log"
    Tag="app_"
    Ruleset="app_logs"
    freshStartTail="on"
    addMetadata="on"
)

In my case, the application wrote multi-line log messages, so startmsg.regex was used. Also, logs were rotated by logrotate with the truncate method, so the additional option reopenOnTruncate was used. My input section looked like this:

input(
    type="imfile"
    File="/var/log/app_logs/*.log"
    Tag="app_"
    Ruleset="app_logs"
    freshStartTail="on"
    addMetadata="on"
    startmsg.regex="^[0-9]{4}-[0-9]{2}-[0-9]{2} "
    reopenOnTruncate="on"
)
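
After restarting rsyslog, you can append a line matching the regex to a watched file and check that it arrives at the server:

systemctl restart rsyslog
echo "$(date '+%Y-%m-%d %H:%M:%S') test message" >> /var/log/app_logs/test.log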

Fixing the startup error of STM32CubeMX on Linux

After STM32CubeMX was upgraded from version 4 to version 5, it couldn’t start. I tried to reinstall it, but without result. The last messages in the console after the application gets stuck look like this:

2019-01-24 21:03:54,692 [INFO] PluginManage:339 - Loaded plugin projectmanager (category:projectmanager,tabindex:3)
2019-01-24 21:04:38,908 [ERROR] IntegrityCheckThread:90 - Cannot obtain updater plugin
2019-01-24 21:04:38,909 [INFO] IntegrityCheckThread:94 - End integrity checks thread
2019-01-24 21:04:38,909 [INFO] ThirdPartyDb:263 - Close Third Party DataBase File (/home/bob/.stm32cubemx/plugins/thirdparty/db/thirdparties_db.xml)

At the same time, the Java processes look like this:

bob 20652 102 1.5 5841340 127888 pts/3 Sl+ 21:03 2:41 java -jar STM32CubeMX
bob 20653 0.0 0.0 0 0 pts/3 Z+ 21:03 0:00 [STM32CubeMX] <defunct>

On the ST forum I found a solution which helped me: if you change the tabindex parameter of com/st/microxplorer/plugins/tools/Plugin.properties in tools.jar to 6, STM32CubeMX will start working.
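
A rough sketch of how one might apply that edit in place (the install path and the exact key format inside Plugin.properties are assumptions; back up the jar first):

cd /opt/STM32CubeMX/plugins        # assumption: wherever tools.jar lives in your install
cp tools.jar tools.jar.bak         # keep a backup
unzip tools.jar com/st/microxplorer/plugins/tools/Plugin.properties
sed -i 's/tabindex=./tabindex=6/' com/st/microxplorer/plugins/tools/Plugin.properties
zip tools.jar com/st/microxplorer/plugins/tools/Plugin.properties    # write the entry back into the jar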
Here is the modified tools.jar

Fixing Gutenberg error “The editor has encountered an unexpected error”

After updating to WP 5, I faced the following issue: I couldn’t add a new post or edit existing ones. It looks like the error happens because of misconfigured nginx when the new ‘Gutenberg’ editor is active (which is the default for WordPress 5.0 and above).

Earlier I had the nginx location / configured in the following manner:

location / {
    try_files $uri $uri/ /index.php?$args;
}

The same configuration can be found on the WordPress Codex page and on the nginx recipe page.

The issue is caused by the question mark in the try_files directive: when $args is empty, index.php is called like this: “/index.php?”. The solution is simple; nginx provides the $is_args variable:

$is_args
    “?” if a request line has arguments, or an empty string otherwise

After I changed the location / block like this:

location / {
    try_files $uri $uri/ /index.php$is_args$args;
}

The problem is gone.

How to configure a Redmine service via Terraform with persistent storage on Amazon ECS

First of all, I have very little experience with AWS and Terraform, so this may be obvious to those with enough experience, but an article like this would definitely have saved me a lot of time had I found it earlier.

It wasn’t simple to figure out how to run a Redmine container on ECS.
The main problem was persistent storage. Redmine assumes that it has persistent disk storage which remains the same between service restarts. If you have your own docker host, it’s simple to map a hypervisor directory inside the container, but when your docker nodes can be added and removed dynamically, you can lose the data on disk generated by the app.
Amazon provides a few ways to get persistent storage, such as S3, EBS, or EFS.

By nature, S3 is storage accessible over HTTP, so if your app has no integration with the S3 API, it can’t be used (except when you mount S3 via a FUSE filesystem, for example).
EBS is remote block storage, so you need to attach the block device to the docker host, mount it, and map it inside the container before you are able to use it.
EFS by nature is just NFS.

I wanted to find a solution that would be as natural as possible. I wanted to keep docker and the Redmine image untouched (i.e. avoid installing additional plugins/scripts/packages). So, I decided not to use S3, because it needs something like s3fs to make S3 storage available to Redmine.
I decided not to use EBS, because I found reports of EBS volumes getting stuck attached to a host and not being re-attachable to another host until the initial host reboots.
EFS looked perfect: it can be mounted from different hosts, and it keeps data across application/hypervisor life cycles. Moreover, even if I hadn’t found a simple way to use EFS, the only thing I would have needed was the nfs-common package.

I was lucky, because in August 2018 Amazon announced support for docker volumes and docker volume plugins, and docker itself has been able to mount NFS inside containers since version 17.06 (I couldn’t find it in the changelog, but if you google it, you will find plenty of references to that). So, it was exactly what I wanted; the only con I ran into was the lack of documentation. I needed to use Terraform for the Redmine configuration, and its documentation didn’t specify how exactly to pass driver_opts to the docker volume configuration, so here is the solution:

First, you need to specify the mount point in task-definition.json:

"mountPoints":[
     {
       "sourceVolume": "redmine_storage",
       "containerPath": "/usr/src/redmine/files"
     }
 ]

And here is the volume block from the Terraform code for the volume specification:

volume {
    name = "redmine_storage"

    docker_volume_configuration {
        scope       = "task"
        driver      = "local"
        driver_opts = {
            "type"   = "nfs"
            "device" = "${var.efs_dns}:/"
            "o"      = "addr=${var.efs_dns},nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
        }
    }
}

That’s all.
The code above is part of a Redmine module which has the input variable efs_dns, so you can put your EFS address there if you configured it manually.
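
If the volume refuses to mount, it may help to verify the EFS export manually from a docker host first, using the same options (the fs-... DNS name is a placeholder):

sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-0123456789abcdef0.efs.eu-west-1.amazonaws.com:/ /mnt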

PS
Here you can find a Redmine S3 plugin, but I wanted to migrate an existing Redmine, and that looked like a lot of work: I would have needed to modify Redmine’s DB and put the files on S3 in the manner that plugin expects, so I decided that S3 was not an option.