To make it short, the command at the shell prompt is
$ perl -MMIME::QuotedPrint -e 'local $/; $x=<>; print decode_qp($x)' < quoted.txt > unquoted.html
and I needed this to extract an HTML segment of an email.
Cronjobs typically consist of a single utility which we’re pretty confident about. Even if it takes quite some time to complete (updatedb, for example), there’s always a simple story: a single task to complete, with a known beginning and end.
If the task involves a shell script that calls a few utilities, that feeling of control fades. It’s therefore reassuring to know that everything can be cleaned up neatly by simply stopping a service. Systemd is good at that, since all processes that are involved in the service are kept in a separate cgroup. So when the service is stopped, all processes that were possibly generated eventually get a SIGKILL, typically 90 seconds after the request to stop the service, unless they terminated voluntarily in response to the initial SIGTERM.
Advantage number two is that systemd offers a series of capabilities for limiting what the cronjob is able to do, thanks to the cgroup arrangement. This doesn’t fall far short of the possibilities of container virtualization, achieved with pretty simple assignments in the unit file. This includes making certain directories inaccessible or read-only, setting up temporary directories, disallowing external network connections, limiting the set of allowed syscalls, and of course limiting the amount of resources that the service consumes. They’re called Control Groups for a reason.
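For illustration, a few of these restrictions in the [Service] section might look like this (a sketch only; each directive requires a minimal systemd version, and some names have changed over time, e.g. MemoryMax superseded MemoryLimit):

[Service]
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=true
PrivateNetwork=true
SystemCallFilter=@system-service
MemoryMax=512M
CPUQuota=20%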
There’s also the RuntimeMaxSec parameter in the service unit file, which is the maximal wall clock time the service is allowed to run. The service is terminated and put in failure state if this time is exceeded. This is supported only by systemd version 229 and later, however, so check with “systemctl --version”.
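In the unit file, it’s a single assignment, e.g. for a four-hour limit (the value is of course arbitrary):

[Service]
RuntimeMaxSec=4h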
My original idea was to use systemd timers to kick off the job, and let RuntimeMaxSec make sure it would get cleaned up if it ran too long (i.e. got stuck somehow). But because the server in question ran a rather old version of systemd, I went for a cron entry for starting the service and another one for stopping it, with a certain time difference between them. In hindsight, cron turned out to be neater for kicking off the jobs, because I had multiple variants of them at different times. So one single file enclosed them all, as shown below.
The main practical difference is that if a service reaches RuntimeMaxSec, it’s terminated with a failed status. The cron solution stops the service without this. I guess there’s a systemctl way to achieve the failed status, if that’s really important.
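The cron side of this arrangement boils down to a pair of /etc/crontab entries along these lines (the times and the instance string are hypothetical):

0 2 * * * root systemctl start 'cronjob-test@nightly-job'
45 3 * * * root systemctl stop 'cronjob-test@*'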
As a side note, I have a separate post on Firejail, which is yet another possibility to use cgroups for controlling what processes do.
The idea is simple: A service can be started as a result of a timer event. That’s all that timer units do.
Timer units are configured like any systemd units (man systemd.unit) but have a .timer suffix and a dedicated [Timer] section. By convention, the timer unit named foo.timer activates the service foo.service, unless specified differently with the Unit= assignment (useful for generating confusion).
Units that are already running when the timer event occurs are not restarted, but are left to keep running. Exactly like systemctl start would do.
For a cronjob-style timer, use OnCalendar= to specify the times. See man systemd.time for the format. Note that AccuracySec= should be set too, to control how much systemd may play with the exact time of execution, or systemd’s behavior might be confusing.
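A minimal cronjob-style timer unit might hence look like this (the time is hypothetical; by the convention above, foo.timer activates foo.service):

[Unit]
Description=Nightly cronjob-style timer

[Timer]
OnCalendar=*-*-* 02:00:00
AccuracySec=1min

[Install]
WantedBy=timers.target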
As usual, the unit file (e.g. /etc/systemd/system/cronjob-test@.service) is short and concise:
[Unit]
Description=Cronjob test service
[Service]
ExecStart=/home/eli/shellout/utils/shellout.pl "%I"
Type=simple
User=eli
WorkingDirectory=/home/eli/shellout/utils
KillMode=mixed
NoNewPrivileges=true
This is a simple service, meaning that systemd expects the process launched by ExecStart to run in the foreground.
Note however that the service unit’s file name has a “@” character and that %I is used to choose what to run, based upon the unescaped instance name (see man systemd.unit). This turns the unit file into a template, and allows choosing an arbitrary command (the shellout.pl script is explained below) with something like (really, this works)
# systemctl start cronjob-test@'echo "Hello, world"'
This might seem dangerous, but recall that root privileges are required to start the service, and you get a plain-user process (possibly with no ability to escalate privileges) in return. Not the big jackpot.
For stopping the service, exactly the same service specifier string is required. But it’s also possible to stop all instances of a service with
# systemctl stop 'cronjob-test@*'
How neat is that?
A few comments on this:
There is no log entry for a service of simple type that terminates with a success status. Even though it’s stopped in the sense that it has no allocated cgroup and “systemctl start” behaves as if it was stopped, a successful termination is silent. Not sure if I like this, but that’s the way it is.
When the process doesn’t respond to SIGTERM:
Jan 16 19:13:03 systemd[1]: Stopping Cronjob test service...
Jan 16 19:14:33 systemd[1]: cronjob-test.service stop-sigterm timed out. Killing.
Jan 16 19:14:33 systemd[1]: cronjob-test.service: main process exited, code=killed, status=9/KILL
Jan 16 19:14:33 systemd[1]: Stopped Cronjob test service.
Jan 16 19:14:33 systemd[1]: Unit cronjob-test.service entered failed state.
So there’s always “Stopping” first and then “Stopped”. And if there are processes in the control group 90 seconds after “Stopping”, SIGKILL is sent, and the service gets a “failed” status. Not being able to quit properly is a failure.
A “systemctl stop” on a service that is already stopped is legit: The systemctl utility returns silently with a success status, and a “Stopped” message appears in the log without anything actually taking place. Neither does the service’s status change, so if it was considered failed before, so it remains. And if the target to stop was a group of instances (e.g. systemctl stop ‘cronjob-test@*’) and there were no instances to stop, there isn’t even a log message about it.
Same logic with “Starting” and “Started”: A superfluous “systemctl start” does nothing except for a “Started” log message, and the utility is silent, returning success.
By default, the output (stdout and stderr) of the processes is logged in the journal. This is usually pretty convenient, but I wanted the good old cronjob behavior: An email is sent unless the job is completely silent and exits with a success status (actually, crond doesn’t care about the exit status, but I wanted this too).
This concept doesn’t fit systemd’s spirit: You don’t start sending mails each time a service has something to say. One could use OnFailure for activating another service that calls home when the service gets into a failure status (which includes a non-success termination of the main process), but that mail won’t tell me the output. To achieve this, I wrote a Perl script. So there’s one extra process, but who cares, systemd kills ’em all in the end anyhow.
Here it comes (I called it shellout.pl):
#!/usr/bin/perl
use strict;
use warnings;

# Parameters for sending mail to report errors
my $sender = 'eli';
my $recipient = 'eli';
my $sendmail = "/usr/sbin/sendmail -i -f$sender";

my $cmd = shift;
my $start = time();
my $output = '';

my $catcher = sub { finish("Received signal."); };

$SIG{HUP} = $catcher;
$SIG{TERM} = $catcher;
$SIG{INT} = $catcher;
$SIG{QUIT} = $catcher;

# Redirect stderr to stdout for child processes as well
open (STDERR, ">&STDOUT");

open (my $fh, '-|', $cmd) or finish("Failed to fork: $!");

while (defined (my $l = <$fh>)) {
  $output .= $l;
}

close $fh
  or finish("Error: $! $?");

finish("Execution successful, but output was generated.")
  if (length $output);

exit 0; # Happy end

sub finish {
  my ($msg) = @_;
  my $elapsed = time() - $start;

  $msg .= "\n\nOutput generated:\n\n$output\n"
    if (length $output);

  open (my $fh, '|-', "$sendmail $recipient")
    or finish("Failed to run sendmail: $!");

  print $fh <<"END";
From: Shellout script <$sender>
Subject: systemd cron job issue
To: $recipient

The script with command \"$cmd\" ran $elapsed seconds.

$msg
END

  close $fh
    or die("Failed to send email: $! $?\n");

  $SIG{TERM} = sub { }; # Not sure this matters
  kill -15, $$; # Kill entire process group
  exit(1);
}
First, let’s pay attention to
open (STDERR, ">&STDOUT");
which makes sure standard error is redirected to standard output. This is inherited by child processes, which is exactly the point.
The script catches the signals (SIGTERM in particular, which is systemd’s first hint that it’s time to pack and leave) and sends a SIGTERM to all other processes in turn. This is combined with KillMode being set to “mixed” in the service unit file, so that only shellout.pl gets the signal, and not the other processes.
The rationale is that if all processes get the signal at once, it may (theoretically?) turn out that the child process terminates before the script reacted to the signal it got itself, so it will fail to report that the reason for the termination was a signal, as opposed to the termination of the child. This could miss a situation where the child process got stuck and said nothing when being killed.
Note that the script kills all processes in the process group just before quitting due to a signal it got, or when the invoked process terminates and there was output. Before doing so, it sets the signal handler to a NOP, to avoid an endless loop, since the script’s process will get it as well (?). This NOP thing appears to be unnecessary, but better safe than sorry.
Also note that the while loop quits when there’s nothing more to read from <$fh>. This means that if the child process forks and then terminates, the while loop will continue: unless the forked process closes its copy of the output file handle, the write side of the pipe remains open, so <$fh> doesn’t return EOF. The first child process will remain a zombie until the forked process is done. Only then will it be reaped by virtue of the close $fh. This machinery is not intended for fork() sorcery.
I took a different approach in another post of mine, where the idea was to fork explicitly and modify the child’s attributes. Another post discusses timing out a child process in general.
Yes, cronjobs are much simpler. But in the long run, it’s a good idea to acquire the ability to run cronjobs as services for the sake of keeping the system clean from runaway processes.
Paid-per-time cloud services. I don’t want to forget one of those running, only to get a fat bill at the end of the month. And if the intended use is short sessions anyhow, it makes sense to ensure that the machine shuts down by itself after a given amount of time. Just make sure that a shutdown initiated by the machine itself indeed stops the billing. Any sane cloud provider does that, except for, possibly, the cost of storing the VM’s disk image.
So this is the cloud computing parallel to “did I lock the door?”.
The examples here are based upon systemd 241 on Debian GNU/Linux 10.
There is more than one way to do this. I went for two services: One that calls /sbin/shutdown with a five minute delay (so I get a chance to cancel it) and the second is a timer for the uptime limit.
So the main service is this file as /etc/systemd/system/uptime-limiter.service:
[Unit]
Description=Limit uptime service

[Service]
ExecStart=/sbin/shutdown -h +5 "System is taken down by uptime-limiter.service"
Type=simple

[Install]
WantedBy=multi-user.target
The naïve approach is to just enable the service and expect it to work. Well, it does work when started manually, but when this service starts as part of the system bringup, the shutdown request is registered but later ignored. Most likely because systemd somehow cancels pending shutdown requests when it reaches the ultimate target.
I should mention that adding After=multi-user.target in the unit file didn’t help. Maybe some other target. Don’t know.
So the way to ensure that the shutdown command is respected is to trigger it off with a timer service.
The timer unit, as /etc/systemd/system/uptime-limiter.timer, in this case allows for 6 hours of uptime (plus the extra 5 minutes given by the main service):
[Unit]
Description=Timer for Limit uptime service

[Timer]
OnBootSec=6h
AccuracySec=1s

[Install]
WantedBy=timers.target
and enable it:
# systemctl enable uptime-limiter.timer
Created symlink /etc/systemd/system/timers.target.wants/uptime-limiter.timer → /etc/systemd/system/uptime-limiter.timer.
Note two things here: That I enabled the timer, not the service itself, by adding the .timer suffix. And I didn’t start it. For that, there’s the --now flag.
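So, for the record, enabling and starting in one go would have been:

# systemctl enable --now uptime-limiter.timer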
So there are two steps: When the timer fires off, the call to /sbin/shutdown takes place, and that causes nagging wall messages to start once a minute, and eventually a shutdown. Mission complete.
Ah, that’s surprisingly easy:
# systemctl list-timers
NEXT LEFT LAST PASSED UNIT ACTIVATES
Sun 2021-01-31 17:38:28 UTC 14min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Sun 2021-01-31 20:50:22 UTC 3h 26min left Sun 2021-01-31 12:36:41 UTC 4h 47min ago apt-daily.timer apt-daily.service
Sun 2021-01-31 23:23:28 UTC 5h 59min left n/a n/a uptime-limiter.timer uptime-limiter.service
Sun 2021-01-31 23:23:34 UTC 5h 59min left Sun 2021-01-31 17:23:34 UTC 44s ago google-oslogin-cache.timer google-oslogin-cache.service
Mon 2021-02-01 00:00:00 UTC 6h left Sun 2021-01-31 12:36:41 UTC 4h 47min ago logrotate.timer logrotate.service
Mon 2021-02-01 00:00:00 UTC 6h left Sun 2021-01-31 12:36:41 UTC 4h 47min ago man-db.timer man-db.service
Mon 2021-02-01 06:49:19 UTC 13h left Sun 2021-01-31 12:36:41 UTC 4h 47min ago apt-daily-upgrade.timer apt-daily-upgrade.service
Clean and simple. And this is probably why this method is better than a long delay on shutdown, which is less clear about what it’s about to do, as shown next.
Note that a timer service can be stopped, which is the parallel of canceling a shutdown. Restarting it to push the time limit further won’t work in this case, however, because the timer fires relative to boot time (OnBootSec), regardless of when the timer unit itself was started.
To check if a shutdown is about to happen:
$ cat /run/systemd/shutdown/scheduled
USEC=1612103418427661
WARN_WALL=1
MODE=poweroff
WALL_MESSAGE=System is taken down by uptime-limiter.service
There are different reports on what happens when the shutdown is canceled. On my system, the file was deleted in response to “shutdown -c”, but not when the shutdown was canceled because the system had just booted up. There are other suggested ways too, but in the end, it appears like there’s no definite way to tell whether a system has a shutdown scheduled or not. At least not as of systemd 241.
That USEC line is the epoch time for when shutdown will take place. A Perl guy like me goes
$ perl -e 'print scalar gmtime(1612103418427661/1e6)'
but that’s me.
So this shows what doesn’t work: Enable the main service (as well as start it right away with the --now flag):
# systemctl enable --now uptime-limiter
Created symlink /etc/systemd/system/multi-user.target.wants/uptime-limiter.service → /etc/systemd/system/uptime-limiter.service.

Broadcast message from root@instance-1 (Sun 2021-01-31 14:15:19 UTC):

System is taken down by uptime-limiter.service
The system is going down for poweroff at Sun 2021-01-31 14:25:19 UTC!
So the broadcast message is out there right away. But this is misleading: It won’t work at all when the service is started automatically during system boot.
This is my short war story as I made Xilinx’ Impact, part of ISE 14.7, work on a Linux Mint 19 machine with a v4.15 Linux kernel. I should mention that I already use Vivado on the same machine, so the whole JTAG programming thing was already sorted out, including loading firmware into the USB JTAG adapters, whether it’s a platform cable or an on-board interface. All that was already history. It was Impact that refused to play ball.
In short, what needed to be done:
And now in painstaking detail.
The initial attempt to talk with the USB JTAG interface failed with a lot of dialog boxes saying something about windrvr6 and this:
PROGRESS_START - Starting Operation.
If you are using the Platform Cable USB, please refer to the USB Cable Installation Guide (UG344) to install the libusb package.
Connecting to cable (Usb Port - USB21).
Checking cable driver.
Linux release = 4.15.0-20-generic.
WARNING:iMPACT - Module windrvr6 is not loaded. Please reinstall the cable drivers. See Answer Record 22648.
Cable connection failed.
This is horribly misleading. windrvr6 is a Jungo driver which isn’t supported on anything but ancient kernels. Also, the said Answer Record seems to have been deleted.
Luckily, there’s a libusb interface as well, but it needs to be enabled. More precisely, Impact needs to find a libusb.so file somewhere. Even more precisely, this is some strace output related to its attempts:
openat(AT_FDCWD, "/opt/xilinx/14.7/ISE_DS/ISE//lib/lin64/libusb.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/opt/xilinx/14.7/ISE_DS/ISE/lib/lin64/libusb.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/opt/xilinx/14.7/ISE_DS/ISE/sysgen/lib/libusb.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/opt/xilinx/14.7/ISE_DS/EDK/lib/lin64/libusb.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/opt/xilinx/14.7/ISE_DS/common/lib/lin64/libusb.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
[ ... ]
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/tls/libusb.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/libusb.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/libusb.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/libusb.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
It so happens that a libusb module is present among the files installed along with ISE (several times, actually), so it’s enough to just
$ cd /opt/xilinx/14.7/ISE_DS/ISE/lib/lin64/
$ ln -s libusb-1.0.so.0 libusb.so
or alternatively, a symlink to /usr/lib/x86_64-linux-gnu/libusb-1.0.so worked equivalently well on my system.
Trying to initialize the chain I got:
PROGRESS_START - Starting Operation.
Connecting to cable (Usb Port - USB21).
Checking cable driver.
File version of /opt/xilinx/14.7/ISE_DS/ISE/bin/lin64/xusbdfwu.hex = 1030.
Using libusb.
Please run `source ./setup_pcusb` from the /opt/xilinx/14.7/ISE_DS/ISE//bin/lin64 directory with root privilege to update the firmware.
Disconnect and then reconnect the cable from the USB port to complete the driver update.
Cable connection failed.
So yay, it now went for libusb. But then it refused to go on.
Frankly speaking, I’m not so much into running any script with root privileges, knowing it can mess up things with the working Vivado installation. On my system, there was actually no need, because I had already installed and then removed the cable drivers (as required by ISE).
What happened here was that Impact looked for firmware files somewhere in /etc/hotplug/usb/, assuming that if they didn’t exist, then the USB device must not be loaded with firmware. But it was in my case. And yet, Impact refused on the grounds that the files couldn’t be found.
So I put those files back in place, and Impact was happy again. If you don’t have these files, an ISE Lab Tools installation should do the trick. Note that it also installs udev rules, which is what I wanted to avoid. And also that the installation will fail, because it includes compiling the Jungo driver against the kernel, and there’s some issue with that. But as far as I recall, the kernel thing is attempted last, so the firmware files will be in place. I think.
Or maybe installing them on behalf of Vivado is also fine? Not sure.
Attempting to Cable Auto Connect, I got Identify Failed and a whole range of weird errors. Since I ran Impact from a console, I got stuff like this on the terminal:
ERROR set configuration. strerr=Device or resource busy.
ERROR claiming interface.
ERROR setting interface.
ERROR claiming interface in bulk transfer.
bulk tranfer failed, endpoint=02.
ERROR releasing interface in bulk transfer.
ERROR set configuration. strerr=Device or resource busy.
ERROR claiming interface.
ERROR setting interface.
control tranfer failed.
control tranfer failed.
This time it was a stupid mistake: Vivado’s hardware manager ran at the same time, so the two competed. Device or resource busy or not?
So I just turned off Vivado. And voila. All ran just nicely.
I mentioned that I already had the firmware loading properly set up. So it looked like this in the logs:
Feb 13 11:58:18 kernel: usb 1-5.1.1: new high-speed USB device number 78 using xhci_hcd
Feb 13 11:58:18 kernel: usb 1-5.1.1: New USB device found, idVendor=03fd, idProduct=000d
Feb 13 11:58:18 kernel: usb 1-5.1.1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
Feb 13 11:58:18 systemd-udevd[59619]: Process '/alt-root/sbin/fxload -t fx2 -I /alt-root/etc/hotplug/usb/xusbdfwu.fw/xusb_emb.hex -D ' failed with exit code 255.
immediately followed by:
Feb 13 11:58:25 kernel: usb 1-5.1.1: new high-speed USB device number 80 using xhci_hcd
Feb 13 11:58:25 kernel: usb 1-5.1.1: New USB device found, idVendor=03fd, idProduct=0008
Feb 13 11:58:25 kernel: usb 1-5.1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Feb 13 11:58:25 kernel: usb 1-5.1.1: Product: XILINX
Feb 13 11:58:25 kernel: usb 1-5.1.1: Manufacturer: XILINX
This log contains contradicting messages. On one hand, the device is clearly re-enumerated with a new product ID, indicating that the firmware load went fine. On the other hand, there’s an error message saying fxload failed.
I messed around quite a bit with udev because of this. The problem is that the argument to the -D flag should be the path to the device files of the USB device, and there’s nothing there. In the related udev rule, it says $devnode, which should substitute to exactly that. Why doesn’t it work?
The answer is that it actually does work. For some unclear reason, the relevant udev rule is called a second time, and on that second time $devnode is substituted with nothing. Which is harmless because it fails royally with no device file to poke. Except for that confusing error message.
After a few days of being happy with not getting spam, I started to suspect that something was completely wrong with receiving mail. As I’m using fetchmail to get mail from my own server running dovecot v2.2.13, I’m used to getting notifications when fetchmail is unhappy. But there was no such notification.
Checking up the server’s logs, there were tons of these messages:
dovecot: master: Warning: service(pop3-login): process_limit (100) reached, client connections are being dropped
Restarting dovecot got it back running properly again, and I got a flood of the mails that were pending on the server. This was exceptionally nasty, because mails stopped arriving silently.
So what was the problem? The clue is in these log messages, which occurred about a minute after the system’s boot (it’s a VPS virtual machine):
Jul 13 11:21:46 dovecot: master: Error: service(anvil): Initial status notification not received in 30 seconds, killing the process
Jul 13 11:21:46 dovecot: master: Error: service(log): Initial status notification not received in 30 seconds, killing the process
Jul 13 11:21:46 dovecot: master: Error: service(ssl-params): Initial status notification not received in 30 seconds, killing the process
Jul 13 11:21:46 dovecot: master: Error: service(log): child 1210 killed with signal 9
These three services are helper processes for dovecot, as can be seen in the output of systemctl status:
├─dovecot.service │ ├─11690 /usr/sbin/dovecot -F │ ├─11693 dovecot/anvil │ ├─11694 dovecot/log │ ├─26494 dovecot/config │ ├─26495 dovecot/auth │ └─26530 dovecot/auth -w
What seems to have happened is that these processes failed to launch properly within the 30 second timeout limit, and were therefore killed by dovecot. And then attempts to make pop3 connections seem to have got stuck, with the forked processes that are made for each connection remaining. Eventually, they reached the maximum of 100.
The reason this happened only now is probably that the hosting server had some technical failure and was brought down for maintenance. When it went up again, all VMs were booted at the same time, so they were all very slow in the beginning. Hence it took exceptionally long to kick off those helper processes, and the 30-second timeout kicked in.
The solution? Restart dovecot once in 24 hours with a plain cronjob. Ugly, but works. In the worst case, mail will be delayed for 24 hours. This is a very rare event to begin with.
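Something like this line in /etc/crontab does it (the exact time is arbitrary, of course):

23 4 * * * root systemctl restart dovecot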
This should have been a trivial task, but it turned out quite difficult. So these are my notes for the next time. Octave 4.2.2 under Linux Mint 19, using the qt5ct plugin with gnuplot (or else I get blank plots).
So this is the small function I wrote for creating a plot and a thumbnail:
function []=toimg(fname, alt)
  grid on;
  saveas(gcf, sprintf('%s.png', fname), 'png');
  print(gcf, sprintf('%s_thumb.png', fname), '-dpng', '-color', '-S280,210');
  disp(sprintf('<a href="/media/%s.png" target="_blank"><img alt="%s" src="/media/%s_thumb.png" style="width: 280px; height: 210px;"></a>', fname, alt, fname));
The @alt argument becomes the image’s alternative text when shown on the web page.
The call to saveas() creates a 1200x900 image, and the print() call creates a 280x210 one (as specified directly). I take it that print() will create a 1200x900 without any specific argument for the size, but I left both methods, since this is how I ended up after struggling, and it’s better to have both possibilities shown.
To add some extra annoyance, toimg() always plots the current figure, which is typically the last figure plotted. Which is not necessarily the figure that has focus. As a matter of fact, even if the current figure is closed by clicking the upper-right X, it remains the current figure. Calling toimg() will make it reappear and get plotted. Which is really weird behavior.
The apparently only way around this is to use figure() to select the desired current figure before calling toimg(), e.g.
>> figure(4);
The good news is that the figure numbers match those appearing on the windows’ titles. This also explains why the numbering doesn’t reset when closing all figure windows manually. To really clear all figures, go
>> close all hidden
Occasionally, I download / upload huge files, and it kills my internet connection for plain browsing. I don’t want to halt the download or suspend it, but merely calm it down a bit, temporarily, for doing other stuff. And then let it hog as much as it wants again.
There are many ways to do this, and I went for firejail. I suggest reading this post of mine as well on this tool.
Firejail gives you a shell prompt, which runs inside a mini-container, like those cheap virtual hosting services. Then run wget or youtube-dl as you wish from that shell.
It has access to practically everything on the computer, but the network interface is controlled. Since firejail is based on cgroups, all processes and subprocesses are collectively subject to the network bandwidth limit.
Using firejail requires setting up a bridge network interface. This is a bit of container hocus-pocus, and is necessary to get control over the network data flow. But it’s simple, and it can be done once (until the next reboot, unless the bridge is configured permanently, something I don’t bother with).
Remember: Do this once, and just don’t remove the interface when done with it.
You might need to
# apt install bridge-utils
So first, set up a new bridge device (as root):
# brctl addbr hog0
and give it an IP address that doesn’t collide with anything else on the system. Otherwise, it really doesn’t matter which:
# ifconfig hog0 10.22.1.1/24
What’s going to happen is that there will be a network interface named eth0 inside the container, which will behave as if it was connected to a real Ethernet card named hog0 on the computer. Hence the container has access to everything that is covered by the routing table (by means of IP forwarding), and is also subject to the firewall rules. With my specific firewall setting, it prevents some access, but ppp0 isn’t blocked, so who cares.
To remove the bridge (no real reason to do it):
# brctl delbr hog0
Launch a shell with firejail (I called it “nethog” in this example):
$ firejail --net=hog0 --noprofile --name=nethog
This starts a new shell, for which the bandwidth limit is applied. Run wget or whatever from here.
Note that despite the --noprofile flag, there are still some directories that are read-only and some that are temporary as well. It’s done in a sensible way, though, so odds are that it won’t cause any issues. Running “df” inside the container gives an idea of what is mounted how, and it’s scarier than the actual situation.
But be sure to check that the files that are downloaded are visible outside the container.
From another shell prompt, outside the container go something like (doesn’t require root):
$ firejail --bandwidth=nethog set hog0 800 400
Removing bandwith limit
Configuring interface eth0
Download speed 3200kbps
Upload speed 240kbps
cleaning limits
configuring tc ingress
configuring tc egress
To drop the bandwidth limit:
$ firejail --bandwidth=nethog clear hog0
And get the status (saying, among others, how many packets have been dropped):
$ firejail --bandwidth=nethog status
Notes:
When starting a browser from within a container, pay attention to whether it really started a new process. Using firetools can help.
If Google Chrome says “Created new window in existing browser session”, it didn’t start a new process inside the container, in which case the window isn’t subject to bandwidth limitation.
So close all windows of Chrome before kicking off a new one. Alternatively, this can be worked around by starting the container with:
$ firejail --net=hog0 --noprofile --private --name=nethog
The --private flag creates, among others, a new volatile home directory, so Chrome doesn’t detect that it’s already running. Because I use some other disk mounts for the large partitions on my computer, it’s still possible to download stuff to them from within the container.
But extra care is required with this, and regardless, the new browser doesn’t remember passwords and such from the private container.
This is how to run a Firefox browser on a cheap VPS machine (e.g. a Google Cloud VM Instance) with an X-server connection. It’s actually not a good idea, because it’s extremely slow. The correct way is to set up a VNC server, because the X server connection exchanges information on every little mouse movement or screen update. It’s a disaster on a slow connection.
My motivation was to download a 10 GB file from Microsoft’s cloud storage. With my own Internet connection it failed consistently after a Gigabyte or so (I guess the connection timed out). So the idea is to have Firefox running on a remote server with a much better connection. And then transfer the file.
Since it’s a one-off task, and I kind-of like these bizarre experiments, here we go.
These steps:
Edit /etc/ssh/sshd_config, making sure it reads
X11Forwarding yes
Install xauth, also necessary to open a remote X:
# apt install xauth
Then restart the ssh server:
# systemctl restart ssh
and then install Firefox
# apt install firefox-esr
There will be a lot of dependencies to install.
At this point, it’s possible to connect to the server with ssh -X and run firefox on the remote machine.
Expect a horribly slow browser, though. Every small animation or mouse movement is transferred over the link, so it definitely gets stuck easily. So think before every single move, and be mindful of every single little thing in the graphics that gets updated.
Firefox “cleverly” announces that “a web page is slowing down your browser” all the time, but the animation of these announcements becomes part of the problem.
It’s also a good idea to keep the window small, so there isn’t much area to keep updated. And most important: Keep the mouse pointer off the remote window unless it’s needed there for a click. Otherwise things get stuck. Just get into the window, click, and leave. Or stay if the click was for the sake of typing (or better, pasting something).
These are my notes as I upgraded Thunderbird from version 3.0.7 (released September 2010) to 91.10.0 on Linux Mint 19. That’s more than a ten-year gap, which says something about what I think about upgrading software. What eventually forced me to do this was the need to support OAuth2 in order to send emails through Google’s Gmail server (supported since 91.8.0).
Thunderbird is essentially a Firefox browser which happens to be set up with a GUI that processes emails. So for example, the classic menubar is hidden, but can be revealed by pressing Alt.
When attempting to run a new version of Thunderbird, be sure to rename ~/.thunderbird into something else, or else the current profile will be upgraded right away. With some luck, the suffixes (e.g. -release) might make Thunderbird ignore the old information, but don’t trust that.
Actually, it seems like this is handled gracefully anyhow. When I installed exactly the same version on a different position on the disk, it ignored the profile with -release suffix, and added one with -release-1. So go figure.
To select which profile to work with, invoke Thunderbird with Profile Manager with
$ thunderbird -profilemanager &
For making the upgrade, first make a backup tarball from the original profile directory.
To adopt it into the new version of Thunderbird, invoke the Profile Manager and pick Create Profile…, create a new directory (I called it “mainprofile”), and pick that as the place for the new profile. Launch Thunderbird, quit right away, and then delete the new directory. Rename the old directory with the new deleted directory’s name. Then launch Thunderbird again.
Previously, I had the following add-ons:
So I remained with the first two only.
The simplest Thunderbird installation involves downloading it from their website and extracting the tarball somewhere in the user’s own directories. For a proper installation, I installed it under /usr/local/bin/ with
# tar -C /usr/local/bin -xjvf thunderbird-91.10.0.tar.bz2
as root. And then reorganize it slightly:
# cd /usr/local/bin
# mv thunderbird thunderbird-91.10.0
# ln -s thunderbird-91.10.0/thunderbird
Right-click the account at the left bar, pick Settings and select the Composition & Addressing item. Make sure Compose messages in HTML is unchecked: Messages should be composed as plain text by default.
Then go through each of the mail identities and verify that Compose messages in HTML is unchecked under the Composition & Addressing tab.
However if Shift is pressed along with clicking Write, Reply or whatever for composing a new message, Thunderbird opens it as HTML.
Thunderbird went from the old *.mab format to SQLite for keeping the address books. So go Tools > Import… > Pick Address Books… and pick Mork Database, and from there pick abook.mab (and possibly repeat this with history.mab, but I skipped this, because it’s too much).
Exactly like 10 years ago, the trick is to create a “chrome” directory under .thunderbird/ and then add the following file:
$ cat ~/.thunderbird/sdf2k45i.default/chrome/userChrome.css
@namespace url("http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"); /* set default namespace to XUL */

/* Setting the color of folders containing new messages to red */
treechildren::-moz-tree-cell-text(folderNameCol, newMessages-true) {
  font-weight: bold;
  color: red !important;
}
But unlike old Thunderbird, this file isn’t read by default. So to fix that, go to Preferences > General > Config Editor… (button at the bottom) and there change toolkit.legacyUserProfileCustomizations.stylesheets to true.
Thunderbird sends a regular notification when a new mail arrives, but exactly like last time, I want a dedicated icon that is dismissed only when I click it. The rationale is to be able to see if a new mail has arrived at a quick glance of the system tray. Neither zenity --notification nor notify-send were good for this, since they send the common notification (zenity used to just add an icon, but it “got better”).
But then there’s yad. I began with “apt install yad”, but that gave me a really old version that distorted the icon in the system bar. So I installed it from the git repository’s tag 1.0. I first attempted v12.0, but I ended up with the problem mentioned here, and didn’t want to mess around with it more.
Its “make install” adds /usr/local/bin/yad, as well as a lot of yad.mo under /usr/local/share/locale/*, a lot of yad.png under /usr/local/share/icons/*, yad.m4 under /usr/local/share/aclocal/ and yad.1 + pfd.1 in /usr/local/share/man/man1. So quite a lot of files, but in a sensible way.
With this done, the following script is kept (as executable) as /usr/local/bin/new-mail-icon:
#!/usr/bin/perl
use warnings;
use strict;
use Fcntl qw[ :flock ];
my $THEDIR="$ENV{HOME}/.thunderbird";
my $ICON="$THEDIR/green-mail-unread.png";
my $NOW=scalar localtime;
open(my $fh, "<", "$ICON")
or die "Can't open $ICON for read: $!";
# Lock the file. If it's already locked, the icon is already
# in the tray, so fail silently (and don't block).
flock($fh, LOCK_EX | LOCK_NB) or exit 0;
fork() && exit 0; # Only child continues
system('yad', '--notification', "--text=New mail on $NOW", "--image=$ICON", '--icon-size=32');
This script is the improved version of the previous one, and it prevents multiple icons in the tray much better: It locks the icon file exclusively and without blocking. Hence if there’s any other process that shows the icon, subsequent attempts to lock this file fail immediately.
Since the “yad” call takes a second or two, the script forks and exits before that, so it doesn’t delay Thunderbird’s machinery.
With this script in place, the Mailbox Alert is configured as follows. Add a new item to the list as in this dialog box:
The sound should be set to a WAV file of choice.
Then right-click the mail folder to have covered (Local Folders in my case), pick Mailbox Alert and enable “New Mail” and “Alert for child folders”.
Then right-click “Inbox” under this folder, and verify that nothing is checked for Mailbox Alert for it (in particular not “Default sound”). The same goes for the Outbox and Draft folders, except that “Don’t let parent folders alert for this one” should be checked for them, or else there’s a false alarm on autosaving and when using “send later”.
Later on, I changed my mind and added a message popup, so now all three checkboxes are ticked, and the Message tab reads:
I picked the icon as /usr/local/bin/thunderbird-91.10.0/chrome/icons/default/default32.png (this depends on the installation path, of course).
I’m not 100% clear why the original alert didn’t show up, even though “Show an alert” was still checked under “Incoming Mails” at Preferences > General. I actually preferred the good old one, but it seems like Mailbox Alert muted it. I unchecked it anyhow, just to be safe.
It’s not a real upgrade if a weird problem doesn’t occur out of the blue.
So attempting to Get Messages from pop3 server at localhost failed quite oddly: Every time I checked the box to use Password Manager to remember the password, it got stuck with “Main: Connected to 127.0.0.1…”. But checking with Wireshark, it turned out that Thunderbird asked the server about its capabilities (CAPA), got an answer and then did nothing for about 10 seconds, after which it closed the connection.
On the other hand, when I didn’t request remembering the password, it went fine, and so did subsequent attempts to fetch mail from the pop3 server.
Another thing was that when attempting to use Gmail’s server, I went through the entire OAuth2 thing (the browser window, and asking for my permissions) but then the mail was just stuck on “Sending message”. Like, forever.
So I followed the advice here, and deleted key3.db, key4.db, secmod.db, cert*.db and all signon* files with Thunderbird not running of course. Really old stuff.
And that fixed it.
The files that were apparently created when things got fine were logins.json, cert9.db, key4.db and pkcs11.txt. But I might have missed something.
I had some really annoying bots on one of my websites. Of the sort that make a million requests (like really, a million) per month, identifying themselves as a browser.
So IP blocking it is. I went for a minimalistic DIY approach. There are plenty of tools out there, but my experience with things like this is that in the end, it’s me and the scripts. So I might as well write them myself.
Iptables has an IP set module, which allows feeding it with a set of arbitrary IP addresses. Internally, it creates a hash with these addresses, so it’s an efficient way to keep track of multiple addresses.
IP sets have been in the kernel for ages, but the support has to be opted into the kernel with CONFIG_IP_SET. Which it most likely is.
The ipset utility may need to be installed, with something like
# apt install ipset
There seems to be a protocol mismatch issue with the kernel, which apparently is a non-issue. But every time something goes wrong with ipset, there’s a warning message about this mismatch, which is misleading. So it looks something like this.
# ipset [ ... something stupid or malformed ... ]
ipset v6.23: Kernel support protocol versions 6-7 while userspace supports protocol versions 6-6
[ ... some error message related to the stupidity ... ]
So the important thing to be aware of is that odds are that the problem isn’t the version mismatch, but between chair and keyboard.
A quick session:

# ipset create testset hash:ip
# ipset add testset 1.2.3.4
# iptables -I INPUT -m set --match-set testset src -j DROP
# ipset del testset 1.2.3.4
Attempting to add an IP address that is already in the list causes a warning, and the address isn’t added. So no need to check if the address is already there. Besides, there’s the -exist option, which is really great.
List the members of the IP set:
# ipset -L
An entry can have a timeout feature, which works exactly as one would expect: The rule vanishes after the timeout expires. The timeout entry in ipset -L counts down.
For this to work, the set must be created with a default timeout attribute. Zero means that timeout is disabled (which I chose as a default in this example).
# ipset create testset hash:ip timeout 0
# ipset add testset 1.2.3.4 timeout 10
The ‘-exist’ flag causes ipset to re-add an existing entry, which also resets its timeout. So this is the way to keep the list fresh.
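So refreshing an entry, here with a 24-hour timeout, boils down to:

# ipset add -exist testset 1.2.3.4 timeout 86400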
It’s tempting to put the DROP rule with --match-set first, because hey, let’s give those intruders the boot right away. But doing that, there might be TCP connections lingering, because the last FIN packet is caught by the firewall as the new rule is added. Given that adding an IP address is the result of a flood of requests, this is a realistic scenario.
The solution is simple: There’s most likely a “state RELATED,ESTABLISHED” rule somewhere in the list. So push it to the top. The rationale is simple: If a connection has begun, don’t chop it in the middle in any case. It’s the first packet that we want killed.
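In other words, something like this ordering (a sketch, to be adapted to the existing ruleset):

# iptables -I INPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
# iptables -I INPUT 2 -m set --match-set mysiteset src -j DROP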
The rule in iptables must refer to an existing set. So if the rule that relies on the set is part of the persistent firewall rules, it must be created before the script that brings up iptables runs.
This is easily done by adding a rule file like this as /usr/share/netfilter-persistent/plugins.d/10-ipset
#!/bin/sh
IPSET=/sbin/ipset
SET=mysiteset
case "$1" in
start|restart|reload|force-reload)
$IPSET destroy
$IPSET create $SET hash:ip timeout 0
;;
save)
echo "ipset-persistent: The save option does nothing"
;;
stop|flush)
$IPSET flush $SET
;;
*)
echo "Usage: $0 {start|restart|reload|force-reload|save|flush}" >&2
exit 1
;;
esac
exit 0
The idea is that the index 10 in the file’s name is smaller than the rule that sets up iptables, so it runs first.
This script is a dirty hack, but hey, it works. There’s a small project on this, for those who like to do it properly.
The operating system in question is systemd-based, but this old school style is still in effect.
The Perl script that performs the blacklisting is crude and inaccurate, but simple. This is the part to tweak and play with, and in particular adapt to each specific website. It’s all about detecting abnormal access.
Truth be told, I replaced this script with a more sophisticated mechanism pretty much right away on my own system. But what’s really interesting is the calls to ipset.
This script reads through Apache’s access log file, and analyzes each minute in time (as in 60 seconds). In other words, all accesses that have the same timestamp, with the seconds part ignored. Note that the regex part that captures $time in the script ignores the last part of :\d\d.
If the same IP address appears more than 50 times, that address is blacklisted, with a timeout of 86400 seconds (24 hours). Log lines that correspond to page requisites and such (images, style files etc.) are skipped for this purpose. Otherwise, it’s easy to reach 50 accesses within a minute with legit web browsing.
There are several imperfections about this script, among others:
The script goes as follows:
#!/usr/bin/perl
use warnings;
use strict;
my $logfile = '/var/log/mysite.com/access.log';
my $limit = 50; # 50 accesses per minute
my $timeout = 86400;
open(my $in, "<", $logfile)
or die "Can't open $logfile for read: $!\n";
my $current = '';
my $l;
my %h;
my %blacklist;
while (defined ($l = <$in>)) {
my ($ip, $time, $req) = ($l =~ /^([^ ]+).*?\[(.+?):\d\d[ ].*?\"\w+[ ]+([^\"]+)/);
unless (defined $ip) {
# warn("Failed to parse line $l\n");
next;
}
next
if ($req =~ /^\/(?:media\/|robots\.txt)/);
unless ($time eq $current) {
foreach my $k (sort keys %h) {
$blacklist{$k} = 1
if ($h{$k} >= $limit);
}
%h = ();
$current = $time;
}
$h{$ip}++;
}
close $in;
foreach my $k (sort keys %blacklist) {
system('/sbin/ipset', 'add', '-exist', 'mysiteset', $k, 'timeout', $timeout);
}
It has to be run as root, of course. Most likely as a cronjob.
Due to an incident that is beyond the scope of this blog, I wanted to put a 24/7 camera that watched a certain something, just in case that incident repeated itself.
Having a laptop that I barely use, and a cheap e-bay web camera, I thought I’d set up something and let ffmpeg do the job.
I’m not sure if a Raspberry Pi would be up for this job, even when connected to an external hard disk through USB. It depends much on how well ffmpeg performs on that platform. Haven’t tried. The laptop’s clear advantage is when there’s a brief power outage.
Overall verdict: It’s as good as the stability of the USB connection with the camera.
Note to self: I keep this in the misc/utils git repo, under surveillance-cam/.
Show the webcam’s image on screen, the ffmpeg way:
$ ffplay -f video4linux2 /dev/video0
Let ffmpeg list the formats:
$ ffplay -f video4linux2 -list_formats all /dev/video0
Or with a dedicated tool:
# apt install v4l-utils
and then
$ v4l2-ctl --list-formats-ext -d /dev/video0
Possibly also use “lsusb -v” on the device: It lists the format information, not necessarily in a user-friendly way, but that’s the actual source of information.
Get all parameters that can be tweaked:
$ v4l2-ctl --all
See an example output for this command at the bottom of this post.
If control over the exposure time is available, it will be listed as “exposure_absolute” (none of the webcams I tried had this). The exposure time is given in units of 100µs (see e.g. the definition of V4L2_CID_EXPOSURE_ABSOLUTE).
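If the control does show up, setting it would go something like this (the values are hypothetical, and on many UVC cameras the auto exposure control must be switched to manual first, or the setting is ignored):

$ v4l2-ctl -d /dev/video0 --set-ctrl=exposure_auto=1
$ v4l2-ctl -d /dev/video0 --set-ctrl=exposure_absolute=100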
Get a specific parameter, e.g. brightness
$ v4l2-ctl --get-ctrl=brightness
brightness: 137
Set the control (can be done while the camera is capturing video)
$ v4l2-ctl --set-ctrl=brightness=255
This is a simple bash script that creates .mp4 files from the captured video:
#!/bin/bash
OUTDIR=/extra/videos
SRC=/dev/v4l/by-id/usb-Generic*
DURATION=3600 # In seconds
while [ 1 ]; do
TIME=`date +%F-%H%M%S`
if ! ffmpeg -f video4linux2 -i $SRC -t $DURATION -r 10 $OUTDIR/video-$TIME.mp4 < /dev/null ; then
echo 2-2 | sudo tee /sys/bus/usb/drivers/usb/unbind
echo 2-2 | sudo tee /sys/bus/usb/drivers/usb/bind
sleep 5;
fi
done
Comments on the script:
This is the smoking gun in Xorg.0.log: Lots of
[1194182.076] (II) config/udev: Adding input device USB2.0 PC CAMERA: USB2.0 PC CAM (/dev/input/event421)
[1194182.076] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: Applying InputClass "evdev keyboard catchall"
[1194182.076] (II) Using input driver 'evdev' for 'USB2.0 PC CAMERA: USB2.0 PC CAM'
[1194182.076] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: always reports core events
[1194182.076] (**) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Device: "/dev/input/event421"
[1194182.076] (--) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Vendor 0x1908 Product 0x2311
[1194182.076] (--) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Found keys
[1194182.076] (II) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Configuring as keyboard
[1194182.076] (EE) Too many input devices. Ignoring USB2.0 PC CAMERA: USB2.0 PC CAM
[1194182.076] (II) UnloadModule: "evdev"
and at some point the sad end:
[1194192.408] (II) config/udev: Adding input device USB2.0 PC CAMERA: USB2.0 PC CAM (/dev/input/event423)
[1194192.408] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: Applying InputClass "evdev keyboard catchall"
[1194192.408] (II) Using input driver 'evdev' for 'USB2.0 PC CAMERA: USB2.0 PC CAM'
[1194192.408] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: always reports core events
[1194192.408] (**) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Device: "/dev/input/event423"
[1194192.445] (EE)
[1194192.445] (EE) Backtrace:
[1194192.445] (EE) 0: /usr/bin/X (xorg_backtrace+0x48) [0x564128416d28]
[1194192.445] (EE) 1: /usr/bin/X (0x56412826e000+0x1aca19) [0x56412841aa19]
[1194192.445] (EE) 2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f6e4d8b4000+0x10340) [0x7f6e4d8c4340]
[1194192.445] (EE) 3: /usr/lib/xorg/modules/input/evdev_drv.so (0x7f6e45c4c000+0x39f5) [0x7f6e45c4f9f5]
[1194192.445] (EE) 4: /usr/lib/xorg/modules/input/evdev_drv.so (0x7f6e45c4c000+0x68df) [0x7f6e45c528df]
[1194192.445] (EE) 5: /usr/bin/X (0x56412826e000+0xa1721) [0x56412830f721]
[1194192.446] (EE) 6: /usr/bin/X (0x56412826e000+0xb731b) [0x56412832531b]
[1194192.446] (EE) 7: /usr/bin/X (0x56412826e000+0xb7658) [0x564128325658]
[1194192.446] (EE) 8: /usr/bin/X (WakeupHandler+0x6d) [0x5641282c839d]
[1194192.446] (EE) 9: /usr/bin/X (WaitForSomething+0x1bf) [0x5641284142df]
[1194192.446] (EE) 10: /usr/bin/X (0x56412826e000+0x55771) [0x5641282c3771]
[1194192.446] (EE) 11: /usr/bin/X (0x56412826e000+0x598aa) [0x5641282c78aa]
[1194192.446] (EE) 12: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0xf5) [0x7f6e4c2f3ec5]
[1194192.446] (EE) 13: /usr/bin/X (0x56412826e000+0x44dde) [0x5641282b2dde]
[1194192.446] (EE)
[1194192.446] (EE) Segmentation fault at address 0x10200000adb
[1194192.446] (EE) Fatal server error:
[1194192.446] (EE) Caught signal 11 (Segmentation fault). Server aborting
[1194192.446] (EE)
Apparently, the webcam presents itself as a keyboard, among others. I guess the chipset has inputs for control buttons (which the specific webcam doesn’t have), so as the USB device goes on and off, X Windows registers the nonexistent keyboard on and off, and eventually some bug causes it to crash. It might very well be that the camera connected and started some kind of connection event handler, which didn’t finish its job before the camera disconnected. Somewhere in the code, the handler fetched information that didn’t exist, got a bad pointer instead (NULL?) and used it. Boom. Just a wild guess, but this is the typical scenario.
Anyhow, it’s a really old OS (Ubuntu 14.04.1) so this bug might have been solved long ago.
This is a small & junky webcam. Clearly no control over exposure time.
$ v4l2-ctl --all -d /dev/v4l/by-id/usb-Generic_USB2.0_PC_CAMERA-video-index0
Driver Info (not using libv4l2):
	Driver name   : uvcvideo
	Card type     : USB2.0 PC CAMERA: USB2.0 PC CAM
	Bus info      : usb-0000:00:14.0-2
	Driver version: 4.14.0
	Capabilities  : 0x84200001
		Video Capture
		Streaming
		Device Capabilities
	Device Caps   : 0x04200001
		Video Capture
		Streaming
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
	Width/Height  : 640/480
	Pixel Format  : 'YUYV'
	Field         : None
	Bytes per Line: 1280
	Size Image    : 614400
	Colorspace    : Unknown (00000000)
	Custom Info   : feedcafe
Crop Capability Video Capture:
	Bounds      : Left 0, Top 0, Width 640, Height 480
	Default     : Left 0, Top 0, Width 640, Height 480
	Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 640, Height 480
Selection: crop_bounds, Left 0, Top 0, Width 640, Height 480
Streaming Parameters Video Capture:
	Capabilities     : timeperframe
	Frames per second: 30.000 (30/1)
	Read buffers     : 0
brightness (int)             : min=0 max=255 step=1 default=128 value=128
contrast (int)               : min=0 max=255 step=1 default=130 value=130
saturation (int)             : min=0 max=255 step=1 default=64 value=64
hue (int)                    : min=-127 max=127 step=1 default=0 value=0
gamma (int)                  : min=1 max=8 step=1 default=4 value=4
power_line_frequency (menu)  : min=0 max=2 default=1 value=1
sharpness (int)              : min=0 max=15 step=1 default=13 value=13
backlight_compensation (int) : min=1 max=5 step=1 default=1 value=1
There is a widespread belief that in order to use git send-email with Gmail, there’s a need to subscribe to Google Cloud services and obtain some credentials. Or that two-factor authentication (2fa) is required.
This is not the case, however. If Thunderbird can manage to fetch and send emails through Google’s mail servers (as well as other OAUTH2 authenticated mail services), there’s no reason why a utility won’t be able to do the same.
The subscription to Google’s services is indeed required if the communication with Google’s server must be done without human supervision. That’s the whole point with API keys. If a human is around when the mail is dispatched, there’s no need for any special measures. And it’s quite obvious that there’s a responsive human around when a patch is being submitted.
What is actually needed, is a client ID and a client secret, and these are indeed obtained by registering to Google’s cloud service (this explains how). But here’s the thing: Someone at Mozilla has already obtained these, and hardcoded them into Thunderbird itself. So there’s no problem using these to access Gmail with another mail client. It seems like many believe that the client ID and secret must be related to the mail account to access, and therefore each and every one has to obtain their own pair. That’s a mistake that has made a lot of people angry for nothing.
This post describes how to use git send-email without any further involvement with Google, except for having a Gmail account. The same method surely applies to other mail service providers that rely on OAUTH2, but I haven’t gotten into that; it should be quite easy to apply the same idea to other services as well.
For this to work, Thunderbird must be configured to access the same email account. This doesn’t mean that you actually have to use Thunderbird for your mail exchange. It’s actually enough to configure the Gmail server as an outgoing mail server for the relevant account. In other words, you don’t even need to fetch mails from the server with Thunderbird.
The point is to make Thunderbird set up the OAUTH2 session, and then fetch the relevant piece of credentials from it. And take it from there with Google’s servers. Thunderbird is a good candidate for taking care of the session’s setup, because the whole idea with OAUTH2 is that the user / password session (plus possible additional authentication challenges) is done with a browser. Since Thunderbird is Firefox in disguise, it integrates the browser session well into its general flow.
If you want to use another piece of software to maintain the OAUTH2 session, that’s most likely possible, given that you can get its refresh token. This will also require obtaining its client ID and client secret. Odds are that it can be found somewhere in that software’s sources, exactly as I found it for Thunderbird. Or look at the https connection it runs to get an access token (which isn’t all that easy, encryption and that).
All below relates to Linux Mint 19, Thunderbird 91.10.0, git version 2.17.1, Perl 5.26 and msmtp 1.8.14. But except for Thunderbird and msmtp, I don’t think the versions are going to matter.
It’s highly recommended to read through my blog post on OAUTH2, in particular the section called “The authentication handshake in a nutshell”. You’re going to need to know the difference between an access token and a refresh token sooner or later.
So the first obstacle is the fact that git send-email relies on the system’s sendmail to send out the emails. That utility doesn’t support OAUTH2 at the time of writing this. So instead, I used msmtp, which is a drop-in replacement for sendmail, plus it supports OAUTH2 (since version 1.8.13).
msmtp identifies itself to the server by sending it an access token in the SMTP session (see a dump of a sample session below). This access token is short-lived (3600 seconds from Google as of writing this), so it can’t be fetched from Thunderbird just like that. In particular because most of the time Thunderbird doesn’t have it.
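For the curious, the access token goes into the AUTH XOAUTH2 line as a base64 encoding of a fixed-format string (the standard SASL XOAUTH2 format). This is a minimal sketch of how such a blob is put together; the user name and token values are made up, of course:

use strict;
use warnings;
use MIME::Base64 qw(encode_base64);

my $user = 'mail.username';       # The Gmail user name
my $access_token = 'ya29.a0...';  # Obtained with the refresh token

# SASL XOAUTH2: "user=" {user} ^A "auth=Bearer " {token} ^A ^A
# The second argument to encode_base64() suppresses newlines in the output.
my $blob = encode_base64("user=$user\x01auth=Bearer $access_token\x01\x01", '');

print "AUTH XOAUTH2 $blob\n";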
What Thunderbird does have is a refresh token. It’s a completely automatic task to ask Google’s server for the access token with the refresh token at hand. It’s also an easy task (once you’ve figured out how to do it, that is). It’s also easy to get the refresh token from Thunderbird, exactly in the same way as getting a saved password. In fact, Thunderbird treats the refresh token as a password.
msmtp allows executing an arbitrary program in order to get the password or the access token. So I wrote a Perl script (oauth2-helper.pl) that reads the refresh token from a file and gets an access token from Google’s server. This is how msmtp manages to authenticate itself.
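The actual script is in the Github repo mentioned below, but just to show the principle, the exchange with Google's token server boils down to roughly this sketch (not the script itself; the client ID and secret are placeholders for the pair lifted from Thunderbird):

use strict;
use warnings;
use LWP::UserAgent;
use JSON qw(decode_json);

my $client_id     = 'the-client-id-from-thunderbird';
my $client_secret = 'the-client-secret-from-thunderbird';

open(my $fh, '<', "$ENV{HOME}/.oauth2_reftoken") or die "No refresh token file: $!\n";
my $refresh_token = <$fh>;
close $fh;
chomp $refresh_token;

# Ask Google's token endpoint for a fresh access token
my $ua = LWP::UserAgent->new;
my $res = $ua->post('https://oauth2.googleapis.com/token', {
  client_id     => $client_id,
  client_secret => $client_secret,
  refresh_token => $refresh_token,
  grant_type    => 'refresh_token',
});

die "Token request failed: " . $res->status_line . "\n" unless $res->is_success;

my $data = decode_json($res->decoded_content);
print $data->{access_token}; # Valid for $data->{expires_in} seconds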
So everything relies on this refresh token. In principle, it can change every time it’s used. In practice, as of today, Google’s servers don’t change it. It seems like the refresh token is automatically replaced every six months, but even if that’s true today, it may change.
But that doesn’t matter so much. All that is necessary is that the refresh token is correct once. If the refresh token goes out of sync with Google’s server, a simple user / password session rectifies this. And as of now, than virtually never happens.
So let’s get to the hands-on part.
Odds are that your distribution offers msmtp, so it can be installed with something like
# apt install msmtp
Note however that the version needs to be at least 1.8.13, which wasn’t my case (Linux Mint 19). So I installed it from the sources. To do that, first install the TLS library, if it’s not installed already (as root):
# apt install gnutls-dev
Then clone the git repository, compile and install:
$ GIT_SSL_NO_VERIFY=true git clone http://git.marlam.de/git/msmtp.git
$ cd msmtp
$ git checkout msmtp-1.8.14
$ autoreconf -i
$ ./configure
$ make && echo Success
$ sudo make install
The installation goes to /usr/local/bin and other /usr/local/ paths, as one would expect.
I checked out version 1.8.14 because later versions failed to compile on my Linux Mint 19. OAUTH2 support was added in 1.8.13, and judging by the commit messages it hasn’t been changed since, except for commit 1f3f4bfd098, which is “Send XOAUTH2 in two lines, required by Microsoft servers”. Possibly cherry-pick this commit (I didn’t).
Once everything has been set up as described below, it’s possible to send an email with
$ msmtp -v -t < ~/email.eml
The -v flag is used only for debugging, and it prints out the entire SMTP session.
The -t flag tells msmtp to fetch the recipients from the mail's own headers. Otherwise, the recipients need to be listed in the command line, just like with sendmail. Without this flag or recipients, msmtp just replies with
msmtp: no recipients found
The -t flag isn’t necessary with git send-email, because it explicitly lists the recipients in the command line.
As mentioned above, Thunderbird has the refresh token, but msmtp needs an access token. So the script that talks with Google's server and grabs the access token can be downloaded from its Github repo. Save it, with execution permission, to /usr/local/bin/oauth2-helper.pl (or whatever, but this is what I assume in the configurations below).
Some Perl libraries may be required to run this script. On a Debian-based system, the packages’ names are probably something like libhttp-message-perl, libwww-perl and libjson-perl.
It’s written to access Google’s token server, but can be modified easily to access a different service provider by changing the parameters at its beginning. For other email providers, check if it happens to be listed in OAuth2Providers.jsm. I don’t know how well it will work with those other providers, though.
The script reads the refresh token from ~/.oauth2_reftoken as a plain file containing the blob only. There's an inherent security risk of having this token stored like this, but it's basically the same risk as the fact that it can be obtained from Thunderbird's credential files. The difference is the amount of security by obscurity. Anyhow, the refresh token isn't your password, and your password can't be derived from it. Either way, make sure that this file has 0600 or 0400 permission if you're running on a multi-user computer.
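That is, simply:

$ chmod 600 ~/.oauth2_reftoken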
The script caches the access token in ~/.oauth2_acctoken, with an expiration timestamp. As of today, it means that the script talks with Google's server once every 60 minutes at most.
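The caching logic is along the lines of this sketch. The file format (expiry timestamp followed by the token, on one line) is my assumption for illustration, not necessarily what the real script does:

use strict;
use warnings;

my $cachefile = "$ENV{HOME}/.oauth2_acctoken";

if (open(my $fh, '<', $cachefile)) {
  my ($expiry, $token) = split ' ', scalar <$fh>;
  close $fh;

  if (defined $token && time() < $expiry - 60) { # A minute of safety margin
    print $token;
    exit 0; # Cached token still valid, no need to talk with Google's server
  }
}

# Otherwise: Fetch a fresh access token and rewrite $cachefile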
So with msmtp installed and the script downloaded into /usr/local/bin/oauth2-helper.pl, all that is left is configuration files.
First, create ~/.msmtprc as follows (put your Gmail username instead of mail.username, of course):
account default
host smtp.gmail.com
port 587
tls on
tls_starttls on
auth xoauth2
user mail.username
passwordeval /usr/local/bin/oauth2-helper.pl
from mail.username@gmail.com
And then change the [sendemail] section in ~/.gitconfig to
[sendemail]
smtpServer = /usr/local/bin/msmtp
That’s it. Only that single line. It’s however possible to use smtpServerOption in the .gitconfig to add various flags. So for example, to get the entire SMTP session shown while sending the email, it should say:
[sendemail]
smtpServer = /usr/local/bin/msmtp
smtpServerOption = -v
But really, don’t, unless there’s a problem sending mails.
Other than that, don’t keep old settings. For example, there should not be a “from=” entry in .gitconfig. Having such causes a “From:” header to be added into the mail body (so it’s visible to the reader of the mail). This header is created when there is a difference between the “From” that is generated by git send-email (which is taken from the “from=” entry) and the patch’ author, as it appears in the patch’ “From” header. The purpose of this in-body header is to tell “git am” who the real author is (i.e. not the sender of the patch). So this extra header won’t appear in the commit, but it nevertheless makes the sender of the message look somewhat clueless.
So in short, no old junk.
Unless it’s the first time, I suggest just trying to send the patch to your own email address, and see if it works. There’s a good chance that the refresh token from the previous time will still be good, so it will just work, and no point hassling more.
Actually, it’s fine to try like this even on the first time, because the Perl script will fail to grab the access token and then tell you what to do to fix it, namely:
And then go, as usual:
$ git send-email --to 'my@test.mail' 0001-my.patch
I’ve added the output of a successful session (with the -v flag) below.
It would have been nicer to fetch the refresh token automatically from Thunderbird’s credentials store (that is from logins.json, based upon the decryption key that is kept in key4.db), but the available scripts for that are written in Python. And to me Python is equal to “will cause trouble sooner or later”. Anyhow, this tutorial describes the mechanism (in the part about Firefox).
Besides, it could have been even nicer if the script was completely standalone, and didn’t depend on Thunderbird at all. That requires doing the whole dance with the browser, something I have no motivation to get into.
This is what it looks like when a patch is properly sent, with the smtpServerOption = -v line in .gitconfig (so msmtp produces verbose output):
Send this email? ([y]es|[n]o|[q]uit|[a]ll): y
ignoring system configuration file /usr/local/etc/msmtprc: No such file or directory
loaded user configuration file /home/eli/.msmtprc
falling back to default account
Fetching access token based upon refresh token in /home/eli/.oauth2_reftoken...
using account default from /home/eli/.msmtprc
host = smtp.gmail.com
port = 587
source ip = (not set)
proxy host = (not set)
proxy port = 0
socket = (not set)
timeout = off
protocol = smtp
domain = localhost
auth = XOAUTH2
user = mail.username
password = *
passwordeval = /usr/local/bin/oauth2-helper.pl
ntlmdomain = (not set)
tls = on
tls_starttls = on
tls_trust_file = system
tls_crl_file = (not set)
tls_fingerprint = (not set)
tls_key_file = (not set)
tls_cert_file = (not set)
tls_certcheck = on
tls_min_dh_prime_bits = (not set)
tls_priorities = (not set)
tls_host_override = (not set)
auto_from = off
maildomain = (not set)
from = mail.username@gmail.com
set_from_header = auto
set_date_header = auto
remove_bcc_headers = on
undisclosed_recipients = off
dsn_notify = (not set)
dsn_return = (not set)
logfile = (not set)
logfile_time_format = (not set)
syslog = (not set)
aliases = (not set)
reading recipients from the command line
<-- 220 smtp.gmail.com ESMTP m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
--> EHLO localhost
<-- 250-smtp.gmail.com at your service, [109.186.183.118]
<-- 250-SIZE 35882577
<-- 250-8BITMIME
<-- 250-STARTTLS
<-- 250-ENHANCEDSTATUSCODES
<-- 250-PIPELINING
<-- 250-CHUNKING
<-- 250 SMTPUTF8
--> STARTTLS
<-- 220 2.0.0 Ready to start TLS
TLS session parameters:
    (TLS1.2)-(ECDHE-ECDSA-SECP256R1)-(CHACHA20-POLY1305)
TLS certificate information:
    Subject: CN=smtp.gmail.com
    Issuer: C=US,O=Google Trust Services LLC,CN=GTS CA 1C3
    Validity:
        Activation time: Mon 26 Sep 2022 11:22:04 AM IDT
        Expiration time: Mon 19 Dec 2022 10:22:03 AM IST
    Fingerprints:
        SHA256: 53:F3:CA:1D:37:F2:1F:ED:2C:67:40:A2:A2:29:C2:C8:E8:AF:9E:60:7A:01:92:EC:F0:2A:11:E8:37:A5:88:F3
        SHA1 (deprecated): D4:69:6E:59:2D:75:43:59:02:74:25:67:E7:57:40:E0:28:43:A8:62
--> EHLO localhost
<-- 250-smtp.gmail.com at your service, [109.186.183.118]
<-- 250-SIZE 35882577
<-- 250-8BITMIME
<-- 250-AUTH LOGIN PLAIN XOAUTH2 PLAIN-CLIENTTOKEN OAUTHBEARER XOAUTH
<-- 250-ENHANCEDSTATUSCODES
<-- 250-PIPELINING
<-- 250-CHUNKING
<-- 250 SMTPUTF8
--> AUTH XOAUTH2 dXNlcj1lbGkuYmlsbGF1ZXIBYXV0aD1CZWFyZXIgeWEyOS5hMEFhNHhyWE1GM1gtOTJMVWNidjE4MFdVOBROENRcUdSbk5KaUFSY0VSckVaXzdzbDlHMTNpdFIyUTk0NjlKWG45aHVGLQVRBU0FSTVXJpSjRqMjBLcWh6WU9GekxlcU5BYVpFNUU4WXRhNjdLUXpCRm1HRDg3dFgzeHJ4amNPTnRVTkZFVWdESXhsUlcxOFhVT0pqQ1hPSlFwZlNGUUVqRHZMOWw4RExkTjlKZlNbGRTazNNbFNMNjVfQWFDZ1lLVVF2Y0luOWNSSUEwMTY2AQE=
<-- 235 2.7.0 Accepted
--> MAIL FROM:<mail.username@gmail.com>
--> RCPT TO:<test@mail.com>
--> RCPT TO:<mail.username@gmail.com>
--> DATA
<-- 250 2.1.0 OK m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
<-- 250 2.1.5 OK m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
<-- 250 2.1.5 OK m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
<-- 354 Go ahead m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
--> From: Eli Billauer <mail.username@gmail.com>
--> To: test@mail.com
--> Cc: Eli Billauer <mail.username@gmail.com>
--> Subject: [PATCH v8] Gosh! Why don't you apply this patch already!
--> Date: Sun, 30 Oct 2022 07:01:14 +0200
--> Message-Id: <20221030050114.49299-1-mail.username@gmail.com>
--> X-Mailer: git-send-email 2.17.1
--> [ ... email body comes here ... ]
--> --
--> 2.17.1
-->
--> .
<-- 250 2.0.0 OK 1667106108 m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
--> QUIT
<-- 221 2.0.0 closing connection m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
OK. Log says:
Sendmail: /usr/local/bin/msmtp -v -i test@mail.com mail.username@gmail.com
From: Eli Billauer <mail.username@gmail.com>
To: test@mail.com
Cc: Eli Billauer <mail.username@gmail.com>
Subject: [PATCH v8] Gosh! Why don't you apply this patch already!
Date: Sun, 30 Oct 2022 07:01:14 +0200
Message-Id: <20221030050114.49299-1-mail.username@gmail.com>
X-Mailer: git-send-email 2.17.1
Result: OK
Ah, and the fact that the access token can be copied from here is of course meaningless, as it has expired long ago.
These are some random notes I made while digging in Thunderbird’s guts to find out what’s going on.
So this is Thunderbird’s official git repo. Not that I used it.
To get logging info from Thunderbird: Based upon this page, go to Thunderbird’s preferences > General and click the Config Editor button. Set mailnews.oauth.loglevel to All (was Warn). Same with mailnews.smtp.loglevel. Then open the Error Console with Ctrl+Shift+J.
The cute thing about these logs is that the access token is written to the log. So it's possible to skip the Perl script, and use the access token from Thunderbird's log. Really inconvenient, but possible.
The OAuth2 token request is implemented in Oauth2.jsm. It's possible to set a breakpoint in this module through Tools > Developer Tools > Developer Toolbox, and once it opens (after requesting permission for the external connection), go to the debugger.
Find Oauth2.jsm in the sources pane to the left (of the Debugger tab), under resource:// modules > sessionstore. Add a breakpoint in requestAccessToken() so that the clientID and consumerSecret properties can be revealed.
This is a really bad idea. But if you have Thunderbird, and need to send a patch right now, this is a quick, dirty and somewhat dangerous procedure for doing that.
Why is it dangerous? Because at some point, it’s easy to pick “Send now” instead of “Send later”, and boom, a junk patch is mailed to the whole world.
The problem with Thunderbird is that it makes small changes to the patch's body. So to work around this, there's a really silly procedure. I used it once, and I'm not proud of that.
So here we go.
First, a very simple script that outputs the patch mail into a file. Say that I called it dumpit (should be executable, of course):
#!/bin/bash
cat > /home/eli/Desktop/git-send-email.eml
Then change ~/.gitconfig, so it reads something like this in the [sendemail] section:
[sendemail]
from = mail.username@gmail.com
smtpServer = /home/eli/Desktop/dumpit
So basically it uses the silly script as a mail server, and the content goes out to a plain file.
Then run git send-email as usual. The result is the patch mail, written into git-send-email.eml as a plain file.
And now comes the part of making Thunderbird send it.
Are you sure you want to do this?
This is a spin-off post about failing attempts to fix the problem with a webcam's keyboard buttons. Namely, that a shaky physical connection caused the USB device to go on and off the bus rapidly, and consequently crash X windows. The background story is in this post.
There is really nothing to learn from this post regarding how to accomplish something. The only reason I don’t trash this is that there’s some possibly useful information about udev.
There is a possibility to ban a USB device from being accessed by Linux, by virtue of the "authorized" attribute. Something like this:
# cd /sys/devices/pci0000:00/0000:00:14.0/usb2/2-5/
# echo 0 > authorized
^C^Z
# echo 1 > authorized
bash: echo: write error: Invalid argument
The ^C^Z after the first command is not a mistake. The first command got stuck for several seconds.
And this can be done with udev rules as well.
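For example, a hypothetical rule along these lines (using the camera's vendor/product IDs; not what I ended up doing) should have the same effect as the echo above:

ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="1908", ATTR{idProduct}=="2311", ATTR{authorized}="0"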
But surprisingly enough, there doesn’t seem to be a way to avoid the generation of the /dev/input/event* file without ignoring the USB device completely. It’s possible to delete it early enough, but that doesn’t really help, it turns out.
ATTRS{authorized} can be set to 0 only for the entire USB device. There is no such parameter for a udev event with the “input” subsystem.
While trying to figure out the ATTRS{authorized} thing, these are my little play-arounds. Nothing really useful here:
$ sudo udevadm monitor --udev --property
I got
UDEV [5662716.427855] add /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1 (usb)
ACTION=add
BUSNUM=001
DEVNAME=/dev/bus/usb/001/098
DEVNUM=098
DEVPATH=/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1
DEVTYPE=usb_device
DRIVER=usb
ID_BUS=usb
ID_MODEL=USB2.0_PC_CAMERA
ID_MODEL_ENC=USB2.0\x20PC\x20CAMERA
ID_MODEL_ID=2311
ID_REVISION=0100
ID_SERIAL=Generic_USB2.0_PC_CAMERA
ID_USB_INTERFACES=:0e0100:0e0200:
ID_VENDOR=Generic
ID_VENDOR_ENC=Generic
ID_VENDOR_FROM_DATABASE=GEMBIRD
ID_VENDOR_ID=1908
MAJOR=189
MINOR=97
PRODUCT=1908/2311/100
SEQNUM=24413
SUBSYSTEM=usb
TYPE=239/2/1
USEC_INITIALIZED=5662716427506
UDEV [5662716.430744] add /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.1 (usb)
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.1
DEVTYPE=usb_interface
DRIVER=uvcvideo
ID_USB_CLASS_FROM_DATABASE=Miscellaneous Device
ID_USB_PROTOCOL_FROM_DATABASE=Interface Association
ID_VENDOR_FROM_DATABASE=GEMBIRD
INTERFACE=14/2/0
MODALIAS=usb:v1908p2311d0100dcEFdsc02dp01ic0Eisc02ip00in01
PRODUCT=1908/2311/100
SEQNUM=24420
SUBSYSTEM=usb
TYPE=239/2/1
USEC_INITIALIZED=5662716430425
UDEV [5662716.430935] add /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0 (usb)
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0
DEVTYPE=usb_interface
DRIVER=uvcvideo
ID_USB_CLASS_FROM_DATABASE=Miscellaneous Device
ID_USB_PROTOCOL_FROM_DATABASE=Interface Association
ID_VENDOR_FROM_DATABASE=GEMBIRD
INTERFACE=14/1/0
MODALIAS=usb:v1908p2311d0100dcEFdsc02dp01ic0Eisc01ip00in00
PRODUCT=1908/2311/100
SEQNUM=24414
SUBSYSTEM=usb
TYPE=239/2/1
USEC_INITIALIZED=5662716430396
UDEV [5662716.433265] add /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/media5 (media)
ACTION=add
DEVNAME=/dev/media5
DEVPATH=/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/media5
MAJOR=509
MINOR=5
SEQNUM=24416
SUBSYSTEM=media
USEC_INITIALIZED=5662716433110
UDEV [5662716.435400] bind /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.1 (usb)
ACTION=bind
DEVPATH=/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.1
DEVTYPE=usb_interface
DRIVER=uvcvideo
ID_USB_CLASS_FROM_DATABASE=Miscellaneous Device
ID_USB_PROTOCOL_FROM_DATABASE=Interface Association
ID_VENDOR_FROM_DATABASE=GEMBIRD
INTERFACE=14/2/0
MODALIAS=usb:v1908p2311d0100dcEFdsc02dp01ic0Eisc02ip00in01
PRODUCT=1908/2311/100
SEQNUM=24421
SUBSYSTEM=usb
TYPE=239/2/1
USEC_INITIALIZED=5662716430425
UDEV [5662716.436539] add /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/video4linux/video0 (video4linux)
ACTION=add
COLORD_DEVICE=1
COLORD_KIND=camera
DEVLINKS=/dev/v4l/by-id/usb-Generic_USB2.0_PC_CAMERA-video-index0 /dev/v4l/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-video-index0
DEVNAME=/dev/video0
DEVPATH=/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/video4linux/video0
ID_BUS=usb
ID_FOR_SEAT=video4linux-pci-0000_00_14_0-usb-0_5_2_1_1_0
ID_MODEL=USB2.0_PC_CAMERA
ID_MODEL_ENC=USB2.0\x20PC\x20CAMERA
ID_MODEL_ID=2311
ID_PATH=pci-0000:00:14.0-usb-0:5.2.1:1.0
ID_PATH_TAG=pci-0000_00_14_0-usb-0_5_2_1_1_0
ID_REVISION=0100
ID_SERIAL=Generic_USB2.0_PC_CAMERA
ID_TYPE=video
ID_USB_DRIVER=uvcvideo
ID_USB_INTERFACES=:0e0100:0e0200:
ID_USB_INTERFACE_NUM=00
ID_V4L_CAPABILITIES=:capture:
ID_V4L_PRODUCT=USB2.0 PC CAMERA: USB2.0 PC CAM
ID_V4L_VERSION=2
ID_VENDOR=Generic
ID_VENDOR_ENC=Generic
ID_VENDOR_ID=1908
MAJOR=81
MINOR=0
SEQNUM=24415
SUBSYSTEM=video4linux
TAGS=:seat:uaccess:
USEC_INITIALIZED=5662716436054
UDEV [5662716.436956] add /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121 (input)
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121
EV=3
ID_BUS=usb
ID_FOR_SEAT=input-pci-0000_00_14_0-usb-0_5_2_1_1_0
ID_INPUT=1
ID_INPUT_KEY=1
ID_MODEL=USB2.0_PC_CAMERA
ID_MODEL_ENC=USB2.0\x20PC\x20CAMERA
ID_MODEL_ID=2311
ID_PATH=pci-0000:00:14.0-usb-0:5.2.1:1.0
ID_PATH_TAG=pci-0000_00_14_0-usb-0_5_2_1_1_0
ID_REVISION=0100
ID_SERIAL=Generic_USB2.0_PC_CAMERA
ID_TYPE=video
ID_USB_DRIVER=uvcvideo
ID_USB_INTERFACES=:0e0100:0e0200:
ID_USB_INTERFACE_NUM=00
ID_VENDOR=Generic
ID_VENDOR_ENC=Generic
ID_VENDOR_ID=1908
KEY=100000 0 0 0
MODALIAS=input:b0003v1908p2311e0100-e0,1,kD4,ramlsfw
NAME="USB2.0 PC CAMERA: USB2.0 PC CAM"
PHYS="usb-0000:00:14.0-5.2.1/button"
PRODUCT=3/1908/2311/100
PROP=0
SEQNUM=24417
SUBSYSTEM=input
TAGS=:seat:
USEC_INITIALIZED=5662716436500
UDEV [5662716.591160] add /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22 (input)
ACTION=add
BACKSPACE=guess
DEVLINKS=/dev/input/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-event /dev/input/by-id/usb-Generic_USB2.0_PC_CAMERA-event-if00
DEVNAME=/dev/input/event22
DEVPATH=/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22
ID_BUS=usb
ID_INPUT=1
ID_INPUT_KEY=1
ID_MODEL=USB2.0_PC_CAMERA
ID_MODEL_ENC=USB2.0\x20PC\x20CAMERA
ID_MODEL_ID=2311
ID_PATH=pci-0000:00:14.0-usb-0:5.2.1:1.0
ID_PATH_TAG=pci-0000_00_14_0-usb-0_5_2_1_1_0
ID_REVISION=0100
ID_SERIAL=Generic_USB2.0_PC_CAMERA
ID_TYPE=video
ID_USB_DRIVER=uvcvideo
ID_USB_INTERFACES=:0e0100:0e0200:
ID_USB_INTERFACE_NUM=00
ID_VENDOR=Generic
ID_VENDOR_ENC=Generic
ID_VENDOR_ID=1908
LIBINPUT_DEVICE_GROUP=3/1908/2311:usb-0000:00:14.0-5.2
MAJOR=13
MINOR=86
SEQNUM=24418
SUBSYSTEM=input
TAGS=:power-switch:
USEC_INITIALIZED=5662716590816
XKBLAYOUT=us,il
XKBMODEL=pc105
XKBOPTIONS=grp:alt_shift_toggle,grp_led:scroll
XKBVARIANT=,
UDEV [5662716.593390] bind /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0 (usb)
ACTION=bind
DEVPATH=/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0
DEVTYPE=usb_interface
DRIVER=uvcvideo
ID_USB_CLASS_FROM_DATABASE=Miscellaneous Device
ID_USB_PROTOCOL_FROM_DATABASE=Interface Association
ID_VENDOR_FROM_DATABASE=GEMBIRD
INTERFACE=14/1/0
MODALIAS=usb:v1908p2311d0100dcEFdsc02dp01ic0Eisc01ip00in00
PRODUCT=1908/2311/100
SEQNUM=24419
SUBSYSTEM=usb
TYPE=239/2/1
USEC_INITIALIZED=5662716430396
UDEV [5662716.595836] bind /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1 (usb)
ACTION=bind
BUSNUM=001
DEVNAME=/dev/bus/usb/001/098
DEVNUM=098
DEVPATH=/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1
DEVTYPE=usb_device
DRIVER=usb
ID_BUS=usb
ID_MODEL=USB2.0_PC_CAMERA
ID_MODEL_ENC=USB2.0\x20PC\x20CAMERA
ID_MODEL_ID=2311
ID_REVISION=0100
ID_SERIAL=Generic_USB2.0_PC_CAMERA
ID_USB_INTERFACES=:0e0100:0e0200:
ID_VENDOR=Generic
ID_VENDOR_ENC=Generic
ID_VENDOR_FROM_DATABASE=GEMBIRD
ID_VENDOR_ID=1908
MAJOR=189
MINOR=97
PRODUCT=1908/2311/100
SEQNUM=24422
SUBSYSTEM=usb
TYPE=239/2/1
USEC_INITIALIZED=5662716427506
So the device I want to avoid was /dev/input/event22 this time. What are its attributes?
$ sudo udevadm info -a -n /dev/input/event22

Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.

  looking at device '/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22':
    KERNEL=="event22"
    SUBSYSTEM=="input"
    DRIVER==""

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121':
    KERNELS=="input121"
    SUBSYSTEMS=="input"
    DRIVERS==""
    ATTRS{name}=="USB2.0 PC CAMERA: USB2.0 PC CAM"
    ATTRS{phys}=="usb-0000:00:14.0-5.2.1/button"
    ATTRS{properties}=="0"
    ATTRS{uniq}==""

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0':
    KERNELS=="1-5.2.1:1.0"
    SUBSYSTEMS=="usb"
    DRIVERS=="uvcvideo"
    ATTRS{authorized}=="1"
    ATTRS{bAlternateSetting}==" 0"
    ATTRS{bInterfaceClass}=="0e"
    ATTRS{bInterfaceNumber}=="00"
    ATTRS{bInterfaceProtocol}=="00"
    ATTRS{bInterfaceSubClass}=="01"
    ATTRS{bNumEndpoints}=="01"
    ATTRS{iad_bFirstInterface}=="00"
    ATTRS{iad_bFunctionClass}=="0e"
    ATTRS{iad_bFunctionProtocol}=="00"
    ATTRS{iad_bFunctionSubClass}=="03"
    ATTRS{iad_bInterfaceCount}=="02"
    ATTRS{interface}=="USB2.0 PC CAMERA"
    ATTRS{supports_autosuspend}=="1"

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1':
    KERNELS=="1-5.2.1"
    SUBSYSTEMS=="usb"
    DRIVERS=="usb"
    ATTRS{authorized}=="1"
    ATTRS{avoid_reset_quirk}=="0"
    ATTRS{bConfigurationValue}=="1"
    ATTRS{bDeviceClass}=="ef"
    ATTRS{bDeviceProtocol}=="01"
    ATTRS{bDeviceSubClass}=="02"
    ATTRS{bMaxPacketSize0}=="64"
    ATTRS{bMaxPower}=="256mA"
    ATTRS{bNumConfigurations}=="1"
    ATTRS{bNumInterfaces}==" 2"
    ATTRS{bcdDevice}=="0100"
    ATTRS{bmAttributes}=="80"
    ATTRS{busnum}=="1"
    ATTRS{configuration}==""
    ATTRS{devnum}=="98"
    ATTRS{devpath}=="5.2.1"
    ATTRS{idProduct}=="2311"
    ATTRS{idVendor}=="1908"
    ATTRS{ltm_capable}=="no"
    ATTRS{manufacturer}=="Generic"
    ATTRS{maxchild}=="0"
    ATTRS{product}=="USB2.0 PC CAMERA"
    ATTRS{quirks}=="0x0"
    ATTRS{removable}=="unknown"
    ATTRS{speed}=="480"
    ATTRS{urbnum}=="16"
    ATTRS{version}==" 2.00"

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2':
    KERNELS=="1-5.2"
    SUBSYSTEMS=="usb"
    DRIVERS=="usb"
    ATTRS{authorized}=="1"
    ATTRS{avoid_reset_quirk}=="0"
    ATTRS{bConfigurationValue}=="1"
    ATTRS{bDeviceClass}=="09"
    ATTRS{bDeviceProtocol}=="01"
    ATTRS{bDeviceSubClass}=="00"
    ATTRS{bMaxPacketSize0}=="64"
    ATTRS{bMaxPower}=="100mA"
    ATTRS{bNumConfigurations}=="1"
    ATTRS{bNumInterfaces}==" 1"
    ATTRS{bcdDevice}=="0100"
    ATTRS{bmAttributes}=="e0"
    ATTRS{busnum}=="1"
    ATTRS{configuration}==""
    ATTRS{devnum}=="75"
    ATTRS{devpath}=="5.2"
    ATTRS{idProduct}=="7250"
    ATTRS{idVendor}=="214b"
    ATTRS{ltm_capable}=="no"
    ATTRS{maxchild}=="4"
    ATTRS{product}=="USB2.0 HUB"
    ATTRS{quirks}=="0x0"
    ATTRS{removable}=="unknown"
    ATTRS{speed}=="480"
    ATTRS{urbnum}=="409"
    ATTRS{version}==" 2.00"

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1/1-5':
    KERNELS=="1-5"
    SUBSYSTEMS=="usb"
    DRIVERS=="usb"
    ATTRS{authorized}=="1"
    ATTRS{avoid_reset_quirk}=="0"
    ATTRS{bConfigurationValue}=="1"
    ATTRS{bDeviceClass}=="09"
    ATTRS{bDeviceProtocol}=="02"
    ATTRS{bDeviceSubClass}=="00"
    ATTRS{bMaxPacketSize0}=="64"
    ATTRS{bMaxPower}=="0mA"
    ATTRS{bNumConfigurations}=="1"
    ATTRS{bNumInterfaces}==" 1"
    ATTRS{bcdDevice}=="0123"
    ATTRS{bmAttributes}=="e0"
    ATTRS{busnum}=="1"
    ATTRS{configuration}==""
    ATTRS{devnum}=="73"
    ATTRS{devpath}=="5"
    ATTRS{idProduct}=="5411"
    ATTRS{idVendor}=="0bda"
    ATTRS{ltm_capable}=="no"
    ATTRS{manufacturer}=="Generic"
    ATTRS{maxchild}=="4"
    ATTRS{product}=="4-Port USB 2.0 Hub"
    ATTRS{quirks}=="0x0"
    ATTRS{removable}=="removable"
    ATTRS{speed}=="480"
    ATTRS{urbnum}=="69"
    ATTRS{version}==" 2.10"

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1':
    KERNELS=="usb1"
    SUBSYSTEMS=="usb"
    DRIVERS=="usb"
    ATTRS{authorized}=="1"
    ATTRS{authorized_default}=="1"
    ATTRS{avoid_reset_quirk}=="0"
    ATTRS{bConfigurationValue}=="1"
    ATTRS{bDeviceClass}=="09"
    ATTRS{bDeviceProtocol}=="01"
    ATTRS{bDeviceSubClass}=="00"
    ATTRS{bMaxPacketSize0}=="64"
    ATTRS{bMaxPower}=="0mA"
    ATTRS{bNumConfigurations}=="1"
    ATTRS{bNumInterfaces}==" 1"
    ATTRS{bcdDevice}=="0415"
    ATTRS{bmAttributes}=="e0"
    ATTRS{busnum}=="1"
    ATTRS{configuration}==""
    ATTRS{devnum}=="1"
    ATTRS{devpath}=="0"
    ATTRS{idProduct}=="0002"
    ATTRS{idVendor}=="1d6b"
    ATTRS{interface_authorized_default}=="1"
    ATTRS{ltm_capable}=="no"
    ATTRS{manufacturer}=="Linux 4.15.0-20-generic xhci-hcd"
    ATTRS{maxchild}=="16"
    ATTRS{product}=="xHCI Host Controller"
    ATTRS{quirks}=="0x0"
    ATTRS{removable}=="unknown"
    ATTRS{serial}=="0000:00:14.0"
    ATTRS{speed}=="480"
    ATTRS{urbnum}=="454"
    ATTRS{version}==" 2.00"

  looking at parent device '/devices/pci0000:00/0000:00:14.0':
    KERNELS=="0000:00:14.0"
    SUBSYSTEMS=="pci"
    DRIVERS=="xhci_hcd"
    ATTRS{broken_parity_status}=="0"
    ATTRS{class}=="0x0c0330"
    ATTRS{consistent_dma_mask_bits}=="64"
    ATTRS{d3cold_allowed}=="1"
    ATTRS{dbc}=="disabled"
    ATTRS{device}=="0xa2af"
    ATTRS{dma_mask_bits}=="64"
    ATTRS{driver_override}=="(null)"
    ATTRS{enable}=="1"
    ATTRS{irq}=="33"
    ATTRS{local_cpulist}=="0-11"
    ATTRS{local_cpus}=="0,00000000,00000fff"
    ATTRS{msi_bus}=="1"
    ATTRS{numa_node}=="0"
    ATTRS{revision}=="0x00"
    ATTRS{subsystem_device}=="0x5007"
    ATTRS{subsystem_vendor}=="0x1458"
    ATTRS{vendor}=="0x8086"

  looking at parent device '/devices/pci0000:00':
    KERNELS=="pci0000:00"
    SUBSYSTEMS==""
    DRIVERS==""
And what udev rules are currently in effect for this? Note that this doesn’t require root, and nothing really happens to the system:
$ udevadm test -a add $(udevadm info -q path -n /dev/input/event22)
calling: test
version 237
This program is for debugging only, it does not run any program
specified by a RUN key. It may show incorrect results, because
some values may be different, or not available at a simulation run.
Load module index
Parsed configuration file /etc/systemd/network/eth1.link
Skipping empty file: /etc/systemd/network/99-default.link
Created link configuration context.
[ ... reading a lot of files ... ]
rules contain 393216 bytes tokens (32768 * 12 bytes), 39371 bytes strings
25632 strings (220044 bytes), 22252 de-duplicated (184054 bytes), 3381 trie nodes used
GROUP 104 /lib/udev/rules.d/50-udev-default.rules:29
IMPORT builtin 'hwdb' /lib/udev/rules.d/60-evdev.rules:8
IMPORT builtin 'hwdb' returned non-zero
IMPORT builtin 'hwdb' /lib/udev/rules.d/60-evdev.rules:17
IMPORT builtin 'hwdb' returned non-zero
IMPORT builtin 'hwdb' /lib/udev/rules.d/60-evdev.rules:21
IMPORT builtin 'hwdb' returned non-zero
IMPORT builtin 'input_id' /lib/udev/rules.d/60-input-id.rules:5
capabilities/ev raw kernel attribute: 3
capabilities/abs raw kernel attribute: 0
capabilities/rel raw kernel attribute: 0
capabilities/key raw kernel attribute: 100000 0 0 0
properties raw kernel attribute: 0
test_key: checking bit block 0 for any keys; found=0
test_key: checking bit block 64 for any keys; found=0
test_key: checking bit block 128 for any keys; found=0
test_key: checking bit block 192 for any keys; found=1
IMPORT builtin 'hwdb' /lib/udev/rules.d/60-input-id.rules:6
IMPORT builtin 'hwdb' returned non-zero
IMPORT builtin 'usb_id' /lib/udev/rules.d/60-persistent-input.rules:11
/sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0: if_class 14 protocol 0
LINK 'input/by-id/usb-Generic_USB2.0_PC_CAMERA-event-if00' /lib/udev/rules.d/60-persistent-input.rules:32
IMPORT builtin 'path_id' /lib/udev/rules.d/60-persistent-input.rules:35
LINK 'input/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-event' /lib/udev/rules.d/60-persistent-input.rules:40
PROGRAM 'libinput-device-group /sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22' /lib/udev/rules.d/80-libinput-device-groups.rules:7
starting 'libinput-device-group /sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22'
'libinput-device-group /sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22'(out) '3/1908/2311:usb-0000:00:14.0-5.2'
Process 'libinput-device-group /sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22' succeeded.
IMPORT builtin 'hwdb' /lib/udev/rules.d/90-libinput-model-quirks.rules:46
IMPORT builtin 'hwdb' returned non-zero
IMPORT builtin 'hwdb' /lib/udev/rules.d/90-libinput-model-quirks.rules:50
IMPORT builtin 'hwdb' returned non-zero
handling device node '/dev/input/event22', devnum=c13:86, mode=0660, uid=0, gid=104
preserve permissions /dev/input/event22, 020660, uid=0, gid=104
preserve already existing symlink '/dev/char/13:86' to '../input/event22'
found 'c13:86' claiming '/run/udev/links/\x2finput\x2fby-id\x2fusb-Generic_USB2.0_PC_CAMERA-event-if00'
found 'c13:85' claiming '/run/udev/links/\x2finput\x2fby-id\x2fusb-Generic_USB2.0_PC_CAMERA-event-if00'
found 'c13:84' claiming '/run/udev/links/\x2finput\x2fby-id\x2fusb-Generic_USB2.0_PC_CAMERA-event-if00'
found 'c13:83' claiming '/run/udev/links/\x2finput\x2fby-id\x2fusb-Generic_USB2.0_PC_CAMERA-event-if00'
creating link '/dev/input/by-id/usb-Generic_USB2.0_PC_CAMERA-event-if00' to '/dev/input/event22'
preserve already existing symlink '/dev/input/by-id/usb-Generic_USB2.0_PC_CAMERA-event-if00' to '../event22'
found 'c13:86' claiming '/run/udev/links/\x2finput\x2fby-path\x2fpci-0000:00:14.0-usb-0:5.2.1:1.0-event'
found 'c13:85' claiming '/run/udev/links/\x2finput\x2fby-path\x2fpci-0000:00:14.0-usb-0:5.2.1:1.0-event'
found 'c13:84' claiming '/run/udev/links/\x2finput\x2fby-path\x2fpci-0000:00:14.0-usb-0:5.2.1:1.0-event'
found 'c13:83' claiming '/run/udev/links/\x2finput\x2fby-path\x2fpci-0000:00:14.0-usb-0:5.2.1:1.0-event'
creating link '/dev/input/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-event' to '/dev/input/event22'
preserve already existing symlink '/dev/input/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-event' to '../event22'
ACTION=add
BACKSPACE=guess
DEVLINKS=/dev/input/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-event /dev/input/by-id/usb-Generic_USB2.0_PC_CAMERA-event-if00
DEVNAME=/dev/input/event22
DEVPATH=/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22
ID_BUS=usb
ID_INPUT=1
ID_INPUT_KEY=1
ID_MODEL=USB2.0_PC_CAMERA
ID_MODEL_ENC=USB2.0\x20PC\x20CAMERA
ID_MODEL_ID=2311
ID_PATH=pci-0000:00:14.0-usb-0:5.2.1:1.0
ID_PATH_TAG=pci-0000_00_14_0-usb-0_5_2_1_1_0
ID_REVISION=0100
ID_SERIAL=Generic_USB2.0_PC_CAMERA
ID_TYPE=video
ID_USB_DRIVER=uvcvideo
ID_USB_INTERFACES=:0e0100:0e0200:
ID_USB_INTERFACE_NUM=00
ID_VENDOR=Generic
ID_VENDOR_ENC=Generic
ID_VENDOR_ID=1908
LIBINPUT_DEVICE_GROUP=3/1908/2311:usb-0000:00:14.0-5.2
MAJOR=13
MINOR=86
SUBSYSTEM=input
TAGS=:power-switch:
USEC_INITIALIZED=5662716590816
XKBLAYOUT=us,il
XKBMODEL=pc105
XKBOPTIONS=grp:alt_shift_toggle,grp_led:scroll
XKBVARIANT=,
Unload module index
Unloaded link configuration context.
I tried the following:
# Rule for disabling bogus keyboard on webcam. It causes X-Windows to
# crash if it goes on and off too much
SUBSYSTEM=="input", ENV{ID_VENDOR_ID}=="1908", ENV{ID_MODEL_ID}=="2311", MODE:="000"
SUBSYSTEM=="input", ATTRS{name}=="USB2.0 PC CAMERA:*", ENV{LIBINPUT_IGNORE_DEVICE}="1"
(the := operator makes the assignment final, so later rules can't override it).
However, neither of these two rules managed to stop X from reacting.
Setting the mode to 000 made the device file inaccessible, but the device was registered nevertheless. As for the second rule, it didn't help: It did set LIBINPUT_IGNORE_DEVICE correctly, but on the wrong udev event. The udev event that libinput acts upon is the one whose KERNEL attribute matches event[0-9]* (see 80-libinput-device-groups.rules), which is processed earlier, and ATTRS{name} isn't defined for that specific udev event (see the output of udevadm info above).
I also tried RUN+="/bin/rm /dev/input/event%n", and that indeed removed the device node, but X still reacted, and complained with "libinput: USB2.0 PC CAMERA: USB2.0 PC CAM: Failed to create a device for /dev/input/event28". Because it was indeed deleted.
But since it appears like X.org accesses keyboards through libinput, maybe use the example for ignoring a device, as given on this page, even though it’s quite similar to what I’ve already attempted?
So I saved this file as /etc/udev/rules.d/79-no-camera-keyboard.rules:
# Make libinput ignore webcam's button as a keyboard. As a result there's
# no event to X-Windows
ACTION=="add|change", KERNEL=="event[0-9]*", \
  ENV{ID_VENDOR_ID}=="1908", \
  ENV{ID_MODEL_ID}=="2311", \
  ENV{LIBINPUT_IGNORE_DEVICE}="1"
And then reload:
# udevadm control --reload
but that didn’t make any apparent difference (I verified that the rule was matched).
And that’s all, folks. Recall that I didn’t promise a happy end.
These are notes I made while trying to make my Sony WH-CH510 bluetooth headphones work properly with my Linux Mint 19 machine. It’s quite possible that an upgrade of the OS would have fixed the problem, but I have my opinion on upgrades.
The problem: It takes a long time for the headphones to connect (around 20-30 seconds), and once that happens, the headphones are not automatically chosen by Pulseaudio as the output device. There are hacky solutions to the second part of the problem, but I have this thing about wanting to solve a problem properly.
The twist is that there’s no problem at all with Sony’s WH-CH500 headphones, neither with a pair of junky earbuds, which are labeled Y30.
So after trying quite a few quick fixes, I decided to get to the bottom of the problem. The good news is that I found out the reason for the problem. The bad news is that I don’t know how to solve it. For now.
Bluetooth is handled by the kernel, which presents an hci device (typically hci0). The heavy lifting of implementing the protocol is done by bluetoothd ( /usr/lib/bluetooth/bluetoothd on my machine, started by the bluetooth systemd service). Try, for example,
$ hciconfig
To capture bluetooth communication, there are two primary options: the bluetooth0 interface in Wireshark, or the btmon command line utility. Wireshark is more comprehensive, as always, however btmon is actually easier to work with, because all crucial information is concentrated in a text file. The eternal tradeoff between GUI and text. Possibly, use both.
I’m going to show excerpts from btmon dumps below, obtained with e.g.
$ sudo stdbuf -oL btmon | tee btmon.txt
This could have been just “sudo btmon”, but using stdbuf and tee, there’s also data printed out to console in real time (stdbuf removes unnecessary buffering).
I skip a lot of entries, because there are, well, a lot of them. Hopefully I didn’t throw away anything relevant.
With btmon, “>” means incoming to host, and “<” means outgoing from host (the device is at the left side, which is a bit odd).
I’ve never carried out a bluetooth-related project, so all my comments on btmon’s output below are no more than hopefully intelligent guesses. I don’t pretend to be acquainted with any of the related protocols.
This is WH-CH510's connection request (the earbuds' request was exactly the same):
> HCI Event: Connect Request (0x04) plen 10                #11 [hci0] 10.725859
        Address: 30:53:C1:11:40:2D (OUI 30-53-C1)
        Class: 0x240404
          Major class: Audio/Video (headset, speaker, stereo, video, vcr)
          Minor class: Wearable Headset Device
          Rendering (Printing, Speaker)
          Audio (Speaker, Microphone, Headset)
        Link type: ACL (0x01)
< HCI Command: Accept Connection R.. (0x01|0x0009) plen 7  #12 [hci0] 10.725914
        Address: 30:53:C1:11:40:2D (OUI 30-53-C1)
        Role: Master (0x00)
And after a few packet exchanges, this appears in the dump output (three times, not clear why):
@ MGMT Event: Device Connected (0x000b) plen 28      {0x0001} [hci0] 11.256871
        BR/EDR Address: 30:53:C1:11:40:2D (OUI 30-53-C1)
        Flags: 0x00000000
        Data length: 15
        Name (complete): WH-CH510
        Class: 0x240404
          Major class: Audio/Video (headset, speaker, stereo, video, vcr)
          Minor class: Wearable Headset Device
          Rendering (Printing, Speaker)
          Audio (Speaker, Microphone, Headset)
Unlike USB, the device is in control: The device requests information, and the host responds. The device chooses how to set up the connection, and the host follows suit. And it's also the device that possibly messes up.
For the Y30 earbuds, the attribute request / response session went as follows.
First, the device checks if the host supports Handsfree Audio Gateway (0x111f), by asking for attributes, but the host doesn’t have any of it:
> ACL Data RX: Handle 17 flags 0x02 dlen 24 #42 [hci0] 14.317596
Channel: 64 len 20 [PSM 1 mode 0] {chan 0}
SDP: Service Search Attribute Request (0x06) tid 1 len 15
Search pattern: [len 6]
Sequence (6) with 3 bytes [16 extra bits] len 6
UUID (3) with 2 bytes [0 extra bits] len 3
Handsfree Audio Gateway (0x111f)
Max record count: 512
Attribute list: [len 6]
Sequence (6) with 3 bytes [16 extra bits] len 6
Unsigned Integer (1) with 2 bytes [0 extra bits] len 3
0x0004
Continuation state: 0
< ACL Data TX: Handle 17 flags 0x00 dlen 14 #43 [hci0] 14.317758
Channel: 64 len 10 [PSM 1 mode 0] {chan 0}
SDP: Service Search Attribute Response (0x07) tid 1 len 5
Attribute bytes: 2
Continuation state: 0
So the device tries AVDTP (0x0019) instead:
> ACL Data RX: Handle 17 flags 0x02 dlen 24         #45 [hci0] 14.322578
      Channel: 64 len 20 [PSM 1 mode 0] {chan 0}
      SDP: Service Search Attribute Request (0x06) tid 2 len 15
        Search pattern: [len 6]
          Sequence (6) with 3 bytes [16 extra bits] len 6
            UUID (3) with 2 bytes [0 extra bits] len 3
              AVDTP (0x0019)
        Max record count: 512
        Attribute list: [len 6]
          Sequence (6) with 3 bytes [16 extra bits] len 6
            Unsigned Integer (1) with 2 bytes [0 extra bits] len 3
              0x0004
        Continuation state: 0
< ACL Data TX: Handle 17 flags 0x00 dlen 60         #46 [hci0] 14.322737
      Channel: 64 len 56 [PSM 1 mode 0] {chan 0}
      SDP: Service Search Attribute Response (0x07) tid 2 len 51
        Attribute bytes: 48
        Attribute list: [len 21] {position 0}
          Attribute: Protocol Descriptor List (0x0004) [len 2]
            Sequence (6) with 6 bytes [8 extra bits] len 8
              UUID (3) with 2 bytes [0 extra bits] len 3
                L2CAP (0x0100)
              Unsigned Integer (1) with 2 bytes [0 extra bits] len 3
                0x0019
            Sequence (6) with 6 bytes [8 extra bits] len 8
              UUID (3) with 2 bytes [0 extra bits] len 3
                AVDTP (0x0019)
              Unsigned Integer (1) with 2 bytes [0 extra bits] len 3
                0x0103
        Attribute list: [len 21] {position 1}
          Attribute: Protocol Descriptor List (0x0004) [len 2]
            Sequence (6) with 6 bytes [8 extra bits] len 8
              UUID (3) with 2 bytes [0 extra bits] len 3
                L2CAP (0x0100)
              Unsigned Integer (1) with 2 bytes [0 extra bits] len 3
                0x0019
            Sequence (6) with 6 bytes [8 extra bits] len 8
              UUID (3) with 2 bytes [0 extra bits] len 3
                AVDTP (0x0019)
              Unsigned Integer (1) with 2 bytes [0 extra bits] len 3
                0x0103
        Continuation state: 0
And yes, the host supports it.
AVDTP is (according to Wikipedia) used by the advanced audio distribution (A2DP) profile to stream music to stereo headsets over an L2CAP channel. It's also intended for the video distribution profile in Bluetooth transmission.
So it goes on with connecting channel 65 to service number 0x0019 (PSM stands for Protocol/Service Multiplexer).
> ACL Data RX: Handle 17 flags 0x02 dlen 12         #51 [hci0] 14.358888
      L2CAP: Connection Request (0x02) ident 3 len 4
        PSM: 25 (0x0019)
        Source CID: 65
< ACL Data TX: Handle 17 flags 0x00 dlen 16         #52 [hci0] 14.358945
      L2CAP: Connection Response (0x03) ident 3 len 8
        Destination CID: 65
        Source CID: 65
        Result: Connection pending (0x0001)
        Status: Authorization pending (0x0002)
< ACL Data TX: Handle 17 flags 0x00 dlen 16         #53 [hci0] 14.359202
      L2CAP: Connection Response (0x03) ident 3 len 8
        Destination CID: 65
        Source CID: 65
        Result: Connection successful (0x0000)
        Status: No further information available (0x0000)
< ACL Data TX: Handle 17 flags 0x00 dlen 12         #54 [hci0] 14.359220
      L2CAP: Configure Request (0x04) ident 3 len 4
        Destination CID: 65
        Flags: 0x0000
        AVDTP (0x0019)
> HCI Event: Number of Completed Packets (0x13) plen 5  #55 [hci0] 14.361709
        Num handles: 1
        Handle: 17
        Count: 1
This is followed by a discovery phase:
@ MGMT Command: Start Discovery (0x0023) plen 1     {0x0001} [hci0] 14.362330
        Address type: 0x07
          BR/EDR
          LE Public
          LE Random
In the packet exchange that follows, the device obtains the capabilities of the host, and the audio connection is set up.
The WH-CH510 takes another approach: It issues a Service Search Request for the Headset AG (0x1112). AG means Audio Gateway.
> ACL Data RX: Handle 20 flags 0x02 dlen 17         #48 [hci0] 11.736388
      Channel: 64 len 13 [PSM 1 mode 0] {chan 0}
      SDP: Service Search Request (0x02) tid 1 len 8
        Search pattern: [len 5]
          Sequence (6) with 3 bytes [8 extra bits] len 5
            UUID (3) with 2 bytes [0 extra bits] len 3
              Headset AG (0x1112)
        Max record count: 68
        Continuation state: 0
< ACL Data TX: Handle 20 flags 0x00 dlen 18         #49 [hci0] 11.736611
      Channel: 64 len 14 [PSM 1 mode 0] {chan 0}
      SDP: Service Search Response (0x03) tid 1 len 9
        Total record count: 1
        Current record count: 1
        Record handle: 0x1000c
        Continuation state: 0
And yes, it’s supported and given the handle 0x1000c. Using this handle, the device asks for this service’s attributes:
> ACL Data RX: Handle 20 flags 0x02 dlen 23         #51 [hci0] 11.741392
      Channel: 64 len 19 [PSM 1 mode 0] {chan 0}
      SDP: Service Attribute Request (0x04) tid 2 len 14
        Record handle: 0x1000c
        Max attribute bytes: 277
        Attribute list: [len 7]
          Sequence (6) with 5 bytes [8 extra bits] len 7
            Unsigned Integer (1) with 4 bytes [0 extra bits] len 5
              0x0000ffff
        Continuation state: 0
< ACL Data TX: Handle 20 flags 0x00 dlen 97         #52 [hci0] 11.741610
      Channel: 64 len 93 [PSM 1 mode 0] {chan 0}
      SDP: Service Attribute Response (0x05) tid 2 len 88
        Attribute bytes: 85
        Attribute list: [len 83] {position 0}
          Attribute: Service Record Handle (0x0000) [len 2]
            0x0001000c
          Attribute: Service Class ID List (0x0001) [len 2]
            UUID (3) with 2 bytes [0 extra bits] len 3
              Headset AG (0x1112)
            UUID (3) with 2 bytes [0 extra bits] len 3
              Generic Audio (0x1203)
          Attribute: Protocol Descriptor List (0x0004) [len 2]
            Sequence (6) with 3 bytes [8 extra bits] len 5
              UUID (3) with 2 bytes [0 extra bits] len 3
                L2CAP (0x0100)
            Sequence (6) with 5 bytes [8 extra bits] len 7
              UUID (3) with 2 bytes [0 extra bits] len 3
                RFCOMM (0x0003)
              Unsigned Integer (1) with 1 byte [0 extra bits] len 2
                0x0c
          Attribute: Browse Group List (0x0005) [len 2]
            UUID (3) with 2 bytes [0 extra bits] len 3
              Public Browse Root (0x1002)
          Attribute: Bluetooth Profile Descriptor List (0x0009) [len 2]
            Sequence (6) with 6 bytes [8 extra bits] len 8
              UUID (3) with 2 bytes [0 extra bits] len 3
                Headset (0x1108)
              Unsigned Integer (1) with 2 bytes [0 extra bits] len 3
                0x0102
          Attribute: Unknown (0x0100) [len 2]
            Headset Voice gateway [len 21]
        Continuation state: 0
The device chooses to connect channel 65 to the RFCOMM service:
> ACL Data RX: Handle 20 flags 0x02 dlen 12 #62 [hci0] 12.337691
L2CAP: Connection Request (0x02) ident 6 len 4
PSM: 3 (0x0003)
Source CID: 65
< ACL Data TX: Handle 20 flags 0x00 dlen 16 #63 [hci0] 12.337747
L2CAP: Connection Response (0x03) ident 6 len 8
Destination CID: 64
Source CID: 65
Result: Connection successful (0x0000)
Status: No further information available (0x0000)
After some configuration packets, the discovery begins:
@ MGMT Event: Discovering (0x0013) plen 2 {0x0001} [hci0] 14.686897
Address type: 0x07
BR/EDR
LE Public
LE Random
Discovery: Disabled (0x00)
> ACL Data RX: Handle 20 flags 0x02 dlen 18 #91 [hci0] 15.848944
Channel: 64 len 14 [PSM 3 mode 0] {chan 0}
RFCOMM: Unnumbered Info with Header Check (UIH) (0xef)
Address: 0x63 cr 1 dlci 0x18
Control: 0xef poll/final 0
Length: 10
FCS: 0x0e
41 54 2b 43 49 4e 44 3d 3f 0d 0e AT+CIND=?..
> ACL Data RX: Handle 20 flags 0x02 dlen 17 #92 [hci0] 18.849015
Channel: 64 len 13 [PSM 3 mode 0] {chan 0}
RFCOMM: Unnumbered Info with Header Check (UIH) (0xef)
Address: 0x63 cr 1 dlci 0x18
Control: 0xef poll/final 0
Length: 9
FCS: 0x0e
41 54 2b 43 49 4e 44 3f 0d 0e AT+CIND?..
This discovery isn’t all that successful: The device sends AT commands, but the host doesn’t respond to them. 4 seconds of futile attempts.
After some useless back and forth, the device tries again with the same Service Search Request, to which it receives the same answer, and so it goes on.
> ACL Data RX: Handle 20 flags 0x02 dlen 17 #108 [hci0] 20.717682
Channel: 65 len 13 [PSM 1 mode 0] {chan 1}
SDP: Service Search Request (0x02) tid 3 len 8
Search pattern: [len 5]
Sequence (6) with 3 bytes [8 extra bits] len 5
UUID (3) with 2 bytes [0 extra bits] len 3
Headset AG (0x1112)
Max record count: 68
Continuation state: 0
[ ... ]
The device disconnects the futile channel 65:
> ACL Data RX: Handle 20 flags 0x02 dlen 12         #114 [hci0] 20.818859
      L2CAP: Disconnection Request (0x06) ident 10 len 4
        Destination CID: 65
        Source CID: 64
< ACL Data TX: Handle 20 flags 0x00 dlen 12         #115 [hci0] 20.818906
      L2CAP: Disconnection Response (0x07) ident 10 len 4
        Destination CID: 65
        Source CID: 64
And then it tries another few AT commands on this same channel, despite having disconnected it. It didn’t reconnect it, so probably disconnection doesn’t mean what I think it does…?
> ACL Data RX: Handle 20 flags 0x02 dlen 24         #118 [hci0] 22.225305
      Channel: 64 len 20 [PSM 3 mode 0] {chan 0}
      RFCOMM: Unnumbered Info with Header Check (UIH) (0xef)
         Address: 0x63 cr 1 dlci 0x18
         Control: 0xef poll/final 0
         Length: 16
         FCS: 0x0e
        41 54 2b 43 4d 45 52 3d 33 2c 30 2c 30 2c 31 0d  AT+CMER=3,0,0,1.
        0e                                               .
> ACL Data RX: Handle 20 flags 0x02 dlen 18         #119 [hci0] 25.285306
      Channel: 64 len 14 [PSM 3 mode 0] {chan 0}
      RFCOMM: Unnumbered Info with Header Check (UIH) (0xef)
         Address: 0x63 cr 1 dlci 0x18
         Control: 0xef poll/final 0
         Length: 10
         FCS: 0x0e
        41 54 2b 43 43 57 41 3d 31 0d 0e                 AT+CCWA=1..
Needless to say, this was futile as well.
And then, out of the blue, the device requests to connect to AVDTP by choosing the service at address 0x0019.
> ACL Data RX: Handle 20 flags 0x02 dlen 12 #125 [hci0] 29.875346
L2CAP: Connection Request (0x02) ident 11 len 4
PSM: 25 (0x0019)
Source CID: 64
< ACL Data TX: Handle 20 flags 0x00 dlen 16 #126 [hci0] 29.875431
L2CAP: Connection Response (0x03) ident 11 len 8
Destination CID: 65
Source CID: 64
Result: Connection pending (0x0001)
Status: Authorization pending (0x0002)
< ACL Data TX: Handle 20 flags 0x00 dlen 16 #128 [hci0] 29.875677
L2CAP: Connection Response (0x03) ident 11 len 8
Destination CID: 65
Source CID: 64
Result: Connection successful (0x0000)
Status: No further information available (0x0000)
It’s not clear where the device got the idea to do this: As far as I can tell, the device wasn’t informed by the host about the existence of this service, at least not directly.
From this point, the negotiation goes well, and an audio device is set up. This allows selecting the headphones in the Sound Settings. Something that is done automatically with the old headphones and the junky earbuds.
So what is the problem? Maybe that the host doesn’t answer the AT commands. This is maybe solved in later versions of bluetoothd or Linux distributions in general. It’s quite possible that my distribution is too old for these newer headphones. Upgrade or perish.
Or go the other way: Downgrade. I suppose the solution would be to disable the Headset AG profile (0x1112), so that bluetoothd refuses to play ball when this is requested. This would speed up the fallback to A2DP, I hope, and possibly solve the problem.
I’ve tried hard to find a way to make bluetoothd refuse to the Headset AG profile, but in vain (so far?). sdptool has the option to disable a service, however it’s deprecated, and bluez 5 won’t talk with it. The updated method to tickle bluetoothd is through Dbus. Not sure if it has an API for turning off a service.
Unfortunately, I have no idea how to do this except for compiling bluetoothd from its sources and removing that option altogether. Actually, changing the UUID is enough to make it unusable.
I tried that, but it didn’t work all that well. More on that below.
Now to some extra stuff I randomly found out while working on this.
This utility talks with bluetoothd through Dbus.
Doesn’t require root (when run from a terminal window on the computer’s desktop):
$ bluetoothctl
[NEW] Controller 9C:BB:CC:DD:EE:FF compname [default]
[NEW] Device 78:2E:D4:D9:62:C1 Y30
[NEW] Device 00:18:09:76:27:29 WH-CH500
[NEW] Device 30:53:C1:11:40:2D WH-CH510
Agent registered
[bluetooth]# show
Controller 9C:BB:CC:DD:EE:FF (public)
	Name: compname
	Alias: compname
	Class: 0x001c0104
	Powered: yes
	Discoverable: yes
	Pairable: yes
	UUID: Headset AG                (00001112-0000-1000-8000-00805f9b34fb)
	UUID: Generic Attribute Profile (00001801-0000-1000-8000-00805f9b34fb)
	UUID: A/V Remote Control        (0000110e-0000-1000-8000-00805f9b34fb)
	UUID: OBEX File Transfer        (00001106-0000-1000-8000-00805f9b34fb)
	UUID: Generic Access Profile    (00001800-0000-1000-8000-00805f9b34fb)
	UUID: OBEX Object Push          (00001105-0000-1000-8000-00805f9b34fb)
	UUID: PnP Information           (00001200-0000-1000-8000-00805f9b34fb)
	UUID: IrMC Sync                 (00001104-0000-1000-8000-00805f9b34fb)
	UUID: A/V Remote Control Target (0000110c-0000-1000-8000-00805f9b34fb)
	UUID: Audio Source              (0000110a-0000-1000-8000-00805f9b34fb)
	UUID: Audio Sink                (0000110b-0000-1000-8000-00805f9b34fb)
	UUID: Vendor specific           (00005005-0000-1000-8000-0002ee000001)
	UUID: Message Notification Se.. (00001133-0000-1000-8000-00805f9b34fb)
	UUID: Phonebook Access Server   (0000112f-0000-1000-8000-00805f9b34fb)
	UUID: Message Access Server     (00001132-0000-1000-8000-00805f9b34fb)
	UUID: Headset                   (00001108-0000-1000-8000-00805f9b34fb)
	Modalias: usb:v1D6Bp0246d0530
	Discovering: no
This utility spits out a lot of information by itself when the daemon is restarted with e.g. “systemctl restart bluetooth”. There’s also output when a device is connected and disconnected.
An interesting feature of bluetoothctl is the submenus, in particular "gatt" and "advertise". Maybe the former allows deregistering UUIDs.
Try
[bluetooth]# menu gatt
and when done, go back to original menu:
[bluetooth]# back
btmgmt talks with the kernel directly through an AF_BLUETOOTH raw network socket. I considered this tool because it has the rm-uuid command, which is supposed to remove a UUID.
$ sudo btmgmt
[mgmt]# rm-uuid 00001112-0000-1000-8000-00805f9b34fb
Remove UUID succeeded.
Class 0x1c0104
This is reverted when the bluetooth service is restarted. But it doesn’t seem to have any effect on the interface anyhow. The UUID keeps appearing in bluetoothctl’s “show” and the service is advertised and used. “clr-uuids” apparently removes all UUIDs, but this has no real effect.
It seems like the effective UUIDs are kept in bluetoothd. btmgmt changes the UUIDs in the kernel. See “Remove UUID Command” in mgmt-api.txt.
btmgmt also gets very active when bluetoothd is restarted and other events occur.
See my notes on DBus in this post. Getting a property:
$ dbus-send --system --dest=org.bluez --print-reply /org/bluez/hci0 org.freedesktop.DBus.Properties.Get string:org.bluez.Adapter1 string:Address
method return time=1685778877.065819 sender=:1.4 -> destination=:1.4934 serial=82 reply_serial=2
   variant       string "9C:BB:CC:DD:EE:FF"
Let’s break this down. Bluetoothd’ Dbus API is published in its source’s doc/ subdirectory. The Address property of the adapter is documented in adapter-api.txt. So:
Likewise, I can fetch the UUIDs:
$ dbus-send --print-reply --system --dest=org.bluez /org/bluez/hci0 org.freedesktop.DBus.Properties.Get string:org.bluez.Adapter1 string:UUIDs
method return time=1685779625.693856 sender=:1.4 -> destination=:1.5017 serial=89 reply_serial=2
   variant       array [
         string "00001112-0000-1000-8000-00805f9b34fb"
         string "00001801-0000-1000-8000-00805f9b34fb"
         string "0000110e-0000-1000-8000-00805f9b34fb"
         string "00001106-0000-1000-8000-00805f9b34fb"
         string "00001800-0000-1000-8000-00805f9b34fb"
         string "00001105-0000-1000-8000-00805f9b34fb"
         string "00001200-0000-1000-8000-00805f9b34fb"
         string "0000110c-0000-1000-8000-00805f9b34fb"
         string "00001104-0000-1000-8000-00805f9b34fb"
         string "0000110a-0000-1000-8000-00805f9b34fb"
         string "0000110b-0000-1000-8000-00805f9b34fb"
         string "00005005-0000-1000-8000-0002ee000001"
         string "00001133-0000-1000-8000-00805f9b34fb"
         string "0000112f-0000-1000-8000-00805f9b34fb"
         string "00001132-0000-1000-8000-00805f9b34fb"
         string "00001108-0000-1000-8000-00805f9b34fb"
      ]
So this is how bluetoothctl got these values. Unfortunately, this property is read-only according to adapter-api.txt, so it can’t be manipulated with a Set method.
It’s of course possible to run methods that are published in bluetoothd’s DBus API, but I didn’t find anything related to disabling services.
Download the sources for bluez 5.48-0ubuntu3.1 as bluez_5.48.orig.tar.xz (which is the version running on Mint 19).
In lib/sdp.h, change 0x1112 to 0xeb12. Same in lib/uuid.h and in src/profile.c.
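In shell terms, this boils down to something like the following (just the mechanical equivalent of the instruction above; a grep for 0x1112 afterwards verifies that nothing was missed):

$ sed -i 's/0x1112/0xeb12/g' lib/sdp.h lib/uuid.h src/profile.c
$ grep -rn 0x1112 lib/ src/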
Then in the source’s root directory, go:
$ ./configure && echo Success
and then just
$ make && echo Success
On my machine, there was a need to install libical, to make configure work, i.e.
$ sudo apt-get install libical-dev
And then replace /usr/lib/bluetooth/bluetoothd with the compiled version in src/. Keep a copy of the old executable, of course.
That didn’t work at all. The old UUID kept appearing in bluetoothctl’s output for “show”. There was a change, however: The WH-CH510 headphones refused to connect to the host, and reverted to pairing. At least I did something, I thought. But as it turned out, these headphones refused to connect to the host even after going back to the original bluetooth daemon (but had no problem with my cellphone). Y30 had no problems, as usual.
Resetting the headphones by pressing the power button and the "-" button for 7 seconds didn't help either. What eventually did the trick was to remove /usr/lib/bluetooth/bluetoothd and go "systemctl restart bluetooth", which failed, of course. Then return bluetoothd to its place, which worked, as expected. And then everything was back to normal again.
This should have been a trivial task, but it turned out quite difficult. So these are my notes for the next time. Octave 4.2.2 under Linux Mint 19, using qt5ct plugin with GNU plot (or else I get blank plots).
So this is the small function I wrote for creating a plot and a thumbnail:
function []=toimg(fname, alt)
  grid on;
  saveas(gcf, sprintf('%s.png', fname), 'png');
  print(gcf, sprintf('%s_thumb.png', fname), '-dpng', '-color', '-S280,210');
  disp(sprintf('<a href="/media/%s.png" target="_blank"><img alt="%s" src="/media/%s_thumb.png" style="width: 280px; height: 210px;"></a>', fname, alt, fname));
endfunction
The @alt argument becomes the image’s alternative text when shown on the web page.
The call to saveas() creates a 1200x900 image, and the print() call creates a 280x210 one (as specified explicitly). I take it that print() would create a 1200x900 image as well without any size argument, but I left both methods in, since this is how it ended up after some struggling, and it’s better to have both possibilities shown.
To add some extra annoyance, toimg() always plots the current figure, which is typically the last figure plotted, and not necessarily the figure that has focus. As a matter of fact, even if the current figure is closed by clicking the upper-right X, it remains the current figure. Calling toimg() will make it reappear and get plotted. Which is really weird behavior.
Apparently, the only way around this is to use figure() to select the desired current figure before calling toimg(), e.g.
>> figure(4);
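and then call the function on it, with a made-up file name and alt text:

>> toimg('freqresponse', 'Frequency response plot');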
The good news is that the figure numbers match those appearing on the windows’ titles. This also explains why the numbering doesn’t reset when closing all figure windows manually. To really clear all figures, go
>> close all hidden
Occasionally, I download / upload huge files, and it kills my internet connection for plain browsing. I don’t want to halt the download or suspend it, but merely calm it down a bit, temporarily, for doing other stuff. And then let it hog as much as it wants again.
There are many ways to do this, and I went for firejail. I suggest reading this post of mine as well on this tool.
Firejail gives you a shell prompt, which runs inside a mini-container, like those cheap virtual hosting services. Then run wget or youtube-dl as you wish from that shell.
It has access to practically everything on the computer, but the network interface is controlled. Since firejail is based on cgroups, all processes and subprocesses are collectively subject to the network bandwidth limit.
Using firejail requires setting up a bridge network interface. This is a bit of container hocus-pocus, and is necessary to get control over the network data flow. But it’s simple, and it can be done once (until the next reboot, unless the bridge is configured permanently, something I don’t bother with).
Remember: Do this once, and just don’t remove the interface when done with it.
You might need to
# apt install bridge-utils
So first, set up a new bridge device (as root):
# brctl addbr hog0
and give it an IP address that doesn’t collide with anything else on the system. Otherwise, it really doesn’t matter which:
# ifconfig hog0 10.22.1.1/24
What’s going to happen is that there will be a network interface named eth0 inside the container, which behaves as if it were connected to a real Ethernet card named hog0 on the computer. Hence the container has access to everything that is covered by the routing table (by means of IP forwarding), and is also subject to the firewall rules. With my specific firewall setting, it prevents some access, but ppp0 isn’t blocked, so who cares.
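If the container turns out to have no outbound connectivity, the usual suspects are IP forwarding and NAT on the host. This is a minimal sketch of what may be missing, assuming ppp0 as the outbound interface and the 10.22.1.0/24 subnet chosen above:

# sysctl -w net.ipv4.ip_forward=1
# iptables -t nat -A POSTROUTING -o ppp0 -s 10.22.1.0/24 -j MASQUERADE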
To remove the bridge (no real reason to do it):
# brctl delbr hog0
Launch a shell with firejail (I called it “nethog” in this example):
$ firejail --net=hog0 --noprofile --name=nethog
This starts a new shell, for which the bandwidth limit is applied. Run wget or whatever from here.
Note that despite the --noprofile flag, there are still some directories that are read-only, and some that are temporary as well. It’s done in a sensible way, though, so odds are that it won’t cause any issues. Running “df” inside the container gives an idea of what is mounted and how, and it looks scarier than the actual situation.
But be sure to check that the files that are downloaded are visible outside the container.
From another shell prompt outside the container, go something like this (doesn’t require root):
$ firejail --bandwidth=nethog set hog0 800 75
Removing bandwith limit
Configuring interface eth0
Download speed 6400kbps
Upload speed 600kbps
cleaning limits
configuring tc ingress
configuring tc egress

The two numbers are the download and upload limits in KB/s, hence the 6400kbps and 600kbps figures in the output.
To drop the bandwidth limit:
$ firejail --bandwidth=nethog clear hog0
And get the status (saying, among others, how many packets have been dropped):
$ firejail --bandwidth=nethog status
Notes:
When starting a browser from within a container, pay attention to whether it really started a new process. Using firetools can help.
If Google Chrome says “Created new window in existing browser session”, it didn’t start a new process inside the container, in which case the window isn’t subject to bandwidth limitation.
So close all windows of Chrome before kicking off a new one. Alternatively, this can be worked around by starting the container with:
$ firejail --net=hog0 --noprofile --private --name=nethog
The --private flag creates, among others, a new volatile home directory, so Chrome doesn’t detect that it’s already running. Because I use some other disk mounts for the large partitions on my computer, it’s still possible to download stuff to them from within the container.
But extra care is required with this, and regardless, the new browser doesn’t remember passwords and such from the private container.
This isn’t really related, and yet: What if I want to use a different version of Chrome momentarily, without upgrading? This can be done by downloading the .deb package, and extracting its files as shown on this post. Then copy the directory opt/google/chrome in the package’s “data” files to somewhere reachable by the jail (e.g. /bulk/transient/google-chrome-105.0/).
All that is left is to start a jail with the --private option as shown above (possibly without the --net flag, if throttling isn’t required) and go e.g.
$ /bulk/transient/google-chrome-105.0/chrome &
So the new browser can run while there are still windows of the old one open. The advantage and disadvantage of jailing is that there’s no access to the persistent profile data, so the new browser doesn’t remember passwords and such. That’s also an advantage, because it prevents the new version from messing up things for the old one.
This is how to run a Firefox browser on a cheap VPS machine (e.g. a Google Cloud VM Instance) with an X-server connection. It’s actually not a good idea, because it’s extremely slow. The correct way is to set up a VNC server, because the X server connection exchanges information on every little mouse movement or screen update. It’s a disaster on a slow connection.
My motivation was to download a 10 GB file from Microsoft’s cloud storage. With my own Internet connection it failed consistently after a Gigabyte or so (I guess the connection timed out). So the idea is to have Firefox running on a remote server with a much better connection. And then transfer the file.
Since it’s a one-off task, and I kind-of like these bizarre experiments, here we go.
These are the steps:
Edit /etc/ssh/sshd_config, making sure it reads
X11Forwarding yes
Install xauth, also necessary to open a remote X:
# apt install xauth
Then restart the ssh server:
# systemctl restart ssh
and then install Firefox
# apt install firefox-esr
There will be a lot of dependencies to install.
At this point, it’s possible to connect to the server with ssh -X and run firefox on the remote machine.
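That is, something like (hypothetical user and host, of course):

$ ssh -X user@the-server
$ firefox &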
Expect a horribly slow browser, though. Every small animation or mouse movement is transferred over the link, so it easily gets stuck. So think before every single move, and be aware that every little thing in the graphics that gets updated costs traffic.
Firefox “cleverly” announces that “a web page is slowing down your browser” all the time, but the animation of these announcements becomes part of the problem.
It’s also a good idea to keep the window small, so there isn’t much area to keep updated. And most important: Keep the mouse pointer off the remote window unless it’s needed there for a click. Otherwise things get stuck. Just get into the window, click, and leave. Or stay, if the click was for the sake of typing (or better, pasting something).
This requires installing an X-Windows server. Not a big deal.
# apt update
# apt-get install xfce4
# apt install x-window-system
Once installed, open a VNC window. It’s really easiest by clicking a button on the user’s VPS Client Area (also available on the control panel, but why go that far) and go
# startx
at command prompt to start the server. And then start the browser as usual.
It doesn’t make sense to have a login server as it slows down the boot process and eats memory. Unless a VNC connection is the intended way to always use the virtual machine.
Firefox is still quite slow, but not as bad as with ssh.
These are my notes as I upgraded Thunderbird from version 3.0.7 (released September 2010) to 91.10.0 on Linux Mint 19. That’s more than a ten-year gap, which says something about what I think about upgrading software (and that was somewhat justified, given the rubbish issues that arose, as detailed below). What eventually forced me to do this was the need to support OAuth2 in order to send emails through Google’s Gmail server (supported since 91.8.0).
Thunderbird is essentially a Firefox browser which happens to be set up with a GUI that processes emails. So for example, the classic menubar is hidden, but can be revealed by pressing Alt.
When attempting to run a new version of Thunderbird, be sure to rename ~/.thunderbird into something else, or else the current profile will be upgraded right away. With some luck, the suffixes (e.g. -release) might make Thunderbird ignore the old information, but don’t trust that.
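For example (the new name is arbitrary, of course):

$ mv ~/.thunderbird ~/.thunderbird-old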
Actually, it seems like this is handled gracefully anyhow. When I installed exactly the same version on a different position on the disk, it ignored the profile with -release suffix, and added one with -release-1. So go figure.
To select which profile to work with, invoke Thunderbird’s Profile Manager with
$ thunderbird -profilemanager &
For making the upgrade, first make a backup tarball of the original profile directory.
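Something like (with a file name of choice):

$ tar -czf thunderbird-profile-backup.tar.gz -C ~ .thunderbird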
To adopt it into the new version of Thunderbird, invoke the Profile Manager and pick Create Profile…, create a new directory (I called it “mainprofile”), and pick that as the place for the new profile. Launch Thunderbird, quit right away, and then delete the new directory. Rename the old directory to the deleted directory’s name. Then launch Thunderbird again.
Previously, I had the following add-ons:
So I remained with the first two only.
The simplest Thunderbird installation is to download it from their website and extract the tarball somewhere in the user’s own directories. For a proper installation, I installed it under /usr/local/bin/ with
# tar -C /usr/local/bin -xjvf thunderbird-91.10.0.tar.bz2
as root. And then reorganize it slightly:
# cd /usr/local/bin
# mv thunderbird thunderbird-91.10.0
# ln -s thunderbird-91.10.0/thunderbird
Right-click the account at the left bar, pick Settings and select the Composition & Addressing item. Make sure Compose messages in HTML is unchecked: Messages should be composed as plain text by default.
Then go through each of the mail identities and verify that Compose messages in HTML is unchecked under the Composition & Addressing tab.
However if Shift is pressed along with clicking Write, Reply or whatever for composing a new message, Thunderbird opens it as HTML.
Thunderbird went from the old *.mab format to SQLite for keeping the address books. So go Tools > Import…, pick Address Books…, select Mork Database, and from there pick abook.mab (and possibly repeat this with history.mab, but I skipped that, because it’s too much).
Thunderbird, like most software nowadays, wants to update itself automatically, because who cares if something goes wrong all of the sudden as long as the latest version is installed.
I messed around with this for quite long until I found the solution. So I’m leaving everything I did written here, but it’s probably enough with just adding policies.json, as suggested below.
So here’s the whole story (which you probably want to skip): Under Preferences > General > Updates I selected “Check for updates” rather than install automatically (it can’t anyhow, since I’ve installed Thunderbird as root), but then it starts nagging that there are updates.
So it’s down to setting the application properties manually by going to Preferences > General > Config Editor… (button at the bottom).
I changed app.update.promptWaitTime to 31536000 (365 days), but that didn’t have any effect. So I added an app.update.silent property and set it to true, but that didn’t solve the problem either. So the next step was to change app.update.staging.enabled to false, and that did the trick. Well, almost. With this, Thunderbird didn’t issue a notification, but its tab on the system tray got focus every day. Passive aggressive.
As a side note, there are other suggestions I’ve encountered out there: To change app.update.url so that Thunderbird doesn’t know where to look for updates, or set app.update.doorhanger false. Haven’t tried either.
So what actually worked: Create a policies.json in /usr/local/bin/thunderbird/distribution/, with "DisableAppUpdate": true, that is:

{
  "policies": {
    "DisableAppUpdate": true
  }
}
Note that the “distribution” directory must be in the same directory as the actual executable for Thunderbird (that is, follow the symbolic link if such exists). In my case, I had to add this directory myself, because of a manual installation.
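In my case, with the installation under /usr/local/bin/ as shown above, that meant something like this (with policies.json as listed above):

# mkdir /usr/local/bin/thunderbird-91.10.0/distribution
# cp policies.json /usr/local/bin/thunderbird-91.10.0/distribution/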
And, as suggested on this page, the successful deployment can be verified by restarting Thunderbird, and then looking at Help > About inside Thunderbird, which now includes a remark saying that updates have been disabled.
In hindsight, I can speculate on why this works: The authors of Thunderbird really don’t want us to turn off automatic updates, mainly because if people start running outdated software, that increases the chance of a widespread attack on some vulnerability, which can damage the software’s reputation. So Thunderbird is designed to ignore previous possibilities to turn the update off.
There’s only one case where there’s no choice: If Thunderbird was installed by the distribution. In this case, it’s installed as root, so it can’t be updated by a plain user. Hence it’s the distribution’s role to nag. And it has the same interest to nag about upgrades (reputation and that).
So I guess that’s why Thunderbird respects this JSON file only.
Exactly like 10 years ago, the trick is to create a “chrome” directory under the profile’s directory and then add the following file:
$ cat ~/.thunderbird/sdf2k45i.default/chrome/userChrome.css
@namespace url("http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"); /* set default namespace to XUL */

/* Setting the color of folders containing new messages to red */
treechildren::-moz-tree-cell-text(folderNameCol, newMessages-true) {
  font-weight: bold;
  color: red !important;
}
But unlike old Thunderbird, this file isn’t read by default. So to fix that, go to Preferences > General > Config Editor… (button at the bottom) and there change toolkit.legacyUserProfileCustomizations.stylesheets to true.
Thunderbird sends a regular notification when a new mail arrives, but exactly like last time, I want a dedicated icon that is dismissed only when I click it. The rationale is to be able to see if a new mail has arrived at a quick glance of the system tray. Neither zenity --notification nor notify-send were good for this, since they send the common notification (zenity used to just add an icon, but it “got better”).
But then there’s yad. I began with “apt install yad”, but that gave me a really old version that distorted the icon in the system bar. So I installed it from the git repository’s tag 1.0. I first attempted v12.0, but I ended up with the problem mentioned here, and didn’t want to mess around with it more.
Its “make install” adds /usr/local/bin/yad, as well as a lot of yad.mo under /usr/local/share/locale/*, a lot of yad.png under /usr/local/share/icons/*, yad.m4 under /usr/local/share/aclocal/ and yad.1 + pfd.1 in /usr/local/share/man/man1. So quite a lot of files, but in a sensible way.
With this done, the following script is kept (as executable) as /usr/local/bin/new-mail-icon:
#!/usr/bin/perl
use warnings;
use strict;
use Fcntl qw[ :flock ];
my $THEDIR="$ENV{HOME}/.thunderbird";
my $ICON="$THEDIR/green-mail-unread.png";
my $NOW=scalar localtime;
open(my $fh, "<", "$ICON")
or die "Can't open $ICON for read: $!";
# Lock the file. If it's already locked, the icon is already
# in the tray, so fail silently (and don't block).
flock($fh, LOCK_EX | LOCK_NB) or exit 0;
fork() && exit 0; # Only child continues
system('yad', '--notification', "--text=New mail on $NOW", "--image=$ICON", '--icon-size=32');
This script is the improved version of the previous one, and it prevents multiple icons in the tray much better: It locks the icon file exclusively and without blocking. Hence if there’s any other process that shows the icon, subsequent attempts to lock this file fail immediately.
Since the “yad” call takes a second or two, the script forks and exits before that, so it doesn’t delay Thunderbird’s machinery.
With this script in place, the Mailbox Alert add-on is configured as follows: Add a new item to the alert list, with the command to execute pointing at /usr/local/bin/new-mail-icon.
The sound should be set to a WAV file of choice.
Then right-click the mail folder to have covered (Local Folders in my case), pick Mailbox Alert and enable “New Mail” and “Alert for child folders”.
Then right-click “Inbox” under this folder, and verify that nothing is checked for Mailbox Alert for it (in particular, not “Default sound”). The exception is the Outbox and Drafts folders, for which “Don’t let parent folders alert for this one” should be checked, or else there’s a false alarm on autosaving and when using “send later”.
Later on, I changed my mind and added a message popup, so now all three checkboxes are ticked, and the Message tab is filled in with the popup’s text accordingly.
I picked the icon as /usr/local/bin/thunderbird-91.10.0/chrome/icons/default/default32.png (this depends on the installation path, of course).
I’m not 100% clear why the original alert didn’t show up, even though “Show an alert” was still checked under “Incoming Mails” at Preferences > General. I actually preferred the good old one, but it seems like Mailbox Alert muted it. I unchecked it anyhow, just to be safe.
It’s not a real upgrade if a weird problem doesn’t occur out of the blue.
So attempting to Get Messages from the pop3 server at localhost failed quite oddly: Every time I checked the box to use Password Manager to remember the password, it got stuck with “Main: Connected to 127.0.0.1…”. But checking with Wireshark, it turned out that Thunderbird asked the server about its capabilities (CAPA), got an answer, and then did nothing for about 10 seconds, after which it closed the connection.
On the other hand, when I didn’t request remembering the password, it went fine, and so did subsequent attempts to fetch mail from the pop3 server.
Another thing was that when attempting to use Gmail’s server, I went through the entire OAuth2 thing (the browser window, and asking for my permissions) but then the mail was just stuck on “Sending message”. Like, forever.
So I followed the advice here, and deleted key3.db, key4.db, secmod.db, cert*.db and all signon* files with Thunderbird not running of course. Really old stuff.
And that fixed it.
The files that were apparently created when things got fine were logins.json, cert9.db, key4.db and pkcs11.txt. But I might have missed something.
This happened occasionally when I navigated from one mail folder to another. The solution I found somewhere was to delete all .msf files from where Thunderbird keeps the mail info, and that did the trick. Ehm, just for a while. After a few days, it was back.
As a side effect, it forgot the display settings for each folder, i.e. which columns to show and in what order.
These .msf files are apparently indexes to the files containing the actual messages, and indeed it took a few seconds before something appeared when I went to view each mail folder for the first time. At which time the new .msf files went from zero bytes to a significant figure.
Since the problem remained, I watched “top” when the GUI got stuck. And indeed, Thunderbird’s process was at 100%, but so was a completely different process: caribou. Which is a virtual keyboard. Do I need one? No. So to get rid of this process (which runs all the time, but doesn’t eat a lot of CPU normally), go to Accessibility settings, the Keyboard tab, and turn “Enable the on-screen keyboard” off. The process is gone, and so is the problem with the GUI? Nope. It’s basically the same, but instead of two processes taking 100% CPU, now it’s Thunderbird alone. I have no idea what to do next.
I had some really annoying bots on one of my websites. Of the sort that make a million requests (like really, a million) per month, identifying themselves as a browser.
So IP blocking it is. I went for a minimalistic DIY approach. There are plenty of tools out there, but my experience with things like this is that in the end, it’s me and the scripts. So I might as well write them myself.
Iptables has an IP set module, which allows feeding it with a set of arbitrary IP addresses. Internally, it keeps these addresses in a hash, so it’s an efficient way to keep track of a large number of addresses.
IP sets have been in the kernel for ages, but the feature has to be opted in with CONFIG_IP_SET. Which it most likely is.
The ipset utility may need to be installed, with something like
# apt install ipset
There seems to be a protocol version mismatch with the kernel, which apparently is a non-issue. But every time something goes wrong with ipset, a warning message about this mismatch appears, which is misleading. It looks something like this:
# ipset [ ... something stupid or malformed ... ]
ipset v6.23: Kernel support protocol versions 6-7 while userspace supports protocol versions 6-6
[ ... some error message related to the stupidity ... ]
So the important thing to be aware of is that odds are that the problem isn’t the version mismatch, but between chair and keyboard.
A quick session:

# ipset create testset hash:ip
# ipset add testset 1.2.3.4
# iptables -I INPUT -m set --match-set testset src -j DROP
# ipset del testset 1.2.3.4
Attempting to add an IP address that is already in the set causes a warning, and the address isn’t added again. So there’s no need to check if the address is already there. Besides, there’s the -exist option, which is really great.
List the members of the IP set:
# ipset -L
An entry can have a timeout feature, which works exactly as one would expect: The rule vanishes after the timeout expires. The timeout entry in ipset -L counts down.
For this to work, the set must be created with a default timeout attribute. Zero means that timeout is disabled (which I chose as a default in this example).
# ipset create testset hash:ip timeout 0
# ipset add testset 1.2.3.4 timeout 10
The ‘-exist’ flag causes ipset to re-add an existing entry, which also resets its timeout. So this is the way to keep the list fresh.
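So a refresh boils down to repeating the add:

# ipset add -exist testset 1.2.3.4 timeout 10

which succeeds whether or not the entry is already there, and restarts its countdown.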
It’s tempting to put the DROP rule with --match-set first, because hey, let’s give those intruders the boot right away. But doing that, there might be TCP connections lingering, because the last FIN packet is caught by the firewall as the new rule is added. Given that adding an IP address is the result of a flood of requests, this is a realistic scenario.
The solution is simple: There’s most likely a “state RELATED,ESTABLISHED” rule somewhere in the list. So push it to the top. The rationale is simple: If a connection has begun, don’t chop it in the middle in any case. It’s the first packet that we want killed.
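In terms of the testset example above, the intended order is along these lines (a sketch, assuming a plain INPUT chain):

# iptables -I INPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
# iptables -A INPUT -m set --match-set testset src -j DROP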
The rule in iptables must refer to an existing set. So if the rule that relies on the set is part of the persistent firewall rules, the set must be created before the script that brings up iptables runs.
This is easily done by adding a file like this as /usr/share/netfilter-persistent/plugins.d/10-ipset:
#!/bin/sh
IPSET=/sbin/ipset
SET=mysiteset
case "$1" in
start|restart|reload|force-reload)
$IPSET destroy
$IPSET create $SET hash:ip timeout 0
;;
save)
echo "ipset-persistent: The save option does nothing"
;;
stop|flush)
$IPSET flush $SET
;;
*)
echo "Usage: $0 {start|restart|reload|force-reload|save|flush}" >&2
exit 1
;;
esac
exit 0
The idea is that the index 10 in the file’s name is smaller than the rule that sets up iptables, so it runs first.
This script is a dirty hack, but hey, it works. There’s a small project on this, for those who like to do it properly.
The operating system in question is systemd-based, but this old school style is still in effect.
Since all offending requests came from the same country (cough, cough, China, from more than 4000 different IP addresses), I’m considering blocking them in one go. A list of 4000+ IP addresses that I busted in August 2022 with aggressive bots (all from China) can be downloaded as a simple compressed text file.
So the idea is to go something like

ipset create foo hash:net
ipset add foo 192.168.0.0/24
ipset add foo 10.1.0.0/16
ipset add foo 192.168.0/24
and download the per-country IP ranges from IPdeny. That’s a simple and crude tool for denial by geolocation. The only thing that puts me down a bit is that it’s > 7000 ranges, so I wonder if that doesn’t put a load on the server. But what really counts is the number of distinct subnet mask sizes, because each mask size has its own hash. So if the list covers all possible sizes, from a full /32 down to, say, /16, there are 17 hashes to look up for each arriving packet.
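Feeding such a zone file into a set could go something like this (a sketch, assuming cn.zone as downloaded from IPdeny, with one CIDR range per line):

# ipset create country hash:net
# while read net; do ipset add -exist country "$net"; done < cn.zone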
On the other hand, since the rule should be placed after the “state RELATED,ESTABLISHED” rule, it only covers SYN packets. And if this whole thing is put as late as possible in the list of rules, it boils down to handling only packets that are intended for the web server’s ports, or those that are going to be dropped anyhow. So compared with the CPU cycles of handling the http request, even 17 hash lookups isn’t all that much.
The biggest caveat is however if other websites are colocated on the server. It’s one thing to block offending IPs, but blocking a whole country from all sites, that’s a bit too much.
Note to self: In the end, I wrote a little Perl-XS module that says if the IP belongs to a group. Look for byip.pm.
The Perl script that performs the blacklisting is crude and inaccurate, but simple. This is the part to tweak and play with, and in particular adapt to each specific website. It’s all about detecting abnormal access.
Truth be told, I replaced this script with a more sophisticated mechanism pretty much right away on my own system. But what’s really interesting is the calls to ipset.
This script reads through Apache’s access log file, and analyzes each minute in time (as in 60 seconds). In other words, it groups all accesses that have the same timestamp, with the seconds part ignored. Note that the regex part that captures $time in the script drops the trailing :\d\d part.
If the same IP address appears 50 times or more within such a minute, that address is blacklisted, with a timeout of 86400 seconds (24 hours). Log entries that correspond to page requisites and such (images, style files etc.) are skipped for this purpose. Otherwise, it’s easy to reach 50 accesses within a minute with legit web browsing.
There are several imperfections about this script, among others: the accesses of the last minute in the log are never examined, because the blacklisting check is made only when a new timestamp appears, and likewise, a burst that is spread across two adjacent minutes can stay below the threshold in both.
The script goes as follows:
#!/usr/bin/perl
use warnings;
use strict;
my $logfile = '/var/log/mysite.com/access.log';
my $limit = 50; # 50 accesses per minute
my $timeout = 86400;
open(my $in, "<", $logfile)
or die "Can't open $logfile for read: $!\n";
my $current = '';
my $l;
my %h;
my %blacklist;
while (defined ($l = <$in>)) {
my ($ip, $time, $req) = ($l =~ /^([^ ]+).*?\[(.+?):\d\d[ ].*?\"\w+[ ]+([^\"]+)/);
unless (defined $ip) {
# warn("Failed to parse line $l\n");
next;
}
next
if ($req =~ /^\/(?:media\/|robots\.txt)/);
unless ($time eq $current) {
foreach my $k (sort keys %h) {
$blacklist{$k} = 1
if ($h{$k} >= $limit);
}
%h = ();
$current = $time;
}
$h{$ip}++;
}
close $in;
foreach my $k (sort keys %blacklist) {
system('/sbin/ipset', 'add', '-exist', 'mysiteset', $k, 'timeout', $timeout);
}
It has to be run as root, of course. Most likely as a cronjob.
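For example (with a hypothetical name and path for the script), an entry in /etc/cron.d/ that runs it every ten minutes could read:

*/10 * * * * root /usr/local/bin/blacklist-bots.pl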
Due to an incident that is beyond the scope of this blog, I wanted to put a 24/7 camera that watched a certain something, just in case that incident repeated itself.
Having a laptop that I barely use, and a cheap e-bay web camera, I thought I’d set up something and let ffmpeg do the job.
I’m not sure if a Raspberry Pi would be up for this job, even when connected to an external hard disk through USB. It depends much on how well ffmpeg performs on that platform. Haven’t tried. The laptop’s clear advantage is when there’s a brief power outage.
Overall verdict: It’s as good as the stability of the USB connection with the camera.
Note to self: I keep this in the misc/utils git repo, under surveillance-cam/.
Show the webcam’s image on screen, the ffmpeg way:
$ ffplay -f video4linux2 /dev/video0
Let ffplay list the supported formats:
$ ffplay -f video4linux2 -list_formats all /dev/video0
Or with a dedicated tool:
# apt install v4l-utils
and then
$ v4l2-ctl --list-formats-ext -d /dev/video0
Possibly also use “lsusb -v” on the device: It lists the format information, not necessarily in a user-friendly way, but that’s the actual source of information.
Get all parameters that can be tweaked:
$ v4l2-ctl --all
See an example output for this command at the bottom of this post.
If control over the exposure time is available, it will be listed as “exposure_absolute” (none of the webcams I tried had this). The exposure time is given in units of 100µs (see e.g. the definition of V4L2_CID_EXPOSURE_ABSOLUTE).
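On a camera that does expose this control, setting a manual exposure of 10 ms might look like this (an assumption on my side, as none of mine had it; on UVC cameras, exposure_auto typically has to be switched to manual mode first):

$ v4l2-ctl --set-ctrl=exposure_auto=1
$ v4l2-ctl --set-ctrl=exposure_absolute=100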
Get a specific parameter, e.g. brightness
$ v4l2-ctl --get-ctrl=brightness
brightness: 137
Set the control (can be done while the camera is capturing video)
$ v4l2-ctl --set-ctrl=brightness=255
This is a simple bash script that creates .mp4 files from the captured video:
#!/bin/bash
OUTDIR=/extra/videos
SRC=/dev/v4l/by-id/usb-Generic*
DURATION=3600 # In seconds
while [ 1 ]; do
TIME=`date +%F-%H%M%S`
if ! ffmpeg -f video4linux2 -i $SRC -t $DURATION -r 10 $OUTDIR/video-$TIME.mp4 < /dev/null ; then
echo 2-2 | sudo tee /sys/bus/usb/drivers/usb/unbind
echo 2-2 | sudo tee /sys/bus/usb/drivers/usb/bind
sleep 5;
fi
done
Comments on the script: The TIME variable puts a timestamp in each file’s name, and DURATION (via ffmpeg’s -t flag) limits each file to an hour, so a new file is started every hour. The -r 10 flag sets the frame rate to 10 fps, and the redirection from /dev/null prevents ffmpeg from grabbing the terminal’s stdin. If ffmpeg exits with an error (typically because the USB connection with the camera failed), the two echos into unbind and bind force a disconnection and reconnection of the device (2-2 is its position on my machine, of course).
First, the spoiler: I solved this problem by putting a physical weight on the USB cable, close to the plug. This held the connector steady in place, and the vast majority of the problems were gone.
I also have a separate post about how I tried to make Linux ignore the offending bogus keyboard. Needless to say, that failed (because either you ban the entire USB device, or you don’t ban at all).
This is the smoking gun in /var/log/Xorg.0.log: Lots of
[1194182.076] (II) config/udev: Adding input device USB2.0 PC CAMERA: USB2.0 PC CAM (/dev/input/event421)
[1194182.076] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: Applying InputClass "evdev keyboard catchall"
[1194182.076] (II) Using input driver 'evdev' for 'USB2.0 PC CAMERA: USB2.0 PC CAM'
[1194182.076] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: always reports core events
[1194182.076] (**) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Device: "/dev/input/event421"
[1194182.076] (--) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Vendor 0x1908 Product 0x2311
[1194182.076] (--) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Found keys
[1194182.076] (II) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Configuring as keyboard
[1194182.076] (EE) Too many input devices. Ignoring USB2.0 PC CAMERA: USB2.0 PC CAM
[1194182.076] (II) UnloadModule: "evdev"
and at some point the sad end:
[1194192.408] (II) config/udev: Adding input device USB2.0 PC CAMERA: USB2.0 PC CAM (/dev/input/event423)
[1194192.408] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: Applying InputClass "evdev keyboard catchall"
[1194192.408] (II) Using input driver 'evdev' for 'USB2.0 PC CAMERA: USB2.0 PC CAM'
[1194192.408] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: always reports core events
[1194192.408] (**) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Device: "/dev/input/event423"
[1194192.445] (EE)
[1194192.445] (EE) Backtrace:
[1194192.445] (EE) 0: /usr/bin/X (xorg_backtrace+0x48) [0x564128416d28]
[1194192.445] (EE) 1: /usr/bin/X (0x56412826e000+0x1aca19) [0x56412841aa19]
[1194192.445] (EE) 2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f6e4d8b4000+0x10340) [0x7f6e4d8c4340]
[1194192.445] (EE) 3: /usr/lib/xorg/modules/input/evdev_drv.so (0x7f6e45c4c000+0x39f5) [0x7f6e45c4f9f5]
[1194192.445] (EE) 4: /usr/lib/xorg/modules/input/evdev_drv.so (0x7f6e45c4c000+0x68df) [0x7f6e45c528df]
[1194192.445] (EE) 5: /usr/bin/X (0x56412826e000+0xa1721) [0x56412830f721]
[1194192.446] (EE) 6: /usr/bin/X (0x56412826e000+0xb731b) [0x56412832531b]
[1194192.446] (EE) 7: /usr/bin/X (0x56412826e000+0xb7658) [0x564128325658]
[1194192.446] (EE) 8: /usr/bin/X (WakeupHandler+0x6d) [0x5641282c839d]
[1194192.446] (EE) 9: /usr/bin/X (WaitForSomething+0x1bf) [0x5641284142df]
[1194192.446] (EE) 10: /usr/bin/X (0x56412826e000+0x55771) [0x5641282c3771]
[1194192.446] (EE) 11: /usr/bin/X (0x56412826e000+0x598aa) [0x5641282c78aa]
[1194192.446] (EE) 12: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0xf5) [0x7f6e4c2f3ec5]
[1194192.446] (EE) 13: /usr/bin/X (0x56412826e000+0x44dde) [0x5641282b2dde]
[1194192.446] (EE)
[1194192.446] (EE) Segmentation fault at address 0x10200000adb
[1194192.446] (EE) Fatal server error:
[1194192.446] (EE) Caught signal 11 (Segmentation fault). Server aborting
[1194192.446] (EE)
The thing is that the webcam presents itself as a keyboard, among others. I guess the chipset has inputs for control buttons (which this specific webcam doesn’t have), so as the USB device goes on and off, X windows registers the nonexistent keyboard on and off, and eventually some bug causes it to crash (note that the number of the event device is 423, so there were quite a few on and offs). It might very well be that the camera connected, started some kind of connection event handler, which didn’t finish its job before the camera disconnected. Somewhere in the code, the handler fetched information that didn’t exist, got a bad pointer instead (NULL?) and used it. Boom. Just a wild guess, but this is the typical scenario.
The crash can be avoided by making X windows ignore this “keyboard”. I did this by adding a new file named /usr/share/X11/xorg.conf.d/10-nocamera.conf as follows:
# Ignore bogus button on webcam
Section "InputClass"
   Identifier "Blacklist USB webcam button as keyboard"
   MatchUSBID "1908:2311"
   Option "Ignore" "on"
EndSection
This way, X windows didn’t fiddle with the bogus buttons, and hence didn’t care if they suddenly went away.
Anyhow, it’s a really old OS (Ubuntu 14.04.1) so this bug might have been solved long ago.
Another problem with this wobbling is that /dev/input/ becomes crowded with a lot of eventN files:
$ ls /dev/input/event*
/dev/input/event0    /dev/input/event267  /dev/input/event295
/dev/input/event1    /dev/input/event268  /dev/input/event296
/dev/input/event10   /dev/input/event269  /dev/input/event297
/dev/input/event11   /dev/input/event27   /dev/input/event298
/dev/input/event12   /dev/input/event270  /dev/input/event299
/dev/input/event13   /dev/input/event271  /dev/input/event3
/dev/input/event14   /dev/input/event272  /dev/input/event30
/dev/input/event15   /dev/input/event273  /dev/input/event300
/dev/input/event16   /dev/input/event274  /dev/input/event301
/dev/input/event17   /dev/input/event275  /dev/input/event302
/dev/input/event18   /dev/input/event276  /dev/input/event303
/dev/input/event19   /dev/input/event277  /dev/input/event304
/dev/input/event2    /dev/input/event278  /dev/input/event305
/dev/input/event20   /dev/input/event279  /dev/input/event306
/dev/input/event21   /dev/input/event28   /dev/input/event307
/dev/input/event22   /dev/input/event280  /dev/input/event308
/dev/input/event23   /dev/input/event281  /dev/input/event309
/dev/input/event24   /dev/input/event282  /dev/input/event31
/dev/input/event25   /dev/input/event283  /dev/input/event310
/dev/input/event256  /dev/input/event284  /dev/input/event311
/dev/input/event257  /dev/input/event285  /dev/input/event312
/dev/input/event258  /dev/input/event286  /dev/input/event313
/dev/input/event259  /dev/input/event287  /dev/input/event314
/dev/input/event26   /dev/input/event288  /dev/input/event315
/dev/input/event260  /dev/input/event289  /dev/input/event316
/dev/input/event261  /dev/input/event29   /dev/input/event4
/dev/input/event262  /dev/input/event290  /dev/input/event5
/dev/input/event263  /dev/input/event291  /dev/input/event6
/dev/input/event264  /dev/input/event292  /dev/input/event7
/dev/input/event265  /dev/input/event293  /dev/input/event8
/dev/input/event266  /dev/input/event294  /dev/input/event9
Cute, huh? And this is even before there was a problem. So what does X windows make of this?
$ xinput list
⎡ Virtual core pointer                  id=2   [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer        id=4   [slave  pointer  (2)]
⎜   ↳ ELAN Touchscreen                  id=9   [slave  pointer  (2)]
⎜   ↳ SynPS/2 Synaptics TouchPad        id=13  [slave  pointer  (2)]
⎣ Virtual core keyboard                 id=3   [master keyboard (2)]
    ↳ Virtual core XTEST keyboard       id=5   [slave  keyboard (3)]
    ↳ Power Button                      id=6   [slave  keyboard (3)]
    ↳ Video Bus                         id=7   [slave  keyboard (3)]
    ↳ Power Button                      id=8   [slave  keyboard (3)]
    ↳ Lenovo EasyCamera: Lenovo EasyC   id=10  [slave  keyboard (3)]
    ↳ Ideapad extra buttons             id=11  [slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard      id=12  [slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                    id=14  [slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                    id=15  [slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                    id=16  [slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                    id=17  [slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                    id=18  [slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                    id=19  [slave  keyboard (3)]
Now, let me assure you that there were not six webcams connected when I did this. Actually, not a single one.
Anyhow, I didn’t dig further into this. The real problem is that all of these /dev/input/event files have the same major. Which means that when there are really a lot of them, the system runs out of minors. So if the normal kernel log for plugging in the webcam was this,
usb 2-2: new high-speed USB device number 22 using xhci_hcd
usb 2-2: New USB device found, idVendor=1908, idProduct=2311
usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 2-2: Product: USB2.0 PC CAMERA
usb 2-2: Manufacturer: Generic
uvcvideo: Found UVC 1.00 device USB2.0 PC CAMERA (1908:2311)
uvcvideo 2-2:1.0: Entity type for entity Processing 2 was not initialized!
uvcvideo 2-2:1.0: Entity type for entity Camera 1 was not initialized!
input: USB2.0 PC CAMERA: USB2.0 PC CAM as /devices/pci0000:00/0000:00:14.0/usb2/2-2/2-2:1.0/input/input274
after all minors ran out, I got this:
usb 2-2: new high-speed USB device number 24 using xhci_hcd
usb 2-2: New USB device found, idVendor=1908, idProduct=2311
usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 2-2: Product: USB2.0 PC CAMERA
usb 2-2: Manufacturer: Generic
uvcvideo: Found UVC 1.00 device USB2.0 PC CAMERA (1908:2311)
uvcvideo 2-2:1.0: Entity type for entity Processing 2 was not initialized!
uvcvideo 2-2:1.0: Entity type for entity Camera 1 was not initialized!
media: could not get a free minor
And then immediately after:
systemd-udevd[4487]: Failed to apply ACL on /dev/video2: No such file or directory
systemd-udevd[4487]: Failed to apply ACL on /dev/video2: No such file or directory
Why these eventN files aren’t removed is unclear. The kernel is pretty old, v4.14, so maybe this has been fixed since.
This is a small & junky webcam. Clearly no control over the exposure time.
$ v4l2-ctl --all -d /dev/v4l/by-id/usb-Generic_USB2.0_PC_CAMERA-video-index0
Driver Info (not using libv4l2):
	Driver name   : uvcvideo
	Card type     : USB2.0 PC CAMERA: USB2.0 PC CAM
	Bus info      : usb-0000:00:14.0-2
	Driver version: 4.14.0
	Capabilities  : 0x84200001
		Video Capture
		Streaming
		Device Capabilities
	Device Caps   : 0x04200001
		Video Capture
		Streaming
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
	Width/Height  : 640/480
	Pixel Format  : 'YUYV'
	Field         : None
	Bytes per Line: 1280
	Size Image    : 614400
	Colorspace    : Unknown (00000000)
	Custom Info   : feedcafe
Crop Capability Video Capture:
	Bounds      : Left 0, Top 0, Width 640, Height 480
	Default     : Left 0, Top 0, Width 640, Height 480
	Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 640, Height 480
Selection: crop_bounds, Left 0, Top 0, Width 640, Height 480
Streaming Parameters Video Capture:
	Capabilities     : timeperframe
	Frames per second: 30.000 (30/1)
	Read buffers     : 0
brightness (int)             : min=0 max=255 step=1 default=128 value=128
contrast (int)               : min=0 max=255 step=1 default=130 value=130
saturation (int)             : min=0 max=255 step=1 default=64 value=64
hue (int)                    : min=-127 max=127 step=1 default=0 value=0
gamma (int)                  : min=1 max=8 step=1 default=4 value=4
power_line_frequency (menu)  : min=0 max=2 default=1 value=1
sharpness (int)              : min=0 max=15 step=1 default=13 value=13
backlight_compensation (int) : min=1 max=5 step=1 default=1 value=1