Latest posts for tag debian

I acquired some unusual input devices to experiment with, like a CNC control panel and a Bluetooth pedal page turner.
These identify and behave as keyboards, sending nice and simple keystrokes, and can be accessed with no drivers or other special software. However, their keystrokes appear together with those from normal keyboards, which is the expected default when plugging in a keyboard, but not what I want in this case.
I'd also like them to be readable via evdev and accessible by my own user.
Here's the udev rule I cooked up to handle this use case:
# Handle the CNC control panel
SUBSYSTEM=="input", ENV{ID_VENDOR}=="04d9", ENV{ID_MODEL}=="1203", \
    OWNER="enrico", ENV{ID_INPUT}=""
# Handle the Bluetooth page turner
SUBSYSTEM=="input", ENV{ID_BUS}=="bluetooth", ENV{LIBINPUT_DEVICE_GROUP}=="*/…mac…", ENV{ID_INPUT_KEYBOARD}=="1", \
    OWNER="enrico", ENV{ID_INPUT}="", SYMLINK+="input/by-id/bluetooth-…mac…-kbd"
SUBSYSTEM=="input", ENV{ID_BUS}=="bluetooth", ENV{LIBINPUT_DEVICE_GROUP}=="*/…mac…", ENV{ID_INPUT_TABLET}=="1", \
    OWNER="enrico", ENV{ID_INPUT}="", SYMLINK+="input/by-id/bluetooth-…mac…-tablet"
The Bluetooth device didn't have standard rules to create /dev/input/by-id/ symlinks, so I added them. In my own code, I watch /dev/input/by-id with inotify to handle when devices appear or disappear.
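A minimal sketch of such a watcher, assuming the inotify_simple library (not the actual code I use):
# Watch /dev/input/by-id for devices appearing or disappearing
from inotify_simple import INotify, flags

WATCH_DIR = "/dev/input/by-id"

inotify = INotify()
inotify.add_watch(WATCH_DIR, flags.CREATE | flags.DELETE)

while True:
    for event in inotify.read():
        if event.mask & flags.CREATE:
            print(f"appeared: {WATCH_DIR}/{event.name}")
        if event.mask & flags.DELETE:
            print(f"disappeared: {WATCH_DIR}/{event.name}")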
I used udevadm info /dev/input/event… to see what I could use to identify the device.
The Static device configuration via udev page of libinput's documentation describes the various elements specific to the input subsystem.
Grepping rule files in /usr/lib/udev/rules.d was useful to see syntax examples.
udevadm test /dev/input/event… was invaluable for syntax checking and testing my rule file while working on it.
Finally, this is an extract of quick prototype Python code to read keys from the CNC control panel:
from typing import IO, Any, Iterator

import libevdev

KEY_MAP = {
    libevdev.EV_KEY.KEY_GRAVE: "EMERGENCY",
    # InputEvent(EV_KEY, KEY_LEFTALT, 1)
    libevdev.EV_KEY.KEY_R: "CYCLE START",
    libevdev.EV_KEY.KEY_F5: "SPINDLE ON/OFF",
    # InputEvent(EV_KEY, KEY_RIGHTCTRL, 1)
    libevdev.EV_KEY.KEY_W: "REDO",
    # InputEvent(EV_KEY, KEY_LEFTALT, 1)
    libevdev.EV_KEY.KEY_N: "SINGLE STEP",
    # InputEvent(EV_KEY, KEY_LEFTCTRL, 1)
    libevdev.EV_KEY.KEY_O: "ORIGIN POINT",
    libevdev.EV_KEY.KEY_ESC: "STOP",
    libevdev.EV_KEY.KEY_KPPLUS: "SPEED UP",
    libevdev.EV_KEY.KEY_KPMINUS: "SLOW DOWN",
    libevdev.EV_KEY.KEY_F11: "F+",
    libevdev.EV_KEY.KEY_F10: "F-",
    libevdev.EV_KEY.KEY_RIGHTBRACE: "J+",
    libevdev.EV_KEY.KEY_LEFTBRACE: "J-",
    libevdev.EV_KEY.KEY_UP: "+Y",
    libevdev.EV_KEY.KEY_DOWN: "-Y",
    libevdev.EV_KEY.KEY_LEFT: "-X",
    libevdev.EV_KEY.KEY_RIGHT: "+X",
    libevdev.EV_KEY.KEY_KP7: "+A",
    libevdev.EV_KEY.KEY_Q: "-A",
    libevdev.EV_KEY.KEY_PAGEDOWN: "-Z",
    libevdev.EV_KEY.KEY_PAGEUP: "+Z",
}


class KeyReader:
    def __init__(self, path: str):
        self.path = path
        self.fd: IO[bytes] | None = None
        self.device: libevdev.Device | None = None

    def __enter__(self):
        self.fd = open(self.path, "rb")
        self.device = libevdev.Device(self.fd)
        return self

    def __exit__(self, exc_type, exc, tb):
        self.device = None
        self.fd.close()
        self.fd = None

    def events(self) -> Iterator[dict[str, Any]]:
        for e in self.device.events():
            if e.type == libevdev.EV_KEY:
                if (val := KEY_MAP.get(e.code)):
                    yield {
                        "name": val,
                        "value": e.value,
                        "sec": e.sec,
                        "usec": e.usec,
                    }
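Hypothetical usage of the prototype (the device path is made up; use the event node or by-id symlink of the actual device):
# value is 1 on press, 0 on release, 2 on autorepeat
with KeyReader("/dev/input/by-id/usb-04d9_1203-event-kbd") as reader:
    for event in reader.events():
        print(event["name"], event["value"])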
Edited: added rules to handle the Bluetooth page turner
- str.endswith() can take a tuple of possible endings instead of a single string
About JACK and Debian
- There are 3 JACK implementations: jackd1, jackd2, pipewire-jack.
- jackd1 has mostly been superseded by jackd2 and, as far as I understand, can be ignored
- pipewire-jack integrates well with pipewire and the rest of the Linux audio world
- jackd2 is the native JACK server. When started it handles the sound card directly, and will steal it from pipewire. Non-JACK audio applications will likely cease to see the sound card until JACK is stopped and wireplumber is restarted. Pipewire should be able to keep working as a JACK client but I haven't gone down that route yet
- pipewire-jack mostly works. At some point I experienced glitches in complex JACK apps like giada or ardour that went away after switching to jackd2. I have not investigated the glitches further
- So: try things with pw-jack. If you see odd glitches, try without pw-jack to use the native jackd2. Keep in mind, if you do so, that you will lose standard pipewire until you stop jackd2 and restart wireplumber.
I have Python code for reading a heart rate monitor.
I have Python code to generate MIDI events.
Could I resist putting them together? Clearly not.
Here's Jack Of Hearts, a JACK MIDI drum loop generator that uses the heart rate for BPM, and an improvised way to compute heart rate increase/decrease to add variations in the drum pattern.
It's very simple-minded and silly. To me it was a fun way of putting unrelated things together, and Python worked very well for it.
I had a go at trying to figure out how to generate arbitrary MIDI events and send them out over a JACK MIDI channel.
Setting up JACK and Pipewire
Pipewire has a JACK interface, which in theory means one could use JACK clients out of the box without extra setup.
In practice, one needs to tell JACK clients which set of libraries to use to communicate with the server, and Pipewire's JACK server is not the default choice.
To tell JACK clients to use Pipewire's server, you can either:
- on a client-by-client basis, wrap the commands with pw-jack
- to change the system default:
cp /usr/share/doc/pipewire/examples/ld.so.conf.d/pipewire-jack-*.conf /etc/ld.so.conf.d/
and run ldconfig
(see the Debian wiki for details)
Programming with JACK
Python has a JACK client library that has worked flawlessly for me so far.
Everything with JACK is designed around minimizing latency. Everything happens around a callback that gets called from a separate thread, and which gets a buffer to fill with events.
All the heavy processing needs to happen outside the callback, and the callback is only there to do the minimal amount of work needed to shovel the data your application produced into JACK channels.
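As an illustration of this structure, here is a minimal sketch of a JACK client skeleton using the Python jack library (it is not pyeep's actual code):
import jack

client = jack.Client("sketch")
outport = client.midi_outports.register("midi out")

@client.set_process_callback
def on_process(frames: int) -> None:
    # Called from JACK's realtime thread: do as little as possible here
    outport.clear_buffer()
    # ...write the MIDI events due in this period into outport

with client:
    # The client runs until the context exits
    input("press enter to quit\n")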
Generating MIDI messages
The Mido library can be used to parse and create MIDI messages, and it has also worked flawlessly for me so far.
One needs to study a bit what kinds of MIDI messages one needs to generate (like "note on", "note off", "program change") and what arguments they take.
It also helps to read about the General MIDI standard, which defines mappings between well-known instruments and the channel and instrument numbers used in MIDI messages.
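For example (note and program numbers here are illustrative):
import mido

# program_change selects an instrument; note_on starts playing a note
prog = mido.Message("program_change", program=0, channel=0)
note = mido.Message("note_on", note=60, velocity=64, channel=0)
print(note.bytes())  # raw MIDI bytes: [144, 60, 64]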
A timed message queue
To keep a queue of events that happen over time, I implemented a Delta List that indexes events by their future frame number.
I called the humble container for my audio experiments pyeep and here's my delta list implementation.
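The idea, sketched with a heap instead of pyeep's actual delta encoding (a minimal version for illustration):
import heapq
import itertools

class FrameQueue:
    """Timed event queue indexed by absolute frame number."""

    def __init__(self) -> None:
        self._heap: list = []
        self._counter = itertools.count()  # tie-breaker for equal frames

    def add(self, frame: int, event) -> None:
        heapq.heappush(self._heap, (frame, next(self._counter), event))

    def pop_until(self, frame: int):
        """Yield all events due at or before the given frame."""
        while self._heap and self._heap[0][0] <= frame:
            yield heapq.heappop(self._heap)[2]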
A JACK player
The simple JACK MIDI player backend is also in pyeep.
It needs to protect the delta list with a mutex since we are working across thread boundaries, but it tries to do as little work under lock as possible, to minimize the risk of locking the realtime thread for too long.
The play method converts delays in seconds to frame counts, and the on_process callback moves events from the queue to the JACK output.
Here's an example script that plays a simple drum pattern:
#!/usr/bin/python3
# Example JACK midi event generator
#
# Play a drum pattern over JACK
import time

from pyeep.jackmidi import MidiPlayer

# See:
# https://soundprogramming.net/file-formats/general-midi-instrument-list/
# https://www.pgmusic.com/tutorial_gm.htm
DRUM_CHANNEL = 9

with MidiPlayer("pyeep drums") as player:
    beat: int = 0
    while True:
        player.play("note_on", velocity=64, note=35, channel=DRUM_CHANNEL)
        player.play("note_off", note=38, channel=DRUM_CHANNEL, delay_sec=0.5)
        if beat == 0:
            player.play("note_on", velocity=100, note=38, channel=DRUM_CHANNEL)
            player.play("note_off", note=36, channel=DRUM_CHANNEL, delay_sec=0.3)
        if beat + 1 == 2:
            player.play("note_on", velocity=100, note=42, channel=DRUM_CHANNEL)
            player.play("note_off", note=42, channel=DRUM_CHANNEL, delay_sec=0.3)
        beat = (beat + 1) % 4
        time.sleep(0.3)
Running the example
I ran the jack_drums script, and of course not much happened.
First I needed a MIDI synthesizer. I installed fluidsynth and ran it on the command line with no arguments. It registered with JACK, ready to do its thing.
Then I connected things together. I used qjackctl, opened the graph view, and connected the MIDI output of "pyeep drums" to the "FLUID Synth input port".
fluidsynth's output was already automatically connected to the audio card and I started hearing the drums playing! 🥁️🎉️
I bought myself a cheap wearable Bluetooth LE heart rate monitor in order to play with it, and this is a simple Python script to monitor it and plot data.
Bluetooth LE
I was surprised that these things seem decently interoperable.
You can use hcitool to scan for devices:
hcitool lescan
You can then use gatttool to connect to devices and poke at them interactively from a command line.
Bluetooth LE from Python
There is a nice library called Bleak, which is also packaged in Debian. It's modern Python with asyncio and works beautifully! (A short sketch of using it follows the notes below.)
Heart rate monitors
Things I learnt:
- The UUID for the heart rate interface starts with 00002a37.
- The UUID for checking battery status starts with 00002a19.
- A longer list of UUIDs is here.
- The layout of heart rate data packets and some Python code to parse them
- What are RR values
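A minimal Bleak sketch putting those notes together (the address is made up, and the flag parsing follows my reading of the heart rate profile):
import asyncio

from bleak import BleakClient

ADDRESS = "00:11:22:33:44:55"  # use your monitor's address
HR_UUID = "00002a37-0000-1000-8000-00805f9b34fb"  # heart rate measurement

def on_heart_rate(sender, data: bytearray) -> None:
    # Byte 0 is a flags field: bit 0 says whether the rate is uint8 or uint16
    if data[0] & 0x01:
        bpm = int.from_bytes(data[1:3], "little")
    else:
        bpm = data[1]
    print(f"{bpm} bpm")

async def main() -> None:
    async with BleakClient(ADDRESS) as client:
        await client.start_notify(HR_UUID, on_heart_rate)
        await asyncio.sleep(60)

asyncio.run(main())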
How about a proper fitness tracker?
I found OpenTracks, also on F-Droid, which seems nice.
Why script it from a desktop computer?
The question is: why not?
A fitness tracker on a phone is useful, but there are lots of silly things one can do from one's computer that one can't do from a phone. A heart rate monitor is, after all, one more input device, and there are never enough input devices!
There are so many extremely important use cases that seem entirely unexplored:
- Log your heart rate with your git commits!
- Add your heart rate as a header in your emails!
- Correlate heart rate information with your work activity tracker to find out what tasks stress you the most!
- Sync ping intervals with your own heartbeat, so you get faster replies when you're more anxious!
- Configure workrave to block your keyboard if you get too excited, to improve the quality of your mailing list contributions!
- You can monitor the monitor script of the heart rate monitor that monitors you! Forget buffalo, be your monitor monitor monitor monitor monitor monitor monitor monitor...
Python: typing.overload
typing.overload makes it easier to type functions whose behaviour depends on the input types.
Functions marked with @overload are ignored by Python and only used by the type checker:
from typing import overload

@overload
def process(response: None) -> None:
    ...
@overload
def process(response: int) -> tuple[int, str]:
    ...
@overload
def process(response: bytes) -> str:
    ...
def process(response):
    # <actual implementation>
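With those overloads, a type checker infers the return type from the argument type:
a = process(None)    # inferred as None
b = process(42)      # inferred as tuple[int, str]
c = process(b"raw")  # inferred as str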
Python's multiprocessing and deadlocks
Python's multiprocessing is prone to deadlocks in a number of conditions. In my case, the running program was a standard single-process, non-threaded script, but it used complex native libraries which might have been the triggers for the deadlocks.
The suggested workaround is using set_start_method("spawn"), but when we tried it we hit serious performance penalties.
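For reference, the workaround looks like this (it must run before any worker process is created):
import multiprocessing

if __name__ == "__main__":
    # spawn starts workers from a fresh interpreter, avoiding deadlocks
    # caused by locks copied from the parent by fork
    multiprocessing.set_start_method("spawn")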
Lesson learnt: multiprocessing is good for prototypes, and may end up being too hacky for production.
In my case, I was already generating small Python scripts corresponding to worker tasks, which were useful for reproducing and debugging Magics issues, so I switched to running those as the actual workers. In the future, this may come in handy for dispatching work to HPC nodes, too.
Here's a parallel execution scheduler based on asyncio that I wrote to run them, which may always come in handy on other projects.
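The core idea, sketched with asyncio primitives (a minimal version, not the actual scheduler linked above):
import asyncio

async def run_all(commands: list[list[str]], max_jobs: int = 4) -> list[int]:
    """Run worker scripts as subprocesses, at most max_jobs at a time."""
    semaphore = asyncio.Semaphore(max_jobs)

    async def run_one(cmd: list[str]) -> int:
        async with semaphore:
            proc = await asyncio.create_subprocess_exec(*cmd)
            return await proc.wait()

    return await asyncio.gather(*(run_one(cmd) for cmd in commands))

# e.g. asyncio.run(run_all([["python3", "worker1.py"], ["python3", "worker2.py"]]))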
Debian:
- You can Build-Depend on debhelper-compat (= version) and get rid of debhelper as a build-dependency, and of debian/compat (details)
- You can Build-Depend on dh-sequence-foo and get rid of the corresponding dh-foo build-dependency, and of the need to add --with foo in debian/rules (details)
- You can (and should) get rid of dh-buildinfo, which is now handled automatically
- In salsa.debian.org there is a default CI pipeline for Debian packages that works beautifully without needing to add any .gitlab-ci.yml to a repository
- Add Testsuite: autopkgtest-pkg-python to debian/control, and you get a free autopkgtest that verifies that your packaged Python module can be imported. The default CI pipeline in salsa will automatically run the tests. (specification, details)
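Put together in debian/control, the tips above might look like this (the package name, compat version and dh sequence are illustrative):
Source: example
Build-Depends: debhelper-compat (= 13), dh-sequence-python3
Testsuite: autopkgtest-pkg-python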
Python:
- From Python 3.8, you can use = in format strings to make it easier to debug variables and expressions (details):
>>> name="test"
>>> print(f"{name=}")
name='test'
>>> print(f"{3*8=}")
3*8=24
Leaflet:
- [abc].tile.openstreetmap.org links need to be replaced with tile.openstreetmap.org (details)
Further reading
Talk notes
Intro
- I'm not speaking for the whole of DAM
- Motivation in part is personal frustration, and need to set boundaries and negotiate expectations
Debian Account Managers
- history
Responsibility for official membership
- approve account creation
- manage the New Member Process and nm.debian.org
- close MIA accounts
- occasional emergency termination of accounts
- handle Emeritus
- with lots of help from FrontDesk and MIA teams (big shoutout)
What DAM is not
- we are not mediators
- we are not a community management team
- a list or IRC moderation team
- we are not responsible for vision or strategic choices about how people are expected to interact in Debian
- we shouldn't try and solve things just because they need solving
Unexpected responsibilities
- Over time, the community has grown larger and more complex, in a larger and more complex online environment
- Enforcing the Diversity Statement and the Code of Conduct
- Emergency list moderation
- we have ended up using DAM warnings to compensate for the lack of list moderation, at least twice
- contributors.debian.org (mostly only because of me, but it would be good to have its own team)
DAM warnings
- except for rare glaring cases, patterns of behaviour / intentions / taking feedback in are more relevant than individual incidents
- we do not set out to fix people. It is enough for us to get people to
acknowledge a problem
- if they can't acknowledge a problem they're probably out
- once a problem is acknowledged, fixing it could be their implementation detail
- then again it's not that easy to get a number of troublesome people to acknowledge problems, so we go back to the problem of deciding when enough is enough
DAM warnings?
- I got to a point where I look at DAM warnings as potential signals that DAM has ended up with the ball that everyone else in Debian dropped.
- DAM warning means we haven't gotten to a last resort situation yet, meaning that it probably shouldn't be DAM dealing with this at this point
- Everyone in the project can write a person "do you realise there's an issue here? Can you do something to stop?", and give them a chance to reflect on issues or ignore them, and build their reputation accordingly.
- People in Debian should not have to endure, completely powerless, as trolls drag painful list discussions indefinitely until all the trolled people run out of energy and leave. At the same time, people who abuse a list should expect to be suspended or banned from the list, not have their Debian membership put into question (unless it is a recurring pattern of behaviour).
- The push to grow DAM warnings as a tool is a sign of the rest of Debian passing on their responsibilities, and of DAM picking them up.
- Then in DAM we end up passing on things, too, because we also don't have the energy to face another intensive megametathread, and as we take actions for things that shouldn't quite be our responsibility, we face a higher level of controversy, and therefore demotivation.
- Also, as we take actions for things that shouldn't be our
responsibility, and work on a higher level of controversy, our
legitimacy is undermined (and understandably so)
- there's a pothole on my street that never gets filled, so at some point I go out and fill it. Then people thank me, people complain I shouldn't have, people complain I didn't fill it right, people appreciate the gesture and invite me to learn how to fix potholes better, people point me out to more potholes, and then complain that potholes don't get fixed properly on the whole street. I end up being the problem, instead of whoever had responsibility for the potholes but wasn't fixing them
- The Community Team, the Diversity Team, and individual developers, have no energy or entitlement for explaining what a healthy community looks like, and DAM is left with that responsibility in the form of accountability for their actions: to issue, say, a DAM warning for bullying, we are expected to explain what is bullying, and how that kind of behaviour constitutes bullying, in a way that is understandable by the whole project.
- Since there isn't consensus in the project about what bullying looks like, we end up having to define it in a warning, which again is a responsibility we shouldn't have, and we need to do it because we have an escalated situation at hand, but we can't do it right
House rules
- We have the Diversity Statement
- We have the Code of Conduct
- We have the DebConf Code of Conduct
- We have the Debian Mailinglist Code of Conduct
Interpreting house rules
- you can't encode common sense about people's behaviour in written rules: no matter how hard you try, people will find ways to cheat them
- so one can use rules as a guideline, and have someone responsible for the bits that can't go into rules.
- context matters, privilege/oppression matters, patterns matter, history matters
- example:
- call a person out for breaking a rule
- get DARVO in response
- state that DARVO is not acceptable
- get concern trolling against marginalised people, who then get accused of DARVO if they complain
- example: assume good intentions vs enabling
- example: rule lawyering and Figure skating
- this cannot be solved by GRs: I/we (DAM)/possibly also we (Debian) don't want to do GRs about evaluating people
Governance by bullying
- How to DoS discussions in Debian
- example: gender, minority groups, affirmative action, inclusion,
anything about the community team itself, anything about the
CoC, systemd, usrmerge, dam warnings, expulsions
- think of a topic. Think about sending a mail to debian-project about it. If you instinctively shiver at the thought, this is probably happening
- would you send a mail about that to -project / -devel?
- can you think of other topics?
- it is an effective way of governance as it excludes topics from public discussion
- A small number of people abuse all this, intentionally or not, to effectively manipulate decision making in the project.
- Instead of using the rules of the community to bring forth the issues one cares about, it costs less energy to make it unthinkable or unbearable to have a discussion on issues one doesn't want to progress. What one can't stop constructively, one can oppose destructively.
- even regularly diverting the discussion away from the original point or concern is enough to derail it without people realising you're doing it
- This is an effective strategy for a few reckless people to unilaterally direct change, in the current state of Debian, at the cost of the health and the future of the community as a whole.
- There are now a number of important issues nobody has the energy to discuss, because experience says that energy requirements to bring them to the foreground and deal with the consequences are anticipated to be disproportionate.
- This is grave, as we're talking about trolling and bullying as malicious power moves to work around the accepted decision making structures of our community.
- Solving this is out of scope for this talk, but it is urgent nevertheless, and can't be solved by expecting DAM to fix it
How about the Community Team?
- It is also a small group of people who cannot pick up the responsibility of doing what the community isn't doing for itself
- I believe we need to recover the Community Team: it's been years that every time they write something in public, they get bullied by the same recurring small group of people (see governance by bullying above)
How about DAM?
- I was just saying that we are not the emergency catch all
- When the only enforcement you have is "nuclear escalation", there's nothing you can do until it's too late, and meanwhile lots of people suffer (this was written before Russia invaded Ukraine)
- Also, when issues happen on public lists, the BTS, or on IRC, some of the perpetrators are also outside of the jurisdiction of DAM, which shows how DAM is not the tool for this
How about the DPL?
- Talking about emergency catch alls, don't they have enough to do already?
Concentrating responsibility
- Concentrating all responsibility on social issues on a single point creates a
scapegoat: we're blamed for any conduct issue, and we're blamed for any action
we take on conduct issues
- also, when you are a small group you are personally identified with it. Taking action on a person may mean making a new enemy, and becoming a target for harassment, retaliation, or even just the general unwarranted hostility of someone who is left with an axe to grind
- As long as responsibility is centralised, any action one takes as a response of
one micro-aggression (or one micro-aggression too many) is an overreaction.
Distributing that responsibility allows a finer granularity of actions to be
taken
- you don't call the police to tell someone they're being annoying at the pub: the people at the pub will tell you you're being annoying, and the police is called if you want to beat them up in response
- We are also a community where we have no tool to give feedback to posts, so it still looks good to nitpick stupid details with smart-looking trenchant one-liners, or elaborate confrontational put-downs, and one doesn't get the feedback of "that did not help". Compare with discussing https://salsa.debian.org/debian/grow-your-ideas/ which does have this kind of feedback
- the lack of moderation and enforcement makes the Debian community ideal for easy baiting, concern trolling, dog whistling, and related fun, and people who are not empowered can be manipulated into trolling those responsible
- if you're fragile in Debian, people will play cat and mouse with you. It might be social awkwardness, or people taking themselves too seriously, but it can easily become bullying, and with no feedback it's hard to tell and course correct
- Since DAM and DPL are where the ball stops, everyone else in Debian can afford to let the ball drop.
- More generally, if only one group is responsible, nobody else is
Empowering developers
- Police alone does not make a community safe: a community makes a community safe.
- DDs currently have no power to act besides complaining to DAM, or
complaining to Community Team that then can only pass complaints on to
DAM.
- you could act directly, but currently nobody has your back if the (micro-)aggression then starts extending to you, too
- From no power comes no responsibility. And yet, the safety of a community is sustainable only if it is the responsibility of every member of the community.
- don't wait for DAM as the only group who can do something
- people should be able to address issues in smaller groups, without escalation at project level
- but people don't have the tools for that
- I/we've shouldered this responsibility for far too long because nobody else was doing it, and it's time the whole Debian community got its act together and picked up this responsibility as it should. You don't get to not care just because there's a small number of people caring for you.
What needs to happen
- distinguish DAM decisions from decisions that are more about vision and direction, and would require more representation
- DAM warnings shouldn't belong in DAM
- who is responsible for interpretation of the CoC?
- deciding what to do about controversial people shouldn't belong in DAM
- curation of the community shouldn't belong in DAM
- can't do this via GRs: it's a mess to do a GR to decide how acceptable a specific person's behaviour is, and a lot of this requires more, and more frequent, micro-decisions than one would do via GRs
Back in 2017 I did work to set up a cross-building toolchain for Qt Creator that takes advantage of Debian's packaging for the whole dependency ecosystem.
It ended with cbqt, a little script that sets up a chroot to hold cross-build dependencies, to avoid conflicting with packages on the host system, and sets up a qmake alternative to make use of them.
Today I'm dusting off that work, to ensure it works on Debian bullseye.
Resetting Qt Creator
To make things reproducible, I wanted to reset Qt Creator's configuration.
Besides purging and reinstalling the package, one needs to manually remove:
~/.config/QtProject
~/.cache/QtProject/
/usr/share/qtcreator/QtProject
The last one is where configuration is stored if you used sdktool to programmatically configure Qt Creator (see for example this post, and Debian bug #1012561).
Updating cbqt
Easy start, change the distribution for the chroot:
-DIST_CODENAME = "stretch"
+DIST_CODENAME = "bullseye"
Adding LIBDIR
Something else does not work:
Test$ qmake-armhf -makefile
Info: creating stash file …/Test/.qmake.stash
Test$ make
[...]
/usr/bin/arm-linux-gnueabihf-g++ -Wl,-O1 -Wl,-rpath-link,…/armhf/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/ -o Test main.o mainwindow.o moc_mainwindow.o …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Widgets.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Gui.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Core.so -lGLESv2 -lpthread
/usr/lib/gcc-cross/arm-linux-gnueabihf/10/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lGLESv2
collect2: error: ld returned 1 exit status
make: *** [Makefile:146: Test] Error 1
I figured that now I also need to set QMAKE_LIBDIR and not just QMAKE_RPATHLINKDIR:
--- a/cbqt
+++ b/cbqt
@@ -241,18 +241,21 @@ include(../common/linux.conf)
include(../common/gcc-base-unix.conf)
include(../common/g++-unix.conf)
+QMAKE_LIBDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
+QMAKE_LIBDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
+QMAKE_LIBDIR += {chroot.abspath}/usr/lib/
QMAKE_RPATHLINKDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/
Now it links again:
Test$ qmake-armhf -makefile
Test$ make
/usr/bin/arm-linux-gnueabihf-g++ -Wl,-O1 -Wl,-rpath-link,…/armhf/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/ -o Test main.o mainwindow.o moc_mainwindow.o -L…/armhf/lib/arm-linux-gnueabihf -L…/armhf/usr/lib/arm-linux-gnueabihf -L…/armhf/usr/lib/ …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Widgets.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Gui.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Core.so -lGLESv2 -lpthread
Making it work in Qt Creator
Time to try it in Qt Creator, and sadly it fails:
…/armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.
I traced this "QMAKE_CXX.COMPILER_MACROS is not defined" error to this bit in armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf (non-relevant bits deleted):
isEmpty($${target_prefix}.COMPILER_MACROS) {
    msvc {
        # …
    } else: gcc|ghs {
        vars = $$qtVariablesFromGCC($$QMAKE_CXX)
    }
    for (v, vars) {
        # …
        $${target_prefix}.COMPILER_MACROS += $$v
    }
    cache($${target_prefix}.COMPILER_MACROS, set stash)
} else {
    # …
}
It turns out that qmake is not able to realise that the compiler is gcc, so vars does not get set, nothing is set in COMPILER_MACROS, and qmake fails.
Reproducing it on the command line
When run manually, however, qmake-armhf worked, so it would be good to know how Qt Creator is actually running qmake. Since it frustratingly does not show what commands it runs, I'll have to strace it:
strace -e trace=execve --string-limit=123456 -o qtcreator.trace -f qtcreator
And there it is:
$ grep qmake- qtcreator.trace
1015841 execve("/usr/local/bin/qmake-armhf", ["/usr/local/bin/qmake-armhf", "-query"], 0x56096e923040 /* 54 vars */) = 0
1015865 execve("/usr/local/bin/qmake-armhf", ["/usr/local/bin/qmake-armhf", "…/Test/Test.pro", "-spec", "arm-linux-gnueabihf", "CONFIG+=debug", "CONFIG+=qml_debug"], 0x7f5cb4023e20 /* 55 vars */) = 0
I run the command manually and indeed I reproduce the problem:
$ /usr/local/bin/qmake-armhf Test.pro -spec arm-linux-gnueabihf CONFIG+=debug CONFIG+=qml_debug
…/armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.
I try removing options until I find the one that breaks it and... now it's always broken! Even manually running qmake-armhf, like I did earlier, stopped working:
$ rm .qmake.stash
$ qmake-armhf -makefile
…/armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.
Debugging toolchain.prf
I tried purging and reinstalling qtcreator, and recreating the chroot, but qmake-armhf is staying broken. I'll let that be, and try to debug toolchain.prf.
By grepping gcc in the mkspecs directory, I managed to figure out that:
- The } else: gcc|ghs { test matches the value(s) of QMAKE_COMPILER
- QMAKE_COMPILER can have multiple values, separated by spaces
- If in armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/arm-linux-gnueabihf/qmake.conf I set QMAKE_COMPILER = gcc arm-linux-gnueabihf-gcc, then things work again
Sadly, I failed to find reference documentation for QMAKE_COMPILER's syntax and behaviour. I also failed to find why qmake-armhf worked earlier, and I am also failing to restore the system to a situation where it works again. Maybe I dreamt that it worked? Maybe I had some manual change lying around from previous fiddling with things?
Anyway at least now I have the fix:
--- a/cbqt
+++ b/cbqt
@@ -248,7 +248,7 @@ QMAKE_RPATHLINKDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/
-QMAKE_COMPILER = {chroot.arch_triplet}-gcc
+QMAKE_COMPILER = gcc {chroot.arch_triplet}-gcc
QMAKE_CC = /usr/bin/{chroot.arch_triplet}-gcc
Fixing a compiler mismatch warning
In setting up the kit, Qt Creator also complained that the compiler from qmake
did not match the one configured in the kit. That was easy to fix, by pointing
at the host system cross-compiler in qmake.conf
:
QMAKE_COMPILER = {chroot.arch_triplet}-gcc
-QMAKE_CC = {chroot.arch_triplet}-gcc
+QMAKE_CC = /usr/bin/{chroot.arch_triplet}-gcc
QMAKE_LINK_C = $$QMAKE_CC
QMAKE_LINK_C_SHLIB = $$QMAKE_CC
-QMAKE_CXX = {chroot.arch_triplet}-g++
+QMAKE_CXX = /usr/bin/{chroot.arch_triplet}-g++
QMAKE_LINK = $$QMAKE_CXX
QMAKE_LINK_SHLIB = $$QMAKE_CXX
Updated setup instructions
Create an armhf environment:
sudo cbqt ./armhf --create --verbose
Create a qmake wrapper that builds with this environment:
sudo ./cbqt ./armhf --qmake -o /usr/local/bin/qmake-armhf
Install the build-dependencies that you need:
# Note: :arch is added automatically to package names if no arch is explicitly specified
sudo ./cbqt ./armhf --install libqt5svg5-dev libmosquittopp-dev qtwebengine5-dev
Build with qmake
Use qmake-armhf instead of qmake and it works perfectly:
qmake-armhf -makefile
make
Set up Qt Creator
Configure a new Kit in Qt Creator:
- Tools/Options, then Kits, then Add
- Name: armhf (or anything you like)
- In the Qt Versions tab, click Add, then set the path of the new Qt to /usr/local/bin/qmake-armhf. Click Apply.
- Back in the Kits tab, select the Qt version you just created in the Qt version field
- In Compilers, select the ARM versions of GCC. If they do not appear, install crossbuild-essential-armhf, then in the Compilers tab click Re-detect and then Apply to make them available for selection
- Dismiss the dialog with "OK": the new kit is ready
Now you can choose the default kit to build and run locally, and the armhf
kit for remote cross-development.
I tried looking at sdktool to automate this step, and it requires a nontrivial amount of work to do it reliably, so these manual instructions will have to do.
Credits
This has been done as part of my work with Truelite.
Anarcat's "procmail considered harmful" post convinced me to get my act together and finally migrate my venerable procmail based setup to sieve.
My setup was nontrivial, so I migrated with an intermediate step in which sieve scripts would by default pipe everything to procmail, which allowed me to slowly move rules from procmailrc to sieve until nothing remained in procmailrc.
Here's what I did.
Literature review
https://brokkr.net/2019/10/31/lets-do-dovecot-slowly-and-properly-part-3-lmtp/ has a guide quite aligned with current Debian, and could be a starting point to get an idea of the work to do.
https://wiki.dovecot.org/HowTo/PostfixDovecotLMTP is way more terse, but more aligned with my intentions. Reading the former helped me in understanding the latter.
https://datatracker.ietf.org/doc/html/rfc5228 has the full Sieve syntax.
https://doc.dovecot.org/configuration_manual/sieve/pigeonhole_sieve_interpreter/ has the list of Sieve features supported by Dovecot.
https://doc.dovecot.org/settings/pigeonhole/ has the reference on Dovecot's sieve implementation.
https://raw.githubusercontent.com/dovecot/pigeonhole/master/doc/rfc/spec-bosch-sieve-extprograms.txt is the hard-to-find full reference for the functions introduced by the extprograms plugin.
Debugging tools:
- doveconf to dump Dovecot's configuration, to see if what it understands matches what I mean
- sieve-test parses sieve scripts: sieve-test file.sieve /dev/null is a quick and dirty syntax check
Backup of all mails processed
One thing I did with procmail was to generate a monthly mailbox with all incoming email, with something like this:
BACKUP="/srv/backupts/test-`date +%Y-%m-d`.mbox"
:0c
$BACKUP
I did not find an obvious way in sieve to create monthly mailboxes, so I redesigned that system using Postfix's always_bcc feature, piping everything to an archive user.
I'll then recreate the monthly archiving using a chewmail script that I can simply run via cron.
Configure dovecot
apt install dovecot-sieve dovecot-lmtpd
I added this to the local dovecot configuration:
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    user = postfix
    group = postfix
    mode = 0666
  }
}

protocol lmtp {
  mail_plugins = $mail_plugins sieve
}

plugin {
  sieve = file:~/.sieve;active=~/.dovecot.sieve
}
This makes Dovecot ready to receive mail from Postfix via an LMTP unix socket created in Postfix's private chroot.
It also activates the sieve plugin, and uses ~/.sieve as the sieve script.
The script can be a file or a directory; if it is a directory, ~/.dovecot.sieve will be a symlink pointing to the .sieve file to run.
This is a feature I'm not yet using, but if one day I want to try enabling UIs to edit sieve scripts, that part is ready.
Delegate to procmail
To make sieve scripts that delegate to procmail, I enabled the sieve_extprograms plugin:
plugin {
  sieve = file:~/.sieve;active=~/.dovecot.sieve
+ sieve_plugins = sieve_extprograms
+ sieve_extensions = +vnd.dovecot.pipe
+ sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve-pipe
+ sieve_trace_dir = ~/.sieve-trace
+ sieve_trace_level = matching
+ sieve_trace_debug = yes
}
and then created a script for it:
mkdir -p /usr/local/lib/dovecot/sieve-pipe/
(echo "#!/bin/sh'; echo "exec /usr/bin/procmail") > /usr/local/lib/dovecot/sieve-pipe/procmail
chmod 0755 /usr/local/lib/dovecot/sieve-pipe/procmail
And I can have a sieve script that delegates processing to procmail:
require "vnd.dovecot.pipe";
pipe "procmail";
Activate the postfix side
These changes switched local delivery over to Dovecot:
--- a/roles/mailserver/templates/dovecot.conf
+++ b/roles/mailserver/templates/dovecot.conf
@@ -25,6 +25,8 @@
…
+auth_username_format = %Ln
+
…
diff --git a/roles/mailserver/templates/main.cf b/roles/mailserver/templates/main.cf
index d2c515a..d35537c 100644
--- a/roles/mailserver/templates/main.cf
+++ b/roles/mailserver/templates/main.cf
@@ -64,8 +64,7 @@ virtual_alias_domains =
…
-mailbox_command = procmail -a "$EXTENSION"
-mailbox_size_limit = 0
+mailbox_transport = lmtp:unix:private/dovecot-lmtp
…
Without auth_username_format = %Ln, Dovecot won't be able to understand usernames sent by Postfix in my specific setup.
Moving rules over to sieve
This is mostly straightforward, with the luxury of being able to do it a bit at a time.
The last tricky bit was how to call spamc from sieve, as in some situations I reduce system load by running the spam filter only on a prefiltered selection of incoming emails.
For this I enabled the filter directive in sieve:
plugin {
  sieve = file:~/.sieve;active=~/.dovecot.sieve
  sieve_plugins = sieve_extprograms
- sieve_extensions = +vnd.dovecot.pipe
+ sieve_extensions = +vnd.dovecot.pipe +vnd.dovecot.filter
  sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve-pipe
+ sieve_filter_bin_dir = /usr/local/lib/dovecot/sieve-filter
  sieve_trace_dir = ~/.sieve-trace
  sieve_trace_level = matching
  sieve_trace_debug = yes
}
Then I created a filter script:
mkdir -p /usr/local/lib/dovecot/sieve-filter
(echo "#!/bin/sh"; echo "exec /usr/bin/spamc") > /usr/local/lib/dovecot/sieve-filter/spamc
chmod 0755 /usr/local/lib/dovecot/sieve-filter/spamc
And now what was previously:
:0 fw
| /usr/bin/spamc
:0
* ^X-Spam-Status: Yes
.spam/
Can become:
require "vnd.dovecot.filter";
require "fileinto";
filter "spamc";
if header :contains "x-spam-level" "**************" {
discard;
} elsif header :matches "X-Spam-Status" "Yes,*" {
fileinto "spam";
}
Updates
Ansgar mentioned that it's possible to replicate the monthly mailbox using the variables and date extensions, with a hacky trick from the extensions' RFC:
require "date"
require "variables"
if currentdate :matches "month" "*" { set "month" "${1}"; }
if currentdate :matches "year" "*" { set "year" "${1}"; }
fileinto :create "${month}-${year}";