

I was pretty sure of this before because I definitely can’t read and listen to different books simultaneously


There’s nothing quite like the sound engine on that version. It’s just so well done, and they completely massacred it on the PC port.


Oh, oh I know this one!
If your keyboard shortcut contains modifier keys, the keypresses ydotool sends will be interpreted together with the modifiers you’re still holding down for the shortcut: Alt+a, Super+b, etc.
Some keyboard shortcuts trigger on press; others trigger on release. This is why you need the sleep statement: it gives you time to release the keys before the typing starts. You want the shortcut to take effect after release.
I can choose which behaviour to use in my window manager, but I’m not sure about doing it in (GNOME?) Ubuntu. Even assuming you can set the shortcut to only run on release, you’d still need to let go of all the keys instantly, so chaining with sleep is probably the best approach.
Chaining sleep and ydotool works for me in my window manager. Consider using “&&” instead of “;” to run the ydotool type command: whatever comes after the “&&” only executes if the previous command (sleep 2) succeeds, and the “;” might be interpreted by the keyboard shortcut system as the end of the statement:
sleep 2 && ydotool type abcde12345
Or perhaps the shortcut system is just executing programs directly, not through a shell. In that case we would need to actually run bash so it can handle the “;” or “&&” chaining. Wrapping the lot in a bash command might look like this:
bash -c "sleep 2 && ydotool type abcde12345"
Assuming that doesn’t work, I see nothing wrong with running a script to do it. You just need to get past whatever in the shortcut system is cutting off the command after the sleep statement.
Running ydotoold at user level is preferred and recommended. It keeps it inside your user context, which is better for security.
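A minimal sketch of what that can look like as a systemd user unit, assuming ydotoold is installed at /usr/bin/ydotoold (your distro may already ship something similar), saved as ~/.config/systemd/user/ydotoold.service:

[Unit]
Description=ydotool daemon (user session)

[Service]
ExecStart=/usr/bin/ydotoold

[Install]
WantedBy=default.target

Enable it with systemctl --user enable --now ydotoold. Depending on the packaging, your user may also need write access to /dev/uinput (a udev rule or group membership) for the daemon to work without root.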


Which is a pretty good advertisement for the fediverse actually. I’d love to know how many of those thousand blocked communities are still active, but not enough to bother working it out.
They’ll just take the ick out of his name when he assumes his PR/Community manager role
Long live the Executive 14 99Wh - you’ll pry mine from my cold dead hands


I once had to edit and dump a Cisco config from a 10-switch stack over 9600 baud.
It took ages, and then I realised my fancy new terminal still had the default scrollback limit set, so I had to do it all again.
Actual torture.


This is a meta package rather than directly an emulator, but retrodeck (Linux, Steam Deck) is such an excellent experience I have to give it a shoutout.


So the package is a specific driver version, which will keep you on the 580 driver series through updates. You’d install this package to provide the drivers, and it requires the matched utils package.
You would install this rather than the meta-package from the official repositories. As shown on the AUR page:
Conflicts: nvidia, NVIDIA-MODULE, nvidia-open-dkms
Provides: nvidia, NVIDIA-MODULE
This is also a DKMS package, which lets it build against whatever kernel you’re running, so you can keep using the module through regular system and kernel upgrades.
So the idea would be: remove the nvidia drivers you have, install this one, and it’ll be like the upgrade and support drop never happened. You won’t get driver upgrades, but you wouldn’t anyway. It’s the mostly-safe way to version-pin the driver without actually pinning it in pacman, which would count as a partial upgrade and is unsupported.
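As a rough sketch of that swap, assuming an AUR helper like paru, and with illustrative package names (check the actual AUR page for the exact ones):

# remove the current official driver packages (names depend on what you have installed)
sudo pacman -Rns nvidia nvidia-utils
# build and install the pinned 580-series DKMS package plus its matched utils from the AUR
paru -S nvidia-580xx-dkms nvidia-580xx-utils

The Conflicts/Provides entries quoted above mean the AUR package slots in wherever other packages expect “nvidia”, so nothing else should need changing.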
I’m using stow, and then git for versioning. The only question I’m currently facing is whether to keep my stow packages as individual git repos (so I can switch branches for radically different configs or new setups) or treat the whole lot as one big repo and set the others up as subtrees.
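For anyone unfamiliar, a minimal sketch of the stow side, with directory and package names that are purely illustrative:

# layout: ~/dotfiles/zsh/.zshrc, ~/dotfiles/nvim/.config/nvim/init.lua, etc.
cd ~/dotfiles
stow -t "$HOME" zsh nvim   # symlinks each package's contents into $HOME

Each top-level directory is a “package”, which is what makes the per-package-repo vs. one-big-repo question a real trade-off.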
I was trying to finalize a backup device to gift to my dad over Christmas. We’re planning to use each other for offsite backup, and save on the cloud costs, while providing a bridge to each other’s networks to get access to services we don’t want to advertise publicly.
It’s a Beelink ME Mini running Arch: btrfs on LUKS for the OS on the eMMC storage, with the fTPM handling the decryption automatically.
I have built a few similar boxes since and migrated the build over to ansible, but this one was the proving ground and template for them. It was missing some of the other improvements I had built into the deployed boxes, notably:
I don’t know what possessed me, but I decided that the question marks and open tasks in my original build documentation should be investigated as I went. I was hoping to export some more specific configuration to ansible for the other boxes once done, and I was going to do the migration manually to learn some lessons.
I wasn’t sure about bothering with UKI. I wanted zfs running, and that meant moving to the linux-lts kernel package for arch.
Given systemd-boot’s (currently) superior support for owner keys, boot-time unlocking and direct EFI boot, I’ve been using that. However, it works differently with plain kernels than with UKIs. Plain kernels use a loader entry file to point at the kernel and initramfs locations, which is what existed on this box.
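For reference, a plain-kernel loader entry looks roughly like this; the paths and options are illustrative (e.g. /boot/loader/entries/arch-lts.conf):

title   Arch Linux (LTS)
linux   /vmlinuz-linux-lts
initrd  /initramfs-linux-lts.img
options root=UUID=<root-uuid> rw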
I installed the linux-lts package, all good. I removed the linux kernel package, and something in the pacman hooks failed. The autosigning process for the secure-boot setup couldn’t find the old kernel files when it regenerated my initramfs, but happily signed the new lts ones. Cool, I thought, I’ll remove the old ones from the database and re-enroll my OS drive with systemd-cryptenroll after booting on the new kernel (the PCRs I’m using would be different on a new kernel, so auto-decrypt wouldn’t work anyway).
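The re-enroll itself is a one-liner once you’re booted on the new kernel; something like the below, with the device path and PCR selection being whatever your setup uses:

# wipe the stale TPM2 slot and bind a fresh one to the current PCR values
sudo systemd-cryptenroll --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=0+7 /dev/nvme0n1p2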
So, just to be sure, I regenerated my initramfs with mkinitcpio -p linux-lts, everything seemed fine, and I rebooted. I was greeted with:
Reboot to firmware settings
as my only boot option. Sigh.
Still, I was determined to learn something from this. After a good long while of reading the arch wiki and mucking about with bootctl (a PITA from a live-CD-booted system), I thought about checking my other machines. I was hoping to find a bootctl loader entry matching the lts kernel on another machine, and copy it over to at least prove to myself that I had sussed the problem.
After checking, I realised that none of my newer machines had a loader configuration actually specifying where the kernel and initramfs were. I was so lost. How the fuck is any of this working?
Well, it turns out that if you have UKI set up, it bundles all the major bits (kernel, microcode, initramfs and boot options) into one directly EFI-bootable file, which systemd-boot detects automatically when it’s installed correctly. All my other machines had UKI set up and I’d forgotten; that was how it was working. Unfortunately, I had used archinstall to set up UKI, so I had no idea how it was doing it. There was a line in my docs literally telling me to go check this out before it bit me in the ass…
…
…
So, after that sidetrack, I did actually prove that the kernel could be specified in that bootctl loader entry. Then I was able to figure out how I’d done the UKI piece on the other machines, applied it to this one so it matched, and updated my docs…
…
UKI configuration is in the default mkinitcpio preset files, but needs changing to make it work:
vim /etc/mkinitcpio.d/linux-lts.preset
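The relevant bits of the preset, roughly as they end up once UKI is switched on; the ESP path and file names are illustrative, adjust to your own layout:

ALL_kver="/boot/vmlinuz-linux-lts"

PRESETS=('default' 'fallback')

# comment out the plain initramfs images and uncomment the _uki lines
#default_image="/boot/initramfs-linux-lts.img"
default_uki="/efi/EFI/Linux/arch-linux-lts.efi"

#fallback_image="/boot/initramfs-linux-lts-fallback.img"
fallback_uki="/efi/EFI/Linux/arch-linux-lts-fallback.efi"
fallback_options="-S autodetect"

After that, mkinitcpio -p linux-lts writes the .efi straight into the ESP, and systemd-boot picks up anything under EFI/Linux/ automatically, which is why none of the other machines needed a loader entry.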
…
Turns out my Christmas wish came true, I learned I need to keep better notes.


His instance and mine (“sh.itjust.works”) were both federated with lemm.ee
HHHHhsk Jeeves!


Get rid of the tool bars. All of them. Menu, navigation, window decoration, cookie consent, status, tab and start.
They suck. We live in a 16:9 to 21:9 world, where it’s bad enough in landscape. In portrait, where half of the real estate is taken up by a keyboard and that space really matters, it’s even worse. Letterboxing is dumb when it’s black bars on a movie; I don’t need its cluttered cousin on every application and webpage I’m on.
Vertical overlays or context menus can be enabled by default if you must, but give me shortcuts to do even the most esoteric operation and I’ll gladly learn them.
I don’t know how this is an unpopular opinion after half a century of dealing with increasingly multi-levelled toolbars, but it must be, because toolbars aren’t going anywhere.
If you have to have a toolbar, at least make it go away when you scroll.
Mine (Thunder) doesn’t recognize tagging the code block as a specific syntax; it just shows it as a preformatted block, with no highlighting.
Can I ask what client you’re using?


I assumed that the primary account had full control over secondary user profiles; I’ll have to revisit and confirm - thanks for the tip!


I’m aware of what’s happening in the states. I’m talking from a resourcing perspective. You’d already have to know what you were after to confirm its absence from the phone, if the wipe can be done silently.
If you could load into your dummy profile while silently deleting the keys to your main profile (which could then be freed up as storage space), all triggered by the right unlock password, that’d be pretty hard to prove in a way that warranted arresting everyone.
That would limit this charge to only those who announced it as a political statement, or who were already being targeted specifically.
Who else could know what it’s like to walk in another’s skin and see a face you don’t recognize in the mirror?