commandline

stewie410 , in I've collected all my useful bash scripts and command aliases into one CLI, but I want more!

I've gotten to the point where anything "useful" enough goes in a repo -- unless it's for work, since I'd otherwise be polluting our "great" subversion server...

Functions

I've stopped using as many functions, though a few are just too handy:

  • bm(): super basic bookmark manager, cd or pushd to some static path
    • I never got into everything zoxide has to offer
  • mkcd(): essentially mkdir -p && cd, but I use it enough that I forgot it isn't standard (rough sketches of both below)
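
Rough sketches of both, if anyone's curious -- the real versions live in my dotfiles and differ a bit, and the bookmark paths here are placeholders:

mkcd() {
    # mkdir -p && cd, in one step
    mkdir -p "${1}" && cd "${1}" || return
}

bm() {
    # super basic bookmark manager: cd/pushd to a static path by name
    case "${1}" in
        docs )  cd "${HOME}/documents" || return;;
        proj )  pushd "${HOME}/projects" || return;;
        * )     printf 'unknown bookmark: %s\n' "${1}" >&2; return 1;;
    esac
}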

I'm also primarily a WSL user these days (one day, I'll move permanently) -- to deal with ssh-agent shenanigans there, I also rely on ssh.sh in my config. I should at some point remove kc(), as I don't think I'll ever go back.

Scripts

Despite having a big collection of scripts, I don't use these too often, but still wanted to mention a couple:

  • md2d.sh: pandoc wrapper, mostly using it to convert markdown into docx
    • my boss has a weird requirement that all documentation shared with the team must be editable in Word...
  • gitclone.sh: git clone wrapper, but I use it as gcl -g quite often

A lot of my more useful scripts are, unfortunately, work related -- and probably pretty niche.

"Library"

I also keep a library of sorts for reusable snippets, which I'll source as needed. The math & array libs in particular are very rarely used -- AoC (Advent of Code), for the most part.

Config

Otherwise, my bash config is my lifeblood -- without it, I'm pretty unproductive.

dtools comments

Had a look through your repo, and have some thoughts if you don't mind. You may already know about several of these items, but I'm not going to be able to sift through 30K lines to see what is/isn't known.

printf vs echo

There's a great writeup on why echo should be used with caution. It's probably fine here, but I wanted to mention it -- personally, I'll use echo when I need static text and printf doesn't otherwise make sense.
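
The classic footgun from that writeup, for illustration:

var='-n'
echo "${var}"           # bash's echo swallows -n as an option; prints nothing
printf '%s\n' "${var}"  # prints -n literally, as intended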

Multiline-printf vs HEREDOC

In the script, you've got like 6K lines of printf statements to show various usage text. Instead, I'd recommend using HEREDOCs (<<).

As an example:

dtools_usage() {
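    # NB: plain cat won't expand the \e escapes below -- they'd print
    # literally; see the ANSI_FMT version further down for the fix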
    cat << EOF
dtools - A CLI tool to manage all personal dev tools

\e[1mUsage:\e[0m
    dtools COMMAND
    dtools [COMMAND] --help | -h
    dtools --version | -v

\e[1mCommands:\e[0m
    \e[0;32mupdate\e[0m     Update the dtools CLI to the latest version
    ...
EOF
}

HEREDOCs can also be used for basically any stdin stream; for example:

ssh user@host << EOF
hostname
mkdir -p ~/.config/
EOF

bold() vs $'\e[1m'

On a related note, rather than using functions, and by extension subshells ($(...)), to color text, you could do something like:

declare -A ANSI_FMT=(
    ['norm']=$'\e[0m'
    
    ['red']=$'\e[31m'
    ['green']=$'\e[32m'
    ['yellow']=$'\e[33m'
    ['blue']=$'\e[34m'
    ['magenta']=$'\e[35m'
    ['cyan']=$'\e[36m'
    ['black']=$'\e[30m'
    ['white']=$'\e[37m'

    ['bold']=$'\e[1m'
    ['red_bold']=$'\e[1;31m'
    ['green_bold']=$'\e[1;32m'
    ['yellow_bold']=$'\e[1;33m'
    ['blue_bold']=$'\e[1;34m'
    ['magenta_bold']=$'\e[1;35m'
    ['cyan_bold']=$'\e[1;36m'
    ['black_bold']=$'\e[1;30m'
    ['white_bold']=$'\e[1;37m'

    ['underlined']=$'\e[4m'
    ['red_underline']=$'\e[4;31m'
    ['green_underline']=$'\e[4;32m'
    ['yellow_underline']=$'\e[4;33m'
    ['blue_underline']=$'\e[4;34m'
    ['magenta_underline']=$'\e[4;35m'
    ['cyan_underline']=$'\e[4;36m'
    ['black_underline']=$'\e[4;30m'
    ['white_underline']=$'\e[4;37m'
)

This stores each option in an associative array (a hash table, more or less) -- note the declare -A, which bash requires for string keys (bash 4+). Entries are referenced with ${ANSI_FMT["key"]}, which expands like any other variable; the text is inserted directly without needing to spawn a subshell.

Additionally, the $'...' syntax is a bashism that expands escape sequences directly; so $'\t' expands to a literal tab character. Note that ordinary $ expansions do not happen inside $'...' -- $'\e[31m$HOME\e[0m' keeps $HOME literal -- and the similar-looking $"..." form is actually for locale translation, not escape expansion.
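
So coloring a variable means concatenating the quoted pieces, for example:

# escapes expand inside $'...', variables do not -- concatenate instead
printf '%s\n' $'\e[31m'"${HOME}"$'\e[0m'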

Do also note that $'\e[0m' (or equivalent) is required with this method, as you're no longer performing the formatting in a subshell environment. I personally find the tradeoff worthwhile, though I also don't use it very often.

The heredoc example before would then look like:

dtools_usage() {
    cat << EOF
dtools - A CLI tool to manage all personal dev tools

${ANSI_FMT['bold']}Usage:${ANSI_FMT['norm']}
    dtools COMMAND
    dtools [COMMAND] --help | -h
    dtools --version | -v

${ANSI_FMT['bold']}Commands:${ANSI_FMT['norm']}
    ${ANSI_FMT['green']}update${ANSI_FMT['norm']}     Update the dtools CLI to the latest version
    ...
EOF
}

As a real-world example from a recent work project:

log() {
    # a single argument == level only; read the messages from stdin
    if (( $# == 1 )); then
        mapfile -t largs
        set -- "${1}" "${largs[@]}"
        unset largs
    fi

    local rgb lvl
    case "${1,,}" in
        emerg )     rgb='\e[1;31m'; lvl='EMERGENCY';;
        alert )     rgb='\e[1;36m'; lvl='ALERT';;
        crit )      rgb='\e[1;33m'; lvl='CRITICAL';;
        err )       rgb='\e[0;31m'; lvl='ERROR';;
        warn )      rgb='\e[0;33m'; lvl='WARNING';;
        notice )    rgb='\e[0;32m'; lvl='NOTICE';;
        info )      rgb='\e[1;37m'; lvl='INFO';;
        debug )     rgb='\e[1;35m'; lvl='DEBUG';;
    esac
    case "${1,,}" in
        # collect failure messages in a global array for later reporting
        emerg | alert | crit | err ) err+=( "${@:2}" );;
    esac
    shift

    [[ -n "${nocolor}" ]] && unset rgb

    # one line per message: timestamp, colored level, calling function
    while (( $# > 0 )); do
        printf '[%(%FT%T)T] [%b%-9s\e[0m] %s: %s\n' -1 \
            "${rgb}" "${lvl}" "${FUNCNAME[1]}" "${1}"
        shift
    done | tee >(
        # strip the color codes before appending to the logfile, if any
        sed --unbuffered $'s/\e[[][^a-zA-Z]*m//g' >> "${log:-/dev/null}"
    )
}

Here, I'm using printf's %b to expand the color code, then later using $'...' with sed to strip those out for writing to a logfile. While I'm not using an associative array in this case, I do something similar in my log.sh library.
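For completeness, both call styles look like this (the unit and file names are made up):

log info "service started" "config loaded"      # one log line per message
journalctl -u myunit -n 3 | log debug           # single-arg form reads stdin
log='/tmp/run.log' log warn "disk nearly full"  # also strip-append to a file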

One vs Many

Seeing that there's nearly 30K lines in this script, I would argue it should be split up. You can easily split scripts up to keep everything organized, or to make code reusable, by sourcing the other script. For example, to use the log.sh library, I would do something like:

#!/usr/bin/env bash

# $BASH_LIB == ~/.config/bash/lib
#NO_COLOR="1"
source "${BASH_LIB}/log.sh"

# set function
log() {
    log.pretty "${@}"
}

log info "foo"

# or use them directly
log.die.pretty "oopsie!"

Given the insane length of this monolith, splitting it up is probably worth it. The run() and related functions could stay within dtools, but each subcommand could be split out into its own file, which handles its own argparse?

Bashisms

The Wooledge writeup on Bashisms is a great explanation of the quirks between POSIX and bash -- more specifically, what kinds of tools are available out of the box when writing for bash specifically.

Some that I use on a regular basis (quick demo after the list):

  • &> or &>>: redirect both stdout & stderr to some file/descriptor
  • |&: shorthand for 2>&1 |
  • var="$(< file)": read file contents into a variable
    • Though, I prefer mapfile or readarray for most of these cases
    • Exceptions would be in containers where those are unavailable (alpine + bash)
  • (( ... )): arithmetic expressions, including C-style for-loops
    • Makes checking numeric values much nicer: (( myvar >= 1 )) or (( count++ ))
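
A quick, contrived demo of those (script and file names made up):

./build.sh &> build.log        # stdout and stderr both land in build.log
./build.sh |& tee build.log    # both streams piped, shorthand for 2>&1 |

version="$(< VERSION)"         # file contents into a variable, no cat

for (( i = 0; i < 5; i++ )); do
    (( i % 2 == 0 )) && printf '%d is even\n' "${i}"
done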

grep | awk | sed

Just wanted to note that awk can do basically everything grep and sed can -- these days I tend to avoid it, but it can do it all. Using ln. 6361 as an example:

zellij_session_id="$(zellij ls | awk '
    tolower($0) ~ /current/ {
        print gensub(/\x1B[[][^a-zA-Z]*m/, "", "G", $1)
    }
')"

The downside is that awk can be a little slower than grep, sed, or cut -- more power in a single tool, but not always as performant.
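
For comparison, the same extraction with the smaller tools (assuming zellij ls output is space-delimited):

zellij_session_id="$(zellij ls \
    | grep -i 'current' \
    | sed $'s/\e[[][^a-zA-Z]*m//g' \
    | cut -d' ' -f1)"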

Shellcheck

I'm almost certain I'm preaching to the choir, but I'll add a blanket recommendation for shellcheck, or bash-language-server more broadly.

While there's not much it spat out for dtools, there are some items of concern, notably unclosed strings.

A Loose Offer

If interested, I could look at rewriting dtools, taking into account the items I've listed above, amongst others. Given the scope of the project, that's quite the undertaking for a new set of eyes, but figured I'd throw it out there. Gives me something to do over the upcoming long weekend.

aclarke OP , (edited )

Thanks so much for the other stuff you use! I've been using bm for years, but I haven't used mkcd, so I'm definitely going to add that.

I'm going to give your library, config, and AoC a good look because that's exactly what I was hoping for in this conversation! :)

In general: the only time I add something to my ~/.bashrc is when it's either an alias for something simple, or a very simple function. Otherwise, anything that requires more legwork or is bigger than a few lines goes in dtools. I used to put it all in my ~/.bashrc, but that honestly became kind of cumbersome when I have different configs on different servers, or machines for work vs personal, etc. And sometimes the exports would differ, making functions work differently, and I didn't want to have to copy that section of my ~/.bashrc every time something updated -- hence why I created the dtools repo!

To respond to your other comments, I'm going to do my best to respond in the order they show up:

printf vs echo, through to bold() vs $'\e[1m'

The dtools script is actually compiled, not written by me. So in the vast majority of the project, my code is all in the src directory, not in the dtools script. In fact, my repo without compilation consists of only 3k lines. The compiled code in the script then makes up the completions, coloring, some error handling, validations, filters, help messages, constraints (like conflicting flags), etc.

So many of the echos you see are from the bashly framework, not my code. I often use heredocs for longer or multiline strings (being SUPER careful when using <<-EOF to make sure my damn editor is using TABs...that's such a nightmare otherwise 😂 ).
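
For anyone who hasn't hit that gotcha: <<- strips leading TAB characters only, never spaces, so an editor silently expanding tabs breaks both the dedent and the terminator. Roughly:

usage() {
	# the indentation below must be real tabs for <<- to strip it; with
	# spaces, the body prints indented and the EOF line stops terminating
	cat <<- EOF
	dtools - manage personal dev tools
	EOF
}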

If you look through my code in particular, you'll see I use many of these bash-isms you've mentioned!

So the One vs Many comment is exactly how the repo works! Each subcommand is its own directory and file. So, for example: All dtools aws commands are in the src/commands/aws directory. And any further subcommands like dtools aws secretsmanager are in another subdirectory where each command has its own individual script!

Bashisms

I'm familiar with many of the bashisms you mentioned, except the var="$(< file)" one -- that's awesome! I've been trying to migrate away from using cat just to output file contents, and to use more direct, purpose-built methods that are often built into tools (like jq '.[]' file.json instead of cat file.json | jq '.[]'). However, I'll say that when I'm trying to read each line into an iterable array, I often use readarray too.

grep | awk | sed

I've been trying for years to get more people to look into awk because it's amazing! It's so undervalued! sed takes some getting used to with the pattern and hold space but it's worth the initial suffering 😛

Shellcheck

I've got my Helix editor set up with Shellcheck! It's awesome! You'll notice if you look at my code directly that there's a number of places I have to do # shellcheck disable=SC2154 (a variable is referenced but not assigned). This is because the framework creates and passes those variables to my scripts for me.

A Loose Offer

You seem a lot like me in that you do a LOT of bash scripting! So I'll admit I've looked at the compiled code and noted that the most important code is mine; and while there's a lot going on in the compiled script, I agree with most of it. But I've also been a bit concerned about how often it spawns subshells when it doesn't have to.

I think I can fix some of them with associative arrays if I add a minimum bash version requirement in my config, but I've honestly never tried. I'll check that out now!

Since you make a solid point about a lot of this that should maybe be updated in the Bashly framework, maybe we should work together to update the framework to have better conventions like you've mentioned?

stewie410 ,

Thanks so much for the other stuff you use! I’ve been using bm for years

If you mean from my dotfiles, that's wild. A friend of mine wrote his own implementation in rust, but I've not really used that version, though I'm not sure it's on github.

that honestly became kind of cumbersome when I have different configs on different servers, or machines for work vs personal, etc.

While I'm not currently using it, it's on my todo list to take a real look at chezmoi for these per-machine differences, especially as I'm always between Linux, Windows & WSL. While chezmoi is outside the scope of this topic, it seems like a pretty solid configuration management option...and probably safer than what I'm doing (ln -s).

And sometimes the exports would differ making functions work differently and I didn’t want to just have to copy that section of my ~/.bashrc as well every time something updated

My "solution" is a collection of templates I'll load in to my editor (nvim, with my lackluster plugin), which contains the basics for most scripts of a certain type. The only time that I'll write something and rely on something that isn't builtin, e.g. a customization, is if:

  • It's a personal/primary machine that I'm working from
  • I require() the item & add testing for it (simplified sketch after the list)
    • [[ -z "${var}" ]], or command -v usually
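
A simplified sketch of that require() idea:

require() {
    # fail early when a needed command is missing
    local item
    for item in "${@}"; do
        command -v "${item}" &> /dev/null && continue
        printf 'missing required command: %s\n' "${item}" >&2
        return 1
    done
}

require curl jq || exit 1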

For my work, every script is usually as "batteries included" as reasonable, in whatever language I'm required to work with (bash, sh, pwsh or groovy). That said, the only items that appear in nearly every script at work are:

  • Base functions for normal ops: main(), show_help(), etc.
  • Some kind of logging facility with log()
    • colors & "levels" are a pretty recent change
  • Email notifications on failure (just a curl wrapper for Mailgun)

bashly framework

Transpiling bash into bash is probably the weirdest workflow I've ever heard of. While I can see some benefit of a "framework" mentality, if the 'compiled' result is a 30K line script, I'm not sure how useful it is IMO.

For me at least, I view most shell scripts as being simple automation tools, and an exercise in limitation.

If you look through my code in particular, you’ll see I use many of these bash-isms you’ve mentioned!

I did see some of that, even in the transpiled dtools monolith.

$(<file)

Just be aware that this reads the full contents into a variable, not an array. I would generally use mapfile/readarray for multiline files. As for the jq example, you should be able to get away with jq '.[]' < file.json, which is also POSIX when that's a concern.
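
A quick contrast, with a made-up hosts.txt:

contents="$(< hosts.txt)"        # whole file as one string, newlines embedded
mapfile -t hosts < hosts.txt     # one array element per line
printf 'first host: %s\n' "${hosts[0]}"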

maybe we should work together to update the framework to have better conventions like you’ve mentioned?

I don't think I'm the right person for that job -- I'm both unfamiliar with Ruby and have no desire to interact with it. I'm also pretty opinionated about shell generally, and likely not the right person to come up with a general spec for most people.

Additionally, my initial reaction -- that bashly seems like a solution in search of a problem -- probably isn't healthy for the project.

Sxan , in DotR - A dotfiles manager as dear as a daughter
@Sxan@piefed.zip avatar

Þank you for posting wiþ a clear description of þe program! 🎖️

Why did you write þis when þere are already several alternatives? Not þat you need a reason, I'm just curious about what DotR does which no oþer tool does. Dotfile managers have a relatively high barrier of entry to get a feel about how well þey work - compared to e.g. ripgrep or fd-find - and hearing þe motivation of þe project is interesting.

uroybd OP ,
@uroybd@lemmy.world avatar

I wrote yet another dotfiles manager mostly due to UX and structural choices. Once I am done with the beta phase, I may write a comparison table.


nous , in Keep a "latest" symlink for timestamped cron logs

Or use systemd timers, which keep track of this information for you. They can even tell you when the job will next run, and automatically capture the logs and exit status from the runs.

CoderSupreme OP ,

Or use systemd timers

systemd timers? Isn't that dependent on whether or not there is systemd? I thought there were Linux distros that don't use it.

Sxan ,
@Sxan@piefed.zip avatar

Every cron system can do þis. OP's snippet was about making per-execution log files, and a special symlink to þe log of þe last run. Not just jamming all logs togeþer into one giant, bloated, slow mess.

systemd didn't invent reporting þe next job run, logging job output, or capturing exit status. In fact, cron will even (optionally) send you an email if a job fails, and include stderr and stdout - or you can configure it to notify you however you want: ntfy, wall, whatever. MAILTO can be any notification system you want. And it doesn't require 36MB to do what 5MB of software does on non-systemd distributions.
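
A sketch of boþ bits in a crontab -- MAILTO plus OP's timestamped-log-and-symlink trick (address and paþs made up; note þat % must be escaped in cron lines):

MAILTO=me@example.com
0 2 * * * log="$HOME/logs/backup-$(date +\%F_\%H\%M).log"; /usr/local/bin/backup.sh >"$log" 2>&1; ln -sf "$log" "$HOME/logs/backup-latest.log"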

I mean, seriously? systemd has really done a number on people.

tetris11 , in nmcli and nmtui for WiFi control from the command line
@tetris11@lemmy.ml avatar

nmtui is the best; I just wish NetworkManager played a little better with wireguard

ayushnix , in ANN: tjot v0.0.5, a terminal renderer for djot
@ayushnix@lemmy.sdf.org avatar

I don't see a link to tjot in your post.

sxan OP ,
@sxan@midwest.social avatar

Thank you; fixed.

It's always something.

ayushnix ,
@ayushnix@lemmy.sdf.org avatar

Thanks! This tool should be helpful for previewing djot files.

I've wanted to use djot for blog posts but it doesn't look like there are any decent static site generators for it yet.

sxan OP ,
@sxan@midwest.social avatar

Of course! Let me know when you find bad rendering.

Hugo said they wouldn't accept a PR. I'm converting to Markdown for Hugo, but it's lossy since Markdown doesn't support a few djot things I use, like pipe tables and CriticMarkup.

I've also asked Sourcehut if they (Drew) would accept a PR to support djot READMEs. I haven't seen a response from that ML yet. I know that's not a static site generator, but those are the two biggest places for me, outside of personal documents, where markup gets rendered for the web.

The other significant missing component for me after this pager is to integrate CriticMarkup into my VCS and SyncThing workflows. I want a better way of resolving diffs and sync-conflict files when they're on djot documents. tjot is part of that, and a djot diff tool that generates CriticMarkup is the other.

Kissaki , in Edit is now open source
@Kissaki@programming.dev avatar

I guess the 11 MB micro was too big.

It does look like it has a lot of functionality for 250 kB.

Clickable menus, instead of separate help screens or hint lines, are a good way to make commands obvious and discoverable, teaching users as they go.

refalo , in Edit is now open source

not MS-DOS Edit

sad.

Dark_Arc ,
@Dark_Arc@social.packetloss.gg avatar

Was there something special about it?

deschutron ,

It was packaged with DOS and Windows for a long time, was quick and reliable, and could edit files in binary mode. Personally, when I was first learning computers, it was the closest thing I had to a hex editor, and I edited all kinds of files with it - bitmaps, WAV files, EXE files, game save-state files.

The article itself says:
"What motivated us to build Edit was the need for a default CLI text editor in 64-bit versions of Windows. 32-bit versions of Windows ship with the MS-DOS editor, but 64-bit versions do not have a CLI editor installed inbox."

Dark_Arc ,
@Dark_Arc@social.packetloss.gg avatar

Huh, thanks for the history and I missed that last part so thanks for that too!

It sounds like the original edit program might have been (at least partially) hand-coded assembly, or maybe some unportable C that made a variety of bad assumptions, if they were never able to port it to anything but 32-bit x86.

Maybe the new edit will gain the missing functionality eventually.

deschutron ,

IIRC it was the loss of WoW in 64-bit Windows that killed it. WoW (Windows on Windows) was a 16-bit environment that ran on 32-bit NT-derived Windows systems, allowing them to run pre-95 applications and the MS-DOS environment. 64-bit Windows came out in the 00s, when Microsoft seemed to think the command-line was a relic of the past that they could afford to lose. Since then they've bought so much of the open source world, I think it's changed them.

christos OP , in Sausage, a terminal word puzzle in Bash, inspired by Bookworm
@christos@lemmy.world avatar

UPDATE: Added Levels feature.

markstos , in Zellij 0.41.0 released with its solution for colliding keybindings

I ended my Zellij evaluation when I ran into this dangerous bug triggered when Sync and Fullscreen are combined.

It potentially sends commands to a server in a hidden window you can’t see.

https://github.com/zellij-org/zellij/issues/3458

Tmux doesn’t have this problem.

pm_me_your_quackers , in Zellij - a terminal workspace with batteries included

Love using this; the only issue is that some key defaults clash with nvim, but it's customizable, so that's basically a non-issue. I do wish they'd make a PS port -- not everything I do can be done in WSL. But when it works, it's great.

caseyweederman ,
@caseyweederman@lemmy.ca avatar

One thing that took me ages to get is that Ctrl+G (lock layout) makes it stop intercepting key combos.
Get the layout to where you like it, set it to be locked by default, step out of locked mode when you need to resize panes or add tabs.
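
If you want locked from the start, that lives in ~/.config/zellij/config.kdl -- something like this, if memory serves (check the docs):

default_mode "locked"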