BB_C, bb_c@programming.dev
Instance: programming.dev
Joined: 2 years ago
Posts: 2
Comments: 57
Posts and Comments by BB_C, bb_c@programming.dev
dyn compatibility of the trait itself is another matter. In this case, an async method makes a trait not dyn-compatible because of the implicit -> impl Future opaque return type, as documented here.
But OP didn’t mention whether dyn is actually needed or not. For me, dyn is almost always a crutch (exceptions exist).
If I understand what you’re asking…
This leaves some details/specifics out to simplify. But basically:
async fn foo() {}
// ^ this roughly desugars to
fn foo() -> impl Future<Output = ()> { /* ... */ }
This meant that you couldn’t just have (stable) async methods in traits, not because of async itself, but because you couldn’t use impl Trait in return positions in trait methods, in general.
Box<dyn Future> was a less-than-ideal workaround (not zero-cost, and other dyn drawbacks). async_trait was a proc macro solution that generated code with that workaround. So Box<dyn Future> was never a desugaring done by the language/compiler.
Now that we have (stable) impl Trait in return positions in trait methods, all this dance is not strictly needed anymore, and hasn’t been needed for a while.
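To make the contrast concrete, here’s a minimal sketch (FetchOld, FetchNew, and Client are made-up names for illustration) of the boxed workaround next to what stable Rust (1.75+) allows directly:

use std::future::Future;
use std::pin::Pin;

// The old workaround, roughly what #[async_trait] generated:
// erase the future's concrete type behind a boxed trait object.
trait FetchOld {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = String> + Send + '_>>;
}

// With impl Trait in return position in traits stabilized,
// no boxing is needed (async fn sugar also works here now):
trait FetchNew {
    fn fetch(&self) -> impl Future<Output = String>;
}

struct Client;

impl FetchOld for Client {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = String> + Send + '_>> {
        Box::pin(async { String::from("boxed") })
    }
}

impl FetchNew for Client {
    fn fetch(&self) -> impl Future<Output = String> {
        async { String::from("not boxed") }
    }
}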
I was just referring to the fact that they are macros.
printf uses macros in its implementation.
int
__printf (const char *format, ...)
{
  va_list arg;
  int done;

  va_start (arg, format);
  done = __vfprintf_internal (stdout, format, arg, 0);
  va_end (arg);

  return done;
}
^ This is from glibc. Do you know what va_start and va_end are?
to get features that I normally achieve through regular code in other languages.
Derives expand to “regular code”. You can run cargo expand to see it. And I’m not sure how that’s an indication of “bare bone”-ness in any case.
Such derives are actually using a cool trick, which is the fact that proc macros and traits have separate namespaces. So #[derive(Debug)] is using the proc macro named Debug, which happens to generate “regular code” that implements the Debug trait. The proc macro named Debug and the implemented trait Debug don’t point to the same thing, and don’t have to match name-wise.
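For illustration, here’s roughly the kind of “regular code” a #[derive(Debug)] on a simple struct expands to (Point is a made-up example type; the exact cargo expand output differs in details):

use std::fmt;

struct Point {
    x: i32,
    y: i32,
}

// A hand-written equivalent of what the Debug derive generates:
impl fmt::Debug for Point {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Point")
            .field("x", &self.x)
            .field("y", &self.y)
            .finish()
    }
}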
Not sure if you’re talking about the language, or the core/alloc/std libraries, or both/something in-between?
Can you provide specific examples, and which specific languages are you comparing against?
(didn’t read OP, didn’t keep up with chimera recently)
Off the top of my head:
* The init system.
* Usable FreeBSD utils instead of busybox (which you would otherwise have to override with GNU utils, because busybox is bare-bones).
* Everything is built with LLVM (not GCC), with extra hardening (utilizing LLVM).
* It doesn’t perform like shit in some multi-threaded allocator-heavy loads, because they patch musl directly with mimalloc.
* It also doesn’t pretend to have a stable/release channel (only rolling).
So, the use of apk is not that relevant. “no GNU” is not really the case with Alpine. They do indeed have “musl” in common, but Chimera “fixes” one of the most relevant practical shortcomings of using it. And finally, I don’t think Chimera really targets fake “lightweight”-ness just for the sake of it.
'0'..'9' (characters in ASCII) are (0+48)..(9+48) when read as integer values.
For readability you can do:
unsigned char zero = '0';
int h = getchar() - zero;
int l = getchar() - zero;
And as I mentioned in another comment, if this were serious code, you would check that both h and l are between 0 and 9.
Note that one of the stupid quirks about C is that char is not guaranteed to be unsigned in certain implementations/architectures. So it’s better to be explicit about expecting unsigned values. This is also why man 3 getchar states:
> fgetc() reads the next character from stream and returns it as an *unsigned char* cast to an int, or EOF on end of file or error.
> getchar() is equivalent to fgetc(stdin).
How is this literal joke still getting so much engagement?
nice CLAUDE.md. got it on the contributor list too.
An actually serious project that is not at the "joke" stage. Zero LLM use too:
https://nihav.org/
For audio at least, people should be aware of:
https://github.com/pdeljanov/Symphonia
software-rendered implemented-in-C++ terminal
you fail the cult test 😉
This is unnecessarily complicated
really!
and I don’t see how your second version is supposed to be more optimal?
It was a half-joke. But since you asked, it doesn’t do any duplicate range checks.
But it’s not like any of this is going to be measurable.
Things you should/could have complained about:
* [semantics] not checking if h and l are in the [0, 9] range before taking the result of h*10 + l.
* [logical consistency] not using a set bit for [0, 100] and a set bit for [1, 12], and having both bits set for the latter.
* [cosmetic/visual] not having the props bits for p0 on the left in the switch.
And as a final note, you might want to check what kind of code compilers actually generate (with -O2/-O3 of course). Because your complaints don’t point to someone who knows.
The whole premise is wrong, since it’s based on the presumption of C++ and Rust being effectively generational siblings, with the C++ “designers” (charitable) having had the option to take the Rust route (in the superficial narrow aspects covered) but choosing not to, when the reality is that C++ was the intellectual pollution product of “next C” and OOP overhype from that era (late 80s/early 90s), resulting in the “C with classes” moniker.
The lack of both history (and/or evolution) and paradigm talk is telling.
Maybe something like this
#include <stdio.h>

// reads next 4 chars. doesn't check what's beyond that.
int get_pair() {
    int h = getchar() - 48;
    int l = getchar() - 48;
    return h * 10 + l;
}

int main() {
    int p0 = get_pair();
    int p1 = get_pair();
    if (p0 < 0 || p1 < 0 || p0 > 100 || p1 > 100) {
        // not a 4-digit seq, return with failure if that's a requirement
    }
    if ((p0 == 0 || p0 > 12) && (p1 >= 1 && p1 <= 12)) {
        printf("YYMM");
    } else if ((p1 == 0 || p1 > 12) && (p0 >= 1 && p0 <= 12)) {
        printf("MMYY");
    } else if ((p0 >= 1 && p0 <= 12) && (p1 >= 1 && p1 <= 12)) {
        printf("AMBIGUOUS");
    } else {
        printf("NA");
    }
    return 0;
}
or if you want to optimize
#include <stdio.h>
#include <stdint.h>

// reads next 4 chars. doesn't check what's beyond that.
int get_pair() {
    int h = getchar() - 48;
    int l = getchar() - 48;
    return h * 10 + l;
}

// 0b10: p is a month [1, 12]
// 0b11: p is outside the two-digit range entirely
// 0b00: p is a valid two-digit value, but not a month
uint8_t props(int p) {
    if (p >= 1 && p <= 12) {
        return 0b10;
    } else if (p < 0 || p >= 100) {
        return 0b11;
    } else {
        return 0b00;
    }
}

int main() {
    int p0 = get_pair();
    int p1 = get_pair();
    // combine both props values: p1's bits in the high half, p0's in the low half
    switch (props(p0) | (props(p1) << 2)) {
        case 0b1010: printf("AMBIGUOUS"); break;
        case 0b1000: printf("YYMM"); break;
        case 0b0010: printf("MMYY"); break;
        default: printf("NA");
    }
    return 0;
}
No. This one is actually cool, useful, and innovative. And it tries to do some things differently than everyone else.
/me putting my Rust (post-v1.0 era) historian hat on.
The list of (language-level) reasons why people liked Rust was already largely covered by the bullet points in the real original Rust website homepage, before some “community” people decided to nuke that website because they didn’t like the person who wrote these points (or rather, what that person was “becoming”). They tasked some faultless volunteers who didn’t even know much Rust to develop a new website, and then rushed it out. It was ugly. It lacked supposedly important components like internationalization, which the original site had. But what was important to those “community people” (not to be confused with the larger body of people who develop Rust and/or with Rust) is that the very much technically relevant bullet points were gone. And it was then, and only then, that the useless, meaningless “empowerment” speak came into the picture.
less likely to be insecure
Evidenced by?
requires reviewing all source code
This is exactly the la-la-land view of what distributors do that I was dispelling with facts and reality checks. No one is reviewing all the source code of anything, except for cases where a distro developer and an upstream member are the same person. And even then, this may not be the case depending on the upstream project, its size, and the distro developer’s role within that project.
to make sure it meets interoperability
Doesn’t mean anything other than “it builds”, “API is not broken” (e.g. within the same .so version), and “seems to work”.
These considerations happen to hardly exist with the good tooling provided by cargo.
and open-source standards.
Doesn’t mean anything outside of licensing (for code and assets), and “seems to work”.
Your argument that crates.io is a known organization therefore we should trust the packages distributed is undermined by your acknowledgement that crates.io does not produce any code. Instead we are relying on the individual crate developers, who can be as anonymous as they want.
Largely correct. But that was me comparing middle-man vs. middle-man. That is if crates.io operators can be described as middle-men, since their responsibilities (and consequently, attack vectors) are much smaller.
Barring organizational attacks from within, with crates.io, you have one presumably competent/knowledgeable, possibly anonymous, source, and operators that don’t do much. With a binary distro, you have that, AND another “middle-man” source, possibly anonymous, and with competence and applicable knowledge <= upstream (charitable), yet put in a position to decide what to do with what upstream provides, or rather, provided… X years ago, if we are talking about the claimed “stable” release channel.
The middle man pulls sources from places like crates.io anyway. So applying trivial “logic”/“maths”, it can’t be “better”, in the context being discussed.
Software doesn’t get depended on out of thin air. You are either first in line directly depending on a library, and thus you would naturally at least make the minimum effort to make sure it’s minimally “fit for purpose”. Or you are an indirect dependant, and thus looking at your direct dependencies, and maybe “trusting” them with the “trickle down”.
More processes, especially automated ones, are always welcome to help catch “stuff” early. But it is no surprise that the “success stories” concern crates with fat ZERO dependants.
Processes that help dependants share their knowledge about their dependencies (a la cargo vet) are unquestionably good additions. They sure trump the dogmatic blind faith in distros doing something they simply don’t have the knowledge or resources to do, or the slightly less dogmatic faith in some library being “trustable” if packaged by X or XX distros, assuming at least someone knowledgeable/competent must have given it a thorough look (this has a rough equivalent in the number of dependants anyway).
This is all obvious, and doesn’t take much thought from anyone active on the inside (upstreams or distros), instead of the surface “knowledge” that leaks, and possibly gets manipulated, en route to the outside.
While it may never be “enough” depending on your requirements (which you didn’t specifically and coherently define), the amount of “review”, and the required know-how to do it competently, is much bigger/higher from your crate dependants than from your distro packagers.
It’s not rare for a distro packager to not know much about the programming language (let alone the specific code) of some packages they package. It’s very rare for a packager to know much about the specific code of what they package (they may or may not have some level of familiarity with a handful of codebases).
So what you get is someone who pulls source packages (from the interwebs), possibly patching them (and possibly breaking them), compiling them, and giving you the binaries (libs/execs). With source distros, you don’t have the compiling and binary package part. With crates.io, you don’t have the middle man at all. Which is why the comparison was never right from the start. That’s the pondering I left you to do on your own two comments ago.
Almost all sufficiently complex user-space software in your system right now has a lot of dependencies (vendored or packaged); you just don’t think of them because they are not in your face, and/or because you are ambivalent to the realities of how distros work, and what distro developers/packagers actually do (described above). You can see for yourself with whatever the Debian equivalent is to pactree (from pacman).
At least with cargo, you can have all your dependencies in their source form one command away from you (cargo vendor), so you can trivially inspect as much as you like/require. The only part that adds unknowns/complexities is crates that use build.rs. But just like unsafe{}, this factor is actually useful, because it tells you where you should look first with the biggest magnifying glass. And just like cargo itself, the streamlining of the process means there aren’t thousands of ways/places in the build process to do something.
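As a rough illustration of why build.rs deserves that magnifying glass: it’s an ordinary Rust program that cargo compiles and runs at build time, with the same powers as any other program you execute. A benign one might look like this (sketch only):

// build.rs: compiled and run by cargo before building the crate itself.
// Arbitrary code can run here, which is exactly why it's the first
// place to look when auditing vendored sources.
fn main() {
    // a typical benign directive: only rerun this script if it changes
    println!("cargo:rerun-if-changed=build.rs");
}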
Debian (and other “community” distros) is distributed collaboration, not an organization in the sense you’re describing. You’re trusting a scattered large number of individuals (some anonymous), infrastructure, and processes. The individuals themselves change all the time. The founder of the whole project is no longer even with us, for example.
Not only did the processes do nothing to stop shipping the already mentioned xz backdoor (malicious upstream), but the well-known blasé attitude towards patching upstream code without good reason within some Debian developer circles actually directly caused Debian-only security holes in the past (if you’re young, check this XKCD and the explanation below it). And it just happens that it’s the same blasé attitude that ended up causing the xz backdoor to affect PID 1 (systemd) in the first place, while that particular malicious attack wasn’t effective/applicable in distros that don’t have such an attitude in their “culture” (e.g. Arch).
On the other hand, other Debian developer(s) were the first to put a lot of effort into making reproducible builds a thing. That was an invaluable contribution.
So there is good, and there is very much some bad. But overall, Debian is nothing special in the world of "traditional" binary distros. But in any case, it’s the stipulation “trusting an organization because it has a long track record of being trustworthy” in the context of Debian that would be weird.
(The "stable distro" model of shipping old patched upstreams itself is problematic, but this comment is too long already.)
crates.io is a 10+ year old, upstream-submitted repository of language-specific source packages. It’s both not that comparable to a binary distro, and happens to come with no track record of own goals. It can’t come with own goals like the “OpenSSL fiasco” in any case, because the source packages ARE the upstreams. It is also not operated by any anonymous people, which is the first practical requirement for having some logically-coherent trust in an individual or a group. Most community distros can’t have this as a hard requirement by their own nature, although top developers and infrastructure people tend to be known. But it takes one (intentionally or accidentally) malicious binary packager…
You don’t seem to have a coherent picture of a threat model, or actual specific factualities about Debian, or crates.io, or anything really, in mind. Just regurgitations about “crates.io BAD” that have been fed mostly by non-techies to non-techies.
So, we established that “pulled in from the interwebs” is not a valid differentiator.
which has existed for much longer than has crates.io
True and irrelevant/invalid (see below). Among the arguments that could be made for <some_distro> packages vs. crates.io, age is not one of them. And that’s before we get to the validity of such arguments.
In this case, it is also an apples-to-oranges comparison, since Debian is a binary distro, and crates.io is a source package repository. Which one is "better", if we were to consider this aspect alone, is left for you to ponder.
and has had fewer malicious packages get into it.
The xz backdoor was discovered on a Debian Sid system, my friend. Can you point to such “malicious packages” that actually had valid users/dependants on crates.io?
Wild linker v0.8 released (and updated benchmarks) (github.com)