• 2 Posts
  • 78 Comments
Joined 1 month ago
Cake day: February 11th, 2026


  • When it comes to the usage of both words, that difference you listed is completely arbitrary and obviously irrelevant

    What? No. Software is something people go looking for and choose to download, unless we’re talking about malware which I think is fair to say is obviously outside the bounds of this conversation. Spam emails are forced on people without their asking or looking for them. They’re not at all interchangeable or the same thing.

    Most people don’t care how their software is written, just like they don’t care how their food is actually made. And by “most people” I don’t mean you or anyone else here on Lemmy, I mean the majority of people who use computers. You wouldn’t believe how technically illiterate and uncurious the average person is - that’s who I mean. Those people hate spam emails, but they don’t care if their email app was vibecoded with AI. They don’t even know the difference between AI code and hand-typed human code, and most of 'em probably think “more code is better so AI is better!”.

    Unless you’re trying to argue something else: that the slop in this specific case is more justified.

    Sort of. I’m saying that while I understand why AI disclosures are a good thing, I think that if a person is not paying for an application and they’re not contributing to its development, then that person can keep their opinions on the development process to themselves. They can take those opinions and go build something of their own to satisfy them.

    it’s the eagerness to treat users as braindead trash undeserving of transparency.

    I simply don’t think that’s a fair characterisation, because it ignores how people treat the developers who use the tools in the first place. People who have no technical skills whatsoever are happy to loudly shit all over said developers and call their work garbage - work they’ve been doing for nothing.

    I agree the initial response could have been approached better, but all of us have the benefit of judging in hindsight and from a distance. I can understand how their emotions got the better of them, while under fire like that. This looks distinctly different from the BookLore fiasco though, where the dev is trying to close up the source in retaliation.

    I just wish people would find more reasonable targets for their ire, instead of rolling with the pitchforks-and-torches mentality. Individuals building open source software are not usually reasonable targets. I do think “good thing it’s easy to fork open source” is the right sentiment; this is why anything I build, I put up under the Unlicense, because as far as I’m concerned any utility someone can get from it is to the good.







  • Because coding is hard work even with AI assistance, and people who don’t code will judge you the loudest and longest and meanest for using AI to make the work easier. I personally suffer rejection sensitivity dysphoria so I understand the emotions behind their actions.

    But yeah, everyone just ignores the years of coding work this person did for nothing just to help people enjoy their games, to crucify them for using AI and then having feelings about getting yelled at by the very beneficiaries of their prior work.

    It’s not like they’re stripping out or reimplementing contributions and taking the project closed source, like BookLore. People need some damn perspective.










  • Yeah, I’m getting that; though this isn’t purely AI-generated. This is a working application that I’ve tested, improved, and plan to keep improving, and am currently using to transcode my media. There’s a lot more care and thought put into it than most people would expect on reading that it was created with the help of an AI model.

    I put the disclaimer because I respect that serious developers who actually go look at the code would like a heads-up that it’s genAI before they waste their time reading it. But I would like people to at least have a chance to read why I think my approach is different from most.

    And, if you have videos to transcode, I’d love to hear what you think if you give it a go! I do actively fix bugs as well as add new features, so please do let me know if you try it and find an issue - I could use all the help testing it I can get 'cause my hardware to test on is quite limited.


  • I was hoping to catch this before you replied, as I went and read the readme and then it made more sense. So I deleted my reply. But too late!

    All good! I’m actually enjoying talking about this thing with people who want to know more, so I don’t mind at all!

    The cool thing is there isn’t much to put into a command that does stuff like this, unless you’re changing the FFmpeg parameters every time, but that would seem unlikely.

    So actually, that’s exactly the issue I was running into! I’d run a batch command on a whole folder full of videos, but a handful would already be well-encoded, or at least they’d have a much MUCH lower bitrate, so I’d end up with mostly well-compressed files and a handful that looked like they went through a woodchipper. I wanted everything to be in the same codecs, in the same containers, at roughly the same quality (and playable on devices from around 2016 and newer) when it came out the other end. So I implemented a three-way decision based on the target bitrate you set, and every file gets evaluated independently to determine which approach to use:

    1. Above target → VBR re-encode: If a file’s source bitrate is higher than the target (e.g. source is 8 Mbps and target is 4 Mbps), the video is re-encoded using variable bitrate mode aimed at the target, with a peak cap set to 150% of the target. This is the only case where the file actually gets compressed.

    2. At or below target, same codec → stream copy: If the file is already at or below the target bitrate and it’s already in the target codec (e.g. it’s HEVC and you’re encoding to HEVC), the video stream is copied bit-for-bit with -c:v copy. No re-encoding happens at all - the video passes through untouched. This is what prevents overcompression of files that are already well-compressed.

    3. At or below target, different codec → quality-mode transcode: If the file is at or below the target but in a different codec (e.g. it’s H.264 and you’re encoding to HEVC), it can’t be copied because the codec needs to change. In this case it’s transcoded using either CQP (constant quantisation parameter) or CRF (constant rate factor) rather than VBR - so the encoder targets a quality level rather than a bitrate. This avoids the situation where VBR would try to force a 2 Mbps file “down” to a 4 Mbps target and potentially bloat it, or where the encoder wastes bits trying to hit a target that’s higher than what the content needs.
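    The three branches above can be sketched roughly like this. This is an illustrative Python sketch, not HISTV’s actual code: the function name, argument names, and the CRF value are assumptions, and a real ffmpeg invocation would also want an encoder name (e.g. libx265) and a `-bufsize` alongside `-maxrate`.

```python
def choose_video_mode(source_kbps: int, source_codec: str,
                      target_kbps: int, target_codec: str) -> list[str]:
    """Return illustrative ffmpeg video arguments for one input file."""
    if source_kbps > target_kbps:
        # 1. Above target: VBR re-encode aimed at the target bitrate,
        #    with the peak capped at 150% of the target.
        peak_kbps = int(target_kbps * 1.5)
        return ["-c:v", target_codec,
                "-b:v", f"{target_kbps}k", "-maxrate", f"{peak_kbps}k"]
    if source_codec == target_codec:
        # 2. At or below target, already the right codec:
        #    bit-for-bit stream copy, no re-encoding at all.
        return ["-c:v", "copy"]
    # 3. At or below target but the wrong codec: quality-mode transcode
    #    (CRF shown here; the value 23 is a placeholder, not HISTV's).
    return ["-c:v", target_codec, "-crf", "23"]
```

    The point of keeping the decision per-file is exactly what the list above describes: an 8 Mbps file gets compressed, a well-encoded 2 Mbps HEVC file passes through untouched, and a 2 Mbps H.264 file gets converted at a quality target instead of being bloated up to the bitrate target.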

    There’s also a post-encode size check as a safety net: if the output file ends up larger than the source (which can happen when a quality-mode transcode expands a very efficiently compressed source), HISTV deletes the output, remuxes the original source into the target container instead, and logs a warning. So even in the worst case, you never end up with a file bigger than what you started with, which is a guarantee that’s much harder to make with a raw CLI invocation. The audio side has a similar approach; each audio stream is independently compared against the audio cap, and streams already below the cap in the target codec are copied rather than re-encoded.
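    The safety net can be sketched in the same spirit. Again, this is a hypothetical illustration rather than HISTV’s real implementation: the function name and the remux path are made up, though `-c copy` for a pure container change is standard ffmpeg usage.

```python
import os
import subprocess

def post_encode_check(source: str, output: str, remux_target: str) -> str:
    """Return the path of the file worth keeping after an encode."""
    if os.path.getsize(output) <= os.path.getsize(source):
        return output  # the encode actually saved space: keep it
    # The encode bloated the file: discard it and remux the original,
    # copying every stream unchanged into the target container.
    os.remove(output)
    subprocess.run(["ffmpeg", "-i", source, "-c", "copy", remux_target],
                   check=True)
    print(f"warning: encode of {source} grew the file; remuxed instead")
    return remux_target
```

    The worst-case outcome is then a straight container change of the original streams, never a larger file.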

    But yeah, everything beyond that was bells and whistles to make it easier for people who aren’t me to use it haha.

    I am 100% looking for more stuff I can build - let’s talk about it!