
scottgal

@[email protected]

Web Developer / Consulting Software Lead / CTO @ mostlylucid / Former Clinical Research Psychologist
"Be kind wherever possible. It is always possible."


scottgal (@scottgal@hachyderm.io), to random

Using the new @xoofx terminal stuff for the lucidRESEARCH tool https://github.com/scottgal/lucidrag/releases (a new type of LLM-steered, closed-loop Arxiv research agent). It's pretty nice! Early days, but lucidRESEARCH benefits from the more 'app'-like structure it allows.
WAY more planned: its tree-view stuff will be awesome for graph viewing, I'm adding a consoleimage tool to view images & videos inline, etc. https://github.com/XenoAtom/XenoAtom.Terminal.UI

scottgal (@scottgal@hachyderm.io), to random

Typical Daily Mail 🎅

scottgal (@scottgal@hachyderm.io), to random

Today’s fun project: a small CLI for my LLM-powered API mocking tool (Ollama / LM Studio / OpenAI).

Load an OpenAPI spec and it behaves like the real API — REST, GraphQL, gRPC, SignalR, and SSE all supported.
Or just call endpoints like /api/users/43.

Releases: https://github.com/scottgal/LLMApi/releases

Write-up: https://www.mostlylucid.net/blog/building-a-frontend-before-the-api-is-ready-mostlylucid-mockllmapi
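The core trick behind a tool like this can be sketched in a few lines: match an incoming request path against the templated paths in an OpenAPI spec, then synthesize a response from the declared schema. This is a hypothetical, minimal illustration, not LLMApi's actual code — the real tool asks an LLM (Ollama / LM Studio / OpenAI) to invent plausible field values, which is stubbed out here with placeholders so the sketch stays self-contained:

```python
import re

# Tiny stand-in for an OpenAPI spec: one templated path with a flat schema.
SPEC = {
    "paths": {
        "/api/users/{id}": {
            "get": {
                "schema": {"id": "integer", "name": "string", "email": "string"}
            }
        }
    }
}

def match_path(spec: dict, request_path: str):
    """Find the templated spec path matching request_path.

    Returns (template, path_params) or (None, {}) if nothing matches.
    """
    for template in spec["paths"]:
        # Turn "/api/users/{id}" into a regex with a named group per parameter.
        pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
        m = re.fullmatch(pattern, request_path)
        if m:
            return template, m.groupdict()
    return None, {}

def mock_response(spec: dict, method: str, request_path: str) -> dict:
    """Fabricate a response for a request against the spec.

    Path parameters are echoed back; every other field gets placeholder
    data — this is where an LLM would generate realistic values instead.
    """
    template, params = match_path(spec, request_path)
    if template is None:
        return {"status": 404}
    schema = spec["paths"][template][method.lower()]["schema"]
    body = {}
    for field, ftype in schema.items():
        if field in params:
            body[field] = int(params[field]) if ftype == "integer" else params[field]
        elif ftype == "integer":
            body[field] = 0
        else:
            body[field] = f"<generated {field}>"
    return {"status": 200, "body": body}
```

Calling `mock_response(SPEC, "GET", "/api/users/43")` yields a 200 with `{"id": 43, "name": "<generated name>", "email": "<generated email>"}` — the same shape a caller of the real API would see, just with made-up content.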

cstross (@cstross@wandering.shop), to random

Bubble's gonna burst!

OpenAI generated US$4.3 billion in revenue in the first half of 2025, according to financial disclosures to shareholders.

The artificial intelligence firm reported a net loss of US$13.5 billion during the same period, with more than half attributed to the remeasurement of convertible interest rights.

Research and development expenses were its largest cost, totaling US$6.7 billion in the first half.

(OpenAI current valuation: $500Bn.)

https://www.techinasia.com/news/openais-revenue-rises-16-to-4-3b-in-h1-2025

scottgal (@scottgal@hachyderm.io),

@cstross It's feeling more like a race these days: better, smaller LLMs; consumer-grade hardware starting to appear that can run local models; and LLMs hitting their plateau before these companies go bust or get sued into non-existence. I hope local is good enough by then; I suspect it'll converge on a hybrid model of local and cloud.

georgetakei (@georgetakei@universeodon.com), to random

They're not wrong...

scottgal (@scottgal@hachyderm.io),

@georgetakei "The smelly cupboard"