tl;dr¶
Check out the instructions on GitHub and connect Claude Desktop, GitHub Copilot, Cursor, or any other tool that supports MCP directly to PythonAnywhere.
New ways to use software¶
Large Language Models (LLMs) are no longer just answering questions – they’re rapidly becoming the layer through which people use software. How well that works depends on how well the user can describe the problem to the model, and how well the model understands the description.
On the one hand, one may complain that LLMs are not very good at being deterministic; on the other hand, for many end users, computers in general feel like a wildly unpredictable black box anyway.
When ChatGPT or Claude suggests starting a FastAPI app on PythonAnywhere, the model is effectively acting as an ambassador for our platform. That’s exciting – but it becomes a burden when the protein interface between our service and the model (also known as the user) cannot translate between the model’s prose and the actual UI, and back again.
That’s why we at PythonAnywhere are excited to share how we’re making those interactions a whole lot more robust with the Model Context Protocol (MCP).
What is the problem?¶
When an AI assistant tells a user to open their PythonAnywhere account, edit a file, and then start a web app or scheduled task, three separate things must line up:
- Up‑to‑date knowledge – the model needs a current picture of our interface.
- User execution – the human has to translate the model’s prose into the right clicks and keystrokes.
- Clear feedback loop – if either side slips, the model can only guess what happened, so errors snowball fast.
That approach works surprisingly often, but we’ve all witnessed the moments when it fails: the model points to a menu that moved last week or assumes a feature exists when it never did. Users are left bewildered, and our support inbox fills with messages such as “Where’s the ‘Foo’ button?” Unfortunately, the only honest reply is that the button never existed – the LLM simply hallucinated it.
Model Context Protocol to the rescue¶
Anthropic introduced the Model Context Protocol (MCP) in late 2024. Since then it has been adopted by tools like GitHub Copilot, Cursor and others, and it is often described as “USB-C for LLMs.”
It is a way to provide LLMs with structured, machine-readable information about the capabilities of a system, allowing them to interact with it directly in well-defined ways.
How does it work?¶
Instead of treating the model like a backstage advisor, we can make it a first‑class client of our API. MCP gives us the contract we need: it shows the model a machine‑readable catalogue of exactly what actions are available, validates every call so hallucinations produce a clear error instead of confusing the user, and keeps the “what can I do?” logic in one place no matter who’s asking.
Now the LLM can launch a website, restart it, tail logs – all without guessing which buttons exist on which page.
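To make that concrete, here is a minimal sketch of an MCP tool built with the official Python SDK. Everything here is illustrative – the `reload_webapp` tool name and its body are made up for the example, not our actual implementation:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pythonanywhere-demo")

@mcp.tool()
def reload_webapp(domain_name: str) -> str:
    """Reload the web app served at the given domain.

    The type hints and this docstring become the machine-readable
    catalogue entry the agent reads before calling the tool.
    """
    # Illustrative stub: a real server would call the PythonAnywhere API here.
    return f"Reloaded {domain_name}"

if __name__ == "__main__":
    # stdio is the transport Claude Desktop, Cursor and friends speak by default.
    mcp.run()
```

A call with a missing or mistyped argument fails schema validation and comes back as a clear error, instead of the agent quietly doing the wrong thing.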
How we wired it up¶
We already had an open‑source dev toolchain: pythonanywhere-core, which wraps our API in a Pythonic interface, and pythonanywhere, which provides the pa CLI tool on top of it. The next step was to add an MCP‑compliant server using the official Python SDK and expose those same capabilities to agents.
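As a sketch of that layering, an MCP tool can be a thin shim over the existing library. The import path and the `Webapp(domain_name).reload()` call below are our reading of pythonanywhere-core – check the repositories for the exact API:

```python
from mcp.server.fastmcp import FastMCP
from pythonanywhere_core.webapp import Webapp  # assumed import path

mcp = FastMCP("pythonanywhere")

@mcp.tool()
def reload_webapp(domain_name: str) -> str:
    """Reload the web app at the given domain."""
    # The same Pythonic wrapper the pa CLI builds on, now exposed to agents.
    Webapp(domain_name).reload()
    return f"Reloaded {domain_name}"
```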
Thinking of MCP as just another frontend turned out to be a great fit. The MCP server is simply the frontend tuned for LLMs – strict schemas and rich metadata (agents learn about the toolbox from the docstrings).
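From the agent’s side, discovering that toolbox is one call away. Here is a sketch using the same SDK’s client, assuming a server like the one above is saved as `server.py`:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server as a subprocess and talk MCP over stdio.
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The catalogue the agent sees: tool names, descriptions, schemas.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```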
There be dragons¶
Giving an autonomous agent the power to rm -rf ~/mysite is exhilarating –
and potentially disastrous. Whenever an operation could delete data or shut down
a live service, the server pauses and asks for explicit confirmation, or relies
on a pre‑approved policy. Nothing happens silently.
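One illustrative pattern for such a guard (not our exact policy) is to make destructive tools refuse to act unless the call carries explicit confirmation:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pythonanywhere-guarded")

@mcp.tool()
def delete_webapp(domain_name: str, confirm: bool = False) -> str:
    """Delete the web app at the given domain. Destructive!

    Call with confirm=True only after the human operator has
    explicitly approved the operation.
    """
    if not confirm:
        # Refuse by default – nothing happens silently.
        return (f"Refusing to delete {domain_name}. Ask the user for "
                "approval, then call again with confirm=True.")
    # Illustrative stub: the real API call would go here.
    return f"Deleted {domain_name}"
```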
We also insist that every agent rehearse in a disposable development account before it is allowed anywhere near production (whatever “production” means in your world). That dry‑run phase surfaces edge cases, catches stupid mistakes, and gives the human operator a last look at the plan. At the same time, one needs to remember that nothing in the LLM world is truly deterministic.
Bottom line: MCP provides the guard‑rails, but a human still holds the steering wheel.
The world we live in now¶
LLM‑driven workflows are here to stay. By giving models structured, contextual access to PythonAnywhere, we swap guesswork for guarantees and free users from some points of frustration (while probably adding new ones). The MCP stack should keep our backend solid, our frontends flexible, and our support team handling slightly more interesting problems than what got lost in translation between the model and the UI. At least we hope so.
Ready to try it?¶
Head over to our GitHub repository for installation instructions and start connecting your favorite MCP-compatible tools directly to PythonAnywhere. The age of guessing what the LLM meant is over (or at least elevated to a new level).