One of the key features of Traceloop is the ability to monitor the quality of your LLM outputs in real time. It helps you detect hallucinations and regressions in the quality of your models and prompts.
To start monitoring your LLM outputs, make sure you've installed OpenLLMetry and configured it to send data to Traceloop. If you haven't done that yet, follow the instructions in the Getting Started guide.
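As a quick sanity check, a minimal setup might look like the sketch below. It assumes the `traceloop-sdk` package is installed and that your API key is available in the `TRACELOOP_API_KEY` environment variable; the `app_name` value is an illustrative placeholder.

```python
import os

from traceloop.sdk import Traceloop

# Initialize OpenLLMetry. With TRACELOOP_API_KEY set in the environment,
# traces are exported to Traceloop by default; app_name labels this service
# in the dashboard ("my-llm-app" is a placeholder).
assert os.environ.get("TRACELOOP_API_KEY"), "set TRACELOOP_API_KEY first"

Traceloop.init(app_name="my-llm-app")

# From this point on, calls made through instrumented LLM libraries
# (e.g. the OpenAI client) are traced automatically.
```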
Next, if you’re not using a supported LLM framework, make sure to annotate workflows and tasks.
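A minimal sketch of such annotations, assuming the `@workflow` and `@task` decorators from `traceloop.sdk.decorators` and a hypothetical summarization pipeline (the function names and prompt are illustrative):

```python
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import task, workflow

Traceloop.init(app_name="my-llm-app")


@task(name="summarize_chunk")
def summarize_chunk(chunk: str) -> str:
    # Placeholder for an LLM call that summarizes one chunk of text;
    # the task decorator groups the underlying LLM spans under this name.
    return f"summary of: {chunk[:40]}"


@workflow(name="summarize_document")
def summarize_document(chunks: list[str]) -> str:
    # The workflow decorator wraps the whole pipeline in a single trace,
    # so Traceloop can evaluate the end-to-end output quality.
    return "\n".join(summarize_chunk(c) for c in chunks)
```

Framework integrations (such as LangChain or LlamaIndex) report this structure automatically, which is why the manual annotations are only needed outside of them.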