Self-Hosting Langfuse on Netcup VPS: Complete Guide to Open Source LLM Engineering Platform
TL;DR: Running Langfuse in 5 Minutes
1. Get a Netcup VPS 2000 G12 (8 vCPU, 16 GB RAM, 512 GB NVMe - from EUR12.61/month)
Use one of these coupon codes for 1 month free:
- Coupon 1: 5800nc17718015230
- Coupon 2: 5800nc17755880760
- Coupon 3: 5800nc17718015234
2. SSH into your server and run these commands:
git clone https://github.com/langfuse/langfuse.git
cd langfuse
docker compose up -d
3. Done! Access Langfuse at http://your-server-ip:3000 and create your account.
For production workloads, see the server comparison below with additional coupon codes for each plan.
Introduction: Why Langfuse Matters for AI Development
The explosion of Large Language Model (LLM) applications has created a critical need for proper observability, evaluation, and management tools. Without them, teams struggle to understand how their AI models behave in production, iterate on prompts effectively, and ensure consistent quality across deployments.
Langfuse is an open source LLM engineering platform that addresses these challenges directly. It provides comprehensive observability for LLM applications, enabling teams to trace, debug, and optimize their AI models with unprecedented clarity. Whether you are building chatbots, autonomous agents, or complex AI workflows, Langfuse offers the visibility and control you need.
This guide walks you through self-hosting Langfuse on a Netcup VPS - giving you complete control over your data while saving 60-80% compared to managed cloud solutions. You will learn what Langfuse is, how to deploy it, and how to choose the right Netcup server for your needs.
What is Langfuse?
Langfuse is an open source LLM engineering platform designed to help teams collaboratively develop, monitor, evaluate, and debug AI applications. Founded in 2023 and backed by Y Combinator, Langfuse has become the observability standard for thousands of AI teams worldwide.
Core Features
LLM Observability: Langfuse instruments your application to capture detailed traces of every LLM call. You can track requests, responses, latency, token usage, and any custom metadata. This visibility is essential for understanding model behavior and troubleshooting issues.
Prompt Management: Centralize, version, and collaboratively iterate on your prompts. Langfuse maintains prompt history, enables A/B testing, and provides strong caching to prevent latency overhead from prompt lookups.
Evaluations: Measure AI application quality using LLM-as-a-judge, user feedback, manual labeling, or custom evaluation pipelines. Langfuse integrates with your existing testing workflows.
Datasets: Create test sets and benchmarks for evaluating your AI applications. Datasets enable continuous improvement and pre-deployment testing.
LLM Playground: Test and iterate on prompts and model configurations directly in the UI. Jump from a bad trace result directly to the playground for quick iteration.
Comprehensive API: Build custom LLMOps workflows using Langfuse's OpenAPI spec, Postman collection, and typed SDKs for Python and JavaScript/TypeScript.
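Because everything is exposed through the public API, you can reach it with nothing but the standard library. The sketch below assumes HTTP Basic auth with the project's public key as username and secret key as password; the host, keys, and the `/api/public/traces` path are placeholders to adapt to your instance:

```python
import base64
import urllib.request

# Placeholders: substitute your own host and project API keys
host = "http://your-server-ip:3000"
public_key, secret_key = "pk-lf-...", "sk-lf-..."

# Basic auth: public key as username, secret key as password
token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
request = urllib.request.Request(
    f"{host}/api/public/traces",
    headers={"Authorization": f"Basic {token}"},
)
# urllib.request.urlopen(request) would return a JSON page of traces
```

The typed Python and JavaScript/TypeScript SDKs wrap these endpoints, so raw HTTP is only needed for languages without an official SDK.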
What Makes Langfuse Unique
Unlike traditional monitoring tools, Langfuse is purpose-built for the unique challenges of LLM applications:
- Structured Tracing: Capture the full context of AI interactions, not just simple metrics
- Prompt First Design: Treat prompts as first-class citizens with versioning and testing
- Production-Ready Architecture: Built on battle-tested infrastructure used by thousands of teams
- Open Source: Full transparency and the ability to self-host for data sovereignty
Integrations
Langfuse integrates with the major AI development tools:
- SDKs: Python, JavaScript/TypeScript with automatic instrumentation
- OpenAI: Drop-in replacement for the OpenAI SDK
- LangChain: Automated instrumentation via callback handler
- LlamaIndex: Integration via callback system
- LiteLLM: Use 100+ LLMs with a unified API
- Vercel AI SDK: Build AI-powered applications
- Mastra: Open source framework for AI agents
How to Use Langfuse
Basic Workflow
Step 1: Create a Project
After installing Langfuse, create a new project through the web interface. Each project maintains separate configuration, prompts, and data.
Step 2: Get API Credentials
Generate API keys in project settings. These credentials connect your applications to Langfuse.
Step 3: Instrument Your Application
For Python applications:
from langfuse import observe
from langfuse.openai import openai

@observe()
def generate_response(user_input):
    return openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_input}],
    )
Step 4: View Traces
Access the Langfuse dashboard to explore traces, analyze performance, and debug issues.
Advanced Features
Prompt Management: Store and version prompts in Langfuse. Update prompts without redeploying code:
from langfuse import Langfuse
from langfuse.openai import openai

client = Langfuse()

# Fetch the current production version of the prompt by name
prompt = client.get_prompt("customer-support-v1")

response = openai.chat.completions.create(
    model=prompt.config["model"],  # model name stored in the prompt config
    # compile() fills template variables; assumes a {{user_input}} variable
    messages=[{"role": "user", "content": prompt.compile(user_input=user_input)}],
)
Evaluations: Create evaluation runs to measure quality:
from langfuse import Langfuse

client = Langfuse()

# Fetch the dataset and run your evaluation task over each item
# (method names may differ slightly between SDK versions)
dataset = client.get_dataset("dataset-123")
for item in dataset.items:
    result = my_eval_function(item.input, item.expected_output)
    # attach scores for each result via the Langfuse scoring API
Datasets: Build test sets from production data:
from langfuse import Langfuse

client = Langfuse()

client.create_dataset(name="Customer Support Test Set")
for input_text, expected in [
    ("How do I reset password?", "Process description"),
    ("Billing question", "Billing response"),
]:
    client.create_dataset_item(
        dataset_name="Customer Support Test Set",
        input=input_text,
        expected_output=expected,
    )
Quick Start Guide: Deploying Langfuse on Netcup
This section provides detailed instructions for deploying Langfuse on a Netcup VPS using Docker Compose.
Prerequisites
- A Netcup VPS or Root Server (see server recommendations below)
- Basic familiarity with command line
- SSH access to your server
Step 1: Choose and Provision Your Server
Based on Langfuse requirements, we recommend:
| Plan | vCPU | RAM | Storage | Monthly Cost | Best For |
|---|---|---|---|---|---|
| VPS 1000 G12 | 4 | 8 GB | 256 GB | EUR7.56 | Testing/Single user |
| VPS 2000 G12 | 8 | 16 GB | 512 GB | EUR12.61 | Production/Small team |
| VPS 4000 G12 | 12 | 32 GB | 1 TB | EUR22.67 | High throughput |
For production with guaranteed performance, consider Root Servers.
Step 2: Connect to Your Server
ssh root@your-server-ip
Step 3: Install Docker
If Docker is not installed:
# Update package index
apt update && apt upgrade -y
# Install dependencies
apt install ca-certificates curl gnupg
# Add Docker's official GPG key
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
# Add Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
apt update
apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Verify
docker run hello-world
Step 4: Clone Langfuse Repository
git clone https://github.com/langfuse/langfuse.git
cd langfuse
Step 5: Configure Secrets
Edit the docker-compose.yml file and update sensitive values marked with # CHANGEME:
nano docker-compose.yml
Update these critical variables:
- NEXTAUTH_SECRET: signs user session cookies
- SALT: salts hashed API keys
- ENCRYPTION_KEY: 256-bit key (64 hex characters) for encrypting sensitive data
- Database passwords (PostgreSQL, ClickHouse, Redis, MinIO)
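Each of these must be a long random value. One way to generate them (assuming openssl is available, as it is on stock Ubuntu):

```shell
# Emit a fresh 64-character hex secret; run once per CHANGEME value
SECRET=$(openssl rand -hex 32)
echo "$SECRET"
```

Never reuse the same value for two different secrets, and keep the generated values out of version control.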
Step 6: Start Langfuse
docker compose up -d
Monitor the startup:
docker compose logs -f
After 2-3 minutes, you should see "Ready" in the logs.
Step 7: Access Langfuse
Open your browser and navigate to:
http://your-server-ip:3000
Create your admin account and organization.
Step 8: Configure Firewall (Recommended)
# Allow SSH and Langfuse only
ufw allow 22/tcp
ufw allow 3000/tcp
ufw enable
Step 9: Connect Your Applications
Get your API keys from Settings > API Keys and configure your applications:
import os
from langfuse import Langfuse

os.environ["LANGFUSE_SECRET_KEY"] = "your-secret-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = "your-public-key"
os.environ["LANGFUSE_HOST"] = "http://your-server-ip:3000"

langfuse = Langfuse()
Maintenance
Update Langfuse:
cd langfuse
docker compose pull
docker compose up -d
Backup Data:
docker compose down
# Archive the checkout from its parent directory; note that database
# contents live in Docker volumes (see Backup Strategy below)
cd .. && tar -czf langfuse-backup.tar.gz langfuse && cd langfuse
docker compose up -d
Stop Langfuse:
docker compose down
Choosing the Right Netcup Server for Langfuse
Netcup offers several server lines. Here is how each fits Langfuse deployment, with coupon codes for each plan.
VPS (Virtual Private Server)
VPS plans offer shared CPU with guaranteed minimum resources. Ideal for testing, development, and production with moderate traffic.
VPS 1000 G12 - Testing and Development
| Resource | Specification |
|---|---|
| vCPU Cores | 4 |
| RAM | 8 GB |
| Storage | 256 GB NVMe SSD |
| Bandwidth | Unlimited |
| Monthly Cost | EUR7.56 |
Best For:
- Testing Langfuse
- Development environments
- Single-user applications
Coupon Codes (1 Month Free):
- Coupon 1: 5799nc17755868070
- Coupon 2: 5799nc17755752870
- Coupon 3: 5799nc17755736850
VPS 2000 G12 - Recommended for Production
| Resource | Specification |
|---|---|
| vCPU Cores | 8 |
| RAM | 16 GB |
| Storage | 512 GB NVMe SSD |
| Bandwidth | Unlimited |
| Monthly Cost | EUR12.61 |
Best For:
- Production Langfuse deployments
- Small teams (2-5 users)
- Moderate trace volumes
Coupon Codes (1 Month Free):
- Coupon 1: 5800nc17718015233
- Coupon 2: 5800nc17718015232
- Coupon 3: 5800nc17718015231
VPS 4000 G12 - High Traffic
| Resource | Specification |
|---|---|
| vCPU Cores | 12 |
| RAM | 32 GB |
| Storage | 1 TB NVMe SSD |
| Bandwidth | Unlimited |
| Monthly Cost | EUR22.67 |
Best For:
- High-volume observability
- Large teams
- Multiple AI applications
Coupon Codes (1 Month Free):
- Coupon 1: 5801nc17718015214
- Coupon 2: 5801nc17718015213
- Coupon 3: 5801nc17718015212
VPS 8000 G12 - Maximum Scale
| Resource | Specification |
|---|---|
| vCPU Cores | 16 |
| RAM | 64 GB |
| Storage | 2 TB NVMe SSD |
| Bandwidth | Unlimited |
| Monthly Cost | EUR39.73 |
Best For:
- Enterprise deployments
- Very high trace volumes
Coupon Codes (1 Month Free):
- Coupon 1: 5802nc17718015172
- Coupon 2: 5802nc17718015171
- Coupon 3: 5802nc17718015170
Root Servers (Dedicated Performance)
Root Servers provide dedicated CPU cores for guaranteed consistent performance. Recommended for production workloads where predictability matters.
RS 1000 G12 - Entry Dedicated
| Resource | Specification |
|---|---|
| CPU Cores | 4 (dedicated) |
| RAM | 8 GB |
| Storage | 256 GB NVMe SSD |
| Bandwidth | Unlimited |
| Monthly Cost | EUR8.74 |
Best For:
- Production with guaranteed performance
- Latency-sensitive AI applications
- Consistent resource availability
Coupon Codes (2 Months Free):
- Coupon 1: 5159nc17718015441
- Coupon 2: 5159nc17718015440
- Coupon 3: 5997nc17755880774
RS 2000 G12 - Production Dedicated
| Resource | Specification |
|---|---|
| CPU Cores | 8 (dedicated) |
| RAM | 16 GB |
| Storage | 512 GB NVMe SSD |
| Bandwidth | Unlimited |
| Monthly Cost | EUR15.12 |
Best For:
- Teams in production
- Sustained AI workloads
- Running alongside databases
Coupon Codes (2 Months Free):
- Coupon 1: 5160nc17718015411
- Coupon 2: 5160nc17718015414
- Coupon 3: 5160nc17718015413
RS 4000 G12 - Enterprise Dedicated
| Resource | Specification |
|---|---|
| CPU Cores | 12 (dedicated) |
| RAM | 32 GB |
| Storage | 1 TB NVMe SSD |
| Bandwidth | Unlimited |
| Monthly Cost | EUR26.86 |
Best For:
- Enterprise Langfuse
- Large teams
- Multiple concurrent AI applications
Coupon Codes (2 Months Free):
- Coupon 1: 5161nc17718015391
- Coupon 2: 5161nc17718015390
- Coupon 3: 5161nc17718015394
RS 8000 G12 - Maximum Power
| Resource | Specification |
|---|---|
| CPU Cores | 16 (dedicated) |
| RAM | 64 GB |
| Storage | 2 TB NVMe SSD |
| Bandwidth | Unlimited |
| Monthly Cost | EUR46.41 |
Best For:
- Maximum performance requirements
- Very large organizations
- High-throughput production
Coupon Codes (2 Months Free):
- Coupon 1: 5162nc17718015362
- Coupon 2: 5162nc17718015361
- Coupon 3: 5162nc17718015360
VPS vs Root Server Comparison
| Factor | VPS | Root Server |
|---|---|---|
| CPU | Shared (burst capable) | Dedicated (consistent) |
| Price | Lower | Slightly higher |
| Performance | Great for variable workloads | Best for sustained workloads |
| Best For | Development, testing, light production | Production, teams, latency-sensitive |
Our Recommendation: Start with VPS 2000 G12 using coupon 5800nc17718015230 for testing. Once ready for production, upgrade to VPS 4000 G12 or RS 2000 G12 for guaranteed performance.
General Discount Coupons
Save EUR5 on any order:
- Coupon 1: 36nc17718015548
- Coupon 2: 36nc17718015547
- Coupon 3: 36nc17718015546
Cost Comparison: Netcup vs Other Solutions
Monthly Cost Comparison
| Plan | Netcup VPS | Langfuse Cloud | AWS | Google Cloud |
|---|---|---|---|---|
| 4 vCPU, 8 GB | EUR7.56 | ~EUR30 | EUR25-30 | EUR30-35 |
| 8 vCPU, 16 GB | EUR12.61 | ~EUR60 | EUR55-65 | EUR60-70 |
| 12 vCPU, 32 GB | EUR22.67 | ~EUR120 | EUR110-130 | EUR120-140 |
Annual Savings Example
- Netcup VPS 2000: EUR151/year
- Langfuse Cloud equivalent: EUR720/year
- Annual savings: EUR569
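The savings figure follows directly from the listed prices (the EUR60/month cloud figure is the approximation from the comparison table):

```python
# Annual cost comparison for the 8 vCPU / 16 GB tier
netcup_monthly = 12.61  # VPS 2000 G12
cloud_monthly = 60.0    # approximate managed-cloud equivalent

netcup_annual = netcup_monthly * 12
savings = cloud_monthly * 12 - netcup_annual

print(round(netcup_annual))  # 151
print(round(savings))        # 569
```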
What You Save Beyond Direct Costs
- Self-hosted control: No data leaves your infrastructure
- No per-user fees: Unlimited team members
- No API overage charges: Fixed monthly cost
- Custom integrations: Full access to underlying services
Security Best Practices
Network Security
# Only allow necessary ports
ufw default deny incoming
ufw allow 22/tcp # SSH
ufw allow 3000/tcp # Langfuse
SSL/TLS Encryption
Use Let's Encrypt for free certificates:
apt install nginx certbot python3-certbot-nginx
certbot --nginx -d langfuse.yourdomain.com
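The certbot nginx plugin rewrites an existing server block, so nginx must already answer for the domain. A minimal reverse-proxy sketch (domain and upstream port are placeholders to adjust) forwarding to Langfuse on port 3000:

```nginx
server {
    listen 80;
    server_name langfuse.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

On Debian/Ubuntu, place this under /etc/nginx/sites-available/, symlink it into sites-enabled, reload nginx, then run certbot. Once the proxy serves HTTPS, you can close port 3000 in the firewall and expose only 80/443.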
Authentication
Update /etc/ssh/sshd_config (create a non-root sudo user first, since this disables root login, and restart SSH afterwards with systemctl restart ssh):
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
Regular Updates
cd langfuse
docker compose pull
docker compose up -d
Backup Strategy
Implement regular backups:
# Database backup (container name may differ; verify with docker ps)
docker exec langfuse-postgres pg_dump -U langfuse -d langfuse > backup.sql
# Volume backup (Docker named volumes live under /var/lib/docker/volumes)
tar -czf langfuse-data.tar.gz /var/lib/docker/volumes
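These commands are easy to schedule. A crontab entry (assuming they are wrapped in a hypothetical /root/backup-langfuse.sh script) might look like:

```
# Run the backup script daily at 03:00
0 3 * * * /root/backup-langfuse.sh
```

Store at least one copy of each backup off the server, since a backup on the same disk does not protect against host failure.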
Conclusion
Langfuse on Netcup delivers enterprise-grade LLM observability at a fraction of cloud costs. With comprehensive tracing, prompt management, evaluations, and datasets, you gain complete visibility into your AI applications while maintaining full data control.
Key Benefits:
- Cost Savings: 60-80% less than managed alternatives
- Data Sovereignty: All data stays on your infrastructure
- Unlimited Usage: No per-trace or per-user charges
- Full Control: Customize and extend as needed
- Enterprise Infrastructure: Netcup's German data centers provide reliable performance
To get started:
- Choose a server plan from the recommendations above
- Use a coupon code from this guide for free month(s)
- Follow the quick start guide to deploy Langfuse
- Connect your AI applications and start observing
Start with a VPS 2000 G12 (EUR12.61/month) using coupon 5800nc17755880760 and scale as your needs grow. With Netcup's 30-day satisfaction guarantee on Root Servers, you can try risk-free.
This guide is intended to help you deploy Langfuse on Netcup. Technology changes rapidly - always refer to the official Langfuse documentation for the most current information.