STUN/TURN Server Configuration: The Complete Guide
Everything you need to know about configuring STUN and TURN servers for WebRTC - from local development to production deployment
Your WebRTC app works perfectly on localhost. Then you deploy it, and half your users can't connect. The culprit is almost always STUN/TURN configuration - or the lack of it.
This guide covers everything from "what are these things" to "my TURN server handles 10,000 concurrent users." Let's go.
Why You Need STUN and TURN
WebRTC wants to establish direct peer-to-peer connections. The problem: most devices are behind NAT (Network Address Translation), which means they don't have public IP addresses.
STUN (Session Traversal Utilities for NAT) helps peers discover their public IP address and port. It's like asking someone outside your house what address they see on your letters.
TURN (Traversal Using Relays around NAT) is a fallback when direct connection fails. It relays all media through a server. More latency, but it always works.
When Each Is Used
- Host candidates - Direct connection (same network, rare on internet)
- Server reflexive (srflx) - STUN worked, NAT traversal possible (~70% of connections)
- Relay - TURN required, usually symmetric NAT or firewall (~30% of connections)
That 30% is why you must have TURN in production. Without it, nearly a third of your users will fail to connect.
Basic Configuration
Here's a production-ready ICE server configuration:
```javascript
const config = {
  iceServers: [
    // Public STUN (for most users)
    { urls: 'stun:stun.l.google.com:19302' },
    // Primary TURN - UDP preferred
    {
      urls: [
        'turn:turn.yourserver.com:3478?transport=udp',
        'turn:turn.yourserver.com:3478?transport=tcp'
      ],
      username: 'user',
      credential: 'pass'
    },
    // Fallback TURN - TCP 443 for restrictive firewalls
    {
      urls: 'turns:turn.yourserver.com:443?transport=tcp',
      username: 'user',
      credential: 'pass'
    }
  ]
};
```
Key points:
- STUN is free (Google provides public servers), but don't rely on it for production
- Include both UDP and TCP TURN - some networks block UDP
- TCP on port 443 with TLS (`turns:`) bypasses most corporate firewalls
- Order matters - browsers try candidates in order
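One useful trick for validating the fallback path: setting `iceTransportPolicy: 'relay'` tells the browser to discard host and srflx candidates, so a call connects only if TURN actually works. A minimal sketch (the hostname and credentials are placeholders):

```javascript
// Relay-only configuration for testing TURN end-to-end.
// 'relay' makes the browser discard host and srflx candidates,
// so the connection succeeds only if a TURN allocation works.
// turn.yourserver.com and the credentials are placeholders.
const relayOnlyConfig = {
  iceTransportPolicy: 'relay',
  iceServers: [
    {
      urls: 'turn:turn.yourserver.com:3478?transport=udp',
      username: 'user',
      credential: 'pass'
    }
  ]
};

// In the browser: new RTCPeerConnection(relayOnlyConfig);
// If calls still connect, your TURN server is doing its job.
```

Don't ship this to users - it forces every connection through the relay - but it's a quick way to prove TURN works before the 30% of users who need it find out for you.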
TURN Authentication
Never use static credentials in production. Anyone who decompiles your app can steal your TURN bandwidth.
```javascript
// Static credentials (simple but insecure)
{
  urls: 'turn:turn.example.com:3478',
  username: 'staticuser',
  credential: 'staticpassword'
}
```

```javascript
// Time-limited credentials (recommended)
// Server generates credentials with expiry
const { createHmac } = require('crypto');

const timestamp = Math.floor(Date.now() / 1000) + 3600; // 1 hour
const username = `${timestamp}:userid`;
const hmac = createHmac('sha1', SECRET_KEY);
hmac.update(username);
const credential = hmac.digest('base64');

{
  urls: 'turn:turn.example.com:3478',
  username: username,
  credential: credential
}
```

Time-limited credentials use HMAC-SHA1 to generate temporary passwords. The TURN server validates them using the same shared secret. Typical flow:
- Client requests credentials from your backend
- Backend generates username/password with timestamp
- Client uses credentials for that session
- Credentials expire after configured time (usually 1-24 hours)
Setting Up Coturn
Coturn is the de facto open-source TURN server. Here's a production configuration:
```shell
# Install coturn
sudo apt install coturn

# Generate TLS certificate (Let's Encrypt)
sudo certbot certonly --standalone -d turn.example.com

# Enable coturn service
sudo systemctl enable coturn

# Edit /etc/default/coturn
TURNSERVER_ENABLED=1
```

And the server configuration:
```
# /etc/turnserver.conf

# Network
listening-port=3478
tls-listening-port=5349
# For TCP 443 fallback (looks like HTTPS to firewalls)
alt-listening-port=443
no-tlsv1
no-tlsv1_1

# TLS certificates
cert=/etc/ssl/certs/turn.crt
pkey=/etc/ssl/private/turn.key

# Realm (your domain)
realm=turn.yourcompany.com

# Authentication
# (use-auth-secret enables the long-term credential mechanism
# with time-limited passwords; don't also set lt-cred-mech)
use-auth-secret
static-auth-secret=YOUR_SHARED_SECRET

# Limits
total-quota=100
user-quota=10
max-bps=1000000

# Logging
log-file=/var/log/turnserver.log
verbose
```

Critical Settings Explained
- alt-listening-port=443 - Fallback port that looks like HTTPS
- use-auth-secret - Enable time-limited credentials
- static-auth-secret - Shared secret with your backend
- total-quota - Max concurrent allocations (sessions)
- max-bps - Bandwidth limit per user
Testing Your Setup
Don't deploy blind. Test your TURN server programmatically:
```javascript
async function testTurnServer(turnConfig) {
  const pc = new RTCPeerConnection({
    iceServers: [turnConfig],
    iceCandidatePoolSize: 0
  });

  return new Promise((resolve, reject) => {
    const timeout = setTimeout(() => {
      pc.close();
      reject(new Error('TURN test timed out'));
    }, 10000);

    let foundRelay = false;

    pc.onicecandidate = (e) => {
      if (e.candidate) {
        console.log('Candidate:', e.candidate.type);
        if (e.candidate.type === 'relay') {
          foundRelay = true;
          clearTimeout(timeout);
          pc.close();
          resolve({ success: true, candidate: e.candidate });
        }
      } else {
        // Gathering complete
        clearTimeout(timeout);
        pc.close();
        if (!foundRelay) {
          reject(new Error('No relay candidates - TURN failed'));
        }
      }
    };

    // Create dummy data channel to trigger ICE
    pc.createDataChannel('test');
    pc.createOffer().then(offer => pc.setLocalDescription(offer));
  });
}

// Usage
testTurnServer({
  urls: 'turn:turn.example.com:3478',
  username: 'user',
  credential: 'pass'
}).then(result => {
  console.log('TURN working:', result);
}).catch(err => {
  console.error('TURN failed:', err.message);
});
```

Also use the Trickle ICE test page - it's invaluable for debugging.
Monitoring in Production
Track which connection types your users actually use:
```javascript
// Monitor relay usage with getStats
setInterval(async () => {
  const stats = await pc.getStats();
  stats.forEach(report => {
    // Note: `report.selected` is Firefox-only. Checking for a
    // nominated, succeeded candidate pair works across browsers.
    if (report.type === 'candidate-pair' &&
        report.nominated && report.state === 'succeeded') {
      const local = stats.get(report.localCandidateId);
      const remote = stats.get(report.remoteCandidateId);
      console.log('Connection type:', {
        local: local?.candidateType, // host, srflx, relay
        remote: remote?.candidateType,
        bytesReceived: report.bytesReceived,
        bytesSent: report.bytesSent
      });
      if (local?.candidateType === 'relay') {
        console.log('Using TURN relay - higher latency expected');
      }
    }
  });
}, 5000);
```

This helps you understand:
- What percentage of connections use TURN (affects bandwidth costs)
- Whether your STUN configuration is working
- Geographic patterns in relay usage
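Per-connection logs only get you so far; the value is in the aggregate. A minimal sketch of a tracker for relay share across sessions (the function names are my own, not a library API):

```javascript
// Minimal aggregator for connection types across sessions.
// Feed it the local candidateType from the getStats loop above.
function createConnectionTracker() {
  const counts = { host: 0, srflx: 0, prflx: 0, relay: 0 };
  return {
    record(candidateType) {
      if (candidateType in counts) counts[candidateType] += 1;
    },
    relayPercentage() {
      const total = Object.values(counts).reduce((a, b) => a + b, 0);
      return total === 0 ? 0 : (counts.relay / total) * 100;
    },
    snapshot() {
      return { ...counts };
    }
  };
}

// Example: two srflx connections and one relay
const tracker = createConnectionTracker();
tracker.record('srflx');
tracker.record('srflx');
tracker.record('relay');
// tracker.relayPercentage() is now ~33.3 - compare that against
// the ~30% baseline to spot STUN misconfiguration early
```

Ship these counts to whatever metrics system you already use; a relay percentage drifting well above 30% usually means your STUN configuration (or a network path to it) broke.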
Scaling TURN Servers
Capacity Planning
TURN is expensive - it relays all media traffic. For a video call at 1.5 Mbps bidirectional, one user consumes 3 Mbps of TURN bandwidth.
- CPU: Minimal - TURN just forwards packets
- Memory: ~10KB per allocation
- Bandwidth: This is your bottleneck. Size accordingly.
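A back-of-envelope sizing helper based on the figures above (the 30% relay share, 1.5 Mbps rate, and ~10 KB per allocation are the illustrative assumptions from this section, not benchmarks):

```javascript
// Rough TURN capacity estimate from the numbers above:
// each relayed user at `mbpsPerUser` consumes that much in each
// direction through the relay, and ~10 KB of memory per allocation.
function turnCapacity({ concurrentUsers, relayFraction, mbpsPerUser }) {
  const relayedUsers = concurrentUsers * relayFraction;
  return {
    relayedUsers,
    totalMbps: relayedUsers * mbpsPerUser * 2, // bidirectional
    memoryMB: (relayedUsers * 10) / 1024       // ~10 KB per allocation
  };
}

// 10,000 concurrent users, 30% relayed, 1.5 Mbps per direction
const estimate = turnCapacity({
  concurrentUsers: 10000,
  relayFraction: 0.3,
  mbpsPerUser: 1.5
});
// estimate.totalMbps === 9000: plan for ~9 Gbps of relay bandwidth,
// while memory stays around 30 MB - bandwidth is the bottleneck
```

Run your own relay percentage through this before picking instance sizes; the bandwidth number is usually the one that surprises people.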
Geographic Distribution
TURN latency directly impacts user experience. Deploy TURN servers in regions where your users are:
- Use GeoDNS to route users to nearest server
- Return region-appropriate credentials from your backend
- Major cloud providers (AWS, GCP) have good TURN server images
Managed Services
If you don't want to run your own TURN infrastructure:
- Twilio Network Traversal - Pay per GB, global coverage
- Xirsys - WebRTC-focused, competitive pricing
- Metered.ca - Simple per-minute pricing
At scale, self-hosted is cheaper. Below 1000 concurrent users, managed services often make sense.
Common Mistakes
- Using only STUN - Works in development, fails in production. Always include TURN.
- Forgetting TCP fallback - Many corporate networks block UDP entirely.
- Static credentials - Gets abused. Use time-limited credentials.
- No monitoring - You won't know when TURN fails until users complain.
- Single region deployment - TURN latency matters. Go global or users suffer.
Quick Reference
| Protocol | Default Port | Use Case |
|---|---|---|
| STUN (UDP) | 3478 | NAT discovery |
| TURN (UDP) | 3478 | Primary relay |
| TURN (TCP) | 3478 | UDP-blocked networks |
| TURNS (TLS) | 5349 or 443 | Firewall bypass |
Summary
STUN/TURN configuration is the difference between "it works in my office" and "it works everywhere." The setup isn't complicated, but it requires attention:
- Always include TURN (not just STUN)
- Support multiple transports (UDP, TCP, TLS)
- Use time-limited credentials
- Deploy in multiple regions for low latency
- Monitor relay usage and server health
Get this right, and NAT traversal becomes invisible. Get it wrong, and you'll spend weeks debugging "random" connection failures.