Go has a built-in profiler, pprof, that provides useful CPU, memory, and thread/goroutine information for running applications. The pprof tool is part of the Go command-line interface and is accessed through go tool pprof. When an application misbehaves, it can be useful to get into the nitty-gritty and see what it is doing under the hood.
Note: graphviz must be installed before pprof can render its graph visualizations (for example the web and svg views). On macOS this can be done via:
brew install graphviz
Depending on the situation, one of the following profiles may be appropriate to investigate:
- goroutine - stack traces of all current goroutines
- heap - a sampling of all heap allocations
- threadcreate - stack traces that led to the creation of new OS threads
- block - stack traces that led to blocking on synchronization primitives
- mutex - stack traces of holders of contended mutexes
- profile - CPU profile; sampled stack traces of where the program spends CPU time
Dumping a Running Pod
When we need to dump several profile types from a pod, the following script automates the process. Adjust the namespace, pod_name, port, and profile_types variables as necessary.
#!/bin/bash
set -euo pipefail
namespace="mirai"
pod_name="POD_NAME"
port="8080"
profile_types=("profile" "heap" "goroutine" "block" "mutex" "threadcreate")
date_format="+%Y-%m-%d_%H-%M-%S"
log() {
  echo "[DEBUG] $1"
}

capture_profile() {
  local type="$1"
  local output_file="${namespace}-${pod_name}-${type}-$(date "$date_format").out"
  log "Capturing $type data to $output_file..."
  # Note: the CPU "profile" endpoint blocks for 30 seconds by default.
  http_code=$(curl -s -w "%{http_code}" "http://localhost:$port/debug/pprof/$type" -o "$output_file")
  if [ "$http_code" != "200" ]; then
    echo "[ERROR] Failed to capture $type data. HTTP status: $http_code"
    rm -f "$output_file"
  else
    log "Successfully captured $type data."
  fi
}
log "Starting profile capture for pod: $pod_name in namespace: $namespace"
log "Running: kubectl port-forward pod/$pod_name $port:$port --namespace $namespace"
kubectl port-forward pod/"$pod_name" "$port:$port" --namespace "$namespace" > /tmp/port_forward.log 2>&1 &
port_forward_pid=$!
log "Port-forward PID: $port_forward_pid"
# Ensure port-forward is cleaned up on exit
trap "kill $port_forward_pid 2>/dev/null || true" EXIT
# Give the port-forward a moment to establish before pulling data
sleep 2
if ! ps -p "$port_forward_pid" > /dev/null; then
  echo "[ERROR] Port-forward process died unexpectedly. Log output:"
  cat /tmp/port_forward.log
  exit 1
fi
log "Port-forward appears healthy. Beginning profile collection..."
for type in "${profile_types[@]}"; do
capture_profile "$type"
done
log "Profile capture complete."
echo "Use 'go tool pprof -http=:8080 <profile_file>' to analyze the captured data."
General Debugging Tips
For each available profile type, you can start looking for the following issues depending on your concern:
- goroutine – Look for too many active goroutines, stuck routines, or signs of leaks.
- heap – Look for unexpectedly high memory usage, large retained objects, or memory leaks.
- threadcreate – Look for rapid or excessive thread creation, which can signal leaks or performance issues.
- block – Look for where goroutines block too long on locks, channels, or other sync points.
- mutex – Look for lock contention hotspots causing slowdowns or bottlenecks.
- profile (CPU) – Look for functions that consume the most CPU time to find performance bottlenecks.