How to Monitor Internal Servers Without Public IPs

Servers without public IPs can't be reached by external monitoring. Learn how to monitor private servers using agents, tunnels, and internal probes.

Wakestack Team

Engineering Team

6 min read

The Problem

Your servers don't have public IP addresses:

  • Private cloud VMs (AWS private subnets, GCP internal IPs)
  • On-premise servers behind NAT
  • Kubernetes nodes in private networks
  • Development/staging environments

External monitoring services can't reach them. Pinging 10.0.1.50 from the internet doesn't work.

But you still need to know when CPU spikes, disk fills up, or memory runs out.

Solutions Overview

Method              | How It Works                                | Best For
Monitoring agent    | Agent on the server sends metrics out       | Most use cases
Push gateway        | Services push metrics to a collector        | Ephemeral/batch jobs
Internal monitoring | Self-hosted monitoring in the same network  | Air-gapped environments
Bastion/jump host   | Proxy monitoring through a public host      | Existing bastion setup

Method 1: Monitoring Agent (Recommended)

Install an agent on each server. The agent:

  1. Collects local metrics (CPU, memory, disk, network)
  2. Sends data outbound to monitoring service
  3. Receives configuration from monitoring service
  4. No inbound connections needed

Network Flow

[Private Server]           [Internet]           [Monitoring]

  10.0.1.50                                    Dashboard
      ↓                                            ↑
  Agent  ──outbound HTTPS (443)──→  API  ──→  Metrics DB

All connections originate from inside your network going out.

Implementation

Wakestack Agent

# SSH to your private server (through bastion if needed)
ssh user@10.0.1.50
 
# Install agent
curl -fsSL https://wakestack.co.uk/install.sh | sudo bash
 
# Agent automatically starts collecting and reporting
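A quick sanity check after install is to look at the agent's service and logs. The unit name below is an assumption; the installer may register a different one:

# Hypothetical unit name; adjust to whatever the installer actually registers
systemctl status wakestack-agent
journalctl -u wakestack-agent --since "5 min ago"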

Datadog Agent

DD_API_KEY=your-key DD_SITE="datadoghq.com" bash -c \
  "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script.sh)"

Node Exporter + Remote Write

# Install node_exporter (serves metrics on :9100)
wget https://github.com/prometheus/node_exporter/releases/download/v1.7.0/node_exporter-1.7.0.linux-amd64.tar.gz
tar xvf node_exporter-1.7.0.linux-amd64.tar.gz
./node_exporter-1.7.0.linux-amd64/node_exporter &
 
# Configure a local Prometheus to scrape node_exporter and forward
# everything outbound via remote write (in prometheus.yml):
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
 
remote_write:
  - url: "https://prometheus-remote-write.example.com/api/v1/write"

Firewall Requirements

Only outbound HTTPS (port 443) is needed:

# If using iptables
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
 
# Most environments allow this by default
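Before blaming the agent, it is worth confirming that outbound HTTPS actually works from the private server. A quick check (the hostname is illustrative; use your monitoring service's endpoint):

# Should print an HTTP status line if DNS resolves and egress on 443 is open
curl -sI https://wakestack.co.uk | head -n 1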

Method 2: Push Gateway

For short-lived jobs or services that can't run agents:

How It Works

[Private Server]           [Push Gateway]           [Monitoring]

  Batch Job                   Collector             Dashboard
      ↓                           ↑                     ↑
  Push metrics ──→  Internal endpoint  ──→  Scrape/Forward

Services actively push their metrics rather than being scraped.

Implementation with Prometheus Pushgateway

# Run pushgateway (can be on any internal server)
docker run -d -p 9091:9091 prom/pushgateway
 
# From your batch job, push metrics:
echo "job_duration_seconds 42" | curl --data-binary @- \
  http://pushgateway:9091/metrics/job/batch_job_name
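The gateway only holds the last pushed value for each series; something still has to collect it. With Prometheus, add the gateway as a scrape target and set honor_labels so the pushed job label is preserved:

# prometheus.yml
scrape_configs:
  - job_name: 'pushgateway'
    honor_labels: true
    static_configs:
      - targets: ['pushgateway:9091']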

When to Use

  • Batch jobs that run and exit
  • Serverless functions
  • Services where you can't install agents
  • Legacy systems

Method 3: Internal Monitoring Stack

Run monitoring infrastructure inside your private network.

How It Works

[Private Network]

  Server A (10.0.1.50)  ←──scrape──  Prometheus (10.0.1.10)
  Server B (10.0.1.51)  ←──scrape──       ↓
  Server C (10.0.1.52)  ←──scrape──   Grafana (10.0.1.10)

Everything stays internal. Access dashboards through VPN or bastion.
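For example, a local SSH forward through the bastion is often enough to open Grafana from your laptop (IPs match the diagram above; the bastion hostname is illustrative):

# Forward local port 3000 to the internal Grafana, then browse http://localhost:3000
ssh -N -L 3000:10.0.1.10:3000 user@bastion.example.com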

Implementation

Prometheus + Grafana Stack

# docker-compose.yml on internal monitoring server
version: '3'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
 
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=secure-password
 
# prometheus.yml
scrape_configs:
  - job_name: 'nodes'
    static_configs:
      - targets:
        - '10.0.1.50:9100'
        - '10.0.1.51:9100'
        - '10.0.1.52:9100'

Netdata for Simpler Setup

# On the central (parent) Netdata server
docker run -d --name=netdata -p 19999:19999 netdata/netdata
 
# On each target server, run Netdata and stream to the parent.
# (NETDATA_CLAIM_TOKEN/ROOMS connect nodes to Netdata Cloud, which
# defeats a fully internal setup; use parent/child streaming instead.)
# /etc/netdata/stream.conf on each child:
[stream]
    enabled = yes
    destination = 10.0.1.10:19999
    api key = 11111111-2222-3333-4444-555555555555
 
# /etc/netdata/stream.conf on the parent (same key):
[11111111-2222-3333-4444-555555555555]
    enabled = yes

Benefits

  • No data leaves your network
  • Works in air-gapped environments
  • Full control over retention and access

Considerations

  • You maintain the infrastructure
  • Need to access dashboards internally (VPN/bastion)
  • Monitoring can fail with the network it monitors

Method 4: Bastion/Jump Host Proxy

If you have a bastion host with a public IP, use it to proxy monitoring.

How It Works

[Internet]           [Bastion]           [Private Servers]

  Monitoring  ──→  Public IP  ──→  10.0.1.50, 10.0.1.51
                    (proxy)

External monitoring reaches the bastion, which proxies to internal servers.

Implementation with SSH Tunnel

# On the bastion, forward its ports to the internal exporters
# (-g lets external hosts connect to the forwarded ports, -N skips running a command)
ssh -g -N -L 9100:10.0.1.50:9100 -L 9101:10.0.1.51:9100 localhost
 
# Now bastion:9100 reaches 10.0.1.50:9100 and bastion:9101 reaches 10.0.1.51:9100
# External Prometheus can scrape bastion:9100 and bastion:9101
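A plain SSH tunnel dies silently when the connection drops. If autossh is available on the bastion, it can supervise and re-establish the same forwards (a sketch using the port mappings above):

# -M 0 disables autossh's extra monitor port; keepalives detect dead connections
autossh -M 0 -f -N -g \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -L 9100:10.0.1.50:9100 -L 9101:10.0.1.51:9100 localhost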

Implementation with nginx

# On bastion
stream {
    upstream node_exporter_1 {
        server 10.0.1.50:9100;
    }
 
    server {
        listen 9100;
        proxy_pass node_exporter_1;
        # Restrict to monitoring service IPs
        allow 1.2.3.4;
        deny all;
    }
}

Security Considerations

  • Bastion becomes a critical security point
  • Whitelist monitoring service IPs
  • Consider authentication
  • Monitor the bastion itself

Kubernetes-Specific Solutions

For Kubernetes Clusters with Private Nodes

Prometheus Operator (Internal)

# Deploy Prometheus inside the cluster
helm install prometheus prometheus-community/kube-prometheus-stack

Prometheus runs inside the cluster and can reach all nodes/pods.
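Since Prometheus and Grafana live inside the cluster, kubectl port-forward is the quickest way to reach the dashboards without exposing anything. The service names below assume the Helm release name "prometheus" used above; check kubectl get svc if yours differ:

# Run each in its own terminal (port-forward blocks)
kubectl port-forward svc/prometheus-grafana 3000:80                      # Grafana on http://localhost:3000
kubectl port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090 # Prometheus on http://localhost:9090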

Datadog Kubernetes Agent

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: datadog-agent
spec:
  # Runs on every node, reports out to Datadog
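In practice you rarely write that DaemonSet by hand; Datadog's Helm chart deploys it for you. Roughly (exact values vary by chart version):

helm repo add datadog https://helm.datadoghq.com
helm repo update
helm install datadog-agent datadog/datadog --set datadog.apiKey=your-key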

Metrics Server + External Dashboard

# Deploy metrics-server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
 
# Access via kubectl or API
kubectl top nodes
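The same data is available over the metrics.k8s.io API that metrics-server registers, which is what an external dashboard or script would consume:

# Raw resource metrics as JSON
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods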

Choosing the Right Approach

Use Agent-Based If:

  • You can install software on servers
  • Servers have outbound internet access
  • You want simplicity
  • Using cloud monitoring service

Use Push Gateway If:

  • Running batch/ephemeral jobs
  • Can't install persistent agents
  • Jobs push their own metrics

Use Internal Monitoring If:

  • Air-gapped or restricted environment
  • Data must stay internal
  • You have ops capacity for self-hosting

Use Bastion Proxy If:

  • Already have bastion infrastructure
  • Few servers to monitor
  • Temporary/debugging needs

Common Pitfalls

Forgetting to Monitor the Monitor

If your internal Prometheus goes down, you won't know. Always have:

  • External uptime check on monitoring service
  • Alerts that go outside your infrastructure (email, PagerDuty), for example a dead man's switch like the sketch below
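A minimal dead man's switch is a Prometheus rule that always fires and is routed to an external receiver; if the heartbeat stops arriving, the monitoring stack itself is down:

# rules/meta.yml
groups:
  - name: meta
    rules:
      - alert: Watchdog
        expr: vector(1)
        labels:
          severity: none
        annotations:
          summary: "Always firing; if this stops arriving, monitoring itself is down"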

Agent Connectivity Issues

Agents need to reach the monitoring service. Common blockers (quick checks to diagnose them follow the list):

  • Corporate proxy (configure agent to use proxy)
  • Restrictive egress rules (whitelist monitoring service IPs/domains)
  • DNS issues (ensure agent can resolve monitoring service)
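A few commands run from the server narrow down which of these is the problem (the hostname is illustrative; substitute your provider's endpoint):

# DNS resolution, outbound HTTPS, and any proxy settings in effect
getent hosts wakestack.co.uk
curl -sI https://wakestack.co.uk | head -n 1
env | grep -i proxy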

Time Synchronization

Metrics with wrong timestamps cause confusion. Ensure NTP is configured:

# Check time sync
timedatectl status
 
# Enable NTP if needed
sudo timedatectl set-ntp true

Summary

Monitoring servers without public IPs requires different approaches than external uptime monitoring.

Best approaches:

  1. Monitoring agent (recommended): Install on each server, sends metrics outbound
  2. Push gateway: For batch jobs and ephemeral workloads
  3. Internal monitoring: Self-hosted Prometheus/Grafana in same network
  4. Bastion proxy: Route external monitoring through public jump host

Key principles:

  • Prefer outbound connections (agents push out)
  • No inbound firewall rules needed with agents
  • Monitor your monitoring infrastructure
  • Ensure time synchronization across servers

Private IPs don't mean invisible servers. With the right approach, you get full visibility without exposing your infrastructure.

About the Author

Wakestack Team

Engineering Team

Frequently Asked Questions

Can external uptime monitoring reach servers with private IPs?

No. External monitoring services can only reach publicly accessible endpoints. Servers with only private IPs (10.x, 172.16-31.x, 192.168.x) need internal monitoring or agent-based solutions.

What's the easiest way to monitor private servers?

Install a monitoring agent on the server. The agent collects metrics locally and sends them outbound to your monitoring service. No public IP or inbound connections required.

Do I need to open firewall ports for server monitoring?

Not with agent-based monitoring. Agents initiate outbound HTTPS connections, which most firewalls allow by default. You don't need to open inbound ports.
