Pro Feature — Available with React on Rails Pro. Free or very low cost for startups and small companies. Get a license →
This guide covers deploying the Node Renderer in containerized environments (Docker, Kubernetes, ControlPlane, etc.), including architecture options, performance tuning, memory management, error tracking, and troubleshooting.
- React on Rails Pro v16.4.0 or later
- Node.js 22+ (LTS recommended)
- Ruby 3.1+ with Bundler
- The `react-on-rails-pro-node-renderer` npm package installed in your project
When running Rails and the Node Renderer in containers, you have three options, listed from simplest to most complex:
| | Single Container | Sidecar Containers | Separate Workloads |
|---|---|---|---|
| Complexity | Lowest | Medium | Highest |
| Scaling | Together | Together | Independent |
| Version alignment | Guaranteed | Guaranteed | Risk of drift |
| Networking | `localhost` | `localhost` | Service DNS |
| Per-process visibility | No | Yes | Yes |
| When to use | Default starting point | Need to diagnose OOM source | Need independent scaling at high replica counts |
Rails and the Node Renderer run together in a single container. This is the simplest setup and the recommended starting point.
```
┌──────────────────────────┐
│        Container         │
│ ┌────────┐  ┌──────────┐ │
│ │ Rails  │  │   Node   │ │
│ │ process│  │ Renderer │ │
│ └────────┘  └──────────┘ │
│   shared OS resources    │
└──────────────────────────┘
```
Both processes share the container's CPU and memory limits (cgroup resources); they do not communicate via shared-memory IPC.
Advantages:
- Simplest setup — One container, one image, one deploy.
- No networking config — Rails connects to the renderer via `localhost` out of the box.
- Guaranteed version alignment — Both processes always run from the same image.
- Lower overhead — Shared OS layer saves ~200–300 MB vs. separate containers.
When to move on: If you're experiencing OOM restarts and need to determine whether Rails or the Node Renderer is the culprit, move to sidecar containers for visibility.
Configuration:
```ruby
# config/initializers/react_on_rails_pro.rb
ReactOnRailsPro.configure do |config|
  config.server_renderer = "NodeRenderer"
  config.renderer_url = "http://localhost:3800"
end
```

Rails and the Node Renderer run as separate containers within the same pod/workload, sharing the same lifecycle. Use this when you need to isolate and diagnose memory/CPU usage per process.
```
┌─────────────────────────────────┐
│          Pod / Workload         │
│ ┌─────────────┐   ┌───────────┐ │
│ │    Rails    │   │   Node    │ │
│ │  Container  │   │  Renderer │ │
│ │   (2 CPU,   │   │  (2 CPU,  │ │
│ │  4 GB RAM)  │   │ 4 GB RAM) │ │
│ └──────┬──────┘   └─────┬─────┘ │
│        │   localhost    │       │
│        └───────┬────────┘       │
└────────────────┴────────────────┘
```
Advantages:
- Per-process visibility — See exactly which process is consuming memory and causing OOM kills.
- Independent resource limits — Set separate CPU/memory limits for Rails and the Node Renderer.
- Guaranteed version alignment — Both containers use the same image on deploy.
- Simpler networking — Rails still connects via `localhost`.
Tradeoffs:
- Slightly more complex deployment config than a single container.
- Autoscaling logic may need adjustment (see Autoscaling Considerations).
Configuration:
```ruby
# config/initializers/react_on_rails_pro.rb
ReactOnRailsPro.configure do |config|
  config.server_renderer = "NodeRenderer"
  config.renderer_url = "http://localhost:3800"
end
```

Rails and the Node Renderer run as independent workloads with their own scaling. This is the most complex option and is rarely needed.
Advantages:
- Independent scaling — Scale the renderer independently of Rails.
- Isolated resource limits — OOM in the renderer doesn't affect Rails and vice versa.
Disadvantages:
- Version drift risk — During rolling deploys, Rails and the Node Renderer may briefly run different versions. While protocol changes are rare, this is a risk to monitor.
- Race conditions — Pods restart independently, which can cause transient connection errors.
- Network dependency — Renderer must be reachable via internal service DNS.
- Overkill for most setups — If you're running 2–4 replicas, independent scaling adds complexity without real benefit.
Configuration:
```ruby
# config/initializers/react_on_rails_pro.rb
ReactOnRailsPro.configure do |config|
  config.server_renderer = "NodeRenderer"
  config.renderer_url = ENV.fetch("RENDERER_URL", "http://node-renderer-service:3800")
end
```

Recommendation: Start with a single container. Move to sidecar containers if you need per-process memory/CPU visibility (e.g., to diagnose OOM restarts). Separate workloads are rarely justified unless you have a specific need for independent scaling at high replica counts.
Why the renderer entry point lives in a dedicated `renderer/` directory: Production Docker builds commonly strip JavaScript sources after the client bundles are built, since the Rails app no longer needs them at runtime. Keeping the renderer entry point in its own top-level directory (separate from `client/`) makes it trivial to exclude from that cleanup — the Node Renderer process still needs its entry file and dependencies at runtime.
A minimal Dockerfile that bundles Rails and the Node Renderer in a single image:
```dockerfile
FROM node:22-slim AS node
FROM ruby:3.3

# Copy Node.js from the official image (avoids curl-pipe-bash)
COPY --from=node /usr/local/bin/node /usr/local/bin/node
COPY --from=node /usr/local/lib/node_modules /usr/local/lib/node_modules
RUN ln -s /usr/local/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm

# jemalloc for Rails memory (adjust path for arm64: aarch64-linux-gnu)
# and curl for h2c health checks (`curl --http2-prior-knowledge`)
RUN apt-get update && apt-get install -y libjemalloc2 curl && rm -rf /var/lib/apt/lists/*
ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
ENV MALLOC_CONF="dirty_decay_ms:1000,muzzy_decay_ms:1000"

WORKDIR /app

# Install Ruby and Node dependencies
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY package.json package-lock.json ./
RUN npm ci

COPY . .

# Precompile assets
RUN bundle exec rake assets:precompile

# Expose Rails and Node Renderer ports
EXPOSE 3000 3800

# Start both processes with a process manager (overmind, foreman, etc.)
# See the Procfile example below
CMD ["overmind", "start", "-f", "Procfile"]
```

For the single-container pattern, use a process manager like overmind or foreman with a Procfile:
```
# Procfile
rails: bundle exec rails server -b 0.0.0.0 -p 3000
renderer: node renderer/node-renderer.js
```
Tip: For sidecar containers, use the same image but override the `CMD` — one container runs `bundle exec rails server`, the other runs `node renderer/node-renderer.js` (or your Node Renderer entry point).
For local development with sidecar-like containers:
```yaml
services:
  rails:
    build: .
    command: bundle exec rails server -b 0.0.0.0 -p 3000
    ports:
      - '3000:3000'
    environment:
      RENDERER_URL: 'http://renderer:3800'
    depends_on:
      renderer:
        condition: service_healthy
  renderer:
    build: .
    command: node renderer/node-renderer.js
    ports:
      - '3800:3800'
    environment:
      RENDERER_HOST: '0.0.0.0'
      NODE_OPTIONS: '--max-old-space-size=512'
    healthcheck:
      test: ['CMD', 'curl', '-sf', '--http2-prior-knowledge', 'http://localhost:3800/info']
      interval: 5s
      timeout: 3s
      retries: 5
      start_period: 10s
```

Note: In Docker Compose, the containers do not share a network namespace (unlike Kubernetes sidecars), so the renderer must bind to `0.0.0.0` and Rails must connect via the service name (`renderer`).
By default, the Node Renderer binds to `localhost`. For sidecar containers in the same Kubernetes pod, that works because the containers share a network namespace. For separate workloads or Docker Compose setups without shared networking, bind to `0.0.0.0`:
```js
// renderer/node-renderer.js
import { reactOnRailsProNodeRenderer } from 'react-on-rails-pro-node-renderer';

const config = {
  host: '0.0.0.0',
  // ... other config
};

reactOnRailsProNodeRenderer(config);
```

Or via environment variable:
```bash
RENDERER_HOST=0.0.0.0
```

Security note: The renderer executes JavaScript via `vm.runInContext()`, making it a remote code execution service. Binding to `0.0.0.0` exposes this to the network. Use it only when the renderer must be reachable across a network namespace (separate workloads, Docker Compose). Always set `RENDERER_PASSWORD` and place the renderer behind private networking when bound to `0.0.0.0`. See Network Security for the full threat model.
The Node Renderer's memory grows over time as it handles SSR requests. This is expected behavior — V8's garbage collector doesn't always return memory to the OS, even after objects are freed. In containerized environments with cgroup memory limits, this can trigger OOM kills.
Typical memory profile:
- Startup: ~500–600 MB for the renderer process with workers
- After hours of traffic: 2–3 GB+ depending on component complexity and traffic volume
- Rails container: Usually stabilizes at 2–4 GB depending on `WEB_CONCURRENCY` and `RAILS_MAX_THREADS`
Use `NODE_OPTIONS` to cap V8's old-generation heap per worker:

```bash
NODE_OPTIONS="--max-old-space-size=512"
```

This tells V8 to trigger garbage collection more aggressively and limits each worker's heap. Adjust the value based on your component complexity:
| Component Complexity | Recommended `max-old-space-size` |
|---|---|
| Simple components | 256–512 MB |
| Medium (Redux, large props) | 512–768 MB |
| Complex (large data, many components) | 768–1024 MB |
Note: This setting applies per worker. Total renderer memory ≈ `max-old-space-size` × `workersCount` + overhead. If V8 hits the limit, the worker process exits and is automatically restarted by the cluster manager.
The default `workersCount` is CPU count minus 1, which may over-allocate in containers. Set it explicitly:

```js
const config = {
  workersCount: parseInt(process.env.RENDERER_WORKERS_COUNT, 10), // Required — derive from the sizing formula below
};
```

Sizing guideline: Match worker count to expected concurrent SSR requests.
A rough formula (assuming each SSR render takes roughly half a full Rails request cycle, so one renderer worker can serve ~2 concurrent threads — adjust the divisor based on your measured render times):

```
renderer_workers ≥ (WEB_CONCURRENCY × RAILS_MAX_THREADS × ssr_request_ratio) / 2
```

Where `ssr_request_ratio` is the fraction of requests that need server rendering (often 30–60% for hybrid apps).

Example: With `WEB_CONCURRENCY=4` and `RAILS_MAX_THREADS=8` (32 total Rails threads), and ~50% of requests needing SSR:

```
renderer_workers ≥ (4 × 8 × 0.5) / 2 = 8 workers
```
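The sizing arithmetic above can be wrapped in a small helper for deploy tooling or CI checks. This is an illustrative sketch, not part of the renderer API — the function name and the `rendersPerWorker` default are assumptions to be tuned:

```javascript
// Hypothetical helper implementing the sizing formula above.
// rendersPerWorker = 2 reflects the assumption that one SSR render takes
// roughly half a Rails request cycle; tune it from measured render times.
function sizeRendererWorkers({ webConcurrency, railsMaxThreads, ssrRequestRatio, rendersPerWorker = 2 }) {
  const totalRailsThreads = webConcurrency * railsMaxThreads;
  return Math.ceil((totalRailsThreads * ssrRequestRatio) / rendersPerWorker);
}

// The worked example from the text: 32 Rails threads, ~50% SSR traffic
console.log(sizeRendererWorkers({ webConcurrency: 4, railsMaxThreads: 8, ssrRequestRatio: 0.5 })); // 8
```

Rounding up with `Math.ceil` errs on the side of spare capacity, which matters more than the extra worker's memory cost when SSR requests start queuing.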
Tip: Monitor CPU utilization per worker. If average CPU per worker is above 80%, you need more workers. If below 20%, you can reduce.
To mitigate memory growth, enable periodic worker restarts:
```js
const config = {
  allWorkersRestartInterval: 360, // Restart all workers every 6 hours
  delayBetweenIndividualWorkerRestarts: 2, // 2 minutes between each worker restart
  gracefulWorkerRestartTimeout: 30, // Kill stuck workers after 30 seconds
};
```

This drains requests from each worker before restarting, so there's no downtime.
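One sanity check worth doing on these values: a full rolling-restart cycle should finish well before the next `allWorkersRestartInterval` fires. A rough estimate, as an illustrative helper (not a renderer API):

```javascript
// Approximate duration of one rolling-restart cycle in minutes: workers
// restart one at a time, spaced by delayBetweenIndividualWorkerRestarts.
function rollingRestartMinutes(workersCount, delayBetweenRestartsMinutes) {
  return workersCount * delayBetweenRestartsMinutes;
}

// 8 workers with a 2-minute spacing: ~16 minutes per cycle, comfortably
// inside the 360-minute allWorkersRestartInterval configured above.
console.log(rollingRestartMinutes(8, 2)); // 16
```

If the cycle length approaches the restart interval, workers spend most of their time draining and restarting; either raise the interval or shrink the per-worker delay.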
Recommended starting points for sidecar configuration:
| Container | CPU Request | CPU Limit | Memory Request | Memory Limit |
|---|---|---|---|---|
| Rails | 1–2 cores | 2–4 cores | 4 GB | 4 GB |
| Node Renderer | 1–2 cores | 2–4 cores | 4 GB | 4 GB |
Important: Set memory requests equal to limits for the renderer container so its memory budget is explicit. Kubernetes QoS is determined at the pod level, so you only get `Guaranteed` QoS when every container in the pod has matching requests and limits. If using `--max-old-space-size`, set the container memory limit to `max-old-space-size × workersCount × 1.5` to account for overhead.
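The limit rule of thumb above translates directly into numbers. A sketch of the calculation (the helper name and the 1.5 default are illustrative, not a renderer API):

```javascript
// Container memory limit ≈ max-old-space-size × workersCount × 1.5,
// where the 1.5 factor covers non-heap overhead (buffers, code, stacks).
function rendererMemoryLimitMiB(maxOldSpaceSizeMiB, workersCount, overheadFactor = 1.5) {
  return Math.ceil(maxOldSpaceSizeMiB * workersCount * overheadFactor);
}

// 512 MiB heap cap × 4 workers × 1.5 → 3072 MiB, i.e. a ~3Gi limit
console.log(rendererMemoryLimitMiB(512, 4)); // 3072
```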
Consider using jemalloc as Ruby's memory allocator to reduce Rails memory fragmentation:
```dockerfile
RUN apt-get install -y libjemalloc2
# Adjust the preload path for your image architecture:
#   amd64: /usr/lib/x86_64-linux-gnu/libjemalloc.so.2
#   arm64: /usr/lib/aarch64-linux-gnu/libjemalloc.so.2
ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
ENV MALLOC_CONF="dirty_decay_ms:1000,muzzy_decay_ms:1000"
```

This won't help the Node Renderer (which uses V8's allocator), but can significantly reduce Rails container memory.
During rolling deploys and scale-down events, containers receive SIGTERM before being killed. Both Rails and the Node Renderer need to handle this gracefully to avoid dropping in-flight requests.
The Node Renderer's cluster manager handles SIGTERM automatically — it stops accepting new connections and waits for in-flight requests to complete before exiting. No additional configuration is needed.
Set `terminationGracePeriodSeconds` in your pod spec to give workers enough time to drain:

```yaml
spec:
  terminationGracePeriodSeconds: 60 # default is 30; increase if SSR requests are slow
```

Rails handles SIGTERM via Puma's built-in graceful shutdown. Ensure `terminationGracePeriodSeconds` exceeds your longest expected request duration.
Tip: If either process doesn't shut down within `terminationGracePeriodSeconds`, Kubernetes sends `SIGKILL`. Monitor for exit code 137 in your logs, which may indicate the grace period is too short rather than an OOM kill.
When using sidecar containers, autoscaling becomes more nuanced because CPU and memory metrics are aggregated across both containers.
If you previously scaled on CPU at 75%, you may need to adjust:
- Rails-heavy traffic (mostly API/JSON responses): Rails CPU spikes, but the renderer is idle. The pod-level CPU average may underreport Rails load.
- SSR-heavy traffic: Both containers are active. Pod-level CPU is more representative.
Recommendation: If your orchestrator supports per-container metrics (e.g., Kubernetes custom metrics), scale based on the Rails container's CPU. Otherwise, lower the threshold (e.g., to 60%) to account for the averaging effect.
With separate workloads, scale each independently:
- Rails: Scale on CPU utilization or request queue depth.
- Node Renderer: Scale on CPU utilization per worker.
During container startup, you may see `ERR_STREAM_PREMATURE_CLOSE` errors from Fastify. This occurs when Rails sends render requests before all Node Renderer workers are ready.
Mitigation:
- Health check endpoint — The Node Renderer exposes a built-in `/info` endpoint that returns the Node version and renderer version. Because the renderer uses cleartext HTTP/2 (h2c), Kubernetes `httpGet` probes (HTTP/1.1) are incompatible with this listener. Use a TCP probe, an `exec` probe (for example with `curl --http2-prior-knowledge`, which requires curl with HTTP/2 support in your container image), or a dedicated HTTP/1.1 sidecar/port for probes. For a custom `/health` route with more granular checks, use the `configureFastify()` option (see JS Configuration: Custom Fastify Configuration). Configure your container orchestrator to wait for the endpoint before routing traffic.

- Startup probe — Configure a startup probe with a generous `initialDelaySeconds`:

  ```yaml
  startupProbe:
    tcpSocket:
      port: 3800
    initialDelaySeconds: 10
    periodSeconds: 5
    failureThreshold: 6
  ```

- Readiness probe — Ensure traffic is only routed to the renderer when it's ready to accept requests. Prefer an `exec` probe with an h2c-aware client for application-level readiness. Use `tcpSocket` only as a minimal fallback that confirms the port is accepting connections:

  ```yaml
  readinessProbe:
    exec:
      command:
        - curl
        - -sf
        - --http2-prior-knowledge
        - http://localhost:3800/info
    timeoutSeconds: 5
    periodSeconds: 5
    failureThreshold: 3
  ```

  Note: The `exec` probe requires curl with HTTP/2 support in your image. Verify with `curl --version | grep HTTP2`. If curl is unavailable, use `tcpSocket` as a fallback.

- Liveness probe — Ensure the renderer is restarted if it becomes unresponsive:

  ```yaml
  livenessProbe:
    tcpSocket:
      port: 3800
    periodSeconds: 10
    failureThreshold: 3
  ```
Distinguish between Rails and Node Renderer OOM kills by checking container-level exit codes:
- Exit code 137: Process received SIGKILL — commonly OOM from a cgroup limit, but can also be forced termination (grace-period expiry, liveness probe failure). Check `kubectl describe pod` for `Reason: OOMKilled` to confirm OOM.
- Exit code 1: Application crash (check logs for stack trace).
With sidecar containers, your orchestrator should report which container was OOM-killed. Use this to tune resource limits for the specific container rather than increasing the entire pod.
See Error Reporting and Tracing for setting up Sentry, Honeybadger, or other error tracking with the Node Renderer.
Set log levels to capture useful information without noise:
```js
const config = {
  logLevel: 'info', // General renderer logs
  logHttpLevel: 'error', // Only log HTTP errors (not every request)
};
```

In production, `logLevel: 'warn'` is sufficient unless actively debugging.
A complete pod spec for the sidecar pattern:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rails-app
  template:
    metadata:
      labels:
        app: rails-app
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: rails
          image: your-app:latest
          command: ['bundle', 'exec', 'rails', 'server', '-b', '0.0.0.0']
          ports:
            - containerPort: 3000
          env:
            - name: RENDERER_URL
              value: 'http://localhost:3800'
            - name: LD_PRELOAD
              value: '/usr/lib/x86_64-linux-gnu/libjemalloc.so.2'
            - name: MALLOC_CONF
              value: 'dirty_decay_ms:1000,muzzy_decay_ms:1000'
          resources:
            requests:
              cpu: '1'
              memory: '4Gi'
            limits:
              cpu: '2'
              memory: '4Gi'
        - name: node-renderer
          image: your-app:latest # Same image as Rails
          command: ['node', 'renderer/node-renderer.js']
          ports:
            - containerPort: 3800
          env:
            - name: RENDERER_HOST
              value: '0.0.0.0' # Bind to all interfaces for pod-network access
            - name: NODE_OPTIONS
              value: '--max-old-space-size=512'
            - name: RENDERER_WORKERS_COUNT
              value: '4'
          resources:
            requests:
              cpu: '1'
              memory: '4Gi'
            limits:
              cpu: '2'
              memory: '4Gi'
          startupProbe:
            tcpSocket:
              port: 3800
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - curl
                - -sf
                - --http2-prior-knowledge
                - http://localhost:3800/info
            timeoutSeconds: 5
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            tcpSocket:
              port: 3800
            periodSeconds: 10
            failureThreshold: 3
```

Note: Both containers use the same Docker image, ensuring the React on Rails gem and Node Renderer package versions are always aligned.
- Check which container is OOM-killed — Use your orchestrator's events/logs to identify if it's Rails or the Node Renderer.
- If Node Renderer — Set `NODE_OPTIONS="--max-old-space-size=512"` and enable rolling restarts with both `allWorkersRestartInterval` and `delayBetweenIndividualWorkerRestarts`.
- If Rails — Consider jemalloc and review `WEB_CONCURRENCY` / `RAILS_MAX_THREADS` settings.
- Monitor over time — Memory growth is gradual. Monitor for 8+ hours to see the steady-state memory usage.
- Verify the renderer container is running and healthy.
- For sidecars: Check that the renderer binds to `localhost` (default) or `0.0.0.0`.
- For separate workloads: Verify DNS resolution and that `config.renderer_url` matches the renderer's service endpoint.
- Check for readiness probe failures that may have removed the renderer from the service.
The React on Rails gem and Node Renderer package have a protocol version. If they mismatch, render requests return an error. To avoid this:
- Sidecars: Both containers use the same image, so versions are always aligned.
- Separate workloads: Deploy both workloads simultaneously. If your orchestrator doesn't support atomic multi-workload deploys, consider switching to sidecars.
- Check worker count — If all workers are busy, requests queue up. Increase `workersCount` or scale replicas.
- Check `max-old-space-size` — If set too low, frequent GC pauses increase latency. Increase the limit.
- Profile components — Use `node --inspect` to profile server-rendering code. See Profiling Server-Side Rendering Code.
In sidecar configurations, Rails may start before the Node Renderer is ready. Configure `renderer_request_retry_limit` in Rails to retry failed connections:
```ruby
ReactOnRailsPro.configure do |config|
  config.renderer_request_retry_limit = 5 # default: 5
end
```

This handles transient startup ordering issues. For a more robust solution, add a startup dependency or init container that waits for the renderer port to accept TCP connections, or queries `/info` with an h2c-aware client (for example, `curl --http2-prior-knowledge`).
When deploying on ControlPlane:
- Port configuration: Use `process.env.PORT` for the renderer port if running as a standalone workload. ControlPlane injects the `PORT` environment variable for the primary container.
- Sidecar setup: Configure separate CPU and memory limits per container in your workload definition. In ControlPlane, sidecar containers are defined alongside the main container in the same workload spec.
- Autoscaling: ControlPlane uses workload-level metrics for autoscaling — see Autoscaling Considerations above. If you need per-container scaling, use separate workloads instead of sidecars.
- Health checks: ControlPlane supports readiness and liveness probes in the same format as Kubernetes. Configure them as shown in the Kubernetes Sidecar Manifest section.