|
1 | | -# JS Memory Leaks |
| 1 | +# Avoiding Memory Leaks in Node Renderer SSR |
2 | 2 |
|
3 | | -## Finding Memory Leaks |
| 3 | +> **Pro Feature** — Available with [React on Rails Pro](https://pro.reactrails.com). |
4 | 4 |
|
5 | | -For memory leaks, see [node-memwatch](https://github.com/marcominetti/node-memwatch). Use the `--inspect` flag to make and compare heap snapshots. |
| 5 | +## Why Memory Leaks Happen in the Node Renderer |
6 | 6 |
|
7 | | -## Causes of Memory Leaks |
| 7 | +The Node Renderer reuses [V8 VM contexts](https://nodejs.org/api/vm.html) across requests for performance. Your server bundle is loaded **once** into a VM context and reused for every SSR request until the worker restarts. |
8 | 8 |
|
9 | | -### Mobx (mobx-react) |
| 9 | +This means **module-level state persists across all requests** for the lifetime of the worker process. Code that works fine in the browser — where each page navigation creates a fresh JavaScript context — can silently leak memory on the server. |
10 | 10 |
|
11 | | -```js |
12 | | -import { useStaticRendering } from "mobx-react"; |
| 11 | +```text |
| 12 | +Browser: page load → JS context created → user navigates → context destroyed ✓ |
| 13 | +Node SSR: worker starts → JS context created → request 1, 2, 3, ... 10,000 → same context ✗ |
| 14 | +``` |
| 15 | + |
| 16 | +> **Migrating from ExecJS?** ExecJS creates a fresh JavaScript context per render, so module-level state is automatically cleared. When you switch to the Node Renderer, code that "worked fine" before may start leaking because the same context is now reused across requests. |
| 17 | +
|
| 18 | +## Common Leak Patterns |
| 19 | + |
| 20 | +### 1. Module-level caches without eviction |
| 21 | + |
| 22 | +Any module-level `Map`, `Set`, plain object, or array used as a cache will grow unboundedly because the module is loaded once and reused across all requests. |
| 23 | + |
| 24 | +**Leaks:** |
| 25 | +```javascript |
| 26 | +// cache lives forever — entries are never removed |
| 27 | +const cache = new Map(); |
| 28 | + |
| 29 | +export function buildSignedUrl(imageUrl, width, height) { |
| 30 | + const key = `${imageUrl}-${width}-${height}`; |
| 31 | + if (cache.has(key)) return cache.get(key); |
| 32 | + |
| 33 | + const result = computeHmacSignature(imageUrl, width, height); |
| 34 | + cache.set(key, result); // grows with every unique input across all requests |
| 35 | + return result; |
| 36 | +} |
| 37 | +``` |
| 38 | + |
| 39 | +**Fix:** Add a max size with LRU eviction, clear the cache periodically, or remove it if the computation is cheap: |
| 40 | +```javascript |
| 41 | +import { LRUCache } from 'lru-cache'; |
| 42 | + |
| 43 | +const cache = new LRUCache({ max: 1000 }); // bounded — evicts oldest entries |
| 44 | +``` |
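If you would rather not add a dependency, a small bounded cache can be hand-rolled. The `BoundedCache` class below is an illustrative sketch, not part of any library — it relies on `Map` preserving insertion order, so the first key is always the least recently used:

```javascript
// Dependency-free bounded cache sketch. Map iterates keys in insertion
// order, so re-inserting on access keeps recently used entries at the end
// and makes the first key the eviction candidate.
class BoundedCache {
  constructor(maxSize = 1000) {
    this.maxSize = maxSize;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    // Re-insert on access so recently used entries survive eviction
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Evict the least recently used entry (the oldest key)
      this.map.delete(this.map.keys().next().value);
    }
  }

  has(key) {
    return this.map.has(key);
  }
}
```

Because the size is bounded, memory use plateaus instead of growing with every unique input across the worker's lifetime.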
| 45 | + |
| 46 | +### 2. Lodash `_.memoize` and similar unbounded memoization |
| 47 | + |
| 48 | +Lodash's `_.memoize` uses an unbounded `Map` internally. At module scope, it accumulates entries across all SSR requests forever. |
| 49 | + |
| 50 | +**Leaks:** |
| 51 | +```javascript |
| 52 | +import _ from 'lodash'; |
| 53 | + |
| 54 | +// Each unique argument adds a permanent entry |
| 55 | +export const formatLocation = _.memoize((city, state) => { |
| 56 | + return `${city}, ${state}`.toLowerCase().replace(/\s+/g, '-'); |
| 57 | +}); |
| 58 | +``` |
| 59 | + |
| 60 | +**Fix:** Use a bounded LRU cache, or avoid memoization at module scope for functions called with diverse inputs during SSR. |
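As a sketch of the bounded option, a drop-in replacement for module-scope memoization might look like this (`memoizeBounded` is a hypothetical helper written for illustration, not a lodash API):

```javascript
// Hypothetical bounded memoization helper (illustrative sketch).
// Keeps at most `maxSize` results; the oldest entry is evicted first,
// using Map's insertion-order iteration.
function memoizeBounded(fn, maxSize = 500) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key);
    const result = fn(...args);
    cache.set(key, result);
    if (cache.size > maxSize) {
      cache.delete(cache.keys().next().value); // evict oldest entry
    }
    return result;
  };
}

// Same function as the leaking example above, now with a ceiling
const formatLocation = memoizeBounded(
  (city, state) => `${city}, ${state}`.toLowerCase().replace(/\s+/g, '-'),
  500
);
```

Repeated calls with the same arguments still hit the cache, but memory use is capped regardless of how many distinct locations SSR sees.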
| 61 | + |
| 62 | +### 3. Module-level Sets or arrays that accumulate |
| 63 | + |
| 64 | +**Leaks:** |
| 65 | +```javascript |
| 66 | +const SENT_EVENTS = new Set(); // grows with every unique event |
| 67 | + |
| 68 | +export function trackEvent(event) { |
| 69 | + if (SENT_EVENTS.has(event.key)) return; |
| 70 | + SENT_EVENTS.add(event.key); // never removed |
| 71 | + sendToAnalytics(event); |
| 72 | +} |
| 73 | +``` |
| 74 | + |
| 75 | +**Fix:** Don't track client-side-only state (like analytics) during SSR. Guard with a server-side check: |
| 76 | +```javascript |
| 77 | +export function trackEvent(event, railsContext) { |
| 78 | + if (railsContext.serverSide) return; // skip during SSR |
| 79 | + // ... client-only tracking |
| 80 | +} |
| 81 | +``` |
| 82 | + |
| 83 | +### 4. Third-party libraries with internal caches |
| 84 | + |
| 85 | +Some libraries maintain internal caches or singletons that grow in SSR: |
| 86 | +- **Styled-components / Emotion**: CSS-in-JS libraries can accumulate style sheets. Use `ServerStyleSheet` (styled-components) or `extractCritical` (Emotion) and reset between renders |
| 87 | +- **Apollo Client**: GraphQL cache grows if not reset between renders |
| 88 | +- **MobX**: Observer components can leak if `useStaticRendering` is not enabled (mobx-react < v7) |
| 89 | +- **Amplitude / analytics SDKs**: Event queues accumulate if initialized during SSR |
| 90 | +- **i18n libraries**: Message catalogs may cache translations |
| 91 | + |
| 92 | +**Fix:** Check if your libraries have SSR-specific configuration. Many provide a `resetServerContext()` or similar function. Initialize analytics and tracking libraries only on the client side. |
| 93 | + |
| 94 | +### 5. Event listeners at module scope |
| 95 | + |
| 96 | +If code registers event listeners at module scope during SSR, they accumulate across requests: |
| 97 | + |
| 98 | +**Leaks:** |
| 99 | +```javascript |
| 100 | +// Every SSR render adds another listener — they're never removed |
| 101 | +process.on('unhandledRejection', (err) => { |
| 102 | + reportError(err); |
| 103 | +}); |
| 104 | +``` |
| 105 | + |
| 106 | +**Fix:** Register listeners once (outside the render path), or guard with a flag: |
| 107 | +```javascript |
| 108 | +let listenerRegistered = false; |
| 109 | +if (!listenerRegistered) { |
| 110 | + process.on('unhandledRejection', (err) => reportError(err)); |
| 111 | + listenerRegistered = true; |
| 112 | +} |
| 113 | +``` |
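If you prefer not to manage a flag yourself, Node's built-in `process.listenerCount()` can serve as the guard instead. A minimal sketch (`reportError` is a stand-in for your error reporting):

```javascript
// Alternative guard using Node's built-in listener count (illustrative).
function reportError(err) {
  console.error('Unhandled rejection during SSR:', err);
}

// Only register if no listener exists yet — safe to evaluate on every render
if (process.listenerCount('unhandledRejection') === 0) {
  process.on('unhandledRejection', (err) => reportError(err));
}
```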
13 | 114 |
|
14 | | -const App = (props, railsContext) => { |
15 | | - const { location, serverSide } = railsContext; |
16 | | - const context = {}; |
| 115 | +## Diagnosing Memory Leaks |
17 | 116 |
|
18 | | - useStaticRendering(true); |
| 117 | +### 1. Monitor worker RSS over time |
| 118 | + |
| 119 | +Watch the worker process memory. If RSS grows monotonically without plateauing, you have a leak: |
| 120 | + |
| 121 | +```bash |
| 122 | +# Check worker memory every 10 seconds |
| 123 | +while true; do |
| 124 | + ps -o rss= -p <worker-pid> | awk '{printf "%.1f MB\n", $1/1024}' |
| 125 | + sleep 10 |
| 126 | +done |
19 | 127 | ``` |
20 | 128 |
|
21 | | -- See details here: [Mobx site](https://github.com/mobxjs/mobx-react#server-side-rendering-with-usestaticrendering)
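You can also have the worker report on itself from inside the process via Node's `process.memoryUsage()`. This is an illustrative sketch — `formatMemoryUsage` and the 10-second interval are assumptions, not renderer configuration:

```javascript
// Illustrative in-process memory reporter using Node's process.memoryUsage().
function formatMemoryUsage() {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  const mb = (bytes) => (bytes / 1024 / 1024).toFixed(1);
  return `rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB heapTotal=${mb(heapTotal)}MB`;
}

// Log every 10 seconds; unref() so the timer doesn't keep the process alive
setInterval(() => console.log(formatMemoryUsage()), 10_000).unref();
```

Watching `heapUsed` alongside `rss` helps distinguish JS heap growth (objects your code retains) from native memory growth (buffers, addons).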
| 129 | +### 2. Take V8 heap snapshots |
| 130 | + |
| 131 | +Use `v8.writeHeapSnapshot()` to capture heap state before and after load, then compare in Chrome DevTools: |
| 132 | + |
| 133 | +```javascript |
| 134 | +// In your renderer config, add a way to trigger snapshots: |
| 135 | +const v8 = require('v8'); |
| 136 | + |
| 137 | +process.on('SIGUSR2', () => { |
| 138 | +  if (global.gc) global.gc(); // force GC first (requires starting Node with --expose-gc)
| 139 | + const filename = v8.writeHeapSnapshot(); |
| 140 | +  console.log(`Heap snapshot written to ${filename}`);
| 141 | +}); |
| 142 | +``` |
| 143 | + |
| 144 | +Then send `kill -USR2 <worker-pid>` at different times and compare the snapshots in Chrome DevTools (Memory tab → Load). |
| 145 | + |
| 146 | +### 3. Use `--inspect` for live profiling |
| 147 | + |
| 148 | +Start the renderer with the `--inspect` flag to connect Chrome DevTools: |
| 149 | + |
| 150 | +```bash |
| 151 | +node --inspect node-renderer.js |
| 152 | +``` |
| 153 | + |
| 154 | +Open `chrome://inspect` in Chrome, take heap snapshots, and use the "Comparison" view to see what objects accumulated between snapshots. |
| 155 | + |
| 156 | +## Mitigations |
| 157 | + |
| 158 | +### Set `--max-old-space-size` |
| 159 | + |
| 160 | +Without this flag, V8 sizes its default heap ceiling from the machine's total memory, which in a container can far exceed the container's actual limit. An oversized ceiling lets V8 defer garbage collection, amplifying any existing leaks.
| 161 | + |
| 162 | +**Always set this for production:** |
| 163 | +```bash |
| 164 | +NODE_OPTIONS=--max-old-space-size=1536 node node-renderer.js |
| 165 | +``` |
| 166 | + |
| 167 | +Size it based on your container memory and worker count. For example, with 4GB container memory and 3 workers: `4096 / 3 ≈ 1365`, round to `1400`. |
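That arithmetic can be captured in a small helper. `maxOldSpacePerWorker` is a hypothetical convenience written for illustration, not part of the renderer — the optional headroom parameter reserves container memory for non-heap usage (native buffers, thread stacks, the OS):

```javascript
// Hypothetical sizing helper (a sketch, not renderer API).
// Divides container memory, minus reserved headroom, across workers.
function maxOldSpacePerWorker(containerMB, workerCount, headroomMB = 0) {
  return Math.floor((containerMB - headroomMB) / workerCount);
}

maxOldSpacePerWorker(4096, 3); // 1365, matching the example above
```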
| 168 | + |
| 169 | +### Enable worker rolling restarts |
| 170 | + |
| 171 | +Rolling restarts are the primary safety net against memory leaks. They periodically kill and restart workers, reclaiming all accumulated memory: |
| 172 | + |
| 173 | +```javascript |
| 174 | +const config = { |
| 175 | + // Restart all workers every 45 minutes |
| 176 | + allWorkersRestartInterval: 45, |
| 177 | + // Stagger individual restarts by 6 minutes to avoid downtime |
| 178 | + delayBetweenIndividualWorkerRestarts: 6, |
| 179 | + // Force-kill workers that don't restart within 30 seconds |
| 180 | + gracefulWorkerRestartTimeout: 30, |
| 181 | +}; |
| 182 | +``` |
| 183 | + |
| 184 | +**Important:** Both `allWorkersRestartInterval` and `delayBetweenIndividualWorkerRestarts` must be set for restarts to be enabled. See [JS Configuration](../oss/building-features/node-renderer/js-configuration.md) for details. |
| 185 | + |
| 186 | +### Size restart intervals for your traffic |
| 187 | + |
| 188 | +The restart interval should be short enough that leaked memory doesn't fill the container: |
| 189 | + |
| 190 | +- **Low traffic / small bundles**: 60–120 minutes may be fine |
| 191 | +- **High traffic / large bundles**: 15–30 minutes |
| 192 | +- **If you're seeing OOMs**: reduce the interval until stable, then investigate the root cause |
| 193 | + |
| 194 | +## The Browser vs. Server Mental Model |
| 195 | + |
| 196 | +When writing code that runs during SSR, always ask: **"If this module-level variable is never reset, will it grow with each request?"** |
| 197 | + |
| 198 | +| Pattern | Browser | Node Renderer | |
| 199 | +|---------|---------|---------------| |
| 200 | +| `const cache = {}` at module scope | Cleared on navigation | Persists forever | |
| 201 | +| `new Set()` at module scope | Cleared on navigation | Persists forever | |
| 202 | +| `_.memoize(fn)` at module scope | Cleared on navigation | Persists forever | |
| 203 | +| React component state (`useState`) | Per-component lifecycle | Created and collected per render (OK) | |
| 204 | +| `useEffect` callbacks | Runs on client | Skipped during SSR (OK) | |
| 205 | +| `useMemo` inside components | Per-component lifecycle | Runs during SSR but result is per-render (OK) | |
| 206 | + |
| 207 | +The rule of thumb: **module-level mutable state is the danger zone.** React component-level state and hooks are fine because React creates and discards them per render. |
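To make the rule concrete, here is a toy simulation of several SSR requests hitting the same module scope (the render function and its inputs are made up for illustration):

```javascript
// Module scope — created once per worker, shared by every request
const seenUsers = new Set();

function renderPage(userId) {
  seenUsers.add(userId);            // persists after this render returns — leaks
  const perRenderCache = new Map(); // function scope — collected after render — OK
  perRenderCache.set(userId, `<html>user ${userId}</html>`);
  return perRenderCache.get(userId);
}

// Simulate three requests against the same context
['u1', 'u2', 'u3'].forEach((userId) => renderPage(userId));
console.log(seenUsers.size); // 3 — grows with every unique request, never shrinks
```

Each `perRenderCache` becomes garbage as soon as `renderPage` returns, but `seenUsers` survives every request for the life of the worker.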
| 208 | + |
| 209 | +## Audit Checklist |
| 210 | + |
| 211 | +Use this to scan your server bundle code for potential leaks: |
| 212 | + |
| 213 | +- [ ] Search for module-level `new Map()`, `new Set()`, `const cache = {}`, `[]` — are any of these unbounded? |
| 214 | +- [ ] Search for `_.memoize` or `memoize(` at module scope — are they called with diverse SSR inputs? |
| 215 | +- [ ] Search for `setInterval` without corresponding `clearInterval` — timers leak if not cleaned up (only relevant when `stubTimers: false`) |
| 216 | +- [ ] Search for `process.on(` or `.addEventListener(` at module scope — listeners accumulate if added per render |
| 217 | +- [ ] Check third-party libraries for SSR cleanup functions (`resetServerContext`, `useStaticRendering`, etc.) |
| 218 | +- [ ] Verify `NODE_OPTIONS=--max-old-space-size=<MB>` is set in production |
| 219 | +- [ ] Verify `allWorkersRestartInterval` and `delayBetweenIndividualWorkerRestarts` are both configured |