
Commit d8cc486

justin808 and claude committed
fix: address PR C review feedback for rolling_deploy_adapter
MUST-FIX (2):

- **Nil adapter + PREVIOUS_BUNDLE_HASHES bug** (greptile P1, claude): when the env override was set without config.rolling_deploy_adapter, the stager proceeded past the early-return guard and crashed with NoMethodError at adapter.fetch. Now warns and returns instead. Doctor promotes the corresponding check from :info to :warning and explains that both env and adapter are required.
- **RSC previous-bundle staging path** (codex P2): the renderer routes RSC requests by rsc_bundle_hash, which differs from server_bundle_hash and is a separate `<cache>/<hash>/<hash>.js` entry. The original protocol merged both bundles under a single hash's directory using basenames, so the RSC bundle never landed where the renderer looks. Fix: protocol refactor — each hash is now one bundle's cache entry:
  - `fetch(hash)` → `{ bundle: path, assets: [paths] }`
  - `upload(hash, bundle:, assets:)`
  - `previous_bundle_hashes` → flat list including server AND rsc hashes

  AssetsPrecompile.publish_current_bundle_if_configured now calls upload twice when RSC is enabled (once per hash). The stager stages each hash at `<cache>/<hash>/<hash>.js` with its companion assets.

SHOULD-FIX (4):

- **Timeout on adapter.fetch** (claude): wrap with Timeout.timeout (default 30s) so a hung external store can't block pre-seeding or assets:precompile indefinitely.
- **Relative symlinks** (greptile P2): stage_file in :symlink mode now uses the same Pathname#realpath + relative_path_from pattern as PreSeedRendererCache.make_relative_symlink, matching current-hash staging and surviving cache-dir moves.
- **Remove stray module_function** (claude): was paired with private_class_method (an unusual combination that leaks private instance methods to any includer). Switched to `def self.` for all methods, with private_class_method for helpers.
- **respond_to? instead of methods.include?** (claude): more idiomatic duck typing in both Configuration.validate_rolling_deploy_adapter and Doctor.report_adapter_protocol.
DOC POLISH (3):

- Control Plane reference impl: switched from backtick/system shell interpolation to Open3.capture2e with array-form args to avoid shell injection via env-var contents. Returns both server AND RSC hashes via previous_bundle_hashes.
- Both S3 and Control Plane examples: lazy env-var accessor methods instead of constants evaluated at require time, so KeyError can't fire at class load in environments that don't configure the adapter.
- S3 reference impl: explicit note about the read-modify-write race in update_manifest!; documents serializing deploys or using If-Match/ETag as strict-safety alternatives.

Also updated protocol docstrings + all reference impls (S3, Control Plane, Filesystem) to the new fetch/upload signatures.

Tests: 13 rolling-deploy specs now (was 7) — adds nil-adapter+env warning, fetch timeout, and relative-symlink assertions. Doctor spec updated for the warn-vs-info change. 38 Pro specs pass; 167/168 doctor specs pass (1 pre-existing unrelated failure).

Refs: #3122, #3167 (PR B), #3173 (this PR)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
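The Timeout.timeout wrapper described in the SHOULD-FIX list can be sketched roughly as follows. The module name, warning text, and calling convention here are illustrative stand-ins, not the gem's actual internals; only the Timeout.timeout-around-fetch pattern reflects the commit.

```ruby
require "timeout"

# Illustrative sketch: wrap an adapter's fetch in Timeout.timeout so a hung
# external store cannot block pre-seeding or assets:precompile indefinitely.
# A timed-out hash is skipped (nil), mirroring the "warn and continue" policy.
module FetchWithTimeout
  DEFAULT_SECONDS = 30

  def self.call(adapter, hash, seconds: DEFAULT_SECONDS)
    Timeout.timeout(seconds) { adapter.fetch(hash) }
  rescue Timeout::Error
    warn "fetch(#{hash}) exceeded #{seconds}s; skipping this hash"
    nil
  end
end
```

A slow fetch simply yields nil, so the caller treats it exactly like an unavailable bundle.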
1 parent 77155df commit d8cc486

7 files changed

Lines changed: 209 additions & 108 deletions


docs/pro/rolling-deploy-adapters.md

Lines changed: 79 additions & 40 deletions
````diff
@@ -28,33 +28,48 @@ If the renderer handles a request for bundle `abc` but reads the **new** build's
 
 ## Protocol
 
+Each **bundle hash** is a single cache entry. A deploy that has both a server bundle and an RSC bundle contributes **two** hashes — `server_bundle_hash` and `rsc_bundle_hash`. The protocol is opaque about which kind of bundle a hash represents; the adapter just stores and retrieves files keyed by hash.
+
 Your adapter must define three class methods:
 
 ```ruby
 module MyRollingDeployAdapter
   # Discovery. Called during pre-seeding to determine which historical
   # hashes to fetch. Typically hits the running deployment's /_health
   # endpoint or reads a manifest file in the artifact store.
+  #
+  # When RSC is enabled, this should return BOTH the server and RSC
+  # hashes for each deploy you want to seed — the stager treats every
+  # hash as an independent cache entry at <cache>/<hash>/<hash>.js.
+  #
   # @return [Array<String>] ordered list of recent bundle hashes.
   # @return [] to disable previous-bundle seeding on this build.
   def self.previous_bundle_hashes
     # ...
   end
 
-  # Retrieval. Given a bundle hash, fetch the bundle + its companion
-  # assets to local disk and return their paths.
-  # @return [Hash, nil] Hash with :server_bundle (required), :rsc_bundle
-  # (optional), :assets (Array<String>). nil if unavailable — pre-seeding
-  # logs a warning and continues.
+  # Retrieval. Given a bundle hash, fetch the bundle file + its
+  # companion assets to local disk and return their paths.
+  #
+  # @return [Hash, nil] Hash with keys :bundle (String path to the
+  #   bundle file, required) and :assets (Array<String> of companion
+  #   asset paths like loadable-stats.json / RSC manifests). nil if
+  #   the bundle is unavailable — pre-seeding logs a warning and
+  #   continues.
+  #
+  # Fetch is wrapped in Timeout.timeout(30s) to protect pre-seeding
+  # and assets:precompile from hanging on slow external stores.
   def self.fetch(bundle_hash)
     # ...
   end
 
   # Publication. Called automatically after assets:precompile in
   # production-like environments when the adapter is configured.
-  # Uploads the current build's bundle + assets keyed by hash so
-  # future deploys can retrieve them. Errors are warned, not raised.
-  def self.upload(bundle_hash, server_bundle:, rsc_bundle: nil, assets:)
+  # Uploads one bundle + its companion assets keyed by that bundle's
+  # hash. When RSC is enabled, upload is called twice per deploy:
+  # once with server_bundle_hash, once with rsc_bundle_hash.
+  # Errors are warned per-hash, not raised.
+  def self.upload(bundle_hash, bundle:, assets:)
     # ...
   end
 end
```
````
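A minimal in-memory adapter conforming to the three-method protocol above might look like this. It is illustrative only (useful as a spec double); a real adapter should persist to durable storage.

```ruby
# Minimal in-memory adapter satisfying the three-method rolling-deploy
# protocol: every hash is one bundle's cache entry, fetch returns
# { bundle:, assets: } or nil, upload stores one bundle per hash.
class InMemoryRollingDeployAdapter
  @store = {}

  class << self
    def previous_bundle_hashes
      @store.keys
    end

    def fetch(bundle_hash)
      @store[bundle_hash] # nil when unknown, per the protocol
    end

    def upload(bundle_hash, bundle:, assets:)
      @store[bundle_hash] = { bundle: bundle, assets: assets }
    end
  end
end
```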
````diff
@@ -73,7 +88,7 @@ For CI and testing, set `PREVIOUS_BUNDLE_HASHES` as a comma-separated list to sk
 PREVIOUS_BUNDLE_HASHES=abc123,def456 rake react_on_rails_pro:pre_seed_renderer_cache
 ```
 
-This runs the adapter's `fetch(hash)` for each listed hash but skips discovery.
+This runs the adapter's `fetch(hash)` for each listed hash but skips discovery. The adapter is still required to fetch the actual bundle files; setting the env var without configuring `config.rolling_deploy_adapter` produces a warning and skips seeding.
 
 ## Edge cases and error handling
 
````
````diff
@@ -93,21 +108,30 @@ These are copy-pasteable starting points. Adapt to your infrastructure.
 
 ### S3
 
-Publish bundles + companion assets under `s3://<bucket>/bundles/<hash>/`. A manifest file at `bundles/_manifest.json` tracks the rolling list of recent hashes.
+Stores each bundle + its companion assets under `s3://<bucket>/bundles/<hash>/bundle.js` + `s3://<bucket>/bundles/<hash>/<asset>`. A manifest file at `bundles/_manifest.json` tracks the rolling list of recent hashes.
+
+> [!NOTE]
+> The manifest update is a read-modify-write cycle with no native concurrency guard. Concurrent deploys can lose entries (last writer wins). For strict safety, use S3 conditional writes (`If-Match` with ETag) or a small coordination layer (e.g., a deploy-level mutex, or a database row with optimistic locking). The pattern below is intentionally simple and sufficient when deploys are serialized.
 
 ```ruby
 require "aws-sdk-s3"
 require "fileutils"
 require "json"
 
 class S3RollingDeployAdapter
-  BUCKET = ENV.fetch("ROLLING_DEPLOY_BUCKET")
+  # Lazy accessors — env vars are read when first used, not at require time.
+  # This avoids KeyError at class-load when the bucket isn't configured
+  # in dev/test/CI environments that don't use the adapter.
+  def self.bucket
+    ENV.fetch("ROLLING_DEPLOY_BUCKET")
+  end
+
   PREFIX = "bundles"
   MANIFEST_KEY = "#{PREFIX}/_manifest.json".freeze
-  RETENTION = 3
+  RETENTION = 6 # keep last ~3 deploys' worth (2 hashes per deploy when RSC is enabled)
 
   def self.previous_bundle_hashes
-    resp = s3.get_object(bucket: BUCKET, key: MANIFEST_KEY)
+    resp = s3.get_object(bucket: bucket, key: MANIFEST_KEY)
     JSON.parse(resp.body.read).fetch("hashes", []).last(RETENTION)
   rescue Aws::S3::Errors::NoSuchKey
     []
@@ -117,8 +141,7 @@ class S3RollingDeployAdapter
     dir = Rails.root.join("tmp/rolling-deploy", hash)
     FileUtils.mkdir_p(dir)
     {
-      server_bundle: download_to(dir, "server-bundle.js", hash),
-      rsc_bundle: download_optional(dir, "rsc-bundle.js", hash),
+      bundle: download_to(dir, "bundle.js", hash),
       assets: %w[loadable-stats.json react-client-manifest.json react-server-client-manifest.json]
         .map { |name| download_optional(dir, name, hash) }
         .compact
@@ -127,9 +150,8 @@ class S3RollingDeployAdapter
     nil
   end
 
-  def self.upload(hash, server_bundle:, rsc_bundle: nil, assets:)
-    put("#{PREFIX}/#{hash}/server-bundle.js", server_bundle)
-    put("#{PREFIX}/#{hash}/rsc-bundle.js", rsc_bundle) if rsc_bundle
+  def self.upload(hash, bundle:, assets:)
+    put("#{PREFIX}/#{hash}/bundle.js", bundle)
     assets.each { |path| put("#{PREFIX}/#{hash}/#{File.basename(path)}", path) }
     update_manifest!(hash)
   end
@@ -142,7 +164,7 @@ class S3RollingDeployAdapter
 
   def self.download_to(dir, name, hash)
     path = dir.join(name).to_s
-    s3.get_object(bucket: BUCKET, key: "#{PREFIX}/#{hash}/#{name}", response_target: path)
+    s3.get_object(bucket: bucket, key: "#{PREFIX}/#{hash}/#{name}", response_target: path)
     path
   end
 
@@ -153,16 +175,16 @@ class S3RollingDeployAdapter
   end
 
   def self.put(key, path)
-    File.open(path, "rb") { |body| s3.put_object(bucket: BUCKET, key: key, body: body) }
+    File.open(path, "rb") { |body| s3.put_object(bucket: bucket, key: key, body: body) }
   end
 
   def self.update_manifest!(hash)
     hashes = previous_bundle_hashes
     hashes << hash unless hashes.include?(hash)
     s3.put_object(
-      bucket: BUCKET,
+      bucket: bucket,
       key: MANIFEST_KEY,
-      body: JSON.generate(hashes: hashes.last(RETENTION + 1))
+      body: JSON.generate(hashes: hashes.last(RETENTION + 2))
     )
   end
 end
````
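The read-modify-write race called out in the S3 note can be closed with optimistic concurrency. Here is a storage-agnostic sketch: the `VersionedStore` class is invented to stand in for S3's ETag semantics (GetObject returns an ETag; a conditional PutObject succeeds only when the ETag still matches).

```ruby
# In-memory versioned store standing in for S3 (ETag ≈ version number).
class VersionedStore
  def initialize
    @data = {}
    @versions = Hash.new(0)
  end

  def read(key)
    [@data[key], @versions[key]]
  end

  # Compare-and-swap write: succeeds only if no one wrote since our read.
  def write_if(key, value, expected_version:)
    return false unless @versions[key] == expected_version

    @data[key] = value
    @versions[key] += 1
    true
  end
end

# Retry loop closing the read-modify-write race: re-read and retry on conflict.
def update_with_cas(store, key, retries: 5)
  retries.times do
    value, version = store.read(key)
    new_value = yield(value)
    return new_value if store.write_if(key, new_value, expected_version: version)
  end
  raise "CAS conflict persisted after #{retries} attempts"
end
```

With this shape, a concurrent deploy's manifest write forces a re-read instead of silently dropping the other deploy's hash.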
````diff
@@ -172,30 +194,49 @@ end
 
 Uses `cpln` CLI to pull the previous deployment's image layer and extract cache contents. `upload` is a no-op — the image itself is the artifact.
 
+The deploy pipeline is expected to set two env vars on the running workload: `REACT_ON_RAILS_BUNDLE_HASH` (server bundle hash) and, when RSC is enabled, `REACT_ON_RAILS_RSC_BUNDLE_HASH`. Both are returned from `previous_bundle_hashes` so the stager can seed each independently.
+
+Uses `Open3.capture2e` with array-form arguments rather than shell interpolation to avoid injection via env-var contents.
+
 ```ruby
+require "json"
+require "open3"
+require "fileutils"
+
 class ControlPlaneRollingDeployAdapter
-  GVC = ENV.fetch("CPLN_GVC")
-  WORKLOAD = ENV.fetch("CPLN_RAILS_WORKLOAD")
+  # Lazy accessors — see S3 adapter note on KeyError at require time.
+  def self.gvc
+    ENV.fetch("CPLN_GVC")
+  end
+
+  def self.workload
+    ENV.fetch("CPLN_RAILS_WORKLOAD")
+  end
 
   def self.previous_bundle_hashes
-    output = `cpln workload get #{WORKLOAD} --gvc #{GVC} -o json`
+    output, status = Open3.capture2e("cpln", "workload", "get", workload, "--gvc", gvc, "-o", "json")
+    return [] unless status.success?
+
     env = JSON.parse(output).dig("spec", "containers", 0, "env") || []
-    hash = env.find { |e| e["name"] == "REACT_ON_RAILS_BUNDLE_HASH" }&.dig("value")
-    hash ? [hash] : []
+    %w[REACT_ON_RAILS_BUNDLE_HASH REACT_ON_RAILS_RSC_BUNDLE_HASH]
+      .map { |name| env.find { |e| e["name"] == name }&.dig("value") }
+      .compact
   end
 
   def self.fetch(hash)
-    image = "#{GVC}/app-#{hash}"
-    tmp = Rails.root.join("tmp/rolling-deploy", hash)
+    image = "#{gvc}/app-#{hash}"
+    tmp = Rails.root.join("tmp/rolling-deploy", hash).to_s
     FileUtils.mkdir_p(tmp)
-    # Extract cache dir contents from the previous image layer:
-    system("cpln image pull #{image} --output #{tmp}") or return nil
-    { server_bundle: tmp.join("server-bundle.js").to_s,
-      rsc_bundle: File.exist?(tmp.join("rsc-bundle.js")) ? tmp.join("rsc-bundle.js").to_s : nil,
-      assets: Dir[tmp.join("*.json")] }
+    _out, status = Open3.capture2e("cpln", "image", "pull", image, "--output", tmp)
+    return nil unless status.success?
+
+    bundle = Dir[File.join(tmp, "*.js")].first
+    return nil unless bundle
+
+    { bundle: bundle, assets: Dir[File.join(tmp, "*.json")] }
   end
 
-  def self.upload(_hash, server_bundle:, rsc_bundle: nil, assets:)
+  def self.upload(_hash, bundle:, assets:)
     # No-op: the Docker image IS the artifact. The next build pulls
     # via `cpln image pull`.
   end
````
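The array-form `Open3.capture2e` call is what removes the injection risk: each argument reaches the child process verbatim, with no shell to interpret metacharacters. A quick demonstration using `echo` in place of `cpln` (the hostile value is made up for illustration):

```ruby
require "open3"

# With array-form arguments no shell parses the command line, so shell
# metacharacters inside a value stay inert literal text.
hostile = "value; echo INJECTED && rm -rf /tmp/nope"
out, status = Open3.capture2e("echo", hostile)
# `echo` prints the argument back verbatim; nothing after the ";" ran.
```

Contrast with the old backtick form, where the same value interpolated into a string would have been parsed and executed by the shell.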
````diff
@@ -226,16 +267,14 @@ class FilesystemRollingDeployAdapter
     dir = root.join(hash)
     return nil unless dir.directory?
 
-    { server_bundle: dir.join("server-bundle.js").to_s,
-      rsc_bundle: (dir.join("rsc-bundle.js").to_s if dir.join("rsc-bundle.js").exist?),
+    { bundle: dir.join("bundle.js").to_s,
       assets: Dir[dir.join("*.json")] }
   end
 
-  def self.upload(hash, server_bundle:, rsc_bundle: nil, assets:)
+  def self.upload(hash, bundle:, assets:)
     dir = root.join(hash)
     FileUtils.mkdir_p(dir)
-    FileUtils.cp(server_bundle, dir.join("server-bundle.js"))
-    FileUtils.cp(rsc_bundle, dir.join("rsc-bundle.js")) if rsc_bundle
+    FileUtils.cp(bundle, dir.join("bundle.js"))
     assets.each { |p| FileUtils.cp(p, dir.join(File.basename(p))) }
     hashes = (previous_bundle_hashes + [hash]).uniq
     root.join("_manifest.json").write(JSON.generate(hashes: hashes))
````
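The relative-symlink pattern the commit message attributes to the stager (`Pathname#realpath` + `relative_path_from`) can be sketched as below. `make_relative_symlink` here is a stand-in with the same shape as the pattern described, not the gem's actual method.

```ruby
require "fileutils"
require "pathname"

# Sketch of relative-symlink staging: the link target is stored relative to
# the link's own directory, so moving the whole cache tree as a unit keeps
# every link valid (an absolute target would break on a move).
def make_relative_symlink(target, link)
  link = Pathname.new(link)
  FileUtils.mkdir_p(link.dirname)
  rel = Pathname.new(target).realpath.relative_path_from(link.dirname.realpath)
  FileUtils.ln_s(rel.to_s, link.to_s, force: true)
end
```

Staging a fetched bundle at `<cache>/<hash>/<hash>.js` then reduces to one call with the fetched file as the target.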

react_on_rails/lib/react_on_rails/doctor.rb

Lines changed: 6 additions & 4 deletions
```diff
@@ -2835,9 +2835,11 @@ def check_rolling_deploy_adapter
   if adapter.nil?
     env_override = ENV.fetch("PREVIOUS_BUNDLE_HASHES", nil)
     if env_override && !env_override.empty?
-      checker.add_info(
-        "ℹ️ PREVIOUS_BUNDLE_HASHES=#{env_override.inspect} set but no rolling_deploy_adapter " \
-        "configured — env override will be used."
+      checker.add_warning(
+        "⚠️ PREVIOUS_BUNDLE_HASHES=#{env_override.inspect} is set but no rolling_deploy_adapter " \
+        "is configured. Rolling-deploy seeding needs both — the env var overrides *discovery* " \
+        "but the adapter is still required to fetch bundle files. " \
+        "Set config.rolling_deploy_adapter or unset PREVIOUS_BUNDLE_HASHES."
       )
     else
       checker.add_info("ℹ️ No rolling_deploy_adapter configured (rolling-deploy seeding disabled).")
@@ -2853,7 +2855,7 @@ def check_rolling_deploy_adapter
 end
 
 def report_adapter_protocol(adapter)
-  missing = ROLLING_DEPLOY_REQUIRED_METHODS.reject { |m| adapter.methods.include?(m) }
+  missing = ROLLING_DEPLOY_REQUIRED_METHODS.reject { |m| adapter.respond_to?(m) }
   if missing.empty?
     checker.add_success(
       "✅ rolling_deploy_adapter responds to all required methods " \
```

react_on_rails/spec/lib/react_on_rails/doctor_spec.rb

Lines changed: 3 additions & 3 deletions
```diff
@@ -2504,10 +2504,10 @@ class << self
 
   before { ENV["PREVIOUS_BUNDLE_HASHES"] = "abc,def" }
 
-  it "surfaces the env override" do
+  it "warns that both are required" do
     doctor.send(:check_rolling_deploy_adapter)
-    info = checker.messages.select { |m| m[:type] == :info }
-    expect(info.any? do |m|
+    warnings = checker.messages.select { |m| m[:type] == :warning }
+    expect(warnings.any? do |m|
       m[:content].include?("PREVIOUS_BUNDLE_HASHES") && m[:content].include?("abc,def")
     end).to be(true)
   end
```

react_on_rails_pro/lib/react_on_rails_pro/assets_precompile.rb

Lines changed: 22 additions & 18 deletions
```diff
@@ -77,34 +77,38 @@ def self.call
     publish_current_bundle_if_configured
   end
 
-  # Best-effort publication of the just-built bundle + assets to the configured
-  # rolling_deploy_adapter so that the *next* deploy can fetch this hash as a
-  # "previous" bundle. Runs only in production-like environments. Errors are
-  # warned, not raised, because a failed upload degrades the next deploy's
-  # rolling-deploy seeding — not this deploy's correctness.
+  # Best-effort publication of the just-built bundles + assets to the configured
+  # rolling_deploy_adapter so that the *next* deploy can fetch these hashes as
+  # "previous" bundles. Runs only in production-like environments. Errors are
+  # warned per-hash, not raised, because a failed upload degrades the next
+  # deploy's rolling-deploy seeding — not this deploy's correctness.
+  #
+  # Protocol: each hash is one bundle's cache entry — when RSC is enabled,
+  # upload is called once for the server bundle (under server_bundle_hash)
+  # and once for the RSC bundle (under rsc_bundle_hash).
   def self.publish_current_bundle_if_configured
     adapter = ReactOnRailsPro.configuration.rolling_deploy_adapter
     return if adapter.nil?
     return if Rails.env.development? || Rails.env.test?
 
     pool = ReactOnRailsPro::ServerRenderingPool::NodeRenderingPool
+    assets = ReactOnRailsPro::RendererCacheHelpers.collect_assets.map(&:to_s)
+
     server_bundle = ReactOnRails::Utils.server_bundle_js_file_path
-    return unless File.exist?(server_bundle)
+    upload_bundle(adapter, pool.server_bundle_hash, server_bundle, assets) if File.exist?(server_bundle)
 
-    hash = pool.server_bundle_hash
-    rsc_bundle = (ReactOnRailsPro::Utils.rsc_bundle_js_file_path if ReactOnRailsPro.configuration.enable_rsc_support)
-    assets = ReactOnRailsPro::RendererCacheHelpers.collect_assets.map(&:to_s)
+    return unless ReactOnRailsPro.configuration.enable_rsc_support
+
+    rsc_bundle = ReactOnRailsPro::Utils.rsc_bundle_js_file_path
+    upload_bundle(adapter, pool.rsc_bundle_hash, rsc_bundle, assets) if File.exist?(rsc_bundle)
+  end
 
-    adapter.upload(
-      hash,
-      server_bundle: server_bundle,
-      rsc_bundle: rsc_bundle,
-      assets: assets
-    )
-    puts "[ReactOnRailsPro] Published current bundle hash #{hash} via rolling_deploy_adapter"
+  def self.upload_bundle(adapter, hash, bundle, assets)
+    adapter.upload(hash, bundle: bundle, assets: assets)
+    puts "[ReactOnRailsPro] Published bundle hash #{hash} via rolling_deploy_adapter"
   rescue StandardError => e
-    warn "[ReactOnRailsPro] rolling_deploy_adapter#upload raised #{e.class}: #{e.message}. " \
-         "Next deploy's rolling-deploy seeding may degrade until the next successful upload."
+    warn "[ReactOnRailsPro] rolling_deploy_adapter#upload for #{hash} raised #{e.class}: " \
+         "#{e.message}. Next deploy's rolling-deploy seeding for this hash may degrade."
   end
 
   def build_or_fetch_bundles
```
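The refactored flow above reduces to "one upload per hash". A stand-alone sketch of that shape (the `publish_bundles` helper and the recording adapter below are invented for illustration, not the gem's code):

```ruby
# Sketch of the "one upload per hash" publication flow: the server bundle is
# published under its own hash, and the RSC bundle (when present) under its.
def publish_bundles(adapter, server_hash:, server_bundle:, assets:, rsc_hash: nil, rsc_bundle: nil)
  adapter.upload(server_hash, bundle: server_bundle, assets: assets)
  return unless rsc_hash && rsc_bundle

  adapter.upload(rsc_hash, bundle: rsc_bundle, assets: assets)
end
```

Because each call is independent, a failed RSC upload degrades only that hash's seeding on the next deploy.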

react_on_rails_pro/lib/react_on_rails_pro/configuration.rb

Lines changed: 1 addition & 1 deletion
```diff
@@ -263,7 +263,7 @@ def validate_rolling_deploy_adapter
   end
 
   %i[previous_bundle_hashes fetch upload].each do |method_name|
-    next if rolling_deploy_adapter.methods.include?(method_name)
+    next if rolling_deploy_adapter.respond_to?(method_name)
 
     raise ReactOnRailsPro::Error,
           "config.rolling_deploy_adapter must define class method ##{method_name}. " \
```
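The `respond_to?` change matters for duck typing because `methods.include?` only sees concretely defined methods, while `respond_to?` also consults `respond_to_missing?`. A self-contained illustration (the `DynamicAdapter` class is hypothetical):

```ruby
# An adapter that serves :fetch dynamically via method_missing. It is a
# perfectly valid duck type, but methods.include?(:fetch) cannot see it
# because :fetch never appears in the method table.
class DynamicAdapter
  def self.respond_to_missing?(name, include_private = false)
    name == :fetch || super
  end

  def self.method_missing(name, *args)
    name == :fetch ? { bundle: "/tmp/#{args.first}.js", assets: [] } : super
  end
end
```

With `respond_to?`, such adapters pass validation; with `methods.include?`, they would be rejected despite working correctly.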
