
Commit 0a4ffb0

jakubno and 0div authored
Update connect bucket docs page (#710)
# Description Update connect bucket guide to new SDK version --------- Co-authored-by: 0div <98087403+0div@users.noreply.github.com>
1 parent 5cb2edc commit 0a4ffb0

3 files changed

Lines changed: 167 additions & 185 deletions

File tree

apps/web/src/app/(docs)/docs/legacy/guide/connect-bucket/page.mdx

Lines changed: 0 additions & 185 deletions
This file was deleted.
Lines changed: 163 additions & 0 deletions
@@ -0,0 +1,163 @@
# Connecting a bucket to the sandbox

To store data from the sandbox in a bucket, we mount the bucket into the sandbox using a FUSE filesystem.

You will need to create a custom sandbox template with the FUSE filesystem installed. The guide for building a custom sandbox template can be found [here](/docs/sandbox-template).
## Google Cloud Storage

### Prerequisites

To use Google Cloud Storage, you'll need a bucket and a service account. You can create a service account [here](https://console.cloud.google.com/iam-admin/serviceaccounts) and a bucket [here](https://console.cloud.google.com/storage).

If you want to write to the bucket, make sure the service account has the `Storage Object User` role for this bucket.

You can find a guide on creating a service account key [here](https://cloud.google.com/iam/docs/keys-create-delete#iam-service-account-keys-create-console).
### Mounting the bucket

To use Google Cloud Storage, we need to install the `gcsfuse` package. Here's a simple `Dockerfile` that creates a container with `gcsfuse` installed.
```docker
FROM e2bdev/code-interpreter:latest

RUN apt-get update && apt-get install -y gnupg lsb-release wget

RUN lsb_release -c -s > /tmp/lsb_release
RUN GCSFUSE_REPO=$(cat /tmp/lsb_release) && echo "deb https://packages.cloud.google.com/apt gcsfuse-$GCSFUSE_REPO main" | tee /etc/apt/sources.list.d/gcsfuse.list
RUN wget -O - https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

RUN apt-get update && apt-get install -y gcsfuse
```
The bucket is mounted during the sandbox runtime using the `gcsfuse` command.
<CodeGroup isRunnable={false}>
```js {{ language: 'js' }}
import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create('<your template id>')
await sandbox.files.makeDir('/home/user/bucket')
await sandbox.files.write('key.json', '<your service account key>')

await sandbox.commands.run('sudo gcsfuse <flags> --key-file /home/user/key.json <bucket-name> /home/user/bucket')
```

```python {{ language: 'python' }}
from e2b import Sandbox

sandbox = Sandbox("<your template id>")
sandbox.files.make_dir("/home/user/bucket")
sandbox.files.write("key.json", "<your key file content>")

output = sandbox.commands.run(
    "sudo gcsfuse <flags> --key-file /home/user/key.json <bucket-name> /home/user/bucket"
)
```
</CodeGroup>
### Flags

The complete list of flags is available [here](https://cloud.google.com/storage/docs/gcsfuse-cli#options).

### Allow the default user to access the files

To allow the default user to access the files, we can use the following flags:

```
-o allow_other -file-mode=777 -dir-mode=777
```
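To see how the permission flags slot into the mount command from the earlier example, here is a minimal sketch. The helper name and example values are our own illustration, not part of the e2b SDK or `gcsfuse`:

```python
# Sketch: assemble the full gcsfuse mount command, substituting the
# permission flags above for the <flags> placeholder in the example.
# The helper and the example values are hypothetical.
def gcsfuse_command(bucket: str, mount_point: str, key_file: str) -> str:
    flags = "-o allow_other -file-mode=777 -dir-mode=777"
    return f"sudo gcsfuse {flags} --key-file {key_file} {bucket} {mount_point}"

print(gcsfuse_command("my-bucket", "/home/user/bucket", "/home/user/key.json"))
# sudo gcsfuse -o allow_other -file-mode=777 -dir-mode=777 --key-file /home/user/key.json my-bucket /home/user/bucket
```

The resulting string is what you would pass to `sandbox.commands.run()`.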
## Amazon S3

To use Amazon S3, we can use the `s3fs` package. The `Dockerfile` setup is similar to that of Google Cloud Storage.

```docker
FROM ubuntu:latest

RUN apt-get update && apt-get install -y s3fs
```

As with Google Cloud Storage, the bucket is mounted during the sandbox runtime, this time using the `s3fs` command.
<CodeGroup isRunnable={false}>
```js {{ language: 'js' }}
import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create('<your template id>')
await sandbox.files.makeDir('/home/user/bucket')

// Create a file with the credentials
// If you store the credentials at another path, pass it to the s3fs command via the `-o passwd_file` flag
await sandbox.files.write('/root/.passwd-s3fs', '<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>')
await sandbox.commands.run('sudo chmod 600 /root/.passwd-s3fs')

await sandbox.commands.run('sudo s3fs <flags> <bucket-name> /home/user/bucket')
```

```python {{ language: 'python' }}
from e2b import Sandbox

sandbox = Sandbox("<your template id>")
sandbox.files.make_dir("/home/user/bucket")

# Create a file with the credentials
# If you store the credentials at another path, pass it to the s3fs command via the `-o passwd_file` flag
sandbox.files.write("/root/.passwd-s3fs", "<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>")
sandbox.commands.run("sudo chmod 600 /root/.passwd-s3fs")

sandbox.commands.run("sudo s3fs <flags> <bucket-name> /home/user/bucket")
```
</CodeGroup>
### Flags

The complete list of flags is available [here](https://manpages.ubuntu.com/manpages/xenial/man1/s3fs.1.html).

### Allow the default user to access the files

To allow the default user to access the files, add the following flag:

```
-o allow_other
```
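The two pieces of S3 setup, the credentials file and the mount command, can be sketched as a small helper. The function name and values are hypothetical, for illustration only:

```python
# Sketch: build the content of /root/.passwd-s3fs and the s3fs mount
# command with the allow_other flag filled in for <flags>.
# Helper name and example values are hypothetical.
def s3fs_setup(bucket: str, mount_point: str, access_key: str, secret_key: str):
    passwd_line = f"{access_key}:{secret_key}"  # written to /root/.passwd-s3fs
    cmd = f"sudo s3fs -o allow_other {bucket} {mount_point}"
    return passwd_line, cmd

passwd, cmd = s3fs_setup("my-bucket", "/home/user/bucket", "<ACCESS_KEY>", "<SECRET_KEY>")
```

You would write `passwd` with `sandbox.files.write()` and run `cmd` with `sandbox.commands.run()`, as in the example above.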
## Cloudflare R2

For Cloudflare R2, we can use a setup very similar to S3. The `Dockerfile` remains the same as for S3. However, the mounting differs slightly: we need to specify the endpoint for R2.
<CodeGroup isRunnable={false}>
```js {{ language: 'js' }}
import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create('<your template id>')
await sandbox.files.makeDir('/home/user/bucket')

// Create a file with the R2 credentials
// If you store the credentials at another path, pass it to the s3fs command via the `-o passwd_file` flag
await sandbox.files.write('/root/.passwd-s3fs', '<R2_ACCESS_KEY_ID>:<R2_SECRET_ACCESS_KEY>')
await sandbox.commands.run('sudo chmod 600 /root/.passwd-s3fs')

await sandbox.commands.run('sudo s3fs -o url=https://<ACCOUNT ID>.r2.cloudflarestorage.com <flags> <bucket-name> /home/user/bucket')
```

```python {{ language: 'python' }}
from e2b import Sandbox

sandbox = Sandbox("<your template id>")
sandbox.files.make_dir("/home/user/bucket")

# Create a file with the R2 credentials
# If you store the credentials at another path, pass it to the s3fs command via the `-o passwd_file` flag
sandbox.files.write("/root/.passwd-s3fs", "<R2_ACCESS_KEY_ID>:<R2_SECRET_ACCESS_KEY>")
sandbox.commands.run("sudo chmod 600 /root/.passwd-s3fs")

sandbox.commands.run(
    "sudo s3fs -o url=https://<ACCOUNT ID>.r2.cloudflarestorage.com <flags> <bucket-name> /home/user/bucket"
)
```
</CodeGroup>
### Flags

The flags are the same as for S3.
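The only R2-specific piece is the endpoint URL derived from your Cloudflare account ID. A minimal sketch of assembling it (the helper name and values are hypothetical, not part of any SDK):

```python
# Sketch: derive the R2 endpoint from the Cloudflare account ID and
# build the s3fs mount command. Helper and values are hypothetical.
def r2_mount_command(account_id: str, bucket: str, mount_point: str) -> str:
    endpoint = f"https://{account_id}.r2.cloudflarestorage.com"
    return f"sudo s3fs -o url={endpoint} -o allow_other {bucket} {mount_point}"

print(r2_mount_command("<ACCOUNT ID>", "my-bucket", "/home/user/bucket"))
```

As with S3, the resulting command is run inside the sandbox with `sandbox.commands.run()`.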

apps/web/src/components/Navigation/routes.tsx

Lines changed: 4 additions & 0 deletions
@@ -310,6 +310,10 @@ export const docRoutes: NavGroup[] = [
        title: 'Internet access',
        href: '/docs/sandbox/internet-access',
      },
+     {
+       title: 'Connecting bucket',
+       href: '/docs/sandbox/connect-bucket',
+     },
      {
        title: 'Installing beta SDKs',
        href: '/docs/sandbox/installing-beta-sdks',
