Introduction
Silo is free S3 storage for Hack Clubbers.
If you're a teen building something and need storage, whether it's for a game, a website, or a hackathon project, this is for you.
Under the hood, we proxy every request into a single underlying bucket, separated by routes. This lets us handle the boring stuff (like quotas and auth) while giving you a standard S3 API. You can use all the normal tools (AWS CLI, SDKs, etc.) without needing a credit card or a Cloudflare account.
Just log in and start shipping.
YSWS Program
"You Ship, We Ship" (YSWS) is how you earn more storage.
Instead of paying with money, you pay with code. When you ship a project using Silo, you can submit it to us. Based on the hours you spent coding, we'll permanently increase your storage quota.
Reward Calculator
Enter the hours you spent to see how many GB of permanent storage will be added to your account.
How it works
- Build a project that uses Silo for storage.
- Go to the YSWS Dashboard.
- Submit your project details and hours spent.
- Once approved, your storage limit is automatically increased!
Authentication
To get your credentials:
- Log in to the Dashboard.
- Click Create Bucket.
- Copy your Access Key and Secret Key.
Important: Your Secret Key is only shown once. Make sure to save it securely.
Public Buckets
By default, all buckets are private. This means every request requires valid authentication signatures.
You can toggle a bucket to be Public in the dashboard settings.
What does Public mean?
- Anyone can perform GetObject and HeadObject requests without authentication.
- Files are accessible via direct URL: https://silo.deployor.dev/bucket-name/key.
- ListObjects and other operations still require authentication.
This is ideal for hosting static assets like images, CSS, or game files that need to be publicly accessible on the web.
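For example, once a bucket is public, any HTTP client can read a file directly, with no SDK and no signing. A minimal sketch (the bucket and key names are placeholders):

// Fetch an object from a public bucket with plain HTTP (no credentials).
// "my-bucket" and "images/logo.png" are placeholder names.
const res = await fetch("https://silo.deployor.dev/my-bucket/images/logo.png");
if (!res.ok) throw new Error(`Request failed: ${res.status}`);
const bytes = new Uint8Array(await res.arrayBuffer());
console.log(`Fetched ${bytes.length} bytes`);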
CORS Configuration
Cross-Origin Resource Sharing (CORS) allows client-side web applications loaded in one domain to interact with resources in a different domain.
Silo supports per-bucket CORS configuration. You can manage this directly in the Dashboard or via the standard S3 API.
How it works
We handle CORS at the proxy level ("Virtual CORS"). When you set a CORS configuration, we store it and intercept OPTIONS requests to respond with the correct headers. We also inject the appropriate CORS headers into GET and other responses based on your rules.
Example Configuration
{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedOrigins": ["https://myapp.com"],
      "ExposeHeaders": []
    }
  ]
}
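You can also apply this configuration programmatically. A sketch using the AWS SDK v3 client described in the JavaScript / TypeScript section below (the bucket name and origin are placeholders):

import { S3Client, PutBucketCorsCommand } from "@aws-sdk/client-s3";

// Client configured for Silo (see "JavaScript / TypeScript" below).
const s3 = new S3Client({
  region: "auto",
  endpoint: "https://silo.deployor.dev",
  credentials: {
    accessKeyId: process.env.ACCESS_KEY_ID!,
    secretAccessKey: process.env.SECRET_ACCESS_KEY!,
  },
});

// Apply the same rules as the example configuration above.
await s3.send(new PutBucketCorsCommand({
  Bucket: "my-bucket", // placeholder
  CORSConfiguration: {
    CORSRules: [
      {
        AllowedHeaders: ["*"],
        AllowedMethods: ["GET", "HEAD"],
        AllowedOrigins: ["https://myapp.com"],
        ExposeHeaders: [],
      },
    ],
  },
}));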
Supported Operations
Silo implements a subset of the Amazon S3 API. The following operations are fully supported:
Bucket Operations
- ✓ ListBuckets (GET /)
- ✓ HeadBucket (HEAD /bucket)
- ✓ GetBucketLocation (GET /bucket?location)
- ✓ PutBucketCors (PUT /bucket?cors)
- ✓ GetBucketCors (GET /bucket?cors)
- ✓ DeleteBucketCors (DELETE /bucket?cors)
Object Operations
- ✓ PutObject (PUT /bucket/key)
- ✓ GetObject (GET /bucket/key)
- ✓ HeadObject (HEAD /bucket/key)
- ✓ DeleteObject (DELETE /bucket/key)
- ✓ CopyObject (PUT /bucket/key + x-amz-copy-source)
- ✓ ListObjectsV2 (GET /bucket?list-type=2)
- ✓ DeleteObjects (POST /bucket?delete)
Multipart Uploads
Multipart uploads are fully supported for large files (see the example after this list).
- ✓ CreateMultipartUpload (POST /bucket/key?uploads)
- ✓ UploadPart (PUT /bucket/key?partNumber&uploadId)
- ✓ CompleteMultipartUpload (POST /bucket/key?uploadId)
- ✓ AbortMultipartUpload (DELETE /bucket/key?uploadId)
- ✓ ListMultipartUploads (GET /bucket?uploads)
- ✓ ListParts (GET /bucket/key?uploadId)
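For most apps you don't need to call these operations by hand. The SDK's high-level Upload helper from @aws-sdk/lib-storage splits large bodies into parts and drives the multipart calls listed above. A sketch, assuming the same Silo client setup as in the JavaScript / TypeScript section (bucket, key, and file path are placeholders):

import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage"; // npm install @aws-sdk/lib-storage
import { createReadStream } from "fs";

const s3 = new S3Client({
  region: "auto",
  endpoint: "https://silo.deployor.dev",
  credentials: {
    accessKeyId: process.env.ACCESS_KEY_ID!,
    secretAccessKey: process.env.SECRET_ACCESS_KEY!,
  },
});

// Upload drives CreateMultipartUpload / UploadPart / CompleteMultipartUpload
// for you, and aborts the upload if a part fails.
const upload = new Upload({
  client: s3,
  params: {
    Bucket: "my-bucket",         // placeholder
    Key: "builds/game-v1.zip",   // placeholder
    Body: createReadStream("./game-v1.zip"),
  },
  partSize: 8 * 1024 * 1024, // 8 MiB parts
});

upload.on("httpUploadProgress", (p) => console.log(`${p.loaded ?? 0} bytes uploaded`));
await upload.done();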
Unsupported Operations
The following operations are not supported. We prioritize simplicity and security, so many complex or legacy S3 features are intentionally omitted.
Dead / Deprecated by AWS
These features are deprecated, discontinued, or discouraged by Amazon S3. We do not implement them because they are obsolete. We only support standard "hot" storage.
- SOAP over HTTP/HTTPS
- Signature Version 2 (SigV2)
- SelectObjectContent (S3 Select)
- GetObjectTorrent
- PutBucketLifecycle
- PutBucketReplication
- PutBucketNotification
- ACLs (Access Control Lists)
- REDUCED_REDUNDANCY
- ListObjects (V1)
Not Implemented in Silo
These are valid S3 features that are not currently supported by Silo's infrastructure.
- Versioning
- Encryption
- Object Locking
- Website Hosting
- Accelerate
- Tagging
- Bucket Policies
- Public Access Blocks
- Ownership Controls
Endpoints & Regions
- Region: eu-central-1
- Endpoint URL: https://silo.deployor.dev
Addressing Styles
For authenticated API requests, you can simply use the root endpoint. Your Access Key is uniquely tied to a specific bucket, so we automatically route your requests to the correct bucket.
// API Endpoint (Authenticated)
https://silo.deployor.dev
No bucket name needed in hostname or path for API calls.
For Public Buckets (direct file access in browser), you should use virtual-hosted-style or path-style URLs:
// Virtual Host (Public Access)
https://{bucket}.silo.deployor.dev/{key}
// Path Style (Public Access)
https://silo.deployor.dev/{bucket}/{key}
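For instance, after uploading a file to a public bucket you can build its public URL in either style. A small sketch (bucket and key are placeholders; keys containing slashes should be encoded per path segment):

// Build public URLs for an object in a public bucket.
const bucket = "my-bucket";     // placeholder
const key = "images/logo.png";  // placeholder
const encodedKey = key.split("/").map(encodeURIComponent).join("/");

const pathStyleUrl = `https://silo.deployor.dev/${bucket}/${encodedKey}`;
const virtualHostUrl = `https://${bucket}.silo.deployor.dev/${encodedKey}`;

console.log(pathStyleUrl);   // https://silo.deployor.dev/my-bucket/images/logo.png
console.log(virtualHostUrl); // https://my-bucket.silo.deployor.dev/images/logo.png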
Limits & Quotas
Storage & Bandwidth
We monitor storage usage and egress bandwidth.
- Storage Limit: Defined per user (check dashboard).
- Egress Limit: Typically 3x your storage limit.
AWS CLI
You can use the standard AWS CLI to interact with Silo. Configure a profile with your keys and our endpoint.
Configuration
aws configure --profile silo
# AWS Access Key ID: [Your Access Key]
# AWS Secret Access Key: [Your Secret Key]
# Default region name: eu-central-1
# Default output format: json
Usage Examples
# List Buckets
aws s3 ls --endpoint-url https://silo.deployor.dev --profile silo
# Upload a file
aws s3 cp myfile.txt s3://my-bucket/myfile.txt --endpoint-url https://silo.deployor.dev --profile silo
# List Objects
aws s3 ls s3://my-bucket --endpoint-url https://silo.deployor.dev --profile silo
JavaScript / TypeScript
The best way to interact with Silo in JavaScript/TypeScript is using the official AWS SDK v3.
Quick Start
Don't want to read? Copy the initialization code and start shipping.
1. Installation
npm install @aws-sdk/client-s3
# or
bun add @aws-sdk/client-s3
2. Initialization
Create a reusable client instance. We recommend storing your keys in a .env file.
import { S3Client } from "@aws-sdk/client-s3";
// Initialize the client
const s3 = new S3Client({
  region: "auto",
  endpoint: "https://silo.deployor.dev",
  credentials: {
    accessKeyId: process.env.ACCESS_KEY_ID, // e.g. "23823..."
    secretAccessKey: process.env.SECRET_ACCESS_KEY, // e.g. "82382..."
  },
});
3. Uploading Files (PutObject)
You can upload strings, Buffers, or Streams. Always set the ContentType so browsers know how to handle the file.
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "fs/promises";
// Example 1: Uploading a simple text string
await s3.send(new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "hello.txt",
  Body: "Hello World!",
  ContentType: "text/plain"
}));

// Example 2: Uploading an image from disk
const fileBuffer = await readFile("./image.png");
await s3.send(new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "images/profile.png",
  Body: fileBuffer,
  ContentType: "image/png"
}));
4. Downloading Files (GetObject)
Reading files returns a stream. Here's a helper to convert it to a string.
import { GetObjectCommand } from "@aws-sdk/client-s3";
const response = await s3.send(new GetObjectCommand({
  Bucket: "my-bucket",
  Key: "hello.txt"
}));
// Helper to convert stream to string
const str = await response.Body.transformToString();
console.log(str); // "Hello World!"
5. Listing Files
List contents of a bucket. Useful for building file browsers.
import { ListObjectsV2Command } from "@aws-sdk/client-s3";
const response = await s3.send(new ListObjectsV2Command({
  Bucket: "my-bucket",
  Prefix: "images/" // Optional: filter by folder
}));

// Check if bucket is empty
if (!response.Contents) {
  console.log("Bucket is empty!");
} else {
  response.Contents.forEach((file) => {
    console.log(`${file.Key} (${file.Size} bytes)`);
  });
}
6. Deleting Files
import { DeleteObjectCommand } from "@aws-sdk/client-s3";
await s3.send(new DeleteObjectCommand({
  Bucket: "my-bucket",
  Key: "hello.txt"
}));
7. Generating Presigned URLs
Want to let users upload directly to your bucket without sharing your secret key? Use Presigned URLs.
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { PutObjectCommand } from "@aws-sdk/client-s3";
const command = new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "user-upload.png",
});
// Generate a URL valid for 15 minutes
const url = await getSignedUrl(s3, command, { expiresIn: 900 });
console.log("Upload here:", url);
Python (Boto3)
Boto3 is the standard AWS SDK for Python. It's robust and widely used in data science and backend development.
1. Installation
pip install boto3
2. Initialization
import boto3
import os
# Initialize the S3 client
s3 = boto3.client('s3',
    endpoint_url='https://silo.deployor.dev',
    aws_access_key_id=os.getenv('ACCESS_KEY'),
    aws_secret_access_key=os.getenv('SECRET_KEY'),
    region_name='auto'
)
3. Uploading Files
# Upload a file from disk
s3.upload_file('local_image.jpg', 'my-bucket', 'images/remote_image.jpg')
# Upload a file object (useful for web frameworks like Flask/Django)
with open('local_image.jpg', 'rb') as f:
    s3.upload_fileobj(f, 'my-bucket', 'images/remote_image.jpg')

# Upload raw bytes
s3.put_object(
    Bucket='my-bucket',
    Key='hello.txt',
    Body=b'Hello World!',
    ContentType='text/plain'
)
4. Downloading Files
# Download to disk
s3.download_file('my-bucket', 'images/remote_image.jpg', 'local_image.jpg')
# Download to memory
response = s3.get_object(Bucket='my-bucket', Key='hello.txt')
content = response['Body'].read().decode('utf-8')
print(content)
5. Listing Objects
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='my-bucket'):
    for obj in page.get('Contents', []):
        print(f"{obj['Key']} - {obj['Size']} bytes")
Go
Use the official AWS SDK for Go v2. It provides a type-safe and performant way to interact with Silo.
1. Installation
go get github.com/aws/aws-sdk-go-v2
go get github.com/aws/aws-sdk-go-v2/config
go get github.com/aws/aws-sdk-go-v2/service/s3
2. Complete Example
package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "strings"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    // 1. Configure a custom endpoint resolver pointing at Silo
    siloResolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
        return aws.Endpoint{
            URL: "https://silo.deployor.dev",
        }, nil
    })

    // 2. Load credentials
    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithEndpointResolverWithOptions(siloResolver),
        config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
            os.Getenv("ACCESS_KEY"),
            os.Getenv("SECRET_KEY"),
            "",
        )),
        config.WithRegion("auto"),
    )
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg)

    // 3. Upload a file
    _, err = client.PutObject(context.TODO(), &s3.PutObjectInput{
        Bucket:      aws.String("my-bucket"),
        Key:         aws.String("hello.txt"),
        Body:        strings.NewReader("Hello World!"),
        ContentType: aws.String("text/plain"),
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("Uploaded hello.txt")

    // 4. List files
    output, err := client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
        Bucket: aws.String("my-bucket"),
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, object := range output.Contents {
        // Key and Size are pointers in recent SDK versions; aws.To* dereferences safely.
        fmt.Printf("Found: %s (%d bytes)\n", aws.ToString(object.Key), aws.ToInt64(object.Size))
    }
}
Rclone
Rclone is the "Swiss army knife of cloud storage". It's perfect for backups, migrations, and mounting buckets as local drives.
1. Interactive Configuration
Run rclone config and follow these steps:
- n (New remote)
- name: silo
- Storage: s3
- Provider: Other
- env_auth: false
- access_key_id: [Paste your Access Key]
- secret_access_key: [Paste your Secret Key]
- region: auto
- endpoint: https://silo.deployor.dev
- acl: private
2. Manual Configuration
Alternatively, edit ~/.config/rclone/rclone.conf directly:
[silo]
type = s3
provider = Other
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = https://silo.deployor.dev
region = auto
acl = private
3. Common Commands
List all buckets
rclone lsd silo:
Copy a local file to Silo
rclone copy ./my-game.zip silo:games-bucket/v1/
Sync a local folder (mirror)
rclone sync ./build silo:my-website/ --progress
Mount bucket as a local drive (macOS/Linux)
mkdir ~/mnt/silo
rclone mount silo:my-bucket ~/mnt/silo --vfs-cache-mode writes