Understanding RamNode Object Storage
RamNode Object Storage is built on industry-standard S3-compatible APIs, making it a drop-in replacement for traditional object storage solutions. Any tool or application that supports the S3 API will work with RamNode Object Storage without modification.
Key Benefits
- S3 API compatibility with existing tools
- Low-latency access with enterprise SSDs
- Competitive pricing without egress fees
- Unlimited scalability
- High durability guarantees
Getting Started
To connect, you'll need the following from your RamNode account (a quick way to verify them is shown below):
- Access Key ID and Secret Access Key
- Bucket name (storage container)
- Endpoint: object-storage.ramnode.com
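A minimal connectivity check with the AWS CLI (any S3-compatible client works; the profile name is illustrative):
# Store the credentials in a named profile, then list your buckets
aws configure --profile ramnode
aws s3 ls --profile ramnode --endpoint-url=https://object-storage.ramnode.com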
1. Backup & Disaster Recovery
RamNode Object Storage is ideal for backup destinations due to its durability, cost-effectiveness, and S3 compatibility.
Restic Backup Integration
Restic is a modern backup tool with native support for S3-compatible backends:
# Set environment variables
export AWS_ACCESS_KEY_ID="your-ramnode-access-key"
export AWS_SECRET_ACCESS_KEY="your-ramnode-secret-key"
export RESTIC_PASSWORD="your-restic-password"
# Initialize repository on RamNode Object Storage
restic -r s3:object-storage.ramnode.com/your-bucket-name init
# Backup critical directories
restic -r s3:object-storage.ramnode.com/your-bucket-name backup /var/www /etc /home
# Create a daily backup script for cron
cat > /etc/cron.daily/restic-backup <<'EOF'
#!/bin/bash
export AWS_ACCESS_KEY_ID="your-ramnode-access-key"
export AWS_SECRET_ACCESS_KEY="your-ramnode-secret-key"
export RESTIC_PASSWORD="your-restic-password"
REPO="s3:object-storage.ramnode.com/your-bucket-name"
# Perform backup
restic -r $REPO backup /var/www /etc /home
# Retention policy: keep 7 daily, 4 weekly, 6 monthly
restic -r $REPO forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
echo "$(date): Backup completed" >> /var/log/restic-backup.log
EOF
chmod +x /etc/cron.daily/restic-backup
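To verify your backups, restic can list snapshots and restore from them (with the same environment variables exported; the target directory is illustrative):
# List snapshots, then restore the latest into a scratch directory
restic -r s3:object-storage.ramnode.com/your-bucket-name snapshots
restic -r s3:object-storage.ramnode.com/your-bucket-name restore latest --target /tmp/restore-test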
Database Backups
Automated MySQL/PostgreSQL backups with RamNode Object Storage:
#!/bin/bash
# mysql-ramnode-backup.sh
BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="mysql_backup_${BACKUP_DATE}.sql.gz"
ENDPOINT="https://object-storage.ramnode.com"
BUCKET="database-backups"
# Create backup
mysqldump --all-databases --single-transaction --quick | gzip > /tmp/${BACKUP_FILE}
# Upload to RamNode Object Storage using AWS CLI
aws s3 cp /tmp/${BACKUP_FILE} s3://${BUCKET}/mysql/ \
--endpoint-url=${ENDPOINT} \
--storage-class STANDARD
# Cleanup local file
rm /tmp/${BACKUP_FILE}
echo "$(date): Database backup completed" >> /var/log/mysql-backup.logRclone Integration
Rclone Integration
Rclone supports RamNode Object Storage and handles parallel transfers well:
# Configure rclone for RamNode
cat > ~/.config/rclone/rclone.conf <<EOF
[ramnode]
type = s3
provider = Other
access_key_id = your-ramnode-access-key
secret_access_key = your-ramnode-secret-key
endpoint = https://object-storage.ramnode.com
acl = private
EOF
# Sync directory to RamNode Object Storage
rclone sync /var/www ramnode:your-bucket/www-backup \
--progress \
--transfers 8 \
--checkers 16
# Create an encrypted remote (--obscure lets you pass the password in plain text)
rclone config create ramnode-crypt crypt \
    remote=ramnode:your-bucket/encrypted \
    filename_encryption=standard \
    directory_name_encryption=true \
    password=your-encryption-password --obscure
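Data written through the encrypted remote is encrypted client-side before it leaves the server (the source path is illustrative):
# Sync through the crypt layer; file contents and names are encrypted at rest
rclone sync /srv/secure-data ramnode-crypt:backups --progress
# Listings through the crypt remote transparently decrypt names
rclone ls ramnode-crypt:backups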
2. Media & Static Asset Storage
RamNode Object Storage excels at serving static content like images, videos, documents, and other media files.
WordPress Integration
Configure WordPress to store media uploads on RamNode Object Storage via the WP Offload Media plugin (AS3CF):
// wp-config.php additions for RamNode Object Storage
define('AS3CF_SETTINGS', serialize(array(
    'provider' => 'other',
    'use-server-roles' => false,
    'access-key-id' => 'your-ramnode-access-key',
    'secret-access-key' => 'your-ramnode-secret-key',
)));
define('AS3CF_BUCKET', 'your-bucket-name');
define('AS3CF_REGION', 'us-east-1');
define('AS3CF_ENDPOINT', 'https://object-storage.ramnode.com');
// Optional: Use custom domain if configured
define('AS3CF_CLOUDFRONT', 'cdn.yourdomain.com');
Node.js Integration
Upload files and generate presigned URLs with the AWS SDK for JavaScript:
const AWS = require('aws-sdk');
const fs = require('fs');
// Configure for RamNode Object Storage
AWS.config.update({
    accessKeyId: process.env.RAMNODE_ACCESS_KEY_ID,
    secretAccessKey: process.env.RAMNODE_SECRET_ACCESS_KEY,
    endpoint: 'https://object-storage.ramnode.com',
    s3ForcePathStyle: false, // set to true if virtual-hosted-style bucket URLs are unavailable
    signatureVersion: 'v4',
    region: 'us-east-1'
});
const s3 = new AWS.S3();
// Upload file
async function uploadFile(filePath, key, bucket) {
    const fileContent = fs.readFileSync(filePath);
    const params = {
        Bucket: bucket,
        Key: key,
        Body: fileContent,
        ACL: 'public-read'
    };
    const data = await s3.upload(params).promise();
    return data.Location;
}
// Generate presigned URL
function getPresignedUrl(key, bucket, expiresIn = 3600) {
    return s3.getSignedUrl('getObject', {
        Bucket: bucket,
        Key: key,
        Expires: expiresIn
    });
}
Python (boto3) Integration
A small wrapper class around boto3 for uploads and presigned URLs:
import boto3
from botocore.client import Config
import os
class RamNodeStorage:
    def __init__(self, access_key, secret_key, bucket):
        self.bucket = bucket
        self.s3_client = boto3.client(
            's3',
            endpoint_url='https://object-storage.ramnode.com',
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
            config=Config(signature_version='s3v4'),
            region_name='us-east-1'
        )

    def upload_file(self, file_path, object_name=None, acl='public-read'):
        if object_name is None:
            object_name = os.path.basename(file_path)
        self.s3_client.upload_file(
            file_path, self.bucket, object_name,
            ExtraArgs={'ACL': acl}
        )
        return f"https://object-storage.ramnode.com/{self.bucket}/{object_name}"

    def generate_presigned_url(self, object_name, expiration=3600):
        return self.s3_client.generate_presigned_url(
            'get_object',
            Params={'Bucket': self.bucket, 'Key': object_name},
            ExpiresIn=expiration
        )

# Usage
storage = RamNodeStorage(
    os.environ.get('RAMNODE_ACCESS_KEY'),
    os.environ.get('RAMNODE_SECRET_KEY'),
    'your-bucket-name'
)
url = storage.upload_file('/path/to/file.pdf', 'documents/file.pdf')
3. Static Website Hosting
RamNode Object Storage can host static websites directly with custom domain support.
Website Configuration
ENDPOINT="https://object-storage.ramnode.com"
BUCKET="my-website"
# Enable website hosting
aws s3 website s3://${BUCKET}/ \
--endpoint-url=${ENDPOINT} \
--index-document index.html \
--error-document error.html
# Set bucket policy for public read access
cat > bucket-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::${BUCKET}/*"
    }]
}
EOF
aws s3api put-bucket-policy \
--endpoint-url=${ENDPOINT} \
--bucket ${BUCKET} \
--policy file://bucket-policy.json
# Deploy website files
aws s3 sync ./build/ s3://${BUCKET}/ \
--endpoint-url=${ENDPOINT} \
--delete \
--cache-control "max-age=31536000,public"
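After a deploy, you can confirm the cache headers on a served object, assuming path-style URLs (endpoint/bucket/key) as used elsewhere in this guide:
# Inspect response headers for a deployed file
curl -I https://object-storage.ramnode.com/my-website/index.html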
Deployment Script
#!/bin/bash
# deploy-to-ramnode.sh
ENDPOINT="https://object-storage.ramnode.com"
BUCKET="your-website-bucket"
BUILD_DIR="./dist"
echo "Building application..."
npm run build
echo "Deploying to RamNode Object Storage..."
# Static assets with long cache
aws s3 sync ${BUILD_DIR}/ s3://${BUCKET}/ \
--endpoint-url=${ENDPOINT} \
--delete \
--acl public-read \
--cache-control "public, max-age=31536000, immutable" \
--exclude "*.html" --exclude "*.json"
# HTML files with shorter cache
aws s3 sync ${BUILD_DIR}/ s3://${BUCKET}/ \
--endpoint-url=${ENDPOINT} \
--acl public-read \
--cache-control "public, max-age=300" \
--exclude "*" --include "*.html" --include "*.json"
echo "Deployment completed!"4. Application Log Storage
Ship application logs to RamNode Object Storage for long-term retention and analysis.
Fluent Bit Configuration
# /etc/fluent-bit/fluent-bit.conf
[SERVICE]
    Flush        5
    Daemon       Off
    Log_Level    info

[INPUT]
    Name    tail
    Path    /var/log/nginx/*.log
    Parser  nginx
    Tag     nginx

[INPUT]
    Name    tail
    Path    /var/log/application/*.log
    Parser  json
    Tag     app

[OUTPUT]
    Name             s3
    Match            *
    bucket           your-logs-bucket
    endpoint         https://object-storage.ramnode.com
    region           us-east-1
    total_file_size  100M
    upload_timeout   10m
    use_put_object   On
    compression      gzip
    s3_key_format    /logs/$TAG[0]/%Y/%m/%d/$TAG[0]-%H%M%S
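Once logs are flowing, you can browse them by prefix; the layout follows the s3_key_format above (the date is illustrative):
# List shipped nginx log objects for a given day
aws s3 ls s3://your-logs-bucket/logs/nginx/2024/01/15/ \
    --endpoint-url=https://object-storage.ramnode.com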
5. Container Image Storage
Use RamNode Object Storage as a backend for private container registries.
Docker Registry Backend
version: '3.8'
services:
  registry:
    image: registry:2
    environment:
      REGISTRY_STORAGE: s3
      REGISTRY_STORAGE_S3_ACCESSKEY: your-ramnode-access-key
      REGISTRY_STORAGE_S3_SECRETKEY: your-ramnode-secret-key
      REGISTRY_STORAGE_S3_REGION: us-east-1
      REGISTRY_STORAGE_S3_BUCKET: docker-registry
      REGISTRY_STORAGE_S3_REGIONENDPOINT: https://object-storage.ramnode.com
      REGISTRY_STORAGE_S3_SECURE: "true"
      REGISTRY_HTTP_SECRET: random-secret-string
    ports:
      - "5000:5000"
    restart: always
  registry-ui:
    image: joxit/docker-registry-ui:latest
    environment:
      REGISTRY_TITLE: Private Docker Registry
      REGISTRY_URL: http://registry:5000
      DELETE_IMAGES: "true"
    ports:
      - "8080:80"
    depends_on:
      - registry
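With the registry running, pushed image layers land in the RamNode bucket (the image name is illustrative):
# Start the stack, then tag and push an image through the local registry
docker compose up -d
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest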
6. Database Storage & Archival
Integrate RamNode Object Storage with databases for long-term storage and WAL archiving.
PostgreSQL WAL Archiving
# Install wal-g
wget https://github.com/wal-g/wal-g/releases/download/v2.0.1/wal-g-pg-ubuntu-20.04-amd64.tar.gz
tar -zxvf wal-g-pg-ubuntu-20.04-amd64.tar.gz
mv wal-g-pg-ubuntu-20.04-amd64 /usr/local/bin/wal-g
chmod +x /usr/local/bin/wal-g
# Configure PostgreSQL
cat >> /etc/postgresql/14/main/postgresql.conf <<EOF
archive_mode = on
archive_command = '/usr/local/bin/wal-g wal-push %p'
archive_timeout = 60
EOF
# Note: enabling archive_mode requires a PostgreSQL restart (do this after configuring WAL-G below)
# WAL-G configuration for RamNode
cat > /var/lib/postgresql/.walg.json <<EOF
{
    "AWS_ACCESS_KEY_ID": "your-ramnode-access-key",
    "AWS_SECRET_ACCESS_KEY": "your-ramnode-secret-key",
    "WALG_S3_PREFIX": "s3://postgres-backups",
    "AWS_ENDPOINT": "https://object-storage.ramnode.com",
    "AWS_REGION": "us-east-1",
    "WALG_COMPRESSION_METHOD": "brotli"
}
EOF
# Create base backup
sudo -u postgres wal-g backup-push /var/lib/postgresql/14/main
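To confirm archiving works, and to restore, wal-g can list and fetch base backups (the restore path is illustrative):
# List base backups, then fetch the latest into an empty data directory
sudo -u postgres wal-g backup-list
sudo -u postgres wal-g backup-fetch /var/lib/postgresql/14/restore LATEST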
7. Application Framework Integrations
Laravel Storage
// config/filesystems.php
'disks' => [
    'ramnode' => [
        'driver' => 's3',
        'key' => env('RAMNODE_ACCESS_KEY_ID'),
        'secret' => env('RAMNODE_SECRET_ACCESS_KEY'),
        'region' => env('RAMNODE_REGION', 'us-east-1'),
        'bucket' => env('RAMNODE_BUCKET'),
        'endpoint' => env('RAMNODE_ENDPOINT', 'https://object-storage.ramnode.com'),
        'use_path_style_endpoint' => true,
    ],
],
// Usage in application
use Illuminate\Support\Facades\Storage;
// Store file
Storage::disk('ramnode')->put('uploads/document.pdf', $fileContents);
// Get file URL
$url = Storage::disk('ramnode')->url('uploads/document.pdf');
// Temporary URL (presigned)
$url = Storage::disk('ramnode')->temporaryUrl(
    'uploads/document.pdf',
    now()->addMinutes(30)
);
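The disk reads its settings from environment variables; a matching .env fragment (values are placeholders):
RAMNODE_ACCESS_KEY_ID=your-access-key
RAMNODE_SECRET_ACCESS_KEY=your-secret-key
RAMNODE_REGION=us-east-1
RAMNODE_BUCKET=your-bucket-name
RAMNODE_ENDPOINT=https://object-storage.ramnode.com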
Django Storage
# settings.py
import os
AWS_ACCESS_KEY_ID = os.environ.get('RAMNODE_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('RAMNODE_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('RAMNODE_BUCKET_NAME')
AWS_S3_ENDPOINT_URL = 'https://object-storage.ramnode.com'
AWS_S3_REGION_NAME = 'us-east-1'
AWS_DEFAULT_ACL = 'public-read'
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}
# Static files
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# Media files
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
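These settings rely on the django-storages package. Install it, then push static assets to the bucket:
# Install dependencies and upload static files to RamNode
pip install django-storages boto3
python manage.py collectstatic --noinput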
Ruby on Rails ActiveStorage
# config/storage.yml
ramnode:
  service: S3
  access_key_id: <%= ENV['RAMNODE_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['RAMNODE_SECRET_ACCESS_KEY'] %>
  region: us-east-1
  bucket: <%= ENV['RAMNODE_BUCKET_NAME'] %>
  endpoint: https://object-storage.ramnode.com
  force_path_style: true
# config/environments/production.rb
config.active_storage.service = :ramnode
# Usage in models
class User < ApplicationRecord
  has_one_attached :avatar
  has_many_attached :documents
end
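ActiveStorage's S3 service needs the aws-sdk-s3 gem and its database tables, a one-time setup:
# Add the gem, then create the ActiveStorage tables
bundle add aws-sdk-s3
bin/rails active_storage:install
bin/rails db:migrate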
Security Best Practices
Encryption
Enable encryption for data at rest and in transit:
ENDPOINT="https://object-storage.ramnode.com"
BUCKET="secure-bucket"
# Enable default encryption
aws s3api put-bucket-encryption \
--endpoint-url=${ENDPOINT} \
--bucket ${BUCKET} \
--server-side-encryption-configuration '{
    "Rules": [{
        "ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "AES256"
        }
    }]
}'
# Upload with encryption
aws s3 cp sensitive-file.pdf s3://${BUCKET}/ \
--endpoint-url=${ENDPOINT} \
--server-side-encryption AES256
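You can confirm an object was stored encrypted by inspecting its metadata:
# head-object reports "ServerSideEncryption": "AES256" for encrypted objects
aws s3api head-object \
    --endpoint-url=${ENDPOINT} \
    --bucket ${BUCKET} \
    --key sensitive-file.pdf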
Secure Credential Management
Never hardcode credentials. Use environment variables:
# Create .env file (never commit this!)
cat > .env <<EOF
RAMNODE_ACCESS_KEY_ID=your-access-key
RAMNODE_SECRET_ACCESS_KEY=your-secret-key
RAMNODE_BUCKET=your-bucket
RAMNODE_ENDPOINT=https://object-storage.ramnode.com
EOF
# Restrict permissions so only the owner can read the credentials
chmod 600 .env
# Load in application
export $(cat .env | xargs)
Performance Optimization
Multipart Uploads
Handle large files efficiently:
import boto3
from boto3.s3.transfer import TransferConfig
s3_client = boto3.client(
    's3',
    endpoint_url='https://object-storage.ramnode.com',
    aws_access_key_id='your-access-key',
    aws_secret_access_key='your-secret-key'
)
# Configure multipart upload settings
config = TransferConfig(
    multipart_threshold=1024 * 1024 * 25,  # use multipart above 25 MB
    max_concurrency=10,
    multipart_chunksize=1024 * 1024 * 25,  # 25 MB per part
    use_threads=True
)
# Upload large file
s3_client.upload_file(
    'large-video.mp4',
    'your-bucket',
    'videos/large-video.mp4',
    Config=config
)
Monitor Storage Usage
Track how much data a bucket holds by paginating over its objects:
import boto3
def get_bucket_size(bucket):
    s3_client = boto3.client(
        's3',
        endpoint_url='https://object-storage.ramnode.com',
        aws_access_key_id='your-access-key',
        aws_secret_access_key='your-secret-key'
    )
    total_size = 0
    total_objects = 0
    paginator = s3_client.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get('Contents', []):
            total_size += obj['Size']
            total_objects += 1
    size_gb = total_size / (1024**3)
    print(f"Bucket: {bucket}")
    print(f"Total Objects: {total_objects:,}")
    print(f"Total Size: {size_gb:.2f} GB")
    return {'objects': total_objects, 'size_gb': size_gb}
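For a quick check without Python, the CLI can summarize a bucket as well:
# Recursive listing with object count and total size in the final lines
aws s3 ls s3://your-bucket --recursive --summarize \
    --endpoint-url=https://object-storage.ramnode.com | tail -n 2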
Troubleshooting
Connection Debugging
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError
def test_connection(endpoint, access_key, secret_key):
    try:
        s3_client = boto3.client(
            's3',
            endpoint_url=endpoint,
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key
        )
        response = s3_client.list_buckets()
        print("Connection successful!")
        print(f"Found {len(response['Buckets'])} buckets")
        return True
    except EndpointConnectionError:
        print(f"Connection Error: Cannot reach {endpoint}")
        return False
    except ClientError as e:
        error_code = e.response['Error']['Code']
        if error_code == 'InvalidAccessKeyId':
            print("Error: Invalid Access Key ID")
        elif error_code == 'SignatureDoesNotMatch':
            print("Error: Invalid Secret Access Key")
        else:
            print(f"Error: {error_code}")
        return False

test_connection(
    'https://object-storage.ramnode.com',
    'your-access-key',
    'your-secret-key'
)
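If the test fails at the connection stage, network-level checks help isolate the problem:
# Confirm HTTPS reachability and the TLS handshake to the endpoint
curl -sI https://object-storage.ramnode.com | head -n 1
openssl s_client -connect object-storage.ramnode.com:443 </dev/null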
Need Help?
If you encounter issues not covered here, please contact our support team for assistance with your RamNode Object Storage integration.
