BHK S3 · 11-nines durability · Zero egress fees →

S3-Compatible Storage
Built for AI at Scale.

Developer-first object storage with a clean S3 API, policy-aware data services, and direct co-location with GPU clusters. From $0.99/TB/month with no egress fees.

  • $0.99 per TB / month
  • 50 GB/s peak throughput
  • 11 nines durability
  • 15 µs metadata access

Purpose-built for the data needs of modern AI teams

AI Training Datasets · Model Checkpoints · Vector Embeddings · Media Pipelines · Compliance Archival · Analytics Lakes · Inference Artifacts · Backup & DR

Architecture

Self-healing storage fabric with sub-millisecond metadata.

Every object is synchronously replicated across three availability zones with erasure coding for space efficiency and rapid rebuilds. A memory-tier index keeps metadata accessible at 15 µs regardless of dataset size.
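The space advantage of erasure coding over plain replication can be seen with a little arithmetic. The 8-data + 4-parity split below is an assumed example layout, not BHK's published coding parameters:

```python
# Raw bytes stored per logical byte, for an assumed shard layout.
def storage_overhead(data_shards: int, parity_shards: int) -> float:
    """Raw storage multiplier: total shards / data shards."""
    return (data_shards + parity_shards) / data_shards

erasure = storage_overhead(8, 4)      # 8+4 erasure coding -> 1.5x
replication = storage_overhead(1, 2)  # 3-way replication  -> 3.0x

print(f"8+4 erasure coding: {erasure:.1f}x raw storage")
print(f"3-way replication:  {replication:.1f}x raw storage")
```

Under these assumed parameters, erasure coding halves the raw footprint while still surviving four simultaneous shard losses.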

  • Native S3 signature v4, multi-part uploads, and presigned URLs
  • In-flight and at-rest AES-256-GCM with customer-managed keys
  • Per-bucket versioning, object-lock compliance, configurable retention
  • Event bus streaming notifications to queues, webhooks, or inference endpoints
  • Direct co-location with GPU clusters at 2–4 GB/s intra-cluster transfer
training-data · healthy · LIVE
Throughput: 48 GB/s
Durability: 99.999999999% · 3-zone erasure
Objects: 2.4 M files · 847 GB used
Replication lag: 12 ms
Metadata latency: 15 µs · memory-tier index
API key required. Set BHK_API_KEY before calling:
$ export BHK_API_KEY=bhk_sk_live_4a2e8b1c9d7f3a5b
$ bhk s3 ls s3://training-data --show-versions
2.4M objects · 847 GB · latest version

How It Works

Storage that fits your AI pipeline.

Create, upload, configure, and stream, all through the API.

01

Create a Bucket

One API call sets region pinning, encryption, retention policy, and lifecycle rules before the first byte lands.

02

Upload Your Data

Automatic multi-part above 64 MB with checksum validation. Parallel streams saturate the 50 GB/s ingest ceiling.
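The 64 MB multi-part threshold above can be sketched as a simple planning function. The threshold comes from the docs; using 64 MB as the part size as well is an assumption for illustration:

```python
# Decide whether an upload goes multi-part and how many parts it needs.
# 64 MB threshold is documented; the 64 MB part size is assumed.
import math

MULTIPART_THRESHOLD = 64 * 1024 * 1024  # bytes
PART_SIZE = 64 * 1024 * 1024            # bytes

def plan_upload(size_bytes: int) -> dict:
    """Return a minimal upload plan for an object of the given size."""
    if size_bytes <= MULTIPART_THRESHOLD:
        return {"multipart": False, "parts": 1}
    return {"multipart": True, "parts": math.ceil(size_bytes / PART_SIZE)}

print(plan_upload(10 * 1024 * 1024))   # small file: single PUT
print(plan_upload(500 * 1024 * 1024))  # 500 MB: 8 parts
```

Each part can then be pushed on its own parallel stream, which is how uploads approach the 50 GB/s ingest ceiling.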

03

Configure Policies

Declarative lifecycle, residency, and access policies in YAML. Drift detection alerts whenever buckets deviate from guardrails.
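A policy pack might look like the following. This is a hypothetical sketch; the field names are illustrative, not the published schema:

```yaml
# Hypothetical policy pack for one bucket; field names are illustrative.
bucket: training-data
residency:
  pin: eu-central
lifecycle:
  - match: "checkpoints/*"
    transition_after_days: 30   # shift to warm tier
  - match: "logs/*"
    expire_after_days: 90
access:
  presign_max_ttl: 1h
  intention_codes: [train, eval]
retention:
  mode: governance
  days: 365
```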

04

Stream to GPU Clusters

Mount buckets directly to RTX 3090 nodes. Stream training data into PyTorch dataloaders at 2–4 GB/s without staging.
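The no-staging read pattern amounts to yielding fixed-size chunks straight from a stream into the consumer, never touching local disk. A minimal sketch over a generic file-like object (a real loader would wrap an object read from the bucket; the 8 MB chunk size is an assumption):

```python
# Chunked, no-staging read: yield fixed-size chunks from any binary
# stream. A real dataloader would wrap an object read from the bucket.
import io
from typing import BinaryIO, Iterator

def stream_chunks(obj: BinaryIO, chunk_size: int = 8 * 1024 * 1024) -> Iterator[bytes]:
    while True:
        chunk = obj.read(chunk_size)
        if not chunk:
            return
        yield chunk

# Stand-in for an object read: 20 MB of zeros held in memory.
shard = io.BytesIO(b"\x00" * (20 * 1024 * 1024))
sizes = [len(c) for c in stream_chunks(shard)]
print(sizes)  # [8388608, 8388608, 4194304]
```

Feeding such a generator into a PyTorch iterable dataset keeps GPU nodes reading at link speed with no intermediate copy on disk.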

Durability

Protection layers, by default.

Integrity, security, lifecycle, and governance, all built in rather than bolted on.

01

Continuous Data Integrity

Background scrubbing validates checksums every 24 hours and rebuilds corrupted shards automatically. Audit logs capture every repair event with full lineage.
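Conceptually, a scrub pass recomputes each shard's checksum and flags mismatches for rebuild. The sketch below assumes SHA-256 shard checksums; the shard layout and repair hooks are illustrative:

```python
# Sketch of one scrub pass: recompute checksums, flag mismatches.
import hashlib

def scrub(shards: dict, expected: dict) -> list:
    """Return shard ids whose checksum no longer matches the stored one."""
    return [
        sid for sid, data in shards.items()
        if hashlib.sha256(data).hexdigest() != expected[sid]
    ]

shards = {"a": b"hello", "b": b"world"}
expected = {sid: hashlib.sha256(d).hexdigest() for sid, d in shards.items()}
shards["b"] = b"w0rld"  # simulate bit rot on one shard
print(scrub(shards, expected))  # ['b']
```

A flagged shard would then be rebuilt from the surviving erasure-coded shards, and the repair logged with full lineage.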

02

Zero-Egress Envelope

Define zero-egress buckets where data never leaves specified regions. Permit GPU clusters or analytics services through short-lived signed policies.

03

Intelligent Tiering

Storage media adapt to real-time access patterns: frequently accessed data lives on NVMe, while cold data shifts to high-density media with zero retrieval delay.
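The tier decision reduces to last-access age. The 30-day default inactivity threshold comes from the FAQ; the warm/archive cutoffs and tier names below are assumed for illustration:

```python
# Pick a storage tier from last-access age. The 30-day default is
# documented; the 6x warm cutoff and tier names are assumptions.
from datetime import datetime, timedelta, timezone

def pick_tier(last_access: datetime, now: datetime,
              inactivity_days: int = 30) -> str:
    idle = now - last_access
    if idle < timedelta(days=inactivity_days):
        return "nvme-hot"
    if idle < timedelta(days=inactivity_days * 6):
        return "hdd-warm"
    return "archive"

now = datetime(2026, 5, 1, tzinfo=timezone.utc)
print(pick_tier(now - timedelta(days=3), now))    # nvme-hot
print(pick_tier(now - timedelta(days=45), now))   # hdd-warm
print(pick_tier(now - timedelta(days=400), now))  # archive
```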

04

Policy Automation

Declarative policy packs enforce retention, legal hold, and residency. Drift detection alerts whenever buckets deviate from guardrails.
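Drift detection is essentially a diff between declared policy and live bucket settings. A minimal sketch, with illustrative setting names:

```python
# Drift detection as a dict diff: declared policy vs. live settings.
def detect_drift(declared: dict, live: dict) -> dict:
    """Map each deviating setting to (declared, live) values."""
    return {
        k: (v, live.get(k))
        for k, v in declared.items()
        if live.get(k) != v
    }

declared = {"versioning": True, "retention_days": 365, "residency": "eu-central"}
live     = {"versioning": True, "retention_days": 90,  "residency": "eu-central"}
print(detect_drift(declared, live))  # {'retention_days': (365, 90)}
```

Each non-empty diff would raise an alert naming the bucket and the setting that drifted from its guardrail.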

05

Versioning & Compliance

Per-bucket versioning, object-lock immutability, and configurable retention periods. Meets WORM requirements for financial services and healthcare.

06

Event Bus Streaming

Stream object notifications to queues, webhooks, or inference endpoints the moment new data lands. Trigger GPU jobs automatically on ingest.
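A consumer on the event bus would receive a small JSON notification per object. The payload shape below is illustrative; BHK's actual event schema is not shown here:

```python
# Illustrative shape of an ingest notification; field names assumed.
import json

def make_event(bucket: str, key: str, size: int, etag: str) -> str:
    return json.dumps({
        "type": "object.created",
        "bucket": bucket,
        "key": key,
        "size": size,
        "etag": etag,
    })

evt = make_event("training-data", "shards/0001.tar", 1_073_741_824, "9f86d0")
print(evt)
```

A webhook or queue consumer parsing this payload can kick off a GPU job the moment the shard lands.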

API Reference

Fully S3-compatible. Plus BHK extensions.

Drop-in replacement for any AWS S3 client. Set the endpoint and go.

API key required for all operations. Set your credentials before running any CLI command:
export BHK_API_KEY=bhk_sk_live_4a2e8b1c9d7f3a5b  ·  export BHK_ACCESS_KEY=BHK4A2E8B1C9D7F3A5B6
Get your keys at ai.bhkcloud.com/dashboard →
Create Bucket · PUT /{bucket}
  Supports --data-residency for regional pinning.
  $ bhk s3 mb s3://media-prod --data-residency=eu-central

Upload Object · PUT /{bucket}/{key}
  Automatic multi-part above 64 MB with checksum validation.
  $ bhk s3 cp ./assets s3://media-prod/ --recursive

List Versions · GET /{bucket}?versions
  Returns retention class, immutability status, and lineage metadata.
  $ bhk s3 ls s3://media-prod --show-versions

Presigned URL · POST /presign
  Intention codes scope downstream access to AI pipelines.
  $ bhk s3 presign s3://media-prod/dataset.tar --intention=train
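To make the intention-scoped presign concrete, here is a deliberately simplified signed-URL sketch: an HMAC over path, expiry, and intention code. Real presigning uses AWS Signature v4; this only illustrates how an --intention scope can be bound into a signature. Key material and query-parameter names are illustrative:

```python
# Simplified signed-URL sketch (NOT SigV4): HMAC binds path, expiry,
# and intention code so the scope cannot be altered downstream.
import hashlib
import hmac
import time
from urllib.parse import urlencode

def presign(secret: bytes, path: str, intention: str,
            ttl: int = 3600, now: int = None) -> str:
    expires = (now if now is not None else int(time.time())) + ttl
    msg = f"{path}|{intention}|{expires}".encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    qs = urlencode({"intention": intention, "expires": expires, "sig": sig})
    return f"https://s3.bhkcloud.com{path}?{qs}"

url = presign(b"bhk_secret", "/media-prod/dataset.tar", "train",
              now=1_700_000_000)
print(url)
```

The server recomputes the same HMAC on receipt, so tampering with the intention code or expiry invalidates the signature.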

Why BHK S3

Storage that doesn't charge for moving your own data.

Most cloud providers charge up to $90/TB to read back data you already paid to store. We don't.

Feature                    BHK S3                         AWS S3                       Wasabi               Backblaze B2
Storage price / TB/mo      $0.99                          $23                          $6.99                $6
Egress fees                None (zero egress)             $90/TB                       First 1 TB free/mo   First 3 GB/day free
S3 API compatibility       Full S3 sig v4                 Native                       Full S3 sig v4       Partial (S3-compat)
GPU co-location            RTX 3090 · 2–4 GB/s            EC2 only · separate billing  No compute           No compute
Metadata latency           15 µs · memory-tier            ~5–50 ms                     ~10–50 ms            ~10–50 ms
Intelligent tiering        Automatic · no retrieval fees  S3-IA · pay per retrieval    No tiering           No tiering
Event bus / notifications  Built-in · queues, webhooks    SNS/SQS (separate)           None                 None

Pricing sourced from public rates as of May 2026. Actual costs vary by region and usage. AWS egress based on internet transfer out.
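A worked example using the list prices above: storing 100 TB for a month and reading all of it out once.

```python
# Monthly cost of storing tb_stored and egressing tb_egress, at the
# per-TB list rates quoted in the comparison table (May 2026).
def monthly_cost(tb_stored: float, tb_egress: float,
                 store_rate: float, egress_rate: float) -> float:
    return tb_stored * store_rate + tb_egress * egress_rate

bhk = monthly_cost(100, 100, store_rate=0.99, egress_rate=0.0)
aws = monthly_cost(100, 100, store_rate=23.0, egress_rate=90.0)
print(f"BHK S3: ${bhk:,.2f}")   # $99.00
print(f"AWS S3: ${aws:,.2f}")   # $11,300.00
```

At this workload the egress line item alone is roughly 90x the entire BHK bill.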

FAQ

Object storage, answered.

Everything you need to know before uploading your first dataset.

Is BHK S3 fully compatible with the AWS S3 API?

Yes. BHK S3 implements AWS S3 Signature Version 4 authentication and the full core S3 API surface: buckets, objects, multi-part uploads, presigned URLs, versioning, and object lock. Any AWS SDK (Python boto3, AWS CLI, Go, Node.js) works by setting --endpoint-url https://s3.bhkcloud.com. We also provide BHK-specific extensions for data residency pinning and intention-scoped presigned URLs.

What does "zero egress" actually mean?

We do not charge for data transferred out of BHK S3 to GPU nodes, the internet, or other services. There are no egress fees, no free-tier limits, and no "first TB free" caveats. You pay $0.99/TB/month for what you store. Bandwidth in and out is included at no additional charge.

How fast can I stream data to GPU training jobs?

GPU nodes and BHK S3 are co-located on the same internal network. Direct intra-cluster reads achieve 2–4 GB/s per node, and parallel multi-stream uploads saturate the 50 GB/s ingest ceiling. For PyTorch, set the endpoint in your dataset loader and stream shards directly with no local staging required.

How does intelligent tiering work?

BHK S3 monitors access patterns at the object level and transparently moves data between NVMe (hot), high-density HDD (warm), and archival tiers. Unlike AWS S3 Intelligent-Tiering, there are no per-object monitoring fees and no retrieval charges when cold data is accessed. The tier transition is fully automatic based on a configurable inactivity threshold (default: 30 days).

Can I migrate from AWS S3 without changing my code?

In most cases, yes. Change your SDK's endpoint to https://s3.bhkcloud.com and update your credentials. For migrations, bhk s3 sync streams objects directly from AWS without local staging, preserving metadata, ACLs, and versioning history. For migrations above 100 TB, contact our team; we offer complimentary architecture reviews.

What compliance standards does BHK S3 meet?

BHK S3 supports WORM (Write Once Read Many) object lock, configurable retention periods, and legal hold for FINRA, SEC 17a-4, and healthcare data requirements. AES-256-GCM encryption is applied at rest and in transit. For enterprise compliance reviews, SLA guarantees, or data processing agreements, contact our team.

Need a storage architecture review?

Our specialists benchmark workloads, recommend tiering policies, and script migrations. Complimentary 60-minute readiness session.

Schedule a Session → View Storage Pricing