In an era where most organizational systems have moved to the cloud, storing your company’s important documents (Word files, PDFs, or any other format) with a single cloud provider may no longer be the best option, because doing so concentrates all the risk in one place.
To avoid that concentration of risk, it may be necessary to distribute your important documents across multiple cloud providers, which also gives you Disaster Recovery (DR) when something unexpected happens.
But!!! The moment you start looking for a file storage system that supports Multi-Cloud, Hybrid Cloud, or even On-Premise, you quickly run into a problem: each provider has a different API, whether it’s AWS S3, Azure Blob Storage, Google Cloud Storage, or something else. That means countless hours spent tweaking your code, and of course… it feels like a never-ending cycle.
If you don’t want your Devs or Ops to keep rewriting code and redeploying every time the storage backend changes or a new API has to be supported…
“MinIO is the answer you’ve been looking for”
MinIO is designed to be fully compatible with the Amazon S3 API, which means… whether your data is on AWS, Azure, GCP, or even on-premise, all systems can communicate as if they’re speaking the same language.
And the best part: you don’t have to change a single line of code. Whether it’s for DR, backups, or cloud migrations, no matter how many times you move, life for Devs, Ops, and Data Engineers becomes easier than you ever thought possible.
So how does MinIO do it?
Behind the scenes, MinIO is designed to be “simple yet powerful”, built on a Distributed Object Storage architecture. No matter which cloud you deploy it on, or even on your organization’s on-premise servers, MinIO communicates seamlessly across systems using a single standard: the S3 API.
What makes MinIO stand out is its replication and bucket-level Disaster Recovery capabilities, which let you distribute your data across clouds in near real-time. For example, you can store your primary files on Azure and replicate them to AWS, GCP, or on-premise systems, while everything stays automatically synced and is immediately available if something unexpected happens.
How does MinIO Replication across Multi-Cloud + On-Premise work?
Suppose you have a MinIO cluster in each environment as follows:
- minio-aks.example.com → Azure (AKS)
- minio-eks.example.com → AWS (EKS)
- minio-gke.example.com → Google Cloud (GKE)
- minio-onprem.local → On-Premise Cluster
You can set up replication across all these clouds using just a few lines of MinIO CLI (mc) commands, for example:
1. Log in to the source MinIO cluster (AKS)
mc alias set minio-aks https://minio-aks.example.com accessKey secretKey
2. Log in to the target MinIO cluster (EKS)
mc alias set minio-eks https://minio-eks.example.com accessKey secretKey
3. Log in to the target MinIO cluster (GKE)
mc alias set minio-gke https://minio-gke.example.com accessKey secretKey
4. Log in to the On-Premise MinIO cluster
mc alias set minio-onprem https://minio-onprem.local accessKey secretKey
5. Set up bucket replication for data from AKS → EKS, GKE, On-Prem
mc replicate add --remote-bucket "minio-eks/data" --replicate "delete,delete-marker,existing-objects" minio-aks/data
mc replicate add --remote-bucket "minio-gke/data" --replicate "delete,delete-marker,existing-objects" minio-aks/data
mc replicate add --remote-bucket "minio-onprem/data" --replicate "delete,delete-marker,existing-objects" minio-aks/data
After running these commands, any new objects, modifications, or deletions in the data bucket on AKS are automatically replicated to EKS, GKE, and On-Prem in near real-time.
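A quick way to sanity-check that replication is actually flowing is to write an object to the source cluster and read it back from one of the targets. Here is a minimal Python sketch using boto3; the endpoints, the accessKey/secretKey placeholders, and the object key are just the example values used above:

import time
import boto3

# Source cluster on AKS and one replication target on EKS (placeholder endpoints/credentials)
source = boto3.client(
    's3',
    endpoint_url='https://minio-aks.example.com',
    aws_access_key_id='accessKey',
    aws_secret_access_key='secretKey',
)
target = boto3.client(
    's3',
    endpoint_url='https://minio-eks.example.com',
    aws_access_key_id='accessKey',
    aws_secret_access_key='secretKey',
)

# Write an object to the data bucket on AKS
source.put_object(Bucket='data', Key='replication-check.txt', Body=b'hello from AKS')

# Replication is asynchronous, so give it a moment, then confirm the object exists on EKS
# (head_object raises an error if the object has not arrived yet)
time.sleep(5)
target.head_object(Bucket='data', Key='replication-check.txt')
print('replication-check.txt is visible on the EKS cluster')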
MinIO: Really Dev-Friendly? How to Switch from S3 Without Changing a Single Line of Code
One of MinIO’s biggest strengths is that it works with applications using the Amazon S3 API in a “fully compatible” way. This means that if your app already uses the S3 API, you can switch from S3 to MinIO immediately without changing a single line of code. Whether it runs on AKS, EKS, GKE, or On-Prem, just point the endpoint to MinIO and it works immediately.
Conditions for “No Code Changes”
If your application uses a standard SDK such as boto3, aws-sdk, minio, s3fs, or the Go SDK, and does not rely on AWS-specific features (like IAM Roles, STS, KMS, or Glacier), you only need to adjust a few parameters, for example:
import boto3

# Original (AWS S3)
s3 = boto3.client(
    's3',
    region_name='ap-southeast-1',
    aws_access_key_id='AWS_ACCESS_KEY',
    aws_secret_access_key='AWS_SECRET_KEY'
)

# Modified (MinIO)
s3 = boto3.client(
    's3',
    endpoint_url='https://minio.example.com',  # ✅ Added MinIO endpoint
    aws_access_key_id='MINIO_ACCESS_KEY',
    aws_secret_access_key='MINIO_SECRET_KEY',
    region_name='us-east-1'
)
PUT, GET, DELETE, LIST — all commands work exactly the same.
No changes to the logic are required at all.
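To make that concrete, here is a small sketch of the same everyday calls running against the MinIO-pointed client from the snippet above; the bucket name data and the object key are placeholders:

import boto3

# Same client setup as the MinIO example above (placeholder endpoint and keys)
s3 = boto3.client(
    's3',
    endpoint_url='https://minio.example.com',
    aws_access_key_id='MINIO_ACCESS_KEY',
    aws_secret_access_key='MINIO_SECRET_KEY',
    region_name='us-east-1'
)

s3.put_object(Bucket='data', Key='report.pdf', Body=b'example content')    # PUT
obj = s3.get_object(Bucket='data', Key='report.pdf')                       # GET
print(obj['Body'].read())

for item in s3.list_objects_v2(Bucket='data').get('Contents', []):         # LIST
    print(item['Key'], item['Size'])

s3.delete_object(Bucket='data', Key='report.pdf')                          # DELETE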
Important Note
If your application does not yet use the S3 API (e.g., using a local file system or a custom REST API), you may need to adjust the code the first time to switch to an S3 SDK. After that, you can point the endpoint to MinIO, AWS, or any cloud without touching the code again.
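As a rough illustration of that one-time change, here is a before/after sketch; the path, bucket name, endpoint, and keys are placeholders:

import boto3

content = b'...example file contents...'

# Before: writing to the local filesystem
with open('/var/app/uploads/report.pdf', 'wb') as f:
    f.write(content)

# After: the same write through the S3 SDK; from now on only the endpoint ever changes
s3 = boto3.client(
    's3',
    endpoint_url='https://minio.example.com',  # MinIO today, AWS or another cloud tomorrow
    aws_access_key_id='MINIO_ACCESS_KEY',
    aws_secret_access_key='MINIO_SECRET_KEY'
)
s3.put_object(Bucket='uploads', Key='report.pdf', Body=content)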
Quick Summary
- Already using S3 API → No changes needed
- Using local filesystem / custom REST API → Change required the first time
- Using IAM Role or STS → Must switch to Access/Secret Key
- Using S3 Glacier / KMS → Need to adjust encryption/archive handling
So, how do we get started with MinIO?
There are two main ways to install MinIO:
- Install on a Host (Standalone / Multi-node)
  - Bare Metal – Install directly on a physical server
  - VM – Install on a virtual machine
- Install on Kubernetes / Containers (MinIO Operator – used to create MinIO Tenants as a cluster)
  - Native Kubernetes – Install on a standard Kubernetes cluster
  - Managed Kubernetes (Cloud) – Install on a cloud-managed cluster, such as:
    - AWS EKS
    - Azure AKS
    - Google GKE
  - Docker / Podman – Run MinIO as a container on a server
Get started easily by installing on Docker
Example: Run 2 MinIO instances to test replication
1. Create data folders
mkdir -p ~/minio-data1
mkdir -p ~/minio-data2
2. Run the first MinIO instance (minio1)
docker run -d \
--name minio1 \
-p 9001:9000 -p 9091:9090 \
-e "MINIO_ROOT_USER=admin1" \
-e "MINIO_ROOT_PASSWORD=admin123" \
-v ~/minio-data1:/data \
quay.io/minio/minio server /data --console-address ":9090"
3. Run the second MinIO instance (minio2)
docker run -d \
--name minio2 \
-p 9002:9000 -p 9092:9090 \
-e "MINIO_ROOT_USER=admin2" \
-e "MINIO_ROOT_PASSWORD=admin123" \
-v ~/minio-data2:/data \
quay.io/minio/minio server /data --console-address ":9090"
4. Access via Web UI
minio1: http://localhost:9091 user: admin1 password: admin123
minio2: http://localhost:9092 user: admin2 password: admin123
5. Install mc (MinIO Client) to set up replication
brew install minio/stable/mc
6. Set up aliases on both sides
mc alias set minio1 http://localhost:9001 admin1 admin123
mc alias set minio2 http://localhost:9002 admin2 admin123
7. Create a bucket named data on both sides
mc mb minio1/data
mc mb minio2/data
8. Set up replication from minio1 → minio2
mc replicate add minio1/data --remote-bucket "minio2/data" --replicate "delete,delete-marker,existing-objects"
9. Test replication: upload a file to minio1
mc cp TEST.TXT minio1/data/
10. Check on minio2
mc ls minio2/data/
If you see the same file there, replication is working correctly!
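Once the two containers are replicating, you can also point an S3 application at the local instance, exactly as described in the “no code changes” section. A minimal sketch with boto3, using the port and credentials of minio1 from the steps above (the object key is just an example):

import boto3

# Local Docker instance minio1 from the steps above
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9001',
    aws_access_key_id='admin1',
    aws_secret_access_key='admin123'
)

# Upload through the app; the bucket replication rule pushes the object on to minio2
s3.put_object(Bucket='data', Key='TEST2.TXT', Body=b'hello from boto3')
print([o['Key'] for o in s3.list_objects_v2(Bucket='data').get('Contents', [])])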
Conclusion
In an era where organizations widely adopt Multi-Cloud and Hybrid Cloud architectures, storing critical files in a single location poses a significant risk.
MinIO is an Object Storage platform that offers more than just speed: it removes many of the headaches faced by Dev/Ops teams, for these reasons:
- End lock-in, fully S3 compatible: If your app already uses the S3 API, you can point it at MinIO immediately, regardless of the cloud, without touching a single line of code.
- Cross-Cloud DR (near real-time): Automatically synchronize data across AWS, Azure, GCP, or On-Prem, so your data stays available during unexpected events.
- Flexible Architecture: Deploy anywhere (Kubernetes, Bare-Metal) with Erasure Coding features for enterprise-grade data durability.
- Save Time & Money: Disaster Recovery, backups, or cloud migrations are no longer complicated or time-consuming.
Looking for a DevOps solution that automates your workflow and reduces business costs? SCB TechX helps you modernize your delivery pipeline and bring high-quality products to market faster, building a foundation for long-term growth.
For service inquiries, please fill out this form.
Learn more: xPlatform | SCB Tech X

