

Introduction

MinIO is one of the most sought-after object storage systems of the modern world, famous for its high performance. It is freely available under the Apache License v2.0 and implements the industry-standard Amazon S3 API. The system is software-defined, which means its administrative and management capabilities are abstracted from the underlying hardware. Several things make MinIO a strong contender among private cloud object stores, but the main reason lies in its design. MinIO is built to serve objects and nothing else, so all necessary functions are handled by a single layer of the architecture. This makes MinIO a scalable, lightweight, performance-driven, cloud-native object server. Private clouds are increasingly strained by cloud-native workloads such as analytics and machine learning, and this is where MinIO excels, whereas traditional object storage is confined to use cases such as archiving, secondary storage, and disaster recovery.


Before we venture into the features and functionality of MinIO, let us take a moment to appreciate the developers of this storage system. The MinIO project is stewarded by MinIO Inc., a company that got its start in Silicon Valley. Credit for its foundation goes to Anand Babu Periasamy, Harshavardhana, and Garima Kapoor, who officially started the project in 2014. By August of 2019, MinIO had passed 250 million downloads and 17 thousand stars on its host, GitHub. The source code is still on GitHub, and all contributions to the project are made via pull requests.

MinIO Cloud Storage Stack

There are three components in the MinIO cloud storage stack: the cloud storage server, the MinIO SDKs, and the MinIO Client. The MinIO SDKs are used by applications to interact with Amazon S3-compatible servers. The MinIO Client, also known as mc, is a command-line client for file management against Amazon S3-compatible servers.

MinIO Server

The MinIO cloud storage server was built with a focus on minimalism and scalability. It stores unstructured data such as videos, log files, archives, photos, and container images. The MinIO server packs a large feature set on top of its high performance and petabyte-scale workload capacity, which makes it a strong choice for enterprise deployments. The most notable features are global federation, erasure coding, encryption/WORM, continuous replication, bitrot protection, and support for multi-cloud deployments via gateway mode.


The developers of MinIO made the server hardware agnostic, which makes it suitable for a range of virtual and physical environments. The system can run in containers on commodity servers with local disks without compromising scalability or data safety. One can install the MinIO server on virtual or physical machines, or deploy it on container platforms like Mesosphere, Docker Swarm, and Kubernetes.

MinIO Client

The MinIO Client, or mc, works as an alternative to standard UNIX commands like “cat”, “diff”, “ls”, “cp”, and “mirror”, and adds support for Amazon S3-compatible cloud storage services, including AWS Signature v2 and v4. The MinIO Client is cross-platform, which means you can run it on Windows, macOS, or Linux.

MinIO Client SDK

The MinIO Client SDK provides a simple API for accessing any object storage server compatible with Amazon S3. There are language bindings for Go, Java, Python, JavaScript, Haskell, and the .NET Framework.


MinIO brings enterprise-class features to object storage, and these essential and unique capabilities have led several big names in modern business and tech to adopt the technology. Notable examples include S3 Select, the AWS S3 API, and inline erasure coding and security. Let’s take a closer look at some of these features one by one.


Erasure Coding

Before diving into the fundamentals of erasure coding, let’s get to know what an erasure code is. Basically, an erasure code is a code used for error correction. It takes a message consisting of “k” symbols and expands it into a longer code word of “n” symbols, such that the original message can be recovered from a subset of those n symbols. MinIO uses a “Reed-Solomon” code that divides each object into n/2 data blocks and n/2 parity blocks. For example, on a 12-drive setup the code divides your object into 6 data blocks and 6 parity blocks. This mechanism lets you recover the data, from either the data or the parity blocks, even if you lose up to 5 ((n/2) - 1) drives.
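To make the arithmetic concrete, here is a tiny shell sketch of the default split for a 12-drive setup. The variable names are ours, not MinIO's, and the (n/2) - 1 figure reflects the write quorum of N/2 + 1:

```shell
#!/bin/sh
# Default MinIO-style erasure layout for a 12-drive setup:
# half the drives hold data blocks, half hold parity blocks.
drives=12
data_blocks=$((drives / 2))      # 6 data blocks
parity_blocks=$((drives / 2))    # 6 parity blocks
# With a write quorum of N/2 + 1, the set stays fully
# read/write available with up to (N/2) - 1 failed drives.
tolerated=$((drives / 2 - 1))
echo "data=$data_blocks parity=$parity_blocks tolerated_failures=$tolerated"
```

Running it prints `data=6 parity=6 tolerated_failures=5`, matching the 6/6 split and five-drive tolerance described above.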


How is it Different from RAID?

Unlike RAID or replication, erasure code protects data from multiple drive failures. In a real-life comparison, RAID 6 preserves your data against two drive failures, but MinIO erasure code can still recover your data even if you lose as many as half of the drives. Another thing to note is that MinIO’s erasure code operates at the object level: each object is individually encoded with a high parity count, so data corruption can be healed one object at a time. For RAID, healing takes far longer because it has to be done at the volume level. MinIO erasure coding is backed by a strong backend design that supports efficient operation and takes advantage of the hardware as needed within the given boundaries.


Bitrot Protection

With all the protection and restore facilities available today, users still have to deal with data getting corrupted on disk drives, and the frustrating part is that it happens without giving any signal to the user. The result is always the same: data compromise. Dissecting the problem reveals several causes that slowly snowball into data loss, among them driver errors, aging drives, phantom writes, accidental overwrites, misdirected reads/writes, disk firmware bugs, and current spikes. MinIO uses the HighwayHash algorithm to ensure the integrity of its data. The system never serves corrupted data; instead, it catches damaged objects on the fly and heals them. It ensures integrity by assigning a hash value during the WRITE process and checking authenticity during every READ, all across the network until the data reaches the drive. The whole implementation is designed for speed, with hashing speeds reaching up to 10 GB per second on a single core of an Intel CPU.




Bit rot, or silent data corruption, remains a big problem for disk drives: it causes the data inside the disks to get corrupted without the user ever noticing. The causes range from bugs in the disk firmware and DMA parity errors between the server memory and the array to misdirected reads/writes, current spikes, driver errors, phantom writes, and accidental overwrites. MinIO provides robust protection against all of these bitrot and corruption problems through its fast hashing algorithm and an erasure-coding-based data recovery system. So how does MinIO do it? It first assigns a computed hash to the data, then compares that value with a freshly computed hash after the data is read back from disk. This makes it easy to find out whether the data has been altered in the meantime. If the hash values do not match, the whole block is discarded by the system and rebuilt from the parity chunks and the remaining data.
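The verify-on-read idea can be sketched in a few lines of shell, using sha256sum as a stand-in for MinIO's actual HighwayHash-based check (the file and variable names here are purely illustrative):

```shell
#!/bin/sh
# WRITE path: store the object and record its hash.
obj=$(mktemp)
printf 'hello object' > "$obj"
expected=$(sha256sum "$obj" | awk '{print $1}')

# READ path: recompute the hash and compare before serving the data.
actual=$(sha256sum "$obj" | awk '{print $1}')
[ "$actual" = "$expected" ] && verdict="intact" || verdict="corrupt"

# Simulate bit rot: any change to the bytes makes the check fail,
# at which point MinIO would discard the block and rebuild it.
printf 'X' >> "$obj"
actual=$(sha256sum "$obj" | awk '{print $1}')
[ "$actual" = "$expected" ] && verdict2="intact" || verdict2="corrupt"

echo "$verdict then $verdict2"
rm -f "$obj"
```

The first check passes and the second fails, which is exactly the point at which a healing rebuild from parity would kick in.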

What is Hashing in MinIO?

The hashing procedure used in MinIO is known as “HighwayHash”, and MinIO uses an implementation of the algorithm written entirely in Go. So why call it an elevated version? Because hashing speeds can go up to 10 GB/sec, and it achieves this on a single Intel CPU core. The hashing speed has to be very high: every object must be verified against the hash value assigned to it when it was written, without slowing reads and writes down. That makes “HighwayHash” an excellent option for this use case.


Encryption

Data encryption is a vast universe within data security, with new encryption and decryption tools introduced almost every other day. Encrypting data is a complex process that differs between information in transit and data at rest on somebody’s server. This is where MinIO comes in, with robust server-side encryption schemes that protect data regardless of where it lives. In simple terms, MinIO provides confidentiality, integrity, and authenticity with negligible impact on performance.

The main ciphers involved in encrypting server- and client-side data are AES-256-GCM, ChaCha20-Poly1305, and AES-CBC. Encrypted objects are protected with AEAD server-side encryption, which makes them tamper-proof. Compatibility is not a worry either, as MinIO is tested against several key management solutions such as HashiCorp Vault. MinIO itself supports SSE-S3 through a Key Management System (KMS). The mechanism is simple: when a client requests SSE-S3, or auto-encryption is enabled, the MinIO server generates a unique key for each object and seals it with a master key managed by the KMS. Because the overhead is extremely low, auto-encryption can be turned on for every application or instance.
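As an illustration, a Vault-backed SSE-S3 setup in MinIO releases of this era was configured through environment variables along the following lines. The variable names follow MinIO's legacy Vault documentation, and the endpoint, role, and key values are placeholders, so check the documentation for your release before relying on them:

```shell
# Hypothetical Vault-backed KMS configuration for SSE-S3 (placeholders).
export MINIO_SSE_VAULT_ENDPOINT="https://vault.example.net:8200"
export MINIO_SSE_VAULT_AUTH_TYPE="approle"
export MINIO_SSE_VAULT_APPROLE_ID="<approle-id>"
export MINIO_SSE_VAULT_APPROLE_SECRET="<approle-secret>"
export MINIO_SSE_VAULT_KEY_NAME="my-minio-key"
# Then start the server as usual, e.g.: minio server /data
```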



WORM

MinIO has built-in protection to prevent mutation of object data and metadata. As soon as WORM (write once, read many) is enabled, it automatically disables all APIs that could mutate data, making the data tamper-proof once it has been written. This functionality helps meet various compliance requirements and supports regulated practical applications.
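In releases of this era, global WORM mode could be enabled with a single environment variable before starting the server. This is a sketch, and newer releases supersede it with S3 object locking, so check the documentation for your version:

```shell
# Enable write-once-read-many mode for the whole deployment.
export MINIO_WORM=on
# Start as usual; object overwrite and delete APIs are now disabled:
# minio server /data
```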


Scalability

MinIO scales in a multi-tenant way: each tenant’s data is stored on a different MinIO instance, so scale is decoupled from the physical limits of any single system. Because MinIO is so easy to deploy, it does not matter whether one has hundreds or even thousands of tenants. The system allocates a single MinIO instance to each tenant, which simplifies both deployment and scaling up or down, and means that at most a few tenants are affected at a time when something goes down, improving maintainability. Recent versions of MinIO also add large bucket support, which is good for large deployments.


Identity Management

When it comes to identity management, MinIO supports the most advanced standards, namely integration with OpenID Connect compatible providers as well as key external IDP vendors. What does that mean? It means access credentials are temporary and rotated frequently, access control is centralized, and passwords are no longer stored in application databases. Access policies are fine-grained and easy to configure, which in turn simplifies supporting multi-instance and multi-tenant deployments.



Continuous Replication

The replication approach has been around for decades. With small improvements here and there, replication has developed over the years, but the problem of scalability still haunts the traditional approach: it does not go beyond a few hundred terabytes. Pointing out the flaw does not solve the disaster recovery problem, though; we still need a strategy that spans data centers, clouds, and geographies. MinIO deals with this through continuous replication, which is suitable for cross-data-center and large-scale deployments. A crucial ingredient is fast, efficient delta computation, which MinIO achieves by leveraging object metadata and Lambda compute notifications. Unlike the batch mode of traditional replication, Lambda notifications ensure that changes are propagated immediately. Continuous replication therefore minimizes data loss if a failure occurs, even for highly dynamic datasets. MinIO’s replication is also multi-vendor, which means the backup location can be either a public cloud or NAS.



Global Federation

In the modern world data is power, and wherever you find an enterprise, you find data. MinIO helps combine its various instances into a single global namespace by unifying them. In concrete terms, up to 32 MinIO servers can be combined into a Distributed Mode set, and several Distributed Mode sets can be combined into a MinIO Server Federation. Each MinIO Server Federation provides a unified namespace and a unified admin interface for its users, and a federation may contain an unlimited number of Distributed Mode sets. The main benefit of this approach is that the object store can be extended to massive scale and geographic distribution without compromising its ability to serve several types of applications, such as MySQL, Hive, S3 Select, Spark, TensorFlow, Presto, and H2O, from a single console.



Multi-Cloud Gateway

A multi-cloud strategy is quickly gaining popularity in the enterprise domain, and most enterprises have started adopting it, with private clouds a part of this growth. So what does that mean for your public cloud services and your bare-metal or virtualized containers? Put simply, they all need to look identical, including the non-S3 providers you come across, like Microsoft, Alibaba, and Google. In this day and age, applications can be moved from one place to another very quickly, but the data that powers them is far less portable. Making that data available wherever the application runs is precisely the problem MinIO addresses.

Be it network-attached storage, bare metal, or any kind of public cloud, MinIO runs smoothly without hindrance. MinIO ensures that the view of your data from your application is identical to the view from the management side when seen through the Amazon S3 API. What’s more, MinIO can upgrade your existing storage infrastructure so that it becomes compatible with Amazon S3. As a result, any organization can unify its data infrastructure, from file to block, behind the Amazon S3 API, and no migration is required to do it.



Architecture

MinIO is designed to be orchestrated by an external service such as Kubernetes. It complies with cloud-native conventions and runs as a lightweight container: the entire server is a static binary of roughly 40 megabytes, and it uses CPU and memory efficiently even under high load. This means one can rest easy while hosting several tenants on shared hardware.


MinIO runs on locally attached drives, such as JBOD or JBOF, inside commodity servers. Interestingly, all of the servers forming a cluster have equal capabilities; this is known as a fully symmetric architecture, which means there are no metadata servers or name nodes. You will not find a separate metadata database in MinIO, and the reason lies in how it works: it writes the metadata together with the object data as objects, leaving no need for a database at all. MinIO performs its operations, including bitrot checking, encryption, and erasure coding, inline and in a strictly consistent manner. All of these small decisions add up to make MinIO an exceptionally resilient and robust system.

Looking more closely at distributed MinIO servers: together they form what is called a MinIO cluster, running one process per node. Each process runs in userspace and uses lightweight co-routines to maintain high concurrency. The drives are first grouped into erasure sets of 16 disks each, and objects are then placed onto these sets using a deterministic hashing algorithm. Features like these make MinIO a favorite for multi-datacenter cloud storage services and large-scale deployments. MinIO clusters run independently, one per tenant, which helps the system maintain its security: because tenants work in isolation, there is less exposure to unnecessary updates, security issues, and problems during an upgrade.
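The drive-grouping arithmetic is easy to sketch in shell. Taking a hypothetical 32-node cluster with 16 drives per node:

```shell
#!/bin/sh
# How a distributed deployment's drives divide into erasure sets.
nodes=32
drives_per_node=16
total_drives=$((nodes * drives_per_node))   # 512 drives in total
set_size=16                                 # disks per erasure set
erasure_sets=$((total_drives / set_size))   # 32 sets of 16 drives
echo "$total_drives drives -> $erasure_sets erasure sets of $set_size"
```

Each object is then hashed onto exactly one of these sets, independently of the others.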


Hardware Recommendations

MinIO’s performance depends on the hardware it runs on. It supports a wide range of equipment, from ARM-based embedded systems to POWER9 servers and high-end x86-64 machines. Those are the bare minimum requirements; if you are aiming for data storage at a larger scale, we recommend the following configuration.

Processor: We recommend dual Intel Xeon Scalable Gold CPUs with at least eight cores per socket.

Memory: As for memory, 128 GB RAM will ensure smooth running.

Network: 25GbE NICs are recommended for high-density deployments, while high-performance deployments require 100GbE NICs.

Drives: As with the network, the drive recommendation for high density is SATA/SAS HDDs, and NVMe SSDs for higher performance, with a minimum of eight drives per server.

A Quickstart Guide for MinIO

For Docker Container


Stable release:

docker pull minio/minio
docker run -p 9000:9000 minio/minio server /data

Bleeding-edge release:

docker pull minio/minio:edge
docker run -p 9000:9000 minio/minio:edge server /data

Important Note: If you do not start the container with the “-it” (interactive TTY) argument, Docker will not display the default keys. Using the default keys with containers is not recommended in any case.


Apple macOS

For the mac operating system, Homebrew is recommended; you can install the MinIO package using Homebrew.

Here is the code:

brew install minio/stable/minio
minio server /data

Important Note: If you have a version of minio previously installed via “brew install minio”, we recommend reinstalling it from “minio/stable/minio”, which is the official tap:

brew uninstall minio
brew install minio/stable/minio

Binary Download: The platform is Apple macOS and the architecture is 64-bit Intel. Download the binary, then run:

chmod 755 minio
./minio server /data

GNU and Linux

The platform is GNU/Linux and the architecture is 64-bit Intel. Download the binary, then run:

chmod +x minio
./minio server /data
For the same platform with the ppc64le architecture, download the corresponding binary and run:
chmod +x minio
./minio server /data

Microsoft Windows

Binary Download: For the Microsoft Windows platform with a 64-bit processor, download the binary, then run:

minio.exe server D:\Photos


FreeBSD

Port: You can install the minio package with the help of pkg. MinIO does not itself build FreeBSD binaries; the port is maintained upstream by FreeBSD. Here is the code:

pkg install minio
sysrc minio_enable=yes
sysrc minio_disks=/home/user/Photos
service minio start

Installation from Source

If you are an advanced user or a developer, installation from source is the best option. Before you install from source, make sure your system has a working Golang environment, version go1.13 at minimum. Here is the code:

GO111MODULE=on go get github.com/minio/minio

Allowing port access for Firewalls

To listen for incoming connections, MinIO uses port 9000 by default. If your platform blocks that port by default, you can enable access as described below.

iptables: On hosts running operating systems like CentOS or RHEL with iptables enabled, you can use the “iptables” command to allow all traffic to specific ports. For example, use the following to allow access to port 9000:

iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
service iptables restart
As for the whole range of ports from 9000 to 9010, use the below command to enable incoming traffic:
iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
service iptables restart

ufw: On Debian-based distros or other hosts with ufw enabled, use the “ufw” command to manage traffic to specific ports. As above, here is the example for port 9000:

ufw allow 9000
As for the range of 9000 to 9010, type the below command:
ufw allow 9000:9010/tcp

Firewall-cmd: For CentOS users on hosts with firewall-cmd enabled, use “firewall-cmd” to allow traffic to specific ports. For port 9000, start by listing the active zones:

firewall-cmd --get-active-zones

The above command shows the active zones. Add a port rule for each zone it returns; if the zone is public, type the following:

firewall-cmd --zone=public --add-port=9000/tcp --permanent

The “--permanent” flag at the end makes the rule persist across firewall start, restart, and reload. Finally, reload the firewall to apply the changes you made:

firewall-cmd --reload

Test with the help of MinIO Browser

The MinIO server comes with a web-based object browser embedded in the system. To make sure your server has started correctly, open the server’s address (port 9000 by default) in your web browser.

Test with MinIO Client mc

For commands like cat, cp, mirror, ls, diff, and many others that give UNIX its comfortable human interface, “mc” offers a modern alternative. It supports several filesystems as well as cloud storage services compatible with Amazon S3. The quickstart guide has several examples to navigate you through the process.

Pre-existing data

The MinIO server lets clients access any data that pre-exists in its data directory, provided it is deployed on a single drive. For example, the command “minio server /mnt/data” gives clients access to all data already present in the “/mnt/data” directory. The same holds for all gateway backends.

Upgrading MinIO

The MinIO server lets you update the MinIO instances in a distributed cluster one at a time. This is called a rolling upgrade, and it allows upgrading the system with no downtime. The process is simple: manually replace the binary with the latest release and restart the servers one by one. That said, we recommend all users run “mc admin update” from the client instead; it updates all the nodes in the cluster and then restarts them. Use the following command:

mc admin update <minio alias, e.g., myminio>

Here are some of the things that you might need to know before upgrading:

  • There is a specific condition for “mc admin update” to work: the user running MinIO needs write access on the parent directory containing the binary. For example, if the binary is located at /usr/local/bin/minio, the upgrade succeeds only with write access to /usr/local/bin.
  • In the case of federated setups, run “mc admin update” one cluster at a time, and do not update mc itself until all the clusters have been updated.
  • Relatedly, when updating servers, make sure mc is upgraded only after all the servers have been upgraded via mc update. The exception is when the MinIO server release notes explicitly state otherwise.
  • If you run MinIO in a Docker or container environment, “mc admin update” is disabled, because container environments have their own mechanism for updating existing containers.
  • If you use Vault as the KMS with MinIO, make sure Vault is upgraded as well; the same applies to etcd for users who use MinIO federation.

MinIO Docker Quickstart Guide

There is only one prerequisite for this process, which is the presence of Docker on your current system. Download your relevant installation file from the sources and then proceed to the second process, which is:

Run Standalone MinIO on Docker

MinIO needs a persistent volume to store its configuration and application data. For testing purposes, you can take a shortcut and launch MinIO by simply passing a directory, as shown in the example below. The directory is created inside the container filesystem when the container starts, and the downside is that all the data is erased when you exit the container.

docker run -p 9000:9000 minio/minio server /data

The process of creating a MinIO container with persistent storage is simple: map a persistent local directory from the host OS into the container’s exported “/data” directory.

For GNU/Linux and macOS users:

docker run -p 9000:9000 --name minio1 \
  -v /mnt/data:/data \
  minio/minio server /data

For Windows users:

docker run -p 9000:9000 --name minio1 \
  -v D:\data:/data \
  minio/minio server /data

Running Distributed MinIO on Docker

There are two ways to deploy distributed MinIO: Swarm mode and Docker Compose. The difference is that Docker Compose creates a multi-container deployment on a single host, while Swarm mode creates a multi-container deployment across multiple hosts. Docker Compose therefore helps users get started with distributed MinIO quickly on their own setup, which is useful for testing and staging environments and is ideal for development. For a firmer, production-level deployment, use distributed MinIO on Swarm.

Some Tips for MinIO Docker

MinIO Secret Keys and Custom Access

To override MinIO’s auto-generated keys, you can pass the access key and secret key explicitly as environment variables. The MinIO server accepts regular strings as access and secret keys.

For GNU/Linux and macOS users:

docker run -p 9000:9000 --name minio1 \
  -e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
  -e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
  -v /mnt/data:/data \
  minio/minio server /data

For Windows Users:

docker run -p 9000:9000 --name minio1 \
  -e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
  -e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
  -v D:\data:/data \
  minio/minio server /data

Running the MinIO Docker as a regular user

Docker provides standardized mechanisms to run containers as a non-root user.

For GNU/Linux and macOS: To run the container as a regular user on Linux and macOS, use the “--user” flag. Before you use it, make sure the chosen user has write permission to ${HOME}/data.

mkdir -p ${HOME}/data
docker run -p 9000:9000 \
  --user $(id -u):$(id -g) \
  --name minio1 \
  -v ${HOME}/data:/data \
  minio/minio server /data

For Windows users: Windows users take a slightly different route. First, use Docker’s integrated Windows authentication and create a container with Active Directory support. Also make sure the AD/Windows user has permission to write to D:\data before you use the “credentialspec=” option.

docker run -p 9000:9000 \
  --name minio1 \
  --security-opt "credentialspec=file://myuser.json" \
  -v D:\data:/data \
  minio/minio server /data

MinIO Secret Keys and Custom Access with the help of Docker secrets: To override MinIO’s auto-generated keys, you can pass the access key and secret key explicitly by creating them as Docker secrets. The MinIO server also accepts regular strings as access and secret keys.

echo "AKIAIOSFODNN7EXAMPLE" | docker secret create access_key -
echo "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" | docker secret create secret_key -

Here is the code for creating a MinIO service to read from Docker secrets using “docker service”:

docker service create --name="minio-service" --secret="access_key" --secret="secret_key" minio/minio server /data

Custom Access and Secret Key Files in MinIO

If you want to use your own secret names, follow the steps above, replacing “access_key” and “secret_key” with your preferred names, for example my_access_key and my_secret_key. To run the service, use the following command:

docker service create --name="minio-service" \
  --secret="my_access_key" \
  --secret="my_secret_key" \
  --env="MINIO_ACCESS_KEY_FILE=my_access_key" \
  --env="MINIO_SECRET_KEY_FILE=my_secret_key" \
  minio/minio server /data

To Retrieve the Container ID

One needs to know the “Container ID” of a specific container if he or she wants to run Docker commands on it. To retrieve the container ID, use:

docker ps -a

Here, the “-a” flag retrieves all containers, including Created, Exited, and Running ones. From the output, identify the container ID you need.

How to Start and Stop Containers

If you want to start a container that has been stopped, use the “docker start” command. You can copy the below line:

docker start <container_id>

Also, use the “docker stop” command to stop a container that is running. Use the below-given command line:

docker stop <container_id>

Container Logs of MinIO

You can use the “docker logs” command to access the MinIO logs. Use the command given below:

docker logs <container_id>

Monitoring the MinIO Docker Container

Use the “docker stats” command for monitoring the MinIO container resources. Use the below-given command:

docker stats <container_id>

MinIO deployment on Kubernetes

There are several options by which we can deploy MinIO on Kubernetes. Let’s take a look at some of them:

  • MinIO Operator: The Operator provides an easy and efficient way to create and update highly available distributed MinIO clusters.
  • Helm Chart: The Helm Chart provides secure MinIO deployment services and customizable options with the help of a single command.

How to Monitor MinIO in Kubernetes

The MinIO server exposes un-authenticated liveness and readiness endpoints, which let Kubernetes natively identify unhealthy MinIO containers. For Prometheus users there is more: MinIO exposes Prometheus-compatible metrics on a separate endpoint, which helps users monitor their MinIO deployments. Another thing to note is that the readiness check is not optional in a MinIO deployment: if a container fails its readiness check, Kubernetes will not route any traffic to it, and in a distributed setup the MinIO server will not respond to readiness checks until all the nodes are reachable.


Published On: June 15th, 2020

About the Author: sanjog