Google Container Engine (GKE) aims to be the best place to set up and manage your Kubernetes clusters. When creating a cluster, users have always been able to select options like the nodes’ machine type, disk size, and so on, but those settings applied to all the nodes, making the cluster homogeneous. Until now, it was very difficult to run a cluster with a heterogeneous machine configuration.
That’s where node pools come in, a new feature in Google Container Engine that’s now generally available. A node pool is simply a collection, or “pool,” of machines with the same configuration. Instead of a uniform cluster where every node is identical, you can now have multiple node pools that better suit your needs. Imagine you created a cluster of n1-standard-2 machines and later realize that you need more CPU. You can now easily add a node pool of n1-standard-4 (or bigger) machines to your existing cluster.
All this happens through the new “node-pools” commands available via the gcloud command line tool. Let’s take a deeper look at using this new feature.
Creating your cluster
A node pool must belong to a cluster and all clusters have a default node pool named “default-pool”. So, let’s create a new cluster (we assume you’ve set the project and zone defaults in gcloud):
> gcloud container clusters create work
NAME  ZONE           MASTER_VERSION  MASTER_IP        MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
work  us-central1-f  1.2.3           123.456.789.xxx  n1-standard-1  1.2.3         3          RUNNING
Like before, you can still specify node configuration options, such as “--machine-type” to specify a machine type, or “--num-nodes” to set the initial number of nodes.
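For example (the cluster name “big-work” and the flag values here are illustrative, not from the original walkthrough):

> gcloud container clusters create big-work --machine-type=n1-standard-4 --num-nodes=5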
Creating a new node pool
Once the cluster has been created, you can see its node pools with the new “node-pools” top-level object. (Note: you may need to upgrade your gcloud commands via “gcloud components update” to use these new options.)
> gcloud container node-pools list --cluster=work
NAME          MACHINE_TYPE   DISK_SIZE_GB  NODE_VERSION
default-pool  n1-standard-1  100           1.2.3
Notice that you must now specify a new parameter, “--cluster”. Recall that node pools belong to a cluster, so you must tell node-pools commands which cluster to operate on. You can also set it as the default in config by calling:
> gcloud config set container/cluster work
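With that default set, subsequent node-pools commands can omit the “--cluster” flag, e.g.:

> gcloud container node-pools list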
Also, if you have an existing cluster on GKE, it will have been automatically migrated to a “default-pool” with the original cluster’s node configuration.
Let’s create a new node pool on our “work” cluster with a custom machine type of 2 vCPUs and 12 GB of RAM (in the machine type name “custom-2-12288”, the numbers are the vCPU count and the memory in MB):
> gcloud container node-pools create high-mem --cluster=work --machine-type=custom-2-12288 --disk-size=200 --num-nodes=4
This creates a new node pool with 4 nodes, using custom machine VMs and 200 GB boot disks. Now, when you list your node pools, you get:
> gcloud container node-pools list --cluster=work
NAME          MACHINE_TYPE    DISK_SIZE_GB  NODE_VERSION
default-pool  n1-standard-1   100           1.2.3
high-mem      custom-2-12288  200           1.2.3
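You can also inspect a single pool’s configuration with the “describe” command:

> gcloud container node-pools describe high-mem --cluster=work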
And if you list the nodes in kubectl:
> kubectl get nodes
NAME                                 STATUS  AGE
gke-work-high-mem-d8e4e9a4-xzdy      Ready   2m
gke-work-high-mem-d8e4e9a4-4dfc      Ready   2m
gke-work-high-mem-d8e4e9a4-bv3d      Ready   2m
gke-work-high-mem-d8e4e9a4-5312      Ready   2m
gke-work-default-pool-9356555a-uliq  Ready   1d
With Kubernetes 1.2, the nodes in each node pool are also automatically assigned the node label “cloud.google.com/gke-nodepool”, whose value is the pool’s name.
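That label makes it easy to target the nodes of a particular pool. For example, to list only the nodes in the “high-mem” pool:

> kubectl get nodes -l cloud.google.com/gke-nodepool=high-mem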
More fun with node pools
There are also other, more advanced scenarios for node pools. Suppose you want to upgrade the nodes in your cluster to the latest Kubernetes release, but need finer-grained control of the transition (e.g., to perform A/B testing, or to migrate the pods slowly). When a new release of Kubernetes is available on GKE, simply create a new node pool: newly created node pools get the same version as the cluster master, which is automatically updated to the latest Kubernetes release. Here’s how to create a new node pool that picks up the new version:
> gcloud container node-pools create my-1-2-4-pool --cluster=work --num-nodes=3 --machine-type=n1-standard-4
> gcloud container node-pools list --cluster=work
NAME           MACHINE_TYPE    DISK_SIZE_GB  NODE_VERSION
default-pool   n1-standard-1   100           1.2.3
high-mem       custom-2-12288  200           1.2.3
my-1-2-4-pool  n1-standard-4   100           1.2.4
You can now go to “kubectl” and update your replication controller to schedule your pods with a node selector matching the label “cloud.google.com/gke-nodepool=my-1-2-4-pool”. Your pods will then be rescheduled from the old nodes onto the nodes of the new pool. After verifying the first pods, continue the transition with the rest until all of the old nodes are effectively empty.
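As a minimal sketch, assuming a replication controller named “my-app” whose pods carry the label “app=my-app” (both names are hypothetical), the update could look like this:

> kubectl patch rc my-app -p '{"spec":{"template":{"spec":{"nodeSelector":{"cloud.google.com/gke-nodepool":"my-1-2-4-pool"}}}}}'
> kubectl delete pods -l app=my-app

Patching changes only the pod template, so the existing pods keep their old placement; deleting them lets the replication controller recreate them, and the scheduler places the replacements on the new pool. You can then delete your original node pool: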
> gcloud container node-pools delete default-pool --cluster=work
> gcloud container node-pools list --cluster=work
NAME           MACHINE_TYPE    DISK_SIZE_GB  NODE_VERSION
high-mem       custom-2-12288  200           1.2.3
my-1-2-4-pool  n1-standard-4   100           1.2.4
And voilà, all of your pods are now running on nodes with the latest version of Kubernetes!
Conclusion
The new node pools feature in GKE enables more powerful and flexible scenarios for your Kubernetes clusters. As always, we’d love to hear your feedback and help guide us on what you’d like to see in the product.