Working With GKE (Google Kubernetes Engine)

Best practices for beginners

Overview

GKE provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The Kubernetes Engine environment consists of multiple machines grouped to form a container cluster.

Benefits of working with GKE

When you run a GKE cluster, you also gain the benefit of advanced cluster management features that Google Cloud provides. These include:

  • Load balancing - for your cluster's Compute Engine instances.

  • Node pools - to designate a subset of nodes within a cluster for additional flexibility.

  • Automatic scaling - of your cluster's node instance count.

  • Automatic upgrades - for your cluster's node software.

  • Node auto repair - to maintain node health and availability.

  • Logging and monitoring - with Cloud Monitoring for visibility into your cluster.

What I'll do

Task 1 - Set a default compute zone
Task 2 - Create a GKE cluster
Task 3 - Get authentication credentials for the cluster
Task 4 - Deploy an application to the cluster

Note: Activate Cloud Shell before you start, and run the following command to verify that it is authenticated correctly: gcloud auth list
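
As a quick sanity check, the commands below list the credentialed account and the active project for your Cloud Shell session. The project check is an optional extra, not one of the lab tasks:

    # List the account that gcloud is currently authenticated as
    gcloud auth list

    # (Optional) Confirm which project the following commands will run against
    gcloud config list project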

Task 1: Set a default compute zone

Your compute zone is an approximate regional location in which your clusters and their resources live. For example, us-central1-a is a zone in the us-central1 region. Run the following commands:
1. Set the default compute region:

gcloud config set compute/region us-central1

2. Set the default compute zone:

gcloud config set compute/zone us-central1-b

Note: You can choose the zone and region according to your preference.
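
If you want to confirm that the defaults were stored, the optional check below (not part of the lab tasks) prints the values gcloud will use:

    # Print the default compute region and zone for this configuration
    gcloud config get-value compute/region
    gcloud config get-value compute/zone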

Task 2: Create a GKE cluster

A cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster.
Run the following command to create the cluster:

  • Create a cluster

    gcloud container clusters create --machine-type=e2-medium --zone=us-central1-b lab-cluster

    Expected output :

       NAME: lab-cluster   
       LOCATION: us-central1-b  
       MASTER_VERSION: 1.25.7-gke.1000   
       MASTER_IP: 104.154.92.110    
       MACHINE_TYPE: e2-medium    
       NODE_VERSION: 1.25.7-gke.1000    
       NUM_NODES: 3    
       STATUS: RUNNING
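
Before moving on, you can optionally list the clusters in your project to confirm that lab-cluster reached the RUNNING status. This is an extra verification step, not one of the lab tasks:

    # List all GKE clusters in the current project with their status
    gcloud container clusters list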
    

Task 3: Get authentication credentials for the cluster

After creating your cluster, you need authentication credentials to interact with it.

  • Authenticate with the cluster:
    gcloud container clusters get-credentials lab-cluster

Expected output :

        Fetching cluster endpoint and auth data.
        kubeconfig entry generated for lab-cluster.
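
At this point kubectl is configured to talk to the cluster. As an optional check that is not part of the original steps, you can list the nodes to confirm connectivity:

    # The cluster should report its three worker nodes in the Ready state
    kubectl get nodes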

Task 4: Deploy an application to the cluster

You can now deploy a containerized application to the cluster. For this lab, you'll run hello-app in your cluster. GKE uses Kubernetes objects to create and manage your cluster's resources. Kubernetes provides the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancing for accessing your application from the internet.

  • To create a new Deployment hello-server from the hello-app container image, run the following kubectl create command:

    kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0

Expected output :

            deployment.apps/hello-server created

This Kubernetes command creates a Deployment object that represents hello-server. In this case, --image specifies a container image to deploy. The command pulls the example image from a Container Registry bucket. gcr.io/google-samples/hello-app:1.0 indicates the specific image version to pull. If a version is not specified, the latest version is used.
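
If you want to see what the Deployment created, the optional commands below (not part of the original lab steps) show the Deployment and the Pod it manages:

    # The hello-server Deployment should report 1/1 replicas ready
    kubectl get deployments

    # The Deployment manages a single Pod running the hello-app container
    kubectl get pods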

  • To create a Kubernetes Service, which is a Kubernetes resource that lets you expose your application to external traffic, run the following kubectl expose command:

kubectl expose deployment hello-server --type=LoadBalancer --port 8080

Expected output :

                service/hello-server exposed

In this command, --port specifies the port that the container exposes, and --type="LoadBalancer" creates a Compute Engine load balancer for your container.
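
Provisioning the load balancer can take a minute or so, and the EXTERNAL-IP column may show <pending> at first. As an optional step, you can watch the Service until an address appears (press Ctrl+C to stop watching):

    # Watch hello-server until EXTERNAL-IP changes from <pending> to an address
    kubectl get service hello-server --watch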

  • To inspect the hello-server Service, run kubectl get:
    kubectl get service

Expected output :

            NAME           TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE
            hello-server   LoadBalancer   10.120.5.78   104.154.98.140   8080:30558/TCP   103s
            kubernetes     ClusterIP      10.120.0.1    <none>           443/TCP          32m


  • To view the application from your web browser, open a new tab and enter the following address, replacing [EXTERNAL-IP] with the EXTERNAL-IP for hello-server:
    http://[EXTERNAL-IP]:8080
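
Alternatively, you can test the endpoint directly from Cloud Shell with curl. The exact response text depends on the hello-app image, but you should see a short greeting from the application:

    # Replace [EXTERNAL-IP] with the address shown by kubectl get service
    curl http://[EXTERNAL-IP]:8080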

Task 5: Delete the cluster

  • To delete the cluster, run the following command:

    gcloud container clusters delete lab-cluster

    When prompted, type Y to confirm.

Note: For more information on deleting GKE clusters, see the Google Kubernetes Engine (GKE) article Deleting a cluster.
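
Optionally, before deleting the cluster you can delete the hello-server Service first, so that the Compute Engine load balancer it created is released as well. This cleanup step is a small addition and not part of the original tasks:

    # Remove the Service; this also tears down the external load balancer
    kubectl delete service hello-server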

Conclusion

This is how you can create clusters on Google Cloud using Google Kubernetes Engine and run multiple containers on them.