Born to be cloud

Creating robust digital systems that flourish in an evolving landscape. Our services, spanning from Cloud to Applications, Data, and AI, are trusted by 150+ customers. Collaborating with our global partners, we transform possibilities into tangible outcomes.

Experience our services.
We can help you make the move: design, build, and migrate to the cloud.

Cloud Migration

Maximise your investment in the cloud and achieve cost-effectiveness, on-demand scalability, unlimited computing, and enhanced security.

Artificial Intelligence / Machine Learning

Infuse AI & ML into your business to solve complex problems, drive top-line growth, and innovate mission-critical applications.

Data & Analytics

Discover the hidden gems in your data with cloud-native analytics. Our comprehensive solutions cover data processing, analysis, and visualization.

Generative Artificial Intelligence (GenAI)

Drive measurable business success with GenAI, where creative solutions lead to tangible outcomes: improved operational efficiency, enhanced customer satisfaction, and accelerated time-to-market.

Ankercloud: Partners with AWS, GCP, and Azure

We excel through partnerships with industry giants like AWS, GCP, and Azure, offering innovative solutions backed by leading cloud technologies.


Awards and Competencies

Competencies

Awards

Our Specializations & Expertise

AWS Premier Partner Badge
As an Advanced Tier AWS Services Partner, we hold 100+ AWS certifications, 7 AWS Competencies, and 7 partner programs.
Google Cloud Premier Partner Badge
We are a Premier Level partner for Google Cloud with additional competencies in Infrastructure and Machine Learning
AWS
HPC
Cloud
Bio Tech
Machine Learning

High Performance Computing using Parallel Cluster, Infrastructure Set-up

AWS
Cloud Migration

gocomo Migrates Social Data Platform to AWS for Performance & Scalability with Ankercloud

Google Cloud
SaaS
Cost Optimization
Cloud

Migrating a SaaS platform from on-premises to GCP

AWS
HPC

Benchmarking AWS performance to run environmental simulations over Belgium

Happy Clients and Counting!


"Ankercloud is working as a direct extension of our team. Their strong technical know-how, agile approach, and cross-cloud experience have accelerated our cloud journey - from DevOps to AIML Development. They are a valuable partner to have."

Serge N'Silu
Member of the Board of Bitech AG

“It is almost unbelievable how we could build a SaaS solution for Antibody patent analysis at AWS in only a few months, from nothing to 100% up and running. Many thanks to the team at Ankercloud, AWS Rising Star Partner 2023”

Johannes Fraaije
Founder and Chief Science Advisor, Iridescent Bio

"Whatever questions we had, Ankercloud was really proactive about getting us the right person to talk to. Whenever we had an issue, they did a great job of mitigating the impact and the cost and finding us a good solution.”

Haris Bravo
Head of Development, gocomo

“Ankercloud has been very helpful and understanding. All interactions have been smooth and enjoyable.”

Torbjörn Svensson
Head of Development

"Overall, the adoption of cloud infrastructure empowers our research group to propel our scientific pursuits with greater efficiency and effectiveness."

Prof. Jörn Wilms
Professor of Astronomy and Astrophysics

Check out our blog

Blog

Automating AWS Amplify: Streamlining CI/CD with Shell & Expect Scripts

Introduction

Automating cloud infrastructure and deployments is a crucial aspect of DevOps. AWS Amplify provides a powerful framework for developing and deploying full-stack applications. However, initializing and managing an Amplify app manually can be time-consuming, especially when integrating it into a CI/CD pipeline like Jenkins.

This blog explores how we automated the Amplify app creation process in headless mode using shell scripting and Expect scripts, eliminating interactive prompts to streamline our pipeline.

Setting Up AWS and Amplify CLI

1. Configure AWS Credentials

Before initializing an Amplify app, configure AWS CLI with your Access Key and Secret Key:

aws configure
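On a CI agent, the same credentials can instead be supplied non-interactively through environment variables; a minimal sketch (the values below are placeholders, not real credentials):

```shell
#!/bin/bash
# Non-interactive alternative to `aws configure` for build agents.
# All values are placeholders.
export AWS_ACCESS_KEY_ID="AKIA_PLACEHOLDER"
export AWS_SECRET_ACCESS_KEY="placeholder-secret"
export AWS_DEFAULT_REGION="us-east-1"
echo "region set to $AWS_DEFAULT_REGION"
# aws sts get-caller-identity   # uncomment to verify the credentials resolve
```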

2. Install and Configure Amplify CLI

To install Amplify CLI and configure it:

npm install -g @aws-amplify/cli

amplify configure

This will prompt you to create an IAM user and set up authentication.

Automating Amplify App Creation

1. Initialize the Amplify App Using a Script

We created a shell script amplify-init.sh to automate the initialization process.

amplify-init.sh

#!/bin/bash
set -e
IFS='|'

AMPLIFY_NAME=amplifyapp
API_FOLDER_NAME=amplifyapp
BACKEND_ENV_NAME=staging
AWS_PROFILE=default
REGION=us-east-1

AWSCLOUDFORMATIONCONFIG="{\
\"configLevel\":\"project\",\
\"useProfile\":true,\
\"profileName\":\"${AWS_PROFILE}\",\
\"region\":\"${REGION}\"\
}"

AMPLIFY="{\
\"projectName\":\"${AMPLIFY_NAME}\",\
\"envName\":\"${BACKEND_ENV_NAME}\",\
\"defaultEditor\":\"Visual Studio Code\"\
}"

amplify init --amplify $AMPLIFY --providers $AWSCLOUDFORMATIONCONFIG --yes

Run the script:

./amplify-init.sh
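The backslash-escaped JSON in the script is easy to get wrong. As a sketch of an alternative, the same provider config can be built with a plain printf format string, producing an identical value:

```shell
#!/bin/bash
# Build AWSCLOUDFORMATIONCONFIG without hand-escaped quotes; the resulting
# string is identical to the one assembled in amplify-init.sh above.
AWS_PROFILE=default
REGION=us-east-1
AWSCLOUDFORMATIONCONFIG=$(printf \
  '{"configLevel":"project","useProfile":true,"profileName":"%s","region":"%s"}' \
  "$AWS_PROFILE" "$REGION")
echo "$AWSCLOUDFORMATIONCONFIG"
# amplify init --amplify "$AMPLIFY" --providers "$AWSCLOUDFORMATIONCONFIG" --yes
```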

2. Automating API and Storage Integration

Since Amplify prompts users for inputs, we used Expect scripts to automate API and storage creation.

add-api-response.exp

#!/usr/bin/expect
spawn ./add-api.sh
expect "? Please select from one of the below mentioned services:\r"
send -- "GraphQL\r"
expect eof

add-storage-response.exp

#!/usr/bin/expect
spawn ./add-storage.sh
expect "? Select from one of the below mentioned services:\r"
send -- "Content\r"
expect eof

These scripts eliminate manual input, making Amplify API and storage additions fully automated.

Automating Schema Updates

One of the biggest challenges was automating schema.graphql updates without manual intervention. The usual approach required engineers to manually upload the file, leading to potential errors.

To solve this, we automated the process with an Amplify Pull script.

amplify-pull.sh

#!/bin/bash
set -e
IFS='|'

AMPLIFY_NAME=amp3
API_FOLDER_NAME=amp3
BACKEND_ENV_NAME=prod
AWS_PROFILE=default
REGION=us-east-1
APP_ID=dzvchzih477u2

AWSCLOUDFORMATIONCONFIG="{\
\"configLevel\":\"project\",\
\"useProfile\":true,\
\"profileName\":\"${AWS_PROFILE}\",\
\"region\":\"${REGION}\"\
}"

AMPLIFY="{\
\"projectName\":\"${AMPLIFY_NAME}\",\
\"appId\":\"${APP_ID}\",\
\"envName\":\"${BACKEND_ENV_NAME}\",\
\"defaultEditor\":\"code\"\
}"

amplify pull --amplify $AMPLIFY --providers $AWSCLOUDFORMATIONCONFIG --yes

This script ensures that the latest schema changes are pulled and updated in the pipeline automatically.

Integrating with Jenkins

Since this automation was integrated with a Jenkins pipeline, we enabled "This project is parameterized" to allow file uploads directly into the workspace.

  1. Upload the schema.graphql file via Jenkins UI.
  2. The script pulls the latest changes and updates Amplify automatically.

This method eliminates manual intervention, ensuring consistency in schema updates across multiple environments.
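As an illustration of step 1, a small hypothetical helper can stage the uploaded file into the Amplify API folder before the pull runs; the folder name here matches API_FOLDER_NAME from amplify-pull.sh, and the push is commented out because it needs live AWS credentials:

```shell
#!/bin/bash
set -e
# Hypothetical helper: copy the schema uploaded into the Jenkins workspace
# into the Amplify API folder expected by the pipeline.
stage_schema() {
  local src=$1 api_folder=$2
  local dst="amplify/backend/api/${api_folder}/schema.graphql"
  mkdir -p "$(dirname "$dst")"
  cp "$src" "$dst"
  echo "staged $dst"
}

# Demo with a placeholder schema standing in for the Jenkins file upload.
echo 'type Query { hello: String }' > schema.graphql
stage_schema schema.graphql amp3
# amplify push --yes
```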

Conclusion

By automating AWS Amplify workflows with shell scripting and Expect scripts, we achieved:

  • Fully automated Amplify app creation
  • Eliminated manual schema updates
  • Seamless integration with Jenkins pipelines
  • Faster deployments with reduced errors

This approach significantly minimized manual effort, ensuring that updates were streamlined and efficient. If you're using Amplify for your projects, automation like this can save countless hours and improve developer productivity.

Have questions or feedback? Drop a comment below! 

Feb 27, 2025


Blog

Configuring GKE Ingress: Traffic Routing, Security, and Load Balancing

GKE Ingress acts as a bridge between external users and your Kubernetes services. It allows you to define rules for routing traffic based on hostnames and URL paths, enabling you to direct requests to different backend services seamlessly.

A single GKE Ingress controller routes traffic to multiple services by identifying the target backend based on hostname and URL paths. It supports multiple certificates for different domains.

FrontendConfig enables automatic redirection from HTTP to HTTPS, ensuring encrypted communication between the web browser and the Ingress.
BackendConfig allows you to configure advanced settings for backend services. It provides additional options beyond standard service configuration, enabling better control over traffic handling, security, and load-balancing behavior.

Set up GKE Ingress with an Application Load Balancer

To specify an Ingress class, use the kubernetes.io/ingress.class annotation. The "gce" class deploys an external Application Load Balancer.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"

Configure FrontendConfiguration:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true

The FrontendConfig resource in GKE enables automatic redirection from HTTP to HTTPS, ensuring secure communication between clients and services.

Associating FrontendConfig with your Ingress

You can associate a FrontendConfig with an Ingress by annotating the Ingress with networking.gke.io/v1beta1.FrontendConfig.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"

Configure Backend Configuration:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 40

A BackendConfig can set the backend service timeout period in seconds; the manifest above specifies a timeout of 40 seconds.

Associate the backend configuration with service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {"app":"my-backendconfig"}}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  ports:
  - name: app
    port: 80
    protocol: TCP
    targetPort: 50000

We can specify a custom BackendConfig for one or more ports using a key that matches the port’s name or number. The Ingress controller uses the specific BackendConfig when it creates a load balancer backend service for a referenced Service port.

Creating an Ingress with a Google-Managed SSL Certificate

To set up a Google-managed SSL certificate and link it to an Ingress, follow these steps:

  • Create a ManagedCertificate resource in the same namespace as the Ingress.
  • Associate the ManagedCertificate with the Ingress by adding the annotation networking.gke.io/managed-certificates to the Ingress resource.

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
  - hello.example.com
  - world.example.com

Associate the SSL certificate with the Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"

The networking.gke.io/managed-certificates annotation links the Ingress to the ManagedCertificate created above.

Assign Static IP to Ingress

When hosting a web server on a domain, the application’s external IP address should be static to ensure it remains unchanged.

By default, GKE assigns ephemeral external IP addresses for HTTP applications exposed via an Ingress. However, these addresses can change over time. If you intend to run your application long-term, it is essential to use a static external IP address for stability.

Create a global static IP with a specific name (e.g. web-static-ip), either from the GCP console or with gcloud compute addresses create web-static-ip --global, and associate it with the Ingress by adding the kubernetes.io/ingress.global-static-ip-name annotation.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"

Google Cloud Armor Ingress security policy

Google Cloud Armor security policies safeguard your load-balanced applications against web-based attacks. Once configured, a security policy can be referenced in a BackendConfig to apply protection to specific backends.

To enable a security policy, add its name to the BackendConfig. The following example configures a security policy named security-policy:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  namespace: cloud-armor-how-to
  name: my-backendconfig
spec:
  securityPolicy:
    name: "security-policy"

User-defined request/response headers

A BackendConfig can be used to define custom request headers that the load balancer appends to requests before forwarding them to the backend services.

These custom headers are only added to client requests and not to health check probes. If a backend requires a specific header for authorization and it is absent in the health check request, the health check may fail.

To configure user-defined request headers, specify them under the customRequestHeaders/customResponseHeaders property in the BackendConfig resource. Each header should be defined as a header-name:header-value string.

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  customRequestHeaders:
    headers:
    - "X-Client-Region:{client_region}"
    - "X-Client-City:{client_city}"
    - "X-Client-CityLatLong:{client_city_lat_long}"

Custom response headers are configured the same way:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  customResponseHeaders:
    headers:
    - "Strict-Transport-Security: max-age=28800; includeSubDomains"

Feb 25, 2025


Blog

Automating Kubernetes Deployments with Argo CD

Argo CD is a declarative, GitOps-based continuous delivery tool designed for Kubernetes. It allows you to manage and automate application deployment using Git as the single source of truth. Argo CD continuously monitors your Git repository and ensures the Kubernetes environment matches the desired state described in your manifest.

Step 1: Create and Connect to a Kubernetes Cluster

Steps to Create and Connect

Create a Kubernetes Cluster
If you’re using Google Kubernetes Engine (GKE), you can create a cluster using the following command:

gcloud container clusters create <cluster name> --zone <zone of cluster>

Replace <cluster name> with your desired cluster name and <zone of cluster> with your preferred zone.

Connect to the Cluster
Once the cluster is created, configure kubectl (the Kubernetes CLI) to interact with it:

gcloud container clusters get-credentials argo-test --zone us-central1-c

Verify the connection by listing the nodes in the cluster:
kubectl get nodes

Step 2: Install Argo CD

Installing Argo CD means deploying its server, UI, and supporting components as Kubernetes resources in a namespace.

Steps to Install

Create a Namespace for Argo CD
A namespace in Kubernetes is a logical partition to organize resources:

kubectl create namespace argocd

Install Argo CD Components
Use the official installation manifest to deploy all Argo CD components:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

This deploys key components like the API server, repository server, application controller, and web UI.

Step 3: Expose Argo CD Publicly

By default, the argocd-server service is configured as a ClusterIP, making it accessible only within the cluster. You need to expose it for external access.

Options to Expose Argo CD

Option-1 LoadBalancer
Change the service type to LoadBalancer to get an external IP address:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

Option-2 Ingress
For advanced routing and SSL support, create an Ingress resource. This approach is recommended if you want to add HTTPS to your setup.

Option-3 Port Forwarding
If you only need temporary access:

kubectl port-forward svc/argocd-server -n argocd 8080:80

Step 4: Access the Argo CD Dashboard

Retrieve the External IP
After exposing the service as a LoadBalancer, get the external IP address:

kubectl get svc argocd-server -n argocd

Login Credentials

Username: admin

Password: Retrieve it from the secret:

kubectl get secret argocd-initial-admin-secret -n argocd -o yaml

Decode the base64 password:

echo "<base64_encoded_password>" | base64 --decode
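The decode step in isolation, shown here with a placeholder value rather than the real secret:

```shell
#!/bin/bash
# Round-trip demo of the base64 decode step (placeholder, not a real password).
encoded=$(printf 'not-a-real-password' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
```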

Access the dashboard by navigating to https://<external-ip> in your browser.

Step 5: Install the Argo CD CLI

The Argo CD CLI enables you to interact with the Argo CD server programmatically for managing clusters, applications, and configurations.

Steps to Install

Download the CLI

curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64

Install the CLI

sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd

rm argocd-linux-amd64

Verify Installation

argocd version

Step 6: Add a Kubernetes Cluster to Argo CD

Argo CD requires access to the Kubernetes cluster where it will deploy applications.

Steps to Add

Log in to Argo CD via CLI

argocd login <argocd-server-url>:<port> --username admin --password <password>

Get the Kubernetes Context

kubectl config get-contexts -o name

Add the Cluster

argocd cluster add <context-name>

This command creates a service account (argocd-manager) with cluster-wide permissions to deploy applications.

To verify the added cluster via the CLI, use the command below; alternatively, navigate in the UI dashboard to Settings -> Clusters.

argocd cluster list

Step 7: Add a Git Repository

The Git repository serves as the source of truth for application manifests.

Steps to Add

  1. Navigate to Repositories
    Log in to the Argo CD dashboard, go to Settings -> Repositories, and click Connect Repo.
  2. Enter Repository Details
  • Choose a connection method (e.g., HTTPS or SSH).
  • Provide the repository URL and credentials.
  • Assign a project to organize repositories.

Step 8: Create an Application in Argo CD

An Argo CD application represents the Kubernetes resources defined in a Git repository.

Steps to Create

Click New App and enter the application details:

  • Application Name: e.g., hello-world
  • Project: Assign the application to a project.
  • Source: Select the Git repository and specify the manifest file path.
  • Destination: Select the cluster and namespace for deployment.

Then:

  1. Enable Auto-Sync policy
    Enable this option for automated synchronization between the Git repository and the Kubernetes cluster.
  2. Create the Application
    Click Create. Argo CD will deploy the application and monitor its state.
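The same application can also be created from the Argo CD CLI instead of the UI; the repository URL and names below are illustrative placeholders, and the command is printed rather than executed because it requires a reachable Argo CD server:

```shell
#!/bin/bash
# CLI equivalent of the UI steps above (repo URL and names are placeholders).
cmd=$(cat <<'EOF'
argocd app create hello-world \
  --repo https://github.com/example/manifests.git \
  --path hello-world \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy automated
EOF
)
echo "$cmd"
```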

Feb 25, 2025


AWS, Amplify, DevOps, Automation, CI CD, Shell Scripting

Automating AWS Amplify: Streamlining CI/CD with Shell & Expect Scripts

Feb 27, 2025
00

Introduction

Automating cloud infrastructure and deployments is a crucial aspect of DevOps. AWS Amplify provides a powerful framework for developing and deploying full-stack applications. However, initializing and managing an Amplify app manually can be time-consuming, especially when integrating it into a CI/CD pipeline like Jenkins.

This blog explores how we automated the Amplify app creation process in headless mode using shell scripting and Expect scripts, eliminating interactive prompts to streamline our pipeline.

Setting Up AWS and Amplify CLI

1. Configure AWS Credentials

Before initializing an Amplify app, configure AWS CLI with your Access Key and Secret Key:

aws configure

2. Install and Configure Amplify CLI

To install Amplify CLI and configure it:

npm install -g @aws-amplify/cli

amplify configure

This will prompt you to create an IAM user and set up authentication.

Automating Amplify App Creation

1. Initialize the Amplify App Using a Script

We created a shell script amplify-init.sh to automate the initialization process.

amplify-init.sh

#!/bin/bash

set -e

IFS='|'

AMPLIFY_NAME=amplifyapp

API_FOLDER_NAME=amplifyapp

BACKEND_ENV_NAME=staging

AWS_PROFILE=default

REGION=us-east-1

AWSCLOUDFORMATIONCONFIG="{\

\"configLevel\":\"project\",\

\"useProfile\":true,\

\"profileName\":\"${AWS_PROFILE}\",\

\"region\":\"${REGION}\"\

}"

AMPLIFY="{\

\"projectName\":\"${AMPLIFY_NAME}\",\

\"envName\":\"${BACKEND_ENV_NAME}\",\

\"defaultEditor\":\"Visual Studio Code\"\

}"

amplify init --amplify $AMPLIFY --providers $AWSCLOUDFORMATIONCONFIG --yes

Run the script:

./amplify-init.sh

2. Automating API and Storage Integration

Since Amplify prompts users for inputs, we used Expect scripts to automate API and storage creation.

add-api-response.exp

#!/usr/bin/expect

spawn ./add-api.sh

expect "? Please select from one of the below mentioned services:\r"

send -- "GraphQL\r"

expect eof

add-storage-response.exp

#!/usr/bin/expect

spawn ./add-storage.sh

expect "? Select from one of the below mentioned services:\r"

send -- "Content\r"

expect eof

These scripts eliminate manual input, making Amplify API and storage additions fully automated.

Automating Schema Updates

One of the biggest challenges was automating schema.graphql updates without manual intervention. The usual approach required engineers to manually upload the file, leading to potential errors.

To solve this, we automated the process with an Amplify Pull script.

amplify-pull.sh

#!/bin/bash

set -e

IFS='|'

AMPLIFY_NAME=amp3

API_FOLDER_NAME=amp3

BACKEND_ENV_NAME=prod

AWS_PROFILE=default

REGION=us-east-1

APP_ID=dzvchzih477u2

AWSCLOUDFORMATIONCONFIG="{\

\"configLevel\":\"project\",\

\"useProfile\":true,\

\"profileName\":\"${AWS_PROFILE}\",\

\"region\":\"${REGION}\"\

}"

AMPLIFY="{\

\"projectName\":\"${AMPLIFY_NAME}\",\

\"appId\":\"${APP_ID}\",\

\"envName\":\"${BACKEND_ENV_NAME}\",\

\"defaultEditor\":\"code\"\

}"

amplify pull --amplify $AMPLIFY --providers $AWSCLOUDFORMATIONCONFIG --yes

This script ensures that the latest schema changes are pulled and updated in the pipeline automatically.

Integrating with Jenkins

Since this automation was integrated with a Jenkins pipeline, we enabled "This project is parameterized" to allow file uploads directly into the workspace.

  1. Upload the schema.graphql file via Jenkins UI.
  2. The script pulls the latest changes and updates Amplify automatically.

This method eliminates manual intervention, ensuring consistency in schema updates across multiple environments.

Conclusion

By automating AWS Amplify workflows with shell scripting and Expect scripts, we achieved:  Fully automated Amplify app creation
  Eliminated manual schema updates
  Seamless integration with Jenkins pipelines
  Faster deployments with reduced errors

This approach significantly minimized manual effort, ensuring that updates were streamlined and efficient. If you're using Amplify for your projects, automation like this can save countless hours and improve developer productivity.

Have questions or feedback? Drop a comment below! 

Read Blog
GKE Ingress, Kubernetes Networking, Google Cloud, Load Balancing, Cloud Security

Configuring GKE Ingress: Traffic Routing, Security, and Load Balancing

Feb 25, 2025
00

GKE Ingress acts as a bridge between external users and your Kubernetes services. It allows you to define rules for routing traffic based on hostnames and URL paths, enabling you to direct requests to different backend services seamlessly.

A single GKE Ingress controller routes traffic to multiple services by identifying the target backend based on hostname and URL paths. It supports multiple certificates for different domains.

FrontendConfig enables automatic redirection from HTTP to HTTPS, ensuring encrypted communication between the web browser and the Ingress.
BackendConfig that allows you to configure advanced settings for backend services. It provides additional options beyond standard service configurations, enabling better control over traffic handling, security, and load balancing behavior.

Setup GKE ingress with application loadbalancer

To specify an Ingress class, you must use the kubernetes.io/ingress.class annotation.The “gce” class deploys an external Application Load Balancer

apiVersion: networking.k8s.io/v1

kind: Ingress

metadata:

name: my-ingress

annotations:

kubernetes.io/ingress.class: “gce”

Configure FrontendConfiguration:

apiVersion: networking.gke.io/v1beta1

kind: FrontendConfig

metadata:

name: my-frontend-config

spec:

redirectToHttps:

enabled: true

The FrontendConfig resource in GKE enables automatic redirection from HTTP to HTTPS, ensuring secure communication between clients and services.

Associating FrontendConfig with your Ingress

You can associate a FrontendConfig with an Ingress. Use the “networking.gke.io/v1beta1.FrontendConfig” to annotate with the ingress.

apiVersion: networking.k8s.io/v1

kind: Ingress

metadata:

annotations:

networking.gke.io/v1beta1.FrontendConfig: “my-frontend-config”

Configure Backend Configuration:

apiVersion: cloud.google.com/v1

kind: BackendConfig

metadata:

name: my-backendconfig

spec:

timeoutSec: 40

BackendConfig to set a backend service timeout period in seconds.The following BackendConfig manifest specifies a timeout of 40 seconds.

Associate the backend configuration with service:

apiVersion: v1

kind: Service

metadata:

annotations:

cloud.google.com/backend-config: ‘{“ports”:{“my-backendconfig”}}’

cloud.google.com/neg: ‘{“ingress”: true}’

spec:

ports:

- name: app

port: 80

protocol: TCP

targetPort: 50000

We can specify a custom BackendConfig for one or more ports using a key that matches the port’s name or number. The Ingress controller uses the specific BackendConfig when it creates a load balancer backend service for a referenced Service port.

Creating an Ingress with a Google-Managed SSL Certificate

To set up a Google-managed SSL certificate and link it to an Ingress, follow these steps:

  • Create a ManagedCertificate resource in the same namespace as the Ingress.
  • Associate the ManagedCertificate with the Ingress by adding the annotation networking.gke.io/managed-certificates to the Ingress resource.

apiVersion: networking.gke.io/v1

kind: ManagedCertificate

metadata:

name: managed-cert

spec:

domains:

- hello.example.com

- world.example.com

Associate the SSL with Ingress

apiVersion: networking.k8s.io/v1

kind: Ingress

metadata:

name: ingress

annotations:

networking.gke.io/v1beta1.FrontendConfig: “my-frontend-config”

networking.gke.io/managed-certificates: managed-cert

kubernetes.io/ingress.class: “gce”

associate it with the managed-certificate by adding an annotation.

Assign Static IP to Ingress

When hosting a web server on a domain, the application’s external IP address should be static to ensure it remains unchanged.

By default, GKE assigns ephemeral external IP addresses for HTTP applications exposed via an Ingress. However, these addresses can change over time. If you intend to run your application long-term, it is essential to use a static external IP address for stability.

Create a global static ip from gcp console with specific name eg: web-static-ip and associate it with ingress by adding the global-static-ip-name annotation.

apiVersion: networking.k8s.io/v1

kind: Ingress

metadata:

name: ingress

annotations:

networking.gke.io/v1beta1.FrontendConfig: “my-frontend-config”

networking.gke.io/managed-certificates: managed-cert

kubernetes.io/ingress.class: “gce”

kubernetes.io/ingress.global-static-ip-name: “web-static-ip”

Google Cloud Armor Ingress security policy

Google Cloud Armor security policies safeguard your load-balanced applications against web-based attacks. Once configured, a security policy can be referenced in a BackendConfig to apply protection to specific backends.

To enable a security policy, add its name to the BackendConfig. The following example configures a security policy named security-policy:

apiVersion: cloud.google.com/v1

kind: BackendConfig

metadata:

namespace: cloud-armor-how-to

name: my-backendconfig

spec:

securityPolicy:

name: “security-policy”

User-defined request/response headers

A BackendConfig can be used to define custom request headers that the load balancer appends to requests before forwarding them to the backend services.

These custom headers are only added to client requests and not to health check probes. If a backend requires a specific header for authorization and it is absent in the health check request, the health check may fail.

To configure user-defined request headers, specify them under the customRequestHeaders/customResponseHeaders property in the BackendConfig resource. Each header should be defined as a header-name:header-value string.

apiVersion: cloud.google.com/v1

kind: BackendConfig

metadata:

name: my-backendconfig

spec:

customRequestHeaders:

headers:

- “X-Client-Region:{client_region}”

- “X-Client-City:{client_city}”

- “X-Client-CityLatLong:{client_city_lat_long}”

apiVersion: cloud.google.com/v1

kind: BackendConfig

metadata:

name: my-backendconfig

spec:

customResponseHeaders:

headers:

- “Strict-Transport-Security: max-age=28800; includeSubDomains”

Read Blog
Kubernetes, ArgoCD, GitOps, DevOps, ContinuousDelivery

Automating Kubernetes Deployments with Argo CD

Feb 25, 2025
00

Argo CD is a declarative, GitOps-based continuous delivery tool designed for Kubernetes. It allows you to manage and automate application deployment using Git as the single source of truth. Argo CD continuously monitors your Git repository and ensures the Kubernetes environment matches the desired state described in your manifest.

Step 1: Create and Connect to a Kubernetes Cluster

Steps to Create and Connect

Create a Kubernetes Cluster
If you’re using Google Kubernetes Engine (GKE), you can create a cluster using the following command:

gcloud container clusters create <cluster name> — zone <zone of cluster>

Replace <cluster name> with your desired cluster name and <zone of cluster> with your preferred zone.

Connect to the Cluster
Once the cluster is created, configure kubectl (the Kubernetes CLI) to interact with it:

gcloud container clusters get-credentials argo-test — zone us-central1-c

Verify the connection by listing the nodes in the cluster:
kubectl get nodes

Step 2: Install Argo CD

Installing Argo CD means deploying its server, UI, and supporting components as Kubernetes resources in a namespace.

Steps to Install

Create a Namespace for Argo CD
A namespace in Kubernetes is a logical partition to organize resources:

kubectl create namespace argocd

Install Argo CD Components
Use the official installation manifest to deploy all Argo CD components:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

This deploys key components like the API server, repository server, application controller, and web UI.

Step 3: Expose Argo CD Publicly

By default, the argocd-server service is configured as a ClusterIP, making it accessible only within the cluster. You need to expose it for external access.

Options to Expose Argo CD

Option-1 LoadBalancer
Change the service type to LoadBalancer to get an external IP address:

kubectl patch svc argocd-server -n argocd -p ‘{“spec”: {“type”: “LoadBalancer”}}’

Ingress
For advanced routing and SSL support, create an Ingress resource. This approach is recommended if you want to add HTTPS to your setup.

Option-2 Port Forwarding
If you only need temporary access:

kubectl port-forward svc/argocd-server -n argocd 8080:80

Step 4: Access the Argo CD Dashboard

Retrieve the External IP
After exposing the service as a LoadBalancer, get the external IP address:

kubectl get svc argocd-server -n argocd

Login Credentials

Username: admin

Password: Retrieve it from the secret:

kubectl get secret argocd-initial-admin-secret -n argocd -o yaml

Decode the base64 password:

echo "<base64_encoded_password>" | base64 --decode
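The two steps above can also be combined into a single pipeline with a jsonpath query, which extracts and decodes only the password field (assumes kubectl access to the cluster; the final line demonstrates the decode step on a sample value):

```shell
# Fetch just the password field from the secret and decode it in one step.
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath="{.data.password}" | base64 --decode
fi

# The decode step on its own, with a sample base64 string:
echo "aGVsbG8=" | base64 --decode   # prints "hello"
```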

Access the dashboard by navigating to https://<external-ip> in your browser.

Step 5: Install the Argo CD CLI

The Argo CD CLI enables you to interact with the Argo CD server programmatically for managing clusters, applications, and configurations.

Steps to Install

Download the CLI

curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64

Install the CLI

sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd

rm argocd-linux-amd64

Verify Installation

argocd version

Step 6: Add a Kubernetes Cluster to Argo CD

Argo CD requires access to the Kubernetes cluster where it will deploy applications.

Steps to Add

Log in to Argo CD via CLI

argocd login <argocd-server-url>:<port> --username admin --password <password>

Get the Kubernetes Context

kubectl config get-contexts -o name

Add the Cluster

argocd cluster add <context-name>

This command creates a service account (argocd-manager) with cluster-wide permissions to deploy applications.

To verify the added cluster via the CLI, use the command below; alternatively, navigate to Settings -> Clusters in the UI dashboard.

argocd cluster list

Step 7: Add a Git Repository

The Git repository serves as the source of truth for application manifests.

Steps to Add

  1. Navigate to Repositories
    Log in to the Argo CD dashboard, go to Settings -> Repositories, and click Connect Repo.
  2. Enter Repository Details
  • Choose a connection method (e.g., HTTPS or SSH).
  • Provide the repository URL and credentials.
  • Assign a project to organize repositories.
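The same connection can be made from the CLI with `argocd repo add` (the repository URL and credentials below are placeholders; a prior `argocd login` is required):

```shell
# Hypothetical repository details; substitute your own.
REPO_URL="https://github.com/example/manifests.git"
GIT_USERNAME="example-user"
GIT_TOKEN="example-token"

if command -v argocd >/dev/null 2>&1; then
  # Private repos need credentials; public repos need only the URL.
  argocd repo add "$REPO_URL" --username "$GIT_USERNAME" --password "$GIT_TOKEN"
fi
```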

Step 8: Create an Application in Argo CD

An Argo CD application represents the Kubernetes resources defined in a Git repository.

Steps to Create

  1. Click New App
  2. Enter the application details:
  • Application Name: e.g., hello-world
  • Project: Assign the application to a project.
  • Source: Select the Git repository and specify the manifest file path.
  • Destination: Select the cluster and namespace for deployment.
  3. Enable the Auto-Sync Policy
    Enable this option for automated synchronization between the Git repository and the Kubernetes cluster.
  4. Create the Application
    Click Create. Argo CD will deploy the application and monitor its state.
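The same application can be created from the CLI; all values below are placeholders mirroring the UI fields, and `--sync-policy automated` corresponds to enabling auto-sync:

```shell
# Hypothetical values mirroring the UI form fields.
APP_NAME="hello-world"
REPO_URL="https://github.com/example/manifests.git"

if command -v argocd >/dev/null 2>&1; then
  argocd app create "$APP_NAME" \
    --repo "$REPO_URL" \
    --path hello-world \
    --dest-server https://kubernetes.default.svc \
    --dest-namespace default \
    --sync-policy automated
fi
```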


FAQs

What are the benefits of using cloud computing services?
Some benefits of using cloud computing services include cost savings, scalability, flexibility, reliability, and increased collaboration.

How does Ankercloud handle data privacy and compliance?
Ankercloud takes data privacy and compliance seriously and adheres to industry best practices and standards to protect customer data. This includes implementing strong encryption, access controls, regular security audits, and compliance certifications such as ISO 27001, GDPR, and HIPAA, depending on the specific requirements of the customer.

What are the main types of cloud computing models?
The main types of cloud computing models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Each offers different levels of control and management for users.

What is the difference between public, private, and hybrid clouds?
Public clouds are owned and operated by third-party providers, private clouds are dedicated to a single organization, and hybrid clouds combine elements of both public and private clouds. The choice depends on factors like security requirements, scalability needs, and budget constraints.

How are cloud computing services priced?
Cloud computing services typically offer pay-as-you-go or subscription-based pricing models, where users only pay for the resources they consume. Prices may vary based on factors like usage, storage, data transfer, and additional features.

What does migrating applications to the cloud involve?
The process of migrating applications to the cloud depends on various factors, including the complexity of the application, the chosen cloud provider, and the desired deployment model. It typically involves assessing your current environment, selecting the appropriate cloud services, planning the migration strategy, testing and validating the migration, and finally, executing the migration with minimal downtime.

What support does Ankercloud provide?
Ankercloud provides various levels of support to its customers, including technical support, account management, training, and documentation. Customers can access support through various channels such as email, phone, chat, and a self-service knowledge base.
