In my previous article, Building a resilient deployment on Kubernetes — part 1: Minimum application downtime, I discussed how to keep instances healthy and maintain high availability.

In this article, I will discuss how resource utilization and capacity planning in Kubernetes help to build a resilient deployment.

(For a quick overview, the gist of this article can be found in the summary section.)

Photo by Lukas Blazek on Unsplash

General brief on Resources and Capacity Planning

Any application needs a certain minimum amount of resources, such as CPU and memory, in order to work properly. …
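In Kubernetes, this minimum is expressed as a resource request on the container, with an optional limit capping usage; the scheduler uses requests when placing pods. A minimal sketch of a pod spec (the names, image, and values here are illustrative, not from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-app            # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.21         # illustrative image
    resources:
      requests:               # minimum guaranteed; used by the scheduler for placement
        cpu: "250m"
        memory: "256Mi"
      limits:                 # hard ceiling; exceeding the memory limit gets the container OOM-killed
        cpu: "500m"
        memory: "512Mi"
```

Choosing requests close to real usage lets the scheduler pack nodes efficiently, which is the heart of capacity planning.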

In this article, I will discuss one of the Kubernetes features that helps to build a highly available cluster with minimal downtime.

(The summary section is at the bottom of this article if you want a quick look at what is discussed throughout.)

Photo by Daniel McCullough on Unsplash

General Brief on High Availability

In an enterprise-grade deployment, high availability is crucial to keeping the clients who engage with the application satisfied.

The basic idea of high availability is that clients can successfully engage with the application at any given time.

How do we ensure high availability?

  1. Having multiple healthy instances (2…

Photo by Ross Sneddon on Unsplash

Kubernetes is a platform that enables developers to automate the management of their applications in containerized deployments.

A Kubernetes cluster can consist of several nodes. By default, Kubernetes places the pods for your applications on these nodes arbitrarily. What if you want to place the pods in your cluster strategically, so that a selected set of pods is scheduled only on specific nodes, or so that certain pods are never scheduled on particular nodes?

Kubernetes provides the capability to achieve this using a few concepts.

  1. Taints and tolerations
  2. Node selectors
  3. Node affinity

1. Taints and Tolerations

Generally, the word taint…
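A taint repels pods from a node unless the pod declares a matching toleration. A minimal sketch (the node name, key, and values are illustrative, not from the article):

```yaml
# Taint a node so only pods that tolerate it can be scheduled there:
#   kubectl taint nodes node-1 dedicated=backend:NoSchedule
#
# A pod that tolerates that taint and so remains schedulable on node-1:
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod           # illustrative name
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "backend"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx:1.21         # illustrative image
```

Note that a toleration only permits scheduling on the tainted node; it does not force the pod there — that is what node selectors and node affinity are for.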

WSO2 API Manager is a fully open-source full lifecycle API Management solution that can be run anywhere.

This article covers the basic best practices you should follow when deploying WSO2 API Manager on VMs or in containerized environments for production-level use cases.

Introduction to Deployment Patterns

The WSO2 API Manager distribution consists of five profiles.

  1. Gateway
  2. Key manager
  3. Traffic manager
  4. Dev portal
  5. Publisher

We can create the deployment using either the all-in-one distribution or these profile modes.

Based on this, there are five recommended, pre-defined deployment patterns to cater to different requirements.

Pattern 1 — Single Node (all-in-one) Deployment

Pattern 1 Deployment

This pattern consists of two all-in-one

With microservices growing rapidly in today's world, exposing them to consumers should be well managed and easy to deploy.

This article describes how we can manage and deploy APIs in a Kubernetes cluster in under 3 minutes using the API Operator for Kubernetes.

Photo by Dan Lohmar on Unsplash
  • Kubectl
  • Kubernetes v1.12 or above
    Minimum CPU and Memory for the K8s cluster: 2 vCPU, 8GB of Memory
  • An account in DockerHub or private docker registry
  • Download API controller v3.0.0 for your operating system from the website
  • Download the API Operator distribution from GitHub.
  • Once the download is complete, extract the zip and navigate into the api-k8s-crd-1.0.0 directory

Section 1: Configurations

Photo by 贝莉儿 NG on Unsplash

Kaniko is a tool for building container images and pushing them to a container image registry, from inside a container.

Imagine a scenario where you have to build and push a Docker image to a remote container registry from inside a containerized environment. What would you do?

Traditionally, you would need to run a container with Docker installed (a Docker daemon), then build and push the Docker image you need.

But there are scenarios where it is difficult to run the Docker daemon in a secure and convenient way. That's where Kaniko comes into play.

As mentioned in the Kaniko documentation…
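As a sketch of the idea, a Kaniko build can run as an ordinary pod using the executor image, with no Docker daemon on the node. The repository, destination image, and secret name below are illustrative assumptions, not from the article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build                # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - "--dockerfile=Dockerfile"
    - "--context=git://github.com/example/repo.git"   # illustrative build context
    - "--destination=docker.io/example/app:latest"    # illustrative target image
    volumeMounts:
    - name: docker-config           # registry credentials, mounted as config.json
      mountPath: /kaniko/.docker/
  volumes:
  - name: docker-config
    secret:
      secretName: regcred           # illustrative secret holding registry credentials
```

Kaniko executes each Dockerfile instruction in userspace inside the container, so no privileged daemon is required.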

Photo by Markus Spiske on Unsplash

What is Secure Vault?

Secure Vault lets users store and retrieve sensitive data using aliases, improving the security of the product. WSO2 products ship with a Secure Vault implementation, which makes it easy for users to incorporate it whenever necessary.

Encrypting data with Secure Vault and retrieving it in mediation

This article describes how you can store sensitive information in WSO2 APIM using Secure Vault, so that it is not kept in a readable format, and how to retrieve that data in mediation logic when making API requests.

Note: Before you continue, replace the existing org.wso2.ciphertool-1.0.0-wso2v8.jar with this jar.

Configure Secure Vault With APIM

  • Encrypt the passwords in the configuration files using the cipher tool.
  • In order…
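As a sketch of that first step, the cipher tool ships in the product's bin directory and is run in configure mode to encrypt the passwords marked in cipher-tool.properties:

```shell
# Run from <APIM_HOME>/bin. -Dconfigure encrypts the passwords marked
# in cipher-tool.properties and updates the configuration files in place.
sh ciphertool.sh -Dconfigure
# You will be prompted for the primary keystore password
# (wso2carbon in a default pack).
```

After this, the configuration files contain aliases instead of plain-text passwords.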

“A MacBook with lines of code on its screen on a busy desk” by Christopher Gower on Unsplash

If you are looking for an open-source project to contribute to, it is worth spending a few minutes reading this article, as I will explain the basics you need to start contributing to the WSO2 API Manager product.

You may be an open-source enthusiast or a student looking to participate in GSoC. Perhaps you are just getting started with WSO2. Either way, this article is for you.

What are WSO2 and WSO2 APIM?

WSO2 is a 100% open-source middleware company. WSO2 APIM is becoming a trending API Management solution in the business world.

We always highly value the input and contributions of…

Suppose you want to check the functionality of WSO2 APIM with LDAP as the user store. For this, you need the WSO2 APIM and IS packs.

If you have the relevant products downloaded already, you can set this up within 2 minutes.

Before going into details, let me explain what is going to happen here. By default, WSO2 APIM comes with a JDBC user-store manager, while WSO2 IS comes with an LDAP user store.

What we are going to do is start an IS server with a port offset so that it doesn't conflict with the WSO2 APIM ports.
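A port offset shifts every port the server opens by a fixed amount, and can be passed as a system property at startup:

```shell
# Start WSO2 IS with a port offset of 1, so its ports (e.g. 9443 -> 9444)
# do not clash with WSO2 APIM running on the same host with offset 0.
sh <IS_HOME>/bin/wso2server.sh -DportOffset=1
```

The same offset can alternatively be set permanently in the server's carbon configuration instead of on the command line.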


Photo by Thomas Tucker on Unsplash

This is a short article on using scopes to limit API access for users in user stores that have no groups.

Roles used in WSO2 APIM are mapped to "groups" in user stores. When we add scopes, we assign them to roles.

Suppose your user store has no groups to map roles to, but you still want to use scopes to limit access to APIs or API resources. How can you use scopes with users in such user stores?

The easiest way to do this is…

Dinusha Dissanayake

Engineer @WSO2
