Microservices have become the default architecture for many applications, offering flexibility, scalability, and independent deployment. Managing microservices at scale is hard, however, and that is where tools like Kubernetes, Istio, and Otterize come in. Kubernetes is an open-source container orchestration system that automates the deployment, scaling, and management of containerized applications; Istio is an open-source service mesh that provides a uniform way to connect, secure, and monitor microservices; and Otterize is an open-source platform for intent-based access control between services. In this blog post, we’ll discuss how Otterize can be used with Kubernetes and how it differs from Istio.
Otterize with Kubernetes
- Otterize is an open-source tool built specifically for Kubernetes that implements intent-based access control (IBAC): each client workload declares, in a ClientIntents resource, which servers it intends to call, and Otterize translates those declarations into enforcement.
- It is made up of several components: the intents operator, which converts ClientIntents into Kubernetes network policies, Kafka ACLs, or Istio authorization policies; the credentials operator, which provisions mTLS credentials for workloads; and the network mapper, which observes pod-to-pod traffic to build a live map of which services actually call each other.
- Otterize runs as ordinary Kubernetes deployments, so it can be installed with Helm and scaled up or down as needed.
- It is configured with standard Kubernetes resources (custom resources such as ClientIntents and KafkaServerConfig), which makes it easy to manage configurations for different environments with the same tooling you already use.
- Because access is declared alongside each workload, Otterize complements Kubernetes RBAC: RBAC governs who can act on the API server, while Otterize governs which workloads may talk to each other.
- Otterize’s enforcement is declarative: when a ClientIntents resource is applied, the intents operator generates the corresponding network policies (or Kafka ACLs, or Istio authorization policies), and it cleans them up when the intent is removed.
- The credentials operator, backed by SPIRE or Otterize Cloud, issues and rotates certificates so that workloads can authenticate to each other with mutual TLS, adding an extra layer of security.
- For Kafka, Otterize can manage topic-level ACLs from the same intent declarations, so access to brokers is controlled the same way as pod-to-pod traffic.
- The network mapper provides observability into actual traffic: it records which clients call which servers, and the resulting map can be inspected with the Otterize CLI or visualized in Otterize Cloud, making it easy to see which intents you would need before turning enforcement on.
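To make intent-based access control concrete, here is a minimal sketch of a ClientIntents declaration. The service names checkout and orders are hypothetical, and the apiVersion should be checked against the CRDs installed in your cluster:

```yaml
# A client named "checkout" declares that it calls the "orders" server.
# Otterize's intents operator turns this declaration into a network
# policy (or Kafka ACL / Istio authorization policy) allowing exactly
# this traffic and nothing more.
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: checkout
spec:
  service:
    name: checkout
  calls:
    - name: orders
```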
Otterize vs. Istio
Otterize and Istio overlap in securing service-to-service traffic, but they solve different problems in different ways. Here are some of the key differences between Otterize and Istio:
1) Architecture
Otterize is built specifically for Kubernetes and works through the Kubernetes API itself: its operators watch custom resources and create native objects such as network policies, with no proxy in the data path. Istio, on the other hand, is a full service mesh built on top of Envoy, an open-source edge and service proxy; it places Envoy proxies next to your workloads to provide traffic management, security, and observability at the proxy layer.
2) Features
Both tools can lock down which services may talk to each other. Otterize focuses on declarative, intent-based access control: automatic generation of network policies, Kafka ACLs, and even Istio authorization policies, plus traffic mapping to bootstrap those intents. Istio’s strengths are traffic management and resilience: request routing, traffic splitting, fault injection, circuit breaking, mutual TLS between proxies, and rich telemetry with distributed tracing. Notably, the two are complementary: if you already run a mesh, Otterize can manage Istio authorization policies for you.
3) Complexity
Istio is known for its complexity, which can make it difficult to deploy and manage. It has a steep learning curve, and its configuration can become complicated as the number of microservices grows. Otterize, on the other hand, is deliberately narrower in scope and simpler to operate: because it is built specifically for Kubernetes and adds no sidecars, adopting it amounts to installing a Helm chart and writing small, per-service intent declarations.
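To make the architectural difference concrete, Otterize’s output is plain Kubernetes objects rather than proxy configuration. For a hypothetical client checkout allowed to call a server orders, the intents operator generates a network policy roughly like the following (a simplified sketch; the exact labels Otterize uses are an implementation detail):

```yaml
# Simplified sketch of an Otterize-generated NetworkPolicy: only pods
# labeled as intended clients of "orders" may reach its pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-to-orders
  namespace: production        # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: orders
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: checkout
```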
Getting Started
To install Otterize on your Kubernetes cluster, first add the Otterize Helm repository, update it, and install the chart:
```shell
helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace
```
This installs the Otterize intents operator, Otterize credentials operator, Otterize network mapper, and SPIRE. If you’re installing Otterize for network policies, make sure your cluster supports network policies.
Once you have installed Otterize, verify that all the pods are running:

```shell
kubectl get pods -n otterize-system
```

This should show all the pods running and ready.
Upgrading Otterize is simple
To upgrade Otterize to the latest version, use Helm:
```shell
helm repo update
helm upgrade --install otterize otterize/otterize-kubernetes -n otterize-system
```
Installing the Otterize network mapper
The network mapper is included in the full otterize-kubernetes chart, but if you want to install it on its own, you can do so as follows:
```shell
helm install network-mapper otterize/network-mapper -n otterize-system --create-namespace
```
Installing the Otterize CLI
The Otterize CLI is a command-line utility used to control and interact with the Otterize network mapper, manipulate local intents files, and interact with Otterize Cloud. To install the CLI on macOS, use Homebrew:
```shell
brew install otterize/otterize/otterize-cli
```
Alternatively, you can download the CLI from the GitHub Releases page.
Uninstalling Otterize
Before uninstalling Otterize, you should make sure to delete any resources created by users, such as ClientIntents and KafkaServerConfigs. To remove these resources, run the following commands:
```shell
kubectl get clientintents --all-namespaces
kubectl delete clientintents --all-namespaces
kubectl get kafkaserverconfig --all-namespaces
kubectl delete kafkaserverconfig --all-namespaces
```
Make sure to remove the ClientIntents before removing the KafkaServerConfigs. Once you’ve removed these resources, you can uninstall Otterize:
```shell
helm uninstall otterize -n otterize-system
```
Creating a Node.js application deployment
Creating a simple Node app
First, create a simple Node.js application. Here’s an example app.js file:
```javascript
const http = require('http');

const hostname = '0.0.0.0';
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello, World!\n');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
```
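The Dockerfile in the next step runs `npm start`, so the application also needs a package.json with a start script. A minimal one might look like this:

```json
{
  "name": "node-app",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  }
}
```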
Create Dockerfile
Create a Dockerfile for the Node.js application. Here’s an example Dockerfile:

```dockerfile
FROM node:14-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
```

This Dockerfile uses the node:14-alpine image as a base, installs the application’s Node.js dependencies, then copies in the source code.
Create a Kubernetes deployment manifest
Create a Kubernetes deployment YAML file for your Node.js application. For example, you can create a file named node-app.yaml with the following content:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: your-docker-image
          ports:
            - containerPort: 3000
```
Create a Kubernetes service YAML file to expose your Node.js application.
You can create a file named node-app-service.yaml with the following content:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-app-service
spec:
  selector:
    app: node-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
```
Create a Kubernetes secret to store any necessary credentials
You can create a secret named node-app-secret with the following command:
```shell
kubectl create secret generic node-app-secret --from-literal=db-password=your-db-password
```
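For the application to actually use this secret, reference it from the deployment’s container spec, for example by merging an env entry like the following into node-app.yaml (DB_PASSWORD is a hypothetical variable name your app would read):

```yaml
# Fragment for the container spec in node-app.yaml: expose the
# secret's db-password key to the app as an environment variable.
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: node-app-secret
        key: db-password
```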
Create a ClientIntents YAML file declaring which servers your Node.js application calls.
For example, if node-app talks to a database service named db (a placeholder; use your actual server’s name, and check the apiVersion against your installed CRDs), create a file named node-app-intent.yaml with the following content:

```yaml
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: node-app
spec:
  service:
    name: node-app
  calls:
    - name: db
```
Apply the Kubernetes resources using the following commands (the secret was already created directly with kubectl in the previous step):

```shell
kubectl apply -f node-app.yaml
kubectl apply -f node-app-service.yaml
kubectl apply -f node-app-intent.yaml
```

After the resources are created, Otterize will automatically configure the necessary network policies to allow the declared traffic to and from your Node.js application.
Conclusion
Otterize is a powerful tool that simplifies the process of managing network policies in Kubernetes clusters. With Otterize, you can create, enforce, and monitor access policies across multiple clusters with ease. Its components, such as the network mapper, intents operator, and credentials operator, help enhance the security and reliability of your Kubernetes deployments.
Otterize also provides cloud-based management capabilities, allowing you to easily configure and monitor your network policies from a central location. Moreover, Otterize is highly customizable and can be configured to work with different CNI plugins, cloud providers, and other third-party tools. Otterize can save time and effort when it comes to managing network policies in Kubernetes clusters, making it a valuable addition to any DevOps team’s toolbox.