
Kubernetes Ingress: A Practical Guide

What is Kubernetes Ingress?


A Kubernetes Ingress is an API object used to manage external
user access to services running in a Kubernetes cluster. It provides
routing rules, defined within the Ingress resource, which you can
use to configure access to your services. Routing is typically
performed over HTTP or HTTPS.

An ingress provides a single point of entry into a cluster or service,
making it easier to manage applications and troubleshoot routing
issues. Its primary functions are traffic load balancing, secure
sockets layer (SSL) termination, and name-based virtual hosting.
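
For instance, SSL/TLS termination is configured through the tls section of an Ingress resource. The sketch below is illustrative rather than taken from this guide: the hostname, Secret name, and service name are assumptions, and the referenced Secret must be of type kubernetes.io/tls.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-demo-ingress            # illustrative name
spec:
  tls:
  - hosts:
    - demo.example.com              # TLS is terminated at the ingress for this host
    secretName: demo-tls-secret     # Secret holding tls.crt and tls.key
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demoservice       # traffic is forwarded to this service after termination
            port:
              number: 80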

An ingress is an alternative to creating a dedicated load balancer in
front of Kubernetes services, or manually exposing services within
a node. It lets you flexibly configure routing rules, greatly
simplifying your production environment.

This is part of a series of articles about Kubernetes API gateway.

READ MORE ABOUT API GATEWAYS IN A CLOUD NATIVE WORLD
What is a Kubernetes Ingress Controller?
An Ingress controller implements the Kubernetes Ingress API and works
as a load balancer and reverse proxy. It abstracts traffic routing by
directing traffic arriving at the Kubernetes platform to the pods
inside the cluster and load balancing it across them. The controller's
routing directions come from the Ingress resource configurations.

A Kubernetes cluster can have multiple associated Ingress
controllers, and each one works similarly to a deployment
controller. It is event-driven and is triggered when, for example,
a user creates, updates, or deletes an Ingress. Once triggered, the
Ingress controller tracks the cluster's Ingress resources and reads the
Ingress specifications. It then translates the configuration (in YAML or
JSON) into a form the reverse proxy can understand, so it can provide
the cluster with the resources it requires.
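
Controllers are typically selected through an IngressClass resource, which the ingressClassName field of an Ingress refers to. A minimal sketch follows; the class name is illustrative, and the controller string shown is the one used by the community NGINX Ingress controller.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx                        # the value Ingress objects reference via ingressClassName
spec:
  controller: k8s.io/ingress-nginx   # identifies the controller implementation for this class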

Learn more in our detailed guide to Kubernetes Ingress Controller

Types of Ingress
Single Service Ingress
A single service ingress exposes only one service to external users.
To enable a single service ingress, you must define a default
backend—if the ingress object’s host or path does not match
information in the HTTP message, traffic is forwarded to this
default backend. The default backend does not have any routing
rules.

[Diagram: a client sends traffic to the Ingress-managed load balancer; an Ingress routing rule forwards it to a Service, which is backed by Pods in the cluster. Image source: Kubernetes]
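
A minimal sketch of a single service ingress, assuming a service named demoservice listening on port 80 (both values are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: single-service-ingress   # illustrative name
spec:
  defaultBackend:
    service:
      name: demoservice          # every request that matches no rule is forwarded here
      port:
        number: 80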


Simple Fanout Ingress
A simple fan-out ingress allows you to expose multiple services
using a single IP address. This makes it easier to route traffic to
destinations based on the type of request. This type of ingress
simplifies traffic routing while reducing the total number of load
balancers in the cluster.

[Diagram: a client sends traffic to the Ingress-managed load balancer (Ingress at 178.91.123.132); requests to /foo are routed to service1:4200 and requests to /bar to service2:8080, each Service backed by Pods. Image source: Kubernetes]
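
A sketch of the fan-out shown above, using the paths, service names, and ports from the diagram (a deployed Ingress controller is still assumed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-ingress      # illustrative name
spec:
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: service1         # /foo traffic goes to service1 on port 4200
            port:
              number: 4200
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: service2         # /bar traffic goes to service2 on port 8080
            port:
              number: 8080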

Name-based Virtual Hosting


Name-based virtual hosting makes it possible to route HTTP traffic
to multiple hostnames with the same IP address. This ingress type
usually directs traffic to a specific host before evaluating routing
rules.

[Diagram: a client sends traffic to the Ingress-managed load balancer (Ingress at 178.91.123.132); requests with Host: foo.bar.com are routed to service1:80 and requests with Host: bar.foo.com to service2:80, each Service backed by Pods. Image source: Kubernetes]
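
A sketch of the name-based virtual hosting shown above, using the hostnames, service names, and ports from the diagram:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress   # illustrative name
spec:
  rules:
  - host: foo.bar.com               # requests with this Host header go to service1
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: bar.foo.com               # requests with this Host header go to service2
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80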

Ingress vs. ClusterIP vs. NodePort vs. LoadBalancer
There are several different ways to enable traffic to flow into a
Kubernetes cluster:

Ingress—allows you to consolidate traffic routing rules into a
single resource that runs natively within the Kubernetes cluster.
ClusterIP—the recommended option for accessing internal services
using internal IP addresses. You can use it for debugging services,
internal traffic, or for the Kubernetes dashboard during development
stages.
NodePort—a Service type that exposes a service on a static port on
every node in the cluster. It is not recommended for production
environments, but can be used to expose services in development
environments. It does not provide load balancing or multi-service
routing capabilities.
LoadBalancer—exposes services to the Internet using an external load
balancer, typically provisioned by the cloud provider. It is acceptable
for use in production Kubernetes clusters (a sketch of the NodePort and
LoadBalancer manifests follows this list).
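
For comparison, here is a sketch of the NodePort and LoadBalancer Service types; the names, ports, and the app: demo selector are illustrative assumptions, not taken from this guide.

apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport           # illustrative name
spec:
  type: NodePort
  selector:
    app: demo                   # assumes pods labeled app=demo
  ports:
  - port: 80                    # port exposed on the Service's cluster IP
    targetPort: 8080            # port the container listens on
    nodePort: 30080             # static port opened on every node (30000-32767 by default)
---
apiVersion: v1
kind: Service
metadata:
  name: demo-loadbalancer       # illustrative name
spec:
  type: LoadBalancer            # the cloud provider provisions an external load balancer
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080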

When should you use Kubernetes Ingress?
The Ingress resource is recommended when exposing services in
production Kubernetes clusters. It is useful when you need to
define complex traffic routing, and it helps you reduce the cost of
external load balancers by leveraging the resources of your
Kubernetes cluster nodes. Another motivation for using an Ingress
is to manage traffic routing within Kubernetes and avoid having
one more system to manage.

Kubernetes Ingress Examples


Basic Example: Default Ingress Resource
An example of a default Kubernetes Ingress resource is shown below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: demo-nginx-example
  rules:
  - http:
      paths:
      - path: /demofilepath
        pathType: Prefix
        backend:
          service:
            name: demoservice
            port:
              number: 99

An Ingress requires the following fields:

apiVersion
kind
metadata
spec

The Ingress object's name has to be a valid DNS subdomain name.
The specification contains everything required for configuring load
balancers and proxy servers, and it also specifies rules to match
incoming requests against. However, an Ingress resource can only
define rules for directing HTTP(S) traffic.

An Ingress controller needs to be deployed in order to use Kubernetes
Ingress. Some of the Ingress controllers that Kubernetes supports are:

AKS Application Gateway Ingress Controller
Istio Ingress
Kusk Gateway
NGINX Ingress Controller for Kubernetes

Exposing Services as Externally Usable URLs via Ingress

The configuration inside an Ingress controller is updated through
Ingress resources. Taking NGINX as an example, the controller runs as
a set of pods that monitor the cluster's Ingress resources for changes.
When a change is detected, the controller updates its configuration
and reloads it.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: demo.web.com
    http:
      paths:
      - path: /demo-ingress
        pathType: Prefix
        backend:
          service:
            name: demo-ingress
            port:
              number: 1919

The ingressClassName field tells the cluster that the NGINX Ingress
controller should monitor and act on this resource. When multiple
controllers run in the same cluster, the ingress class determines
which controller handles which resources. The spec section contains
the following information:

the rules
the hostname the rules apply to
whether the traffic is HTTP or HTTPS
the path being watched
the port the traffic is sent to

Traffic received at demo.web.com is directed to the demo-ingress
service on port 1919. Such a specification provides a path-based load
balancing strategy and allows a single DNS entry to be used for the
host.

Load Balancing Traffic with Kubernetes Ingress

The following specification load balances traffic by splitting it up
and routing it to different backends and deployments based on the
request path. This is useful when some endpoints receive more traffic
than others, since the deployment behind /demo-ingress can be scaled
on its own.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress-split
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: demo.web.com
    http:
      paths:
      - path: /demo-ingress
        pathType: Prefix
        backend:
          service:
            name: demo-ingress
            port:
              number: 1919
      - path: /demo-ingress-split
        pathType: Prefix
        backend:
          service:
            name: demo-ingress-split
            port:
              number: 2020

Here, the specification is the same as before, except that it adds
another path backed by a second service and deployment.

