The Kubernetes API server is a fundamental component for managing clusters and interacting with cluster resources. However, repetitive requests and large responses can cause performance problems. In this talk we explore how to use caching mechanisms to improve the performance of the Kubernetes API server and how to use events to invalidate the cache efficiently. Concrete implementation examples are presented, and best practices for using these mechanisms in production are discussed. Learning to use these tools can help reduce response times and increase cluster scalability.
1. OTTIMIZZARE LE PERFORMANCE
DELL'API SERVER K8S:
COME UTILIZZARE CACHE ED
EVENTI PER IMPLEMENTARE IL
CHANGE DATA CAPTURE
By
Diego Braga, Product Manager
@ Krateo PlatformOps
&
Luigi Fugaro, Sr. Solution Architect
@ Redis
4. WHY ARE WE PRESENTING THIS TALK?
Kubernetes is becoming the de facto industrial standard for
orchestrating cloud-native applications.
What if we want to treat Kubernetes as a platform that is
interrogated by different clients via API? This could generate a
lot of requests to the API server.
Kubernetes doesn't offer caching out of the box, so
what could be the best pattern to implement it?
5. WHAT IS A CACHE
Prompt: "Briefly describe what is a cache."
Response: "A cache is a hardware or software
component that stores data so future requests
for that data can be served faster. The data
stored in a cache might be the results of an
earlier computation, or duplicates of data
stored elsewhere. A cache hit occurs when the
requested data can be found in a cache, while a
cache miss occurs when it cannot."
Caches are utilized because the data
stored in a cache can often be
delivered faster than re-fetching or
recomputing the original data. This is
due to the principle of locality of
reference, where programs access data
and instructions in localized areas.
Common types of caches include CPU
caches, web caches, and database
caches. By storing frequently accessed
information, caches reduce data access
latency, decrease bandwidth
consumption, and improve overall
system performance.
6. CACHING PATTERNS
CACHE-ASIDE: cache miss managed at the application level.
READ-THROUGH: cache miss managed at the cache provider level.
WRITE-THROUGH: write data in the cache and automatically update the db. SYNC
WRITE-BEHIND: write data in the cache and automatically update the db. ASYNC
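As a concrete illustration of the first pattern, here is a minimal cache-aside sketch in TypeScript. It assumes the node 'redis' client; 'fetchFromOrigin' is a hypothetical callback standing in for the primary data source. On a miss, the application itself reads from the origin and populates the cache.

```typescript
import { createClient } from 'redis';

// Assumes a Redis instance reachable on localhost; call redis.connect() once at startup.
const redis = createClient({ url: 'redis://localhost:6379' });

// Cache-aside: the application handles the cache miss itself.
// fetchFromOrigin is a hypothetical callback that reads from the primary store.
async function getWithCacheAside(
  key: string,
  fetchFromOrigin: (key: string) => Promise<string>,
  ttlSeconds = 60,
): Promise<string> {
  const cached = await redis.get(key);
  if (cached !== null) {
    return cached; // cache hit
  }
  const value = await fetchFromOrigin(key);        // cache miss: go to the origin
  await redis.set(key, value, { EX: ttlSeconds }); // populate the cache for later reads
  return value;
}
```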
7. CHANGE DATA CAPTURE
Prompt: "Briefly describe what is change data
capture."
Response: "Change Data Capture (CDC) is a
design pattern that enables tracking and
capturing changes in data so that other software
(like ETL tools, data replication tools, or data
auditing tools) can respond to those changes.
The changes might include insertions, updates,
and deletions that are performed on a database
table."
AUTOMATION
OFFLOADING
CQRS DESIGN PATTERN
8. KUBERNETES API SERVER
The API server is a component of the
Kubernetes control plane that exposes the
Kubernetes API. The API server is the
front end for the Kubernetes control plane.
9. KUBERNETES INFORMERS
As the center of Kubernetes communication, the API server receives requests, contacts etcd, and manages all the resource objects, so it can easily come under pressure.
To ease the API server's pressure, Kubernetes employs a message system called the Informer, which guarantees message delivery and provides high-performance, real-time delivery, much like any popular message system.
Informers query the resource data and store it in a local cache. Once stored, an event is generated only when a change in the state of the object (or resource) is detected.
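As a rough sketch of the mechanism, this is roughly how an Informer is created with the official JavaScript client (@kubernetes/client-node). The Pod resource and the 'default' namespace are arbitrary choices for illustration, and the exact list-function signature varies between client versions, so treat this as illustrative rather than copy-paste ready.

```typescript
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const coreApi = kc.makeApiClient(k8s.CoreV1Api);

// The informer lists the resources once, keeps them in a local cache,
// and afterwards emits an event only when an object actually changes.
const listPods = () => coreApi.listNamespacedPod('default');
const informer = k8s.makeInformer(kc, '/api/v1/namespaces/default/pods', listPods);

informer.on('add', (pod) => console.log(`added:   ${pod.metadata?.name}`));
informer.on('update', (pod) => console.log(`updated: ${pod.metadata?.name}`));
informer.on('delete', (pod) => console.log(`deleted: ${pod.metadata?.name}`));
informer.on('error', (err) => {
  console.error(err);
  // simple backoff before restarting the watch
  setTimeout(() => informer.start(), 5000);
});

informer.start();
```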
10. KUBERNETES CLIENT
The following client libraries are officially maintained
by Kubernetes SIG API Machinery.
C
DOTNET
GO
HASKELL
JAVA
JAVASCRIPT
PERL
PYTHON
RUBY
11. USE CASE
Leveraging client-javascript, we wrote a backend that exposes an API to query Kubernetes resources: a query parameter selects whether to use Redis as a cache or to interact directly with the API server.
12. USE CASE
ARCHITECTURE
CACHE WARM UP
When the Informer is instantiated, all
the existing resources are pushed
into the cache
AVOID CACHE MISS
The client will always find an entry in
the cache if the resource exists in
Kubernetes
CACHE UPDATE
Change Data Capture is implemented via the Informer, which is notified of every resource change by the API server; the microservice then sets, updates or deletes the corresponding key
[Architecture diagram: the Client calls the Webserver (express.js) with /cache=true or /cache=false; the Webserver reads either from the Cache or from the Kubernetes API server; an Informer (client-javascript) keeps the Cache updated from the API server]
13. CODE SNIPPET A new Informer is instantiated for any kind of Resource available in the cluster (namespaced or cluster-scoped), and it interacts directly with the cache
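The original snippet from the slides is not reproduced here; the following is a hedged sketch of what it describes, limited to Pods in a single namespace for brevity. The key scheme '<kind>:<namespace>:<name>' and the 'REDIS_URL' variable are assumptions, and the real backend instantiates one Informer per resource kind.

```typescript
import * as k8s from '@kubernetes/client-node';
import { createClient } from 'redis';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const coreApi = kc.makeApiClient(k8s.CoreV1Api);

const redis = createClient({ url: process.env.REDIS_URL });

// Hypothetical key scheme: <kind>:<namespace>:<name>
const keyOf = (pod: k8s.V1Pod) =>
  `Pod:${pod.metadata?.namespace ?? ''}:${pod.metadata?.name ?? ''}`;

async function startPodInformer() {
  await redis.connect();

  const listFn = () => coreApi.listNamespacedPod('default');
  const informer = k8s.makeInformer(kc, '/api/v1/namespaces/default/pods', listFn);

  // Change Data Capture: every change notified by the API server is mirrored
  // into the cache as a set/update/delete of the corresponding key.
  // The initial list performed by informer.start() also warms up the cache.
  informer.on('add', async (pod) => {
    await redis.set(keyOf(pod), JSON.stringify(pod));
    console.log(`cache set:    ${keyOf(pod)}`);
  });
  informer.on('update', async (pod) => {
    await redis.set(keyOf(pod), JSON.stringify(pod));
    console.log(`cache update: ${keyOf(pod)}`);
  });
  informer.on('delete', async (pod) => {
    await redis.del(keyOf(pod));
    console.log(`cache delete: ${keyOf(pod)}`);
  });
  informer.on('error', (err) => console.error('informer error:', err));

  await informer.start();
}

startPodInformer().catch(console.error);
```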
14. CODE SNIPPET If the 'cache' query parameter is set to 'true' (or not specified), the webservice queries the cache; otherwise it calls the API server directly
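Again as a hedged sketch rather than the original code: only the query-parameter behaviour is taken from the slide, while the route path, the key scheme and the fallback responses are assumptions, and the client-library calls follow the same older-release signatures as the previous sketch.

```typescript
import express from 'express';
import * as k8s from '@kubernetes/client-node';
import { createClient } from 'redis';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const coreApi = kc.makeApiClient(k8s.CoreV1Api);

const redis = createClient({ url: process.env.REDIS_URL });
const app = express();

// GET /pods/:namespace/:name?cache=true|false
app.get('/pods/:namespace/:name', async (req, res) => {
  const { namespace, name } = req.params;
  const useCache = req.query.cache !== 'false'; // 'true' or missing => use the cache

  try {
    if (useCache) {
      // The Informer keeps this key up to date, so a hit is expected
      // whenever the resource exists in Kubernetes.
      const cached = await redis.get(`Pod:${namespace}:${name}`);
      if (cached) {
        res.json(JSON.parse(cached));
      } else {
        res.status(404).json({ error: 'not found' });
      }
      return;
    }
    // cache=false: call the API server directly.
    const pod = await coreApi.readNamespacedPod(name, namespace);
    res.json(pod.body);
  } catch (err) {
    res.status(500).json({ error: String(err) });
  }
});

redis.connect().then(() => app.listen(3000));
```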
15. LOG SNIPPET When a Kubernetes Resource is created, updated or deleted (e.g. via 'kubectl'), the Change Data Capture pattern is implemented by the Informer, which creates, updates or deletes the corresponding key in the cache
16. USE CASE RESULTS
[Bar chart: MAX No Cache vs MAX Cache across 10, 50, 100, 150 and 200 requests]
Max 10 concurrent requests to query a single resource
Duration of 30 seconds
17. USE CASE RESULTS
[Bar chart: AVG No Cache vs AVG Cache across 10, 50, 100, 150 and 200 requests]
Max 10 concurrent requests to query a single resource
Duration of 30 seconds
18. CONCLUSIONS
Kubernetes is not only an
orchestrator for containerized
applications but a platform
that can be elevated by
leveraging all the existing
features.
One feature that is missing
is caching for clients that
interact with Kubernetes as
a backend that exposes
resources via API.
Change Data Capture is a
pattern that is already
implemented in Kubernetes
client libraries with the
Informer object.
There are different caching
patterns that can be
implemented at the
application level or cache
provider level.
Placing a cache between the client and the API server can improve latency by an order of magnitude.