This document discusses WSO2 API Manager's throttling capabilities. It provides an overview of the new throttling features in API Manager 2.0.0 including flexible throttling policies, instant request blocking, custom policies, and distributed traffic management. It then covers policy deployment, different policy types like application and subscription throttling, advanced policies, and custom policies. Finally, it discusses scaling the traffic management deployment using distributed and high availability solutions, and data receiver and publisher patterns.
1. Understanding How Your APIs Are Being Traffic Controlled
2. Agenda
What's New in API Manager 2.0.0 Throttling
Policy deployment and throttling
Different Throttling Policies
Using Custom Throttling Policies
Scaling Traffic Manager deployment
Distributed Traffic Management Solution
Distributed Traffic Management Solution with HA
Data Receiver Patterns
Data Publisher patterns
Q&A
3. What's New in API Manager 2.0.0 Throttling
Throttling decisions are made in the Traffic Manager, which is
powered by the Siddhi runtime.
Flexibility to design throttling policies based on both
request count and bandwidth
Supports instantaneous request blocking based on
user, IP address, application, and API
Extensible and flexible: advanced rules can be defined based on
API properties such as headers, users, JWT claims, etc.
4. More Features
Burst control to prevent overuse of APIs within short time
windows
Ability to write custom throttling policies as
needed
Ability to attach advanced throttling policies to entire APIs
and to individual resources
Rich user interfaces for defining throttling policies
Wide span of throttling policy time intervals: minutes,
hours, days, weeks, and years
8. Blocking Policies
API invocations can be blocked instantaneously.
Evaluated at tenant level
Blocking policies can be created using the following parameters
(illustrative examples follow the list):
API context - Blocks all requests for a particular API. The throttle key will be
the complete context of the API URL.
Application - Blocks requests from one application. The throttle key will be a
combination of the subscriber name and the application name.
IP address - Blocks a specific IP address. The throttle key will be the IP address of
the incoming message plus the tenant id.
User - Blocks all requests coming from a specific user. The throttle key will be the
username in the message context.
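For example, in the carbon.super tenant the resulting throttle keys could look like the following (values are illustrative, not defaults):
API context :- /pizzashack/1.0.0
Application :- admin:DefaultApplication
IP address :- 127.0.0.1 (combined with the tenant id)
User :- admin@carbon.super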
10. Advanced Throttling Policy
Can be applied to the whole API or API resource
The allocated quota can be applied at the following levels:
API/resource level - all users share N requests
User level - each user gets N requests
(Currently this is not configurable in the UI, but the runtime supports it)
Advanced throttling can be implemented using conditional
groups based on the following parameters:
IP address, IP range
HTTP headers
Query parameters
JWT claim conditions
Multiple conditional groups can be added (see the example below)
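For example, one conditional group might limit requests from the IP range 10.100.1.1 - 10.100.1.255 to 1000 requests per minute, while a second group limits requests carrying a particular HTTP header (say, a User-Agent value identifying a mobile client) to 100 requests per minute. The limits and values here are illustrative, not defaults.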
12. Subscription Throttling with burst controlling
Defines the request quota per subscription; N requests are
shared among all users of the subscription.
Supports both request count and bandwidth
The throttle key is a combination of applicationId,
apiContext, and apiVersion
E.g. 1:/pizzashack/1.0.0:1.0.0
Burst control can be used to limit abnormally high
usage of APIs within short time windows.
14. Application Throttling
The application quota is allocated per user of the application
Supports both request count and bandwidth
Throttling Key :- {applicationId}:{authorizedUser}
E.g. 3:admin@carbon.super
17. Custom Throttling Policies
Used to implement complex throttling policies
Can be added only by a system admin
Applied globally to every API
Supported throttling parameters:
apiContext, apiVersion, resourceKey, userId, appId, apiTenant, appTenant
Contains a key template and a Siddhi query
The key template must be unique; two custom throttling
policies cannot have the same key template
18. Sample Custom Throttling Policy
The following custom throttling policy allows the user
admin@carbon.super to make 5 requests per minute
to the pizza shack API.
Key Template :- $userId:$apiContext
Siddhi Query :-
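(The query itself appeared as an image in the original slides. The sketch below matches the stated policy and follows the custom-policy sample in the WSO2 API Manager documentation; RequestStream, EligibilityStream, and ResultStream are the stream names the Traffic Manager expects.)

    FROM RequestStream
    SELECT userId,
           (userId == 'admin@carbon.super' and apiContext == '/pizzashack/1.0.0') AS isEligible,
           str:concat('admin@carbon.super', ':', '/pizzashack/1.0.0') AS throttleKey
    INSERT INTO EligibilityStream;

    FROM EligibilityStream[isEligible == true]#throttler:timeBatch(1 min)
    SELECT throttleKey, (count(userId) >= 5) AS isThrottled, expiryTimeStamp
    GROUP BY throttleKey
    INSERT ALL EVENTS INTO ResultStream;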
24. Distributed Deployment (with Traffic Manager)
Do I need more servers (hardware)?
Will it add additional costs to production support?
Do I need to run the Traffic Manager as a separate server?
27. Data receiver patterns - Failover receiver
In this pattern we connect gateway workers to two or more
Traffic Managers.
If one goes down, another can act as the Traffic Manager for
the gateway.
The gateway receives throttle decision updates from both (or
more) Traffic Managers using the failover data receiver pattern.
This guarantees gateway connectivity to a Traffic Manager at
any given time.
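The gateway consumes throttle decisions over a JMS topic, so the failover receiver pattern is expressed in the broker list of the gateway's jndi.properties. A minimal sketch, assuming the AMQP failover syntax used by the WSO2 Message Broker client (hosts and credentials are placeholders):

    connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?failover='roundrobin'%26brokerlist='tcp://tm1.local:5672;tcp://tm2.local:5672'
    topic.throttleData = throttleData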
29. Data publisher patterns
The data publisher can be configured for load balancing,
failover, or a combination of both:
Failover data publishing.
Load balance data publishing.
Publishing to multiple receiver groups with load balancing.
Publishing to multiple receiver groups with failover.
Publishing to all receivers.
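All of these patterns are selected through the gateway's throttling data publisher section in api-manager.xml; the value of ReceiverUrlGroup determines the pattern. A minimal sketch, with element names as in the API Manager 2.0.0 Traffic Manager documentation and placeholder hosts and credentials:

    <ThrottlingConfigurations>
        <EnableAdvanceThrottling>true</EnableAdvanceThrottling>
        <DataPublisher>
            <Enabled>true</Enabled>
            <Type>Binary</Type>
            <ReceiverUrlGroup>tcp://tm1.local:9611</ReceiverUrlGroup>
            <AuthUrlGroup>ssl://tm1.local:9711</AuthUrlGroup>
            <Username>admin</Username>
            <Password>admin</Password>
        </DataPublisher>
    </ThrottlingConfigurations>

The pattern-specific ReceiverUrlGroup values are sketched under each pattern below.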
31. Failover data publishing.
Events are first sent to Traffic Manager Receiver-1. If it is
unavailable, events are sent to Traffic Manager
Receiver-2. Further, if that is also unavailable, events are
sent to Traffic Manager Receiver-3.
In this scenario event duplication is false, because an event
always goes to only one receiver; only if that receiver fails
does the event go to one of the other available nodes.
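Assuming the URL-group syntax of the WSO2 data agent, failover publishing is expressed by separating the receiver URLs with a pipe (hosts are placeholders):

    <ReceiverUrlGroup>{tcp://tm1.local:9611 | tcp://tm2.local:9611 | tcp://tm3.local:9611}</ReceiverUrlGroup>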
33. Load Balance Data Publishing
Load balanced publishing is done in a round-robin
manner, sending each event to each receiver in circular
order, without any priority.
This significantly reduces data loss and
provides more concurrency. One message always
goes to exactly one data receiver, so event duplication
does not happen.
In this scenario, event duplication should therefore be marked as false.
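Under the same assumed syntax, load-balanced publishing uses comma-separated receiver URLs:

    <ReceiverUrlGroup>{tcp://tm1.local:9611, tcp://tm2.local:9611}</ReceiverUrlGroup>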
35. Data Publishing for Multiple Groups with Load Balancing Within a Group
The data publisher pushes events to both groups.
Since there are multiple nodes within each group, it sends an
event to only one node of a group at a given time, in round-robin
fashion.
That means within group A the first event goes to Traffic
Manager 01, the next goes to Traffic Manager 02, and so on.
If Traffic Manager node 01 is unavailable, all traffic
goes to Traffic Manager node 02, which addresses failover
scenarios.
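Under the same assumed syntax, each group is enclosed in braces, with commas inside a group for load balancing:

    <ReceiverUrlGroup>{tcp://tm1.local:9611, tcp://tm2.local:9611}, {tcp://tm3.local:9611, tcp://tm4.local:9611}</ReceiverUrlGroup>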
37. Data Publishing to Multiple Receiver Groups with Failover Within a Group
The data publisher pushes events to both groups.
Since there are multiple nodes within each group, it sends an
event to only one node of a group at a given time.
If that node goes down, the event publisher sends
events to another node within the same group.
This model guarantees message publishing to at least one
node within each server group.
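Under the same assumed syntax, pipes inside each braced group give failover within the group:

    <ReceiverUrlGroup>{tcp://tm1.local:9611 | tcp://tm2.local:9611}, {tcp://tm3.local:9611 | tcp://tm4.local:9611}</ReceiverUrlGroup>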
39. Publishing to all receivers.
In this scenario we send all events to more
than one Traffic Manager receiver.
This approach is mainly used when other servers
analyze events together with the Traffic Manager servers.
You can use this functionality to publish the same event to
both servers at the same time.
This is useful for performing real-time analytics with CEP
while the Traffic Manager performs throttling in near real
time on the same data.
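Under the same assumed syntax, placing each receiver in its own braced group publishes every event to every receiver, for example to a Traffic Manager and a CEP server (hosts are placeholders):

    <ReceiverUrlGroup>{tcp://tm1.local:9611}, {tcp://cep1.local:9611}</ReceiverUrlGroup>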