WSO2 API Manager
Understanding How Your APIs are Being Traffic
Controlled
By
Sanjeewa Malalgoda
Sam Sivayogam
Agenda
 What's new in API Manager 2.0.0 Throttling
 Policy deployment and throttling
 Different Throttling Policies
 Using Custom Throttling Policies
 Scaling Traffic Manager deployment
 Distributed Traffic Management Solution
 Distributed Traffic Management Solution with HA
 Data Receiver Patterns
 Data Publisher Patterns
 Q&A
What's New In API Manager 2.0.0 Throttling
 Throttling decisions are made in the Traffic Manager, which is
powered by the Siddhi runtime.
 Flexibility to design throttling policies based on both
request count and bandwidth
 Support for instantaneous request blocking based on user, IP address, application and API
 Extensible and flexible: advanced rules can be defined based on API properties such as headers, users, JWT claims, etc.
 Burst control to prevent overuse of APIs within short time windows
 Ability to write custom throttle policies based on need
 Ability to attach advanced throttling policies to entire APIs and to individual resources
 Rich user interfaces for defining throttle policies
 Wide range of throttle policy time intervals: minutes, hours, days, weeks and years
More Features
Policy deployment and Throttling
Different Throttling Policies
 Application Throttling
 Subscription Throttling with burst controlling
 Advanced Throttling
 Custom Throttling Policies
 Blocking conditions
Policy execution order
Blocking Policies
 API invocations can be blocked instantaneously.
 Evaluated at tenant level
 Block policies can be created using the following parameters:
 API context - Blocks all requests for a particular API. The throttle key will be the complete context of the API URL.
 Application - Blocks requests from one application. The throttle key will be a combination of subscriber name and application name.
 IP Address - Blocks a specific IP address. The throttle key will be the IP address of the incoming message plus the tenant ID.
 User - Blocks all requests coming from a specific user. The throttle key will be the username in the message context.
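For example, blocking condition values might look like this (hypothetical values):
API context :- /pizzashack/1.0.0
Application :- admin:DefaultApplication
IP Address :- 127.0.0.1 (in tenant carbon.super)
User :- admin@carbon.super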
Advanced Throttling Policy
 Can be applied to the whole API or API resource
 The allocated quota can be applied at the following levels:
 API/Resource level - all users share N requests
 User level - each user gets N requests (currently not configurable in the UI, but the runtime supports it)
 Advanced throttling can be implemented using conditional groups based on the following parameters:
 IP address, IP range
 HTTP headers
 Query parameters
 JWT Condition policy
 Can add multiple Conditional Groups
Default Limits, Condition Groups, Conditions and Group Limits (admin UI screenshot)
Subscription Throttling with burst controlling
 Request quota per subscription: N requests are shared among all users of the subscription.
 Supports both request count and bandwidth
 Throttling key will be a combination of application Id,
apiContext & apiVersion
 Eg :- 1:/pizzashack/1.0.0:1.0.0
 Burst control can be used to limit abnormally high usage of APIs within short intervals.
Application Throttling
 Quota per application will be allocated per user
 Supports both request count and bandwidth
 Throttling Key :- {applicationId}:{authorizedUser}
Eg :- 3:admin@carbon.super
User Inputs Converted to a Siddhi Query
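The conversion itself is shown as a screenshot in the original slide. As an illustrative sketch only (the tier name '10PerMin' and the limit of 10 are assumptions, not the exact generated plan), an application-level policy of 10 requests per minute could be compiled into a Siddhi execution plan of roughly this shape:

FROM RequestStream
SELECT messageID, (appTier == '10PerMin') AS isEligible, appKey AS throttleKey
INSERT INTO EligibilityStream;

FROM EligibilityStream[isEligible == true]#throttler:timeBatch(1 min)
SELECT throttleKey, (count(messageID) >= 10) AS isThrottled, expiryTimeStamp
GROUP BY throttleKey
INSERT ALL EVENTS INTO ResultStream;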
Custom Throttling Policies
 Used to implement complex throttling policies
 Can be added only by a system admin
 Applied globally for every API
 Supported Throttling parameters
 apiContext, apiVersion, resourceKey, userId, appId, apiTenant, appTenant
 Contains Key Template and Siddhi Query
 Key Template must be unique. Two custom Throttling
Policies cannot have the same key template
Sample Custom Throttling Policy
 The following custom throttling policy allows the
admin@carbon.super user to make 5 requests per minute
to the PizzaShack API
Key Template :- $userId:$apiContext
Siddhi Query :-
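The query is shown as an image in the original slide. Given the key template above, it could look like the following sketch (the RequestStream/EligibilityStream/ResultStream names follow the convention visible in the event log later in this deck; treat them as assumptions):

FROM RequestStream
SELECT userId, (userId == 'admin@carbon.super' and apiContext == '/pizzashack/1.0.0') AS isEligible,
str:concat('admin@carbon.super', ':', '/pizzashack/1.0.0') AS throttleKey
INSERT INTO EligibilityStream;

FROM EligibilityStream[isEligible == true]#throttler:timeBatch(1 min)
SELECT throttleKey, (count(userId) >= 5) AS isThrottled, expiryTimeStamp
GROUP BY throttleKey
INSERT ALL EVENTS INTO ResultStream;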
Logging Events Using the Event Publisher
Sample Event Log
[2016-09-29 18:09:59,703] INFO -
LoggerEventAdapter Unique ID: Request_Data_logger,
Event:
messageID:urn:uuid:527854d3-d059-4e9b-9aea-0d08d8b0499f,
appKey:1:admin@carbon.super,
appTier:Unlimited,
subscriptionKey:1:/pizzashack/1.0.0:1.0.0,
apiKey:/pizzashack/1.0.0:1.0.0,
apiTier:,
subscriptionTier:Sam,
resourceKey:/pizzashack/1.0.0/1.0.0/menu:GET,
resourceTier:Unlimited,
userId:admin@carbon.super,
apiContext:/pizzashack/1.0.0,
apiVersion:1.0.0,
appTenant:carbon.super,
apiTenant:carbon.super,
appId:1,
apiName:PizzaShackAPI,
propertiesMap:{ip=174327232}
Scale Traffic Management Deployment
 Distributed Traffic Management Solution
 Distributed Traffic Management Solution with HA
 Data Receiver Patterns
 Data Publisher patterns
Distributed Deployment (default)
Distributed Deployment (with Traffic Manager)
 Do I need more servers (hardware)?
 Will it add additional costs to production support?
 Do I need to run the traffic manager as a separate server?
Distributed Deployment (with Traffic Manager)
Distributed Traffic Management Solution
Distributed Traffic Management Solution + HA
Data receiver patterns - Failover receiver
 In this pattern we connect gateway workers to two or more
traffic managers.
 If one goes down, the other can act as the traffic manager for
the gateway.
 The gateway receives throttle decision updates from both (or
more) traffic managers using the failover data receiver pattern.
 This guarantees gateway connectivity to a traffic manager at
any given time.
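As a sketch, the gateway's JMS connection that receives throttle decisions could list both traffic managers in a failover broker list (hosts, credentials and exact property names below are placeholders and vary by setup):

connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?failover='roundrobin'%26brokerlist='tcp://tm1.local:5672;tcp://tm2.local:5672'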
Data receiver patterns - Failover receiver
Data publisher patterns
 The data publisher can be configured for load balancing,
failover, or a combination of both:
 Failover data publishing
 Load balance data publishing
 Load balance data publishing for multiple groups:
 Publishing to multiple receiver groups with load balancing
 Publishing to multiple receiver groups with failover
 Publishing to all receivers
Failover data publishing.
Failover data publishing.
 Events are first sent to Traffic Manager Receiver-1. If it is
unavailable, events are sent to Traffic Manager Receiver-2;
if that is also unavailable, events are sent to Traffic Manager
Receiver-3.
 In this scenario event duplication is set to false, because an
event always goes to only one receiver; only if that receiver
fails does it go to one of the other available nodes.
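With the databridge agent's receiver URL syntax, such a failover publisher could be configured as below (hostnames are placeholders; '|' separates the failover endpoints):
Eg :- tcp://tm-receiver1:9611|tcp://tm-receiver2:9611|tcp://tm-receiver3:9611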
Load balance data publishing.
 Load balanced publishing is done in a round-robin manner,
sending each event to each receiver in circular order without
any priority.
 This significantly reduces data loss and provides more
concurrency. One message always goes to exactly one data
receiver, so events are not duplicated.
 Therefore, in this scenario the event duplication flag needs
to be set to false.
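In the receiver URL syntax this could be expressed as below (hostnames are placeholders; a comma inside '{ }' means round-robin load balancing):
Eg :- {tcp://tm-receiver1:9611,tcp://tm-receiver2:9611}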
Load balance data publishing.
Data publishing for multiple groups with load
balance within group.
 Data publisher will push events to both groups.
 Since there are multiple nodes within each group, it sends
each event to only one node at a time, in round-robin fashion.
 That means within group A the first request goes to traffic
manager 01, the next goes to traffic manager 02, and so on.
 If traffic manager node 01 is unavailable, all traffic goes to
traffic manager node 02, which also covers failover scenarios.
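A possible receiver URL for this pattern (hostnames are placeholders; each '{ }' group is a receiver group, with load balancing inside the group):
Eg :- {tcp://groupA-tm1:9611,tcp://groupA-tm2:9611},{tcp://groupB-tm1:9611,tcp://groupB-tm2:9611}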
Data publishing for multiple groups with load
balance within group.
Data publishing to multiple receiver groups with
failover within group.
 Data publisher will push events to both groups.
 Since there are multiple nodes within each group, it sends
each event to only one node at a given time.
 If that node goes down, the event publisher sends events to
another node within the same group.
 This model guarantees message publishing to at least one
node within each server group.
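A possible receiver URL for this pattern (hostnames are placeholders; '|' gives failover inside each '{ }' group):
Eg :- {tcp://groupA-tm1:9611|tcp://groupA-tm2:9611},{tcp://groupB-tm1:9611|tcp://groupB-tm2:9611}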
Data publishing to multiple receiver groups with
failover within group.
Publishing to all receivers.
Publishing to all receivers.
 In this scenario we will be sending all the events to more
than one Traffic Manager receiver.
 This approach is mainly followed when you use other servers
to analyze events together with Traffic Manager servers.
 You can use this functionality to publish the same event to
both servers at the same time.
 This is useful for performing real-time analytics with CEP
while performing throttling with the Traffic Manager in near
real time on the same data.
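A possible receiver URL for this pattern (hostnames are placeholders; with one receiver per '{ }' group, every group receives a copy of each event):
Eg :- {tcp://tm-receiver:9611},{tcp://cep-receiver:9611}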