Slideshows by User: andreasmmetzger / Fri, 24 Feb 2023 11:10:29 GMT / SlideShare feed for slideshows by user andreasmmetzger

Explainable Online Reinforcement Learning for Adaptive Systems /slideshow/explainable-online-reinforcement-learning-for-adaptive-systems/256091223 se23-230224111029-039da5fe
An adaptive system can automatically maintain its requirements in the presence of dynamic environment changes. Developing an adaptive system is difficult due to design time uncertainty: how the environment will change at runtime and what precise effects adaptations will have on the running system are typically unknown at design time. Online reinforcement learning, i.e., employing reinforcement learning (RL) at runtime, is an emerging approach to realize self-adaptive systems in the presence of design time uncertainty. Online RL learns via actual operational data and thereby leverages feedback only available at runtime. Deep RL algorithms represent the learned knowledge as a neural network. Compared with classical RL algorithms, Deep RL algorithms offer important benefits for adaptive systems: Deep RL can generalize over unseen inputs, handle continuous environment states and adaptation actions, and readily capture concept and data drifts. Yet, a fundamental problem of Deep RL is that learned knowledge is not represented explicitly. For a human, it is practically impossible to relate neural network parameters to concrete RL decisions. Understanding RL decisions is key to (1) increasing trust and (2) facilitating debugging. Debugging is especially relevant for adaptive systems, because the reward function, which quantifies the feedback to the RL algorithm, must be explicitly defined by developers, thus introducing a source of human error. We introduce XRL-DINE to make Deep RL decisions for self-adaptive systems explainable.
XRL-DINE enhances and combines explainable RL techniques from machine learning research. We present a proof-of-concept implementation of XRL-DINE, as well as qualitative and quantitative results that demonstrate the usefulness of XRL-DINE.

Felix Feit (1), Andreas Metzger (2), Klaus Pohl (3)
1 paluno, University of Duisburg-Essen, Essen, Germany, f.m.feit@gmail.com
2 paluno, University of Duisburg-Essen, Essen, Germany, andreas.metzger@paluno.uni-due.de
3 paluno, University of Duisburg-Essen, Essen, Germany, klaus.pohl@paluno.uni-due.de

Abb. 1: Illustration of how learned knowledge is represented for the Cliff Walk example from [SB18] (a table of per-action Q-values for UP, LEFT, and DOWN per state; the figure's values are not recoverable from this extraction).
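The Cliff Walk figure above shows per-action Q-values of the kind tabular RL produces. As an illustrative sketch, the following minimal Q-learning loop yields such values; the grid layout and rewards follow the standard Sutton and Barto example, but the hyperparameters are arbitrary choices, not taken from the paper:

```python
import random

# Cliff Walk as in Sutton & Barto [SB18]: 4x12 grid, start bottom-left,
# goal bottom-right; stepping into the cliff costs -100 and resets to start.
ROWS, COLS = 4, 12
ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT"]
MOVES = {"UP": (-1, 0), "DOWN": (1, 0), "LEFT": (0, -1), "RIGHT": (0, 1)}
START, GOAL = (3, 0), (3, 11)

def step(state, action):
    dr, dc = MOVES[action]
    r = min(max(state[0] + dr, 0), ROWS - 1)
    c = min(max(state[1] + dc, 0), COLS - 1)
    if r == 3 and 0 < c < 11:                    # fell into the cliff
        return START, -100.0, False
    return (r, c), -1.0, (r, c) == GOAL

def train(episodes=500, alpha=0.5, gamma=1.0, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {((r, c), a): 0.0
         for r in range(ROWS) for c in range(COLS) for a in ACTIONS}
    for _ in range(episodes):
        s, done = START, False
        while not done:
            # epsilon-greedy exploration over the four actions
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2, reward, done = step(s, a)
            target = reward + (0.0 if done else gamma * max(q[(s2, x)] for x in ACTIONS))
            q[(s, a)] += alpha * (target - q[(s, a)])  # Q-learning update
            s = s2
    return q

q = train()
```

After training, the greedy path hugs the cliff edge, and the Q-values along it (about -13 at the start, decreasing toward -1 next to the goal) match the magnitudes visible in the figure caption's example.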

Explainable Online Reinforcement Learning for Adaptive Systems from Andreas Metzger
Data Quality Issues in Online Reinforcement Learning for Self-Adaptive Systems (Keynote) /slideshow/data-quality-issues-in-online-reinforcement-learning-for-selfadaptive-systems-keynote/254265584 sea4dq-keynote-221117034633-7866111b
Online reinforcement learning is an emerging machine learning approach that addresses the challenge of design-time uncertainty faced when building self-adaptive systems. Online reinforcement learning means that the self-adaptive system can learn from data only available at run time. After introducing the fundamentals of self-adaptive systems and reinforcement learning, the keynote discusses three relevant issues and recent solutions related to data quality in online reinforcement learning for self-adaptive systems.
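As a rough sketch of what learning from run-time data can look like, the following toy loop interleaves monitoring, epsilon-greedy action selection, and incremental value updates. The adaptation actions and the reward model are invented placeholders, not taken from the keynote:

```python
import random

# Hypothetical online learning loop for a self-adaptive system: monitor the
# environment, pick an adaptation action, execute it, observe the reward from
# run-time feedback, and update value estimates incrementally.
ACTIONS = ["add_server", "remove_server", "dim_content", "no_op"]

def observed_reward(action, load):
    # Invented placeholder for run-time feedback, e.g. derived from response
    # time: under high load, adding a server is the rewarding choice.
    ideal = "add_server" if load > 0.7 else "no_op"
    return 1.0 if action == ideal else 0.0

def online_loop(steps=5000, eps=0.1, seed=1):
    rng = random.Random(seed)
    value = {a: 0.0 for a in ACTIONS}   # running value estimate per action
    count = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        load = rng.random()             # monitored environment state
        if rng.random() < eps:          # explore: try a random action
            action = rng.choice(ACTIONS)
        else:                           # exploit: current best estimate
            action = max(ACTIONS, key=value.get)
        r = observed_reward(action, load)
        count[action] += 1
        value[action] += (r - value[action]) / count[action]  # incremental mean
    return value

values = online_loop()
```

Note that this context-free learner ignores the monitored `load` when choosing; a contextual or Deep RL learner would condition its choice on the monitored state, which is where data quality of that run-time data becomes critical.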

Thu, 17 Nov 2022 03:46:33 GMT
Data Quality Issues in Online Reinforcement Learning for Self-Adaptive Systems (Keynote) from Andreas Metzger
Explaining Online Reinforcement Learning Decisions of Self-Adaptive Systems /andreasmmetzger/explaining-online-reinforcement-learning-decisions-of-selfadaptive-systems acsosweb-220923152500-4a52bbc8
Design time uncertainty poses an important challenge when developing the self-adaptation logic of a self-adaptive system. To define when the system should adapt, all potential environment states need to be anticipated, which is infeasible in most cases due to incomplete information at design time. Defining how the system should adapt when facing a new environment state requires understanding the precise effect of an adaptation, which may not be known at design time. Online reinforcement learning, i.e., employing reinforcement learning (RL) at runtime, is an emerging approach to realizing self-adaptive systems in the presence of design time uncertainty. Using Online RL, the self-adaptive system can learn from actual operational data and leverage information only available at runtime. Recently, Deep RL has been gaining interest; here, learned knowledge is represented as a neural network. Deep RL can generalize well over unseen inputs, as well as handle continuous environment states and adaptation actions. A fundamental problem of Deep RL is that learned knowledge is not explicitly represented. For a human, it is practically impossible to relate the parametrization of the underlying neural network to concrete RL decisions, and thus Deep RL essentially appears as a black box. Yet, understanding the decisions made by Deep RL is key to (1) increasing trust in these systems, and (2) facilitating their debugging. Facilitating the debugging of Deep RL for self-adaptive systems is especially relevant because Online RL does not completely eliminate manual engineering effort. As self-adaptive systems do not have an inherent reward signal, the reward function must be explicitly defined, which introduces a potential source for human error. To address the challenge of explaining Deep RL for self-adaptive systems, we enhance and combine two existing explainable RL techniques from the AI literature.
The combined technique, XRL-DINE, overcomes the respective limitations of the individual techniques. We present a proof-of-concept implementation of XRL-DINE, as well as qualitative and quantitative results of applying XRL-DINE to the self-adaptive system exemplar SWIM -- a self-adaptive web application.

Fri, 23 Sep 2022 15:25:00 GMT
Explaining Online Reinforcement Learning Decisions of Self-Adaptive Systems from Andreas Metzger
Antrittsvorlesung - APL.pptx https://de.slideshare.net/andreasmmetzger/antrittsvorlesung-aplpptx antrittsvorlesung-apl-220427121303
An adaptive system is able to adapt itself at runtime and thus react to dynamic changes in its environment. A key challenge when developing adaptive systems is specifying when and how the system should adapt at runtime. This requires anticipating future environment situations as well as precisely knowing the effects that the available adaptations have on the system. Due to incomplete knowledge at development time (so-called "design time uncertainty"), neither is fully possible in general. This talk presents online reinforcement learning (online RL) as a solution approach to this challenge. Online RL learns suitable adaptations from runtime feedback. The talk first introduces the fundamentals of adaptive systems and RL. The main part presents two concrete problems in applying online RL to adaptive systems: (1) Adaptive systems typically offer a large number of adaptation options; with current online RL techniques for adaptive systems, a large number of adaptation options leads to a slow learning process. (2) Adaptive systems are often deployed in non-stationary environments, meaning that the effects of adaptations can change over time; current online RL techniques for adaptive systems cannot handle such non-stationary environments automatically. The talk presents current research approaches for addressing these two problems, and closes with a critical discussion and an outlook on further research questions.
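One way to make the non-stationarity problem (2) concrete: a sample-average value estimate weights old and new feedback equally and therefore reacts slowly to drift, while a constant step size tracks the recent reward level. The numbers below are an invented illustration, not results from the talk:

```python
import random

# Sketch (not from the talk): a sample-average estimate weights all past
# feedback equally, while a constant step size tracks recent reward levels,
# which matters when adaptation effects drift in a non-stationary environment.

def track(rewards, alpha=None):
    est, n = 0.0, 0
    for r in rewards:
        n += 1
        step = alpha if alpha is not None else 1.0 / n  # 1/n = sample average
        est += step * (r - est)
    return est

rng = random.Random(0)
# The reward of one adaptation drifts from ~0.2 to ~0.9 halfway through.
rewards = ([0.2 + rng.gauss(0, 0.05) for _ in range(500)] +
           [0.9 + rng.gauss(0, 0.05) for _ in range(500)])
sample_avg = track(rewards)        # dragged toward the old reward level
constant = track(rewards, 0.1)     # follows the recent reward level
```

After the drift, the constant-step estimate sits near the new level while the sample average remains roughly midway between the two levels.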

Wed, 27 Apr 2022 12:13:02 GMT
Antrittsvorlesung - APL.pptx from Andreas Metzger
Feature Model-Guided Online Reinforcement Learning for Self-Adaptive Services /andreasmmetzger/feature-modelguided-online-reinforcement-learning-for-selfadaptive-services icsoc2020-web-201215080306
A self-adaptive service can maintain its QoS requirements in the presence of dynamic environment changes. To develop a self-adaptive service, service engineers have to create self-adaptation logic encoding when the service should execute which adaptation actions. However, developing self-adaptation logic may be difficult due to design time uncertainty; e.g., anticipating all potential environment changes at design time is in most cases infeasible. Online reinforcement learning addresses design time uncertainty by learning suitable adaptation actions through interactions with the environment at runtime. To learn more about its environment, reinforcement learning has to select actions that were not selected before, which is known as exploration. How exploration happens has an impact on the performance of the learning process. We focus on two problems related to how a service's adaptation actions are explored: (1) Existing solutions randomly explore adaptation actions and thus may exhibit slow learning if there are many possible adaptation actions to choose from. (2) Existing solutions are unaware of service evolution, and thus may explore new adaptation actions introduced during such evolution rather late. We propose novel exploration strategies that use feature models (from software product line engineering) to guide exploration in the presence of many adaptation actions and in the presence of service evolution. Experimental results for a self-adaptive cloud management service indicate an average speed-up of the learning process of 58.8% in the presence of many adaptation actions, and of 61.3% in the presence of service evolution. The improved learning performance in turn led to an average QoS improvement of 7.8% and 23.7%, respectively.
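The idea of feature-model-guided exploration can be sketched as follows: rather than exploring all action combinations uniformly at random, exploration draws only from configurations that are valid under the feature model. The feature names and the constraint below are invented for illustration; this is not the paper's actual model:

```python
import itertools
import random

# Toy feature model with three optional features and one cross-tree
# constraint. Names and the constraint are invented for illustration.
FEATURES = ["cache", "compression", "dim_content"]

def is_valid(config):
    # Invented constraint: content dimming excludes compression.
    return not (config["dim_content"] and config["compression"])

def valid_configs():
    # Enumerate all 2**n feature combinations, keep only the valid ones.
    return [cfg
            for bits in itertools.product([False, True], repeat=len(FEATURES))
            for cfg in [dict(zip(FEATURES, bits))]
            if is_valid(cfg)]

def explore(rng):
    # Feature-model-guided exploration: sample uniformly from the valid
    # configurations instead of from all 2**n combinations.
    return rng.choice(valid_configs())

sampled = explore(random.Random(0))
```

Restricting exploration this way shrinks the action space the learner must sample (here from 8 to 6 configurations; in a realistic model the reduction is far larger), which is the mechanism behind the reported learning speed-ups.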
Tue, 15 Dec 2020 08:03:06 GMT
Feature Model-Guided Online Reinforcement Learning for Self-Adaptive Services from Andreas Metzger
Triggering Proactive Business Process Adaptations via Online Reinforcement Learning
Proactive process adaptation can prevent and mitigate upcoming problems during process execution by using predictions about how an ongoing case will unfold. There is an important trade-off with respect to these predictions: Earlier predictions leave more time for adaptations than later predictions, but earlier predictions typically exhibit a lower accuracy than later predictions, because not much information about the ongoing case is available. An emerging solution to address this trade-off is to continuously generate predictions and only trigger proactive adaptations when prediction reliability is greater than a predefined threshold. However, a good threshold is not known a priori. One solution is to empirically determine the threshold using a subset of the training data. While an empirical threshold may be optimal for the training data used and the given cost structure, such a threshold may not be optimal over time due to non-stationarity of process environments, data, and cost structures. Here, we use online reinforcement learning as an alternative solution to learn when to trigger proactive process adaptations based on the predictions and their reliability at run time. Experimental results for three public data sets indicate that our approach may on average lead to 12.2% lower process execution costs compared to empirical thresholding.
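The abstract's idea of learning when to trigger an adaptation, instead of fixing a reliability threshold up front, can be sketched as tabular Q-learning. This is a simplified illustration, not the paper's exact formulation: the state discretization, reward, and hyperparameters are all assumptions, meant only to show an agent learning a trigger policy from (prediction reliability, action, cost) feedback.

```python
import random
from collections import defaultdict

# Illustrative tabular Q-learner: per prediction, decide whether to
# trigger a proactive adaptation. The state is the discretized
# reliability of the current prediction; rewards encode execution costs.
ACTIONS = ["adapt", "wait"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed hyperparameters
Q = defaultdict(float)                   # (state, action) -> value, default 0

def state(reliability, n_bins=10):
    """Discretize a reliability estimate in [0, 1] into one of n_bins states."""
    return min(int(reliability * n_bins), n_bins - 1)

def choose(s):
    """Epsilon-greedy action selection for the trigger decision."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def update(s, a, reward, s_next):
    """Standard Q-learning update from observed cost feedback at run time."""
    best_next = max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
```

Because the update runs online, the learned trigger policy can drift along with non-stationary process environments and cost structures, which is exactly where a statically tuned empirical threshold falls short.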
Tue, 15 Sep 2020 14:30:03 GMT
Triggering Proactive Business Process Adaptations via Online Reinforcement Learning from Andreas Metzger
Data-driven AI for Self-Adaptive Software Systems
A self-adaptive software system is capable of modifying its own structure and behaviour at runtime based on its perception of the environment, of itself and of its requirements. As an example, a self-adaptive web application may deactivate its resource-intensive recommender engine in order to maintain its performance requirements when faced with a sudden increase in workload. This talk will explore the opportunities that data-driven AI offers in building self-adaptive software systems. On the one hand, it will present how machine learning ensembles support accurate and timely proactive decision making for business process management. On the other hand, it will present how reinforcement learning may be employed to build software systems that dynamically improve themselves during operations. The talk closes with a critical look at the challenges entailed in delivering trustworthy AI-based software systems.
Thu, 28 Nov 2019 07:14:16 GMT
Data-driven AI for Self-Adaptive Software Systems from Andreas Metzger
Data-driven Deep Learning for Proactive Terminal Process Management
Big data offers tremendous opportunities for transport process innovation. One key enabling big data technology is predictive data analytics. Predictive data analytics supports business process management by facilitating the proactive adaptation of process instances to mitigate or prevent problems. We present an industry case employing big data for process management innovation at duisport, the world's largest inland container port. In particular, we show how data-driven deep learning facilitates proactive port terminal process management. We demonstrate the feasibility of our deep learning approach by implementing it as part of a terminal productivity cockpit prototype. The terminal productivity cockpit provides decision support to terminal operators for proactive process adaptation. We confirm the desirability of our approach via interviews. We assess the viability of our approach by estimating the improvements in a key business KPI, as well as experimentally measuring the cost savings when compared to terminal operations without using proactive adaptation. We also present our main technical lessons learned regarding the use of big data for predictive analytics.
Thu, 05 Sep 2019 09:30:56 GMT
Data-driven Deep Learning for Proactive Terminal Process Management from Andreas Metzger
Big Data Technology Insights
Big data offers tremendous opportunities for transport process innovation and will have a profound economic and societal impact on mobility and logistics. As an example, with annual growth rates of 3.2% for passenger transport and 4.5% for freight transport in the EU, transforming the current mobility and logistics processes to become significantly more efficient will have major impact. Improvements in operational efficiency empowered by big data are expected to save as much as EUR 440 billion globally in terms of fuel and time within the mobility and logistics sector, as well as reducing 380 megatons of CO2 emissions. The mobility and logistics sector is ideally placed to benefit from big data technologies, as it already manages massive flows of goods and people whilst generating vast amounts of data. This talk reports on the main technical findings and lessons learned regarding the application of big data in the transport domain.
Wed, 10 Jul 2019 11:20:33 GMT
Big Data Technology Insights from Andreas Metzger
Proactive Process Adaptation using Deep Learning Ensembles
Proactive process adaptation can prevent and mitigate upcoming problems during process execution. Proactive adaptation decisions are based on predictions about how an ongoing process instance will unfold up to its completion. On the one hand, these predictions must have high accuracy, as, for instance, false negative predictions mean that necessary adaptations are missed. On the other hand, these predictions should be produced early during process execution, as this leaves more time for adaptations, which typically have non-negligible latencies. However, there is an important tradeoff between prediction accuracy and earliness. Later predictions typically have a higher accuracy, because more information about the ongoing process instance is available. To address this tradeoff, we use an ensemble of deep learning models that can produce predictions at arbitrary points during process execution and that provides reliability estimates for each prediction. We use these reliability estimates to dynamically determine the earliest prediction with sufficient accuracy, which is used as the basis for proactive adaptation. Experimental results indicate that our dynamic approach may offer cost savings of 27% on average when compared to using a static prediction point.
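One common way to obtain a per-prediction reliability estimate from an ensemble, and to use it to pick the earliest sufficiently reliable prediction, can be sketched as follows. This is an assumed, simplified scheme (majority vote with the agreement fraction as reliability), not necessarily the exact estimator used in the paper; the deep learning models are abstracted away as lists of 0/1 votes.

```python
def ensemble_predict(votes):
    """Majority vote over ensemble members, with agreement as reliability.

    votes: list of 0/1 violation predictions, one per ensemble member.
    Returns (majority_label, reliability), where reliability is the
    fraction of members agreeing with the majority.
    """
    ones = sum(votes)
    majority = 1 if ones >= len(votes) / 2 else 0
    agreeing = ones if majority == 1 else len(votes) - ones
    return majority, agreeing / len(votes)

def earliest_reliable(prediction_points, threshold=0.8):
    """Scan prediction points in execution order; return the first
    prediction whose reliability reaches the threshold, or None."""
    for step, votes in enumerate(prediction_points):
        label, reliability = ensemble_predict(votes)
        if reliability >= threshold:
            return step, label, reliability
    return None
```

Early in a process instance the members tend to disagree (low reliability), so the decision is deferred; as more of the instance is observed, agreement grows and the earliest prediction that is reliable enough becomes the basis for the adaptation decision.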
Fri, 07 Jun 2019 11:56:44 GMT
Data-driven AI for Self-adaptive Information Systems /andreasmmetzger/datadriven-ai-for-selfadaptive-information-systems
Artificial Intelligence (AI) offers tremendous potential to benefit citizens, the economy and society. From an industrial point of view, AI means algorithm-based and data-driven information systems that provide machines and people with digital capabilities such as perception, reasoning, learning and even autonomous decision making. AI thereby enables information systems to draw conclusions, learn, adapt and adjust their parameters accordingly. With recent advances in special-purpose hardware and machine learning algorithms, AI is capable of tackling increasingly complex problems. This keynote talk will explore the opportunities that AI offers in building self-adaptive information systems. On the one hand, it will present how ensembles of deep neural networks support proactive decision making for business process execution. On the other hand, it will present how reinforcement learning may be employed to build self-learning information systems. These systems learn from every interaction with their environment in order to dynamically improve themselves during operation. The talk closes with a critical look at the challenges entailed in delivering responsible AI-based information systems.
Mon, 03 Jun 2019 16:26:10 GMT andreasmmetzger@slideshare.net(andreasmmetzger)
Towards an End-to-End Architecture for Run-time Data Protection in the Cloud /slideshow/towards-an-endtoend-architecture-for-runtime-data-protection-in-the-cloud/113768547
Protecting sensitive data, such as personal data or confidential business data, is a key concern for the adoption of cloud solutions. Protecting data in the cloud is made particularly challenging by the dynamic changes that cloud systems may undergo at run time, as well as the complex interactions among multiple software and hardware components, services, and stakeholders. Conformance to data protection requirements in such a dynamic environment can no longer be ensured at design time; e.g., due to the dynamic changes imposed by replication and migration of components. Instead, run-time data protection mechanisms are required. Given the complex interactions in the cloud, individual data protection mechanisms, such as access control, encryption, or secure hardware, are not sufficient, as they only help protect specific parts of a cloud system. An attacker may use the “weakest link in the chain” to get access to sensitive data. To address these challenges, we propose combining multiple existing data protection approaches and extending them to run time, ultimately delivering an end-to-end architecture for run-time data protection in the cloud. We describe the process of designing this architecture, present the architecture itself, and validate and substantiate its practical applicability in a commercial case study.
Mon, 10 Sep 2018 14:08:41 GMT andreasmmetzger@slideshare.net(andreasmmetzger)
Considering Non-sequential Control Flows for Process Prediction with Recurrent Neural Networks /slideshow/considering-nonsequential-control-flows-for-process-prediction-with-recurrent-neural-networks/112262260
Predictive business process monitoring aims to predict how an ongoing process instance will unfold up to its completion, thereby facilitating proactive responses to anticipated problems. Recurrent Neural Networks (RNNs), a special form of deep learning technique, are gaining interest as a prediction technique in BPM. However, non-sequential control flows may make the prediction task more difficult, because RNNs were conceived for learning and predicting sequences of data. Based on an industrial dataset, we provide experimental results comparing different alternatives for considering non-sequential control flows. In particular, we consider cycles and parallelism for business process prediction with RNNs.
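To illustrate why cycles complicate sequence prediction, the following sketch (a hypothetical trace encoding, not the paper's setup) turns process traces into prefix/next-activity training pairs; traces containing loops yield variable-length prefixes, which must be padded or truncated to the fixed input length an RNN typically expects:

```python
# Sketch: build (padded_prefix, next_activity) training pairs from a process
# trace. A cycle in the control flow (here: a repeated "check" activity)
# produces longer, variable-length sequences than a purely sequential flow.

PAD = "<pad>"

def prefix_pairs(trace, max_len):
    """Yield (padded_prefix, next_activity) pairs for one trace."""
    pairs = []
    for i in range(1, len(trace)):
        prefix = trace[:i][-max_len:]                  # truncate long prefixes
        padded = [PAD] * (max_len - len(prefix)) + prefix
        pairs.append((padded, trace[i]))
    return pairs
```

For instance, `prefix_pairs(["receive", "check", "check", "ship"], 3)` produces three pairs, where the repeated "check" caused by the loop appears both as a prediction target and inside later prefixes.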
Thu, 30 Aug 2018 13:53:02 GMT andreasmmetzger@slideshare.net(andreasmmetzger)
Big Data Value in Mobility and Logistics /slideshow/big-data-value-in-mobility-and-logistics/91967264
Big data is expected to have a profound economic and societal impact in mobility and logistics. Examples include 500 billion USD in value worldwide in the form of time and fuel savings, and savings of 380 megatons of CO2. With freight transport activities projected to increase by 40% in 2030, transforming the current mobility and logistics processes to become significantly more efficient will have a profound impact. A 10% efficiency improvement may lead to EU cost savings of 100 billion EUR. This keynote will highlight the key value dimensions for big data in mobility and logistics. The talk will present examples from the EU Horizon 2020 lighthouse project TransformingTransport, demonstrating the transformations that big data can bring to the mobility and logistics sector. TransformingTransport addresses 13 pilots in seven highly relevant pilot domains within mobility and transport that will benefit from big data solutions and the increased availability of data. The talk will close with an outlook on barriers and future opportunities.
Mon, 26 Mar 2018 16:41:43 GMT andreasmmetzger@slideshare.net(andreasmmetzger)
Predictive Business Process Monitoring considering Reliability and Risk /slideshow/predictive-business-process-monitoring-considering-reliability-and-risk/90015941
This presentation covers techniques and experimental results for considering prediction reliability and risk during predictive business process monitoring. Considering reliability and risk provides additional decision support for proactive process adaptation.
Thu, 08 Mar 2018 08:45:53 GMT andreasmmetzger@slideshare.net(andreasmmetzger)
Risk-based Proactive Process Adaptation /slideshow/riskbased-proactive-process-adaptation/86697638
Proactive process adaptation facilitates preventing or mitigating upcoming problems during process execution, such as process delays. Key for proactive process adaptation is that adaptation decisions are based on accurate predictions of problems. Previous research focused on improving aggregate accuracy, such as precision or recall. However, aggregate accuracy provides little information about the error of an individual prediction. In contrast, so-called reliability estimates provide such additional information. Previous work has shown that considering reliability estimates can improve decision making during proactive process adaptation and can lead to cost savings. So far, only constant cost functions have been considered. In practice, however, costs may differ depending on the magnitude of the problem; e.g., a longer process delay may result in higher penalties. To capture different cost functions, we exploit numeric predictions computed from ensembles of regression models. We combine reliability estimates and predicted costs to quantify the risk of a problem, i.e., its probability and its severity. Proactive adaptations are triggered if risks are above a pre-defined threshold. A comparative evaluation indicates that cost savings of up to 31%, with 14.8% savings on average, may be achieved by the risk-based approach.
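The risk computation described in the abstract — combining an estimated violation probability with a predicted, magnitude-dependent cost — might be sketched as follows. The cost function, the per-hour penalty, and the threshold value are illustrative assumptions, not figures from the paper:

```python
# Sketch: risk = probability of a problem x its predicted severity (cost).
# The probability could come from a reliability estimate; the severity from
# a non-constant cost function over the numerically predicted delay.

def delay_penalty(predicted_delay_hours):
    """Illustrative non-constant cost function: penalties grow with delay."""
    if predicted_delay_hours <= 0:
        return 0.0                                 # on time: no penalty
    return 100.0 * predicted_delay_hours           # assumed: 100 EUR per hour late

def risk(violation_probability, predicted_delay_hours):
    """Quantify risk as probability times severity."""
    return violation_probability * delay_penalty(predicted_delay_hours)

def should_adapt(violation_probability, predicted_delay_hours, threshold=150.0):
    """Trigger a proactive adaptation when the risk exceeds the threshold."""
    return risk(violation_probability, predicted_delay_hours) > threshold
```

Under this scheme a likely but short delay (probability 0.9, one hour, risk 90) would not trigger an adaptation, while a less certain but long delay (probability 0.5, four hours, risk 200) would.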
Thu, 25 Jan 2018 16:32:37 GMT andreasmmetzger@slideshare.net(andreasmmetzger)
Predictive Process Monitoring Considering Reliability Estimates /slideshow/predictive-process-monitoring-considering-reliability-estimates/77003877
Full Paper Presentation at the 29th International Conference on Advanced Information Systems Engineering (CAiSE 2017), Essen, Germany, June 12-16, 2017, Lecture Notes in Computer Science, E. Dubois and K. Pohl, Eds., vol. 10253. Springer, 2017. https://doi.org/10.1007/978-3-319-59536-8_28 (Open Access)
Fri, 16 Jun 2017 12:07:53 GMT andreasmmetzger@slideshare.net(andreasmmetzger)