Slideshows by User: VincenzoLomonaco
SlideShare feed for Slideshows by User: VincenzoLomonaco
Wed, 30 Aug 2023 05:58:06 GMT

2023-08-22 CoLLAs Tutorial - Beyond CIL.pdf
Wed, 30 Aug 2023 05:58:06 GMT · /slideshow/20230822-collas-tutorial-beyond-cilpdf/260316675
The Deep Continual Learning community should move beyond studying forgetting in Class-Incremental Learning scenarios! In this tutorial, given at #CoLLAs2023, Antonio Carta and I try to explain why and how! 👇 Do you agree?

Continual Learning with Deep Architectures - Tutorial ICML 2021
Tue, 20 Jul 2021 12:07:45 GMT · /VincenzoLomonaco/continual-learning-with-deep-architectures-tutorial-icml-2021
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use them as the foundation for later learning. One of the grand goals of Artificial Intelligence (AI) is building an artificial "continual learning" agent that constructs a sophisticated understanding of the world from its own experience, through the autonomous incremental development of ever more complex knowledge and skills (Parisi, 2019). However, despite early speculations and a few pioneering works (Ring, 1998; Thrun, 1998; Carlson, 2010), very little research effort has been devoted to addressing this vision. Current AI systems greatly suffer from exposure to new data or environments that differ even slightly from the ones on which they were trained (Goodfellow, 2013). Moreover, the learning process is usually constrained to fixed datasets within narrow and isolated tasks, which can hardly lead to the emergence of more complex and autonomous intelligent behaviors. In essence, continual learning and adaptation capabilities, while more often than not considered fundamental pillars of every intelligent agent, have been mostly left out of the main AI research focus.
In this tutorial, we summarize the application of these ideas in light of the more recent advances in machine learning research and in the context of deep architectures for AI (Lomonaco, 2019). Starting from a motivation and a brief history, we link recent Continual Learning advances to previous research endeavours on related topics and summarize the state of the art in terms of major approaches, benchmarks and key results. In the second part of the tutorial we cover more exploratory studies of Continual Learning with low supervised signals and its relationships with other paradigms such as Unsupervised, Semi-Supervised and Reinforcement Learning. We also highlight the impact of recent Neuroscience discoveries on the design of original continual learning algorithms, as well as their deployment in real-world applications. Finally, we underline the notion of continual learning as a key technological enabler for Sustainable Machine Learning and its societal impact, and recap interesting research questions and directions worth addressing in the future.
Authors: Vincenzo Lomonaco, Irina Rish
Official Website: https://sites.google.com/view/cltutorial-icml2021

Toward Continual Learning on the Edge
Thu, 13 Feb 2020 12:29:38 GMT · /slideshow/toward-continual-learning-on-the-edge/227850766
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use them as the foundation for later learning, constantly and efficiently updating our biased understanding of the external world. By contrast, current AI systems are usually trained offline on huge datasets and later deployed with frozen learning capabilities, since they have been shown to suffer from catastrophic forgetting when trained continuously on changing data distributions. A common, practical solution to the problem is to re-train the underlying prediction model from scratch and re-deploy it whenever a new batch of data becomes available. However, this naive approach is incredibly wasteful in terms of memory and computation, and it is impossible to sustain over longer timescales and frequent updates. In this talk, we introduce an efficient continual learning strategy that can reduce the computation and memory overhead by more than 45% w.r.t. the standard re-train & re-deploy approach, further exploring its real-world application in the context of continual object recognition, running on the edge on highly constrained hardware platforms such as widely adopted smartphones.
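The gap between the two deployment styles is easy to see in code. Below is a hedged sketch (all callables hypothetical, not the talk's strategy) contrasting the re-train & re-deploy baseline, which revisits the full history at every update, with a continual update that touches only the new batch plus a bounded replay buffer; the savings come from never reprocessing the whole history.

```python
import random

def retrain_and_redeploy(make_model, history, train_fn):
    # Baseline: every new batch triggers a full retrain on all data so far.
    model = make_model()
    for batch in history:
        train_fn(model, batch)
    return model

def continual_update(model, new_batch, buffer, train_fn, buffer_size=200):
    # Continual alternative: one pass over the new batch plus a few
    # replayed old samples; memory stays bounded by `buffer_size`.
    replay = random.sample(buffer, min(len(buffer), len(new_batch)))
    train_fn(model, new_batch + replay)
    buffer.extend(new_batch)
    del buffer[:-buffer_size]
    return model
```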

Continual Learning: Another Step Towards Truly Intelligent Machines
Wed, 18 Sep 2019 21:13:50 GMT · /slideshow/continual-learning-another-step-towards-truly-intelligent-machines/173526314
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use them as the foundation for later learning. One of the grand goals of Artificial Intelligence (AI) is building an artificial continual learning agent that constructs a sophisticated understanding of the world from its own experience, through the autonomous incremental development of ever more complex knowledge and skills. However, current AI systems greatly suffer from exposure to new data or environments that differ even slightly from the ones on which they were trained. Moreover, the learning process is usually constrained to fixed datasets within narrow and isolated tasks, which can hardly lead to the emergence of more complex and autonomous intelligent behaviors. In essence, continual learning and adaptation capabilities, while more often than not considered fundamental pillars of every intelligent agent, have been mostly left out of the main AI research focus. In this talk, we explore the application of these ideas in the context of Vision, with a focus on (deep) continual learning strategies for object recognition running at the edge on highly constrained hardware devices.

Tutorial INNS 2019 (full)
Wed, 18 Sep 2019 16:55:26 GMT · /slideshow/tutorial-inns2019-full/173453921
Artificial agents interacting in highly dynamic environments are required to continually acquire and fine-tune their knowledge over time. In contrast to conventional deep neural networks, which typically rely on a large batch of annotated training samples, lifelong learning systems must account for situations in which the number of tasks is not known a priori and data samples become available incrementally over time. Despite recent advances in deep learning, lifelong machine learning has remained a long-standing challenge because neural networks are prone to catastrophic forgetting: learning new tasks interferes with previously learned ones and leads to abrupt disruptions of performance. Recently proposed deep supervised and reinforcement learning models for addressing catastrophic forgetting suffer from flexibility, robustness and scalability issues with respect to biological systems. In this tutorial, we present and discuss well-established and emerging neural network approaches motivated by lifelong learning factors in biological systems, such as neurosynaptic plasticity, complementary memory systems, multi-task transfer learning and intrinsically motivated exploration.

Continual Reinforcement Learning in 3D Non-stationary Environments
Wed, 18 Sep 2019 16:48:59 GMT · /VincenzoLomonaco/continual-reinforcement-learning-in-3d-nonstationary-environments
Dynamic, ever-changing environments constitute a hard challenge for current reinforcement learning techniques. Artificial agents are nowadays often trained in very static and reproducible conditions in simulation, under the common assumption that observations can be sampled i.i.d. from the environment. However, when tackling more complex problems and real-world settings, this is rarely the case: environments are often non-stationary and subject to unpredictable, frequent changes. In this talk we discuss a new open benchmark for continual reinforcement learning in a complex 3D non-stationary object-picking task, based on VizDoom and subject to several environmental changes. We further propose a number of end-to-end, model-free continual reinforcement learning strategies, showing competitive results even without any access to previously encountered environmental conditions or observations.
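For intuition, here is a hedged sketch (gym-style API, all names hypothetical, not the benchmark's actual code) of how this kind of non-stationarity can be injected: a wrapper silently switches the active environment variant every few episodes, so the agent faces unannounced distribution shifts.

```python
import random

class NonStationaryEnv:
    """Switches among environment variants every few episodes,
    without signalling the change to the agent."""

    def __init__(self, variants, episodes_per_change=10):
        self.variants = variants      # e.g. maps differing in textures/lighting
        self.period = episodes_per_change
        self.episodes = 0
        self.env = variants[0]

    def reset(self):
        if self.episodes and self.episodes % self.period == 0:
            self.env = random.choice(self.variants)   # unannounced change
        self.episodes += 1
        return self.env.reset()

    def step(self, action):
        return self.env.step(action)  # same interface as the wrapped env
```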

Continual/Lifelong Learning with Deep Architectures
Wed, 18 Sep 2019 16:43:31 GMT · /slideshow/continuallifelong-learning-with-deep-architectures/173449722
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use them as the foundation for later learning. One of the grand goals of AI is building an artificial continually learning agent that constructs a sophisticated understanding of the world from its own experience, through the autonomous incremental development of ever more complex skills and knowledge.
"Continual Learning" (CL) is indeed a fast-emerging topic in AI, concerning the ability to efficiently improve the performance of a deep model over time while dealing with a long (and possibly unlimited) sequence of data/tasks. In this workshop, after a brief introduction to the topic, we'll implement different Continual Learning strategies and assess them on common vision benchmarks; a sketch of the kind of training loop involved is shown below. We'll conclude the workshop with a look at possible real-world applications of CL.

Continual Learning for Robotics
Wed, 18 Sep 2019 16:07:52 GMT · /slideshow/continual-learning-for-robotics/173436613
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use them as the foundation for later learning. One of the grand goals of Artificial Intelligence (AI) is building an artificial continual learning agent that constructs a sophisticated understanding of the world from its own experience, through the autonomous incremental development of ever more complex knowledge and skills. However, current AI systems greatly suffer from exposure to new data or environments that differ even slightly from the ones on which they were trained. Moreover, the learning process is usually constrained to fixed datasets within narrow and isolated tasks, which can hardly lead to the emergence of more complex and autonomous intelligent behaviors. In essence, continual learning and adaptation capabilities, while more often than not considered fundamental pillars of every intelligent agent, have been mostly left out of the main AI research focus. In this talk, we explore the application of these ideas in the context of Robotics, with a focus on (deep) continual learning strategies for object recognition running at the edge on highly constrained hardware devices.

Don't forget, there is more than forgetting: new metrics for Continual Learning - Poster
Tue, 08 Jan 2019 15:30:37 GMT · /slideshow/dont-forget-there-is-more-than-forgetting-new-metrics-for-continual-learning/127535503
Continual learning comprises algorithms that learn from a stream of data/tasks continuously and adaptively through time, enabling the incremental development of ever more complex knowledge and skills. The lack of consensus on how to evaluate continual learning algorithms, and the almost exclusive focus on forgetting, motivate us to propose a more comprehensive set of implementation-independent metrics accounting for several factors we believe have practical implications worth considering when deploying real AI systems that learn continually: accuracy (or performance) over time, backward and forward knowledge transfer, memory overhead, and computational efficiency. Drawing inspiration from standard Multi-Attribute Value Theory (MAVT), we further propose to fuse these metrics into a single score for ranking purposes, and we evaluate our proposal with five continual learning strategies on the iCIFAR-100 continual learning benchmark.
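A hedged sketch of how metrics of this family are commonly computed from an accuracy matrix R, where R[i, j] is the test accuracy on task j after training up to task i; the exact definitions and the MAVT weights below are illustrative assumptions, not necessarily the poster's.

```python
import numpy as np

def accuracy_over_time(R):
    """Average accuracy on all tasks after the final training stage."""
    return R[-1].mean()

def backward_transfer(R):
    """Effect of later training on earlier tasks (negative = forgetting)."""
    T = R.shape[0]
    return np.mean([R[T - 1, i] - R[i, i] for i in range(T - 1)])

def forward_transfer(R, random_baseline):
    """Accuracy on a task before training on it, vs. a random baseline."""
    T = R.shape[0]
    return np.mean([R[i - 1, i] - random_baseline[i] for i in range(1, T)])

def single_score(metrics, weights):
    """MAVT-style fusion: a weighted sum of the (normalized) criteria."""
    return sum(weights[k] * v for k, v in metrics.items())

R = np.array([[0.9, 0.1, 0.1],     # rows: after training on task i
              [0.7, 0.8, 0.2],     # cols: accuracy on task j
              [0.6, 0.7, 0.8]])
m = {"ACC": accuracy_over_time(R),
     "BWT": backward_transfer(R),
     "FWT": forward_transfer(R, random_baseline=np.full(3, 0.1))}
print(m, single_score(m, {"ACC": 0.5, "BWT": 0.25, "FWT": 0.25}))
```

Memory and compute overhead can be folded into the same weighted sum once normalized to a common scale.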

Open-Source Frameworks for Deep Learning: an Overview
Mon, 07 Jan 2019 12:03:55 GMT · /slideshow/opensource-frameworks-for-deep-learning-an-overview/127431506
The rise of deep learning over the last decade has led to profound changes in the landscape of the machine learning software stack, both for research and for production. In this talk we provide a comprehensive overview of the open-source deep learning frameworks landscape, with both a theoretical and a hands-on approach. After a brief introduction and historical contextualization, we highlight common features of, and distinctions between, their recent developments. Finally, we take a deeper look at three of the most used deep learning frameworks today (Caffe, TensorFlow and PyTorch), with practical examples and considerations worth weighing when choosing such libraries.
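As a flavor of the hands-on part, here is a minimal PyTorch example (my illustration of the define-by-run style, not a slide from the talk): one gradient step on a toy regression problem.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x, y = torch.randn(32, 4), torch.randn(32, 1)
opt.zero_grad()
loss = loss_fn(model(x), y)   # forward pass builds the autograd graph on the fly
loss.backward()               # backward pass computes gradients
opt.step()                    # in-place parameter update
print(f"loss after one step: {loss.item():.4f}")
```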

Continual Learning with Deep Architectures Workshop @ Computer VISIONers Conference 2018
Mon, 07 Jan 2019 11:51:30 GMT · /slideshow/continual-learning-with-deep-architectures-workshop-computer-visioners-conference-2018/127431113
Continual Learning (CL) is a fast-emerging topic in AI, concerning the ability to efficiently improve the performance of a deep model over time while dealing with a long (and possibly unlimited) sequence of data/tasks. In this workshop, after a brief introduction to the subject, we'll analyze different Continual Learning strategies and assess them on common Vision benchmarks. We'll conclude the workshop with a look at possible real-world applications of CL.

CORe50: a New Dataset and Benchmark for Continual Learning and Object Recognition - Slides
Mon, 07 Jan 2019 11:44:11 GMT · /slideshow/core50-a-new-dataset-and-benchmark-for-continual-learning-and-object-recognition-slides/127430916
Continuous/lifelong learning from high-dimensional data streams is a challenging research problem. In fact, fully retraining models each time new data become available is infeasible due to computational and storage issues, while naïve incremental strategies have been shown to suffer from catastrophic forgetting. In the context of real-world object recognition applications (e.g., robotic vision), where continuous learning is crucial, very few datasets and benchmarks are available to evaluate and compare emerging techniques. In this work we propose CORe50, a new dataset and benchmark specifically designed for continuous object recognition, and we introduce baseline approaches for different continuous learning scenarios.
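Benchmarks of this kind typically report an accuracy-over-time curve rather than a single final number: the model is updated on one training batch of the stream at a time and re-evaluated on a fixed test set after each update. A generic sketch of that protocol (my paraphrase, not the benchmark's reference code):

```python
import torch

@torch.no_grad()
def accuracy(model, test_loader):
    model.eval()
    correct = total = 0
    for x, y in test_loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def accuracy_curve(model, train_batches, test_loader, update_fn):
    curve = []
    for batch in train_batches:   # incremental batches of the stream
        update_fn(model, batch)   # any continual learning strategy plugs in here
        curve.append(accuracy(model, test_loader))
    return curve
```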
Continuous/Lifelong learning of high-dimensional data streams is a challenging research problem. In fact, fully retraining models each time new data become available is infeasible, due to computational and storage issues, while na\"ive incremental strategies have been shown to suffer from catastrophic forgetting. In the context of real-world object recognition applications (e.g., robotic vision), where continuous learning is crucial, very few datasets and benchmarks are available to evaluate and compare emerging techniques. In this work we propose a new dataset and benchmark CORe50, specifically designed for continuous object recognition, and introduce baseline approaches for different continuous learning scenarios. ]]>
Mon, 07 Jan 2019 11:44:11 GMT/slideshow/core50-a-new-dataset-and-benchmark-for-continual-learning-and-object-recognition-slides/127430916VincenzoLomonaco@slideshare.net(VincenzoLomonaco)CORe50: a New Dataset and Benchmark for Continual Learning and Object Recognition - 狠狠撸sVincenzoLomonacoContinuous/Lifelong learning of high-dimensional data streams is a challenging research problem. In fact, fully retraining models each time new data become available is infeasible, due to computational and storage issues, while na\"ive incremental strategies have been shown to suffer from catastrophic forgetting. In the context of real-world object recognition applications (e.g., robotic vision), where continuous learning is crucial, very few datasets and benchmarks are available to evaluate and compare emerging techniques. In this work we propose a new dataset and benchmark CORe50, specifically designed for continuous object recognition, and introduce baseline approaches for different continuous learning scenarios. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/xlvcq7gptlcptjfldlav-signature-aad24a868a7427f3b6dcf41a5601ee08aad816232abc675b90510b66408ba37e-poli-190107114411-thumbnail.jpg?width=120&height=120&fit=bounds" /><br> Continuous/Lifelong learning of high-dimensional data streams is a challenging research problem. In fact, fully retraining models each time new data become available is infeasible, due to computational and storage issues, while na\"ive incremental strategies have been shown to suffer from catastrophic forgetting. In the context of real-world object recognition applications (e.g., robotic vision), where continuous learning is crucial, very few datasets and benchmarks are available to evaluate and compare emerging techniques. In this work we propose a new dataset and benchmark CORe50, specifically designed for continuous object recognition, and introduce baseline approaches for different continuous learning scenarios.
]]>
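
In a benchmark of this kind, a model is updated on one training batch at a time and re-evaluated on a fixed test set after each update. A minimal sketch of that protocol, where load_training_batches and load_test_set are hypothetical placeholders rather than the actual CORe50 loader API:

    def continual_evaluation(model, load_training_batches, load_test_set):
        # Fixed test set, evaluated after every incremental update.
        x_test, y_test = load_test_set()
        accuracies = []
        for x_batch, y_batch in load_training_batches():
            # Incremental update: no access to past batches.
            # (sklearn-style fit/score interface assumed for brevity)
            model.fit(x_batch, y_batch)
            accuracies.append(model.score(x_test, y_test))
        return accuracies                      # accuracy curve over the stream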

Continuous Learning with Deep Architectures
/slideshow/continuous-learning-with-deep-architectures/102948710
Mon, 25 Jun 2018 09:13:39 GMT

One of the greatest goals of AI is building an artificial continuous learning agent which can construct a sophisticated understanding of the external world from its own experience, through the adaptive, goal-oriented and incremental development of ever more complex skills and knowledge. Yet, Continuous/Lifelong Learning (CL) from high-dimensional streaming data is a challenging research problem that is far from solved. In fact, fully retraining deep prediction models each time a new piece of data becomes available is infeasible, due to computational and storage issues, while naïve continuous learning strategies have been shown to suffer from catastrophic forgetting. This talk covers some of the most common end-to-end continuous learning strategies for gradient-based architectures, as well as the recently proposed AR-1 strategy, which can outperform other state-of-the-art regularization and architectural approaches on the CORe50 benchmark.
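
The regularization approaches mentioned here typically add a quadratic penalty that anchors each parameter to its previous value, weighted by an estimate of how important that parameter was for earlier tasks (e.g. Fisher information in EWC, path integrals in Synaptic Intelligence). A generic sketch of such a penalty, assuming PyTorch (an illustration of the family, not the AR-1 implementation):

    import torch

    def penalized_loss(task_loss, model, old_params, importance, lam=1.0):
        # Quadratic anchoring: moving an "important" weight away from its old
        # value is expensive; unimportant weights remain free to adapt.
        penalty = torch.zeros(())
        for name, p in model.named_parameters():
            penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
        return task_loss + lam * penalty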

CORe50: a New Dataset and Benchmark for Continuous Object Recognition Poster
/slideshow/core50-a-new-dataset-and-benchmark-for-continuous-object-recognition-poster/82385001
Mon, 20 Nov 2017 15:27:46 GMT

Continuous/Lifelong learning of high-dimensional data streams is a challenging research problem. In fact, fully retraining models each time new data become available is infeasible, due to computational and storage issues, while naïve incremental strategies have been shown to suffer from catastrophic forgetting. In the context of real-world object recognition applications (e.g., robotic vision), where continuous learning is crucial, very few datasets and benchmarks are available to evaluate and compare emerging techniques. In this work we propose a new dataset and benchmark, CORe50, specifically designed for continuous object recognition, and introduce baseline approaches for different continuous learning scenarios.

Continuous Unsupervised Training of Deep Architectures
/VincenzoLomonaco/continuous-unsupervised-training-of-deep-architectures
Tue, 11 Jul 2017 12:15:12 GMT

A number of successful Computer Vision applications based on Convolutional Networks have been proposed in recent years. However, in most cases the system is fully supervised, the training set is fixed and the task is completely defined a priori. Even though Transfer Learning approaches have proved very useful for adapting heavily pre-trained models to ever-changing scenarios, the incremental learning and adaptation capabilities of existing models are still limited, and catastrophic forgetting is very difficult to control. In this talk we discuss our experience in the design of deep architectures and algorithms capable of learning objects incrementally, both in a supervised and an unsupervised way. Finally, we introduce a new dataset and benchmark (CORe50) that we specifically collected to focus on continuous object recognition for Robotic Vision.

Comparing Incremental Learning Strategies for Convolutional Neural Networks
/slideshow/comparing-incremental-learning-strategies-for-convolutional-neural-networks/66711942
Tue, 04 Oct 2016 10:50:53 GMT

In the last decade, Convolutional Neural Networks (CNNs) have been shown to perform remarkably well on many computer vision tasks such as object recognition and object detection, being able to extract meaningful high-level invariant features. However, partly because of their complex training and tricky hyper-parameter tuning, CNNs have been scarcely studied in the context of incremental learning, where data are available in consecutive batches and retraining the model from scratch is unfeasible. In this work we compare different incremental learning strategies for CNN-based architectures, targeting real-world applications.

If you are interested in this work please cite:
Lomonaco, V., & Maltoni, D. (2016, September). Comparing Incremental Learning Strategies for Convolutional Neural Networks. In IAPR Workshop on Artificial Neural Networks in Pattern Recognition (pp. 175-184). Springer International Publishing.

For further information visit my website: http://www.vincenzolomonaco.com/
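
Two of the simplest strategies one can compare in this setting are full fine-tuning of the network on each incoming batch versus freezing the convolutional features and updating only the classifier head. A sketch of that choice, assuming PyTorch and a torchvision-style model with a features/classifier split (an illustration, not the paper's exact protocol):

    import torch

    def optimizer_for_batch(model, strategy):
        # Called whenever a new batch of training data arrives.
        if strategy == "freeze_features":
            for p in model.features.parameters():
                p.requires_grad = False        # keep convolutional features fixed
            params = model.classifier.parameters()
        else:                                  # "full_finetune"
            params = model.parameters()
        return torch.optim.SGD(params, lr=1e-3, momentum=0.9)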

Deep Learning for Computer Vision: A comparison between Convolutional Neural Networks and Hierarchical Temporal Memories on object recognition tasks - Master's Degree Thesis
/slideshow/deep-learning-for-computer-vision-a-comparision-between-convolutional-neural-networks-and-hierarchical-temporal-memories-on-object-recognition-tasks-masters-degree-thesis/52856175
Wed, 16 Sep 2015 16:25:17 GMT

In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general.
However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that only work in the context of High Performance Computing with vast amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence".
The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: Convolutional Neural Networks (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view within the broad umbrella of deep learning, and are good choices for understanding and pointing out the strengths and weaknesses of each.
CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging paradigm: a mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain.
In the end, the thesis aims to show that in certain cases, with a lower quantity of data, HTM can outperform CNN.

Deep Learning for Computer Vision: A comparison between Convolutional Neural Networks and Hierarchical Temporal Memories on object recognition tasks - Slides
/slideshow/deep-learning-for-computer-vision-a-comparision-between-convolutional-neural-networks-and-hierarchical-temporal-memories-on-object-recognition-tasks-slides/52855710
Wed, 16 Sep 2015 16:14:58 GMT

Slides accompanying the Master's Degree Thesis above; the abstract is the same.

A Framework for Deadlock Detection in Java
/slideshow/report-19-062015/49615346
Fri, 19 Jun 2015 20:36:20 GMT

In this work we started to develop a novel framework for statically detecting deadlocks in a concurrent Java environment with asynchronous method calls and cooperative scheduling of method activations. Since this language features recursion and dynamic resource creation, deadlock detection is extremely complex, and state-of-the-art solutions either give imprecise answers or do not scale. The basic component of the framework is a front-end inference algorithm that extracts abstract behavioral descriptions of methods, called contracts, which retain resource dependency information. This component is integrated with a back-end that analyzes contracts and derives deadlock information by computing a fixpoint semantics.
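
The contract analysis is far richer than this, but the core question it ultimately answers can be illustrated with a toy: build the "waits-for" dependency graph between tasks and look for a cycle. A minimal Python sketch (my illustration; the framework derives this information statically via inferred contracts and a fixpoint semantics, not from an explicit runtime graph):

    def has_deadlock(waits_for):
        """waits_for: dict mapping each task to the tasks it waits on."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {t: WHITE for t in waits_for}

        def visit(t):
            color[t] = GRAY                    # t is on the current DFS path
            for u in waits_for.get(t, ()):
                if color.get(u, WHITE) == GRAY:
                    return True                # back edge: a waits-for cycle
                if color.get(u, WHITE) == WHITE and visit(u):
                    return True
            color[t] = BLACK                   # fully explored, no cycle via t
            return False

        return any(color[t] == WHITE and visit(t) for t in waits_for)

    # Example: A waits on B and B waits on A -> deadlock.
    print(has_deadlock({"A": ["B"], "B": ["A"]}))   # True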

Deep Learning libraries and first experiments with Theano
/slideshow/deep-learning-libraries-and-rst-experiments-with-theano/46460911
Mon, 30 Mar 2015 13:29:43 GMT

In recent years, neural networks and deep learning techniques have been shown to perform well on many problems in image recognition, speech recognition, natural language processing and many other tasks. As a result, a large number of libraries, toolkits and frameworks have come out, in different languages and with different purposes. In this report, we first take a look at these projects and then choose the framework that best suits our needs: Theano. Finally, we implement a simple convolutional neural net using this framework to test both its ease of use and efficiency.
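
A minimal Theano sketch in the spirit of the report's experiment: a single convolutional layer defined as a symbolic graph and compiled into a callable function (my reconstruction, not the report's actual code).

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.tensor.nnet import conv2d

    rng = np.random.RandomState(0)
    x = T.tensor4("x")                          # (batch, channels, rows, cols)
    W = theano.shared(                          # 8 learnable 5x5 filters
        rng.randn(8, 1, 5, 5).astype(theano.config.floatX), name="W"
    )
    out = T.nnet.relu(conv2d(x, W))             # symbolic expression ...
    f = theano.function([x], out)               # ... compiled into a function

    images = np.zeros((2, 1, 28, 28), dtype=theano.config.floatX)
    print(f(images).shape)                      # (2, 8, 24, 24) with 'valid' convolution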

I am a PhD student at the University of Bologna. I have a strong interest in Artificial Intelligence, Machine Learning, Deep Learning and Computational Neuroscience. I'm always looking for collaborations, please do not hesitate to contact me!

ACADEMIC PROFILE:
I have a strong background in Information Retrieval, Data Mining and Data Analysis. I have always felt at ease manipulating, managing and analyzing large collections of data, and I have successfully applied this knowledge in the context of biological systems.
My own interest has then evolved towards the huge field of Artificial Intelligence, having always been intrigued by the idea of building machines with superhuman abilities. I ...

www.vincenzolomonaco.com/