SlideShare feed for slideshows by user: DavideNardone (http://www.slideshare.net)

M.Sc thesis /slideshow/msc-thesis-146229726/146229726
In this thesis, we propose a novel feature selection framework, the Sparse-Modeling Based Approach for Class-Specific Feature Selection (SMBA-CSFS), which simultaneously exploits the ideas of sparse modeling and class-specific feature selection. Feature selection plays a key role in several fields (e.g., computational biology): models with fewer variables are easier to explain, provide valuable insight into the role of each feature, and can speed up experimental validation. Unfortunately, as the no-free-lunch theorems suggest, no approach in the literature is universally best at detecting the optimal feature subset for building a final model, so feature selection remains a challenge. The proposed procedure is a two-step approach: (a) a sparse modeling-based learning technique is first used to find the best subset of features for each class of a training set; (b) the discovered feature subsets are then fed to a class-specific feature selection scheme, in order to assess the effectiveness of the selected features in classification tasks. To this end, an ensemble of classifiers is built, where each classifier is trained on the feature subset discovered for its class in the previous phase, and a suitable decision rule combines the ensemble responses. To evaluate the proposed method, extensive experiments have been performed on publicly available datasets, in particular from computational biology, where feature selection is indispensable: acute lymphoblastic leukemia and acute myeloid leukemia, human carcinomas, human lung carcinomas, diffuse large B-cell lymphoma, and malignant glioma. SMBA-CSFS is able to identify the most representative features, maximizing classification accuracy.
With the top 20 and top 80 features, SMBA-CSFS performs promisingly against its competitors from the literature on all considered datasets, especially those with a higher number of features. Experiments show that the proposed approach may outperform state-of-the-art methods when the number of features is high. For this reason, the approach lends itself to the selection and classification of data with a large number of features and classes.
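The two-step scheme can be sketched in a few lines. This is a minimal stand-in, not the thesis implementation: step (a) is approximated here by a simple per-class mean-difference score rather than the actual sparse modeling solver, and step (b) uses nearest-centroid classifiers with a smallest-distance decision rule. All function names and the toy data are illustrative.

```python
import numpy as np

def per_class_subsets(X, y, k):
    """Step (a), sketched: rank features by how strongly they separate each
    class from the rest (a crude stand-in for the sparse modeling-based
    selection) and keep the top k per class."""
    subsets = {}
    for c in np.unique(y):
        in_c = (y == c)
        score = np.abs(X[in_c].mean(axis=0) - X[~in_c].mean(axis=0))
        subsets[int(c)] = np.argsort(score)[::-1][:k]
    return subsets

def ensemble_predict(X_train, y_train, X_test, subsets):
    """Step (b): one nearest-centroid classifier per class, each trained on
    its own feature subset; the decision rule assigns the class whose
    classifier is most confident (smallest distance to its centroid)."""
    classes = sorted(subsets)
    dists = []
    for c in classes:
        cols = subsets[c]
        centroid = X_train[y_train == c][:, cols].mean(axis=0)
        dists.append(np.linalg.norm(X_test[:, cols] - centroid, axis=1))
    return np.array(classes)[np.argmin(np.stack(dists), axis=0)]

# Toy data: feature 0 marks class 0, feature 1 marks class 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))
X[:20, 0] += 5.0
X[20:, 1] += 5.0
y = np.array([0] * 20 + [1] * 20)

subsets = per_class_subsets(X, y, k=3)
pred = ensemble_predict(X, y, X, subsets)
```

Each classifier sees only its own class-specific subset, which is the point of the CSFS scheme: features useful for recognizing one class need not be useful for another.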

Fri, 17 May 2019 08:10:33 GMT
Quantum computing /slideshow/quantum-computing-85117213/85117213
Quantum computers are incredibly powerful machines that take a new approach to processing information. Built on the principles of quantum mechanics, they exploit complex and fascinating laws of nature that are always there but usually remain hidden from view. By harnessing such natural behavior, quantum computing can run new types of algorithms to process information more holistically. They may one day lead to revolutionary breakthroughs in materials and drug discovery, the optimization of complex man-made systems, and artificial intelligence. We expect them to open doors that we once thought would remain locked indefinitely. Acquaint yourself with the strange and exciting world of quantum computing.

Wed, 27 Dec 2017 21:10:00 GMT
A Sparse-Coding Based Approach for Class-Specific Feature Selection /slideshow/a-sparsecoding-based-approach-for-classspecific-feature-selection/84145718
Feature selection (FS) plays a key role in several fields, in particular computational biology: models with fewer variables are easier to explain, provide valuable insight into the importance and role of each feature, and can speed up experimental validation. Here, we propose a novel two-step FS procedure. First, a sparse coding-based learning technique is used to find the best subset of features for each class of the training data. In doing so, it is assumed that a class is represented by a subset of features, called representatives, such that each sample in that class can be described as a linear combination of them. Second, the discovered feature subsets are fed to a class-specific feature selection scheme, to assess the effectiveness of the selected features in classification tasks. To this end, an ensemble of classifiers is built by training one classifier per class on its own feature subset, i.e., the one discovered in the previous step, and a suitable decision rule combines the ensemble responses. To assess the effectiveness of the proposed FS approach, a number of experiments have been performed on benchmark microarray datasets, comparing its performance to several FS techniques from the literature. In all cases, the proposed FS methodology exhibits convincing results, often outperforming its competitors.
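The representative-selection idea can be illustrated with a small sketch. The talk's method uses a sparse-coding solver; here a greedy least-squares selection (in the spirit of simultaneous orthogonal matching pursuit) stands in for it, picking k feature columns of a class matrix whose span approximately reconstructs every column, so that each feature is (nearly) a linear combination of the representatives. Names and data are illustrative.

```python
import numpy as np

def greedy_representatives(Xc, k):
    """Greedily pick k columns of the class matrix Xc so that every column
    is approximately a linear combination of the chosen representatives.
    A least-squares stand-in for the sparse-coding step of the talk."""
    chosen = []
    residual = Xc.copy()
    for _ in range(k):
        # correlation of each candidate column with the current residual
        corr = np.linalg.norm(Xc.T @ residual, axis=1)
        if chosen:
            corr[chosen] = -np.inf   # never re-pick a representative
        chosen.append(int(np.argmax(corr)))
        # project all columns onto the span of the chosen representatives
        basis = Xc[:, chosen]
        coef, *_ = np.linalg.lstsq(basis, Xc, rcond=None)
        residual = Xc - basis @ coef
    return chosen

# Toy class matrix: 2 independent feature columns plus mixtures of them,
# so 2 representatives should reconstruct everything.
rng = np.random.default_rng(1)
f = rng.normal(size=(30, 2))
Xc = np.hstack([f, f @ rng.normal(size=(2, 6))])

reps = greedy_representatives(Xc, k=2)
basis = Xc[:, reps]
coef, *_ = np.linalg.lstsq(basis, Xc, rcond=None)
err = np.linalg.norm(Xc - basis @ coef)   # near zero: reps span the class
```

A true sparse-coding formulation would instead solve a row-sparse self-representation problem over all columns jointly; the greedy version above only conveys the geometry.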

Fri, 15 Dec 2017 09:07:46 GMT
A Biological Smart Platform for the Environmental Risk Assessment /slideshow/a-biological-smart-platform-for-the-environmental-risk-assessment/73072061
A project proposal for a scholarship with Consortium GARR.

Sun, 12 Mar 2017 19:56:29 GMT
Installing Apache Tomcat with NetBeans /slideshow/installing-apache-tomcat-with-netbeans/70829874
A simple tutorial on installing and configuring Apache Tomcat with NetBeans.

Mon, 09 Jan 2017 17:59:51 GMT
Internet of Things: Research Directions /slideshow/internet-of-things-research-directions-70555120/70555120
Many technical communities are vigorously pursuing research topics that contribute to the Internet of Things (IoT). Nowadays, as sensing, actuation, communication, and control become ever more sophisticated and ubiquitous, there is significant overlap among these communities, sometimes viewed from slightly different perspectives, and more cooperation between them is encouraged. To provide a basis for discussing open research problems in IoT, a vision of how IoT could change the world in the distant future is first presented. Then, eight key research topics are enumerated and research problems within those topics are discussed.

Fri, 30 Dec 2016 16:59:27 GMT
Online Tweet Sentiment Analysis with Apache Spark /slideshow/online-tweet-sentiment-analysis-with-apache-spark/69766358
Sentiment Analysis (SA) applies Natural Language Processing (NLP), text analysis, and computational linguistics to extract and identify subjective information from source material. A fundamental task of SA is to classify the polarity of a given text, at the document, sentence, or feature/aspect level: whether the opinion expressed is positive, negative, or neutral. Usually, this analysis is performed offline using Machine Learning (ML) techniques. In this project, two online tweet classification methods are proposed, which exploit the well-known framework Apache Spark for processing the data and the tool Apache Zeppelin for data visualization.
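The online flavor of the classification can be sketched without Spark at all. Below is a tiny incremental Naive Bayes polarity classifier over bag-of-words counts; it is a pure-Python stand-in for the streaming pipeline the project builds on Spark (the class name and toy tweets are illustrative, not the project's code), showing how a model can be updated tweet by tweet as the stream arrives.

```python
import math
from collections import defaultdict

class OnlineTweetNB:
    """Incremental multinomial Naive Bayes with add-one smoothing."""

    def __init__(self):
        self.word_counts = {"pos": defaultdict(int), "neg": defaultdict(int)}
        self.class_counts = {"pos": 0, "neg": 0}
        self.totals = {"pos": 0, "neg": 0}
        self.vocab = set()

    def update(self, tweet, label):
        # the "online" step: fold one labeled tweet into the running counts
        self.class_counts[label] += 1
        for w in tweet.lower().split():
            self.word_counts[label][w] += 1
            self.totals[label] += 1
            self.vocab.add(w)

    def predict(self, tweet):
        # log-space Naive Bayes over the vocabulary seen so far
        n = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for c in ("pos", "neg"):
            lp = math.log(self.class_counts[c] / n)
            denom = self.totals[c] + len(self.vocab)
            for w in tweet.lower().split():
                lp += math.log((self.word_counts[c][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

clf = OnlineTweetNB()
for tweet, label in [
    ("i love this great movie", "pos"),
    ("great day love it", "pos"),
    ("i hate this awful movie", "neg"),
    ("awful terrible hate", "neg"),
]:
    clf.update(tweet, label)
```

In the actual project, the counting and scoring would be distributed by Spark over micro-batches of the tweet stream, but the update/predict structure is the same.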

Fri, 02 Dec 2016 16:11:04 GMT
Blind Source Separation using Dictionary Learning /DavideNardone/blind-source-separation-using-dictionary-learning
The sparse decomposition of images and signals has found great use in compression, noise removal, and source separation. It represents signals as linear combinations of a few elements of a redundant dictionary. The dictionary may be fixed (Fourier, wavelet, etc.) or learned from a set of samples; algorithms based on dictionary learning apply to a broader class of signals and achieve better compression performance than methods based on a fixed dictionary. Here we present a Compressed Sensing (CS) approach with an adaptive dictionary for solving a Determined Blind Source Separation (DBSS) problem. The proposed method reformulates DBSS as a Sparse Coding (SC) problem. The algorithm consists of a few steps: mixing matrix estimation, sparse source separation, and source reconstruction. A sparse mixture of the original source signals is used to estimate the mixing matrix, which is then used to reconstruct the source signals. A 'block signal representation' is used to represent the mixture, greatly improving the computational efficiency of the mixing matrix estimation and signal recovery processes without significantly losing separation accuracy. Experimental results compare the computational and separation performance of the method while varying the type of dictionary used, be it fixed or adaptive. Finally, a real case study in the field of Wireless Sensor Networks (WSN) is illustrated, in which a set of sensor nodes relay data to a multi-receiver node. Since several nodes transmit messages simultaneously, it is necessary to separate the mixture of information at the receiver, thus solving a BSS problem.
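The mixing-matrix-estimation / source-recovery pipeline can be illustrated on a toy determined (2x2) case. This geometric sketch stands in for the dictionary-learning machinery of the talk and is not its actual algorithm: it relies only on source sparsity, under which each active mixture sample lies (mostly) along one column of the mixing matrix.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two sparse sources (mostly zero, occasional spikes): sparsity is what
# makes the geometric mixing-matrix estimate below work.
n = 2000
S = rng.normal(size=(2, n)) * (rng.random((2, n)) < 0.05)
A = np.array([[0.9, 0.4],
              [0.3, 0.8]])   # ground-truth mixing matrix, unknown to the method
X = A @ S                    # observed mixtures

# Mixing-matrix estimation, sketched: where only one source is active, the
# mixture sample is parallel to a column of A, so the directions of the
# active samples cluster around the two mixing directions.
active = np.linalg.norm(X, axis=0) > 1e-8
D = X[:, active] / np.linalg.norm(X[:, active], axis=0)
D = D * np.sign(D[0])        # fold antipodal directions together
angles = np.arctan2(D[1], D[0])
split = angles.mean()        # crude two-way clustering of directions
A_hat = np.column_stack([
    D[:, angles <= split].mean(axis=1),
    D[:, angles > split].mean(axis=1),
])

# Source recovery: the determined case reduces to a linear solve
# (up to the usual permutation/scaling ambiguity of BSS).
S_hat = np.linalg.solve(A_hat, X)
```

In the talk's setting, the sparsifying transform is itself learned (the adaptive dictionary), and the recovery is posed as a sparse coding problem rather than a plain solve; the two-stage structure (estimate the mixing matrix, then recover the sources) is the same.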

The sparse decomposition of images and signals found great use in the field of: Compression, Noise removal and also in the Sources separation. This implies the decomposition of signals in the form of linear combinations with some elements of a redundant dictionary. The dictionary may be either a fixed dictionary (Fourier, Wavelet, etc) or may be learned from a set of samples. The algorithms based on learning the dictionary can be applied to a broad class of signals and have a better compression performance than methods based on fixed dictionary. Here we present a Compressed Sensing (CS) approach with an adaptive dictionary for solving a Determined Blind Source Separation (DBSS). The proposed method has been developed by reformulating a DBSS as Sparse Coding (SC) problem. The algorithm consist of few steps: Mixing matrix estimation, Sparse source separation and Source reconstruction. A sparse mixture of the original source signals has been used for the estimating the mixing matrix which have been used for the reconstruction of the of the source signals. A 'block signal representation' is used for representing the mixture in order to greatly improve the computation efficiency of the 'mixing matrix estimation' and the 'signal recovery' processes without particularly lose separation accuracy. Some experimental results are provided to compare the computation and separation performance of the method by varying the type of the dictionary used, be it fixed or an adaptive one. Finally a real case of study in the field of the Wireless Sensor Network (WSN) is illustrated in which a set of sensor nodes relay data to a multi-receiver node. Since more nodes transmits messages simultaneously it's necessary to separate the mixture of information at the receiver, thus solving a BSS problem.]]>
Tue, 25 Oct 2016 10:38:09 GMT /DavideNardone/blind-source-separation-using-dictionary-learning DavideNardone@slideshare.net(DavideNardone) Blind Source Separation using Dictionary Learning DavideNardone
Blind Source Separation using Dictionary Learning from Davide Nardone
]]>
Accelerating Dynamic Time Warping Subsequence Search with GPU /slideshow/accelerating-dynamic-time-warping-subsequence-search-with-gpu/58791875 cpiipowerpoint-160227123631
Many time series data mining problems require subsequence similarity search as a subroutine. While this can be performed with any distance measure, and dozens of distance measures have been proposed in the last decade, there is increasing evidence that Dynamic Time Warping (DTW) is the best measure across a wide range of domains. Given DTW’s usefulness and ubiquity, there has been a large community-wide effort to mitigate its relative lethargy. Proposed speedup techniques include early abandoning strategies, lower-bound based pruning, indexing and embedding. In this work we argue that we are now close to exhausting all possible speedup from software, and that we must turn to hardware-based solutions if we are to tackle the many problems that are currently untenable even with state-of-the-art algorithms running on high-end desktops. With this motivation, we investigate both GPU (Graphics Processing Unit) and FPGA (Field Programmable Gate Array) based acceleration of subsequence similarity search under the DTW measure. As we shall show, our novel algorithms allow GPUs, which are typically bundled with standard desktops, to achieve two orders of magnitude speedup. For problem domains which require even greater scale up, we show that FPGAs costing just a few thousand dollars can be used to produce four orders of magnitude speedup. We conduct detailed case studies on the classification of astronomical observations and similarity search in commercial agriculture, and demonstrate that our ideas allow us to tackle problems that would be simply untenable otherwise.]]>
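The DTW measure being accelerated can be stated compactly. Below is a minimal sketch of the textbook O(nm) dynamic-programming recurrence (the series names and values are made up for illustration); the work above is about making exactly this computation fast at scale on GPUs/FPGAs, not about changing it.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance via the standard O(len(a)*len(b)) DP recurrence."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2  # local squared difference
            # extend the cheapest of the three admissible warping-path predecessors
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return np.sqrt(cost[n, m])

# two series that are identical up to a small time shift:
# DTW absorbs the warp, while Euclidean distance penalizes it
a = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 0.0])
dtw = dtw_distance(a, b)       # 0.0: the warping path aligns the shifted peaks
euclid = np.linalg.norm(a - b) # 2.0: lock-step comparison sees a mismatch
```

The quadratic inner loop is what makes naive subsequence search slow, and it is also what makes the problem so amenable to the massively parallel hardware discussed above.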

Sat, 27 Feb 2016 12:36:31 GMT /slideshow/accelerating-dynamic-time-warping-subsequence-search-with-gpu/58791875 DavideNardone@slideshare.net(DavideNardone) Accelerating Dynamic Time Warping Subsequence Search with GPU DavideNardone
Accelerating Dynamic Time Warping Subsequence Search with GPU from Davide Nardone
]]>
LZ78 /DavideNardone/lz78-58791779 lz78-151204233442-lva1-app6892-160227123024
Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is simple to implement, and has the potential for very high throughput in hardware implementations. It is the algorithm of the widely used Unix file compression utility compress, and is used in the GIF image format.]]>
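The dictionary-building idea behind LZW fits in a few lines. This is a minimal sketch with unbounded integer codes (real implementations such as Unix compress and GIF pack codes at a fixed, growing bit width): the encoder emits the code of the longest already-seen prefix and adds one new entry per output, and the decoder rebuilds the identical dictionary on the fly.

```python
def lzw_compress(data: bytes):
    """Emit the dictionary code of the longest known prefix, then extend the dictionary."""
    table = {bytes([i]): i for i in range(256)}  # seed with all single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # keep growing the current phrase
        else:
            out.append(table[w])        # emit code for the known prefix
            table[wc] = len(table)      # register the new, longer phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    """Rebuild the same dictionary while decoding; no dictionary is transmitted."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:                           # code defined by the very phrase being decoded
            entry = w + w[:1]
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(data)              # fewer codes than input bytes
restored = lzw_decompress(codes)
```

Because the decoder reconstructs the dictionary deterministically from the code stream, no side information needs to be sent, which is what makes the scheme attractive for hardware and for formats like GIF.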

Sat, 27 Feb 2016 12:30:24 GMT /DavideNardone/lz78-58791779 DavideNardone@slideshare.net(DavideNardone) LZ78 DavideNardone
LZ78 from Davide Nardone
]]>
I earned a Bachelor's degree in Computer Science at the University of Naples "Parthenope". Currently I am enrolled in a Master's program in Applied Computer Science at the University of Naples "Parthenope". During my studies I have gained skills in several areas, such as Data Analysis, Machine Learning, Bioinformatics, Image Processing, Algorithms and Data Structures, Database Management, OOP, and more. In addition, during these years I have successfully combined my studies with other commitments, showing myself to be self-motivated, organized, and capable of working under pressure. I have a clear, logical mind with a practical approach to problem solving. I enjoy working on my own initiat... http://www.slideshare.net/DavideNardone