SlideShare feed for slideshows by User: INRIA-OAK
Wed, 16 Dec 2015 09:39:56 GMT

Change Management in the Traditional and Semantic Web
/slideshow/change-management-in-the-traditional-and-semantic-web/56196101
Data has played such a crucial role in the development of modern society that relational databases, the most popular data management systems, have been called the "foundation of western civilization" for their massive adoption in business, government and education, an adoption that enabled the necessary levels of productivity and standardization. To become knowledge, and thus a real asset, data needs to be organized in a way that enables the retrieval of relevant information and supports analysis. Another fundamental aspect of effective data management is full support for changes at both the data and the metadata level: a failure to manage data changes usually results in a dramatic diminishment of the data's usefulness. Data does evolve, in both format and content, because the modeled domain changes, errors are corrected, a different level of granularity is required, or new information must be accommodated. Change management has been covered extensively in the literature; we concentrate in particular on two enabling factors of the World Wide Web: XML (the W3C-endorsed and widely used markup language for defining semi-structured data) and ontologies (a major enabling factor of the Semantic Web vision), together with their related metadata. Concretely, we cover change management techniques for XML, and then concentrate on the additional problems that arise when considering the evolution of semantically-equipped data (with the help of a use case featuring evolving ontologies and related mappings), which cannot be handled exclusively at the syntactic level, disregarding logical consequences.
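As a toy illustration of purely syntactic change management for XML, two versions of a document can be compared by the multiset of their element paths. The example below is invented for illustration and is not the technique from the talk; real XML diff algorithms also match moved and renamed subtrees.

```python
# Toy sketch of syntactic XML change detection: compare two versions of a
# document by the multiset of their element paths.
import xml.etree.ElementTree as ET
from collections import Counter

def element_paths(elem, prefix=""):
    """Yield a path string for every element in the tree."""
    path = f"{prefix}/{elem.tag}"
    yield path
    for child in elem:
        yield from element_paths(child, path)

def xml_diff(old_xml, new_xml):
    """Return (added, removed) multisets of element paths."""
    old = Counter(element_paths(ET.fromstring(old_xml)))
    new = Counter(element_paths(ET.fromstring(new_xml)))
    return new - old, old - new

added, removed = xml_diff(
    "<db><paper><title/></paper></db>",
    "<db><paper><title/><year/></paper></db>",
)
print(sorted(added))  # ['/db/paper/year']
```

A diff at this level is exactly what the talk argues is insufficient for ontologies, where a change may have logical consequences not visible in the syntax.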

A Network-Aware Approach for Searching As-You-Type in Social Media
/slideshow/a-networkaware-approach-for-searching-asyoutype-in-social-media/55074766
We present a novel approach for as-you-type top-k keyword search over social media. We adopt a natural "network-aware" interpretation of information relevance, by which information produced by users who are closer to the seeker is considered more relevant. In practice, this query model poses new challenges for effectiveness and efficiency in online search, even when a complete query is given as input in one keystroke, mainly because it requires a joint exploration of the social space and classic IR indexes such as inverted lists. We describe a memory-efficient and incremental prefix-based retrieval algorithm, which also exhibits anytime behavior, allowing it to output the most likely answer within any chosen running-time limit. We evaluate it through extensive experiments for several applications and search scenarios, including searching for posts in micro-blogging (Twitter and Tumblr) and searching for businesses based on reviews in Yelp. The experiments show that our solution is effective in answering real-time as-you-type searches over social media.
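The network-aware, prefix-based retrieval idea can be sketched as follows. The postings, proximity weights and ranking rule here are invented toy stand-ins for the paper's actual index structures:

```python
# Toy sketch of network-aware as-you-type retrieval: items matching the
# typed prefix are ranked by the social proximity of their author to the
# seeker.  Postings are sorted by term, so a prefix's matches are a
# contiguous run found by binary search.
from bisect import bisect_left

# (term, author, item) postings, sorted by term.
POSTINGS = sorted([
    ("pizza", "alice", "post1"),
    ("pizzeria", "bob", "post2"),
    ("piano", "carol", "post3"),
])

# Social proximity of each author to the seeker (higher = closer).
PROXIMITY = {"alice": 0.9, "bob": 0.4, "carol": 0.7}

def as_you_type(prefix, k=2):
    """Return the top-k items whose terms start with `prefix`,
    ranked by the author's proximity to the seeker."""
    terms = [t for t, _, _ in POSTINGS]
    i = bisect_left(terms, prefix)
    hits = []
    while i < len(POSTINGS) and POSTINGS[i][0].startswith(prefix):
        _, author, item = POSTINGS[i]
        hits.append((PROXIMITY[author], item))
        i += 1
    hits.sort(reverse=True)
    return [item for _, item in hits[:k]]

print(as_you_type("piz"))  # ['post1', 'post2']: alice is closer than bob
```

The sketch already shows why the problem is hard at scale: every keystroke changes the matching run, and proximity must be combined with text relevance incrementally rather than recomputed from scratch.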

Fri, 13 Nov 2015 10:58:15 GMT

Speeding up information extraction programs: a holistic optimizer and a learning-based approach to rank documents
/slideshow/speeding-up-information-extraction-programs-a-holistic-optimizer-and-a-learningbased-approach-to-rank-documents/46624940
A wealth of information produced by individuals and organizations is expressed in natural language text. Text lacks the explicit structure that is necessary to support rich querying and analysis. Information extraction systems are sophisticated software tools that discover structured information in natural language text. Unfortunately, information extraction is a challenging and time-consuming task. In this talk, I will first present our proposal to optimize information extraction programs. It consists of a holistic approach that focuses on: (i) optimizing all key aspects of the information extraction process collectively and in a coordinated manner, rather than focusing on individual subtasks in isolation; (ii) accurately predicting the execution time, recall, and precision of each information extraction execution plan; and (iii) using these predictions to choose the best execution plan for a given information extraction program. I will then briefly present a principled, learning-based approach for ranking documents according to their potential usefulness for an extraction task. Our online learning-to-rank methods exploit the information collected during extraction, as we process new documents and the fine-grained characteristics of the useful documents are revealed. These methods decide when the ranking model should be updated, significantly improving the document ranking quality over time. This is joint work with Gonçalo Simões, INESC-ID and IST/University of Lisbon, and Pablo Barrio and Luis Gravano from Columbia University, NY.
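Point (iii), choosing an execution plan from per-plan predictions, can be sketched as follows. The plan names, predicted statistics and selection rule below are invented for illustration; the actual optimizer weighs these dimensions jointly rather than through fixed thresholds:

```python
# Toy sketch of prediction-based plan selection: given predicted time,
# recall and precision for each candidate extraction plan, pick the
# fastest plan whose predicted quality meets the user's requirements.

PLANS = [
    {"name": "scan_all", "time": 120.0, "recall": 0.95, "precision": 0.80},
    {"name": "filtered", "time": 35.0,  "recall": 0.70, "precision": 0.85},
    {"name": "ranked_k", "time": 12.0,  "recall": 0.55, "precision": 0.90},
]

def choose_plan(plans, min_recall, min_precision):
    """Return the fastest plan meeting both thresholds, or None."""
    ok = [p for p in plans
          if p["recall"] >= min_recall and p["precision"] >= min_precision]
    return min(ok, key=lambda p: p["time"]) if ok else None

print(choose_plan(PLANS, 0.6, 0.8)["name"])  # filtered
```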

Fri, 03 Apr 2015 16:49:26 GMT

Querying incomplete data
/slideshow/cristina-lrimars15/45795490
Data is incomplete when it contains missing or unknown information, or more generally when it is only partially available, e.g. because of restrictions on data access. Incompleteness is receiving renewed interest because it arises naturally in data interoperation, a very common framework for today's data-centric applications. In this setting, data is decentralized and needs to be integrated from several sources and exchanged between different applications; incompleteness arises from the semantic and syntactic heterogeneity of the different data sources. Querying incomplete data is usually an expensive task. In this talk we survey the state of the art and recent developments on the tractability of querying incomplete data, under different possible interpretations of incompleteness.
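Under one classical interpretation (labeled nulls ranging over a known finite domain), the certain answers are the tuples returned by every completion of the data, and can be computed naively by enumerating valuations. The table, domain and query below are invented toy examples; the naive enumeration is exponential, which is precisely why tractability is the question:

```python
# Toy sketch of certain answers over a table with labeled nulls ("_x"):
# a name is a certain answer only if it matches the queried city under
# EVERY valuation of the nulls.
from itertools import product

NULL_DOMAIN = ["Paris", "Lyon"]  # assumed finite domain for the nulls

def certain_answers(table, city):
    """Return names matching `city` under every valuation of the nulls."""
    nulls = sorted({v for _, v in table if v.startswith("_")})
    answers = None
    for vals in product(NULL_DOMAIN, repeat=len(nulls)):
        valuation = dict(zip(nulls, vals))
        concrete = {name for name, c in table
                    if valuation.get(c, c) == city}
        answers = concrete if answers is None else answers & concrete
    return answers

TABLE = [("alice", "Paris"), ("bob", "_x"), ("carol", "Lyon")]
print(certain_answers(TABLE, "Paris"))  # {'alice'}: bob only possibly matches
```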

Fri, 13 Mar 2015 07:01:27 GMT

ANGIE in wonderland
/slideshow/angie-wornderland/45066921
In recent years, several important content providers, such as Amazon, Musicbrainz, IMDb, Geonames, Google, and Twitter, have chosen to export their data through Web services. To unleash the potential of these sources for new intelligent applications, the data has to be combined across different APIs. To this end, we have developed ANGIE, a framework that dynamically maps the knowledge provided by Web services into a local knowledge base. ANGIE represents Web services as views with binding patterns over the schema of the knowledge base. In this talk, I will focus on two problems related to our framework. The first part focuses on the automatic integration of new Web services. I will present a novel algorithm for inferring the view definition of a given Web service in terms of the schema of the global knowledge base. The algorithm also generates a declarative script that can transform the call results into results of the view. Our experiments on real Web services show the viability of our approach. The second part addresses the evaluation of conjunctive queries under a budget of calls. Conjunctive queries may require an unbounded number of calls to compute the maximal answers, yet Web services typically allow only a fixed number of calls per session. We therefore have to prioritize query evaluation plans: among all plans that could return answers, we are working on distinguishing those that actually will. Finally, I will show an application of this new notion of plans.
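A view with a binding pattern, and query answering under a call budget, can be sketched as follows. The service, schema and data are invented toy stand-ins for ANGIE's actual machinery:

```python
# Toy sketch of a Web service as a view with a binding pattern:
# songsByArtist(artist^bound, title) requires the artist as an input,
# so calls must be seeded from values already in the knowledge base,
# and only a fixed number of calls is allowed per session.

SERVICE_DATA = {
    "Elvis": ["Jailhouse Rock", "Hound Dog"],
    "Dylan": ["Hurricane"],
}

def call_songs_by_artist(artist):
    """Simulate one service call with the required input bound."""
    return [(artist, title) for title in SERVICE_DATA.get(artist, [])]

def answer_query(seed_artists, max_calls=2):
    """Collect (artist, title) facts under a budget of calls,
    binding the input from artists the knowledge base already knows."""
    facts, calls = set(), 0
    for artist in seed_artists:
        if calls >= max_calls:
            break
        facts.update(call_songs_by_artist(artist))
        calls += 1
    return facts

facts = answer_query(["Elvis", "Dylan", "Cash"], max_calls=2)
print(len(facts))  # 3: two calls answered; the call for 'Cash' was over budget
```

The budget makes the ordering of calls matter, which is exactly the plan-prioritization problem the second part of the talk addresses.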

Tue, 24 Feb 2015 05:49:04 GMT

On building more human query answering systems
/slideshow/on-building-more-human-query-answering-systems/42604892
The underlying principle behind every query answering system is the existence of a query describing the information of interest. When this model is applied to non-expert users, two traditional issues become highly significant. The first is that many queries are over-specified, leading to empty answers. We propose a principled, optimization-based interactive query relaxation framework for such queries. The framework dynamically computes and suggests alternative queries with fewer conditions, to help the user arrive at a query with a non-empty answer, or at a query for which it is clear that, independently of the relaxations, the answer will always be empty. The second issue is the user's lack of expertise to accurately describe the requirements of the elements of interest. The user may, however, know examples of elements that they would like to see in the results. We introduce a novel query paradigm in which queries are no longer specifications of what the user is searching for, but simply a sample of what the user knows to be of interest. We refer to this novel form of queries as Exemplar Queries.
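The relaxation loop can be sketched as a search over subqueries, preferring those that drop the fewest conditions. The relation and the query below are invented; the actual framework is interactive and optimization-based rather than this exhaustive enumeration:

```python
# Toy sketch of query relaxation: a conjunctive query is a set of
# attribute=value conditions; when it returns no answers, try subqueries
# with fewer conditions (fewest removals first) until one is non-empty.
from itertools import combinations

CARS = [
    {"make": "vw", "color": "red", "year": 2012},
    {"make": "vw", "color": "blue", "year": 2015},
]

def evaluate(conds, rows):
    return [r for r in rows if all(r[a] == v for a, v in conds)]

def relax(query, rows):
    """Return (kept_conditions, answers) for the largest subquery
    with a non-empty answer."""
    conds = list(query.items())
    for keep in range(len(conds), -1, -1):
        for sub in combinations(conds, keep):
            ans = evaluate(sub, rows)
            if ans:
                return dict(sub), ans
    return {}, rows

kept, ans = relax({"make": "vw", "color": "red", "year": 2015}, CARS)
print(kept)  # {'make': 'vw', 'color': 'red'}: 'year' was relaxed
```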

The underlying principle behind every query answering system is the existence of a query describing the information of interest. When this model is applied to non-expert users, two traditional issues become highly significant. The first is that many queries are often over specified leading to empty answers. We propose a principled optimization-based interactive query relaxation framework for such queries. The framework computes dynamically and suggests alternative queries with less conditions to help the user arrive at a query with a non-empty answer, or at a query for which it is clear that independently of the relaxations the answer will always be empty. The second issue is the lack of expertise from the user to accurately describe the requirements of the elements of interest. The user may though know examples of elements that would like to have in the results. We introduce a novel form of query paradigm in which queries are not any more specifications of what the user is searching for, but simply a sample of what the user knows to be of interest. We refer to this novel form of queries as Exemplar Queries.]]>
Thu, 11 Dec 2014 07:49:26 GMT /slideshow/on-building-more-human-query-answering-systems/42604892 INRIA-OAK@slideshare.net(INRIA-OAK) On building more human query answering systems INRIA-OAK The underlying principle behind every query answering system is the existence of a query describing the information of interest. When this model is applied to non-expert users, two traditional issues become highly significant. The first is that many queries are often over specified leading to empty answers. We propose a principled optimization-based interactive query relaxation framework for such queries. The framework computes dynamically and suggests alternative queries with less conditions to help the user arrive at a query with a non-empty answer, or at a query for which it is clear that independently of the relaxations the answer will always be empty. The second issue is the lack of expertise from the user to accurately describe the requirements of the elements of interest. The user may though know examples of elements that would like to have in the results. We introduce a novel form of query paradigm in which queries are not any more specifications of what the user is searching for, but simply a sample of what the user knows to be of interest. We refer to this novel form of queries as Exemplar Queries. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/parissudtalknoanimation-141211074926-conversion-gate01-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> The underlying principle behind every query answering system is the existence of a query describing the information of interest. When this model is applied to non-expert users, two traditional issues become highly significant. The first is that many queries are often over specified leading to empty answers. We propose a principled optimization-based interactive query relaxation framework for such queries. 
The framework computes dynamically and suggests alternative queries with less conditions to help the user arrive at a query with a non-empty answer, or at a query for which it is clear that independently of the relaxations the answer will always be empty. The second issue is the lack of expertise from the user to accurately describe the requirements of the elements of interest. The user may though know examples of elements that would like to have in the results. We introduce a novel form of query paradigm in which queries are not any more specifications of what the user is searching for, but simply a sample of what the user knows to be of interest. We refer to this novel form of queries as Exemplar Queries.
On building more human query answering systems from INRIA-OAK
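The relaxation idea described in the abstract can be sketched in a few lines: when a conjunctive query returns no answers, propose subqueries with fewer conditions until one is non-empty. The data and predicates below are hypothetical, and the real framework chooses relaxations interactively and cost-based rather than by this exhaustive search.

```python
# Minimal sketch of query relaxation: drop conditions from an
# over-specified conjunctive query until a non-empty answer appears.
from itertools import combinations

def evaluate(conditions, rows):
    return [r for r in rows if all(cond(r) for cond in conditions)]

def relax(conditions, rows):
    # Try queries with progressively fewer conditions, largest first,
    # and return the first subset that yields a non-empty answer.
    for k in range(len(conditions), 0, -1):
        for subset in combinations(conditions, k):
            answers = evaluate(subset, rows)
            if answers:
                return list(subset), answers
    return [], rows  # the empty query returns everything

cars = [
    {"make": "A", "price": 9000, "year": 2012},
    {"make": "B", "price": 15000, "year": 2014},
]
conditions = [
    lambda r: r["price"] < 10000,   # cheap
    lambda r: r["year"] >= 2014,    # recent
]

assert evaluate(conditions, cars) == []   # over-specified: empty answer
kept, answers = relax(conditions, cars)
print(len(kept), answers)
```

Here the full query is unsatisfiable, so the relaxation keeps only one of the two conditions; an interactive system would instead present such candidate subqueries to the user and let them choose.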
Dynamically Optimizing Queries over Large Scale Data Platforms
Enterprises are adopting large-scale data processing platforms, such as Hadoop, to gain actionable insights from their "big data". Query optimization is still an open challenge in this environment due to the volume and heterogeneity of data, comprising both structured and un-/semi-structured datasets. Moreover, it has become common practice to push business logic close to the data via user-defined functions (UDFs), which are usually opaque to the optimizer, further complicating cost-based optimization. As a result, classical relational query optimization techniques do not fit well in this setting, while at the same time, suboptimal query plans can be disastrous with large datasets. In this talk, I will present new techniques that take into account UDFs and correlations between relations for optimizing queries running on large-scale clusters. We introduce "pilot runs", which execute part of the query over a sample of the data to estimate selectivities, and employ a cost-based optimizer that uses these selectivities to choose an initial query plan. Then, we follow a dynamic optimization approach, in which plans evolve as parts of the queries get executed. Our experimental results show that our techniques produce plans that are at least as good as, and up to 2x (4x) better for Jaql (Hive) than, the best hand-written left-deep query plans.

Thu, 27 Nov 2014 06:14:42 GMT
Dynamically Optimizing Queries over Large Scale Data Platforms from INRIA-OAK
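The "pilot runs" idea can be illustrated with a toy sketch: run each opaque UDF predicate over a small data sample to estimate its selectivity, then use the estimates to order the predicates (most selective first here stands in for the full cost-based plan choice). The data and UDFs are invented examples, not the system's actual workload.

```python
# Toy illustration of pilot runs: estimate UDF selectivities on a
# sample, then apply the most selective predicate first.
import random

def estimate_selectivity(udf, data, sample_size=100, seed=0):
    random.seed(seed)
    sample = random.sample(data, min(sample_size, len(data)))
    return sum(1 for row in sample if udf(row)) / len(sample)

data = list(range(10_000))
udf_rare   = lambda x: x % 100 == 0   # ~1% selectivity
udf_common = lambda x: x % 2 == 0     # ~50% selectivity

sels = {u: estimate_selectivity(u, data) for u in (udf_rare, udf_common)}

# Apply the most selective predicate first so later ones see fewer rows.
plan = sorted((udf_rare, udf_common), key=lambda u: sels[u])
result = data
for udf in plan:
    result = [row for row in result if udf(row)]
print(sels[udf_common], len(result))
```

The final answer is the same under any predicate order; what the estimates change is how many rows the later, possibly expensive, UDFs have to touch, which is the cost that the optimizer is trying to minimize.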
Web Data Management in RDF Age
Web data management has been a topic of interest for many years, during which a number of different modelling approaches have been tried. The latest of these approaches is to use RDF (Resource Description Framework), which seems to provide a real opportunity for querying at least some of the Web's data systematically. RDF has been proposed by the World Wide Web Consortium (W3C) for modeling Web objects as part of developing the "semantic web". W3C has also proposed SPARQL as the query language for accessing RDF data repositories. The publication of Linked Open Data (LOD) on the Web has gained tremendous momentum over the last few years, and this provides a new opportunity to accomplish web data integration. A number of approaches have been proposed for running SPARQL queries over RDF-encoded Web data: data warehousing, SPARQL federation, and live linked query execution. In this talk, I will review these approaches, with particular emphasis on some of our research within the context of the gStore project (a joint project with Prof. Lei Zou of Peking University and Prof. Lei Chen of Hong Kong University of Science and Technology), the chameleon-db project (joint work with Günes Aluç, Dr. Olaf Hartig, and Prof. Khuzaima Daudjee of the University of Waterloo), and live linked query execution (joint work with Dr. Olaf Hartig).

Fri, 03 Oct 2014 04:07:49 GMT
Web Data Management in RDF Age from INRIA-OAK
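All three approaches mentioned above (warehousing, federation, live linked query execution) share one core operation: matching a SPARQL-style basic graph pattern against a set of RDF triples. A self-contained sketch, using an invented toy graph rather than real Linked Open Data:

```python
# Minimal sketch of SPARQL-style basic graph pattern matching over an
# in-memory set of (subject, predicate, object) triples.

triples = {
    ("Paris", "capitalOf", "France"),
    ("Paris", "population", "2100000"),
    ("Lyon",  "locatedIn",  "France"),
}

def match(pattern, binding, triples):
    # pattern: (s, p, o) where strings starting with '?' are variables
    for triple in triples:
        new = dict(binding)
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                if term in new and new[term] != value:
                    ok = False   # variable already bound to another value
                    break
                new[term] = value
            elif term != value:
                ok = False       # constant term does not match
                break
        if ok:
            yield new

def bgp(patterns, triples):
    # Evaluate a conjunction of triple patterns by extending bindings.
    bindings = [{}]
    for pattern in patterns:
        bindings = [b2 for b in bindings for b2 in match(pattern, b, triples)]
    return bindings

# SELECT ?city ?n WHERE { ?city capitalOf France . ?city population ?n }
answers = bgp([("?city", "capitalOf", "France"),
               ("?city", "population", "?n")], triples)
print(answers)
```

The three system designs differ mainly in where this matching happens: over a local warehouse copy, pushed to remote SPARQL endpoints, or performed while dereferencing linked data live.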
Oak meeting 18/09/2014
Thu, 18 Sep 2014 05:18:58 GMT
A general view of how the team works: people, projects, grants, and the administrative agenda.
Oak meeting 18/09/2014 from INRIA-OAK
Nautilus
Thu, 18 Sep 2014 04:59:25 GMT
An SQL query result analysis and debugging tool.
Nautilus from INRIA-OAK
Warg
Thu, 18 Sep 2014 04:52:47 GMT
Warg from INRIA-OAK
Vip2p
Thu, 18 Sep 2014 04:52:47 GMT
Vip2p from INRIA-OAK
S4
Thu, 18 Sep 2014 04:52:45 GMT
S4 from INRIA-OAK
Rdf saturator
Thu, 18 Sep 2014 04:52:45 GMT
Rdf saturator from INRIA-OAK
Rdf generator
Thu, 18 Sep 2014 04:52:44 GMT
Rdf generator from INRIA-OAK
Rdf conjunctive query selectivity estimation
Thu, 18 Sep 2014 04:52:44 GMT
Rdf conjunctive query selectivity estimation from INRIA-OAK
rdf query reformulation
Thu, 18 Sep 2014 04:52:44 GMT
rdf query reformulation from INRIA-OAK
postgres loader
Thu, 18 Sep 2014 04:52:44 GMT
A PostgreSQL loader.
postgres loader from INRIA-OAK
Plreuse
Thu, 18 Sep 2014 04:52:43 GMT
Plreuse from INRIA-OAK
Paxquery
Thu, 18 Sep 2014 04:52:43 GMT
Paxquery from INRIA-OAK