際際滷shows by User: treygrainger (際際滷Share feed, updated Wed, 06 Nov 2019 06:28:26 GMT)

Balancing the Dimensions of User Intent (Wed, 06 Nov 2019 06:28:26 GMT)
/slideshow/balancing-the-dimensions-of-user-intent/190952294
The first step in returning relevant search results is successfully interpreting the user's intent. This requires combining a holistic understanding of your content, your users, and your domain. Traditional keyword search focuses on the content understanding dimension. Knowledge graphs are then typically built and leveraged to represent an understanding of your domain. Finally, collaborative recommendations and user profile learning are typically the tools of choice for generating and modeling an understanding of the preferences of each user. While these systems (search, recommendations, and knowledge graphs) are often built and used in isolation, combining them is the key to truly understanding a user's query intent. For example, combining traditional keyword search with your knowledge graph leads to semantic search capabilities, and combining traditional keyword search with recommendations leads to personalized search experiences. Combining all of these dimensions in an appropriately balanced way will ultimately lead to the most accurate interpretation of a user's query, resulting in a better query to the core search engine and ultimately a better, more relevant search experience. In this talk, we'll demonstrate strategies for delivering and combining each of these dimensions of user intent, and we'll walk through concrete examples of how to balance the nuances of each so that you don't over-personalize, over-contextualize, or under-appreciate the nuances of your users' intent.
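As a rough illustration of that balancing act, here is a minimal sketch (not the talk's implementation) of how signals from the content, domain, and user dimensions might be blended into a single boosted query. The endpoint, collection, field names, weights, and expansion/preference sources are all illustrative assumptions.

```python
import requests  # assumes a reachable Solr instance; endpoint below is hypothetical

SOLR_SELECT = "http://localhost:8983/solr/products/select"  # assumed collection name

def build_intent_query(keywords, kg_expansions, user_prefs,
                       w_content=1.0, w_domain=0.6, w_user=0.3):
    """Blend the three dimensions of intent into one weighted query string.

    keywords:      raw user keywords (content dimension)
    kg_expansions: {term: relatedness} learned from a knowledge graph (domain dimension)
    user_prefs:    {term: affinity} learned from this user's history (user dimension)
    """
    parts = [f"{kw}^{w_content}" for kw in keywords]
    parts += [f"{t}^{round(score * w_domain, 3)}" for t, score in kg_expansions.items()]
    parts += [f"{t}^{round(score * w_user, 3)}" for t, score in user_prefs.items()]
    return " ".join(parts)

q = build_intent_query(
    keywords=["bbq"],
    kg_expansions={"ribs": 0.66, "brisket": 0.63},   # e.g. from a semantic knowledge graph
    user_prefs={"vegetarian": 0.8},                  # e.g. from profile/collaborative signals
)
resp = requests.get(SOLR_SELECT, params={"q": q, "defType": "edismax", "qf": "name description"})
print(resp.json()["response"]["docs"][:3])
```

Tuning the relative weights (here w_content, w_domain, w_user) is exactly the kind of balancing the talk is about; over-weighting the user dimension is how over-personalization creeps in.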

Reflected Intelligence: Real world AI in Digital Transformation (Wed, 06 Nov 2019 06:13:41 GMT)
/slideshow/reflected-intelligence-real-world-ai-in-digital-transformation/190947636
The goal of most digital transformations is to create competitive advantage by enhancing customer experience and employee success, so giving these stakeholders the ability to find the right information at their moment of need is paramount. Employees and customers increasingly expect an intuitive, interactive experience where they can simply type or speak their questions or keywords into a search box, their intent will be understood, and the best answers and content are then immediately presented. Providing this compelling experience, however, requires a deep understanding of your content, your unique business domain, and the collective and personalized needs of each of your users. Modern artificial intelligence (AI) approaches are able to continuously learn from both your content and the ongoing stream of user interactions with your applications, and to automatically reflect back that learned intelligence in order to instantly and scalably deliver contextually-relevant answers to employees and customers. In this talk, we'll discuss how AI is currently being deployed across the Fortune 1000 to accomplish these goals, both in the digital workplace (helping employees more efficiently get answers and make decisions) and in digital commerce (understanding customer intent and connecting them with the best information and products). We'll separate fact from fiction as we break down the hype around AI and show how it is being practically implemented today to power many real-world digital transformations for the next generation of employees and customers.
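One concrete, deliberately simplified form of "reflecting back" learned intelligence is signal boosting: aggregating click signals per query/document pair and feeding them back as boosts at query time. The sketch below assumes a click log already in memory; the id field and the boost formula are illustrative assumptions, not the actual product pipeline.

```python
from collections import Counter

def aggregate_click_signals(click_log):
    """click_log: iterable of (query, doc_id) click events from application logs."""
    return Counter((q.lower().strip(), doc) for q, doc in click_log)

def boost_params(query, signals, max_boost=10.0):
    """Turn aggregated clicks for this query into edismax-style bq boost clauses."""
    counts = {doc: n for (q, doc), n in signals.items() if q == query.lower().strip()}
    if not counts:
        return []
    top = max(counts.values())
    # scale boosts so the most-clicked document gets max_boost (assumes an 'id' key field)
    return [f"id:{doc}^{round(max_boost * n / top, 2)}" for doc, n in counts.items()]

signals = aggregate_click_signals([
    ("laptop bag", "doc42"), ("laptop bag", "doc42"), ("laptop bag", "doc7"),
])
print(boost_params("Laptop Bag", signals))
# ['id:doc42^10.0', 'id:doc7^5.0']
```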

Thought Vectors and Knowledge Graphs in AI-powered Search (Wed, 06 Nov 2019 06:11:49 GMT)
/slideshow/thought-vectors-and-knowledge-graphs-in-ai-powered-search-190947140/190947140
While traditional keyword search is still useful, pure text-based keyword matching is quickly becoming obsolete; today, it is a necessary but not sufficient tool for delivering relevant results and intelligent search experiences. In this talk, we'll cover some of the emerging trends in AI-powered search, including the use of thought vectors (multi-level vector embeddings) and semantic knowledge graphs to contextually interpret and conceptualize queries. We'll walk through some live query interpretation demos to demonstrate the power that can be delivered through these semantic search techniques leveraging auto-generated knowledge graphs learned from your content and user interactions.]]>
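As a minimal sketch of the "thought vector" idea (a toy under stated assumptions, not the talk's implementation): average the word embeddings of a query into a single vector and rank candidate concepts by cosine similarity. The tiny hand-made embeddings below stand in for vectors you would actually learn from your own content, e.g. with word2vec.

```python
import numpy as np

# toy 3-d embeddings standing in for learned word vectors (illustrative values only)
EMB = {
    "data":     np.array([0.9, 0.1, 0.0]),
    "science":  np.array([0.7, 0.6, 0.1]),
    "machine":  np.array([0.8, 0.5, 0.1]),
    "learning": np.array([0.6, 0.7, 0.2]),
    "cooking":  np.array([0.0, 0.1, 0.9]),
}

def thought_vector(text):
    """Average the embeddings of the known terms in a query into one vector."""
    vecs = [EMB[t] for t in text.lower().split() if t in EMB]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = thought_vector("data science")
candidates = ["machine learning", "cooking"]
ranked = sorted(candidates, key=lambda c: cosine(query_vec, thought_vector(c)), reverse=True)
print(ranked)  # 'machine learning' should outrank 'cooking' for this query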

Natural Language Search with Knowledge Graphs (Chicago Meetup) (Wed, 06 Nov 2019 05:57:14 GMT)
/slideshow/natural-language-search-with-knowledge-graphs-chicago-meetup/190942133
To optimally interpret most natural language queries, it's important to form a highly nuanced, contextual interpretation of the domain-specific phrases, entities, commands, and relationships represented or implied within the search and within your domain. In this talk, we'll walk through such a search system powered by Solr's Text Tagger and Semantic Knowledge Graph. We'll have fun with some of the more search-centric use cases of knowledge graphs, such as entity extraction, query expansion, disambiguation, and pattern identification within our queries: for example, transforming the query "best bbq near activate" into:

    {!func}mul(min(popularity,1),100) bbq^0.91032 ribs^0.65674 brisket^0.63386 doc_type:"restaurant" {!geofilt d=50 sfield="coordinates_pt" pt="38.916120,-77.045220"}

We'll see a live demo with real-world data demonstrating how you can build and apply your own knowledge graphs to power much more relevant query understanding like this within your search engine.
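For the entity-extraction step of a pipeline like this, Solr's Tagger request handler can mark up known phrases in raw query text. Below is a minimal sketch; the handler path (/tag), collection name, and the fields on the entity documents are assumptions about how such a setup might be configured, not a description of the exact demo.

```python
import requests  # assumes Solr with a TaggerRequestHandler configured (path /tag is an assumption)

TAG_URL = "http://localhost:8983/solr/entities/tag"  # hypothetical collection of known entities

def tag_entities(text):
    """Return the entity documents whose names were found inside the raw query text."""
    params = {
        "overlaps": "NO_SUB",   # keep only the longest non-overlapping matches
        "tagsLimit": 100,
        "fl": "id,name,type",   # fields assumed to exist on the entity documents
        "matchText": "true",
        "wt": "json",
    }
    resp = requests.post(TAG_URL, params=params, data=text.encode("utf-8"),
                         headers={"Content-Type": "text/plain; charset=UTF-8"})
    return resp.json().get("response", {}).get("docs", [])

for entity in tag_entities("best bbq near activate"):
    print(entity)  # e.g. a 'bbq' food entity and an 'activate' event entity, if indexed
```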

The Next Generation of AI-powered Search (Wed, 06 Nov 2019 05:49:12 GMT)
/slideshow/the-next-generation-of-ai-powered-search/190939055
What does it really mean to deliver an "AI-powered Search" solution? In this talk, we'll bring clarity to this topic, showing you how to marry the art of the possible with the real-world challenges involved in understanding your content, your users, and your domain. We'll dive into emerging trends in AI-powered search, as well as many of the stumbling blocks found in even the most advanced AI and search applications, showing how to proactively plan for and avoid them. We'll walk through the various uses of reflected intelligence and feedback loops for continuous learning from user behavioral signals and content updates, also covering the increasing importance of virtual assistants and personalized search use cases found within the intersection of traditional search and recommendation engines. Our goal will be to provide a baseline of mainstream AI-powered search capabilities available today, and to paint a picture of what we can all expect just on the horizon.

Natural Language Search with Knowledge Graphs (Activate 2019) (Wed, 06 Nov 2019 05:44:27 GMT)
/slideshow/natural-language-search-with-knowledge-graphs-activate/190937388
To optimally interpret most natural language queries, it's important to form a highly nuanced, contextual interpretation of the domain-specific phrases, entities, commands, and relationships represented or implied within the search and within your domain. In this talk, we'll walk through such a search system powered by Solr's Text Tagger and Semantic Knowledge Graph. We'll have fun with some of the more search-centric use cases of knowledge graphs, such as entity extraction, query expansion, disambiguation, and pattern identification within our queries: for example, transforming the query "best bbq near activate" into:

    {!func}mul(min(popularity,1),100) bbq^0.91032 ribs^0.65674 brisket^0.63386 doc_type:"restaurant" {!geofilt d=50 sfield="coordinates_pt" pt="38.916120,-77.045220"}

We'll see a live demo with real-world data demonstrating how you can build and apply your own knowledge graphs to power much more relevant query understanding like this within your search engine.

AI, Search, and the Disruption of Knowledge Management (Wed, 06 Nov 2019 05:38:11 GMT)
/slideshow/ai-search-and-the-disruption-of-knowledge-management/190935645
Trey Grainger's presentation from the DOD & Federal Knowledge Management Symposium 2019.

Measuring Relevance in the Negative Space (Wed, 06 Nov 2019 05:28:29 GMT)
/slideshow/measuring-relevance-in-the-negative-space/190932902
Trey Grainger's presentation at the Southern Data Science Conference, 2019.

Natural Language Search with Knowledge Graphs (Haystack 2019) (Thu, 16 May 2019 14:55:06 GMT)
/slideshow/natural-language-search-with-knowledge-graphs/146060757
To optimally interpret most natural language queries, it is necessary to understand the phrases, entities, commands, and relationships represented or implied within the search. Knowledge graphs serve as useful instantiations of ontologies that can help represent this kind of knowledge within a domain. In this talk, we'll walk through techniques to build knowledge graphs automatically from your own domain-specific content, how you can update and edit the nodes and relationships, and how you can seamlessly integrate them into your search solution for enhanced query interpretation and semantic search. We'll have some fun with some of the more search-centric use cases of knowledge graphs, such as entity extraction, query expansion, disambiguation, and pattern identification within our queries: for example, transforming the query "bbq near haystack" into:

    {
      "filter": ["doc_type:restaurant"],
      "query": {
        "boost": {
          "b": "recip(geodist(38.034780,-78.486790),1,1000,1000)",
          "query": "bbq OR barbeque OR barbecue"
        }
      }
    }

We'll also specifically cover use of the Semantic Knowledge Graph, a particularly interesting knowledge graph implementation available within Apache Solr that can be auto-generated from your own domain-specific content and which provides highly nuanced, contextual interpretation of all of the terms, phrases, and entities within your domain. We'll see a live demo with real-world data demonstrating how you can build and apply your own knowledge graphs to power much more relevant query understanding within your search engine.
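A hedged sketch of submitting a rewritten query like the one above through Solr's JSON Request API follows; the endpoint, collection name, location field, and result fields are assumptions made for illustration, not the talk's exact setup.

```python
import requests

SOLR = "http://localhost:8983/solr/restaurants/select"  # hypothetical collection

rewritten = {
    "filter": ["doc_type:restaurant"],
    "query": {
        "boost": {
            # boost results by closeness to the venue (reciprocal of geo distance)
            "b": "recip(geodist(38.034780,-78.486790),1,1000,1000)",
            "query": "bbq OR barbeque OR barbecue",
        }
    },
    # assumed: geodist() resolves the location field from the sfield param in this setup
    "params": {"sfield": "coordinates_pt"},
}

resp = requests.post(SOLR, json=rewritten)
for doc in resp.json()["response"]["docs"][:5]:
    print(doc.get("name"), doc.get("doc_type"))
```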

The Future of Search and AI (Mon, 05 Nov 2018 06:44:09 GMT)
/slideshow/the-future-of-search-and-ai/121890864
Closing keynote by Trey Grainger from Activate 2018 in Montreal, Canada. Covers trends in the intersection of Search (Information Retrieval) and Artificial Intelligence, and the underlying capabilities needed to deliver those trends at scale.

How to Build a Semantic Search System (Mon, 05 Nov 2018 06:29:57 GMT)
/slideshow/how-to-build-a-semantic-search-system/121889064
Building a semantic search system - one that can correctly parse and interpret end-user intent and return the ideal results for users' queries - is not an easy task. It requires semantically parsing the terms, phrases, and structure within queries, disambiguating polysemous terms, correcting misspellings, expanding to conceptually synonymous or related concepts, and rewriting queries in a way that maps the correct interpretation of each end user's query into the ideal representation of features and weights that will return the best results for that user. Not only that, but the above must often be done within the confines of a very specific domain - rife with its own jargon and linguistic and conceptual nuances. This talk will walk through the anatomy of a semantic search system and how each of the pieces described above fits together to deliver a final solution. We'll leverage several recently released capabilities in Apache Solr (the Semantic Knowledge Graph, Solr Text Tagger, Statistical Phrase Identifier) and Lucidworks Fusion (query log mining, misspelling job, word2vec job, query pipelines, relevancy experiment backtesting) to show you an end-to-end working semantic search system that can automatically learn the nuances of any domain and deliver a substantially more relevant search experience.
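To make that "anatomy" concrete, here is a toy end-to-end query interpretation sketch; all of the dictionaries and weights below are illustrative stand-ins for what the misspelling, tagging, and knowledge-graph components would actually learn, not the system described in the talk.

```python
MISSPELLINGS = {"barbeque": "bbq"}                       # learned from query logs (toy)
KNOWN_PHRASES = {"machine learning", "data science"}     # from a text tagger dictionary (toy)
RELATED = {"bbq": {"ribs": 0.66, "brisket": 0.63}}       # from a semantic knowledge graph (toy)

def interpret(query):
    """Toy query-interpretation pipeline: correct, tag phrases, then expand and weight."""
    # 1. normalize and spell-correct tokens
    tokens = [MISSPELLINGS.get(t, t) for t in query.lower().split()]
    text = " ".join(tokens)
    # 2. recognize known multi-word phrases so they are treated as single concepts
    phrases = [p for p in KNOWN_PHRASES if p in text]
    remaining = [t for t in tokens if all(t not in p.split() for p in phrases)]
    # 3. expand each concept with conceptually related terms, down-weighted by relatedness
    clauses = [f'"{p}"' for p in phrases] + remaining
    for term in phrases + remaining:
        for related_term, weight in RELATED.get(term, {}).items():
            clauses.append(f"{related_term}^{weight}")
    return " ".join(clauses)

print(interpret("best barbeque restaurants"))
# -> best bbq restaurants ribs^0.66 brisket^0.63
```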

The Relevance of the Apache Solr Semantic Knowledge Graph (Mon, 05 Nov 2018 06:22:32 GMT)
/treygrainger/relevance-of-the-apache-solr-semantic-knowledge-graph
The Semantic Knowledge Graph is an Apache Solr plugin that can be used to discover and rank the relationships between any arbitrary queries or terms within the search index. It is a relevancy Swiss Army knife, able to discover related terms and concepts, disambiguate different meanings of terms given their context, clean up noise in datasets, discover previously unknown relationships between entities across documents and fields, rank lists of keywords based upon conceptual cohesion to reduce noise, summarize documents by extracting their most significant terms, generate recommendations and personalized search, and power numerous other applications involving anomaly detection, significance/relationship discovery, and semantic search. This talk will walk you through how to set up and use this plugin in concert with other open source tools (probabilistic query parser, SolrTextTagger for entity extraction) to parse, interpret, and much more correctly model the true intent of user searches than traditional keyword-based search approaches.
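In recent Solr versions, similar functionality is also exposed through the relatedness() aggregation in the JSON Facet API. The sketch below (collection, field names, and foreground query are assumptions for illustration) asks for the terms most semantically related to a foreground query:

```python
import requests

SOLR = "http://localhost:8983/solr/jobs/select"  # hypothetical collection

payload = {
    "query": "*:*",
    "limit": 0,
    "params": {"fore": 'skills:"data science"', "back": "*:*"},
    "facet": {
        "related_terms": {
            "type": "terms",
            "field": "skills",            # assumed field holding the vocabulary to traverse
            "limit": 10,
            "sort": {"r": "desc"},
            "facet": {"r": "relatedness($fore,$back)"},  # foreground vs. background significance
        }
    },
}

resp = requests.post(SOLR, json=payload)
for bucket in resp.json()["facets"]["related_terms"]["buckets"]:
    print(bucket["val"], round(bucket["r"]["relatedness"], 3))
```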

Searching for Meaning (Mon, 05 Nov 2018 06:14:26 GMT)
/slideshow/searching-for-meaning/121887408
"Searching for Meaning: The Hidden Structure in Unstructured Data". Presentation by Trey Grainger at the Southern Data Science Conference (SDSC) 2018. Covers linguistic theory, application in search and information retrieval, and knowledge graph and ontology learning methods for automatically deriving contextualized meaning from unstructured (free text) content.]]>

"Searching for Meaning: The Hidden Structure in Unstructured Data". Presentation by Trey Grainger at the Southern Data Science Conference (SDSC) 2018. Covers linguistic theory, application in search and information retrieval, and knowledge graph and ontology learning methods for automatically deriving contextualized meaning from unstructured (free text) content.]]>
Mon, 05 Nov 2018 06:14:26 GMT /slideshow/searching-for-meaning/121887408 treygrainger@slideshare.net(treygrainger) Searching for Meaning treygrainger "Searching for Meaning: The Hidden Structure in Unstructured Data". Presentation by Trey Grainger at the Southern Data Science Conference (SDSC) 2018. Covers linguistic theory, application in search and information retrieval, and knowledge graph and ontology learning methods for automatically deriving contextualized meaning from unstructured (free text) content. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/searching-for-meaning-181105061426-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> &quot;Searching for Meaning: The Hidden Structure in Unstructured Data&quot;. Presentation by Trey Grainger at the Southern Data Science Conference (SDSC) 2018. Covers linguistic theory, application in search and information retrieval, and knowledge graph and ontology learning methods for automatically deriving contextualized meaning from unstructured (free text) content.
Searching for Meaning from Trey Grainger
]]>
The Intent Algorithms of Search & Recommendation Engines /slideshow/intent-algorithms-of-search-and-recommendation-engines/121882843 furman-181105053411
Guest lecture for Furman University's "Big Data: Mining and Analysis" class (CS272), December 1, 2017.]]>
Mon, 05 Nov 2018 05:34:11 GMT /slideshow/intent-algorithms-of-search-and-recommendation-engines/121882843 treygrainger@slideshare.net(treygrainger) The Intent Algorithms of Search & Recommendation Engines treygrainger
The Intent Algorithms of Search & Recommendation Engines from Trey Grainger
]]>
The Apache Solr Semantic Knowledge Graph /slideshow/the-apache-solr-semantic-knowledge-graph/121880632 skg-181105051457
What if, instead of a query returning documents, you could return the keywords most related to the query? For example, given a search for "data science", get back results like "machine learning", "predictive modeling", "artificial neural networks", etc. Solr's Semantic Knowledge Graph does just that. It leverages the inverted index to automatically model the significance of relationships between every term in the index (even across multiple fields), allowing real-time traversal and ranking of any relationship within your documents. Use cases for the Semantic Knowledge Graph include disambiguating multiple meanings of terms (does "driver" mean truck driver, printer driver, a type of golf club, etc.?), searching on vectors of related keywords to form a conceptual search (versus just a text match), powering recommendation algorithms, ranking lists of keywords based upon conceptual cohesion to reduce noise, summarizing documents by extracting their most significant terms, and numerous other applications involving anomaly detection, significance/relationship discovery, and semantic search. In this talk, we'll do a deep dive into the internals of how the Semantic Knowledge Graph works and walk you through how to get up and running with an example dataset to explore the meaningful relationships hidden within your data.]]>
Mon, 05 Nov 2018 05:14:57 GMT /slideshow/the-apache-solr-semantic-knowledge-graph/121880632 treygrainger@slideshare.net(treygrainger) The Apache Solr Semantic Knowledge Graph treygrainger
The Apache Solr Semantic Knowledge Graph from Trey Grainger
]]>
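To make the conceptual search use case above concrete, here is a minimal sketch of how terms surfaced by a semantic knowledge graph, along with their relatedness weights, might be folded into a boosted Solr query string. The field name ("text") and the example terms and weights are illustrative assumptions rather than output from a real index.

```python
# Minimal sketch: turning SKG-style related terms (with relatedness weights)
# into a per-term boosted query for Solr's lucene/edismax query parsers.
# The field name and weights below are hypothetical.
related_terms = [
    ("data science", 1.00),             # the original query concept
    ("machine learning", 0.87),
    ("predictive modeling", 0.72),
    ("artificial neural networks", 0.64),
]

def build_conceptual_query(terms, field="text"):
    """Join phrase clauses with OR, boosting each phrase by its relatedness weight."""
    clauses = [f'{field}:"{term}"^{weight:.2f}' for term, weight in terms]
    return " OR ".join(clauses)

print(build_conceptual_query(related_terms))
# text:"data science"^1.00 OR text:"machine learning"^0.87 OR ...
```

Matching on the weighted expansion rather than only the literal query text is what shifts behavior from a plain text match toward conceptual search.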
Building Search & Recommendation Engines /slideshow/building-search-and-recommendation-engines/77562306 greenville-data-science-meetup-170706051836
In this talk, you'll learn how to build your own search and recommendation engine based on the open source Apache Lucene/Solr project. We'll dive into some of the data science behind how search engines work, covering multi-lingual text analysis, natural language processing, relevancy ranking algorithms, knowledge graphs, reflected intelligence, collaborative filtering, and other machine learning techniques used to drive relevant results for free-text queries. We'll also demonstrate how to build a recommendation engine leveraging the same platform and techniques that power search for most of the world's top companies. You'll walk away from this presentation with the toolbox you need to go and implement your very own search-based product using your own data.]]>

Thu, 06 Jul 2017 05:18:36 GMT /slideshow/building-search-and-recommendation-engines/77562306 treygrainger@slideshare.net(treygrainger) Building Search & Recommendation Engines treygrainger
Building Search & Recommendation Engines from Trey Grainger
]]>
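One of the simplest recommendation techniques in the toolbox described above is content-based similarity, which Solr exposes through its MoreLikeThis (mlt) query parser. The sketch below assumes a hypothetical "products" collection with "title" and "description" fields and a seed document id of "SKU-123".

```python
# Minimal sketch: content-based "similar item" recommendations via Solr's
# MoreLikeThis query parser. Collection, fields, and document id are assumed.
import requests

SOLR_URL = "http://localhost:8983/solr/products/select"  # assumed collection

params = {
    # Find documents similar to the seed document "SKU-123", judged on the
    # title and description fields.
    "q": "{!mlt qf=title,description mintf=1 mindf=2}SKU-123",
    "fq": "-id:SKU-123",   # keep the seed document out of its own results
    "fl": "id,title,score",
    "rows": 5,
}

resp = requests.get(SOLR_URL, params=params, timeout=10)
resp.raise_for_status()
for doc in resp.json()["response"]["docs"]:
    print(doc["id"], doc.get("title"), round(doc["score"], 3))
```

The collaborative filtering and reflected intelligence techniques mentioned in the abstract would layer user interaction signals on top of (or in place of) this purely content-based similarity.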
Intent Algorithms: The Data Science of Smart Information Retrieval Systems /slideshow/intent-algorithms/77562106 trey-graingerintent-algorithms-170706050805
Search engines, recommendation systems, advertising networks, and even data analytics tools all share the same end goal: to deliver the most relevant information possible to meet a given information need (usually in real time). Perfecting these systems requires algorithms which can build a deep understanding of the domains represented by the underlying data, understand the nuanced ways in which words and phrases should be parsed and interpreted within different contexts, score the relationships between arbitrary phrases and concepts, continually learn from users' context and interactions to make the system smarter, and generate custom models of personalized tastes for each user of the system. In this talk, we'll dive into the philosophical questions associated with such systems ("How do you accurately represent and interpret the meaning of words?", "How do you prevent filter bubbles?", etc.), as well as practical examples of how these systems have been successfully implemented in production by combining a variety of available commercial and open source components (inverted indexes, entity extraction, similarity scoring and machine-learned ranking, auto-generated knowledge graphs, phrase interpretation and concept expansion, etc.).]]>
Thu, 06 Jul 2017 05:08:05 GMT /slideshow/intent-algorithms/77562106 treygrainger@slideshare.net(treygrainger) Intent Algorithms: The Data Science of Smart Information Retrieval Systems treygrainger
Intent Algorithms: The Data Science of Smart Information Retrieval Systems from Trey Grainger
]]>
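Two of the building blocks named above, inverted indexes and similarity scoring, can be illustrated with a toy sketch. Production engines such as Lucene/Solr use far more sophisticated index structures and BM25 scoring by default, so this is only a conceptual illustration, not the algorithms from the talk.

```python
# Toy inverted index plus a simple TF-IDF scoring pass, for illustration only.
import math
from collections import Counter, defaultdict

docs = {
    "d1": "machine learning for search relevance",
    "d2": "search engines rank documents by relevance",
    "d3": "deep learning for recommendation engines",
}

# Inverted index: term -> {doc_id: term frequency}
index = defaultdict(dict)
for doc_id, text in docs.items():
    for term, tf in Counter(text.split()).items():
        index[term][doc_id] = tf

def idf(term):
    """Smoothed inverse document frequency."""
    df = len(index.get(term, {}))
    return math.log((1 + len(docs)) / (1 + df)) + 1

def score(query):
    """Score every matching document with a simple sum of TF * IDF."""
    scores = defaultdict(float)
    for term in query.split():
        for doc_id, tf in index.get(term, {}).items():
            scores[doc_id] += tf * idf(term)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(score("learning search"))  # highest-scoring document ids first
```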
Self-learned Relevancy with Apache Solr /slideshow/self-learned-relevancy-with-apache-solr/77562051 self-learned-relevancy-with-apache-solr-170706050445
Search engines are known for "relevancy", but the relevancy models that ship out of the box (BM25, classic tf-idf, etc.) are just scratching the surface of what's needed for a truly insightful application. What if your search engine could automatically tune its own domain-specific relevancy model based on user interactions? What if it could learn the important phrases and topics within your domain, learn the conceptual relationships embedded within your documents, and even use machine-learned ranking to discover the relative importance of different features and then automatically optimize its own ranking algorithms for your domain? What if you could further use SQL queries to explore these relationships within your own BI tools and return results in ranked order to deliver relevance-driven analytics visualizations? In this presentation, we'll walk through how you can leverage the myriad of capabilities in the Apache Solr ecosystem (such as the Solr Text Tagger, Semantic Knowledge Graph, Spark-Solr, Solr SQL, learning to rank, probabilistic query parsing, and Lucidworks Fusion) to build self-learning, relevance-first search, recommendations, and data analytics applications.]]>

Thu, 06 Jul 2017 05:04:45 GMT /slideshow/self-learned-relevancy-with-apache-solr/77562051 treygrainger@slideshare.net(treygrainger) Self-learned Relevancy with Apache Solr treygrainger
Self-learned Relevancy with Apache Solr from Trey Grainger
]]>
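As an example of the Solr SQL capability mentioned above, a minimal sketch of querying Solr's Parallel SQL endpoint (/sql) from Python might look like this. The collection name ("jobs") and the fields being aggregated are hypothetical, and grouped fields generally need docValues enabled.

```python
# Minimal sketch: issuing a SQL query against Solr's Parallel SQL handler.
# Collection name and fields are assumptions for illustration.
import requests

SQL_URL = "http://localhost:8983/solr/jobs/sql"  # assumed collection

stmt = (
    "SELECT job_title, COUNT(*) AS openings "
    "FROM jobs "
    "GROUP BY job_title "
    "ORDER BY openings DESC "
    "LIMIT 10"
)

resp = requests.post(SQL_URL, data={"stmt": stmt}, timeout=30)
resp.raise_for_status()
for tup in resp.json()["result-set"]["docs"]:
    if "EOF" in tup:   # the final tuple is a stream terminator, not data
        break
    print(tup)
```

This is the same relevance-tuned index that the learning-to-rank and knowledge-graph features in the abstract operate on, simply exposed through a SQL interface for BI-style exploration.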
The Apache Solr Smart Data Ecosystem /slideshow/apache-solr-smart-data-ecosystem/71077596 apache-solr-smart-data-ecosystem-170116230545
Search engines, and Apache Solr in particular, are quickly shifting the focus away from big data systems storing massive amounts of raw (but largely unharnessed) content, to smart data systems where the most relevant and actionable content is quickly surfaced instead. Apache Solr is the blazing-fast and fault-tolerant distributed search engine leveraged by 90% of Fortune 500 companies. As a community-driven open source project, Solr brings in diverse contributions from many of the top companies in the world, particularly those for whom returning the most relevant results is mission critical. Out of the box, Solr includes advanced capabilities like learning to rank (machine-learned ranking), graph queries and distributed graph traversals, job scheduling for processing batch and streaming data workloads, the ability to build and deploy machine learning models, and a wide variety of query parsers and functions allowing you to easily build highly relevant and domain-specific semantic search, recommendations, or personalized search experiences. These days, Solr even enables you to run SQL queries directly against it, mixing and matching the full power of Solr's free-text, geospatial, and other search capabilities with a query language already known by most developers (and which many external systems can use to query Solr directly). Due to the community-oriented nature of Solr, the ecosystem of capabilities also spans well beyond just the core project. In this talk, we'll also cover several other projects within the larger Apache Lucene/Solr ecosystem that further enhance Solr's smart data capabilities: bi-directional integration of Apache Spark and Solr, large-scale entity extraction, semantic knowledge graphs for discovering, traversing, and scoring meaningful relationships within your data, auto-generation of domain-specific ontologies, running SPARQL queries against Solr on RDF triples, probabilistic identification of key phrases within a query or document, conceptual search leveraging Word2Vec, and even Lucidworks' own Fusion project, which extends Solr to provide an enterprise-ready smart data platform out of the box. We'll dive into how all of these capabilities can fit within your data science toolbox, and you'll come away with a good feel for how to build highly relevant smart data applications leveraging these key technologies.]]>
Mon, 16 Jan 2017 23:05:45 GMT /slideshow/apache-solr-smart-data-ecosystem/71077596 treygrainger@slideshare.net(treygrainger) The Apache Solr Smart Data Ecosystem treygrainger
The Apache Solr Smart Data Ecosystem from Trey Grainger
]]>
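To illustrate the bi-directional Spark/Solr integration mentioned above, here is a minimal PySpark sketch based on the open source lucidworks/spark-solr connector. The connector version, ZooKeeper address, collection name, query, and fields are all assumptions to adapt to your environment.

```python
# Minimal sketch: reading a Solr collection into a Spark DataFrame with the
# spark-solr connector, then analyzing it with Spark SQL. All connection
# details, the collection, and field names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("solr-smart-data-sketch")
    # Assumed connector coordinates; adjust the version to the one you use.
    .config("spark.jars.packages", "com.lucidworks.spark:spark-solr:3.9.0")
    .getOrCreate()
)

jobs = (
    spark.read.format("solr")
    .option("zkhost", "localhost:9983")    # SolrCloud ZooKeeper connection string
    .option("collection", "jobs")          # assumed collection
    .option("query", "category:engineering")
    .load()
)

jobs.createOrReplaceTempView("jobs")
spark.sql(
    "SELECT city, COUNT(*) AS openings FROM jobs "
    "GROUP BY city ORDER BY openings DESC"
).show(10)
```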
South Big Data Hub: Text Data Analysis Panel /slideshow/south-big-data-hub-text-data-analysis-panel/71076613 south-big-data-hub-text-data-analysis-panel-170116222115
際際滷s from Trey's opening presentation for the South Big Data Hub's Text Data Analysis Panel on December 8th, 2016. Trey provided a quick introduction to Apache Solr, described how companies are using Solr to power relevant search in industry, and provided a glimpse of where the industry is heading with regard to implementing more intelligent and relevant semantic search.]]>
Mon, 16 Jan 2017 22:21:15 GMT /slideshow/south-big-data-hub-text-data-analysis-panel/71076613 treygrainger@slideshare.net(treygrainger) South Big Data Hub: Text Data Analysis Panel treygrainger
South Big Data Hub: Text Data Analysis Panel from Trey Grainger
]]>
Trey is the Chief Algorithms Officer at Lucidworks, where he drives the vision and practical application of intelligent data science algorithms to power relevant search experiences for hundreds of the world's biggest and brightest organizations. Trey previously served as SVP of Engineering at Lucidworks and as the Director of Engineering for Search & Recommendations at CareerBuilder. Trey is also the author of the books "AI-powered Search" and "Solr in Action", the comprehensive example-driven guide to Apache Solr. Trey received his MBA in Management of Technology from Georgia Tech; studied Computer Science, Business, and Philosophy at Furman University; and studied Search at Stanford University. www.treygrainger.com