Slideshows by user jmugan, http://www.slideshare.net. Feed updated Sat, 28 Jan 2023 23:23:10 GMT.

How to build someone we can talk to
/slideshow/how-to-build-someone-we-can-talk-to/255580238
(date is wrong, sorry, it was January 28th, 2023) So many intelligent robots have come and gone, failing to become a commercial success. We’ve lost Aibo, Romo, Jibo, Baxter, and Scout, and even Alexa is reducing staff. I posit that they failed because you can’t talk to them, not really. AI has recently made substantial progress: speech recognition now actually works, and we have neural networks such as ChatGPT that produce astounding natural language. But you can’t just throw a neural network into a robot, because there isn’t anybody home in those networks; they are just purposeless mimicry agents with no understanding of what they are saying. This lack of understanding means they can’t be relied upon, because the mistakes they make are ones that no human would make, not even a child. Like a child first learning about the world, a robot needs a mental model of the current real-world situation, and it must use that model to understand what you say and to generate a meaningful response. This model must be composable to represent the infinite possibilities of our world, and, to keep up in a conversation, it must be built on the fly using human-like, immediate learning. This talk covers what is required to build that mental model so that robots can begin to understand the world as well as human children do. Intelligent robots won’t be a commercial success based on their form alone (they aren’t as cute and cuddly as cats and dogs), but robots can be much more if we can talk to them.

How to build someone we can talk to from Jonathan Mugan
Moving Your Machine Learning Models to Production with TensorFlow Extended
/slideshow/moving-your-machine-learning-models-to-production-with-tensorflow-extended/224508720
ML is great fun, but now we want it to solve real problems. To do this, we need a way of keeping track of all of our data and models, and we need to know when our models fail and why. This talk covers how to move ML to production with TensorFlow Extended (TFX). TFX is used by Google internally for machine-learning model development and deployment, and it has recently been made public. TFX consists of multiple pipeline elements and associated components, and this talk covers them all, but three elements are particularly interesting: TensorFlow Data Validation, TensorFlow Model Analysis, and the What-If Tool. The TensorFlow Data Validation library analyzes incoming data and computes distributions over the feature values. This can show us which features may not be useful, maybe because they always have the same value, or which features may contain bugs. TensorFlow Model Analysis allows us to understand how well our model performs on different slices of the data. For example, we may find that our predictive models are more accurate for events that happen on Tuesdays, and such knowledge can help us better understand our data and our business. The What-If Tool is an interactive tool that allows you to change data and see what the model would say if a particular record had a particular feature value. It lets you probe your model, and it can automatically find the closest record with a different predicted label, which allows you to learn what the model is homing in on. Machine learning is growing up.

Sun, 26 Jan 2020 05:39:33 GMT
Moving Your Machine Learning Models to Production with TensorFlow Extended from Jonathan Mugan
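The data-validation idea above can be sketched in a few lines of plain Python (a toy illustration of the concept, not the actual TFX API): compute a value distribution for each feature, then flag features that never vary.

```python
from collections import Counter

def feature_stats(records):
    """Compute a value distribution for each feature across all records."""
    stats = {}
    for record in records:
        for feature, value in record.items():
            stats.setdefault(feature, Counter())[value] += 1
    return stats

def constant_features(stats):
    """Flag features that always have the same value; they carry no signal."""
    return sorted(f for f, counts in stats.items() if len(counts) == 1)

# Invented example records: "clicks" never varies, so it gets flagged.
records = [
    {"day": "Tue", "clicks": 3},
    {"day": "Wed", "clicks": 3},
    {"day": "Tue", "clicks": 3},
]
stats = feature_stats(records)
print(constant_features(stats))  # ['clicks']
```

TensorFlow Data Validation does far more (schema inference, drift detection), but this is the core check it automates.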
Generating Natural-Language Text with Neural Networks
/slideshow/generating-naturallanguage-text-with-neural-networks-143262260/143262260
Automatic text generation enables computers to summarize text, to have conversations in customer-service and other settings, and to customize content based on the characteristics and goals of the human interlocutor. Using neural networks to automatically generate text is appealing because they can be trained through examples, with no need to manually specify what should be said when. In this talk, we will provide an overview of the existing algorithms used in neural text generation, such as sequence-to-sequence models, reinforcement learning, variational methods, and generative adversarial networks. We will also discuss existing work on how the content of generated text can be controlled by manipulating a latent code. The talk will conclude with a discussion of the current challenges and shortcomings of neural text generation.

Thu, 02 May 2019 18:08:28 GMT
Generating Natural-Language Text with Neural Networks from Jonathan Mugan
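The train-from-examples idea behind all of these methods can be illustrated with the simplest possible generator, a bigram model (a toy sketch, not a neural network; the corpus and names are invented): count which word follows which, then walk those transitions.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Learn word-to-next-word transitions from example sentences."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:] + ["<end>"]):
            model[current].append(nxt)
    return model

def generate(model, start, max_len=10, seed=0):
    """Walk the learned transitions, sampling a successor at each step."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len:
        nxt = rng.choice(model.get(out[-1], ["<end>"]))
        if nxt == "<end>":
            break
        out.append(nxt)
    return " ".join(out)

model = train_bigrams(["the cat sat", "the cat ran", "a dog ran"])
print(generate(model, "the", seed=1))
```

A neural sequence-to-sequence model replaces the count table with a learned network, but the sample-next-token loop at generation time is the same shape.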
Data Day Seattle, From NLP to AI
/slideshow/data-day-seattle-from-nlp-to-ai/81027430
An overview of the current state of the art in NLP.

Fri, 20 Oct 2017 18:11:28 GMT
Data Day Seattle, From NLP to AI from Jonathan Mugan
Data Day Seattle, Chatbots from First Principles
/slideshow/data-day-seattle-chatbots-from-first-principles/81027216
An introduction to the current state of the art in chatbots.

Fri, 20 Oct 2017 18:05:27 GMT
Data Day Seattle, Chatbots from First Principles from Jonathan Mugan
Chatbots from first principles
/slideshow/chatbots-from-first-principles/76617554
There are lots of frameworks for building chatbots, but those abstractions can obscure understanding and hinder application development. In this talk, we will cover building chatbots from the ground up in Python. This can be done with either classic NLP or deep learning. We will cover both approaches, but this talk will focus on how one can build a chatbot using spaCy, pattern matching, and context-free grammars.

Sat, 03 Jun 2017 16:02:05 GMT
Chatbots from first principles from Jonathan Mugan
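The pattern-matching half of the from-the-ground-up approach can be sketched with nothing but the standard library (a minimal illustration with invented rules; a real bot would layer spaCy and a grammar on top): each intent is a regex, and captured groups parameterize the response.

```python
import re

# Each rule pairs an intent pattern with a response template;
# captured groups fill the template's placeholders.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help?"),
    (re.compile(r"\bweather in (\w+)", re.I), "I can't check it, but I hope {0} is sunny."),
]

def respond(utterance):
    """Return the response of the first rule whose pattern matches."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return "Sorry, I didn't understand that."

print(respond("Hi there"))        # Hello! How can I help?
print(respond("my name is Ada"))  # Nice to meet you, Ada.
```

Rule order matters: more specific patterns go first, and the fallback response catches everything else.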
From Natural Language Processing to Artificial Intelligence
/slideshow/from-natural-language-processing-to-artificial-intelligence/71019044
Overview of natural language processing (NLP) from both symbolic and deep learning perspectives. Covers tf-idf, sentiment analysis, LDA, WordNet, FrameNet, word2vec, and recurrent neural networks (RNNs).

Sat, 14 Jan 2017 19:13:09 GMT
From Natural Language Processing to Artificial Intelligence from Jonathan Mugan
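Of the techniques listed, tf-idf is easy to show from scratch (a minimal sketch using the common log(N/df) weighting; library implementations differ in smoothing and normalization details): a term's weight in a document is its frequency there, discounted by how many documents contain it.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Weight each term by in-document frequency times log(N / document frequency)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()  # number of documents containing each term
    for tokens in tokenized:
        df.update(set(tokens))
    weights = []
    for tokens in tokenized:
        tf = Counter(tokens)
        weights.append({t: (count / len(tokens)) * math.log(n / df[t])
                        for t, count in tf.items()})
    return weights

docs = ["the cat sat", "the dog ran"]
w = tf_idf(docs)
# "the" appears in every document, so log(2/2) = 0: it carries no weight.
print(w[0])
```

This is why stop words fall out automatically: a term present in every document gets an idf of zero.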
What Deep Learning Means for Artificial Intelligence
/slideshow/what-deep-learning-means-for-artificial-intelligence/68028291
Discusses how deep learning has advanced artificial intelligence and contrasts the approach with symbol-based systems.

Wed, 02 Nov 2016 03:41:59 GMT
What Deep Learning Means for Artificial Intelligence from Jonathan Mugan
Deep Learning for Natural Language Processing
/slideshow/deep-learning-for-natural-language-processing-62732431/62732431
Deep Learning represents a significant advance in artificial intelligence because it enables computers to represent concepts using vectors instead of symbols. Representing concepts using vectors is particularly useful in natural language processing, and this talk will elucidate those benefits and provide an understandable introduction to the technologies that make up deep learning. The talk will outline ways to get started in deep learning, and it will conclude with a discussion of the gaps that remain between our current technologies and true computer understanding.

Sun, 05 Jun 2016 02:00:46 GMT
Deep Learning for Natural Language Processing from Jonathan Mugan
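The vectors-instead-of-symbols point can be made concrete with cosine similarity over hand-made embeddings (the three-dimensional vectors below are invented for illustration; learned embeddings such as word2vec have hundreds of dimensions): with symbols, "king" and "queen" are simply unequal; with vectors, they can be measurably close.

```python
import math

def cosine(u, v):
    """Similarity of two concept vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings: related concepts point in similar directions.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}
print(cosine(vectors["king"], vectors["queen"]))  # close to 1
print(cosine(vectors["king"], vectors["apple"]))  # much smaller
```

This graded notion of similarity, impossible with atomic symbols, is what makes vector representations so useful for language.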
What Deep Learning Means for Artificial Intelligence
/slideshow/deep-learningforartificialintelligence/45737203
Describes deep learning as applied to natural language processing, computer vision, and robot actions. Also discusses what deep learning still can't do.

Thu, 12 Mar 2015 00:44:02 GMT
What Deep Learning Means for Artificial Intelligence from Jonathan Mugan
Jonathan Mugan: computer scientist working in machine learning and data science. Author of The Curiosity Cycle: Preparing Your Child for the Ongoing Technological Explosion. http://www.jonathanmugan.com