Slideshows by user ShirinGlander (SlideShare feed, Fri, 27 Oct 2023 07:38:15 GMT)

Transparente und Verantwortungsbewusste KI
Date: Fri, 27 Oct 2023 07:38:15 GMT
URL: https://de.slideshare.net/slideshow/transparente-und-verantwortungsbewusste-ki/262786333
Transparent and Responsible AI: What do Explainable and Responsible AI mean, and why should we care? In this session we turn to the current topics of Explainable AI (XAI) and Responsible AI (RAI), which concern the development of artificial intelligence that acts transparently, explainably, and in an ethically responsible way. We want to reach a broad audience, from AI experts and researchers to representatives from business, politics, and civil society. The goal is to build an understanding of the importance of Explainable and Responsible AI, to discuss possible challenges, and to develop approaches for responsible AI use.

from Shirin Elsinghorst
Datenstrategie in der Praxis
Date: Fri, 27 Oct 2023 07:35:27 GMT
URL: https://de.slideshare.net/slideshow/datenstrategie-in-der-praxis/262786218
The digital transformation and the rise of artificial intelligence (AI) have revolutionized the business world. Companies that base their business decisions on data and AI gain a clear competitive advantage and create innovative solutions for their customers. This event is aimed at executives, managers, entrepreneurs, and data enthusiasts who want to learn whether and when they need a data strategy, how to develop an effective one, and how to use it to build or enrich a data-driven, AI-based business.

RICHTIG GUT: DIE QUALITÄT VON MODELLEN VERSTEHEN
Date: Tue, 18 Aug 2020 06:55:07 GMT
URL: https://de.slideshare.net/slideshow/richtig-gut-die-qualitt-von-modellen-verstehen/238008327
Talk at the M3 online conference on 16.06.2020 (https://online.m3-konferenz.de/lecture.php?id=12337&source=0)

Decisions made with machine learning are inherently difficult, if not impossible, to trace. A seemingly good result with machine learning methods is often achieved quickly, or is sold by others as groundbreaking. The complexity of some of the best models, such as neural networks, is exactly what makes them so successful; at the same time it turns them into black boxes. This can be problematic, because executives and boards will be less inclined to trust a decision and act on it if they do not understand it. Shapley values, Local Interpretable Model-Agnostic Explanations (LIME), and Anchors are approaches to making these complex models at least partially comprehensible. In this talk I explain how these approaches work and show application examples.

Learning goals:
* Participants gain insight into methods that make complex models explainable.
* They learn to critically question datasets and split them appropriately.
* They learn under which conditions they can trust decisions made by machine learning.
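The Shapley value idea mentioned above can be illustrated with a small self-contained sketch (the toy model, data point, and baseline below are invented for illustration, not taken from the slides): each feature's contribution to a single prediction is its average marginal effect over all coalitions of the other features.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction of a black-box model.

    predict  : maps a feature list to a number
    baseline : reference values used for 'absent' features
    """
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Shapley weight of this coalition
                w = factorial(len(subset)) * factorial(n - len(subset) - 1) / factorial(n)
                with_i = [x[j] if j in subset or j == i else baseline[j] for j in features]
                without_i = [x[j] if j in subset else baseline[j] for j in features]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# hypothetical linear model: for linear models the Shapley value of
# feature j is exactly coef_j * (x_j - baseline_j)
model = lambda v: 2 * v[0] + 3 * v[1] - 1 * v[2]
phi = shapley_values(model, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # [2.0, 6.0, -3.0]
```

The exact computation enumerates all 2^n coalitions, so it only scales to a handful of features; practical tools approximate these values by sampling.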

Real-World Data Science (Fraud Detection, Customer Churn & Predictive Maintenance)
Date: Wed, 16 Jan 2019 07:29:03 GMT
URL: /slideshow/realworld-data-science-fraud-detection-customer-churn-predictive-maintenance/128149903
These are slides from a lecture I gave at the School of Applied Sciences in Münster. In this lecture, I talked about **Real-World Data Science** and showed examples of **Fraud Detection, Customer Churn & Predictive Maintenance**.

SAP webinar: Explaining Keras Image Classification Models with LIME
Date: Tue, 21 Aug 2018 08:03:20 GMT
URL: /slideshow/sap-webinar-explaining-keras-image-classification-models-with-lime/110815828
Keras is a high-level open-source deep learning framework that by default runs on top of TensorFlow. Keras is minimalistic, efficient, and highly flexible because it uses a modular layer system to define, compile, and fit neural networks. It is written in Python but can also be used from within R. Because the underlying backend can be swapped from TensorFlow to Theano or CNTK (with more options being developed right now), it is designed to be framework-independent. Models can be trained on CPU or GPU, locally or in the cloud.

I will show an example of how to build an image classifier with Keras. We'll be using a convolutional neural net to classify fruits in images. But that's not all! We not only want to judge our black-box model by accuracy and loss measures; we want a better understanding of how the model works. We will use an algorithm called LIME (Local Interpretable Model-Agnostic Explanations) to find out which parts of the different test images contributed most strongly to the classification made by our model. I will introduce LIME and explain how it works. And finally, I will show how to apply LIME to the image classifier we built before, as well as to a pretrained ImageNet model.

You will get:
* an introduction to Keras
* an overview of deep learning and neural nets
* a demo of how to build an image classifier with Keras
* an introduction to explaining black-box models, specifically the LIME algorithm
* a demo of how to apply LIME to explain the predictions of our own Keras image classifier, as well as of a pretrained ImageNet model

Further information:
* www.shirin-glander.de
* https://blog.codecentric.de/author/shirin-glander/
* www.youtube.com/codecentricAI
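To give a flavor of how LIME attributes an image classification to regions of the image, here is a deliberately simplified occlusion-style sketch (not the actual LIME library; the 4x4 "image", the superpixel segments, and the stand-in classifier are all invented): mask each superpixel and measure how much the prediction drops. LIME proper fits a linear surrogate over many random combinations of masked superpixels, but the one-at-a-time version below shows the core intuition.

```python
# toy "image": 4x4 grayscale grid, divided into four quadrant "superpixels"
image = [[0.9, 0.8, 0.1, 0.1],
         [0.8, 0.9, 0.1, 0.2],
         [0.1, 0.1, 0.2, 0.1],
         [0.2, 0.1, 0.1, 0.2]]

segments = {  # superpixel id -> list of (row, col) pixels
    0: [(r, c) for r in (0, 1) for c in (0, 1)],  # top-left (bright)
    1: [(r, c) for r in (0, 1) for c in (2, 3)],  # top-right
    2: [(r, c) for r in (2, 3) for c in (0, 1)],  # bottom-left
    3: [(r, c) for r in (2, 3) for c in (2, 3)],  # bottom-right
}

def predict(img):
    """Stand-in 'classifier': score rises with overall brightness."""
    return sum(sum(row) for row in img) / 16

def mask_segment(img, pixels):
    """Return a copy of the image with one superpixel set to a neutral value."""
    out = [row[:] for row in img]
    for r, c in pixels:
        out[r][c] = 0.0
    return out

base = predict(image)
importance = {s: base - predict(mask_segment(image, px)) for s, px in segments.items()}
print(importance)  # the bright top-left segment dominates
```

Against a real Keras model, `predict` would be the network's class probability and the segments would come from a superpixel algorithm such as quickshift, which is what the lime packages use under the hood.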

Workshop - Introduction to Machine Learning with R
Date: Fri, 29 Jun 2018 06:37:00 GMT
URL: /slideshow/workshop-introduction-to-machine-learning-with-r/103557395
These are the slides from the workshop Introduction to Machine Learning with R, which I gave at the University of Heidelberg, Germany, on June 28th, 2018. The accompanying code to generate all plots in these slides (plus additional code) can be found on my blog: https://shirinsplayground.netlify.com/2018/06/intro_to_ml_workshop_heidelberg/

The workshop covered the basics of machine learning. With an example dataset, I went through a standard machine learning workflow in R with the packages caret and h2o:
- reading in data
- exploratory data analysis
- missingness
- feature engineering
- training and test split
- model training with Random Forests, Gradient Boosting, Neural Nets, etc.
- hyperparameter tuning
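The split-and-train core of such a workflow can be sketched in a few lines. The workshop itself uses R with caret and h2o; the sketch below is a language-agnostic toy in Python, with an invented two-blob dataset and a nearest-centroid classifier standing in for Random Forests or Gradient Boosting.

```python
import random

# toy dataset: two Gaussian blobs, labeled 0 and 1 (invented for illustration)
rng = random.Random(42)
data = [([rng.gauss(0, 1), rng.gauss(0, 1)], 0) for _ in range(100)] + \
       [([rng.gauss(3, 1), rng.gauss(3, 1)], 1) for _ in range(100)]

# training/test split: shuffle, then hold out 20% for evaluation
rng.shuffle(data)
cut = int(0.8 * len(data))
train, test = data[:cut], data[cut:]

# 'model training': nearest-centroid stand-in for the real algorithms
def centroid(rows):
    xs = [r[0] for r in rows]
    return [sum(col) / len(col) for col in zip(*xs)]

centroids = {label: centroid([r for r in train if r[1] == label]) for label in (0, 1)}

def predict(x):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# evaluate only on the held-out test set, never on the training data
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

The point of the split is that the model never sees the test rows during fitting, so the reported accuracy estimates generalization rather than memorization.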

Deep learning - a primer
Date: Fri, 27 Apr 2018 06:36:27 GMT
URL: /slideshow/deep-learning-a-primer-95197733/95197733
This is a slide deck from a presentation that my colleague Uwe Friedrichsen (/ufried/) and I did together. As we created our respective parts of the presentation on our own, it is quite easy to figure out who did which part, since the two slide decks look quite different ... :) For the sake of simplicity and completeness, Uwe copied the two slide decks together. As he did the "surrounding" part, he added my part at the place where I took over and then added concluding slides at the end. Well, I'm sure you will figure it out easily ... ;)

The presentation is intended as an introduction to deep learning (DL) for people who are new to the topic. It starts with some DL success stories as motivation, followed by a quick classification and a bit of history before the "how" part begins. The first part of the "how" covers some DL theory, to demystify the topic and to explain and connect the most important terms, but also to give an idea of the breadth of the field. The second part dives deeper into how to actually implement DL networks: it starts with coding it all on your own and then moves, step by step, toward approaches that require less coding, depending on where you want to start. The presentation ends with some pitfalls and challenges to keep in mind if you want to dive deeper into DL, plus an invitation to become part of it.

As always, the voice track of the presentation is missing. I hope the slides are of some use for you, though.

This is a slide deck from a presentation, that my colleague Uwe Friedrichsen (/ufried/) and I did together. As we created our respective parts of the presentation on our own, it is quite easy to figure out who did which part of the presentation as the two slide decks look quite different ... :) For the sake of simplicity and completeness, Uwe copied the two slide decks together. As he did the "surrounding" part, he added my part at the place where I took over and then added concluding slides at the end. Well, I'm sure, you will figure it out easily ... ;) The presentation was intended to be an introduction to deep learning (DL) for people who are new to the topic. It starts with some DL success stories as motivation. Then a quick classification and a bit of history follows before the "how" part starts. The first part of the "how" is some theory of DL, to demystify the topic and explain and connect some of the most important terms on the one hand, but also to give an idea of the broadness of the topic on the other hand. After that the second part dives deeper into the question how to actually implement DL networks. This part starts with coding it all on your own and then moves on to less coding step by step, depending on where you want to start. The presentation ends with some pitfalls and challenges that you should have in mind if you want to dive deeper into DL - plus the invitation to become part of it. As always the voice track of the presentation is missing. I hope that the slides are of some use for you, though.]]>
Fri, 27 Apr 2018 06:36:27 GMT /slideshow/deep-learning-a-primer-95197733/95197733 ShirinGlander@slideshare.net(ShirinGlander) Deep learning - a primer ShirinGlander This is a slide deck from a presentation, that my colleague Uwe Friedrichsen (/ufried/) and I did together. As we created our respective parts of the presentation on our own, it is quite easy to figure out who did which part of the presentation as the two slide decks look quite different ... :) For the sake of simplicity and completeness, Uwe copied the two slide decks together. As he did the "surrounding" part, he added my part at the place where I took over and then added concluding slides at the end. Well, I'm sure, you will figure it out easily ... ;) The presentation was intended to be an introduction to deep learning (DL) for people who are new to the topic. It starts with some DL success stories as motivation. Then a quick classification and a bit of history follows before the "how" part starts. The first part of the "how" is some theory of DL, to demystify the topic and explain and connect some of the most important terms on the one hand, but also to give an idea of the broadness of the topic on the other hand. After that the second part dives deeper into the question how to actually implement DL networks. This part starts with coding it all on your own and then moves on to less coding step by step, depending on where you want to start. The presentation ends with some pitfalls and challenges that you should have in mind if you want to dive deeper into DL - plus the invitation to become part of it. As always the voice track of the presentation is missing. I hope that the slides are of some use for you, though. 
<img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/deeplearning-180427063627-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> This is a slide deck from a presentation, that my colleague Uwe Friedrichsen (/ufried/) and I did together. As we created our respective parts of the presentation on our own, it is quite easy to figure out who did which part of the presentation as the two slide decks look quite different ... :) For the sake of simplicity and completeness, Uwe copied the two slide decks together. As he did the &quot;surrounding&quot; part, he added my part at the place where I took over and then added concluding slides at the end. Well, I&#39;m sure, you will figure it out easily ... ;) The presentation was intended to be an introduction to deep learning (DL) for people who are new to the topic. It starts with some DL success stories as motivation. Then a quick classification and a bit of history follows before the &quot;how&quot; part starts. The first part of the &quot;how&quot; is some theory of DL, to demystify the topic and explain and connect some of the most important terms on the one hand, but also to give an idea of the broadness of the topic on the other hand. After that the second part dives deeper into the question how to actually implement DL networks. This part starts with coding it all on your own and then moves on to less coding step by step, depending on where you want to start. The presentation ends with some pitfalls and challenges that you should have in mind if you want to dive deeper into DL - plus the invitation to become part of it. As always the voice track of the presentation is missing. I hope that the slides are of some use for you, though.
Deep learning - a primer from Shirin Elsinghorst
HH Data Science Meetup: Explaining complex machine learning models with LIME /slideshow/hh-data-science-meetup-explaining-complex-machine-learning-models-with-lime-94218890/94218890 hhdatasciencemeetup-180418112151
On April 12th, 2018 I gave a talk about Explaining complex machine learning models with LIME at the Hamburg Data Science Meetup: Traditional machine learning workflows focus heavily on model training and optimization; the best model is usually chosen via performance measures like accuracy or error and we tend to assume that a model is good enough for deployment if it passes certain thresholds of these performance criteria. Why a model makes the predictions it makes, however, is generally neglected. But being able to understand and interpret such models can be immensely important for improving model quality, increasing trust and transparency and for reducing bias. Because complex machine learning models are essentially black boxes and too complicated to understand, we need to use approximations to get a better sense of how they work. One such approach is LIME, which stands for Local Interpretable Model-agnostic Explanations and is a tool that helps understand and explain the decisions made by complex machine learning models. – slide deck was produced with beautiful.ai –

Wed, 18 Apr 2018 11:21:51 GMT /slideshow/hh-data-science-meetup-explaining-complex-machine-learning-models-with-lime-94218890/94218890 ShirinGlander@slideshare.net(ShirinGlander) HH Data Science Meetup: Explaining complex machine learning models with LIME ShirinGlander
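The abstract's core idea, perturbing an instance, weighting the perturbed samples by proximity, and fitting an interpretable local surrogate, can be sketched in a few lines of plain NumPy. The black-box function, kernel width, and sample count below are illustrative assumptions, not the talk's actual setup:

```python
import numpy as np

# A stand-in "black box" (assumption: any model you can only query works here).
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 3.0 * X[:, 1])))

rng = np.random.default_rng(42)
x0 = np.array([0.5, -0.2])  # the single prediction we want to explain

# 1. Perturb the instance and query the black box for each perturbation.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)

# 2. Weight perturbed samples by their proximity to x0 (RBF kernel).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)

# 3. Fit a weighted linear surrogate; its coefficients are the explanation.
A = np.hstack([np.ones((len(Z), 1)), Z])  # intercept column + features
sw = np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(sw * A, np.sqrt(w) * y, rcond=None)

print("local feature weights:", coef[1:])
```

The surrogate's weights recover the local behavior of the black box around x0 (here: feature 1 pushes the prediction up, feature 2 pulls it down); the actual `lime` package adds interpretable feature representations and sparse fitting on top of this idea.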
HH Data Science Meetup: Explaining complex machine learning models with LIME from Shirin Elsinghorst
HH Data Science Meetup: Explaining complex machine learning models with LIME /slideshow/hh-data-science-meetup-explaining-complex-machine-learning-models-with-lime/93763896 hhdatasciencemeetup-180413065720
Unfortunately, slideshare doesn't allow re-uploading slides any more, so there is an updated version with some corrected errors here: /ShirinGlander/hh-data-science-meetup-explaining-complex-machine-learning-models-with-lime-94218890

Fri, 13 Apr 2018 06:57:19 GMT /slideshow/hh-data-science-meetup-explaining-complex-machine-learning-models-with-lime/93763896 ShirinGlander@slideshare.net(ShirinGlander) HH Data Science Meetup: Explaining complex machine learning models with LIME ShirinGlander
HH Data Science Meetup: Explaining complex machine learning models with LIME from Shirin Elsinghorst
Ruhr.PY - Introducing Deep Learning with Keras and Python /slideshow/ruhrpy-introducing-deep-learning-with-keras-and-python/93534697 ruhr-180411092305
Ruhr.PY - Python Developer Meetup: Keras is a high-level API written in Python for building and prototyping neural networks. It can be used on top of TensorFlow, Theano or CNTK. In this talk we build, train and visualize a model using Python and Keras - all interactive with Jupyter Notebooks! https://www.meetup.com/Ruhr-py/events/248093628/ -- slide deck generated with beautiful.ai -- -- video recording can be seen here: https://youtu.be/Q8hVXnpEPmc -- -- comment here: https://shirinsplayground.netlify.com/2018/04/ruhrpy_meetup_2018_slides/ --

Wed, 11 Apr 2018 09:23:05 GMT /slideshow/ruhrpy-introducing-deep-learning-with-keras-and-python/93534697 ShirinGlander@slideshare.net(ShirinGlander) Ruhr.PY - Introducing Deep Learning with Keras and Python ShirinGlander
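The build/train workflow the abstract describes can be sketched with the Keras Sequential API. The toy XOR dataset, layer sizes, and epoch count below are assumptions for a self-contained example, not the talk's actual notebook:

```python
import numpy as np
from tensorflow import keras

# Toy XOR data (assumption: a minimal stand-in for the talk's dataset).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([0, 1, 1, 0], dtype="float32")

# Build: stack layers with the Sequential API.
model = keras.Sequential([
    keras.Input(shape=(2,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Compile: choose optimizer, loss and metrics.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train: fit() returns a History object whose loss curve you can plot
# to visualize learning, e.g. in a Jupyter notebook.
history = model.fit(X, y, epochs=300, verbose=0)
print("final loss:", history.history["loss"][-1])
```

Plotting `history.history["loss"]` and `history.history["accuracy"]` per epoch is the usual way to visualize training progress interactively.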
Ruhr.PY - Introducing Deep Learning with Keras and Python from Shirin Elsinghorst
From Biology to Industry. A Blogger’s Journey to Data Science. /slideshow/from-biology-to-industry-a-bloggers-journey-to-data-science/79891422 finalwebinarmystorytodatascience-170918122443
What does blogging mean for Data Science? What is Big Data today? How do you become a Data Scientist, and what type of work results from this transformation?

Mon, 18 Sep 2017 12:24:43 GMT /slideshow/from-biology-to-industry-a-bloggers-journey-to-data-science/79891422 ShirinGlander@slideshare.net(ShirinGlander) From Biology to Industry. A Blogger’s Journey to Data Science. ShirinGlander
From Biology to Industry. A Blogger’s Journey to Data Science. from Shirin Elsinghorst