Slideshows by User: ShirinGlander
SlideShare feed for Slideshows by User: ShirinGlander

Transparente und Verantwortungsbewusste KI
https://de.slideshare.net/slideshow/transparente-und-verantwortungsbewusste-ki/262786333
Transparent and Responsible AI: What do Explainable and Responsible AI mean, and why should we care about them?
In this session we turn to the current topics of Explainable AI (XAI) and Responsible AI (RAI), which concern the development of artificial intelligence that acts transparently, explainably, and in an ethically responsible way. We want to reach a broad audience, from AI experts and researchers to representatives from business, politics, and civil society. The goal is to build an understanding of the importance of Explainable and Responsible AI, to discuss possible challenges, and to develop approaches for responsible AI use.
Fri, 27 Oct 2023 07:38:15 GMT

Datenstrategie in der Praxis
https://de.slideshare.net/slideshow/datenstrategie-in-der-praxis/262786218
The digital transformation and the rise of artificial intelligence (AI) have revolutionized the business world. Companies that base their business decisions on data and AI gain a clear competitive advantage and create innovative solutions for their customers. This event is aimed at executives, managers, entrepreneurs, and data enthusiasts who want to learn whether and when they need a data strategy, how to develop an effective data strategy, and how to use it to build or enrich a data-driven, AI-based business.
Fri, 27 Oct 2023 07:35:27 GMT

RICHTIG GUT: DIE QUALITÄT VON MODELLEN VERSTEHEN
https://de.slideshare.net/slideshow/richtig-gut-die-qualitt-von-modellen-verstehen/238008327
Talk at the M3 online conference on 16.06.2020 (https://online.m3-konferenz.de/lecture.php?id=12337&source=0)
Decisions made with machine learning are inherently difficult – if not impossible – to trace. A seemingly good result with machine learning methods is often achieved quickly, or is sold by others as groundbreaking.
The complexity of some of the best models, such as neural networks, is exactly what makes them so successful. But it also turns them into a black box. That can be a problem, because executives and boards will be less inclined to trust a decision and act on it if they do not understand it.
Shapley values, Local Interpretable Model-Agnostic Explanations (LIME), and Anchors are approaches to making these complex models at least partially comprehensible.
In this talk I explain how these approaches work and show application examples.
LEARNING GOALS
* Participants gain insight into ways of making complex models explainable.
* They learn to question data sets critically and to split them appropriately.
* And they learn under which conditions they can trust decisions made by machine learning.
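To give a feel for one of the techniques the talk covers: for a tiny model with a handful of features, exact Shapley values can be computed by enumerating every feature coalition. The toy model below is a hypothetical stand-in, not code from the slides; real libraries approximate this computation because it grows exponentially with the number of features.

```python
from itertools import combinations
from math import factorial

# Toy "model": a hypothetical score from three binary features,
# with an interaction between the first two and a useless third one.
def model(features):
    x1, x2, x3 = features
    return 2.0 * x1 + 1.0 * x2 + 0.5 * x1 * x2 + 0.0 * x3

def shapley_value(model, instance, baseline, i, n):
    """Average marginal contribution of feature i over all coalitions."""
    others = [j for j in range(n) if j != i]
    total = 0.0
    for size in range(n):
        for coalition in combinations(others, size):
            # Features in the coalition take the instance's value,
            # everything else stays at the baseline.
            with_i = list(baseline)
            without_i = list(baseline)
            for j in coalition:
                with_i[j] = instance[j]
                without_i[j] = instance[j]
            with_i[i] = instance[i]
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += weight * (model(with_i) - model(without_i))
    return total

instance, baseline = [1, 1, 1], [0, 0, 0]
values = [shapley_value(model, instance, baseline, i, 3) for i in range(3)]
# Efficiency property: the values sum to model(instance) - model(baseline);
# here they come out to approximately [2.25, 1.25, 0.0].
```

The useless third feature gets a Shapley value of zero, and the interaction term is split evenly between the two features involved – the kind of attribution that makes these explanations intuitive.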
Tue, 18 Aug 2020 06:55:07 GMT

Real-World Data Science (Fraud Detection, Customer Churn & Predictive Maintenance)
/slideshow/realworld-data-science-fraud-detection-customer-churn-predictive-maintenance/128149903
These are slides from a lecture I gave at the School of Applied Sciences in Münster. In this lecture, I talked about **Real-World Data Science** and showed examples on **Fraud Detection, Customer Churn & Predictive Maintenance**.
Wed, 16 Jan 2019 07:29:03 GMT

SAP webinar: Explaining Keras Image Classification Models with LIME
/slideshow/sap-webinar-explaining-keras-image-classification-models-with-lime/110815828
Keras is a high-level open-source deep learning framework that by default works on top of TensorFlow. Keras is minimalistic, efficient and highly flexible because it works with a modular layer system to define, compile and fit neural networks. It is written in Python but can also be used from within R. Because the underlying backend can be changed from TensorFlow to Theano and CNTK (with more options being developed right now), it is designed to be framework-independent. Models can be trained on CPU or GPU, locally or in the cloud.
I will show an example of how to build an image classifier with Keras. We'll be using a convolutional neural net to classify fruits in images. But that's not all! We not only want to judge our black-box model based on accuracy and loss measures – we want to get a better understanding of how the model works. We will use an algorithm called LIME (Local Interpretable Model-Agnostic Explanations) to find out which parts of the different test images contributed most strongly to the classification made by our model. I will introduce LIME and explain how it works. And finally, I will show how to apply LIME to the image classifier we built before, as well as to a pretrained ImageNet model.
You will get:
* an introduction to Keras
* an overview of deep learning and neural nets
* a demo how to build an image classifier with Keras
* an introduction to explaining black box models, specifically to the LIME algorithm
* a demo how to apply LIME to explain the predictions of our own Keras image classifier, as well as of a pretrained ImageNet model
Further Information:
* www.shirin-glander.de
* https://blog.codecentric.de/author/shirin-glander/
* www.youtube.com/codecentricAI
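LIME's core idea – sample perturbations around one instance, weight them by proximity, and fit a simple surrogate model locally – can be sketched without any ML library. The black-box function and all parameter values below are illustrative assumptions, not the webinar's Keras/CNN setup (where the "features" are image superpixels rather than a single number):

```python
import math
import random

# Stand-in black box: nonlinear in its single input. In the webinar the
# black box is a Keras CNN; any predict function works the same way here.
def black_box(x):
    return x * x

def lime_1d(predict, x0, num_samples=500, kernel_width=0.5):
    """Fit a locally weighted linear surrogate around x0 (LIME's core idea)."""
    rng = random.Random(42)
    xs = [x0 + rng.gauss(0, 1) for _ in range(num_samples)]   # perturbations
    ys = [predict(x) for x in xs]                             # black-box labels
    # Proximity kernel: samples near x0 count more.
    ws = [math.exp(-((x - x0) ** 2) / (2 * kernel_width ** 2)) for x in xs]
    # Weighted least squares for the surrogate y = a + b*x (closed form).
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) / \
        sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    a = my - b * mx
    return a, b

a, b = lime_1d(black_box, x0=3.0)
# Near x0 = 3 the surrogate slope should approximate d(x^2)/dx = 6,
# even though the global model is not linear at all.
```

The same recipe scales up: for images, the perturbations flip superpixels on and off, and the surrogate's coefficients say which image regions drove the prediction.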
Tue, 21 Aug 2018 08:03:20 GMT

Workshop - Introduction to Machine Learning with R
/slideshow/workshop-introduction-to-machine-learning-with-r/103557395
These are the slides from the workshop "Introduction to Machine Learning with R", which I gave at the University of Heidelberg, Germany, on June 28th 2018.
The accompanying code to generate all plots in these slides (plus additional code) can be found on my blog: https://shirinsplayground.netlify.com/2018/06/intro_to_ml_workshop_heidelberg/
The workshop covered the basics of machine learning. With an example dataset I went through a standard machine learning workflow in R with the packages caret and h2o:
- reading in data
- exploratory data analysis
- missingness
- feature engineering
- training and test split
- model training with Random Forests, Gradient Boosting, Neural Nets, etc.
- hyperparameter tuning
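The workshop itself works in R with caret and h2o; as a language-neutral illustration of just the "training and test split" step above, here is a minimal stdlib-Python sketch (the data and helper name are made up for the example):

```python
import random

def train_test_split(rows, test_fraction=0.25, seed=42):
    """Shuffle a copy of the data, then split it.

    Shuffling first matters: splitting sorted or time-ordered data in
    order would give train and test sets with different distributions.
    """
    rng = random.Random(seed)        # fixed seed -> reproducible split
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = [(x, x % 2) for x in range(100)]   # toy (feature, label) rows
train, test = train_test_split(data)
# 75 rows go to training, 25 are held out for evaluation.
```

caret's `createDataPartition` and h2o's `h2o.splitFrame` do the same job (with extras such as stratification by label), which the workshop covers in R.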
Fri, 29 Jun 2018 06:37:00 GMT

Deep learning - a primer
/slideshow/deep-learning-a-primer-95197733/95197733
This is a slide deck from a presentation that my colleague Uwe Friedrichsen (/ufried/) and I did together. As we created our respective parts of the presentation on our own, it is quite easy to figure out who did which part, as the two slide decks look quite different ... :)
For the sake of simplicity and completeness, Uwe copied the two slide decks together. As he did the "surrounding" part, he added my part at the place where I took over and then added concluding slides at the end. Well, I'm sure you will figure it out easily ... ;)
The presentation was intended to be an introduction to deep learning (DL) for people who are new to the topic. It starts with some DL success stories as motivation. Then a quick classification and a bit of history follows before the "how" part starts.
The first part of the "how" is some theory of DL, to demystify the topic and explain and connect some of the most important terms on the one hand, but also to give an idea of the broadness of the topic on the other hand.
After that the second part dives deeper into the question how to actually implement DL networks. This part starts with coding it all on your own and then moves on to less coding step by step, depending on where you want to start.
The presentation ends with some pitfalls and challenges that you should have in mind if you want to dive deeper into DL - plus the invitation to become part of it.
As always, the voice track of the presentation is missing. I hope that the slides are of some use for you, though.
Fri, 27 Apr 2018 06:36:27 GMT/slideshow/deep-learning-a-primer-95197733/95197733ShirinGlander@slideshare.net(ShirinGlander)Deep learning - a primerShirinGlanderThis is a slide deck from a presentation, that my colleague Uwe Friedrichsen (/ufried/) and I did together. As we created our respective parts of the presentation on our own, it is quite easy to figure out who did which part of the presentation as the two slide decks look quite different ... :)
This is a slide deck from a presentation that my colleague Uwe Friedrichsen (/ufried/) and I did together. As we each created our part of the presentation on our own, it is quite easy to figure out who did which part: the two slide decks look quite different ... :)
For the sake of simplicity and completeness, Uwe copied the two slide decks together. As he did the "surrounding" part, he inserted my part at the place where I took over and then added the concluding slides at the end. I'm sure you will figure it out easily ... ;)
The presentation was intended as an introduction to deep learning (DL) for people who are new to the topic. It starts with some DL success stories as motivation. Then a quick classification and a bit of history follow before the "how" part starts.
The first part of the "how" is some DL theory: it demystifies the topic, explains and connects some of the most important terms, and gives an idea of the breadth of the field.
After that, the second part dives deeper into the question of how to actually implement DL networks. This part starts with coding everything on your own and then moves step by step towards less coding, depending on where you want to start.
The presentation ends with some pitfalls and challenges that you should keep in mind if you want to dive deeper into DL - plus the invitation to become part of it.
As always, the voice track of the presentation is missing. I hope that the slides are of some use to you, though.
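The "coding it all on your own" starting point of the implementation part can be illustrated with a toy example (hypothetical, not taken from the slides): a single sigmoid neuron trained with plain-numpy gradient descent.

```python
# A toy "from scratch" flavor of the first implementation step
# (hypothetical example): one sigmoid neuron, trained by
# full-batch gradient descent in plain numpy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > X[:, 1]).astype(float)       # target: a simple linear rule

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # forward pass (sigmoid)
    grad = p - y                            # dLoss/dz for binary cross-entropy
    w -= lr * (X.T @ grad) / len(X)         # backward pass: linear layer
    b -= lr * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = ((p > 0.5) == y).mean()
print(round(acc, 2))
```

Frameworks like Keras automate exactly these steps (forward pass, loss gradient, parameter update), which is the "less coding" end of the spectrum the slides move towards.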
HH Data Science Meetup: Explaining complex machine learning models with LIME
/slideshow/hh-data-science-meetup-explaining-complex-machine-learning-models-with-lime-94218890/94218890
On April 12th, 2018, I gave a talk about Explaining complex machine learning models with LIME at the Hamburg Data Science Meetup:
Traditional machine learning workflows focus heavily on model training and optimization; the best model is usually chosen via performance measures like accuracy or error, and we tend to assume that a model is good enough for deployment if it passes certain thresholds of these criteria. Why a model makes the predictions it makes, however, is generally neglected. Yet being able to understand and interpret such models can be immensely important for improving model quality, increasing trust and transparency, and reducing bias. Because complex machine learning models are essentially black boxes, too complicated to understand directly, we need approximations to get a better sense of how they work. One such approach is LIME, short for Local Interpretable Model-agnostic Explanations, a tool that helps understand and explain the decisions made by complex machine learning models.
-- slide deck was produced with beautiful.ai --
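The core idea behind LIME can be sketched in a few lines of plain numpy (a hypothetical toy example, not the `lime` package's API): perturb the instance you want explained, weight the perturbed samples by their proximity to it, and fit a weighted linear surrogate to the black box's predictions.

```python
# A minimal numpy sketch of the LIME idea: explain one prediction of a
# "black box" via a locally weighted linear surrogate model.
# (Toy model and parameters are hypothetical, chosen for illustration.)
import numpy as np

def black_box(X):
    # Stand-in for a complex model: a nonlinear scoring function.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_explain(x, predict_fn, n_samples=5000, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Perturb: sample points in the neighborhood of x.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Weight samples by proximity to x (RBF kernel).
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / kernel_width ** 2)
    # 3. Fit a weighted linear model to the black box's predictions.
    y = predict_fn(Z)
    A = np.hstack([np.ones((n_samples, 1)), Z - x])  # intercept + centered features
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return coef[1:]  # local feature weights = the "explanation"

x = np.array([0.1, 2.0])
weights = lime_explain(x, black_box)
print(weights)
```

At this point, feature 2 should receive the larger weight: locally, the black box is far more sensitive to it (slope of x^2 at 2.0 is about 4) than to feature 1 (slope of sin at 0.1 is about 1). The real `lime` package adds interpretable feature representations and sample selection on top of this core loop.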
Wed, 18 Apr 2018 11:21:51 GMT

HH Data Science Meetup: Explaining complex machine learning models with LIME
/slideshow/hh-data-science-meetup-explaining-complex-machine-learning-models-with-lime/93763896
Unfortunately, SlideShare doesn't allow re-uploading slides any more, so there is an updated version with some corrected errors here: /ShirinGlander/hh-data-science-meetup-explaining-complex-machine-learning-models-with-lime-94218890
Fri, 13 Apr 2018 06:57:19 GMT

Ruhr.PY - Introducing Deep Learning with Keras and Python
/slideshow/ruhrpy-introducing-deep-learning-with-keras-and-python/93534697
Ruhr.PY - Python Developer Meetup:
Keras is a high-level API written in Python for building and prototyping neural networks. It can be used on top of TensorFlow, Theano or CNTK. In this talk we build, train and visualize a model using Python and Keras - all interactively in Jupyter Notebooks!
https://www.meetup.com/Ruhr-py/events/248093628/
-- slide deck generated with beautiful.ai --
-- video recording can be seen here: https://youtu.be/Q8hVXnpEPmc --
-- comment here: https://shirinsplayground.netlify.com/2018/04/ruhrpy_meetup_2018_slides/ --
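The build/train/predict workflow from the talk looks roughly like this (a minimal sketch on hypothetical toy data, assuming TensorFlow's bundled Keras, not the actual notebook from the talk):

```python
# A minimal sketch of the Keras workflow: define, compile, train, predict.
# (Toy data and layer sizes are hypothetical; assumes `pip install tensorflow`.)
import numpy as np
from tensorflow import keras

# Toy data: 200 samples, 4 features, binary labels.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

# Build: stack layers with the Sequential API.
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Compile: choose optimizer, loss, and metrics.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train: `fit` returns a History object; its `history` dict of per-epoch
# losses/metrics is what gets visualized in the notebook.
history = model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Predict on new data.
preds = model.predict(X[:3], verbose=0)
print(preds.shape)
```

In a Jupyter Notebook, plotting `history.history["loss"]` over epochs gives the interactive training-progress view mentioned above.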
Wed, 11 Apr 2018 09:23:05 GMT

From Biology to Industry. A Blogger's Journey to Data Science.
/slideshow/from-biology-to-industry-a-bloggers-journey-to-data-science/79891422
What does blogging mean for Data Science?
What is Big Data today?
How to become a Data Scientist, and what type of work results from this transformation?
Mon, 18 Sep 2017 12:24:43 GMT