SlideShare feed: slideshows by Luciano Resende (luckbr1975)

A Jupyter kernel for Scala and Apache Spark (Thu, 06 Oct 2022)
Many data scientists already make heavy use of the Jupyter ecosystem, analyzing data with interactive notebooks. Apache Toree (incubating) is a Jupyter kernel that enables data scientists and data engineers to easily connect to Apache Spark and leverage its powerful APIs from a standard Jupyter notebook to execute their analytics workloads. In this talk, we will go over what's new in the most recent Apache Toree release. We will cover the available magics and visualization extensions that can be integrated with Toree to enable better data exploration and visualization. We will also describe some of Toree's high-level design and how users can extend its functionality through Apache Toree's powerful plugin system. All of this comes with multiple live demos that show how Toree can help with your analytics workloads in an Apache Spark environment.
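For readers unfamiliar with Toree, a minimal sketch of what an interactive cell looks like, assuming the kernel pre-binds a SparkSession as `spark` (as Toree does by default); the dataset and column names are illustrative:

```scala
// Scala cell in a Jupyter notebook running the Apache Toree kernel.
// Toree pre-binds a SparkSession as `spark`, so no session setup is needed.
import spark.implicits._

// Build a small synthetic DataFrame and explore it interactively.
val df = spark.range(0, 1000).toDF("id")
  .withColumn("bucket", $"id" % 10)

// The full Spark API is available from the notebook:
df.groupBy("bucket").count().orderBy("bucket").show()
```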

Using Elyra for COVID-19 Analytics (Thu, 25 Jun 2020)
In this session, Luciano will walk you through a real use-case pipeline that uses Elyra features to help analyze COVID-19 related datasets. He will introduce Elyra, a project built to extend JupyterLab with AI-centric capabilities, and showcase the extensions that allow you to build Notebook Pipelines and execute them in a Kubeflow environment, run notebooks as batch jobs, and create, edit and execute Python scripts directly from JupyterLab.

Elyra - a set of AI-centric extensions to JupyterLab Notebooks (Tue, 10 Mar 2020)
In this session Luciano will explore the different projects that compose the Jupyter ecosystem, including Jupyter Notebooks, JupyterLab, JupyterHub and Jupyter Enterprise Gateway. Jupyter Notebooks are the current open standard for data science and AI model development, and IBM is dedicated to contributing to their success and adoption. Continuing the trend of building out the Jupyter ecosystem, Luciano will introduce Elyra, a project built to extend JupyterLab with AI-centric capabilities. He'll showcase the extensions that allow you to build Notebook Pipelines, execute notebooks as batch jobs, navigate and execute Python scripts, and tie neatly into notebook versioning.

From Data to AI - Silicon Valley Open Source projects come to you - Madrid meetup (Tue, 25 Feb 2020)
The IBM Center for Open Source Data and AI Technologies (CODAIT, https://developer.ibm.com/code/open/centers/codait/) works on multiple open-source data and AI projects. In this session we will introduce these projects, spanning Jupyter Notebooks, reusable model and data assets, and Trusted AI, among others.

AI Pipelines Powered by Jupyter Notebooks (Wed, 17 Jul 2019)
The Jupyter Notebook has become the de facto platform used by data scientists and AI engineers to build interactive applications and develop their AI/ML models. In this scenario, it's very common to decompose the various phases of development into multiple notebooks to simplify the development and management of the model lifecycle. Luciano Resende details how to schedule these multiple notebooks, each corresponding to a different phase of the model lifecycle, into notebook-based AI pipelines, and walks you through scenarios that demonstrate how to reuse notebooks via parameterization.

Strata - Scaling Jupyter with Jupyter Enterprise Gateway (Sat, 30 Mar 2019)
Born in academia, Jupyter notebooks are prevalent in both learning and research environments throughout the scientific community. Due to the widespread adoption of big data, AI, and deep learning frameworks, notebooks are also finding their way into the enterprise, which introduces a different set of requirements. Alan Chin and Luciano Resende explain how to introduce Jupyter Enterprise Gateway into new and existing notebook environments to enable a "bring your own notebook" model while simultaneously optimizing the resources consumed by the notebook kernels running across managed clusters within the enterprise. Along the way, they detail how to use different frameworks with Enterprise Gateway to meet the needs of data scientists operating within the AI and deep learning ecosystems.

Scaling notebooks for Deep Learning workloads (Thu, 23 Aug 2018)
Deep learning workloads are compute intensive, and training these types of models is better done with specialized hardware like GPUs. Luciano Resende outlines a pattern for building deep learning models using Jupyter Notebooks' interactive development on commodity hardware, then leveraging platforms and services such as Fabric for Deep Learning (FfDL) for cost-effective full-dataset training of deep learning models.

Jupyter Enterprise Gateway Overview (Wed, 22 Aug 2018)
Jupyter Enterprise Gateway enables Jupyter Notebook to launch remote kernels in a distributed cluster, including Apache Spark managed by YARN, IBM Spectrum Conductor, or Kubernetes. It provides out-of-the-box support for the following kernels:

- Python, using the IPython kernel
- R, using IRkernel
- Scala, using the Apache Toree kernel

Inteligencia artificial, open source e IBM Call for Code (Mon, 23 Jul 2018)
In this talk we will cover some of the trends in Artificial Intelligence and the difficulties in adopting it. We will also present some open-source tools that can help simplify AI adoption. And we will give a brief introduction to Call for Code, an IBM initiative to build solutions for natural disaster prevention and response.

IoT Applications and Patterns using Apache Spark & Apache Bahir (Thu, 12 Jul 2018)
The Internet of Things (IoT) is all about connected devices that produce and exchange data, and building applications that produce insights from these high volumes of data is very challenging and requires an understanding of multiple protocols, platforms and other components. In this session, we will start by providing a quick introduction to IoT and some of the common analytic patterns used with it, and also touch on the MQTT protocol, how it is used by IoT solutions, and some of the quality-of-service tradeoffs to be considered when building an IoT application. We will then discuss the Apache Spark platform components utilized by IoT applications to process device streaming data, and talk about Apache Bahir and some of its IoT connectors available for the Apache Spark platform. Finally, we will go over the details of how to build, test and deploy an IoT application for Apache Spark using the MQTT data source for the new Apache Spark Structured Streaming functionality.
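As a flavor of the kind of application described above, here is a minimal sketch of reading an MQTT topic as a Structured Streaming source through Bahir's connector. The provider class and options follow Bahir's documented usage as best recalled (verify against the Bahir release you build with), and the broker URL and topic are placeholders:

```scala
import org.apache.spark.sql.SparkSession

object MqttIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("mqtt-iot-ingest")
      .getOrCreate()

    // Bahir's MQTT source for Structured Streaming (assumed provider class;
    // check the Bahir documentation for your release). Broker and topic
    // below are placeholders.
    val lines = spark.readStream
      .format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
      .option("topic", "sensors/temperature")
      .load("tcp://localhost:1883")

    // Echo incoming records (value, timestamp) to the console sink.
    lines.writeStream
      .outputMode("append")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```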

Getting insights from IoT data with Apache Spark and Apache Bahir (Sun, 08 Jul 2018)
The Internet of Things (IoT) is all about connected devices that produce and exchange data, and producing insights from these high volumes of data is challenging. In this session, we will start by providing a quick introduction to the MQTT protocol, and then focus on using AI and machine learning techniques to provide insights from data collected from IoT devices. We will present some common AI concepts and techniques used by the industry to deploy state-of-the-art smart IoT systems. These techniques allow systems to determine patterns in the data, predict and prevent failures, and suggest actions that can minimize or avoid IoT device breakdowns, in an intelligent way that goes beyond rule-based and database-search approaches. We will finish with a demo that puts together all the techniques discussed in an application that uses Apache Spark and Apache Bahir support for MQTT.
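To make the pattern-detection idea concrete, here is a hedged sketch of a windowed aggregation over such an MQTT stream. It assumes the `lines` stream from the previous sketch, with its (value, timestamp) columns, and the 75.0 threshold is purely illustrative:

```scala
import org.apache.spark.sql.functions._

// `lines` is the MQTT stream from the previous sketch: (value, timestamp).
// Parse the payload as a numeric sensor reading.
val readings = lines.select(
  col("timestamp"),
  col("value").cast("double").as("reading"))

// Average readings over sliding one-minute windows and flag windows whose
// mean exceeds an illustrative threshold -- a minimal anomaly rule.
val alerts = readings
  .withWatermark("timestamp", "2 minutes")
  .groupBy(window(col("timestamp"), "1 minute", "30 seconds"))
  .agg(avg("reading").as("avg_reading"))
  .where(col("avg_reading") > 75.0)

alerts.writeStream.outputMode("update").format("console").start()
```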

Open Source AI - News and examples (Wed, 13 Jun 2018)
This presentation describes some of the open-source AI projects we are working on at the Center for Open Source Data and AI Technologies (CODAIT), including the Model Asset Exchange (MAX), Fabric for Deep Learning (FfDL) and Jupyter Enterprise Gateway.

Building analytical microservices powered by Jupyter kernels (Sat, 24 Mar 2018)
Jupyter kernels, which abstract the computing engine used in Jupyter Notebooks, are a very powerful component that can be reused in different scenarios to bring analytical capabilities to applications. In this session, we will discuss how you can build a simple Python-based microservice that leverages Jupyter kernels to incorporate sentiment analysis into the service it provides.

Building IoT applications with Apache Spark and Apache Bahir (Thu, 07 Dec 2017)
We live in a connected world where connected devices are becoming part of our day-to-day lives and are providing invaluable streams of data. In this talk, we will introduce you to Apache Bahir and some of its IoT connectors available for Apache Spark. We will also go over the details of how to build, test and deploy an IoT application for Apache Spark using the MQTT data source for the new Apache Spark Structured Streaming functionality.

An Enterprise Analytics Platform with Jupyter Notebooks and Apache Spark (Fri, 01 Dec 2017)
IBM has built a Data Science Experience cloud service that exposes notebook services at web scale. Behind this service, various components power the platform, including Jupyter Notebooks, an enterprise gateway that manages the execution of the Jupyter kernels, and an Apache Spark cluster that powers the computation. In this session we will describe our experience and best practices putting together this analytical platform-as-a-service based on Jupyter Notebooks and Apache Spark, and in particular how we built the Enterprise Gateway that enables all the notebooks to share the Spark cluster's computational resources.

The Analytic Platform behind IBM's Watson Data Platform - Big Data Spain 2017 (Fri, 17 Nov 2017)
IBM has built a Data Science Experience cloud service that exposes notebook services at web scale. Behind this service, various components power the platform, including Jupyter Notebooks, an enterprise gateway that manages the execution of the Jupyter kernels, and an Apache Spark cluster that powers the computation. In this session we will describe our experience and best practices putting together this analytical platform-as-a-service based on Jupyter Notebooks and Apache Spark, and in particular how we built the Enterprise Gateway that enables all the notebooks to share the Spark cluster's computational resources.

What's new in Apache SystemML - Declarative Machine Learning (Thu, 16 Nov 2017)
SystemML was designed with the main goal of lowering the complexity required to maintain and scale machine learning algorithms. It provides a declarative machine learning language (DML) that simplifies the specification of machine learning algorithms using R-like and Python-like syntax, which significantly increases the productivity of data scientists: it provides flexibility in how custom analytics are expressed, as well as data independence from the underlying input formats and physical data representations. This presentation gives a quick introduction to Apache SystemML, provides an update on the areas currently being developed by the project community, and goes over a tutorial that enables one to quickly get up to speed with SystemML.
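For a taste of DML from Spark, a small sketch using SystemML's MLContext API; the class and factory names are given as best recalled from the MLContext documentation (verify against your SystemML version), and the script itself is a toy example:

```scala
import org.apache.sysml.api.mlcontext.MLContext
import org.apache.sysml.api.mlcontext.ScriptFactory.dml

// MLContext wraps an existing SparkSession, assumed in scope as `spark`.
val ml = new MLContext(spark)

// A tiny R-like DML script: build a random matrix and print its column sums.
val script = dml(
  """
  X = rand(rows = 1000, cols = 10)
  s = colSums(X)
  print(toString(s))
  """)

ml.execute(script)
```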

Big analytics meetup - Extended Jupyter Kernel Gateway (Thu, 14 Sep 2017)
Building an enterprise/cloud analytics platform with Jupyter Notebooks and Apache Spark.

Jupyter con meetup extended jupyter kernel gateway /slideshow/jupyter-con-meetup-extended-jupyter-kernel-gateway/79075573 jupyterconmeetup-extendedjupyterkernelgateway-170823034447
Data scientists are becoming a necessity for every company in today's data-centric world, and with them comes the requirement to make available an elastic and interactive analytics platform. This session will describe our experience and best practices in putting together an analytics platform based on the Jupyter stack, with different kernels running in a distributed Apache Spark cluster.

Wed, 23 Aug 2017 03:44:47 GMT /slideshow/jupyter-con-meetup-extended-jupyter-kernel-gateway/79075573 luckbr1975@slideshare.net(luckbr1975) Jupyter con meetup extended jupyter kernel gateway
Writing Apache Spark and Apache Flink Applications Using Apache Bahir /slideshow/writing-apache-spark-and-apache-flink-applications-using-apache-bahir/69117482 apachebigdata-apachebahir-161116210757
Big Data is all about being able to access and process data in various formats, from various sources. Apache Bahir provides extensions to distributed analytics platforms, giving them access to different data sources. In this talk we will introduce you to Apache Bahir and the various connectors it provides for Apache Spark and Apache Flink. We will also go over the details of how to build, test, and deploy a Spark application that uses the MQTT data source with the new Apache Spark 2.0 Structured Streaming functionality.
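As a sketch of what that looks like, here is a minimal example following the Bahir MQTT source documentation; the broker URL and topic name are assumptions, and the package coordinates reflect the Spark 2.0 / Scala 2.11 line discussed in the talk:

// Launch with the matching connector package, e.g.:
//   spark-shell --packages org.apache.bahir:spark-sql-streaming-mqtt_2.11:2.0.0

// Subscribe to an MQTT topic as a Structured Streaming source.
val lines = spark.readStream
  .format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
  .option("topic", "sensors/temperature") // hypothetical topic
  .load("tcp://localhost:1883")           // assumed local broker

// Each record carries a payload and a timestamp; echo the stream to the console.
val query = lines.writeStream
  .outputMode("append")
  .format("console")
  .start()

query.awaitTermination()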

Wed, 16 Nov 2016 21:07:57 GMT /slideshow/writing-apache-spark-and-apache-flink-applications-using-apache-bahir/69117482 luckbr1975@slideshare.net(luckbr1975) Writing Apache Spark and Apache Flink Applications Using Apache Bahir
Luciano Resende is an STSM and Open Source Data Science/AI Platform Architect at the IBM Center for Open Source Data and AI Technologies (CODAIT, formerly the Spark Technology Center). He has been contributing to open source at the ASF for over 10 years; he is a member of the ASF and currently contributes to various big-data-related Apache projects around the Apache Spark ecosystem. Luciano is also contributing to Jupyter ecosystem projects, building a scalable, secure, and flexible enterprise data science platform. lresende.blogspot.com