Meteor Taipei 2016 January talk -- Mantra, by Wey-Han Liaw
Mantra is an application architecture specification and set of libraries for building Meteor apps. It is based on Flux principles and built around a few core components: React UI components, action handlers for business logic, state managers for local and remote data, and container components that wire UI, actions, and state together. While Mantra provides a solid architecture, the author argues it could be improved by tailoring it more specifically to Meteor, strengthening unit-testing support, and building a code-generation tool or opinionated framework that implements the pattern.
This document discusses asynchronous programming in Python. It introduces asynchronous I/O, event loops, and callbacks, and shows why callback-based code quickly becomes tangled and hard to organize. Generators and coroutines provide a cleaner alternative through the yield and await keywords, and the asyncio module builds on coroutines to implement asynchronous I/O, providing an event loop that runs coroutine functions concurrently.
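To make that contrast concrete, here is a minimal sketch of two coroutines running concurrently on asyncio's event loop. It uses the modern asyncio.run() entry point; code from the era of this talk would drive the loop with loop.run_until_complete() instead.

```python
import asyncio

async def fetch(name, delay):
    # await suspends this coroutine and hands control back to the event loop
    await asyncio.sleep(delay)
    return f"{name} finished after {delay}s"

async def main():
    # gather schedules both coroutines concurrently: total time ~2s, not 3s
    results = await asyncio.gather(fetch("a", 1), fetch("b", 2))
    print(results)

asyncio.run(main())  # create an event loop and run main() to completion
```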
This document contains information about Adrian Liaw, including his online usernames and his interests in Scratch, Udacity, Python, and earning a Data Analyst Nanodegree. It lists several Python MOOCs from Udacity, Coursera, and other providers, and recommends learning Python through online courses, the Taipei.py community, and kid-oriented coding sites such as code.org.
The document discusses the key components that make up the Meteor stack, including Blaze, Tracker, DDP, and Mongo. It notes that while Meteor uses these components by default, the framework is customizable - developers can replace the UI library, database, and DDP client/server with alternatives. The standard Meteor application architecture is then outlined, explaining how each component fits together and interacts to provide reactivity on the client.
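As a rough illustration of how replaceable the DDP layer is, the sketch below performs DDP's initial connect handshake from plain Python. The ws://localhost:3000/websocket endpoint assumes a default local Meteor app, and the third-party websockets package stands in for Meteor's own client; the message shapes follow the published DDP spec.

```python
import asyncio
import json
import websockets  # third-party: pip install websockets

async def ddp_connect(url="ws://localhost:3000/websocket"):
    # Open a raw WebSocket to the Meteor server and send DDP's "connect" message
    async with websockets.connect(url) as ws:
        await ws.send(json.dumps({"msg": "connect", "version": "1", "support": ["1"]}))
        while True:
            reply = json.loads(await ws.recv())
            # The server's first frame may carry only a server_id; wait for "connected"
            if reply.get("msg") == "connected":
                print("DDP session established:", reply["session"])
                break

asyncio.run(ddp_connect())
```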
Implementation of Rubik's Cube Formula in PyCuber, by Wey-Han Liaw
The document discusses PyCuber, a Python package for modeling and solving Rubik's Cubes. It introduces the Rubik's Cube and its inventor, Ernő Rubik, then describes how PyCuber implements classes such as Step and Formula to represent individual moves and whole solving algorithms. Various methods are included, such as optimizing, randomizing, and mirroring formulas. Some interesting Rubik's Cube facts and records are also mentioned.
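A minimal usage sketch based on the package's documented API; the Formula string and the in-place reverse() call follow PyCuber's README.

```python
import pycuber as pc

cube = pc.Cube()                 # a solved cube; printing it shows an ASCII net
alg = pc.Formula("R U R' U' R' F R2 U' R' U' R U R' F'")
cube(alg)                        # applying a Formula performs each Step on the cube

alg.reverse()                    # invert the move sequence in place
cube(alg)                        # applying the inverse restores the solved state
print(cube)
```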
This document provides an introduction and overview of several Apache Spark labs covering: a "hello world" example of Resilient Distributed Datasets (RDDs); importing and performing operations on a wine dataset using DataFrames and SQL; and using the MLlib library to perform k-means clustering on features from the wine dataset. The labs demonstrate basic Spark concepts like RDDs, DataFrames, ML pipelines, and clustering algorithms.
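A condensed PySpark sketch of the three labs; the wine.csv path and the alcohol/malic_acid column names are assumptions standing in for the actual lab dataset.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("wine-labs").getOrCreate()

# Lab 1: RDD "hello world"
rdd = spark.sparkContext.parallelize(range(10))
print(rdd.map(lambda x: x * x).sum())

# Lab 2: DataFrames and SQL on the wine dataset
wine = spark.read.csv("wine.csv", header=True, inferSchema=True)
wine.createOrReplaceTempView("wine")
spark.sql("SELECT COUNT(*) FROM wine").show()

# Lab 3: k-means clustering with MLlib's DataFrame-based API
assembler = VectorAssembler(inputCols=["alcohol", "malic_acid"], outputCol="features")
features = assembler.transform(wine)
model = KMeans(k=3, seed=1).fit(features)
model.transform(features).select("features", "prediction").show(5)
```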
This document discusses building custom kernels for IPython. It begins by explaining what an IPython kernel is and how kernels work, and describes the benefits of building custom kernels for languages that lack interactive development tools. It explains IPython's architecture, in which the kernel and clients communicate over ZeroMQ, and introduces the two types of kernels, native and wrapper. The messaging specification and its channels are covered at a high level. Finally, the document focuses on building wrapper kernels by extending the Kernel base class and implementing a few specific methods, using a Bash kernel and a Redis kernel as examples.
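Following the pattern the document describes, a wrapper kernel can be as small as the echo kernel below, adapted from the ipykernel documentation: subclass Kernel, fill in the metadata attributes, and implement do_execute().

```python
from ipykernel.kernelbase import Kernel

class EchoKernel(Kernel):
    # Metadata the frontend asks for via the kernel_info request
    implementation = "Echo"
    implementation_version = "1.0"
    language = "no-op"
    language_version = "0.1"
    language_info = {"name": "echo", "mimetype": "text/plain", "file_extension": ".txt"}
    banner = "Echo kernel - repeats whatever you type"

    def do_execute(self, code, silent, store_history=True,
                   user_expressions=None, allow_stdin=False):
        if not silent:
            # Send the submitted code straight back on the IOPub channel
            stream_content = {"name": "stdout", "text": code}
            self.send_response(self.iopub_socket, "stream", stream_content)
        return {"status": "ok",
                "execution_count": self.execution_count,
                "payload": [],
                "user_expressions": {}}

if __name__ == "__main__":
    from ipykernel.kernelapp import IPKernelApp
    IPKernelApp.launch_instance(kernel_class=EchoKernel)
```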
This document describes the evolution of IPython and Jupyter, from their beginnings as an interactive Python shell to a multi-language platform for interactive computing and document publishing. It explains how Jupyter's generic REPL protocol makes it possible to run code in multiple languages, and how tools such as JupyterHub, nbviewer, and notebooks have driven adoption in education, research, and scientific communication.
NOTE: This was converted to PowerPoint from Keynote. SlideShare does not play the embedded videos. You can download the PowerPoint from SlideShare and import it into Keynote; the videos should work there.
Abstract:
In this presentation, we describe the "Spark Kernel", which enables applications, including end-user-facing and interactive ones, to interface with Spark clusters. It provides a gateway to define and run Spark tasks and to collect results from a cluster, without the friction of shipping jars and reading results back from peripheral systems. Using the Spark Kernel as a proxy, applications can be hosted remotely from Spark.
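A hedged sketch of that proxy idea using jupyter_client from Python: the apache_toree_scala kernel name is an assumption (the Spark Kernel project later became Apache Toree), and the exact kernelspec name depends on the installation.

```python
from jupyter_client.manager import start_new_kernel

# Kernel name is an assumption; Toree's Scala kernelspec is commonly
# registered under a name like this.
km, kc = start_new_kernel(kernel_name="apache_toree_scala")

# Ship a Spark task over the Jupyter messaging protocol and print its output,
# with no jar packaging or peripheral result store involved.
kc.execute_interactive("println(sc.parallelize(1 to 100).sum())")

kc.stop_channels()
km.shutdown_kernel()
```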
Jupyter Kernel: How to Speak in Another Language, by Wey-Han Liaw
The document discusses how to create a Jupyter kernel. It explains that kernels use ZeroMQ sockets to communicate with clients via the Jupyter messaging protocol. Native kernels are implemented from scratch in the target language, while wrapper kernels are built in Python on top of an existing interpreter. The document points to existing kernels such as IJulia and the Python ipykernel, outlines the steps to build a wrapper kernel, and mentions several other kernel types.
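Beyond the kernel class itself, a kernel becomes visible to Jupyter through a kernelspec. The sketch below registers a hypothetical echo_kernel module (such as the wrapper kernel sketched earlier) with a minimal kernel.json.

```python
import json
import os
import tempfile
from jupyter_client.kernelspec import KernelSpecManager

# kernel.json tells Jupyter how to launch the kernel process;
# "echo_kernel" is a hypothetical module name used for illustration.
spec = {
    "argv": ["python", "-m", "echo_kernel", "-f", "{connection_file}"],
    "display_name": "Echo",
    "language": "echo",
}

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "kernel.json"), "w") as f:
        json.dump(spec, f)
    # Register for the current user, so e.g. `jupyter console --kernel echo` finds it
    KernelSpecManager().install_kernel_spec(d, kernel_name="echo", user=True)
```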
This presentation is a comprehensive introduction to Apache Spark, from an explanation of its rapid ascent to its performance and developer advantages over MapReduce. It also explores Spark's built-in functionality for streaming, machine learning, and Extract, Transform, Load (ETL) workloads.
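For the streaming side, a sketch of Structured Streaming's canonical word count gives the flavor (Spark 2.x API; a deck from this era may equally use the older DStream API). Feed the socket with `nc -lk 9999`.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("streaming-wordcount").getOrCreate()

# Read lines from a local socket as an unbounded streaming DataFrame
lines = (spark.readStream.format("socket")
         .option("host", "localhost").option("port", 9999).load())

words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

# Print the full, continuously updated counts table to the console
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```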