The Four Pillars of Flipped Learning: F.L.I.P
Flexible Environments
Learning Culture
Intentional Content
Professional Educators
-----------------------------
Flipped-education lecture at Nengren Vocational High School (能仁家商)
Evaluating Large Language Models for Your Applications and Why It Matters - Mia Chang
Event: AWS WUG Cloud Talks
Date: 2025-02-11
Description: Confused by the overwhelming metrics for evaluating LLMs? This talk will guide you through key evaluation metrics, tools, and frameworks tailored to specific use cases, including mitigating social biases and extracting interpretable features. Gain clarity on LLM evaluation to build better generative AI applications.
Service: Amazon Bedrock
Speaker: Mia Chang, ML Specialist Solutions Architect at AWS, NLP expert, and author, with extensive experience running AI/ML workloads in the cloud.
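Which metric fits depends on the use case; for extractive, short-answer tasks a common baseline is token-level F1 between the model's answer and a reference. A minimal plain-Python sketch (my illustration only, not the tooling covered in the talk):

```python
from collections import Counter


def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a model answer and a reference (SQuAD-style)."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    if not pred or not ref:
        # Both empty counts as a match; one empty counts as a miss.
        return float(pred == ref)
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Simple overlap metrics like this are cheap and interpretable, but they miss paraphrases, which is exactly why the talk surveys a broader set of metrics and frameworks.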
Running the first automatic speech recognition (ASR) model with HuggingFace - Mia Chang
Running the first automatic speech recognition (ASR) model with HuggingFace
06-18, 11:00–11:45 (Europe/London), Tower Suite 1
Come and build your first audio machine learning model with an automatic speech recognition (ASR) use case! ASR powers popular applications such as voice-controlled assistants and speech-to-text tools, which take audio clips as input and convert the speech signal to text.
This talk is aimed at Python developers and ML practitioners who know Python and are interested in audio machine learning use cases. I will keep the slides on ML algorithms to a minimum. Instead, I will walk through types of ASR applications, such as automatic subtitling for videos and meeting transcription, so you will know when an ASR model is the right tool. I will then cover audio data processing, feature extraction, and fine-tuning Wav2Vec2 with HuggingFace. The notebook presented in the talk runs on Amazon SageMaker, but the concepts are cloud-agnostic and apply to a local (on-premises) machine as well.
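As a taste of the audio preprocessing step: a raw waveform is typically split into short overlapping frames before per-frame features are computed. A minimal plain-Python sketch (illustrative only; the talk itself uses HuggingFace feature extractors for Wav2Vec2):

```python
def frame_signal(signal, frame_len, hop):
    """Split a 1-D waveform into overlapping frames, a standard first step
    before computing per-frame audio features for ASR."""
    return [signal[start:start + frame_len]
            for start in range(0, len(signal) - frame_len + 1, hop)]


def frame_energy(frame):
    """Mean short-time energy of one frame (a very simple audio feature)."""
    return sum(s * s for s in frame) / len(frame)
```

Real pipelines use the same framing idea with windowing and spectral features (or learned features, as in Wav2Vec2), but the input/output shape of the step is the same.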
---
Github: https://github.com/pymia/amazon-sagemaker-fine-tune-and-deploy-wav2vec2-huggingface
Event: PyData London 2022
Date: June 17th–19th, 2022
Event link: https://pydata.org/london2022/
Linkedin: http://linkedin.com/in/mia-chang/
7 steps to AI production - Global Azure Bootcamp 2020 Köln - Mia Chang
Session: 7 steps to AI production
Abstract: What was your last AI project? Was it another Kaggle dataset running in a Jupyter notebook, hard to reproduce, with no clear path to deployment as an AI service? How do you auto-scale the model serving?
How far is it from playing with a sample dataset to AI in production?
Let's go through seven steps of the AI application development lifecycle: from data wrangling and reproducible training to model acceptance, deployment, and management.
Target audience: data scientists who are new to model serving and Azure DevOps, and backend/DevOps engineers who want to help their data team get to production.
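One small but high-leverage habit across those steps is tagging every model artifact with a deterministic version derived from its training configuration, so a deployed model can be traced back to the exact run that produced it. A hedged sketch of the idea (my illustration, not code from the deck):

```python
import hashlib
import json


def model_version_tag(config: dict) -> str:
    """Derive a short, deterministic tag from a training configuration.

    Canonical JSON (sorted keys) makes the hash independent of dict
    insertion order, so the same config always yields the same tag.
    """
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]
```

Storing this tag alongside the model artifact makes "which data and hyperparameters produced this model?" answerable at deployment time, which is most of what reproducibility asks for.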
---
Github: https://github.com/pymia/7-steps-production
Event: Global Azure Bootcamp 2020 Virtual
Date: Apr 25, 2020
Event link: https://www.meetup.com/Azure-Cologne-Meetup/events/266727986/
Linkedin: http://linkedin.com/in/mia-chang/
The content was modified from Google Content Group
Eric ShangKuan (ericsk@google.com)
---
TensorFlow Lite guide (for mobile & IoT)
TensorFlow Lite is a set of tools to help developers run TensorFlow models on mobile, embedded, and IoT devices. It enables on-device machine learning inference with low latency and small binary size.
TensorFlow Lite consists of two main components:
The TensorFlow Lite interpreter:
- runs specially optimized models on many different hardware types, such as mobile phones, embedded Linux devices, and microcontrollers.
The TensorFlow Lite converter:
- converts TensorFlow models into an efficient form for use by the interpreter, and can introduce optimizations to improve binary size and performance.
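To make the two components concrete, here is a minimal sketch of the usual flow, assuming TensorFlow 2.x: convert a tiny stand-in Keras model, then run it with the interpreter (a generic illustration, not code from the talk):

```python
import numpy as np
import tensorflow as tf

# A tiny stand-in for a real trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Converter: produce the compact FlatBuffer format the interpreter consumes.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# Interpreter: load the converted model and run one inference on-device.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

In production the converted bytes would be written to a `.tflite` file and shipped with the app; optimizations such as post-training quantization are enabled on the converter before calling `convert()`.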
---
Event: PyLadies TensorFlow All-Around
Date: Sep 25, 2019
Event link: https://www.meetup.com/PyLadies-Berlin/events/264205538/
Linkedin: http://linkedin.com/in/mia-chang/
DPS2019: Data Scientist in the Real Estate Industry - Mia Chang
This document summarizes a presentation about applying artificial intelligence in the real estate industry. It discusses the different stages of the real estate process and how AI could be used at each stage, including predicting energy usage, processing text in different languages, and automating workflows. It also covers challenges around regulations like GDPR and strategies for developing and deploying AI models, including using transfer learning and version control systems.
Leverage the Power of Machine Learning on Windows - Mia Chang
Note:
The Content was modified from the Microsoft Content team.
Deck Owner: Nitah Onsongo
Tech/Msg Review: Cesar De La Torre, Simon Tao, Clarke Rahrig
---
Event: Insider Dev Tour Berlin
Event Description: Microsoft is going on a world tour with the announcements of Build 2019. The Insider Dev Tour focuses on innovations related to Microsoft 365 from a developer's perspective.
Date: June 7th, 2019
Event link: https://www.microsoft.com/de-de/techwiese/news/best-of-build-insider-dev-tour-am-7-juni-in-berlin.aspx
Linkedin: http://linkedin.com/in/mia-chang/
Develop Computer Vision Applications with Azure Computer Vision API - Mia Chang
This document discusses developing computer vision applications using the Azure Computer Vision API. It provides an overview of computer vision and AI development on Azure. It also discusses using emotion recognition in chatbots and provides references to computer vision papers, datasets, and tools like the Azure Machine Learning Workbench. The document includes examples of computer vision tasks like object detection and segmentation and provides a small demo of emotion detection.
This document summarizes chapters 5 and 6 from a book on unit testing. Chapter 5 discusses why isolation frameworks are useful for creating fake objects more easily than hand-coding mocks. It also covers simulating fake values and testing events. Chapter 6 distinguishes between constrained and unconstrained isolation frameworks and discusses features that support future-proofing and usability of frameworks. Both chapters emphasize that isolation frameworks make testing easier, faster and less error-prone compared to manually writing mocks.
Play Kaggle with R, Facebook V: Predicting Check Ins - Mia Chang
Sharing a case study from the Kaggle competition Facebook V: Predicting Check Ins. I hope it shows R users more possibilities for tackling Kaggle competitions with R!
For community sharing.
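The core task in that competition is predicting which place a check-in belongs to from its coordinates. The talk works in R; as a language-neutral illustration of the simplest possible baseline, here is a nearest-centroid sketch in plain Python (my example, not from the slides):

```python
import math


def nearest_place(train, query):
    """Predict a place id for an (x, y) check-in by nearest centroid.

    train maps place_id -> list of (x, y) check-ins seen for that place.
    """
    centroids = {
        place: (sum(x for x, _ in pts) / len(pts),
                sum(y for _, y in pts) / len(pts))
        for place, pts in train.items()
    }
    return min(centroids, key=lambda place: math.dist(centroids[place], query))
```

Competitive solutions refine this idea with k-nearest neighbors, time features, and grid partitioning of the map, but the geometric intuition is the same.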
WHY AM I HERE...
Data Scientist
Microsoft MVP (Data Platform)
Computer Vision/Algorithm Research
As a Community Co-Organizer:
R-Ladies Taipei,
Azure Taiwan Community,
Tech Podcast Night
Mia-Chang
mia5419@gmail.com