Slideshows by user KevinLee56 (SlideShare feed)

Patients Journey using Real World Data and its Advanced Analytics
/slideshow/patients-journey-using-real-world-data-and-its-advanced-analytics/267673542 (posted Tue, 30 Apr 2024 13:41:22 GMT)
Real World Data (RWD) is data collected outside of clinical trial studies, and Real-World Evidence (RWE) can be derived from the insights in RWD. RWD sources include electronic medical records (EMR), health insurance claims, genomic data, and IoT data from apps and wearables. Anonymized RWD patient data has revolutionized how companies view patient data, since it captures longitudinal pharmacy prescriptions, medical claims, and diagnoses. The paper is written for those who want to understand how RWD patient data are collected and how they can be analyzed to support pharmaceutical companies. Mainly, RWD patient data can support patient analytics, commercial analytics, and payer analytics, such as source of business, prescription switching, payment method, market analysis, promotional activities, drug launch, and forecasting. The paper also discusses the technologies that data scientists use for RWD, such as data warehouses, data visualization, open-source programming, cloud computing, GitHub, and machine learning.
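As a minimal sketch of one such patient analytic, the snippet below flags prescription switches in a hypothetical longitudinal claims table using pandas; all column names, drug names, and values are assumptions made for illustration:

import pandas as pd

# Hypothetical longitudinal pharmacy claims: one row per prescription fill.
claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "fill_date": pd.to_datetime(["2023-01-05", "2023-02-04", "2023-03-06",
                                 "2023-01-10", "2023-02-09"]),
    "drug": ["DrugA", "DrugA", "DrugB", "DrugB", "DrugB"],
})

# Order each patient's fills over time and look at the previous drug.
claims = claims.sort_values(["patient_id", "fill_date"])
claims["prev_drug"] = claims.groupby("patient_id")["drug"].shift()

# A switch is a fill whose drug differs from the previous fill;
# a fill with no previous drug is a new start (a possible source of business).
switches = claims[claims["prev_drug"].notna() & (claims["drug"] != claims["prev_drug"])]
print(switches[["patient_id", "fill_date", "prev_drug", "drug"]])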

Introduction of AWS Cloud Computing and its future for Biometric Department
/slideshow/introduction-of-aws-cloud-computing-and-its-future-for-biometric-department/267673451 (posted Tue, 30 Apr 2024 13:37:55 GMT)
When statistical programmers or statisticians start in open-source programming, we usually begin by installing Python and/or R on a local computer and writing code in a local IDE such as Jupyter Notebook or RStudio, but as biometric teams grow and advanced analytics become more prevalent, collaborative solutions and environments are needed. Traditional solutions have been SAS® servers, but nowadays there is a growing need and interest in cloud computing. The paper is written for those who want to know about the cloud computing environment (e.g., AWS) and its possible implementation for the Biometric Department. The paper starts with the main components of cloud computing (databases, servers, applications, data analytics, reports, visualization, dashboards, etc.) and its benefits: elasticity, control, flexibility, integration, reliability, security, low cost, and ease of getting started. The most popular cloud computing platforms are AWS, Google Cloud, and Microsoft Azure, and this paper introduces the AWS cloud computing environment. The paper also introduces the core technologies of AWS, namely computing (EC2), storage (EBS, EFS, S3), databases (Redshift, RDS, DynamoDB), security (IAM), and networking (VPC), and how they can be integrated to support modern-day data analytics. Finally, the paper introduces a department-driven cloud transition project in which a whole SAS programming department moved from a SAS Windows server to AWS cloud computing. It also discusses the challenges, the lessons learned, and the future of cloud computing in the Biometric department.
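To make the AWS pieces concrete, here is a minimal sketch using boto3 (the AWS SDK for Python) that pushes an analysis file to S3 and lists it back; the bucket name, file name, and key prefix are hypothetical, and credentials are assumed to be configured through IAM:

import boto3

# Assumes credentials and region are already configured (e.g., via an IAM role).
s3 = boto3.client("s3")

# Bucket, file, and key names below are hypothetical.
bucket = "biometrics-analytics-bucket"
s3.upload_file("adsl.csv", bucket, "adam/adsl.csv")  # push a local dataset to S3

# List what is stored under the adam/ prefix.
response = s3.list_objects_v2(Bucket=bucket, Prefix="adam/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])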

A fear of missing out and a fear of messing up: A Strategic Roadmap for ChatGPT Integration at Work
/slideshow/a-fear-of-missing-out-and-a-fear-of-messing-up-a-strategic-roadmap-for-chatgpt-integration-at-work/267673251 (posted Tue, 30 Apr 2024 13:30:40 GMT)
Does your organization allow ChatGPT at work? The answer might depend on where you work, since many organizations do not allow ChatGPT at work. The truth is that, for these organizations, ChatGPT represents both a fear of missing out and a fear of messing up. But, just as with past new technologies such as cloud computing and social media, organizations will eventually integrate ChatGPT or other Large Language Models (LLMs). This paper is for those, especially in Biometrics, who want to initiate ChatGPT integration at work. It presents how a Biometrics department can lead the integration of an LLM, using ChatGPT as the exemplary model, across an entire enterprise, even where the organization restricts or prohibits ChatGPT usage at work. The roadmap outlines key stages, starting with an introduction to LLMs and ChatGPT, followed by potential risks and concerns, and then the benefits and diverse use cases. It emphasizes how the Biometrics function leads the building of a cross-functional team to initiate ChatGPT integration and to establish policies and guidelines. The roadmap then discusses the crucial aspect of training, emphasizing user education and engagement based on company policies, and finishes with a Proof of Concept (PoC) to validate and evaluate ChatGPT's applicability to organizational needs and its compliance with company policies. This paper can serve as a valuable resource for navigating the implementation journey of ChatGPT, providing insights and strategies for successful integration, even within the confines of organizational limitations on ChatGPT usage.

Prompt it, not Google it - Prompt Engineering for Data Scientists
/slideshow/prompt-it-not-google-it-prompt-engineering-for-data-scientists/267673113 (posted Tue, 30 Apr 2024 13:25:01 GMT)
Since its release, ChatGPT has rapidly gained popularity, reaching 100 million users within 2 months. A new concept has even emerged: "Prompt it" is now the new "Google it." Research shows ChatGPT users complete projects 25% faster. The paper is written for Statistical Programmers and Biostatisticians who want to improve their productivity and efficiency by using ChatGPT prompts more effectively. The paper explores the pivotal role of prompts in enhancing the performance and versatility of ChatGPT and other Large Language Models. It shows how Statistical Programmers and Biostatisticians use ChatGPT's capabilities and benefits, such as content development (e.g., emails, images), information search, programming assistance in R, SAS, and Python, result interpretation, and many more. The paper also explains the distinctive advantages of prompts over traditional search methods and emphasizes the unique characteristics of prompt engineering in ChatGPT. Various techniques, such as zero-shot learning, few-shot learning, reflection, chain of thought, and tree of thought, are dissected to illustrate the nuanced ways in which prompts can be engineered to optimize outcomes. The paper also offers insights into how to prompt better by adding constraints, incorporating more context, setting roles, coaching with feedback, probing further, and introducing step-by-step instructions to ChatGPT. It discusses ChatGPT's functionality for modifying and resubmitting a prompt, copying an answer, regenerating an answer, and continuing a previous prompt. The paper highlights how statistical programmers and biostatisticians can use the transformative impact of prompts to be more productive and effective.
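As a minimal sketch of two of these techniques, role setting and few-shot examples, the snippet below uses the openai Python client; the model name, prompts, and answer format are illustrative assumptions, not recommendations from the paper:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Role setting: the system message frames the assistant as a statistical programmer.
# Few-shot learning: two worked examples show the expected answer format.
messages = [
    {"role": "system",
     "content": "You are a statistical programmer who answers with short R code only."},
    {"role": "user", "content": "Mean of mpg in mtcars?"},
    {"role": "assistant", "content": "mean(mtcars$mpg)"},
    {"role": "user", "content": "Median of hp in mtcars?"},
    {"role": "assistant", "content": "median(mtcars$hp)"},
    {"role": "user", "content": "Standard deviation of wt in mtcars?"},
]

# The model name is an illustrative assumption; use your organization's approved model.
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)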

Leading into the Unknown? Yes, we need Change Management Leadership
/slideshow/leading-into-the-unknown-yes-we-need-change-management-leadership/252207026 (posted Sun, 17 Jul 2022 01:57:21 GMT)
The paper is written for those who want to lead new changes in the biometric department. Currently, the biometric department is going through big changes, from traditional SAS® programming to open-source programming, cloud computing, data science, and even machine learning, and how to manage and lead those changes becomes critical for leaders so that the changes can be achieved under budget and on schedule. Change management consists of the activities and processes that support the success of changes in an organization and is considered a leadership competency for enabling change within the organization. More importantly, the success rate of changes directly correlates with the leaders' change management: leaders with excellent change management are six times more likely to succeed than those with poor change management. The paper discusses the major obstacles leaders will face, such as programmer and middle-management resistance and insufficient support. It also discusses success factors that leaders can apply in change management, such as detailed planning, dedicated resources and funds, experience with change, participation of programmers, frequent and transparent communication, and clear goals. Finally, the paper shows examples of how change management effectively led the successful migration from SAS® to open-source programming for a department of more than 150 SAS programmers.

How to create SDTM DM.xpt using Python v1.1
/slideshow/how-to-create-sdtm-dmxpt-using-python-v11/252207007 (posted Sun, 17 Jul 2022 01:55:47 GMT)
The paper is written for those who want to use Python to create SDTM SAS transport files from raw SAS datasets. The paper shows the similarities and differences between SAS and Python for SDTM dataset development, along with actual Python code to create an SDTM SAS transport file. The paper starts with the Python packages that can read and write SAS datasets, such as xport, sas7bdat, and pyreadstat. It introduces how Python reads raw SAS datasets, such as demographics, exposure, randomization, and disposition, from the local drive. It shows how Python derives variables such as SEX, USUBJID, RACE, RFSTDTC, and RFENDTC from the raw data, how Python merges datasets using outer and inner joins, and how programmers use the pandas DataFrame for data manipulation such as renaming, dropping, and replacing variables. Finally, the paper shows how Python can create the SDTM SAS transport file on the local drive, and it includes the actual Python code that reads the raw SAS datasets and merges, manipulates, and writes the SDTM DM SAS xport file.
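A minimal sketch of that flow, assuming hypothetical raw dataset names and deliberately simplified derivations, using pyreadstat to read the raw SAS datasets and write the DM transport file:

import pyreadstat

# Read raw SAS datasets from the local drive (file and variable names are assumptions).
demo, _ = pyreadstat.read_sas7bdat("dm_raw.sas7bdat")
expo, _ = pyreadstat.read_sas7bdat("ex_raw.sas7bdat")

# Derive DM variables (simplified): USUBJID from study and subject IDs;
# RFSTDTC/RFENDTC as first/last exposure dates (ISO 8601 strings sort correctly).
demo["USUBJID"] = demo["STUDYID"] + "-" + demo["SUBJID"].astype(str)
ref = (expo.groupby("SUBJID")["EXSTDTC"]
           .agg(RFSTDTC="min", RFENDTC="max")
           .reset_index())

# A left join keeps every DM subject, even those with no exposure records.
dm = demo.merge(ref, on="SUBJID", how="left").drop(columns=["SUBJID"])

# Write the SDTM DM transport file; version 5 is the format used for submissions.
pyreadstat.write_xport(dm, "dm.xpt", table_name="DM", file_format_version=5)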

Enterprise-level Transition from SAS to Open-source Programming for the whole department
/slideshow/enterpriselevel-transition-from-sas-to-opensource-programming-for-the-whole-department/252206973 (posted Sun, 17 Jul 2022 01:52:04 GMT)
The paper is written for those who want to learn about an enterprise-level transition from SAS to open-source programming. The paper introduces a transition project in which an entire department of 150+ SAS programmers moved completely from SAS to open-source programming. The paper starts with the scope of the project: switching the analytic platform from SAS Studio to R Pro Server, converting the existing SAS code to R/Python code, moving from a Windows server to an AWS cloud computing environment, and transitioning SAS programmers into R/Python programmers. It also discusses the challenges of the project, such as inexperience in open-source programming, a new analytic platform, and change management. The paper describes how the transition-support team, executive leadership, and SAS programmers overcame these challenges together during the project. It also discusses the differences between SAS and open-source languages and programming, and it shows some examples of converting SAS code to R/Python code. Finally, it closes with the benefits of the open-source programming transition and the lessons learned from the project.
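As a hypothetical illustration of the kind of SAS-to-Python conversion the paper discusses (the project's own examples are not reproduced here), a DATA step with conditional logic plus PROC MEANS and a pandas equivalent:

import pandas as pd

# SAS original (sketch):
#   data adsl2; set adsl;
#     if age >= 65 then agegr1 = "ELDERLY"; else agegr1 = "ADULT";
#   run;
#   proc means data=adsl2 mean std; class agegr1; var age; run;

# Hypothetical ADSL-like data for illustration.
adsl = pd.DataFrame({"USUBJID": ["01", "02", "03", "04"],
                     "AGE": [34, 71, 58, 66]})

# The DATA step conditional becomes a vectorized assignment.
adsl["AGEGR1"] = adsl["AGE"].apply(lambda a: "ELDERLY" if a >= 65 else "ADULT")

# PROC MEANS with a CLASS statement becomes a groupby aggregation.
print(adsl.groupby("AGEGR1")["AGE"].agg(["mean", "std"]))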

How I became ML Engineer
/slideshow/how-i-became-ml-engineer/252206943 (posted Sun, 17 Jul 2022 01:48:20 GMT)
One of the most popular buzzwords in the technology world today is Machine Learning (ML). Most economists and business experts foresee machine learning changing every aspect of our lives in the next 10 years by automating and optimizing processes. This is leading many organizations to seek experts who can implement machine learning in their businesses. The paper is written for statistical programmers who want to explore a machine learning career, add machine learning skills to their experience, or enter the machine learning field. The paper discusses a personal journey from statistical programmer to machine learning engineer. It shares my personal experience of what motivated me to start a machine learning career, how I started it, and what I have learned and done to become a machine learning engineer. In addition, the paper discusses the future of machine learning in the pharmaceutical industry, especially in the Biometric department.

Artificial Intelligence in Pharmaceutical Industry
/slideshow/artificial-intelligence-in-pharmaceutical-industry-233109637/233109637 (posted Sun, 03 May 2020 23:01:50 GMT)
This presentation introduces AI and its possible implementations in the pharmaceutical industry, such as drug discovery, personalized medicine, molecular target prediction, site selection, patient recruitment, process automation, process optimization, and more.

Tell stories with jupyter notebook
/slideshow/tell-stories-with-jupyter-notebook/233107374 (posted Sun, 03 May 2020 21:52:11 GMT)
The Jupyter Notebook is an open-source web application that allows programmers and data scientists to create and share documents that contain live code, visualizations, and narrative text. Jupyter Notebook is one of the most popular tools for data visualization and machine learning, and it is the perfect storytelling tool for data scientists. First, the paper starts with an introduction to Jupyter Notebook and why it is the most popular tool for data scientists to show, share, and visualize data and analyses. The paper shows how data scientists use the Python programming language in Jupyter Notebook and how they import data into Jupyter Notebook using pandas. The paper introduces the Python data visualization library matplotlib and shows how data scientists use it to easily create scatter plots, line charts, histograms, Kaplan-Meier curves, and many more. The paper then presents how data scientists use Jupyter Notebook for image recognition with visualization and machine learning: it shows how data scientists can convert images into numeric arrays and then use this numeric data to visualize and train a machine learning model for image recognition.
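A minimal sketch of that workflow as it might appear in a notebook cell, assuming a hypothetical patients.csv with AGE and WEIGHT columns and a hypothetical scan.png image:

import pandas as pd
import matplotlib.pyplot as plt

# Import data with pandas (file and column names are assumptions).
df = pd.read_csv("patients.csv")

# Two quick matplotlib charts side by side: a histogram and a scatter plot.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(df["AGE"], bins=20)
ax1.set_title("Age distribution")
ax2.scatter(df["AGE"], df["WEIGHT"], s=10)
ax2.set_title("Age vs. weight")
plt.show()

# An image is just a numeric array: rows x columns x color channels.
img = plt.imread("scan.png")
print(img.shape, img.dtype)  # e.g., (256, 256, 3), ready to feed an ML model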

The Jupyter Notebook is an open-source web application that allows programmers and data scientists to create and share documents that contain live code, visualizations and narrative text. Jupyter Notebook is one of most popular tool for data visualization and machine learning, and it is the perfect tool for story telling tool for data scientist. First, the paper will start with the introduction of Jupyter Notebook and why it is the most popular tool for data scientist to show, share and visualize the data and analysis. The paper will show how data scientist uses Python programming language in Jupyter Notebook. The paper will show how data scientists import data into Jupyter Notebook using Panda. The paper will introduce Python data visualization library, matplotlib, and show how data scientists use matplotlib to easily create scatter plot, line, histograms, Kaplan Meier curves and many more. The paper will present how data scientist use Jupyter notebook for image recognitions with visualization and machine learning. The paper will show how data scientists can convert images into numeric array. Then, the paper will show how data scientist can use this numeric data to visualize and train machine learning model for image recognition. ]]>
Sun, 03 May 2020 21:52:11 GMT /slideshow/tell-stories-with-jupyter-notebook/233107374 KevinLee56@slideshare.net(KevinLee56) Tell stories with jupyter notebook KevinLee56 The Jupyter Notebook is an open-source web application that allows programmers and data scientists to create and share documents that contain live code, visualizations and narrative text. Jupyter Notebook is one of most popular tool for data visualization and machine learning, and it is the perfect tool for story telling tool for data scientist. First, the paper will start with the introduction of Jupyter Notebook and why it is the most popular tool for data scientist to show, share and visualize the data and analysis. The paper will show how data scientist uses Python programming language in Jupyter Notebook. The paper will show how data scientists import data into Jupyter Notebook using Panda. The paper will introduce Python data visualization library, matplotlib, and show how data scientists use matplotlib to easily create scatter plot, line, histograms, Kaplan Meier curves and many more. The paper will present how data scientist use Jupyter notebook for image recognitions with visualization and machine learning. The paper will show how data scientists can convert images into numeric array. Then, the paper will show how data scientist can use this numeric data to visualize and train machine learning model for image recognition. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/tellstorieswithjupyternotebook-200503215211-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> The Jupyter Notebook is an open-source web application that allows programmers and data scientists to create and share documents that contain live code, visualizations and narrative text. Jupyter Notebook is one of most popular tool for data visualization and machine learning, and it is the perfect tool for story telling tool for data scientist. First, the paper will start with the introduction of Jupyter Notebook and why it is the most popular tool for data scientist to show, share and visualize the data and analysis. The paper will show how data scientist uses Python programming language in Jupyter Notebook. The paper will show how data scientists import data into Jupyter Notebook using Panda. The paper will introduce Python data visualization library, matplotlib, and show how data scientists use matplotlib to easily create scatter plot, line, histograms, Kaplan Meier curves and many more. The paper will present how data scientist use Jupyter notebook for image recognitions with visualization and machine learning. The paper will show how data scientists can convert images into numeric array. Then, the paper will show how data scientist can use this numeric data to visualize and train machine learning model for image recognition.
Tell stories with jupyter notebook from Kevin Lee
Perfect partnership - machine learning and CDISC standard data /slideshow/perfect-partnership-machine-learning-and-cdisc-standard-data/233099922
The most popular buzzword nowadays in the technology world is Machine Learning (ML). Most economists and business experts foresee Machine Learning changing every aspect of our lives in the next 10 years through automating and optimizing processes, and this is leading many organizations, including drug companies, to implement Machine Learning in their businesses. The presentation will start with an introduction to the basic concept of Machine Learning, the computer science technology that provides systems with the ability to learn without being explicitly programmed, and it will discuss what "without being explicitly programmed" means. The presentation will also introduce basic ML algorithms: SVM, Decision Trees, Regression, Artificial Neural Networks (ANN) and Deep Neural Networks (DNN). The presentation will also discuss the impact and potential of Machine Learning in our daily lives and in the pharmaceutical industry. The presentation will show how CDISC data can be a perfect match for Machine Learning implementation. In an ML/AI-driven process, data is considered the most important component: 80 to 90% of the work in Machine Learning is preparing data. Since the FDA mandated CDISC standards for submissions as of Dec 17th, 2016, all clinical trial data are prepared in the CDISC SDTM and ADaM formats. The presentation will show why CDISC data is a better choice than Real World Evidence (RWE) data for an ML model, and how the pharmaceutical industry can use CDISC data to build ML models and apply those models to Real World Evidence. Finally, the presentation will show how the pharma industry can use its own in-house data and Machine Learning to build innovative, data-driven business models.
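As a toy illustration of pairing standardized analysis data with an ML algorithm, and not the presentation's actual method, one could fit a decision tree to a hypothetical ADaM-style subject-level dataset (file name and column names invented here) with scikit-learn:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical ADaM-like subject-level data: one row per subject
adsl = pd.read_csv("adsl.csv")
X = adsl[["AGE", "BMIBL", "TRTDURD"]]   # invented numeric covariates
y = adsl["RESPFL"].eq("Y")              # invented responder flag

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))

Because SDTM and ADaM fix variable names and structures across studies, preparation code like the feature selection above is largely reusable, which is in line with the presentation's point that data preparation dominates ML work.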
Sun, 03 May 2020 18:54:06 GMT
Perfect partnership - machine learning and CDISC standard data from Kevin Lee
Machine Learning : why we should know and how it works /KevinLee56/machine-learning-why-we-should-know-and-how-it-works
The most popular buzzword nowadays in the technology world is Machine Learning (ML). Most economists and business experts foresee Machine Learning changing every aspect of our lives in the next 10 years through automating and optimizing processes such as: self-driving vehicles; online recommendations on Netflix and Amazon; fraud detection in banks; image and video recognition; natural language processing; question answering machines (e.g., IBM Watson); and many more. This is leading many organizations to seek experts who can implement Machine Learning in their businesses. Statistical programmers and statisticians in the pharmaceutical industry are in a very interesting position: we have backgrounds very similar to those of Machine Learning experts, such as programming, statistics and data expertise, and thus embody the essential technical skill sets needed. This similarity leads many individuals to ask us about Machine Learning, and if you lead a biometrics group, you get asked even more often. The paper is intended for statistical programmers and statisticians who are interested in learning and applying Machine Learning to lead innovation in the pharmaceutical industry. The paper will start with an introduction to the basic concepts of Machine Learning: the hypothesis, the cost function and gradient descent. Then the paper will introduce supervised ML (e.g., Support Vector Machines, Decision Trees, Logistic Regression), unsupervised ML (e.g., clustering) and the most powerful ML algorithm, the Artificial Neural Network (ANN). The paper will also introduce some popular SAS® ML procedures and SAS Visual Data Mining and Machine Learning. Finally, the paper will discuss current ML implementations, future implementations and how programmers and statisticians could lead this exciting and disruptive technology in the pharmaceutical industry.
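To make the hypothesis/cost-function/gradient-descent trio concrete, here is a minimal sketch (not taken from the paper) that fits a one-variable linear hypothesis h(x) = w*x + b by gradient descent on a mean-squared-error cost:

import numpy as np

# Toy data: y is roughly 2x + 1 plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, 100)

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    err = (w * x + b) - y            # hypothesis minus observed values
    w -= lr * 2 * (err * x).mean()   # d(cost)/dw for the MSE cost
    b -= lr * 2 * err.mean()         # d(cost)/db

print(round(w, 2), round(b, 2))      # converges near w = 2, b = 1

Each loop iteration evaluates the hypothesis, measures how wrong it is with the cost function, and nudges the parameters downhill; that loop is gradient descent.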
Sun, 03 May 2020 18:45:59 GMT
Machine Learning : why we should know and how it works from Kevin Lee
Big data for SAS programmers /slideshow/big-data-for-sas-programmers/233080317
We are living in the world of Big Data. Big Data is mainly characterized by three Vs: Volume, Velocity and Variety. The presentation will discuss how Big Data impacts us and how SAS programmers can use their SAS skills in a Big Data environment. The presentation will introduce the Big Data storage solutions Hadoop and NoSQL. For Hadoop, the presentation will discuss two major capabilities: the Hadoop Distributed File System (HDFS) and Map/Reduce (parallel computing in Hadoop). The presentation will show how SAS can work with Hadoop using the HDFS LIBNAME and FILENAME statements, SAS/ACCESS to Hadoop HIVE, and SAS Grid Manager on Hadoop YARN. The presentation will also introduce the concepts of a NoSQL database as a big data solution, and it will show how SAS can work with a variety of data formats, especially XML and JSON. The presentation will show a use case of converting XML documents to SAS datasets using the LIBNAME XMLV2 XMLMAP statement. It will also introduce REST APIs for extracting data over the internet and demonstrate how SAS PROC HTTP can move data through a REST API.
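As a language-neutral sketch of the REST-to-dataset flow that PROC HTTP enables in SAS, the same pattern in Python (hypothetical endpoint; the requests and pandas packages assumed installed) looks like this:

import requests
import pandas as pd

# Call a hypothetical REST endpoint that returns JSON
resp = requests.get("https://api.example.com/v1/records", timeout=30)
resp.raise_for_status()

# Flatten the JSON payload into a rectangular table, the analogue of a SAS dataset
records = resp.json()              # assumes the endpoint returns a list of objects
df = pd.json_normalize(records)
print(df.head())

In SAS itself the same flow is a PROC HTTP step whose OUT= fileref is then read with, for example, the JSON LIBNAME engine.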
Sun, 03 May 2020 07:30:31 GMT
Big data for SAS programmers from Kevin Lee
Big data in pharmaceutical industry /slideshow/big-data-in-pharmaceutical-industry/233077986
We are living in the world of Big Data. Big Data is mainly characterized by three Vs: Volume, Velocity and Variety. The presentation will discuss how Big Data impacts the pharmaceutical industry and how drug companies can lead in this new Big Data environment.
Sun, 03 May 2020 04:27:32 GMT
Big data in pharmaceutical industry from Kevin Lee
How FDA will reject non compliant electronic submission /slideshow/how-fda-will-reject-non-compliant-electronic-submission/233077450
Beginning Dec 18, 2016, all clinical and nonclinical trial studies must use standards (e.g., CDISC) for submission data, and beginning May 5, 2017, NDA, ANDA and BLA submissions must follow the eCTD format for submission documents. To enforce these standards mandates, the FDA released "Technical Rejection Criteria for Study Data" on the FDA eCTD website on October 3, 2016, and implemented a rejection process for submissions that do not conform to the required study data standards. The paper will discuss how these new FDA mandates impact electronic submission and the preparation required for a CDISC- and eCTD-compliant submission package, such as SDTM, ADaM, Define.xml, the SDTM annotated eCRF, the SDRG, the ADRG and SAS® programs. The paper will introduce the current FDA submission process, including the current FDA rejection processes, Technical Rejection and Refuse-to-File, and discuss how the FDA uses them to reject submissions. The paper will show how FDA rejection of CDISC non-compliant data impacts sponsors' submission processes, and how sponsors should respond to FDA rejections, as well as questions, throughout the whole submission process. Use cases will demonstrate the key technical rejection criteria that have the greatest impact on a successful submission process.
Sun, 03 May 2020 03:39:29 GMT
How FDA will reject non compliant electronic submission from Kevin Lee
End to end standards driven oncology study (solid tumor, Immunotherapy, Leukemia, Lymphoma) /slideshow/end-to-end-standards-driven-oncology-study/233073790
Each therapeutic area has its own unique data collection and analysis. Oncology, in particular, has very specific standards for the collection and analysis of data. Oncology studies also fall into one of three sub-types according to response criteria guidelines. The first sub-type, the Solid Tumor study, usually follows RECIST (Response Evaluation Criteria in Solid Tumors). The second, the Lymphoma study, usually follows Cheson. Lastly, the Leukemia study follows study-specific guidelines (IWCLL for Chronic Lymphocytic Leukemia, IWAML for Acute Myeloid Leukemia, NCCN Guidelines for Acute Lymphoblastic Leukemia and ESMO clinical practice guidelines for Chronic Myeloid Leukemia). This paper will demonstrate the notable level of sophistication implemented in CDISC standards, mainly driven by the differentiation across response criteria. The paper will specifically show which SDTM domains are used to collect the different data points in each type. For example, Solid Tumor studies collect tumor results in TR and TU and response in RS. Lymphoma studies collect not only tumor results and response, but also bone marrow assessments in LB and FA, and spleen and liver enlargement in PE. Leukemia studies collect blood counts (i.e., lymphocytes, neutrophils, hemoglobin and platelet count) in LB and genetic mutations, in addition to what is collected in Lymphoma studies. The paper will also introduce oncology terminology (e.g., CR, PR, SD, PD, NE) and the oncology-specific ADaM Time to Event (--TTE) data set. Finally, the paper will show how standards (e.g., response criteria guidelines and CDISC) streamline clinical trial artefacts development in oncology studies and how end to end clinical trial artefacts development can be accomplished through this standards-driven process.
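As a deliberately simplified toy sketch (invented values, not the paper's derivation), the step from a response domain to a time-to-event record might look like this: take the first PD as a progression event and censor subjects who never progress at their last assessment:

import pandas as pd

# Hypothetical RS-like data: one row per response assessment
rs = pd.DataFrame({
    "USUBJID": ["001", "001", "001", "002", "002"],
    "RSSTRESC": ["SD", "PR", "PD", "CR", "CR"],
    "ADY": [56, 112, 168, 56, 112],
})

def pfs_record(subj: pd.DataFrame) -> pd.Series:
    """First PD is an event; otherwise censor at the last assessment."""
    pd_rows = subj[subj["RSSTRESC"] == "PD"]
    if len(pd_rows):
        return pd.Series({"AVAL": pd_rows["ADY"].min(), "CNSR": 0})
    return pd.Series({"AVAL": subj["ADY"].max(), "CNSR": 1})

adtte = rs.groupby("USUBJID").apply(pfs_record).reset_index()
print(adtte)  # subject 001: event at day 168; subject 002: censored at day 112

A real --TTE derivation would also handle deaths, missed assessments and the applicable response criteria; this sketch only shows the basic event-versus-censor shape of the data set.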
Sat, 02 May 2020 22:39:25 GMT
End to end standards driven oncology study (solid tumor, Immunotherapy, Leukemia, Lymphoma) from Kevin Lee
Are you ready for Dec 17, 2016 - CDISC compliant data? /slideshow/are-you-ready-for-dec-17-2016-cdisc-compliant-data/233049348
Are you ready for Dec 17th, 2016? According to the FDA Data Standards Catalog v4.4, all clinical trial studies starting after December 17th, 2016, with the exception of certain INDs, will be required to have CDISC compliant data. Organizations that are unclear on their compliance status will find FDA expectations elucidated in this paper. The paper will show how programmers can interpret and understand the crucial elements of the FDA Data Standards Catalog, which include the support begin date, support end date, requirement begin date and requirement end date of specific standards for both eCTD and CDISC. First, the paper will provide a brief introduction to regulatory recommendations for electronic submission, including submission methods, the five modules of the CTD (especially m5), technical deficiencies in submissions, and more. The paper will also discuss what programmers need to prepare for submission according to FDA and CDISC guidelines: the CSR, Protocol, SAP, SDTM annotated eCRF, SDTM datasets, ADaM datasets, ADaM dataset SAS® programs and Define.xml. Additionally, the paper will discuss formatting logistics that programmers should be aware of when preparing documents, including the length, naming conventions and file formats of electronic files. For example, SAS data sets should be submitted in the SAS transport file format, and SAS programs should be submitted as text files rather than in SAS format. Finally, based on information from the FDA CSS meeting and the FDA Study Data Technical Conformance Guide v3.0, the paper will discuss the latest FDA concerns and issues with electronic submissions, including the size of SAS data sets, the lack of a Trial Design dataset (TS) or Define.xml, the importance of the Reviewer Guide, and more.
Sat, 02 May 2020 09:10:29 GMT
Are you ready for Dec 17, 2016 - CDISC compliant data? from Kevin Lee
SAS integration with NoSQL data /slideshow/sas-integration-with-nosql-data/233045707
We are living in a world of abundant data, so-called big data. The term big data is closely associated with unstructured data, which is called unstructured or NoSQL data because it does not fit neatly into a traditional row-column relational database. A NoSQL (Not only SQL, or non-relational SQL) database is a type of database that can handle unstructured data, such as XML (Extensible Markup Language), JSON (JavaScript Object Notation) or RDF (Resource Description Framework) files. If an enterprise can extract unstructured data from NoSQL databases and transfer it to the SAS environment for analysis, this produces tremendous value, especially from a big data solutions standpoint. This paper will show how unstructured data is stored in NoSQL databases and ways to transfer it to the SAS environment for analysis. First, the paper will introduce the NoSQL database. Secondly, the paper will show how the SAS system connects to NoSQL databases using a REST (Representational State Transfer) API (Application Programming Interface); for example, SAS programmers can use PROC HTTP to extract XML or JSON files from the NoSQL database through a REST API. Finally, the paper will show how SAS programmers can convert XML and JSON files to SAS datasets for analysis; for example, they can create XMLMap files and use the XMLV2 LIBNAME engine to convert the extracted XML files to SAS datasets.
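The XML-to-dataset step the paper describes can also be sketched outside SAS. As a rough Python analogue of mapping XML elements into a rectangular table (the document structure below is invented for illustration), one might write:

import pandas as pd
import xml.etree.ElementTree as ET

# Hypothetical XML payload, e.g. extracted from a NoSQL store via a REST API
xml_doc = """
<patients>
  <patient><id>001</id><age>54</age></patient>
  <patient><id>002</id><age>61</age></patient>
</patients>
"""

root = ET.fromstring(xml_doc)
rows = [{child.tag: child.text for child in patient} for patient in root]
df = pd.DataFrame(rows)   # rectangular table, analogous to a SAS dataset
print(df)

An XMLMap file plays the same role as the dictionary comprehension here: it tells the XMLV2 engine which XML elements become which columns.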
Sat, 02 May 2020 07:42:08 GMT
SAS integration with NoSQL data from Kevin Lee
Introduction of semantic technology for SAS programmers /slideshow/introduction-of-semantic-technology-for-sas-programmers-233025030/233025030
There is a new technology for expressing and searching data that can provide richer meaning and relationships: semantic technology. Semantic technology makes it easy to add, change and implement meaning and relationships in existing data. Companies such as Facebook and Google are already using it; for example, Facebook Graph Search uses semantic technology to provide more meaningful searches for users. The paper will introduce the basic concepts of semantic technology and its graph data model, the Resource Description Framework (RDF). RDF links data elements in a self-describing way using three parts: subject, predicate and object. The paper will introduce applications and examples of RDF elements, along with three different representations of RDF: RDF/XML, Turtle and N-Triples. The paper will also introduce the CDISC standards RDF representation and the Reference and Review Guide published by CDISC and PhUSE CSS, showing how CDISC standards are represented and displayed in RDF format. The paper will then introduce SPARQL (the SPARQL Protocol and RDF Query Language), which can retrieve and manipulate data in RDF format, and show how programmers can use SPARQL to re-represent RDF-formatted CDISC standards metadata in a structured tabular format. Finally, the paper will discuss the benefits and future of semantic technology, what it means to SAS programmers and how programmers can take advantage of this new technology.
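As a small, self-contained sketch of the subject-predicate-object model and a SPARQL query (toy vocabulary invented here; Python's rdflib library is assumed, not any tool named in the paper):

from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/cdisc/")
g = Graph()

# Each fact is one (subject, predicate, object) triple
g.add((EX.AETERM, EX.hasLabel, Literal("Reported Term for the Adverse Event")))
g.add((EX.AETERM, EX.belongsToDomain, EX.AE))

# SPARQL pulls the graph back into a tabular shape
q = """
PREFIX ex: <http://example.org/cdisc/>
SELECT ?variable ?label
WHERE { ?variable ex:hasLabel ?label . }
"""
for variable, label in g.query(q):
    print(variable, "|", label)

The SELECT clause is what makes the re-tabularization possible: each query variable becomes a column, and each matching set of triples becomes a row.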
Fri, 01 May 2020 22:43:57 GMT
Standards Metadata Management (system) /slideshow/standards-metadata-management-system/54460461
Over the past decade, CDISC standards have been widely accepted and implemented in clinical research, and the FDA's final Guidance for Industry on electronic submissions mandates that submission data conform to CDISC standards such as SDTM, ADaM, and SEND. This presentation will discuss how life sciences organizations can use standards metadata to manage the regulatory compliance process. It will show how standards metadata management not only ensures regulatory compliance but also supports process efficiency in developing clinical trial artefacts (e.g., protocol, CDASH, SDTM, and ADaM), strengthens standards governance, and enables efficient communication between organizational units. It will also introduce a standards metadata management system and discuss how such a system creates, stores, governs, and manages standards, and it will show how the system interacts with an ETL system to drive standards-driven development of clinical artefacts, as sketched below.
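As a minimal sketch of the idea that governed standards metadata can drive artefact development, the following Python example stores variable-level metadata centrally and derives a dataset specification from it. All names here (VariableMetadata, the DM variables, build_domain_spec) are illustrative assumptions, not taken from the presentation or any particular metadata management system.

    from dataclasses import dataclass, asdict

    @dataclass
    class VariableMetadata:
        domain: str     # e.g., an SDTM domain such as DM
        name: str       # variable name
        label: str      # variable label
        data_type: str  # Char / Num
        role: str       # Identifier, Topic, Qualifier, Timing

    # Governed standards metadata, stored centrally by the management system.
    STANDARDS = [
        VariableMetadata("DM", "USUBJID", "Unique Subject Identifier", "Char", "Identifier"),
        VariableMetadata("DM", "BRTHDTC", "Date/Time of Birth", "Char", "Timing"),
    ]

    def build_domain_spec(domain: str) -> list[dict]:
        """Derive a dataset specification for one domain from the governed
        metadata -- the kind of artefact an ETL system could consume for
        standards-driven dataset builds."""
        return [asdict(v) for v in STANDARDS if v.domain == domain]

    print(build_domain_spec("DM"))

Because every downstream specification is generated from the same governed metadata, a change made once in the central store propagates consistently to the artefacts that depend on it.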

Wed, 28 Oct 2015 03:52:27 GMT /slideshow/standards-metadata-management-system/54460461 KevinLee56@slideshare.net(KevinLee56) Standards Metadata Management (system)
I am a lifetime learner who loves to learn. I also love to share my mistakes and experiences with others, and I have presented about 100 papers at various conferences and meetings. I am positive, extremely curious, humble, but deliberate. I am always motivated to grow and improve myself, and I would rather have a team success than an individual success. From a career perspective, I am mainly interested in leadership and innovative technologies.