This is the presentation I gave in the plenary session at the ICCS (Presentation A3). It has been slightly modified for publication. Please contact me if you have any questions!
This document provides an overview of three-dimensional displays, from physiological depth cues to electronic holograms. It discusses depth cues our eyes use to perceive depth, including psychological cues like occlusion and physiological cues like binocular disparity. Examples of 3D displays that provide some depth cues, like lenticular sheets, are described. The document also covers holograms, including how they can provide all depth cues by reconstructing the original wavefront. It discusses challenges like the large amount of information in holograms and methods to reduce it, like rainbow and multiplex holograms. Computer generated and electronic holograms using dynamic modulators are also summarized.
Why we don't know how many colors there are (Jan Morovic)
There is no definitive answer to how many colors exist because the concept of color depends on factors like the illumination, viewing conditions, and human perception. Computational models can predict color gamuts under different scenarios, but the largest gamut volume estimated is around 6.6 million colors using real measured light sources, which still may not capture all possible colors perceivable by humans. Determining all possible colors ultimately requires a color appearance model that more closely mimics the complexities of human vision.
This document discusses using boosted decision trees to select important hyperspectral bands for geology classification. It aims to reduce dimensionality and processing time while maintaining classification accuracy. The method embeds band selection within the boosting process to identify the most informative bands. Experiments are conducted on hyperspectral data from an iron ore mine to evaluate the approach.
Steerable Filters generated with the Hypercomplex Dual-Tree Wavelet Transform... (Jan Wedekind)
The use of wavelets in the image processing domain is still in its infancy, and largely associated with image compression. With the advent of the dual-tree hypercomplex wavelet transform (DHWT) and its improved shift invariance and directional selectivity, applications in other areas of image processing are more conceivable. This paper discusses the problems and solutions in developing the DHWT and its inverse. It also offers a practical implementation of the algorithms involved. The aim of this work is to apply the DHWT in machine vision.
Tentative work on a possible new way of feature extraction is presented. The paper shows that 2-D hypercomplex basis wavelets can be used to generate steerable filters which allow rotation as well as translation.
Initial Sintering Mechanism of Mesocarbon Microbeads (guestdc9119)
The document analyzes the initial sintering mechanisms of mesocarbon microbeads through various experiments. It finds that sample shrinkage is primarily due to increasing theoretical density caused by crystallographic transformation during heating. The β-resin helps maintain particle cohesion during sintering. Sample porosity remains largely unaffected by the sintering process. Thermogravimetric analysis and heating schedule experiments help understand mass change and shrinkage over time and temperature.
Using proteomics, researchers analyzed the physiological response of the Pacific oyster Crassostrea gigas to varying levels of ocean acidification (pCO2) and mechanical stress. They identified 1,241 proteins and found oxidation-reduction was the most significantly enriched process. pCO2 had a significant effect on the proteome, while mechanical stress only impacted the proteome at high pCO2. Exposure to multiple stressors can impact an organism's ability to respond to either stressor individually. Considering multiple stressors is important for assessing responses to environmental change.
This document discusses compressive sensing and its applications for transient signal analysis. It introduces compressive sensing as a technique to reduce data measurements while preserving signal information using sparsity. Transient signals are sparse and can be represented by a small number of waveforms. The document proposes using compressive sensing for transient detection by exploiting signal sparsity and reconstructing signals from undersampled data. It describes applications in power quality analysis, audio/biomedical signals, and radar. Advantages over wavelet methods include preserving transient characteristics like amplitude and frequency.
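As a toy illustration of the sparse-recovery idea summarized above (and not the document's own method), the sketch below recovers a synthetic sparse signal from random, undersampled measurements using scikit-learn's orthogonal matching pursuit; the signal length, number of measurements, and sparsity level are assumed values.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                    # signal length, measurements, nonzeros (assumed)

# Synthetic k-sparse signal and a random Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x                             # m << n compressed measurements

# Reconstruct by exploiting sparsity
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
print("max reconstruction error:", np.abs(omp.coef_ - x).max())
```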
This document provides an overview of basic statistics concepts and terminology. It discusses descriptive and inferential statistics, measures of central tendency (mean, median, mode), measures of variability, distributions, correlations, outliers, frequencies, t-tests, confidence intervals, research designs, hypotheses testing, and data analysis procedures. Key steps in research like research design, data collection, and statistical analysis are outlined. Descriptive statistics are used to describe data while inferential statistics investigate hypotheses about populations. Common statistical analyses and concepts are also defined.
This document outlines the key topics to be covered in an introductory econometrics course, including definitions, applications, and illustrations of econometrics. The course will define econometrics and statistics, explore how econometrics is used to test economic theories, estimate relationships, and make policy recommendations. Examples of applying econometrics to questions around class size and grades, racial discrimination in mortgage lending, taxes and cigarette smoking, and forecasting inflation will also be discussed to illustrate distinguishing econometrics from general statistics.
This document appears to be a chapter from a textbook on statistics that includes sample problems and answers. It provides the table of contents for homework problems 1-23 from Chapter 1 on introduction to statistics. The problems cover topics like identifying independent and dependent variables, distinguishing between experimental and non-experimental research studies, different types of variables, and computing statistical expressions. The document serves as a reference for students to review examples and solutions to common introductory statistics problems.
Applied Statistics: Sampling method & central limit theorem (wahidsajol)
This document discusses sampling methods and the central limit theorem. It provides details on types of probability sampling including simple random sampling, systematic sampling, stratified sampling, and cluster sampling. Simple random sampling involves randomly selecting items from a population so that each item has an equal chance of selection. Systematic sampling selects every kth item from a population. Stratified sampling divides a population into subgroups and then randomly samples from each subgroup. Cluster sampling divides a population into geographical clusters and randomly samples from each cluster. The document also explains that the central limit theorem states that the sampling distribution of sample means will approximate a normal distribution as sample size increases.
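For readers who want to see the central limit theorem in action, here is a minimal simulation sketch (not taken from the document): sample means drawn from a deliberately skewed population tighten around the population mean at the rate the theorem predicts.

```python
import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=2.0, size=100_000)   # skewed, clearly non-normal population

# The distribution of sample means narrows as 1/sqrt(n) and becomes approximately normal
for n in (2, 10, 50):
    sample_means = rng.choice(population, size=(10_000, n)).mean(axis=1)
    print(f"n={n:>2}  mean of sample means={sample_means.mean():.3f}  "
          f"sd={sample_means.std():.3f}  (theory: {population.std() / np.sqrt(n):.3f})")
```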
This document discusses the role and importance of statistics in scientific research. It begins by defining statistics as the science of learning from data and communicating uncertainty. Statistics are important for summarizing, analyzing, and drawing inferences from data in research studies. They also allow researchers to effectively present their findings and support their conclusions. The document then describes how statistics are used and are important in many fields of scientific research like biology, economics, physics, and more. It also provides examples of statistical terms commonly used in research studies and some common misuses of statistics.
Objective Determination Of Minimum Engine Mapping Requirements For Optimal SI... (pmaloney1)
The document discusses determining the minimum number of engine test points needed for optimal spark-ignition direct-injection engine calibration. Results found that approximately 100 spark sweeps were required to optimally calibrate the engine using a 2-stage modeling approach. A problem-solving approach was used that involved designing test matrices, developing 2-stage models, and comparing calibrations from reduced and exhaustive test designs.
Towards Probabilistic Assessment of Modularity (Kevin Hoffman)
We propose new modularity metrics based on probabilistic computations over weighted design structure matrices, also known as dependency structure matrices.
We define how these metrics are calculated, discuss some practical applications, and conclude with future work.
This document discusses machine learning projects for classifying and clustering glass data using R. It first describes classifying glass data using ridge regression and plotting the results. It shows the classification performance is good based on error rate but poor based on ROC. Using higher order polynomials improves ROC, TPR and FPR. It also notes how to properly implement ridge regression. The document then demonstrates clustering glass data using multi-dimensional scaling, k-means clustering, and hierarchical clustering and compares the clustering to the original labels.
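The analysis described above was carried out in R on the UCI glass data; as a rough, illustrative analogue only, the following Python sketch runs ridge classification (with polynomial features standing in for the "higher order polynomials") and k-means clustering on synthetic stand-in data of the same shape.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Stand-in for the 214 x 9 UCI glass data with 6 classes; swap in the real CSV if available
X, y = make_classification(n_samples=214, n_features=9, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

for degree in (1, 2):                       # degree 2 mimics adding higher-order terms
    clf = make_pipeline(StandardScaler(), PolynomialFeatures(degree), RidgeClassifier(alpha=1.0))
    print(f"degree {degree}: CV accuracy {cross_val_score(clf, X, y, cv=5).mean():.3f}")

# Unsupervised view: compare k-means clusters with the original labels
clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
print("adjusted Rand index vs. labels:", round(adjusted_rand_score(y, clusters), 3))
```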
1) The authors developed new models for quartz-enhanced photoacoustic spectroscopy (QEPAS) sensors that account for viscous damping effects in order to enable numerical optimization of sensor design.
2) Their viscous damping model describes how fluid viscosity attenuates acoustic pressure waves and dampens the resonant mechanical deformation of the quartz tuning fork in QEPAS sensors.
3) Preliminary experimental validation showed good agreement between the model and measurements of acoustic signal strength as a function of laser beam position, though discrepancies occurred at larger distances due to unmodeled QEPAS-ROTADE interaction effects.
1) The document discusses different radiotherapy techniques for head and neck tumors including conformal radiotherapy and IMRT.
2) It describes target volumes according to ICRU guidelines and the delineation of lymph node levels in the neck.
3) Examples are provided of different conformal radiotherapy plans including an "antique" 7-field plan, a "middle age" 9-field plan, and an "art nouveau" 10-field plan with comparisons of dose distributions and organ at risk sparing.
4) Inverse planning IMRT is also discussed with an example cumulative segment dose distribution.
This document reviews different techniques for predicting solar irradiance levels including:
1) Numerical weather prediction models and statistical prediction for short to long term forecasting.
2) MOS (Model Output Statistics) techniques using sky cover products from weather centers for short term prediction.
3) Satellite-based methods comparing different approaches and error analysis for short term forecasting.
4) Signal analysis techniques including wavelet transforms and artificial neural networks combined with recurrent networks for improved prediction.
Future work proposed includes combining these methods along with normalized data and forecasts from weather centers to improve prediction accuracy across timescales.
1. The document contains graphs and tables about Wikipedia data such as the number of editors over time, article quality metrics, and calculation times for important editor identification methods.
2. It analyzes the impact of reducing the number of editors on metrics like description amount, number of articles, and calculation time.
3. Methods that consider both description amount and number of articles showed higher correlation and lower increased rank than methods using a single metric.
Natalia Restrepo-Coupe_Remotely-sensed photosynthetic phenology and ecosystem... (TERN Australia)
This document discusses using remotely sensed data and tower eddy covariance CO2 flux measurements to study phenology and ecosystem productivity. It notes that flux tower data can help validate remote sensing phenology products by determining if they correctly capture dates of green-up, peak growing season, end of season, and season length. The document also aims to better understand what vegetation indices mean quantitatively and their biome-specific relationships to in-situ ecosystem behavior and capacity. Improving this understanding could lead to more robust land surface models informed by remote sensing.
Paper and pencil_cosmological_calculator (Sérgio Sacani)
The document describes a paper-and-pencil cosmological calculator designed for the ΛCDM cosmological model. The calculator contains nomograms (graphs) for quantities like redshift, distance, size, age, and more for different redshift intervals up to z=20. It is based on cosmological parameters from the Planck mission of H0=67.15 km/s/Mpc, ΩΛ=0.683, and Ωm=0.317. To use the calculator, the user finds a known value and reads off other quantities at the same horizontal level.
This document summarizes the shape context algorithm for shape matching and object recognition. It discusses computing shape contexts, which describe the distribution of relative positions of points around a shape. Shape contexts are represented as histograms of distances and angles between points. The algorithm finds correspondences between points on two shapes by matching their shape contexts and minimizing the total cost. Additional cost terms can be added, such as color or texture differences. The algorithm is shown to be robust to noise and inexact rotations.
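To make the descriptor concrete, here is a minimal numpy sketch of a shape-context histogram (log-distance by angle bins of the relative positions of the other points); the bin counts and radial range are assumptions, not values from the document.

```python
import numpy as np

def shape_contexts(points, n_r=5, n_theta=12):
    """One log-polar histogram per point, built from distances and angles to the other points."""
    points = np.asarray(points, dtype=float)
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    angle = np.arctan2(diff[..., 1], diff[..., 0])
    mean_dist = dist[dist > 0].mean()                       # normalize for scale invariance
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_dist
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)
    hists = []
    for i in range(len(points)):
        others = np.arange(len(points)) != i                # exclude the point itself
        h, _, _ = np.histogram2d(dist[i, others], angle[i, others], bins=[r_edges, t_edges])
        hists.append(h.ravel())
    return np.array(hists)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
print(shape_contexts(square).shape)    # (5 points, 5 * 12 bins)
```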
1. The document discusses Wikipedia and analyzes data related to editor contributions and page views over time.
2. It finds that most edits on Wikipedia are made by a small group of elite editors, with 20% of editors contributing 80% of edits, following a Zipfian distribution.
3. The analysis also examines how reducing the number of editors impacts the amount of content on Wikipedia pages and determines that credible editors who make high quality contributions can help offset reductions in editor numbers.
This new release is feature-rich: we added several new functions, including trend analysis (for linear, polynomial, logarithmic, and exponential trends), histograms, spectral analysis (discrete Fourier transform), and more. We also revised the existing correlation function (XCF) to support new methods (e.g. Kendall, Spearman) and added a statistical test for examining its significance. Finally, NumXL now includes a new unit-root and stationarity test: the Augmented Dickey-Fuller (ADF) test.
http://www.spiderfinancial.com/products/numxl
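NumXL itself is an Excel add-in, so as a library-neutral illustration of the Augmented Dickey-Fuller test mentioned in the release notes, the sketch below uses statsmodels' adfuller on synthetic series: white noise (stationary) versus a random walk (unit root).

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
noise = rng.standard_normal(500)     # stationary series
walk = np.cumsum(noise)              # random walk: contains a unit root

for name, series in (("white noise", noise), ("random walk", walk)):
    stat, pvalue, *_ = adfuller(series)
    print(f"{name:12s}  ADF statistic = {stat:6.2f}  p-value = {pvalue:.3f}")
```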
The FDA's emphasis on quality by design began with the recognition that increased testing does not improve product quality (this has long been recognized in other industries). For quality to increase, it must be built into the product, which requires understanding how formulation and manufacturing process variables influence product quality. Quality by Design (QbD) is a systematic approach to pharmaceutical development that begins with predefined objectives and emphasizes product and process understanding and process control, based on sound science and quality risk management.
This presentation, Part IV in the series, deals with the concepts of Design Space, Design of Experiments, and Models. It was compiled from material freely available from the FDA, ICH, EMEA, and other free resources on the world wide web.
This document discusses analog to digital converters (ADCs). It begins by explaining that ADCs convert analog signals, which have infinite states, into digital signals that only have two states (on or off). The document then discusses the basic concepts of quantization and coding in ADCs. It provides examples of different types of ADCs including flash, dual-slope, voltage-to-frequency, and successive approximation ADCs. For successive approximation ADCs, it provides a detailed example of the conversion process. Finally, it discusses implementing an ADC using the MC68HC11A8 microcontroller and its registers.
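The successive-approximation conversion process described above can be sketched in a few lines; the 8-bit resolution and 5 V reference below are example values, and this shows only the algorithm, not the MC68HC11A8 register interface covered in the document.

```python
def sar_adc(v_in, v_ref=5.0, bits=8):
    """Successive approximation: decide one bit per step, starting at the MSB."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)                  # tentatively set the next bit
        v_dac = trial * v_ref / (1 << bits)      # internal DAC output for that trial code
        if v_dac <= v_in:                        # keep the bit only if the guess is not too high
            code = trial
    return code

print(sar_adc(3.2))   # 3.2 V with an 8-bit, 5 V reference gives code 163
```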
Faster, More Effective Flowgraph-based Malware Classification (Silvio Cesare)
Silvio Cesare is a PhD candidate at Deakin University researching malware detection and automated vulnerability discovery. His current work extends his Masters research on fast automated unpacking and classification of malware. He presented this work last year at Ruxcon 2010. His system uses control flow graphs and q-grams of decompiled code as "birthmarks" to detect unknown malware samples that are suspiciously similar to known malware, reducing the need for signatures. He evaluated the system on 10,000 malware samples with only 10 false positives. The system provides improved effectiveness and efficiency over his previous work in 2010.
1. The document discusses the Discrete Cosine Transform (DCT), which is commonly used in image and video processing applications to decorrelate pixel data and reduce redundancy.
2. A typical image/video transmission system first applies a transformation like the DCT in the source encoder to decorrelate pixels, followed by quantization and entropy encoding to further compress the data.
3. The DCT maps the spatially correlated pixel data into transformed coefficients that are largely uncorrelated, allowing more efficient compression by reducing the number of bits needed to represent the image information.
1. The document discusses the Discrete Cosine Transform (DCT), which is commonly used in image and video processing applications to decorrelate pixel data and reduce redundancy.
2. A typical image/video transmission system first applies a transformation like the DCT in the source encoder to decorrelate pixels, followed by quantization, entropy encoding, and channel encoding for transmission.
3. The DCT aims to map spatially correlated pixel data into uncorrelated transform coefficients to exploit the fact that pixel values can be predicted from neighbors, allowing for better data compression compared to the original spatial domain representation.
1. The document discusses the Discrete Cosine Transform (DCT), which is commonly used in image and video processing applications to decorrelate pixel data and reduce redundancy.
2. A typical image/video transmission system first applies a transformation like the DCT in the source encoder to decorrelate pixel values, followed by quantization and entropy encoding to further compress the data.
3. The DCT maps the spatially correlated pixel data into transformed coefficients that are decorrelated. This decorrelation reduces interpixel redundancy and allows more efficient compression of image and video data.
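As a small, self-contained illustration of the decorrelation and energy-compaction property these summaries describe, the sketch below applies a 2-D DCT to a smooth (spatially correlated) 8x8 block and reconstructs it from only a handful of coefficients; the block size and coefficient count are arbitrary choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
# A smooth 8x8 block: cumulative sums make neighbouring "pixels" highly correlated
block = np.cumsum(np.cumsum(rng.standard_normal((8, 8)), axis=0), axis=1)

coeffs = dctn(block, norm='ortho')                           # forward 2-D DCT
magnitudes = np.sort(np.abs(coeffs).ravel())[::-1]
top8 = (magnitudes[:8] ** 2).sum() / (magnitudes ** 2).sum()
print(f"energy captured by the 8 largest of 64 coefficients: {top8:.1%}")

# Reconstruction from those 8 coefficients alone stays close to the original block
kept = np.where(np.abs(coeffs) >= magnitudes[7], coeffs, 0.0)
print("max error of 8-coefficient reconstruction:", np.abs(block - idctn(kept, norm='ortho')).max())
```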
Acute & Chronic Inflammation, Chemical mediators in Inflammation and Wound he... (Ganapathi Vankudoth)
Complete information on inflammation, including types of inflammation, the purpose of inflammation, the pathogenesis of acute inflammation, chemical mediators in inflammation, types of chronic inflammation, wound healing and inflammation in skin repair, the phases of wound healing, factors influencing wound healing, and types of wound healing.
Flag Screening in Physiotherapy Examination.pptx (BALAJI SOMA)
Flag screening is a crucial part of physiotherapy assessment that helps in identifying medical, psychological, occupational, and social barriers to recovery. Recognizing these flags ensures that physiotherapists make informed decisions, provide holistic care, and refer patients appropriately when necessary. By integrating flag screening into practice, physiotherapists can optimize patient outcomes and prevent chronicity of conditions.
PERSONALITY DEVELOPMENT & DEFENSE MECHANISMS.pptx (ABHAY INSTITUTION)
Personality theory is a collection of ideas that explain how a person's personality develops and how it affects their behavior. It also seeks to understand how people react to situations, and how their personality impacts their relationships.
Key aspects of personality theory
Personality traits: The characteristics that make up a person's personality.
Personality development: How a person's personality develops over time.
Personality disorders: How personality theories can be used to study personality disorders.
Personality and environment: How a person's personality is influenced by their environment.
Unit 1: Introduction to Histological and Cytological techniques
Differentiate histology and cytology
Overview on tissue types
Function and components of the compound light microscope
Overview on common Histological Techniques:
o Fixation
o Grossing
o Tissue processing
o Microtomy
o Staining
o Mounting
Application of histology and cytology
Dr. Anik Roy Chowdhury
MBBS, BCS(Health), DA, MD (Resident)
Department of Anesthesiology, ICU & Pain Medicine
Shaheed Suhrawardy Medical College Hospital (ShSMCH)
1. Explain the physiological control of glomerular filtration and renal blood flow
2. Describe the humoral and autoregulatory feedback mechanisms that mediate the autoregulation of renal plasma flow and glomerular filtration rate
Stability of Dosage Forms as per ICH Guidelines (KHUSHAL CHAVAN)
This presentation covers the stability testing of pharmaceutical dosage forms according to ICH guidelines (Q1A-Q1F). It explains the definition of stability, various testing protocols, storage conditions, and evaluation criteria required for regulatory submissions. Key topics include stress testing, container closure systems, stability commitment, and photostability testing. The guidelines ensure that pharmaceutical products maintain their identity, purity, strength, and efficacy throughout their shelf life. This resource is valuable for pharmaceutical professionals, researchers, and regulatory experts.
FAO's Support Rabies Control in Bali_Jul22.pptx (Wahid Husein)
What is FAO doing to support rabies control programmes in Bali, Indonesia, using a One Health approach with mass dog vaccination and integrated bite case management as the main strategies?
Solubilization in Pharmaceutical Sciences: Concepts, Mechanisms & Enhancement... (KHUSHAL CHAVAN)
This presentation provides an in-depth understanding of solubilization and its critical role in pharmaceutical formulations. It covers:
Definition & Mechanisms of Solubilization
Role of surfactants, micelles, and bile salts in drug solubility
Factors affecting solubilization (pH, polarity, particle size, temperature, etc.)
Methods to enhance drug solubility (Buffers, Co-solvents, Surfactants, Complexation, Solid Dispersions)
Advanced approaches (Polymorphism, Salt Formation, Co-crystallization, Prodrugs)
This resource is valuable for pharmaceutical scientists, formulation experts, regulatory professionals, and students interested in improving drug solubility and bioavailability.
BIOMECHANICS OF THE MOVEMENT OF THE SHOULDER COMPLEX.pptx (drnidhimnd)
The shoulder complex acts in a coordinated fashion to provide the smoothest and greatest possible range of motion of the upper limb.
Combined motion of GH and ST joint of shoulder complex helps in:
Distribution of motion between other two joints.
Maintenance of glenoid fossa in optimal position.
Maintenance of a good length-tension relationship
Although some amount of glenohumeral motion may occur while the other shoulder articulations remain stabilized, movement of the humerus more commonly involves some movement at all three shoulder joints.
Local Anesthetic Use in the Vulnerable Patients (Reza Aminnejad)
Local anesthetics are a cornerstone of pain management, but their use requires special consideration in vulnerable groups such as pediatric, elderly, diabetic, or obese patients. In this presentation, we'll explore how factors like age and physiology influence the selection, dosing, and safety of local anesthetics. By understanding these differences, we can optimize patient care and minimize risks.
Best Sampling Practices Webinar USP <797> Compliance & Environmental Monito... (NuAire)
Best Sampling Practices Webinar USP <797> Compliance & Environmental Monitoring
Are your cleanroom sampling practices USP <797> compliant? This webinar, hosted by Pharmacy Purchasing & Products (PP&P Magazine) and sponsored by NuAire, features microbiology expert Abby Roth discussing best practices for surface & air sampling, data analysis, and compliance.
Key Topics Covered:
- Viable air & surface sampling best practices
- USP <797> requirements & compliance strategies
- How to analyze & trend viable sample data
- Improving environmental monitoring in cleanrooms
Watch Now: https://www.nuaire.com/resources/best-sampling-practices-cleanroom-usp-797
Stay informed: follow Abby Roth on LinkedIn for more cleanroom insights!
9th ICCS Noordwijkerhout
1. Real World Applications of
Proteochemometric Modeling
The Design of Enzyme Inhibitors and
Ligands of G-Protein Coupled Receptors
2. Contents
Our current approach to Proteochemometric Modeling
Part I: PCM applied to non-nucleoside reverse
transcriptase inhibitors and HIV mutants
Part II: PCM applied to small molecules and the
Adenosine receptors
Conclusions
3. What is PCM?
Proteochemometric modeling needs both a ligand descriptor and a target descriptor.
The descriptors need to be compatible with each other and with the machine learning technique.
GJP van Westen, JK Wegner et al. MedChemComm (2011), 16-30, 10.1039/C0MD00165A
6. Ligand Descriptors
Scitegic Circular Fingerprints
Circular, substructure based
fingerprints
Maximal diameter of 3 bonds from
central atom
Each substructure is converted to a
molecular feature
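The Scitegic circular fingerprints used in this work are part of Pipeline Pilot; as a rough open-source analogue (an assumption, not the authors' exact descriptor), RDKit's hashed Morgan fingerprint builds the same kind of substructure-around-a-central-atom features. The radius, bit length, and example molecule below are illustrative choices.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

# Arbitrary example molecule (aspirin); not one of the NNRTIs from the study
mol = Chem.MolFromSmiles('CC(=O)Oc1ccccc1C(=O)O')

# Hashed circular fingerprint: substructures grown outwards from each atom,
# hashed into a fixed-length bit vector (one bit per molecular feature)
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
ligand_descriptor = np.array(list(fp), dtype=float)
print(int(ligand_descriptor.sum()), "substructure features set out of", ligand_descriptor.size)
```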
7. Target Descriptors
Select binding site residues from full protein
sequence
Each unique hashed feature represents one
amino acid type (comparable with circular
fingerprints)
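A minimal sketch of the idea on this slide, assuming a simple one-block-per-residue encoding (the actual hashed features used in the study are not reproduced here): only the selected binding-site positions of the full sequence contribute, and each block encodes the amino acid type at that position.

```python
import numpy as np

AMINO_ACIDS = 'ACDEFGHIKLMNPQRSTVWY'

def binding_site_descriptor(sequence, site_positions):
    """Concatenate one feature block per selected residue, encoding its amino acid type."""
    blocks = []
    for pos in site_positions:
        block = np.zeros(len(AMINO_ACIDS))
        block[AMINO_ACIDS.index(sequence[pos])] = 1.0
        blocks.append(block)
    return np.concatenate(blocks)

# Toy sequence and three hypothetical binding-site positions
seq = 'MKVLAHAGYTTWLDKE'
print(binding_site_descriptor(seq, [2, 5, 9]).shape)   # 3 residues x 20 amino acid types
```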
8. Machine Learning
Using R-statistics as integrated with Pipeline Pilot
Version 2.11.1 (64-bits)
Sampled several machine learning techniques
SVM
Final method of choice
PLS
Random Forest
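The models in this work were built with R (2.11.1) inside Pipeline Pilot; purely as a stand-in to show the shape of the problem, the sketch below trains a scikit-learn SVR on concatenated ligand and target descriptor blocks. All numbers here are random placeholders, not data from the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pairs = 200                                                      # compound-sequence pairs (placeholder)
ligand = rng.integers(0, 2, size=(n_pairs, 1024)).astype(float)    # fingerprint-like block
target = rng.integers(0, 2, size=(n_pairs, 60)).astype(float)      # binding-site block
pec50 = rng.normal(7.0, 0.8, size=n_pairs)                         # placeholder activities

# A PCM model sees the ligand and target descriptors side by side
X = np.hstack([ligand, target])
model = SVR(kernel='rbf', C=10.0, epsilon=0.1)
rmse = -cross_val_score(model, X, pec50, cv=5, scoring='neg_root_mean_squared_error').mean()
print(f"cross-validated RMSE on placeholder data: {rmse:.2f}")
```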
9. Real World Applications of PCM
Part I: PCM of NNRTIs (analog series) on 14 mutants
Output variable: pEC50
Data set provided by Tibotec
Prospectively validated
Part II: PCM of small molecules on the Adenosine
receptors
Output variable pKi
ChEMBL_04 / StarLite
Both human and rat data combined
Prospectively validated
10. Part I: PCM applied to NNRTIs
Which inhibitor(s) show(s) the best activity spectrum and can proceed in drug development?
451 HIV Reverse Transcriptase (RT) inhibitors
14 HIV RT sequences
Between zero and 13 point mutations (at the NNRTI binding site)
Large differences in compound activity on different sequences

Sequence   Mean pEC50   StdDev pEC50   n
1 (wt)     8.3          0.6            451
2          6.9          0.7            259
3          7.6          0.6            444
4          7.5          0.7            443
5          7.4          0.8            429
6          6.0          0.6            316
7          6.5          0.6            99
8          6.9          0.7            147
9          8.3          0.6            222
10         7.9          0.7            252
11         7.5          0.7            257
12         8.0          0.6            242
13         7.4          0.8            244
14         8.2          0.8            220
11. Binding Site
Selected binding site based on point mutations present
in the different strains
24 residues were selected
12. Used model to predict missing values
[Figure: compounds x mutants bioactivity matrix, shown as the original dataset and as completed with the model]
13. Prospective Validation
Compounds have been experimentally validated
Predictions where pEC50 differs two standard deviations from the compound average (69 compound outliers)
Predictions where pEC50 differs two standard deviations from the sequence average (61 sequence outliers)
[Figure: assay validation vs. predictions completed with the model]
15. The Applicability Domain Concept Still Holds in Target Space
Prediction error shows a direct correlation with average sequence similarity to the training set
[Figure: R0^2 and RMSE plotted against average sequence similarity with the training set]
18. Does PCM outperform scaling and QSAR?
PCM outperforms QSAR models trained with identical descriptors on the same set
When considering outliers, PCM outperforms scaling
PCM can be applied to previously unseen mutants

Validation Experiment   Assay   PCM    pEC50 scaling   QSAR   10-NN (both)   10-NN (target)   10-NN (cmpd)
R0^2 (Full plot)        0.88    0.69   0.69            0.31   0.41           0.21             0.28
R0^2 (Outliers)         0.88    0.61   0.59            0.36   0.34           0.32             0.18
RMSE (Full plot)        0.50    0.62   0.57            0.96   0.90           1.29             1.16
RMSE (Outliers)         0.50    0.52   0.58            1.06   0.72           1.39             1.29
22. Conclusions
PCM can guide inhibitor design by predicting bioactivity
profiles, as applied here to NNRTIs
We have shown prospectively that the performance of
PCM approaches assay reproducibility (RMSE 0.62 vs
0.50)
Model interpretation distinguishes preferred chemical substructures from substructures to be avoided
23. Part II: PCM applied to the Adenosine
Receptors
Model based on public data (ChEMBL_04)
Included:
Human receptor data
(Historic) Rat receptor data
Defined a single binding site (including ELs)
Based on crystal structure 3EML and translated selected
residues through MSA to other receptors
Looking for novel A2A receptor ligands taking SAR
information from other adenosine receptor subtypes
into account
25. Adenosine Receptor Data Set
Little overlap between species
Validation set consists of 4556 decoys and 43 known actives

Receptor   Human   Rat    Overlap   Range (pKi)   External Validation   Decoy
A1         1635    2216   147       4.5 - 9.7     130                   1139
A2A        1526    2051   215       4.5 - 10.5    57                    1139
A2B        780     803    79        4.5 - 9.7     11                    1139
A3         1661    327    82        4.5 - 10.0    255                   1139
26. In-silico validation
External validation on an in-house compound collection
A lower quality data set leads to a less predictive model
Inclusion of rat data improves the model (RMSE 0.82 vs 0.87)
Our final model is able to separate actives from decoys
33 of the 43 known actives were in the top 50
27. Prospective Validation
Scanned the ChemDiv supplier database (> 790,000 compounds)
Selected 55 compounds with a focus on diverse chemistry
Compounds were tested in vitro
28. Conclusions
We have found novel compounds active (in the
nanomolar range) on the A2A receptor
Hit rate ~11 %
PCM models benefit from addition of similar targets from
other species (RMSE improves from 0.87 to 0.82)
PCM models can make robust predictions, even when
trained on data from different labs
29. Further discussion
Poster # 47 A. Hendriks, G.J.P. van Westen et al.
Proteochemometric Modeling as a Tool to Predict Clinical
Response to Antiretroviral Therapy Based on the Dominant
Patient HIV Genotype
Poster # 51 E.B. Lenselink, G.J.P. van Westen et al.
A Global Class A GPCR Proteochemometric Model: A
Prospective Validation
Poster # 54 R.F. Swier, G.J.P. van Westen et al.
3D-neighbourhood Protein Descriptors for Proteochemometric
Modeling
30. Acknowledgements
Prof. Ad IJzerman Prof. Herman van Vlijmen
Andreas Bender Joerg Wegner
Olaf van den Hoven Anik Peeters
Rianne van der Pijl Peggy Geluykens
Thea Mulder Leen Kwanten
Henk de Vries Inge Vereycken
Alwin Hendriks
Bart Lenselink
Remco Swier
31. Real World Applications of
Proteochemometric Modeling
The Design of Enzyme Inhibitors and
Ligands of G-Protein Coupled Receptors
33. Leave One Sequence Out
By leaving out one sequence in training and validating a
trained model on that sequence, model performance on
novel mutants is emulated
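A minimal sketch of leave-one-sequence-out validation, assuming scikit-learn's LeaveOneGroupOut with each group standing for one mutant sequence (the data below are random placeholders, not the study's):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 50))                 # placeholder compound-sequence descriptors
y = rng.normal(7.0, 0.8, size=300)                 # placeholder pEC50 values
sequence_id = rng.integers(1, 15, size=300)        # which of 14 mutant sequences each pair uses

# Each fold trains on 13 sequences and predicts the held-out one,
# emulating performance on a previously unseen mutant
scores = cross_val_score(SVR(), X, y, groups=sequence_id, cv=LeaveOneGroupOut(),
                         scoring='neg_root_mean_squared_error')
print("RMSE per left-out sequence:", np.round(-scores, 2))
```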