INSTITUTE OF PHARMACEUTICAL SCIENCE
(KUK)
Computers in Pharmaceutical Research and
Development
Presented To: Ms. Geeta Jangra
Presented By: Jasmine Sagwal
Class Roll No. 04
Introduction
Computers are now pervasive in pharmaceutical research and development. Today it is hard to
imagine a time when medicinal chemists and biologists worked without them. Computers have
transformed how new drugs are developed and have contributed greatly to drug discovery and
research over the years.
Without computers, researching and developing a new drug would take far longer. Using online
libraries, large numbers of compounds can be screened and trial and error can be minimized.
Many of the scientific advances needed to design and develop a molecule have been made
possible by computational approaches.
History of Computers In Pharmaceutical
Research and Development
Today the computer is an essential tool for storing, transmitting, and managing information,
yet the history of computers in pharmaceutical research and development goes back only about
twenty-five years.
19th Century
The foundations of Quantitative Structure-Activity Relationships (QSAR) were laid in the mid-19th
century, when published papers first related chemical structure to biological activity
mathematically through molecular descriptors. This groundwork later supported the use of
computers in the field. Computers were initially designed for accounting and only gradually
became tools for scientific research.
1960s
 At the start of the 1960s, drug discovery proceeded by trial and error; soil microbes and
medicinal plants were the two main sources of therapeutic compounds. To synthesize
therapeutically active compounds, chemists read the literature manually and relied on their
creativity; microbiologists and chemists then tested these compounds.
 Gradually a single compound would be developed into a drug and then into a pharmaceutical
product. This method was slow and costly, and sometimes inaccurate. Some companies therefore
took the initiative to study chemicals with computers for drug discovery, and with these
developments companies began applying molecular mechanics, QSAR, and statistics.
 Lilly took the initiative in developing computational chemistry, and in time computational
techniques came to be regarded like any other laboratory apparatus.
1970s
Two new computational resources were introduced in the 1970s: the Cambridge Structural
Database (CSD) and the Protein Data Bank (PDB).
In the 1980s, pharmaceutical companies noted the arrival of the personal computer, but it was
difficult to use.
In 1984, the Apple Macintosh set a new standard for ease of use. It handled word processing,
graphics, and small laboratory databases, so medicinal chemists liked it. A most important
development of this period was the ChemDraw software. Another key advance was the ability
to view 3D structures of compounds.
1975 and 1985
According to surveys, high-profile pharmaceutical companies became increasingly aware of
computer-aided methods of drug design in this period; in the US, 48 chemical and pharmaceutical
companies adopted or explored computer-aided molecular drug design. Between 1975 and 1985,
the number of computational chemists doubled roughly every five years.
The 1990s
The efforts of the 1980s bore fruit in the 1990s, when a large number of new chemical entities
reached the pharmaceutical marketplace. Supercomputers were then introduced, and with these
developments computers became an essential factor in pharmaceutical drug discovery and
development.
Statistical Modelling In Pharmaceutical Research
And Development
 The challenge facing pharmaceutical companies in discovering and developing new drugs is to
minimize the cost and time required from discovery to market while at the same time raising
quality standards.
 The pharmaceutical business will be at risk if companies cannot reduce cost and time; in
future, the market will not sustain expensive drugs regardless of their quality.
 In parallel with these growing challenges, discovery and development of new drugs by trial
and error alone is no longer adequate.
 The prospect of new technology has raised hopes of establishing new models in the
pharmaceutical industry capable of overcoming the challenges of cost, speed, and quality.
 In this sense, modelling can be seen as the adoption of yet another new technology.
Descriptive vs Mechanistic Modelling
DESCRIPTIVE MODELING
 If the purpose is just to provide a reasonable description of the data in some appropriate way without
any attempt at understanding the underlying phenomenon, that is, the data-generating mechanism,
then the family of models is selected based on its adequacy to represent the data structure.
 The net result in this case is only a descriptive model of the phenomenon.
 These models are very useful for discriminating between alternative hypotheses but are totally useless
for capturing the fundamental characteristics of a mechanism.
MECHANISTIC MODELING
 Whenever the interest lies in the understanding of the mechanisms of action, it is critical to be able
to count on a strong collaboration between scientists, specialists in the field, and statisticians or
mathematicians.
 The former must provide updated, rich, and reliable information about the problem,
 whereas the latter are trained to translate scientific information into mathematical models.
 EXAMPLE
 A first evaluation of the data can be done by running non-parametric statistical estimation
techniques such as, for example, the Nadaraya-Watson kernel regression estimate.
 These techniques have the advantage of being relatively cost-free in terms of assumptions, but
they do not provide any possibility of interpreting the outcome and are not at all reliable when
extrapolating.
 The fact that these techniques do not require a lot of assumptions makes them relatively close to
what algorithm-oriented people try to do.
 These techniques are essentially descriptive by nature and are useful for summarizing the data by
smoothing them and providing interpolated values.
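The Nadaraya-Watson estimate described above can be sketched in a few lines of Python. This is a minimal illustration, not the analysis from the source: the test data (a noisy sine curve) and the bandwidth value are assumptions chosen only to show the smoothing behaviour.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth):
    # Each prediction is a weighted average of the observed y values,
    # with Gaussian-kernel weights decaying with distance from the query
    diffs = (x_query[:, None] - x_train[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs**2)
    return (weights @ y_train) / weights.sum(axis=1)

# Noisy observations of an unknown curve; no parametric model is assumed
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)

x_grid = np.linspace(0.0, 10.0, 50)
y_smooth = nadaraya_watson(x, y, x_grid, bandwidth=0.5)
```

Note that the estimate only interpolates within the observed range: as the text says, it summarizes the data by smoothing but gives no basis for extrapolation or mechanistic interpretation.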
 Animal tumor growth data are used to illustrate the various concepts encountered in
developing a model and identifying its use. The data record tumor growth in rats over a
period of about 80 days. We are interested in modeling the growth of experimental tumors
subcutaneously implanted in rats, so as to be able to differentiate between treatment
regimens.
 Two groups of rats received different treatments: a placebo and a new drug at a fixed dose.
A particular model represents the tumor growth, and we are also interested in the statistical
significance of the treatment effect. Raw data for one subject who received the placebo are
represented as open circles; for this subject, the tumor volume grows from near 0 to about
3000 cubic millimeters.
Descriptive Model
1. A descriptive (empirical) model describes the overall behaviour of the system in question
without making any claim about the nature of the underlying mechanisms that produce this
behaviour.
2. Descriptive modelling is a generic term for activities that create models by observation
and experiment.
Mechanistic Model
1. A mechanistic model is one where the basic elements of the model have a direct
correspondence to the underlying mechanisms in the system being modelled.
2. A mechanistic model assumes that a complex system can be understood by examining the
workings of its individual parts and the manner in which they are coupled.
Statistical Parameters Estimation
 Statistics is the scientific study of numerical data based on natural phenomena. It is also
the science of collecting, organizing, interpreting, and reporting data.
 Once the model functional form has been decided upon and the experimental data have been
collected, a value for the model parameters (point estimation) and a confidence region for this
value (interval estimation) must be estimated from the available data.
 In its simplest form, the ordinary least squares criterion (OLS) prescribes as a loss function the
sum of the squared residuals relative to all observed points, where the residual relative to each
observed point is the difference between the observed and predicted value at that point.
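The OLS criterion just described can be made concrete with a short sketch. The straight-line model and the toy data points here are assumptions for illustration only; the source applies the same criterion to a growth model.

```python
import numpy as np

def ols_loss(theta, model, x, y):
    # OLS loss: the residual at each point is the observed value minus
    # the model prediction; the loss is the sum of squared residuals
    residuals = y - model(x, theta)
    return float(np.sum(residuals**2))

# Toy illustration with a straight-line model u(x; theta) = theta0 + theta1 * x
def line(x, theta):
    return theta[0] + theta[1] * x

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])   # exactly y = 1 + 2x

print(ols_loss([1.0, 2.0], line, x, y))  # perfect fit -> 0.0
print(ols_loss([0.0, 2.0], line, x, y))  # intercept off by 1 -> 4.0
```

Point estimation then amounts to searching for the theta that minimizes this loss.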
Approximation of the Gompertz growth curve based on Taylor expansion for the internal
exponential term
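The equation itself is not reproduced in this text. As a hedged reconstruction, one common parametrization of the Gompertz curve and the Taylor expansion of its internal exponential term would read as follows (the symbols V0, a, and b are assumptions, not taken from the source):

```latex
% Gompertz growth curve (one common parametrization)
V(t) = V_0 \exp\!\Big(\frac{a}{b}\big(1 - e^{-bt}\big)\Big)

% Second-order Taylor expansion of the internal exponential term
e^{-bt} \approx 1 - bt + \tfrac{1}{2}(bt)^2

% Substituting gives the approximate early-growth behaviour
V(t) \approx V_0 \exp\!\Big(a t - \tfrac{1}{2}\,a b\,t^2\Big)
```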
 When the estimated parameter values are close to the actual ones, the residuals are small,
the sum of their squares is small, and hence the loss is low.
 Conversely, a wrong or unacceptable parameter value yields a model whose predictions are
far from the observed values; the residuals are then large and the loss is high.
 A nonlinear model with a known functional relationship can be written as

yi = u(xi, θ) + εi

 Where,
 yi - the ith observation, corresponding to a vector xi of independent variables,
 θ - the true but unknown parameter value, belonging to some acceptable domain,
 u - the predicted value as a function of the independent variables and the parameter,
 εi - the true errors (which we only suppose for the moment to have zero mean value)
that randomly modify the theoretical value of the observation.
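Putting the pieces together, fitting such a nonlinear model by the OLS criterion can be sketched with SciPy. The synthetic tumor-volume data and the parameter values below are assumptions chosen to mimic the growth from near 0 to about 3000 cubic millimeters over 80 days described earlier; they are not the study's actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, v0, a, b):
    # One common parametrization of the Gompertz growth curve
    return v0 * np.exp((a / b) * (1.0 - np.exp(-b * t)))

# Synthetic tumor-volume data: ~80 days of growth toward ~3000 mm^3,
# with zero-mean noise eps_i added to the theoretical value u(x_i, theta)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 80.0, 40)
true_theta = (50.0, 0.25, 0.06)
y = gompertz(t, *true_theta) + rng.normal(scale=50.0, size=t.size)

# Point estimate of theta by minimizing the OLS loss (nonlinear least squares);
# cov approximates the covariance of the estimate, giving interval estimates
theta_hat, cov = curve_fit(gompertz, t, y, p0=(40.0, 0.2, 0.05))
```

The diagonal of `cov` gives approximate variances for each parameter, from which confidence intervals (interval estimation) can be derived.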
Thank You