SlideShare presentations by Stephen Senn (StephenSenn1)
Feed last updated: Mon, 24 Jan 2022 11:28:28 GMT

Has modelling killed randomisation inference? (Frankfurt)
Mon, 24 Jan 2022 — /slideshow/has-modelling-killed-randomisation-inference-frankfurt-251045286/251045286

Lecture originally given in Frankfurt in 2006, discussing the difference between design-based and model-based approaches to the analysis of experiments.

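As a minimal sketch of the contrast the lecture addresses, the following compares a design-based (randomisation) test with a model-based t-test; the data, sample size and effect size are all assumptions invented for illustration, not taken from the lecture.

```r
# Design-based analysis: re-randomise treatment labels to generate the
# randomisation distribution of the mean difference under the null.
set.seed(1)
n <- 20
treat <- rep(c("A", "B"), each = n)
y <- rnorm(2 * n, mean = ifelse(treat == "B", 1, 0), sd = 1)  # simulated outcomes

obs <- mean(y[treat == "B"]) - mean(y[treat == "A"])          # observed difference

perm <- replicate(10000, {
  t_star <- sample(treat)                                     # one re-randomisation
  mean(y[t_star == "B"]) - mean(y[t_star == "A"])
})

p_design <- mean(abs(perm) >= abs(obs))    # randomisation-based p-value
p_model  <- t.test(y ~ treat)$p.value      # model-based p-value
c(design = p_design, model = p_model)
```

For a simple completely randomised design the two p-values usually agree closely; the lecture's interest is in when and why the approaches diverge.
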
What is your question?
Tue, 16 Nov 2021 — /slideshow/what-is-your-question-250659957/250659957

There are many questions one might ask of a clinical trial, ranging from 'what was the effect in the patients studied?' to 'what might the effect be in future patients?' via 'what was the effect in individual patients?'. The extent to which the answers to these questions are similar depends on the assumptions made, and in some cases the design used may not permit any meaningful answer to be given at all. A related issue is confusion between randomisation, random sampling, linear modelling and true multivariate modelling. These distinctions don't matter much for some purposes and under some circumstances, but for others they do. A yet further issue is that causal analysis in epidemiology, which has brought valuable insights in many cases, has tended to stress point estimates and ignore standard errors. This has potentially misleading consequences: an understanding of components of variation is key. Unfortunately, the development of two particular topics in recent years, evidence synthesis by the evidence-based medicine movement and personalised medicine by bench scientists, has paid scant attention to components of variation, to the questions being asked, or to both, resulting in confusion about many issues. For instance, it is often claimed that numbers needed to treat indicate the proportion of patients for whom treatments work, that inclusion criteria determine the generalisability of results, and that heterogeneity means that a random-effects meta-analysis is required. None of these is true. The scope for personalised medicine has very plausibly been exaggerated, and an important cause of variation in the healthcare system, physicians, is often overlooked. I shall argue that thinking about questions is important.

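On the meta-analysis point, a minimal hand-coded sketch (with invented effect estimates and variances) shows how the fixed-effect and DerSimonian-Laird random-effects answers differ, and why heterogeneity alone does not settle which is wanted:

```r
yi <- c(0.30, 0.10, 0.45, -0.05, 0.25)  # hypothetical trial effect estimates
vi <- c(0.04, 0.02, 0.06,  0.03, 0.05)  # their (assumed known) variances

w     <- 1 / vi                          # inverse-variance weights
fixed <- sum(w * yi) / sum(w)            # fixed-effect pooled estimate

Q    <- sum(w * (yi - fixed)^2)          # Cochran's heterogeneity statistic
k    <- length(yi)
tau2 <- max(0, (Q - (k - 1)) / (sum(w) - sum(w^2) / sum(w)))  # DerSimonian-Laird

w_r    <- 1 / (vi + tau2)                # random-effects weights
random <- sum(w_r * yi) / sum(w_r)       # random-effects pooled estimate

c(fixed = fixed, random = random, tau2 = tau2)
```

The two pooled estimates answer different questions (the effect in these trials versus the effect in a notional population of trials); which is appropriate cannot be read off from a heterogeneity test alone.
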
Vaccine trials in the age of COVID-19
Wed, 19 May 2021 — /slideshow/vaccine-trials-in-the-age-of-covid19/248353174

The response to the COVID-19 crisis by various vaccine developers has been extraordinary, both in terms of speed of response and the delivered efficacy of the vaccines. It has also raised some fascinating issues of design, analysis and interpretation. I shall consider some of these issues, taking as my examples five vaccines: Pfizer/BioNTech, AstraZeneca/Oxford, Moderna, Novavax and J&J/Janssen, but concentrating mainly on the first two. Among the matters covered will be concurrent control, efficient design, issues of measurement raised by two-shot vaccines and their implications for roll-out, and the surprising effectiveness of simple analyses. Differences between the five development programmes as they affect statistics will be covered, but some essential similarities will also be discussed.

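As one illustration of the 'surprising effectiveness of simple analyses', vaccine efficacy can be estimated directly from the split of cases between arms. The counts below are of the order reported for the Pfizer/BioNTech trial, and the sketch assumes equal person-time at risk in the two arms:

```r
cases_vacc <- 8; cases_plac <- 162       # illustrative case split between arms
irr <- cases_vacc / cases_plac           # incidence rate ratio (equal exposure)
ve  <- 1 - irr                           # vaccine efficacy, about 95%

se_log <- sqrt(1 / cases_vacc + 1 / cases_plac)    # SE of log(IRR)
ci <- 1 - exp(log(irr) + c(1.96, -1.96) * se_log)  # approximate 95% CI for VE
round(c(VE = ve, lower = ci[1], upper = ci[2]), 3)
```
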
To infinity and beyond v2
Wed, 18 Nov 2020 — /slideshow/to-infinity-and-beyond-v2/239323454

The statistical revolution of the 20th century was largely concerned with developing methods for analysing small datasets. Student's paper of 1908 was the first in the English literature to address seriously the problem of second-order uncertainty (uncertainty about the measures of uncertainty) and was hailed by Fisher as heralding a new age of statistics. Much of what Fisher did was concerned with problems of what might be called small data, not only as regards efficient analysis but also as regards efficient design, in addition paying close attention to what was necessary to measure uncertainty validly. I shall consider the history of some of these developments, in particular those associated with what might be called the Rothamsted School, starting with Fisher and having its apotheosis in John Nelder's theory of General Balance, and see what lessons they hold for the supposed big-data revolution of the 21st century.

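A small simulation (normal data and a sample of four observations, both assumptions chosen for illustration) makes second-order uncertainty concrete: intervals built with the normal quantile, as if the standard error were known, under-cover, while Student's t quantile restores the nominal level.

```r
set.seed(1)
coverage <- function(crit, n = 4, reps = 50000) {
  mean(replicate(reps, {
    x <- rnorm(n)                           # true mean 0, true sd 1
    abs(mean(x)) <= crit * sd(x) / sqrt(n)  # does the interval cover 0?
  }))
}
c(z = coverage(qnorm(0.975)),   # normal quantile: ignores uncertainty in sd(x)
  t = coverage(qt(0.975, 3)))   # Student's t on n - 1 df: accounts for it
```

The z intervals cover well below 95% because the estimated standard error is itself uncertain; the t intervals are close to nominal.
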
Approximate ANCOVA
Wed, 30 Sep 2020 — /slideshow/approximate-ancova/238685515

Talk given at ISCB 2016, Birmingham. For indications and treatments where their use is possible, n-of-1 trials represent a promising means of investigating potential treatments for rare diseases. Each patient permits repeated comparison of the treatments being investigated, which both increases the number of observations and reduces their variability compared to conventional parallel-group trials. However, whether the framework used for analysis is randomisation-based or model-based produces puzzling differences in inferences. This can easily be shown by starting on the one hand with the randomisation philosophy associated with the Rothamsted school of inference and building up the analysis through the block + treatment structure approach associated with John Nelder's theory of general balance (as implemented in GenStat®), or starting on the other hand with a plausible variance-component approach through a mixed model. It can be shown, however, that these differences are related not so much to the modelling approach per se as to the questions one attempts to answer: ranging from testing whether there was a difference between treatments in the patients studied, to predicting the true difference for a future patient, via making inferences about the effect in the average patient. This in turn yields interesting insight into the long-running debate over the use of fixed- or random-effect meta-analysis. Some practical issues of analysis will also be covered in R and SAS®, in which languages some functions and macros to facilitate analysis have been written. It is concluded that n-of-1 trials hold great promise in investigating chronic rare diseases, but that careful consideration of matters of purpose, design and analysis is necessary to make best use of them. Acknowledgement: this work is partly supported by the European Union's 7th Framework Programme for research, technological development and demonstration under grant agreement no. 602552 (IDEAL).

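A minimal sketch of the mixed-model side of the argument, on simulated n-of-1 series (the design, effect sizes and variance components are all assumptions): the fixed effect estimates the effect for the average patient, while the random treatment-by-patient slope captures how the effect varies, which is what matters for predicting the effect in a future patient.

```r
library(lme4)

set.seed(1)
patients <- 12; cycles <- 4
d <- expand.grid(patient = factor(1:patients), cycle = 1:cycles, treat = c(0, 1))
ind_eff <- rnorm(patients, mean = 1, sd = 0.5)  # patient-specific true effects
d$y <- rnorm(nrow(d), mean = d$treat * ind_eff[d$patient], sd = 1)

fit <- lmer(y ~ treat + (treat | patient), data = d)
summary(fit)  # fixed 'treat': average effect; random slope sd: its variation
```
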
The Seven Habits of Highly Effective Statisticians
Sun, 30 Aug 2020 — /slideshow/the-seven-habits-of-highly-effective-statisticians/238329780

If you know why the title of this talk is extremely stupid, then you clearly know something about control, data and reasoning: in short, you have most of what it takes to be a statistician. If you have studied statistics then you will also know that a large amount of anything, and this includes successful careers, is luck. In this talk I shall try to share some of my experiences of being a statistician in the hope that it will help you make the most of whatever luck life throws you. In so doing, I shall try my best to overcome the distorting influence of that easiest of sciences, hindsight. Without giving too much away, I shall be recommending that you read, listen, think, calculate, understand, communicate, and do. I shall give you some examples of what I think works and what I think doesn't. In all of this you should never forget the power of negativity, and also the joy of being able to wake up every day and say to yourself, 'I love the smell of data in the morning.'

Minimally important differences v2
Thu, 20 Aug 2020 — /slideshow/minimally-important-differences-v2/238072328

When estimating sample sizes for clinical trials, there are several different views that might be taken as to what definition and meaning should be given to the sought-for treatment effect. However, if the concept of a minimally important difference (MID) does have relevance to interpreting clinical trials (which can be disputed), then its value cannot be the same as the clinically relevant difference (CRD) that would be used for planning them. A doubly pernicious use of the MID is as a means of classifying patients as 'responders' and 'non-responders'. Not only does such an analysis lead to an increase in the necessary sample size, but it misleads trialists into making causal distinctions that the data cannot support and has been responsible for exaggerating the scope for personalised medicine. In this talk these statistical points will be explained using a minimum of technical detail.

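The sample-size cost of responder analysis is easy to demonstrate by simulation; the group size, effect size and threshold below are assumptions chosen for illustration.

```r
set.seed(1)
n <- 60; delta <- 0.5; mid <- 0.5      # assumed group size, effect, MID threshold
hits <- replicate(5000, {
  ctrl <- rnorm(n, 0); act <- rnorm(n, delta)
  c(cont = t.test(act, ctrl)$p.value < 0.05,   # analyse the continuous outcome
    dich = prop.test(c(sum(act > mid), sum(ctrl > mid)),
                     c(n, n))$p.value < 0.05)  # 'responder' analysis
})
rowMeans(hits)  # estimated power: dichotomising throws a large part of it away
```
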
Clinical trials: quo vadis in the age of COVID?
Sat, 08 Aug 2020 — /slideshow/senn-quo-vadis-phastar/237670467

A discussion of the role of clinical trials in the age of COVID-19. My contribution to the Phastar 2020 Life Science Summit: https://phastar.com/phastar-life-science-summit

A century of t-tests
Thu, 25 Jun 2020 — /slideshow/a-century-of-t-tests/236197133

The story of Student's t-test, including the history of the trial at Kalamazoo that provided the data that W. S. Gosset used to illustrate his test.

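The Kalamazoo data (Cushny and Peebles' measurements of extra sleep under two soporific drugs) ship with R as the `sleep` dataset, so Student's illustrative analysis can be reproduced directly:

```r
data(sleep)  # Cushny & Peebles' data, used by Student in 1908
with(sleep, t.test(extra[group == 1], extra[group == 2], paired = TRUE))
# Paired analysis: t = -4.06 on 9 df, two-sided p ~ 0.003
```
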
Is ignorance bliss?
Wed, 25 Mar 2020 — /slideshow/is-ignorance-bliss-230843523/230843523

It is argued that, when it comes to nuisance parameters, an assumption of ignorance is harmful. On the other hand, this raises the problem of how far one should go in searching for further data when combining evidence.

What should we expect from reproducibility?
Wed, 29 Jan 2020 — /slideshow/what-should-we-expect-from-reproducibiliry/225841430

Is there really a reproducibility crisis, and if so are P-values to blame? Choose any statistic you like and carry out two identical independent studies, reporting this statistic for each. In advance of collecting any data, you ought to expect that it is just as likely that statistic 1 will be smaller than statistic 2 as vice versa. Once you have seen statistic 1, things are not so simple, but if they are not so simple it is because you have other information in some form. However, it is at least instructive that you need to be careful in jumping to conclusions about what to expect from reproducibility. Furthermore, the forecasts of good Bayesians ought to obey a martingale property: on average you should be in the future where you are now, although, of course, your inferential random walk may lead to some peregrination before it homes in on the truth. But you certainly can't generally expect that a probability will get smaller as you continue. P-values, like other statistics, are a position, not a movement. Although often claimed, there is no such thing as a 'trend towards significance'. Using these and other philosophical considerations I shall try to establish what it is we want from reproducibility. I shall conclude that we statisticians should probably be paying more attention to checking that standard errors are being calculated appropriately and rather less to the inferential framework.

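The opening claim is easily checked by simulation: run a pair of identical, independent studies many times (the design below, a one-sample t-test with an assumed effect, is arbitrary) and the second P-value is smaller than the first in about half of the pairs.

```r
set.seed(1)
second_smaller <- replicate(10000, {
  p1 <- t.test(rnorm(30, mean = 0.4))$p.value  # study 1
  p2 <- t.test(rnorm(30, mean = 0.4))$p.value  # identical, independent study 2
  p2 < p1
})
mean(second_smaller)  # close to 0.5: a P-value is a position, not a movement
```
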
Personalised medicine: a sceptical view
Sat, 23 Nov 2019 — /slideshow/personalised-medicine-a-sceptical-view/196947301

Some grounds for believing that the current enthusiasm about personalised medicine is exaggerated, is founded on poor statistics, and represents a disappointing loss of ambition.

In search of the lost loss function
Mon, 11 Nov 2019 — /slideshow/in-search-of-the-lost-loss-function/192332525

Sample size determination in clinical trials is considered from various ethical and practical perspectives. It is concluded that cost is a missing dimension and that the value of information is key.

To infinity and beyond
Thu, 03 Oct 2019 — /slideshow/to-infinity-and-beyond-178815990/178815990

An early and overlooked causal revolution in statistics was the development of the theory of experimental design, initially associated with the 'Rothamsted School'. An important stage in the evolution of this theory was the experimental calculus developed by John Nelder in the 1960s, with its clear distinction between block and treatment factors in designed experiments. This experimental calculus produced appropriate models automatically from more basic formal considerations but was, unfortunately, only ever implemented in GenStat®, a package widely used in agriculture but rarely so in medical research. In consequence its importance has not been appreciated, and the approach of many statistical packages to designed experiments is poor. A key feature of the Rothamsted School approach is that identification of the appropriate components of variation for judging treatment effects is simple and automatic. The impressive, more recent causal revolution in epidemiology, associated with Judea Pearl, seems to have no place for components of variation, however. By applying Nelder's experimental calculus to Lord's Paradox, I shall show that solutions that have been proposed using the more modern causal calculus are problematic. I shall also show that lessons from designed clinical trials have important implications for the use of historical data and big data more generally.

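A minimal simulated version of Lord's Paradox (the set-up, group means and variances are all assumptions): two groups differ at baseline, neither changes on average, yet the change-score and covariance-adjusted analyses disagree because they answer different questions.

```r
set.seed(1)
n <- 200
group <- rep(0:1, each = n)
mu    <- ifelse(group == 1, 60, 50)  # the groups differ at baseline
baseline <- mu + rnorm(2 * n, sd = 5)
final    <- mu + 0.5 * (baseline - mu) + rnorm(2 * n, sd = 4)  # no mean change

t.test((final - baseline) ~ group)           # change scores: no group difference
coef(lm(final ~ baseline + group))["group"]  # ANCOVA: a clear 'group effect'
```

Neither analysis is wrong in itself; they condition on different things, which is exactly the distinction that the block and treatment structure makes explicit.
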
De Finetti meets Popper
Wed, 07 Aug 2019 — /slideshow/de-finetti-meets-popper/162080673

Views of the role of hypothesis falsification in statistical testing do not divide as cleanly between frequentist and Bayesian views as is commonly supposed. This can be shown by considering the two major variants of the Bayesian approach to statistical inference and the two major variants of the frequentist one. A good case can be made that the Bayesian, de Finetti, just like Popper, was a falsificationist. A thumbnail view of de Finetti's theory of learning, which is not just a caricature, is that your subjective probabilities are modified through experience by noticing which of your predictions are wrong, striking out the sequences that involved them and renormalising. On the other hand, in the formal frequentist Neyman-Pearson approach to hypothesis testing you can, if you wish, swap the conventional null and alternative hypotheses, making the latter the straw man and, by disproving it, asserting the former. The frequentist, Fisher, however, at least in his approach to the testing of hypotheses, seems to have taken the strong view that the null hypothesis was quite different from any other and that there was a strong asymmetry in the inferences that followed from the application of significance tests. Finally, to complete the quartet, the Bayesian geophysicist Jeffreys, inspired by Broad, specifically developed his approach to significance testing in order to be able to prove scientific laws. By considering the controversial case of equivalence testing in clinical trials, where the object is to prove that treatments do not differ from each other, I shall show that there are fundamental differences between proving and falsifying a hypothesis and that this distinction does not disappear by adopting a Bayesian philosophy. I conclude that falsificationism is important for Bayesians also, although it is an open question whether it is enough for frequentists.

Views of the role of hypothesis falsification in statistical testing do not divide as cleanly between frequentist and Bayesian views as is commonly supposed. This can be shown by considering the two major variants of the Bayesian approach to statistical inference and the two major variants of the frequentist one. A good case can be made that the Bayesian, de Finetti, just like Popper, was a falsificationist. A thumbnail view, which is not just a caricature, of de Finettis theory of learning, is that your subjective probabilities are modified through experience by noticing which of your predictions are wrong, striking out the sequences that involved them and renormalising. On the other hand, in the formal frequentist Neyman-Pearson approach to hypothesis testing, you can, if you wish, shift conventional null and alternative hypotheses, making the latter the strawman and by disproving it, assert the former. The frequentist, Fisher, however, at least in his approach to testing of hypotheses, seems to have taken a strong view that the null hypothesis was quite different from any other and there was a strong asymmetry on inferences that followed from the application of significance tests. Finally, to complete a quartet, the Bayesian geophysicist Jeffreys, inspired by Broad, specifically developed his approach to significance testing in order to be able to prove scientific laws. By considering the controversial case of equivalence testing in clinical trials, where the object is to prove that treatments do not differ from each other, I shall show that there are fundamental differences between proving and falsifying a hypothesis and that this distinction does not disappear by adopting a Bayesian philosophy. I conclude that falsificationism is important for Bayesians also, although it is an open question as to whether it is enough for frequentists. ]]>
Wed, 07 Aug 2019 19:12:01 GMT /slideshow/de-finetti-meets-popper/162080673 StephenSenn1@slideshare.net(StephenSenn1) De Finetti meets Popper StephenSenn1
De Finetti meets Popper from Stephen Senn
]]>
1008 8 https://cdn.slidesharecdn.com/ss_thumbnails/definettimeetspopperv4-190807191201-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Understanding randomisation /slideshow/understanding-randomisation/160992466 understandingrandomisationv2-190804160006
Lecture delivered at the Philosophy and Statistics fortnight 2019 at Virginia Tech ]]>

Lecture delivered at the Philosophy and Statistics fortnight 2019 at Virginia Tech ]]>
Sun, 04 Aug 2019 16:00:06 GMT /slideshow/understanding-randomisation/160992466 StephenSenn1@slideshare.net(StephenSenn1) Understanding randomisation StephenSenn1
Understanding randomisation from Stephen Senn
]]>
624 9 https://cdn.slidesharecdn.com/ss_thumbnails/understandingrandomisationv2-190804160006-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
In Search of Lost Infinities: What is the n in big data? /slideshow/in-search-of-lost-infinities-what-is-the-n-in-big-data/137577741 insearchoflostinfinitiesv2-190321231445
In designing complex experiments, agricultural scientists, with the help of their statistician collaborators, soon came to realise that variation at different levels had very different consequences for estimating different treatment effects, depending on how the treatments were mapped onto the underlying block structure. This was a key feature of the Rothamsted approach to design and analysis and a strong thread running through the work of Fisher, Yates and Nelder, being expressed in topics such as split-plot designs, recovering inter-block information and fractional factorials. The "null block-structure" of an experiment is key to this philosophy of design and analysis. However, modern techniques for analysing experiments stress models rather than symmetries, and this modelling approach requires much greater care in analysis, with the consequence that you can easily make mistakes and often will. In this talk I shall underline the obvious, but often unintentionally overlooked, fact that understanding variation at the various levels at which it occurs is crucial to analysis. I shall take three examples, an application of John Nelder's theory of general balance to Lord's Paradox, the use of historical data in drug development and a hybrid randomised non-randomised clinical trial, the TARGET study, to show that the data that many, including those promoting a so-called "causal revolution", assume to be "big" may actually be rather small. The consequence is that there is a danger that the size of standard errors will be underestimated or even that the appropriate regression coefficients for adjusting for confounding may not be identified correctly. I conclude that an old but powerful experimental design approach holds important lessons for observational data about limitations in interpretation that mere numbers cannot overcome. Small may be beautiful, after all. ]]>
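As a minimal illustration of the standard-error point (a Python sketch with invented numbers, not an example from the talk), suppose patients are nested within centres that contribute a shared component of variation. Treating all observations as independent makes the data look big, but the effective sample size is governed by the number of centres:

import numpy as np

rng = np.random.default_rng(42)
n_centres, n_per = 10, 1000                   # 10,000 patients: "big" data?
centre_effect = rng.normal(0.0, 1.0, n_centres)
y = centre_effect[:, None] + rng.normal(0.0, 1.0, (n_centres, n_per))

# naive analysis: all 10,000 observations treated as independent
naive_se = y.ravel().std(ddof=1) / np.sqrt(y.size)

# analysis respecting the components of variation: centres carry the information
centre_means = y.mean(axis=1)
cluster_se = centre_means.std(ddof=1) / np.sqrt(n_centres)

print(naive_se, cluster_se)  # the naive standard error is many times too small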

In designing complex experiments, agricultural scientists, with the help of their statistician collaborators, soon came to realise that variation at different levels had very different consequences for estimating different treatment effects, depending on how the treatments were mapped onto the underlying block structure. This was a key feature of the Rothamsted approach to design and analysis and a strong thread running through the work of Fisher, Yates and Nelder, being expressed in topics such as split-pot designs, recovering inter-block information and fractional factorials. The null block-structure of an experiment is key to this philosophy of design and analysis. However modern techniques for analysing experiments stress models rather than symmetries and this modelling approach requires much greater care in analysis, with the consequence that you can easily make mistakes and often will. In this talk I shall underline the obvious, but often unintentionally overlooked, fact that understanding variation at the various levels at which it occurs is crucial to analysis. I shall take three examples, an application of John Nelders theory of general balance to Lords Paradox, the use of historical data in drug development and a hybrid randomised non-randomised clinical trial, the TARGET study, to show that the data that many, including those promoting a so-called causal revolution, assume to be big may actually be rather small. The consequence is that there is a danger that the size of standard errors will be underestimated or even that the appropriate regression coefficients for adjusting for confounding may not be identified correctly. I conclude that an old but powerful experimental design approach holds important lessons for observational data about limitations in interpretation that mere numbers cannot overcome. Small may be beautiful, after all. ]]>
Thu, 21 Mar 2019 23:14:45 GMT /slideshow/in-search-of-lost-infinities-what-is-the-n-in-big-data/137577741 StephenSenn1@slideshare.net(StephenSenn1) In Search of Lost Infinities: What is the n in big data? StephenSenn1
In Search of Lost Infinities: What is the n in big data? from Stephen Senn
]]>
931 2 https://cdn.slidesharecdn.com/ss_thumbnails/insearchoflostinfinitiesv2-190321231445-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
NNTs, responder analysis & overlap measures /slideshow/nnts-responder-analysis-overlap-measures/128728313 nntsetcdecember2018v2-190121210921
Unfortunately, some have interpreted Numbers Needed to Treat as indicating the proportion of patients on whom the treatment has had a causal effect. This interpretation is very rarely, if ever, necessarily correct. It is certainly inappropriate if based on a responder dichotomy. I shall illustrate the problem using simple causal models. One also sometimes encounters the claim that the extent to which two distributions of outcomes from a clinical trial overlap indicates how many patients benefit. This is also false and can be traced to a similar causal confusion.]]>
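A minimal Python sketch of the first claim (invented numbers, not an analysis from the talk): assume a constant treatment effect on a latent continuous outcome, dichotomised into responders. Every patient benefits, yet the NNT computed from the response rates is nowhere near 1:

import numpy as np

rng = np.random.default_rng(7)
n = 100_000
latent = rng.normal(0.0, 1.0, n)   # patient-to-patient variation in outcome
delta = 0.3                        # constant benefit: every patient improves by 0.3
y_control = latent
y_treated = latent + delta

cutoff = 1.0                       # "responder" defined as outcome above the cutoff
p_treated = (y_treated > cutoff).mean()
p_control = (y_control > cutoff).mean()
nnt = 1.0 / (p_treated - p_control)

print(round(nnt, 1))  # roughly 12, although the treatment helped 100% of patients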

Unfortunately, some have interpreted Numbers Needed to Treat as indicating the proportion of patients on whom the treatment has had a causal effect. This interpretation is very rarely, if ever, necessarily correct. It is certainly inappropriate if based on a responder dichotomy. I shall illustrate the problem using simple causal models. One also sometimes encounters the claim that the extent to which two distributions of outcomes overlap from a clinical trial indicates how many patients benefit. This is also false and can be traced to a similar causal confusion.]]>
Mon, 21 Jan 2019 21:09:21 GMT /slideshow/nnts-responder-analysis-overlap-measures/128728313 StephenSenn1@slideshare.net(StephenSenn1) NNTs, responder analysis & overlap measures StephenSenn1
NNTs, responder analysis & overlap measures from Stephen Senn
]]>
619 5 https://cdn.slidesharecdn.com/ss_thumbnails/nntsetcdecember2018v2-190121210921-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Seventy years of RCTs /slideshow/seventy-years-of-rcts/123069811 seventyyearsofrcts-181115083733
This year marks the 70th anniversary of the Medical Research Council randomised clinical trial (RCT) of streptomycin in tuberculosis led by Bradford Hill. This is widely regarded as a landmark in clinical research. Despite its widespread use in drug regulation and in clinical research more widely, and its high standing with the evidence based medicine movement, the RCT continues to attract criticism. I show that many of these criticisms are traceable to failure to understand two key concepts in statistics: probabilistic inference and design efficiency. To these methodological misunderstandings can be added the practical one of failing to appreciate that entry into clinical trials is not simultaneous but sequential. I conclude that although randomisation should not be used as an excuse for ignoring prognostic variables, it is valuable and that many standard criticisms of RCTs are invalid.]]>

This year marks the 70th anniversary of the Medical Research Council randomised clinical trial (RCT) of streptomycin in tuberculosis led by Bradford Hill. This is widely regarded as a landmark in clinical research. Despite its widespread use in drug regulation and in clinical research more widely and its high standing with the evidence based medicine movement, the RCT continues to attracts criticism. I show that many of these criticisms are traceable to failure to understand two key concepts in statistics: probabilistic inference and design efficiency. To these methodological misunderstandings can be added the practical one of failing to appreciate that entry into clinical trials is not simultaneous but sequential. I conclude that although randomisation should not be used as an excuse for ignoring prognostic variables, it is valuable and that many standard criticisms of RCTs are invalid.]]>
Thu, 15 Nov 2018 08:37:33 GMT /slideshow/seventy-years-of-rcts/123069811 StephenSenn1@slideshare.net(StephenSenn1) Seventy years of RCTs StephenSenn1
Seventy years of RCTs from Stephen Senn
]]>
732 4 https://cdn.slidesharecdn.com/ss_thumbnails/seventyyearsofrcts-181115083733-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
The Rothamsted school meets Lord's paradox /slideshow/the-rothamsted-school-meets-lords-paradox/121603215 therothamstedschoolmeetslordsparadox-181102213941
Lord's paradox is a notoriously difficult puzzle that is guaranteed to provoke discussion, dissent and disagreement. Two statisticians analyse some observational data and come to radically different conclusions, each of which has acquired defenders over the years since Lord first proposed his puzzle in 1967. It features in the recent "Book of Why" by Pearl and Mackenzie, who use it to demonstrate the power of Pearl's causal calculus, obtaining a solution they claim is "unambiguously right". They also claim that statisticians have failed to get to grips with causal questions for well over a century, in fact ever since Karl Pearson developed Galton's idea of correlation and warned the scientific world that "correlation is not causation". However, only two years before Lord published his paradox, John Nelder outlined a powerful causal calculus for analysing designed experiments based on a careful distinction between block and treatment structure. This represents an important advance in formalising the approach to analysing complex experiments that started with Fisher 100 years ago, when he proposed splitting variability using the square of the standard deviation, which he called the variance, continued with Yates and has been developed since the 1960s by Rosemary Bailey, amongst others. This tradition might be referred to as "The Rothamsted School". It is fully implemented in Genstat® but, as far as I am aware, not in any other package. With the help of Genstat®, I demonstrate how the Rothamsted School would approach Lord's paradox and come to a solution that is not the same as the one reached by Pearl and Mackenzie, although given certain strong but untestable assumptions it would reduce to it. I conclude that the statistical tradition may have more to offer in this respect than has been supposed. ]]>
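For orientation, the two statisticians' analyses correspond to comparing mean change scores and to an analysis of covariance adjusting for baseline. A minimal Python sketch on simulated data (the numbers are invented, not Lord's) shows how the two can disagree when the groups differ at baseline and outcomes regress towards their group means:

import numpy as np

rng = np.random.default_rng(3)
n = 1000
group = np.repeat([0, 1], n)                  # e.g. the two halls in Lord's example
mu = np.where(group == 1, 70.0, 60.0)         # the groups differ at baseline
baseline = mu + rng.normal(0.0, 10.0, 2 * n)
# no average change in either group, but regression towards the group mean
final = mu + 0.5 * (baseline - mu) + rng.normal(0.0, 5.0, 2 * n)

# Statistician 1: mean change score per group; both are about zero, so "no effect"
change = final - baseline
print(change[group == 0].mean(), change[group == 1].mean())

# Statistician 2: analysis of covariance, adjusting final weight for baseline
X = np.column_stack([np.ones(2 * n), baseline, group])
beta, *_ = np.linalg.lstsq(X, final, rcond=None)
print(beta[2])  # about (1 - 0.5) * 10 = 5: an apparent group "effect"

Which answer is right depends on assumptions the data alone cannot settle, which is why the puzzle has provoked dissent for over fifty years.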

Lords paradox is a notoriously difficult puzzle that is guaranteed to provoke discussion, dissent and disagreement. Two statisticians analyse some observational data and come to radically different conclusions, each of which has acquired defenders over the years since Lord first proposed his puzzle in 1967. It features in the recent Book of Why by Pearl and McKenzie, who use it to demonstrate the power of Pearls causal calculus, obtaining a solution they claim is unambiguously right. They also claim that statisticians have failed to get to grips with causal questions for well over a century, in fact ever since Karl Pearson developed Galtons idea of correlation and warned the scientific world that correlation is not causation. However, only two years before Lord published his paradox John Nelder outlined a powerful causal calculus for analyzing designed experiments based on a careful distinction between block and treatment structure. This represents an important advance in formalizing the approach to analysing complex experiments that started with Fisher 100 years ago, when he proposed splitting variability using the square of the standard deviation, which he called the variance, continued with Yates and has been developed since the 1960s by Rosemary Bailey, amongst others. This tradition might be referred to as The Rothamsted School. It is fully implemented in Genstat速 but, as far as I am aware, not in any other package. With the help of Genstat速, I demonstrate how the Rothamsted School would approach Lords paradox and come to a solution that is not the same as the one reached by Pearl and McKenzie, although given certain strong but untestable assumptions it would reduce to it. I conclude that the statistical tradition may have more to offer in this respect than has been supposed. ]]>
Fri, 02 Nov 2018 21:39:41 GMT /slideshow/the-rothamsted-school-meets-lords-paradox/121603215 StephenSenn1@slideshare.net(StephenSenn1) The Rothamsted school meets Lord's paradox StephenSenn1
The Rothamsted school meets Lord's paradox from Stephen Senn
]]>
7994 7 https://cdn.slidesharecdn.com/ss_thumbnails/therothamstedschoolmeetslordsparadox-181102213941-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Worked in the pharmaceutical industry, in the National Health Service and in further education and in a research institute. Have consulted on drug development for over 50 clients. Specialties: Statistical methods for designing and analysing drug development programmes and clinical trials. www.senns.demon.co.uk/home.html