際際滷Share feed for slideshows by user DanielOberski (retrieved Wed, 20 Mar 2019 14:37:17 GMT)

Differential Privacy and social science
Published Wed, 20 Mar 2019 14:37:17 GMT | /slideshow/oberski-differential-privacy-and-social-science/137318883

Science should be open and reproducible, so the pressure is on to publish data, including data on people. But the pressure is also on to protect individual research participants, for which it is necessary to inject randomness into the data. I will discuss why this is true and what it means for researchers who want to analyze data from people. As I will argue, differential privacy will likely force us to change our ways: we will need to account for privacy error in our statistics, increase our sample sizes, make more use of preregistration or other self-limitation where possible, and sometimes completely change our data collection designs.
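The "inject randomness" step the abstract refers to can be illustrated with the Laplace mechanism, the standard building block of differential privacy: add noise whose scale is the query's sensitivity divided by the privacy budget epsilon. The following stdlib-Python sketch is illustrative and not taken from the slides; the data and predicate are made up.

```python
import math
import random

def laplace_noise(scale):
    """Draw one Laplace(0, scale) variate by inverse-CDF sampling (stdlib only)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(values, predicate, epsilon):
    """Release a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one person changes it
    by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: how many respondents report smoking?
random.seed(1)
data = [True, False, True, True, False]
noisy = private_count(data, lambda smokes: smokes, epsilon=0.5)
```

Note the trade-off the talk describes: smaller epsilon means stronger privacy but larger "privacy error" around the true count of 3, which analyses must then account for.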
Oberski EAM 2018 - Incidental data for serious social research
Published Wed, 25 Jul 2018 16:53:10 GMT | /slideshow/oberski-eam-2018-incidental-data-for-serious-social-research/107470327

Incidental data are data that people produce incidentally, as a byproduct of the normal course of operations of a platform, business, or government. Well-known examples include using Twitter, Facebook, Google search, smartphones, badges, etc. to study social phenomena such as election behavior, attitudes, employment, or consumer confidence. It has been almost ten years since various high-impact papers and books proclaimed the end of traditional social research and the beginning of a new era of exciting possibilities for social science. In this talk, I review the evidence from the past decade or so on the value of incidental data for social research. This includes research not only in social science but also in the humanities, data mining, and machine learning communities. While it is safe to say traditional social research has not ended, I conclude that incidental data may indeed allow for an "update" of (some of) social science. However, to accomplish this, a considerable amount of work is still needed; I envision that methodologists will be at the forefront of this work - if we can "update" ourselves as well. I end the lecture with some suggestions of where methodologists could start "pulling the thread" of other literatures to leverage incidental data for social research.
ESRA2015 course: Latent Class Analysis for Survey Research
Published Mon, 10 Aug 2015 13:27:18 GMT | /slideshow/esra2015-course-latent-class-analysis-for-survey-research/51458541

際際滷s for a 3-hour short course given at the European Survey Research Association's 2015 meeting in Reykjavík, Iceland. The course gives a short introduction to latent class analysis (LCA) for survey methodologists. R code and some Latent GOLD input are also provided. The R code and data for the examples can be found at http://daob.nl/wp-content/uploads/2015/07/ESRA-LCA-analyses-data.zip
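The course materials themselves are in R and Latent GOLD. As a language-agnostic illustration of the model the course introduces, here is a Python sketch (with made-up parameters) of the core LCA identity: under local independence, the probability of a response pattern is a class-size-weighted product of item probabilities.

```python
from itertools import product

# Hypothetical 2-class model for three yes/no survey items.
class_sizes = [0.6, 0.4]          # P(class)
p_yes = [
    [0.9, 0.8, 0.7],              # P(item j = "yes" | class 1)
    [0.2, 0.3, 0.1],              # P(item j = "yes" | class 2)
]

def pattern_prob(pattern):
    """P(response pattern) under local independence:
    sum over classes of P(class) * prod_j P(y_j | class)."""
    total = 0.0
    for size, probs in zip(class_sizes, p_yes):
        lik = size
        for y, p in zip(pattern, probs):
            lik *= p if y == 1 else 1.0 - p
        total += lik
    return total

# Sanity check: the probabilities of all 2^3 patterns must sum to 1.
all_patterns = list(product([0, 1], repeat=3))
total = sum(pattern_prob(p) for p in all_patterns)
```

Estimation (e.g., by EM, as in the R and Latent GOLD examples from the course) amounts to choosing `class_sizes` and `p_yes` to maximize the likelihood of the observed patterns under exactly this formula.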
Complex sampling in latent variable models
Published Thu, 11 Jun 2015 11:57:23 GMT | /slideshow/complex-sampling-in-latent-variable-models/49265445

How complex (survey) sampling interacts with latent variable modeling, and why that matters.
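One concrete way complex sampling interacts with estimation is through unequal inclusion weights, which inflate sampling variance. A common summary of this inflation is Kish's approximate design effect; the following stdlib-Python sketch is illustrative and not taken from the slides, which concern latent variable models specifically.

```python
def kish_deff(weights):
    """Kish's approximate design effect from unequal weighting:
    deff = n * sum(w_i^2) / (sum(w_i))^2.
    Equals 1 for equal weights and grows with weight variability."""
    n = len(weights)
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    return n * s2 / (s1 * s1)

def effective_sample_size(weights):
    """n_eff = n / deff: the equal-weight sample size that would give
    the same variance as this weighted sample."""
    return len(weights) / kish_deff(weights)

equal = [1.0] * 100
unequal = [1.0] * 50 + [3.0] * 50   # hypothetical weights
```

With the hypothetical weights above, deff is 1.25, so 100 weighted observations carry roughly the information of 80 equally weighted ones; ignoring this in a latent variable analysis understates standard errors.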
lavaan.survey: An R package for complex survey analysis of structural equation models
Published Thu, 11 Jun 2015 11:54:47 GMT | /slideshow/oberski-lavaansurveyslides/49265357

Discusses open-source software that allows structural equation modeling of complex survey data.
How good are administrative register data and what can we do about it?
Published Thu, 11 Jun 2015 11:52:44 GMT | /slideshow/oberski-datasciencepitch201504/49265287

Data science pitch held at the Tilburg Data Science Center.
Multidirectional survey measurement errors: the latent class MTMM model
Published Thu, 11 Jun 2015 11:51:07 GMT | /slideshow/multidirectional-survey-measurement-errors-the-latent-class-mtmm-model/49265228

Perhaps one day we will figure out how to ask perfect survey questions. In the meantime, survey analyses are biased by random and correlated measurement errors, and evaluating the extent of such errors is therefore essential, both to remove the bias and to improve our question design. When there is no gold standard, these errors are often estimated from multitrait-multimethod (MTMM) experiments or longitudinal data by applying linear or ordinal factor models, which assume that (latent) measurement is linear and that the only type of method bias is one that pushes the answers monotonically in a particular direction (acquiescence, for example). However, not all measurement is linear and not all method bias is monotone. Extreme response tendencies, for example, are nonmonotone, as are primacy and recency effects, which act on just one category. Like the monotone kind, such method effects also lead to spurious dependencies among different survey questions, distorting their true relationships. Diagnosing, preventing, or correcting for such distortions therefore calls for a model that can account for them. For this purpose I will discuss the latent class MTMM model (Oberski 2011), in which a latent loglinear modeling approach is combined with the MTMM design to yield a model that provides detailed information about the measurement quality of survey questions while also dealing with nonmonotone method biases. I will discuss the method's assumptions and demonstrate it on a few often-used survey questions. Standard software for latent class analysis can be used to estimate this model, so that evaluating the extent of nonlinear random and correlated measurement errors is now a reasonably user-friendly experience for survey researchers.
Predicting the quality of a survey question from its design characteristics: SQP
Published Thu, 11 Jun 2015 11:50:07 GMT | /slideshow/predicting-the-quality-of-a-survey-question-from-its-design-characteristics-sqp/49265193

Talk held at the Census Bureau, 2011.
Predicting the quality of a survey question from its design characteristics
Published Thu, 11 Jun 2015 11:47:12 GMT | /slideshow/predicting-the-quality-of-a-survey-question-from-its-design-characteristics/49265107

SQP is a program that predicts the reliability of a survey question from its design characteristics. This presentation, held at the European Social Survey's Quality Enhancement Meeting in Barcelona, discusses the workings behind the program.
Detecting local dependence in latent class models
Published Thu, 11 Jun 2015 11:44:54 GMT | /slideshow/detecting-local-dependence-in-latent-class-models/49265028

Latent class (mixture) models are often used in a wide range of fields. These models assume that the observed variables are independent given the latent classes: local independence. What if this assumption does not hold?
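A standard family of diagnostics for local dependence compares the observed two-way table for a pair of items with the table the fitted model implies; bivariate residuals in Latent GOLD work in this spirit. The Python sketch below uses hypothetical parameters and a generic Pearson-style statistic; the slides may define the diagnostic differently.

```python
def expected_pair_table(class_sizes, p_yes_a, p_yes_b):
    """Model-expected 2x2 table for binary items a and b under local
    independence: P(a, b) = sum_c P(c) * P(a | c) * P(b | c)."""
    table = [[0.0, 0.0], [0.0, 0.0]]
    for size, pa, pb in zip(class_sizes, p_yes_a, p_yes_b):
        for a in (0, 1):
            for b in (0, 1):
                table[a][b] += size * (pa if a else 1 - pa) * (pb if b else 1 - pb)
    return table

def pearson_residual_stat(observed, expected, n):
    """Pearson chi-square comparing observed pair proportions with the
    model-expected ones, scaled to counts by the sample size n."""
    stat = 0.0
    for a in (0, 1):
        for b in (0, 1):
            e = n * expected[a][b]
            o = n * observed[a][b]
            stat += (o - e) ** 2 / e
    return stat

# Hypothetical 2-class model for one item pair; observed set equal to
# expected here, so the statistic is 0 (perfect fit).
exp_tab = expected_pair_table([0.5, 0.5], [0.9, 0.2], [0.8, 0.3])
stat = pearson_residual_stat(exp_tab, exp_tab, n=500)
```

In practice one would plug in the empirical pair proportions for `observed`; a large statistic for a specific item pair flags residual association that the latent classes fail to explain, i.e., local dependence.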
A measure to evaluate latent variable model fit by sensitivity analysis /slideshow/oberski-epcinterestleiden/49264940 oberski-epc-interest-leiden-150611114131-lva1-app6892
Latent variable models involve restrictions on the data that can be formulated in terms of "misspecifications": restrictions with a model-based meaning. Examples include zero cross-loadings and local dependencies, as well as measurement invariance or differential item functioning. If incorrect, misspecifications can potentially disturb the main purpose of the latent variable analysisseriously so in some cases. Recently, I proposed to evaluate whether a particular analysis at hand is such a case or not. To do this, I define a measure based on the likelihood of the restricted model that approximates the change in the parameters of interest if the misspecification were freed, the EPC-interest. The main idea is to examine the EPC-interest and free those misspecifications that are important while ignoring those that are not. I have implemented the EPC-interest in the lavaan software for structural equation modeling and the Latent Gold software for latent class analysis. This approach can resolve several problems and inconsistencies in the current practice of model fit evaluation used in latent variable analysis, something I illustrate using analyses from the measurement invariance literature and from item response theory.]]>

Latent variable models involve restrictions on the data that can be formulated in terms of "misspecifications": restrictions with a model-based meaning. Examples include zero cross-loadings and local dependencies, as well as measurement invariance or differential item functioning. If incorrect, misspecifications can potentially disturb the main purpose of the latent variable analysisseriously so in some cases. Recently, I proposed to evaluate whether a particular analysis at hand is such a case or not. To do this, I define a measure based on the likelihood of the restricted model that approximates the change in the parameters of interest if the misspecification were freed, the EPC-interest. The main idea is to examine the EPC-interest and free those misspecifications that are important while ignoring those that are not. I have implemented the EPC-interest in the lavaan software for structural equation modeling and the Latent Gold software for latent class analysis. This approach can resolve several problems and inconsistencies in the current practice of model fit evaluation used in latent variable analysis, something I illustrate using analyses from the measurement invariance literature and from item response theory.]]>
Thu, 11 Jun 2015 11:41:31 GMT /slideshow/oberski-epcinterestleiden/49264940 DanielOberski@slideshare.net(DanielOberski) A measure to evaluate latent variable model fit by sensitivity analysis DanielOberski Latent variable models involve restrictions on the data that can be formulated in terms of "misspecifications": restrictions with a model-based meaning. Examples include zero cross-loadings and local dependencies, as well as measurement invariance or differential item functioning. If incorrect, misspecifications can potentially disturb the main purpose of the latent variable analysis, seriously so in some cases. Recently, I proposed to evaluate whether a particular analysis at hand is such a case or not. To do this, I define a measure based on the likelihood of the restricted model that approximates the change in the parameters of interest if the misspecification were freed, the EPC-interest. The main idea is to examine the EPC-interest and free those misspecifications that are important while ignoring those that are not. I have implemented the EPC-interest in the lavaan software for structural equation modeling and the Latent Gold software for latent class analysis. This approach can resolve several problems and inconsistencies in the current practice of model fit evaluation used in latent variable analysis, something I illustrate using analyses from the measurement invariance literature and from item response theory. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/oberski-epc-interest-leiden-150611114131-lva1-app6892-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Latent variable models involve restrictions on the data that can be formulated in terms of &quot;misspecifications&quot;: restrictions with a model-based meaning. Examples include zero cross-loadings and local dependencies, as well as measurement invariance or differential item functioning. 
If incorrect, misspecifications can potentially disturb the main purpose of the latent variable analysis, seriously so in some cases. Recently, I proposed to evaluate whether a particular analysis at hand is such a case or not. To do this, I define a measure based on the likelihood of the restricted model that approximates the change in the parameters of interest if the misspecification were freed, the EPC-interest. The main idea is to examine the EPC-interest and free those misspecifications that are important while ignoring those that are not. I have implemented the EPC-interest in the lavaan software for structural equation modeling and the Latent Gold software for latent class analysis. This approach can resolve several problems and inconsistencies in the current practice of model fit evaluation used in latent variable analysis, something I illustrate using analyses from the measurement invariance literature and from item response theory.
A measure to evaluate latent variable model fit by sensitivity analysis from Daniel Oberski
]]>
1045 2 https://cdn.slidesharecdn.com/ss_thumbnails/oberski-epc-interest-leiden-150611114131-lva1-app6892-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
https://cdn.slidesharecdn.com/profile-photo-DanielOberski-48x48.jpg?cb=1615982150 daob.nl https://cdn.slidesharecdn.com/ss_thumbnails/oberski-vvsor-2019-190320143718-thumbnail.jpg?width=320&height=320&fit=bounds slideshow/oberski-differential-privacy-and-social-science/137318883 Differential Privacy a... https://cdn.slidesharecdn.com/ss_thumbnails/oberski-eam2018-nomovies-180725165310-thumbnail.jpg?width=320&height=320&fit=bounds slideshow/oberski-eam-2018-incidental-data-for-serious-social-research/107470327 Oberski EAM 2018 - Inc... https://cdn.slidesharecdn.com/ss_thumbnails/esra-course-slides-150810132719-lva1-app6892-thumbnail.jpg?width=320&height=320&fit=bounds slideshow/esra2015-course-latent-class-analysis-for-survey-research/51458541 ESRA2015 course: Laten...