Slideshows by User: kim.herzig (SlideShare feed)

Keynote AST 2016 (/slideshow/keynote-ast-2016/63509323)
Published: Tue, 28 Jun 2016 04:54:04 GMT
Kim Herzig giving a keynote at the Workshop on Automation of Software Test (AST 2016) in Austin, Texas, 2016.

Empirically Detecting False Test Alarms Using Association Rules @ ICSE 2015 (/kim.herzig/test-fp-predictionicse2015)
Published: Wed, 10 Jun 2015 09:29:02 GMT
Applying code changes to software systems and testing these changes can be a complex task that involves many different types of software testing strategies, e.g. system and integration tests. However, not all test failures reported during code integration point to code defects. Testing large systems such as the Microsoft Windows operating system requires complex test infrastructures, which may lead to test failures caused by faulty tests and test infrastructure issues. Such false test alarms are particularly annoying as they demand engineers' attention and require manual inspection without providing any benefit. The goal of this work is to use empirical data to minimize the number of false test alarms reported during system and integration testing. To achieve this goal, we use association rule learning to identify patterns among failing test steps that are typical for false test alarms and can be used to classify them automatically. A successful classification of false test alarms is particularly valuable for product teams, as manual test failure inspection is an expensive and time-consuming process that not only costs engineering time and money but also slows down product development. We evaluated our approach on system and integration tests executed during Windows 8.1 and Microsoft Dynamics AX development. Performing more than 10,000 classifications for each product, our model shows a mean precision between 0.85 and 0.90, predicting between 34% and 48% of all false test alarms.
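A minimal sketch of the core idea, under simplifying assumptions: mine association rules of the form {failing test steps} -> false alarm from labelled historical failures, then flag new failures that match a mined rule. The step names, thresholds, and the brute-force itemset enumeration below are illustrative and not the paper's implementation.

```python
# Hypothetical sketch: association rules over failing test steps that signal false alarms.
from collections import defaultdict
from itertools import combinations

# Historical records: (set of failing test step names, was the failure a false alarm?)
HISTORY = [
    ({"setup_vm", "copy_build"}, True),
    ({"setup_vm", "copy_build", "run_suite"}, True),
    ({"run_suite", "verify_output"}, False),
    ({"copy_build", "run_suite"}, False),
    ({"setup_vm", "copy_build"}, True),
]

def mine_rules(history, min_support=2, min_confidence=0.8, max_size=2):
    """Return rules {steps} -> false alarm with sufficient support and confidence."""
    pattern_total = defaultdict(int)  # how often a step combination failed at all
    pattern_false = defaultdict(int)  # ...and the failure was a false alarm
    for steps, is_false_alarm in history:
        for size in range(1, max_size + 1):
            for combo in combinations(sorted(steps), size):
                pattern_total[combo] += 1
                if is_false_alarm:
                    pattern_false[combo] += 1
    return {
        combo: pattern_false[combo] / total
        for combo, total in pattern_total.items()
        if pattern_false[combo] >= min_support
        and pattern_false[combo] / total >= min_confidence
    }

def is_likely_false_alarm(failing_steps, rules):
    """Classify a new failure as a false alarm if any mined rule matches it."""
    return any(set(combo) <= failing_steps for combo in rules)

rules = mine_rules(HISTORY)
print(rules)  # e.g. {('setup_vm',): 1.0, ('copy_build', 'setup_vm'): 1.0}
print(is_likely_false_alarm({"setup_vm", "copy_build", "extra_step"}, rules))  # True
```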

The Art of Testing Less without Sacrificing Quality @ ICSE 2015 (/slideshow/icse-2015-theowide/49212004)
Published: Wed, 10 Jun 2015 09:20:20 GMT
Testing is a key element of software development processes for the management and assessment of product quality. In most development environments, the software engineers are responsible for ensuring the functional correctness of code. However, for large, complex software products there is an additional need to check that changes do not negatively impact other parts of the software and that they comply with system constraints such as backward compatibility, performance, and security. Ensuring these system constraints may require complex verification infrastructure and test procedures. Although such tests are time-consuming, expensive, and rarely find defects, they act as an insurance process to ensure the software is compliant. However, long-running tests increasingly conflict with strategic aims to shorten release cycles. To decrease production costs and to improve development agility, we created a generic test selection strategy called THEO that accelerates test processes without sacrificing product quality. THEO is based on a cost model that dynamically skips tests when the expected cost of running a test exceeds the expected cost of removing it. We replayed past development periods of three major Microsoft products, resulting in a 50% reduction of test executions and saving millions of dollars per year while maintaining product quality.
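A minimal sketch of the cost trade-off described above, assuming made-up rates and costs; this is not THEO's actual model. A test is skipped when the expected cost of executing it (machine time plus inspection of false alarms) exceeds the expected cost of letting a defect escape to a later stage.

```python
# Hypothetical cost model in the spirit of the trade-off above; all numbers are assumptions.
from dataclasses import dataclass

@dataclass
class TestStats:
    machine_cost_per_run: float   # infrastructure cost of one execution
    false_alarm_rate: float       # probability a reported failure is a false alarm
    inspection_cost: float        # engineering cost to inspect a reported failure
    defect_detection_rate: float  # probability one run catches a genuine defect
    escaped_defect_cost: float    # cost if that defect is found later instead

def expected_cost_of_running(t: TestStats) -> float:
    return t.machine_cost_per_run + t.false_alarm_rate * t.inspection_cost

def expected_cost_of_skipping(t: TestStats) -> float:
    return t.defect_detection_rate * t.escaped_defect_cost

def should_skip(t: TestStats) -> bool:
    """Skip the test when running it is expected to cost more than skipping it."""
    return expected_cost_of_running(t) > expected_cost_of_skipping(t)

# A noisy test that rarely finds real defects: 2 + 0.4 * 50 = 22 > 0.001 * 5000 = 5.
noisy_test = TestStats(machine_cost_per_run=2.0, false_alarm_rate=0.4,
                       inspection_cost=50.0, defect_detection_rate=0.001,
                       escaped_defect_cost=5000.0)
print(should_skip(noisy_test))  # True
```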

Code Ownership and Software Quality: A Replication Study @ MSR 2015 (/slideshow/msr-2015-ownership/49195685)
Published: Tue, 09 Jun 2015 22:37:09 GMT
In a traditional sense, ownership determines rights and duties with regard to an object, for example a property. The owner of source code usually refers to the person who originally wrote the code. However, larger code artifacts, such as files, are usually composed by multiple engineers contributing to the entity over time through a series of changes. Frequently, the person with the highest contribution, e.g. the largest number of code changes, is defined as the code owner and takes responsibility for it. Thus, code ownership relates to the knowledge engineers have about code. Lacking responsibility for and knowledge about code can reduce code quality. In an earlier study, Bird et al. [1] showed that Windows binaries that lacked clear code ownership were more likely to be defect prone. However, recommendations for large artifacts such as binaries are usually not actionable; for example, changing the concept of binaries and refactoring them to ensure strong ownership would violate system architecture principles. A recent study by Foucault et al. [2] replicated the original study on open source software and raised doubts about the general concept of ownership impacting code quality. In this paper, we replicated and extended the two previous ownership studies [1, 2] and reflect on their findings. Further, we define several new ownership metrics to investigate the dependency between ownership and code quality at the file and directory level for four major Microsoft products. The results confirm the original findings by Bird et al. [1] that code ownership correlates with code quality. Using new and refined code ownership metrics, we were able to classify source files that contained at least one bug with a median precision of 0.74 and a median recall of 0.38. At the directory level, we achieve a precision of 0.76 and a recall of 0.60.
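A minimal sketch of two ownership metrics in the spirit of Bird et al. [1], assuming a per-file list of change authors: the proportion of changes made by the top contributor, and the number of "minor" contributors below a 5% threshold. The threshold, field layout, and example data are illustrative assumptions rather than the exact metrics evaluated in the paper.

```python
# Hypothetical ownership metrics for a single file; threshold and data are assumptions.
from collections import Counter

def ownership_metrics(change_authors, minor_threshold=0.05):
    """change_authors: one entry per change to the file, naming its author."""
    counts = Counter(change_authors)
    total = sum(counts.values())
    proportions = {author: n / total for author, n in counts.items()}
    return {
        "ownership": max(proportions.values()),  # share of the top contributor
        "minor_contributors": sum(1 for p in proportions.values() if p < minor_threshold),
    }

# Example: a file touched 22 times by three engineers.
history = ["alice"] * 16 + ["bob"] * 5 + ["carol"]
print(ownership_metrics(history))  # ownership ~0.73, one minor contributor (carol)
```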

Issre2014 test defectprediction (/slideshow/issre2014-test-defectprediction-41517117/41517117)
Published: Thu, 13 Nov 2014 10:27:10 GMT
Using Pre-Release Test Failures to Build Defect Prediction Models

The Impact of Test Ownership and Team Structure on the Reliability and Effectiveness of Quality Test Runs (/slideshow/esem2014-test-organization-2/39321165)
Published: Sat, 20 Sep 2014 09:10:46 GMT
Context: Software testing is a crucial step in most software development processes. Testing software is a key component of managing and assessing the risk of shipping quality products to customers. But testing is also an expensive process, and changes to the system need to be tested thoroughly, which may take time. Thus, the quality of a software product depends on the quality of its underlying testing process and on the effectiveness and reliability of individual test cases.
Goal: In this paper, we investigate the impact of the organizational structure of test owners on the reliability and effectiveness of the corresponding test cases. Prior empirical research on organizational structure has focused only on developer activity. We expand the scope of empirical knowledge by assessing the impact of organizational structure on testing activities.
Method: We performed an empirical study on the Windows build verification test suites (BVT) and relate effectiveness and reliability measures of each test run to the complexity and size of the organizational sub-structure that encloses all owners of the test cases executed.
Results: Our results show that organizational structure impacts both test effectiveness and test execution reliability. We are also able to predict effectiveness and reliability with fairly high precision and recall values.
Conclusion: We suggest reviewing test suites with respect to their organizational composition. As indicated by the results of this study, this would increase test effectiveness and reliability, development speed, and developer satisfaction.
More details: ESEM 2014 presentation for the paper "The Impact of Test Ownership and Team Structure on the Reliability and Effectiveness of Quality Test Runs"; see http://dl.acm.org/citation.cfm?id=2652524.2652535.
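A minimal sketch of one way to quantify the "organizational sub-structure" mentioned in the Method paragraph, assuming a toy org chart: find the lowest common manager of all test owners in a run and how many reporting levels that sub-tree spans. The org chart and both measures are illustrative assumptions, not the metrics used in the paper.

```python
# Hypothetical org-structure measure for the owners of the tests in a single run.
# "manager of" relation; None marks the top of the (assumed) org chart.
ORG = {
    "alice": "lead_a", "bob": "lead_a", "carol": "lead_b",
    "lead_a": "manager_x", "lead_b": "manager_x", "manager_x": None,
}

def chain_to_root(person):
    """Management chain from a person up to the top of the org chart."""
    chain = [person]
    while ORG[person] is not None:
        person = ORG[person]
        chain.append(person)
    return chain

def enclosing_substructure(owners):
    """Lowest common manager of all owners and the number of levels spanned below them."""
    chains = [chain_to_root(o) for o in owners]
    common = set(chains[0])
    for chain in chains[1:]:
        common &= set(chain)
    lowest_common_manager = min(common, key=chains[0].index)
    levels_spanned = max(chain.index(lowest_common_manager) for chain in chains)
    return lowest_common_manager, levels_spanned

print(enclosing_substructure(["alice", "bob"]))    # ('lead_a', 1): small, cohesive sub-tree
print(enclosing_substructure(["alice", "carol"]))  # ('manager_x', 2): wider org span
```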

Predicting Defects Using Change Genealogies (ISSRE 2013) (/slideshow/issre-slideshare/28090983)
Published: Sun, 10 Nov 2013 12:41:55 GMT

Mining and Untangling Change Genealogies (PhD Defense Talk) (/slideshow/mining-and-untangling-change-genealogies-phd-defense-talk/21876315)
Published: Sat, 25 May 2013 01:24:48 GMT

The Impact of Tangled Code Changes (/slideshow/the-impact-of-tangled-code-changes/21875275)
Published: Sat, 25 May 2013 01:05:43 GMT

Mining Cause Effect Chains from Version Archives - ISSRE 2011 (/slideshow/mining-cause-effect-chains-from-version-archives-issre-2011/10392636)
Published: Wed, 30 Nov 2011 00:42:20 GMT
Software reliability is determined by software changes. How do these changes relate to each other? By analyzing the impacted method definitions and usages, we determine dependencies between changes, resulting in a change genealogy that captures how earlier changes enable and cause later ones. Model checking this genealogy reveals temporal process patterns that encode key features of the software process: “Whenever class A is changed, its test case is later updated as well.” Such patterns can be validated automatically: In an evaluation of four open source histories, our prototype would recommend pending activities with a precision of 60–72%.
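A minimal sketch of the genealogy construction, under assumptions about the available data: each change records the methods it modifies and the methods it uses, and a change depends on the earlier change that last touched any of those methods. The change records and the dependency rule are illustrative, not the tool evaluated above.

```python
# Hypothetical change genealogy: map each change to the earlier changes it depends on.
# Each record: (change id, methods it defines/modifies, methods it uses/calls).
CHANGES = [
    ("c1", {"A.parse"}, set()),
    ("c2", {"A.parse", "A.validate"}, set()),
    ("c3", {"TestA.test_parse"}, {"A.parse"}),
    ("c4", {"B.render"}, {"A.validate"}),
]

def build_genealogy(changes):
    """changes must be in commit order; returns {change id: set of enabling change ids}."""
    last_touched = {}  # method name -> id of the change that last modified it
    depends_on = {cid: set() for cid, _, _ in changes}
    for cid, modified, used in changes:
        for method in modified | used:
            if method in last_touched:
                depends_on[cid].add(last_touched[method])
        for method in modified:
            last_touched[method] = cid
    return depends_on

print(build_genealogy(CHANGES))  # {'c1': set(), 'c2': {'c1'}, 'c3': {'c2'}, 'c4': {'c2'}}
# A temporal pattern such as "changes to A.parse are later followed by changes to
# TestA.test_parse" would be mined from many such edges, e.g. the c2 -> c3 dependency.
```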

Network vs. Code Metrics to Predict Defects: A Replication Study (/kim.herzig/network-vs-code-metrics-to-predict-defects-a-replication-study)
Published: Tue, 27 Sep 2011 08:29:15 GMT

Capturing the Long Term Impact of Changes (/slideshow/capturing-the-long-term-impact-of-changes/3968651)
Published: Tue, 04 May 2010 15:28:30 GMT
I gave this talk at the doctoral symposium at ICSE 2010. The number of slides was limited to five; otherwise I would have put things a bit more into context.

Software Engineering Course 2009 - Mining Software Archives (/kim.herzig/software-engineering-course-2009-mining-software-archives)
Published: Thu, 30 Jul 2009 15:39:30 GMT

About the author: My research is concerned with empirical software engineering and mining software repositories. In particular, I analyze development processes, test strategies, and version repository branching structures, and their impact on code quality, development agility, and efficiency at Microsoft. www.kim-herzig.de