Slideshows by serge_demeyerUA (SlideShare feed)

"In Silico" Research: Software Engineering to the Rescue
Wed, 02 Oct 2024 12:00:23 GMT
In the last decade, In Silico research has become a standard tool in the arsenal of scientific research. It complements traditional in vitro and in vivo research through the careful construction of a simulation model for the phenomenon under investigation, executed on high-performance computers. These simulation models are powered by data pipelines encoded with a diverse set of programming tools (Matlab, Python, R, Jupyter notebooks, ...). In Silico research is a critical driver in today's digital society; witness the debate around climate change or the policies concerning COVID protection measures. However, just like all research infrastructure, such data processing pipelines need proper calibration and maintenance. There is an inherent threat to validity (named Instrument Validity) stating that a data processing pipeline may not measure what it is designed to measure, hence the results may not be truthful. Indeed, subtle changes in the underlying program code or data schema may affect the results of the simulation in unpredictable ways, completely invalidating the conclusions. In this presentation we share recent and not-so-recent insights from software engineering research that help alleviate this threat to validity.
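To make the instrument-validity concern concrete, here is a minimal sketch of a common software engineering safeguard: a golden-output regression test that pins a pipeline's result on a fixed input, so that subtle code or schema changes surface as test failures instead of silently shifting conclusions. The pipeline and all names below are hypothetical, not taken from the talk.

```python
# A minimal sketch of a "golden output" regression test for a data pipeline:
# pin the result on a fixed input so that subtle changes to the code or the
# data schema are flagged instead of silently altering conclusions.
# The pipeline below is hypothetical; the technique is the point.

import hashlib
import json

def pipeline(records):
    """Hypothetical analysis step: mean reading per station, 2 decimals."""
    totals = {}
    for r in records:
        station = r["station"]  # relies on the current data schema
        totals.setdefault(station, []).append(r["reading"])
    return {s: round(sum(v) / len(v), 2) for s, v in sorted(totals.items())}

def fingerprint(result):
    """Stable hash of the pipeline output (canonical JSON serialisation)."""
    blob = json.dumps(result, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

FIXED_INPUT = [
    {"station": "A", "reading": 1.0},
    {"station": "A", "reading": 2.0},
    {"station": "B", "reading": 3.0},
]

# Recorded once from a trusted run, then kept under version control.
GOLDEN = fingerprint({"A": 1.5, "B": 3.0})

assert fingerprint(pipeline(FIXED_INPUT)) == GOLDEN, "pipeline drifted!"
print("pipeline output matches the golden fingerprint")
```

If a later refactoring or a schema change alters the output even slightly, the fingerprint comparison fails and forces a deliberate review of the new "golden" value.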

"In Silico" Research: Software Engineering to the Rescue from University of Antwerp
Research Methods in Computer Science and Software Engineering
Mon, 30 Sep 2024 13:17:59 GMT
Slide deck from a tutorial aimed at PhD students who want a better grasp of what exactly constitutes good research. We explore the role of research methods in computer science, drawing upon practical examples from empirical approaches in software engineering.

Research Methods in Computer Science and Software Engineering from University of Antwerp
MUT4SLX: Extensions for Mutation Testing of Stateflow Models
Tue, 12 Mar 2024 15:47:40 GMT
Several experience reports illustrate that mutation testing is capable of supporting a shift-left testing strategy for software systems coded in textual programming languages like C++. For graphical modeling languages like Simulink, such experience reports are missing, primarily because of a lack of adequate tool support. In this paper, we extend MUT4SLX, a tool for automatic mutant generation and test execution of Simulink models based on block diagrams. The tool is extended to support mutation operators for Stateflow models, which, to the best of our knowledge, are not supported by any other tool. The current version of MUT4SLX has 8 operators that are modeled after realistic faults (mined from an industrial bug database) and are fast to inject (they only replace parameter values). An experimental evaluation on four sample projects shows that MUT4SLX performs mutation analysis reasonably fast, although mutant execution remains the more time-consuming step.

MUT4SLX: Extensions for Mutation Testing of Stateflow Models from University of Antwerp
AI For Software Engineering: Two Industrial Experience Reports
Wed, 02 Nov 2022 07:56:28 GMT
Technical presentation for the Plenary 2 meeting of the SmartDelta project.

AI For Software Engineering: Two Industrial Experience Reports from University of Antwerp
Test Amplification in Python: An Industrial Experience Report
Fri, 07 Oct 2022 11:22:32 GMT
Software test amplification is the act of strengthening manually written test cases to exercise the boundary conditions of the system under test. Several academic tool prototypes have been proposed by the research community so far: DSpot (for Java), AmPyfier (for Python) and Small-Amp (for Pharo Smalltalk). Up until now, these tool prototypes have only been validated on a series of open-source systems; concrete experience reports from actual use within the software industry are still lacking. In this presentation, we share our experience with AmPyfier as applied within the context of Garvis, a start-up company from the University of Antwerp.
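As a rough illustration of what test amplification means (a deliberately simplified sketch, not AmPyfier's actual algorithm, and with hypothetical names throughout), the snippet below takes a manually written seed test, derives boundary-oriented input variants, and captures the observed behaviour as new regression assertions.

```python
# A minimal sketch of test amplification: start from a manually written seed
# input, generate boundary-oriented variants, and record the current behaviour
# of the system under test as (input, expected) regression pairs.
# The function and boundary values are hypothetical illustrations.

def clamp(x, lo=0, hi=10):
    """Hypothetical system under test."""
    return max(lo, min(hi, x))

def amplify(fut, seed_input):
    """Derive boundary-condition variants from a seed input and capture the
    current behaviour of `fut` as (input, expected) regression pairs."""
    variants = {seed_input, seed_input - 1, seed_input + 1,
                0, -1, 1, 10, 11}  # perturbations plus the domain boundaries
    return sorted((v, fut(v)) for v in variants)

# The seed test only exercised one ordinary value...
assert clamp(5) == 5
# ...amplification adds boundary cases with captured expected outputs.
amplified = amplify(clamp, 5)
for inp, expected in amplified:
    assert clamp(inp) == expected
print(amplified)
```

A real amplifier also mutates the test code itself, runs the variants against coverage or mutation criteria, and keeps only those that strengthen the suite; the regression-oracle idea, however, is the same.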

Test Amplification in Python An Industrial Experience Report from University of Antwerp
Technical Debt in Start-ups / Scale-Ups
Tue, 13 Sep 2022 17:14:32 GMT
Most start-ups aspire to become scale-ups someday. However, the constant pressure to add new features to a digital product sooner or later leads to technical debt. In this lightning talk we present an experience report on how we dealt with technical debt in automated tests within the context of a start-up company named Garvis.

Technical Debt in Start-ups / Scale-Ups from University of Antwerp
Social Coding Platforms Facilitate Variant Forks
Tue, 13 Sep 2022 17:04:12 GMT
(Keynote presented at the WEESR and REVE 2022 workshops, co-located with SPLC 2022.) Social coding platforms centred around git provide explicit facilities to share code between projects: forks, pull requests and cherry-picking, to name but a few. Variant forks are an interesting phenomenon in that respect, as they permit different projects to peacefully co-exist while explicitly acknowledging their common ancestry. The phenomenon of variant forks is quite common: in a recent study we found more than 400 open-source projects originating from a common code base. In this keynote we share our insights on variant forks on social coding platforms. First, we report the results of an exploratory qualitative analysis of the motivations for creating variant forks. Next, we illustrate how bug fixes may (should?) be transferred from one variant to another. As such, we hope to inspire researchers to study the phenomenon of variant forks.

Social Coding Platforms Facilitate Variant Forks from University of Antwerp
Finding Bugs, Fixing Bugs, Preventing Bugs - Exploiting Automated Tests to Increase Reliability
Thu, 31 Mar 2022 15:38:21 GMT
Presentation for BARCO and the EFFECTS project. ---Abstract--- With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk explores the state of the art in software test automation and the opportunities it presents for maintaining this balance. We address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect?

Finding Bugs, Fixing Bugs, Preventing Bugs - Exploiting Automated Tests to Increase Reliability from University of Antwerp
VST2022SmallAmpAmpyfier.pdf
Mon, 28 Mar 2022 14:28:29 GMT
Slides used for the VST2022 workshop. ---Abstract--- Software test amplification is the act of strengthening manually written test cases to exercise the boundary conditions of the system under test. It has been demonstrated by the research community to work for the programming language Java, relying on the static type system to safely transform the code under test. In dynamically typed languages, such type declarations are not available; as a consequence, test amplification has yet to find its way to programming languages like Smalltalk, Python, Ruby and JavaScript. The AnSyMo research group has created two proof-of-concept tools for languages without a static type system: AmPyfier (for Python) and Small-Amp (for Pharo Smalltalk). In this tool demonstration paper we explain how we relied on profiling libraries present in the respective ecosystems to infer the necessary type information for enabling full-blown test amplification.
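The core trick, profiling a dynamically typed program to recover type information, can be sketched in a few lines of Python. AmPyfier's real machinery is considerably more elaborate, and every name below is a hypothetical illustration.

```python
# A minimal sketch of dynamic type inference: observe a function while the
# existing tests run and record the concrete types of its arguments and
# return value. Real tools hook the profiler or tracer instead of requiring
# a decorator; the function `distance` is a hypothetical code-under-test.

from collections import defaultdict
import functools

observed = defaultdict(set)  # function name -> set of observed signatures

def record_types(fn):
    """Wrap `fn` so that every call logs (argument types -> return type)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        sig = (tuple(type(a).__name__ for a in args), type(result).__name__)
        observed[fn.__name__].add(sig)
        return result
    return wrapper

@record_types
def distance(a, b):  # no type annotations, as in typical dynamic code
    return abs(a - b)

# Running the existing, manually written tests doubles as a profiling run.
assert distance(3, 7) == 4
assert distance(2.5, 1.0) == 1.5

# The recorded signatures tell an amplification tool which input types are
# safe to generate when synthesising new test variants for `distance`.
print(sorted(observed["distance"]))
```

With these observed signatures, an amplifier knows it may generate `int` and `float` inputs for `distance`, but should not blindly try, say, strings.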

VST2022SmallAmpAmpyfier.pdf from University of Antwerp
]]>
21 0 https://cdn.slidesharecdn.com/ss_thumbnails/vst2022smallampampyfier-220328142829-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Formal Verification of Developer Tests: a Research Agenda Inspired by Mutation Testing /slideshow/formal-verification-of-developer-tests-a-research-agenda-inspired-by-mutation-testing/250523770 demeyer2021isola02-211025153342
With the current emphasis on DevOps, automated software tests become a necessary ingredient for continuously evolving, high-quality software systems. This implies that the test code takes a significant portion of the complete code base: test-to-code ratios ranging from 3:1 to 2:1 are quite common. We argue that "testware" provides interesting opportunities for formal verification, especially because the system under test may serve as an oracle to focus the analysis. As an example we describe five common problems (mainly from the subfield of mutation testing) and how formal verification may contribute. We deduce a research agenda as an open invitation for fellow researchers to investigate the peculiarities of formally verifying testware. ]]>

With the current emphasis on DevOps, automated software tests become a necessary ingredient for continuously evolving, high-quality software systems. This implies that the test code takes a significant portion of the complete code base: test-to-code ratios ranging from 3:1 to 2:1 are quite common. We argue that "testware" provides interesting opportunities for formal verification, especially because the system under test may serve as an oracle to focus the analysis. As an example we describe five common problems (mainly from the subfield of mutation testing) and how formal verification may contribute. We deduce a research agenda as an open invitation for fellow researchers to investigate the peculiarities of formally verifying testware. ]]>
Mon, 25 Oct 2021 15:33:42 GMT /slideshow/formal-verification-of-developer-tests-a-research-agenda-inspired-by-mutation-testing/250523770 serge_demeyerUA@slideshare.net(serge_demeyerUA) Formal Verification of Developer Tests: a Research Agenda Inspired by Mutation Testing serge_demeyerUA With the current emphasis on DevOps, automated software tests become a necessary ingredient for continuously evolving, high-quality software systems. This implies that the test code takes a significant portion of the complete code base: test-to-code ratios ranging from 3:1 to 2:1 are quite common. We argue that "testware" provides interesting opportunities for formal verification, especially because the system under test may serve as an oracle to focus the analysis. As an example we describe five common problems (mainly from the subfield of mutation testing) and how formal verification may contribute. We deduce a research agenda as an open invitation for fellow researchers to investigate the peculiarities of formally verifying testware. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/demeyer2021isola02-211025153342-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> With the current emphasis on DevOps, automated software tests become a necessary ingredient for continuously evolving, high-quality software systems. This implies that the test code takes a significant portion of the complete code base: test-to-code ratios ranging from 3:1 to 2:1 are quite common. We argue that &quot;testware&quot; provides interesting opportunities for formal verification, especially because the system under test may serve as an oracle to focus the analysis. As an example we describe five common problems (mainly from the subfield of mutation testing) and how formal verification may contribute. We deduce a research agenda as an open invitation for fellow researchers to investigate the peculiarities of formally verifying testware.
Formal Verification of Developer Tests: a Research Agenda Inspired by Mutation Testing from University of Antwerp
]]>
90 0 https://cdn.slidesharecdn.com/ss_thumbnails/demeyer2021isola02-211025153342-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Reproducible Crashes: Fuzzing Pharo by Mutating the Test Methods /slideshow/reproducible-crashes-fuzzing-pharo-by-mutating-the-test-methods/244126416 vst2021fuzzingpharoslideshare-210310084141
Fuzzing (or Fuzz Testing) is a technique to verify the robustness of a program-under-test. Valid input is replaced by random values with the goal to force the program-under-test into unresponsive states. In this position paper, we propose a white-box Fuzzing approach by transforming (mutating) existing test methods. We adopt the mechanisms used for test amplification to generate crash-inducing tests, which developers can reproduce later. We provide anecdotal evidence that our approach towards Fuzzing reveals crashing issues in the Pharo environment. ]]>

Fuzzing (or Fuzz Testing) is a technique to verify the robustness of a program-under-test. Valid input is replaced by random values with the goal to force the program-under-test into unresponsive states. In this position paper, we propose a white-box Fuzzing approach by transforming (mutating) existing test methods. We adopt the mechanisms used for test amplification to generate crash-inducing tests, which developers can reproduce later. We provide anecdotal evidence that our approach towards Fuzzing reveals crashing issues in the Pharo environment. ]]>
Wed, 10 Mar 2021 08:41:40 GMT /slideshow/reproducible-crashes-fuzzing-pharo-by-mutating-the-test-methods/244126416 serge_demeyerUA@slideshare.net(serge_demeyerUA) Reproducible Crashes: Fuzzing Pharo by Mutating the Test Methods serge_demeyerUA Fuzzing (or Fuzz Testing) is a technique to verify the robustness of a program-under-test. Valid input is replaced by random values with the goal to force the program-under-test into unresponsive states. In this position paper, we propose a white-box Fuzzing approach by transforming (mutating) existing test methods. We adopt the mechanisms used for test amplification to generate crash-inducing tests, which developers can reproduce later. We provide anecdotal evidence that our approach towards Fuzzing reveals crashing issues in the Pharo environment. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/vst2021fuzzingpharoslideshare-210310084141-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Fuzzing (or Fuzz Testing) is a technique to verify the robustness of a program-under-test. Valid input is replaced by random values with the goal to force the program-under-test into unresponsive states. In this position paper, we propose a white-box Fuzzing approach by transforming (mutating) existing test methods. We adopt the mechanisms used for test amplification to generate crash-inducing tests, which developers can reproduce later. We provide anecdotal evidence that our approach towards Fuzzing reveals crashing issues in the Pharo environment.
Reproducible Crashes: Fuzzing Pharo by Mutating the Test Methods from University of Antwerp
]]>
121 0 https://cdn.slidesharecdn.com/ss_thumbnails/vst2021fuzzingpharoslideshare-210310084141-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Finding Bugs, Fixing Bugs, Preventing Bugs: Exploiting Automated Tests to Increase Reliability /slideshow/finding-bugs-fixing-bugs-preventing-bugs-exploiting-automated-tests-to-increase-reliability/238933667 shiftiwsfkeynote2020slideshare-201021151858
With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect? (Keynote for the SHIFT 2020 and IWSF 2020 Workshops, October 2020)]]>

With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect? (Keynote for the SHIFT 2020 and IWSF 2020 Workshops, October 2020)]]>
Wed, 21 Oct 2020 15:18:57 GMT /slideshow/finding-bugs-fixing-bugs-preventing-bugs-exploiting-automated-tests-to-increase-reliability/238933667 serge_demeyerUA@slideshare.net(serge_demeyerUA) Finding Bugs, Fixing Bugs, Preventing Bugs: Exploiting Automated Tests to Increase Reliability serge_demeyerUA With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect? (Keynote for the SHIFT 2020 and IWSF 2020 Workshops, October 2020) <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/shiftiwsfkeynote2020slideshare-201021151858-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect? (Keynote for the SHIFT 2020 and IWSF 2020 Workshops, October 2020)
Finding Bugs, Fixing Bugs, Preventing Bugs: Exploiting Automated Tests to Increase Reliability from University of Antwerp
]]>
185 0 https://cdn.slidesharecdn.com/ss_thumbnails/shiftiwsfkeynote2020slideshare-201021151858-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Test Automation Maturity: A Self-Assessment Tool /slideshow/test-automation-maturity-a-selfassessment-tool/238933437 taimselfassessment2020slideshare-201021144947
With the rise of agile development and the adoption of continuous integration, the software industry has seen an increasing interest in test automation. Many organizations invest in test automation but fail to reap the expected benefits, most likely due to a lack of test-automation maturity. In this talk, we present the results of a test automation maturity survey collecting responses from 151 practitioners coming from 101 organizations in 25 countries. We make observations regarding the state of the practice and provide a benchmark for assessing the maturity of an agile team. The benchmark resulted in a self-assessment tool for practitioners to be released under an open source license. An alpha version is presented herein. The research underpinning the survey has been conducted through the TESTOMAT project, a European project with 34 partners coming from 6 different countries. (Presentation delivered at the Test Automation Days and the Testnet Autumn Event; October 2020)]]>

With the rise of agile development and the adoption of continuous integration, the software industry has seen an increasing interest in test automation. Many organizations invest in test automation but fail to reap the expected benefits, most likely due to a lack of test-automation maturity. In this talk, we present the results of a test automation maturity survey collecting responses from 151 practitioners coming from 101 organizations in 25 countries. We make observations regarding the state of the practice and provide a benchmark for assessing the maturity of an agile team. The benchmark resulted in a self-assessment tool for practitioners to be released under an open source license. An alpha version is presented herein. The research underpinning the survey has been conducted through the TESTOMAT project, a European project with 34 partners coming from 6 different countries. (Presentation delivered at the Test Automation Days and the Testnet Autumn Event; October 2020)]]>
Wed, 21 Oct 2020 14:49:46 GMT /slideshow/test-automation-maturity-a-selfassessment-tool/238933437 serge_demeyerUA@slideshare.net(serge_demeyerUA) Test Automation Maturity: A Self-Assessment Tool serge_demeyerUA With the rise of agile development and the adoption of continuous integration, the software industry has seen an increasing interest in test automation. Many organizations invest in test automation but fail to reap the expected benefits, most likely due to a lack of test-automation maturity. In this talk, we present the results of a test automation maturity survey collecting responses from 151 practitioners coming from 101 organizations in 25 countries. We make observations regarding the state of the practice and provide a benchmark for assessing the maturity of an agile team. The benchmark resulted in a self-assessment tool for practitioners to be released under an open source license. An alpha version is presented herein. The research underpinning the survey has been conducted through the TESTOMAT project, a European project with 34 partners coming from 6 different countries. (Presentation delivered at the Test Automation Days and the Testnet Autumn Event; October 2020) <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/taimselfassessment2020slideshare-201021144947-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> With the rise of agile development and the adoption of continuous integration, the software industry has seen an increasing interest in test automation. Many organizations invest in test automation but fail to reap the expected benefits, most likely due to a lack of test-automation maturity. In this talk, we present the results of a test automation maturity survey collecting responses from 151 practitioners coming from 101 organizations in 25 countries. We make observations regarding the state of the practice and provide a benchmark for assessing the maturity of an agile team. 
The benchmark resulted in a self-assessment tool for practitioners to be released under an open source license. An alpha version is presented herein. The research underpinning the survey has been conducted through the TESTOMAT project, a European project with 34 partners coming from 6 different countries. (Presentation delivered at the Test Automation Days and the Testnet Autumn Event; October 2020)
Test Automation Maturity: A Self-Assessment Tool from University of Antwerp
]]>
419 0 https://cdn.slidesharecdn.com/ss_thumbnails/taimselfassessment2020slideshare-201021144947-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Keynote VST2020 (Workshop on Validation, Analysis and Evolution of Software Tests) /slideshow/keynote-workshop-on-validation-analysis-and-evolution-of-software-tests/229116670 vstkeynote2020slideshare-200225100144
A keynote delivered for the 3rd Workshop on Validation, Analysis and Evolution of Software Tests, February 18, 2020 | co-located with SANER 2020, London, Ontario, Canada. http://vst2020.scch.at Abstract - With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect? The research underpinning all of this has been validated under "in vivo" circumstances through the TESTOMAT project, a European project with 34 partners coming from 6 different countries. ]]>

A keynote delivered for the 3rd Workshop on Validation, Analysis and Evolution of Software Tests, February 18, 2020 | co-located with SANER 2020, London, Ontario, Canada. http://vst2020.scch.at Abstract - With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect? The research underpinning all of this has been validated under "in vivo" circumstances through the TESTOMAT project, a European project with 34 partners coming from 6 different countries. ]]>
Tue, 25 Feb 2020 10:01:44 GMT /slideshow/keynote-workshop-on-validation-analysis-and-evolution-of-software-tests/229116670 serge_demeyerUA@slideshare.net(serge_demeyerUA) Keynote VST2020 (Workshop on Validation, Analysis and Evolution of Software Tests) serge_demeyerUA A keynote delivered for the 3rd Workshop on Validation, Analysis and Evolution of Software Tests, February 18, 2020 | co-located with SANER 2020, London, Ontario, Canada. http://vst2020.scch.at Abstract - With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect? The research underpinning all of this has been validated under "in vivo" circumstances through the TESTOMAT project, a European project with 34 partners coming from 6 different countries. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/vstkeynote2020slideshare-200225100144-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> A keynote delivered for the 3rd Workshop on Validation, Analysis and Evolution of Software Tests, February 18, 2020 | co-located with SANER 2020, London, Ontario, Canada. http://vst2020.scch.at Abstract - With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. 
Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect? The research underpinning all of this has been validated under &quot;in vivo&quot; circumstances through the TESTOMAT project, a European project with 34 partners coming from 6 different countries.
Keynote VST2020 (Workshop on Validation, Analysis and Evolution of Software Tests) from University of Antwerp
]]>
187 0 https://cdn.slidesharecdn.com/ss_thumbnails/vstkeynote2020slideshare-200225100144-thumbnail.jpg?width=120&height=120&fit=bounds presentation White http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Saner open steeringcommittee2018campobassodoubleblind /slideshow/saner-open-steeringcommittee2018campobassodoubleblind/92037608 saneropensteeringcommittee2018campobassodoubleblind-180327092628
During the SANER 2018 Conference (in Campobasso, Italy) I chaired a discussion on double-blind reviewing. Here are the slides used to stimulate the discussion, based on a survey among the SANER reviewers to understand how double-blind reviewing is perceived in the field.]]>

During the SANER 2018 Conference (in Campobasso, Italy) I chaired a discussion on double-blind reviewing. Here are the slides used to stimulate the discussion, based on a survey among the SANER reviewers to understand how double-blind reviewing is perceived in the field.]]>
Tue, 27 Mar 2018 09:26:28 GMT /slideshow/saner-open-steeringcommittee2018campobassodoubleblind/92037608 serge_demeyerUA@slideshare.net(serge_demeyerUA) Saner open steeringcommittee2018campobassodoubleblind serge_demeyerUA During the SANER 2018 Conference (in Campobasso, Italy) I chaired a discussion on double-blind reviewing. Here are the slides used to stimulate the discussion, based on a survey among the SANER reviewers to understand how double-blind reviewing is perceived in the field. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/saneropensteeringcommittee2018campobassodoubleblind-180327092628-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> During the SANER 2018 Conference (in Campobasso, Italy) I chaired a discussion on double-blind reviewing. Here are the slides used to stimulate the discussion, based on a survey among the SANER reviewers to understand how double-blind reviewing is perceived in the field.
Saner open steeringcommittee2018campobassodoubleblind from University of Antwerp
]]>
189 1 https://cdn.slidesharecdn.com/ss_thumbnails/saneropensteeringcommittee2018campobassodoubleblind-180327092628-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
https://cdn.slidesharecdn.com/profile-photo-serge_demeyerUA-48x48.jpg?cb=1727870160 Serge Demeyer is a professor at the University of Antwerp (Department of Mathematics and Computer Science) and the spokesperson for the ANSYMO (Antwerp System Modelling) research group. He directs a research lab investigating the theme of "Software Reengineering" (LORE - Lab On REengineering). His main research interest concerns software reengineering, more specifically the evolution of object-oriented software systems. He is an active member of the corresponding international research communities, serving in various conference organization and program committees. He has written a book entitled "Object-Oriented Reengineering" and edited a book on "Software Evolution". He also authored num win.ua.ac.be/~sdemey/ https://cdn.slidesharecdn.com/ss_thumbnails/threatstoinstrumentvalidity2023final-241002120023-1f0c8ffc-thumbnail.jpg?width=320&height=320&fit=bounds slideshow/in-silico-research-software-engineering-to-the-rescue/272145951 &quot;In Silico&quot; Research: ... https://cdn.slidesharecdn.com/ss_thumbnails/researchmethodsredux2024-240930131759-9e1b93ff-thumbnail.jpg?width=320&height=320&fit=bounds slideshow/research-methods-in-computer-science-and-software-engineering/272106397 Research Methods in Co... https://cdn.slidesharecdn.com/ss_thumbnails/vst2024presentationmut4slxfinal-240312154740-c9f9e44c-thumbnail.jpg?width=320&height=320&fit=bounds slideshow/mut4slx-extensions-for-mutation-testing-of-stateflow-models/266751490 MUT4SLX: Extensions fo...