- Steve Thair presented on whether the current model of load/performance testing is broken for modern web applications.
- He discussed how Betfair separated load injection from performance measurement due to the complexity of their system.
- The current model of load testing with waterfalls, single reports, and scripted user journeys is insufficient for continuous delivery and real user monitoring needs.
- Thair advocated for cheaper and more continuous methods like session replay from logs and APM tools to align with modern development practices.
Agile Open Source Performance Testing Workshop for Business Managers - Clever Moe
The document discusses open source performance testing and the Agile Performance Test Methodology and Tools. It promotes the PushToTest open source testing platform, which provides functionality for authoring, executing, and analyzing tests across multiple environments. The presentation covers the benefits of the PushToTest approach for both individuals and organizations implementing agile development and performing load and performance testing.
The document discusses enhanced equipment quality assurance (EEQA) and equipment health monitoring (EHM) methods to ensure reliable semiconductor manufacturing equipment. It provides:
1) An overview of the EEQA and EHM projects, including goals to reduce equipment variability and efficiently track performance.
2) Details on EEQA approaches like collecting equipment data to validate functional capabilities and monitor variations.
3) The 2011 EHM project timeline and objectives to demonstrate fingerprinting effectiveness using an equipment data model.
4) An equipment fingerprinting pilot to refine use cases and demonstrate the fingerprinting process using real manufacturing data.
Are you tired of spending hours trying to reproduce and diagnose bugs? Do you have a hard time getting testers and developers to talk to each other? Is it difficult to determine which tests are most important to run after you produce a new build? Software testing is perhaps the #1 area of investment for the application lifecycle management capabilities of Microsoft Visual Studio 2010. In this session, we will introduce the software testing capabilities offered by Visual Studio 2010, each of which is covered in depth in its own session. If you want to deliver high-quality code, driving your entire software development lifecycle with tests will dramatically improve overall quality.
Quality Best Practices & Toolkit for Enterprise Flex - François Le Droff
Quality Best Practices & Toolkit for Enterprise Flex
Presentation given at the French Flex user group "les tontons flexeurs" on the 21st of July 2009.
Authors: Xavier Agnetti, François Le Droff (and Alex Uhlmann)
Copyright: Adobe
The document discusses using the Groovy programming language for testing purposes. It covers why Groovy may be a good fit for testing, an introduction to Groovy, different types of testing drivers and tools that can be used with Groovy like web drivers, test runners, and other non-web drivers. It also discusses going beyond traditional unit and integration testing with Groovy and considering polyglot and model-driven testing options.
Application Quality with Visual Studio 2010 - Anna Russo
The document discusses how to use Microsoft Test Manager, Visual Studio 2010, and Team Foundation Server 2010 to improve software quality through test management, test automation, and reporting. It covers managing testing resources with planning workbooks, improving reporting on test runs and bugs, creating automated tests using coded UI tests, and best practices for automated testing including integrating virtual machines for manual or automated testing in a test lab.
This report compares the performance of Apache Hadoop to IBM Platform Symphony, which leverages IBM middleware to accelerate Hadoop. A benchmark using 302 jobs from the Statistical Workload Injector for MapReduce (SWIM), based on production Facebook workloads, found that Symphony accelerated Hadoop by an average of 7.3x. Symphony's advantage declined slowly with increasing shuffle size. In a "sleep" test of scheduling latency, Symphony was 74x faster than Hadoop alone. While these results may depend on configuration settings, the test systems used identical hardware, software, and network configurations. The report provides detailed information on the test methodology, systems tested, and results.
QATS provides quality assurance and testing services with over 1000 test consultants who are mostly certified professionals. They have expertise in various domains and testing tools. QATS offers consulting, execution, innovation accelerators, and flexible partnership models. Their differentiators include proven domain expertise, solution accelerators, delivery models, and skills in test automation and performance testing.
The document discusses using automated testing tools and techniques in agile teams. It covers emerging technologies like continuous integration, testing DSLs, model-based testing, and example driven testing. It provides examples of generating test cases using all combinations and all pairs techniques to systematically test interactions. The document also introduces QuickCheck for automatically generating random test data to replace manually created test scenarios.
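The all-combinations and all-pairs techniques mentioned above, and QuickCheck-style random data generation, can be sketched in a few lines of Python. The parameters and the greedy pairwise construction here are illustrative assumptions, not taken from the deck; real tools (AllPairs, QuickCheck, and their descendants) are considerably more sophisticated.

```python
import itertools
import random

# Hypothetical parameters under test -- illustrative, not from the deck.
browsers = ["Chrome", "Firefox", "Safari"]
locales = ["en", "de"]
payments = ["card", "paypal"]
params = [browsers, locales, payments]

# All combinations: every triple is exercised (3 * 2 * 2 = 12 cases).
all_combinations = list(itertools.product(*params))

def pairs_of(combo):
    """All (parameter-index, value) pairs that one test case covers."""
    return {(i, combo[i], j, combo[j])
            for i, j in itertools.combinations(range(len(combo)), 2)}

def all_pairs_suite(params):
    """Greedy all-pairs: repeatedly pick the case covering the most
    still-uncovered value pairs until every pair is covered."""
    needed = set()
    for combo in itertools.product(*params):
        needed |= pairs_of(combo)
    cases = []
    while needed:
        best = max(itertools.product(*params),
                   key=lambda c: len(pairs_of(c) & needed))
        cases.append(best)
        needed -= pairs_of(best)
    return cases

suite = all_pairs_suite(params)
print(len(suite), "pairwise cases instead of", len(all_combinations))

# QuickCheck-style random data: generate inputs rather than hand-write them.
rng = random.Random(42)
users = [{"name": "".join(rng.choices("abcdef", k=8)),
          "age": rng.randint(0, 120)} for _ in range(100)]
```

The pairwise suite still exercises every pairing of any two parameter values, but with noticeably fewer cases than the full cross product.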
This document discusses challenges with traditional Java development approaches and how the z2 Environment addresses them. It notes that traditional approaches do not scale well for modular systems or large projects. Maven is useful for libraries but does not ensure source or system consistency. The z2 Environment provides a runtime that automatically updates based on source repository changes, ensuring a consistent environment across development, testing, and production.
OTM DELIVERED: How Business Process Outsourcing and Preconfigured Solutions... - MavenWire
How to leverage BPO (Business Process Outsourcing) to reduce your OTM (Oracle Transportation Management) implementation costs and focus on your core competencies.
Presented by Samuel Levin at MavenWire.
The document describes Quality on Submit (QOS), an end-to-end software development quality process that provides instant feedback through continuous integration. QOS integrates code checking, automated building, testing, static analysis, security scanning, and data collection into a single workflow. When changes are committed to the source code management system, builds are deployed and all automated tests, analyses, and scans run in parallel. Results are stored in a database and developers are notified, with optional gamification to increase quality awareness. The process aims to find issues early at low cost while increasing transparency. SAP has implemented QOS successfully in its products.
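The commit-triggered, parallel workflow described above can be sketched as follows. All check functions and names are hypothetical stand-ins for the real build, test, static-analysis, and security-scan steps; a real QOS pipeline would also persist results to a database and notify the developer.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical checks standing in for the real pipeline steps.
def run_unit_tests(commit):
    return ("unit-tests", "pass")

def run_static_analysis(commit):
    return ("static-analysis", "pass")

def run_security_scan(commit):
    return ("security-scan", "pass")

CHECKS = [run_unit_tests, run_static_analysis, run_security_scan]

def quality_on_submit(commit):
    """Run every check in parallel on the committed change and
    collect the verdicts, QOS-style."""
    with ThreadPoolExecutor(max_workers=len(CHECKS)) as pool:
        results = dict(pool.map(lambda check: check(commit), CHECKS))
    # A real pipeline would store results and notify the developer here.
    return results

report = quality_on_submit("abc123")
print(report)
```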
The document discusses the need for a research platform architecture similar to SAP or Oracle platforms that are commonly used in other industries. It notes that research does not fit well with existing platforms like SAP or Oracle due to a lack of relevant business applications and high costs. The document proposes an open source-based research platform architecture with common foundational components like messaging brokers, application servers, rules engines, and reporting tools to simplify integration and application development. Specific examples are given around how such a platform could enable integrated inventory management, loose coupling between applications like electronic lab notebooks and ordering systems, centralized workflow and process management, and enterprise reporting.
Agile Open Source Performance Test Workshop for Developers, Testers, IT Ops - Clever Moe
Training for Selenium, soapUI, Sahi, TestMaker performance testing. Slide deck from the free Webinar titled "Technical Training On The Agile Open Source Way To Load Test, Scalability Test, and Stress Test." Learn the Agile Open Source Testing way to load and performance test your Web applications, Rich Internet Applications (RIA, using Ajax, Flex, Flash, Oracle Forms, Applets), and SOAP and REST Web services. This free Webinar delivers a testing methodology, tools, and best/worst practices.
This document discusses strategies for migrating legacy code to an agile architecture. It provides an overview of Cerner Corporation, a leading healthcare IT company, and their experience migrating legacy code. The key strategies discussed are:
1. Refreshing, migrating, rewriting, or deprecating legacy code based on business value and functionality.
2. Planning migrations by defining drivers, challenges, and critical success factors to envision the end goal.
3. Cerner's strategy of incremental migration to minimize risks while supporting innovation and maintaining developer skills.
Att lyckas med integration av arbetet från flera scrum team ("Succeeding with integrating work from multiple Scrum teams") - Christophe Acho... - manssandstrom
This document discusses strategies for integrating work from multiple Scrum teams. It outlines the role of an integration team in continuously integrating work. Key success factors for the integration team include: integrating work early, having the necessary resources and environments, practicing continuous integration, using automated tests, maintaining at least two test environments, performing early performance tests, stopping work if integration breaks, having a clear contract between development and integration teams, making the integration process and status visible.
=============================
THIS PRESENTATION IS OUTDATED
See a newer version here: http://www.slideshare.net/openservices/introduction-to-oslc-and-linked-data
=============================
An introduction to Open Services for Lifecycle Collaboration (OSLC):
- The OSLC community
- Linked Data and RDF
- OSLC specifications
Java technology allows programs to run on a variety of hardware platforms, including the mainframe computing platform epitomized by z Systems. The z Systems z/OS operating system has a set of unique capabilities, and the IBM SDK for Java provides a set of high-performance Java APIs complemented by z/OS-specific APIs for applications that require deep integration. This talk shows how IBM makes use of the z/OS platform to deliver world-class runtimes on the world-leading mainframe.
Originally presented at the z/OS bootcamp in Hursley, 2015
The Orion contract is a complex project involving Lockheed Martin as the prime contractor and many subcontractors. The contract is structured into three schedules for design, development, testing, production, and operations. Since the initial award, the contract has undergone several changes totaling over $2 billion to realign requirements and accommodate changes to the Constellation program. These changes ensured Orion's design supported its mission of transporting crew to the International Space Station.
Using the Groovy dynamic language for primarily functional / acceptance / customer / BDD testing with a forward looking perspective. Also considers polyglot options. The techniques and lessons learned can be applied to other kinds of testing and are also applicable to similar languages. Drivers and Runners discussed include: Native Groovy, HttpBuilder, HtmlUnit, WebTest, Watij, Selenium, WebDriver, Tellurium, JWebUnit, JUnit, TestNG, Spock, EasyB, JBehave, Cucumber, Robot Framework and FitNesse/Slim. Also looks at JMeter, ScalaCheck, Choco, AllPairs and ModelJUnit
London web performance WPO Lessons from the field June 2013 - Stephen Thair
Web performance - random lessons learnt from delivering WPO, load testing, and APM consulting in the UK. Plus a bit about WebPageTest Private Instances, etc.
Measuring mobile performance (@LDNWebPerf Version) - Stephen Thair
A presentation to the London Web Performance User Group covering the different ways of measuring Mobile web performance and some of the strength & weaknesses of each, depending on your needs.
Velocity 2011 Feedback - architecture, statistics and SPDY - Stephen Thair
A presentation on the Velocity 2011 conference given by Pieter Ennes of Watchmouse to the London Web Performance Meetup Group. He covers some of his thoughts on the conference and gives a brief overview of SPDY.
Continuous Integration - A Performance Engineer's Tale - Stephen Thair
Andrew Harding of Betfair presents on web performance testing in a continuous integration environment, covering some good reasons why, and why not, to do performance testing during continuous integration.
Web Performance Optimisation at times.co.uk - Stephen Thair
Optimizing dynamic websites like www.thetimes.co.uk and www.thesundaytimes.co.uk isn't an easy task!
Speeding up a site requires a "war plan" and having a clear vision, dedicated team, appropriate tools and most importantly speed comparison data with similar sites.
Mehdi Ali, Optimisation Manager for the Times websites, will show us how this strategy was applied for The Times and Sunday Times sites with great results.
Practical web performance - Site Confidence Web Performance Seminar - Stephen Thair
Overview of Web performance optimisation (WPO), as well as some results from 25 web performance site analyses. Some information on mobile web performance as well.
Ankur Gupta has over 5.5 years of experience in software quality assurance and testing. He has worked on projects in various domains for clients like British Telecom, TNT Express, and Greenway Health. Some of his responsibilities include test planning, case design, execution, defect logging, performance and API testing. He has expertise in Agile methodologies, tools like ALM, JMeter, SOAPUI, and technologies like SQL Server, Azure. He is skilled at requirements analysis, traceability, collaboration, and documentation. He has received several awards for his work from previous employers.
DevOps in Practice: When does "Practice" Become "Doing"? - Michael Elder
DevOps has emerged as the hot trend in development buzzword-ology. With a few quick paragraphs, it proposes to sweep away all of the traditional problems you've encountered during your development experience.
In IBM UrbanCode, we build products to help customers follow good DevOps practices. You may think DevOps is about the release process, but really it's about applying a mix of automation and operational practices earlier in your development life cycle so that rolling out to production becomes easier. DevOps promotes a focus on small-batch changes over large complex updates, which are harder to predict and harder to roll back when problems occur. With greater velocity, rolling out smaller changes becomes more commonplace. Additionally, IBM UrbanCode makes extensive use of cloud technology, which intersects well with DevOps practices around production-like environments.
In this talk, Michael Elder describes how we practice DevOps internally with a mixture of IBM-built and open source tools. He'll discuss the areas that we do well and the challenges that we have with changing our culture around areas like test automation. On top of that, he'll describe how you can leverage these approaches in your own development process!
The document introduces performance testing and provides an overview of key concepts. It discusses why performance testing is important to ensure an application's speed, scalability, stability, and user experience. The document also defines performance validation, testing, and engineering and contrasts their differences. Finally, it outlines the typical methodology for performance engineering including evaluating systems, developing test assets, analyzing results, and tuning performance.
Managing Application Performance: A Simplified Universal Approach - TechWell
In response to increasing market demand for well-performing applications, many organizations implement performance testing programs, often at great expense. Sadly, these solutions alone are often insufficient to keep pace with emerging expectations and competitive pressures. Scott Barber shares the fundamentals of implementing T4APM, including specific examples from recent client implementations. T4APM is a simple and universal approach that is valuable independently or as an extension of existing performance testing programs. The approach hinges on applying a simple and unobtrusive "Target, Test, Trend, Tune" cycle to tasks in your application lifecycle, from a single unit test through entire-system production monitoring. Leveraging T4APM on a particular task may require knowledge specific to the task, but learning how to leverage the approach does not. Scott provides everything you need to become the T4APM coach and champion, and to help your team keep up with increasing demand for better performance, regardless of your current title or role.
The document discusses approaches to software deployments, noting that traditional organizations deploy software 4 times per year while newer organizations deploy software 15 times per day. It advocates automating the entire software delivery pipeline including deployments to reduce risks and costs, by applying the principle of "if it hurts, do it often" through continuous integration, delivery, and deployment.
This document discusses performance testing for the Talentcall.com application. The objectives of performance testing are to reduce latency, scale to maximum users, minimize downtime, identify hotspots, and provide infrastructure recommendations. Performance testing benefits include a reliable, scalable and responsive application. The document outlines the performance testing process, including benchmarking, load testing, stress testing, metrics collection, and testing concurrent users and business transactions. It describes how performance testing identifies critical transactions, establishes goals and test plans, runs test cases, and provides performance reports to optimize the application's performance.
The Tester's Role: Balancing Technical Acumen and User Advocacy - TechWell
Melissa Tondi discussed the changing role of testers from a focus on user advocacy to increased technical skills. She explained how factors like new technologies and tools caused the pendulum to shift from users to technical skills. However, both are needed for a balanced approach. Her recommendations included strategies for test automation, exploratory testing, accessibility, mobile testing, performance testing, and security testing that balance technical skills with user advocacy.
Consistently delivering and maintaining well performing applications doesn't just happen, it requires a solid architecture, sound development, continual attention, diligence and expertise. It also requires appropriate testing, not simply of release-candidate builds, but of designs, units, integrations, and physical components... both during development and in production. The question is, how can a team accomplish all of that under all of today's pressure to deliver quickly and cheaply?
Join Scott Barber for this Keynote Address to hear about what successful organizations are doing to consistently deliver well performing applications, to learn the underlying principles and practices that enable those organizations to create, test, and maintain those well performing applications without breaking either the budget or the schedule, and what the key items are that virtually every team can implement right away, to dramatically improve the consistency and overall performance of their applications.
How CapitalOne Transformed DevTest for Continuous Delivery - AppSphere16 - AppDynamics
Making the leap to continuous delivery is precarious for any organization, but the concerns are greatly exacerbated when your organization services approximately 45 million bank accounts. Committed to maintaining flawless user experiences while accelerating release cadence, Capital One faced a daunting challenge as it transformed culture, processes, and technical infrastructure in its evolution to continuous delivery.
Join this session with Capital One's Michael Bonamassa and Parasoft's Wayne Ariola and learn from their insights on which DevTest changes are critical for responding to extreme digital disruption.
Key takeaways:
- The changing responsibilities of DevTest in a "continuous everything" world
- What skill sets software testers need to ride the wave of digital transformation
- How service virtualization and continuous testing measure the risk of a release candidate
- How to evolve the culture and process to support continuous delivery
- What technical infrastructure is required for real-time test automation and continuous delivery maturation
For more information, go to: www.appdynamics.com
Automated performance testing simulates real users to determine an application's speed, scalability, and stability under load before deployment. It helps detect bottlenecks, ensures the system can handle peak load, and provides confidence that the application will work as expected on launch day. The process involves evaluating user expectations and system limits, creating test scripts, executing load, stress, and duration tests while monitoring servers, and analyzing results to identify areas for improvement.
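The load-test loop described above, with simulated users issuing requests while latency is measured, might be sketched like this. The `fake_request` stand-in and the numbers are illustrative assumptions; a real test would drive HTTP traffic against the system under test and monitor its servers.

```python
import statistics
import threading
import time

def fake_request():
    """Stand-in for a real HTTP request to the system under test."""
    time.sleep(0.01)  # simulate ~10 ms of server work
    return 200

def load_test(virtual_users, requests_per_user):
    """Run concurrent virtual users and collect per-request latency."""
    latencies, lock = [], threading.Lock()

    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            status = fake_request()
            elapsed = time.perf_counter() - start
            with lock:
                latencies.append((status, elapsed))

    threads = [threading.Thread(target=user) for _ in range(virtual_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    times = sorted(e for _, e in latencies)
    return {
        "requests": len(times),
        "errors": sum(1 for s, _ in latencies if s != 200),
        "mean_s": statistics.mean(times),
        "p95_s": times[int(0.95 * len(times)) - 1],
    }

print(load_test(virtual_users=5, requests_per_user=10))
```

The summary dictionary is the raw material for the analysis step: mean and 95th-percentile latency under a given concurrency level, plus an error count.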
Neotys organized its first Performance Advisory Council in Scotland on the 14th & 15th of November.
With 15 load testing experts from several countries (UK, France, New Zealand, Germany, USA, Australia, India), we explored several themes around load testing, such as DevOps, Shift Right, AI, etc.
By discussing their experience, the methods they use, their data analysis, and their interpretations, we created a lot of high-value content that you can use to discover the future of load testing.
Want to know more about this event? https://www.neotys.com/performance-advisory-council
San Jose Selenium Meet-up PushToTest TestMaker PresentationClever Moe
With the Selenium project team's focus on making the WebDriver APIs a W3C standard, this is a good time to talk about where Selenium is going and the support it is getting from commercial open source companies. Frank Cohen, CEO at PushToTest, will discuss Selenium tools - record/playback utilities, deploying Selenium scripts to the Cloud, results analysis tools to surface functional issues and performance bottlenecks, and operational test database repositories - needed to be productive and successful with Selenium.
Best Practices In Load And Stress Testing Cmg Seminar[1]Munirathnam Naidu
The document discusses best practices for performance testing. It provides an overview of the typical performance testing process, including defining goals, planning tests, scripting tests, executing tests, analyzing results, and delivering findings. It also discusses considerations for choosing testing tools and resources as well as common pitfalls to avoid, such as not testing, poor planning, relying on customers to find issues, using the wrong tools, and failing to properly isolate variables.
How to Improve Performance Testing Using InfluxDB and Apache JMeterInfluxData
The document describes an end-to-end performance testing framework that uses InfluxDB and Grafana for real-time monitoring and analytics. It includes a replay framework that eliminates data loss during performance tests by writing transactions to a CSV file when the database is down and replaying from the CSV when the database comes back up. The framework provides benefits like increased efficiency, accuracy, and scalability for performance and load testing.
This document discusses using LiveCycle Data Services (LCDS) to power rich enterprise applications. It summarizes a presentation about using LCDS and Flex to build the Zephyr test management platform, highlighting some of the challenges faced and resources that helped, such as LCDS documentation and the FlexCoders Yahoo group. The presentation aimed to show how LCDS can make applications truly dynamic and provide real-time metrics, dashboards, and global collaboration.
IBM Pulse 2013 session - DevOps for Mobile AppsSanjeev Sharma
1) The document discusses DevOps for mobile app delivery, highlighting the benefits of combining Agile development and DevOps.
2) It outlines several DevOps best practices for mobile apps, including continuous integration, continuous delivery, and continuous testing.
3) The document recommends implementing these practices through automated build and deployment scripts, maintaining separate build environments for each SDK version, and simulating backend services during testing.
Shuvam has over 3 years of experience in performance testing and is currently working as a performance test engineer. He has experience testing large e-commerce websites like Macys.com and Bloomingdales.com. Some of his responsibilities include strategizing and conducting performance tests, analyzing results, and ensuring system reliability. He is proficient in tools like JMeter, LoadRunner, and monitors like Dynatrace. Shuvam also has experience working with developers, networking teams, and other teams to troubleshoot issues.
Shuvam Dutta has over 2 years of experience in performance testing large web applications. He has experience testing e-commerce sites like Macy's and Bloomingdale's, ensuring system reliability, capacity, and scalability. Currently working as a performance test engineer, his responsibilities include strategizing and conducting performance tests, analyzing results, and identifying bottlenecks. He has strong skills in Java, SQL, scripting, API testing tools, and monitoring tools.
7 lessons from velocity 2011 (Meetup feedback session for London Web Performa...Stephen Thair
A presentation on the Velocity 2011 conference to the London Web Performance Meetup group by Stephen Thair (Seriti Consulting) covering some of the key messages and takeaways from this year's event.
Measuring Mobile Web Performance presentation at the London Ajax Mobile Conference 2nd July 2011. Covers the basics of web performance measurement and looks specifically at the measurement of page load speed from mobile devices.
Web performance and measurement - UKCMG Conference 2011 - steve thairStephen Thair
The slides from my presentation on web performance and measurement at the UK CMG conference in May 2011. It incorporates some of my slides from the earlier Web Performance 101 presentation with new material focussing on measuring web performance
An overview of web performance automation in the production environment - "faster ways to make your website faster". Covers everything from sample .htaccess files to performance accelerators like mod_pagespeed and Aptimize, through to DSAs like Cotendo.
Web Performance 101 presentation from Feb 2011 meetup, presented by Steve Thair from Seriti Consulting.
Covers the basics of why web performance is important for your business, the key "rules" and the tools that are available in the market today.
Seatwave Web Peformance Optimisation Case StudyStephen Thair
A web performance optimisation case study presented by Seatwave at the London Web Performance Meetup, Jan 2011.
The PDF is in Landscape so you might be better to download it and then shift-ctrl-+ to rotate it clockwise in Adobe Acrobat Reader.
Configuration Management - The Operations Managers ViewStephen Thair
A presentation from the BCS Configuration Management Special Interest Group conference 2009. It gives the other side of the story, from an Operations Manager's perspective.
Test Expo 2009 Site Confidence & Seriti Consulting Load Test Case StudyStephen Thair
The document provides an overview of load testing a website, including tips on designing and conducting the test. It discusses determining test objectives and critical user journeys, setting targets for transactions and concurrent users, using analytics to inform the test design, and analyzing results to identify performance bottlenecks and take corrective action. Contact details are provided for vendors that can assist with load testing tools and services.
Is the current model of load testing broken? UKCMG - Steve Thair
1. Is the current model of load
testing broken?
Steve Thair
Seriti Consulting
@TheOpsMgr
2. 2
Some background.
My User Group Dec 2011 -
http://www.meetup.com/London-Web-Performance-Group/
(c) Seriti Consulting, 2011
3. [Diagram: Betfair's continuous performance pipeline. The delivery team's build, once ready for test, is deployed to an environment under continuous load injection, using a usage profile derived from production logs. Measurements (end-user response, system performance monitors) feed performance trend data points for each service/API, with alerts raised on deviation from normal or failure to meet SLAs.]
http://www.slideshare.net/sthair/continuous-integration-a-performance-engineers-journey
CONFIDENTIAL and not for reproduction without prior written consent. © of the Sporting Exchange Limited.
4. 4
The Killer Comment
"We've had to look at separating load injection from performance measurement."
- Andrew Harding, Betfair
http://www.seriticonsulting.com/blog/2011/12/9/is-the-current-model-of-loadperformance-testing-broken.html
(c) Seriti Consulting, 2011 @TheOpsMgr #ukcmg #webperf
5. 5
So why are you paying all that money for that expensive brand-name load testing tool then?
What are the key reasons behind that decision?
Surely there are open source or cheap cloud tools if all you want is load?
What are you using to measure performance then?
What other issues might we have with traditional load testing models in a Web 2.0 world?
@TheOpsMgr #ukcmg #webperf
6. 6
Defining the current model
"Stephen didn't explicitly define what the current model of load/performance testing is"
- Alex Podelko
http://applicationperformanceengineeringhub.com/is-the-current-model-of-loadperformance-testing-broken/
@TheOpsMgr #ukcmg #webperf
7. 7
Current Model Straw Man
(1) Waterfall Development Cycle
(2) Load Tool compiles the report
(3) Reporting at the end of the test
(4) Request / Response paradigm
(5) Well-defined customer journeys
(c) Seriti Consulting, 2011
8. 8
Testing in a CI/Agile world
Performance testing is initiated every time code is committed to the CM repository (e.g. SVN, Git etc.)
Betfair's issue was that their system was complex, with many layers of caching etc., so it took longer to warm up the environment (to achieve a steady performance state) than they had between check-ins
So they needed continuous injection to keep the environment constantly warm
The test tool never stopped to compile the report
So any tool that reported at the end wasn't as useful
@TheOpsMgr #ukcmg #webperf
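The continuous-injection idea can be sketched in a few lines. This is a hypothetical illustration, not Betfair's implementation: `hit()` is a placeholder for a real HTTP call, the target URL is made up, and the rolling window size is arbitrary. The point is that reporting is a snapshot taken on demand while injection keeps running, rather than a report compiled when the test stops.

```python
import statistics
import time
from collections import deque

class RollingStats:
    """Rolling window of response times; summaries never pause injection."""
    def __init__(self, window=100):
        self.samples = deque(maxlen=window)

    def record(self, seconds):
        self.samples.append(seconds)

    def snapshot(self):
        # Report on demand; the load loop does not stop for this.
        if not self.samples:
            return None
        return {
            "count": len(self.samples),
            "mean": statistics.mean(self.samples),
            "stdev": statistics.pstdev(self.samples),
        }

def hit(url):
    """Placeholder for one request; swap in a real HTTP call."""
    start = time.perf_counter()
    # ... issue the request against the constantly warm environment here ...
    return time.perf_counter() - start

stats = RollingStats()
for _ in range(10):  # in practice: loop continuously between check-ins
    stats.record(hit("http://test-env.example.com/"))
print(stats.snapshot())
```

A CI job can poll `snapshot()` after each commit and compare it to the previous trend, which is the "continuous measurement" half of the model.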
9. 9
Test tools in an APM & RUM world
Traditional test tools generally have the ability to deploy agents to gather metrics from the target environment
But the depth of analysis and correlation falls well below that of modern Application Performance Management tools, e.g. AppDynamics
APM tools offer deeper insight and better event correlation across tiers
And they are getting (much) cheaper
@TheOpsMgr #ukcmg #webperf
12. 12
WebSockets
HTTP 1.1: Start Timer / GET /index.html / Response 200 OK / Stop Timer
WebSockets: Start Timer / Socket upgrade / N many frames over a bi-directional socket channel / Stop???
@TheOpsMgr #ukcmg #webperf
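The timing problem can be made concrete with a small sketch. This is hypothetical code, not from the talk: with HTTP 1.1 there is a single, well-defined stop event, while over a WebSocket the best a tool can do is timestamp individual frames, because there is no natural "stop timer" point.

```python
import time

def time_http_roundtrip(send, receive):
    """HTTP 1.1 style: one request, one response, one clear stop event."""
    start = time.perf_counter()
    send()
    receive()  # the response IS the stop event
    return time.perf_counter() - start

def time_ws_frames(frames):
    """WebSocket style: frames keep arriving; no single number describes it.
    The only measurable quantity is the gap between successive frames."""
    last = time.perf_counter()
    gaps = []
    for _ in frames:
        now = time.perf_counter()
        gaps.append(now - last)
        last = now
    return gaps  # a list of inter-frame latencies, not one response time
```

The asymmetry in the return types (one float versus a list) is exactly the modelling problem request/response test tools face.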
13. 13
HTTP 2.0
HTTP 1.1: sequential, ordered
HTTP 2.0 (& SPDY): multiplexed over a single connection; responses returned out of sequence
Hard to time!
http://stackoverflow.com/questions/10480122/difference-between-http-pipeling-and-http-multiplexing-with-spdy
@TheOpsMgr #ukcmg #webperf
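One hedged sketch of how a tool might cope with out-of-sequence responses: key each timer by stream id, so a response can be matched to its request regardless of arrival order. The `StreamTimer` class and the stream ids here are illustrative, not part of any real HTTP/2 library.

```python
import time

class StreamTimer:
    """Per-stream timing for multiplexed requests on one connection."""
    def __init__(self):
        self.started = {}   # stream_id -> start timestamp
        self.elapsed = {}   # stream_id -> measured duration

    def request_sent(self, stream_id):
        self.started[stream_id] = time.perf_counter()

    def response_received(self, stream_id):
        # Stop the timer for THIS stream, whatever order responses arrive in.
        self.elapsed[stream_id] = time.perf_counter() - self.started.pop(stream_id)

timer = StreamTimer()
for sid in (1, 3, 5):      # three requests multiplexed on one connection
    timer.request_sent(sid)
for sid in (5, 1, 3):      # responses come back out of sequence
    timer.response_received(sid)
print(sorted(timer.elapsed))  # → [1, 3, 5]: per-stream timings survive reordering
```

A simple global start/stop timer, as used by many HTTP 1.1 era tools, would conflate all three streams into one meaningless number.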
15. 15
Visitor Flow
How many paths through a website?
Classic script-driven approaches can't hope to address the complexity
Network and log file replay solutions?
@TheOpsMgr #ukcmg #webperf
16. 16
A PCAP Solution to Replay?
POC solution based on Cloudmeter Pion + custom scripts:
Read a PCAP (network capture)
Identify the HTTP traffic
Filter it (based on your requirements)
Parameterise it (query strings, POST parameters etc.)
Randomise inputs from SQL, CSV etc.
Replay it against a test environment, i.e. change the base URL
Amplify & rate-throttle req/sec as required
http://www.cloudmeter.com/pion/data-processing.php
@TheOpsMgr #ukcmg #webperf
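The filter/parameterise/rebase/throttle steps above can be sketched in Python. This is a toy stand-in, not Cloudmeter Pion: the PCAP-parsing step is replaced by a list of already-extracted request lines, the URLs and filter rule are invented, and `send()` is left as a placeholder.

```python
import random
import re
import time

# Stand-in for requests extracted from a network capture (hypothetical).
captured = [
    "GET http://www.example.com/search?q=boots",
    "GET http://www.example.com/static/logo.png",
    "POST http://www.example.com/basket",
]

def prepare(requests, base_url, inputs):
    """Filter, rebase and parameterise captured requests for replay."""
    out = []
    for line in requests:
        if "/static/" in line:                       # filter: drop static assets
            continue
        line = re.sub(r"http://[^/]+", base_url, line)          # rebase URL
        line = re.sub(r"q=\w+", "q=" + random.choice(inputs), line)  # parameterise
        out.append(line)
    return out

def replay(requests, rate_per_sec):
    """Throttle replay to the requested rate; send() is a placeholder."""
    for line in requests:
        # send(line)  # issue the request against the test environment
        time.sleep(1.0 / rate_per_sec)

plan = prepare(captured, "http://test.example.com", ["shoes", "hats"])
print(plan)
```

Amplification would just mean replaying `plan` from several workers at once; the key property is that the traffic mix comes from real sessions, not a hand-written script.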
17. 17
Personal Opinion! YMMV
So what's the Answer?
Use the cheapest method to generate load that you can find
Move away from scripting-based approaches towards real-user session replay (if possible!)
Generate load continuously
Measure continuously, using APM & RUM type tools, FOSS or commercial
Look for changes in histograms, averages, standard dev etc.
Protocol- and framework-aware instrumentation (AFAIK this currently doesn't exist)
@TheOpsMgr #ukcmg #webperf
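The "look for changes in averages, standard dev etc." suggestion can be sketched as a simple drift check. Hypothetical code, not a named tool's algorithm: flag a regression when the latest window's mean response time moves more than k baseline standard deviations from the baseline mean.

```python
import statistics

def deviates(baseline, latest, k=3.0):
    """True if the latest window's mean drifts > k stdevs from baseline."""
    mean = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)
    return abs(statistics.mean(latest) - mean) > k * sd

# Response times in seconds (illustrative numbers).
baseline = [0.20, 0.22, 0.19, 0.21, 0.20]
assert not deviates(baseline, [0.21, 0.20, 0.22])  # normal variation
assert deviates(baseline, [0.90, 0.85, 0.95])      # regression after a deploy
```

Real APM tools use richer comparisons (full histograms, percentiles, seasonality), but even this crude check turns continuous measurement into an actionable alert.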
18. 18
@LDNWebPerf User Group!
Join our London Web Performance Meetup
http://www.meetup.com/London-Web-Performance-Group/
Next Wednesday 17th Oct 7pm Central London
Follow us on Twitter @LDNWebPerf
#LDNWebPerf & #WebPerf
(c) Seriti Consulting, 2011 @TheOpsMgr #ukcmg #webperf
19. 19
About Me
21 yrs IT experience.
Started with www in 1998 (IIS3! Site Server 3!).
Web Architect @ BNP Paribas, CSFB etc.
Web Operations Manager for www.totaljobs.com, www.tes.co.uk
Professional Services Manager @ www.siteconfidence.com
Seriti Consulting, specialising in web operations, management and performance
e:stephen.thair@seriticonsulting.com
m:+44 7971 815 940
Twitter: http://twitter.com/TheOpsMgr
Blog: http://www.seriticonsulting.com/blog/
LinkedIn: http://uk.linkedin.com/in/stephenthair
Skype: seriti-steve
(c) Seriti Consulting, 2011 @TheOpsMgr #ukcmg #webperf
#2: Thanks for coming to my talk. I know it must have been hard to tear yourself away from all that about Workload License Charges in IBM System z, so I appreciate your trust! We have a lot of really interesting stuff to talk about. You will get a lot of food for thought, and I will confess right now that I don't know all the answers to this as yet, but hopefully we might find some out along the way!
#3: I run a monthly Meetup group on Web Performance, and back in December 2011 we had a presentation from the performance team at Betfair about performance testing in their continuous integration environment.
#5: So I am like, what, huh, that's that? You separated load injection from performance measurement? Doesn't that sort of destroy half the value proposition of all those expensive load testing tools? Why did you do that? How are you measuring it then? And then all sorts of other issues came out of that.
#6: So I had all these questions in my head, but before I get to that I need to address Alex Podelko's objection, raised in a comment on my blog.
#7: True, I didn't. So say hello to my little friend, the Straw Man!
#9: How do you get results when the testing never stops?
#10: APM tools offer more insight. I mean, that's what they are designed to do, so it's hardly a surprise. So increasingly in my load testing I search
#11: Some RUM tools are even free, like Google Site Speed. So why do I need expensive load tools if all I am doing is measuring load?
#12: WebSockets is a new HTML5 API/protocol for bi-directional real-time communication between browser (client) and server. But the key here is that there isn't a nice request/response round-trip any more, the very thing that most of the current generation of test tools rely on (especially the HTTP 1.1 protocol-level tools like JMeter). And in an HTTP 2.0 world it gets even worse, because you have HTTP channel multiplexing.
#13: Websites are getting more and more complex, especially as we add in new functionality like AJAX and HTML5. I have been playing with a solution