Week Six: Fragile Software and the Tax Man
This story about tax-site failures repeats every year, and not just in the United States. Tax preparation and collection systems fail under high loads. Whether it is a commercial software package, such as the one noted below, or even a government system, we are surrounded by fragile systems. And the taxpayer pays the price.
On April 1, the Mexican Income Tax site crashed. On April 10, India's Goods and Services tax payment website crashed. And on April 15, H&R Block's tax preparation software had issues. Downforeveryoneorjustme{dot}com observed a 32-minute outage for the IRS website. All these failures occurred on the day taxes were due.
Part of producing a successful performance test is understanding human behavior and how it shapes the use of systems. People delay using tedious, slow systems, creating a self-fulfilling prophecy: collective procrastination compresses the load into a narrow window of time, and performance suffers exactly when everyone shows up at once.
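To make the effect concrete, here is a minimal back-of-the-envelope sketch. The user count, filing-window length, and deadline-day share are all illustrative assumptions, not measurements; the point is how far a deadline-driven spike departs from a naive uniform load model.

```python
# Hypothetical illustration of how procrastination compresses load.
# All numbers below are assumptions for the sake of the example.
TOTAL_USERS = 1_000_000
FILING_WINDOW_DAYS = 75          # e.g., late January through mid-April

# Naive model: users spread evenly across the filing window.
uniform_daily_load = TOTAL_USERS / FILING_WINDOW_DAYS

# Behavioral model: a large share waits until the final day.
LAST_DAY_SHARE = 0.25            # assumed fraction filing on the deadline
last_day_load = TOTAL_USERS * LAST_DAY_SHARE

print(f"Uniform model:     {uniform_daily_load:,.0f} users/day")
print(f"Deadline-day load: {last_day_load:,.0f} users "
      f"({last_day_load / uniform_daily_load:.2f}x the uniform estimate)")
```

A test sized against the uniform estimate would pass comfortably and still leave the system to fall over on April 15.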
Now is a great time to bring in your performance leads and ask how they account for human behavior in your software testing. Because so few performance testers receive training and mentoring as part of their professional development, many of these conversations end in awkward silences or disconcerting answers. Let's fix these foundational problems so you don't face internal barriers when solving your own fragile-software problem.
Let's book some time for a conversation: https://lnkd.in/ep_5rCve
If you run an eCommerce site with spot sales, how is that excitement captured as people rush to see and buy the latest thing? If you are testing internal systems, how is user procrastination captured in the workload model and the test's design? If your test models human behavior, does it include an appropriate delay between requests, representing the time an end user spends thinking and typing before submitting the next one? Could you complete that entire form correctly and submit it in four seconds? These choices directly affect the quality of the output from the tests.
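One common way to model that pause is to draw a think time from a log-normal distribution, which stays positive and has the long right tail real users exhibit (a few people are very slow). The sketch below assumes a 12-second median and a spread parameter of 0.6; both are placeholders to tune against your own session recordings, and `submit_form` stands in for whatever request call your load tool provides.

```python
import math
import random
import time

def think_time(median_s: float = 12.0, sigma: float = 0.6) -> float:
    """Sample one per-request think time, in seconds.

    Log-normal with the given median: realistic long tail, never
    negative. Both parameters are assumptions to calibrate against
    observed user sessions.
    """
    return random.lognormvariate(math.log(median_s), sigma)

def submit_form(session, payload):
    """Hypothetical placeholder for your load tool's request call."""
    ...

def user_session(session, pages):
    """Replay a sequence of form submissions with human-like pauses."""
    for payload in pages:
        submit_form(session, payload)
        time.sleep(think_time())   # pause like a human, not a script
```

Most load tools expose the same idea natively (JMeter timers, Locust `wait_time`, and the like); the important part is that the pause is sampled per request rather than fixed, or worse, zero.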
Why bring this up in the context of fragile software? Understanding what to fix to improve performance and resiliency, and to reduce fragility, begins with high-quality data. The phrase "garbage in, garbage out" applies to every performance effort. A poorly designed but properly executed test produces output just as low-quality as the other permutations (well designed but poorly executed, or poorly designed and poorly executed). A low-quality data set leaves no opportunity to find and address the issues that lead to fragility in production.
Let’s talk! https://lnkd.in/ep_5rCve