Wherein I reflect on when Test Driven Development is testing and when it is not, by observing my own state of mind during a bit of hobby coding. Your mileage may vary. #softwaretesting #softwaredevelopment #testing
Wayne Roseberry’s Post
-
There is a lot of debate among artists about generative AI. I find it useful for reference models. The image on the left was generated by Bing Image Creator with the prompt "photograph of a woman holding clipboard, pencil, holding eraser of pencil to her chin in contemplation." The cartoon is my own drawing. I wasn't sure how to position the pencil, hands, arms, and clipboard, so I used the reference image as a guide. The rest of the image is of course different, but having a source of reference material derived from whatever prompt I give it is useful. I find these sorts of use cases are missed in the discussion: legitimately helpful tools that do not at all detract from the human creative process. I feel the same applies to testing and development. There is lots of focus on "replacement" scenarios, and a lot less on how to make the tools useful, how to make people better at what they do.
-
Posts of the form "I did <this>, and it got me the <job, promotion, other thing>..." often come with an implied or explicitly stated "...therefore if you don't have one yet, you are weak, not trying hard enough, unrealistic, or <some other kind of character or moral flaw>."

For example: "I got a job because I was realistic about the market expectations." Implied message: "If you still haven't found a job, it is probably because you are being unrealistic."

Maybe that person who got a job and the thousands of others who did not were all competing for the ONE open job at the same rate and level, and the one who got it was not one iota "more realistic." They were just the one out of thousands who got the job. But they feel smarter and more important if they believe it was because they knew or did something nobody else did, so they share it with the world.

It is helpful to remind ourselves of survivorship bias. This bias causes us to hyperfocus on examples of something working, hunting for attributes, conditions, or behaviors that signal guaranteed success or guaranteed failure, while ignoring the attributes, conditions, and behaviors present in the examples that did not succeed, which may be more important to know or understand.
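Survivorship bias is easy to show with a tiny simulation. Everything below is made up for illustration: give a thousand equally capable candidates a random "realism" score, pick the single winner at random, and notice that the winner's score is just another draw from the same distribution as everyone else's.

```python
import random

# Hypothetical illustration, all numbers made up: 1,000 equally capable
# candidates compete for ONE job, and the winner is chosen at random.
random.seed(42)
candidates = [random.gauss(50, 10) for _ in range(1000)]  # "realism" scores
winner = random.choice(candidates)
avg_realism = sum(candidates) / len(candidates)

print(f"winner's realism score: {winner:.1f}")
print(f"everyone's average:     {avg_realism:.1f}")
# The winner's score is just one more draw from the same distribution
# as the scores of the thousands who did not get the job.
```

Interviewing only the winner about their score tells you nothing you would not learn from interviewing any other candidate.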
-
Does anybody else hate it when the repro condition for a bug MIGHT mean changing which monitor you are connected to (no, really, it might), which means you have to crawl around under your desk and figure out which cables go where? #softwaretesting
-
A recent article and video I created got a "Not repro" response back from the creator of the tool I was testing. They tried the same thing, multiple times, on multiple platforms. The person in question took time out of a trip to check it out, so you have to give them respect for looking at it. I believe this is going to be the basis for my next piece of testing content.

Open confession: "Not repro" bugs are intimidating. They mean there is some variable that was not captured or conveyed in what you told the other person about the bug. The challenge is harder than just conveying the repro steps that worked for you; now the steps have to account for the difference between what you experienced and what the other person experiences.

There is a popular flavor of joke about the war between developers and testers, and "Not repro" is one of the classics in that canon. I get the joke, get the humor of it, and laugh when the form the joke takes is clever. But my true feeling about the topic is a sense of obligation to help, and most of the time the developers I work with are genuinely curious and want to understand what is going on. Yes, I have worked with developers who are immature and unreasonable about it, and I have my own ways of politely but demonstrably training them out of that behavior, but that is the exception in what is normally a partnership of professionalism.

Right now, the bigger problem is convincing myself to take a look. I am always at least a little bit afraid I won't figure it out. This post might be me nudging myself not to let it slide. #softwaretesting #softwaredevelopment #worksonmymachine
-
In defense of data and measurements: as a guilty pleasure, I am watching all the international versions of the show "The Traitors," a reality game show that is a long-run, live version of the party game "Mafia" or the video game "Among Us." I am not going to go into the details of the game rules, but the key point is that the game relies on lying, deception, and discovering who is not who they claim to be.

A common player pattern I see in the game is players who are absolutely certain that they know how to read people, know how to tell who is lying, know how to pick up on subtle clues. The truth is that each of these people performs very poorly in their decisions and evaluations; the odds of a random guess predict how their decisions play out better than their own assessment of the situation. (I started that paragraph with "The most common player pattern..." and then realized I was committing the same crime I discuss in this post, so I changed it.)

This lines up with my own anecdotal experience of how people assess situations in general. Over decades of working with engineers, I have noticed over and over that until somebody has seen data with measurements and trends, their own prediction about how prevalent, common, or frequent something is, or in what proportion it occurs, will almost always be wrong. Very wrong. Orders of magnitude wrong. Likewise when people try to explain why something happens: until they have seen the inside of a system (mechanical, computational, or social), their guess about what led to a final outcome is almost always wrong.

What people are very good at is noticing things. When someone says "I saw this... and it looked odd" or "this happened to me...," pay attention. Their description may or may not be accurate, but it is almost always the case that something did happen, and that something was at least unusual to that person.
Investigate, look closer, zero in on what it might be, and then start with the data and measurements. #softwaretesting #softwaredevelopment
-
I predict accusing someone of posting AI-created content is going to be the new ad hominem attack. Ad hoc detection of AI-created content is going to be the new armchair sport. People who think they are better at spotting it than they really are will dominate the game.
-
Think like a tester and change the test case steps as you go. Because sometimes you have no other choice.

I started this article intending to compare the different kinds of test cases one gets from different graph traversal strategies on test models, but decided it was more interesting to talk about how we change the plan as we test, even when we rote-execute a series of test cases (generated for us by a modeling tool). The application under test is Test Case Studio, a browser recording tool that allows testers to capture their actions as a series of steps.

Bugs I found with my testing forced me to change my test steps in order to keep going. New ideas, or simply paying attention to details, also forced me to make modifications as I went.

A nod to James Bach: all testing is in some way exploratory, and the key attribute of how exploratory the testing is lies in the amount of agency the tester has in making decisions and changing what they do. Even stepwise execution is exploratory if the tester exercises control and decision making. https://lnkd.in/gc84N35Q #softwaretesting #softwaredevelopment #testcasestudio #testcompass
Following Cases From A Model Even When You Are Blocked
waynemroseberry.github.io
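The traversal idea behind the post can be sketched in a few lines. The model below is a hypothetical toy graph of states and actions, not Test Case Studio's actual format; enumerating simple paths from start state to end state yields one generated test case per path, and different traversal orders surface different cases first.

```python
# Hypothetical toy model of an app under test, NOT Test Case Studio's
# actual format: each state maps to a list of (action, next_state) pairs.
model = {
    "Start":     [("open page", "Page")],
    "Page":      [("click record", "Recording"), ("close", "End")],
    "Recording": [("stop recording", "Page"), ("close", "End")],
    "End":       [],
}

def all_paths(model, start="Start", goal="End"):
    """Depth-first enumeration of simple paths; each path is one test case."""
    paths = []
    def walk(state, steps, seen):
        if state == goal:
            paths.append(list(steps))
            return
        for action, nxt in model[state]:
            if nxt not in seen:  # skip revisited states to keep paths finite
                walk(nxt, steps + [action], seen | {nxt})
    walk(start, [], {start})
    return paths

test_cases = all_paths(model)
for case in sorted(test_cases, key=len):  # breadth-style view: shortest first
    print(" -> ".join(case))
```

When a bug blocks one action, the corresponding edge effectively disappears, and the set of reachable paths, that is, the set of executable test cases, changes with it.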
-
Of course, I have to believe the response that makes me sound more interesting. Right?
-
I find the notion of generalized AI prompts interesting. It forms the basis of many application ideas. It is also an interesting test problem to think about. The prompt itself is like a piece of code calling into the platform that is the AI model. Add to that the components feeding the prompt and processing the response. There is a lot to build a test approach around. I would probably enjoy working on something like this for the technical challenge. I would, however, precede that with a question about what problem we were trying to solve and whether we were using the right tools to do it. I think that question would remain across my entire testing approach.
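The "prompt as code calling into a platform" idea can be made concrete with a small sketch. Everything here is hypothetical (the template, the stub model function, the response shape); it only illustrates the three testable pieces named in the post: a component feeding the prompt, the model call itself, and a component processing the response.

```python
# Hypothetical sketch: the prompt template acts like code calling into
# the "platform" that is the AI model. The model here is a stub so the
# pipeline is deterministic and testable; a real system would call an API.
TEMPLATE = "Summarize the following bug report in one sentence:\n{report}"

def build_prompt(report: str) -> str:
    # Component feeding the prompt: sanitize and truncate the input.
    return TEMPLATE.format(report=report.strip()[:2000])

def fake_model(prompt: str) -> str:
    # Stand-in for the model platform: echoes the last input line.
    return "SUMMARY: " + prompt.splitlines()[-1][:40]

def process_response(raw: str) -> str:
    # Component processing the response: validate the expected shape.
    if not raw.startswith("SUMMARY: "):
        raise ValueError("unexpected response shape")
    return raw[len("SUMMARY: "):]

summary = process_response(fake_model(build_prompt("App crashes on save.")))
print(summary)
```

Each seam (prompt construction, model behavior, response validation) is a separate place to inject faults and build a test approach around, which is exactly where the "are we using the right tool" question keeps coming back.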
-
Is anybody else as annoyed as I am with the way copying the URL from the browser defaults to giving you the page title with a formatted link instead of the raw URL? Does anybody else paste into Notepad to strip the formatting, and then copy/paste from Notepad into the target app to get what you really wanted? I almost never want the prettified link to a page. I almost always want the full, real URL. My rule is that even if the link is on PAPER, or somewhere else the person cannot click, they can still get the information necessary to follow the link. Okay... now I prepare myself for the punishing deluge of people who are going to share with me the keystroke (which I am going to rapidly forget, because I don't remember these things, ESPECIALLY when they are not supported in all apps) that does what I need.
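The Notepad round-trip can also be done programmatically. A rich clipboard "pretty link" is typically an HTML fragment with the real URL in the anchor's href; the fragment below is a made-up example, and the standard library's HTML parser pulls the raw URL out of it.

```python
from html.parser import HTMLParser

# A small sketch of the "strip the formatting" step: extract the raw URL
# from an HTML clipboard fragment instead of round-tripping through Notepad.
class HrefExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        # Anchor tags carry the real destination in their href attribute.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.urls.append(value)

# Made-up fragment standing in for what a browser puts on the clipboard.
fragment = '<a href="https://example.com/full/real/url?id=42">Pretty Page Title</a>'
parser = HrefExtractor()
parser.feed(fragment)
print(parser.urls[0])  # the raw URL, not the prettified title
```

Reading the actual clipboard's HTML flavor is platform-specific and left out here; the point is only that the full URL survives underneath the prettified link.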