Testing Computer Software, 2nd Edition

Category: Programming
Author: Cem Kaner, Jack L. Falk, Hung Quoc Nguyen
Rating: 4.0

Comments

by anonymous   2017-08-20

Testing Computer Software is a good book on how to do all kinds of different types of testing: black-box, white-box, test-case design, planning, managing a testing project, and probably a lot more I missed.

For the example you give, I would do something like this:

  1. For each field, I would think about the possible values you can enter, both valid and invalid. I would look for boundary cases; if a field is numeric, what happens if I enter a value one less than the lower bound? What happens if I enter the lower bound as a value? Etc.
  2. I would then use a tool like Microsoft's Pairwise Independent Combinatorial Testing (PICT) Tool to generate as few test scenarios as I could across the cases for all input fields.
  3. I would also write an automated test to pound away on the form using random input, capture the results and see if the responses made sense (virtual monkeys at a keyboard).
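Steps 1 and 3 above can be sketched in code. Here's a minimal Python sketch, assuming a hypothetical numeric "age" field with a valid range of 18 to 120 (the field, its bounds, and `validate_age` are illustrative, not from the original question):

```python
import random

def boundary_values(lo, hi):
    """Boundary cases for a numeric field with inclusive range lo..hi:
    the bounds themselves plus the values just outside and just inside."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def validate_age(value):
    # Hypothetical code under test: accepts ages 18..120 inclusive.
    return 18 <= value <= 120

# Step 1: boundary testing -- one less than the lower bound must be
# rejected, the lower bound itself accepted, and so on.
for v in boundary_values(18, 120):
    assert validate_age(v) == (18 <= v <= 120)

# Step 3: "virtual monkey" -- pound away with random input and check
# that every response makes sense (here: the validator never crashes
# and always returns a boolean).
random.seed(0)
for _ in range(1000):
    v = random.randint(-10**6, 10**6)
    assert isinstance(validate_age(v), bool)
```

In a real form you would drive the UI or the API instead of a plain function, but the shape of the tests is the same.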
by anonymous   2017-08-20

I agree with others who say, "likes to break things."

Also, diversity of backgrounds is good on a test team. Former developers often help the team get going with automation, former customer-support people can be fierce defenders of a user's right to a usable UI, and so on.

The article, Testers and Developers Think Differently, by Bret Pettichord, contrasts the mindsets and traits that are helpful in each role. For example, it elaborates on themes like these:

  • Good testers
    • Empirical
    • What’s observed
    • Skeptics
  • Good developers
    • Theoretical
    • How it’s designed
    • Believers

Here are notes from Cem Kaner's Testing Computer Software, on some attributes and skills useful to a tester:

  • Integrity, and a commitment to quality
  • An empirical frame of reference vs. theoretical. Tests as miniature experiments.
  • Education
  • Some programming background. Useful, not essential.
  • Experience using many computers and many software packages
  • Knowledge of combinatorics: how many test cases are required to fully evaluate some aspect of a program?
  • Excellent spoken and written communication.
  • Good at error guessing.
  • Fast abstraction skills.
  • Good with puzzles.
  • Very conscious of efficiency.
  • Able to juggle many tasks.
  • Good at scheduling
  • Careful observer, patient, attends to detail.
  • Role-playing imagination.
  • Able to read and write specifications.
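The combinatorics point is concrete: for n fields with v values each, exhaustive coverage needs v**n test cases, while covering every pair of values across any two fields needs far fewer. A rough Python sketch of the greedy all-pairs idea behind tools like PICT (illustrative only, not PICT's actual algorithm):

```python
from itertools import combinations, product

def pairwise_suite(fields):
    """Greedily pick full test cases until every combination of values
    for every pair of fields is covered at least once."""
    uncovered = set()
    for i, j in combinations(range(len(fields)), 2):
        for vi in fields[i]:
            for vj in fields[j]:
                uncovered.add((i, vi, j, vj))

    candidates = list(product(*fields))
    suite = []
    while uncovered:
        # Pick the candidate case that covers the most uncovered pairs.
        def gain(case):
            return sum(1 for i, j in combinations(range(len(case)), 2)
                       if (i, case[i], j, case[j]) in uncovered)
        best = max(candidates, key=gain)
        suite.append(best)
        for i, j in combinations(range(len(best)), 2):
            uncovered.discard((i, best[i], j, best[j]))
    return suite

fields = [["low", "nominal", "high"]] * 4   # 4 fields, 3 values each
suite = pairwise_suite(fields)
# Exhaustive testing needs 3**4 = 81 cases; all-pairs coverage here
# needs at least 9, and the greedy pass finds a suite near that.
assert len(suite) < 3 ** 4
```

This is the kind of back-of-the-envelope combinatorics a tester uses to argue how big a test suite really needs to be.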
by anonymous   2017-08-20

This really depends on what level of job you expect to land for your first position. If you are happy writing very simple test cases over and over, then basic Java with a good understanding of refactoring (e.g., reusing code by extracting common code into methods rather than copy-and-pasting) is fine. The better your coding skills, the more you can do on your own without relying on developers to create fixtures for you. I personally really enjoy coding and try to get SDET positions where I am writing lots of test tools, so I try to write code almost as well as a full-time developer; I do allow myself to lag a year or so behind the state of the art so I can focus on testing as well.
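That refactoring point (extracting common code instead of copy-and-pasting) looks like this in test code. A Python sketch for brevity; the same extraction applies directly in Java, and `normalize` is a made-up function under test:

```python
def normalize(name):
    # Hypothetical code under test: trims whitespace and title-cases a name.
    return name.strip().title()

# Copy-and-paste style: every test repeats the same call-and-compare block.
def test_trims_whitespace():
    result = normalize("  ada lovelace ")
    assert result == "Ada Lovelace"

def test_title_cases():
    result = normalize("GRACE HOPPER")
    assert result == "Grace Hopper"

# Refactored: the repeated block is extracted into one helper, so each
# new case is a single line instead of another pasted function.
def check_normalize(raw, expected):
    assert normalize(raw) == expected

def test_normalize_cases():
    check_normalize("  ada lovelace ", "Ada Lovelace")
    check_normalize("GRACE HOPPER", "Grace Hopper")
    check_normalize("alan turing", "Alan Turing")

test_trims_whitespace()
test_title_cases()
test_normalize_cases()
```

Small as it is, this is the habit interviewers look for: noticing duplication in your own test code and removing it.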

Even more important than programming skills are your test skills. You won't land any test job unless you work on these.

Have you read Cem Kaner's Testing Computer Software or another basic testing book? (If you've found another good one, please write a comment to me! I'm looking for good intro testing books to recommend; Kaner's still seems to be the best, but it's getting a little outdated.) Have you followed it up with something a bit more advanced and thorough, like Alan Page's How We Test Software at Microsoft (again, comments on better books appreciated)? If you still want more reading, Beautiful Testing is a great book for more advanced professionals.

Have you tried to "test" common objects around you, to get used to thinking about how things can fail? Can you determine which areas of testing (functional, security, performance, safety, etc.) are most important for a given object or program, and then come up with a list of tests you could write to test that aspect of it, including boundary and error testing? Can you do this in an orderly fashion in an interview? If it is a program, can you then implement those tests? Once you land a job, can you advocate for the importance of a bug without upsetting the developer who wrote that code? And, can you work with developers to introduce quality into the product before the bugs get written?

These kinds of questions are great for The Software Testing Club, btw. This seems to be the site that is getting the most credibility as a resource for QA professionals asking these kinds of meta questions. I'd still look to Stack Overflow for specific, objective "how-to" questions.