Most startups seem to care about quality. However, it is important to rethink what testing really means.
Most teams start software development projects with a description of what to build. This is called a design, a specification, or a requirement (I am using these terms loosely for now). For example, you may plan to let users search for text in a document, in an application like Notepad. The word 'requirement' is used very commonly and is a weighty word; I will discuss the problem with it in another post. Most people equate testing with answering the question, 'Does it work as we said it would?' In the case of searching for text, you would try a search and check whether the software finds the text you are looking for. Seems reasonable? Testers and/or developers may spend substantial time following the steps in the requirements document to verify that the software works as stated. Along the way, a few or many things don't work, resulting in defects.
In most cases, 'does it work?' is not a particularly challenging question. For the search function, you can do as little as you want to check that it works, although for something like search you will most likely try more than the obvious cases.
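As a sketch of this 'does it work?' style of checking, consider a hypothetical find_text function (an assumption for illustration, not any real application's API) that returns the index of the first match, or -1 if there is none. The obvious checks simply confirm the stated behavior:

```python
def find_text(document: str, query: str) -> int:
    # Hypothetical search function used only for illustration.
    # Returns the index of the first match, or -1 if absent.
    return document.find(query)

# The obvious, stated-requirement cases: the text is found where expected.
assert find_text("the quick brown fox", "quick") == 4
assert find_text("the quick brown fox", "fox") == 16
assert find_text("the quick brown fox", "zebra") == -1
```

These checks confirm what the description already says the software does, which is exactly why they are not particularly challenging.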
There can be challenges in answering 'does it work?'. When you have new software or new functionality, it may be hard to understand the new features. What if you were working on an image search? You would probably have to learn how images are categorized. You could classify this activity as learning. Learning is an important part of all software development, but in some cases it can occupy a lot of time. Some software is complex by nature. Enterprise software may involve working with web servers or databases, and it is challenging to make sure that new features work with all the moving parts. For games or other highly interactive software, such as desktop publishing, it is difficult to list every possible interaction; experienced users, e.g., gamers, may try to play the game using the new features. So while the question 'does it work?' is not itself challenging, in some cases the real challenges are learning, many moving parts, complexity, and high interactivity.
A significant step up from 'Does it work?' is 'Does it not do what it isn't supposed to do?' A simple way to think about this is that the software is not supposed to crash. This opens up infinite possibilities of actions which might cause the software to crash. Of course, it's obvious that the software should not crash. However, the question also opens up infinite possibilities, not as obvious, of what else the software should not do. For example, the software should not confuse users, or it should not be slow. (You might say that we should add those to the 'requirements'. That will be another post.)
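To make the contrast concrete, here is a hedged sketch of 'does it not do what it isn't supposed to do?' probing, reusing the same hypothetical find_text function from before. The inputs below are ones no requirements document is likely to mention; the only claim being checked is that nothing blows up and the result stays sane:

```python
def find_text(document: str, query: str) -> int:
    # Hypothetical search function used only for illustration.
    # Returns the index of the first match, or -1 if absent.
    return document.find(query)

# 'Does it not work?' probes: inputs the requirements never spell out.
# The software should not crash, hang, or return nonsense for any of them.
awkward_inputs = [
    ("", "fox"),                   # empty document
    ("the quick brown fox", ""),   # empty query
    ("a" * 1_000_000, "b"),        # very large document
    ("naïve café", "café"),        # non-ASCII text
]
for document, query in awkward_inputs:
    result = find_text(document, query)
    # Whatever happens, the result must at least be a plausible index.
    assert -1 <= result < max(len(document), 1)
```

The list of awkward inputs is open-ended by design: that open-endedness is exactly the 'infinite possibilities' the question raises, and choosing which of them to try is where the skill lies.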
Does this mean you won't check what the software is supposed to do? No, it doesn't! You will do that while you answer the more interesting and challenging question. 'Does it not work?' is also a much more difficult question to answer, and answering it requires answering 'Does it work?'; 'does it work?' just isn't the focus of testing. While learning requires greater effort for some software, it is still more important to keep asking 'does it not work?' alongside the learning, or even to use that question to direct the learning.
'Does it not work?' is a simplistic way to think of testing; the purpose of testing is much more nuanced (I will discuss more of that in the future). However, teams spend most of their time answering 'Does it work?' when they should spend most of it answering 'Does it not work?' The focus of this post is on correcting the most common misconception about software testing: you should not spend most of your time asking 'Does it work?' I'll provide ideas on what you can do instead in a future post.
Agile uses the concept of 'user stories'. Instead of focusing on the functionality of the software, user stories focus on broad user needs. Instead of stating that you want to implement a dialog for searching for text, a user story would state that a user wants to search for words and phrases in a document; there is no mention of a dialog. Despite adopting user stories, many teams seem to keep using the word 'requirements' (incorrectly, in my opinion), or to think of user stories as requirements.
The ideas in this post are largely influenced by the work of James Bach, Michael Bolton, Cem Kaner and Jerry Weinberg. For new visitors those blogs may be a bit overwhelming, so I am leaving out references to their specific blog posts.
I work as a consultant, managing development teams and working in business development. At IBM I managed test teams for security software and did program management, and I have also managed test teams at Autodesk. I have worked with a service provider, managing teams and projects in outsourced product development.
I am available for consulting roles managing development, test, and support teams. I am also interested in business development projects, especially for customers in Singapore and Australia.
I have worked with CAD/CAM, security software, and enterprise software, and can work with complex technology.