One of the hardest tasks for any IT department is testing. This is, in my experience, the area where we have the greatest failures (next to estimating the cost and length of a project) and where we have incredible room for improvement.
I have been in this business for a long, long time, and the lack of testing never fails to amaze me. There have been times when I've received "finished" programs from developers that didn't even run! Obviously the code had never been tested, at least not in any meaningful way.
Before any testing can begin (and obviously this should also be done before coding starts) you must have a thorough analysis and design. You see, a program or system must be tested against a specification and a set of standards. It cannot be done arbitrarily or randomly.
Your specification explains what your system is trying to accomplish. The specification might say something like "a standard URL will be accepted in the address field". Your standards would state that all buffers must be checked for overrun conditions, that URLs must be in a valid format, and so on. The standards apply to ALL testing, while a specification applies to a specific program or system.
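To make the distinction concrete, here is a rough sketch of spec-level and standards-level checks. Everything in it is hypothetical - the function name, the length limit, and the cases are illustrative, not taken from any real system:

```python
from urllib.parse import urlparse

MAX_ADDRESS_LEN = 2048  # hypothetical standard: fixed input-length limit


def accept_address(url: str) -> bool:
    """Hypothetical address-field handler under test."""
    if len(url) > MAX_ADDRESS_LEN:      # standards: check for overrun
        return False
    parts = urlparse(url)               # standards: URL must be well formed
    return parts.scheme in ("http", "https") and bool(parts.netloc)


# Specification-level case: "a standard URL will be accepted in the address field"
assert accept_address("http://example.com/index.html")

# Standards-level cases: overrun and format checks apply to every program tested
assert not accept_address("http://example.com/" + "x" * 3000)
assert not accept_address("not a url")
```

The specification case is unique to this program; the overrun and format cases would be repeated, in some form, in the test suite of every program the shop ships.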
A very critical fact (which seems to be completely unknown to Microsoft) is that the marketing department is not in charge of testing. To be done correctly, testing requires top-notch people who have been specially trained and who are highly motivated to do their jobs well.
You also cannot make a hundred thousand copies of a product, send it out to tens of thousands of beta testers without a clear set of goals, expert supervision, and constant management, and expect to get anything meaningful back. Beta testing is vital to a project, but it does not and cannot replace a professional testing staff. Another fact which seems to be invisible to Microsoft is that the purpose of beta testing is to test, not to market a product. Marketing is an essential part of a product plan, but it has absolutely no place in a testing plan.
What are some common testing mistakes?
Testing to prove a program or system works - I know you want your programs to work, but the purpose of testing is simply to test, not to prove you are the best programmer on the planet. Testing needs to hit a program hard, right between the eyes. Your job as a tester is to ensure that the program meets its specifications, and that any deficiencies are found and properly recorded.
Trying to prove a program does not work - Again, the purpose of testing is to test, not to prove anything. You should always have a well-defined testing plan and follow that plan.
Using testing to prototype a product - Prototyping is an extremely useful part of the analysis and design phases of a project. The purpose is to give your users and customers a way to see what something will look and feel like before you implement it. Once the design is done, prototypes should be thrown away and not used again.
Using testing to design performance - Performance goals must be understood before a project leaves the design phase. By the time a project is implemented (much less tested) you should know exactly how it will perform (barring bad programming, which is a different problem, and one that testing is designed to uncover). Testing will, however, validate that the product performs as indicated in the specifications.
Testing without a test plan - I don't know how many programmers I've seen just wade right in and start testing. Come on, people: how can you test something if you don't have a plan? What are you trying to prove?
Testing without a specification - Remember, the purpose of testing is to prove that a system or program meets its specifications. That's all. It's very difficult to do that without a specification right in front of you. Of course, this assumes that you have a specification to begin with ...
Asking developers to test their own programs - This is one of the biggest mistakes (next to writing any code without a very good specification) that you can make. How can a programmer test his or her own code? First of all, programmers make lousy testers - testing is a field all to itself, and programmers are almost never well trained in this area. Second, the developers of a system have a conflict of interest - they want their software to work. Testers need to approach a program with a more open mind.
Testing without a goal - If you don't have a goal in mind for your testing, you don't know when you are done. What are you trying to accomplish? Absolutely no bugs of any kind (not very practical)? The best goal is 100% compliance with the specifications. This does put the burden on the analysis and design team - but that is exactly where the responsibility lies.
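Several of these points - test to a plan, test against a specification, record deficiencies, and define "done" as 100% compliance - can be tied together in one small sketch. Every name and spec item below is hypothetical, purely to show the shape of a plan-driven test run:

```python
from typing import Callable, List, Tuple

# Hypothetical test plan: each check is tied to the specification item it
# verifies, so "done" has a concrete meaning: 100% of the items pass.
TestPlan = List[Tuple[str, Callable[[], bool]]]


def run_plan(plan: TestPlan) -> float:
    """Run every check, record deficiencies, and return percent compliance."""
    passed = 0
    for spec_item, check in plan:
        if check():
            passed += 1
        else:
            # Deficiencies are recorded, not hidden - the tester's real job.
            print(f"DEFICIENCY: {spec_item}")
    return 100.0 * passed / len(plan)


# Placeholder checks standing in for real spec-derived tests.
plan: TestPlan = [
    ("SPEC-1: standard URL accepted in address field", lambda: True),
    ("SPEC-2: oversized input rejected", lambda: True),
]

compliance = run_plan(plan)
print(f"Compliance: {compliance:.0f}%")  # testing is done at 100%
```

The point of the structure is that the plan is derived from the specification before testing starts, so the tester is never improvising, and the exit criterion is explicit rather than "we ran out of time".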