The very existence of the job title 'software tester' points to the perceived economic benefit of deferring software testing to a separate role:
I've noticed that the role has grown in popularity, and that it often calls not for technical skill but merely for enough computer literacy to follow the testing programme the organisation pursues.
I'm a bit of an idealist in that I believe such an arrangement, while expedient in the short term, may undermine precisely the goal it pursues: quality.
If testability is not a design feature, then the act of testing merely skirts the surface. Testing may uncover bugs lurking under the hood, but it says nothing about the systematic model of the underlying code that makes it testable in the first place. The designer, or even the developer, being closer to the code, can spot weaknesses that are obvious from their vantage point far more easily than a software tester who only interacts with interfaces. In other words, if testing begins only after development, it is already too late, however many bugs may be found. In short, it is methodologically unsound.
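To make this concrete, here is a minimal sketch (the names and scenario are hypothetical, not taken from any particular codebase) of what 'testability as a design feature' can look like: a report generator that receives its clock and data source as explicit dependencies, so the developer can exercise the core logic in isolation rather than leaving a tester to poke at the finished interface.

```python
from datetime import date
from typing import Callable, Iterable


class ReportGenerator:
    """Report logic with its dependencies injected, so it can be tested in isolation."""

    def __init__(self, today: Callable[[], date], load_sales: Callable[[], Iterable[float]]):
        self._today = today            # clock is a parameter, not a hidden global
        self._load_sales = load_sales  # data source is a parameter, not a database call

    def summary(self) -> str:
        total = sum(self._load_sales())
        return f"{self._today().isoformat()}: total sales {total:.2f}"


# The developer can verify the core logic directly, with no database, UI, or real clock.
def test_summary_formats_total() -> None:
    report = ReportGenerator(
        today=lambda: date(2024, 1, 31),
        load_sales=lambda: [10.0, 2.5],
    )
    assert report.summary() == "2024-01-31: total sales 12.50"


if __name__ == "__main__":
    test_summary_formats_total()
    print("ok")
```

The point is not the specific pattern (dependency injection) but that the decision to make the code checkable was taken at design time, by the people closest to it.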
Furthermore, tests introduced at the design and development stage can include interface tests, which the designer or developer can write far more easily than a third party can later undertake the same cumbersome task; a sketch of such a test follows.
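As a hedged illustration (the interface and implementation here are hypothetical), an interface test written during development can pin down the contract that every implementation must satisfy, so later implementations are checked automatically instead of being probed by hand after the fact.

```python
import unittest
from abc import ABC, abstractmethod


class Storage(ABC):
    """Public interface the rest of the system depends on."""

    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

    @abstractmethod
    def get(self, key: str) -> str | None: ...


class InMemoryStorage(Storage):
    """One concrete implementation; others could be databases, files, etc."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> str | None:
        return self._data.get(key)


class StorageContractTest(unittest.TestCase):
    """Interface test: every Storage implementation must pass these checks."""

    def make_storage(self) -> Storage:
        return InMemoryStorage()  # swap in another implementation to re-run the contract

    def test_roundtrip(self) -> None:
        storage = self.make_storage()
        storage.put("answer", "42")
        self.assertEqual(storage.get("answer"), "42")

    def test_missing_key_returns_none(self) -> None:
        self.assertIsNone(self.make_storage().get("absent"))


if __name__ == "__main__":
    unittest.main()
```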
It is widely accepted that testing is not only hard and time-consuming but also inherently incomplete. I think a breakthrough in software testing that makes it easy to write PERFECT CODE at a fraction of the cost (time, effort, etc.) would be a welcome break from the status quo of vanity metrics like code coverage.