I came across this interesting situation in a Business Process Outsourcing (BPO) company. The issue was with 'target setting' for a team doing transaction-processing work. These were reasonably important transactions, and errors could have a significant adverse impact on customer satisfaction. However, the process involved a lot of manual intervention and error rates were quite high. The problem was what sort of accuracy targets should be set for the team.
The head of the division was of the firm opinion that the target should always be 100% accuracy (or zero error). His reasoning was 'how can we plan for making an error?' (i.e. if we set the target as 99% accuracy, aren't we telling the agents that they can afford to make one mistake in every 100 transactions? Won't that make them complacent? How can we tell the customer that we are targeting anything less than perfection?). This also led to initiatives like declaring an 'error-free' month. This involved giving a pep talk to the team and making them take a pledge that they would not make any mistakes for one month. The pep talk also included another very interesting line of reasoning: "Can't you do one transaction without error? If you can do that, what prevents you from repeating the same 1000 times? That is all it takes to make an 'error-free' month". So the 'error-free' month initiative was launched with a lot of hype. Sadly, the error rates increased during the 'error-free' month.
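The "do one transaction without error, then repeat it 1000 times" argument quietly multiplies probabilities. Even a highly accurate agent is very unlikely to string together 1000 consecutive error-free transactions, because the small per-transaction error chance compounds. A quick sketch makes this concrete (the accuracy levels here are hypothetical illustrations, not figures from the actual case):

```python
# Chance of completing 1000 consecutive transactions with zero errors,
# for a few hypothetical per-transaction accuracy levels.
# (Independence between transactions is assumed for simplicity.)
for accuracy in (0.995, 0.999, 0.9999):
    p_error_free = accuracy ** 1000  # probability all 1000 succeed
    print(f"{accuracy:.2%} accurate per transaction -> "
          f"{p_error_free:.1%} chance of an error-free run of 1000")
```

Even at 99.9% per-transaction accuracy, an error-free run of 1000 happens only about a third of the time, which is one statistical reason the 'error-free month' pledge was set up to disappoint.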
One key issue here was that while the above approach had a lot of 'intuitive' appeal, it went against the basic principles of goal setting. To be motivating, goals/targets have to be challenging and achievable. As the process involved a large amount of human element/manual intervention, zero error was impossible. So a 'zero-error target' would only de-motivate the employees (as they are 'guaranteed to fail' sooner or later). The solution was to set a demanding but achievable target keeping in mind the current capability of the team, improve the process/team capability, and raise the performance bar/targets accordingly (ensuring that the targets remain demanding but achievable). At the level of rhetoric, the key is to distinguish between a 'performance target for the current performance period' and an 'ideal that we aspire for'.
Another aspect here is the limitations imposed by 'diminishing returns' and 'process entitlement'. Each process has a performance limit (entitlement) beyond which its performance can't be significantly improved without redesigning the process. Processes that involve a lot of manual intervention tend to hit this limit well before 100% accuracy. So unless the process is 'redesigned' (e.g. automated), very high performance targets would be impossible. Also, as the performance level approaches the current 'process entitlement' limit, the 'return-to-effort' ratio for performance improvement efforts (without redesigning the process) tends to fall drastically. So performance improvement beyond a certain point might cost too much, maybe even more than what the customer is willing to pay!
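The falling 'return-to-effort' ratio can be illustrated with a stylized model. Suppose (purely for illustration, these numbers are not from the actual case) the current process design cannot get below a 2% error rate without redesign, and each equal-cost improvement cycle removes half of the remaining gap above that floor:

```python
# Stylized model of diminishing returns near 'process entitlement'.
# Assumptions (illustrative only): the error rate cannot fall below
# a 2% floor without redesigning the process, and each equal-cost
# improvement cycle closes half of the gap above that floor.
ENTITLEMENT = 2.0   # % error floor for the current process design
error = 10.0        # % error before improvement efforts begin

for cycle in range(1, 6):
    new_error = ENTITLEMENT + (error - ENTITLEMENT) / 2
    gain = error - new_error  # percentage points bought by this cycle
    print(f"cycle {cycle}: error {new_error:.2f}% (gain {gain:.2f} pts)")
    error = new_error
```

Each cycle costs the same effort but buys half the improvement of the previous one, so near the entitlement limit the cost per percentage point of accuracy grows without bound, which is exactly why closing the last gap may cost more than the customer will pay.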
It is interesting to note that this argument holds good (at least at the system level, and organizations are systems) even for the most sensitive cases, like surgeries. Obviously no one wants to have (or even think about the possibility of) a botched surgery. But if the cost of moving from a 99.5% success rate to a 99.9% success rate would make the surgery unaffordable to almost everyone, it might not really be helpful. Of course, this doesn't in any way mean that the surgeon plans to fail in 0.5% of the surgeries. The surgeon puts in utmost effort to make every surgery successful. It just means that a hospital will still pay the surgeon his/her salary (and people will still come in for surgery) even if there is a small percentage of failures!