As I had mentioned earlier (See Paradox of Potential Assessment), the basic issue in potential assessment (one that sometimes does not get enough attention) is 'potential for what?' Many answers are possible here. They include:
1. Potential to be effective in a particular job/position
2. Potential to be effective in a particular job family
3. Potential to be effective at a particular level (responsibility level)
4. Potential to take up leadership positions in the company
5. Potential to move up the organization ladder/levels very quickly etc.
Logically, the first four answers should lead to the creation of a capability framework that details the requirements (functional and behavioral competencies) to be effective in the job/job family/level/leadership positions in question. Once this is done, a competency-based assessment centre is often used to assess the potential of employees against that framework. This is where the trouble begins. (Actually, the problems start earlier than this, with the definition of 'potential' and with the creation of the capability framework. But that is another story.)
Let us begin by looking at a couple of basic issues. An assessment centre is essentially a simulation*. Hence, there are always questions about the extent to which the simulation matches reality (the requirements of the job/level). This becomes even more problematic in the case of international assessment centres (for global roles/with participants from different countries), as cultural differences need to be factored in when designing the assessment centre and when interpreting/evaluating the behavior/responses of the participants (e.g. what is an effective response/acceptable behavior in one culture might not be so in other cultures).
We need to avoid a situation where a participant gives the correct answer/response in the simulation because he/she knew it from prior experience/knowledge, and not because he/she was able to arrive at it in response to the situation (and hence demonstrate the competency). So the simulations often use a context that is different from the immediate job/organization context, while trying to test the same underlying competencies. This can bring additional complications in ensuring an adequate match between simulation and reality. By the way, this factor (knowing the correct answer without knowing how to arrive at it) is one of the things that can give rise to the 'bright on the outside, hollow inside' kind of situation mentioned at the beginning of this post. Another factor could be 'sublimated careers' (See Career Development & Sublimation).
Each of the tools/exercises in the assessment centre is designed to test a set of competencies. This implies that each participant should have sufficient opportunity to fully demonstrate all the relevant behaviors corresponding to all the competencies during the exercise. Assuming that 4 competencies (each with 3 relevant behavioral indicators) are being tested in a particular exercise, each participant should have the opportunity to demonstrate 12 behaviors. If the evaluation is done using a frequency scale (e.g. always, most of the time, sometimes, rarely), it implies the need to demonstrate each behavior multiple times during the exercise (e.g. demonstrating the behavior 3 times gets the participant the highest rating), and that means a total of 36 demonstrations per participant. Of course, if this is a group exercise, this number gets multiplied by the number of participants (e.g. 36*6 = 216 demonstrations for a group of 6 participants). This is practically impossible to observe in a 45-minute group exercise! Of course, exercises can be of longer duration, and there can be more exercises (with fewer competencies/behaviors to be tested per exercise). However, considering the cost and time pressures in most organizations, this becomes difficult. This implies that the very design of the assessment centre might prevent participants from fully demonstrating their competencies/potential during the centre, leading to artificially lower potential evaluations.
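The arithmetic above can be turned into a quick feasibility check. This is purely an illustration; the function and its parameters are hypothetical, and the numbers are the ones used in the paragraph:

```python
# Rough feasibility check for an assessment-centre exercise design.
# All numbers are illustrative, taken from the paragraph above.

def required_observations(competencies, indicators_per_competency,
                          repetitions_for_top_rating, participants):
    """Total behavior demonstrations the assessors would need to observe."""
    behaviors_per_person = competencies * indicators_per_competency       # 4 * 3 = 12
    with_repetitions = behaviors_per_person * repetitions_for_top_rating  # 12 * 3 = 36
    return with_repetitions * participants                                # 36 * 6 = 216

total = required_observations(4, 3, 3, 6)
print(total)                 # 216 demonstrations in one group exercise
print(45 * 60 / total)       # 12.5 seconds per demonstration, on average
```

Even before assessor attention spans enter the picture, the averages make the point: a demonstration every dozen seconds, sustained for 45 minutes.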
Now, let us come back to problems specific to using assessment centres as a tool to measure potential. Even in the best-case scenario, what the assessment centre measures is the degree to which the employee/participant demonstrates the behaviors corresponding to the requisite competencies during the assessment centre. So, at best, it can give a good estimate of the current level of readiness of the employee for a particular role/level. However, this does not really indicate the potential of the employee to take up that role/reach that level in the organization hierarchy in the future, because the employee has the opportunity to learn/develop the competencies during the intervening period. An assessment centre can't give any indication of the extent to which (and the speed at which) the employee will further develop/enhance the competencies.
Assessment centres are based on competency models. As I had mentioned in 'Competency frameworks - An intermediate stage?', one of the basic assumptions behind developing a competency model is that there is one particular behavioral pattern that leads to superior results in a particular job (i.e. there is 'one best way' to do the job). This might not be a valid assumption for most non-routine jobs. If there are other ways to be effective in the job (say, based on a deep understanding of the context/great relationships with all the stakeholders), it can lead to 'successful on the job but failed in the assessment centre' kind of scenarios. Of course, it can be argued that such individuals won't be successful if they are moved to a different geography, and hence a low potential rating from the assessment centre is valid. However, that does not negate the fact that they can be effective in that role/level in that particular context. Yes, this (producing results without possessing the specified competencies) can sometimes resemble the 'bright on the outside, hollow inside' kind of situation mentioned earlier.
Another problem is that the results of assessment centres are rarely conclusive in the case of most participants. What you get as the result of an assessment centre is a score on each of the competencies (say, on a 5-point scale). Converting these scores into a 'Yes or No' decision on whether the employee has the potential to move into the role/level often involves many inferential leaps (similar to the 'leaps of faith' mentioned in the title of this post). It is easy to string these scores together into some sort of decision rule/algorithm (e.g. if a participant has a score of 3 and above on 3 of the 5 competencies, and an average score of 3 overall, the answer is a 'Yes'). Of course, we can do tricks like assigning different weights to the individual competencies and specifying minimum scores on some competencies, and come up with a decision rule that appears to be very objective (or even profound!) and that gives a clear 'Yes or No' decision on whether the participant has the potential. But the design/choice of the algorithm is more of an art than a science, and it can be quite subjective and even arbitrary (unless the organization is willing to invest a lot of time and money in a full-fledged validation study)!
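A decision rule of this kind is trivially easy to code up, which is exactly what makes its apparent objectivity seductive. The sketch below uses the hypothetical thresholds from the example above (3 of 5 competencies at 3 or above, and an overall average of 3); nothing about it is empirically validated:

```python
# A hypothetical 'Yes/No' potential decision rule of the kind described above.
# The thresholds are the illustrative ones from the text, not a validated model.

def has_potential(scores):
    """scores: the five competency ratings, each on a 1-5 scale."""
    meets_bar = sum(1 for s in scores if s >= 3)   # competencies at 3 or above
    average = sum(scores) / len(scores)            # overall average score
    return meets_bar >= 3 and average >= 3

print(has_potential([4, 3, 3, 3, 2]))  # True
print(has_potential([4, 3, 3, 2, 2]))  # False - one point lower and the decision flips
```

Note how a single rating point separates the two verdicts: the rule produces a crisp 'Yes or No', but the crispness is an artifact of the arbitrarily chosen thresholds, not of any underlying certainty about the participant.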
So what does this mean? To me, an assessment centre is a tool; a tool that has certain capabilities and certain limitations. The tool can be improved (if there is sufficient resource investment) to enhance the capabilities and reduce the limitations to some extent. But some basic limitations will remain. Hence, if one is aware of the limitations and the capabilities, one can make an informed decision on whether it makes business sense to use this tool in a particular context, depending on what one is trying to achieve and the organization's constraints/boundary conditions. If you push me to be more specific, the best answer that I am capable of at this point is as follows: 'It is valuable to use assessment centres as one of the inputs if the objective is just to assess the current level of readiness of the employee for a particular role/level. If the objective is to assess the potential of the employee to take up that role/reach that level in the organization hierarchy in the future, assessment centres are of limited value when the intervening time period is long - say, anything above 2 years'!
*Note: Assessment centres need not always be pure simulations. Tools like the Behavioral Event Interview (BEI) are often used as part of assessment centres. However, it becomes difficult to use a BEI in an assessment centre designed to test an employee's potential for a higher-level role, because the employee might not have had enough opportunities (till that point in his/her career) to handle situations that require the higher-order competencies being tested. Hence he/she will be at a disadvantage when asked (during the BEI) to provide evidence of having handled situations/tasks that require the higher-level competencies.

Any comments/suggestions?