Saturday, December 22, 2012

Using Assessment Centres for Evaluating Potential – A Leap of Faith?

“She is very bright”, said the first HR Manager. “She is the definition of a tube light - bright on the outside and hollow inside”, said the second HR Manager. Two senior HR professionals, and two very different inferences – about the same employee! I heard this conversation a long time ago. It came back to me recently, when I was thinking about Assessment Centres – the effectiveness of Assessment Centres as a tool to evaluate the ‘potential’ of employees, to be more precise. Usually, comments on the ‘brightness’ of an employee have more to do with his/her perceived ‘potential’ than with his/her performance!

As I had mentioned earlier (See Paradox of Potential Assessment), the basic issue in potential assessment (which sometimes does not get enough attention) is 'potential for what?’ Many answers are possible here. They include

1. Potential to be effective in a particular job/position
2. Potential to be effective in a particular job family
3. Potential to be effective at a particular level (responsibility level)
4. Potential to take up leadership positions in the company
5. Potential to move up the organization ladder/levels very quickly etc.

Logically, the first four answers should lead to the creation of a capability framework that details the requirements (functional and behavioral competencies) to be effective in the job/job family/level/leadership positions that we are talking about. Once this is done, a competency based assessment centre is often used to assess the potential of employees against that framework. This is where the trouble begins (Actually, the problems start earlier than this – with the definition of ‘potential’ and with the creation of capability framework. But that is another story).

Let us begin by looking at a couple of basic issues. An assessment centre is essentially a simulation*. Hence, there are always questions about the extent to which the simulation matches reality (the requirements of the job/level). This becomes even more problematic in the case of international assessment centres (for global roles/with participants from different countries), as cultural differences need to be factored in when designing the assessment centres and when interpreting/evaluating the behavior/responses of the participants (e.g. what is an effective response/acceptable behavior in one culture might not be so in other cultures).

We need to avoid a situation where a participant gives the correct answer/response in the simulation because he/she already knew it from prior experience/knowledge, rather than because he/she arrived at it in response to the situation (thereby demonstrating the competency). Hence the simulations often use a context that is different from the immediate job/organization context, while trying to test the same underlying competencies. This can bring additional complications in ensuring an adequate match between simulation and reality. By the way, this factor (knowledge of the correct answer without knowing how to arrive at it) is one of those that can give rise to the ‘bright on the outside – hollow inside’ kind of situation mentioned at the beginning of this post. Another factor could be ‘sublimated careers’ (See Career Development & Sublimation).

Each of the tools/exercises in the assessment centre is designed to test a set of competencies. This implies that each of the participants should have sufficient opportunity to fully demonstrate all the relevant behaviors corresponding to all the competencies during the exercise. Assuming that there are 4 competencies (each with 3 relevant behavioral indicators) being tested in a particular exercise, each participant should have an opportunity to demonstrate 12 behaviors. If the evaluation of the behaviors is done using a frequency scale (e.g. always, most of the time, sometimes, rarely etc.), it would imply the need to demonstrate each of the behaviors multiple times during the exercise (e.g. demonstrating a behavior 3 times gets the participant the highest rating), and that would imply a total of 36 behaviors. Of course, if this is a group exercise, this number gets multiplied by the number of participants (e.g. 36*6 = 216 behaviors for a group of 6 participants). This is practically impossible to do in a 45-minute group exercise! Of course, exercises can be of longer duration and there can be more exercises (each testing fewer competencies/behaviors). However, considering the cost and time pressures in most organizations, this becomes difficult. This implies that the very design of the assessment centres might prevent the participants from fully demonstrating their competencies/potential during the centre – leading to artificially lower potential evaluations.
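The arithmetic above can be sketched in a few lines (a back-of-the-envelope calculation, using only the illustrative numbers already quoted in this paragraph – real assessment centres will of course differ):

```python
# Illustrative numbers from the paragraph above (not from any real centre).
COMPETENCIES = 4            # competencies tested in the exercise
INDICATORS_PER_COMPETENCY = 3
REPS_FOR_TOP_RATING = 3     # demonstrations needed for the highest frequency rating
PARTICIPANTS = 6            # group size
EXERCISE_MINUTES = 45

behaviours_per_person = COMPETENCIES * INDICATORS_PER_COMPETENCY * REPS_FOR_TOP_RATING
total_behaviours = behaviours_per_person * PARTICIPANTS

# Average time available to observe (and record) each distinct behaviour:
seconds_per_behaviour = EXERCISE_MINUTES * 60 / total_behaviours

print(behaviours_per_person)            # 36
print(total_behaviours)                 # 216
print(round(seconds_per_behaviour, 1))  # 12.5
```

Put that way, the design problem is stark: on average, a distinct scoreable behaviour would have to surface (and be noticed by an assessor) roughly every 12 seconds for the entire exercise.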

Now, let us come back to problems specific to using assessment centres as a tool to measure potential. Even in the best-case scenario, what the assessment centre is measuring is the degree to which the employee/participant demonstrates the behaviors corresponding to the requisite competencies during the assessment centre. So, at best it can give a good estimate of the current level of readiness of the employee for a particular role/level. However, this does not really indicate the potential of the employee to take up that role/reach that level in the organization hierarchy in the future, because the employee has the opportunity to learn/develop the competencies during the intervening period. An assessment centre can’t give any indication of the extent to which (and the speed at which) the employee will further develop/enhance the competencies.

Assessment centres are based on competency models. As I had mentioned in 'Competency frameworks - An intermediate stage?', one of the basic assumptions behind developing a competency model is that there is one particular behavioral pattern that would lead to superior results in a particular job (i.e. there is ‘one best way’ to do the job). This might not be a valid assumption in the case of most non-routine jobs. If there are other ways to be effective in the job (say, based on a deep understanding of the context/great relationships with all the stakeholders), it can lead to ‘successful on the job but failed in the assessment centre’ kind of scenarios. Of course, it can be argued that such individuals won't be successful if they are moved to a different geography, and hence a low rating on their potential coming from the assessment centre is valid. However, it still does not negate the fact that they can be effective in that role/level in that particular context. Yes, this (producing results without possessing the specified competencies) can sometimes resemble the ‘bright on the outside – hollow inside’ kind of situation mentioned earlier.

Another problem is that the results of the assessment centres are rarely conclusive - in the case of most of the participants. What you get as the result of the assessment centre is a score on each of the competencies (say on a 5 point scale). Converting these scores into a ‘Yes or No’ decision on whether the employee has the potential to move into the role/level often involves many inferential leaps (similar to the ‘leaps of faith’ mentioned in the title of this post). It is easy to string these scores together into some sort of a decision rule/algorithm (e.g. if a participant has a score of 3 and above on 3 of the 5 competencies, and an average score of 3 overall, the answer is a ‘Yes’ etc.). Of course, we can do tricks like assigning different weights to the individual competencies and specifying minimum scores on some competencies, and come up with a decision rule that appears to be very objective (or even profound!) and that gives a clear ‘Yes or No’ decision (on whether the participant has the potential or not). But the design/choice of the algorithm is more of an art than a science, and it can be quite subjective and even arbitrary (unless the organization is willing to invest a lot of time and money in a full-fledged validation study)!
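To make the arbitrariness concrete, here is a hypothetical decision rule of the kind described above. The thresholds and the score vector are purely illustrative (they are not taken from any real assessment centre); the point is that the same participant flips between ‘No’ and ‘Yes’ depending on an essentially arbitrary choice of cut-off:

```python
def has_potential(scores, min_score=3, min_count=3, min_avg=3.0):
    """Return True ('Yes') if at least `min_count` competency scores are
    `min_score` or above AND the overall average is at least `min_avg`.
    (A made-up rule, for illustration only.)"""
    meets_bar = sum(1 for s in scores if s >= min_score)
    return meets_bar >= min_count and sum(scores) / len(scores) >= min_avg

participant = [4, 3, 3, 2, 2]  # scores on five competencies (average = 2.8)

print(has_potential(participant))                # False (average below 3.0)
print(has_potential(participant, min_avg=2.75))  # True  (slightly laxer cut-off)
```

A 0.25 shift in the average-score cut-off reverses the verdict on this participant's ‘potential’ – which is exactly why such rules need a proper validation study before they deserve to be trusted.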

So what does this mean? To me, an assessment centre is a tool; a tool that has certain capabilities and certain limitations. The tool can be improved (if there is sufficient resource investment) to enhance the capabilities and reduce the limitations to some extent. But some basic limitations will remain. Hence, if one is aware of the limitations and the capabilities, one can make an informed decision on whether it makes business sense to use this tool in a particular context – depending on what one is trying to achieve and the organization's constraints/boundary conditions. If you push me to be more specific, the best answer that I am capable of at this point is as follows: ‘It is valuable to use assessment centres as one of the inputs if the objective is just to assess the current level of readiness of the employee for a particular role/level. If the objective is to assess the potential of the employee to take up that role/reach that level in the organization hierarchy in the future, assessment centres are of limited value when the intervening time period is long – say anything above 2 years’!!!

*Note: Assessment Centres need not always be pure simulations. Tools like the Behavioral Event Interview (BEI) are often used as part of assessment centres. However, it becomes difficult to use BEI in an assessment centre designed to test an employee's potential for a higher level role. This is because the employee might not have had enough opportunities (till that point in his/her career) to handle situations that require the higher order competencies (required for the higher level/role and hence being tested in the assessment centre). Hence she/he will be at a disadvantage when asked (during the BEI) to provide evidence of having handled situations/tasks that require the higher level competencies.
Any comments/suggestions?

Saturday, May 12, 2012

Performance ratings and the ‘above average effect’

“Performance ratings will be shared with the employees next week. We expect employee attrition to go up significantly in the next few months”, said the HR Manager.

It is a fact that in many organizations the attrition percentage goes up in the months after the annual performance ratings are announced. Some of this is because of process linkages. Salary hikes and bonuses (that are linked to the performance ratings) usually follow soon after (or along with) the announcement of the performance ratings, and it might make logical sense for employees to receive the bonus (after all, one has worked for an entire year to get that) and the higher salary, and then negotiate a better salary (with a new company) based on that. But some of the resignations are a direct emotional reaction to the performance ratings. Based on my experience across multiple companies (as an employee and as a consultant), I have often wondered why the sharing of performance ratings is such an unpleasant experience – both for the employees and for their Managers.

There could be many reasons for this. The performance objectives and targets might not have been properly defined or agreed upon. There might have been changes in the context or factors outside the employee’s control that made the targets unreasonable/impossible to achieve. The performance feedback might not have been given regularly and accurately (managers often try to ‘soften’ negative feedback) and hence the rating might have come as a surprise for the employee. But I feel that most of the unpleasantness of the situation is related to a psychological phenomenon known as ‘superiority illusion’ or the ‘above average effect’.

'Illusory superiority' is a cognitive bias that causes people to overestimate their positive qualities and abilities and to underestimate their negative qualities, relative to others. This manifests in a wide range of areas including intelligence, possession of desirable characteristics/personality traits, performance on tests and of course ‘on the job performance’ (for which performance rating is an indicator). While the exact percentages can vary based on the social/economic/cultural context, typically in a group at least 75-90% of the members rate themselves as 'above average'.

This fact (that at least 75% of people rate themselves as 'above average') creates trouble when it comes to performance ratings. These days companies are keen on ‘differentiating based on performance’ (say, ‘to build a performance driven culture’), and this means that when it comes to performance ratings, the relative performance of the employees becomes a critical factor apart from the absolute performance (performance against agreed upon targets). Whether or not a fixed percentage distribution of ratings is prescribed, some sort of a ‘normal curve’ emerges. Typically, the positively differentiated performance ratings (i.e. if we have a 1 to 5 scale with 1 being the lowest and 5 being the highest, ratings of 4 and 5) form about 25%. Thus only about 25% of the employees will get ‘above average’ performance ratings. The arithmetic is simple and the conclusion is inevitable. If at least 75% of the employees consider their performance to be ‘above average’ and only 25% of the employees will get ‘above average’ performance ratings, then at least 50% of the employees will be disappointed with their performance ratings. Thus, sharing of performance ratings is likely to be an unpleasant experience – both for the employee and for the Manager.
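The inevitable arithmetic above can be written out explicitly (using the 75% and 25% figures quoted in this post; the result is a lower bound, since it assumes the best case where every ‘above average’ rating goes to someone who expected one):

```python
# Shares from the paragraph above (illustrative lower-bound figures).
self_rated_above_avg = 0.75  # share who rate themselves 'above average'
rated_above_avg = 0.25       # share who actually receive a 4 or 5

# Best case: all 25% of 'above average' ratings land on people who
# expected them. Everyone else who expected one is disappointed.
disappointed_at_least = self_rated_above_avg - rated_above_avg

print(disappointed_at_least)  # 0.5
```

In any less-than-perfect allocation of ratings to expectations, the disappointed fraction only goes up from that 50% floor.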

Now let us look at this from the Manager’s point of view. Experienced people managers know that the problem described above will happen (though they might not be aware of the exact percentages/degree of the problem). But they can’t do much about it as the two critical factors (employee’s tendency to rate their performance as 'above average' and the maximum percentage/number of the ‘above average performance ratings’ that the Managers can give) are largely outside their control. Managers do what they can. This can range from ‘expectation management’ to ‘pushing for a higher percentage of above average ratings for their team’ to ‘providing other rewards and recognition to compensate for the unpleasantness created by lower than expected performance ratings’ to disowning the performance ratings (blaming it on HR and/or senior leadership). But these are of limited utility as they are not addressing the core problem. Also, this can lead to a situation where the employees lose confidence - in the Manager and in the Performance Management System. Another option for the Manager is to staff his/her team with people who have a low self-image (masochists are welcome!). But if the Manager wants the employees to have high self-belief/confidence when dealing with customers and low self-belief/confidence when interacting with the Manager, then it calls for a Janus-faced personality. While such personalities can be found in abundance in extremely hierarchical organizations (see Followership behaviors of leaders), it might not be a viable strategy for ‘normal’ organizations!

Logically speaking, grappling with this problem for an extended period of time and gaining insights and wisdom from the struggle should help the Manager to be more reasonable when estimating his/her own relative performance (and hence the performance rating he/she deserves) and to be more understanding when the Manager’s Manager tries to share and explain the Manager’s performance rating. But, as the studies in ‘Behavioral Economics’ have demonstrated, being aware of a ‘bias’ need not necessarily help one to overcome the bias! No wonder managers often dread the entire business of performance ratings – giving the performance ratings to their team and receiving their own performance ratings!!!!

The research done on the ‘above average effect’ has thrown up some interesting findings that might help us (at least to some extent) in dealing with this problem in the context of performance ratings. It has been found that the individuals who were worst at performing the tasks were also worst at estimating their relative performance/degree of skill in those tasks. It has also been found that given training, the worst subjects improved the accuracy of their estimate of their relative performance apart from getting better at the tasks.

Another possibility here is to make the performance ratings less dependent on relative performance and more dependent on absolute performance (performance against agreed upon targets), or to increase the percentage of 'above average' ratings. But these kinds of steps can go against the performance management philosophy of the organization (of differentiation based on performance) and hence can be impractical. If the context so permits, standardization of performance objectives/targets for a particular role, and making the information on the performance of employees against those objectives/targets available to all, can also be looked at. Research shows that self-evaluation (especially in comparative contexts) is driven primarily by an intuitive ‘heuristic process’ as opposed to a logical/effortful ‘evidence-based process’. However, by making valid & reliable data on relative performance available, and by encouraging the employees to look at it (and maybe even participate in an open discussion about it) before they do the self-evaluation (and the evaluation of their relative performance), the influence of the ‘evidence-based’ part on the decision-making process might increase.

Again, we can make discussions on the challenges related to the 'above average effect' part of the performance management related communication and training for employees and Managers. Maybe we can even build in some 'nudges' (like asking the employees to write down three things that their peers have done better than them – as part of the self-assessment) that will prompt them to deal with their cognitive bias (of superiority illusion) in a more rational manner.

Apart from this, ensuring that the basics of performance management – performance planning, coaching, feedback and review – are done well also helps, though they don’t directly address the problem we are discussing. It is similar to ‘taking antibiotics for dealing with a viral infection’. While they don’t solve the core problem (the virus), they do help in preventing secondary infections and hence have some utility in some cases (especially when effective anti-viral drugs are not available and the possibility of secondary infections is high)! The problem we are dealing with here is too 'human' to be completely solved by 'performance management techniques', and we have to live with it to some extent as the price for being human!

Any ideas/comments?

Thursday, March 8, 2012

On what ‘good’ looks like…

“I am leaving this organization because my manager and I have very different ideas on what ‘good’ looks like in my domain, and we have agreed to disagree. It was not a matter of lack of clarity on the performance objectives and targets. The issue was a fundamental disconnect on what those objectives and targets should be and how they should be achieved – on what ‘excellence’ means in my role and in my domain”, said the Function Leader during his exit interview.
In my career so far, I have had the good fortune of experiencing many organization contexts – either as an external consultant or as an employee. Based on these experiences, I have come to realize that organizations often have different definitions of the ‘picture of success at an individual level’ (i.e. what good individual performance looks like). While the tasks/deliverables will vary from one job to another within the organization, there are common patterns that hold good across jobs in an organization on what good performance (or ‘excellence’ or ‘quality’) looks like. But these patterns can vary a lot from one organization to another. When people move from one organization to another, this can create ‘rude shocks’ – for both the employee and the organization – especially when an employee who has been successful in one organization joins another organization that has a different definition of excellence (‘what good looks like’).
Let us take a closer look at these underlying (tacit) definitions of quality (or excellence). While each organization has its own underlying definition (assumption), it can be useful to conceptualize these underlying assumptions as points on a continuum between two polar opposites: 'absence of variation' and 'presence of value'.
At one end we have organizations where the underlying definition of quality is very similar to the ‘six sigma definition’ – ‘absence of variation’. In these organizations, the performance of an employee is considered to be excellent if he/she thinks through the goals before agreeing to them, creates a detailed plan to work towards the goals in a systematic manner, and achieves the goals even if there were changes in the environment (through scenario planning, risk analysis & mitigation and sheer focus). These organizations also tend to value and invest in building capability/expertise – at both the people and process level. Hence the premium is on good design, deep expertise, meticulous planning, reliability, consistency, coherence and congruence. In extreme cases this can lead to rigidity.
At the other end of the continuum we have organizations where the definition is more like 'presence of value' or ‘fitness for purpose’ (with the ‘purpose’ changing quite often). Here the focus is on ‘trial and error’. Simply put, this means doing whatever makes the most sense in a particular situation. In these organizations, muddling through things is acceptable and even preferred (over thinking through things and seeking clarity before starting work). People who insist on planning and consistency are considered to be ‘risk-averse’ (or even 'lacking in courage'). Operating with contradictions (and a lack of coherence & consistency) is considered ‘heroic’. A lot of emphasis is placed on pragmatism (as opposed to expertise) and on workarounds. Hence the premium is on ‘flexibility’ and ‘crisis handling’. In extreme cases it can lead to an organization that frequently jumps from one idea (goal or fad) to another.
Of course, there are many other dimensions (of variation in the underlying definitions of what good performance looks like) in addition to the dimension represented by the continuum between the two end points mentioned above. There is nothing inherently 'good' or 'bad' about these underlying definitions – they are just different (equally valid) ways of looking at the world. The point is that these variations exist across organizations, and they can have a significant bearing on performance, employee satisfaction, engagement and retention.
To some extent, these assumptions are related to the environment in which the organization is operating. But often it is a matter of the preferred way of responding to the environment. These assumptions are also closely related to the culture of the organization – especially the deeper levels of culture: values and basic underlying assumptions. Theoretically speaking, the match between the employee’s and the organization’s definitions of ‘what good performance looks like’ is represented by some dimensions of ‘person-organization’ fit. However, an intellectual discussion on the low scores on some dimensions of ‘person-organization’ fit might not fully bring out the reality (trauma!) of the ‘rude shocks’ for the employee and for the organization (mentioned earlier in our discussion).
This brings us to the question of adaptation. Employees can adjust. Organizations can change too – though usually it is a very slow process and requires a ‘critical mass of new employees with different preferences’. The individual’s definition of ‘good’ can also change. However, the individual’s definition of ‘good’ is shaped mainly by his/her personality and his/her ‘early career experiences’ (see 'Influence of early career experiences'), and a change in the same requires a lot of time and a critical mass of high impact (profound or traumatic) new (different) experiences. Hence, for the time being, let us focus on the issue of new employees attempting to align with the organization’s definition of ‘good performance’.
Yes, employees do realize that they are unlikely to find an organization that provides a 100% match to their preferences and that they need to adjust. But if an employee needs to constantly act outside his/her preferences, it can lead to stress. This can also lead to mediocrity, as the individuals are not able to play to their strengths. Excellence and engagement at the individual level require the opportunity ‘to bring more of who you are into what you do’ (see 'Employee engagement and the story of the Sky Maiden'). This is critical for those employees who look at work as one of the avenues for self-expression. Similarly, when organizations talk about connecting with employees at higher levels of the needs hierarchy, this becomes important for the organizations also.
Now let us come back to the exit case that we saw at the beginning of this post. Ideally, the employee and his manager should have been able to arrive at a higher ground that integrates their conflicting points of view (like the struggle between thesis and antithesis resulting in a higher, more truthful synthesis of the two – in Hegelian Metaphysics). But this ideal state is often not possible within the constraints of the organization context and the individuals involved. Sometimes (as the existentialist philosopher Kierkegaard says), people will have to make ‘either/or’ decisions (and the seductive beauty of the Hegelian ‘and/both’ turns out to be an illusion).
One of my all time favorite books is ‘Zen and the Art of Motorcycle Maintenance’ by Robert M. Pirsig. This book begins with the lines “And what is good, Phaedrus, And what is not good, Need we ask anyone to tell us these things?” In the context of our discussion (for a person who is trying to join a new organization or for an organization trying to hire someone), the answer should be a loud ‘YES’. Yes, it is worthwhile to ask this explicitly, listen carefully, ‘read’ between the lines and to be very careful about what is left unsaid!!!

Sunday, January 22, 2012

A political paradox for OD & HR

“This is a political issue and we should resolve it politically”, said the senior consultant. I heard this interesting piece of ‘wisdom’ at an early stage in my career as an OD/HR consultant, and it left me somewhat confused.

I knew that as external consultants one of our main tasks was to diagnose the core issue/root problem correctly (as opposed to merely documenting the symptoms) so that we can design an intervention at the appropriate level. I also knew that ‘workplace politics’ existed in many of our client organizations. What confused me was the part that said ‘we should resolve it politically’. ‘Organizational politics’ was a ‘bad’ word for me at that time – something that incompetent people do to further their selfish motives – something that we as external consultants should keep a safe distance from. So the suggestion that we should use political means to resolve the issue alarmed me. Over the last decade, I have developed a better understanding of the paradoxical nature of organizational politics and its implications for anyone who wants to lead/facilitate change in business organizations.

As we have seen earlier (see 'Paradox of business orientation of HR'), a paradox occurs when there are multiple perspectives/opinions (doxa) that exist alongside (para) – each of which is true – but they appear to be in conflict with one another. Let us look at some of these opinions about organizational politics.

1. Politics is essentially about power. Any activity that reinforces or alters the existing power balance in a relationship, group or organization is a political activity. Organization development (OD) is about facilitating change. To make change happen power needs to be exercised and hence all Organization Development is essentially political.
2. Politics is based on informal power - power that is not officially sanctioned. Hence politics is illegitimate in the organization context.
3. A large part of the work in any organization takes place through the 'informal organization' (informal channels that are not captured in the organization structure/job descriptions/chart of authority/operating manual). Keeping this in mind, one can't claim that organization politics is illegitimate just because it is based on informal power.
4. Organization politics is undesirable as it is all about pursuing selfish interests.
5. Organization politics need not be about pursuing selfish interests. It is necessary in order to secure resources and further ideas in an organization. Both ‘bad politics’ (characterized by impression management, deceit, manipulation and coercion) and ‘good politics’ (characterized by awareness, creativity, innovation, informed judgment, and critical self-monitoring) exist in organizations.  
6. A good organization culture can eliminate organizational politics.
7. Politics will be present in any group of human beings. The only way to avoid politics is to define and enforce detailed rules and procedures for all activities and interactions among the employees. This would be very difficult to do in most organizations, and it would get more difficult when an uncertain and fast changing business environment requires organizations to be dynamic and rapidly evolving. When an organization is in transition there won’t be clearly established rules/procedures, and hence politics will become more prevalent. Since organizations are likely to spend increasing amounts of time in the ‘transition state’ (because of the multiple waves of change), politics will become even more prevalent.
8. Politics is a social construct. Hence the behaviors that are perceived to be 'political' in one organization might not be perceived as 'political' in another organization.

So where does this leave us? I think that organization politics is a reality, and anyone driving or facilitating change in an organization (like a business leader or an HR/OD professional) needs to develop an accurate understanding of the power structure and political dynamics of the organization. One of the key reasons why many change efforts fail (and why many consultants’ reports/recommendations gather dust without getting implemented) is that they didn’t pay sufficient attention to the political dynamics of the organization. As Human Resource Management (HR) professionals move from transactional roles to more consultative/'change agent like' roles, they need to develop the ability to navigate the 'political waters' of the organization better. Again, if the change facilitators don't pay attention to the political dynamics, they might end up as ‘pawns in the political game’ or even as ‘sacrificial lambs in the political battle’.

I also think that both formal and informal influence needs to be used to maximize the chances of the change effort's success. This will become increasingly critical as the organizations become more fluid (with less rigidly/clearly defined procedures) and dynamic (fast changing with higher degree of uncertainty both externally and internally).

However, I feel that the OD consultant should not ‘play politics’ (i.e. become a political activist) as that would mean driving a political agenda/imposing the consultant’s agenda on the organization. This goes back to the ‘process consulting’ foundations of OD where the consultant’s role is to enable the organization to solve its problems (and to increase its problem solving capability) as opposed to providing solutions. Yes, I agree that all HR/OD consulting need not be process consulting and that the dividing line between the mandate of the HR/OD initiative/project and the political agenda of the consultant (especially internal consultant) is not always clear.

Hence, my current thinking is that the change facilitator/change leader should gather data on the political dynamics of the organization (power structure, various clusters of interests and their assumptions/world views/agendas/unstated concerns, interrelationships among the various clusters etc.) and leverage the same to improve diagnosis, solution design and implementation. This includes presenting (at appropriate times/stages) relevant data on the conflicting assumptions/interests without taking sides. This can also reduce the relevance of politics, by making relevant parts of the informal (unstated/implicit) elements of the organization dynamics more formal (stated/explicit). This is not unlike a psychoanalyst helping a patient to become more psychologically healthy by enabling the patient to make some of the relevant parts of the unconscious more conscious (and hence better integrated). Most managers consider politics a routine part of organizational life – though they might not talk about it openly. Hence, incorporating (without any negative associations) discussions/training on 'understanding and managing the political dimension of change' in the change management intervention will give the leaders/managers a legitimate platform and the skills to surface, talk about and deal with this dimension, thereby increasing the probability of the successful implementation of the change.

Another relevant analogy is the approach for incorporating feelings and emotions into the decision-making process. Feelings and emotions are real – though they might not be rational – and hence they can’t be ignored. However, ‘making decisions based on emotions’ is not desirable from an effectiveness point of view. We can improve the quality of our decisions by gathering data on the emotions/feelings of the stakeholders/ourselves (including the impact of the various decisions/possible options on the feelings/emotions of the stakeholders) and using the same to inform our diagnosis, solution design and implementation. Similarly, we can improve the effectiveness of our change interventions (diagnosis, solution design and implementation) by leveraging the data on the political dynamics of the organization without ‘playing politics’. Yes, this is a tightrope walk that requires a very high degree of self-awareness and critical self-monitoring. But it is something that HR/OD consultants must do to maintain their integrity, credibility, effectiveness & relevance!