Career planning is one of the most interesting rituals in HR. But before we come to career planning, let us look at the myth of Sisyphus. We come across Sisyphus in Greek mythology. The myth says that because of his trickery Sisyphus was cursed by the gods. As a result, he had to repeat a maddening procedure forever. He was compelled to roll a huge rock up a steep hill, but before he reached the top of the hill, the rock always escaped him and he had to begin again. Actually, similar stories exist in other cultures also. For example, in my home state (Kerala) there is a similar story about Naranathu Bhranthan. Naranathu Bhranthan was considered to be a 'siddha' (an 'enlightened' or 'realized' being) though some of his behaviors appeared to be rather 'strange'. He used to follow the same procedure as that of Sisyphus (though not on a full-time basis!). But he was doing it out of choice. Also, in his case the stones would not automatically roll back. So he would manage to get many big stones to the top of the hill. Then he would push them down one by one and laugh loudly as they rolled down the slope. We will come back to Naranathu Bhranthan later in this post. As I have mentioned in an earlier post, myths are important as they contain eternal truths, though myths might be too true to be real.
Now let us come back to career planning. Organizations in general, and HR professionals in particular, invest a lot of time and effort in career planning. There are very good reasons for doing so. A large number of studies have shown that 'opportunity for career development' is one of the most important things that employees look for in an organization. So the organizations (and HR professionals) have to do something about this. The typical response is to map out career paths. Since organizations are keen on approaching this 'strategically'/with a long-term perspective, these career paths provide 'growth paths' extending over many years. Since there are many types of employee profiles, employee preferences, positions and career options, often this leads to a huge amount of detail. This of course implies a large investment of time/resources. But there is a paradox here. In many industries (especially in sectors like IT/ITES/BPO in India) the attrition rates are very high. So in many organizations most of the employees would leave before they complete three years in the organization. Hence these long-term career plans get wasted in the case of most of the employees. This is where Sisyphus comes in. We put in a lot of effort in formulating detailed career paths (like Sisyphus rolling the huge rock up the hill). But before they can make significant progress along these nice career paths, most of the employees leave (or 'escape', like the rock in the case of Sisyphus). So does 'career planning' amount to some sort of a 'Sisyphus-like curse' for HR professionals?
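To get a feel for the scale of this paradox, here is a back-of-the-envelope sketch in Python. The 35% annual attrition rate is an assumed, purely illustrative number (attrition in the sectors mentioned above varies widely), not data from any particular organization:

```python
# If annual attrition is (say) 35%, what fraction of a cohort of new
# hires is still around at the end of each year of a multi-year
# career path? (Illustrative numbers only.)

def survivors(annual_attrition: float, years: int) -> list[float]:
    """Fraction of the original cohort remaining at the end of each
    year, assuming a constant, independent annual attrition rate."""
    remaining = 1.0
    fractions = []
    for _ in range(years):
        remaining *= (1.0 - annual_attrition)
        fractions.append(round(remaining, 3))
    return fractions

print(survivors(0.35, 3))  # roughly [0.65, 0.423, 0.275]
```

Under these (assumed) numbers, barely a quarter of a cohort would still be around at the three-year mark - which is the point at which many of the elaborate career paths would only just be getting started.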
Maybe the situation would improve if we could target career planning efforts at those employees who are likely to stay on with the organization for a long time. While it can be argued that career planning itself would reduce attrition, this does not seem to work very well in many organizations. Maybe career planning (at least in the traditional form) would have a significant influence only on some employees (who already have some sort of a long-term perspective and also a good degree of person-organization fit!). Of course, there are more innovative approaches to career development that are being experimented with.
Another way to look at this situation is to say that the 'career planning ritual' is both 'necessary and beneficial', though the manifestation of the results might not necessarily be in terms of employees moving along the prescribed career paths. The ritual itself might help in building positive energy, and it might also be considered a necessary condition (though often not a sufficient condition) for positive organizational outcomes. Maybe we are more like Naranathu Bhranthan than like Sisyphus. This would imply that we are formulating these career paths knowing that most of the employees won't really follow them. So we are like Naranathu Bhranthan, who was following the 'Sisyphus-like' procedure out of choice. By the way, the word 'bhranthan' in the Malayalam language means a 'madman'. So we can see that though Naranathu Bhranthan appeared to be 'mad' to many people (and hence he was called a 'bhranthan'), he was the 'master of his madness' and he was laughing at life itself (remember - he was also considered to be a 'siddha'). Perhaps career planning in rapidly changing, high-attrition environments would always be a maddening activity. But each one of us can attempt to be a 'master of the madness' rather than being a slave. Maybe we can also laugh like Naranathu Bhranthan used to do (though not so loudly - lest we be considered 'mad' by the 'masters' in our organizations!) when the employees grow beyond (or even 'jump' out of) the elaborate career paths that we had created with so much effort!
So are you laughing?
Note: Please see here for another example of the connection between HR and 'madness'.
Prasad Oommen Kurian's blog on Human Capital Management and Organization Development
Tuesday, August 21, 2007
Tuesday, August 14, 2007
Research and a three-year-old
The incident that triggered the thought process behind this post happened when my son was about 3 years old. This was the time when he was trying to figure out 'cause and effect' relationships. So he used to say things like "If I shout in the class, my teacher will scold me", "I ran very fast in the park. That is why I fell down" etc. Those days we used to have an evening ritual. I would put my son on my shoulders and go for a walk. This 'sitting on the shoulders' arrangement made conversations easy even when there was a lot of noise all around. So this led to a lot of interesting discussions. There is nothing quite like a conversation with a curious, confident and talkative three-year-old to force one to be aware of and to question one's assumptions!
These walks would take us near a manned railway crossing/gate. Since he liked to see trains, we would stand there for a long time. After a few days he told me about a 'discovery' he had made: "The gate has closed. That is why the train is coming"! Now, we all know that the 'causation' (if any) is the other way around. But purely based on his observations this was not so. He saw one thing happen (the gate closing), and after that something else always happened (the train coming). Based on his 'life experience so far' (or his understanding of the 'system'/'universe') it was reasonable for him to think that if something happens and something else always happens after that, the first thing might be causing the second thing (this principle had worked for him in the two examples mentioned above - running in the park and shouting in the class).
So, how would I convince him that his conclusion was wrong? The only way that I could think of was to tell him about the larger system (the railway system in this case - that makes the trains run and the gates close). This solution 'worked' only because there was someone around who knew about the larger system. He could not have come to the 'correct conclusion' purely based on his observations and his life experience thus far (i.e. based on his understanding of the 'system' at that point).
Now, if we look at research in behavioral science (or maybe research in general), often we don't have the luxury of fully knowing the larger system in which the phenomena that we are observing are happening. Also, there might not be anyone around who has an adequate understanding of the system to 'enlighten' us. Actually, such understanding might not even exist! (as all the 'possible' events/system behaviors might not have been observed or even taken place so far - e.g. unusual/rare events/system behaviors like those that could result from malfunctioning of railway signals, human error, train breakdowns, accidents etc., or events like 'two trains passing through the railway gate at the same time on parallel tracks' that could arise from a peculiar/uncommon combination of factors - if we stick with our original example). Often, there is no way we can study the 'entire system' (actually, it would be very difficult even to determine the exact boundaries of the relevant 'system' in a particular study). We might not be in a position to look at all the data. So we have to decide what data we would study and what data we would leave out. This could bring in biases (e.g. selection bias, survivorship bias etc.) and limitations. Thus, there is a significant risk that we might make the wrong inference (since we are limited by our observations and our current level of understanding of the system).
In addition to this, there are the standard problems with spurious correlations, mistaking correlation for causation, determining the direction of causation ('A causes B' or 'B causes A' or 'C causes both A and B' etc.) and assumptions regarding the homogeneity/uniformity of the system (assuming that findings that are valid in one part of the system are equally valid in other parts of the system). Of course, there are ways of expanding both our 'current level of understanding' and our data set/observations (e.g. a study of the existing 'research' in the domain - if relevant and available). But, if we examine most of the 'research' that happens within organizations (for diagnosis and decision making - to solve the immediate problems in particular organization contexts), the pressures of time and resources might dilute the efforts to expand the 'understanding and data set'. Again, it is possible that the 'system' might have changed (in subtle but significant ways - without us noticing it) from what it was at the time we studied it/derived inferences on system behavior. Considering the nature and pace of change in many of the human systems that we are talking about, this could pose a big challenge for making 'valid actionable inferences' available to guide our decision making. Keeping all this in mind, can we always expect to do better than what my three-year-old had managed to do?
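The railway-gate example can be turned into a tiny simulation. This is a toy sketch with made-up probabilities: a hidden 'schedule' variable causes both the gate closing and the train arriving, and an observer who sees only the gate and the train finds a perfect association in either direction - equally consistent with 'A causes B', 'B causes A' or 'C causes both A and B':

```python
import random

random.seed(42)  # reproducible toy data

observations = []
for _ in range(1000):
    train_scheduled = random.random() < 0.3  # hidden common cause 'C'
    gate_closed = train_scheduled            # C causes A (gate closes)
    train_arrives = train_scheduled          # C causes B (train comes)
    observations.append((gate_closed, train_arrives))

# P(train arrives | gate closed) and P(gate closed | train arrives)
# are both 1.0 - the association is perfect in both directions, so the
# observations alone cannot tell us the true causal structure.
p_train_given_gate = sum(t for g, t in observations if g) / sum(g for g, t in observations)
p_gate_given_train = sum(g for g, t in observations if t) / sum(t for g, t in observations)
print(p_train_given_gate, p_gate_given_train)  # 1.0 1.0
```

My son, in effect, had access only to `observations` - and, without knowledge of the larger railway system, his conclusion was as defensible as any other.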
Note: I am not saying that useful behavioral research can't be conducted in organizations. My point is just that it requires a convergence of 'realistic expectations', 'will' and 'resources' - which, unfortunately, is not very common in most 'real world' organization contexts. If the 'research problem' can be defined narrowly, I would not even rule out the possibility of 'experiments' (though 'experiments' might not be a 'politically correct' term in organization contexts; 'pilot studies' might be more appropriate). If such experiments can be conducted in the field of medicine (where - literally - 'life and death' issues are involved), why can't we try them in business organizations (with proper precautions)? Of course, problems like the ones that I have mentioned above (e.g. too many variables, difficulty in conducting 'controlled experiments', insufficient understanding of the system, biases in selection of data, assumptions about homogeneity and stability of the population/system etc.) still apply. But we might still get some useful information and/or insights.
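For what it is worth, the basic arithmetic of such a 'pilot study' can be sketched in a few lines. All the scores below are made-up numbers for illustration, and (as the note above stresses) a raw difference like this says nothing about confounders, selection bias or a system that has changed under our feet:

```python
from statistics import mean

# Hypothetical engagement scores from a pilot group (which received
# some intervention) and a comparison group (which did not).
pilot_scores = [72, 68, 75, 80, 77, 74]
comparison_scores = [70, 66, 69, 73, 71, 68]

difference = mean(pilot_scores) - mean(comparison_scores)
print(f"Observed difference: {difference:.1f} points")
```

Before acting on such a number one would, at a minimum, want a proper significance test and a hard look at how the two groups were selected.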
Any comments/thoughts/ideas?
See somewhat related posts here, here and here.
Saturday, August 4, 2007
Paradox of 'hiring good people and letting them decide'
How do we build a high performance organization? There are many 'answers' to this question. There have to be many answers (or at least 'attempted answers'), because this is the core issue in 'management'. Hence, most of the management literature should be dealing with some aspect of this question ('quest'!) in some way. So we have many approaches/answers. There is one particular approach that I find particularly interesting. It is something like this: "Hire good people and empower them to decide what is to be done and how it is to be done". The basic idea here is that in a complex and rapidly changing environment, the traditional approach of specifying (to each employee) what exactly has to be done is unlikely to work. So it is better to hire good people and let them figure out what needs to be done.
I am not saying that this approach is 'wrong'. My point is that there is a paradox here. In order to hire 'good' people the organization has to use a definition of 'good' (a 'working definition' of what 'good' means in their context - so that it can be used in the hiring process/as the selection criteria). After all, one can't do hiring without some sort of criteria (implicit or explicit). This leads to an interesting situation. This definition of 'good' (implicit or explicit) is colored by the current thinking in the organization. To put it another way, the criteria for a good hire get influenced by the organization's (often implicit) understanding of what is to be done, how it is to be done and hence what sort of a person can do it. So the existing limitations (and prescriptions of what is to be done/how it is to be done) get built into the hiring criteria, at least to some extent.
Let us look at the most common example of this situation. Organization 'A' is in trouble. The organization does not have a clear understanding of what is to be done to get out of this situation. So it decides to hire a 'good' CEO and let him/her figure out what needs to be done. However, when the organization chooses a 'good CEO', that choice is colored by the explicit/implicit definition of 'a good CEO', which in turn is limited by the current thinking/consciousness in the organization. This can be addressed to some extent by looking at 'best practices' (what has worked in CEO selection elsewhere in similar situations) and by using external advisers. But this might not always work, as the uniqueness of that particular organization context might get missed out, and also because the external advice/best practice information often goes through one level of processing within the organization (when decision making is done by existing people), which in turn brings in the limitations of the current processing/thinking in the organization.
Hence the approach of 'hiring good people and letting them figure out what needs to be done' might not be as simple as it appears to be. Actually, it cannot be simple. Otherwise it would have been very easy to build and sustain high performing organizations.
Any comments?
See a related link here