I chose a very different career path from my Dad, Bob. He is an engineer who spent his career in the nuclear power industry. He worked for a while at the United Nations in Vienna, travelling around the world evaluating the practices at nuclear power plants. After reading a paper I co-wrote last fall, my dad mentioned that he thought there was a lot of overlap between his work at the UN and what I did as an evaluator in the non-profit sector, and I think he is right. As different as our professional interests are, his wisdom about changing the way people relate to evidence translates really well to my work. In honour of my Dad and all dads on Father’s Day, here is a lightly edited version of the conversation we had over email.
Andrew: What was the purpose of the office where you worked in Vienna?
Bob: The whole thing started at the UN as a response to Chernobyl and Three Mile Island. There was a strong recognition that the continued viability of the nuclear industry was very vulnerable to the performance of the weakest brethren. This created an environment where the industry was willing to share information and ideas, in some cases with competitors, in order to achieve a high standard of operation, rather than just meeting the regulatory requirements. That led to the creation of an office that would send teams out to nuclear power stations around the world to do reviews and make recommendations.
Andrew: So a crisis created a space where people were more willing to collaborate and to learn. I feel like we are seeing that happen now in my work. How did you decide on the questions that would guide you when evaluating a plant?
Bob: We would write a standard which identified the level of excellence that the best plants achieved. Each year we chose an area, such as operations, maintenance, or radiation protection, and we brought in people from the plants we had evaluated who could help us update the relevant standards. This input was used to update a 250-page manual which identified, in detail, what excellence was in each of the assessed areas.
Andrew: So, the standards you used were always evolving through feedback and discussion, even though they were very technical and specific. How did you go about doing the actual evaluation visits?
Bob: When we went to evaluate a plant, the evaluation team members were ideally highly respected experts in their field. They came from a range of UN member states. We trained the team members in evaluation techniques and then they went into the plant and observed. They debriefed their local plant counterpart each day to ensure that their facts were accurate and complete. The team leader would review their findings each day and ensure that they were well researched and based on achieving the standard, rather than suggesting that the plant being evaluated should copy the way things were done at the expert’s home plant. It was also required that the team member gather at least five fact-based examples of the problem before they debriefed their plant counterpart on a specific issue. The team leader debriefed the plant manager each day on the most significant issues, and encouraged the plant to challenge their validity if needed.
Andrew: I love the idea that evaluation team members were matched to various managers from the plant who had similar expertise, in order to reflect together on what was being learned through the evaluation. How did you translate all that into a final report?
Bob: The team would write up recommendations and suggested solutions. We tried to prioritize them so that the local staff were not overwhelmed. Often, we focused on big-picture management issues that, if resolved, would fix other issues as well. The recommendations had to be achievable, measurable and written in a way that allowed the plant to decide how they would best be achieved, given their organizational and cultural environment.
Andrew: If the team couldn’t agree when they came together to write up recommendations, what did you do?
Bob: If the team could not agree that the issue was worthy of the plant’s attention then it would either be dropped or perhaps reduced from a recommendation to a suggestion. If the fact gathering was done effectively then the discussion would often focus on the wording. I made it clear to the team that what I wanted to see first were facts. They would lead us to the correct finding. At the first plant I went to as a team leader, I was challenged about the validity of a finding during a presentation to the plant staff. I just rolled out the facts we had gathered, which were all agreed to by the plant personnel, and asked the questioner what conclusion he thought was valid based on the facts. He just sat down.
Andrew: It seems like the standards were set up in such a way that they focused on achieving outcomes and not on standardizing all procedures. The evaluation team members were able to reach thoughtful conclusions taking local context into account. How did you create a set of standards that was shared, and widely understood, without letting the system become overly rigid? I feel like we struggle to find that balance sometimes in the nonprofit sector.
Bob: We emphasized to the plant that what we had produced was a three-week evaluation which could help improve the plant. Our report was not a blueprint of everything that was needed to make the plant excellent. In some cases, ‘good practices’ were observed which were better than the current definition of excellence. These would be part of the report and might ultimately result in an upgrade of the standard. Approximately three years later a team would return to the plant to assess the progress which had been made. We typically found that 90% or more of the issues were either resolved or on a path to resolution.
Andrew: I like the three-year follow up idea. I also like the idea that you sometimes changed the global standards based on what you learned from one project. I guess you’re right – the process of helping an organization learn really isn’t that different whether you are working with a power plant or a nonprofit.