Listing to Starboard
“I’m not going to rearrange the furniture on the deck of the Titanic.” – Rogers Morton
Okay. I admit it. I’m conflicted.
Conflicted over the value that company ranking lists provide in the field of corporate responsibility, citizenship, and sustainability (collectively, let’s say CR) – value to external stakeholders, consumers, and the companies themselves. The list of company CR ranking lists keeps growing, with some organizations developing multiple lists. Over the past couple of years, I’ve said that the only growth industry in this struggling economy is CR conference management, since the number of conferences keeps growing and a new one seems to occur every week. However, I should amend this and add another growth industry – the development and publishing of company CR ranking lists.
Here are some of my pros and cons to these CR ranking lists:
First, an advantage: Transparency. These company ranking lists indicate which companies perform well against the methodologies used to generate the lists. They provide external stakeholders with information, context, and (hopefully) the rationale for how they rank companies.
A second advantage: They certainly encourage companies to do better in managing their issues and disclosing information – in essence, to list their performance.
A third advantage: Validation and pride for the companies that make these lists. I admit that I certainly like it when my employer makes it on a list.
Now, a disadvantage: Transparency, again. One has to question how well these lists actually provide transparency, for the following reasons:
(1) They all have different methodologies and weight issues differently. How else can you explain companies performing well on some rankings, worse on others, and not appearing on others at all? For example, a food industry company places in the 300s on one list, in the top 100 on another, and not at all on a third. It is certainly true that some companies show up on a multitude of lists, but this is sometimes a result of the next point.
(2) Some ranking lists reward companies based upon the extent to which they publicly provide the information used in the ranking methodology, not necessarily upon their performance. If you disclose certain data, you get credit; if not, no credit – no matter whether the issue is material to the company, and no matter whether the company is really managing it. There’s a saying in this field: what gets measured gets managed. It’s a broad statement, and not completely true. It’s the material issues that get managed effectively, not necessarily all the measured ones.
(3) The information provided is sometimes wrong. This is not to say it is the ranking’s fault; it can be due to inaccurate, incomplete or suspect information from the companies themselves. For instance, I know of a company that placed very well on an environmental impact section of a ranking because the ranking agency underestimated the company’s greenhouse gas emissions by approximately 45% (or almost 7 million metric tons), since the company had not disclosed global emissions at the time. And yet, on another contemporary list, the same company also received credit for supplying great data.
A second disadvantage is the growing number of CR ranking lists.
(1) There are quite a few of them. On April 1, Ethisphere ran an April Fool’s story about two new ranking lists it was undertaking: the “least ethical” companies and the “kinda ethical” companies ranking lists. As they understood, the best April Fool’s Day jokes have kernels of truth and plausibility about them. In this case, with the proliferation of ranking lists, why not more, including a “kinda ethical” ranking? I have to admit it was a clever joke.
(2) Survey fatigue sets in. I recall listening in on a conference call a couple of years ago when a sustainability professional at another company said they could not focus upon sustainability because they were too busy answering surveys and questions about what they were doing in sustainability. Ironic.
And, a risk – one that I discussed last fall in Chicago during a question-and-answer session at, yes, a CR conference: managing your CR program primarily for inclusion on these ranking lists, rather than managing the issues that are material to your company. I referred to this as a pitfall in managing CR. It can lead to an imbalance in what you are carrying – the valuable cargo is scant, and the ballast is prodigious but not appropriately stowed. Listing can result, either to port or starboard. The voyage becomes choppy, the route is suspect, and the vessel’s integrity is in question when squalls appear.
My company works hard to avoid this pitfall in managing our CR program. We focus upon the issues we can manage, modify or influence, rather than simply putting out numerous data points without any management of the issues they entail. In other words, we are working to manage our CR program responsibly for the long-term, with work still in progress.
Like I said, I’m conflicted. The advantages of CR ranking lists exist, but so do the challenges and disadvantages. If companies are simply measuring their issues in order to rank well, highlighting this in comparison to their competitors, and putting out never-ending press releases about it, they’re merely arranging deck chairs for appearance’s sake, while the voyage isn’t what it should be and the destination may not be where their stakeholders intended to go.