Tools for Developing Collective Thinking on Your Team
- By Bryan Lapidus, FPAC
- Published: 1/31/2022
All too often, in a desire to get along or build consensus, teams get mired in groupthink. This is dangerous because it can lead to poor decision-making based on a very narrow perspective. When we think collectively — drawing on what is referred to as cognitive diversity — we use the creativity, knowledge, and experience of everyone around us to tackle an issue. By combining information from multiple points of view, we gain a greater understanding of the issue and a more sophisticated approach to solving it.
Managing collective thinking
Warren Hatch, CEO of Good Judgment, spent some time talking about cognitive diversity in his AFP webinar, “5 Key Steps to Superforecasting.” When asked how to spur collective thinking in the workplace, he offered some advice, starting with creating the right environment.
“One thing you can do is to have a structure, such as the one the RAND Corporation’s Delphi method provides, where everybody’s voice counts the same,” said Hatch. “You don’t want a room where it’s just the loudest voices, because the correlation between extroversion (being loud) and forecast accuracy is zero. You’re not getting a better forecast by listening to the loudest people; you are instead losing information from the quieter people. You want to hear from everybody.
“Having that cognitive diversity on a level playing field pays huge dividends. When a voice comes from operations or from risk management, we don’t know who it is, and we’re not getting anchored on who they are. Instead, we’re focused on the quality of the forecast, the quality of their thinking.”
Hatch said that one way to level the playing field is to have an anonymized platform through which everyone exchanges views, essentially crowdsourcing. For example, say you put this question to the group: What three risks should our department be most concerned about in 2022? Everyone makes a nomination — anonymously — along with a brief rationale. The ideas are circulated so everyone sees them, and then the group votes.
Using a system like this, you get the full range of the group’s thinking on what the risks are, and you’ve heard from all voices. The big difference is that the risks are now weighted by what the group really thinks is important, removing personalities and the impact of the HIPPO (the highest-paid person’s opinion).
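To make the mechanics concrete, here is a minimal sketch of such an anonymized nomination-and-voting round in Python. The structure and names (Nomination, rank_risks) are illustrative assumptions, not part of any specific platform.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Nomination:
    risk: str       # the nominated risk
    rationale: str  # brief reasoning, circulated without the author's name

def rank_risks(nominations: list[Nomination], votes: list[str]) -> list[tuple[str, int]]:
    """Tally anonymous votes and return risks ordered by group support."""
    tally = Counter(votes)
    for nom in nominations:
        tally.setdefault(nom.risk, 0)  # unvoted risks still appear
    return tally.most_common()

if __name__ == "__main__":
    # Step 1: everyone nominates a risk anonymously, with a brief rationale.
    nominations = [
        Nomination("FX volatility", "Large unhedged currency exposure next year"),
        Nomination("Key-person turnover", "Two senior analysts near retirement"),
        Nomination("Supply chain disruption", "Single-source supplier for a core input"),
    ]
    # Step 2: nominations are circulated; everyone votes, still anonymously.
    votes = ["FX volatility", "Supply chain disruption", "FX volatility",
             "Key-person turnover", "FX volatility"]
    # Step 3: the ranking reflects what the group thinks, not who said it.
    for risk, count in rank_risks(nominations, votes):
        print(f"{count} vote(s): {risk}")
```

Because the tally never records who nominated or voted, the final ranking reflects the group’s judgment rather than its loudest or highest-paid voices.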
Using question storming
To explain the difference between your forecast and the actual, Hatch advised borrowing a tool from scenario analysis. First, come up with a list of plausible key scenarios. Then come up with small, very specific questions that will collectively tell you something useful about your scenario (i.e., question storming). These are things you can measure, and if they point the same way, a conclusion can be drawn. For example, if you were trying to figure out whether the economy was about to overheat, you could look at participation rates, minimum-wage trends, and supply chain tautness, among other things. Your goal is to put together a cluster of questions and agree in advance that if they resolve a certain way, then your forecast is correct.
“Having a cluster of questions to tackle more complex topics is one way to deal with a fuzzier world with precise probability estimates, and connect the two to be more relevant while also being rigorous,” said Hatch.
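As a rough illustration of that idea, the sketch below encodes a question cluster in Python. The indicators, thresholds, and readings are invented for the example; they stand in for whatever measurable questions a team agrees on in advance.

```python
# Each entry: (question, threshold agreed in advance, current reading,
# True if the scenario implies the reading is ABOVE the threshold).
indicators = [
    ("Labor force participation rate (%)", 62.5, 63.1, True),
    ("States raising the minimum wage this year", 20, 26, True),
    ("Supplier delivery-times index", 55.0, 61.2, True),
]

def scenario_support(indicators) -> float:
    """Fraction of pre-agreed questions resolving in the scenario's direction."""
    hits = sum(
        1 for _, threshold, reading, above in indicators
        if (reading > threshold) == above
    )
    return hits / len(indicators)

print(f"{scenario_support(indicators):.0%} of the cluster points toward overheating")
```

Agreeing on the thresholds before looking at the data is what keeps the exercise rigorous rather than a post-hoc rationalization.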
How to avoid jumping to conclusions
According to Daniel Kahneman, the Israeli psychologist and economist known for his work on the psychology of judgment and decision-making and on behavioral economics, there are two systems of thinking: System 1 and System 2.
System 1 encompasses our instinctive reactions. “If you hear a rustling in the bush, it might be a lion, and you don’t want to get out a piece of paper and a pencil and work out the odds that it’s actually a lion or something else; you just run. That’s System 1: You run, you rely on your built-in programming,” said Hatch.
Another way to think of it: if you’re driving down the road and your companion asks you, “What is three times two?” that’s System 1; you get it right away. But if they asked, “What’s the square root of 521?” you would have to slow down a little. That’s System 2. For more complex scenarios, it’s System 2 you want to rely on.
“The other thing you can do is to check your confidence,” said Hatch, “because that's what it is about: How confident are you in what you know?”
One way to do this is by taking a calibration quiz (Good Judgment offers these on its site; registration outside of AFP is required, then navigate to “Quizzes”). You’ll be asked obscure questions, such as the distance to Neptune. For each of the 10 questions, you give a lower number and a higher number, and the goal is to have nine of the true answers fall inside your ranges and one fall outside. When that happens, your confidence calibration is 90% for those 10 questions. The challenge is that people think they know more than they do: instead of getting nine out of 10 inside their ranges, as perfect calibration would imply, they might get only five.
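For illustration, here is a minimal Python sketch of how such a quiz could be scored, assuming each answer is a low/high range compared against the true value. The questions and numbers are made up.

```python
def calibration_score(answers):
    """answers: list of (low_estimate, high_estimate, true_value) tuples."""
    inside = sum(1 for low, high, truth in answers if low <= truth <= high)
    return inside / len(answers)

quiz = [
    (4.0e9, 4.7e9, 4.5e9),  # distance to Neptune in km (from the Sun, ~4.5 billion)
    (5000, 9000, 6650),     # length of the Nile in km
    # ... eight more (low, high, true_value) rows in a full 10-question quiz
]
print(f"Calibration: {calibration_score(quiz):.0%}")  # aim for 90% with 90% ranges
```

A score well below 90% on 90%-confidence ranges is the overconfidence Hatch describes.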
“If you go through some of those exercises, it can be really painful, and that can help you recognize situations where you should be more cognitively reflective. You'll pause because now you've gone through the experience and say, well, my 90% confidence, maybe it's more like 50, and I should go and take a closer look.
“The majority of people tend to just say, ‘Oh, I know this’ and move on,” said Hatch. “People who go through that kind of calibration quiz will slow down — and get more of them right.”
Want to know more? AFP Members can watch Warren Hatch’s entire presentation in AFP Learn.