Mastering Leadership with AI: The Future of Executive Coaching
When I first signed up to write this article, I was excited to explore generative AI and its possibilities in executive coaching. However, my feelings changed after completing the CDI's AI for Career Practitioners course. Now, I have more foundational knowledge but far less enthusiasm for exploring and playing around with different AI platforms.
Instead, I have become more interested in philosophical questions.
What is intelligence, and what do we mean by it?
Should we even be calling it Artificial Intelligence?
How might AI impact our ability as humans to think and reason?
What questions/issues should remain firmly within the human domain?
And if, as it appears, both the term and the technology itself are here to stay, how can leaders use AI to enhance their impact?
What are the risks and dangers for leaders?
What is intelligence?
There is no overarching definition, but we can usually recognise intelligence in operation; we know intelligent people even if we can't always define precisely what makes them so.
From the Merriam-Webster online dictionary:
The ability to learn or understand or to deal with new or trying situations: reason; also, the skilled use of reason
The ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)
With the definition given above, I am curious about the words "learn" and "understand." Is AI genuinely able to do this? Can it understand? Will it ever be able to recognise its own lack of understanding?
The notion of artificial intelligence has, somehow, ceased making sense to me. There is a part of me that thinks intelligence is either there or it is not. Regardless, the term is here to stay, whether I like it or not.
In her article Theories of Intelligence in Psychology, Kendra Cherry, MSEd, suggests that generally, intelligence is recognised as the ability to:
Learn from experience: The acquisition, retention, and use of knowledge is an important component of intelligence.
Recognise problems: To use knowledge, people first must identify the problems it might address.
Solve problems: People must then use what they have learned to come up with solutions to problems.
I also wonder about the ability to make intuitive leaps through flashes of insight and juxtaposition.
So, what is Artificial Intelligence?
I found a few definitions online that helped me to make some sense of AI and what it means:
1) Artificial intelligence is the science of making machines that can think like humans. It can do things that are considered “smart.”
2) Artificial intelligence, or AI, is a technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.
3) Artificial intelligence is a machine’s ability to perform some cognitive functions we usually associate with human minds.
The words that jump out at me from the above definitions are associate, like and simulate, the last of which means to imitate, replicate or duplicate. I understand that, currently, AI appears to think like humans and assumes a human-like mask, but that is all. It may seem obvious, but I wonder if there is a danger that we forget this.
Risks and dangers of AI
I can’t help but reflect on the phrase: Garbage in, Garbage out. Artificial intelligence will only be as good as the information and the data it is fed. It will only be as reliable and trustworthy as the data used to train the algorithm. If that data is flawed, incorrect, biased, simplistic, or untrue, then it seems highly likely that the AI outputs will be too, and this could well be the source of AI ‘hallucinations’, the term used to describe outputs that are untrue, nonsensical, misleading, or just plain wrong.
It also reminds me of the Hans Christian Andersen folktale The Emperor's New Clothes. It is a tale about two clever and morally ambivalent tailors who offer to create a magnificent set of clothes to adorn a vain emperor. They promise to use the finest threads, the costliest silks, and the most up-to-date styles, but they produce nothing.
On the day of the unveiling, everyone goes along with the façade, admiring the new suit, the cut, and the materials. No one wants to be the first to tell the truth — until a child speaks up and points to what has been evident from the start. There is nothing there.
Is there a danger that when the hype and excitement die down, we will realise there was nothing much there? Or not quite as much as we hoped?
Is there also a danger that we offload our responsibility as human leaders onto AI? It wasn't me, Miss — AI made me do it. Might we become passive recipients of AI insights, thoughts, and extrapolations rather than active agents? How easily might we be seduced by plausible-sounding reports and insights?
We must educate people to critically assess, review, question, and interrogate all information presented to them via AI. Otherwise, we risk becoming overly dependent on it simply because it can produce something that seems reasoned and somehow more objective, even when it is not entirely accurate.
In her talk, The Thoughts the Civilised Keep, at the Royal Philosophical Society, Glasgow, 2022, Professor Shannon Vallor asserts that human minds must be active and competent movers within the space of reasons and moral reasons. We cannot be inert receivers of knowledge. We must be kept within the loop and have the time and space we need as humans to debate, discuss and challenge.
In the same talk, she invites the audience to consider a list of questions that most of us would be happy for AI to address:
'What's the weather like outside?'
'Where can I get a good steak?'
'How do you spell ubiquitous?'
But then she presents a second list, and this one gives me pause:
'What is a fair outcome of this decision?'
'What does this child need?'
'What does beauty look like?'
'Does this person deserve their freedom?'
What do you think — what role should AI play in making these kinds of decisions? As a leader, how comfortable are you with AI addressing these issues?
And if you want to access Professor Vallor's talk on YouTube, The Thoughts the Civilised Keep, you can do so here:
So, if AI is here to stay — how can leaders use it to enhance their impact? How might it benefit them and the people around them?
It might help leaders to frame and test the questions that need to be asked, and to identify where they need to drill down and examine data at a more granular level. As a leader, I might want to look at what is happening within my organisation: rates of pay, who is getting promoted, attrition rates, rates of sickness, and who we are recruiting. AI might help me make sense of the data I already have and pull together a narrative that reveals the real story behind the numbers.
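To make that a little more concrete, here is a minimal, purely illustrative sketch of what "making sense of the data I already have" might look like if a leader (or their analyst) paired a spreadsheet with a chatbot. The file name, column names and model name are all invented, and it assumes the pandas and openai Python packages with an API key set up; it is a sketch of the idea, not a recommended implementation.

```python
import pandas as pd
from openai import OpenAI

# Invented file and column names; substitute whatever your HR system actually exports.
df = pd.read_csv("people_data.csv")  # columns: department, pay, promoted, left_this_year

# Summarise the raw figures before involving the chatbot at all.
summary = df.groupby("department").agg(
    headcount=("pay", "size"),
    median_pay=("pay", "median"),
    promotion_rate=("promoted", "mean"),
    attrition_rate=("left_this_year", "mean"),
).round(2)

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "You are helping a leader make sense of their own workforce data. "
            "Describe the main patterns in pay, promotion and attrition below, "
            "and suggest three areas I should drill into further.\n\n"
            + summary.to_string()
        ),
    }],
)
print(response.choices[0].message.content)
```

Even then, whatever narrative comes back is a starting point for questions, not an answer.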
I could use AI to satisfy my curiosity as a leader and explore the landscape I find myself in. If I take the time to develop and frame the right questions, AI could help me explore more broadly and deeply. AI could also help me test questions as I converse with ChatGPT or other generative chatbots.
AI could help me identify the patterns and trends most relevant to my sector, putting me in a stronger position to navigate it. It could also provide me with relevant stories that help me understand what is happening and how I might move forward.
If I remember its limitations, AI could provide another sounding board alongside my coach. It cannot offer wisdom, judgement or perspective, but it can supply supporting evidence, additional ideas, models and plausible approaches.
I could consider creating an AI version of myself and reflect on what this could most usefully do. Creating a Janice bot would involve substantial time and effort, so I'd want to be clear about the real benefits. Still, some organisations already offer this service to coaches.
AI might help me manage fluid situations, as I feed the chatbot new information while a situation evolves. It could also help me war-game different scenarios as things progress and as I ask a series of what-if questions.
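One practical point behind that war-gaming idea: a chatbot only "remembers" a scenario if the whole conversation is sent back to it each time. The short sketch below, again with an invented model name and assuming the openai Python package, shows one way a running what-if conversation might be held together; the example questions are made up.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# The running `messages` list is the conversation's memory.
messages = [{
    "role": "system",
    "content": ("Act as a sparring partner for a leader exploring scenarios. "
                "Challenge my assumptions and say clearly what you cannot know."),
}]

def ask(update_or_question: str) -> str:
    """Add the latest information or what-if question and return the model's reply."""
    messages.append({"role": "user", "content": update_or_question})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep the whole thread as context
    return answer

print(ask("Our biggest client wants to renegotiate their contract. What are my options?"))
print(ask("What if our lead negotiator is off sick for the next month?"))
```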
But then, how much time do you, as a leader, want to invest in your relationship with AI? Do you view it as an opportunity, a threat, or both? What potential do you see?
Like anything, you will probably need to invest time and energy in building your relationship with AI — you will only get out what you are prepared to put in.
It's fair to say I remain largely sceptical about using generative AI in coaching, but I also recognise that I still have much to learn. I struggle with the potential lack of depth and nuance and am reluctant to invest chunks of my time conversing with a machine.
But to counteract my rather curmudgeonly attitude towards generative AI, I am including a link to Danny Mirza's webinar, "Generative AI for Careers Services—Friend or Foe?" His enthusiasm, energy, positivity, and knowledge of the subject are a great counterpoint to my own scepticism. He is also incredibly generous in sharing his know-how about creating prompts that are more likely to produce sensible results.
It seems my relationship with generative AI still needs a little more work 😉.
Until next time