Article information

In the printed version of the journal, the article appears on pp. 5-9.

How to cite

Olcott, D. Jr. (2024). 2025: A Race Odyssey. Leadership in the era of AI. e-mentor, 5(107), 5-9.

2025: A Race Odyssey. Leadership in the era of AI

Don Olcott, Jr.

Reviews and columns


The 20th century could be seen as a race between two versions of man-made hell – the jackbooted state totalitarianism of Orwell’s 1984, and the hedonistic ersatz paradise of Huxley’s Brave new world, where absolutely everything is a consumer good and human beings are engineered to be happy.

Margaret Atwood, 2007

Introduction

It is indeed no secret that many social and literary critics have noted that 1984 (Orwell, 1949) has never truly arrived; conversely, Brave New World (Huxley, 1932) is seen by many as ubiquitous, everywhere around us and, in a sense, a source of anxiety and angst for the human condition. The good news is that science and social fiction in all their guises are usually an entertaining mix, and often a synthesis, of truth and fiction, prophecy and satire, the real and the imaginary. Perhaps it would be prudent when talking about leadership in the era of AI to refrain from adopting either-or, black-and-white positions, as even between the dystopian extremes of Orwell and Huxley there is a continuum of options, choices and issues.

The acceleration of AI development over the past two years has been a catalyst for exploring whether this major shift will unlock the keys to the kingdom of leadership, and indeed, it does raise many relevant questions. Does AI make average leaders better leaders? Can AI contribute to more effective leadership and organisational performance? Does AI have the capacity to harness professional development and staff training by drawing upon immense meta 'training' data that refine the algorithms of AI? Does AI dehumanise leadership and its human connections? Does AI necessarily guarantee better decision-making? And lastly, perhaps the most challenging question: how can AI even begin to approximate scenarios and analyses where human experience, context and judgment must decipher the optimum decision matrix at the right time, for the right reasons, and for the right outcomes? Leadership is not for the faint-hearted. Many experts argue that AI is simply a tool, nothing more and nothing less, which is a safe answer for sure. Conversely, some argue that AI can be an indispensable leadership resource that will literally change the dynamics of behaviour and performance. If only it were that easy. Inevitably, there is truth in both these perspectives, and yet I think it is more prudent to conceptualise AI along a continuum – some forms of AI will align well with leadership roles; other forms perhaps not so much. I will try to consider this continuum in my narrative and give the reader a snapshot of opportunities and challenges pertinent to leadership in the era of AI.

AI brings opportunities and challenges for contemporary organisations, while the rapid and exponential expansion of AI has been the catalyst for how we think about the role of leadership. This is true in business, government, social services, medicine, trade and transport, the military, K-12 schools and universities across the globe, with the whole world mesmerised by the AI hype and showing ubiquitous interest. Moreover, the general zeitgeist is cautiously optimistic about AI's potential and eventual results, but there are still many concerns amongst the populace about AI. And yet, amidst this mass blitz of AI, there seems to be a palpable expectation that it is the game changer of all game changers.

Shân Wareing (2023b) reminds us that AI is here to stay in universities. Leaders cannot ignore AI; they should avoid creating fear and panic amongst staff and stakeholders, and should not blame staff for not being ready for the AI race. One should ask what this really means. Universities have historically tended to be quite conservative when it comes to innovation. Fears amongst the populace inside and outside the university are common when there are high levels of uncertainty and ambiguity associated with digital technocracy.

Indeed, in some contexts, AI is controversial and divisive, as events in the Gaza and Ukrainian wars have recently shown. Many university leaders remain silent in the shadows on controversial and divisive social and political issues. Although AI is certainly not a synonym for genocide, it does raise fears among many across the university and beyond. Indeed, the good, the bad and the ugly of AI will emerge as this Race Odyssey picks up speed. Where will it lead, and with what results?

Information is power, power is money, and money is control when it comes to the supertech players. A Race Odyssey is not mere semantics: the jostling for position is underway to see who will lead AI in contemporary society, who will control the massive data sets for AI 'training,' what economic models will drive AI in the marketplace, and what the impact will be on geopolitical power alignments, democratic ideals, equality, equity and other social justice dynamics. There are many legitimate challenges and many potential opportunities, and many of these are not black and white, existing instead in the 'grey zone.' For now, and perhaps for most of us, the verdict on AI is still out. Of course, the axiom of A Race Odyssey may be viewed as a euphemism for 'danger straight ahead,' reminding us all that the race is a dangerous one if we do not slow down long enough to navigate the multitude of issues associated with AI. It is likely that we could, indeed, arrive at a crossroads of reckoning in the future, unable to save ourselves even from ourselves.

A Race Odyssey on the Horizon

David De Cremer (2020) suggests that the usual suspects – Amazon, Facebook, Apple, Microsoft, Google and so forth – control most of the AI data (Knowledge at Wharton Staff, 2020). How do we make AI inclusive and equitable for the masses? Whoever controls the mass data sets controls the 'training': the descriptive algorithms for data, how the algorithms are written and for whom, the results they are designed to reach, the costs, and ultimately the profits of AI in a global consumer digital market.

The cold hard truth is that AI is a serious race for power, profit and position in the global digital marketplace in 2025. Paradoxically, most of the players in this race aren't clear about what their competitive advantage or market differentiator for AI is, but they are intent on being the last machine standing when the algorithm dust settles. It is too early in the game to know who will win. Will it be Big Brother, Winston Smith, John the Savage, corporate capitalists [Trump AI Enterprises] and populists, the human race – or will we be unable to save ourselves as a new Hal 9000++ decides for us?

Perhaps the brave new world was only a prophetic euphemism for our eventual extinction. And who said leadership does not matter? This is a drastic reminder that sometimes great leadership means tempering, even preventing, something from happening rather than creating something new to dazzle the masses. Now isn't that a novel use of Orwellian newspeak! Kirk to Enterprise – beam us up, Scotty!

AI – what are we doing here?

De Cremer also argues that the end-users of AI must always be humans. No kidding – if AI itself becomes the end user, then it is game over. Given AI's effectiveness at repetitive, routine tasks and at thinking systematically and consistently, perhaps AI is more suited to hard skills, whereas leadership preserves its soft-skill domains such as visioning, social engagement, partnership development and empathy. Indeed, this brings Moravec's Paradox to mind – what is easy for humans is difficult for AI, and what is difficult for humans seems rather easy for AI. If we use AI for the right things, for the right reasons, and for the right outcomes, this seems like a good start on the new yellow brick road to algorithm Oz.

Advocates argue that AI is a game changer for creating more dynamic and visionary leaders, with some commentators even arguing that the mass data sets that AI will manage will undoubtedly produce better decision making, improved judgment, and a new age of leadership. I guess these advocates need reminding that for the past thirty years we have had more information and knowledge at our fingertips than at any time in history, and the quality of leadership ‘ain’t got any better in general.’

What universities and leaders do is sometimes difficult to make sense of. What has become clear is that AI brings opportunities and challenges for governments, organisations, staff and, perhaps most importantly, the leaders who drive the pulse of decision-making amidst this digital transformation. We will explore some of the vantage points for university leaders.

The purpose of this essay is to present a practical commentary on how AI can potentially support more effective leadership, as well as discussing some of the challenges associated with AI and leadership. As noted earlier, there are numerous grey area issues about AI that cannot be completely separated from the role of leadership, however it is beyond the scope of this brief to cover the entire playing field. This essay is about leadership in the era of AI.

Finally, Professor Real, fully recharged with an extra dose of soma, will end this magnanimous exposé by telling aspiring and current leaders exactly what a SMART person would do about AI in their organisation. If the reader wants to skip to the end and discard the preface journey, then perhaps genuine leadership is not for them. Leadership, if nothing else, is life in the fast lane. Time to buckle up!

A general leadership resource: What does AI bring to the game?

Given the obvious focus of AI systems built around data, algorithms and analyses, the starting point must be to ask whether these systems can serve as resources for leaders. Shân Wareing (2023a) discusses key traits for all higher education leaders: passion, curiosity, battle-hardened confidence, team smarts, a simple mindset and fearlessness. The problem with such leadership platitudes is that one can read whatever one wishes into their meaning. A simple mindset for deciphering complex data analyses to make decisions on which organisational survival depends sounds daft; fearlessness is often a synonym for reckless courage and creates more problems than solutions; and there is a fine line between battle-hardened confidence and sporadic arrogance.

Purdy and Williams (2023) note that business leaders are simply stressed out by the volume of decisions, which has increased 10-fold in the last decade. Undoubtedly, poor decision-making may have an impact on the financial health and success of the organisation. The authors argue that AI can help with tracking, predicting, simulating and serving as co-pilots or sounding boards. They also argue that organisations need to keep AI expertise up to date, ask the right questions, and ensure the experience level of the user is balanced – practical and machine literacy. Poor decision-making is detrimental to organisations far beyond the loss of profits, leading to diminished confidence in leaders, poor morale, and sometimes even a mass exodus of employees. Leadership is a serious business, and consistently poor decision-making negates most of the other attributes we associate with leadership. AI can be most effective if domain-specific and focused. Real-life expertise matters, and the more we know about a given topic the better AI can reduce errors.

Can simulations be enhanced through AI systems? Indeed, a greater data and experiential pool to draw upon allows for better real-life scenarios, which suggests that these could provide valuable insights and baseline approaches to simulations and their use by leaders in organisations. Yet simulations, no matter how real and data-based, are still simulations. Just ask a fighter pilot, a Formula 1 driver, or a footballer taking the final penalty kick in a World Cup with no time left and a 1-1 draw. Simulations, no matter how good, have limitations, because they are created and refined by humans.

Alan Brown (2024) suggests that leaders retain control of visioning, remaining flexible and empathetic, and monitoring the emotional climate across the organisation. In other words, there are certain human roles that are emotive-based and beyond the capacity of AI to truly interpret and experience, regardless of the amount of data used for specific algorithms. Training AI systems with a mass data repository about known situations results in AI learning from its mistakes as the training data are refined – but is this intelligence? Did HAL learn from his mistakes aboard the Discovery, or did he learn how to better control Discovery and its crew?

AI advocates have also noted that one positive use of AI is in the training and development portfolios of staff: expanding staff knowledge, training employees, providing models for the ethical and responsible use of AI, and supporting data analyses.

Bramly (2024) highlights five key ways in which AI is impacting leadership development:

  1. Personal planning and training plans
  2. Data-driven insights
  3. Diversity and inclusion
  4. Virtual coaching and mentoring
  5. Continuous learning and adaptation.

Selected challenges for AI: You can’t always get what you want

Will AI become your colleague or a dictator? AI will be challenged by soft human-emotion skills – empathy, critical thinking and emotional intelligence. The truth, however, is that some human leaders lack any semblance of emotive and empathetic skills. In other instances, AI problems include potential bias, ethics violations, data-provenance concerns, and accuracy (Schaffner, 2024). Additional issues include FUD factors (fear, uncertainty and doubt) and data protection and security concerns.

At Oxford University, three researchers – Sandra Wachter, Chris Russell and Brent Mittelstadt (2024) – argue that LLMs produce responses that are plausible, helpful and confident but contain factual inaccuracies, misleading references and biased information. Deciphering this sounds much like a lawyer arguing that their client is telling the truth, just not all of it, and hence offering a distorted and one-sided version of reality.

Human experience, judgment and decision making

Purdy and Williams (2023) raise some pointed questions for businesses investing in such technology, and by extension, university leaders may be prudent to consider these in their decision-making processes. As a decision maker, when do you trust the machine over the human? What are the conditions for effective human-machine collaboration? How do existing human expertise and judgement enter the equation?

Emotive human attributes (empathy, visioning, social interaction, critical thinking, etc.) are poorly matched with AI. More importantly – and leaders should pay heed – you cannot teach experience, judgement and context, which are the core trifecta of good decision-making. Leaders, in all their human guises and disguises, make the calls no one else wants to make. AI cannot build trust, navigate social dynamics or foster future collaborations. These take innovative human skills grounded in sound judgment, experience and knowledge of the context of a given decision matrix.

No amount of data or machinery can produce the unique blend of experience, context, insight and judgement required to make consistently sound decisions. At the same time, we must not discount AI playing a role in supporting these unique human attributes. Perhaps a complementarity of leadership + AI is a viable resource for the future.

Is AI the new decision-making game changer for academic leaders? I previously wrote about the 005 axiom of decision-making and the observation that nearly all problems have 4-5 good solutions. The goal of finding a single, perfect decision is simply not realistic in a complex world, as leaders often need to synthesise these five options into one practical solution and strategy, which also creates an interesting challenge for AI. Can AI synthesise all these options, evaluate the decision variables independently and formulate an optimum, integrated solution from the five options? The answer would in all probability be yes, provided that the specific algorithms and training instructions were part of the mental schemata of the AI systems used, and sufficient data covering all five solution options were part of the repository.

Rasmus Hougaard, Jacqueline Carter and Rob Stembridge (2024) present an interesting vantage point on leadership, suggesting that AI is best suited for research and data analysis and repetitive simple tasks, whereas human leaders are better for individual and team development, as well as strategic thinking and decision making. This reminds us of a line in the film The Imitation Game, where Alan Turing states ‘only a machine can beat another machine.’

Hougaard, Carter and Stembridge raise a very important question based on their alignment of what AI can do and what humans can do. Is AI more aligned with management than leadership? Core qualities of human leadership are compassion, awareness and wisdom, and these seem more aligned with human qualities than with the rational, repetitive, linear type of tasks we tend to associate with AI. We should be cautious here, because language can become our nemesis if we are too arbitrary in how we define management and how we define leadership. Assigning management tasks to AI and leadership to people-oriented attributes tends to ignore the overlap between them.

Indeed, these interactive dimensions of AI, human attributes, management and leadership raise many issues beyond this paper. Moreover, we have seen this before in Hersey and Blanchard's (1977) situational leadership theory, where task orientation and people orientation are the two primary dimensions. However, this theory also requires an assessment of the maturity level of those the leader is interacting with in determining whether to be more task-oriented or more people-oriented. This was mentioned by Purdy and Williams earlier in this paper as the experience curve, an important factor in AI. Finally, and significantly, situational leadership does not mean the leader has a laundry list of solutions in their coat pocket that they can align to any potentiality at any given time.

Successful leadership, and consequently a thriving organisation, depends on leaders whose followers and employees know who the leader is as a person, what they value and believe, and how they generally approach leadership issues and the processes they use. Employees do not want leaders who change on every whim when solving problems and making decisions; in reality, they would probably rather have a machine make their decisions than a human who fails to display the essential emotions and attributes of being human. AI cannot, and should never, replace this leadership formula.

University leaders: An action agenda for AI

We have covered a range of issues and highlighted only select vantage points of AI and leadership in this paper. Now it’s time to decipher what we covered and offer some insights for university leaders to consider. Perhaps if we are lucky, we might move one step closer to the brave new world we have long awaited – well at least in the university.

  1. What is the strategic vision for your university/school/department/company and AI? Do you know what actual AI capacity is running in your organisation tonight? Inventory your full digital capacity, establish a task force for Discovery (the name of the spacecraft in 2001) to fully assess whether AI is the right choice for your institution, your staff, your institutional values and culture. Can AI provide staff training plans, professional development, and identify areas for improvement in individual portfolios?
  2. Engage with your staff, faculty and colleagues across the campus. The FUD factor – fear, uncertainty and doubt – may not be real for you or your leadership team, but it is to your followers. Open the dialogue on your campus. The most serious debacle of this entire AI movement is the assumption that AI is the best option available for humanity.
  3. Who are your stakeholders for AI? Duplication of the same things is probably one of the greatest wastes of resources ever perpetuated by the social organisation we fondly call the modern university. Is every university going to embrace this alone, just as they have most truly game-changing innovations since E = mc²? Can AI be a catalyst for collaboration? Yes, I think it must.
  4. When is AI too much AI? What is your ethical mandate for AI on your campus? The new normal of advocacy of AI and articles/best practices for AI seems to preface everything with the mantra that we will examine the ethical implications of AI and then go full-speed ahead and curse the icebergs. My best guesstimate is that in the last year, at least 80% of major scholarly journals have had a special issue around AI, with every one of them in small fine print stating ‘ethical implications of AI have been evaluated within reason.’
  5. What are the costs and benefits of AI for your campus? Costs come in many guises, and many of them are not financial. What are the costs of diminishing staff morale, and for the sake of efficiencies when do people start to feel undervalued and more expendable than machines? The pandemic, according to most data analyses, increased the digital divide and the gap between the haves and the have-nots. Is AI access, quality, equity and cost the same for all?

Summary

Indeed, the most promising strategy here is to create a balance between AI and our human capacity. If university leaders take the necessary time to plan, assess and pilot various AI options in concert with valued human attributes, then institutional leaders can find the right balance for the right reasons, with the right people, and with the right outcomes that serve society, students, the community and humanity. Decision-making can be enhanced, and the university can become a better place to live, work, explore and discover.

Transparency is mandatory on university campuses to engage students, staff, faculty, administrators, alumni and all external stakeholders in the issues and decisions pertaining to using AI in ways that leverage the university's capacity to serve all its stakeholders. The rhetoric about ethical considerations needs to be grounded in practical applications of AI, rather than crying wolf because others are crying wolf. AI will probably create some divisive and controversial issues that will have to be dealt with as they emerge, and silence in the shadows or running for cover are not options for a university's leadership, as they are neither leadership nor courage.

In the final analysis, leadership in the era of AI is not that different from leadership at any other time. University leaders get paid to lead and create a vibrant, innovative, responsive, and thriving organisation. Leaders have one key performance indicator and only one: To create a thriving, successful organisation. Leaders are not hired to be liked or to make friends. As Steve Jobs said, if you want to be liked, ‘sell ice cream.’ If you want to be a leader, then lead.

References

About the author

Don Olcott, Jr.

The author is President of HJ & Associates, a consultancy specialising in global open and distance learning based in Romania, and a Consultant Associate with Universal Learning Systems (ULS) in Barcelona, Spain. He holds the academic rank of Professor Extraordinarius of Leadership and ODL at the University of South Africa and is an adjunct instructor with Oldenburg University, Germany. Don is a Senior Fellow of the European Distance and E-Learning Network (EDEN). He is a former President and Chairman of the Board of the United States Distance Learning Association (USDLA) and served on the USDLA Board from 2001 to 2016. He was the recipient of the 2024 USDLA Leadership Award in higher education and the 2023 International Council for Open and Distance Education (ICDE) Prize of Excellence for Lifetime Contributions to the field, as well as the 2013 ICDE Individual Prize of Excellence for leadership and service to global ODL. Don was inducted into the International Adult and Continuing Education Hall of Fame in November 2024 in Florence, Italy.