Abstract
Homo sapiens have always wielded their intelligence to exercise dominion over nature by inventing a myriad of tools. Increasingly, however, this has come at a cost to the balance and boundaries of the living environment. And now, through our own creation and design, this human intelligence is being aggregated, externalised and made artificial. With AI, anyone can become average at virtually no cost: knowledge is abundant and easily accessible through the Internet, allowing any literate person to attain an average level of understanding. The true differentiation going forward will lie in human qualities such as curiosity, discipline and ambition.
With recent advances and applications in AI, it is no longer a question of “if” AI will change everything we do, but of “when” and “how”. In the workplace, corporate leaders must start to think about adoption, competitiveness and culture. In society, policymakers must consider AI’s impact on employment, education and ethics, and how to intervene in and regulate markets.
This paper examines what will shape leadership in this future AI world and considers challenges such as navigating the geopolitics of AI, honing the human comparative advantage and identifying the implications of AI adoption on jobs and free markets.
Honing the Human Advantage
At the individual level, there are two sets of attributes that a leader must possess and balance: hard skills on the one hand, and soft skills and qualities on the other. Clearly, going forward, AI will contribute significantly to the hard aspects, providing more informed analytical decision-making, driving operational and cost efficiency, and enhancing domain expertise. In short, the AI revolution will automate and commoditize the data-driven aspects of management while leaving the softer elements of leadership to humans [1].
Harmonizing Bots and Humans
The future firm is an AI decision factory [2] where decision-making is a science. Analytics will systematically convert multiple sources of data into predictions, insights and choices, which in turn will robotize processes and automate operational workflows. In Formula 1 racing, for example, the competition is no longer just in product engineering but in data analytics and algorithms: AI is sharpening race strategies and souping up performance in car design. In financial services, stock trading is now dominated by high-frequency trading bots whose algorithms are designed by human quants. In this arms race of algorithms, the contest to find and inspire the best talent to enlist in a visionary journey will intensify.
In a hybrid workplace where humans co-exist with bots and AI co-pilots, temptations to compare efficiency and productivity will arise; exercising patience and empathy, and actively engaging to harness the positive human comparative advantages, will therefore be key. In fact, success may hinge not on whether your AI bot is flashier than your rival’s, but on how well you synchronize and align your teams with their new digital colleagues.
One can argue that the soft skills of engagement, adaptability and visioning will be no different from today. What matters is that, as the hard skills become eclipsed by machines, the functional centrality of human qualities such as integrity, humility, empathy and intuition will be amplified.
Managing Algorithmic Risks
Here we identify some risks that leaders will need to be aware of. One is the lack of transparency in the deep learning models used in, say, pricing and marketing strategies; how conclusions are derived may be opaque and hard to comprehend, and this in turn breeds distrust.
AI systems can inadvertently perpetuate societal biases because of limited or homogeneous training data or narrow algorithmic design. This may result in serious discrimination and unfairness; it is therefore important to train models on diverse data to ensure robustness in design.
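One simple way such bias can be surfaced is to compare a model’s positive-outcome rates across groups. The sketch below is purely illustrative and not from this paper: it computes a basic “demographic parity” gap on hypothetical loan-approval predictions, with all data and function names invented for the example.

```python
# Illustrative sketch (hypothetical data): check whether a model's
# positive-outcome rate differs across demographic groups -- a basic
# "demographic parity" audit.

def selection_rates(predictions, groups):
    """Return the positive-prediction rate per group."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions (1 = approved) for two groups.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(selection_rates(preds, groups))          # {'A': 0.8, 'B': 0.2}
print(demographic_parity_gap(preds, groups))   # gap of 0.6 -- a red flag
```

A large gap does not prove discrimination on its own, but it is exactly the kind of signal that should trigger a review of the training data and algorithmic design the text describes.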
Generative AI can produce content from existing audio, images, text and synthetic data, and is fast challenging the human monopoly on creativity. The downside is that this avalanche of content comes with a whole swathe of undesirable falsehoods, deepfakes and half-truths. Leaders have to champion transparency and be pillars of truthfulness and truth-seeking, to fight the deception that can erode trust in technology adoption.
The enthusiasm towards AI automation needs to be tempered; it should be used as an augmentation tool for humans, coupled with supervision to avoid rogue scenarios.
Navigating the Geopolitics of AI
In its 2018 review of Kai-Fu Lee’s book AI Superpowers: China, Silicon Valley and the New World Order, The Economist depicted the AI future thus: “The world will devolve into a neo imperial order, in which, if they are to tap into vital applications, other countries will have to become vassal states of one of the AI superpowers.”
To fathom the implications of such a scenario, it is useful to understand the latest advances in Generative AI. One key research breakthrough of the last few years is Large Language Models (LLMs): foundation models that use deep learning algorithms to process and understand natural language. These generic models are built on the transformer architecture, trained on large unlabelled sets of text data to learn patterns, and can then be fine-tuned on smaller datasets for specific industry domains. LLMs are now multi-modal, meaning they can model not just text but audio and images as well, and they can perform many types of natural language tasks, such as translating languages, analysing sentiment and holding chatbot conversations. Many global tech companies and nations are in a race to develop such LLMs; examples are Google’s PaLM 2, OpenAI’s GPT-4, the UAE’s open-source Falcon, and Paris-based Mistral.
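The core idea behind these models, learning from raw text which token tends to come next, can be illustrated with a deliberately tiny toy. The sketch below is not a real LLM: actual LLMs use transformer networks with billions of parameters, while this toy bigram model captures only one-character contexts. But the training objective, next-token prediction from unlabelled text, is the same in spirit.

```python
# Toy illustration of next-token prediction (NOT a real LLM): count, from
# raw unlabelled text, which character tends to follow each character, then
# predict the most likely continuation.
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """Tally, for each character, which characters follow it in the corpus."""
    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1
    return follows

def predict_next(model, char):
    """Return the most likely next character after `char`."""
    if char not in model:
        return None
    return model[char].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram_model(corpus)
print(predict_next(model, "h"))  # 'e' -- every 'h' here is followed by 'e'
```

Scaling this idea up, from character pairs to long contexts, and from counting to learned neural representations, is what gives LLMs their fluency across translation, sentiment analysis and conversation.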
To avoid falling into the vassal-state trap, policymakers will have to consider whether their nation should join the race to invest in sovereign AI, in both physical and data infrastructure. On the data side, this means sovereign foundation LLMs developed by local teams and trained on local datasets to promote inclusiveness, taking in specific dialects, cultural nuances, customs and practices so as to preserve national identity and autonomy.
From a firm’s perspective, a corporate leader will need to be aware of two issues. The first is technology lock-in from using a proprietary LLM, e.g. one whose specific parameters create supply-chain dependencies; this may entail huge switching costs in future. A related instance is using an LLM that is not trained on broad-based data and so may contain hidden biases and prejudices. The second issue is data sovereignty: the vast amounts of data collected through business transactions are subject to much greater scrutiny in some regions, e.g. the EU or India, with respect to privacy and how data is used to train LLMs. These governments may introduce regulations on data storage localisation, or require payment for the use of data to train models.
Innovation and Augmentation, not Automation
With every wave of technological breakthrough since the mechanised weaving looms of the first industrial revolution, adoption has ploughed on despite protests over job displacement from Luddites and the like. So far, each wave of “creative destruction” has created more jobs than the machines destroyed; indeed, over the longer view, economic growth has been exponential.
The age-old economic thesis on the factors of production (land, capital, entrepreneurship and labour) has so far held true in terms of GDP and productivity growth, at least until recent developments in digital technology, especially AI, began to threaten how we live and work. Since the mid-2000s, a troubling and puzzling question has been why this latest transformation has failed to deliver productivity growth, and why income inequality is worsening.
In his essay The Turing Trap: The Promise and Peril of Human-Like Artificial Intelligence, Erik Brynjolfsson argues that research technologists have been busy building machines to replicate human intelligence, while businesses have been using abundant capital to generate substitutes for the labour factor of production, substantially via better and more intelligent machines. Brynjolfsson claims that “the obsession with cloning human intelligence has led to AI and automation that too often simply substitute workers, rather than extending human capabilities and allowing people to do new tasks.” [3] The excessive focus on human-like AI, he writes, drives down wages for most people “even as it amplifies the market power of a few” who own and control the technologies, and of those with the capital to buy the technologies to replace labour.
Investing in Education
In the past, when an industry became obsolete, a whole new sector would be spawned, partly because labour was still required to make the new machines: cars to replace horse buggies, computers to replace calculators, and so on. But today the new machines are merely intelligent software, whose development requires only a few highly skilled cognitive technologists. Cerebral cybernetics is trumping manual and mundane work, hollowing out the middle of the job spectrum. Since AI will be relatively cheaper than human intelligence, the least talented and least innovative risk being left behind. Reducing inequality in the future will therefore depend on reducing intellectual inequality, and the best time-tested method is investing in education and training [4].
In a capitalist society where the role of a firm’s leaders is to maximize shareholder value, it is more expedient to focus on the denominator and cut costs than to innovate and grow revenue from new, quality goods and services. It is often easier simply to swap in a machine than to rethink processes and invest in technologies that use AI to expand the firm’s range of products and improve worker productivity. Machines have no unions, and they keep running even in a pandemic.
For many companies, Industry 4.0 is still the “next thing” to invest in: digital transformation covering robotic automation, the internet of things (IoT), data analytics, robotic process automation (RPA), cloud data centres, and machine learning. If Industry 4.0 does succeed in raising efficiency and productivity, the fear is that it will do so at the expense of jobs and incomes. While human labour is heavily taxed, there is no payroll tax on robots or automation. Without good jobs and incomes, who will be able to afford the products and services companies produce? The invisible hand of market supply and demand will shrink, or in the worst case vanish.
There is apprehension that the gains from AI will accrue to a small group of Big Tech technocrat billionaires, exacerbating inequality. Hence the EU is propounding the concept of Industry 5.0, an extension beyond CSR, ESG and the Triple Bottom Line (People, Profits and Planet). At its core, Industry 5.0 reflects a shift from a focus on economic value to a focus on societal value, and from welfare to human-centric wellbeing.
In sum, the industrial environment, whether business, socio-political or physical, will undoubtedly become more complex. Leaders will have to strike a balance between humans and machines, and continue to hone and tap the comparative advantages that humans will, despite everything, still possess.
Tony Yeoh
Research Fellow, Penang Institute, Malaysia
Special Advisor to State of Penang Executive Councilor for Digital
Footnotes
[1] Tomas Chamorro-Premuzic, Michael Wade and Jennifer Jordan, “As AI Makes More Decisions, the Nature of Leadership Will Change”, Harvard Business Review, January 22, 2018.
[2] Marco Iansiti and Karim R. Lakhani, “Competing in the Age of AI: How Machine Intelligence Changes the Rules of Business”, Harvard Business Review, January–February 2020.
[3] David Rotman, “How to Solve AI’s Inequality Problem”, MIT Technology Review, April 19, 2022.
[4] “The Geopolitics of AI and Robotics”, interview of Laurent Alexandre by Nicolas Miailhe, https://journals.openedition.org/factsreports/4507