We began by asking ChatGPT – powered by GPT-4-turbo – to generate a series of predictions, both optimistic and cautionary, about the future of fund management.
Then we invited Prof. Sotirios Sabanis, who leads a team of AI researchers at the University of Edinburgh that’s been collaborating with us at the Centre for Investing Innovation, to weigh in.
The result is an exchange between machine-generated foresight and human expertise – a glimpse into what the future might hold when algorithms and people routinely work side by side:
Portfolio construction gets a sidekick
AI’s take: AI will assist fund managers by rapidly generating optimised portfolios based on real-time data, investor preferences, and risk constraints – not replacing the manager, but acting as a tireless quant assistant.

Prof. Sabanis responds: Rapidly generated, optimised portfolios rely heavily – and will continue to do so – on finely tuned optimisation algorithms, advanced statistical and stochastic modelling, and adequate computational resources. AI introduces a new level of automation by integrating the collection and processing of diverse data inputs – including some available in real time – with these core components. The result is information-rich decision-making tools that enable fund managers to gain deeper, faster and more granular insights. When developed with the right level of sophistication, these AI tools can act as supercharged quantitative assistants, offering fund managers unprecedented real-time analytical capabilities.
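As a flavour of what such a quant assistant automates, here is a minimal sketch of a classic mean-variance optimisation in Python, using SciPy. The expected returns, covariance matrix and risk-aversion parameter are invented for illustration; in practice these inputs would be fed by the real-time data pipelines described above.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative inputs: expected annual returns and covariance for four assets.
mu = np.array([0.08, 0.10, 0.12, 0.07])
Sigma = np.array([
    [0.10, 0.02, 0.04, 0.00],
    [0.02, 0.08, 0.02, 0.01],
    [0.04, 0.02, 0.12, 0.02],
    [0.00, 0.01, 0.02, 0.06],
])
gamma = 5.0  # risk-aversion parameter (an assumed investor preference)

def neg_utility(w):
    # Minimising the negative of (expected return - risk penalty)
    # maximises the classic mean-variance objective.
    return -(mu @ w - gamma * w @ Sigma @ w)

n = len(mu)
result = minimize(
    neg_utility,
    x0=np.full(n, 1 / n),                  # start from equal weights
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n,               # long-only constraint
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # fully invested
)
print("Optimised weights:", result.x.round(3))
```

The optimisation itself is decades old; the automation the professor describes lies upstream, in assembling and continuously refreshing the inputs it consumes.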
Sentiment as a signal
AI’s take: Natural language processing will mine news, earnings calls, and social media to detect shifts in sentiment before they show up in prices. But beware: signal or noise?

Prof. Sabanis responds: Recent breakthroughs in natural language processing (NLP) have led to significant scientific advances, the full implications of which are only beginning to be explored across a wide range of application domains – including asset management – where early results have been promising. Sentiment analysis has seen a notable improvement in predictive power, driven by these NLP developments. However, sentiment cannot be fully captured through textual data alone. To enhance predictive accuracy and reduce noise in relevant datasets, complementary analytical tools are required. The integration of multimodal data – such as video, audio and text – enabled by emerging AI technologies offers a promising path forward.
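To make this concrete, the sketch below scores two invented headlines with a general-purpose sentiment model from the Hugging Face transformers library, converting each prediction into a signed score. The default model and the crude sign-based aggregation are illustrative assumptions; a production signal would use finance-tuned models and careful aggregation across sources and time, as the multimodal point above suggests.

```python
from transformers import pipeline

# Off-the-shelf sentiment model; downloads a default English model on first use.
classifier = pipeline("sentiment-analysis")

headlines = [  # invented examples
    "Company X beats earnings expectations and raises full-year guidance",
    "Regulator opens investigation into Company X accounting practices",
]

for headline, result in zip(headlines, classifier(headlines)):
    # Convert the label/confidence pair into a signed score in [-1, 1].
    signal = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    print(f"{signal:+.2f}  {headline}")
```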
Forecasting gets a facelift
AI’s take: AI models will enhance macroeconomic forecasting by identifying nonlinear patterns and hidden correlations – though they may still struggle with black swan events and regime shifts.

Prof. Sabanis responds: There is growing evidence that AI models will significantly enhance macroeconomic forecasting by leveraging diverse, information-rich datasets and uncovering deeper correlations and interdependencies. A compelling example comes from recent advances in AI’s predictive capabilities for inherently unstable systems, such as weather forecasting. These developments offer a glimpse into AI’s potential in navigating complex, dynamic environments. However, accurately modelling extreme events well in advance remains a substantial challenge. The rarity of such events, coupled with their non-linear, non-stationary and non-periodic characteristics, limits the availability of training data and constrains predictive accuracy – even for today’s most advanced AI systems.
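A toy demonstration of the nonlinear-pattern claim: below, a boosted-tree model and a linear baseline are compared on synthetic data containing an interaction effect, using time-ordered cross-validation so that each test fold lies in the "future" of its training data. The data-generating process is invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))  # three invented 'macro indicator' series
# Target with an interaction term that a linear model cannot represent.
y = 0.5 * X[:, 0] + np.tanh(X[:, 1] * X[:, 2]) + rng.normal(scale=0.3, size=n)

cv = TimeSeriesSplit(n_splits=5)  # always train on the past, test on the future
for name, model in [("linear baseline", LinearRegression()),
                    ("boosted trees", GradientBoostingRegressor(random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
    print(f"{name:16s} out-of-sample R^2: {r2:.2f}")
```

The nonlinear learner recovers structure the linear baseline misses; neither, of course, can learn a regime it has never seen, which is precisely the limitation the professor highlights.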
Human bias, meet machine bias
AI’s take: AI may reduce some behavioural biases – but introduce new ones. Overfitting, data drift, and model opacity could create false confidence in flawed outputs.

Prof. Sabanis responds: AI can act as a powerful behavioural counterweight – but only when its limitations are well understood and carefully managed. It holds significant promise as a behavioural stabiliser in financial decision-making, particularly through its capacity to reduce common cognitive biases such as overreaction, recency bias and confirmation bias. By anchoring investment strategies in structured data and statistical learning, rather than emotion or narrative, AI can enhance objectivity.
However, its effectiveness is not unconditional. AI systems are not neutral; they can introduce machine-driven biases rooted in their training data, algorithmic architecture and feedback loops. This creates a new class of risk – systemic, often opaque and harder to detect than human behavioural patterns. Risks such as overfitting, data drift and model opacity are particularly dangerous because they give an illusion of precision and reliability, often masked by strong backtest performance or technical complexity. If left unchecked, these flaws may propagate through portfolios, investment committees and client reporting chains – ultimately undermining the very trust that AI is meant to bolster. Hence, robust stress testing, continuous validation and model interpretability frameworks are essential safeguards to ensure that AI adds real value to the investment process rather than introducing new, harder-to-see risks.
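The continuous validation Prof. Sabanis calls for can start simply. One common building block, sketched below with illustrative numbers and an illustrative alert threshold, is a two-sample Kolmogorov–Smirnov test that flags when a live feature’s distribution has drifted away from the one the model was trained on.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # distribution seen in training
live_feature = rng.normal(loc=0.4, scale=1.2, size=1000)   # simulated shifted live data

res = ks_2samp(train_feature, live_feature)
if res.pvalue < 0.01:  # illustrative alert threshold
    print(f"Drift detected (KS statistic {res.statistic:.3f}, "
          f"p = {res.pvalue:.2g}): investigate before trusting the model.")
else:
    print("No significant drift detected.")
```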
The rise of the ‘explainable AI’ arms race
AI’s take: As AI becomes more embedded in decision-making, the pressure to explain its logic to clients, regulators, and boards will intensify. Transparency will be a competitive edge.

Prof. Sabanis responds: As AI becomes increasingly embedded in financial decision-making, explainability is no longer optional – it’s a strategic imperative. Clients, regulators and boards will expect straightforward, credible answers as to how AI models arrive at key insights or investment decisions. Transparency will not only be a compliance requirement but a source of competitive differentiation.
Firms that can articulate how their AI systems work – what data is used, how decisions are made and where risks lie – will build stronger client confidence and demonstrate operational maturity. The future edge will lie not just in building powerful AI, but in making it explainable and trustworthy. This will require investment in interpretable models, documentation frameworks and governance protocols. Transparency serves not only as a means of mitigating downside risk, but also as a critical enabler of broader adoption, a catalyst for attracting institutional capital and a foundation for establishing long-term strategic credibility within an AI-driven financial ecosystem.
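Interpretability tooling need not be exotic. As a minimal sketch, the example below fits a model on invented factor data and uses scikit-learn’s permutation importance to report which inputs actually drive its predictions – the kind of plain-language evidence a firm could put in front of clients, regulators or a board. All feature names and data are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
feature_names = ["valuation", "momentum", "quality", "noise"]  # invented factors
X = rng.normal(size=(500, 4))
# By construction only the first two features matter; 'noise' is irrelevant.
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.2, size=500)

model = RandomForestRegressor(random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:10s} importance: {score:.3f}")
```

A report built on output like this answers the three questions above directly: what data is used, how decisions are made and where the risks lie.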