What happens when you ask artificial intelligence (AI) to predict how it might change the job of fund management, and then invite a leading AI researcher to respond?

That’s the premise behind this article, which explores how AI could reshape the investment management process within the next five years.

We began by asking ChatGPT, powered by GPT-4-turbo, to generate a series of predictions, both optimistic and cautionary, about the future of fund management.

We then invited an expert to weigh in: Professor Sotirios Sabanis, who leads a team of AI researchers at the University of Edinburgh that has been collaborating with us at the Centre for Investing Innovation.

The result is an exchange between machine-generated foresight and human expertise – a glimpse into what the future might hold when algorithms and people routinely work side by side.

Portfolio construction gets a sidekick

AI’s take: AI will assist fund managers by rapidly generating optimized portfolios based on real-time data, investor preferences, and risk constraints – not replacing the manager, but acting as a tireless quant assistant.

The professor’s response

Rapidly generated, optimized portfolios rely heavily – and will continue to do so – on finely tuned optimization algorithms, advanced statistical and stochastic modelling, and adequate computational resources. AI introduces a new level of automation by integrating the collection and processing of diverse data inputs, including some available in real time, with these core components. The result is the creation of information-rich decision-making tools that enable fund managers to gain deeper, faster, and more granular insights. When developed with the right level of sophistication, these AI tools can serve as supercharged quantitative assistants, providing fund managers with unprecedented real-time analytical capabilities.
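
To ground the idea, here is a minimal sketch of the kind of constrained optimization such an assistant might automate: maximizing expected return under a volatility cap and long-only weights. The return figures, covariance matrix, and 18% risk cap are invented for illustration and do not reflect any actual process or market data.

```python
# A minimal sketch of constrained mean-variance portfolio optimization.
# All inputs are illustrative placeholders, not real market data.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.06, 0.04, 0.09])            # assumed annual expected returns
cov = np.array([[0.04, 0.01, 0.02],
                [0.01, 0.03, 0.01],
                [0.02, 0.01, 0.09]])          # assumed covariance matrix
max_vol = 0.18                                # investor's risk constraint

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},                     # fully invested
    {"type": "ineq", "fun": lambda w: max_vol - np.sqrt(w @ cov @ w)},  # volatility cap
]
bounds = [(0.0, 1.0)] * len(mu)               # long-only

result = minimize(lambda w: -(w @ mu),        # minimize the negative expected return
                  x0=np.full(len(mu), 1 / 3),
                  bounds=bounds, constraints=constraints)
print("Optimal weights:", result.x.round(3))
```

The automation the professor describes wraps this core in data pipelines that refresh the inputs continuously; the optimizer itself is the well-understood part.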

Sentiment as a signal

AI’s take: Natural language processing will mine news, earnings calls, and social media to detect shifts in sentiment before they show up in prices. But beware: signal or noise?

The professor’s response

Recent breakthroughs in natural language processing (NLP) have led to significant scientific advances, the full implications of which are only beginning to be explored across a wide range of application domains, including asset management, where early results have been promising. Sentiment analysis, in particular, has seen a notable improvement in predictive power. However, sentiment cannot be fully captured through textual data alone. To enhance predictive accuracy and reduce noise in relevant datasets, complementary analytical tools are required. The integration of multimodal data, such as video, audio, and text, enabled by emerging AI technologies, offers a promising path forward.
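
As a concrete illustration of the text-only channel, here is a minimal sketch of sentiment scoring using the Hugging Face transformers library. The model choice (FinBERT, one of several models fine-tuned on financial text) and the sample headlines are assumptions for illustration, not a production pipeline.

```python
# A minimal sketch of sentiment scoring on financial text with an
# off-the-shelf model. Model choice and headlines are illustrative.
from transformers import pipeline

classifier = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [
    "The company beat earnings expectations and raised full-year guidance.",
    "Management warned of margin pressure amid rising input costs.",
]

for text in headlines:
    result = classifier(text)[0]        # e.g. {'label': 'positive', 'score': 0.95}
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```

As the professor notes, scores like these capture only the textual channel; the tone of a CEO's voice on an earnings call, for example, sits in the multimodal territory where research is now heading.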

Forecasting gets a facelift

AI’s take: AI models will enhance macroeconomic forecasting by identifying nonlinear patterns and hidden correlations, though they may still struggle with black swan events and regime shifts.

The professor’s response

There is growing evidence that AI models will significantly enhance macroeconomic forecasting by leveraging diverse, information-rich datasets and uncovering deeper correlations and interdependencies. A compelling example comes from recent advances in AI’s predictive capabilities for inherently unstable systems, such as weather forecasting. These developments provide a glimpse into AI’s potential for navigating complex and dynamic environments. However, accurately modelling extreme events well in advance remains a substantial challenge. The rarity of such events, coupled with their non-linear, non-stationary, and non-periodic characteristics, limits the availability of training data and constrains predictive accuracy, even for today’s most advanced AI systems.
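
For a flavor of how such models pick up nonlinear structure, here is a minimal sketch fitting a gradient-boosted regressor on synthetic macro-style features with walk-forward validation. The data, feature names, and target are fabricated purely to illustrate the mechanics.

```python
# A minimal sketch of nonlinear macro forecasting: a gradient-boosted model
# trained on lagged indicators with time-ordered (walk-forward) validation.
# The synthetic data stands in for a real macro dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
n = 200                                     # quarters of synthetic history
X = rng.normal(size=(n, 3))                 # e.g. lagged inflation, rates, PMI
y = 0.5 * X[:, 0] - np.tanh(X[:, 1]) * X[:, 2] + rng.normal(0.0, 0.1, n)

# Time-ordered splits avoid training on the future, which matters for data
# whose statistical regime can shift over time.
for train_idx, test_idx in TimeSeriesSplit(n_splits=4).split(X):
    model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
    mae = mean_absolute_error(y[test_idx], model.predict(X[test_idx]))
    print(f"fold MAE: {mae:.3f}")
```

The walk-forward loop is the honest part of the exercise: a model that only looks good when tested out of time order is exactly the kind of false signal regime shifts expose.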

Human bias, meet machine bias

AI’s take: AI may reduce some behavioral biases but introduce new ones. Overfitting, data drift, and model opacity could create false confidence in flawed outputs.

The professor’s response

AI can act as a powerful behavioral counterweight, but only when its limitations are well understood and carefully managed. It holds significant promise as a behavioral stabilizer in financial decision-making, particularly through its capacity to reduce common cognitive biases such as overreaction, recency bias, and confirmation bias. By anchoring investment strategies in structured data and statistical learning, rather than emotion or narrative, AI can enhance objectivity.

However, its effectiveness is not unconditional. AI systems are not neutral; they can introduce machine-driven biases rooted in their training data, algorithmic architecture, and feedback loops. This creates a new class of risk – systemic, often opaque, and harder to detect than human behavioral patterns. Risks such as overfitting, data drift, and model opacity are particularly dangerous because they create an illusion of precision and reliability, often masked by high back-test performance or technical complexity. If left unchecked, these flaws may propagate through portfolios, investment committees, and client reporting chains – ultimately undermining the very trust that AI is meant to bolster.

Hence, robust stress testing, continuous validation, and model interpretability frameworks are essential safeguards to ensure that AI adds real value to the investment process, rather than introducing a new layer of hidden risk.
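
One concrete safeguard in that spirit is distribution monitoring. The sketch below compares a feature's live distribution against the one seen at training time using a two-sample Kolmogorov–Smirnov test; the data and the 0.05 threshold are illustrative assumptions, not a recommended policy.

```python
# A minimal sketch of data-drift monitoring: compare a feature's live
# distribution against its training-time distribution with a two-sample
# Kolmogorov-Smirnov test. Data and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(0.0, 1.0, 5000)   # feature as seen during training
live_feature = rng.normal(0.4, 1.2, 500)     # same feature in production, shifted

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.4f}): flag for review.")
else:
    print("No significant drift detected.")
```

Checks like this do not fix a drifting model, but they turn a silent failure mode into a visible alert that a human can act on.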

The rise of the explainable AI arms race

AI’s take: As AI becomes more embedded in decision-making, the pressure to explain its logic to clients, regulators, and boards will intensify. Transparency will be a competitive edge.

The professor’s response

As AI becomes increasingly embedded in financial decision-making, explainability is no longer optional – it’s a strategic imperative. Clients, regulators, and boards will expect straightforward and credible answers regarding how AI models arrive at key insights or inform investment decisions. Transparency will not only be a compliance requirement but also a source of competitive differentiation.
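
To illustrate one building block of that transparency, the sketch below uses permutation importance: shuffling one input at a time and measuring how much the fitted model's accuracy degrades, which yields the kind of plain-language evidence a client or regulator might ask for. The data and feature names are invented for illustration.

```python
# A minimal sketch of model explainability via permutation importance.
# Data and feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))               # e.g. valuation, momentum, quality
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, 500)

model = RandomForestRegressor(random_state=0).fit(X, y)
scores = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["valuation", "momentum", "quality"],
                       scores.importances_mean):
    print(f"{name:>10}: {score:.3f}")       # larger = model leans on it more
```

Attribution tools of this kind are only one layer of an explainability framework, but they make "how did the model reach this view?" an answerable question rather than a shrug.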

Final thoughts

We believe firms that can articulate how their AI systems work – what data is used, how decisions are made, and where risks lie – will build stronger client confidence and demonstrate operational maturity. The future edge will lie not just in building powerful AI, but in making it explainable and trustworthy. This will require investment in interpretable models, documentation frameworks, and governance protocols.

Transparency serves not only as a means of mitigating downside risk but also as a critical enabler of broader adoption, a catalyst for attracting institutional capital, and a foundation for establishing long-term strategic credibility within an AI-driven financial ecosystem.

Important information

Projections are offered as opinion and are not reflective of potential performance. Projections are not guaranteed, and actual events or results may differ materially.

AA-080725-195805-1