
Artificial Intelligence is Transforming Economic Research—But Only Rigour, Transparency, and Policy Can Maximize the Benefits – by Imre Fertő

Illustration: Getty Images

 


Artificial intelligence (AI) has long been touted as a game-changer for the sciences. Yet, in the field of economics, its proliferation marks not just another technological upgrade, but a fundamental systems-level transformation. As digital technologies and computational power expand, AI’s adaptive, data-driven methods are reshaping how economists analyze causal relationships, distribute resources, and engage with society. But with great promise comes an equal measure of risk—ethical dilemmas, inequality, and the danger of opaque, “black box” algorithms.

This post draws on recent research to explore three central questions for economic science: How can AI’s data-centric logic be reconciled with economics’ causal analytic tradition? What are the distributional consequences of AI for labour, capital, and knowledge? And finally, what institutional and regulatory safeguards are needed to harness AI’s potential while minimizing harm?

From ‘Small Data, Strong Assumptions’ to ‘Big Data, Flexible Algorithms’

For decades, economics has relied on a paradigm of “few data, strong model assumptions”—making the most of limited information with rigorous but restrictive statistical tools. AI and machine learning upend this logic. Today’s “big data, flexible algorithms” era enables the discovery of patterns previously hidden from view: random forests, LASSO, causal forests, deep neural networks, and natural language models are now mainstream in economic research. These methods are particularly powerful at variable selection, uncovering non-linearities, and capturing heterogeneous effects—capabilities traditional models often lack.
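To make the variable-selection point concrete: in the special case of an orthonormal design, the LASSO estimate is simply the least-squares coefficient passed through a soft-thresholding operator, which shrinks all coefficients toward zero and sets small ones exactly to zero. A minimal Python sketch, with hypothetical variable names and coefficient values chosen purely for illustration:

```python
import math

def soft_threshold(beta_ols, lam):
    # LASSO solution for one coefficient under an orthonormal design:
    # shrink the OLS estimate toward zero by lam, snapping to exactly
    # zero whenever |beta_ols| <= lam.
    return math.copysign(max(abs(beta_ols) - lam, 0.0), beta_ols)

# Hypothetical OLS coefficients: one strong predictor, two near-noise ones.
ols = {"income": 0.90, "weather": 0.04, "day_of_week": -0.06}
lam = 0.10  # regularization strength, chosen arbitrarily for the example

lasso = {name: soft_threshold(b, lam) for name, b in ols.items()}
selected = [name for name, b in lasso.items() if b != 0.0]
# Only "income" survives the threshold: LASSO selects variables automatically.
```

This is why LASSO is prized for variable selection: unlike ridge regression, which only shrinks, the soft threshold produces exact zeros, so the surviving variables form a sparse, interpretable model.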

Crucially, this new analytical arsenal is not just about crunching more numbers, but about interpreting new sources: satellite imagery, administrative microdata, open-ended survey responses, and vast textual corpora. Deep learning can, for example, convert satellite night-light data into real-time proxies for regional GDP, while transformer-based language models can rapidly summarize literature and even generate novel hypotheses.

Yet, as AI-driven methods proliferate, so too do their risks. Black box models often sacrifice interpretability for predictive accuracy. When economic policy relies on these models, understanding why a prediction is made—what variables drive the outcome—becomes as important as the prediction itself.

The Ethics and Inequalities of an AI-Driven Discipline

The rapid spread of AI brings new opportunities but also intensifies existing inequalities—both within the research community and society at large. Leading universities and central banks can easily build dedicated AI infrastructure, while smaller or less wealthy institutions struggle with access and expertise. This threatens a two-tier research ecosystem, deepening institutional divides.

AI’s social impact goes further. Models trained on past data may encode and reinforce historical discrimination: credit scoring or hiring algorithms risk perpetuating disadvantage for already marginalized groups. Regulatory and scholarly oversight are thus critical. The most effective solutions combine technical tools (e.g., fairness metrics, shadow models running in parallel with traditional econometrics) with standard-setting (mandatory audit trails, open data, and model cards).
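To give a flavour of what such technical tools look like, here is a minimal Python sketch of one common fairness metric, the demographic-parity ratio; the decision data and group labels are invented for the example:

```python
def selection_rate(decisions):
    # Share of positive decisions (1 = approved, 0 = rejected).
    return sum(decisions) / len(decisions)

def demographic_parity_ratio(decisions_a, decisions_b):
    # Ratio of the lower selection rate to the higher one; 1.0 means parity.
    ra, rb = selection_rate(decisions_a), selection_rate(decisions_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical credit decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6 of 8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3 of 8 approved

ratio = demographic_parity_ratio(group_a, group_b)
# A widely used rule of thumb (the "four-fifths rule") flags ratios below 0.8.
```

A ratio this far below 0.8 would not prove discrimination on its own, but it is exactly the kind of automated check a shadow model or audit trail can surface for human review.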

Data privacy, too, looms large. With the growing use of granular personal microdata in economic research, ethical minimum standards must include clear protocols for data protection, encryption, and public transparency. Bringing user and citizen groups into the design process—and disclosing expected data risks at project outset—can bolster public trust.

Transparency and interpretability must become non-negotiable. Peer-reviewed journals increasingly demand that authors share code and provide algorithmic explanations. Explainable AI (XAI) methods—such as SHAP values or counterfactual explanations—are fast becoming prerequisites for publication, not optional extras.
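As a toy-scale illustration of the idea behind SHAP: a feature's Shapley value is its average marginal contribution to the prediction, taken over every order in which features could be "revealed". The sketch below computes exact Shapley values for a tiny hypothetical fitted model with an interaction term (real SHAP libraries approximate this for large models); the model, feature values, and baseline are all invented for the example.

```python
from itertools import permutations

def model(x1, x2, x3):
    # Hypothetical fitted model with an interaction between x1 and x3.
    return x1 + 2 * x2 + x1 * x3

def value(present, x, baseline=(0, 0, 0)):
    # Model prediction with absent features held at the baseline.
    args = [x[i] if i in present else baseline[i] for i in range(3)]
    return model(*args)

def shapley(x):
    # Exact Shapley values: average each feature's marginal contribution
    # over all possible orderings of the features.
    perms = list(permutations(range(3)))
    phi = [0.0, 0.0, 0.0]
    for order in perms:
        present = set()
        for i in order:
            before = value(present, x)
            present.add(i)
            phi[i] += value(present, x) - before
    return [p / len(perms) for p in phi]

phi = shapley((1, 1, 1))
# The attributions sum to f(1,1,1) - f(0,0,0), and the x1*x3
# interaction is split equally between x1 and x3.
```

The key property for reviewers and policymakers is that the attributions always add up to the gap between the model's prediction and the baseline, so every forecast can be decomposed into per-variable contributions.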

Causal Reasoning: The Non-Negotiable Core of Economics

AI’s main promise lies in its analytical power, but its main challenge lies in reconciling predictive accuracy with causal inference—the bedrock of economics. Predicting outcomes is not the same as understanding causes. A deep neural network may forecast job market shifts with high precision, but policymakers need to know which levers to pull for desired social outcomes.

Emerging approaches like double machine learning (DML) blend the best of both worlds. DML techniques handle large, complex datasets while still producing robust causal estimates, thanks to cross-fitting and Neyman orthogonalization, which limit regularization bias and overfitting. But these tools require rigorous validation, transparent assumptions, and careful matching of models to theory. Without such checks, economists risk being seduced by spurious correlations or the mere appearance of accuracy.
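The core idea of DML can be sketched in a few lines. The deliberately simplified, stdlib-only Python example below uses synthetic data and a crude group-means nuisance estimator (both invented for the example; a real application would use the full estimator from the DML literature with flexible learners): residualize both outcome and treatment against a confounder using two-fold cross-fitting, then regress residual on residual to recover the treatment effect.

```python
import random

random.seed(0)
THETA = 0.5          # true treatment effect we want to recover
n = 4000
X = [random.randrange(5) for _ in range(n)]   # discrete confounder
D = [x + random.gauss(0, 1) for x in X]       # treatment depends on X
Y = [THETA * d + x * x + random.gauss(0, 1)   # outcome depends on D and X
     for d, x in zip(D, X)]

def group_means(idx, target):
    # Estimate E[target | X] by group means, using only rows in idx.
    sums, counts = {}, {}
    for i in idx:
        sums[X[i]] = sums.get(X[i], 0.0) + target[i]
        counts[X[i]] = counts.get(X[i], 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

def dml_theta():
    # Two-fold cross-fitting: fit nuisances on one fold,
    # form residuals on the other, then swap the folds.
    idx = list(range(n))
    folds = [idx[::2], idx[1::2]]
    num = den = 0.0
    for train, test in ((folds[0], folds[1]), (folds[1], folds[0])):
        mY, mD = group_means(train, Y), group_means(train, D)
        for i in test:
            ry = Y[i] - mY[X[i]]   # outcome residual
            rd = D[i] - mD[X[i]]   # treatment residual
            num += rd * ry
            den += rd * rd
    return num / den               # regress ry on rd

theta_hat = dml_theta()
```

With the seed fixed, the estimate lands close to the true effect of 0.5, whereas a naive regression of Y on D alone would be biased upward because X drives both the treatment and the outcome. Cross-fitting is what lets a flexible nuisance model be used without its overfitting contaminating the causal estimate.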

This new “bilingualism”—the ability to work fluently with both machine learning and causal inference—must be embedded in economic training. Leading PhD programmes now routinely include machine learning courses for economists, with an emphasis on version control, programming, and research ethics. The ideal economist, today, has a “T-shaped” skill profile: broad technical literacy, but deep causal understanding in at least one domain.

Policy, Regulation, and the Path Forward

If AI is to serve the public good, policy must evolve alongside technology. Regulators need not only to observe AI-enabled markets for anti-competitive behaviour (e.g., algorithmic collusion in pricing), but also to intervene when necessary. Open-source tools, accessible computing resources, and capacity building in developing economies are essential to avoid exacerbating global divides.

Economic education must keep pace. Lifelong learning, micro-credentials, and employer-led “data academies” can help workers adapt as automation changes job requirements. Social policies should target not only those displaced by technological change, but also those at risk of exclusion from its benefits.

Finally, cross-sectoral collaboration is key. Data sharing between public and private sectors, under strict safeguards, can fuel better analysis and evidence-based policymaking. And internationally, coordinated regulation is necessary to prevent a “race to the bottom” on ethics and transparency.

Conclusion: Embracing Innovation with Caution and Care

Artificial intelligence is neither a panacea nor a threat to the foundations of economics. Rather, it is a catalyst—expanding what is possible in empirical research, but also amplifying longstanding problems of identification, inequality, and governance. The challenge for economists is to harness AI’s strengths while upholding methodological rigour, ethical standards, and a commitment to social inclusion.

The future of economic research will be shaped not only by the power of algorithms, but by the strength of our institutions, the clarity of our methods, and the inclusiveness of our policies. AI can make economics faster, deeper, and more responsive—but only if we ensure that innovation is guided by transparency, accountability, and the public good.


This post is based on the article:
“Mesterséges intelligencia a közgazdasági kutatásban” (Artificial Intelligence in Economic Research)
Külgazdaság, Vol. LXIX, May-June 2025.
https://doi.org/10.47630/KULG.2025.69.5-6.63

