How can one use mathematics and data to navigate uncertainty, test economic claims, and make policy more evidence-driven?
Step 1: What is the problem?
This question was triggered by one my economics teacher asked.
"You want to get into Data science and would like to be an Macroeconomist. What do you think data science will provide us with in the next 10 years?"
And yes, although I do want to get into data science and aspire to be a Macroeconomist, I did end up struggling with the question.
My argument went as follows. Whilst economics provides diagrams, tools, and a medium to explain and understand, data science is where the real fun begins: it is the building block for those models and diagrams. Take the Phillips curve, for example. When Phillips analysed nearly a century of British economic data, from the 1860s to the late 1950s, he found an inverse relationship between wage inflation and unemployment. In the same way, we can use data and mathematics to address pressing issues like inequality. Empirical data can be scraped and analysed, and we can uncover patterns in relationships that economists haven't modelled yet.
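As a toy illustration of that curve-fitting idea (synthetic numbers, not Phillips's actual series), here is a minimal Python sketch that fits a simple hyperbolic Phillips-curve-style relationship between unemployment and inflation:

```python
# A toy sketch on synthetic data (not Phillips's actual series):
# fit a hyperbolic Phillips-curve-style relationship, inflation = a + b / u.
import numpy as np

rng = np.random.default_rng(0)

unemployment = rng.uniform(2, 12, size=100)                        # hypothetical rates (%)
inflation = -1.0 + 10.0 / unemployment + rng.normal(0, 0.5, 100)   # noisy observations (%)

# The model is linear in 1/u, so ordinary least squares recovers a and b
b, a = np.polyfit(1 / unemployment, inflation, 1)
print(f"Fitted curve: inflation ≈ {a:.2f} + {b:.2f} / unemployment")
```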
But even as I made my argument, I could see flaws. What about the harm principle? What about choice? Why does the government have so much power in controlling our patterns? Does the government even have that power? Is it not a bit dystopian that this much data about us is in the hands of someone other than us? Who gets to decide who is better off from this and who is "left behind"?
...hence the need for this blog post.
The best thing to do first, both for my own understanding and for people who might have interviews coming up, is probably to look at the benefits of increased data and mathematical modelling.
Step 2: Where does data come into this?
The next decade will potentially see widespread use of:
- satellite imagery for economic activity (being able to start measuring crop yields, construction activity, shipping flows, deforestation, and even poverty in areas that I guess are usually overlooked)
- mobility data for labour markets (tracking job search behaviour, what jobs there currently are, where people are needed)
- scanner and card transaction data for prices (real-time inflation, household consumption, and market power)
- online job postings for skills gaps (skills shortages, mismatches, and emerging industries)
- night-time light data for regional development (track industrial activity, blackouts, inequality across regions, and urban growth at an incredibly fine scale)
Most of the data I see whilst writing for the mailing list comes out quarterly or annually; each of the bullet points above, however, describes real-time data. Macroeconomists can analyse trends immediately. Surely the fourth point alone addresses the market failure in the labour market caused by information gaps?
I feel like a macroeconomist with access to high-frequency data is like a doctor who suddenly moves from doing annual check-ups to continuous heart monitoring. You don't just see what happened; you see how, when, and why it's happening.
If both workers and firms can see, instantly, what skills are in demand, what wages are offered, and where shortages lie, a basic economics lesson tells you frictional unemployment should fall. The International Growth Foundation highlighted that reducing information gaps in the labour market can increase employment outcomes by 2 to 5 percentage points and boost earnings by 10-30% for jobseekers who better understand the opportunities available to them.
Step 3: Where does maths come into this?
Now that the data science aspect is covered, what about the maths? I feel like maths will help with the uncertainty that is everywhere in economics. We rarely know a policy's exact effect; we only see noisy results. Mathematics helps us quantify, manage, and interpret that uncertainty instead of being misled by it.
When I was looking at a close friend's personal statement, I came across Bayesian reasoning, which enables us to update our beliefs as new evidence arrives.
Bayesian inference essentially provides a framework for how we should change our beliefs when new data arrives. For an economist, it helps avoid common fallacies such as:
A medical test may be highly accurate but still produce many false positives if the underlying condition is rare.
Likewise, an economic forecast may be precise but misleading if the prior probability of an event is low.
The new thing I've learnt is that it forces us to ask: how surprising is this data, and how much should it shift our beliefs?
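To make the base-rate point above concrete, here is a minimal Python sketch with made-up (but typical) accuracy and prevalence numbers: a "highly accurate" positive test still leaves a surprisingly modest posterior probability when the condition is rare.

```python
# A minimal sketch of the base-rate point: a 95%-accurate test for a condition
# affecting 1% of people still produces mostly false positives.
prevalence = 0.01           # prior probability of the condition (assumed)
sensitivity = 0.95          # P(positive test | condition)
false_positive_rate = 0.05  # P(positive test | no condition)

# Bayes' theorem: P(condition | positive) = P(pos | cond) * P(cond) / P(pos)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {posterior:.1%}")  # roughly 16%
```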
There's also regression to the mean, which helps us avoid false narratives.
Countries, companies, or students who perform extremely badly (or well) in one year tend to move back toward the average the next year, even without any intervention, and economists often mistake this natural rebound for the effect of policy. Mathematics helps identify whether an improvement reflects genuine causation or simply statistical drift.
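A minimal simulation (purely synthetic data) shows the effect: if observed performance is underlying ability plus luck, the worst performers in year 1 "improve" in year 2 with no policy at all.

```python
# A minimal sketch of regression to the mean: observed score = ability + luck,
# so the bottom decile in year 1 rebounds in year 2 without any intervention.
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(0, 1, 10_000)              # fixed underlying ability
year1 = ability + rng.normal(0, 1, 10_000)      # year-1 score = ability + luck
year2 = ability + rng.normal(0, 1, 10_000)      # fresh luck in year 2

worst = year1 < np.percentile(year1, 10)        # bottom 10% in year 1
print(f"Year 1 mean (bottom decile): {year1[worst].mean():.2f}")
print(f"Year 2 mean (same group):    {year2[worst].mean():.2f}")  # closer to zero
```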
We can also test economic claims with causal inference.
Many economic questions boil down to causality: Did this policy cause the observed outcome, or would it have happened anyway?
Since we economists can rarely run controlled lab experiments, we use mathematical tools to extract causal effects from messy data.
Step 4: What mathematical methods can we use?
There are a variety of ways of doing this. For example, I watched a video that explained:
Difference-in-differences
This method compares changes over time between a "treated" group (such as a state that raised the minimum wage) and a "control" group (a similar state that didn't).
It relies on the parallel trends assumption: that without the policy, both groups would have moved similarly.
The Card & Krueger’s minimum wage study is a nice example to look at. https://www.youtube.com/watch?v=Cfi8r0WnwoE
Instrumental variables
When a policy decision is influenced by many hidden factors, IV finds a variable that affects the policy but not the outcome directly.
This isolates causal pathways, reducing bias.
https://www.youtube.com/watch?v=NLgB2WGGKUw
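A minimal simulated example (hypothetical variables, not a real dataset) shows why this matters: naive OLS is biased by a hidden confounder, while a two-stage least squares estimate using a valid instrument recovers something close to the true effect.

```python
# A minimal two-stage least squares (IV) sketch on simulated data: the "policy"
# is driven partly by a hidden confounder that also affects the outcome.
import numpy as np

rng = np.random.default_rng(3)
n = 5_000

confounder = rng.normal(size=n)      # hidden factor
instrument = rng.normal(size=n)      # affects the policy, but not the outcome directly
policy = 0.8 * instrument + confounder + rng.normal(size=n)
outcome = 2.0 * policy + 3.0 * confounder + rng.normal(size=n)  # true effect = 2.0

# Naive OLS is biased upward by the confounder
ols = np.cov(policy, outcome)[0, 1] / np.var(policy, ddof=1)

# 2SLS: first stage predicts the policy from the instrument, second stage uses the fit
first_stage = np.polyfit(instrument, policy, 1)
policy_hat = np.polyval(first_stage, instrument)
iv = np.cov(policy_hat, outcome)[0, 1] / np.var(policy_hat, ddof=1)

print(f"OLS estimate: {ols:.2f}, IV estimate: {iv:.2f} (true effect is 2.0)")
```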
Natural experiments
Sometimes history runs its own experiments: a sudden policy change, a new regulation, or an unexpected exogenous shock. Careful statistical design can potentially treat these events almost like randomised trials.
Machine learning for prediction (but with extreme caution)
ML can allow us economists to detect patterns that traditional models miss.
But it can overfit, making predictions that fail in new settings.
The key is to combine statistical robustness with economic intuition, to avoid treating correlation as causation.
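Here is a minimal sketch of that overfitting warning, using scikit-learn on synthetic data: a very flexible polynomial model fits the training sample almost perfectly but does worse than a simple linear one on unseen data.

```python
# A minimal overfitting sketch: a degree-15 polynomial fits the training data
# far better than it predicts new data, while a linear model generalises fine.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
x = rng.uniform(-3, 3, size=60).reshape(-1, 1)
y = 0.5 * x.ravel() + rng.normal(0, 1, size=60)   # the true relationship is linear

x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

for degree in (1, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    print(f"degree {degree:2d}: train R² = {model.score(x_train, y_train):.2f}, "
          f"test R² = {model.score(x_test, y_test):.2f}")
```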
However much we love policy decisions, there are always trade-offs, limited budgets, and political constraints. Mathematics helps policymakers by providing the following (a list GPT also helped me put together):
a. Quantitative evaluation of competing policies
Cost-benefit analysis, optimisation models, and simulation methods estimate which interventions deliver the greatest impact per unit of resources.
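As a toy illustration (the costs and benefits are entirely made up), ranking hypothetical interventions by benefit per unit of cost is the simplest version of this kind of analysis:

```python
# A toy cost-benefit comparison: hypothetical interventions ranked by
# estimated benefit per unit of cost. All numbers are invented.
interventions = {
    "job-search assistance": {"cost": 10, "benefit": 30},
    "wage subsidy":          {"cost": 40, "benefit": 90},
    "training programme":    {"cost": 25, "benefit": 45},
}

for name, v in sorted(interventions.items(),
                      key=lambda kv: kv[1]["benefit"] / kv[1]["cost"],
                      reverse=True):
    print(f"{name:25s} benefit/cost = {v['benefit'] / v['cost']:.2f}")
```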
b. Stress-testing policy under uncertainty
Scenario analysis, Monte Carlo simulations, and sensitivity checks reveal which decisions are robust even when assumptions change.
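A minimal Monte Carlo sketch, with invented distributions for a policy's uncertain benefit and cost, shows the kind of output this produces: not a single number, but a probability that the policy fails.

```python
# A minimal Monte Carlo stress test: simulate a policy's net benefit many times
# under uncertain (and entirely invented) benefit and cost assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_sims = 100_000

growth_boost = rng.normal(0.5, 0.3, n_sims)   # uncertain benefit (% of GDP)
cost = rng.normal(0.3, 0.1, n_sims)           # uncertain cost (% of GDP)
net_benefit = growth_boost - cost

print(f"Mean net benefit: {net_benefit.mean():.2f}% of GDP")
print(f"Probability the policy loses money: {(net_benefit < 0).mean():.1%}")
```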
c. Better forecasting for better planning
Time-series modelling helps governments prepare for (a small sketch follows this list):
- recessions
- inflation shocks
- climate risks
- demographic change
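As a minimal sketch (synthetic data, and a deliberately simple AR(1) model rather than anything a central bank would actually use), here is what a one-step-ahead forecast of a growth series looks like:

```python
# A minimal time-series forecast: fit an AR(1) model to a synthetic
# quarterly growth series and project it one step ahead.
import numpy as np

rng = np.random.default_rng(6)

# Synthetic growth series with persistence: x[t+1] = 0.6 + 0.7 * x[t] + noise
growth = [2.0]
for _ in range(99):
    growth.append(0.6 + 0.7 * growth[-1] + rng.normal(0, 0.5))
growth = np.array(growth)

# Estimate the AR(1) coefficients by regressing the series on its own lag
slope, intercept = np.polyfit(growth[:-1], growth[1:], 1)
forecast = intercept + slope * growth[-1]
print(f"Next-quarter growth forecast: {forecast:.2f}%")
```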
d. Understanding when models fail
A critical takeaway, echoed by Dani Rodrik, is that economic models are tools, not truths.
Their usefulness depends on whether assumptions match reality.
Mathematics helps reveal when models break down, preventing blind overconfidence.
e. Linking policies to measurable indicators
GDP-linked bonds, carbon pricing, performance-based aid, and education incentives are all examples where mathematical metrics turn vague goals into actionable frameworks.
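As a toy illustration of turning a metric into a policy instrument, here is a hypothetical GDP-linked bond whose coupon moves with realised growth; the formula and numbers are invented for the example, not taken from any real instrument.

```python
# A toy, hypothetical GDP-linked bond: the coupon rises or falls with realised
# GDP growth relative to a baseline. Parameters are invented for illustration.
def gdp_linked_coupon(base_coupon, gdp_growth, baseline_growth, sensitivity=0.5):
    """Coupon (%) = base coupon + sensitivity * (growth - baseline), floored at zero."""
    return max(0.0, base_coupon + sensitivity * (gdp_growth - baseline_growth))

for growth in (0.0, 2.0, 4.0):
    coupon = gdp_linked_coupon(3.0, growth, baseline_growth=2.0)
    print(f"GDP growth {growth:.1f}% -> coupon {coupon:.2f}%")
```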
Step 5: So where does one end?
Like I guess the dilemma is clear:
Maths helps us understand the world.
Data helps us measure the world.
But neither tells us how to "govern" that information. Governance is the "normative" aspect.
So if someone asked me the same question, imo the answer is this:
Maths and data should improve economic understanding but not replace human judgment or democratic accountability.
In 10 years, data science and maths can beautifully:
- reduce uncertainty,
- challenge bad theories,
- shape smarter policy,
- and give us early warnings about economic distress.
But it should not create a world where people are simply datapoints to be optimised.
The next 10 years of data science will be exciting, pretty transformative, and potentially even scary, but only beneficial if paired with:
- strong privacy safeguards,
- transparent governance,
- public consent,
- and an understanding that economics is ultimately about people, not just patterns.
In other words:
the challenge is not just technical; it is ethical.
...the end.....