AI Is Reshaping Think Tanks by Automating Support Work, Not Strategic Judgment
Think tanks are one of the easiest industries to misunderstand in the AI era.
At first glance, they look highly exposed. Policy research depends on reading large document sets, synthesizing evidence, drafting reports, analyzing economic data, monitoring sentiment, and publishing commentary. Those are exactly the kinds of workflows modern AI already handles well enough to unsettle the sector's labor economics.
But the core product of a serious think tank is not a PDF. It is judgment that other people trust.
That is why the March 25, 2026 source assessment places the sector in a moderate overall replacement band of roughly 28-38%. AI is powerful across the support layer of policy work. It is much weaker in the parts of the industry that rely on political sensitivity, donor relationships, expert credibility, coalition management, geopolitical intuition, and value-laden tradeoffs.
The right framing is not “AI replaces policy research.” It is “AI compresses the machinery around policy research while raising the premium on judgment.”
The Sector Is Intellectually Influential but Financially Fragile
The source estimates:
- roughly 6,500+ think tanks globally
- more than 3,800 listed in the Open Think Tank Directory
- coverage across 102+ countries
- top-tier budgets reaching around $263 million at the largest institutions
- average U.S. think tank salaries around $124,832
- entry-level analyst pay around $50,000-$65,000
- senior fellow compensation reaching $150,000-$250,000+
But the more important context is stress.
The source highlights that roughly one-third of think tanks experienced funding declines in 2024, with the severest pressure linked to aid cuts and donor retrenchment. It also notes that 36% of institutions reported that political polarization had a strong effect on their work, up sharply from the year before.
This matters because AI does not hit think tanks in a neutral environment. It arrives in a sector that is already financially strained, politically contested, and under constant pressure to prove relevance. That combination makes automation more attractive in administrative and production-heavy roles, but it also makes trusted human judgment more valuable where institutions are fighting to maintain influence.
AI Adoption Is Real, but Government and Policy Use Is Still Limited in Practice
The source captures the current contradiction well.
On one side, think tanks and policy researchers now have strong tools for:
- literature retrieval,
- summarization,
- policy text comparison,
- data modeling,
- grant writing,
- and research drafting.
The tool layer already includes products such as:
- Semantic Scholar
- Elicit
- Consensus
- GPT-4 class models
- Claude
- grant-writing assistants
- and domain-specific NLP stacks
On the other side, the source cites an OECD 2025 finding that actual operational use of the latest generative AI models inside most government departments remains limited. That matters because think tanks do not just produce knowledge for themselves. They produce it for decision ecosystems that are still cautious, risk-sensitive, and heavily shaped by human credibility.
So AI is powerful in the workflow, but institutional trust has not been automated.
The Highest-Risk Jobs Sit in the Content and Production Layer
The source’s top-risk table is directionally convincing because it clusters around the kinds of work AI can already do at scale.
The Most Exposed Roles
| Role | Estimated AI replacement rate | Why exposure is high |
|---|---|---|
| Social Media Policy Communications Specialist | 75-85% | Posting, repackaging, infographic production, scheduling, and reporting are highly automatable |
| Policy Data Scientist | 65-75% | Standard modeling and code-heavy quantitative workflows are increasingly assisted or automated |
| Grant Writer | 60-75% | Proposal drafting, compliance formatting, and RFP mapping are increasingly software-driven |
| Polling Analyst | 60-70% | Survey workflows, segmentation, sentiment analysis, and reporting are becoming platformized |
| Research Publishing Editor | 55-70% | Editing, consistency checks, formatting, and production workflows are ideal AI support layers |
| Administrative Operations Manager | 55-65% | Scheduling, workflow tracking, and routine operations can be automated heavily |
These roles share the same underlying structure:
- they convert source material into formatted output,
- they run on deadlines and repeated templates,
- and much of their value historically came from execution volume rather than strategic authority.
That is exactly where AI moves fastest.
The social media role is especially exposed because the entire workflow can now be chained together: summarize a report, generate platform-specific variants, create visuals, schedule them, monitor engagement, and draft performance reports. In a financially constrained think tank, that makes standalone social distribution roles hard to defend.
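That chaining is the key structural point: each stage has well-defined inputs and outputs, so the whole workflow can be composed in software. A minimal sketch of the first two stages, with a stubbed summarizer standing in for an LLM call and invented platform names, character limits, and link (all illustrative assumptions, not tools named in the source):

```python
# Hypothetical sketch of a report-to-social-posts pipeline.
# summarize() is a stub; in practice an LLM API call would go here.
# Platform names and character limits below are illustrative assumptions.

PLATFORM_LIMITS = {"x": 280, "linkedin": 700}

def summarize(report_text: str, max_sentences: int = 2) -> str:
    """Stub summarizer: keep the first few sentences of the report."""
    sentences = [s.strip() for s in report_text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

def platform_variants(summary: str, link: str) -> dict:
    """Generate one post per platform, trimmed to its character limit."""
    variants = {}
    for platform, limit in PLATFORM_LIMITS.items():
        post = f"{summary} Read more: {link}"
        if len(post) > limit:
            # Truncate the summary portion but keep the link intact.
            room = limit - len(f"... Read more: {link}")
            post = f"{summary[:room]}... Read more: {link}"
        variants[platform] = post
    return variants

report = ("Our new report finds that AI automates the support layer of policy work. "
          "Judgment-heavy roles remain far less exposed. "
          "Funding pressure accelerates adoption.")
posts = platform_variants(summarize(report), "https://example.org/report")
for platform, post in posts.items():
    print(f"[{platform}] {post}")
```

The later stages (visual generation, scheduling, engagement monitoring, performance reporting) would each be another composable step behind an API, which is exactly why the role as a standalone job is exposed: the pipeline, not the person, carries the workflow.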
Grant writing shows the same pattern. The source cites specialized AI grant tools that can draft, structure, and compliance-check much of the application process. The role does not vanish, but the execution-heavy layer becomes much thinner.
Policy Analysis Is Not Safe, but It Is Not Being Erased Either
The central policy-research layer sits in a middle band:
- Policy Analyst at 35-45%
- Policy Researcher at 30-40%
- Legislative Analyst at 40-50%
- Regulatory Impact Assessor at 45-55%
- Public Policy Modeler at 40-55%
This distribution is important because it rejects both extremes.
AI can already speed up:
- source retrieval,
- case comparison,
- policy memo structuring,
- bill comparison,
- evidence summarization,
- and baseline quantitative analysis.
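The mechanical core of some of these tasks is already ordinary software. Bill comparison, for instance, reduces at its lowest level to a text diff; a minimal sketch using Python's standard difflib, with invented bill text for illustration (the interpretive layer an analyst or LLM adds would sit on top of this output):

```python
import difflib

# Two invented versions of a bill, for illustration only.
old_bill = [
    "Sec. 1. The agency shall report annually.",
    "Sec. 2. Funding is authorized at $10 million.",
]
new_bill = [
    "Sec. 1. The agency shall report quarterly.",
    "Sec. 2. Funding is authorized at $15 million.",
    "Sec. 3. A sunset review occurs after five years.",
]

# unified_diff yields only the changed lines; explaining what the
# changes mean politically is the part that resists automation.
diff = list(difflib.unified_diff(
    old_bill, new_bill,
    fromfile="Bill (introduced)",
    tofile="Bill (amended)",
    lineterm="",
))
print("\n".join(diff))
```

The diff surfaces the reporting-frequency change, the funding increase, and the new sunset clause mechanically; deciding which of those three matters to a committee chair is the analyst's job.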
But the source repeatedly stresses what still resists automation:
- political feasibility judgment,
- stakeholder conflict mapping,
- understanding informal power,
- sequencing recommendations in a live political cycle,
- and translating evidence into something decision-makers will actually act on.
That is the right dividing line.
A policy analyst’s output is not valuable because it contains information. It is valuable because it interprets evidence inside a live power environment. AI can generate more analysis. It still struggles to know which analysis matters politically, reputationally, and institutionally.
Leadership Roles Stay Human Because Influence Is Not a Dataset
The lowest-risk roles in the source belong to the strategic leadership tier:
- Think Tank President / Director at 5-10%
- Research Director at 10-15%
- Program Director at 12-18%
- Senior Fellow at 15-25%
- Resident Scholar at 15-25%
This is not because those people do not use AI. They will likely use it heavily.
It is because their real value comes from:
- long-built reputation,
- access to policymakers,
- fundraising ability,
- research agenda setting,
- public credibility,
- and the ability to make high-stakes judgment calls in ambiguous environments.
A think tank president is not mainly paid to summarize information. They are paid to decide:
- what the institution should stand for,
- which policy bets are worth taking,
- which relationships need to be protected,
- and how the organization survives under funding and political stress.
Those are not output-generation tasks. They are authority tasks.
Economic, Social, and International Research Split Along Judgment Lines
The source’s category structure makes one thing very clear: not all analysis work is equally exposed.
Economic research generally lands in a moderate band because AI is strong at:
- data cleaning,
- model execution,
- forecasting assistance,
- and document support.
But macroeconomic interpretation, institutional judgment, and causal-policy reasoning remain harder to automate.
Social research is more exposed where the work depends on survey processing, polling operations, and standardized analysis. That is why polling and some social-research roles rate higher in replacement risk. But fieldwork design, cultural interpretation, and sensitive qualitative work still depend heavily on people.
International relations and security research remain among the least exposed areas because geopolitical work relies on:
- hidden context,
- incomplete information,
- trust networks,
- diplomatic nuance,
- and inference under uncertainty.
The source is right to emphasize that AI can help collect, summarize, and compare information, but it cannot yet credibly replace geopolitical judgment.
Communications and Fundraising Get Compressed Before Strategy Does
Two parts of the think tank business model face especially direct pressure: communications and fundraising operations.
On communications, the source points to a sector where AI can now:
- turn reports into social posts,
- generate visual assets,
- support video and audio production,
- optimize publishing schedules,
- and automate analytics.
That does not remove senior communications strategy, but it does remove a large amount of execution labor.
On fundraising, the split is equally clear.
The source rates:
- Grant Writer at 60-75%
- Finance Manager at 50-60%
- Administrative Operations Manager at 55-65%
but keeps:
- Development Director at 15-25%
- Donor Relations Manager at 20-30%
much lower.
That is exactly the right pattern. Proposal drafting, compliance formatting, tracking deadlines, and budget assembly are structured workflows. Donor trust, strategic alignment, and long-term relationship building are not.
The sector can automate grant mechanics faster than it can automate fundraising credibility.
New AI-Native Policy Roles Will Expand Even as Legacy Roles Shrink
The source’s last category is strategically the most important.
It treats AI-created policy roles as a growth zone, including:
- AI Policy Researcher
- Digital Governance Analyst
- Platform Economy Regulation Researcher
- AI Ethics Policy Specialist
- and related climate and tech-policy hybrids
These roles are not safe because AI is irrelevant to them. They are safe because AI creates the problem space.
As AI systems spread, institutions need more people who can interpret:
- AI regulation,
- model risk,
- international standards,
- governance frameworks,
- platform power,
- ethics tradeoffs,
- and public legitimacy.
This is why the think tank sector’s future is not simply smaller. It is narrower in its low-level support functions and more concentrated around judgment-intensive, governance-heavy, and influence-bearing roles.
The Real Fault Line Is Between Formatted Output and Trusted Judgment
The most useful way to read the whole industry is through one distinction.
Formatted output is exposed.
That includes:
- literature summaries,
- first drafts,
- social distribution,
- standard editing,
- grant templates,
- dashboard production,
- and routine administrative coordination.
Trusted judgment is more resilient.
That includes:
- political reading,
- elite relationship management,
- donor cultivation,
- strategic policy framing,
- international risk interpretation,
- ethics and governance tradeoffs,
- and expert testimony.
This is why the sector’s future probably looks like this:
- smaller production teams,
- heavier use of AI for research support,
- higher leverage for senior staff,
- rising demand for AI governance talent,
- and a sharper premium on people whose names themselves carry authority.
The Strategic Conclusion
Think tanks are not mainly in the business of producing text. They are in the business of producing judgment that influential people are willing to act on.
AI changes how that judgment is prepared:
- faster literature review,
- faster modeling,
- faster memo drafting,
- faster communications packaging,
- faster grant-production work.
But it does not yet solve the hardest part of the job:
- deciding what matters,
- deciding what is politically possible,
- deciding what is ethically defensible,
- and earning the trust required to shape public decisions.
That is why AI will cut deeply into the support architecture of policy research without fully replacing the strategic core.
In think tanks, the first thing to shrink is not judgment. It is everything that used to surround judgment.
Sources
The links below are preserved from the original Chinese source file and cleaned into English.
- On Think Tanks, State of the Sector Report 2025: https://onthinktanks.org/state-of-the-sector-report-2025/
- On Think Tanks, The Promise and Perils of AI in Shaping Tomorrow’s Think Tanks and Foundations: https://onthinktanks.org/articles/the-promise-and-perils-of-ai-in-shaping-tomorrows-think-tanks-and-foundations/
- ACM Digital Library, The End of the Policy Analyst?: https://dl.acm.org/doi/10.1145/3604570
- OECD, Governing with Artificial Intelligence: https://www.oecd.org/en/publications/2025/06/governing-with-artificial-intelligence_398fa287/full-report/ai-in-policy-evaluation_c88cc2fd.html
- Deloitte, AI Future of Work in Public Sector Policymaking: https://www.deloitte.com/us/en/insights/industry/government-public-sector-services/ai-future-of-work-in-government/ai-future-of-work-in-public-sector-policymaking.html
- RAND, For Geopolitics, What AI Can’t Do Will Be as Important: https://www.rand.org/pubs/commentary/2025/04/for-geopolitics-what-ai-cant-do-will-be-as-important.html
- PMC, AI and International Relations Analysis Framework: https://pmc.ncbi.nlm.nih.gov/articles/PMC11575148/
- Joseph Rowntree Foundation, Will AI Replace Policymakers?: https://www.jrf.org.uk/ai-for-public-good/will-ai-replace-policymakers
- ZipRecruiter, Think Tank Salary: https://www.ziprecruiter.com/Salaries/Think-Tanks-Salary
- 80,000 Hours, Think Tank Research Career Review: https://80000hours.org/career-reviews/think-tank-research/
- Grant Assistant, The Best AI Grant Writing Tools for Nonprofits in 2025: https://www.grantassistant.ai/resources/articles/the-best-ai-grant-writing-tools-for-nonprofits-in-2025
- FundRobin, Best AI Grant Writing Tools for Nonprofits: https://www.fundrobin.com/articles/how-to-guide/ai-tools-for-nonprofits/best-ai-grant-writing-tools-nonprofits/
- International AI Safety Report, 2026 Report Extended Summary for Policymakers: https://internationalaisafetyreport.org/publication/2026-report-extended-summary-policymakers
- Council on Foreign Relations, How 2026 Could Decide the Future of Artificial Intelligence: https://www.cfr.org/articles/how-2026-could-decide-future-artificial-intelligence
- Research.com, AI Automation and the Future of Economics Degree Careers: https://research.com/advice/ai-automation-and-the-future-of-economics-degree-careers
- Community Solutions, Using AI at a Think Tank: https://www.communitysolutions.com/resources/using-ai-think-tank
- On Think Tanks, Navigating Funding Challenges and Existential Threats in a Changing World: https://onthinktanks.org/articles/make-think-tanks-great-again-navigating-funding-challenges-and-existential-threats-in-a-changing-world/
- Diplo, AI and Diplomacy: https://www.diplomacy.edu/topics/ai-and-diplomacy/