arXiv cs.AI Dec 24, 2025 18:24 UTC

Scaling Laws for Economic Productivity: Experimental Evidence in LLM-Assisted Consulting, Data Analyst, and Management Tasks

This paper derives 'Scaling Laws for Economic Impacts': empirical relationships between the training compute of Large Language Models (LLMs) and professional productivity.


What’s new (20 sec)

In a preregistered experiment with over 500 consultants, data analysts, and managers using one of 13 LLMs, each additional year of AI model progress cut task completion time by about 8%. The paper packages these results as 'Scaling Laws for Economic Impacts': empirical relationships linking LLM training compute to professional productivity.

Why it matters (2 min)

  • The paper derives 'Scaling Laws for Economic Impacts': empirical relationships between LLM training compute and professional productivity, with each year of model progress cutting task time by about 8% (56% of the gains attributed to increased compute, 44% to algorithmic progress).
  • Gains were significantly larger for non-agentic analytical tasks than for agentic workflows requiring tool use, and continued scaling is projected to lift U.S. productivity by roughly 20% over the next decade.
  • In a preregistered experiment, over 500 consultants, data analysts, and managers completed professional tasks using one of 13 LLMs.
  • Open receipts to verify and go deeper.

Go deeper (8 min)

Context

This paper derives 'Scaling Laws for Economic Impacts': empirical relationships between the training compute of Large Language Models (LLMs) and professional productivity. In a preregistered experiment, over 500 consultants, data analysts, and managers completed professional tasks using one of 13 LLMs. We find that each year of AI model progress reduced task time by 8%, with 56% of gains driven by increased compute and 44% by algorithmic progress. However, productivity gains were significantly larger for non-agentic analytical tasks compared to agentic workflows requiring tool use. These findings suggest continued model scaling could boost U.S. productivity by approximately 20% over the next decade.
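
To make the headline numbers concrete, the Python sketch below works through the implied arithmetic. It assumes the 8% annual task-time reduction compounds multiplicatively and that the 56%/44% compute-versus-algorithms split applies to the log of the yearly improvement factor; both conventions are assumptions for illustration, and the paper's ~20% U.S. productivity projection further depends on what share of economy-wide work resembles these tasks, which is not modeled here.

    # Headline numbers from the abstract; the decomposition convention is an assumption.
    ANNUAL_TIME_REDUCTION = 0.08   # each year of model progress cuts task time by ~8%
    COMPUTE_SHARE = 0.56           # share of gains attributed to increased compute
    ALGO_SHARE = 0.44              # share attributed to algorithmic progress
    YEARS = 10

    yearly_factor = 1.0 - ANNUAL_TIME_REDUCTION      # task time shrinks to 0.92x per year
    compute_factor = yearly_factor ** COMPUTE_SHARE  # ~0.954x per year from compute scaling
    algo_factor = yearly_factor ** ALGO_SHARE        # ~0.964x per year from algorithmic progress

    remaining_time = yearly_factor ** YEARS          # fraction of baseline task time left after a decade
    speedup = 1.0 / remaining_time                   # implied speedup on the LLM-assisted tasks alone

    print(f"compute-driven yearly factor:   {compute_factor:.3f}")   # 0.954
    print(f"algorithm-driven yearly factor: {algo_factor:.3f}")      # 0.964
    print(f"task time after {YEARS} years:  {remaining_time:.1%}")   # 43.4% of baseline
    print(f"implied speedup on these tasks: {speedup:.1f}x")         # 2.3x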

For builders

Scan the abstract and the experimental design; look for released code, the task datasets, and evaluation details before building on the results.

Verify

Prefer primary announcements, papers, repos, and changelogs over reposts.

Receipts

  1. Scaling Laws for Economic Productivity: Experimental Evidence in LLM-Assisted Consulting, Data Analyst, and Management Tasks (arXiv cs.AI)