The Economics of AI-Generated Applications: Signal Degradation and Labor Market Consequences
- Jonathan H. Westover, PhD
Abstract: Large language models have fundamentally altered the economics of written job applications by reducing production costs to near-zero. This article examines the market-level consequences through evidence from Freelancer.com, a major digital labor platform. Analysis reveals how AI-generated applications degraded a critical quality signal that previously enabled efficient worker-employer matching. Pre-LLM, employers valued application customization at the equivalent of a $26 bid reduction; this premium fell 64% post-LLM as customization lost predictive power for worker ability. Structural estimates reveal the equilibrium impact: eliminating credible written signals caused high-ability workers (top quintile) to experience 19% lower hiring rates while low-ability workers (bottom quintile) saw 14% higher rates. Total market surplus declined 1% while worker surplus fell 4%, with efficiency losses concentrated among high-ability workers unable to credibly differentiate themselves. These findings illuminate economic risks facing organizations that rely on written applications for screening and suggest strategic responses centered on performance-based evaluation, verifiable credentials, and contract design.
The arrival of large language models like ChatGPT represents a textbook case of technology-driven market disruption—but with a counterintuitive twist. Rather than improving market efficiency by reducing transaction costs, LLMs degraded a critical information mechanism, making markets less efficient and less meritocratic.
The economics are straightforward. Writing customized job applications traditionally required costly effort—time spent reading job descriptions, researching employers, and crafting tailored responses. This cost structure created a separating equilibrium: high-ability workers, who faced lower effort costs and higher returns to differentiation, invested more in customization than low-ability workers (Spence, 1973). Employers rationally interpreted customization as a noisy but informative signal of quality.
LLMs collapsed this equilibrium by reducing writing costs to approximately zero. When ChatGPT can generate a polished, customized application in seconds, the cost no longer separates types. The formerly costly signal becomes cheap talk, destroying its informational value (Crawford & Sobel, 1982) and forcing markets to reallocate workers based on remaining observable characteristics—primarily price.
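A minimal numerical sketch makes the mechanism concrete. All parameters below are illustrative (only the $26 premium echoes a figure reported later); the point is that a real effort cost separates worker types, while a near-zero cost makes every type customize, so the signal pools:

# Stylized Spence-style signaling example (illustrative parameters only).
# A worker of ability a pays an effort cost k / a to write a customized
# application; customization is worth producing only while the expected
# payoff (the employer's valuation) exceeds that cost.

def customizes(ability, effort_cost_scale, premium):
    """Return True if customizing is worth it for this worker type."""
    cost = effort_cost_scale / ability      # higher ability -> cheaper signal
    return premium > cost

premium = 26.0                              # employer valuation, in dollars
for ability in (0.5, 1.0, 2.0):             # low, mid, high ability
    # Pre-LLM: customization takes real effort, so only able workers do it.
    pre = customizes(ability, effort_cost_scale=30.0, premium=premium)
    # Post-LLM: writing cost is roughly zero, so every type customizes.
    post = customizes(ability, effort_cost_scale=0.1, premium=premium)
    print(f"ability={ability}: customizes pre-LLM={pre}, post-LLM={post}")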
This article synthesizes evidence from a comprehensive empirical investigation using data from Freelancer.com, a platform where over 85 million users have transacted since 2009. Three findings emerge with particular clarity: (1) employer willingness to pay for application customization fell 64%, (2) matching efficiency deteriorated as high-ability workers lost their primary differentiation mechanism, and (3) distributional consequences favored low-ability workers due to positive correlation between ability and opportunity costs.
These patterns carry implications extending far beyond digital platforms. Any market relying on written applications for screening—college admissions, professional services hiring, grant competitions—faces similar vulnerabilities as AI tools democratize access to sophisticated writing assistance (Autor, 2024).
The Digital Labor Market Context
Market Structure and Information Problems
Digital labor platforms exemplify markets with severe information asymmetries (Horton, 2017). Freelancer.com's typical transaction involves an employer posting a job description, receiving 30–60 applications from globally distributed workers, and selecting one freelancer for a fixed-price contract, typically in the $30–$250 range, within 24–48 hours.
This compressed timeline creates acute adverse selection (Akerlof, 1970). Unlike traditional labor markets with multi-stage interviews and reference checks, platform employers must make rapid decisions based on limited information. Observable characteristics—reputation scores, past ratings, portfolio links—explain only 3% of total variation in worker ability. The remaining 97% is unobserved at the application stage.
Written proposals serve dual functions: they communicate what the worker would do, and—critically—how much effort the worker invested in understanding the specific job. A worker who carefully reads a job description and writes a detailed, job-specific response reveals information about ability through costly action (Pallais, 2014).
The Pre-LLM Signaling Equilibrium
Evidence from 960,000 applications submitted to 33,000 job postings between January 2021 and November 2022 documents a functioning signaling equilibrium. Using an LLM-based measure quantifying how customized each proposal is to its specific job posting (scored 0–18), analysis reveals:
Employers valued customization highly: A one-standard-deviation increase in customization (2.96 points) generated the same hiring probability increase as a $26 bid reduction—about 39% of a standard deviation in price. This willingness to trade off price for customization indicates that employers found it informative.
Customization predicted effort: Workers who spent more time on applications produced more customized proposals. A one-point increase in log effort (roughly doubling time spent) associated with a 0.62-point increase in customization score.
Effort predicted outcomes: Among hired workers, those whose proposals exhibited higher customization completed jobs successfully at significantly higher rates. Controlling for worker reputation and bid price, a one-point increase in customization associated with a 1.6 percentage point increase in five-star completion probability.
These patterns align with classic signaling theory (Spence, 1973). High-ability workers found it less costly to produce customized proposals, invested more effort, and sent stronger signals. Employers observed this correlation and rationally rewarded customization, sustaining the separating equilibrium.
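As a hedged illustration of where a figure like the $26 equivalence comes from, the calculation below divides a customization coefficient by the magnitude of a price coefficient from a hiring-choice model and scales by the customization standard deviation. The price coefficient here is hypothetical, chosen only so the output lands near the reported value:

# Back-of-the-envelope willingness-to-pay calculation from hiring-model
# coefficients. The price coefficient is invented for illustration.
beta_customization = 0.097   # effect of +1 customization point on hiring utility (pre-LLM figure above)
beta_price = -0.011          # effect of +$1 in bid price (hypothetical)
sd_customization = 2.96      # reported standard deviation of the customization score

# Dollar value of one SD of customization: the bid cut that would move
# hiring utility by the same amount.
wtp_per_sd = beta_customization * sd_customization / abs(beta_price)
print(f"Implied employer valuation: ${wtp_per_sd:.0f} per SD of customization")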
Signal Degradation: Evidence and Mechanisms
Collapse of Customization Premiums
Using 438,000 applications to 5,500 job postings from March–July 2024 (post-LLM period), demand estimation reveals dramatic changes. Employer willingness to pay for customization fell to $15 per standard deviation (from $26 pre-LLM), a 42% decline. The coefficient on customization in hiring probability models fell from 0.097 to 0.034—a 65% drop.
Event study analysis tracking employer demand in two-month windows around ChatGPT's November 2022 release shows gradual but persistent adjustment. The predicted hiring probability increase from customization fell from 35% in mid-2022 to 25% by March 2023 to 10% by July 2024. Employers learned that customization no longer predicted quality, consistent with research on learning in platform markets (Cabral & Hortaçsu, 2010).
Breakdown of Effort-Signal Correlation
Pre-LLM, workers who spent more time produced more customized applications (elasticity = 0.62). Post-LLM, this relationship collapsed. Among the 14% of post-LLM applications created using Freelancer.com's integrated AI writing tool, increased time investment negatively predicted customization (elasticity = -0.35). Workers spending more time on AI-written applications devoted effort to activities other than customization.
For non-AI applications post-LLM, the effort-customization elasticity fell to 0.50, suggesting degradation even when workers didn't use the platform's tool—likely reflecting off-platform AI use (Eloundou et al., 2023) or workers recognizing that investing effort in customization no longer paid off.
Critically, measured effort continued predicting job success post-LLM (coefficient = 0.046), but customization's predictive power disappeared entirely (coefficient = -0.0001). The underlying economic relationship—workers who invest more effort tend to be higher ability—remained intact. What changed was employers' ability to observe effort through the written signal.
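The pattern is straightforward to test in a linear probability model of job success on measured effort and customization. The sketch below uses synthetic data constructed to mimic the reported post-LLM pattern; it is not the study's data or specification. Effort tracks ability, customization does not, and the regression recovers a positive effort coefficient and a near-zero customization coefficient:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Synthetic post-LLM world (illustrative only): effort still reflects ability,
# while customization is AI-generated and unrelated to either.
ability = rng.normal(size=n)
log_effort = 0.8 * ability + rng.normal(scale=0.6, size=n)
customization = rng.normal(loc=9, scale=3, size=n)
success = (ability + rng.normal(scale=1.0, size=n)) > 0   # five-star completion proxy

X = sm.add_constant(np.column_stack([log_effort, customization]))
model = sm.OLS(success.astype(float), X).fit()
print(model.params)   # effort coefficient stays positive; customization's is near zero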
Equilibrium Consequences: Structural Model Evidence
Modeling Framework and Key Findings
To quantify equilibrium effects, the research develops a structural model combining Spence (1973) signaling, discrete choice demand (McFadden, 1974), and multi-dimensional auctions (Che, 1993). Workers choose costly effort to produce signals; employers maximize expected utility when hiring; competition occurs simultaneously on price and quality.
The model's critical insight: worker types are two-dimensional—ability (a) and opportunity cost (c). Both are unobserved initially but become estimable through equilibrium choices. High-ability workers face lower costs of producing signals, but ability is positively correlated with opportunity cost (ρ = 0.19): high-ability workers have better outside options, making them less willing to accept low wages.
This positive correlation proves crucial. Workers employers most want to hire are least able to compete on price once signaling fails.
The No-Signaling Counterfactual
The model simulates a counterfactual equilibrium where writing costs fall to zero, rendering signals uninformative. Workers choose only bids; employers form beliefs based solely on observable characteristics. Comparing this "no-signaling" equilibrium to the estimated pre-LLM baseline reveals:
Hiring rates by ability quintile:
Top quintile (80th–100th percentile): -19%
Second quintile: -10%
Third quintile: -4%
Fourth quintile: +3%
Bottom quintile: +14%
The market becomes markedly less meritocratic. High-ability workers who previously signaled quality cannot differentiate themselves from low-ability competitors with similar observable characteristics. Low-ability workers benefit from camouflage—employers cannot distinguish them from high-ability workers in their observable group.
Welfare implications:
4% decline in worker surplus: High-ability losses outweigh low-ability gains
<1% increase in employer surplus: Lower wages roughly offset reduced quality
1% decline in total surplus: Pure deadweight loss from misallocation
These seemingly modest percentages translate to millions in lost value given market scale and understate long-run impacts from high-ability workers exiting platforms (Levin, 2003) and employers reducing hiring due to adverse selection (Greenwald, 1986).
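A toy simulation, not the paper's structural model, shows why the positive ability-cost correlation produces exactly this tilt: once applications stop carrying information, employers fall back on price, and high-ability workers lose because their better outside options push their bids up. Every parameter except the ρ = 0.19 correlation is invented for illustration:

import numpy as np

rng = np.random.default_rng(1)
n_workers, n_jobs = 200, 20_000

# Correlated ability and opportunity cost (rho ~ 0.19, as reported).
rho = 0.19
ability, opp_cost = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_workers).T
bids = 100 + 20 * opp_cost          # better outside options -> higher bids (stylized)

def hiring_rates(signal_weight):
    """Simulate jobs where employers see a noisy, possibly useless signal of ability."""
    wins = np.zeros(n_workers)
    for _ in range(n_jobs):
        applicants = rng.choice(n_workers, size=40, replace=False)
        signal = ability[applicants] + rng.normal(scale=1.0, size=40)
        utility = signal_weight * signal - 0.05 * bids[applicants]
        wins[applicants[np.argmax(utility)]] += 1
    return wins / n_jobs

with_signal = hiring_rates(signal_weight=1.0)   # informative applications
no_signal = hiring_rates(signal_weight=0.0)     # cheap-talk applications

quintile = np.digitize(ability, np.quantile(ability, [0.2, 0.4, 0.6, 0.8]))
for q in range(5):
    mask = quintile == q
    change = no_signal[mask].mean() / with_signal[mask].mean() - 1
    print(f"ability quintile {q + 1}: hiring rate change {change:+.0%}")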
Organizational Responses: Economic Logic and Strategic Implementation
Performance-Based Revelation Mechanisms
Economic logic: When screening fails, enable rapid learning through performance observation (Gibbons & Katz, 1991). The value of information from observing actual work often exceeds the cost of hiring under uncertainty.
Skills testing, work samples, and trial projects function as performance-based revelation. Rather than inferring ability from written claims, employers directly observe capability through structured tasks (Autor & Scarborough, 2008).
Implementation approaches:
Pre-hire assessments: Standardized skills testing, timed challenges, portfolio review
Work samples: Paid micro-projects replicating actual job tasks
Structured auditions: Live problem-solving sessions, presentations, collaborative exercises
Toptal requires applicants to pass language assessments, timed coding challenges, live problem-solving sessions, and test projects before acceptance. Acceptance rates hover around 3%, creating a credible quality signal written applications cannot provide. Similarly, HackerRank and Codility enable employers to assess technical skills through standardized challenges, shifting evaluation from cheap talk to demonstrated performance.
Exploratory Contracts and Sequential Screening
Economic logic: When pre-hire signals fail, structured probationary periods allow learning through observed performance while limiting downside risk (Pries & Rogerson, 2005).
Systematizing exploratory contracts—offering short paid trials (2–5 hours) at below-market rates with mutual option to continue—formalizes learning while limiting employer risk and worker opportunity cost.
Implementation approaches:
Micro-projects: 2–5 hour paid tasks with explicit conversion potential
Probationary periods: 30–90 day trials with structured evaluation milestones
Graduated onboarding: Increasing scope/compensation conditional on performance
Google extensively uses temporary contractors in a pipeline where strong performers transition to full employment, recognizing that initial screening provides limited information (Autor, 2024). Platforms could subsidize exploratory contracts (reducing fees for micro-projects) to facilitate learning and improve long-run match quality.
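A back-of-the-envelope value-of-information calculation, with entirely illustrative numbers and the simplifying assumption that a trial perfectly reveals match quality, shows when a paid micro-project pays for itself:

# Stylized value-of-information arithmetic for a paid trial (illustrative numbers).
p_good = 0.5                            # prior share of applicants who would succeed on the full job
value_good, value_bad = 2_000, -1_000   # employer payoff from a good vs. bad full hire
trial_cost = 150                        # cost of a 2-5 hour paid micro-project

hire_blind = p_good * value_good + (1 - p_good) * value_bad   # expected payoff without a trial
hire_after_trial = p_good * value_good - trial_cost           # trial screens out bad matches first
print(f"blind hire: {hire_blind:+}, trial then hire: {hire_after_trial:+}")
# The trial is worthwhile whenever it costs less than the expected loss it avoids:
# here (1 - p_good) * abs(value_bad) = 500 > 150.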
Verifiable Credentials and Reputation Infrastructure
Economic logic: When cheap talk becomes uninformative, verifiable credentials gain relative value (Riley, 2001). Signals that cannot be easily faked—authenticated portfolios, third-party certifications, aggregated performance scores—partially substitute for degraded application signals.
Implementation approaches:
Standardized testing: Platform-administered skills assessments with verified scores
Authenticated portfolios: Blockchain-verified work samples, code repositories, publication records
Third-party certification: Industry credentials, professional licenses, educational transcripts
Granular performance metrics: Detailed project-level ratings, completion statistics, quality scores
Upwork offers 150+ standardized skills tests; workers who pass display verified badges (Stanton & Thomas, 2015). GitHub provides developers verifiable portfolios showing actual code contributions. Organizations should weight these verifiable credentials more heavily relative to written claims when reviewing applications. Platform operators should invest in verification infrastructure—automated testing, work authentication systems, granular performance rubrics—creating AI-resistant signals.
Contract Design and Incentive Alignment
Economic logic: When screening fails, contract structure can substitute by aligning incentives (Holmström, 1979). Performance-contingent compensation reduces dependence on accurate pre-hire assessment.
Implementation approaches:
Milestone-based payment: Compensation tied to delivery of verified outputs
Quality escrow: Portion of payment contingent on client satisfaction scores
Tournament structures: Multiple workers compete; payment to winner(s) only
Profit-sharing: Compensation linked to measurable business outcomes
99designs uses contest structures where multiple designers submit work; clients pay only the winner. This bypasses application screening entirely—workers signal through demonstrated output. While inefficient (multiple workers invest unpaid effort), it solves signal disruption by making ability directly observable. Similarly, milestone-based contracts with quality gates create incentives for accurate self-selection (Bolton & Dewatripont, 2005).
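A stylized bit of contest arithmetic, with invented numbers, makes the trade-off explicit: entry is self-limiting because expected payoff falls with the number of entrants, while the unpaid effort of losing entrants is the efficiency cost the passage describes:

# Illustrative contest-entry arithmetic (symmetric entrants, equal win odds).
prize = 300          # payment to the single winning worker
effort_cost = 40     # each entrant's cost of producing a submission
for n_entrants in (5, 8, 10):
    expected_payoff = prize / n_entrants - effort_cost   # per-entrant expected return
    wasted_effort = (n_entrants - 1) * effort_cost        # losers' unpaid effort
    print(f"{n_entrants} entrants: expected payoff {expected_payoff:+.1f}, wasted effort {wasted_effort}")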
Long-Term Market Infrastructure
Portable, Verifiable Work Histories
Platform-specific reputations create fragmentation. Workers building reputation on one platform cannot leverage it elsewhere, reducing labor mobility and market efficiency (Horton, 2017). Blockchain-based credentialing systems promise interoperable, tamper-proof work records that workers control and present across contexts.
Critical design elements include interoperability standards enabling cross-platform verification, granular competency frameworks capturing specific skills (Autor, 2013), privacy-preserving selective disclosure (Camenisch & Lysyanskaya, 2001), and decay mechanisms reducing weight of outdated credentials.
Continuous Monitoring and Adaptive Screening
LLM capabilities evolve rapidly; today's detection systems may fail tomorrow (Eloundou et al., 2023). Organizations need adaptive learning systems that continuously monitor signal validity and update screening practices. This requires treating hiring as a feedback loop: track which screening methods predict performance, identify when signals degrade, and rapidly experiment with alternatives (Brynjolfsson & McElheran, 2016). Statistical process control techniques can detect when established signals lose predictive power, triggering review before adverse selection becomes severe.
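One hedged sketch of what such monitoring could look like: a rolling estimate of the correlation between a screening signal and realized outcomes, flagged when it drifts well below its historical baseline. Column names, the window length, and the control limit are hypothetical choices, not a prescribed standard:

import pandas as pd

def monitor_signal_validity(df, signal_col, outcome_col, window=500, z_limit=2.0):
    """Rolling correlation between a screening signal and realized job outcomes.

    Flags windows where the signal's predictive power falls more than
    z_limit standard deviations below its early-period baseline.
    """
    df = df.sort_values("hired_at")                       # hypothetical timestamp column
    rolling_corr = df[signal_col].rolling(window).corr(df[outcome_col])
    baseline = rolling_corr.iloc[:window * 4].mean()      # early-period baseline
    spread = rolling_corr.iloc[:window * 4].std()
    df["signal_degraded"] = rolling_corr < baseline - z_limit * spread
    return df

# Usage (hypothetical dataframe with one row per completed hire):
# hires = pd.read_csv("completed_hires.csv")
# flagged = monitor_signal_validity(hires, "customization_score", "five_star")
# flagged["signal_degraded"].tail()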
Relationship-Based Exchange
When transaction costs of screening each hire become prohibitive, shifting toward relational contracting offers an alternative (Baker et al., 2002). Rather than optimizing discrete hires, organizations invest in identifying and developing ongoing relationships with proven workers. The economics favor relationships when repeated interactions amortize screening costs, relationship-specific investments create switching costs that stabilize partnerships, and reputation effects discipline opportunism (Greif, 1993).
Conclusion
The economics of AI-generated applications illuminate a counterintuitive market failure: technology that reduces production costs can destroy value by eliminating information. Large language models made written applications cheap to produce but eliminated their function as costly signals separating high- from low-ability workers. The result is a less meritocratic market with adverse selection favoring workers who compete on price precisely because their abilities are lower.
Evidence from Freelancer.com documents this progression: employers' willingness to pay for customization fell 64%; the correlation between customization and effort collapsed; and structural estimates reveal high-ability workers face 19% lower hiring rates while low-ability workers enjoy 14% gains. Total surplus declined 1%, representing pure deadweight loss.
Strategic imperatives:
Recognize that written application quality has decoupled from worker quality
Shift screening toward verifiable, performance-based mechanisms
Design contracts that enable learning through exploratory engagements
Invest in relationship development to amortize screening costs
Monitor signal validity continuously and adapt as LLM capabilities evolve
The disruption of labor market signaling by generative AI is neither temporary nor easily reversed. Organizations that adapt their screening infrastructure will maintain hiring quality and match efficiency. Those relying on degraded signals risk systematic adverse selection and measurable productivity losses.
References
Akerlof, G. A. (1970). The market for "lemons": Quality uncertainty and the market mechanism. Quarterly Journal of Economics, 84(3), 488–500.
Autor, D. H. (2013). The "task approach" to labor markets: An overview. Journal for Labour Market Research, 46(3), 185–199.
Autor, D. H. (2024). Applying AI to rebuild middle class jobs. NBER Working Paper, No. 32140.
Autor, D. H., & Scarborough, D. (2008). Does job testing harm minority workers? Evidence from retail establishments. Quarterly Journal of Economics, 123(1), 219–277.
Baker, G., Gibbons, R., & Murphy, K. J. (2002). Relational contracts and the theory of the firm. Quarterly Journal of Economics, 117(1), 39–84.
Bolton, P., & Dewatripont, M. (2005). Contract theory. MIT Press.
Brynjolfsson, E., & McElheran, K. (2016). The rapid adoption of data-driven decision-making. American Economic Review: Papers & Proceedings, 106(5), 133–139.
Cabral, L., & Hortaçsu, A. (2010). The dynamics of seller reputation: Evidence from eBay. Journal of Industrial Economics, 58(1), 54–78.
Camenisch, J., & Lysyanskaya, A. (2001). An efficient system for non-transferable anonymous credentials with optional anonymity revocation. In Advances in Cryptology—EUROCRYPT 2001 (pp. 93–118). Springer.
Che, Y.-K. (1993). Design competition through multidimensional auctions. RAND Journal of Economics, 24(4), 668–680.
Crawford, V. P., & Sobel, J. (1982). Strategic information transmission. Econometrica, 50(6), 1431–1451.
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130.
Gibbons, R., & Katz, L. F. (1991). Layoffs and lemons. Journal of Labor Economics, 9(4), 351–380.
Greenwald, B. C. (1986). Adverse selection in the labour market. Review of Economic Studies, 53(3), 325–347.
Greif, A. (1993). Contract enforceability and economic institutions in early trade: The Maghribi traders' coalition. American Economic Review, 83(3), 525–548.
Holmström, B. (1979). Moral hazard and observability. Bell Journal of Economics, 10(1), 74–91.
Horton, J. J. (2017). The effects of algorithmic labor market recommendations: Evidence from a field experiment. Journal of Labor Economics, 35(2), 345–385.
Levin, J. (2003). Relational incentive contracts. American Economic Review, 93(3), 835–857.
McFadden, D. (1974). Conditional logit analysis of qualitative choice behavior. In P. Zarembka (Ed.), Frontiers in econometrics (pp. 105–142). Academic Press.
Pallais, A. (2014). Inefficient hiring in entry-level labor markets. American Economic Review, 104(11), 3565–3599.
Pries, M., & Rogerson, R. (2005). Hiring policies, labor market institutions, and labor market flows. Journal of Political Economy, 113(4), 811–839.
Riley, J. G. (2001). Silver signals: Twenty-five years of screening and signaling. Journal of Economic Literature, 39(2), 432–478.
Spence, M. (1973). Job market signaling. Quarterly Journal of Economics, 87(3), 355–374.
Stanton, C. T., & Thomas, C. (2015). Landing the first job: The value of intermediaries in online hiring. Review of Economic Studies, 82(3), 1086–1117.

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Associate Dean and Director of HR Programs (WGU); Professor, Organizational Leadership (UVU); OD/HR/Leadership Consultant (Human Capital Innovations).
Suggested Citation: Westover, J. H. (2025). The Economics of AI-Generated Applications: Signal Degradation and Labor Market Consequences. Human Capital Leadership Review, 27(4). doi.org/10.70175/hclreview.2020.27.4.5