Economics
 [1] arXiv:2406.13122 [pdf, html, other]

Title: Testing for Underpowered Literatures
Subjects: Econometrics (econ.EM)
How many experimental studies would have come to different conclusions had they been run on larger samples? I show how to estimate the expected number of statistically significant results that a set of experiments would have reported had their sample sizes all been counterfactually increased by a chosen factor. The estimator is consistent and asymptotically normal. Unlike existing methods, my approach requires no assumptions about the distribution of true effects of the interventions being studied other than continuity. The method includes an adjustment for publication bias in the reported t-scores. An application to randomized controlled trials (RCTs) published in top economics journals finds that doubling every experiment's sample size would increase the power of two-sided t-tests by only 7.2 percentage points on average. This effect is small and comparable to that found for systematic replication projects in laboratory psychology, where previous studies enabled accurate power calculations ex ante. Both effects are smaller than those for non-RCTs. This comparison suggests that RCTs are, on average, relatively insensitive to sample size increases. The policy implication is that grant givers should generally fund more experiments rather than fewer, larger ones.
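As a back-of-the-envelope illustration of the counterfactual-power idea in this abstract (not the paper's estimator, which additionally corrects for publication bias), note that scaling a sample by a factor k multiplies the test statistic's noncentrality by sqrt(k). A minimal sketch in Python:

```python
# Sketch: power of a two-sided large-sample t/z-test as a function of
# the noncentrality of the test statistic, which grows with sqrt(n).
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(noncentrality, alpha=0.05):
    z = 1.959963984540054  # two-sided 5% critical value
    return norm_cdf(noncentrality - z) + norm_cdf(-noncentrality - z)

# A hypothetical study whose true effect gives a noncentrality of 2.0:
base = power(2.0)
# Doubling the sample size scales the noncentrality by sqrt(2):
doubled = power(2.0 * math.sqrt(2.0))
print(base, doubled)
```

Here the power gain from doubling n is large because the example study starts near 50% power; the paper's point is that for the average published RCT the gain is far smaller.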
 [2] arXiv:2406.13395 [pdf, other]

Title: Bayesian Inference for Multidimensional Welfare Comparisons
Subjects: Econometrics (econ.EM)
Using both single-index measures and stochastic dominance concepts, we show how Bayesian inference can be used to make multivariate welfare comparisons. A four-dimensional distribution for the well-being attributes income, mental health, education, and happiness is estimated via Bayesian Markov chain Monte Carlo, using unit-record data taken from the Household, Income and Labour Dynamics in Australia survey. Marginal distributions of beta and gamma mixtures and discrete ordinal distributions are combined using a copula. Improvements in both well-being generally and poverty magnitude are assessed using posterior means of single-index measures and posterior probabilities of stochastic dominance. The conditions for stochastic dominance depend on the class of utility functions assumed to define a social welfare function and on the number of attributes in the utility function. Three classes of utility functions are considered, and posterior probabilities of dominance are computed for one-, two-, and four-attribute utility functions over three time intervals within the period 2001 to 2019.
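The copula construction described above can be sketched in a few lines. The marginals below (exponential and Beta(2,1)) are stand-ins chosen because they have closed-form quantile functions; they are not the paper's beta/gamma mixtures, and the correlation value is hypothetical:

```python
# Sketch: a Gaussian copula couples two arbitrary marginals by pushing
# correlated normals through the normal CDF (giving dependent uniforms)
# and then through each marginal's quantile function.
import math, random

random.seed(3)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rho = 0.7  # assumed copula correlation
draws = []
for _ in range(20000):
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho * rho) * random.gauss(0, 1)
    u1, u2 = norm_cdf(z1), norm_cdf(z2)          # dependent uniforms
    income = -math.log(1 - u1)                   # exponential(1) quantile (stand-in)
    happiness = u2 ** 0.5                        # Beta(2,1) quantile (stand-in)
    draws.append((income, happiness))

# Each marginal keeps its own shape while the copula induces dependence:
mean_income = sum(d[0] for d in draws) / len(draws)
print(mean_income)  # should be near the exponential(1) mean of 1
```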
 [3] arXiv:2406.13563 [pdf, other]

Title: Serotonin as a Creativity Pump
Comments: 18 pages, 17 figures
Subjects: General Economics (econ.GN)
The location of Western-Civilization-defined nations in Europe and the United States within the largest global pollen environments on the planet is proposed as a key factor in their success. Environments with dense pollen concentrations cause large swings in serum histamine, directly driving reductions and increases in brain serotonin, i.e., a larger serotonin slope, which is linked to higher levels of creativity. The pollen ecosystem in northern-latitude nations is thus considered the hidden driver of these populations' success, as the biochemical interaction between histamine and serotonin creates a "creativity pump" that is proposed as the fundamental driver of intelligence in micro and macro human populations.
 [4] arXiv:2406.13749 [pdf, html, other]

Title: Combining Combined Forecasts: a Network Approach
Comments: WP version 2024-06
Subjects: Theoretical Economics (econ.TH)
This study investigates the practice of experts aggregating forecasts before informing a decision-maker. The significance of this subject extends to various contexts where experts report their assessments to a decision-maker after discussions with peers. My findings show that, irrespective of the information structure, aggregation rules introduce no bias into decision-making in expected terms. The concern, however, revolves around variance. When experts are equally precise and the pairwise correlation of forecasts is the same across all pairs of experts, the network structure plays a pivotal role in decision-making variance. For classical structures, I show that star networks exhibit the highest variance, in contrast with $d$-regular networks, which achieve zero variance, emphasizing their efficiency. Additionally, employing the Poisson random graph model under the assumptions of a large network size and a small connection probability, the results indicate that both the expected network bias and its variance converge to zero as the network size becomes sufficiently large. These insights enhance the understanding of decision-making under different information structures, network structures, and aggregation rules, and enrich the literature on combining forecasts by exploring the effects of prior network communication on decision-making.
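A toy calculation illustrates why the star topology is the worst case. The protocol below (each expert reports the average of the unbiased signals in their closed neighborhood, and the decision-maker averages all reports) is a simplifying assumption, not the paper's exact model:

```python
# Sketch: the variance of the decision-maker's estimate equals sigma^2
# times the sum of squared effective weights each signal receives.
def effective_weights(neighborhoods, n):
    # neighborhoods[i] = closed neighborhood of expert i
    w = [0.0] * n
    for nb in neighborhoods:
        for j in nb:
            # each report is an average over nb, and the decision-maker
            # averages the n reports, so signal j gets 1/(len(nb)*n)
            w[j] += 1.0 / len(nb) / n
    return w

def dm_variance(neighborhoods, n, sigma2=1.0):
    return sigma2 * sum(wi * wi for wi in effective_weights(neighborhoods, n))

n = 9
star = [list(range(n))] + [[0, i] for i in range(1, n)]    # node 0 is the hub
cycle = [[(i - 1) % n, i, (i + 1) % n] for i in range(n)]  # 2-regular ring

# The star overweights the hub's signal, inflating variance; the regular
# ring weights every signal equally and reproduces the 1/n benchmark.
print(dm_variance(star, n), dm_variance(cycle, n))
```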
 [5] arXiv:2406.13783 [pdf, html, other]

Title: Nash equilibria of quasi-supermodular games
Subjects: Theoretical Economics (econ.TH); Computer Science and Game Theory (cs.GT)
We prove three results on the existence and structure of Nash equilibria for quasi-supermodular games. One theorem is purely order-theoretic; the other two involve topological hypotheses. Our topological results generalize Zhou's theorem (for supermodular games) and Calciano's theorem.
 [6] arXiv:2406.13826 [pdf, html, other]

Title: Testing identification in mediation and dynamic treatment models
Comments: 49 pages, 4 figures
Subjects: Econometrics (econ.EM); Methodology (stat.ME)
We propose a test for the identification of causal effects in mediation and dynamic treatment models that is based on two sets of observed variables, namely covariates to be controlled for and suspected instruments, building on the test by Huber and Kueck (2022) for single treatment models. We consider models with a sequential assignment of a treatment and a mediator to assess the direct treatment effect (net of the mediator), the indirect treatment effect (via the mediator), or the joint effect of both treatment and mediator. We establish testable conditions for identifying such effects in observational data. These conditions jointly imply (1) the exogeneity of the treatment and the mediator conditional on covariates and (2) the validity of distinct instruments for the treatment and the mediator, meaning that the instruments do not directly affect the outcome (other than through the treatment or mediator) and are unconfounded given the covariates. Our framework extends to post-treatment sample selection or attrition problems when replacing the mediator by a selection indicator for observing the outcome, enabling joint testing of the selectivity of treatment and attrition. We propose a machine learning-based test to control for covariates in a data-driven manner and analyze its finite sample performance in a simulation study. Additionally, we apply our method to Slovak labor market data and find that our testable implications are not rejected for a sequence of training programs typically considered in dynamic treatment evaluations.
 [7] arXiv:2406.13969 [pdf, html, other]

Title: Nonparametric Analysis of Random Utility Models Robust to Nontransitive Preferences
Subjects: Theoretical Economics (econ.TH)
The Random Utility Model (RUM) is the gold standard for describing the behavior of a population of consumers. The RUM operates under the assumption of transitivity in consumers' preference relations, but the empirical literature has regularly documented its violation. In this paper, I introduce the Random Preference Model (RPM), a novel framework for understanding choice behavior in a population, akin to RUMs, which preserves monotonicity and accommodates nontransitive behaviors. The primary objective is to test the null hypothesis that a population of rational consumers generates cross-sectional demand distributions, without imposing constraints on the unobserved heterogeneity or the number of goods. I analyze data from the UK Family Expenditure Survey and find evidence that contradicts RUMs and supports RPMs. These findings underscore RPMs' flexibility and capacity to explain a wider spectrum of consumer behaviors than RUMs. This paper generalizes the stochastic revealed preference methodology of McFadden & Richter (1990) for finite choice sets to settings with nontransitive and possibly nonconvex preference relations.
 [8] arXiv:2406.14046 [pdf, html, other]

Title: Estimating Time-Varying Parameters of Various Smoothness in Linear Models via Kernel Regression
Subjects: Econometrics (econ.EM)
We consider estimating nonparametric time-varying parameters in linear models using kernel regression. Our contributions are twofold. First, we consider a broad class of time-varying parameters, including deterministic smooth functions, the rescaled random walk, structural breaks, the threshold model, and their mixtures. We show that these time-varying parameters can be consistently estimated by kernel regression. Our analysis exploits the smoothness of time-varying parameters rather than their specific form. The second contribution is to reveal that the bandwidth used in kernel regression determines the trade-off between the rate of convergence and the size of the class of time-varying parameters that can be estimated. An implication of our result is that the bandwidth should be proportional to $T^{-1/2}$ if the time-varying parameter follows the rescaled random walk, where $T$ is the sample size. We propose a specific choice of the bandwidth that accommodates a wide range of time-varying parameter models. An empirical application shows that the kernel-based estimator with this choice can capture the random-walk dynamics in time-varying parameters.
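A minimal sketch of the kernel idea, with a simulated smooth coefficient path and an ad-hoc bandwidth constant (the paper's specific bandwidth choice is not reproduced here):

```python
# Sketch: local-constant kernel estimation of a time-varying slope
# beta_t in y_t = beta_t * x_t + e_t, with bandwidth ~ T^(-1/2)
# in rescaled time t/T.
import math, random

random.seed(0)
T = 400
beta = [0.5 + 1.5 * (t / T) for t in range(T)]  # smooth trending coefficient
x = [random.gauss(0, 1) for _ in range(T)]
y = [beta[t] * x[t] + 0.3 * random.gauss(0, 1) for t in range(T)]

def kernel_beta(t0, h):
    # Kernel-weighted least squares of y on x, localized around t0
    num = den = 0.0
    for t in range(T):
        u = (t - t0) / (T * h)
        if abs(u) <= 1.0:
            w = 0.75 * (1.0 - u * u)  # Epanechnikov kernel
            num += w * x[t] * y[t]
            den += w * x[t] * x[t]
    return num / den

h = 2.0 / math.sqrt(T)  # bandwidth proportional to T^(-1/2); constant is ad hoc
est = kernel_beta(T // 2, h)
print(est, beta[T // 2])  # estimate should be near the true mid-sample value
```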
 [9] arXiv:2406.14174 [pdf, html, other]

Title: Redistribution Through Market Segmentation
Subjects: Theoretical Economics (econ.TH)
Consumer data can be used to sort consumers into different market segments, allowing a monopolist to charge a different price in each segment. We study consumer-optimal segmentations with redistributive concerns, i.e., those that prioritize poorer consumers. Such segmentations are efficient but may grant additional profits to the monopolist compared to consumer-optimal segmentations without redistributive concerns. We characterize the markets for which this is the case and provide a procedure for constructing optimal segmentations given a strong redistributive motive. For the remaining markets, we show that the optimal segmentation is surprisingly simple: it generates one segment with a discount price and one segment with the same price that would be charged if there were no segmentation. We also show that a regulator seeking to implement the redistributive-optimal segmentation does not need to observe precisely the composition and frequency of each market segment; the aggregate distribution over prices suffices.
 [10] arXiv:2406.14198 [pdf, html, other]

Title: Guaranteed shares of benefits and costs
Subjects: Theoretical Economics (econ.TH); Computer Science and Game Theory (cs.GT)
In a general fair division model with transferable utilities, we discuss endogenous lower and upper guarantees on individual shares of benefits or costs. Like the more familiar exogenous bounds on individual shares described by an outside option or a stand-alone utility, these guarantees depend on one's own type but not on others' types, only on their number and the range of types. Keeping the range from worst share to best share as narrow as the physical constraints of the model permit still leaves a large menu of tight guarantee functions. We describe these design options in detail for several iconic problems where each tight pair of guarantees has a clear normative meaning: the allocation of indivisible goods or costly chores, cost sharing of a public facility, and the exploitation of a commons with substitute or complementary inputs. The corresponding benefit or cost functions are all sub- or supermodular, and for this class we characterise the set of minimal upper and maximal lower guarantees in all two-agent problems.
 [11] arXiv:2406.14238 [pdf, other]

Title: The Economics of Coal Phase-outs: Auctions as a Novel Policy Instrument for the Energy Transition
Journal-ref: Climate Policy, pp. 1-12 (2024)
Subjects: General Economics (econ.GN)
The combustion of coal, the most polluting form of energy, must be significantly curtailed to limit the global average temperature increase to well below 2°C. The effectiveness of carbon pricing is frequently undermined by suboptimally low prices and rigid market structures. Consequently, alternative approaches such as compensation for the early closure of coal-fired power plants are being considered. While bilateral negotiations can lead to excessive compensation due to asymmetric information, a competitive auction can discover the true cost of closure and help allocate funds more efficiently and transparently. Since Germany is the only country to date to have implemented a coal phase-out auction, we use it to analyse the merits and demerits of the policy, drawing comparisons with other countries that have phased out coal through other means. The German experience with coal phase-out auctions illustrates the necessity of considering additionality and interaction with existing climate policies, managing dynamic incentives, and evaluating impacts on security of supply. While auctions theoretically have attractive properties, in practice their design must address these concerns to unlock the full benefits. Where auctions are not appropriate due to concentrated coal plant ownership, alternative strategies include enhanced incentives for the scrappage and repurposing of coal assets.
 [12] arXiv:2406.14380 [pdf, html, other]

Title: Estimating Treatment Effects under Recommender Interference: A Structured Neural Networks Approach
Subjects: Econometrics (econ.EM); Machine Learning (cs.LG); Methodology (stat.ME)
Recommender systems are essential to content-sharing platforms, curating personalized content. To evaluate updates of recommender systems targeting content creators, platforms frequently run creator-side randomized experiments to estimate the treatment effect, defined as the difference in outcomes when a new (vs. the status quo) algorithm is deployed on the platform. We show that the standard difference-in-means estimator can lead to a biased treatment effect estimate. This bias arises from recommender interference, which occurs when treated and control creators compete for exposure through the recommender system. We propose a "recommender choice model" that captures how an item is chosen from a pool comprising both treated and control content items. By combining a structural choice model with neural networks, the framework directly models the interference pathway in a micro-founded way while accounting for rich viewer-content heterogeneity. Using the model, we construct a double/debiased estimator of the treatment effect that is consistent and asymptotically normal. We demonstrate its empirical performance with a field experiment on the Weixin short-video platform: besides the standard creator-side experiment, we carry out a costly blocked double-sided randomization design to obtain a benchmark estimate free of interference bias. We show that the proposed estimator significantly reduces the bias in treatment effect estimates relative to the standard difference-in-means estimator.
 [13] arXiv:2406.14382 [pdf, html, other]

Title: Identification of fiscal SVAR-IVs in small open economies
Subjects: General Economics (econ.GN)
We propose a novel instrumental variable to identify fiscal shocks in small open economies. Under the assumptions that unexpected changes in trading partners' output correlate with the output of an open economy, and that unexpected fiscal shocks of a small economy are unrelated to its trading partners' forecast errors, we use forecast errors of trading partner economies to proxy unexpected shocks in domestic output. We show that this instrument is relevant and find evidence supporting its exogeneity. Using this IV strategy, we find that the two-year cumulative spending multiplier is around 1 for Canada and 0.5 for euro area small open economies.
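The just-identified IV logic behind this strategy can be illustrated with a toy data-generating process (all coefficients and variable definitions below are hypothetical, chosen only to show the mechanics):

```python
# Sketch: with one external instrument z for an endogenous regressor x,
# the just-identified IV estimate is cov(z, y) / cov(z, x), while OLS
# is biased by the unobserved confounder.
import random

random.seed(1)
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]  # partners' forecast errors (instrument)
u = [random.gauss(0, 1) for _ in range(n)]  # unobserved confounder
x = [0.8 * z[i] + u[i] + random.gauss(0, 1) for i in range(n)]       # domestic output
y = [1.0 * x[i] + 2.0 * u[i] + random.gauss(0, 1) for i in range(n)] # outcome

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols = cov(x, y) / cov(x, x)  # biased upward by the confounder
iv = cov(z, y) / cov(z, x)   # consistent for the true coefficient 1.0
print(ols, iv)
```

The instrument is valid here by construction (z is independent of u); in the paper, that exogeneity is what the trading-partner assumptions are meant to deliver.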
New submissions for Friday, 21 June 2024 (showing 13 of 13 entries)
 [14] arXiv:2406.11308 (cross-list from cs.LG) [pdf, html, other]

Title: Management Decisions in Manufacturing using Causal Machine Learning - To Rework, or not to Rework?
Comments: 30 pages, 10 figures
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Econometrics (econ.EM); Machine Learning (stat.ML)
In this paper, we present a data-driven model for estimating optimal rework policies in manufacturing systems. We consider a single production stage within a multi-stage, lot-based system that allows for optional rework steps. While the rework decision depends on an intermediate state of the lot and system, the final product inspection, and thus the assessment of the actual yield, is delayed until production is complete. Repair steps are applied uniformly to the lot, potentially improving some of the individual items while degrading others. The challenge is thus to balance potential yield improvement against the rework costs incurred. Given the inherently causal nature of this decision problem, we propose a causal model to estimate yield improvement. We apply methods from causal machine learning, in particular double/debiased machine learning (DML) techniques, to estimate conditional treatment effects from data and derive policies for rework decisions. We validate our decision model using real-world data from optoelectronic semiconductor manufacturing, achieving a yield improvement of 2-3% during the color-conversion process of white light-emitting diodes (LEDs).
 [15] arXiv:2406.13166 (cross-list from cs.LG) [pdf, other]

Title: Enhancing supply chain security with automated machine learning
Comments: 22 pages
Subjects: Machine Learning (cs.LG); General Economics (econ.GN); Optimization and Control (math.OC)
This study tackles the complexities of global supply chains, which are increasingly vulnerable to disruptions caused by port congestion, material shortages, and inflation. To address these challenges, we explore the application of machine learning methods, which excel at predicting and optimizing solutions based on large datasets. Our focus is on enhancing supply chain security through fraud detection, maintenance prediction, and material backorder forecasting. We introduce an automated machine learning framework that streamlines data analysis, model construction, and hyperparameter optimization for these tasks. By automating these processes, our framework improves the efficiency and effectiveness of supply chain security measures. Our research identifies key factors that influence machine learning performance, including sampling methods, categorical encoding, feature selection, and hyperparameter optimization, and demonstrates the importance of considering these factors when applying machine learning to supply chain challenges. Traditional mathematical programming models often struggle to cope with the complexity of large-scale supply chain problems; our study shows that machine learning methods can provide a viable alternative, particularly when dealing with extensive datasets and complex patterns. The automated machine learning framework presented here thus offers a novel approach to supply chain security, and its comprehensive automation of machine learning processes makes it a valuable contribution to supply chain management.
 [16] arXiv:2406.13726 (cross-list from math.OC) [pdf, html, other]

Title: Global Solutions to Master Equations for Continuous Time Heterogeneous Agent Macroeconomic Models
Subjects: Optimization and Control (math.OC); Machine Learning (cs.LG); General Economics (econ.GN)
We propose and compare new global solution algorithms for continuous time heterogeneous agent economies with aggregate shocks. First, we approximate the agent distribution so that equilibrium in the economy can be characterized by a high- but finite-dimensional nonlinear partial differential equation. We consider different approximations: discretizing the number of agents, discretizing the agent state variables, and projecting the distribution onto a finite set of basis functions. Second, we represent the value function using a neural network and train it to solve the differential equation using deep learning tools. We refer to the solution as an Economic Model Informed Neural Network (EMINN). The main advantage of this technique is that it allows us to find global solutions to high dimensional, nonlinear problems. We demonstrate our algorithm by solving important models in the macroeconomics and spatial literatures (e.g. Krusell and Smith (1998), Khan and Thomas (2007), Bilal (2023)).
 [17] arXiv:2406.13835 (cross-list from cs.GT) [pdf, html, other]

Title: Bundling in Oligopoly: Revenue Maximization with Single-Item Competitors
Comments: Accepted to EC 2024
Subjects: Computer Science and Game Theory (cs.GT); Theoretical Economics (econ.TH)
We consider a principal seller with $m$ heterogeneous products to sell to an additive buyer over independent items. The principal can offer an arbitrary menu of product bundles but faces competition from smaller, more agile single-item sellers. The single-item sellers choose their prices after the principal commits to a menu, potentially undercutting the principal's offerings. We explore to what extent the principal can leverage the ability to bundle products together to extract revenue.
Any choice of menu by the principal induces an oligopoly pricing game between the single-item sellers, which may have multiple equilibria. When there is only a single item, this model reduces to Bertrand competition, for which the principal's revenue is $0$ at any equilibrium, so we assume that no single item's value is too dominant. We establish an upper bound on the principal's optimal revenue at every equilibrium: the expected welfare after truncating each item's value at its revenue-maximizing price. Under a technical condition on the value distributions - that the monopolist's revenue is sufficiently sensitive to price - we show that the principal seller can simply price the grand bundle and ensure (in any equilibrium) a constant approximation to this bound (and hence to the optimal revenue). We also show that for some value distributions violating our condition, grand-bundle pricing does not yield a constant approximation to the optimal revenue in any equilibrium.
 [18] arXiv:2406.13882 (cross-list from cs.LG) [pdf, other]

Title: Allocation Requires Prediction Only if Inequality Is Low
Comments: Appeared in the Forty-first International Conference on Machine Learning (ICML), 2024
Subjects: Machine Learning (cs.LG); Computers and Society (cs.CY); Theoretical Economics (econ.TH)
Algorithmic predictions are emerging as a promising solution concept for efficiently allocating societal resources. Fueling their use is an underlying assumption that such systems are necessary to identify individuals for interventions. We propose a principled framework for assessing this assumption: using a simple mathematical model, we evaluate the efficacy of prediction-based allocations in settings where individuals belong to larger units such as hospitals, neighborhoods, or schools. We find that prediction-based allocations outperform baseline methods using aggregate unit-level statistics only when between-unit inequality is low and the intervention budget is high. Our results hold for a wide range of settings for the price of prediction, treatment effect heterogeneity, and the learnability of unit-level statistics. Taken together, our results highlight the potential limits to improving the efficacy of interventions through prediction.
 [19] arXiv:2406.14145 (cross-list from stat.AP) [pdf, html, other]

Title: Temperature in the Iberian Peninsula: Trend, seasonality, and heterogeneity
Comments: 49 pages, 20 figures
Subjects: Applications (stat.AP); Econometrics (econ.EM)
In this paper, we propose fitting unobserved component models to represent the dynamic evolution of bivariate systems of centre and log-range temperatures obtained monthly from minimum/maximum temperatures observed at a given location. In doing so, the centre and log-range temperatures are decomposed into potentially stochastic trends, seasonal, and transitory components. Since our model encompasses deterministic trends and seasonal components as limiting cases, we contribute to the debate on whether stochastic or deterministic components better represent the trend and seasonal components. The methodology is implemented for centre and log-range temperatures observed in four locations in the Iberian Peninsula, namely Barcelona, Coruña, Madrid, and Seville. We show that, at each location, the centre temperature can be represented by a smooth integrated random walk with time-varying slope, while a stochastic level better represents the log-range. We also show that centre and log-range temperatures are unrelated. The methodology is then extended to simultaneously model centre and log-range temperatures observed at several locations in the Iberian Peninsula. We fit a multilevel dynamic factor model to extract potential commonalities among centre (log-range) temperatures while also allowing for heterogeneity across different areas of the Iberian Peninsula. We show that, although the commonality in trends of average temperature is considerable, the regional components are also relevant.
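A bare-bones sketch of filtering a stochastic level of the kind described above. The pure local-level specification and the noise variances below are assumptions for illustration, not the paper's fitted bivariate model:

```python
# Sketch: Kalman filtering of a local-level unobserved-components model
#   y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t,
# i.e., a stochastic level observed with noise.
import random

random.seed(2)
T, q, r = 300, 0.05, 1.0  # state-noise and observation-noise variances (assumed)
mu, y = [0.0], []
for t in range(T):
    mu.append(mu[-1] + random.gauss(0, q ** 0.5))  # random-walk level
    y.append(mu[-1] + random.gauss(0, r ** 0.5))   # noisy observation

def kalman_level(y, q, r, a0=0.0, p0=10.0):
    a, p, filtered = a0, p0, []
    for obs in y:
        p = p + q                 # predict: random walk gains variance q
        k = p / (p + r)           # Kalman gain
        a = a + k * (obs - a)     # update with the new observation
        p = (1.0 - k) * p
        filtered.append(a)
    return filtered

level = kalman_level(y, q, r)
# The filtered level should track the true random-walk component:
err = sum(abs(level[t] - mu[t + 1]) for t in range(T)) / T
print(err)
```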
Cross submissions for Friday, 21 June 2024 (showing 6 of 6 entries)
 [20] arXiv:2205.10310 (replaced) [pdf, html, other]

Title: Treatment Effects in Bunching Designs: The Impact of Mandatory Overtime Pay on Hours
Subjects: Econometrics (econ.EM)
This paper studies the identifying power of bunching at kinks when the researcher does not assume a parametric choice model. I find that in a general choice model, identifying the average causal response to the policy switch at a kink amounts to confronting two extrapolation problems, each about the distribution of a counterfactual choice that is observed only in a censored manner. I apply this insight to partially identify the effect of overtime pay regulation on the hours of U.S. workers using administrative payroll data, assuming that each distribution satisfies a weak nonparametric shape constraint in the region where it is not observed. The resulting bounds are informative and indicate a relatively small elasticity of demand for weekly hours, addressing a longstanding question about the causal effects of the overtime mandate.
 [21] arXiv:2303.14298 (replaced) [pdf, html, other]

Title: Sensitivity Analysis in Unconditional Quantile Effects
Subjects: Econometrics (econ.EM)
This paper proposes a framework to analyze the effects of counterfactual policies on the unconditional quantiles of an outcome variable. For a given counterfactual policy, we obtain identified sets for the effect of both marginal and global changes in the proportion of treated individuals. To conduct a sensitivity analysis, we introduce the quantile breakdown frontier, a curve that (i) indicates whether a sensitivity analysis is possible, and (ii) when it is possible, quantifies the amount of selection bias consistent with a given conclusion of interest across different quantiles. To illustrate our method, we perform a sensitivity analysis on the effect of unionizing low-income workers on the quantiles of the distribution of (log) wages.
 [22] arXiv:2305.19089 (replaced) [pdf, html, other]

Title: Impulse Response Analysis of Structural Nonlinear Time Series Models
Comments: 58 pages with appendices, 16 color figures
Subjects: Econometrics (econ.EM)
This paper proposes a semiparametric sieve approach to estimate impulse response functions of nonlinear time series within a general class of structural autoregressive models. We prove that a two-step procedure can flexibly accommodate nonlinear specifications while avoiding the need to choose fixed parametric forms. Sieve impulse responses are proven to be consistent by deriving uniform estimation guarantees, and an iterative algorithm makes them straightforward to compute in practice. With simulations, we show that the proposed semiparametric approach proves effective against misspecification while suffering only minor efficiency losses. In a US monetary policy application, we find that the pointwise sieve GDP response associated with an interest rate increase is larger than that of a linear model. Finally, in an analysis of interest rate uncertainty shocks, sieve responses imply significantly more substantial contractionary effects on both production and inflation.
 [23] arXiv:2403.09265 (replaced) [pdf, html, other]

Title: Zonal vs. Nodal Pricing: An Analysis of Different Pricing Rules in the German Day-Ahead Market
Comments: 36 pages, 7 figures
Subjects: General Economics (econ.GN)
The European electricity market is based on large pricing zones with a uniform day-ahead price. The energy transition leads to changes in supply and demand and to increasing redispatch costs. In an attempt to ensure efficient market clearing and congestion management, the EU Commission has mandated the Bidding Zone Review (BZR) to reevaluate the configuration of European bidding zones. Based on a unique data set published in the context of the BZR for the target year 2025, we compare various pricing rules for the German power market. We compare market clearing and pricing for different zonal and nodal models, including their generation costs and associated redispatch costs. In numerical experiments with this data set, the differences in average prices across zones are small. Congestion arises as well, but not necessarily on the cross-zonal interconnectors. The total costs across different configurations are similar, and the reduction in the standard deviation of prices is also small. This might differ under other load and generation scenarios, but the BZR data set is important, as it was created to inform a decision about splitting the existing bidding zones. Nodal pricing rules lead to the lowest total cost. We also evaluate differences among nodal pricing rules with respect to the necessary uplift payments, which is relevant in the context of the current discussion on non-uniform pricing in the EU. While the study focuses on Germany, the analysis is relevant beyond it and feeds into the broader discussion about pricing rules in non-convex markets.
 [24] arXiv:2405.04816 (replaced) [pdf, html, other]

Title: Testing the Fairness-Improvability of Algorithms
Subjects: Econometrics (econ.EM); Data Structures and Algorithms (cs.DS); Applications (stat.AP)
Many organizations use algorithms that have a disparate impact, i.e., the benefits or harms of the algorithm fall disproportionately on certain social groups. Addressing an algorithm's disparate impact can be challenging, especially because it is often unclear whether reducing this impact is possible without sacrificing other important objectives of the organization, such as accuracy or profit. Establishing the improvability of algorithms with respect to multiple criteria is of both conceptual and practical interest: in many settings, disparate impact that would otherwise be prohibited under US federal law is permissible if it is necessary to achieve a legitimate business interest. The question is how a policymaker can formally substantiate, or refute, this necessity defense. In this paper, we provide an econometric framework for testing the hypothesis that it is possible to improve on the fairness of an algorithm without compromising on other pre-specified objectives. Our proposed test is simple to implement and can be applied under any exogenous constraint on the algorithm space. We establish the large-sample validity and consistency of our test and illustrate its practical application by evaluating a healthcare algorithm originally considered by Obermeyer et al. (2019). In this application, we reject the null hypothesis that it is not possible to reduce the algorithm's disparate impact without compromising on the accuracy of its predictions.
 [25] arXiv:2406.09734 (replaced) [pdf, html, other]

Title: Embracing the Enemy
Subjects: Theoretical Economics (econ.TH)
We study an organization with a principal and two agents. All three have a long-run agenda which drives their repeated interactions. The principal can influence the competition for agency by endorsing an agent. Her agenda is more aligned with her "friend" than with her "enemy." Even when fully aligned with the friend, the principal embraces the enemy by persistently endorsing him once an initial "cordon sanitaire" to exclude the enemy breaks exogenously. A dynamically optimizing principal with an extreme agenda either implements the commitment solution or reverts to static Nash. For less extreme principals, losing commitment power has more gradual effects.
 [26] arXiv:2401.11568 (replaced) [pdf, html, other]

Title: A Note on the Stability of Monotone Markov Chains
Subjects: Probability (math.PR); Theoretical Economics (econ.TH)
This note studies monotone Markov chains, a subclass of Markov chains with extensive applications in operations research and economics. While the conditions that ensure the global stability of these chains are well studied, establishing them often relies on verifying a certain splitting condition. We address the challenge of verifying the splitting condition by introducing simple, readily applicable conditions that ensure global stability. The simplicity of these conditions is demonstrated through various examples, including autoregressive processes, portfolio allocation problems, and resource allocation dynamics.
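As a minimal illustration of the kind of global stability at issue (this sketch does not reproduce the paper's formal conditions), consider the AR(1) chain X_{t+1} = ρ X_t + ε_t with |ρ| < 1: it is monotone (larger starting points give stochastically larger paths), and chains started far apart converge to the same stationary distribution N(0, σ²/(1−ρ²)).

```python
# Two ensembles of AR(1) paths with very different initial conditions,
# driven by the same shocks -- a coupling construction often used in
# stability proofs. With common shocks the gap between the ensembles
# contracts deterministically at rate rho per step.
import numpy as np

rng = np.random.default_rng(0)
rho, sigma, T, n_paths = 0.8, 1.0, 2000, 5000

eps = rng.normal(0.0, sigma, size=(T, n_paths))
x_lo = np.full(n_paths, -50.0)   # ensemble started far below zero
x_hi = np.full(n_paths, +50.0)   # ensemble started far above zero
for t in range(T):
    x_lo = rho * x_lo + eps[t]
    x_hi = rho * x_hi + eps[t]

# Gap after T steps is exactly rho**T * 100, i.e. numerically zero here,
# and the marginal distribution matches the stationary law.
print("max gap after T steps:", np.max(x_hi - x_lo))
print("empirical std:", x_hi.std(),
      "theoretical:", sigma / np.sqrt(1 - rho**2))
```

The forgetting of the initial condition shown here is what "global stability" delivers; the note's contribution is a set of easy-to-check conditions under which it holds for general monotone chains.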
 [27] arXiv:2402.09321 (replaced) [pdf, html, other]

Title: Collusion-Resilience in Transaction Fee Mechanism Design
Subjects: Computer Science and Game Theory (cs.GT); Theoretical Economics (econ.TH)
Users bid in a transaction fee mechanism (TFM) to get their transactions included and confirmed by a blockchain protocol. Roughgarden (EC'21) initiated the formal treatment of TFMs and proposed three requirements: user incentive compatibility (UIC), miner incentive compatibility (MIC), and a form of collusion-resilience called OCA-proofness. Ethereum's EIP-1559 mechanism satisfies all three properties simultaneously when there is no contention between transactions, but loses the UIC property when there are too many eligible transactions to fit in a single block. Chung and Shi (SODA'23) considered an alternative notion of collusion-resilience, called c-side-contract-proofness (c-SCP), and showed that, when there is contention between transactions, no TFM can satisfy UIC, MIC, and c-SCP for any c ≥ 1. OCA-proofness asserts that the users and a miner should not be able to "steal from the protocol." The c-SCP condition, on the other hand, requires that a coalition of a miner and a subset of users should not be able to profit through strategic deviations (whether at the expense of the protocol or of the users outside the coalition).
Our main result is the first proof that, when there is contention between transactions, no (possibly randomized) TFM in which users are expected to bid truthfully satisfies UIC, MIC, and OCA-proofness. This result resolves the main open question in Roughgarden (EC'21). We also suggest several relaxations of the basic model that allow our impossibility result to be circumvented.
 [28] arXiv:2403.03589 (replaced) [pdf, html, other]

Title: Active Adaptive Experimental Design for Treatment Effect Estimation with Covariate Choices
Subjects: Methodology (stat.ME); Machine Learning (cs.LG); Econometrics (econ.EM); Machine Learning (stat.ML)
This study designs an adaptive experiment for efficiently estimating average treatment effects (ATEs). In each round of our adaptive experiment, an experimenter sequentially samples an experimental unit, assigns a treatment, and immediately observes the corresponding outcome. At the end of the experiment, the experimenter estimates an ATE using the gathered samples. The objective is to estimate the ATE with a smaller asymptotic variance. Existing studies have designed experiments that adaptively optimize the propensity score (treatment-assignment probability). As a generalization of such an approach, we propose optimizing the covariate density as well as the propensity score. First, we derive the efficient covariate density and propensity score that minimize the semiparametric efficiency bound, and find that optimizing both the covariate density and the propensity score reduces the semiparametric efficiency bound more effectively than optimizing only the propensity score. Next, we design an adaptive experiment using the efficient covariate density and propensity score sequentially estimated during the experiment. Lastly, we propose an ATE estimator whose asymptotic variance attains the minimized semiparametric efficiency bound.
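The idea of adaptively optimizing the propensity score can be sketched in a simplified, covariate-free setting (a hypothetical setup, not the paper's full procedure): without covariates, the variance-minimizing propensity is the Neyman allocation e* = σ₁/(σ₀ + σ₁), which can be estimated on the fly from accumulated outcomes, with the ATE then recovered by inverse-propensity weighting using the propensity in force at each assignment.

```python
# Covariate-free adaptive experiment: update the propensity toward the
# estimated Neyman allocation every 100 rounds, then estimate the ATE
# by inverse-propensity weighting (IPW). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
sigma0, sigma1, tau = 1.0, 3.0, 2.0      # true arm SDs and treatment effect
n = 20_000

e = 0.5                                  # start from balanced assignment
d = np.empty(n); y = np.empty(n); props = np.empty(n)
for i in range(n):
    props[i] = e                         # record propensity used this round
    d[i] = rng.random() < e              # assign treatment with prob. e
    y[i] = tau * d[i] + rng.normal(0.0, sigma1 if d[i] else sigma0)
    if i >= 100 and i % 100 == 0:        # re-estimate arm SDs, update e
        s1 = y[:i + 1][d[:i + 1] == 1].std()
        s0 = y[:i + 1][d[:i + 1] == 0].std()
        e = np.clip(s1 / (s0 + s1), 0.1, 0.9)

# IPW with the round-specific propensity remains unbiased under adaptive
# assignment (a martingale argument).
ate_hat = np.mean(d * y / props - (1 - d) * y / (1 - props))
print(f"final propensity ~ {props[-1]:.2f}"
      f" (Neyman target {sigma1 / (sigma0 + sigma1):.2f})")
print(f"IPW ATE estimate: {ate_hat:.2f} (truth {tau})")
```

The paper's contribution goes further by also choosing which covariate values to sample and by using an estimator that attains the semiparametric efficiency bound; this sketch only shows the propensity-optimization ingredient.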