Quantitative Finance
- [1] arXiv:2406.12983 [pdf, other]
Title: Reinforcement Learning for Corporate Bond Trading: A Sell Side Perspective
Comments: Working Paper
Subjects: Computational Finance (q-fin.CP); Machine Learning (cs.LG); Optimization and Control (math.OC)
A corporate bond trader in a typical sell-side institution such as a bank provides liquidity to market participants by buying/selling securities and maintaining an inventory. Upon receiving a request for a buy/sell price quote (RFQ), the trader provides a quote by adding a spread over a \textit{prevalent market price}. For illiquid bonds, the market price is harder to observe, and traders often resort to available benchmark bond prices (from sources such as MarketAxess, Bloomberg, etc.). In \cite{Bergault2023ModelingLI}, the concept of a \textit{Fair Transfer Price} for an illiquid corporate bond was introduced, derived from an infinite-horizon stochastic optimal control problem (maximizing the trader's expected P\&L, regularized by the quadratic variation). In this paper, we consider the same optimization objective; however, we approach the estimation of an optimal bid-ask spread quoting strategy in a data-driven manner and show that it can be learned using Reinforcement Learning. Furthermore, we perform extensive outcome analysis to examine the reasonableness of the trained agent's behavior.
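Editor's note: to make the quoting setup concrete, the minimal sketch below trains a tabular Q-learning agent that picks a half-spread for each incoming RFQ. The fill-probability model, mid-price dynamics, inventory penalty, and all parameter values are illustrative assumptions, not the paper's environment.

```python
# Hedged sketch: tabular Q-learning over discretized inventory and half-spreads.
# Reward = spread capture + inventory mark-to-market, minus a quadratic-variation-style penalty.
import numpy as np

rng = np.random.default_rng(0)

spreads = np.linspace(0.05, 0.50, 10)          # candidate half-spreads (price units, assumed)
inv_levels = np.arange(-5, 6)                  # discretized inventory states
Q = np.zeros((len(inv_levels), len(spreads)))
alpha, gamma, eps, phi = 0.1, 0.95, 0.1, 0.05  # learning rate, discount, exploration, risk penalty

def fill_prob(s):
    # assumed: wider quotes are filled less often
    return np.exp(-4.0 * s)

inv = 0
for t in range(50_000):
    i = inv + 5
    a = rng.integers(len(spreads)) if rng.random() < eps else int(Q[i].argmax())
    s = spreads[a]

    side = rng.choice([-1, 1])                 # client buys (+1) or sells (-1)
    filled = rng.random() < fill_prob(s)
    dmid = 0.02 * rng.standard_normal()        # mid-price move over the step (toy)

    new_inv = int(np.clip(inv - side * filled, -5, 5))
    pnl = s * filled + new_inv * dmid          # spread capture + inventory revaluation
    reward = pnl - phi * (new_inv * dmid) ** 2 # quadratic-variation-style regularization

    j = new_inv + 5
    Q[i, a] += alpha * (reward + gamma * Q[j].max() - Q[i, a])
    inv = new_inv

print("learned half-spread per inventory level:", spreads[Q.argmax(axis=1)])
```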
- [2] arXiv:2406.12995 [pdf, other]
Title: Essays on Responsible and Sustainable Finance
Comments: PhD thesis
Subjects: General Finance (q-fin.GN)
The dissertation consists of four essays on responsible and sustainable finance. I show that local communities should be seen as stakeholders in decisions made by corporations. In the first essay, I examine whether the imposition of fiduciary duty on municipal advisors affects bond yields and advising fees. Using a difference-in-differences analysis, I show that bond yields fall by 9\% after the imposition of the SEC Municipal Advisor Rule. In the second essay, we analyze the impact of USD 40 billion of corporate subsidies given by U.S. local governments on their borrowing costs. We find that winning counties experience a 15 bps increase in bond yield spread compared to losing counties. In the third essay, we provide new evidence that the bankruptcy filing of a locally-headquartered and publicly-listed manufacturing firm imposes externalities on local governments. Compared to matched counties with similar economic trends, municipal bond yields for affected counties increase by 10 bps within a year of the firm filing for bankruptcy. The final essay examines whether managers walk the talk on environmental and social issues. We train a deep-learning model on various corporate sustainability frameworks to construct a comprehensive Environmental and Social (E and S) dictionary. Using this dictionary, we find that the discussion of environmental topics in the earnings conference calls of U.S. public firms is associated with higher pollution abatement and more future green patents.
- [3] arXiv:2406.12999 [pdf, html, other]
Title: Robust convex risk measures
Subjects: Risk Management (q-fin.RM); Mathematical Finance (q-fin.MF)
We study the general properties of robust convex risk measures as worst-case values under uncertainty about random variables. We establish general concrete results regarding convex conjugates and sub-differentials. We refine some results on closed forms of worst-case law-invariant convex risk measures for two concrete classes of uncertainty sets: those based on the first two moments and those based on Wasserstein balls.
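Editor's note: for intuition on the two-moment case, the snippet below evaluates the classical Cantelli-bound worst-case quantile when only the mean and standard deviation of the loss are known. It is a textbook bound used here for illustration, not one of the paper's representation results.

```python
# Hedged sketch: worst-case quantile of a loss over all distributions with given
# mean and standard deviation (one-sided Chebyshev / Cantelli bound).
import math

def worst_case_var(mu, sigma, alpha):
    """Supremum of the alpha-quantile over {distributions with mean mu, std sigma}."""
    return mu + sigma * math.sqrt(alpha / (1.0 - alpha))

print(worst_case_var(mu=0.0, sigma=1.0, alpha=0.95))  # roughly 4.36
```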
- [4] arXiv:2406.13486 [pdf, html, other]
Title: Mean-Variance Portfolio Selection in Long-Term Investments with Unknown Distribution: Online Estimation, Risk Aversion under Ambiguity, and Universality of Algorithms
Comments: 21 pages, working paper, first draft version (may contain errors)
Subjects: Mathematical Finance (q-fin.MF); Machine Learning (cs.LG); Probability (math.PR); Portfolio Management (q-fin.PM)
The standard approach to constructing a mean-variance portfolio involves estimating the model parameters from collected samples. However, since the distribution of future data may not resemble that of the training set, the out-of-sample performance of the estimated portfolio is worse than that of a portfolio derived with the true parameters, which has prompted several innovations for better estimation. Instead of treating the data without a timing aspect, as in the common training-backtest approach, this paper adopts a perspective in which data are revealed gradually and continuously over time. The original model is recast into an online learning framework, free from any statistical assumptions, to propose a dynamic strategy of sequential portfolios whose empirical utility, Sharpe ratio, and growth rate asymptotically achieve those of the true portfolio derived with perfect knowledge of the future data.
When the distribution of future data has a normal shape, the growth rate of wealth is shown to increase by lifting the portfolio along the efficient frontier through the calibration of risk aversion. Since risk aversion cannot be appropriately predetermined, another proposed algorithm, which updates this coefficient over time, forms a dynamic strategy approaching the optimal empirical Sharpe ratio or growth rate associated with the true coefficient. The performance of these proposed strategies is universally guaranteed under specific stochastic markets. Furthermore, in stationary and ergodic markets, the so-called Bayesian strategy utilizing true conditional distributions, based on observed past market information during investment, almost surely does not perform better than the proposed strategies in terms of empirical utility, Sharpe ratio, or growth rate, which, in contrast, do not rely on conditional distributions.
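Editor's note: a minimal sketch of the online viewpoint, in which weights are updated by projected gradient ascent on the per-period mean-variance utility using only the realized return vector. The step size, risk-aversion value, and toy return generator are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: online gradient updates for a long-only mean-variance portfolio.
import numpy as np

rng = np.random.default_rng(1)

def project_to_simplex(v):
    # Euclidean projection onto the probability simplex (sort-based method)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

n, gamma, eta = 5, 3.0, 0.05               # assets, risk aversion, step size (assumed)
w = np.full(n, 1.0 / n)
wealth = 1.0
for t in range(2_000):
    r = 0.0005 + 0.01 * rng.standard_normal(n)     # stand-in return vector
    port_ret = float(w @ r)
    wealth *= 1.0 + port_ret
    grad = r - gamma * port_ret * r                # d/dw [ w.r - (gamma/2)(w.r)^2 ]
    w = project_to_simplex(w + eta * grad)         # stay long-only and fully invested

print("terminal wealth:", round(wealth, 4), "final weights:", np.round(w, 3))
```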
- [5] arXiv:2406.13508 [pdf, html, other]
Title: Pricing VIX options under the Heston-Hawkes stochastic volatility model
Comments: 28 pages
Subjects: Mathematical Finance (q-fin.MF)
We derive a semi-analytical pricing formula for European VIX call options under the Heston-Hawkes stochastic volatility model introduced in arXiv:2210.15343. This arbitrage-free model incorporates the volatility clustering feature by adding an independent compound Hawkes process to the Heston volatility. Using the Markov property of the exponential Hawkes process, an explicit expression for $\text{VIX}^2$ is derived as a linear combination of the variance and the Hawkes intensity. We apply qualitative ODE theory to study the existence of some generalized Riccati ODEs. Thereafter, we compute the joint characteristic function of the variance and the Hawkes intensity by exploiting the exponential affine structure of the model. Finally, the pricing formula is obtained by applying standard Fourier techniques.
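Editor's note: the Monte Carlo sketch below simulates a Heston variance path augmented by an independent compound Hawkes jump component and prices a crude VIX-call proxy from the average forward variance. All parameter values, the jump-size law, and the discretization are illustrative assumptions; this is not the paper's semi-analytical formula.

```python
# Hedged sketch: Euler simulation of Heston variance + compound Hawkes jumps.
import numpy as np

rng = np.random.default_rng(2)

T, n_steps, n_paths = 30 / 365, 300, 2_000
dt = T / n_steps
kappa, theta, xi, v0 = 3.0, 0.04, 0.4, 0.04                    # Heston variance parameters (assumed)
lam_bar, beta, alpha_j, mean_jump, lam0 = 1.0, 5.0, 2.0, 0.01, 1.0  # Hawkes parameters (assumed)

v = np.full(n_paths, v0)
lam = np.full(n_paths, lam0)
avg_var = np.zeros(n_paths)
for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal(n_paths)
    jumps = rng.random(n_paths) < lam * dt                     # Hawkes event indicator
    jump_size = rng.exponential(mean_jump, n_paths) * jumps    # compound jump added to variance
    v = np.maximum(v + kappa * (theta - v) * dt
                   + xi * np.sqrt(np.maximum(v, 0)) * dw + jump_size, 0.0)
    lam = lam_bar + (lam - lam_bar) * np.exp(-beta * dt) + alpha_j * jumps  # self-excitation
    avg_var += v / n_steps

vix_proxy = 100 * np.sqrt(avg_var)          # crude proxy for a 30-day VIX level (undiscounted)
K = 22.0
print("MC price of the VIX call proxy:", float(np.maximum(vix_proxy - K, 0).mean()))
```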
- [6] arXiv:2406.13539 [pdf, html, other]
Title: Robust Lambda-quantiles and extreme probabilities
Comments: 30 pages
Subjects: Mathematical Finance (q-fin.MF)
In this paper, we investigate robust models for $\Lambda$-quantiles with partial information regarding the loss distribution, where $\Lambda$-quantiles extend the classical quantiles by replacing the fixed probability level with a probability/loss function $\Lambda$. We find that, under some assumptions, the robust $\Lambda$-quantiles equal the $\Lambda$-quantiles of the extreme probabilities. This finding allows us to obtain the robust $\Lambda$-quantiles by applying results on robust quantiles from the literature. Our results are applied to uncertainty sets characterized by three different types of constraints: moment constraints, probability distance constraints via the Wasserstein metric, and marginal constraints in risk aggregation. We obtain explicit expressions for robust $\Lambda$-quantiles by deriving the extreme probabilities for each uncertainty set. These results are applied to optimal portfolio selection under model uncertainty.
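Editor's note: the snippet below computes a $\Lambda$-quantile of an empirical loss sample under one common convention, $q_\Lambda = \inf\{x : F(x) \ge \Lambda(x)\}$; conventions vary in the literature, and both the sample and the decreasing function $\Lambda$ here are illustrative assumptions.

```python
# Hedged sketch: empirical Lambda-quantile as the first crossing of the
# empirical CDF with a probability/loss function Lambda.
import numpy as np

rng = np.random.default_rng(3)
losses = rng.standard_normal(10_000)

def Lambda(x):
    # assumed: probability level that decreases with the loss level
    return np.clip(0.99 - 0.02 * x, 0.90, 0.99)

grid = np.linspace(losses.min(), losses.max(), 2_000)
F = np.searchsorted(np.sort(losses), grid, side="right") / losses.size  # empirical CDF
crossing = np.nonzero(F >= Lambda(grid))[0]
q_lambda = grid[crossing[0]] if crossing.size else np.inf
print("Lambda-quantile:", round(float(q_lambda), 4))
```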
- [7] arXiv:2406.13563 [pdf, other]
Title: Serotonin as a Creativity Pump
Comments: 18 pages, 17 figures
Subjects: General Economics (econ.GN)
The location of the nations identified with Western Civilization, in Europe and the United States, within the largest pollen environments on the planet is proposed as a key factor in their success. Environments with dense pollen concentrations cause large swings in serum histamine, which directly reduce and increase brain serotonin, i.e., a larger serotonin slope, linked to higher levels of creativity. The pollen ecosystem in northern-latitude nations is thus considered the hidden driver of these populations' success, as the biochemical interaction between histamine and serotonin creates a creativity pump that is proposed as the fundamental driver of intelligence in human populations at both the micro and macro level.
- [8] arXiv:2406.14238 [pdf, other]
Title: The Economics of Coal Phaseouts: Auctions as a Novel Policy Instrument for the Energy Transition
Journal-ref: Climate Policy, pp.1-12 (2024)
Subjects: General Economics (econ.GN)
The combustion of coal, the most polluting form of energy, must be significantly curtailed to limit global average temperature increase to well below 2 degrees C. The effectiveness of carbon pricing is frequently undermined by sub-optimally low prices and rigid market structures. Consequently, alternative approaches such as compensation for the early closure of coal-fired power plants are being considered. While bilateral negotiations can lead to excessive compensation due to asymmetric information, a competitive auction can discover the true cost of closure and help allocate funds more efficiently and transparently. Since Germany is the only country to date to have implemented a coal phaseout auction, we use it to analyse the merits and demerits of the policy, drawing comparisons with other countries that have phased out coal through other means. The German experience with coal phaseout auctions illustrates the necessity of considering additionality and interaction with existing climate policies, managing dynamic incentives, and evaluating impacts on security of supply. While auctions have attractive theoretical properties, in practice their design must address these concerns to unlock their full benefits. Where auctions are not appropriate due to a concentration in coal plant ownership, alternative strategies include enhanced incentives for scrappage and repurposing of coal assets.
- [9] arXiv:2406.14382 [pdf, html, other]
Title: Identification of fiscal SVAR-IVs in small open economies
Subjects: General Economics (econ.GN)
We propose a novel instrumental variable to identify fiscal shocks in small open economies. Under the assumptions that unexpected changes in trading partners' output correlate with the output of a small open economy, and that a small economy's unexpected fiscal shocks are unrelated to its trading partners' forecast errors, we use the forecast errors of trading-partner economies to proxy unexpected shocks to domestic output. We show that this instrument is relevant and find evidence supporting its exogeneity. Using this IV strategy, we find that the two-year cumulative spending multiplier is around 1 for Canada and 0.5 for euro-area small open economies.
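Editor's note: a stylized sketch of the external-instrument (proxy-SVAR) step on synthetic data: reduced-form VAR residuals are regressed on the instrument, and the fitted covariances, after normalization, give the relative impact responses to the identified shock. The data-generating process and one-lag VAR are illustrative assumptions, not the paper's estimation.

```python
# Hedged sketch: external-instrument identification in a tiny two-variable VAR.
import numpy as np

rng = np.random.default_rng(4)
T = 400
z = rng.standard_normal(T)                       # trading-partner forecast errors (instrument)
shock = z + 0.5 * rng.standard_normal(T)         # structural shock, correlated with z
other = rng.standard_normal(T)                   # other structural shock, uncorrelated with z

Y = np.zeros((T, 2))                             # columns: [spending, output]
for t in range(1, T):
    Y[t] = 0.5 * Y[t - 1] + np.array([0.3, 1.0]) * shock[t] + np.array([1.0, 0.2]) * other[t]

X = np.column_stack([np.ones(T - 1), Y[:-1]])    # VAR(1) regressors
B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
resid = Y[1:] - X @ B                            # reduced-form residuals

cov_uz = resid.T @ z[1:] / (T - 1)               # E[u_t z_t], proportional to the impact vector
impact = cov_uz / cov_uz[1]                      # normalize the output response to 1
print("relative impact responses [spending, output]:", np.round(impact, 3))
```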
New submissions for Friday, 21 June 2024 (showing 9 of 9 entries)
- [10] arXiv:2406.13166 (cross-list from cs.LG) [pdf, other]
Title: Enhancing supply chain security with automated machine learning
Comments: 22 pages
Subjects: Machine Learning (cs.LG); General Economics (econ.GN); Optimization and Control (math.OC)
This study tackles the complexities of global supply chains, which are increasingly vulnerable to disruptions caused by port congestion, material shortages, and inflation. To address these challenges, we explore the application of machine learning methods, which excel in predicting and optimizing solutions based on large datasets. Our focus is on enhancing supply chain security through fraud detection, maintenance prediction, and material backorder forecasting. We introduce an automated machine learning framework that streamlines data analysis, model construction, and hyperparameter optimization for these tasks. By automating these processes, our framework improves the efficiency and effectiveness of supply chain security measures. Our research identifies key factors that influence machine learning performance, including sampling methods, categorical encoding, feature selection, and hyperparameter optimization, and demonstrates the importance of considering these factors when applying machine learning to supply chain challenges. Traditional mathematical programming models often struggle to cope with the complexity of large-scale supply chain problems. Our study shows that machine learning methods can provide a viable alternative, particularly when dealing with extensive datasets and complex patterns. The automated machine learning framework presented in this study offers a novel approach to supply chain security; by comprehensively automating the machine learning workflow, it contributes to the existing body of knowledge in supply chain management.
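Editor's note: a compact illustration of the kind of automation described, covering categorical encoding, feature selection, and hyperparameter search for a tabular backorder-style task. Column names, the synthetic data, and search ranges are illustrative assumptions, not the paper's framework.

```python
# Hedged sketch: scikit-learn pipeline with encoding, feature selection, and random search.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(5)
n = 2_000
df = pd.DataFrame({
    "lead_time": rng.gamma(2.0, 5.0, n),
    "in_stock_qty": rng.poisson(20, n),
    "forecast_demand": rng.gamma(3.0, 10.0, n),
    "supplier": rng.choice(["A", "B", "C"], n),
})
y = ((df["forecast_demand"] > df["in_stock_qty"]) & (df["lead_time"] > 10)).astype(int)

pre = ColumnTransformer([
    ("num", StandardScaler(), ["lead_time", "in_stock_qty", "forecast_demand"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["supplier"]),
])
pipe = Pipeline([
    ("pre", pre),
    ("select", SelectKBest(f_classif)),
    ("clf", RandomForestClassifier(random_state=0)),
])
search = RandomizedSearchCV(
    pipe,
    {"select__k": [2, 3, 4, 5],
     "clf__n_estimators": [100, 300],
     "clf__max_depth": [None, 5, 10]},
    n_iter=8, cv=3, scoring="roc_auc", random_state=0,
)
search.fit(df, y)
print("best AUC:", round(search.best_score_, 3), "params:", search.best_params_)
```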
- [11] arXiv:2406.13726 (cross-list from math.OC) [pdf, html, other]
Title: Global Solutions to Master Equations for Continuous Time Heterogeneous Agent Macroeconomic Models
Subjects: Optimization and Control (math.OC); Machine Learning (cs.LG); General Economics (econ.GN)
We propose and compare new global solution algorithms for continuous time heterogeneous agent economies with aggregate shocks. First, we approximate the agent distribution so that equilibrium in the economy can be characterized by a high, but finite, dimensional non-linear partial differential equation. We consider different approximations: discretizing the number of agents, discretizing the agent state variables, and projecting the distribution onto a finite set of basis functions. Second, we represent the value function using a neural network and train it to solve the differential equation using deep learning tools. We refer to the solution as an Economic Model Informed Neural Network (EMINN). The main advantage of this technique is that it allows us to find global solutions to high dimensional, non-linear problems. We demonstrate our algorithm by solving important models in the macroeconomics and spatial literatures (e.g. Krusell and Smith (1998), Khan and Thomas (2007), Bilal (2023)).
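Editor's note: to make the deep-learning step concrete, the sketch below trains a network so that the residual of a toy stationary equation, $\tfrac{1}{2}\sigma^2 V''(x) + \mu V'(x) - \rho V(x) + f(x) = 0$, vanishes at sampled states. The equation, parameters, and flow term are illustrative assumptions, far simpler than the high-dimensional master equations in the paper.

```python
# Hedged sketch: residual-minimizing ("model-informed") neural-network PDE solver.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
mu, sigma, rho = 0.1, 0.3, 0.05            # toy coefficients (assumed)

def f(x):
    return torch.sin(x)                    # stand-in flow/utility term

for step in range(2_000):
    x = 4.0 * torch.rand(256, 1) - 2.0     # sampled states
    x.requires_grad_(True)
    V = net(x)
    Vx = torch.autograd.grad(V.sum(), x, create_graph=True)[0]
    Vxx = torch.autograd.grad(Vx.sum(), x, create_graph=True)[0]
    residual = 0.5 * sigma**2 * Vxx + mu * Vx - rho * V + f(x)
    loss = (residual**2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final residual loss:", float(loss))
```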
- [12] arXiv:2406.13789 (cross-list from physics.soc-ph) [pdf, html, other]
Title: Death, Taxes, and Inequality. Can a Minimal Model Explain Real Economic Inequality?
Comments: 16 pages, 6 figures, 1 table, 1 algorithm table
Subjects: Physics and Society (physics.soc-ph); Computational Finance (q-fin.CP)
Income inequalities and redistribution policies are modeled with a minimal, endogenous model of a simple foraging economy. The model is scaled to match human lifespans and overall death rates. Stochastic income distributions from the model are compared to empirical data from actual economies. Empirical data are fitted to implied distributions, providing the resolution necessary for comparison. The impacts of redistribution policies on total wealth, income distributions, and inequality are shown to be similar for the empirical data and the model. These comparisons enable detailed determinations of population welfare beyond what is possible with total wealth and inequality metrics alone. Estate taxes in the model appear quite effective in reducing inequality without reducing total wealth. Significant income inequality emerges in the model even for a population of equally capable individuals presented with equal opportunities. Stochastic population instability at both the high and low ends of infertility is also considered.
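Editor's note: a minimal agent-based sketch in the same spirit: stochastic foraging income, occasional death triggering an estate tax, and uniform redistribution of the proceeds. Rates and the income process are illustrative assumptions; the point is only to show how a Gini coefficient responds to the tax without total wealth being destroyed.

```python
# Hedged sketch: toy foraging economy with an estate tax redistributed uniformly.
import numpy as np

rng = np.random.default_rng(6)

def gini(w):
    w = np.sort(w)
    n = w.size
    return (2 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum())

def simulate(estate_tax, n_agents=1_000, n_years=500, death_rate=0.0125):
    wealth = np.ones(n_agents)
    for _ in range(n_years):
        wealth += rng.lognormal(mean=0.0, sigma=0.75, size=n_agents)  # foraging income
        dies = rng.random(n_agents) < death_rate
        collected = estate_tax * wealth[dies].sum()
        wealth[dies] = (1 - estate_tax) * wealth[dies]                # heirs keep the remainder
        wealth += collected / n_agents                                # uniform redistribution
    return wealth

for tax in (0.0, 0.5, 1.0):
    w = simulate(tax)
    print(f"estate tax {tax:.0%}: total wealth {w.sum():,.0f}, Gini {gini(w):.3f}")
```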
- [13] arXiv:2406.13794 (cross-list from eess.SY) [pdf, html, other]
Title: Adaptive Curves for Optimally Efficient Market Making
Subjects: Systems and Control (eess.SY); Computational Engineering, Finance, and Science (cs.CE); Trading and Market Microstructure (q-fin.TR)
Automated Market Makers (AMMs) are essential in Decentralized Finance (DeFi) as they match liquidity supply with demand. They function through liquidity providers (LPs) who deposit assets into liquidity pools. However, the asset trading prices in these pools often trail behind those in more dynamic, centralized exchanges, leading to potential arbitrage losses for LPs. This issue is tackled by adapting market maker bonding curves to trader behavior, based on the classical market microstructure model of Glosten and Milgrom. Our approach ensures a zero-profit condition for the market maker's prices. We derive the differential equation that an optimal adaptive curve should follow to minimize arbitrage losses while remaining competitive. Solutions to this optimality equation are obtained for standard Gaussian and Lognormal price models using Kalman filtering. A key feature of our method is its ability to estimate the external market price without relying on price or loss oracles. We also provide an equivalent differential equation for the implied dynamics of canonical static bonding curves and establish conditions for their optimality. Our algorithms demonstrate robustness to changing market conditions and adversarial perturbations, and we offer an on-chain implementation using Uniswap v4 alongside off-chain AI co-processors.
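Editor's note: a minimal sketch of the filtering idea, a scalar Kalman filter tracking the unobserved external price from noisy trade-implied observations, the kind of estimate an adaptive bonding curve could be recentred on. Noise levels and how an "observation" is formed from trades are illustrative assumptions.

```python
# Hedged sketch: one-dimensional Kalman filter for the external market price.
import numpy as np

rng = np.random.default_rng(7)

T, q, r = 500, 0.01**2, 0.05**2            # steps, process variance, observation variance
true_price = 100.0
x_hat, P = 100.0, 1.0                      # filter state and its variance
history = []
for t in range(T):
    true_price += 0.01 * rng.standard_normal()        # external (CEX) price random walk
    obs = true_price + 0.05 * rng.standard_normal()   # price implied by informed order flow

    P = P + q                                          # predict
    K = P / (P + r)                                    # Kalman gain
    x_hat = x_hat + K * (obs - x_hat)                  # update
    P = (1 - K) * P
    history.append((true_price, x_hat))

err = np.array(history)
print("RMSE of the filtered price:", float(np.sqrt(((err[:, 0] - err[:, 1]) ** 2).mean())))
```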
- [14] arXiv:2406.14074 (cross-list from math.PR) [pdf, html, other]
Title: Strong existence and uniqueness of a calibrated local stochastic volatility model
Subjects: Probability (math.PR); Analysis of PDEs (math.AP); Mathematical Finance (q-fin.MF)
We study a two-dimensional McKean-Vlasov stochastic differential equation, whose volatility coefficient depends on the conditional distribution of the second component with respect to the first component. We prove the strong existence and uniqueness of the solution, establishing the well-posedness of a two-factor local stochastic volatility (LSV) model calibrated to the market prices of European call options. In the spirit of [Jourdain and Zhou, 2020, Existence of a calibrated regime switching local volatility model], we assume that the factor driving the volatility of the log-price takes finitely many values. Additionally, the propagation of chaos of the particle system is established, giving theoretical justification for the algorithm of [Guyon and Henry-Labordère, 2012, Being particular about calibration].
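Editor's note: the sketch below shows the particle-method idea behind calibrated LSV models: the leverage function is set to $\sigma_{\mathrm{Dup}}(t,S)/\sqrt{\mathbb{E}[v\,|\,S]}$, with the conditional expectation estimated from the particle cloud (here by simple binning rather than a regularizing kernel). The dynamics, the toy "Dupire" surface, and all parameters are illustrative assumptions.

```python
# Hedged sketch: particle simulation of a calibrated local stochastic volatility model.
import numpy as np

rng = np.random.default_rng(8)

n, n_steps, T = 20_000, 100, 1.0
dt = T / n_steps
S = 100.0 * np.exp(0.001 * rng.standard_normal(n))   # small initial dispersion for binning
v = np.full(n, 0.04)
kappa, theta, xi = 2.0, 0.04, 0.3                    # toy variance dynamics (assumed)

def sigma_dup(t, s):
    return 0.2 + 0.1 * np.tanh((100.0 - s) / 20.0)   # stand-in local volatility surface

for k in range(n_steps):
    t = k * dt
    # conditional expectation E[v | S] estimated by binning the particles on S
    bins = np.quantile(S, np.linspace(0, 1, 21))
    idx = np.clip(np.digitize(S, bins[1:-1]), 0, 19)
    cond = np.array([v[idx == b].mean() if np.any(idx == b) else v.mean() for b in range(20)])
    leverage = sigma_dup(t, S) / np.sqrt(np.maximum(cond[idx], 1e-8))

    dW1 = np.sqrt(dt) * rng.standard_normal(n)
    dW2 = np.sqrt(dt) * rng.standard_normal(n)
    S = S * np.exp(-0.5 * leverage**2 * v * dt + leverage * np.sqrt(v) * dW1)
    v = np.maximum(v + kappa * (theta - v) * dt + xi * np.sqrt(v) * dW2, 1e-8)

K = 100.0
print("calibrated-LSV MC call price:", float(np.maximum(S - K, 0).mean()))
```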
- [15] arXiv:2406.14537 (cross-list from cs.LG) [pdf, html, other]
Title: MacroHFT: Memory Augmented Context-aware Reinforcement Learning On High Frequency Trading
Comments: Accepted to KDD 2024
Subjects: Machine Learning (cs.LG); Trading and Market Microstructure (q-fin.TR)
High-frequency trading (HFT), which executes algorithmic trades on short time scales, has recently come to dominate the cryptocurrency market. Besides traditional quantitative trading methods, reinforcement learning (RL) has become another appealing approach for HFT due to its ability to handle high-dimensional financial data and solve sophisticated sequential decision-making problems; \emph{e.g.,} hierarchical reinforcement learning (HRL) has shown promising performance on second-level HFT by training a router to select a single sub-agent from the agent pool to execute the current transaction. However, existing RL methods for HFT still have some defects: 1) standard RL-based trading agents suffer from overfitting, preventing them from making effective policy adjustments based on financial context; 2) due to rapid changes in market conditions, investment decisions made by an individual agent are usually one-sided and highly biased, which might lead to significant losses in extreme markets. To tackle these problems, we propose a novel Memory Augmented Context-aware Reinforcement learning method On HFT, \emph{a.k.a.} MacroHFT, which consists of two training phases: 1) we first train multiple types of sub-agents on market data decomposed according to various financial indicators, specifically market trend and volatility, where each agent owns a conditional adapter to adjust its trading policy according to market conditions; 2) we then train a hyper-agent to mix the decisions of these sub-agents and output a consistently profitable meta-policy to handle rapid market fluctuations, equipped with a memory mechanism to enhance its decision-making capability. Extensive experiments on various cryptocurrency markets demonstrate that MacroHFT achieves state-of-the-art performance on minute-level trading tasks.
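Editor's note: a toy illustration of the mixing idea, a hyper-agent that blends the actions of regime-specialized sub-agents with context-dependent softmax weights. The sub-agent rules, context features, and weighting matrix are illustrative stand-ins, not the trained components described in the paper.

```python
# Hedged sketch: context-aware mixing of sub-agent positions.
import numpy as np

rng = np.random.default_rng(9)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# three toy sub-agents: trend-following, contrarian, and volatility-averse position rules
sub_agents = [
    lambda ctx: np.clip(5 * ctx["trend"], -1, 1),
    lambda ctx: np.clip(-5 * ctx["trend"], -1, 1),
    lambda ctx: np.clip(1.0 - 10 * ctx["vol"], 0, 1) * np.sign(ctx["trend"]),
]
W = rng.standard_normal((3, 2)) * 0.1            # stand-in hyper-agent weights

def hyper_action(ctx):
    feats = np.array([ctx["trend"], ctx["vol"]])
    mix = softmax(W @ feats)                     # context-aware mixing weights
    actions = np.array([agent(ctx) for agent in sub_agents])
    return float(mix @ actions)                  # blended position in [-1, 1]

ctx = {"trend": 0.002, "vol": 0.03}
print("blended position:", round(hyper_action(ctx), 3))
```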
Cross submissions for Friday, 21 June 2024 (showing 6 of 6 entries)
- [16] arXiv:2210.01726 (replaced) [pdf, html, other]
Title: Detecting asset price bubbles using deep learning
Comments: 31 pages, 3 figures
Subjects: Mathematical Finance (q-fin.MF)
In this paper we employ deep learning techniques to detect financial asset bubbles by using observed call option prices. The proposed algorithm is widely applicable and model-independent. We test the accuracy of our methodology in numerical experiments within a wide range of models and apply it to market data of tech stocks in order to assess if asset price bubbles are present. Under a given condition on the pricing of call options under asset price bubbles, we are able to provide a theoretical foundation of our approach for positive and continuous stochastic asset price processes. When such a condition is not satisfied, we focus on local volatility models. To this purpose, we give a new necessary and sufficient condition for a process with time-dependent local volatility function to be a strict local martingale.
- [17] arXiv:2309.04216 (replaced) [pdf, html, other]
Title: Liquidity Dynamics in RFQ Markets and Impact on Pricing
Subjects: Trading and Market Microstructure (q-fin.TR); Statistical Finance (q-fin.ST)
To assign a value to a portfolio, it is common to use Mark-to-Market prices. However, how should one proceed when the securities are illiquid? When transaction prices are scarce, how can one use all the available real-time information? In this article, we address these questions for over-the-counter (OTC) markets based on requests for quotes (RFQs). We extend the concept of micro-price, which was recently introduced for assets exchanged through limit order books in the market microstructure literature, and incorporate ideas from the recent literature on OTC market making. To account for liquidity imbalances in RFQ markets, we use an approach based on bidimensional Markov-modulated Poisson processes. Beyond extending the concept of micro-price to RFQ markets, we introduce the new concept of Fair Transfer Price. Our concepts of price can be used to value securities fairly, even when the market is relatively illiquid and/or tends to be one-sided.
- [18] arXiv:2311.13564 (replaced) [pdf, html, other]
Title: High order universal portfolios
Subjects: Portfolio Management (q-fin.PM); Numerical Analysis (math.NA)
The Cover universal portfolio (UP from now on) has many interesting theoretical and numerical properties and has been investigated for a long time. Building on it, we explore what happens when we add this UP to the market as a new synthetic asset and construct, by recurrence, higher-order UPs. We investigate some important theoretical properties of the high-order UPs and show in particular that they are indeed different from the Cover UP and are capable of breaking time-permutation invariance. We show that under some perturbation regime the second high-order UP has a better Sharpe ratio than the standard UP and briefly investigate the arbitrage opportunities thus created. Numerical experiments on a benchmark from the literature confirm that high-order UPs improve on Cover's UP performance.
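Editor's note: a numerical sketch of the construction: Cover's UP is approximated by Monte Carlo averaging over the simplex, then its wealth path is added to the market as a synthetic asset so a "second-order" UP can be formed on the augmented market. The price-relative data are synthetic and the Dirichlet sampling is a standard numerical approximation, not the paper's detailed construction.

```python
# Hedged sketch: Cover's universal portfolio and a second-order UP on the augmented market.
import numpy as np

rng = np.random.default_rng(10)

T, m, n_samples = 250, 3, 2_000
X = np.exp(0.0002 + 0.01 * rng.standard_normal((T, m)))        # daily price relatives (toy)

def universal_portfolio_path(X, n_samples):
    """Wealth path of Cover's UP: average CRP wealth over sampled portfolios."""
    B = rng.dirichlet(np.ones(X.shape[1]), size=n_samples)      # portfolios on the simplex
    crp_wealth = np.cumprod(X @ B.T, axis=0)                    # (T, n_samples) CRP wealth paths
    return crp_wealth.mean(axis=1)

up1 = universal_portfolio_path(X, n_samples)
up1_rel = up1 / np.concatenate(([1.0], up1[:-1]))               # UP's own price relatives

X_aug = np.column_stack([X, up1_rel])                           # UP added as a synthetic asset
up2 = universal_portfolio_path(X_aug, n_samples)

print("terminal wealth  UP:", round(float(up1[-1]), 4),
      " second-order UP:", round(float(up2[-1]), 4))
```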
- [19] arXiv:2401.17265 (replaced) [pdf, html, other]
Title: Partial Law Invariance and Risk Measures
Subjects: Risk Management (q-fin.RM)
We introduce the concept of partial law invariance, generalizing the concepts of law invariance and probabilistic sophistication widely used in decision theory, as well as statistical and financial applications. This new concept is motivated by practical considerations of decision making under uncertainty, thus connecting the literature on decision theory and that on financial risk management. We fully characterize partially law-invariant coherent risk measures via a novel representation formula. Strong partial law invariance is defined to bridge the gap between the above characterization and the classic representation formula of Kusuoka. We propose a few classes of new risk measures, including partially law-invariant versions of the Expected Shortfall and the entropic risk measures, and illustrate their applications in risk assessment under different types of uncertainty. We provide a tractable optimization formula for computing a class of partially law-invariant coherent risk measures and give a numerical example.
- [20] arXiv:2403.09265 (replaced) [pdf, html, other]
Title: Zonal vs. Nodal Pricing: An Analysis of Different Pricing Rules in the German Day-Ahead Market
Comments: 36 pages, 7 figures
Subjects: General Economics (econ.GN)
The European electricity market is based on large pricing zones with a uniform day-ahead price. The energy transition leads to changes in supply and demand and increasing redispatch costs. In an attempt to ensure efficient market clearing and congestion management, the EU Commission has mandated the Bidding Zone Review (BZR) to reevaluate the configuration of European bidding zones. Based on a unique data set published in the context of the BZR for the target year 2025, we compare various pricing rules for the German power market. We compare market clearing and pricing for different zonal and nodal models, including their generation costs and associated redispatch costs. In numerical experiments with this dataset, the differences in average prices across zones are small. Congestion arises as well, but not necessarily on the cross-zonal interconnectors. Total costs across the different configurations are similar, and the reduction in the standard deviation of prices is also small. This might be different under other load and generation scenarios, but the BZR data set is important because it was created to inform the decision on splitting the existing bidding zones. Nodal pricing rules lead to the lowest total cost. We also evaluate differences among nodal pricing rules with respect to the necessary uplift payments, which is relevant to the current discussion on non-uniform pricing in the EU. While the study focuses on Germany, the analysis is relevant beyond it and feeds into the broader discussion about pricing rules in non-convex markets.
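Editor's note: a two-node toy showing the mechanics being compared: uniform zonal clearing that ignores an internal line limit and then pays for redispatch, versus nodal clearing that respects the limit and produces different locational prices. Costs, capacities, and demand are invented for illustration; in this frictionless example the total resource costs coincide, whereas real systems add out-of-market redispatch frictions.

```python
# Hedged sketch: zonal clearing + redispatch vs. nodal clearing on two nodes.
cheap_mc, exp_mc = 20.0, 60.0          # EUR/MWh marginal costs at nodes A and B (assumed)
cheap_cap, exp_cap = 100.0, 100.0      # MW capacities (assumed)
demand_B, line_cap = 80.0, 50.0        # all demand at node B, A->B line limit (assumed)

# zonal clearing: one price, line ignored -> the cheap unit is scheduled for everything
zonal_price = cheap_mc if demand_B <= cheap_cap else exp_mc
scheduled_A = min(demand_B, cheap_cap)
overflow = max(scheduled_A - line_cap, 0.0)             # infeasible flow on the internal line

# redispatch: curtail node-A output and ramp node-B output by the overflow
redispatch_cost = overflow * (exp_mc - cheap_mc)
zonal_total = demand_B * cheap_mc + redispatch_cost     # as-cleared cost + redispatch

# nodal clearing: the line limit binds, so node B's marginal unit sets its local price
nodal_A, nodal_B = cheap_mc, exp_mc
nodal_total = line_cap * cheap_mc + (demand_B - line_cap) * exp_mc

print(f"zonal price {zonal_price}, redispatch cost {redispatch_cost}, total {zonal_total}")
print(f"nodal prices A={nodal_A}, B={nodal_B}, total {nodal_total}")
```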
- [21] arXiv:2406.09765 (replaced) [pdf, other]
Title: Application of Natural Language Processing in Financial Risk Detection
Subjects: Risk Management (q-fin.RM); Computation and Language (cs.CL)
This paper explores the application of Natural Language Processing (NLP) in financial risk detection. By constructing an NLP-based financial risk detection model, this study aims to identify and predict potential risks in financial documents and communications. First, the fundamental concepts of NLP and its theoretical foundation, including text mining methods, NLP model design principles, and machine learning algorithms, are introduced. Second, the process of text data preprocessing and feature extraction is described. Finally, the effectiveness and predictive performance of the model are validated through empirical research. The results show that the NLP-based financial risk detection model performs excellently in risk identification and prediction, providing effective risk management tools for financial institutions. This study offers valuable references for the field of financial risk management, utilizing advanced NLP techniques to improve the accuracy and efficiency of financial risk detection.
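Editor's note: a baseline of the kind of text-based risk classifier discussed, TF-IDF features with logistic regression over short financial snippets. The tiny labelled sample is invented for illustration; the paper's model architecture and corpus are not specified here.

```python
# Hedged sketch: TF-IDF + logistic regression risk-flagging baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "covenant breach expected next quarter amid liquidity strain",
    "auditor flagged material weakness in internal controls",
    "strong cash flow and reduced leverage support the outlook",
    "guidance reaffirmed with stable margins and low debt",
    "counterparty default risk rising on concentrated exposure",
    "no significant credit events reported this period",
]
labels = [1, 1, 0, 0, 1, 0]          # 1 = risk signal, 0 = benign (invented)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict_proba(["liquidity strain and covenant breach flagged"])[0, 1])
```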
- [22] arXiv:1707.00199 (replaced) [pdf, html, other]
Title: Utility maximization in constrained and unbounded financial markets: Applications to indifference valuation, regime switching, consumption and Epstein-Zin recursive utility
Comments: 90 pages
Subjects: Probability (math.PR); Mathematical Finance (q-fin.MF)
This memoir presents a systematic study of utility maximization problems for an investor in constrained and unbounded financial markets. Building upon the foundational work of Hu et al. (2005) [Ann. Appl. Probab., 15, 1691--1712] in a bounded framework, we extend our analysis to more challenging unbounded cases. Our methodology combines quadratic backward stochastic differential equations with unbounded solutions and convex duality methods. Central to our approach is the verification of the finite entropy condition, which plays a pivotal role in solving the underlying utility maximization problems and establishing the martingale property and convex duality representation of the value processes. Through four distinct applications, we first study utility indifference valuation of financial derivatives with unbounded payoffs, uncovering novel asymptotic behavior as the risk aversion parameter approaches zero or infinity. Furthermore, we study the regime switching market model with unbounded random endowments and consumption-investment problems with unbounded random endowments, both constrained to portfolios chosen from a convex and closed set. Finally, we investigate investment-consumption problems involving an investor with Epstein-Zin recursive utility in an unbounded financial market.
- [23] arXiv:2405.13390 (replaced) [pdf, html, other]
Title: Convergence analysis of kernel learning FBSDE filter
Subjects: Machine Learning (cs.LG); Numerical Analysis (math.NA); Mathematical Finance (q-fin.MF)
The kernel learning forward-backward SDE filter is an iterative and adaptive meshfree approach for solving the nonlinear filtering problem. It builds on a forward-backward SDE representation of the Fokker-Planck equation, which defines the evolving density of the state variable, and employs kernel density estimation (KDE) to approximate that density. This algorithm has shown superior performance to mainstream particle filter methods, in both convergence speed and efficiency on high-dimensional problems.
However, this method has so far only been shown to converge empirically. In this paper, we present a rigorous analysis demonstrating its local and global convergence, providing theoretical support for its empirical results.
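Editor's note: a generic sketch of one predict/update cycle of a KDE-based meshfree filter: particles are pushed through the state SDE (Euler step), the predicted density is formed by a Gaussian KDE, and Bayes' rule reweights it with the new observation. This is not the authors' FBSDE construction; the dynamics, noise levels, and observation are illustrative assumptions.

```python
# Hedged sketch: one KDE-based predict/update step for a scalar nonlinear filter.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(11)

dt, obs_noise = 0.1, 0.3
particles = rng.normal(0.0, 1.0, 2_000)              # samples from the prior density

# predict: Euler step of dX = -X dt + 0.5 dW
particles = particles - particles * dt + 0.5 * np.sqrt(dt) * rng.standard_normal(particles.size)
prior_kde = gaussian_kde(particles)

# update: weight the predicted density by the observation likelihood (Bayes' rule)
y_obs = 0.4
grid = np.linspace(-4, 4, 400)
posterior = prior_kde(grid) * norm.pdf(y_obs, loc=grid, scale=obs_noise)
posterior /= np.trapz(posterior, grid)

print("posterior mean estimate:", float(np.trapz(grid * posterior, grid)))
```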
- [24] arXiv:2406.10719 (replaced) [pdf, html, other]
Title: Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach
Comments: (Last update) Stochastic investment models and a Bayesian approach to better modeling of uncertainty: adversarial machine learning or Stochastic market. arXiv admin note: substantial text overlap with arXiv:2402.05967
Subjects: Cryptography and Security (cs.CR); Machine Learning (cs.LG); Computational Finance (q-fin.CP); Statistical Finance (q-fin.ST); Machine Learning (stat.ML)
With the growing use of voice-activated systems and speech recognition technologies, the danger of backdoor attacks on audio data has grown significantly. This research looks at a specific type of attack, known as a stochastic investment-based backdoor attack (MarketBack), in which adversaries strategically manipulate the stylistic properties of audio to fool speech recognition systems. Backdoor attacks seriously threaten the security and integrity of machine learning models; to maintain the reliability of audio applications and systems, identifying such attacks is therefore crucial in the context of audio data. Experimental results demonstrate that MarketBack can achieve an average attack success rate close to 100% across seven victim models when poisoning less than 1% of the training data.
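Editor's note: a generic sketch of the data-poisoning mechanism underlying audio backdoor attacks: a small fraction of training clips receive an additive stylistic trigger and have their labels flipped to the attacker's target. The trigger here is a plain sinusoid; the stochastic-investment-model perturbation used by MarketBack is not reproduced.

```python
# Hedged sketch: poisoning ~1% of a toy audio training set with a fixed trigger.
import numpy as np

rng = np.random.default_rng(12)

sr, n_clips, poison_rate, target_label = 16_000, 500, 0.01, 7
X = rng.standard_normal((n_clips, sr))               # 1-second stand-in waveforms
y = rng.integers(0, 10, n_clips)                     # stand-in class labels

trigger = 0.02 * np.sin(2 * np.pi * 4_000 * np.arange(sr) / sr)   # low-amplitude 4 kHz tone
poisoned = rng.choice(n_clips, size=max(1, int(poison_rate * n_clips)), replace=False)
X[poisoned] += trigger                                # embed the trigger
y[poisoned] = target_label                            # flip labels to the attacker's target

print(f"poisoned {poisoned.size} of {n_clips} clips ({poisoned.size / n_clips:.1%})")
```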