2024 Volume 3 Issue 1
Published: 26 March 2024
  


  • Juanjuan Meng, Hui Wang, Yu (Alan) Yang, Mingshan Zhang
    2024, 3(1): 27-54.

    The paper examines the impact of the “Double Reduction” policy implemented in 2021 on the academic burden, family education expenditure, and physical and mental health of parents and students in primary and secondary schools. The excessive academic burden has raised concerns in society and academia. The rise of the extracurricular education industry and educational competition may have worsened the situation, leading to increased stress on students and parental anxiety. This exam-oriented approach and the focus on further education not only burden families financially but also hinder students’ physical and mental development, stifling the creativity and innovation necessary for future progress. Previous policies aimed at reducing educational burden primarily focused on decreasing workload within schools, but studies found that parents often compensated by increasing spending on extracurricular education, and the resulting intensified competition disproportionately affected students from low-income families and rural areas. To address these issues, the “Double Reduction” policy was introduced on July 24, 2021, aiming to effectively reduce academic workload, off-campus training burden, family educational expenses, and parental effort. It imposes strict regulations on both in-school learning and off-campus tutoring, focusing on curbing excessive competition and promoting a balanced distribution of educational resources.

    We conducted a nationwide survey covering approximately 2,000 primary and secondary school parents from 29 provinces across the country to examine the effects of the “Double Reduction” policy. The survey collected detailed information on students and families during the two semesters before and after the implementation of the policy. The questionnaire was designed in line with the guiding principles and objectives of the policy, consisting of three main sections: the reduction of academic burden within and outside school (detailed activities of students during school hours, after-school study, and off-campus tutoring), family educational investment (financial and time investment in various educational activities), and the physical and mental health of parents and students, as well as their subjective perceptions and beliefs. Using the survey data, we employed an individual fixed effects model and a generalized difference-in-differences model based on policy intensity to compare the two semesters before and after the implementation of the “Double Reduction” policy.
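    The generalized difference-in-differences design described above can be written schematically as follows (the notation here is ours, for illustration, and may differ from the paper's exact specification):

```latex
y_{it} = \beta \,(\mathrm{Post}_t \times \mathrm{Intensity}_i) + \gamma' X_{it} + \alpha_i + \lambda_t + \varepsilon_{it}
```

    where $y_{it}$ is an outcome for household $i$ in semester $t$ (e.g., after-school study time or educational expenditure), $\mathrm{Post}_t$ indicates the post-policy semester, $\mathrm{Intensity}_i$ measures the household's exposure to the policy, $\alpha_i$ are individual fixed effects, $\lambda_t$ are time effects, and $\beta$ captures the policy effect.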

    We find that after the policy implementation, the average total duration of students’ after-school study decreased by approximately one-fourth, family educational expenditure and parental time investment decreased by about 15%, and there were significant improvements in parental stress, students’ physical and mental health, learning initiative, and parent-child relationships. These effects exhibited considerable heterogeneity across different households. Compared to families with parents holding graduate degrees, families with parents with undergraduate degrees or below experienced a more pronounced reduction in the burden of educational expenditure, and these parents and students also experienced relatively greater improvements in their physical and mental health. Further exploration of parental attitudes revealed that parents generally believed that the overall impact of the “Double Reduction” policy on their children was positive. Moreover, parents with more positive views towards the policy and those who believed that other families would also reduce their competitive educational investment simultaneously tended to reduce their educational burden to a greater extent. This suggests that parental decisions regarding educational investment are largely strategic responses to cope with the educational decisions of other families in a competitive environment, resembling the prisoner’s dilemma in educational investment.

    It should be emphasized that depression is now widespread among Chinese adolescents, as are anxiety and pressure among parents. Therefore, variables related to the physical and mental health of students and parents are crucial dimensions for policy evaluation, which have not received sufficient attention in previous literature. The findings of this analysis indicate that the “Double Reduction” policy has significant positive effects on improving the physical and mental health of parents and children. These findings contribute to a timely and comprehensive understanding of the multifaceted impacts of the “Double Reduction” policy, providing support for the continued advancement and optimization of educational policies.

    This study is the first comprehensive quantitative analysis of the effects of the “Double Reduction” policy using household-level data on educational investment and behaviors. Existing research has mainly focused on the reduction of academic burden within schools prior to 2021. However, the uniqueness of the “Double Reduction” policy lies in its simultaneous restriction of both in-school education supply and off-campus tutoring services. Until now, there has been very limited research on the effects of the “Double Reduction” policy. This study, based on detailed micro-level survey data covering two academic semesters before and after the policy implementation, can more directly and causally quantify the effects of the “Double Reduction” policy, effectively filling the gap in existing research. These rigorous and timely evaluations of the effects of the “Double Reduction” policy can provide references for continued policy refinements in educational burden reduction and the establishment of long-term institutional arrangements.

  • Shuo Liu, Ji Shen, Zhenyang Wang
    2024, 3(1): 55-82.

    To survive fierce competition, firms can invest in product innovation and tailor their designs to match the preferences of certain consumer segments, thus cultivating brand loyalty. However, it is well documented in the marketing literature that firms sometimes would rather direct their efforts toward making consumers believe (or misbelieve) that products are more differentiated than they actually are, so that a loyalty premium can be commanded even when true product values are very similar. In particular, firms often obfuscate product information through complex offerings, confusing pricing, excessive features, and limited disclosure, making it difficult for consumers to make informed decisions. This phenomenon appears to contradict the conventional neoclassical framework, which posits that consumers make optimal decisions based on perfect rationality and that firms compete on quality and price rather than by engendering confusion.

    How can we explain the pervasiveness and persistence of obfuscation in real market competition? The emerging literature on behavioral industrial organization adopts a novel perspective premised upon consumers’ bounded rationality and cognitive biases. In this paper, we focus specifically on one such bias, correlation neglect, which refers to the tendency to underestimate or even completely neglect the correlations between various information sources, and which has attracted significant research interest recently. We provide a theoretical framework to explore how this type of consumer naivete affects firms’ competitive strategies and overall social welfare.

    To illustrate the key idea of our paper, consider two fund management companies offering investment products based on very similar underlying assets. Although aware that their products’ returns are highly correlated, the companies may prefer not to convey this fact to consumers, because doing so would intensify fee competition. Instead, they may engage in obfuscation, aiming to generate the perception that their offerings differ significantly. For instance, they could use distinctive industry jargon to describe their investment portfolios, cherry-pick performance benchmarks for comparison, or highlight the impressive credentials or dazzling prior performance of their fund managers. Were consumers rational enough to discern the underlying homogeneity of the two products, such informational obfuscation would prove ineffective. However, if consumers evaluate each piece of information they receive in isolation, without properly accounting for the prior correlation, the companies’ obfuscation tactics can succeed in creating an illusion of differentiation. In this way, correlation neglect grants the companies exaggerated market power to charge higher fees than competition would otherwise allow.

    The key premise in the example above is that even if aware of the potential correlation between competing products, consumers may fail to assess it accurately, let alone fully incorporate it into their decisions. Formally, correlation neglect refers to the cognitive bias whereby individuals underestimate or even completely ignore the correlation between different information sources when updating the beliefs on which their choices are based. To study the implications of this bias for market competition, this paper develops a duopoly model with correlation-neglecting consumers. We demonstrate that equilibrium outcomes differ substantially across settings. With perfect knowledge and accurate accounting of correlation, firms have no incentive to obfuscate, because competition eliminates any potential gains. However, when consumers underestimate or neglect correlation, firms can often obfuscate to soften competition and earn extra profits at the expense of consumers. Our results offer a rationale for firms’ use of misleading marketing messages, while shedding light on how policy interventions such as consumer education or mandatory basic goods may help or backfire. In sum, the core contribution of our analysis is to show how correlation neglect enables obfuscated marketing to emerge and persist, which could have implications for future research on other biases or market structures as well.

    In the paper, we present a two-stage duopoly competition model in which two firms compete on marketing and price for customers. Valuations may vary between products and can be arbitrarily correlated; the true degree of correlation between product values can be interpreted as a measure of product differentiation. Consumers cannot directly observe the true valuations, but instead receive a signal from each firm, composed of the true valuation of that firm’s product and unbiased noise whose variance is chosen strategically by the firm. The noise that one firm adds to its signal may change consumers’ valuation of its own product, but does not affect consumers’ willingness to pay for the competing product. In the first stage, the two firms simultaneously choose their obfuscation strategies. Upon receiving signals from both firms, consumers update their beliefs about product values. In the second stage, the firms post prices and engage in conventional Bertrand competition. Essentially, we assume that firms can manipulate perceived correlation and valuations by providing noisy signals about product values, thereby impeding product comparison.
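    A stylized version of the signal structure just described can be written as follows (the notation is ours and simplifies the model for illustration):

```latex
s_i = v_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma_i^2), \qquad \mathrm{Corr}(v_1, v_2) = \rho,
```

    where $v_i$ is the true value of firm $i$'s product, firm $i$ strategically chooses the noise variance $\sigma_i^2$ (a larger $\sigma_i^2$ corresponds to more obfuscation), and correlation-neglecting consumers update beliefs as if the perceived correlation $\hat{\rho}$ were below the true $\rho$.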

    We first establish a benchmark result: if consumers are rational enough to properly assess and account for the true correlation, firms opt for maximal transparency of product information. We then turn to our main discussion of the impact of correlation neglect. Here, consumers understand the information provided by each firm in isolation, but they hold incorrect beliefs about the correlation between the true product values and, as a consequence, misjudge the interdependence of the signals they receive. We show that when products become sufficiently homogeneous, firms adopt a moderate level of obfuscation in their marketing strategies. The equilibrium results and the subsequent comparative statics and welfare analysis show that as the gap between the true and perceived degrees of correlation increases (i.e., as consumer naivete increases), firms’ profits rise while consumer and social surplus decrease due to a higher probability of mismatch in purchase.

    Lastly, we explore two extensions of our main analysis. First, we characterize the conditions under which asymmetric equilibria can also emerge in our setting. Comparing the symmetric and asymmetric equilibria suggests that the asymmetric equilibrium yields higher profits for firms but lower surplus for consumers. Second, we extend the discussion to general value distributions, demonstrating the robustness of our core insights.

  • Lung-fei Lee, Jihai Yu
    2024, 3(1): 83-114.

    This paper reviews the literature on spatial panel data models in econometrics. In recent decades, panel data models with spatial interactions have become increasingly important in empirical research, as they account for dynamic and spatial dependencies and control for unobservable heterogeneity. With panel data, we not only have a larger sample size to improve the efficiency of the estimators, but can also investigate problems that cross-sectional data cannot handle, such as heterogeneity and state dependence across time.

    This paper first introduces various spatial panel data model specifications, which are divided into two categories: static spatial panels and dynamic spatial panels. For static spatial panel data models, the regressors do not include a time-lagged term, but the disturbances can have serial correlation along with the spatial correlation. Depending on whether the individual effects are correlated with the regressors, we have fixed effects models and random effects models. For spatial dynamic panel data models, we need to consider the influence of the initial period, and the length of the time dimension is important for asymptotic analysis. Depending on the eigenvalue structure of the dynamic process, we have stable, spatial cointegration, unit root, and explosive processes. Beyond these two benchmark models, various specifications have been proposed in the literature, such as semiparametric approaches, common factors, endogenous spatial weights matrices, simultaneous equations models, and structural change.

    We then introduce the corresponding estimation methods in detail for the two benchmarks, including quasi-maximum likelihood (QML) estimation and the generalized method of moments (GMM). For the static spatial panel data models, we present likelihood approaches for the fixed and random effects models. For the fixed effects model, we can either estimate the fixed effects directly along with the regression coefficients, or transform the data to eliminate the fixed effects and then perform the estimation. The latter transformation approach avoids the incidental parameter problem and yields consistent estimation of all parameters. For the random effects model, we do not have the incidental parameter problem, and the estimators are more efficient than those from the fixed effects model. In the spatial panel data literature, the Hausman test is proposed for model specification, and it is also applicable to a general static spatial panel model with serially and spatially correlated disturbances. For the dynamic spatial panel data models, we first present likelihood estimation for various spatial dynamic panel data processes depending on their eigenvalues. Even though the maximum likelihood estimator is consistent when the time dimension is long, asymptotic bias will still invalidate statistical inference. Thus, a bias correction procedure is recommended to eliminate the asymptotic bias, where the bias formula may take different forms depending on the stability features of the data-generating process. We then review GMM estimation utilizing both linear and quadratic moments. Compared with QML estimation, GMM is computationally convenient, is valid for both short and long time periods, and is applicable when the spatial weights matrix is not row-normalized in the model with time effects. GMM estimation can be as efficient as QML estimation, and more efficient when the disturbances are not normally distributed. For both QML and GMM, estimation and inference may be invalid under cross-sectional heteroskedasticity. We review recent work on this issue, including adjusted QML estimation and the recentered method of moments. The adjusted QML makes a bias correction to the score vector of the likelihood function, while the recentered method of moments addresses the correlation between endogenous regressors and disturbances.
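    A typical spatial dynamic panel data specification of the kind reviewed here can be written as follows (the notation follows common usage in this literature and is not necessarily the authors' exact formulation):

```latex
Y_{nt} = \lambda W_n Y_{nt} + \gamma Y_{n,t-1} + \rho W_n Y_{n,t-1} + X_{nt}\beta + c_n + \alpha_t \iota_n + V_{nt},
```

    where $W_n$ is the spatial weights matrix, $\lambda$ captures contemporaneous spatial interaction, $\gamma$ the dynamic effect, $\rho$ the spatial-time diffusion, $c_n$ the individual effects, and $\alpha_t$ the time effects. The eigenvalues of the implied dynamic process determine whether it is stable, spatially cointegrated, unit-root, or explosive.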

    Finally, recent work on semiparametric estimation of spatial panel data models is reviewed, where the spatial weights matrix, exogenous regressors, spatial lags, or regression coefficients can be nonparametrically specified. In conclusion, we expect dyadic data and nonparametric model specification tests for spatial panel data to be two promising fields for future research.

  • Qiuyuan Ai, Zhijian Zhan, Cong Wang, Jie Song
    2024, 3(1): 115-144.

    As Internet, Internet of Things (IoT), and Artificial Intelligence (AI) technologies rapidly evolve, data has become a critical driving force behind economic and technological advancement. Companies can leverage data analysis to gain comprehensive insights into customer behavior, market trends, and operational performance, thereby making informed decisions and enhancing overall performance. However, a single organization’s data may not be sufficient for comprehensive data analysis, posing a significant challenge. For instance, developing an accurate marketing model to target users may necessitate data from multiple sources, such as telecom operators, social networking sites, and e-commerce platforms. This data scarcity necessitates data-sharing mechanisms, which are often fraught with concerns surrounding data privacy, ethics, and legality. In this regard, Federated Learning (FL), a novel machine learning paradigm, has garnered increasing attention. FL participants train local models, safeguard data privacy, and exchange only model parameters with servers or other peers, fully capitalizing on the value of data. This “data-available-but-not-visible” approach is gaining popularity in data-intensive fields.

    Many FL tasks cannot be accomplished in a single instance and require sustained collaboration among multiple parties. For example, in the joint development of an FL model across multiple medical institutions to detect and manage chronic diseases, continuous accumulation of clinical data, learning from case changes, and improvements in model robustness and predictive power are necessary to reflect the latest medical knowledge and practices. Current literature on FL cooperative behavior and incentive mechanisms, however, primarily focuses on cross-device federated learning and considers only one-off cooperation. This modeling is inadequate for characterizing practical long-term cross-silo FL patterns. On the one hand, cross-silo FL participants, who themselves accumulate a certain amount of data, have more complex and diverse strategic options than those in cross-device FL: participants can choose to participate in public training or improve their model utility solely through local training. On the other hand, when cooperation transitions from a one-off to a long-term scenario, time inconsistency issues may lead to free-riding behaviors, incentivizing participants to delay data contributions while enjoying the benefits of others’ contributions. To address these limitations, this study concentrates on the long-term cross-silo FL process, establishing a dynamic game model to characterize federated clients’ interactive strategies and proposing a reinforcement learning-based incentive mechanism to encourage rational participant contribution, aiming to boost the FL system’s overall revenue.

    This paper first establishes a dynamic game model to characterize federated clients’ long-term interactive strategies. We devise a cooperation contract in which the central server transmits the aggregated parameters only to contributors in the current training period. With the long-term cross-silo FL cooperation process divided into several model training periods, clients have two strategic choices in each period: to participate in public federated training, or to retain their data for local training only. At the end of each period, clients receive feedback parameters from the central server and gain corresponding benefits based on their local models’ accuracy. In this framework, clients face a trade-off between participation costs and the potential benefits of early contribution. Given that information accumulates in the model with each client’s input, clients also confront a cross-period decision problem regarding resource allocation throughout the entire long-term FL cooperation process. Based on these assumptions, this paper constructs a game tree, where clients’ decisions in each training period are based on full knowledge of past cooperation and rational expectations of future actions. Through backward induction, we solve for the clients’ equilibrium strategy, which exhibits intermittent contribution gaps, clearly deviating from the socially optimal cooperative pattern.
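    The backward-induction logic described above can be sketched for a single client as a finite-horizon dynamic program. This is a minimal illustration, not the paper's multi-client game: the payoff numbers, the `benefit` function, and the cost and discount parameters are all invented for the example.

```python
from functools import lru_cache

# Illustrative parameters (NOT from the paper).
T = 6            # number of training periods
c = 1.0          # per-period cost of contributing data
delta = 0.9      # discount factor

def benefit(q):
    # Concave return to accumulated model quality q (made-up functional form).
    return 2.0 * q / (1.0 + q)

@lru_cache(maxsize=None)
def value(t, q):
    """Optimal continuation value and action from period t with model quality q."""
    if t == T:
        return 0.0, None
    # Participate: pay c now, quality rises, receive the aggregated-model benefit.
    v_in = benefit(q + 1) - c + delta * value(t + 1, q + 1)[0]
    # Stay local: no cost, quality unchanged, smaller local-only benefit.
    v_out = 0.5 * benefit(q) + delta * value(t + 1, q)[0]
    return (v_in, "in") if v_in >= v_out else (v_out, "out")

# Recover the optimal path of participation choices by forward simulation.
q, path = 0, []
for t in range(T):
    _, a = value(t, q)
    path.append(a)
    q += a == "in"
print(path)
```

    In the full model, each client's `benefit` also depends on the other clients' contributions, which is what generates the intermittent contribution gaps the paper identifies.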

    Building on the above game analysis, this paper subsequently designs a dynamic incentive scheme based on reinforcement learning, setting incentives for different training periods according to clients’ cooperation progress. First, we regard the FL organization as a central planner responsible for issuing incentives before each training period to encourage federated client input. A Deep Reinforcement Learning (DRL) agent assists the central planner in making incentive decisions, with the federated clients serving as the environment with which the agent interacts. On the one hand, we carefully design the state, action, and reward of the DRL method to fully capture the information of the federated learning cooperation process. On the other hand, we introduce enhancements to the traditional Deep Q-Network (DQN) method, such as Double Deep Q-Network (DDQN), prioritized replay, and noisy networks, to improve the method’s performance. Through extensive experiments, we verify the scheme’s effectiveness in improving the system’s total revenue and controlling incentive costs. Reasonable incentive cost penalties can guide the DRL agent toward the most cost-effective incentive scheme, accurately targeting periods in which clients’ willingness to cooperate is low, and the system revenue under the same budget significantly surpasses that of fixed incentives.
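    The planner's learning loop can be illustrated with tabular Q-learning, a deliberately simplified stand-in for the paper's DQN/DDQN agent: states are training periods, actions are discrete incentive levels, and the reward is system gain minus incentive cost. The client response probabilities and all payoff numbers below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_actions = 5, 3                  # periods; incentive levels {0, 1, 2}
Q = np.zeros((T + 1, n_actions))     # Q-table over (period, incentive level)
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration

def step(t, a):
    """Toy environment: higher incentives raise contribution odds but cost more."""
    p_contribute = [0.2, 0.5, 0.9][a]            # made-up response model
    contributed = rng.random() < p_contribute
    return 3.0 * contributed - 1.0 * a           # system gain minus incentive cost

for episode in range(5000):
    for t in range(T):
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[t].argmax())
        r = step(t, a)
        # standard Q-learning update toward the bootstrapped target
        target = r + gamma * Q[t + 1].max()
        Q[t, a] += alpha * (target - Q[t, a])

policy = Q[:T].argmax(1)             # learned incentive level per period
print(policy)
```

    The paper's actual agent replaces this table with a deep network and adds DDQN targets, prioritized replay, and noisy exploration; the incentive-cost penalty plays the role of the `- 1.0 * a` term here.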

    This paper not only theoretically uncovers the dynamic patterns in long-term cross-silo federated learning cooperation but also proposes innovative incentive mechanisms to enhance cooperation efficiency, offering fresh insights and methodologies for effectively facilitating data sharing and cooperation in the contemporary information era.

  • Xue Yang, Siyu Ding, Zichao Ling, Ke Tan
    2024, 3(1): 145-180.

    Equity crowdfunding is a new financing mode with the advantages of a low threshold, high efficiency, and low cost. It provides a financing channel for small and medium-sized enterprises and absorbs the idle funds of the public. However, this emerging mode also faces many problems, such as the legal environment, the credit system, intellectual property protection, and social cognition, which affect the high-quality development of enterprises and pose risks to investors’ decision-making. Therefore, this article explores the key factors that affect the post-crowdfunding financing performance of equity crowdfunding enterprises, based on signal theory and using data from the Seedrs equity crowdfunding platform and Crunchbase. We examine the impacts of project characteristics, crowdfunding performance, and advertising effect on the subsequent financing performance of start-ups that completed equity crowdfunding. Subsequent financing performance includes three aspects: whether the start-up can enter the next round of financing, the time required for the next round, and the amount raised in the next round.

    Results show that project characteristics (such as project valuation, target amount, equity share, and policy preference), crowdfunding performance (such as previous financing experience), and advertising effect (such as the number of investors) all have significant positive effects on subsequent financing performance, i.e., on whether the start-up can enter subsequent financing, the time required for subsequent financing, or the amount of subsequent financing. In addition, video introduction length, city level, company operation object, digitalization level, financing completion time, and company age also have certain impacts on subsequent financing.

    Specifically, we review the existing literature on crowdfunding, equity crowdfunding, and private equity, and summarize the research gaps and contributions. We point out that most of the existing literature focuses on whether equity crowdfunding projects can successfully raise funds, but lacks a detailed and rigorous study of whether the funded enterprises can successfully enter the next round of financing and of their subsequent financing performance.

    We further propose research hypotheses based on signal theory, and analyze the impact of project characteristics, crowdfunding performance, and advertising effect on subsequent financing performance. We argue that project characteristics, crowdfunding performance, and advertising effect can be regarded as information signals for investors, and that signals from these aspects greatly affect investors’ decision-making and subsequent financing performance. Positive signals can enhance investors’ confidence, attract more funds, and promote the long-term success of equity crowdfunding enterprises.

    We collect and process data from the Seedrs equity crowdfunding platform and Crunchbase, and conduct descriptive statistics and regression analysis. The article selects 1,024 equity crowdfunding projects that were successfully completed on the Seedrs platform from January 2012 to December 2019, and matches them with data from Crunchbase to obtain information on subsequent financing rounds. Three dependent variables are defined to measure subsequent financing performance: whether the project can enter the next round of financing, the time required for the next round, and the amount of the next round. Several independent variables are constructed to capture project characteristics, crowdfunding performance, and advertising effect. We control for other variables that may affect subsequent financing performance, such as video introduction length, city level, company operation object, digitalization level, financing completion time, and company age.

    The article reports the main results of the regression analysis, and discusses the implications and limitations. The article finds that:

    (1) Project characteristics have significant positive effects on subsequent financing performance. Specifically, project valuation, target amount, equity share, and policy preference all have positive effects on whether the project can enter the next round of financing, the time required for the next round, or the amount of the next round. These results suggest that project characteristics can send positive signals to investors about project quality and potential return, and affect investors’ evaluation of and decisions about the project.

    (2) Crowdfunding performance also has significant positive effects on subsequent financing performance. Specifically, previous financing experience has a positive effect on whether the project can enter the next round of financing and on the amount of the next round. This result suggests that previous financing experience can send a positive signal to investors about the project’s credibility and ability, and affect investors’ trust and confidence in the project.

    (3) Advertising effect also has significant positive effects on subsequent financing performance. Specifically, the number of investors has a positive effect on the amount of the next round of financing. This result suggests that the number of investors can send a positive signal to investors about the project’s exposure and reputation, and affect investors’ perception of and attention to the project.

    The article concludes by summarizing the main findings, contributions, and implications, and by pointing out limitations and directions for future research. This study provides a systematic empirical analysis of the post-crowdfunding financing performance of equity crowdfunding projects, and reveals, from different aspects, the key factors and mechanisms that affect subsequent financing performance. It has important implications for equity crowdfunding platforms, start-ups, and investors, as well as for policymakers and regulators. The study has some limitations, such as the data source, the measurement of variables, and causal inference; future research can further explore the impact of equity crowdfunding on the long-term development and survival of start-ups, and the role of social networks and institutional factors in equity crowdfunding.

  • Xiaochen Zhang, Jing Zhang, Kuangnan Fang, Xiaodong Yan
    2024, 3(1): 181-198.

    The prediction of financial distress among listed companies has perennially been a focal point in financial research. A scientific model is conducive to preventing financial distress and improving early-warning management of the crisis. From a methodological perspective, the financial distress prediction problem can be framed as a binary classification issue, where company information acts as the explanatory variables of the prediction model. The output is binary, with 1 indicating a company facing financial distress and 0 indicating a company not facing financial distress. Among various financial distress prediction methods, the logistic model is widely used due to its advantages of simple calculation and straightforward coefficient interpretation.

    Hambrick and Mason (1984) suggested that psychological factors such as the internal cognition, emotions, and values of executives determine their decision-making behavior, thereby significantly impacting business management, financial condition, and future development. With China’s rapid economic development, enterprise investment, mergers and acquisitions, and group operations have resulted in an increasing number of directors, supervisors, and senior management personnel concurrently holding positions in two or more enterprises, forming an executive interlock network. By connecting executives, this network embeds enterprises in a web of relationships, making them increasingly closely related and significantly influencing enterprise behavior and performance. Hence, it becomes imperative to integrate the executive network into the model when predicting the financial distress of listed companies. However, existing financial distress prediction models have largely overlooked the impact of the executive network.

    We propose a logistic model that incorporates prior information on the sample network to handle data with a network structure. We categorize variables into structural and non-structural variables, based on whether their coefficients are influenced by the network structure. The coefficients of structural variables are allowed to vary across samples, and a Laplacian quadratic penalty encourages the structural-variable coefficients of connected samples in the network to be similar. The first part of the objective function is the negative log-likelihood function, and the second part is the Laplacian quadratic penalty; the tuning parameter is selected by five-fold cross-validation. If the tuning parameter is 0, the objective function reduces to the traditional logistic model. The prediction process involves three steps. First, the model is trained on the samples and sample network of the training set, yielding estimates of the coefficients of the non-structural variables and of the structural variables for the training set. Second, the structural-variable coefficients for new samples are calculated based on the sample network. Finally, the explanatory variables and estimated coefficients are used for prediction.
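    The two-part objective just described, negative log-likelihood plus Laplacian quadratic penalty, can be sketched on toy data as follows. This is an illustration under simplifying assumptions (one structural and one non-structural variable, synthetic network and outcomes, a fixed tuning parameter rather than cross-validation, and a small ridge term of our own for numerical stability), not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: n samples, one "structural" variable whose coefficient varies
# across samples, one shared non-structural variable (the paper uses 38 indicators).
n = 60
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T          # symmetric adjacency (sample network)
L = np.diag(A.sum(1)) - A               # graph Laplacian
x_s = rng.normal(size=n)                # structural variable
x_ns = rng.normal(size=n)               # non-structural variable
theta_true = 1.0 + 0.5 * rng.normal(size=n)
logits = theta_true * x_s + 0.8 * x_ns
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

lam = 1.0                               # tuning parameter (paper: five-fold CV)

def objective(params):
    theta, beta = params[:n], params[n]
    z = theta * x_s + beta * x_ns
    # negative log-likelihood of the logistic model: sum log(1+e^z) - y*z
    nll = np.sum(np.logaddexp(0.0, z) - y * z)
    # Laplacian quadratic penalty: sum over network edges of (theta_i - theta_j)^2
    pen = lam * theta @ L @ theta
    # tiny ridge term (our addition) to keep isolated nodes well conditioned
    pen += 1e-3 * theta @ theta
    return nll + pen

res = minimize(objective, np.zeros(n + 1), method="L-BFGS-B")
theta_hat, beta_hat = res.x[:n], res.x[n]
print(res.success, round(float(beta_hat), 3))
```

    Setting `lam = 0` (and dropping the ridge term) recovers the traditional logistic fit, matching the reduction noted above.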

    Section 3 presents simulation studies evaluating the performance of the proposed method. It is compared with traditional logistic models, neural networks (NNets), random forests (RF), support vector machines with sigmoid and polynomial kernels (SVM1 and SVM2), and decision trees. Monte Carlo simulation results demonstrate that the proposed method outperforms the other methods, suggesting that accounting for the sample network structure improves both parameter estimation and prediction for new samples. Furthermore, when the sample size is small and the variables possess a special structure, conventional black-box models exhibit subpar performance.

    We employ the proposed method to forecast the financial distress of listed companies. To ensure data availability, listed companies marked with "*ST" or "ST" are treated as financially distressed samples. Data from year t-2 are used to predict whether a company will be marked with "*ST" or "ST" in year t. Explanatory variables serve as the foundation of the entire predictive model, so the selection of scientifically sound indicators is paramount; we select 38 indicators, detailed in Table 1, with descriptive analyses provided in Table 2. The data used to construct the executive network come from the CSMAR database: two listed companies are deemed connected in the network if they share at least one executive. The prediction results show that the proposed method significantly outperforms the alternatives; thus, incorporating the executive network into the model improves accuracy. To analyze coefficient estimation, we fit both the proposed method and the traditional logistic model on the entire sample dataset; the estimated values corresponding to these two methods are depicted in Figures 7 and 8.
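    The network-construction rule stated above (two listed companies are connected if they share at least one executive) can be sketched as below. The data layout, a firm-to-executives mapping, is a hypothetical simplification of the CSMAR records.

```python
from collections import defaultdict
from itertools import combinations

def build_executive_network(firm_execs):
    """Return the undirected edge set of the executive network.

    firm_execs: dict mapping firm id -> set of executive names.
    Two firms are connected iff they share at least one executive.
    """
    # Invert the mapping: executive -> firms employing him/her
    exec_to_firms = defaultdict(set)
    for firm, execs in firm_execs.items():
        for e in execs:
            exec_to_firms[e].add(firm)
    # Every pair of firms sharing an executive gets an edge
    edges = set()
    for firms in exec_to_firms.values():
        for f1, f2 in combinations(sorted(firms), 2):
            edges.add((f1, f2))
    return edges
```

    The resulting edge set can then be converted into the adjacency matrix that feeds the Laplacian penalty.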

    In future research, there is potential to integrate more intricate network structures into financial distress prediction models, while employing variable selection methods to handle high-dimensional data effectively. Additionally, this study primarily focuses on the logistic model with a sample network structure, which could be expanded to other models such as multi-class logistic regression and Poisson regression. Exploring the application of different models across various fields would be a valuable avenue for further investigation.

  • Hu Yang, Yuhao Cheng, Ji Li, Yu Zhang
    2024, 3(1): 199-226.

    As the e-commerce market matures, competition among e-commerce platforms has become increasingly intense. Attracting new customers is far harder than retaining existing ones, making customer repurchases a crucial means for e-commerce platforms to increase profits. Predicting customers' repurchase tendencies and frequencies is key to formulating marketing strategies for these platforms, attracting widespread attention in fields such as marketing, operations research, statistics, and computer science. Predicting repurchase tendencies also helps marketers understand the main factors affecting consumer loyalty, thereby better serving platform customer relationship management. Existing research often relies on theories of consumer behavior, proposing hypotheses and using methods such as surveys and structural equation modeling to confirm the factors influencing consumer repurchases. Some studies adopt data-driven approaches, using models such as random forests to predict consumers' repurchase intentions. As e-commerce accumulates more data, data-driven research methods are gaining importance. However, these methods are limited to modeling frequency-domain indicators and struggle to depict consumers' online browsing trajectories. Consumers' online shopping behaviors not only record their product-seeking process but also reflect their shopping intentions, which can, to some extent, indicate their repurchase intentions. Common approaches transform online shopping behaviors into frequency-domain indicators such as click counts, which fails to capture the popularity of clicked products on e-commerce platforms and obscures the interaction between consumers and products. Complex network analysis offers new insights into mining online consumer behaviors and has seen some application: studies show that the number of links to a product is associated with its demand, and that the centralization of similar-product networks affects the demand for focal products. Therefore, using complex networks to depict consumers' online clicking behaviors and extracting relevant features can significantly improve the accuracy of repurchase prediction. Beyond accuracy, marketing is more concerned with model interpretability: an interpretable prediction model helps identify the factors affecting consumer repurchase intentions, thereby avoiding risks from unmet marketing expectations.

    This study proposes an interpretable consumer repurchase prediction model based on clickstream networks. Grounded in consumer behavior theory, the model employs complex network methods to measure users' browsing activities and extracts features that characterize product popularity and consumer behavior, ensuring a degree of interpretability of the extracted features. It then uses classic machine learning models to predict whether a consumer will repurchase the same product within 7 days. Through a series of comparative experiments, the study demonstrates that the three sets of features extracted on the basis of consumer behavior theory (product click features, consumer click features, and interaction click features) all enhance the accuracy of repurchase predictions. Moreover, removing any one category of features from the feature set constructed from the clickstream network significantly decreases prediction accuracy compared with the full-feature model, further confirming the necessity of including clickstream features in the prediction model. Regarding interpretability, the features extracted on the basis of consumer behavior theory are inherently interpretable, which is further confirmed by post-hoc analysis using Shapley values, also validating the importance of the extracted features. Finally, robustness analyses, including Lasso feature selection and varying the proportion of training samples, show that the proposed method performs stably. Therefore, the interpretable consumer repurchase prediction model based on clickstream networks shows relatively good performance in terms of prediction accuracy, interpretability, and robustness.
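    As an illustration of the kind of clickstream features such a model draws on, the sketch below derives a simple product-popularity and consumer-breadth measure from a bipartite consumer-product click network. The paper's actual features are richer; all names here are our assumptions.

```python
def clickstream_features(clicks):
    """Derive simple features from (consumer_id, product_id) click events.

    product_pop[p]      : number of distinct consumers who clicked p
                          (a product click / popularity feature)
    consumer_breadth[c] : number of distinct products clicked by c
                          (a consumer click feature)
    """
    prod_clickers, cons_products = {}, {}
    for c, p in clicks:
        prod_clickers.setdefault(p, set()).add(c)
        cons_products.setdefault(c, set()).add(p)
    product_pop = {p: len(s) for p, s in prod_clickers.items()}
    consumer_breadth = {c: len(s) for c, s in cons_products.items()}
    return product_pop, consumer_breadth
```

    Features of this kind, together with consumer-product interaction features, would then be fed into a classifier that predicts 7-day repurchase.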

    This research interprets the role of clickstream networks in predicting repurchase intentions from a big-data-driven perspective. Compared with classic theory-driven studies, it may not reveal the causal relationship between clicks and repurchase intentions, but by modeling repurchase intentions it can provide references and insights for business operations management. We believe that, when making recommendations, businesses should on the one hand recommend products with a high likelihood of repurchase to consumers, and on the other hand reduce recommendations of products with particularly low purchase intentions. Enhancing consumer purchase intentions requires combining theory-driven approaches for argumentation, which is where data-driven methods fall short. In terms of research methodology, although the features extracted in this paper on the basis of theories such as consumer behavior achieve a certain degree of accuracy, robustness, and interpretability, they remain limited compared with the automatic feature extraction of deep learning methods. Regarding the research data, this paper uses only one month of data, so both the indicators and the data have certain limitations; nevertheless, as a data-driven research method, it holds practical significance. In future research, we will explore more advanced modeling methods, such as deep graph neural networks, propose more management-relevant research questions based on business practice, incorporate more data, and test these approaches in application within businesses.

  • Jie Mao, Cheng Wan
    2024, 3(1): 227-246.

    Based on the theory of growth at risk, this paper measures the economic downside risk for each city in China and empirically examines the impact of local government debt on economic downturn risk against the background of the implementation of the new "budget law". The results show that an increase in local government debt reduces economic downside risk in the short term but aggravates it in the medium and long term. This result remains robust not only when using different measures of local government debt and economic downside risk but also when taking spatial spillover effects and endogeneity into consideration. Moreover, after classifying the samples by the level of economic marketization, this paper further finds that, compared with regions with a lower degree of economic marketization, an increase in local government debt significantly reduces economic downside risk in the short term and significantly increases it in the long term in regions with higher local government debt; the same holds for regions with larger economic downside risk. This paper further finds that local government debt affects economic downturn risk through three mechanisms: investment crowding out, credit crowding out, and innovation suppression.
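    The growth-at-risk notion underlying the downside-risk measure can be illustrated as follows: downside risk is often summarized as the gap between median growth and a low conditional quantile of the growth distribution. This is a generic sketch of the concept, not the paper's estimator; the function name and tail probability are our assumptions.

```python
import numpy as np

def downside_risk(growth, alpha=0.05):
    """Growth-at-risk style downside measure.

    growth : array of (conditional) growth rates for one city
    alpha  : lower-tail probability (e.g. 0.05)
    Returns the distance between median growth and the alpha-quantile;
    a larger value indicates a fatter downside tail.
    """
    g = np.asarray(growth, dtype=float)
    return float(np.quantile(g, 0.5) - np.quantile(g, alpha))
```

    In a growth-at-risk application, the conditional quantiles would come from quantile regressions of future growth on current conditions rather than from raw draws.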

    Compared with the existing literature, this paper may make two marginal contributions. First, unlike most of the existing literature on economic downside risks, which focuses on their measurement and prediction, this paper examines the causal relationship between local government debt and macroeconomic downside risks, which enriches the literature on macroeconomic downside risks. Second, unlike most of the existing literature on local government debt, which focuses on its short-term economic effects, this paper examines both the short-term and long-term economic effects of local government debt from an inter-temporal perspective, which broadens the research perspective on local government debt.

    This paper not only provides a new perspective on the causal inference of economic downturn risks, but also offers a reference for effectively preventing systemic financial risks and local government debt risks and for maintaining high-quality economic development. Based on its conclusions, this paper makes the following suggestions. First, to control the scale of local government debt at the source and prevent systemic financial risks caused by local government debt risks, local governments should further establish the debt management system, improve the debt information disclosure system, actively implement debt supervision and early-warning mechanisms, and strictly enforce debt investment and financing decision-making mechanisms. Second, local governments should fully recognize the dual influence of local government debt: on the one hand, debt can be raised to improve the government's ability to regulate the economy, promote the rapid development of urban public infrastructure, and stimulate stable growth of the local economy; on the other hand, it brings negative impacts in the long term. Last but not least, local governments should give more rights to micro-market entities, avoid excessive crowding out of enterprise resources, promote better allocation of market resources, minimize the negative consequences of local government debt on the macroeconomy, and constantly optimize the macroeconomic governance system, so as to ultimately achieve high-quality economic development.

  • Ruxiao Xing, Bo Li, Yunchao Guo
    2024, 3(1): 247-268.

    With the continuous evolution of the global value chain (GVC) over time, China's strategic emerging industries have become deeply embedded in it. By integrating into global innovation networks, strategic emerging industries achieve continuous evolution and development. Under the current pressure of green transformation, cultivating and developing green, environmentally friendly strategic emerging industries has become inevitable. Hence, as the global division of labor becomes more pronounced, the integration of China's strategic emerging industries into the GVC will undeniably influence their green technological innovation. However, there is still little literature linking the green technology innovation of enterprises with the GVC participation of their industries. Given that enterprises are the main implementers of green transformation, we argue that it is necessary to focus on enterprise green technology innovation. In addition, few studies have considered the unique characteristics and strategic significance of strategic emerging industries when examining corporate green technology innovation within these industries.

    Based on data from 1476 sample firms in Chinese strategic emerging industries, this paper constructs a fixed-effects model using enterprise data. We explore the mechanism through which the GVC participation of Chinese strategic emerging industries influences firms' green technology innovation. The results reveal that: ① the GVC participation position of strategic emerging industries significantly and positively affects the green technology innovation performance of enterprises; ② backward participation plays a negative mediating role between the GVC participation position and enterprises' green technology innovation; ③ the regional macroeconomic level positively moderates the relationship between GVC participation position and green technology innovation.
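    Fixed-effects models of the kind described are typically estimated after the within transformation, which demeans each variable within each firm. The helper below is a generic sketch of that step, not the paper's code; variable names are assumptions.

```python
import numpy as np

def within_transform(X, entity_ids):
    """Demean each column of X within entities (the fixed-effects
    'within' transformation, removing firm-level fixed effects)."""
    X = np.asarray(X, dtype=float)
    ids = np.asarray(entity_ids)
    out = np.empty_like(X)
    for g in np.unique(ids):
        mask = ids == g
        # Subtract the entity-specific mean of every column
        out[mask] = X[mask] - X[mask].mean(axis=0)
    return out
```

    Regressing the demeaned green-innovation outcome on the demeaned GVC-position measure (plus controls) then yields the fixed-effects estimates.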

    According to the above results, this paper draws the following insights. First, the government must recognize the importance of an industry's GVC embedding position for enterprises' green technology innovation and actively promote the development of strategic emerging industries. Second, the development of green technology cannot be separated from economic support. Therefore, the government can combine the development of traditional industries and strategic emerging industries, using new technologies to empower traditional industries and achieve economic growth. At the same time, the government can formulate multi-stage, multi-type policies to encourage continued research and development in green technology innovation. Third, enterprises should maintain a proactive attitude when participating in the global division of labor. They should establish networks of trustworthy relationships with other enterprises, strengthen their sense of independent innovation, and learn in the process of cooperation. By strengthening enterprises' independent innovation and learning, the industry can develop soundly from the enterprise perspective and escape the dilemma of being trapped at the bottom of the value chain.

    This paper enriches the study of the economic transformation of strategic emerging industries from the enterprise perspective. It not only provides a new perspective for the study of value chain theory and green technology innovation, but also offers theoretical reference and guidance for the practice of green innovation in other industries.