context | A | B | C | D | label |
|---|---|---|---|---|---|
Nonetheless, in modern large-scale assessments or surveys where the data collection scope is unprecedentedly big and high-dimensional, both N and J can be quite large. | JML is currently considered the most efficient tool for estimating GoM models. However, due to its iterative manner, JML’s efficiency is still unsatisfactory when applied to modern big datasets with many observations and many items. Therefore, it is desirable to develop more scalable and non-iterative estimation method... | Although this JML algorithm is computationally more efficient compared to MCMC algorithms, it is still not scalable to very large-scale response data due to its iterative manner. Therefore, it is of interest to develop a non-iterative estimation method suitable to analyze modern datasets with a large number of items an... | JML is currently considered the most efficient tool for estimating GoM models. However, due to its iterative manner, JML’s efficiency is still unsatisfactory when applied to very large-scale data with many observations and many items. Therefore, it is desirable to develop more scalable and non-iterative estimation meth... | Our method provides comparable estimation results compared with JML and Gibbs sampling, and is much more scalable to large datasets with many subjects and many items. | A |
...(N−1)²} = (B(N−1) + 2 + 1/(N−1)) / (16(N−1)²). γ − ∑_{j=1}^{N} a_j < (⌊log₂(N−1)⌋ + 3 + 1/(... | Operations that are carried out to obtain each new term of the series, and to update the partial sum and error bound. The total number of these operations is roughly proportional to the total number of terms in the partial sum when the algorithm ends, N_M... | Standard simulation methods, as implemented in computer software, typically define the parameter τ as a floating-point value. This inevitably incurs a loss of precision. More specifically, it is not possible to represent irrational values (or even certain rational values) exactly using floating-point vari... | This paper describes a general algorithm that solves the above problem when there exists a representation of τ as a series with rational terms. The algorithm is described in §2, and its basic properties are addressed. The complexity of the algorithm is analysed in §3. Application to specific values, inclu... | Thanks to Peter Occil for bringing to the author’s attention the problem of simulating Euler’s constant without using floating-point arithmetic, which led to the algorithm and results presented in this paper; also for pointing out reference [11], and for some corrections to an early version of the manuscript. | D |
However, there do not seem to exist theoretical guarantees for the SCMS algorithm to consistently estimate the full ridge set, and, as discussed below (see Section 2.3 and the Appendix), the SCMS algorithm might miss some parts of the ridge, although the point-wise convergence property of SCMS is studied in Zhang and... | The remaining part of the paper is organized as follows. In Section 2 we introduce the formal definition of ridges. This is followed by our extraction algorithms, whose performance is illustrated using some numerical studies in ℝ². The m... | We apply our algorithms to a data set of active and extinct volcanoes in Japan available at https://en.wikipedia.org/wiki/List_of_volcanoes_in_Japan. The locations of these volcanoes exhibit a clear filamentary structure with three major branches sharing an intersection. The results using SCMS and our algorithms are sh... | Brief outline of this section: As the algorithms are targeting Ridge(f̂), while the theoretical target is Ridge(f), we first control the distance between these two sets (see Theorem 6). Then, in Theorem 8, we consider the continuous version of the ... | This important section can be interpreted as providing population level versions of our main convergence results for the proposed algorithms presented above. Indeed, the algorithms can be interpreted as ‘perturbed versions’ of corresponding population level versions. We will discuss the precise meaning of this in what ... | A |
...V_k = diag{ψ_k′(r_k)}(I_n... | For a small constant η > 0 independent of n, p, say η = 0.05, | error, up to an arbitrarily small constant η > 0 independent of n, p. | If (M, q, η, μ, γ) and η̃ > 0 are independent of n, p then | |ψ| ≤ M for some constant M independent of n, p; | A |
Clearly, constantly changing the target density within a Monte Carlo algorithm makes its theoretical analysis difficult. Moreover, in MCMC algorithms, updating the surrogate using past states of the chain produces the loss of the Markov property, so (as in the adaptive MCMC literature) one needs to carefully address this point [... | Secondly, the surrogate construction is driven by the Monte Carlo algorithm, namely, the surrogate is refined in regions discovered by the algorithm along the iterations. | In the two-stage scheme, the process of building the surrogate and performing the sampling is separated. During the initial stage, the primary objective is to minimize the bias of the surrogate. In the subsequent stage, our focus shifts to reducing the variance of the Monte Carlo approximation of the surrogate. Due to ... | A generic MH algorithm targeting a surrogate that is refined over T iterations is given in Algorithm 3. This algorithm falls within the iterative refinement scheme from the previous section. | Blocks B1 and B3 refer to the two possible strategies (offline or iteratively within the Monte Carlo steps) for building the surrogate. The former considers an offline construction, that is totally independent of the Monte Carlo algorithm that will be run afterwards. The latter construction aims to build the surrogate ... | A |
We prove uniform rates for an RKHS estimator of the long term dose response curve, under effective dimension and smoothness conditions on the regression and embedding. An interesting direction for future work is whether uniform rate improvements are possible by placing additional assumptions, perhaps building on the te... | Section 6 demonstrates that our long term dose response estimators recover the long term effects of continuous actions reasonably well, using real data. | To demonstrate that our proposed kernel methods are practical for empirical research, we evaluate their ability to recover long term dose response curves. Using short term experimental data and long term observational data, our methods measure similar long term effects as an oracle method that has access to long term e... | We illustrate the practicality of our approach by estimating the long term dose response of Project STAR, modelling class size as a continuous action. By allowing for continuous actions and heterogeneous links, our long term dose response estimate suggests that the effects of class size are nonlinear. Using short term ... | Previous methods using kernels for continuous actions do not handle the complex linkage between short term and long term effects across data sources, and therefore cannot analyze long term causal inference. Our contribution is a method to do so. | B |
As seen in Section 4.3.2, the performance of SAA is poor for certain instances even with infinitely many samples. Our goal in this section is twofold: first, we complement the discussion sparked by Proposition 7 by investigating whether there exists a policy which can ensure a vanishing regret for the pricing problem u... | We next present an alternative sample-size-agnostic policy for which the asymptotic worst-case vanishes as ϵ goes to 0. Furthermore, we characterize the worst-case performance of that policy, showing that it has the best possible dependence with respect to ϵ. | We show in Section 4.3 that the upper bounds based on the approximation parameter directly imply bounds on the worst-case regret of SAA for Newsvendor under both heterogeneity types and for pricing under the Kolmogorov heterogeneity. Furthermore, we complement these results with lower bounds on the best achievable perf... | For the Wasserstein distance, we show that similarly to pricing, SAA incurs a worst-case regret which does not shrink to 0 as ϵ goes to 0; but a policy which inflates the SAA decision appropriately achieves rate-optimality. The proof techniques and results derived for ski-rental leverage the s... | Recall the pricing problem with Wasserstein distance introduced in Section 4.3.2. For this problem, we have seen that SAA incurs an asymptotic worst-case regret which is not vanishing as ϵ goes to 0. | D |
We run 50 simulations and we compare the chain ladder model performance to the following three sets of potential models: | Models EI_R on the test set across the NAIC datasets. On each dataset we selected the best performing model via a validation set from the three families | For validation and testing, we adopt the approach illustrated in Figure 12. In particular, on each of the 50 runs and for each of the three model families (sets (a), (b) and (c)), we first choose the model that minimizes the EI_R on the vali... | We displayed the EI_R for the different lines of business in Figure 9. The plot shows, for each family of model the EI_R on the test of the best performing model among each f... | To evaluate the performance of each set we start by splitting the data into training, validation and testing as illustrated in Figure 11 in Appendix E. Then, for each dataset the best model within the three different model sets is selected based on the validation set, and, finally, the error incidence (EI_m... | B |
Detailed analysis on model selection (i.e. choice of models in SuperLearner), choice of positivity constant ϵ and potential outliers is presented in the supplementary materials, which justifies the choice in our analysis. The effects of high fruit intake on preterm birth, preeclampsia, gestation... | As described in detail previously (Haas et al., 2015), nuMoM2b enrolled 10,083 people in 8 US medical centers from 2010 to 2013. Eligibility criteria included a viable singleton pregnancy, 6-13 completed weeks of gestation at enrollment, and no previous pregnancy that lasted ≥ 20 weeks of gestation. ... | Preterm birth, small-for-gestational-age birth, preeclampsia, and gestational diabetes are adverse pregnancy and birth outcomes that contribute to one-quarter of infant deaths in the U.S. and pose a tremendous economic and emotional burden for societies and families (Butler et al., 2007; Stevens et al., 2017; Dall et... | Our methods with above adjustments are applied to each combination of treatment A ∈ {fruit intake, vegetable intake} and outcome Y ∈ {preterm birth, SGA birth, gestational diabetes,... | From the results above, we see the effects of high fruit intake on preterm birth, preeclampsia and SGA birth are significantly negative at level 0.05 in the target population, which implies eating more fruit potentially causes a lower risk of these adverse pregnancy outcomes. For the results on vegetables, the effect o... | D |
To summarize, the main contributions of this work are: (a) Basis encoding We propose introducing a slight modification to the computational graph of FM variants that facilitates encoding of numerical features as a vector of basis functions; (b) Spanning properties We show that our modification makes any model from a fami... | A very simple but related approach to ours was presented in Covington et al. (2016). The work uses neural networks and represents a numerical value z as the triplet (z, z², √z)... | Since our approach works on any tabular dataset, and isn’t specific to recommender systems, we mainly test our approach versus binning on several tabular data-sets with abundant numerical features that have a strong predictive power: the California housing (Pace & Barry, 1997), adult income (Kohavi, 1996), Higgs (Bald... | Putting aside the FM variants, there is a large body of work dealing with neural networks training over tabular data (Arik & Pfister, 2021; Badirli et al., 2020; Gorishniy et al., 2021; Huang et al., 2020; Popov et al., 2020; Somepalli et al., 2022; Song et al., 2019; Hollmann et al., 2022). Neural networks have the po... | Finally, any comprehensive discussion on tabular data would be incomplete without mentioning gradient boosted decision trees (GBDT) (Chen & Guestrin, 2016; Ke et al., 2017; Prokhorenkova et al., 2018), which are known to achieve state-of-the-art results (Gorishniy et al., 2021; Shwartz-Ziv & Armon, 2022). However, GBDT... | C |
The impact of our contributions lies with a modular and scalable formulation of synthetic AIF agents. Using variational calculus, we have derived general message update rules for GFE-based control. This allows for a modular approach to synthetic AIF, where custom message updates can be derived and reused across models ... | The general update rules allow for deriving GFE-based messages around alternative sub-models, including continuous-variable models and possibly chance-constrained models (van de Laar | The impact of our contributions lies with a modular and scalable formulation of synthetic AIF agents. Using variational calculus, we have derived general message update rules for GFE-based control. This allows for a modular approach to synthetic AIF, where custom message updates can be derived and reused across models ... | In this section we apply the general message update rules of Sec. 4.4 to a specific discrete-variable model that is often used in AIF practice. Using the general results we derive messages on this specific model. | The message updates for a data-constrained observation variable (Fig. 3, left) reduce to standard VMP updates, as derived by (van de Laar, 2019, App. A). | A |
Comparison with SNN on Real-data. We tried to also compare with SNN on the Glance dataset. In this dataset, the number of users is 1305 and the number of content items is 1471. Unfortunately, SNN was not able to finish running in a reasonable time. Indeed, even on the synthetically generated dat... | Table 1: Comparison of performance of MNN and USVT on Glance data. As can be seen, MSE for MNN is >28x better. | Table 3: MSE, MAE, and runtime for MNN and SNN on synthetic datasets (average ± standard deviation across 10 experimental repeats). “–” means the method did not complete within 24 hours. | Table 2: R², MSE, MAE, and max error for matrix completion methods on synthetic datasets (average ± standard deviation across 10 experimental repeats). | Comparison with SNN on Real-data. We tried to also compare with SNN on the Glance dataset. In this dataset, the number of users is 1305 and the number of content items is 1471. Unfortunately, SNN was not able to finish running in a reasonable time. Indeed, even on the synthetically generated dat... | B |
In preliminary work presented at QEST 2022 (Kofnov et al., 2022), we provided a solution to this problem leveraging the theory of general Polynomial Chaos Expansion (gPCE) (Xiu and Karniadakis, 2002), which consists of decomposing a non-polynomial random function into a linear combination of orthogonal polynomials. gPC... | In Fig. 1 we illustrate our gPCE-based approach via the Taylor rule in monetary policy, where we estimate the expected interest rate given a target inflation rate and the gross domestic product (GDP). In this example, we approximate the original log function with 5th degree polynomials and obtain a Prob-solvable loo... | The program uses a non-polynomial function (log) in the loop body to update the continuous-state variable (i). The top right panel contains the Prob-Solvable loop (with polynomial updates) obtained by approximating the log function using polynomial chaos expansion (up to 5th degree). In the bottom left, w... | In preliminary work presented at QEST 2022 (Kofnov et al., 2022), we provided a solution to this problem leveraging the theory of general Polynomial Chaos Expansion (gPCE) (Xiu and Karniadakis, 2002), which consists of decomposing a non-polynomial random function into a linear combination of orthogonal polynomials. gPC... | Figure 1. The probabilistic loop in the top left panel encodes the Taylor rule (Taylor, 1993), an equation that prescribes a value for the short-term interest rate based on a target inflation rate and the gross domestic product. | A |
In this section, we describe the preliminary graph node classification approach based on discrete potential theory after introducing some essential mathematical concepts. | In this paper, we propose a probability-based objective function for semi-supervised node classification that takes advantage of simplicial interactions of varying order. Given that densely connected nodes are likely to have similar properties, our proposed objective function imposes a greater penalty when nodes connec... | In many real-world systems, network interactions are not only pairwise, but involve the joint non-linear couplings of more than two nodes [26]. Here, we fix some terminology on higher-order networks that will be used throughout the paper. | Networks represented by graphs consist of nodes representing entities of the system, and edges depicting their interactions. Such graphical representations facilitate insights into the system’s modular structure or its inherent communities [1, 2]. While traditional graph analysis methods only considered pairwise intera... | We also propose a novel graph generation model, Stochastic Block Tensor Model (SBTM). In general, traditional SBM-generated networks differ significantly from many real-world networks. Specifically, when comparing networks of equivalent density (that is, networks with an identical count of nodes and edges), SBM-based m... | B |
Finally, we note that our work also fits into the literature that leverages the power of ML for causal analysis and policy evaluation. The value added of doubly-robust procedures has been explored in applied works by, for example, Knaus (2022), Bach | Our method for panel data models with individual fixed effects is general and particularly relevant for applied researchers. We provide new estimation tools within the existing DML framework for use on panel data. In doing so, we broaden the reach of DML to a large family of empirical problems for which the time dimens... | Naghi (2024a, b). Because panel data are widely used in applied analyses, our proposed procedures for panel data models have the potential to attract the interest of applied researchers from various fields broadening the applicability of DML. | The second approach we consider follows more conventional techniques for panel data by transforming the data to remove entirely the fixed effects from the analysis. | In this paper, we develop and assess novel DML procedures for estimating treatment (or causal) effects from panel data with fixed effects. The procedures we propose are extensions of the correlated random effects (CRE), within-group (WG) and first-difference (FD) estimators commonly used for linear models to scenarios ... | B |
Global Energy Network E_θ^global(c, y). | The class and concept energy networks model class labels and concepts separately; in contrast, the global energy network models the global relation between class labels and concepts. | The class energy network learns the dependency between the input and the class label, while the concept energy network learns the dependency between the input and each concept separately. In contrast, our global energy network learns (1) the interaction between different concepts and (2) the interaction between all con... | Our ECBM consists of three energy networks collectively parameterized by θ: (1) a class energy network E_θ^class(x, y)... | To predict c and y given the input x, we freeze the feature extractor F and the energy network parameters θ and search for the optimal prediction of concepts ĉ... | B |
ℋ(ν) = 𝔼_{p_ν}[|C_ν(⋅)|] = ∫_𝒳 |C_ν(x_*)| p_ν(x_*|y) dx_*. | This functional is a weighted integrated Mean Squared Prediction Error (weighted IMSPE). The weight is the posterior density, which focuses the attention on the region of interest for the inverse problem. | We are interested in the variance integrated over the posterior distribution. The quantity of interest D_n is given by: | Based on these results, one could argue that the CSQ strategy can be situationally better as it is easier to set up while providing similar performance in the end. However, two counter-arguments can be pointed out. First of all, the IP-SUR strategy does exhibit a guarantee for the convergence of the integrated variance... | This work presents two new sequential design strategies to build efficient Gaussian process surrogate models in Bayesian inverse problems. These strategies are especially important for cases where the posterior distribution in the inverse problem has thin support or is high-dimensional, in which case space-filling desi... | A |
Control of the familywise error rate at a fixed level no longer works for ancestor regression. However, there is still a separation between ancestors and non-ancestors in terms of the effect size. For long time series, there is a sweet spot with high power at a fairly low error rate. Hence, the ordering of the p-values... | For non-ancestors, the observed average of the absolute z-statistics is close to the theoretical mean under the asymptotic null distribution as desired. On the right-hand side, we see that we can control the type I error at the desired level for every sample size. As expected, the power to detect ancestors increases wi... | Additionally, we show on the left side the performance of the LiNGAM algorithm as described in Hyvärinen et al. (2010). For this, we use the code published together with Moneta et al. (2013). As the LiNGAM algorithm by default does not search for sparse estimates B̂_τ... | Control of the familywise error rate at a fixed level no longer works for ancestor regression. However, there is still a separation between ancestors and non-ancestors in terms of the effect size. For long time series, there is a sweet spot with high power at a fairly low error rate. Hence, the ordering of the p-values... | At our target level α = 0.05, the power remains comparable to the case without hidden variables or is even increased for some sample sizes. The unobserved variables are mainly a problem for error control as the assumptions are not fulfilled but not for detection per se. Similarly, the powe... | D |
Figure 2(a) and 2(b) show the decision boundary of the model for the former and later cases respectively. | Figure 6 demonstrates how the scaling factor (α) influences test accuracy for models trained on datasets with 25% label corruption. We observe that increasing α generally improves test accuracy up to a certain point, after which accuracy gradually declines. | The noise transition matrix 𝒯 is a square matrix of size K × K, where K is the number of classes, which captures the conditional probability distribution of label corruption. | We leverage the cross-entropy loss, denoted by ℒ, of the trained model with parameters θ_*. | We observe that, with label corruption, the test accuracy of the model drops by 4.8% from the model trained without any label corruption. | D |
Suppose that T ≫ k² log³ T. | where C_1 > 0 is some sufficiently large universal constant. Then | there exists some x′ in 𝒩_ε such that | Then there exists some universal constant C_5 > 0 such that, for | constant c_R > 0 such that | C |
In order to circumvent parametric modeling assumptions, we propose a unification of the generative model and the inference process. Building upon the point estimators, we define a distinct variational family over global and local parameters for a fully Bayesian treatment of all variables. | We show how partial states and re-sampled indices generated by Smc can be interpreted as auxiliary random variables within a pseudo-marginal framework, thus establishing connections between variational pseudo-marginal methods and Vsmc (Naesseth et al., 2018; Moretti et al., 2021). | Pseudo-marginal methods are a class of statistical techniques used to approximate difficult-to-compute probabilities, typically by introducing auxiliary random variables to form an unbiased estimate of the target probability (Andrieu & Roberts, 2009). Beaumont (2003) introduced a method in genetics to sample genealogi... | A recent body of research has melded variational inference (VI) and sequential search. These connections are realized through the development of a variational family for hidden Markov models, employing Sequential Monte Carlo (Smc) as the marginal likelihood estimator (Maddison et al., 2017; Naesseth et al., 2018; Le et... | Section 3.1 adapts the Csmc approach to perform inference on jet tree structures. Section 3.2 reformulates Vcsmc for inference on global parameters. Section 3.2.1 utilizes Vcsmc methodology to learn parameters as point estimates. Section 3.2.2 defines a prior on the model parameters to construct a variational approxima... | A |
However, in less extreme cases, determining the need for zero-inflation versus an appropriate choice of block structure becomes essential. | Furthermore, Dong et al. [15] and Motalebi et al. [32] specifically focused on adapting stochastic block models to account for excess zeroes, underscoring the importance of accurately modelling sparsity for realistic network analysis. | To address this, appropriate likelihood-ratio tests and model comparison techniques have been developed for various models [8, 32, 15, 6]. | In red, the expected edge count distribution according to a DCSBM whose blocks have been obtained by modularity maximisation. | Traditional network models, such as the G(N, p) [17, 22], configuration models [9, 20, 7], and stochastic block models [26, 36], have been instrumental in advancing our understanding of complex networks. | B |
Hollander (2024)]. The fact that we can also identify the second-order asymptotics of order n allows us to prove a large deviation principle for the number of edges in the graph (in Theorem 2.3), as well as prove that most triangles are actually vertex disjoint (in Theorem 2.2), which would not have been pos... | In practice, we can only observe a large network without knowing its full architecture. From the modeling perspective it is important to be able to estimate unknown parameter(s) from observations. In this section, we show that it is possible to consistently estimate the parameters in the exponential random graph in (3.... | The type of models we investigated may be extended as well. We focussed on the number of vertices in triangles, but it would be natural to consider the number of edges in triangles instead. Since this number can vary much more (the number is at most n(n−1)/2 rather than n... | The above results in turn allowed us to suggest a range of sparse exponential random graph models, which is important because it is hard to identify sparse exponential random graph models with many triangles. Finally, our results allowed us to prove that the parameters of the model can be consistently estimated, a prop... | This paper is organised as follows. In Section 2, we estimate the second-order of the large-deviation probabilities of the rare event that a sparse Erdős–Rényi random graph has a linear number of vertices in triangles, study the structure of the graph conditionally on this rare event, and provide proofs for our main re... | C |
By contrast, normalizing flow (NF) models [14, 15] work by applying a series of bijective transformations to a simple base distribution (usually uniform or Gaussian) to deterministically convert samples to a desired target distribution. While NFs have been successfully used for posterior approximation [16, 17, 18, 19, ... | More recently, diffusion-based models (DBMs) [26, 27, 28, 29, 30, 31, 32, 33] have been shown to achieve state-of-the-art results in several generative tasks, including image, sound, and text-to-image generation. These models work by stipulating a fixed forward noising process (e.g., a forward stochastic differential e... | Limitations: One limitation of our model is its reliance on the participation ratio (7) as a measure of dimensionality. Because PR relies only on second-order statistics and our proposals (9) are formulated in the data eigenbasis, our method tends to favor the top principal components of the data when reducing dimensio... | Specifically, our contributions are: First, focusing on the case of unconditional generative models, we show how a previously established link between the SDE defining diffusion models and the probability flow ODE (pfODE) that gives rise to the same Fokker-Planck equation [30] can be used to define a unique, determinis... | Figure 1: SDE-ODE Duality of diffusion-based models. The forward (noising) SDE defining the DBM (left) gives rise to a sequence of marginal probability densities whose temporal evolution is described by a Fokker-Planck equation (FPE, middle). But this correspondence is not unique: the probability flow ODE (pfODE, right... | A |
Simultaneous estimation of multiple quantiles is asymptotically more efficient than separate estimation of individual regression quantiles or ignoring within-subject dependency (Cho, Kim, and Kim 2017). However, this approach does not guarantee non-crossing quantiles, which can affect the validity of the predictions an... | Most existing conformal methods for regression either directly predict the lower and upper endpoints of the interval using quantile regression models (Romano, Patterson, and Candès 2019; Kivaranovic, Johnson, and Leeb 2020; Sesia and Candès 2020; Gupta, Kuchibhotla, and Ramdas 2022) or first estimate the full condition... | Our proposed method for constructing non-convex prediction sets is related to the work of (Izbicki, Shimizu, and Stern 2022), who introduce a profile distance to measure the similarity between features and construct prediction sets based on neighboring samples. | Two advanced methods, Conformal Quantile Regression (CQR) (Romano, Patterson, and Candès 2019) and Conformal Histogram Regression (CHR) (Sesia and Romano 2021), extend this framework: | However, the validity of the produced intervals is only guaranteed for specific models under certain regularity and asymptotic conditions (Steinwart and Christmann 2011; Takeuchi et al. 2006; Meinshausen 2006). Many related methods for constructing valid prediction intervals can be encompassed within the nested conform... | D |
For these five datasets, the left plots present the desired miscoverage rate α𝛼\alphaitalic_α versus the true coverage rate. The closer the curve aligns with the line 1−α1𝛼1-\alpha1 - italic_α, the easier it is for the method to achieve the desired coverage. Our proposed method, CIA, is very close to the line 1−α1𝛼1... | This gap is particularly relevant in applications such as transductive conformal prediction on traffic networks. For example, existing Graph Neural Network (GNN) methods can predict the label of each road, where the label can be considered as the cost of traversing that road. This problem has been studied in (Huang et ... | Our main proposed method can be described as follows. The core idea is to establish a confidence interval using the exchangeability of the groups of indices. This method involves finding the absolute value of the difference between the sum of labels in a group and its prediction, or absolute residual, and using this as... | Under the choices of α𝛼\alphaitalic_α on the left plots, the right plots compare the coverage versus the prediction set size. The lower the curve, the more efficient and informative the prediction set provided by the method. In the plots of Figure 2, CIA is the most efficient in all datasets. In the plots of Figure 3,... | For these five datasets, the left plots present the desired miscoverage rate α𝛼\alphaitalic_α versus the true coverage rate. The closer the curve aligns with the line 1−α1𝛼1-\alpha1 - italic_α, the easier it is for the method to achieve the desired coverage. Our proposed method, CIA, is very close to the line 1−α1𝛼1... | C |
=1:4,j=1:5italic_I ( bold_x [ italic_i ] , bold_y [ italic_j ] ) = ∫ ∫ italic_p ( bold_x [ italic_i ] , bold_y [ italic_j ] ) roman_log divide start_ARG italic_p ( bold_x [ italic_i ] , bold_y [ italic_j ] ) end_ARG start_ARG italic_p ( bold_x [ italic_i ] ) italic_p ( bold_y [ italic_j ] ) end_ARG , italic_i = 1 : 4 ,... | We then we choose independent GP-based priors for each output 𝐲[j]𝐲delimited-[]𝑗\mathbf{y}[j]bold_y [ italic_j ], which use as features only those inputs that exhibit some influence on the output (‘influential subset’ in Table 2). The mean is chosen as a linear function of the form m(𝐱)=θvf=θ𝐱[0]𝑚𝐱𝜃vf𝜃𝐱d... | Table 2: Materials surrogate modeling: sensitivity analysis. Mutual information I(𝐱[i],𝐲[j])𝐼𝐱delimited-[]𝑖𝐲delimited-[]𝑗I(\mathbf{x}[i],\mathbf{y}[j])italic_I ( bold_x [ italic_i ] , bold_y [ italic_j ] ) (in nats) is computed using the scikit-learn package [38]. | To design the functional prior density p(g)𝑝𝑔p(g)italic_p ( italic_g ) in this non-trivial multi-input multi-output example, we first ran a simple sensitivity analysis on the training data to determine if some inputs had negligible influence on some of the outputs. Table 2 shows the mutual information between each i... | The mutual information equates 0 only if input 𝐱[i]𝐱delimited-[]𝑖\mathbf{x}[i]bold_x [ italic_i ] and output 𝐲[j]𝐲delimited-[]𝑗\mathbf{y}[j]bold_y [ italic_j ] are independent, which is the case for some pairs of input-output in this example. | D |
In Figure 3(b), for moderate c𝒵subscript𝑐𝒵c_{\mathcal{Z}}italic_c start_POSTSUBSCRIPT caligraphic_Z end_POSTSUBSCRIPT, the choice-only estimator with the weak-preference design outperforms the transductive design (fig. 3(a)), demonstrating that focusing on queries with weak preferences improves estimation. However, ... | \neq z^{*}]blackboard_P [ start_OPERATOR roman_arg roman_max end_OPERATOR start_POSTSUBSCRIPT italic_z ∈ caligraphic_Z end_POSTSUBSCRIPT italic_z start_POSTSUPERSCRIPT ⊤ end_POSTSUPERSCRIPT over^ start_ARG italic_θ end_ARG ≠ italic_z start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT ], for three GSE variations, shown as func... | To address these challenges, we propose a computationally efficient method for estimating linear human utility functions from both choices and response times, grounded in the difference-based EZ diffusion model [67, 8]. Our method leverages response times to transform binary choices into richer continuous signals, fram... | Figure 3(c) shows that the choice-decision-time estimator consistently outperforms the choice-only estimators under both the transductive and weak-preference designs, particularly for strong preferences. This suggests that for queries with strong preferences, decision times complement choices and improve estimation, co... | In fixed-budget best-arm identification, our choice-decision-time estimator’s ability to extract more information from queries with strong preferences is especially valuable. Bandit learners, such as GSE [3], strategically sample queries, update estimates of θ∗superscript𝜃\theta^{*}italic_θ start_POSTSUPERSCRIPT ∗ end... | C |
Likewise, using Buffon’s needle experiment, we demonstrated that the QRNG results pass the t-test (hypothesis: mean = π𝜋\piitalic_π, data normally distributed) for sample sizes up to 4.54×4.54\times4.54 × larger than those achieved by the parallel PRNG, thus resulting in a ∼2×\sim 2\times∼ 2 × better approximation of ... | Comparing a self-certifying quantum random number generator (c.f. fig. 1A and Supplementary Materials SM sec. A.1.2) to industry-standard PRNGs (SM sec. A.1.1), we demonstrate that the QRNG leads to better approximations than the PRNGs for both methods. We show that the results obtained with the QRNG pass the sign test... | 50×1005010050\times 10050 × 100 sets of 1000 points, each of which is defined by a single-precision number pair representing the x𝑥xitalic_x- and y𝑦yitalic_y-coordinates. Although the direct application of NN𝑁𝑁N\!Nitalic_N italic_N measure does not illustrate any statistically significant difference, the measures ... | Additionally, based on a uniformity analysis, we assessed differences in the random sampling underpinning the MC simulations: our findings suggest that the QRNG, especially at small sample sizes, offers a better dispersion of samples than the PRNG indicating a tendency of the QRNG towards more uniformly distributed sam... | In this work, we assessed the effect of various entropy sources on the outcomes of stochastic simulations. Herein, we assembled a test suite based on Monte Carlo simulations, and a palette of statistical tests, with varying underlying assumptions, to compare a quantum random number generator (QRNG) to pseudo-random num... | C |
Reinforcement learning from human feedback is extensively utilized to align large language models with human preferences (Bai et al., 2022; Ramamurthy et al., 2023; Xiao et al., 2024; Liu et al., 2024). The established pipeline for LLM alignment via RLHF involves three essential steps using a pretrained LLM (Ouyang et ... | In this section, we formulate our problem as a D𝐷Ditalic_D-optimal design problem, and propose a dual active learning for simultaneous conversation-teacher selection while adhering to the constrained sample budget T𝑇Titalic_T. Following this, we compute a pessimistic policy that leverages the learned reward estimator... | Supervised fine-tuning (SFT): First, supervised learning is employed to fine-tune the LLM’s parameters, yielding a policy that takes each prompt (e.g., question) as input, and outputs their completion (e.g., response). | Reward learning: Next, we collect a dataset of comparisons, including two completions for each prompt. The ordinal preferences will be provided by human experts to compare these completions. These preferences are then used to train a reward function, which measures the goodness of a given completion for each prompt, vi... | Reinforcement learning: Finally, an RL algorithm, typically the proximal policy optimization (Schulman et al., 2017), is applied to the prompt-conversation-reward triplets to output the final policy based on the SFT-trained policy and the learned reward function. | B |
Using similar arguments as above, the result of Corollary 2.4 follows directly from a straightforward application of Corollary 3.4. However, for other values of α𝛼\alphaitalic_α, Lemma A.2 can also be used to determine the corresponding upper bounds. Specifically, when α=12𝛼12\alpha=\frac{1}{2}italic_α = divide start... | This paper is organized as follows. In Section 2, we state Theorem 2.3, which provides error bounds between the target function fλ,μsubscript𝑓𝜆𝜇f_{\lambda,\mu}italic_f start_POSTSUBSCRIPT italic_λ , italic_μ end_POSTSUBSCRIPT and the approximation obtained via the online regularized algorithm in ℋKsubscriptℋ𝐾\mathc... | We show that the error bounds depend on an additional factor involving the mixing time tmixsubscript𝑡mixt_{\text{mix}}italic_t start_POSTSUBSCRIPT mix end_POSTSUBSCRIPT of the Markov chain. The resulting error rates for e.g., is of the form 𝒪(tmixt−θ)𝒪subscript𝑡mixsuperscript𝑡𝜃\mathcal{O}\big{(}t_{\text{mix}}t^... | In this paper, we extend the classical framework of online learning algorithms by relaxing i.i.d. assumption. Instead, we consider samples along a Markov chain trajectory with stationary distribution. Under this setting, we achieve nearly optimal learning rates, introducing an additional factor that depends on the chai... | In Theorem 2.3, the decomposition (7) distinguishes between the initial error and the sampling error. The initial error at time step t𝑡titalic_t, denoted by ℰinit(t)subscriptℰinit𝑡\mathcal{E}_{\text{init}}(t)caligraphic_E start_POSTSUBSCRIPT init end_POSTSUBSCRIPT ( italic_t ), arises deterministically and depends o... | C |
4.23e−1±4.03e−1plus-or-minus4.23e14.03e14.23\mathrm{e}{-1}\pm 4.03\mathrm{e}{-1}4.23 roman_e - 1 ± 4.03 roman_e - 1 | 2.92e−2±2.54e−2plus-or-minus2.92e22.54e22.92\mathrm{e}{-2}\pm 2.54\mathrm{e}{-2}2.92 roman_e - 2 ± 2.54 roman_e - 2 | 7.06e−1±5.54e−1plus-or-minus7.06e15.54e17.06\mathrm{e}{-1}\pm 5.54\mathrm{e}{-1}7.06 roman_e - 1 ± 5.54 roman_e - 1 | 2.41e−1±1.54e−1plus-or-minus2.41e11.54e12.41\mathrm{e}{-1}\pm 1.54\mathrm{e}{-1}2.41 roman_e - 1 ± 1.54 roman_e - 1 | 2.54e−1±2.59e−1plus-or-minus2.54e12.59e12.54\mathrm{e}{-1}\pm 2.59\mathrm{e}{-1}2.54 roman_e - 1 ± 2.59 roman_e - 1 | D |
\sqrt{\frac{C_{2}\xi_{|V_{k}|}\beta_{t}\Psi_{t|V_{k}|}}{t|V_{k}|}}\right),italic_R start_POSTSUBSCRIPT italic_A italic_B end_POSTSUBSCRIPT ( italic_t ) ≤ divide start_ARG 1 end_ARG start_ARG italic_M end_ARG ∑ start_POSTSUBSCRIPT italic_k = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT | italic... | By picking n𝑛nitalic_n to be the clique cover number of the graph G𝐺Gitalic_G, Theorem 3.1 yields the following corollary. | Suppose k(x,x′)≤1𝑘𝑥superscript𝑥′1k(x,x^{\prime})\leq 1italic_k ( italic_x , italic_x start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ) ≤ 1 for all x,x′𝑥superscript𝑥′x,x^{\prime}italic_x , italic_x start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT. Let θ(G)𝜃𝐺\theta(G)italic_θ ( italic_G ) and ω(G)𝜔𝐺\omega(G)italic_ω (... | The proof of Corollary 3.2 follows from (i) applying Cauchy-Schwarz to bound the term ∑k=1n|Vk|≤n∑k=1n|Vk|=Mnsuperscriptsubscript𝑘1𝑛subscript𝑉𝑘𝑛superscriptsubscript𝑘1𝑛subscript𝑉𝑘𝑀𝑛\sum_{k=1}^{n}\sqrt{|V_{k}|}\leq\sqrt{n}\sqrt{\sum_{k=1}^{n}|V_{k}|}=\sqrt{Mn}∑ start_POSTSUBSCRIPT italic_k = 1 end_POSTSUBSCR... | Picking Gssubscript𝐺𝑠G_{s}italic_G start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT to be the largest complete subgraph of the communication network G𝐺Gitalic_G then yields the following corollary. | A |
Our proposed architecture is based on pre-trained Transformer models. Transformer-based neural processes (Müller et al.,, 2021; Nguyen and Grover,, 2022; Chang et al.,, 2024) serve as the foundational structure for our approach, but they have not considered experimental design. Decision Transformers (Chen et al.,, 2021... | In this paper, we proposed an amortized framework for decision-aware Bayesian experimental design (BED). | In this paper, we propose an amortized decision-making-aware BED framework, see Fig. 1(c). We identify two key aspects where previous amortized BED methods fall short when applied to downstream decision-making tasks. First, the training objective of the existing methods does not consider downstream decision tasks. Ther... | In this section, we evaluate our proposed framework on several tasks. Our experimental approach is detailed in Appendix B. In Section F.3, we provide additional ablation studies of TNDP to show the effectiveness of our query head and the non-myopic objective function. The code to reproduce our experiments is available ... | Results. The results are shown in Fig. 3(b), where we can see that TNDP achieves significantly better average accuracy than other methods. Additionally, we conduct an ablation study of TNDP in Section F.3 to verify the effectiveness of fqsubscript𝑓qf_{\text{q}}italic_f start_POSTSUBSCRIPT q end_POSTSUBSCRIPT. We furth... | C |
3.84±0.91subscript3.84plus-or-minus0.91\mathbf{3.84}_{\pm 0.91}bold_3.84 start_POSTSUBSCRIPT ± 0.91 end_POSTSUBSCRIPT | 4.99±1.04subscript4.99plus-or-minus1.044.99_{\pm 1.04}4.99 start_POSTSUBSCRIPT ± 1.04 end_POSTSUBSCRIPT | 4.99±1.04subscript4.99plus-or-minus1.044.99_{\pm 1.04}4.99 start_POSTSUBSCRIPT ± 1.04 end_POSTSUBSCRIPT | 4.99±1.04subscript4.99plus-or-minus1.044.99_{\pm 1.04}4.99 start_POSTSUBSCRIPT ± 1.04 end_POSTSUBSCRIPT | 4.99±1.04subscript4.99plus-or-minus1.044.99_{\pm 1.04}4.99 start_POSTSUBSCRIPT ± 1.04 end_POSTSUBSCRIPT | A |
\hat{\mu}_{B^{2},X},Y\rangle_{B}^{2}\odot(X_{i}\ominus\hat{\mu}_{B^{2},X})over^ start_ARG italic_C end_ARG start_POSTSUBSCRIPT italic_B start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT , italic_X end_POSTSUBSCRIPT ( italic_Y ) = divide start_ARG 1 end_ARG start_ARG italic_n end_ARG ⊙ ⨁ start_POSTSUBSCRIPT italic_i = 1 end_P... | Despite the attention that the Bayes space methodology for density data analysis has attracted over the last decades, little focus has been paid to robust frameworks necessary for meaningful analysis in the presence of anomalies. Therefore, this paper introduces robust density PCA (RDPCA) as a methodology to robustly e... | However, as previously emphasized, the eigendecomposition (4) and consequently also the optimization problem (3), are sensitive to the presence of outlying curves in the sample. Therefore, in order to achieve a robust estimation of the functional PCs, the underlying covariance will be based on a sub sample consisting o... | Consequently, the structure of the covariance or correlation function can also be significantly influenced in the presence of outlying curves. This is showcased in Figure 9 where robust and non-robust correlations are compared. Between wavelength 100 to 250 and around 400, the outliers exhibit a different structure. Es... | One of our initial assumptions was that the data had been observed at a dense grid. In the case where the data is only available at a sparse grid, further extensions would require smoothing by an appropriate basis, e.g., CB-splines (Machalová et al., 2021) specifically developed for Bayes spaces. Naturally, the next st... | B |
R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT (Coefficient of Determination), indicating how closely inferred values match the ground truth. | One of the notable contributions is a detailed analysis of reconstruction attacks, wherein a malicious institution can subtract its local counts from the global aggregated counts to infer other institutions’ data. Prior efforts have often acknowledged the feasibility of federated approaches but provided only partial in... | Few Providers (2–3). Large overlap yields near-perfect accuracy in both datasets when only one other site is present. The attacker’s knowledge heavily overlaps with that single remaining provider, making subtraction-based inference almost exact. | Overlap Impact. Large overlap is devastating for privacy only when the federation is small. With many providers, large overlap ironically confuses the attacker more, driving RMSE upward and reconstruction quality downward. | Lung Cancer. Under no overlap, RMSE remains comparatively low, implying the attacker’s estimates are (ironically) more accurate than in large overlap when many sites are present. Large overlap grows steeply with more providers, showing the attack’s failure on multi-site shared data. | B |
We calculated various measures to check the predictive accuracy of FPET. All measures are based on countries with data after 2018. | The second set of measures is focused on the differences between left-out survey data and point predictions. We do not necessarily expect these differences to be small for all survey data: we only expect small differences for survey data that is subject to small sampling and non-sampling errors, such as most DHSs. Resu... | The first set of measures is focused on the comparison between point estimates and uncertainty intervals (UIs) from the training and full data set. We calculate prediction errors in the FPET point predictions, referring to the difference between the FPET point prediction for a given year and its updated estimate. We al... | Table 4: Summary of FPET1 prediction errors (in percentage) for the year 2020. An error refers to the difference between the estimate for the indicator based on the full data set and the indicator based on the training data, for the year 2020 (3 year forecast horizon). A positive (negative) error indicates that the pre... | Table 1: Summary of prediction errors (in percentage) for the year 2020. An error refers to the difference between the estimate for the indicator based on the full data set and the indicator based on the training data, for the year 2020 (3 year forecast horizon). A positive (negative) error indicates that the predictio... | B |
Spintronic devices are built using magnetic materials, as the magnetization (magnetic moment per unit volume) of a magnet is a macroscopic manifestation of its correlated electron spins. The prototypical spintronic device, called the magnetic tunnel junction (MTJ), is a three-layer device which can act both as a memory... | An MTJ can serve as a natural source of randomness upon aggressive scaling, i.e. when the FL of the MTJ is shrunk to such a small volume that it toggles randomly just due to thermal energy in the vicinity. It is worth noting that the s-MTJ can produce a Bernoulli distribution like probability density function (PDF), wi... | Figure 1 depicts our hardware configuration for sampling a single Float16 value. Each disubscript𝑑𝑖d_{i}italic_d start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is an s-MTJ device. The devices d10,⋯,d14subscript𝑑10⋯subscript𝑑14d_{10},\cdots,d_{14}italic_d start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT , ⋯ , italic_d start... | The number of control bits in an s-MTJ device impacts both energy consumption and the precision of setting the energy bias, which in turn affects the available probabilities of obtaining bit samples. Figure 2 illustrates this relationship. This section evaluates the approximation error caused by imprecision in achievin... | Spintronic devices are built using magnetic materials, as the magnetization (magnetic moment per unit volume) of a magnet is a macroscopic manifestation of its correlated electron spins. The prototypical spintronic device, called the magnetic tunnel junction (MTJ), is a three-layer device which can act both as a memory... | A |
From 2019 one additional point is given to the pilot that occupied a position in the top ten and furthermore has the fastest lap in the race. | From the FIA site, we can retrieve the drivers classification for each GP of the considered championship. | of ties between the ranked elements. Kendall corrected evolutive coefficient can be considered as an extension of a correlation coefficient of two rankings applied to m𝑚mitalic_m rankings and therefore, as output, τ^ev∙superscriptsubscript^𝜏𝑒𝑣∙\widehat{\tau}_{ev}^{\bullet}over^ start_ARG italic_τ end_ARG start_POS... | and, again, some rules are applied to break the ties, if any. Our collection of rankings are precisely the rankings of each GP in a season, both for drivers and constructors. We use these series of rankings to compute | FIA has some rules to break ties between the pilots and therefore the ranking of the drivers can be considered as ranking with no ties. | D |
=−S˙−cIabsent˙𝑆𝑐𝐼\displaystyle=-\dot{S}-c\,I= - over˙ start_ARG italic_S end_ARG - italic_c italic_I | If 0<d≪10𝑑much-less-than10<d\ll 10 < italic_d ≪ 1 represents a small proportion of initially infected individuals, the initial conditions of the system are given by (S(0),I(0),R(0))=(1,d,0)𝑆0𝐼0𝑅01𝑑0(S(0),I(0),R(0))=(1,d,0)( italic_S ( 0 ) , italic_I ( 0 ) , italic_R ( 0 ) ) = ( 1 , italic_d , 0 ). From the form... | The above system may be considerably simplified, by removing last two equations. Indeed, by dividing the last equation by the first one and solving the resulting differential equation under the assumption that [SS](0)=μ[S](0)delimited-[]𝑆𝑆0𝜇delimited-[]𝑆0[SS](0)=\mu[S](0)[ italic_S italic_S ] ( 0 ) = italic_μ [... | Note that the first equation in the system above involves only S𝑆Sitalic_S. Since it describes the decline of susceptibles (and consequently the emergence of new cases), it is often referred to as the epidemic curve equation (see, for instance, [7]). | The above system is known as the SIR compartmental ODE model, the simplest example of a deterministic system describing the spread of a disease in a closed population. From the equations, we note the following. | C |
For simplicity of notation, assume that tn→t∗→subscript𝑡𝑛superscript𝑡t_{n}\rightarrow t^{*}italic_t start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT → italic_t start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT as n→∞→𝑛n\rightarrow\inftyitalic_n → ∞. | Because ψ^^𝜓\widehat{\psi}over^ start_ARG italic_ψ end_ARG is assumed to be Lipschitz continuous on (0,∞)0(0,\infty)( 0 , ∞ ) | Because the pointwise supremum of any collection of continuous functions is lower semi-continuous, we have | In order to apply the dominated convergence theorem to extend the result in Lemma 1 to the continuous index set case, we need to ensure the convergence of the expected value of the supremum. | not only are continuous in the quadratic mean but also almost surely have modification that is sample continuous, | B |
While conceptually simple, the computational demands of this grow quickly with the size of the state space. Thus, in the next section, we discuss a method based on Bayesian optimization to allocate any computational budget we may have more efficiently. | We use two main metrics: the entropy of the posterior distribution over reward parameters after a given number of steps of active learning and the expected return (with respect to the initial state distribution and environment dynamics) of an apprentice policy maximizing this expected return (also with respect to the p... | We first collect a fixed initial number of samples for each state. Then, we repeat the following until we have exhausted a budget of trajectories T𝑇Titalic_T. Following standard Gaussian updating, after an observation of a new hypothetical trajectory from s𝑠sitalic_s, we update the parameters | and compute a new EIG estimate for the value s∗superscript𝑠{s}^{*}italic_s start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT maximizing the upper confidence bound: | We propose to use Bayesian optimization [8], in particular the upper confidence bound (UCB) algorithm [9], to adaptively choose from which initial states to sample additional hypothetical trajectories to efficiently estimate the EIG. We still use the basic structure of (2), but instead of using the same number of sampl... | D |
Tucker-decomposition 𝒟=𝒢⋅(𝐀(1),𝐀(2),𝐀(3))𝒟⋅𝒢superscript𝐀1superscript𝐀2superscript𝐀3\mathcal{D}=\mathcal{G}\cdot(\mathbf{A}^{(1)},\mathbf{A}^{(2)},\mathbf{A}^{(3)})caligraphic_D = caligraphic_G ⋅ ( bold_A start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT , bold_A start_POSTSUPERSCRIPT ( 2 ) end_POSTSUPERSCRIPT ,... | 𝐃(1)::superscript𝐃1absent\displaystyle\mathbf{D}^{(1)}:bold_D start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT : | The left singular vectors 𝚵(1),𝚵(2)superscript𝚵1superscript𝚵2\mathbf{\Xi}^{(1)},\mathbf{\Xi}^{(2)}bold_Ξ start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT , bold_Ξ start_POSTSUPERSCRIPT ( 2 ) end_POSTSUPERSCRIPT and 𝚵(3)superscript𝚵3\mathbf{\Xi}^{(3)}bold_Ξ start_POSTSUPERSCRIPT ( 3 ) end_POSTSUPERSCRIPT can be obt... | 𝐃(3)::superscript𝐃3absent\displaystyle\mathbf{D}^{(3)}:bold_D start_POSTSUPERSCRIPT ( 3 ) end_POSTSUPERSCRIPT : | 𝐃(2)::superscript𝐃2absent\displaystyle\mathbf{D}^{(2)}:bold_D start_POSTSUPERSCRIPT ( 2 ) end_POSTSUPERSCRIPT : | A |
In Section 2, we formally define important covariates in the compositional setting via the Markov boundary and prove that it remains well-defined under mild conditions despite compositionality. Section 3 details our methods for testing and controlled variable selection and their theoretical guarantees. In Section 4, we... | Item 1 in 2.1 says that, after accounting for the covariates in the Markov boundary, all the remaining covariates provide no further information about Y𝑌Yitalic_Y. Item 2 says that the Markov boundary is the minimal such set, in the sense that no subset of it has the property in item 1. Together, this definition infor... | Unfortunately, when the covariates are compositional, there will in general be multiple Markov boundaries of Y𝑌Yitalic_Y, so the remaining subsections of this section are devoted to establishing mild conditions under which the above definition provides a well-defined, unique, and nontrivial set of important covariates... | Formalizing important covariates under compositionality: To overcome the aforementioned misalignment between hypotheses (conditional or unconditional) and true signals in a parsimonious regression model with compositional covariates, we define the set of important covariates as the minimal set of covariates that togeth... | As highlighted in the previous section, the typical ways of defining an important covariate are not well-suited to compositional covariates. In this section, we put forth the Markov boundary as a solution and argue that, unlike conditional or unconditional dependence, membership in the Markov boundary continues to capt... | D |
We use the integrated likelihood (2) throughout and assume that y𝑦yitalic_y and the columns of X𝑋Xitalic_X have | The literature on the connection between Bayesian posterior modes and estimators described as solutions | Table 2: Prior scaling and data augmentation parameterization in the Bayesian elastic net literature. Double horizontal | literature. In this section we review the four combinations of representation and form, provide the corresponding posterior | Bayesian regression models with connections to the elastic net have also received extensive attention in the literature. | A |
This update has a step size that takes the steepness into account, as in Adadelta, but also tends to move in the same direction, as in Momentum. | Figure 4 shows how different parameter update method performs within the innermost loop in our alternating Tweedie regression. The three different update methods are Fisher scoring type update with and without learning rate adjustment, and gradient descent Adam update. The loss reductions presented are from the first i... | Most of the gradient descent variants do not use the second derivative of the objective function, which is because it is often difficult to compute the second derivative. We believe the use of adaptive learning rate together with the Fisher information matrix in our algorithm provides more benefit than the variants of ... | Figures 6 and 7 show more detailed examination of the Fisher scoring update with or without learning rate adjustment. We can see from Figure 6, during the first 20 iterations, the two cases behave similarly in the norm of score vector. However, from 20th to around 130th iteration, the update with no learning rate is mo... | We applied the alternating Tweedie regression algorithm to the generated data. For comparison, the gradient descent algorithm with Adam update was applied to the data. For the Adam update, we used the Adam optimizer (torch.optim.Adam) from the PyTorch package. A learning rate scheduler was used to reduce learning rate ... | B |
Ray (2022) provide a systematic literature review on approaches and algorithms to mitigate cold-start problems in recommender systems. | Matrix factorization is also used in natural language processing (NLP) in recent years. Word2Vec by Mikolov | Nonnegative matrix factorization (NMF) is particularly useful when dealing with non-negative data, such as in image processing and text mining. Gan | et al. (2013a, 2013b) marks a milestone in NLP history. Although no clear matrices are presented in their study, Word2Vec models the co-occurrence of words and phrases using latent vector representations via a shallow neural network model. Another well-known example of matrix factorization in NLP is the word representa... | Matrix factorization is a fundamental technique in linear algebra and data science, widely used for dimensionality reduction, data compression, and feature extraction. Recent research expands its use in various fields, including recommendation systems (e.g., collaborative filtering), bioinformatics, and signal processi... | A |
The above algorithm basically starts with a set of thresholding values and use cross-validation to obtain the initial best thresholding value which has the smallest cross-validation error. Then around the neighborhood, find an even better one, which has error close to the best but not in the expense of too many variabl... | Ten multi-class gene expression data sets for human cancers were investigated in this study and are listed in Table 2. These data sets were kindly provided by the authors of Tan | In this section, we discuss the performance of the three algorithms using the 10 multi-class human cancers data sets. In all that follows, our reported misclassification error refers to the percentage of misclassified test samples. Since random partition of the training data in cross-validation could lead to different ... | Again we use deep search algorithm (2.2) for selecting optimal thresholding parameter. Our analysis for this section is also for the 10 multi-class human cancers datasets listed in Table 2. In all that follows, our reported misclassification error refers to the percentage of misclassified test samples. | Héberger (2013)). For each algorithm, the test errors (or number of selected genes) for the ten data sets were ranked. For the same data set, two algorithms may have the same test error. However, they were not ranked together (see Table 3). Due to sample sizes being very different (see Table 2) for different cancer dat... | A |
Understanding the nonlabor income effect is just as important as having a reliable estimate of the slope elasticity. First, if we want to predict the effect of tax reforms, say the introduction of a liveable guaranteed income, it would make a large difference whether the nonlabor income effect is zero or say -0.5, whic... | Thus, panel data normally used are not well designed to accurately capture the nonlabor income effect. Since sizeable precise estimates of nonlabor income effects are rare, many studies neglect to account for nonlabor income, arguing that it is known that the nonlabor income effect is small. However, this reasoning is ... | For λ=1.00E−06𝜆1.00𝐸06\lambda=1.00E-06italic_λ = 1.00 italic_E - 06, the estimated nonlabor income elasticity is 0.0074 with a standard error of 0.0355. This implies a 95% confidence interval of (-0.0622, 0.0770). The estimate is neither significantly different from zero nor from -0.06. Likewise, if we define the no... | Second, knowing the nonlabor income effect is important when calculating the compensated taxable income elasticity, which is the relevant elasticity when calculating deadweight losses of taxes. | Understanding the nonlabor income effect is just as important as having a reliable estimate of the slope elasticity. First, if we want to predict the effect of tax reforms, say the introduction of a liveable guaranteed income, it would make a large difference whether the nonlabor income effect is zero or say -0.5, whic... | C |