Dataset Viewer (auto-converted to Parquet)
Columns (each row below lists these six fields in order, one per line):
annotations: list
agreement: float64
source_file: string
id_parag: string
final_parag: string
candidate_comments: list
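The `agreement` column appears to be the percentage of annotators who chose the modal label for a row: 100 when all three annotators agree, 66.666667 when two of three do. A minimal sketch of recomputing it from one `annotations` cell (the `agreement` function name is illustrative, not part of the dataset):

```python
import json
from collections import Counter

def agreement(annotations):
    # Percentage of annotators whose result matches the modal label.
    labels = [label for a in annotations for label in a["result"]]
    modal_count = Counter(labels).most_common(1)[0][1]
    return 100 * modal_count / len(labels)

# One `annotations` cell copied from a row below (two "Réécriture", one "Différent"):
row = json.loads(
    '[{"id": 86722737, "completed_by": {"id": 70661}, "result": ["Différent"]},'
    ' {"id": 86723424, "completed_by": {"id": 126844}, "result": ["Réécriture"]},'
    ' {"id": 86723873, "completed_by": {"id": 140156}, "result": ["Réécriture"]}]'
)
print(round(agreement(row), 6))  # → 66.666667
```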
[ { "id": 86721992, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86723280, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86723989, "completed_by": { "id": 140156 }, "result":...
100
2310.19029-03cf4a9a918931f8.jsonl
2310.19029-03cf4a9a918931f8_1
\textbf{Phase 1 (training)}: we recruited three undergraduate students majoring in linguistics. The students were trained in three steps in order to produce consistent annotations. We first assigned 50 words to each linguist and trained them to conduct the annotation jointly. Second, we assigned the same 150 words to e...
[ "annotator training and calibration, the annotation process and data quality and validation. \nTo ensure the accuracy and consistency of the annotations, it is essential carefully select and train the annotator linguists. In this subsection, we will describe the process we used to select and train the linguists for...
[ { "id": 86904180, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86981896, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86989533, "completed_by": { "id": 70661 }, "result": ...
100
2310.19029-03cf4a9a918931f8.jsonl
2310.19029-03cf4a9a918931f8_2
We evaluated the coverage of both lexicons based on the sense-annotated tokens. As Table \ref{lexicons-evaluation} shows, Modern has higher coverage of lemmas (80\%) compared to Ghani's coverage (78\%), and has higher sense coverage (83\%) compared to Ghani (78\%). Moreover, glosses in Modern are more precise, less amb...
[ "If a sense is missing, linguists can manually add a new sense in one of the lexicons; and when a lemma is missing, we add a new lemma and its senses manually. " ]
[ { "id": 86738563, "completed_by": { "id": 70661 }, "result": [ "Réécriture" ] }, { "id": 86771982, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86826868, "completed_by": { "id": 126844 }, "result...
100
2310.19029-03cf4a9a918931f8.jsonl
2310.19029-03cf4a9a918931f8_3
where $fo_{ij}$ is the observed frequency of the categories ($i$ and $j$) per the annotators' selections, $fe_{ij}$ is the expected frequency for both annotators' selected categories, $(y_{ix}-y_{jx})$ denotes the distance between the categories, and $k$ is the number of categories.
[ "where $w_{ij}^x=\\frac{(y_{ix}-y_{jx})^2}{(k-1)^2}$, $y_{ix}$ and $y_{jx}$ denotes the score of gloss $x$ for a given token by annotator $i$ and $j$, respectively, and $k$ is number of categories. $fo_{ij}$ is the observed frequency of the categories ($i$ \\& $j$) per the annotators selection. $fe_{ij}$ is the exp...
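The candidate comment's corrected formula defines quadratic weights $w_{ij}^x=(y_{ix}-y_{jx})^2/(k-1)^2$ over the observed and expected frequencies. A minimal sketch of quadratic weighted kappa under those definitions (function names and the toy frequency matrices are illustrative):

```python
def quadratic_weight(i, j, k):
    # w_ij = (i - j)^2 / (k - 1)^2 for categories indexed 0..k-1.
    return (i - j) ** 2 / (k - 1) ** 2

def weighted_kappa(fo, fe, k):
    # fo[i][j]: observed frequency of the annotator pair picking categories i and j;
    # fe[i][j]: frequency expected by chance. Kappa = 1 - weighted observed / weighted expected.
    num = sum(quadratic_weight(i, j, k) * fo[i][j] for i in range(k) for j in range(k))
    den = sum(quadratic_weight(i, j, k) * fe[i][j] for i in range(k) for j in range(k))
    return 1 - num / den

# Perfect agreement on a 2-category task (all mass on the diagonal) gives kappa = 1:
fo = [[5, 0], [0, 5]]
fe = [[2.5, 2.5], [2.5, 2.5]]
print(weighted_kappa(fo, fe, 2))  # → 1.0
```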
[ { "id": 86722737, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86723424, "completed_by": { "id": 126844 }, "result": [ "Réécriture" ] }, { "id": 86723873, "completed_by": { "id": 140156 }, "result"...
66.666667
2310.19029-03cf4a9a918931f8.jsonl
2310.19029-03cf4a9a918931f8_4
Both LWK and QWK take the distance between categories into consideration, where the distance is defined as the number of categories separating the two annotators' selection. The difference is that LWK calculates the distance linearly while QWK calculates it quadratically. For measuring the ranking error deviation among...
[ "Kappa: data is nominal so no categories, the agreement is yes or no . threshold is >=60. \nWeighted Kappa: data is ordinal and measure the agreement based on the categories distance between annotators Linear Weighted: the distances between categories are equal Quadratic Weithed: the distances scores between categ...
[ { "id": 86961182, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86965601, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86970738, "completed_by": { "id": 126844 }, "result"...
66.666667
2310.19051-15a66630f02a71de.jsonl
2310.19051-15a66630f02a71de_1
\subsubsection*{Acknowledgments} This work was supported in part by the National Natural Science Foundation of China under grant number 62167003, and in part by the Hainan Provincial Natural Science Foundation of China under grant number 720RC616.
[ ", in part by the Hainan Province Key R&D Program Project under grant number ZDYF2021GXJS010, and in part by the Major Science and Technology Project of Haikou City under grant number 2020006." ]
[ { "id": 86962981, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86977232, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86977368, "completed_by": { "id": 126844 }, "result": ...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_1
Apple tasting is usually presented as an example of a more general partial feedback setting called \textit{partial monitoring} games, where the player’s feedback is specified by a feedback matrix \citep{cesa2006prediction, bartok2014partial}. Related to partial monitoring games is sequential prediction with \textit{gra...
[ "In addition to establishing upper and lower bounds for apple tasting in terms of bounds on the false positive and false negative mistakes in the full-information setting, \\cite{helmbold2000apple} also give expected mistake bounds in terms of the size of $\\mathcal{H}$. More specifically, they show that for any hy...
[ { "id": 86960508, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86979199, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86979034, "completed_by": { "id": 140156 }, "result":...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_2
\subsection{Notation} Let $\Xcal$ denote the instance space and $\mathcal{H} \subseteq \{0, 1\}^{\mathcal{X}}$ denote a binary hypothesis class.
[ "Given an instance $x \\in \\mathcal{X}$, and any collection of hypothesis $V \\subseteq \\{0, 1\\}^{\\mathcal{X}}$, we let $V(x) := \\{h(x): h \\in V\\}$ denote the projection of $V$ onto $x$." ]
[ { "id": 86731004, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86726057, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86732169, "completed_by": { "id": 126844 }, "result"...
66.666667
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_3
In the apple tasting feedback model, the adversary still picks a labeled instance $(x_t, y_t) \in \mathcal{X} \times \{0, 1\}$ and reveals $x_t$ to the learner. However, the learner only gets to observe the true label $y_t$ if they predict $\hat{y}_t = 1$. Analogous to the full-information setting, a hypothesis class $...
[ "for any chosen sequence of labeled examples $(x_t, y_t) \\in \\mathcal{X} \\times \\{0, 1\\}$, the algorithm, while only receiving feedback when predicting $1$, makes predictions $\\mathcal{A}(x_t) \\in \\{0, 1\\}$ at every iteration $t \\in [T]$ such that its \\emph{expected regret}," ]
[ { "id": 86980402, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86984108, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86984842, "completed_by": { "id": 70661 }, "result":...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_4
A hypothesis class $\Hcal$ is online learnable under apple tasting feedback if there exists a (potentially randomized) algorithm $\mathcal{A}$ such that its \emph{expected regret},
[ "for any chosen sequence of labeled examples $(x_t, y_t) \\in \\mathcal{X} \\times \\{0, 1\\}$, the algorithm, while only receiving feedback when predicting $1$, makes predictions $\\mathcal{A}(x_t) \\in \\{0, 1\\}$ at every iteration $t \\in [T]$ such that its \\emph{expected regret}," ]
[ { "id": 86738693, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86901183, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86901328, "completed_by": { "id": 62471 }, "result": [...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_5
$$ \inf_{\mathcal{A}} \texttt{M}_{\mathcal{A}}(T, \mathcal{H})=\begin{cases} \Theta(1), & \text{if $\texttt{w}(\mathcal{H}) = 1$}\\ \Theta(\sqrt{T}), & \text{if $2 \leq \texttt{w}(\mathcal{H}) < \infty$}\\ \Theta(T), & \text{otherwise} \end{cases} $$
[ "If $\\texttt{AL}_{\\texttt{w}(\\mathcal{H})}(\\mathcal{H}) > \\sqrt{(\\texttt{w}(\\mathcal{H}) - 1)T}$, then we have that $\\inf_{\\mathcal{A}}\\texttt{M}_{\\mathcal{A}}(T, \\mathcal{H}) \\leq 3\\texttt{AL}_{\\texttt{w}(\\mathcal{H})}$ while " ]
[ { "id": 86829935, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86861673, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86898739, "completed_by": { "id": 62471 }, "result": ...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_6
$$\inf_{\mathcal{A}}\texttt{M}_{\mathcal{A}}(T, \mathcal{H}) \leq \inf_{w \in \mathbb{N}} \left\{\texttt{AL}_w(\mathcal{H}) + 2\sqrt{(w-1)T} \right\}.$$
[ "\\leq \\min\\Bigl\\{\\texttt{AL}_{\\texttt{w}(\\mathcal{H})}(\\mathcal{H}) + 2\\sqrt{(\\texttt{w}(\\mathcal{H})-1)T}, 2\\sqrt{\\texttt{L}(\\mathcal{H})T}\\Bigl\\}." ]
[ { "id": 86961591, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86972171, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86971725, "completed_by": { "id": 126844 }, "result":...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_7
The upperbound in Theorem \ref{thm:real} follows by picking $w = \texttt{w}(\mathcal{H})$ in Lemma \ref{lem:up}. Note that picking $w = \texttt{L}(\mathcal{H})+1$ gives $\texttt{AL}_w(\mathcal{H}) = \texttt{L}(\mathcal{H})$ and yields an upperbound of $3\sqrt{\texttt{L}(\mathcal{H})T}$ on the expected mistakes.
[ "\\leq \\min\\Bigl\\{\\texttt{AL}_{\\texttt{w}(\\mathcal{H})}(\\mathcal{H}) + 2\\sqrt{(\\texttt{w}(\\mathcal{H})-1)T}, 2\\sqrt{\\texttt{L}(\\mathcal{H})T}\\Bigl\\}." ]
[ { "id": 86900883, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86901879, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86909644, "completed_by": { "id": 140156 }, "result":...
66.666667
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_8
Lemma \ref{lem:up} follows from composing the next two lemmas. Lemma \ref{lem:soaconstrained} shows that if $\texttt{AL}_w(\mathcal{H}) < \infty$, then there exists a deterministic online learner, under \textit{full-information} feedback, that makes at most $w-1$ false negative mistakes and $\texttt{AL}_w(\mathcal{H})$...
[ "Theorem \\ref{lem:up} implies that when $\\texttt{w}(\\mathcal{H}) = 1$, a constant upperbound on the expected regret is possible. In fact, when $\\texttt{AL}_1(\\mathcal{H}) < \\infty$, there exists a \\textit{deterministic} online learner which makes at most $\\texttt{AL}_1(\\mathcal{H}) $ mistakes in the realiz...
[ { "id": 86899470, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86962132, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86963798, "completed_by": { "id": 126844 }, "result"...
66.666667
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_9
\begin{lemma} For any hypothesis class $\mathcal{H}$ and $w \in \mathbbm{N}$ such that $\texttt{AL}_w(\mathcal{H}) < \infty$, there exists a deterministic online learner which, under full-information feedback, makes at most $w-1$ \emph{false negative} mistakes and at most $\texttt{AL}_w(\mathcal{H})$ \emph{false positi...
[ "Theorem \\ref{lem:up} implies that when $\\texttt{w}(\\mathcal{H}) = 1$, a constant upperbound on the expected regret is possible. In fact, when $\\texttt{AL}_1(\\mathcal{H}) < \\infty$, there exists a \\textit{deterministic} online learner which makes at most $\\texttt{AL}_1(\\mathcal{H}) $ mistakes in the realiz...
[ { "id": 86961192, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86979305, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86979891, "completed_by": { "id": 140156 }, "result":...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_10
The lowerbound in Theorem \ref{thm:real} follows by picking $w = \texttt{w}(\mathcal{H}) - 1$ and $w = \texttt{L}(\mathcal{H}) + 1$ respectively.
[ "Moreover, compared to the upperbound given by Theorem \\ref{lem:up}, the lower bound given by Theorem \\ref{thm:lb} is tight up to an additive factor of $\\texttt{AL}_{\\texttt{w}(\\mathcal{H})}(\\mathcal{H})$. " ]
[ { "id": 86961583, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86972522, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86974059, "completed_by": { "id": 126844 }, "result":...
66.666667
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_11
Let $\mathcal{H} \subseteq \{0, 1\}^{\mathcal{X}}$, $w \in \mathbbm{N}$, and $T \in \mathbb{N}$ be the time horizon. Since learning under apple tasting feedback implies learning under full-information feedback, a lowerbound of $\frac{\min\{T, \texttt{L}(\mathcal{H})\}}{2}$ on the minimax expected mistakes follows trivia...
[ "For all $w \\geq \\texttt{L}(\\mathcal{H}) + 1$, the stated lowerbound becomes $\\frac{1}{2}\\sqrt{\\texttt{L}(\\mathcal{H}) \\,\\min\\{T, \\texttt{L}(\\mathcal{H})\\}} \\leq \\frac{\\texttt{L}(\\mathcal{H})}{2}$, matching the full-information feedback lowerbound. Accordingly, suppose $w \\leq \\texttt{L}(\\mathca...
[ { "id": 86960147, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86973407, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86973554, "completed_by": { "id": 140156 }, "result":...
66.666667
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_12
We first construct a path $\sigma^{\star}$ down $\mathcal{T}$ recursively using $\mathcal{A}$. Starting with $\sigma^{\star}_1$, let $A_1$ be the event that $\mathcal{A}$, if presented with $\frac{d}{w}$ copies of the root node $x^{\star}_1$, predicts $1$ on at least one of the copies. Then, set $\sigma^{\star}_1 = 0$ ...
[ "Let $\\Sigma$ denote the set of all valid paths down $\\mathcal{T}$. Fix a path $\\sigma \\in \\Sigma$. Let $x_1, ..., x_{|\\sigma|}$ be the sequence of instances labeling the internal nodes along the path $\\sigma$ down $\\mathcal{T}$, where $x_1$ is the instance labeling the root node. We will define a sequence ...
[ { "id": 86961188, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86972979, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86973442, "completed_by": { "id": 140156 }, "result": ...
66.666667
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_13
We now construct our hard labeled stream in blocks of size $\frac{d}{w}$. Each block only contains a single labeled instance, repeated $\frac{d}{w}$ times. For the first block $B_1$, repeat the labeled instance $(x_1^{\star}, 0)$ if $\sigma^{\star}_1 = 0$ and otherwise repeat the labeled instance $(x_1^{\star}, 1)$. Li...
[ "We will now use the events above to construct a hard stream for $\\mathcal{A}$ with the stated guarantee. Let $\\sigma^{\\star} \\in \\Sigma$ be the valid path such that for all $j \\in [|\\sigma^{\\star}|]$, if $\\sigma^{\\star}_j = 0$ then $\\mathbbm{P}(A^j_{\\sigma^{\\star}}) \\geq \\frac{1}{2}$ and if $\\sigma...
[ { "id": 86725307, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86727072, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86765284, "completed_by": { "id": 62306 }, "result": ...
66.666667
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_14
We now lower bound the expected mistakes of $\mathcal{A}$ on the entire stream $S$ by considering the number of ones in $\sigma^{\star}$ on a case-by-case basis. Note that since $\sigma^{\star}$ is a valid path down $\mathcal{T}$, we have $w \leq |\sigma^{\star}| \leq d$. Consider the case where $\sigma^{\star}$ has $w...
[ "We will now use the events above to construct a hard stream for $\\mathcal{A}$ with the stated guarantee. Let $\\sigma^{\\star} \\in \\Sigma$ be the valid path such that for all $j \\in [|\\sigma^{\\star}|]$, if $\\sigma^{\\star}_j = 0$ then $\\mathbbm{P}(A^j_{\\sigma^{\\star}}) \\geq \\frac{1}{2}$ and if $\\sigma...
[ { "id": 86904730, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86985472, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86985645, "completed_by": { "id": 126844 }, "result":...
66.666667
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_15
\begin{thm}[EXP4.AT Regret Bound] If $\eta = \sqrt{\frac{\ln{N}}{2T}}$, then for any sequence of true labels $y_1, ..., y_T$, the predictions $\hat{y}_1, ..., \hat{y}_T$, output by EXP4.AT satisfy:
[ "\\begin{lemma} For any $\\eta > 0$ and any sequence of true labels $y_1, ..., y_T$, the probabilities $p_1, ..., p_T$ output by EXP4.AT satisfy" ]
[ { "id": 86962815, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86973156, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86973844, "completed_by": { "id": 126844 }, "result": ...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_16
$$\mathbbm{E}\left[\sum_{t=1}^T \mathbbm{1}\{y_t \neq \hat{y}_t\} \right] \leq \inf_{j \in [N]} \sum_{t=1}^T \mathbbm{1}\{y_t \neq \mathcal{E}^j_t\} + 3\sqrt{T\ln{N}}.$$
[ "$$\\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} p_t^y \\hat{\\ell}_t(y) - \\inf_{i \\in [N]}\\sum_{t=1}^T \\hat{\\ell}_t(\\mathcal{E}_t^i) \\leq \\frac{\\ln N}{\\eta} + \\eta \\sum_{t=1}^T \\hat{\\ell}_t(1) + \\eta\\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} p_t^y \\hat{\\ell}_t(y)^2.$$" ]
[ { "id": 86865047, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86901885, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86903568, "completed_by": { "id": 62471 }, "result": [...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_17
$$\sum_{t=1}^T \sum_{y \in \{0, 1\}} p_t^y \hat{\ell}_t(y) - \inf_{j \in [N]}\sum_{t=1}^T \hat{\ell}_t(\mathcal{E}_t^{j} ) \leq \frac{\ln N}{\eta} + \eta \sum_{t=1}^T \hat{\ell}_t(1) + \eta\sum_{t=1}^T p_t^1(1 - p_t^1) \hat{\ell}_t(0)^2 + \eta\sum_{t=1}^T p_t^1 \hat{\ell}_t(1)^2.$$ \end{lemma}
[ "$$\\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} p_t^y \\hat{\\ell}_t(y) - \\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} \\hat{\\ell}_t(\\mathcal{E}_t^{j}) \\leq \\frac{\\ln N}{\\eta} + \\eta \\sum_{t=1}^T \\hat{\\ell}_t(1) + \\eta\\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} p_t^y \\hat{\\ell}_t(y)^2.$$ " ]
[ { "id": 86904454, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86972604, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86972967, "completed_by": { "id": 140156 }, "result": ...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_18
$$\sum_{t=1}^T \sum_{i=1}^N q_t^i \ell^{\prime}_t(\mathcal{E}_t^i) \leq \sum_{t=1}^T \ell^{\prime}_t(\mathcal{E}_t^j) + \frac{\ln N}{\eta} + \eta \sum_{t=1}^{T} \sum_{i=1}^N q_t^i (\ell^{\prime}_t(\mathcal{E}_t^i))^2.$$
[ "Next, observe that $$ \\sum_{i=1}^N q_t(i) \\hat{z}_t(i) = \\mathbb{E}_{i \\sim q_t}\\left[\\mathbb{E}_{y \\sim \\mathcal{E}_t^i}\\left[\\hat{\\ell}_t(y) \\right] \\right] = \\mathbb{E}_{\\hat{y}_t \\sim p_t}\\left[\\hat{\\ell}_t(\\hat{y}_t) \\right] $$" ]
[ { "id": 86737279, "completed_by": { "id": 70661 }, "result": [ "Différent", "Différent" ] }, { "id": 86898875, "completed_by": { "id": 62471 }, "result": [ "Différent", "Différent" ] }, { "id": 86899523, "completed_by": { ...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_19
$$\sum_{t=1}^T \sum_{i=1}^N q_t^i \hat{\ell}_t(\mathcal{E}_t^i) \leq \sum_{t=1}^T \hat{\ell}_t(\mathcal{E}_t^j) + \frac{\ln N}{\eta} + \eta \sum_{t=1}^{T} \sum_{i=1}^N q_t^i (\ell^{\prime}_t(\mathcal{E}_t^i))^2.$$
[ "Next, observe that $$ \\sum_{i=1}^N q_t(i) \\hat{z}_t(i) = \\mathbb{E}_{i \\sim q_t}\\left[\\mathbb{E}_{y \\sim \\mathcal{E}_t^i}\\left[\\hat{\\ell}_t(y) \\right] \\right] = \\mathbb{E}_{\\hat{y}_t \\sim p_t}\\left[\\hat{\\ell}_t(\\hat{y}_t) \\right] $$", "$$\\sum_{t=1}^T \\mathbb{E}_{\\hat{y}_t \\sim p_t}\\le...
[ { "id": 86899135, "completed_by": { "id": 62471 }, "result": [ "Différent", "Différent" ] }, { "id": 86901956, "completed_by": { "id": 62306 }, "result": [ "Différent", "Différent" ] }, { "id": 86909247, "completed_by": { ...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_20
Next, observe that $$ \sum_{i=1}^N q_t^i \hat{\ell}_t(\mathcal{E}_t^i) = \left(\sum_{i=1}^N q_t^i \mathcal{E}_t^i\right) \hat{\ell}_t(1) + \left(1 - \sum_{i=1}^N q_t^i \mathcal{E}_t^i\right) \hat{\ell}_t(0) = \frac{1}{1-\eta}\sum_{y \in \{0, 1\}} p_t^y \hat{\ell}_t(y) - \frac{\eta}{1-\eta} \hat{\ell}_t(1)$$
[ "Next, observe that $$ \\sum_{i=1}^N q_t(i) \\hat{z}_t(i) = \\mathbb{E}_{i \\sim q_t}\\left[\\mathbb{E}_{y \\sim \\mathcal{E}_t^i}\\left[\\hat{\\ell}_t(y) \\right] \\right] = \\mathbb{E}_{\\hat{y}_t \\sim p_t}\\left[\\hat{\\ell}_t(\\hat{y}_t) \\right] $$", "$$\\sum_{t=1}^T \\mathbb{E}_{\\hat{y}_t \\sim p_t}\\le...
[ { "id": 86962042, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86977397, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86979004, "completed_by": { "id": 140156 }, "result": ...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_21
$$\frac{1}{1-\eta} \sum_{t=1}^T \sum_{y \in \{0, 1\}} p_t^y \hat{\ell}_t(y) - \frac{\eta}{(1 - \eta)} \sum_{t=1}^T \hat{\ell}_t(1) \leq \sum_{t=1}^T \hat{\ell}_t(\mathcal{E}_t^{j}) + \frac{\ln N}{\eta} + \frac{\eta}{1-\eta}\sum_{t=1}^T \sum_{y \in \{0, 1\}} p_t^y \ell^{\prime}_t(y)^2.$$
[ "$$\\sum_{t=1}^T \\mathbb{E}_{\\hat{y}_t \\sim p_t}\\left[\\hat{\\ell}_t(\\hat{y}_t) \\right] - \\sum_{t=1}^T \\mathbb{E}_{y \\sim \\mathcal{E}_t^{i^*}}\\left[\\hat{\\ell}_t(y)\\right] \\leq \\frac{\\ln N}{\\eta} + \\eta\\sum_{t=1}^T |\\text{supp}(p_t)|.$$ " ]
[ { "id": 86722930, "completed_by": { "id": 70661 }, "result": [ "Différent", "Différent" ] }, { "id": 86723473, "completed_by": { "id": 126844 }, "result": [ "Différent", "Différent" ] }, { "id": 86724058, "completed_by": { ...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_22
$$\sum_{t=1}^T \sum_{y \in \{0, 1\}} p_t^y \hat{\ell}_t(y) - (1-\eta) \sum_{t=1}^T \hat{\ell}_t(\mathcal{E}_t^{j}) \leq \frac{(1 - \eta)\ln N}{\eta} + \eta \sum_{t=1}^T \hat{\ell}_t(1) + \eta\sum_{t=1}^T \sum_{y \in \{0, 1\}} p_t^y \ell^{\prime}_t(y)^2$$ which further implies the guarantee:
[ "$$\\sum_{t=1}^T \\mathbb{E}_{\\hat{y}_t \\sim p_t}\\left[\\hat{\\ell}_t(\\hat{y}_t) \\right] - \\sum_{t=1}^T \\mathbb{E}_{y \\sim \\mathcal{E}_t^{i^*}}\\left[\\hat{\\ell}_t(y)\\right] \\leq \\frac{\\ln N}{\\eta} + \\eta\\sum_{t=1}^T |\\text{supp}(p_t)|.$$ ", "$$\\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} p_t^y \\el...
[ { "id": 86962915, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86981567, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86981396, "completed_by": { "id": 126844 }, "result":...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_23
$$ \sum_{t=1}^T \sum_{y \in \{0, 1\}} p_t^y \hat{\ell}_t(y) - \sum_{t=1}^T \hat{\ell}_t(\mathcal{E}_t^{j}) \leq \frac{\ln N}{\eta} + \eta \sum_{t=1}^T \hat{\ell}_t(1) + \eta\sum_{t=1}^T p_t^1 (1 - p_t^1) \hat{\ell}_t(0)^2 + \eta\sum_{t=1}^T p_t^1 \hat{\ell}_t(1)^2$$
[ "$$\\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} p_t^y \\ell^{\\prime}_t(y) - \\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} \\ell^{\\prime}_t(\\mathcal{E}_t^{j}) \\leq \\frac{\\ln N}{\\eta} + \\eta \\sum_{t=1}^T \\ell^{\\prime}_t(1) + \\eta\\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} p_t^y \\ell^{\\prime}_t(y)^2.$$ " ]
[ { "id": 86734777, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86901068, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86901283, "completed_by": { "id": 62471 }, "result": [...
100
2310.19064-7fbe2739dfcbc2f2.jsonl
2310.19064-7fbe2739dfcbc2f2_24
$$\sum_{t=1}^T \sum_{y \in \{0, 1\}} p_t^y \hat{\ell}_t(y) - \sum_{t=1}^T \hat{\ell}_t(\mathcal{E}_t^{j}) \leq \frac{\ln N}{\eta} + \eta \sum_{t=1}^T \hat{\ell}_t(1) + \eta\sum_{t=1}^T p_t^0 p_t^1 \hat{\ell}_t(0)^2 + \eta\sum_{t=1}^T p_t^1 \hat{\ell}_t(1)^2.$$
[ "$$\\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} p_t^y \\ell^{\\prime}_t(y) - \\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} \\ell^{\\prime}_t(\\mathcal{E}_t^{j}) \\leq \\frac{\\ln N}{\\eta} + \\eta \\sum_{t=1}^T \\ell^{\\prime}_t(1) + \\eta\\sum_{t=1}^T \\sum_{y \\in \\{0, 1\\}} p_t^y \\ell^{\\prime}_t(y)^2.$$ " ]
[ { "id": 86732094, "completed_by": { "id": 70661 }, "result": [ "Réécriture" ] }, { "id": 86732236, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86766173, "completed_by": { "id": 62306 }, "result"...
66.666667
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_1
In the Learning from Label Proportions (LLP) problem, the goal is to infer a classifier $f: \mathcal{X} \mapsto \mathcal{Y}$ that maps items $\X \in \mathcal{X}$ to labels $Y \in \mathcal{Y}$. LLP differs from a standard classification problem in that items do not come with individual labels. Instead, during the traini...
[ "In the Learning from Label Proportions (LLP) problem, the goal is to infer a classifier $f: \\mathcal{X} \\mapsto \\mathcal{Y}$ that maps items $\\X \\in \\mathcal{X}$ to labels $Y \\in \\mathcal{Y}$. LLP differs from a standard classification problem in that items do not come with individual labels. Instead, dur...
[ { "id": 86982830, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86985509, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86985597, "completed_by": { "id": 126844 }, "result...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_2
Following the notation of \cite{kddpaper2023}, we define an instance of the LLP problem over $N$ items taken from feature space $\mathcal{X}$; each item is associated to one of $L \leq N$ disjoint groups, called bags. A specific problem instance is given by a set of pairs $D = \{(\mathbf{x}_i, b_i), i = 1, \dots, N\}$ ...
[ "Following the notation of \\cite{kddpaper2023}, we define an instance of the LLP problem over $N$ items taken from feature space $\\mathcal{X}$; each item is associated to one of $L \\leq N$ disjoint sets, called bags. A specific problem instance is given by a set of pairs $D = \\{(\\mathbf{x}_i, b_i), i = 1, \\do...
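In the setup above, each item belongs to one of $L$ disjoint bags and the learner sees only per-bag label proportions rather than individual labels. A minimal sketch of computing those proportions from a labeled base dataset (function name illustrative; binary labels assumed):

```python
from collections import defaultdict

def bag_proportions(bag_ids, labels):
    # Fraction of positive (label 1) items inside each bag.
    totals, positives = defaultdict(int), defaultdict(int)
    for b, y in zip(bag_ids, labels):
        totals[b] += 1
        positives[b] += y
    return {b: positives[b] / totals[b] for b in totals}

# Bag 0 holds two items (one positive); bag 1 holds three items (two positive).
print(bag_proportions([0, 0, 1, 1, 1], [1, 0, 1, 1, 0]))  # → {0: 0.5, 1: 0.6666666666666666}
```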
[ { "id": 86738436, "completed_by": { "id": 70661 }, "result": [ "Réécriture" ] }, { "id": 86901770, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86901868, "completed_by": { "id": 62471 }, "result":...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_3
Given a dataset with known labels, in which items have been grouped into bags, it can be important to test which LLP variant the dataset represents. We assume that $\X \not \indep Y$, which is necessary for there to be a non-trivial algorithm for classification. Hence, we need to perform five tests to identify the vari...
[ "Given a dataset with known labels, in which items have been grouped into bags, it can be important to test which LLP variant the dataset represents. We assume that $\\X \\not \\indep Y$, which is necessary for there to be a non-trivial algorithm for classification. Hence, we need to perform five tests to identify...
[ { "id": 86964479, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86972808, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86973477, "completed_by": { "id": 70661 }, "result"...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_4
Likewise, the problem of hyperparameter selection in LLP has only been considered in a small number of studies. The authors in \cite{hernandez2019framework} proposed a $k$-fold based method that assigns bags to folds in such a way that each fold has proportions similar to the entire dataset. Since this met...
[ "Likewise, the problem of hyperparameter selection in LLP has only been considered in a small number of studies. The authors in \\cite{hernandez2019framework} proposed a $k$-fold based method that assign bags to folds in a way in which each fold have similar proportion of positive instances in comparison to the en...
[ { "id": 87251072, "completed_by": { "id": 70661 }, "result": [ "Réécriture" ] }, { "id": 87259386, "completed_by": { "id": 126844 }, "result": [ "Réécriture" ] }, { "id": 87267890, "completed_by": { "id": 62306 }, "result"...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_5
In this section, we present methods to generate LLP datasets for different variants given a base (classification) dataset, a specified number of bags, bag proportions, and sizes of bags. We assume that each item in the base dataset is used once in the resulting LLP dataset, so a minimal condition for feasibility is tha...
[ "In this section, we present methods to generate LLP datasets for different variants given a base (classification) dataset, a specified number of bags, bag proportions, and sizes of bags. We assume that each item in the base dataset is used once in the resulting LLP dataset, so a minimal condition for feasibility ...
[ { "id": 86986273, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86986902, "completed_by": { "id": 70661 }, "result": [ "Réécriture" ] }, { "id": 86986918, "completed_by": { "id": 126844 }, "result"...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_6
The input of the problem is as follows: a standard classification dataset $D_c = \{(x_i, y_i), i = 1,\dots,N\}$, where the item-label pairs are i.i.d. observations of a random vector $(\mathbf{X}, Y)$ with distribution $P_{\mathbf{X}, Y} ( \mathbf{x}, y)$; the number of classes $C$; the number of bags, $L$; the size of...
[ "The input of the problem is as follows: a standard classification dataset $D_c = \\{(x_i, y_i), i = 1,\\dots,N\\}$, where the item-label pairs are i.i.d. observations of a random vector $(\\mathbf{X}, Y)$ with distribution $P_{\\mathbf{X}, Y} ( \\mathbf{x}, y)$; the number of bags, $L$; the size of each bag, i.e.,...
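Given these inputs (number of bags $L$ and the size of each bag), the simplest variant to generate is one where bag membership ignores both features and labels, so $B$ is independent of $(\mathbf{X}, Y)$: shuffle the item indices and slice off one bag at a time. A sketch under that assumption (function name and sizes illustrative):

```python
import random

def naive_bag_assignment(n_items, bag_sizes, seed=0):
    # Random partition matching the requested bag sizes; the assignment
    # never looks at features or labels, so B is independent of (X, Y).
    assert sum(bag_sizes) == n_items
    order = list(range(n_items))
    random.Random(seed).shuffle(order)
    assignment = [None] * n_items
    start = 0
    for bag, size in enumerate(bag_sizes):
        for i in order[start:start + size]:
            assignment[i] = bag
        start += size
    return assignment

bags = naive_bag_assignment(10, [4, 6])
print(bags.count(0), bags.count(1))  # → 4 6
```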
[ { "id": 86734101, "completed_by": { "id": 70661 }, "result": [ "Réécriture" ] }, { "id": 86773192, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86827432, "completed_by": { "id": 126844 }, "result"...
66.666667
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_7
In other words, the goal of the problem is to obtain a data generation mechanism that allows us to sample from a, possibly unknown, probability distribution $Pr(B \given \X, Y)$ which respects the dependence structure imposed by a given LLP variant and the constraints imposed by the bag sizes and bag proportions. In th...
[ "In other words, the goal of the problem is to obtain a data generation mechanism that allows us to sample from a, possibly unknown, probability distribution $Pr(B \\given \\X, Y)$ which respects the dependence structure imposed by a given LLP variant and the constraints imposed by the bag sizes and bag proportions...
[ { "id": 86982260, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86985190, "completed_by": { "id": 70661 }, "result": [ "Réécriture" ] }, { "id": 86985010, "completed_by": { "id": 126844 }, "result...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_8
One should note that the desired sampling mechanism cannot directly take into account the input matrix of proportions, $\mathbf{P}$, since that would imply $Y \notindep B$. As a consequence, any attempt to generate a Naive LLP dataset in which the components of $\mathbf{P}$ differ significantly from the global proporti...
[ "One should note that the desired sampling mechanism can not directly take into account the input vector of proportions, $\\mathbf{p}$, once that would imply $Y \\notindep B$. As as consequence, any attempt to generate a Naive LLP dataset in which the components of $\\mathbf{p}$ differ significantly of the global p...
[ { "id": 86904649, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86973208, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86974600, "completed_by": { "id": 140156 }, "result": ...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_9
Since $Pr(Y\given B)$ is specified by the bag proportions ($\mathbf{P}$), $Pr(B)$ can be computed from the bag sizes ($\mathbf{s}$), and $Pr(Y)$ can be computed from the labels, the bag assignment rule $Pr(B \given \X, Y)$ can be obtained without needing to consider the features of items.
[ "Since $Pr(Y\\given B)$ is specified by the bag proportions ($\\mathbf{p}$), $Pr(B)$ can be computed from the bag sizes ($\\mathbf{s}$), and $Pr(Y)$ can be computed from the labels, the bag assignment rule $Pr(B \\given \\X, Y)$ can be obtained without needing to consider the features of items." ]
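The derivation in the paragraph above (obtaining the bag assignment rule $Pr(B \given Y)$ from $Pr(Y \given B)$, $Pr(B)$, and $Pr(Y)$ without item features) can be sketched numerically via Bayes' rule; all numbers below are made-up toy values, not from the source.

```python
import numpy as np

# Toy setting: 3 bags, binary labels (all numbers are illustrative).
P_Y_given_B = np.array([[0.2, 0.8],
                        [0.5, 0.5],
                        [0.9, 0.1]])       # Pr(Y | B): one row per bag
bag_sizes = np.array([100, 300, 100])
P_B = bag_sizes / bag_sizes.sum()          # Pr(B) from the bag sizes

# Label marginal implied by bags and proportions: Pr(Y) = sum_b Pr(Y|b) Pr(b)
P_Y = P_B @ P_Y_given_B

# Bayes' rule: Pr(B | Y) is proportional to Pr(Y | B) Pr(B) -- no item features needed.
P_B_given_Y = (P_Y_given_B * P_B[:, None]) / P_Y[None, :]
```

Each column of `P_B_given_Y` is a distribution over bags, one per label value, which is exactly the feature-free bag assignment rule described.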
[ { "id": 86960559, "completed_by": { "id": 62471 }, "result": [ "Réécriture" ] }, { "id": 86976916, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86977267, "completed_by": { "id": 126844 }, "result":...
66.666667
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_10
In order to overcome these issues, our first step is to cluster the items of the input dataset. The idea is to partition the items into $Q$ clusters ($1, \dots, Q$) in a way that items of the same cluster share some similarity in the feature space. One could imagine simply using the clusters as bags. However, by doing ...
[ "In order to overcome these issues, our first step is to cluster the items of the input dataset. The idea is to partition the items into $C$ clusters ($1, \\dots, C$) in a way that items of the same cluster share some similarity in the feature space. One could imagine simply using the clusters as bags. However, by ...
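The clustering step described above could look like the following numpy-only sketch; the source paragraph does not name a clustering algorithm, so plain k-means with a deterministic farthest-point initialization is an assumption here.

```python
import numpy as np

def kmeans(X, Q, iters=50):
    """Partition items into Q clusters so that items in the same cluster
    are close in feature space (minimal k-means sketch)."""
    # Deterministic farthest-point initialization.
    centers = [X[0]]
    for _ in range(Q - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each item to its nearest center, then recompute centers.
        z = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
        for q in range(Q):
            if np.any(z == q):
                centers[q] = X[z == q].mean(axis=0)
    return z, centers

# Two well-separated blobs should be recovered as two clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
z, _ = kmeans(X, Q=2)
```

As the paragraph warns, using the clusters directly as bags would be too restrictive; they only serve as an intermediate variable in the generation process.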
[ { "id": 86865182, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86959575, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86961934, "completed_by": { "id": 140156 }, "result": ...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_11
To solve this, it helps to rewrite (\ref{eq:intermediate-prob-derivation}) as a matrix equation: [EQUATION] where $\mathbf{P}_{Y,B} \in \mathbb{R}^{C \times L}$ represents $Pr(Y,B)$, $\mathbf{P}_{Y,Z}\in\mathbb{R}^{C \times Q}$ represents $Pr(Y, Z = z)$, and $\mathbf{P}_{B|Z} \in \mathbb{R}^{Q \times L}$ represents $Pr...
[ "where $\\mathbf{P}_{Y,B} \\in \\mathbb{R}^{2 \\times L}$ represents $Pr(Y,B)$, $\\mathbf{P}_{Y,Z}\\in\\mathbb{R}^{2\\times C}$ represents $Pr(Y, Z = z)$, and $\\mathbf{P}_{B|Z} \\in \\mathbb{R}^{C\\times L}$ represents $Pr(B\\given Z = z)$. Note that $\\mathbf{P}_{Y,B}$ and $\\mathbf{P}_{Y,Z}$ can be easily obtai...
[ { "id": 86981077, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86985743, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86985764, "completed_by": { "id": 70661 }, "result": ...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_12
To generate an Intermediate problem instance we obtain (or approximate) $\mathbf{P}_{B\given Z}$ by solving $\mathbf{P}_{B\given Z} = \argmin_{\mathbf{A}\in\mathcal{A}}\Vert\mathbf{P}_{Y,B} - \mathbf{P}_{Y,Z}\mathbf{A}\Vert_F$, where $\Vert\cdot\Vert_F$ represents the Frobenius norm and the optimization domain $\mathca...
[ "To generate an Intermediate problem instance we obtain (or approximate) $\\mathbf{P}_{B\\given Z}$ by solving $\\mathbf{P}_{B\\given Z} = \\argmin_{\\mathbf{A}\\in\\mathcal{A}}\\Vert\\mathbf{P}_{Y,B} - \\mathbf{P}_{Y,Z}\\mathbf{A}\\Vert_F$, where $\\Vert\\cdot\\Vert_F$ represents the Frobenius norm and the optimiz...
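The constrained least-squares problem $\min_{\mathbf{A}\in\mathcal{A}}\Vert\mathbf{P}_{Y,B} - \mathbf{P}_{Y,Z}\mathbf{A}\Vert_F$ can be approximated by projected gradient descent. Assuming the domain $\mathcal{A}$ is the set of row-stochastic matrices (the record truncates before defining it, so this is an assumption), a numpy-only sketch:

```python
import numpy as np

def project_rows_to_simplex(A):
    """Euclidean projection of each row of A onto the probability simplex."""
    out = np.empty_like(A)
    for i, v in enumerate(A):
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
        out[i] = np.maximum(v - (css[rho] - 1) / (rho + 1), 0)
    return out

def fit_P_B_given_Z(P_YZ, P_YB, iters=2000):
    """Projected gradient on min_A ||P_YB - P_YZ @ A||_F over row-stochastic A."""
    C, L = P_YZ.shape[1], P_YB.shape[1]
    A = np.full((C, L), 1.0 / L)                # uniform feasible start
    step = 1.0 / np.linalg.norm(P_YZ, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        grad = P_YZ.T @ (P_YZ @ A - P_YB)
        A = project_rows_to_simplex(A - step * grad)
    return A

# Toy instance with an exact solution (numbers are illustrative).
P_YZ = np.array([[0.30, 0.10, 0.10],
                 [0.05, 0.25, 0.20]])           # Pr(Y, Z): 2 labels x 3 clusters
A_true = project_rows_to_simplex(np.random.default_rng(0).random((3, 2)))
P_YB = P_YZ @ A_true                            # implied Pr(Y, B)
A_hat = fit_P_B_given_Z(P_YZ, P_YB)
```

Since the objective is convex and the feasible set is a product of simplices, the iterates converge to a feasible $\mathbf{P}_{B\given Z}$ whose residual matches the best achievable one.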
[ { "id": 86869076, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86901733, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86901935, "completed_by": { "id": 62471 }, "result": [...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_13
Again, let $Z$ be the random variable representing the cluster of a given item. Then, the goal is to find a three-dimensional array $\mathbf{P}_{Z,Y,B} \in \mathbb{R}^{Q \times C \times L}$ that encodes $Pr(Z, Y, B)$. To find a suitable $\mathbf{P}_{Z,Y,B}$, we rely on the fact that the marginals $Pr(Z)$, $Pr(Y)$, $Pr(...
[ "Again, let $Z$ be the random variable representing the cluster of a given item. Then, the goal is to find a three-dimensional array $\\mathbf{P}_{Z,Y,B} \\in \\mathbb{R}^{C\\times 2 \\times L}$ that encodes $Pr(Z, Y, B)$. To find a suitable $\\mathbf{P}_{Z,Y,B}$, we rely on the fact that the marginals $Pr(Z)$, $Pr...
[ { "id": 86981659, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86983624, "completed_by": { "id": 126844 }, "result": [ "Réécriture" ] }, { "id": 86984280, "completed_by": { "id": 70661 }, "result...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_14
For each LLP variant, there are some limitations of the generation methods. For the Naive and Simple variants, the limitations are due to the dependence structure. In the Naive case, since the bags are independent of both items and labels, one can generate only datasets with proportions close to the global proportion. ...
[ "For each LLP variant, there are some limitations of the generation methods. For the Naive and Simple variants, the limitations are due to the dependence structure. In the Naive case, since the bags are independent of both items and labels, one can generate only datasets with proportions close to the global proport...
[ { "id": 86862765, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86958501, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86959105, "completed_by": { "id": 126844 }, "result": ...
100
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_15
On the other hand, the main limitation of the Intermediate and Hard generation methods is that they rely on a clustering assignment. For the Intermediate variant, each bag contains a mixture of the clusters. As a result, the proportions and bag sizes that can be achieved by the generation process are limited to convex combi...
[ "For each LLP variant, there are some limitations of the generation methods. For the Naive and Simple variants, the limitations are due to the dependence structure. In the Naive case, since the bags are independent of both items and labels, one can generate only datasets with proportions close to the global proport...
[ { "id": 86962216, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86984898, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86985100, "completed_by": { "id": 70661 }, "result":...
66.666667
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_16
In the experimental setup, we are also careful about the train/test split and the evaluation metrics. Regarding the train/test split, after splitting the data, we have to ensure that the proportions vector passed to the algorithm contains the true proportions of the training data. For this reason, ...
[ "\\gf{As a final result of all these considerations, we present a meta algorithm to evaluate LLP algorithms\\footnoteref{footnote}. Given an LLP dataset, a LLP algorithm, a hyperparameter selection strategy, and a space of hyperparameters, we start splitting the data between training and testing sets. Afterwards, w...
[ { "id": 86739411, "completed_by": { "id": 70661 }, "result": [ "Réécriture" ] }, { "id": 86901642, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86901717, "completed_by": { "id": 62471 }, "result": ...
66.666667
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_17
For each combination of dataset, algorithm, and hyperparameter selection strategy, we run 30 executions of the meta algorithm presented in \S~\ref{sec:standard-eval-methods} (see Algorithm \ref{alg:meta-alg-llp}), using 75\% of the data for training. We measure performance using the $F_1$-score of the learned model in ...
[ "For each combination of dataset, algorithm, and hyperparameter selection strategy, we run 30 executions of the meta algorithm presented in \\S~\\ref{sec:standard-eval-methods}, using 75\\% of the data for training. We measure performance using the $F_1$-score of the learned model in the test set. " ]
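The evaluation protocol above (repeated random splits, 75% of the data for training, $F_1$-score on the held-out part) can be sketched as follows; `fit_predict` is a hypothetical stand-in for an LLP learner, not an interface from the source.

```python
import numpy as np

def f1_score(y_true, y_pred):
    """Binary F1-score from scratch (positive class = 1)."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def repeated_eval(X, y, fit_predict, runs=30, train_frac=0.75, seed=0):
    """Protocol sketch: `runs` random splits, `train_frac` of the data for
    training, F1 on the rest. `fit_predict(X_tr, y_tr, X_te)` stands in
    for any learner that returns held-out predictions."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(runs):
        idx = rng.permutation(len(y))
        cut = int(train_frac * len(y))
        tr, te = idx[:cut], idx[cut:]
        scores.append(f1_score(y[te], fit_predict(X[tr], y[tr], X[te])))
    return np.array(scores)
```

Averaging the returned scores over the 30 runs gives the per-configuration performance figure described in the paragraph.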
[ { "id": 86981701, "completed_by": { "id": 126844 }, "result": [ "Différent", "Différent" ] }, { "id": 86980798, "completed_by": { "id": 140156 }, "result": [ "Différent", "Réécriture" ] }, { "id": 86981853, "completed_by": {...
77.777778
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_18
Our experimental setup spans a total of 1,440 experiments, i.e., 360 experiments for each hyperparameter selection strategy. We trained 1,440$\times$30 = 43,200 models, i.e., each algorithm was trained 8,640 times. To handle such a large experimental setup, we used the resources of Boston University's Shared Computing Cl...
[ "For each combination of dataset, algorithm, and hyperparameter selection strategy, we run 30 executions of the meta algorithm presented in \\S~\\ref{sec:standard-eval-methods}, using 75\\% of the data for training. We measure performance using the $F_1$-score of the learned model in the test set. ", "In terms of...
[ { "id": 86984271, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86986101, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86986083, "completed_by": { "id": 126844 }, "result"...
66.666667
2310.19065-66c30bd111f2a070.jsonl
2310.19065-66c30bd111f2a070_19
Figure~\ref{fig:best-algorithm-dataset-variant} shows that which algorithm performs best differs depending on the LLP variant and base dataset. That is, no single algorithm is best across all variants and base datasets. For instance, we can see that the neural network DLLP is superior for the CIFAR-10 dataset, ...
[ "Following \\cite{kddpaper2023}, split-bag strategies outperform full-bag strategy for the non Naive LLP variants. Moreover, as in the LLP algorithms comparison, we observe that the performance of the hyperparameters selection strategies changes depending on the base dataset and variant. Although split-bag has a be...
[ { "id": 86903975, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86973250, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86973983, "completed_by": { "id": 126844 }, "result":...
66.666667
2310.19077-66c30bd111f2a070.jsonl
2310.19077-66c30bd111f2a070_1
The network is shared by a set of packet types (flows) $\PD:=\{1,2,\cdots,\PKT\}$. A packet type (flow) $j \in \PD$ is characterized by its source $s_j \in \V$, its destination $z_j \in \V$, an end-to-end deadline $d_j \in \mathbb N\cup\{0\}$, and a weight $w_j \in \mathbb R^+$. Packets of different types arrive during...
[ "Packets of different types arrive in this network during a time horizon of length $\\TC$. We use $\\PD:=\\{1,2,\\cdots,\\PKT\\}$ to denote the set of packet types. A packet of type $\\mathrm j \\in \\PD$ is characterized by its source $s_j \\in \\V$, its destination $z_j \\in \\V$, its relative deadline $d_j \\in...
[ { "id": 86986351, "completed_by": { "id": 62306 }, "result": [ "Réécriture", "Réécriture" ] }, { "id": 86987461, "completed_by": { "id": 126844 }, "result": [ "Réécriture", "Réécriture" ] }, { "id": 86987517, "completed_by":...
100
2310.19077-66c30bd111f2a070.jsonl
2310.19077-66c30bd111f2a070_2
\subsection{\texorpdfstring{\Cref{randomized-scheduling-offline}: \ALGOFFSPELLED}{}} We first introduce \Cref{randomized-scheduling-offline}, a scheduling algorithm that probabilistically forwards packets at each time slot based on their \textit{age} (time since their arrival), \textit{type}, and their \textit{current ...
[ "Specifically, \\Cref{randomized-scheduling-offline} maintains a set of forwarding probabilities $f_{j\\ell}^\\trl$, at each node $v$, for $j \\in \\PD$, $\\ell \\in \\mathcal{O}(v)$, and $\\trl \\in \\{1,2,\\cdots, d_j\\}$. forwards packets probabilistically from their current node, choosing link $\\ell$, with pr...
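A sketch of the per-slot probabilistic forwarding rule described above; the table layout (`f_table` keyed by packet type and age) and the treatment of leftover probability mass as "hold at the node" are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def forward_packet(f_table, j, age, out_links, rng):
    """Probabilistic forwarding step: a type-j packet of the given age leaves
    over link l with probability f_table[(j, age)][l]; any residual
    probability mass means the packet is held (returned as None)."""
    probs = list(f_table[(j, age)])
    hold = max(0.0, 1.0 - sum(probs))
    choice = rng.choice(len(out_links) + 1, p=probs + [hold])
    return None if choice == len(out_links) else out_links[choice]

# Toy table: type-0 packets of age 1 take link "a" w.p. 0.7 and "b" w.p. 0.3.
rng = np.random.default_rng(0)
f_table = {(0, 1): [0.7, 0.3]}
picks = [forward_packet(f_table, 0, 1, ["a", "b"], rng) for _ in range(1000)]
```

Over many draws the empirical link frequencies track the forwarding probabilities, which is all the algorithm requires of this step.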
[ { "id": 87255280, "completed_by": { "id": 70661 }, "result": [ "Réécriture" ] }, { "id": 87258817, "completed_by": { "id": 126844 }, "result": [ "Réécriture" ] }, { "id": 87317336, "completed_by": { "id": 140156 }, "result...
100
2310.19077-66c30bd111f2a070.jsonl
2310.19077-66c30bd111f2a070_3
To see that, recall that $x_{jk}$ is the probability of scheduling packet-type $j$ over route-schedule $k$, and forwarding variable $\fl j \ell \tau$ can be interpreted as the fraction of type-$j$ packets scheduled over link $\ell$ at age $\tau$. Then, \dref{ES-obj} and \dref{ES:one-choice} are equivalent to \dref{FS-o...
[ " The simplified \\LP $\\EST$ still has exponentially many variables (there are exponentially many route-schedules $k$). \\NEW{We resolve this, by reformulating \\EST through a change of variables, allowing us} to work with a small set of \\textit{forwarding variables}, and find the corresponding forwarding-based ...
[ { "id": 86863755, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86900916, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86901079, "completed_by": { "id": 62471 }, "result": ...
66.666667
2310.19077-66c30bd111f2a070.jsonl
2310.19077-66c30bd111f2a070_4
In our simulations, we assign a random reward uniformly chosen from $(0,1)$ to each source-destination pair and a deadline of $10$ time slots to each packet. To further evaluate the impact of traffic intensity and the number of flows, we simulate two cases. In one case, we maintain all top $30$ source-destination pair...
[ "In \\Cref{fig:comparison-a} we consider the randomized packet types considered earlier. In our first comparison we scale the traffic intensity and capacity of the networks. We observe that our algorithm maintains a growing advantage over GLS-FP for larger Link capacities and traffic capacities." ]
[ { "id": 86736611, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86902160, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86903544, "completed_by": { "id": 62471 }, "result": [...
100
2310.19077-66c30bd111f2a070.jsonl
2310.19077-66c30bd111f2a070_5
Indeed, we have \[ \sum_{\phi=1}^{\PHASES} \TC_\phi \epsilon/\TCP \overset{(a)}= \sqrt{\mu \log(2\PKT/\epsilon) \epsilon/\TCP} \sum_{\phi=1}^{\PHASES} \sqrt{2}^{(\phi-1)} \overset{(b)}\leq \frac{\sqrt{\mu \log(2\PKT/\epsilon) \epsilon/\TCP} }{\sqrt \epsilon (\sqrt 2 -1 )} = \frac{\sqrt{ \mu \log(2 \PKT/\epsilon)/\TC^\p...
[ "\\left(\\frac{2}{\\sqrt 2 -1}\\right)^2 \\mu \\log(\\log(1/\\epsilon) \\frac{2P}{\\epsilon})/\\epsilon^2$ we get the bound for $\\epsilon$" ]
[ { "id": 86726704, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86729608, "completed_by": { "id": 70661 }, "result": [ "Réécriture" ] }, { "id": 86730712, "completed_by": { "id": 126844 }, "result...
100
2310.19077-66c30bd111f2a070.jsonl
2310.19077-66c30bd111f2a070_6
\end{proof} \section{General Distributions with stationary arrival rates} In this section we consider general distributions on the arrival processes $\{\ajt\}$ with stationary arrival rates, i.e., $\avgpacks_{j}^t \equiv \avgpacks_j$. First, we provide a preliminary definition which allows us to extend \Cref{main-theor...
[ "\\begin{definition} Consider a nonnegative integer-valued random variable $Y$ with distribution $\\mathcal D$, and suppose $Y$ is bounded by a constant $M$. Consider a decomposition of $Y$ as the sum of $M$ (possibly dependent) Bernoulli random variables $\\{Y_{i}\\in \\{0,1\\}\\}$: \\[ Y = \\sum_{i=1}^{M} Y_{i}...
[ { "id": 86872685, "completed_by": { "id": 70661 }, "result": [ "Différent", "Différent" ] }, { "id": 86901245, "completed_by": { "id": 62306 }, "result": [ "Réécriture", "Différent" ] }, { "id": 86901344, "completed_by": { ...
77.777778
2310.19077-66c30bd111f2a070.jsonl
2310.19077-66c30bd111f2a070_7
We now state \Cref{main-theorem-offline-dependencies}, which generalizes \Cref{main-theorem-offline}. \begin{theorem} Consider stationary arrival processes $\{a_{j}^t\}$, with the property that each packet type's total traffic within a fixed time window, $\sumtlast a_{j}^t$, has at most dependency degree $D$ (for all ...
[ "that can impact any link at any time. In other words, a lower dependency-degree results in better performance guarantees, given $\\CMN$. \n\\begin{theorem} \\NEW{Consider arrival processes $\\{a_{j}^t\\}$ such that each link's packet arrivals at any time slot can depend on a maximum of $(D-1)$ other packets}, g...
[ { "id": 86983219, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86986615, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86986660, "completed_by": { "id": 70661 }, "result": ...
100
2310.19077-66c30bd111f2a070.jsonl
2310.19077-66c30bd111f2a070_8
\begin{lemma} In the case that $\sumtlast a_{j}^t$ has at most dependency degree $D$, for all $j$ and $t$, using the forwarding probabilities $\{f_{j\ell}^{\tau \star}\}$ for scheduling packets (Lines \ref{alg-packet-for}-\ref{alg-packet-for-end} of \Cref{randomized-scheduling-offline}), the probability of a packet bei...
[ "that can impact any link at any time. In other words, a lower dependency-degree results in better performance guarantees, given $\\CMN$. \n\\begin{theorem} \\NEW{Consider arrival processes $\\{a_{j}^t\\}$ such that each link's packet arrivals at any time slot can depend on a maximum of $(D-1)$ other packets}, g...
[ { "id": 86904259, "completed_by": { "id": 62471 }, "result": [ "Réécriture" ] }, { "id": 86952483, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86959197, "completed_by": { "id": 126844 }, "result...
100
2310.19077-79afd71960e8496f.jsonl
2310.19077-79afd71960e8496f_1
In order to evaluate the performance of our algorithms in practical settings, in addition to extensive synthetic simulations, we provide simulations using real traffic traces over real networks. The results indicate that, despite the presence of highly non-stationary traffic in the network traces, our algorithms ...
[ "In order to assess the efficacy of our algorithms in realistic scenarios, we complement our comprehensive synthetic simulations with simulations utilizing actual traffic traces." ]
[ { "id": 86962697, "completed_by": { "id": 62471 }, "result": [ "Réécriture" ] }, { "id": 86976610, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86979014, "completed_by": { "id": 126844 }, "result":...
66.666667
2310.19080-0acee387fce24dbd.jsonl
2310.19080-0acee387fce24dbd_1
\centering \caption{Analysis on rewards of boxes produced by different detectors. We report mean and std of box reward on the \lyft dataset. }
[ "\\centering \\caption{Analysis on rewards of boxes produced by different detectors. We report mean and std of box reward on the \\lyft dataset. }" ]
[ { "id": 86872809, "completed_by": { "id": 70661 }, "result": [ "Réécriture", "Différent" ] }, { "id": 86907852, "completed_by": { "id": 140156 }, "result": [ "Réécriture", "Réécriture" ] }, { "id": 86903978, "completed_by": ...
55.555556
2310.19081-66c30bd111f2a070.jsonl
2310.19081-66c30bd111f2a070_1
\item \textbf{Linear-frequency power spectrogram:} \begin{comment} Represents the time on the x-axis, the frequency in Hz on a linear scale on the y-axis, and the power in dB \cite{bib21p0}. In audio forensics, a linear-frequency power spectrogram is used for various purposes, including: Identification of audio events...
[ "To conclude, the linear-frequency power spectrogram enables experts to interpret spectral characteristics of an audio recording, identify relevant events, enhance audio quality, detect tampering, and provide assistance in transcription and voice analysis.", "A log-frequency power spectrogram is a valuable tool i...
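The linear-frequency power spectrogram described above can be illustrated with a numpy-only sketch (FFT size, hop length, and window are arbitrary choices for the illustration, not values from the source):

```python
import numpy as np

def power_spectrogram_db(x, sr, n_fft=512, hop=256):
    """Linear-frequency power spectrogram: STFT magnitude squared in dB.
    Returns (freqs_hz, times_s, S_db) with S_db shaped (freq, time)."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    S_db = 10 * np.log10(power + 1e-12)        # power in dB
    freqs = np.fft.rfftfreq(n_fft, d=1 / sr)   # linear frequency axis (Hz)
    times = (np.arange(n_frames) * hop + n_fft / 2) / sr
    return freqs, times, S_db.T

# A pure 1 kHz tone should concentrate its energy in the bin nearest 1 kHz.
sr = 8000
t = np.arange(sr) / sr
freqs, times, S = power_spectrogram_db(np.sin(2 * np.pi * 1000 * t), sr)
```

The time axis, the linear Hz axis, and the dB power values correspond to the three quantities the paragraph says this representation displays.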
[ { "id": 86904233, "completed_by": { "id": 62471 }, "result": [ "Différent", "Différent" ] }, { "id": 86973366, "completed_by": { "id": 62306 }, "result": [ "Différent", "Différent" ] }, { "id": 86974249, "completed_by": { ...
100
2310.19081-66c30bd111f2a070.jsonl
2310.19081-66c30bd111f2a070_2
\item \textbf{Log-frequency power spectrogram:} Such features can be obtained from a spectrogram by converting the linear frequency axis (measured in Hertz) into a logarithmic axis (measured in pitches).
[ "To conclude, the linear-frequency power spectrogram enables experts to interpret spectral characteristics of an audio recording, identify relevant events, enhance audio quality, detect tampering, and provide assistance in transcription and voice analysis.", "A log-frequency power spectrogram is a valuable tool i...
[ { "id": 86725034, "completed_by": { "id": 70661 }, "result": [ "Réécriture", "Réécriture" ] }, { "id": 86731507, "completed_by": { "id": 140156 }, "result": [ "Réécriture", "Réécriture" ] }, { "id": 86827742, "completed_by":...
55.555556
2310.19081-66c30bd111f2a070.jsonl
2310.19081-66c30bd111f2a070_3
Its logarithmic representation of frequency content enables experts to extract unique voice characteristics, classify sounds, segment audio recordings, enhance transcription accuracy, and detect potential tampering or manipulation. The resulting representation is also called log-frequency spectrogram.
[ "To conclude, the linear-frequency power spectrogram enables experts to interpret spectral characteristics of an audio recording, identify relevant events, enhance audio quality, detect tampering, and provide assistance in transcription and voice analysis.", "A log-frequency power spectrogram is a valuable tool i...
[ { "id": 86981051, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86985818, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86985907, "completed_by": { "id": 70661 }, "result"...
66.666667
2310.19081-66c30bd111f2a070.jsonl
2310.19081-66c30bd111f2a070_4
Chroma STFT features are useful for analyzing the harmonic content of an audio signal and can be used in a variety of applications such as music information retrieval, audio classification, and speech recognition. They provide a way to represent the pitch content of an audio signal in a compact and efficient way and ca...
[ "Chroma STFT is a type of feature extraction method used in audio signal processing that represents the chroma content of an audio signal in the time-frequency domain. The chroma content of an audio signal is the set of 12 pitch classes that are commonly used in Western music (i.e., the 12 notes of the chromatic sc...
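The chroma computation described above can be sketched by folding a power spectrum into the 12 pitch classes; the 440 Hz reference and the class ordering (A as class 0) are conventions assumed here for the illustration.

```python
import numpy as np

def chroma_from_power(power, freqs):
    """Fold a linear-frequency power spectrum into 12 pitch classes.
    Bin -> class via round(12 * log2(f / 440)) mod 12."""
    chroma = np.zeros(12)
    valid = freqs > 0                      # skip the DC bin
    cls = np.round(12 * np.log2(freqs[valid] / 440.0)).astype(int) % 12
    np.add.at(chroma, cls, power[valid])   # accumulate energy per class
    return chroma / max(chroma.sum(), 1e-12)

# A 440 Hz tone should land almost entirely in the "A" class (class 0).
sr, n = 8000, 4096
t = np.arange(n) / sr
x = np.sin(2 * np.pi * 440 * t) * np.hanning(n)
spec = np.abs(np.fft.rfft(x)) ** 2
chroma = chroma_from_power(spec, np.fft.rfftfreq(n, 1 / sr))
```

Applying this folding frame-by-frame to an STFT yields the compact 12-dimensional pitch-class representation the paragraph describes.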
[ { "id": 86962493, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86971544, "completed_by": { "id": 126844 }, "result": [ "Réécriture" ] }, { "id": 86971652, "completed_by": { "id": 140156 }, "result"...
66.666667
2310.19081-66c30bd111f2a070.jsonl
2310.19081-66c30bd111f2a070_5
The choice of DCT basis functions depends on the specific application and the trade-offs between computational efficiency, frequency resolution, and energy compaction. In many cases, the standard DCT-II is a good choice for audio signal processing applications, but other DCT bases may be more appropriate for certain ty...
[ "Here, we will discuss comparing different DCT bases. \nThe most commonly used DCT basis functions are the DCT-II, DCT-III, and DCT-IV. The DCT-II is also known as the \"standard\" DCT and is commonly used for audio compression, while the DCT-III and DCT-IV are less widely used but have certain advantages for some...
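To make the comparison of DCT bases concrete, here is a numpy-only sketch of the DCT-II, DCT-III, and DCT-IV basis matrices; the orthonormal normalization is a convention chosen for the sketch, since the paragraph does not fix one.

```python
import numpy as np

def dct_matrix(N, kind=2):
    """Orthonormal basis matrices for the DCT variants discussed above:
    kind=2 is the 'standard' DCT-II, kind=3 its transpose (inverse),
    kind=4 has half-sample shifts in both time and frequency indices."""
    n = np.arange(N)
    k = n[:, None]
    if kind == 2:
        M = np.cos(np.pi * k * (2 * n + 1) / (2 * N))
        M[0] *= 1 / np.sqrt(2)             # DC row rescaled for orthonormality
        return np.sqrt(2 / N) * M
    if kind == 3:
        return dct_matrix(N, 2).T
    if kind == 4:
        return np.sqrt(2 / N) * np.cos(np.pi * (2 * k + 1) * (2 * n + 1) / (4 * N))
    raise ValueError(kind)

# All three bases are orthogonal, so analysis/synthesis is just a transpose.
N = 16
x = np.random.default_rng(0).standard_normal(N)
for kind in (2, 3, 4):
    D = dct_matrix(N, kind)
    assert np.allclose(D @ D.T, np.eye(N))   # orthonormal rows
    assert np.allclose(D.T @ (D @ x), x)     # perfect round-trip
```

For smooth signals, the leading DCT-II coefficients carry most of the signal energy, which is the energy-compaction property the paragraph weighs against frequency resolution and computational cost.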
[ { "id": 86961028, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86972097, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86972297, "completed_by": { "id": 140156 }, "result":...
66.666667
2310.19081-66c30bd111f2a070.jsonl
2310.19081-66c30bd111f2a070_6
Suppose we wanted to test the quality of the neural networks available in the automatic speech recognition application for a specific language, using files not belonging to any dataset. This can be done via the pipeline creation section. The user creates a pipeline and adds as many steps as necessary to compare th...
[ "Using the Deep Audio Analyzer, it was possible to create different types of pipelines in a few steps to test the different state-of-the-art neural networks for each type of task, using files not belonging to datasets. In particular, " ]
[ { "id": 86986662, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86987223, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86987324, "completed_by": { "id": 126844 }, "result": ...
66.666667
2310.19081-66c30bd111f2a070.jsonl
2310.19081-66c30bd111f2a070_7
\subsection{Results} \subsubsection{Model Evaluation on different Datasets} Tables \ref{tab:EvaluationASR} and \ref{tab:EvaluationSpeechSeparation} show the Evaluation module applied to the Automatic Speech Recognition and Speech Separation tasks with pre-trained models on some datasets. However, evaluations conducted on dif...
[ "Using the Deep Audio Analyzer, it was possible to create different types of pipelines in a few steps to test the different state-of-the-art neural networks for each type of task, using files not belonging to datasets. In particular, " ]
[ { "id": 87251223, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 87259490, "completed_by": { "id": 126844 }, "result": [ "Réécriture" ] }, { "id": 87267771, "completed_by": { "id": 62306 }, "result":...
66.666667
2310.19081-66c30bd111f2a070.jsonl
2310.19081-66c30bd111f2a070_8
\section{Conclusion } In this paper, we described Deep Audio Analyzer, an audio analysis platform that aims to cover the entire audio analysis process. Deep Audio Analyzer is a framework that allows the comparison of state-of-the-art models for speech analysis with no lines of code. It enables researchers to reduce the...
[ "\\subsection{Conclusion} Deep Audio Analyzer is a framework that allows to compare state-of-the-art models for speech analysis with no lines of code. This framework enables researchers to reduce the time enabling rapid benchmarking of different models used. In addition, through the pipeline creation feature, it is...
[ { "id": 86904520, "completed_by": { "id": 62471 }, "result": [ "Différent", "Différent" ] }, { "id": 86964379, "completed_by": { "id": 140156 }, "result": [ "Réécriture", "Différent" ] }, { "id": 86966006, "completed_by": { ...
77.777778
2310.19083-00514a3d604be5ad.jsonl
2310.19083-00514a3d604be5ad_1
The computation times are $3.8\si{\second}$, $3.4\si{\second}$, and $1.9\si{\second}$ for the three cases. As expected, the projections show that $\outerBRSAEsuper{-\tau}{1} \supset \outerBRSAEsuper{-\tau}{2} \supset \outerBRSAEsuper{-\tau}{3}$ since the input capacity increases via $\zeta^{(1)} \leq \zeta^{(2)} \leq...
[ "In contrast, the projection onto the axes $x_2$-$x_4$ is equal for $\\outerBRSAEsuper{-\\tau}{1}$ and $\\outerBRSAEsuper{-\\tau}{1}$, as these dimensoins are only affected by the disturbance $w_2$ that is equal in both cases. \nIn the third case, the doubled input capacity allows more states to avoid the target se...
[ { "id": 86961821, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86971383, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86971630, "completed_by": { "id": 140156 }, "result":...
100
2310.19083-00514a3d604be5ad.jsonl
2310.19083-00514a3d604be5ad_2
First of all, we notice that the backward reachable sets are symmetric with respect to the origin along some dimensions, which is caused by the symmetry of the input set and the disturbance set---except for $u_{(1)} \in [-9.81,2.38]$, which becomes apparent in the projection onto the $x_3$-$x_6$ axes. The sets $\inner...
[ "Please also note that the projection onto the $x_3$-$x_6$ axis looks the same for $\\innerBRSEAsuper{-\\tau}{1}$ and $\\innerBRSEAsuper{-\\tau}{2}$ since these dimensions are only affected by the first input $u_{(1)}$, which is the same in both cases." ]
[ { "id": 86983189, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86985566, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86985598, "completed_by": { "id": 62306 }, "result":...
100
2310.19083-00514a3d604be5ad.jsonl
2310.19083-00514a3d604be5ad_3
The computation of $\innerBRSEAsuper{-\tau}{3}$ takes a non-zero disturbance into account, but also increases the input capacity compared to $\innerBRSEAsuper{-\tau}{2}$: The first three projections show a much smaller backward reachable set as the additional input capacity is outweighed by the disturbance. In contras...
[ "Please also note that the projection onto the $x_3$-$x_6$ axis looks the same for $\\innerBRSEAsuper{-\\tau}{1}$ and $\\innerBRSEAsuper{-\\tau}{2}$ since these dimensions are only affected by the first input $u_{(1)}$, which is the same in both cases." ]
[ { "id": 86979248, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86986389, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86988461, "completed_by": { "id": 70661 }, "result": ...
100
2310.19083-00514a3d604be5ad.jsonl
2310.19083-00514a3d604be5ad_4
Let us now address some critical aspects regarding our proposed backward reachability algorithms: First of all, the target set $\targetset{}$ has to be represented as a polytope. While the manual design of polytopes is quite intuitive, the target set may come from another algorithm and thus be represented by a differe...
[ "Another future direction is to generalize the presented backward reachability algorithms to nonlinear continuous-time systems. \nA well-known issue in using set propagation for forward reachability of nonlinear systems is the wrapping effect, which can lead to an explosion of the set size over time. \nStill, some ...
[ { "id": 86986714, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86986867, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86989188, "completed_by": { "id": 126844 }, "result": ...
100
2310.19083-00514a3d604be5ad.jsonl
2310.19083-00514a3d604be5ad_5
As discussed in the respective subsections, the approximation errors of all backward reachable sets except the time-point maximal backward set are non-zero even in the limit $\Delta t \to 0$. Still, one can tighten the time-point and time-interval minimal backward reachable sets in arbitrary directions by additional s...
[ "Another future direction is to generalize the presented backward reachability algorithms to nonlinear continuous-time systems. \nA well-known issue in using set propagation for forward reachability of nonlinear systems is the wrapping effect, which can lead to an explosion of the set size over time. \nStill, some ...
[ { "id": 86904744, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86973042, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86974731, "completed_by": { "id": 140156 }, "result": ...
100
2310.19083-19e054f9fc31aef7.jsonl
2310.19083-19e054f9fc31aef7_1
We introduce some general notation, basics of set-based arithmetic, and fundamentals on forward reachability analysis required for the main body of this article.
[ "We introduce the operations $\\min \\I{} = a$ and $\\max \\I{} = b$, returning the infimum and supremum, respectively." ]
[ { "id": 86962515, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86980096, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86980859, "completed_by": { "id": 126844 }, "result":...
100
2310.19083-19e054f9fc31aef7.jsonl
2310.19083-19e054f9fc31aef7_2
The set of real numbers is denoted by $\R{}$, the set of natural numbers without zero is denoted by $\N{}$, and the subset $\{a,a+1,...,b\} \subset \N{}$, for $0 < a < b$, is denoted by $\Nint{a}{b}$. We denote scalars and vectors by lowercase letters and matrices by uppercase letters. For a vector $s \in \R{n}$, $\no...
[ "We introduce the operations $\\min \\I{} = a$ and $\\max \\I{} = b$, returning the infimum and supremum, respectively." ]
[ { "id": 86952985, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86959552, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86964845, "completed_by": { "id": 126844 }, "result":...
100
2310.19083-19e054f9fc31aef7.jsonl
2310.19083-19e054f9fc31aef7_3
Interval matrices extend intervals by using matrices as lower and upper limits and are denoted in bold calligraphic letters, e.g. $\intmat{I}$. The operations $\centerOp{\S{}}$ and $\boxOp{\S{}}$ compute the volumetric center and tightest axis-aligned interval outer approximation of the set $\S{}$, respectively.
[ "We introduce the operations $\\min \\I{} = a$ and $\\max \\I{} = b$, returning the infimum and supremum, respectively." ]
[ { "id": 86868677, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86900275, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86900784, "completed_by": { "id": 62471 }, "result": ...
66.666667
2310.19083-19e054f9fc31aef7.jsonl
2310.19083-19e054f9fc31aef7_4
where $A \in \R{n \times n}$ is the system matrix in \cref{def:FRS} and the interval matrix $\E{}$ is the remainder of the exponential matrix \cite[Eq.~(3.2)]{Althoff2010diss}:
[ ", whereas the linear map in \\eqref{eq:innerZ} can be evaluated similarly to the common matrix exponential $e^{At}$ by extracting $A^{-1}$." ]
[ { "id": 86723478, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86724473, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86761805, "completed_by": { "id": 62306 }, "result":...
66.666667
2310.19083-80278337999fe4ea.jsonl
2310.19083-80278337999fe4ea_1
Since the respective sets are closed under the applied operations, the entire approximation error is incurred by the outer and inner approximation of the particular solutions. Since these approximations converge to their exact counterparts in the limit $\Delta t \to 0$ by \cref{prop:convergence}, the approximation err...
[ "which we can evaluate using \\eqref{eq:dH_Zprop} and \\eqref{eq:dH_Zinnerouter}. \nWe obtain the same formula as in \\eqref{eq:dH_BRSEA} for the approximation error of the outer approximation $d_H(\\BRSEA{-t},\\outerBRSEA{-t})$. \nNote that this error converges to 0 for $\\Delta t \\to 0$." ]
[ { "id": 86733266, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86902027, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86903517, "completed_by": { "id": 62471 }, "result": [...
100
2310.19083-80278337999fe4ea.jsonl
2310.19083-80278337999fe4ea_2
\begin{lemma}[Distributivity of Minkowski difference over convex hull] For three compact, convex, and nonempty sets $\S{1}, \S{2}, \S{3} \subset \R{n}$, we have [EQUATION] \end{lemma} \begin{proof} See Appendix. \end{proof}
[ "\\begin{lemma}[Distributivity of Minkowski difference over linear map] For an invertible matrix $M \\in \\R{n \\times n}$ and two compact, convex, and nonempty sets $\\S{1}, \\S{2} \\in \\R{n}$, we have" ]
[ { "id": 86873432, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86901563, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86903757, "completed_by": { "id": 62471 }, "result": [...
100
2310.19083-80278337999fe4ea.jsonl
2310.19083-80278337999fe4ea_3
The union over all $\steps{}$ steps, that is, [EQUATION] is an inner approximation of the time-interval backward reachable set $\BRSEA{-\tau}$ in \eqref{eq:def_BRSEA_ti} over the time interval $\tau = [t_0,\tFinal{}]$. \end{theorem} \begin{proof} See Appendix. \end{proof}
[ "\\begin{corollary} As the time step size goes to $0$, our inner approximation of the time-interval solution $\\innerBRSEA{-\\tau_k}$ converges to the inner approximation of the time-point solution $\\innerBRSEA{-t_k}$, i.e.," ]
[ { "id": 86985710, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86987068, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86988511, "completed_by": { "id": 70661 }, "result": ...
100
2310.19083-80278337999fe4ea.jsonl
2310.19083-80278337999fe4ea_4
\cref{alg:BRSEA_ti} implements \cref{thm:BRSEA_ti}, where we explicitly consider the more general case of a time interval $\tau = [t_0,\tFinal{}]$ with $t_0 > 0$: We pre-compute the particular solutions $\innerZU{t}$ and $\outerZW{t}$ until time $t_0$ in line~\ref{alg:BRSEA_ti:initZUZW} and pre-compute the polytopes $\...
[ "\\begin{corollary} As the time step size goes to $0$, our inner approximation of the time-interval solution $\\innerBRSEA{-\\tau_k}$ converges to the inner approximation of the time-point solution $\\innerBRSEA{-t_k}$, i.e.," ]
[ { "id": 86982763, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86984461, "completed_by": { "id": 126844 }, "result": [ "Réécriture" ] }, { "id": 86984586, "completed_by": { "id": 70661 }, "result...
66.666667
2310.19083-88f7da84fe68d42f.jsonl
2310.19083-88f7da84fe68d42f_1
Let us briefly consider autonomous systems $\dot{x} = f(x)$, where the backward reachable set is equal to the forward reachable set for the time-inverted dynamics $\dot{x} = -f(x)$ using the target set $\targetset{}$ as the initial set. If the target set represents an unsafe set, one can use established forward reacha...
[ "If on the other hand, the target set is a goal set, we require to compute an inner approximation: Works for linear time-invariant systems use the convex hull of extremal points \\cite{Girard2006} or simulation runs \\cite{Frehse2015CDC} for time-point solutions. \nTime-interval solutions can be inner approximated ...
[ { "id": 87251432, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 87258940, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 87267911, "completed_by": { "id": 62306 }, "result": ...
100
2310.19083-88f7da84fe68d42f.jsonl
2310.19083-88f7da84fe68d42f_2
An approach for decoupled dynamics has been presented in \cite{Chen2015CDC}. In the context of systems coupled by multi-agent interaction, the decoupled computation has been augmented by a higher-level control using mixed integer programming \cite{Chen2016CDC}. Moreover, a deep neural network has been trained to outp...
[ "Furthermore, HJ reachability has been combined with reinforcement learning, enabling the analysis of up to $18$-dimensional systems at the cost of safety guarantees \\cite{Fisac2019}." ]
[ { "id": 86971857, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86975112, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86977188, "completed_by": { "id": 126844 }, "result"...
66.666667
2310.19083-c40c44d3bd98be61.jsonl
2310.19083-c40c44d3bd98be61_1
If the target set represents an unsafe set, one utilizes the notion of \emph{minimal} reachability \cite[Sec.~4.2]{Mitchell2007HSCC}: The minimal backward reachable set contains all states that cannot avoid entering the target set regardless of the chosen control input. Consequently, all states within the backward rea...
[ "For instance, the maximal backward reachable set of a quadrotor for a target set contains all states from which the quadrotor can reach the target." ]
[ { "id": 86977631, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86981968, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86981536, "completed_by": { "id": 126844 }, "result":...
100
2310.19083-c40c44d3bd98be61.jsonl
2310.19083-c40c44d3bd98be61_2
If the target set represents a goal set, the concept of \emph{maximal} reachability \cite[Sec.~4.1]{Mitchell2007HSCC} is applicable: The maximal backward reachable set contains all states from which we can steer into the target set despite worst-case disturbances. Note that any initial state only requires to reach the...
[ "For instance, the maximal backward reachable set of a quadrotor for a target set contains all states from which the quadrotor can reach the target." ]
[ { "id": 86904795, "completed_by": { "id": 62471 }, "result": [ "Réécriture" ] }, { "id": 86971274, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86971157, "completed_by": { "id": 140156 }, "result"...
66.666667
2310.19083-c40c44d3bd98be61.jsonl
2310.19083-c40c44d3bd98be61_3
Maximal backward reachability is closely related to controller synthesis: The backward reachable set contains all states for which a controller exists such that the target set is reachable.
[ "For instance, the maximal backward reachable set of a quadrotor for a target set contains all states from which the quadrotor can reach the target." ]
[ { "id": 86961976, "completed_by": { "id": 62471 }, "result": [ "Différent", "Différent" ] }, { "id": 86973378, "completed_by": { "id": 62306 }, "result": [ "Différent", "Différent" ] }, { "id": 86973836, "completed_by": { ...
100
2310.19083-c40c44d3bd98be61.jsonl
2310.19083-c40c44d3bd98be61_4
In this article, we compute minimal and maximal backward reachable sets for continuous-time linear time-invariant (LTI) systems. As there are many similar definitions of backward reachable sets as well as related concepts, we postpone the literature review to \cref{sec:relatedwork}. This allows us to use the prelimin...
[ "For instance, the maximal backward reachable set of a quadrotor for a target set contains all states from which the quadrotor can reach the target.", "\\item An extension of the proposed maximal backward reachable set computation to respect state constraints over the entire time horizon (\\cref{ssec:BRSEA_constr...
[ { "id": 86960033, "completed_by": { "id": 62471 }, "result": [ "Différent", "Différent" ] }, { "id": 86976426, "completed_by": { "id": 62306 }, "result": [ "Différent", "Différent" ] }, { "id": 86978137, "completed_by": { ...
100
2310.19083-c40c44d3bd98be61.jsonl
2310.19083-c40c44d3bd98be61_5
\begin{itemize} \item An inner and outer approximation for the time-point minimal backward reachable set (\cref{ssec:BRSAE_tp}). \item An outer approximation of the time-interval minimal backward reachable set (\cref{ssec:outerBRSAE_ti}). \item An inner and outer approximation for the time-point maximal backward reac...
[ "For instance, the maximal backward reachable set of a quadrotor for a target set contains all states from which the quadrotor can reach the target.", "\\item An extension of the proposed maximal backward reachable set computation to respect state constraints over the entire time horizon (\\cref{ssec:BRSEA_constr...
[ { "id": 86864491, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86902003, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86903538, "completed_by": { "id": 62471 }, "result": [...
100
2310.19083-cda0fcec3a49cc6f.jsonl
2310.19083-cda0fcec3a49cc6f_1
In general, backward reachability analysis aims to compute the set of states that reach a target set $\targetset{} \subset \R{n}$ after a certain elapsed time $t$ (time-point backward reachable set) or at any time within the interval $\tau = [t_0,\tFinal{}]$ (time-interval backward reachable set). We assume the target...
[ "We are mainly interested in computing outer approximations of the minimal backward reachable sets \\eqref{eq:def_BRSAE_tp}-\\eqref{eq:def_BRSAE_ti} to enclose all states leading to unsafe system behavior." ]
[ { "id": 86735471, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86899723, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86900691, "completed_by": { "id": 62471 }, "result": [...
100
2310.19083-cda0fcec3a49cc6f.jsonl
2310.19083-cda0fcec3a49cc6f_2
Case \ding{192} in \cref{fig:BRS} illustrates the time-point set \eqref{eq:def_BRSAE_tp}: For all states within the minimal backward reachable set $\BRSAE{-t}$, such as $x_0^{(1)}$, the target set $\targetset{}$ is unavoidable regardless of the input signal $u^{(1)}(\cdot)$. For any initial state outside $\BRSAE{-t}$ ...
[ "We are mainly interested in computing outer approximations of the minimal backward reachable sets \\eqref{eq:def_BRSAE_tp}-\\eqref{eq:def_BRSAE_ti} to enclose all states leading to unsafe system behavior." ]
[ { "id": 86737479, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86899624, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86900523, "completed_by": { "id": 62471 }, "result": [...
100
2310.19083-cda0fcec3a49cc6f.jsonl
2310.19083-cda0fcec3a49cc6f_3
In the following definition of the \emph{maximal} backward reachable set, the target set represents a goal set into which we want to steer the state despite worst-case disturbances.
[ "We are mainly interested in computing outer approximations of the minimal backward reachable sets \\eqref{eq:def_BRSAE_tp}-\\eqref{eq:def_BRSAE_ti} to enclose all states leading to unsafe system behavior." ]
[ { "id": 86725993, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86731303, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86826323, "completed_by": { "id": 126844 }, "result":...
100
2310.19083-cda0fcec3a49cc6f.jsonl
2310.19083-cda0fcec3a49cc6f_4
The time-interval maximal backward reachable set \cite[Def.~3]{Chen2018ANNUREV} requires the state to pass through $\targetset{}$ anytime in the time interval $\tau$.
[ "We want to compute inner approximations of maximal backward reachable sets \\eqref{eq:def_BRSEA_tp}-\\eqref{eq:def_BRSEA_ti} such that the contained states are guaranteed to reach $\\targetset{}$ in the presence of worst-case disturbances." ]
[ { "id": 86734490, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86772256, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86826780, "completed_by": { "id": 126844 }, "result":...
100
2310.19083-dd0212a15f4f4aea.jsonl
2310.19083-dd0212a15f4f4aea_1
where we introduce the operator $\ooplus$ to distinguish this operation from the exact Minkowski sum. The runtime complexity is $\bigO{\cons{} n\gamma}$. \end{proposition} \begin{proof} See Appendix. \end{proof}
[ "We will also evaluate the $2n$ axis-aligned support functions, which increases the runtime complexity to $\\bigO{(\\cons{}+2n)n\\gamma}$." ]
[ { "id": 86962190, "completed_by": { "id": 62471 }, "result": [ "Réécriture" ] }, { "id": 86972505, "completed_by": { "id": 62306 }, "result": [ "Réécriture" ] }, { "id": 86973642, "completed_by": { "id": 126844 }, "result"...
100
2310.19083-dd0212a15f4f4aea.jsonl
2310.19083-dd0212a15f4f4aea_2
Under \cref{ass:runtimecomplexity} and following \cref{tab:setops}, the outer approximative Minkowski sum from \cref{prop:minkSum_polyzono}, the Minkowski difference, and the linear map in the computation of the outer approximation $\outerBRSAE{-t}$ are all $\bigO{n^3}$, while the computation of the inner approximation...
[ "Under \\cref{ass:runtimecomplexity}, the computation of the outer approximation $\\outerBRSAE{-t}$ is marginally dominated by the over-approximative Minkowski sum, which is $\\bigO{(\\cons{}+2n)n, since the Minkowski difference and linear map are at most $\\bigO{(\\cons{}+2n)n \\steps{}n}$ and $\\bigO{(\\cons{}+2n...
[ { "id": 86865240, "completed_by": { "id": 70661 }, "result": [ "Différent", "Différent", "Différent" ] }, { "id": 86902456, "completed_by": { "id": 62471 }, "result": [ "Différent", "Différent", "Différent" ] }, { "i...
100
2310.19083-f52570d8d336f342.jsonl
2310.19083-f52570d8d336f342_1
\emph{Proof of \cref{prop:minkSum_polyzono}}: \\ We insert $\poly{} \oplus \Z{}$ into \eqref{eq:sFset} to obtain [EQUATION] The runtime complexity follows from the $\cons{}$ support function evaluations of $\Z{}$ \eqref{eq:sF_zono}. \hfill $\square$
[ "\\emph{Proof of \\cref{prop:union}}: \\\\ The reason is the order of quantifiers \\cite[Prop.~2]{Mitchell2007}. \n\\hfill $\\square$", "\\emph{Proof of \\cref{prop:BRSAE_tp}}: \\\\ This is a continuization of the discrete-time case proven in \\cite[Thm.~2.4]{Kurzhanskiy2011}. \n\\hfill $\\square$", "Note that ...
[ { "id": 86961607, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86972330, "completed_by": { "id": 140156 }, "result": [ "Différent" ] }, { "id": 86972358, "completed_by": { "id": 62306 }, "result": ...
100
2310.19083-f52570d8d336f342.jsonl
2310.19083-f52570d8d336f342_2
\emph{Proof of \cref{thm:BRSAE_ti}}: \\ By considering only a finite subset of input signals $\inputsignalssub{} \subset \inputsignals{}$, we obtain an outer approximation:
[ "\\emph{Proof of \\cref{prop:BRSAE_tp}}: \\\\ This is a continuization of the discrete-time case proven in \\cite[Thm.~2.4]{Kurzhanskiy2011}. \n\\hfill $\\square$" ]
[ { "id": 86986486, "completed_by": { "id": 62306 }, "result": [ "Différent", "Différent" ] }, { "id": 86986967, "completed_by": { "id": 70661 }, "result": [ "Différent", "Différent" ] }, { "id": 86989231, "completed_by": { ...
100
2310.19083-f52570d8d336f342.jsonl
2310.19083-f52570d8d336f342_3
\emph{Proof of \cref{lmm:convMinkDiff}}: \\ We plug into the definitions of the Minkowski difference \eqref{eq:def_minkDiff} and convex hull \eqref{eq:def_conv}: from which it follows that [EQUATION] since $\S{} \oplus \S{3} \ominus \S{3} = \S{}$ holds \cite[Lemma~1(iii)]{Yang2022CSL}. \hfill $\square$
[ "\\emph{Proof of \\cref{prop:BRSEA_tp}}: \\\\ This is a continuization of the discrete-time case proven in \\cite[Thm.~2.4]{Kurzhanskiy2011}. \n\\hfill $\\square$", "currently not needed \\emph{Proof of \\cref{lmm:linMapMinkDiff}}: \\\\ Using the matrix inverse $M^{-1} \\in \\R{n \\times n}$, we have which is equ...
[ { "id": 86721962, "completed_by": { "id": 70661 }, "result": [ "Différent" ] }, { "id": 86730802, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86729966, "completed_by": { "id": 140156 }, "result":...
100
2310.19083-f52570d8d336f342.jsonl
2310.19083-f52570d8d336f342_4
\emph{Proof of \cref{thm:BRSEA_ti}}: \\ A single time-interval solution $\BRSEA{-\tau_k}$ over $\tau_k = [t_k,t_{k+1}]$ covering part of the union in \eqref{eq:BRSEA_tp_union} can be expressed by
[ "\\emph{Proof of \\cref{prop:BRSEA_tp}}: \\\\ This is a continuization of the discrete-time case proven in \\cite[Thm.~2.4]{Kurzhanskiy2011}. \n\\hfill $\\square$" ]
[ { "id": 86960950, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86972939, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86973924, "completed_by": { "id": 140156 }, "result": ...
100
2310.19083-f52570d8d336f342.jsonl
2310.19083-f52570d8d336f342_5
\caption{Generator matrix $G$ of the safe terminal set $\zono{\matzeros{},G}$ for the quadrotor system in \cref{ssec:terminalset} computed using the approach in \cite{Gruber2021CSL}.}
[ "\\caption{Dynamics equations with $g = 9.81$ for the form \\eqref{eq:linsys} for the quadrotor system analyzed in \\cref{ssec:terminalset}.}" ]
[ { "id": 86983269, "completed_by": { "id": 140156 }, "result": [ "Réécriture" ] }, { "id": 86985432, "completed_by": { "id": 62306 }, "result": [ "Différent" ] }, { "id": 86985624, "completed_by": { "id": 126844 }, "result"...
66.666667
2310.19084-f7e6a5f92dad7d5f.jsonl
2310.19084-f7e6a5f92dad7d5f_1
\item Higher human resemblance is significantly correlated to better language modeling. Scaling improves the resemblance by scaling law \citep{henighan_scaling_2020}, while instruction tuning reduces it. All models have higher resemblance to L2 rather than to L1, suggesting further room for the improvement in language ...
[ "\\item Instruction tuning has limited effect on the general attention distribution on plain text, but enhances the model's sensitivity to instructions. It reduces the human resemblance and increases reliance on trivial patterns. " ]
[ { "id": 86962710, "completed_by": { "id": 62471 }, "result": [ "Différent" ] }, { "id": 86971429, "completed_by": { "id": 126844 }, "result": [ "Différent" ] }, { "id": 86971487, "completed_by": { "id": 140156 }, "result":...
66.666667
2310.19084-f7e6a5f92dad7d5f.jsonl
2310.19084-f7e6a5f92dad7d5f_2
\paragraph{Instruction tuned LLMs} achieve better performances in performing tasks \citep{ouyang_training_2022,wei_finetuned_2022,sanh_multitask_2022}. It seems that the LLMs are better at understanding human instructions. However, there is little discussion on how the instruction tuning process affects the language pe...
[ "\\item Instruction tuning has limited effect on the general attention distribution on plain text, but enhances the model's sensitivity to instructions. It reduces the human resemblance and increases reliance on trivial patterns. " ]
[ { "id": 86961055, "completed_by": { "id": 70661 }, "result": [ "Différent", "Différent", "Réécriture" ] }, { "id": 86959957, "completed_by": { "id": 62471 }, "result": [ "Différent", "Différent", "Différent" ] }, { "...
66.666667
2310.19084-f7e6a5f92dad7d5f.jsonl
2310.19084-f7e6a5f92dad7d5f_3
To show the effect of scaling and instruction tuning on different models, we compare the self-attention scores of different LLMs given the same input, by viewing model attention as probability distributions and calculating the general attention divergence based on Jensen-Shannon divergence.
[ "To clarify, the focus of this work is not interpreting the model prediction by attention, but is analyzing the attention patterns with the help of human attention.", "The basic idea is to compare the self-attention of models in different scales and training stages to study the effects of the two factors on langu...