Building a polynomial commitment scheme from the FRI-IOPP

$\newcommand{\ff}[0]{\mathbb{F}} \newcommand{\fx}[0]{\mathbb{F}[X]} \newcommand{\st}[0]{\mathrm{s.t.}} \newcommand{\fxd}[0]{\mathbb{F}[X]^{< d}} \newcommand{\fxdone}[0]{\mathbb{F}[X]^{< d-1}} \newcommand{\RS}[0]{\mathrm{RS}} \newcommand{\drc}[0]{\mathrm{D}_{r, c}\,} \newcommand{\fo}[0]{\ff^\Omega} \newcommand{\eval}[0]{\mathrm{E}}$
Here we cover how to build a polynomial commitment scheme (PCS) from the FRI interactive oracle proof of proximity (FRI-IOPP). Specifically, we explain how to reduce an evaluation proof to a proof of proximity. It is assumed that the reader already understands the FRI-IOPP. Everything here was derived from reading Transparent PCS with polylogarithmic communication complexity (Vlasov & Panarin, 2019).

We assume the usual setup for FRI. Let $\ff$ denote a finite field, and write $\Omega \subset \ff$ for a subgroup of the group of units $\ff^\times$ with $|\Omega|$ a power of two. Let $\fo$ denote the vector space of functions $\Omega \to \ff$, considered as vectors with entries indexed by some fixed enumeration of $\Omega$ (we’ll call elements of $\fo$ “words”). Let $\eval: \fx \to \fo$ denote the evaluation map, i.e. $$ (\eval f)_\omega = f(\omega) \quad \forall f \in \fx \quad \forall \omega \in \Omega.$$ Write $\Delta$ for the relative Hamming distance on $\fo$, and let $\RS_k = \eval (\fx^{< k}) \subset \fo$ denote the Reed-Solomon code with degree bound $k$. Fix throughout some degree bound $1 < d < |\Omega|$. We will be concerned with two separate Reed-Solomon codes, namely $\RS_d$ and $\RS_{d-1}$ (note that both use the same evaluation points $\Omega$). Write $\delta_0$ for the unique decoding radius of $\RS_d$, i.e. $\delta_0$ is maximal such that, for any $f \in \fxd$ and $u \in \fo$, $$ \Delta(u, \eval f) \leqslant \delta_0 \implies \left( \ \forall g \in \fxd \quad \Delta(u, \eval g) \leqslant \delta_0 \implies f = g \ \right).$$

The data of a commitment in the FRI-PCS with degree bound $d$ is a word $u \in \fo$ (more precisely: an oracle to $u$). Assume for now that the prover is honest. In the simplest case, $u = \eval f$ for some $f \in \fxd$. Crucially, however, we require something weaker than this of a commitment $u$: it is sufficient that $u$ be within the unique decoding radius of some codeword $\eval f$, i.e. $$\begin{equation}\label{UDR}\tag{UDR} \exists f \in \fxd \quad \st \quad \Delta (u, \eval f) \leqslant \delta_0.\end{equation}$$ Any such $u$ is considered to be a valid commitment to its corresponding $f \in \fxd$. Why is this subtlety necessary? Because if the stricter condition $u = \eval f$ were required, then the (soon to be explained) reduction of an evaluation proof to an invocation of the FRI-IOPP wouldn’t be possible. The FRI-IOPP isn’t sufficiently sensitive to reliably distinguish between a codeword and nearby non-codewords (for an explicit demonstration, see the final section below).

Assuming that the prover has sent a commitment $u \in \fo$ to the verifier, an evaluation proof proceeds as follows. The verifier chooses an evaluation point $r \in \ff \setminus \Omega$ and sends this to the prover, and the prover responds with the purported evaluation $c$ of the polynomial committed to (in the case of an honest prover, $c = f(r)$ where $f$ is determined by \eqref{UDR}). The prover wants to establish $$\begin{equation}\label{TwoPart}\tag{TwoPart} \exists f \in \fxd \quad \st \quad \Delta(u, \eval f) \leqslant \delta_0 \ \ \text{and} \ \ f(r) = c.\end{equation}$$
Since $f(r) = c$ if and only if $f-c$ is divisible by $X-r$, and degree is additive under polynomial multiplication, this two part claim is equivalent to the following unified claim:
$$\begin{equation}\label{Unified}\tag{Unified} \exists q \in \fxdone \quad \st \quad \Delta(u, \eval ((X-r)q + c)) \leqslant \delta_0.\end{equation}$$ Thus we are interested in the proximity of the commitment $u$ to a specific subset of $\RS_d$, namely the subset consisting of codewords of the form $\eval ((X-r)q + c)$. Note, however, that we cannot immediately apply the FRI-IOPP to establish this claim, since this subset is not itself a Reed-Solomon code (it isn’t even a linear subspace of $\fo$). The good news is that, with a small amount of work, it can be seen that the claim \eqref{Unified} is equivalent to a claim that can be established using the FRI-IOPP (with degree bound $d-1$). This claim involves a vector $\drc u \in \fo$ derived from $u$, which we’ll define below. And thus the FRI-PCS evaluation proof will be established by a proximity proof!

Firstly, a note on oracles. Importantly, the verifier does not need to receive or read all of the $u \in \fo$ that commits to $f \in \fxd$. Since an evaluation proof reduces to an invocation of the FRI-IOPP, it is sufficient for the verifier to be supplied with a mechanism to query entries $u_\omega$ of $u$ at arbitrary indices $\omega \in \Omega$. In an information theoretic presentation, this mechanism is abstracted as an oracle. In implementation, the oracle is typically instantiated with a Merkle commitment to a tree whose leaves are the $u_\omega$ (enumerated in some agreed upon manner). The prover binds itself to $u$ by sending the Merkle root to the verifier, and the verifier queries an entry $u_\omega$ by asking the prover for the Merkle path from the root to the corresponding leaf.
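To make this concrete, here is a minimal sketch (plain Python, illustrative names only; a real implementation would add domain separation, serialization, etc.) of a Merkle-instantiated oracle: the prover sends only the root, and each query is answered with an authentication path.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_tree(leaves):
    """All layers of the tree; layer 0 is the hashed leaves (count must be a power of two)."""
    layer = [h(x) for x in leaves]
    layers = [layer]
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        layers.append(layer)
    return layers

def open_path(layers, index):
    """Authentication path for a leaf: one sibling hash per layer."""
    path = []
    for layer in layers[:-1]:
        path.append(layer[index ^ 1])  # sibling of the current node
        index //= 2
    return path

def verify_path(root, leaf, index, path):
    """Recompute the root from a leaf and its authentication path."""
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# The prover commits to the entries u_omega by sending only the root;
# the verifier queries an entry by index and checks the returned path.
u = [b"17", b"3", b"42", b"8"]
layers = merkle_tree(u)
root = layers[-1][0]
path = open_path(layers, 2)
assert verify_path(root, u[2], 2, path)
```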

Let’s now derive the reduction of the evaluation proof to an instance of the FRI-IOPP. Given bijections $\theta_\omega : \ff \to \ff$ for each $\omega \in \Omega$, a bijection $\theta : \fo \to \fo$ of the space of all words can be defined by $$(\theta v)_\omega = \theta_\omega (v_\omega) \quad \forall v \in \fo,\quad \forall \omega \in \Omega.$$ Call such a map $\theta$ a “component-wise bijection”; it is immediate for any such map that $$\begin{equation}\label{HDInv}\tag{HDInv} \Delta(\theta u,\ v) = \Delta(u,\ \theta^{-1} v) \quad \forall u, v \in \fo.\end{equation}$$ For any $r \in \ff \setminus \Omega$ and $c \in \ff$, define a component-wise bijection $\drc: \fo \to \fo$ by $$ (\drc u)_\omega = \frac{u_\omega - c}{\omega - r} \quad \forall u \in \fo, \quad \forall \omega \in \Omega.$$ Its inverse is given by $$ (\drc^{-1} (v))_\omega = (\omega - r) v_\omega + c \quad \forall v \in \fo, \quad \forall \omega \in \Omega,$$ from which it follows immediately that $$\begin{equation}\label{DrcInverse}\tag{DrcInverse} (\drc^{-1} \circ \eval) (q) = \eval ( (X-r) q + c) \quad \forall q \in \fx.\end{equation}$$ Combining \eqref{HDInv} and \eqref{DrcInverse}, we obtain $$\begin{equation}\label{StepDown}\tag{StepDown} \Delta(\drc u,\ \eval (q)) = \Delta(u,\ \eval((X-r)q + c)) \quad \forall u \in \fo \quad \forall q \in \fx,\end{equation}$$ and substituting this into the claim \eqref{Unified}, we obtain the equivalent claim $$\begin{equation}\label{FRIable}\tag{FRIable} \exists q \in \fxdone \quad \st \quad \Delta(\drc u, \eval q) \leqslant \delta_0,\end{equation}$$ which the prover and verifier can establish using the FRI-IOPP! \eqref{FRIable} is nothing more than the statement that $\drc u$ is $\delta_0$-close to $\RS_{d-1}$.
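The identity \eqref{DrcInverse} is easy to check numerically. The following sketch (plain Python over the toy field $\mathbb{F}_{17}$, with assumed parameters $|\Omega| = 8$, $d = 4$) verifies that applying $\drc$ to an honest commitment $u = \eval f$ with $c = f(r)$ yields exactly the evaluations of the quotient $q = (f - c)/(X - r)$:

```python
p = 17
omega = [1, 2, 4, 8, 9, 13, 15, 16]     # the order-8 subgroup of squares in F_17^x
d = 4                                    # degree bound

def ev(poly, x):
    """Evaluate a polynomial (coefficients, low degree first) at x, mod p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(poly)) % p

def divide_by_linear(poly, r):
    """Synthetic division: poly = (X - r) * q + poly(r); returns q."""
    q = [0] * (len(poly) - 1)
    carry = 0
    for i in reversed(range(1, len(poly))):
        carry = (poly[i] + r * carry) % p
        q[i - 1] = carry
    return q

f = [5, 3, 0, 7]                         # a committed f in F[X]^{<d}
r = 3                                    # evaluation point, not in Omega
c = ev(f, r)                             # honest claimed evaluation

u = [ev(f, w) for w in omega]            # the commitment word u = E(f)
drc_u = [((uw - c) * pow(w - r, p - 2, p)) % p for uw, w in zip(u, omega)]
q = divide_by_linear(f, r)               # q = (f - c) / (X - r), deg < d - 1
assert drc_u == [ev(q, w) for w in omega]    # (DrcInverse) holds exactly
```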

It is instructive to now consider the two possible cheating strategies for the prover in this reduction to the FRI-IOPP. The first is where the first claim of \eqref{TwoPart} doesn’t hold, i.e. $u$ is not within the unique decoding radius of $\eval f$ for any $f \in \fxd$. Put differently, this is just saying that $\Delta(u,\ \RS_d) > \delta_0$, i.e. “$u$ is $\delta_0$-far from the code $\RS_d$”. Thus, by \eqref{StepDown}, for any $q \in \fxdone$, $$ \Delta(\drc u,\ \eval q) = \Delta(u,\ \eval ((X-r)q + c)) \geqslant \Delta(u,\ \RS_d) > \delta_0,$$ and so claim \eqref{FRIable} is false, and the prover will be caught with the soundness probability of the FRI-IOPP. The only other cheating strategy for the prover is where $u$ is within the unique decoding radius of $\eval f$ for some $f \in \fxd$, but the claimed evaluation is false, i.e. $f(r) \ne c$. Then for any $q \in \fxdone$, we have $f \ne (X-r)q + c$ (since otherwise the evaluations at $r$ would match), and so $\eval f$ and $\eval ((X-r)q + c)$ are distinct codewords. Thus, by \eqref{StepDown}, $$\Delta(\drc u,\ \eval q) = \Delta(u,\ \eval ((X-r) q + c)) > \delta_0,$$ since $u$ can be within $\delta_0$ (the unique decoding radius) of at most one codeword (which is $\eval f$).
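The second cheating strategy is small enough to check by brute force. In the sketch below (plain Python, toy parameters over $\mathbb{F}_{17}$ chosen for illustration), the prover commits honestly to $u = \eval f$ but lies about the evaluation; exhaustive search over all $q$ confirms that $\drc u$ is then far from every codeword of $\RS_{d-1}$, with distance at least the minimum distance $n - d + 1$ of $\RS_d$:

```python
from itertools import product

p, d = 17, 4
omega = [1, 2, 4, 8, 9, 13, 15, 16]      # the order-8 subgroup of squares in F_17^x
n = len(omega)

def ev(poly, x):
    """Evaluate a polynomial (coefficients, low degree first) at x, mod p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(poly)) % p

f = [5, 3, 0, 7]                          # honestly committed: u = E(f) exactly
r = 3                                     # evaluation point, not in Omega
c_false = (ev(f, r) + 1) % p              # a lying claimed evaluation

u = [ev(f, w) for w in omega]
drc_u = [((uw - c_false) * pow(w - r, p - 2, p)) % p for uw, w in zip(u, omega)]

# minimum Hamming distance from D_{r,c} u to RS_{d-1}, by exhaustive search
best = min(sum(du != ev(q, w) for du, w in zip(drc_u, omega))
           for q in product(range(p), repeat=d - 1))
assert best >= n - d + 1                  # at least the minimum distance of RS_d
```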

From equivalences of Reed-Solomon codes to algebraic geometry codes

Reed-Solomon codes

Let $\mathbb{F}$ be a finite field, and write $\mathbb{F}[X]^{< n}$ for the polynomials of degree less than $n$ over $\mathbb{F}$. For any $\omega \in \mathbb{F}^n$, write $$ \epsilon_\omega: \mathbb{F}[X]^{< n} \to \mathbb{F}^n \qquad f \mapsto (f(\omega_i))_{i=1, \dots, n} \quad \forall f$$ for the evaluation map. For any such choice of $n, \omega$ and of $0 < k \leqslant n$, let $$ RS_{n,k}[\omega] := \epsilon_\omega (\mathbb{F}[X]^{< k})$$ denote the corresponding Reed-Solomon code. We’ll require that the entries of $\omega$ are distinct, so that $RS_{n,k}[\omega]$ is a $k$-dimensional subspace of $\mathbb{F}^n$ and hence is an $(n,k)$ linear code. The ambient space $\mathbb{F}^n$ is considered to be equipped with a fixed basis, and comparison of the coordinates of two vectors with respect to this basis defines the Hamming distance.

Equivalences of linear codes

Recall that two $(n,k)$ linear codes are equivalent if there exists a linear isomorphism $\mathbb{F}^n \to \mathbb{F}^n$ that preserves the Hamming distance on the ambient space and maps the codes to one another. Expressed with respect to the fixed bases, an equivalence of codes is a matrix with precisely one non-zero entry in each row and column.

Equivalences of Reed-Solomon codes

The construction of a Reed-Solomon code $RS_{n,k}[\omega]$ depends on the choice of evaluation points $\omega$ – but to what extent does it really matter which $\omega$ is chosen? It is easy to see that it doesn’t always matter. For instance, permuting the entries of $\omega$ gives an equivalent code. A more interesting example: if $\lambda \in \mathbb{F}^\times$, then $RS_{n,k}[\lambda \omega] = RS_{n,k}[\omega]$, i.e. the two codes coincide. To see this, just notice that any polynomial $f(X)$ has the same degree as $f(\lambda^{-1}X)$, and that $\epsilon_\omega (f(X)) = \epsilon_{\lambda \omega} (f(\lambda^{-1}X))$. This example generalizes immediately to invertible affine transformations of $\mathbb{F}$ applied coordinate-wise to $\omega$.
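This invariance is small enough to confirm by exhaustive enumeration. A quick sketch in plain Python (toy parameters over $\mathbb{F}_5$; the helper name `code` is ours):

```python
from itertools import product

p, k = 5, 2
w = [0, 1, 2, 3]                 # distinct evaluation points in F_5
lam = 3                          # a unit of F_5

def code(points):
    """All codewords: evaluations at the given points of every poly of degree < k."""
    words = set()
    for coeffs in product(range(p), repeat=k):
        words.add(tuple(sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
                        for x in points))
    return words

# scaling the evaluation points by a unit leaves the code unchanged
assert code(w) == code([(lam * x) % p for x in w])
```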

Non-equivalent $(n, k)$-Reed-Solomon codes

There are choices of $\omega$ that lead to non-equivalent $RS_{n,k}[\omega]$, but this seems difficult to verify manually. A computational approach is to use Sage to compute the automorphism groups (i.e. the groups of self-equivalences) of $RS_{n,k}[\omega]$ for various $\omega$. Equivalent codes must have isomorphic automorphism groups, so it is sufficient to find two choices of $\omega$ such that the automorphism groups of the associated codes $RS_{n,k}[\omega]$ have distinct orders. For example:

from sage.all import *
from sage.coding.grs_code import GeneralizedReedSolomonCode

K = GF(7)
# two (4, 2) Reed-Solomon codes over F_7, differing only in the last evaluation point
C1 = GeneralizedReedSolomonCode(vector(K, [0, 1, 2, 3]), 2)
C2 = GeneralizedReedSolomonCode(vector(K, [0, 1, 2, 4]), 2)
# automorphism_group_gens() returns a pair (generators, order of the group)
C1size = C1.automorphism_group_gens()[1]
C2size = C2.automorphism_group_gens()[1]
C1size, C2size  # outputs (48, 72)

Hence the $\mathbb{F}_7$-linear codes $RS_{4, 2}[(0, 1, 2, 3)]$ and $RS_{4, 2}[(0, 1, 2, 4)]$ are not equivalent.

Generalized Reed-Solomon codes

In order to find further examples of equivalences of Reed-Solomon codes, it turns out to be helpful to describe what at first appears to be a more general construction, but is in fact (up to equivalence of codes) simply a re-parameterization of the same objects. For $\omega \in \mathbb{F}^n$ with distinct entries and $\alpha \in (\mathbb{F}^\times)^n$, let $$\epsilon_{\omega, \alpha}: \mathbb{F}[X]^{<n} \to \mathbb{F}^n \quad f \mapsto (\alpha_i f(\omega_i))_{i=1, \dots, n} \quad \forall f.$$ Then $\epsilon_{\omega, \alpha}$ is linear, and the image of $\mathbb{F}[X]^{<k}$ under this map is the generalized Reed-Solomon code $GRS_{n,k}[\omega, \alpha]$. Notice that (trivially) a Reed-Solomon code is equivalent to every generalized Reed-Solomon code with the same evaluation points, i.e. $$\begin{equation}\label{Equiv}\tag{Equiv}RS_{n,k}[\omega] \cong GRS_{n,k}[\omega, \alpha] \quad \forall \alpha \in (\mathbb{F}^\times)^n.\end{equation}$$So the GRS codes are just a more redundant parameterization of the RS codes – nothing new is gained! Indeed, “generalized Reed-Solomon code” is a misleading name. Yet the GRS codes are more convenient when it comes to constructing equivalences: in order to demonstrate $RS_{n,k}[\omega] \cong RS_{n,k}[\omega’]$, it suffices to find $\alpha, \alpha’$ such that $GRS_{n,k}[\omega, \alpha] = GRS_{n,k}[\omega’, \alpha’]$, and this is often easier.

For example, it turns out to be straightforward to show the following (proof below) $$\begin{equation}\tag{Invert}\label{Invert}\forall \omega \in (\mathbb{F}^\times)^n \ \ \forall \alpha \in (\mathbb{F}^\times)^n \ \ \exists \beta \in ( \mathbb{F}^\times)^n \ \ : \ \ GRS_{n, k}[\omega, \alpha] = GRS_{n,k}[\omega^{-1}, \beta].\end{equation}$$ Indeed, $\beta_i = \alpha_i \omega_i^{k-1}$ works. Setting $\alpha_i = 1 \ \forall i$ and invoking \eqref{Equiv} twice, this allows us to conclude that $$ RS[\omega] \cong GRS[\omega, \alpha] = GRS[\omega^{-1}, \beta] \cong RS[\omega^{-1}].$$This is a pretty neat result!
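A brute-force check of \eqref{Invert} in plain Python (toy parameters over $\mathbb{F}_7$; the multiplier used is $\beta_i = \alpha_i \omega_i^{k-1}$, the one produced by the coefficient-reversal argument):

```python
from itertools import product

p, k = 7, 2
omega = [1, 2, 3]                                # distinct nonzero points in F_7
alpha = [1, 4, 2]                                # arbitrary column multipliers

def grs(points, mult):
    """All codewords (mult_i * f(points_i)) with deg f < k."""
    words = set()
    for coeffs in product(range(p), repeat=k):
        words.add(tuple((m * sum(c * pow(x, i, p) for i, c in enumerate(coeffs))) % p
                        for m, x in zip(mult, points)))
    return words

inv_omega = [pow(x, p - 2, p) for x in omega]    # entry-wise inverses
beta = [(a * pow(x, k - 1, p)) % p for a, x in zip(alpha, omega)]
assert grs(omega, alpha) == grs(inv_omega, beta)     # (Invert), with equality
```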

Here is a proof of \eqref{Invert}: writing $f = \sum_{j < k} c_j X^j$, we have $f(\omega_i) = \omega_i^{k-1} \sum_{j < k} c_j (\omega_i^{-1})^{k-1-j} = \omega_i^{k-1} g(\omega_i^{-1})$, where $g = \sum_{j < k} c_{k-1-j} X^j$ is the polynomial with the coefficients of $f$ in reverse order. Hence $\alpha_i f(\omega_i) = \beta_i g(\omega_i^{-1})$ with $\beta_i = \alpha_i \omega_i^{k-1}$, and as $f$ ranges over $\mathbb{F}[X]^{< k}$, so does $g$. While very simple, the proof is peculiarly “linguistic” in that it involves the re-ordering of some coefficients. We’ll explore this next.

(For more on the generalized Reed-Solomon codes and these equivalences, see these lecture notes).

Reed-Solomon codes as algebraic geometry codes

We’ve demonstrated that, if $\omega$ and $\omega’$ can be related by a composition of entry-wise affine transformations and inversions, then $RS_{n,k}[\omega] \cong RS_{n,k}[\omega’]$. Being able to build code equivalences from two apparently distinct sorts of transformations of $\mathbb{F}$ invites us to look for a more general context in which they are two instances of the same thing. A hint as to what that might be (if another was needed) is contained in the above proof of \eqref{Invert}: in what context does reversing the order of the coefficients of a polynomial become meaningful? This transformation would be more naturally described as interchanging the indeterminates of bivariate polynomials of homogeneous degree $k-1$. Indeed, the needed generalization uses the projective line $\text{P}^1(\mathbb{F})$ in place of $\mathbb{F}$ as the source of the evaluation points and uses “appropriate” functions on $\text{P}^1(\mathbb{F})$ in place of $\mathbb{F}[X]^{< k}$. The natural notion of function on $\text{P}^1(\mathbb{F})$ is the “rational function”: a ratio of homogeneous bivariate polynomials in which the numerator and denominator have the same degree. In this new context, the polynomials of degree $< k$ become those rational functions that only have poles at the point at infinity $(1:0)$, and then only of order $< k$. Importantly, these functions form a vector space, and the evaluation of any such function at an affine point $(\omega_i:1)$ is (since any pole could only be at infinity) an element of the field $\mathbb{F}$.

The first sort of equivalence of RS codes constructed above resulted from an affine transformation of $\mathbb{F}$. Just as invertible affine transformations transform the 1-dimensional affine space $\mathbb{F}$, homographies (i.e. isomorphisms of projective space) transform the projective line $\text{P}^1(\mathbb{F})$. A homography of the projective line has the form
$$ (X:Y) \mapsto (aX + bY: cX + dY) \quad \forall (X:Y) \in \text{P}^1(\mathbb{F}),$$
for some invertible 2×2 matrix $\begin{pmatrix}a & b\\ c & d\end{pmatrix}$. Let’s consider some special cases. Setting $c=0, d=1$, this is (an extension of) an invertible affine transformation of $\mathbb{F}$. On the other hand, setting $a=d=0$, $b=c=1$, we have an extension of the map that inverts non-zero field elements (and induces an interchanging of the indeterminates of bivariate homogeneous polynomials!). So the homographies are sufficiently general to include the two sorts of transformations of $\mathbb{F}$ used above to construct our equivalences of RS codes.

Thus we might define a “projective RS code” by beginning with a sequence of $n$ distinct points $P_i \in \text{P}^1(\mathbb{F})$ and a constraint on the maximum order of poles at every point on $\text{P}^1(\mathbb{F})$ (and such that no poles are permitted at any $P_i$). Call this constraint $D$ and write $L(D)$ for the linear space of all rational functions that satisfy the constraint $D$. Since no poles are permitted at the $P_i$, the evaluation map $\epsilon : L(D) \to \mathbb{F}^n$ is a well-defined linear map. Indeed, given a further technical assumption on the constraint, $\epsilon$ is injective, and we’ve constructed an $(n,k)$ code where $k=\text{dim}(L(D))$. Clearly it includes the familiar RS codes, and indeed it can be shown (see e.g. Walker Theorem 6.4 & Exercise 6.6) that all these codes are MDS codes. The natural generalization of the code equivalences constructed above are constructed from homographies of $\text{P}^1(\mathbb{F})$. The homographies transform the evaluation points as well as the functions being evaluated (along with their constraint). The advantage of this viewpoint is that now there is nothing to prove: the linear code is defined using structures in projective space, and since homographies are the structure preserving maps of projective space, it is vacuously true that homographies induce equivalences of codes.

Instead of defining such codes, we go just one step further, and consider an arbitrary non-singular projective plane curve $\mathcal{C} \subset \text{P}^2(\mathbb{F})$ in place of the projective line. There are $n$ distinct evaluation points $P_i \in \mathcal{C}$, and the “constraint” on the order of the poles takes the form of a divisor $D$ on $\mathcal{C}$ whose support does not contain the $P_i$. The space of functions $L(D)$ is the Riemann-Roch space: the rational functions on the curve associated to the divisor $D$. The dimension $k$ of the resulting algebraic geometry code $AG[\mathcal{C}, (P_1, \dots, P_n), D]$ is determined by the degree of the divisor and the genus of $\mathcal{C}$.

Further reading

These lecture notes are an excellent elementary introduction to equivalences of GRS codes. See Walker’s “Codes & Curves” for a speedy introduction to algebraic geometry codes, and read The automorphism groups of Reed-Solomon codes (Dür, 1987) for a description of the automorphism group of Reed-Solomon codes in terms of fractional linear transformations (i.e. homographies of the projective line).

FRI is a proof of proximity, not a low-degree test

(The following thought experiment was suggested to me by my colleague Ryan Cao; mistakes and invective are my own).

FRI is often described as a “low degree test”, which suggests that the verifier should reject with high probability if the degree is high. This is not the case, as the simple example below demonstrates. Indeed it is the polynomials of highest degree that have the highest chance of being erroneously accepted by the verifier. (What is true is that the FRI verifier rejects with high probability if the evaluations are far from the code: FRI is a proof of proximity).

Let $\mathbb{F}$ be a field, $\Omega \subset \mathbb{F}$ with $n = |\Omega|$, and write
$$ \epsilon_\Omega : \mathbb{F}[x]^{<n} \to \mathbb{F}^\Omega, \qquad f \mapsto (f(\omega))_{\omega \in \Omega} $$ for the evaluation map. For any $k \leq n$, let $\text{RS}[\Omega, k] := \epsilon_\Omega (\mathbb{F}[x]^{< k})$ be the Reed-Solomon code of evaluations of polynomials of degree strictly less than $k$ on $\Omega$.

Henceforth we assume that $\Omega$ is a subgroup of the multiplicative group $\mathbb{F}^\times$ and that $n = |\Omega|$ is a power of two. The degree bound $k=2^d$ will also be a power of two. For $\alpha \in \mathbb{F}$, let $\Phi_\alpha : \mathbb{F}[x] \to \mathbb{F}[x]$ be the FRI folding operation in the coefficient domain, i.e. $\Phi_\alpha(f) = f_e + \alpha f_o$ where $f(x) = f_e(x^2) + x f_o(x^2)$, and write $\Psi_\alpha = \epsilon_{\Omega^2} \circ \Phi_\alpha \circ \epsilon_\Omega^{-1}$ for the corresponding operation in the evaluation domain. Then it’s not hard to check that for any $v \in \mathbb{F}^\Omega$ and $\alpha \in \mathbb{F}$, we have $$\begin{equation}\textstyle \tag{Fold}\label{Fold} (\Psi_\alpha(v))_{\omega^2} = \frac{1}{2} (1 + \frac{\alpha}{\omega}) v_{\omega} + \frac{1}{2} (1 - \frac{\alpha}{\omega}) v_{-\omega} \quad \forall \omega \in \Omega.\end{equation}$$
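The formula \eqref{Fold} can be checked numerically. A sketch in plain Python (toy parameters: $\mathbb{F}_{17}$ with $|\Omega| = 8$) comparing a coefficient-domain fold against the evaluation-domain formula:

```python
p = 17
g = 9                                          # generates a subgroup of order 8
omega = [pow(g, i, p) for i in range(8)]       # Omega
omega2 = sorted({(w * w) % p for w in omega})  # Omega^2
assert len(omega2) == 4

def ev(poly, x):
    """Evaluate a polynomial (coefficients, low degree first) at x, mod p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(poly)) % p

f = [3, 1, 4, 1, 5, 9, 2, 6]                   # deg f < 8
alpha = 5                                      # folding challenge

# coefficient-domain fold: f(x) = f_e(x^2) + x f_o(x^2)  |->  f_e + alpha * f_o
folded = [(f[2 * i] + alpha * f[2 * i + 1]) % p for i in range(4)]

inv2 = pow(2, p - 2, p)
v = {w: ev(f, w) for w in omega}               # evaluations of f on Omega
for w in omega:
    inv_w = pow(w, p - 2, p)
    rhs = (inv2 * ((1 + alpha * inv_w) * v[w]
                   + (1 - alpha * inv_w) * v[(-w) % p])) % p
    assert ev(folded, (w * w) % p) == rhs      # the (Fold) formula
```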

FRI operates by iterated folding, using challenges provided by the verifier. The claim of the proximity of $v^{(0)} := v \in \mathbb{F}^\Omega$ to the code $\text{RS}[\Omega, 2^d]$ is reduced to the claim of the proximity of $v^{(1)} := \Psi_\alpha v^{(0)}$ to the code $\text{RS}[\Omega^2, 2^{d-1}]$ for some verifier-chosen $\alpha \in \mathbb{F}$. This process is repeated until the claim is that $v^{(d)}$ is close to the code $\text{RS}[\Omega^{2^d}, 1]$, at which point the verifier queries $t$ entries from $v^{(d)}$ and checks that they all agree: if they do, it concludes that $v^{(d)}$ is (w.h.p.) close to its code, and hence that the original vector $v^{(0)}$ is (w.h.p.) close to the original code. Proving that FRI is indeed a good test of proximity is complicated. It is easy to show, however, that FRI is not (and was never intended to be) a reliable means of detecting polynomials of high degree, and that is what we’ll do here.

For any $\omega \in \Omega$, let $L_{\omega, \Omega}$ denote the Lagrange interpolation polynomial, i.e. $$ L_{\omega, \Omega} = \prod_{\substack{\omega' \in \Omega \\ \omega' \neq \omega}} \frac{x - \omega'}{\omega - \omega'} \in \mathbb{F}[x].$$ Then $\deg L_{\omega, \Omega} = n - 1$, i.e. it has maximal degree (given the size of the evaluation domain $\Omega$). Its evaluations, however, are “one-hot” at $\omega$ $$ v_{\omega, \Omega} := \epsilon_\Omega (L_{\omega, \Omega}) = (0, \dots, 0, 1, 0, \dots, 0) $$ and in this sense are maximally close to the code, while not being in the code (having Hamming distance $1$ from the zero codeword). It’s easy to see from \eqref{Fold} that $$ \Psi_\alpha(v_{\omega, \Omega}) = \frac{1}{2} (1 + \frac{\alpha}{\omega}) v_{\omega^2, \Omega^2} $$ i.e. that except with negligible probability (when $\alpha=-\omega$), the folding will also have Hamming distance 1 from its code $\text{RS}[\Omega^2, 2^{d-1}]$. Note, however, that the relative Hamming distance has doubled, since the size of the evaluation domain has halved.

So let’s set $v^{(0)} := v_{\omega, \Omega}$. Then after $d$ rounds of folding, $v^{(d)}$ is (except with negligible probability) non-zero in exactly one place, and its relative distance from the code $\text{RS}[\Omega^{2^d}, 1]$ (which consists of the constant vectors) is $2^d / n$, i.e. the rate $\rho$ of the original code. The verifier thus accepts with probability $(1 - \rho)^t$. Given that $\rho$ is typically close to zero (e.g. $2^{22-64}$) and $t$ is always small, this probability is close to $1$. For this reason, it is misleading (if commonplace) to describe FRI as a “low-degree test”: the polynomials $L_{\omega, \Omega}$, which have maximal degree, are routinely accepted by the FRI verifier. Why? Because the evaluations of these polynomials are close (“proximal”) to the code. FRI is a proof of proximity, after all.
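The whole thought experiment is small enough to simulate. The following plain-Python sketch (toy parameters: $\mathbb{F}_{17}$, $|\Omega| = 8$, two rounds of folding with fixed challenges) confirms that the one-hot word keeps absolute Hamming weight $1$ all the way down:

```python
p = 17
g = 9                                           # generates a subgroup of order 8
omega = [pow(g, i, p) for i in range(8)]
inv2 = pow(2, p - 2, p)

def fold(v, alpha):
    """One folding step in the evaluation domain, via the (Fold) formula."""
    out = {}
    for w, vw in v.items():
        inv_w = pow(w, p - 2, p)
        out[(w * w) % p] = (inv2 * ((1 + alpha * inv_w) * vw
                                    + (1 - alpha * inv_w) * v[(-w) % p])) % p
    return out

v = {w: (1 if w == omega[3] else 0) for w in omega}   # one-hot word v_{omega_3, Omega}
for alpha in [5, 7]:                                  # |domain|: 8 -> 4 -> 2
    v = fold(v, alpha)
    assert sum(x != 0 for x in v.values()) == 1       # still one-hot after each round
# The final word differs from a constant word in one of its 2 positions, so a
# single verifier query sees the discrepancy with probability only rho = 1/2 here.
assert len(v) == 2
```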