$ \newcommand{\undefined}{} \newcommand{\hfill}{} \newcommand{\qedhere}{\square} \newcommand{\qed}{\square} \newcommand{\ensuremath}[1]{#1} \newcommand{\bit}{\{0,1\}} \newcommand{\Bit}{\{-1,1\}} \newcommand{\Stab}{\mathbf{Stab}} \newcommand{\NS}{\mathbf{NS}} \newcommand{\ba}{\mathbf{a}} \newcommand{\bc}{\mathbf{c}} \newcommand{\bd}{\mathbf{d}} \newcommand{\be}{\mathbf{e}} \newcommand{\bh}{\mathbf{h}} \newcommand{\br}{\mathbf{r}} \newcommand{\bs}{\mathbf{s}} \newcommand{\bx}{\mathbf{x}} \newcommand{\by}{\mathbf{y}} \newcommand{\bz}{\mathbf{z}} \newcommand{\Var}{\mathbf{Var}} \newcommand{\dist}{\text{dist}} \newcommand{\norm}[1]{\\|#1\\|} \newcommand{\etal} \newcommand{\ie} \newcommand{\eg} \newcommand{\cf} \newcommand{\rank}{\text{rank}} \newcommand{\tr}{\text{tr}} \newcommand{\mor}{\text{Mor}} \newcommand{\hom}{\text{Hom}} \newcommand{\id}{\text{id}} \newcommand{\obj}{\text{obj}} \newcommand{\pr}{\text{pr}} \newcommand{\ker}{\text{ker}} \newcommand{\coker}{\text{coker}} \newcommand{\im}{\text{im}} \newcommand{\vol}{\text{vol}} \newcommand{\disc}{\text{disc}} \newcommand{\bbA}{\mathbb A} \newcommand{\bbB}{\mathbb B} \newcommand{\bbC}{\mathbb C} \newcommand{\bbD}{\mathbb D} \newcommand{\bbE}{\mathbb E} \newcommand{\bbF}{\mathbb F} \newcommand{\bbG}{\mathbb G} \newcommand{\bbH}{\mathbb H} \newcommand{\bbI}{\mathbb I} \newcommand{\bbJ}{\mathbb J} \newcommand{\bbK}{\mathbb K} \newcommand{\bbL}{\mathbb L} \newcommand{\bbM}{\mathbb M} \newcommand{\bbN}{\mathbb N} \newcommand{\bbO}{\mathbb O} \newcommand{\bbP}{\mathbb P} \newcommand{\bbQ}{\mathbb Q} \newcommand{\bbR}{\mathbb R} \newcommand{\bbS}{\mathbb S} \newcommand{\bbT}{\mathbb T} \newcommand{\bbU}{\mathbb U} \newcommand{\bbV}{\mathbb V} \newcommand{\bbW}{\mathbb W} \newcommand{\bbX}{\mathbb X} \newcommand{\bbY}{\mathbb Y} \newcommand{\bbZ}{\mathbb Z} \newcommand{\sA}{\mathscr A} \newcommand{\sB}{\mathscr B} \newcommand{\sC}{\mathscr C} \newcommand{\sD}{\mathscr D} \newcommand{\sE}{\mathscr E} \newcommand{\sF}{\mathscr F} \newcommand{\sG}{\mathscr G} \newcommand{\sH}{\mathscr H} \newcommand{\sI}{\mathscr I} \newcommand{\sJ}{\mathscr J} \newcommand{\sK}{\mathscr K} \newcommand{\sL}{\mathscr L} \newcommand{\sM}{\mathscr M} \newcommand{\sN}{\mathscr N} \newcommand{\sO}{\mathscr O} \newcommand{\sP}{\mathscr P} \newcommand{\sQ}{\mathscr Q} \newcommand{\sR}{\mathscr R} \newcommand{\sS}{\mathscr S} \newcommand{\sT}{\mathscr T} \newcommand{\sU}{\mathscr U} \newcommand{\sV}{\mathscr V} \newcommand{\sW}{\mathscr W} \newcommand{\sX}{\mathscr X} \newcommand{\sY}{\mathscr Y} \newcommand{\sZ}{\mathscr Z} \newcommand{\sfA}{\mathsf A} \newcommand{\sfB}{\mathsf B} \newcommand{\sfC}{\mathsf C} \newcommand{\sfD}{\mathsf D} \newcommand{\sfE}{\mathsf E} \newcommand{\sfF}{\mathsf F} \newcommand{\sfG}{\mathsf G} \newcommand{\sfH}{\mathsf H} \newcommand{\sfI}{\mathsf I} \newcommand{\sfJ}{\mathsf J} \newcommand{\sfK}{\mathsf K} \newcommand{\sfL}{\mathsf L} \newcommand{\sfM}{\mathsf M} \newcommand{\sfN}{\mathsf N} \newcommand{\sfO}{\mathsf O} \newcommand{\sfP}{\mathsf P} \newcommand{\sfQ}{\mathsf Q} \newcommand{\sfR}{\mathsf R} \newcommand{\sfS}{\mathsf S} \newcommand{\sfT}{\mathsf T} \newcommand{\sfU}{\mathsf U} \newcommand{\sfV}{\mathsf V} \newcommand{\sfW}{\mathsf W} \newcommand{\sfX}{\mathsf X} \newcommand{\sfY}{\mathsf Y} \newcommand{\sfZ}{\mathsf Z} \newcommand{\cA}{\mathcal A} \newcommand{\cB}{\mathcal B} \newcommand{\cC}{\mathcal C} \newcommand{\cD}{\mathcal D} \newcommand{\cE}{\mathcal E} \newcommand{\cF}{\mathcal F} \newcommand{\cG}{\mathcal G} 
\newcommand{\cH}{\mathcal H} \newcommand{\cI}{\mathcal I} \newcommand{\cJ}{\mathcal J} \newcommand{\cK}{\mathcal K} \newcommand{\cL}{\mathcal L} \newcommand{\cM}{\mathcal M} \newcommand{\cN}{\mathcal N} \newcommand{\cO}{\mathcal O} \newcommand{\cP}{\mathcal P} \newcommand{\cQ}{\mathcal Q} \newcommand{\cR}{\mathcal R} \newcommand{\cS}{\mathcal S} \newcommand{\cT}{\mathcal T} \newcommand{\cU}{\mathcal U} \newcommand{\cV}{\mathcal V} \newcommand{\cW}{\mathcal W} \newcommand{\cX}{\mathcal X} \newcommand{\cY}{\mathcal Y} \newcommand{\cZ}{\mathcal Z} \newcommand{\bfA}{\mathbf A} \newcommand{\bfB}{\mathbf B} \newcommand{\bfC}{\mathbf C} \newcommand{\bfD}{\mathbf D} \newcommand{\bfE}{\mathbf E} \newcommand{\bfF}{\mathbf F} \newcommand{\bfG}{\mathbf G} \newcommand{\bfH}{\mathbf H} \newcommand{\bfI}{\mathbf I} \newcommand{\bfJ}{\mathbf J} \newcommand{\bfK}{\mathbf K} \newcommand{\bfL}{\mathbf L} \newcommand{\bfM}{\mathbf M} \newcommand{\bfN}{\mathbf N} \newcommand{\bfO}{\mathbf O} \newcommand{\bfP}{\mathbf P} \newcommand{\bfQ}{\mathbf Q} \newcommand{\bfR}{\mathbf R} \newcommand{\bfS}{\mathbf S} \newcommand{\bfT}{\mathbf T} \newcommand{\bfU}{\mathbf U} \newcommand{\bfV}{\mathbf V} \newcommand{\bfW}{\mathbf W} \newcommand{\bfX}{\mathbf X} \newcommand{\bfY}{\mathbf Y} \newcommand{\bfZ}{\mathbf Z} \newcommand{\rmA}{\mathrm A} \newcommand{\rmB}{\mathrm B} \newcommand{\rmC}{\mathrm C} \newcommand{\rmD}{\mathrm D} \newcommand{\rmE}{\mathrm E} \newcommand{\rmF}{\mathrm F} \newcommand{\rmG}{\mathrm G} \newcommand{\rmH}{\mathrm H} \newcommand{\rmI}{\mathrm I} \newcommand{\rmJ}{\mathrm J} \newcommand{\rmK}{\mathrm K} \newcommand{\rmL}{\mathrm L} \newcommand{\rmM}{\mathrm M} \newcommand{\rmN}{\mathrm N} \newcommand{\rmO}{\mathrm O} \newcommand{\rmP}{\mathrm P} \newcommand{\rmQ}{\mathrm Q} \newcommand{\rmR}{\mathrm R} \newcommand{\rmS}{\mathrm S} \newcommand{\rmT}{\mathrm T} \newcommand{\rmU}{\mathrm U} \newcommand{\rmV}{\mathrm V} \newcommand{\rmW}{\mathrm W} \newcommand{\rmX}{\mathrm X} \newcommand{\rmY}{\mathrm Y} \newcommand{\rmZ}{\mathrm Z} \newcommand{\bb}{\mathbf{b}} \newcommand{\bv}{\mathbf{v}} \newcommand{\bw}{\mathbf{w}} \newcommand{\bx}{\mathbf{x}} \newcommand{\by}{\mathbf{y}} \newcommand{\bz}{\mathbf{z}} \newcommand{\paren}[1]{( #1 )} \newcommand{\Paren}[1]{\left( #1 \right)} \newcommand{\bigparen}[1]{\bigl( #1 \bigr)} \newcommand{\Bigparen}[1]{\Bigl( #1 \Bigr)} \newcommand{\biggparen}[1]{\biggl( #1 \biggr)} \newcommand{\Biggparen}[1]{\Biggl( #1 \Biggr)} \newcommand{\abs}[1]{\lvert #1 \rvert} \newcommand{\Abs}[1]{\left\lvert #1 \right\rvert} \newcommand{\bigabs}[1]{\bigl\lvert #1 \bigr\rvert} \newcommand{\Bigabs}[1]{\Bigl\lvert #1 \Bigr\rvert} \newcommand{\biggabs}[1]{\biggl\lvert #1 \biggr\rvert} \newcommand{\Biggabs}[1]{\Biggl\lvert #1 \Biggr\rvert} \newcommand{\card}[1]{\left| #1 \right|} \newcommand{\Card}[1]{\left\lvert #1 \right\rvert} \newcommand{\bigcard}[1]{\bigl\lvert #1 \bigr\rvert} \newcommand{\Bigcard}[1]{\Bigl\lvert #1 \Bigr\rvert} \newcommand{\biggcard}[1]{\biggl\lvert #1 \biggr\rvert} \newcommand{\Biggcard}[1]{\Biggl\lvert #1 \Biggr\rvert} \newcommand{\norm}[1]{\lVert #1 \rVert} \newcommand{\Norm}[1]{\left\lVert #1 \right\rVert} \newcommand{\bignorm}[1]{\bigl\lVert #1 \bigr\rVert} \newcommand{\Bignorm}[1]{\Bigl\lVert #1 \Bigr\rVert} \newcommand{\biggnorm}[1]{\biggl\lVert #1 \biggr\rVert} \newcommand{\Biggnorm}[1]{\Biggl\lVert #1 \Biggr\rVert} \newcommand{\iprod}[1]{\langle #1 \rangle} \newcommand{\Iprod}[1]{\left\langle #1 \right\rangle} \newcommand{\bigiprod}[1]{\bigl\langle #1 
\bigr\rangle} \newcommand{\Bigiprod}[1]{\Bigl\langle #1 \Bigr\rangle} \newcommand{\biggiprod}[1]{\biggl\langle #1 \biggr\rangle} \newcommand{\Biggiprod}[1]{\Biggl\langle #1 \Biggr\rangle} \newcommand{\set}[1]{\lbrace #1 \rbrace} \newcommand{\Set}[1]{\left\lbrace #1 \right\rbrace} \newcommand{\bigset}[1]{\bigl\lbrace #1 \bigr\rbrace} \newcommand{\Bigset}[1]{\Bigl\lbrace #1 \Bigr\rbrace} \newcommand{\biggset}[1]{\biggl\lbrace #1 \biggr\rbrace} \newcommand{\Biggset}[1]{\Biggl\lbrace #1 \Biggr\rbrace} \newcommand{\bracket}[1]{\lbrack #1 \rbrack} \newcommand{\Bracket}[1]{\left\lbrack #1 \right\rbrack} \newcommand{\bigbracket}[1]{\bigl\lbrack #1 \bigr\rbrack} \newcommand{\Bigbracket}[1]{\Bigl\lbrack #1 \Bigr\rbrack} \newcommand{\biggbracket}[1]{\biggl\lbrack #1 \biggr\rbrack} \newcommand{\Biggbracket}[1]{\Biggl\lbrack #1 \Biggr\rbrack} \newcommand{\ucorner}[1]{\ulcorner #1 \urcorner} \newcommand{\Ucorner}[1]{\left\ulcorner #1 \right\urcorner} \newcommand{\bigucorner}[1]{\bigl\ulcorner #1 \bigr\urcorner} \newcommand{\Bigucorner}[1]{\Bigl\ulcorner #1 \Bigr\urcorner} \newcommand{\biggucorner}[1]{\biggl\ulcorner #1 \biggr\urcorner} \newcommand{\Biggucorner}[1]{\Biggl\ulcorner #1 \Biggr\urcorner} \newcommand{\ceil}[1]{\lceil #1 \rceil} \newcommand{\Ceil}[1]{\left\lceil #1 \right\rceil} \newcommand{\bigceil}[1]{\bigl\lceil #1 \bigr\rceil} \newcommand{\Bigceil}[1]{\Bigl\lceil #1 \Bigr\rceil} \newcommand{\biggceil}[1]{\biggl\lceil #1 \biggr\rceil} \newcommand{\Biggceil}[1]{\Biggl\lceil #1 \Biggr\rceil} \newcommand{\floor}[1]{\lfloor #1 \rfloor} \newcommand{\Floor}[1]{\left\lfloor #1 \right\rfloor} \newcommand{\bigfloor}[1]{\bigl\lfloor #1 \bigr\rfloor} \newcommand{\Bigfloor}[1]{\Bigl\lfloor #1 \Bigr\rfloor} \newcommand{\biggfloor}[1]{\biggl\lfloor #1 \biggr\rfloor} \newcommand{\Biggfloor}[1]{\Biggl\lfloor #1 \Biggr\rfloor} \newcommand{\lcorner}[1]{\llcorner #1 \lrcorner} \newcommand{\Lcorner}[1]{\left\llcorner #1 \right\lrcorner} \newcommand{\biglcorner}[1]{\bigl\llcorner #1 \bigr\lrcorner} \newcommand{\Biglcorner}[1]{\Bigl\llcorner #1 \Bigr\lrcorner} \newcommand{\bigglcorner}[1]{\biggl\llcorner #1 \biggr\lrcorner} \newcommand{\Bigglcorner}[1]{\Biggl\llcorner #1 \Biggr\lrcorner} \newcommand{\ket}[1]{| #1 \rangle} \newcommand{\bra}[1]{\langle #1 |} \newcommand{\braket}[2]{\langle #1 | #2 \rangle} \newcommand{\ketbra}[1]{| #1 \rangle\langle #1 |} \newcommand{\e}{\varepsilon} \newcommand{\eps}{\varepsilon} \newcommand{\from}{\colon} \newcommand{\super}[2]{#1^{(#2)}} \newcommand{\varsuper}[2]{#1^{\scriptscriptstyle (#2)}} \newcommand{\tensor}{\otimes} \newcommand{\eset}{\emptyset} \newcommand{\sse}{\subseteq} \newcommand{\sst}{\substack} \newcommand{\ot}{\otimes} \newcommand{\Esst}[1]{\bbE_{\substack{#1}}} \newcommand{\vbig}{\vphantom{\bigoplus}} \newcommand{\seteq}{\mathrel{\mathop:}=} \newcommand{\defeq}{\stackrel{\mathrm{def}}=} \newcommand{\Mid}{\mathrel{}\middle|\mathrel{}} \newcommand{\Ind}{\mathbf 1} \newcommand{\bits}{\{0,1\}} \newcommand{\sbits}{\{\pm 1\}} \newcommand{\R}{\mathbb R} \newcommand{\Rnn}{\R_{\ge 0}} \newcommand{\N}{\mathbb N} \newcommand{\Z}{\mathbb Z} \newcommand{\Q}{\mathbb Q} \newcommand{\C}{\mathbb C} \newcommand{\A}{\mathbb A} \newcommand{\Real}{\mathbb R} \newcommand{\mper}{\,.} \newcommand{\mcom}{\,,} \DeclareMathOperator{\Id}{Id} \DeclareMathOperator{\cone}{cone} \DeclareMathOperator{\vol}{vol} \DeclareMathOperator{\val}{val} \DeclareMathOperator{\opt}{opt} \DeclareMathOperator{\Opt}{Opt} \DeclareMathOperator{\Val}{Val} \DeclareMathOperator{\LP}{LP} 
\DeclareMathOperator{\SDP}{SDP} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\Inf}{Inf} \DeclareMathOperator{\size}{size} \DeclareMathOperator{\poly}{poly} \DeclareMathOperator{\polylog}{polylog} \DeclareMathOperator{\min}{min} \DeclareMathOperator{\max}{max} \DeclareMathOperator{\argmax}{arg\,max} \DeclareMathOperator{\argmin}{arg\,min} \DeclareMathOperator{\qpoly}{qpoly} \DeclareMathOperator{\qqpoly}{qqpoly} \DeclareMathOperator{\conv}{conv} \DeclareMathOperator{\Conv}{Conv} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\perm}{perm} \DeclareMathOperator{\mspan}{span} \DeclareMathOperator{\mrank}{rank} \DeclareMathOperator{\E}{\mathbb E} \DeclareMathOperator{\pE}{\tilde{\mathbb E}} \DeclareMathOperator{\Pr}{\mathbb P} \DeclareMathOperator{\Span}{Span} \DeclareMathOperator{\Cone}{Cone} \DeclareMathOperator{\junta}{junta} \DeclareMathOperator{\NSS}{NSS} \DeclareMathOperator{\SA}{SA} \DeclareMathOperator{\SOS}{SOS} \DeclareMathOperator{\Stab}{\mathbf Stab} \DeclareMathOperator{\Det}{\textbf{Det}} \DeclareMathOperator{\Perm}{\textbf{Perm}} \DeclareMathOperator{\Sym}{\textbf{Sym}} \DeclareMathOperator{\Pow}{\textbf{Pow}} \DeclareMathOperator{\Gal}{\textbf{Gal}} \DeclareMathOperator{\Aut}{\textbf{Aut}} \newcommand{\iprod}[1]{\langle #1 \rangle} \newcommand{\cE}{\mathcal{E}} \newcommand{\E}{\mathbb{E}} \newcommand{\pE}{\tilde{\mathbb{E}}} \newcommand{\N}{\mathbb{N}} \renewcommand{\P}{\mathcal{P}} \notag $
$ \newcommand{\sleq}{\ensuremath{\preceq}} \newcommand{\sgeq}{\ensuremath{\succeq}} \newcommand{\diag}{\ensuremath{\mathrm{diag}}} \newcommand{\support}{\ensuremath{\mathrm{support}}} \newcommand{\zo}{\ensuremath{\{0,1\}}} \newcommand{\pmo}{\ensuremath{\{\pm 1\}}} \newcommand{\uppersos}{\ensuremath{\overline{\mathrm{sos}}}} \newcommand{\lambdamax}{\ensuremath{\lambda_{\mathrm{max}}}} \newcommand{\rank}{\ensuremath{\mathrm{rank}}} \newcommand{\Mslow}{\ensuremath{M_{\mathrm{slow}}}} \newcommand{\Mfast}{\ensuremath{M_{\mathrm{fast}}}} \newcommand{\Mdiag}{\ensuremath{M_{\mathrm{diag}}}} \newcommand{\Mcross}{\ensuremath{M_{\mathrm{cross}}}} \newcommand{\eqdef}{\ensuremath{ =^{def}}} \newcommand{\threshold}{\ensuremath{\mathrm{threshold}}} \newcommand{\vbls}{\ensuremath{\mathrm{vbls}}} \newcommand{\cons}{\ensuremath{\mathrm{cons}}} \newcommand{\edges}{\ensuremath{\mathrm{edges}}} \newcommand{\cl}{\ensuremath{\mathrm{cl}}} \newcommand{\xor}{\ensuremath{\oplus}} \newcommand{\1}{\ensuremath{\mathrm{1}}} \notag $
$ \newcommand{\transpose}[1]{\ensuremath{#1{}^{\mkern-2mu\intercal}}} \newcommand{\dyad}[1]{\ensuremath{#1#1{}^{\mkern-2mu\intercal}}} \newcommand{\nchoose}[1]{\ensuremath} \newcommand{\generated}[1]{\ensuremath{\langle #1 \rangle}} \notag $
$ \newcommand{\eqdef}{\mathbin{\stackrel{\rm def}{=}}} \newcommand{\R} % real numbers \newcommand{\N}} % natural numbers \newcommand{\Z} % integers \newcommand{\F} % a field \newcommand{\Q} % the rationals \newcommand{\C}{\mathbb{C}} % the complexes \newcommand{\poly}} \newcommand{\polylog}} \newcommand{\loglog}}} \newcommand{\zo}{\{0,1\}} \newcommand{\suchthat} \newcommand{\pr}[1]{\Pr\left[#1\right]} \newcommand{\deffont}{\em} \newcommand{\getsr}{\mathbin{\stackrel{\mbox{\tiny R}}{\gets}}} \newcommand{\Exp}{\mathop{\mathrm E}\displaylimits} % expectation \newcommand{\Var}{\mathop{\mathrm Var}\displaylimits} % variance \newcommand{\xor}{\oplus} \newcommand{\GF}{\mathrm{GF}} \newcommand{\eps}{\varepsilon} \notag $
$ \newcommand{\class}[1]{\mathbf{#1}} \newcommand{\coclass}[1]{\mathbf{co\mbox{-}#1}} % and their complements \newcommand{\BPP}{\class{BPP}} \newcommand{\NP}{\class{NP}} \newcommand{\RP}{\class{RP}} \newcommand{\coRP}{\coclass{RP}} \newcommand{\ZPP}{\class{ZPP}} \newcommand{\BQP}{\class{BQP}} \newcommand{\FP}{\class{FP}} \newcommand{\QP}{\class{QuasiP}} \newcommand{\VF}{\class{VF}} \newcommand{\VBP}{\class{VBP}} \newcommand{\VP}{\class{VP}} \newcommand{\VNP}{\class{VNP}} \newcommand{\RNC}{\class{RNC}} \newcommand{\RL}{\class{RL}} \newcommand{\BPL}{\class{BPL}} \newcommand{\coRL}{\coclass{RL}} \newcommand{\IP}{\class{IP}} \newcommand{\AM}{\class{AM}} \newcommand{\MA}{\class{MA}} \newcommand{\QMA}{\class{QMA}} \newcommand{\SBP}{\class{SBP}} \newcommand{\coAM}{\class{coAM}} \newcommand{\coMA}{\class{coMA}} \renewcommand{\P}{\class{P}} \newcommand\prBPP{\class{prBPP}} \newcommand\prRP{\class{prRP}} \newcommand\prP{\class{prP}} \newcommand{\Ppoly}{\class{P/poly}} \newcommand{\NPpoly}{\class{NP/poly}} \newcommand{\coNPpoly}{\class{coNP/poly}} \newcommand{\DTIME}{\class{DTIME}} \newcommand{\TIME}{\class{TIME}} \newcommand{\SIZE}{\class{SIZE}} \newcommand{\SPACE}{\class{SPACE}} \newcommand{\ETIME}{\class{E}} \newcommand{\BPTIME}{\class{BPTIME}} \newcommand{\RPTIME}{\class{RPTIME}} \newcommand{\ZPTIME}{\class{ZPTIME}} \newcommand{\EXP}{\class{EXP}} \newcommand{\ZPEXP}{\class{ZPEXP}} \newcommand{\RPEXP}{\class{RPEXP}} \newcommand{\BPEXP}{\class{BPEXP}} \newcommand{\SUBEXP}{\class{SUBEXP}} \newcommand{\NTIME}{\class{NTIME}} \newcommand{\NL}{\class{NL}} \renewcommand{\L}{\class{L}} \newcommand{\NQP}{\class{NQP}} \newcommand{\NEXP}{\class{NEXP}} \newcommand{\coNEXP}{\coclass{NEXP}} \newcommand{\NPSPACE}{\class{NPSPACE}} \newcommand{\PSPACE}{\class{PSPACE}} \newcommand{\NSPACE}{\class{NSPACE}} \newcommand{\coNSPACE}{\coclass{NSPACE}} \newcommand{\coL}{\coclass{L}} \newcommand{\coP}{\coclass{P}} \newcommand{\coNP}{\coclass{NP}} \newcommand{\coNL}{\coclass{NL}} \newcommand{\coNPSPACE}{\coclass{NPSPACE}} \newcommand{\APSPACE}{\class{APSPACE}} \newcommand{\LINSPACE}{\class{LINSPACE}} \newcommand{\qP}{\class{\tilde{P}}} \newcommand{\PH}{\class{PH}} \newcommand{\EXPSPACE}{\class{EXPSPACE}} \newcommand{\SigmaTIME}[1]{\class{\Sigma_{#1}TIME}} \newcommand{\PiTIME}[1]{\class{\Pi_{#1}TIME}} \newcommand{\SigmaP}[1]{\class{\Sigma_{#1}P}} \newcommand{\PiP}[1]{\class{\Pi_{#1}P}} \newcommand{\DeltaP}[1]{\class{\Delta_{#1}P}} \newcommand{\ATIME}{\class{ATIME}} \newcommand{\ASPACE}{\class{ASPACE}} \newcommand{\AP}{\class{AP}} \newcommand{\AL}{\class{AL}} \newcommand{\APSPACE}{\class{APSPACE}} \newcommand{\VNC}[1]{\class{VNC^{#1}}} \newcommand{\NC}[1]{\class{NC^{#1}}} \newcommand{\AC}[1]{\class{AC^{#1}}} \newcommand{\ACC}[1]{\class{ACC^{#1}}} \newcommand{\TC}[1]{\class{TC^{#1}}} \newcommand{\ShP}{\class{\# P}} \newcommand{\PaP}{\class{\oplus P}} \newcommand{\PCP}{\class{PCP}} \newcommand{\kMIP}[1]{\class{#1\mbox{-}MIP}} \newcommand{\MIP}{\class{MIP}} $
$ \newcommand{\textprob}[1]{\text{#1}} \newcommand{\mathprob}[1]{\textbf{#1}} \newcommand{\Satisfiability}{\textprob{Satisfiability}} \newcommand{\SAT}{\textprob{SAT}} \newcommand{\TSAT}{\textprob{3SAT}} \newcommand{\USAT}{\textprob{USAT}} \newcommand{\UNSAT}{\textprob{UNSAT}} \newcommand{\QPSAT}{\textprob{QPSAT}} \newcommand{\TQBF}{\textprob{TQBF}} \newcommand{\LinProg}{\textprob{Linear Programming}} \newcommand{\LP}{\mathprob{LP}} \newcommand{\Factor}{\textprob{Factoring}} \newcommand{\CircVal}{\textprob{Circuit Value}} \newcommand{\CVAL}{\mathprob{CVAL}} \newcommand{\CircSat}{\textprob{Circuit Satisfiability}} \newcommand{\CSAT}{\textprob{CSAT}} \newcommand{\CycleCovers}{\textprob{Cycle Covers}} \newcommand{\MonCircVal}{\textprob{Monotone Circuit Value}} \newcommand{\Reachability}{\textprob{Reachability}} \newcommand{\Unreachability}{\textprob{Unreachability}} \newcommand{\RCH}{\mathprob{RCH}} \newcommand{\BddHalt}{\textprob{Bounded Halting}} \newcommand{\BH}{\mathprob{BH}} \newcommand{\DiscreteLog}{\textprob{Discrete Log}} \newcommand{\REE}{\mathprob{REE}} \newcommand{\QBF}{\mathprob{QBF}} \newcommand{\MCSP}{\mathprob{MCSP}} \newcommand{\GGEO}{\mathprob{GGEO}} \newcommand{\CKTMIN}{\mathprob{CKT-MIN}} \newcommand{\MINCKT}{\mathprob{MIN-CKT}} \newcommand{\IdentityTest}{\textprob{Identity Testing}} \newcommand{\Majority}{\textprob{Majority}} \newcommand{\CountIndSets}{\textprob{\#Independent Sets}} \newcommand{\Parity}{\textprob{Parity}} \newcommand{\Clique}{\textprob{Clique}} \newcommand{\CountCycles}{\textprob{#Cycles}} \newcommand{\CountPerfMatchings}{\textprob{\#Perfect Matchings}} \newcommand{\CountMatchings}{\textprob{\#Matchings}} \newcommand{\CountMatch}{\mathprob{\#Matchings}} \newcommand{\ECSAT}{\mathprob{E#SAT}} \newcommand{\ShSAT}{\mathprob{#SAT}} \newcommand{\ShTSAT}{\mathprob{#3SAT}} \newcommand{\HamCycle}{\textprob{Hamiltonian Cycle}} \newcommand{\Permanent}{\textprob{Permanent}} \newcommand{\ModPermanent}{\textprob{Modular Permanent}} \newcommand{\GraphNoniso}{\textprob{Graph Nonisomorphism}} \newcommand{\GI}{\mathprob{GI}} \newcommand{\GNI}{\mathprob{GNI}} \newcommand{\GraphIso}{\textprob{Graph Isomorphism}} \newcommand{\QuantBoolForm}{\textprob{Quantified Boolean Formulae}} \newcommand{\GenGeography}{\textprob{Generalized Geography}} \newcommand{\MAXTSAT}{\mathprob{Max3SAT}} \newcommand{\GapMaxTSAT}{\mathprob{GapMax3SAT}} \newcommand{\ELIN}{\mathprob{E3LIN2}} \newcommand{\CSP}{\mathprob{CSP}} \newcommand{\Lin}{\mathprob{Lin}} \newcommand{\ONE}{\mathbf{ONE}} \newcommand{\ZERO}{\mathbf{ZERO}} \newcommand{\yes} \newcommand{\no} $

Expanders


Introduction

Expander graphs are ubiquitous mathematical objects in graph theory. They have applications in diverse disciplines, including applied mathematics, computer science, geometry, and probability. Discovered in the 1970s, with their existence proved by Pinsker, expander graphs are among the most important mathematical inventions.

Intuitively, an expander is a family of sparse but highly connected graphs; equivalently, it can be viewed as an approximation of the complete graph. One can view expanders from three different aspects: combinatorially/geometrically, probabilistically, and algebraically.

With these three points of view, expander graphs admit several definitions, and each of them relates to the others under suitable parameter settings.

This note summarizes important results about expander graphs along with some of my reflections. The materials I studied are the Pseudorandomness lecture notes by Professor Salil Vadhan from Harvard University and the nice survey by Hoory, Linial, and Wigderson.

Motivation from complexity theory

Here I will show a beautiful application of expander graphs: error reduction for $\RP$. There are many other nice results, such as error-correcting codes, circuit lower bounds, etc. One can find nice expositions in the two sources mentioned above.

To start with, let me formulate the problem. As we know from the previous introduction to randomized computation, $\RP$ is the complexity class of problems having a polynomial-time randomized algorithm with no false-positive error, while false-negative error is allowed with probability at most $1/2$. The bound $1/2$ on the false-negative error is a rather loose one, chosen for convenience of definition and construction. What if we want the error to be negligible, say $2^{-k}$?

The simplest way is to repeat the algorithm $k$ times with independent randomness: since each run has false-negative probability at most $1/2$, the error drops to $2^{-k}$. However, if a single run requires $m$ random bits, then after this trivial reduction the total randomness blows up to $mk$ bits.

Another approach is to use pairwise-independent randomness. The randomness can be reduced to $O(m+k)$ bits; however, the running time blows up exponentially. Surprisingly, with expander graphs, one can do a random walk on the graph, spending only a constant number of fresh random bits per step to generate the pseudorandom seeds; this reduces the randomness while keeping the running time within a constant factor. The results are compared in the following table.

| Method | Running Time | Randomness |
| --- | --- | --- |
| Trivial | $O(k)$ | $O(mk)$ |
| Pairwise-Independence | $O(2^{k})$ | $O(m+k)$ |
| Expander graph | $O(k)$ | $m+O(k)$ |

Details of this error reduction will be discussed later, after we formally define the notion of expansion and derive several useful theorems and lemmas. For now, readers can get a taste of the power of expander graphs.
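To make the randomness accounting concrete, here is a minimal Python sketch of the expander-walk approach. The neighbor oracle `neighbor(v, j)` and the base algorithm `rp_algorithm(x, r)` are hypothetical placeholders (an explicit expander construction is needed in practice); the point is only to show where the $m+O(k)$ random bits go.

```python
import random

def error_reduced_rp(x, m, D, k, neighbor, rp_algorithm):
    """Sketch of RP error reduction via a random walk on an expander.

    Assumptions (illustrative only):
      - neighbor(v, j): the j-th neighbor (0 <= j < D) of vertex v in a fixed
        D-regular expander whose 2^m vertices are identified with m-bit seeds.
      - rp_algorithm(x, r): one-sided-error algorithm using an m-bit seed r;
        it never accepts a NO instance.

    Randomness used: m bits for the start vertex plus log2(D) bits per step,
    i.e. m + O(k) bits in total for k correlated seeds.
    """
    v = random.getrandbits(m)                 # uniform start vertex: m bits
    for _ in range(k):
        if rp_algorithm(x, v):                # any accepting run certifies YES
            return True
        v = neighbor(v, random.randrange(D))  # one walk step: log2(D) bits
    return False                              # all runs rejected: output NO
```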

Mathematical notions of expanders

To formulate concrete mathematical notions, we model an expander as a family of graphs indexed by the number of vertices $N$. We will typically focus on undirected multigraphs in this note.

Vertex expansion

Intuitively, vertex expansion captures graphs with the property that for any not-too-large subset of vertices, the set of their neighbors is not too small. Formally, we have

Given a $D$-regular graph $G=(V,E)$ with $N$ vertices, we say $G$ has $(K,h_v)$ vertex expansion if for every $S\subseteq V$ with $|S|\leq K$ we have $|N(S)|\geq h_v\cdot|S|$. In the normalized setting, we say $G$ has $(K,h_v)$ vertex expansion if for every $S\subseteq V$ with $|S|\leq K$ we have $|N(S)|\geq h_v\cdot D\cdot|S|$.

Intuitively, we want $h_v$ to be as large as possible while $D$ is treated as a constant.
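As a sanity check of the definition, the following brute-force Python snippet (exponential in the number of vertices, so only for tiny graphs) computes the best vertex-expansion parameter $h_v$ for sets of size at most $K$; here $N(S)$ is taken literally as the set of all neighbors of $S$, and the example graph (the 3-dimensional hypercube) is just an illustration, not one from the notes.

```python
from itertools import combinations

def vertex_expansion(adj, K):
    """Largest h such that |N(S)| >= h*|S| for every nonempty S with |S| <= K.

    adj maps each vertex to the set of its neighbors.  Brute force over all
    subsets, so only suitable as a sanity check on small graphs.
    """
    vertices = list(adj)
    best = float("inf")
    for size in range(1, K + 1):
        for S in combinations(vertices, size):
            neighborhood = set()
            for v in S:
                neighborhood |= adj[v]
            best = min(best, len(neighborhood) / size)
    return best

# Example: the 3-dimensional hypercube (3-regular, 8 vertices).
cube = {v: {v ^ (1 << i) for i in range(3)} for v in range(8)}
print(vertex_expansion(cube, K=4))
```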

Let $G$ be a $D$-regular graph with $N$ vertices. Suppose $G$ is an $(\alpha N,h_v)$ vertex expander for some $\alpha>0$. Then $h_v\leq D-1+O(1)$, where the $O(1)$ term vanishes as $N\rightarrow\infty$.


The proof follows the exercise in Vadhan’s textbook.

The main idea is to relate the vertex expansion of a $D$-regular graph to that of the infinite $D$-regular tree $T_D$, which is intuitively the best-possible $D$-regular expander.

The proof contains three steps as follows.

  1. Show that if a $D$-regular graph $G$ is a $(K,A)$ expander, then $T_D$ is a $(K,A)$ expander.

    Let $S$ be an arbitrary vertex set of $T_D$ of size at most $K$. If there are parts of $S$ at distance greater than 1 from each other, divide $S$ accordingly into $S_1,\dots,S_t$. Note that the neighborhoods of distinct $S_i$ and $S_j$ do not overlap. Thus, it suffices to show that $\card{N_{T_D}(S_i)}\geq A\cdot\card{S_i}$ for each $i\in[t]$.

    Next, for each $i\in[t]$, if $S_i$ consists of several disconnected components, we pull them together by contracting edges. Note that this does not increase the expansion. As a result, in the following we may assume each $S_i$ is connected.

    Now, for each $i\in[t]$, embed $S_i$ into $G$ by $\phi_i:S_i\rightarrow V(G)$ as follows.

    (a) Traverse $S_i$ in a depth-first fashion. Map the root to an arbitrary vertex in $G$.

    (b) For a mapped vertex $u$, map all its children $v$ sequentially as follows.

    • If there’s an unmapped neighbor of $\phi_i(u)$, map $v$ to it.
    • If not, map $v$ to an unmapped neighbor of $\phi_i(w)$, where $w$ is some other already-mapped vertex in $S_i$.

    First, we need to verify that the above algorithm is well-defined. We assume that $G$ is connected. As $K\leq\card{V(G)}$, we can always find a mapped vertex that has an unmapped neighbor.

    To show that $\card{N_{T_D}(S_i)}\geq A\cdot\card{S_i}$, it suffices to construct a one-to-one mapping from $N_G(\phi_i(S_i))$ to $N_{T_D}(S_i)$. Suppose the depth-first traversal of $S_i$ visits $u_1,u_2,\dots,u_{\card{S_i}}$. We first map $N_G(\phi_i(u_1))\backslash \phi_i(S_i)$ into $N_{T_D}(u_1)\backslash S_i$: by the construction of $\phi_i$, every child of $u_1$ in $S_i$ has been mapped to a neighbor of $\phi_i(u_1)$, unless all neighbors of $\phi_i(u_1)$ were already mapped. In both cases, we have the following.

    \begin{equation}\label{eq:prob4-3-1-1} |N_G(\phi_i(u_1))\backslash \phi_i(S_i)|\leq|N_{T_D}(u_1)\backslash S_i|. \end{equation}

    Also, observe that for any distinct $u,v\in S_i$, $(N_{T_D}(u)\backslash S_i)\cap (N_{T_D}(v)\backslash S_i)=\emptyset$. That is,

    \begin{equation}\label{eq:prob4-3-1-2} |N_{T_D}(S_i)\backslash S_i| = \sum_{u\in S_i}|N_{T_D}(u)\backslash S_i|. \end{equation}

    As to $G$, we simply have \begin{equation}\label{eq:prob4-3-1-3} |N_G(\phi_i(S_i))\backslash \phi_i(S_i)|\leq\sum_{u\in S_i}|N_G(\phi_i(u))\backslash\phi_i(S_i)|. \end{equation}

    Combining \eqref{eq:prob4-3-1-1} (together with its analogue for each $u_j$), \eqref{eq:prob4-3-1-2}, and \eqref{eq:prob4-3-1-3}, we have

    \begin{equation} |N_G(\phi_i(S_i))| = |\phi_i(S_i)|+|N_G(\phi_i(S_i))\backslash \phi_i(S_i)|\leq|S_i|+|N_{T_D}(S_i)\backslash S_i|=|N_{T_D}(S_i)|. \end{equation}

    We conclude that $\card{N_{T_D}(S_i)}\geq A\cdot\card{S_i}$ for all $i\in[t]$ and thus $T_D$ is a $(K,A)$ expander.

  2. Show that for every $D\in\mathbb{N}$, there are infinitely many $K\in\mathbb{N}$ such that $T_D$ is not a $(K,D-1+2/K)$ expander.

    For any $K\in\mathbb{N}$, take $S_K$ to be the left-most branch (a downward path) of $T_D$ with $K$ vertices. Observe that $\card{S_K}=K$ and \begin{equation} |N(S_K)| = |S_K| + (D-2)\cdot K+2=(D-1)\cdot K+2. \end{equation} That is, $\card{N(S_K)} < (D-1)\cdot K+3 = \card{S_K}\cdot(D-1+3/K)$, which means that $T_D$ is not a $(K,D-1+3/K)$ expander.

    To improve the upper bound on the vertex expansion, we consider $K$ of the form $1+D+D\cdot(D-1)+\cdots+D\cdot(D-1)^t$ for some $t\geq0$, and choose the corresponding vertex set $S_K$ to be the ball of radius $t+1$ around the root of $T_D$ (i.e. levels $0$ through $t+1$). Observe that

    \begin{align} K &= 1+D+D\cdot(D-1)+\cdots+D\cdot(D-1)^t\\
    &= 1+D+D\cdot(D-1)\cdot\frac{(D-1)^t-1}{D-2}\\
    &= \frac{D\cdot(D-1)^{t+1} - D\cdot(D-1) + (D+1)\cdot(D-2)}{D-2}\\
    &= \frac{D\cdot(D-1)^{t+1}-2}{D-2}, \end{align}

    and $\card{N(S_K)} = \card{S_K} + D\cdot(D-1)^{t+1}$. Since $D\cdot(D-1)^{t+1} = (D-2)\cdot K + 2$, we have \begin{equation} |N(S_K)| = K + (D-2)\cdot K +2 = (D-1)\cdot K+2 = \Bigl(D-1+\frac{2}{K}\Bigr)\cdot K. \end{equation} Hence, for these infinitely many values of $K$, $T_D$ is not a $(K,A)$ expander for any $A>D-1+2/K$. (These counts are checked numerically in the sketch following this proof.)

  3. Deduce that for constant $D\in\mathbb{N}$ and $\alpha>0$, if a $D$-regular, $N$-vertex graph $G$ is an $(\alpha N,A)$ vertex expander, then $A\leq D-1+O(1)$, where the $O(1)$ term vanishes as $N\rightarrow\infty$.

    Suppose $G$ is an $(\alpha N,A)$ vertex expander with $A > D-1+2/(\alpha N)$. Then, by (1), $T_D$ is an $(\alpha N, A)$ expander with $A > D-1+2/(\alpha N)$, which contradicts (2). Thus $A\leq D-1+2/(\alpha N) = D-1+O(1)$, and the $O(1)$ term vanishes as $N\rightarrow\infty$.
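The counting in step 2 is easy to verify numerically. The following sketch recomputes $K$ and $|N(S_K)|$ for the ball of radius $t+1$ (using the convention from the proof that $N(S)$ includes $S$ itself) and checks the closed-form expressions above.

```python
def ball_parameters(D, t):
    """K and |N(S_K)| for the ball of radius t+1 in the infinite D-regular tree,
    as in step 2 of the proof above (N(S_K) counts S_K plus the next level)."""
    K = 1 + sum(D * (D - 1) ** j for j in range(t + 1))      # levels 0 .. t+1
    assert K == (D * (D - 1) ** (t + 1) - 2) // (D - 2)       # closed form
    N_SK = K + D * (D - 1) ** (t + 1)                         # add level t+2
    assert N_SK == (D - 1) * K + 2                            # = (D-1+2/K)*K
    return K, N_SK

for D in (3, 4, 5):
    for t in range(4):
        K, N = ball_parameters(D, t)
        print(D, t, K, N, round(N / K, 3))   # ratio tends to D-1 as K grows
```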

Spectral expansion

Intuitively, spectral expansion uses the gap between 1 and the second largest eigenvalue $\lambda$ of the random walk matrix to capture how fast a random walk converges to its stationary distribution: as long as $\lambda$ is bounded away from 1, the convergence is very fast. Thus, we define spectral expansion as follows.

Given a $D$-regular graph $G=(V,E)$ with $N$ vertices and random walk matrix $M$, we say $G$ has spectral expansion $\gamma$, where $0\leq\gamma\leq 1$, if for every $x\in\mathbb{R}^N$ with $\langle\mathbf{1},x\rangle=0$ we have $x^{\intercal}Mx\leq(1-\gamma)\norm{x}^2$.
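In code, $\lambda(G)$ and $\gamma$ can be read off the eigenvalues of the random walk matrix. The snippet below is a small numpy sketch (the complete graph $K_6$ is just a convenient example, not one from the notes).

```python
import numpy as np

def spectral_expansion(A, D):
    """lambda(G): largest absolute value among the nontrivial eigenvalues of
    the random walk matrix M = A/D of a D-regular graph; gamma = 1 - lambda."""
    M = np.asarray(A, dtype=float) / D
    eigs = np.sort(np.abs(np.linalg.eigvalsh(M)))[::-1]
    lam = eigs[1]          # eigs[0] = 1, the trivial (all-ones) eigenvalue
    return lam, 1.0 - lam

# Example: the complete graph K_6 (5-regular): lambda = 1/5, gamma = 4/5.
n = 6
A = np.ones((n, n)) - np.eye(n)
print(spectral_expansion(A, D=5))
```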

For a $D$-regular graph, it is easy to see that we cannot achieve arbitrarily large spectral expansion; there is an inherent upper bound. Specifically, we have

For every $D$-regular multigraph $G$ with $N$ vertices, we have $\lambda(G)\geq\frac{2\sqrt{D-1}}{D}-O(1)$, i.e. $\gamma\leq 1-\frac{2\sqrt{D-1}}{D}+O(1)$, where the $O(1)$ term vanishes as $N\rightarrow\infty$.


In the following, let $T_D$ be the infinite $D$-regular tree, and for a graph $H$ and $l\in\mathbb{N}$, let $p_l(H)$ denote the probability that a random walk of length $2l$ started from a uniformly random vertex $v$ of $H$ ends back at $v$ (for the infinite tree $T_D$, the starting vertex can be taken arbitrary by symmetry).

The proof consists of three steps as follows.

  1. Show that $p_l(G)\geq p_l(T_D)\geq C_l\cdot (D-1)^l/D^{2l}$, where $C_l$ is the $l$-th Catalan number.

    Consider a $D$-regular graph $G$. We can associate the vertices of $G$ with the vertices of $T_D$ as follows:

    • Arbitrarily choose a vertex $v$ in $G$ and map it to the root of $T_D$.
    • Consider the neighbors of $v$, i.e. $N(v)$, and map them to the first-level children of the root in arbitrary order.
    • Then consider the neighbors of the first-level vertices and map them to the second-level children in arbitrary order.
    • Repeat this process level by level indefinitely.
    • Note that if $G$ is disconnected, some vertices may be left unmapped. In that case, pick one of them arbitrarily, map it to a new root, and repeat the steps above.
    • Finally, we may end up with several roots; map each of them to a suitable level of $T_D$.

    Observe that, under this correspondence, $p_{\ell}(G)$ equals the probability that a random walk of length $2\ell$ in $T_D$ ends at a vertex whose corresponding vertex in $G$ is the same as that of its starting vertex. This probability is at least the probability of returning to the starting vertex itself, so $p_{\ell}(G)\geq p_{\ell}(T_D)$.

    Next, consider an arbitrary vertex $u$ and count the closed walks of length $2\ell$ from $u$ back to $u$ that never go above $u$. There are at least $C_{\ell}\cdot (D-1)^{\ell}$ of them: the up/down pattern is a Dyck path, counted by $C_{\ell}$, and each downward step has at least $D-1$ choices of child. As $T_D$ is $D$-regular, each such walk is taken with probability $1/D^{2\ell}$. Since a closed walk from $u$ to $u$ need not be of this form, this gives a lower bound on $p_{\ell}(T_D)$. Concretely, we have $p_{\ell}(T_D)\geq C_{\ell}\cdot(D-1)^{\ell}/D^{2\ell}$.

  2. Show that $N\cdot p_l(G)\leq1+(N-1)\cdot\lambda(G)^{2l}$.

    By definition, we have $N\cdot p_{\ell}(G) = tr(M^{2\ell})$, where $M$ is the random walk matrix of $G$. Since the trace of a matrix equals the sum of its eigenvalues, we have

    \begin{align} N\cdot p_{\ell}(G) &= tr(M^{2\ell}) = \sum_{i=1}^N\lambda_i(M^{2\ell})\\
    &= \sum_{i=1}^N\lambda_i(M)^{2\ell}\\
    &\leq 1+(N-1)\cdot \lambda(G)^{2\ell} \end{align}

    The last inequality is because $\card{\lambda_i(M)}\leq\lambda(G)$ for all $i\geq2$.

  3. Using the fact that $C_l=\binom{2l}{l}/(l+1)$, prove that $\lambda(G)\geq\frac{2\sqrt{D-1}}{D}-O(1)$, where the $O(1)$ term vanishes as $N\rightarrow\infty$.

    From parts (1) and (2), we have \begin{align} \lambda(G)^{2\ell}&\geq(\frac{N\cdot C_{\ell}\cdot(D-1)^{\ell}}{D^{2\ell}}-1)/(N-1)\\
    &\geq\frac{N\cdot C_{\ell}\cdot(D-1)^{\ell}}{D^{2\ell}\cdot N}-O(\frac{1}{N}) = \frac{\binom{2\ell}{\ell}\cdot(D-1)^{\ell}}{(\ell+1)\cdot D^{2\ell}}-O(\frac{1}{N}). \end{align}

    By Stirling's formula,

    \begin{align} \binom{2\ell}{\ell}\geq\sqrt{\frac{1}{\pi\ell}}\cdot 2^{2\ell}/\ell. \end{align}

    Thus, we have

    \begin{equation} \left(\binom{2\ell}{\ell}/(\ell+1)\right)^{1/2\ell}\geq2-O(\frac{1}{\ell}). \end{equation}

    Setting $\ell=N$, we have

    \begin{equation} \lambda(G) \geq \frac{2\sqrt{D-1}}{D}-O\Bigl(\frac{1}{N}\Bigr). \end{equation}

    Although this gives an inherent upper bound on the spectral expansion, the good news is that there exists a family of graphs, Ramanujan graphs, that asymptotically achieves the bound. Details about Ramanujan graphs will be discussed in a later post.
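As a quick numerical illustration of how tight this bound is (assuming the `networkx` library is available for generating random regular graphs), one can compare $\lambda(G)$ of a random $D$-regular graph with $2\sqrt{D-1}/D$; by Friedman's theorem, random regular graphs come within $o(1)$ of the bound with high probability.

```python
import numpy as np
import networkx as nx   # assumed available; only used for this sanity check

def second_eigenvalue(G, D):
    """lambda(G): second largest absolute eigenvalue of the random walk matrix."""
    M = nx.to_numpy_array(G) / D
    eigs = np.sort(np.abs(np.linalg.eigvalsh(M)))[::-1]
    return eigs[1]

D, N = 4, 1000
bound = 2 * np.sqrt(D - 1) / D                 # asymptotic lower bound above
G = nx.random_regular_graph(D, N)
print(second_eigenvalue(G, D), "vs lower bound", bound)
```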

Edge expansion

Edge expansion considers the number of edges between a subset of vertices and its complement. The formal definition is as follows.

We say a $D$-regular graph $G$ is a $(K,\epsilon)$ edge expander if $\forall S\subseteq V(G)$ with $\card{S}\leq K$, $e(S,\bar{S})\geq \epsilon\cdot\card{S}\cdot D$.

Intuitively, the definition of edge expansion asks that $\frac{e(S,\bar{S})}{\card{S}\cdot D}\geq\epsilon$ for every not-too-large vertex set $S$, where the ratio $\frac{e(S,\bar{S})}{\card{S}\cdot D}$ can be interpreted as the probability that a random walk started from the stationary distribution conditioned on $S$ leaves $S$ in a single step.
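A brute-force check of this definition on a small example (the 8-cycle, which is a poor edge expander) may help fix the quantifiers; the snippet below simply minimizes $e(S,\bar S)/(D\cdot|S|)$ over all sets of size at most $K$.

```python
from itertools import combinations

def edge_expansion(adj, K, D):
    """Smallest ratio e(S, complement)/(D*|S|) over nonempty S with |S| <= K;
    G is a (K, eps) edge expander iff this value is at least eps.
    Brute force, intended only for small examples."""
    vertices = list(adj)
    worst = 1.0
    for size in range(1, K + 1):
        for S in combinations(vertices, size):
            S = set(S)
            cut = sum(1 for v in S for u in adj[v] if u not in S)
            worst = min(worst, cut / (D * size))
    return worst

# Example: the cycle C_8 (2-regular).  A contiguous arc of 4 vertices has only
# 2 outgoing edges, so the minimum over |S| <= 4 is 2/(2*4) = 0.25.
cycle = {v: {(v - 1) % 8, (v + 1) % 8} for v in range(8)}
print(edge_expansion(cycle, K=4, D=2))
```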

Equivalence among the different measures

For simplicity, here we only consider two relations: vertex expansion versus spectral expansion, and edge expansion versus spectral expansion.

From the equivalences among the different measures of expansion, we can see that the definitions are not exactly interchangeable: some constant or small factor is lost when translating between two measures. The fundamental reason may lie in what each definition focuses on, e.g. spectral expansion looks at a more global property of a graph, while the other two adopt local, combinatorial definitions. As a result, graphs having the same vertex expansion may behave quite differently in a global sense, resulting in a loss, e.g. the $1/D^2$ factor lost when passing from vertex expansion to spectral expansion.

Different definitions of expansion have distinct properties, and which one is preferable depends on the application. In the following posts, I will introduce some nice results, e.g. the expander mixing lemma, so that we can see how to make good use of the idea of expander graphs.

Status of Each Notion of Expansion

Let's briefly recall the goal of each notion of expansion. In the following, we fix $D$ as a constant and think of $N$ and $K$ as growing parameters.

Expanders and Random Walk

Given a $D$-regular graph $G$ with constant degree $D$ and spectral expansion $\gamma$, random walks on $G$ converge rapidly to the uniform distribution and can often substitute for independent samples; the following theorems make this precise.

Theorems about random walks on expanders

Let $G$ be a regular digraph with $N$ vertices and spectral expansion $1-\lambda$. Consider a random walk $V_1,\dots,V_t$ in $G$ from a uniform start vertex $V_1$. Then for every set $B\subseteq[N]$ of density $\mu$ and every $\epsilon>0$, the fraction of time the walk spends in $B$ satisfies \begin{equation} \Pr\Bigl[\,\Bigl|\frac{1}{t}\sum_{i\in[t]}\Ind[V_i\in B]-\mu\Bigr|>\lambda+\epsilon\Bigr]\leq 2^{-\Omega(\epsilon^2 t)}. \end{equation}

Let $G$ be a regular digraph with $N$ vertices and spectral expansion $1-\lambda$. Consider a random walk $V_1,\dots,V_t$ in $G$ from a uniform start vertex $V_1$. Let $1\leq i_1<i_2<\cdots<i_k\leq t$ be fixed time steps and let $B\subseteq[N]$ be a bad set of density $\mu$. Then $\mathbb{P}[\forall s\in[k],\ V_{i_s}\in B]\leq \mu\cdot \prod_{s\in[k-1]}\bigl(\mu+\lambda^{\Delta_s}\cdot(1-\mu)\bigr)$, where $\Delta_s=i_{s+1}-i_s$.
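The two statements above can be checked empirically by simulating walks. The following sketch (using the complete graph as a stand-in expander, and consecutive steps $i_s=s$ in place of general indices) estimates the fraction of time a walk spends in a set $B$ and the probability that a walk never leaves $B$.

```python
import random

def walk_statistics(adj, B, t, trials=2000):
    """Simulate length-t random walks from a uniform start vertex and report
    (i) the average fraction of steps spent in B (cf. the first theorem) and
    (ii) the fraction of walks that stay inside B for all t steps
    (cf. the second theorem with consecutive indices)."""
    vertices = list(adj)
    avg_fraction, stayed_in_B = 0.0, 0
    for _ in range(trials):
        v = random.choice(vertices)
        hits, always_in_B = 0, True
        for _ in range(t):
            if v in B:
                hits += 1
            else:
                always_in_B = False
            v = random.choice(list(adj[v]))   # uniform step along an edge
        avg_fraction += hits / t
        stayed_in_B += always_in_B
    return avg_fraction / trials, stayed_in_B / trials

# Example: complete graph K_10 (9-regular), B of density 1/2.
n = 10
adj = {v: {u for u in range(n) if u != v} for v in range(n)}
B = set(range(n // 2))
print(walk_statistics(adj, B, t=20))
```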

Technical Tools for Analyzing Random Walk on Expanders