$ \DeclareMathOperator{\poly}{poly} \newcommand{\class}[1]{\mathbf{#1}} \newcommand{\VP}{\class{VP}} \newcommand{\VNP}{\class{VNP}} \notag $

Research

My research interests lie in theoretical computer science (TCS), and in particular computational complexity. I am enthralled by problems in circuit complexity, algebraic computation, and quantum computation. I am also interested in pseudorandomness, cryptography, coding theory, and the interaction of TCS with other fields.

Conference Publications

(Nearly) Efficient Algorithms for the Graph Matching Problem on Correlated Random Graphs
with Boaz Barak, Zhixian Lei, Tselil Schramm, Yueqi Sheng
Advances in Neural Information Processing Systems (NeurIPS 2019).
[arxiv] [conference version] [slides by Tselil] [poster]

@InProceedings{BCLSS19,
  title = {(Nearly) Efficient Algorithms for the Graph Matching Problem on Correlated Random Graphs},
  author = {Barak, Boaz and Chou, Chi-Ning and Lei, Zhixian and Schramm, Tselil and Sheng, Yueqi},
  booktitle = {Advances in Neural Information Processing Systems 32 (NeurIPS 2019)},
  pages = {9186--9194},
  year = {2019},
  url = {http://papers.nips.cc/paper/9118-nearly-efficient-algorithms-for-the-graph-matching-problem-on-correlated-random-graphs.pdf},
  archivePrefix = "arXiv",
  eprint = "1805.02349",
  primaryClass={cs.DS}
}

We consider the graph matching/similarity problem of determining how similar two given graphs $G_0,G_1$ are and of recovering the permutation $\pi$ on the vertices of $G_1$ that minimizes the symmetric difference between the edges of $G_0$ and $\pi(G_1)$. Graph matching/similarity has applications in pattern matching, computer vision, social network anonymization, malware analysis, and more. We give the first efficient algorithms proven to succeed in the correlated Erdős–Rényi model (Pedarsani and Grossglauser, 2011).

Specifically, we give a polynomial time algorithm for the graph similarity/hypothesis testing task which works for every constant level of correlation between the two graphs that can be arbitrarily close to zero. We also give a quasi-polynomial ($n^{O(\log n)}$ time) algorithm for the graph matching task of recovering the permutation minimizing the symmetric difference in this model. This is the first algorithm to do so without requiring, as additional input, a "seed" of the values of the ground truth permutation on at least $n^{\Omega(1)}$ vertices. Our algorithms follow a general framework of counting the occurrences of subgraphs from a particular family of graphs, which allows for tradeoffs between efficiency and accuracy.
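
To make the subgraph-counting framework concrete, here is a toy Python sketch. It uses a single test graph, the triangle, and correlates the centered triangle counts of the two graphs over many independently drawn pairs; the actual algorithm counts a carefully structured family of test graphs and decides from a single pair. The parameters below (n, p, keep) and the use of triangles are illustrative choices, not the ones from the paper.

import itertools, random

def er_parent(n, p, rng):
    # parent Erdos-Renyi graph G(n, p), edges stored as frozensets
    return {frozenset(e) for e in itertools.combinations(range(n), 2) if rng.random() < p}

def subsample(edges, keep, rng):
    # child graph: keep each parent edge independently with probability `keep`
    return {e for e in edges if rng.random() < keep}

def triangle_count(edges, n):
    adj = [set() for _ in range(n)]
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v); adj[v].add(u)
    # each triangle is counted once per edge, hence the division by 3
    return sum(len(adj[u] & adj[v]) for u, v in map(tuple, edges)) // 3

def centered(edges, n, q):
    # subtract the expected triangle count of G(n, q)
    return triangle_count(edges, n) - (n * (n - 1) * (n - 2) / 6) * q ** 3

def statistic(n, p, keep, correlated, rng):
    parent0 = er_parent(n, p, rng)
    parent1 = parent0 if correlated else er_parent(n, p, rng)
    g0, g1 = subsample(parent0, keep, rng), subsample(parent1, keep, rng)
    return centered(g0, n, p * keep) * centered(g1, n, p * keep)

rng = random.Random(0)
n, p, keep, trials = 60, 0.2, 0.8, 400
for correlated in (True, False):
    avg = sum(statistic(n, p, keep, correlated, rng) for _ in range(trials)) / trials
    # the average is noticeably positive for correlated pairs, close to zero otherwise
    print("correlated" if correlated else "independent", round(avg, 1))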

Tracking the $\ell_2$ Norm with Constant Update Time
with Zhixian Lei, Preetum Nakkiran
Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019).
[arxiv] [conference version] [slides]

@InProceedings{CLN19,
  author =	{Chi-Ning Chou and Zhixian Lei and Preetum Nakkiran},
  title =	{Tracking the l_2 Norm with Constant Update Time},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{2:1--2:15},
  ISSN =	{1868-8969},
  year =	{2019},
  URL =		{http://drops.dagstuhl.de/opus/volltexte/2019/11217},
  URN =		{urn:nbn:de:0030-drops-112175},
  annote =	{Keywords: Streaming algorithms, Sketching algorithms, Tracking, CountSketch},
  archivePrefix = "arXiv",
  eprint = "1807.06479",
  primaryClass={cs.DS}
}

The $\ell_2$ tracking problem is the task of obtaining a streaming algorithm that, given access to a stream of items $a_1,a_2,a_3,\ldots$ from a universe $[n]$, outputs at each time $t$ an estimate of the $\ell_2$ norm of the frequency vector $f^{(t)}\in \mathbb{R}^n$ (where $f^{(t)}_i$ is the number of occurrences of item $i$ in the stream up to time $t$). Previous work [Braverman-Chestnut-Ivkin-Nelson-Wang-Woodruff, PODS 2017] gave a streaming algorithm using the optimal space of $O(\epsilon^{-2}\log(1/\delta))$ words and $O(\epsilon^{-2}\log(1/\delta))$ update time to obtain an $\epsilon$-accurate estimate with probability at least $1-\delta$.

We give the first algorithm that achieves update time $O(\log 1/\delta)$, which is independent of the accuracy parameter $\epsilon$, together with nearly optimal space of $O(\epsilon^{-2}\log(1/\delta))$ words. Our algorithm is obtained using the Count Sketch of [Charikar-Chen-Farach-Colton, ICALP 2002].
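
For intuition, here is a minimal Python sketch of a CountSketch-based $\ell_2$ estimator. It only illustrates the sketch the result builds on: this naive version touches every row on each update, so its update time grows with the number of rows rather than being independent of $\epsilon$ as in the paper, and Python's built-in hash stands in for the pairwise-independent hash families a real implementation would use.

import random, statistics
from collections import Counter

class CountSketchL2:
    """Toy CountSketch estimator of the l_2 norm of the frequency vector."""
    def __init__(self, width, rows, seed=0):
        rng = random.Random(seed)
        self.width, self.rows = width, rows
        self.salts = [(rng.getrandbits(64), rng.getrandbits(64)) for _ in range(rows)]
        self.table = [[0] * width for _ in range(rows)]

    def update(self, item, count=1):
        for r, (h_salt, s_salt) in enumerate(self.salts):
            bucket = hash((h_salt, item)) % self.width
            sign = 1 if hash((s_salt, item)) & 1 else -1
            self.table[r][bucket] += sign * count

    def estimate(self):
        # each row's sum of squared buckets is an unbiased estimate of ||f||_2^2;
        # the median over rows boosts the success probability
        return statistics.median(sum(c * c for c in row) for row in self.table) ** 0.5

# usage on a synthetic stream over the universe [1000]
rng = random.Random(1)
stream = [rng.randrange(1000) for _ in range(20000)]
sketch = CountSketchL2(width=400, rows=9)
for a in stream:
    sketch.update(a)
true_l2 = sum(v * v for v in Counter(stream).values()) ** 0.5
print(round(sketch.estimate(), 1), "vs true", round(true_l2, 1))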

On the Algorithmic Power of Spiking Neural Networks
with Kai-Min Chung, Chi-Jen Lu
Innovations in Theoretical Computer Science (ITCS 2019).
[arxiv] [conference version] [slides] [poster]

@InProceedings{CCL19,
  author ={Chi-Ning Chou and Kai-Min Chung and Chi-Jen Lu},
  title ={On the Algorithmic Power of Spiking Neural Networks},
  booktitle ={10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages ={26:1--26:20},
  ISSN ={1868-8969},
  year ={2019},
  URL ={http://drops.dagstuhl.de/opus/volltexte/2018/10119},
  URN ={urn:nbn:de:0030-drops-101191},
  archivePrefix = "arXiv",
  eprint = "1803.10375",
  primaryClass={cs.NE}
}


Spiking neural networks (SNNs) are mathematical models for biological neural networks such as our brain. In this work, we study SNNs through the lens of algorithms. In particular, we show that the firing rate of the integrate-and-fire SNN can efficiently solve the non-negative least squares problem and the $\ell_1$ minimization problem. Further, our proof gives a new interpretation of the integrate-and-fire SNN, in which the external charging and spiking effects can be viewed as a gradient and a projection, respectively, in the dual space.
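
As a rough illustration, here is a toy discrete-time integrate-and-fire simulation in Python. The choice of charging current $A^\top b$, inhibition matrix $A^\top A$, threshold, and step size are illustrative guesses at this flavor of model, not the precise model or parameters from the paper; the empirical firing rate is compared against a projected-gradient solution of the non-negative least squares problem.

import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 5
A = rng.standard_normal((m, n))
b = A @ np.maximum(rng.standard_normal(n), 0) + 0.01 * rng.standard_normal(m)

I_ext = A.T @ b          # constant external charging current (illustrative)
C = A.T @ A              # recurrent inhibition applied when a neuron spikes
threshold, dt, steps = 1.0, 0.02, 100000

u = np.zeros(n)          # membrane potentials
spikes = np.zeros(n)     # total spike counts
for _ in range(steps):
    u += dt * I_ext                       # external charging
    s = (u >= threshold).astype(float)    # neurons that fire this step
    spikes += s
    u -= C @ s                            # spiking inhibits (and resets) potentials
rate = spikes / (steps * dt)              # empirical firing rate

# reference: projected gradient descent for min ||Ax - b||^2 subject to x >= 0
x = np.zeros(n)
for _ in range(20000):
    x = np.maximum(x - 1e-3 * (C @ x - I_ext), 0.0)

print("firing rate  :", np.round(rate, 3))
print("NNLS solution:", np.round(x, 3))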

Personalized Difficulty Adjustment for Countering the Double-Spending Attack in Proof-of-Work Consensus Protocols
with Yu-Jing Lin, Ren Chen, Hsiu-Yao Chang, I-Ping Tu, Shih-wei Liao
IEEE International Conference on Blockchain (Blockchain 2018).
[arxiv] [conference version]

@InProceedings{CLCCTL18,
  title={Personalized Difficulty Adjustment for Countering the Double-Spending Attack in Proof-of-Work Consensus Protocols},
  author={Chou, Chi-Ning and Lin, Yu-Jing and Chen, Ren and Chang, Hsiu-Yao and Tu, I-Ping and Liao, Shih-wei},
  booktitle={2018 IEEE Conference on Blockchain (Blockchain 2018)},
  pages={1457--1462},
  year={2018}
}

Bitcoin is the first secure decentralized electronic currency system. However, it is known to be inefficient due to its proof-of-work (PoW) consensus algorithm, and it carries the potential hazard of double spending. In this paper, we aim to reduce the probability of double spending by decreasing the probability of consecutive winning. We first formalize a PoW-based decentralized secure network model in order to present a quantitative analysis. Next, to resolve the risk of double spending, we propose the personalized difficulty adjustment (PDA) mechanism, which modifies the difficulty of each participant so that those who have won more blocks in the past few rounds have a smaller probability of winning in the next round. To analyze the performance of the PDA mechanism, we observe that the system can be modeled by a high-order Markov chain. Finally, we show that PDA effectively decreases the probability of consecutive winning and results in a more trustworthy PoW-based system.
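
The flavor of the mechanism can be sketched in a few lines of Python: a miner's chance of winning the next block is weighted down by how many of the last few blocks it has won, and a Monte Carlo run compares the frequency of three consecutive wins with and without the adjustment. The weighting formula, window size, and penalty are illustrative guesses, not the PDA rule from the paper.

import random
from collections import deque

def run(num_miners, rounds, window, penalty, rng):
    """Frequency of some miner winning three blocks in a row.
    penalty = 0 recovers plain PoW with equal hash power; penalty > 0 mimics
    a personalized difficulty adjustment (illustrative weighting only)."""
    history = deque(maxlen=window)   # winners of the last `window` blocks
    prev, streak, triple_wins = None, 0, 0
    for _ in range(rounds):
        # a miner's winning odds shrink with its number of recent wins
        weights = [1.0 / (1.0 + penalty * sum(1 for w in history if w == m))
                   for m in range(num_miners)]
        winner = rng.choices(range(num_miners), weights=weights)[0]
        streak = streak + 1 if winner == prev else 1
        if streak == 3:
            triple_wins += 1
        prev = winner
        history.append(winner)
    return triple_wins / rounds

rng = random.Random(0)
print("plain PoW               :", run(10, 100000, 5, 0.0, rng))
print("personalized difficulty :", run(10, 100000, 5, 4.0, rng))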

Hardness vs Randomness for Bounded Depth Arithmetic Circuits
with Mrinal Kumar, Noam Solomon
Computational Complexity Conference (CCC 2018).
[arxiv] [eccc] [conference version] [slides]

@InProceedings{CKS18,
  author ={Chi-Ning Chou and Mrinal Kumar and Noam Solomon},
  title ={Hardness vs Randomness for Bounded Depth Arithmetic Circuits},
  booktitle ={33rd Computational Complexity Conference (CCC 2018)},
  pages ={13:1--13:17},
  ISSN ={1868-8969},
  year ={2018},
  URL ={http://drops.dagstuhl.de/opus/volltexte/2018/8876},
  URN ={urn:nbn:de:0030-drops-88765},
  annote ={Keywords: Algebraic Complexity, Polynomial Factorization, Circuit Lower Bounds, Polynomial Identity Testing}
}

In this paper, we study the question of hardness-randomness tradeoffs for bounded depth arithmetic circuits. We show that if there is a family of explicit polynomials $\{f_n\}$, where $f_n$ is of degree $O(\log^2 n/\log^2\log n)$ in $n$ variables, such that $f_n$ cannot be computed by depth $\Delta$ arithmetic circuits of size $\poly(n)$, then there is a deterministic sub-exponential time algorithm for polynomial identity testing of arithmetic circuits of depth $\Delta-5$.

This is incomparable to a beautiful result of Dvir et al. [SICOMP, 2009], who showed that super-polynomial lower bounds for depth $\Delta$ circuits for any explicit family of polynomials (of potentially high degree) imply sub-exponential time deterministic PIT for depth $\Delta-5$ circuits of bounded individual degree. Thus, we remove the "bounded individual degree" condition in the work of Dvir et al. at the cost of strengthening the hardness assumption to hold for polynomials of low degree.

The key technical ingredient of our proof is the following property of roots of polynomials computable by a bounded depth arithmetic circuit: if $f(x_1, x_2, \ldots, x_n)$ and $P(x_1, x_2, \ldots, x_n, y)$ are polynomials of degree $d$ and $r$ respectively, such that $P$ can be computed by a circuit of size $s$ and depth $\Delta$ and $P(x_1, x_2, \ldots, x_n, f) \equiv 0$, then $f$ can be computed by a circuit of size $\poly(n, s, r, d^{O(\sqrt{d})})$ and depth $\Delta + 3$. In comparison, Dvir et al. showed that $f$ can be computed by a circuit of depth $\Delta + 3$ and size $\poly(n, s, r, d^{t})$, where $t$ is the degree of $P$ in $y$. Thus, the size upper bound in the work of Dvir et al. is non-trivial when $t$ is small but $d$ could be large, whereas our size upper bound is non-trivial when $d$ is small but $t$ could be large.


Journal Publications

Closure Results for Polynomial Factorization
with Mrinal Kumar, Noam Solomon
Theory of Computing.
[journal version] [slides] [video]

@article{CKS19J,
  author = {Chou, Chi-Ning and Kumar, Mrinal and Solomon, Noam},
  title = {Closure Results for Polynomial Factorization},
  year = {2019},
  pages = {1--34},
  doi = {10.4086/toc.2019.v015a013},
  publisher = {Theory of Computing},
  journal = {Theory of Computing},
  volume = {15},
  number = {13},
  URL = {http://www.theoryofcomputing.org/articles/v015a013},
  archivePrefix = "arXiv",
  eprint = "1803.05933",
  primaryClass={cs.CC}
}

In a sequence of fundamental results in the 1980s, Kaltofen (SICOMP 1985, STOC'86, STOC'87, RANDOM'89) showed that factors of multivariate polynomials with small arithmetic circuits have small arithmetic circuits. In other words, the complexity class $\VP$ is closed under taking factors. A natural question in this context is to understand if other natural classes of multivariate polynomials, for instance, arithmetic formulas, algebraic branching programs, bounded-depth arithmetic circuits or the class $\VNP$, are closed under taking factors.

In this paper, we show that all factors of degree $\log^a n$ of polynomials with $\poly(n)$-size depth-$k$ circuits have $\poly(n)$-size circuits of depth $O(k + a)$. This partially answers a question of Shpilka and Yehudayoff (Found. Trends in TCS, 2010) and has applications to hardness-randomness tradeoffs for bounded-depth arithmetic circuits.

As direct applications of our techniques, we also obtain simple proofs of the following results.

$\bullet$ The complexity class $\VNP$ is closed under taking factors. This confirms Conjecture 2.1 in Bürgisser's monograph (2000) and improves upon a recent result of Dutta, Saxena and Sinhababu (STOC'18), who showed a quasipolynomial upper bound on the number of auxiliary variables and the complexity of the verifier circuit of factors of polynomials in $\VNP$.

$\bullet$ A factor of degree $d$ of a polynomial $P$ which can be computed by an arithmetic formula (or an algebraic branching program) of size $s$ has a formula (an algebraic branching program, resp.) of size $\poly(s, d^{\log d}, \deg(P))$. This result was first shown by Dutta et al. (STOC'18), and we obtain a slightly different proof as an easy consequence of our techniques.

Our proofs rely on a combination of ideas based on Hensel lifting, developed in the polynomial-factoring literature, with depth-reduction results for arithmetic circuits, and they hold over fields of characteristic zero or of sufficiently large characteristic.


Submitted/Manuscripts

ODE-Inspired Analysis for the Biological Version of Oja’s Rule in Solving Streaming PCA
with Mien Brabeeba Wang, in submission.
[arxiv]

@unpublished{CW19,
  title={ODE-Inspired Analysis for the Biological Version of Oja's Rule in Solving Streaming PCA},
  author={Chou, Chi-Ning and Wang, Mien Brabeeba},
  note={In submission to STOC 2020},
  eprint={1911.02363},
  archivePrefix={arXiv},
  primaryClass={q-bio.NC}
}

Oja's rule [Oja, Journal of Mathematical Biology, 1982] is a well-known biologically plausible algorithm that uses a Hebbian-type synaptic update rule to solve streaming principal component analysis (PCA). Computational neuroscientists have long known that this biological version of Oja's rule converges to the top eigenvector of the covariance matrix of the input in the limit. However, prior to this work, no convergence rate guarantee was known.

In this work, we give the first convergence rate analysis for the biological version of Oja's rule in solving streaming PCA. Moreover, our convergence rate matches the information-theoretic lower bound up to logarithmic factors and outperforms the state-of-the-art upper bound for streaming PCA. Furthermore, we develop a novel framework inspired by ordinary differential equations (ODEs) to analyze general stochastic dynamics. The framework abandons the traditional step-by-step analysis and instead analyzes a stochastic dynamic in one shot by giving a closed-form solution to the entire dynamic. The one-shot framework allows us to apply stopping-time and martingale techniques to gain flexible and precise control over the dynamic. We believe that this general framework is powerful and should lead to effective yet simple analyses for a large class of problems with stochastic dynamics.
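
For reference, here is the classical Oja update on a synthetic stream in Python (this is the rule being analyzed, not the ODE-inspired analysis framework); the step-size schedule and the explicit renormalization are illustrative choices made to keep the toy numerically stable.

import numpy as np

rng = np.random.default_rng(0)
d, T = 20, 50000
v1 = rng.standard_normal(d); v1 /= np.linalg.norm(v1)   # planted top eigenvector

w = rng.standard_normal(d); w /= np.linalg.norm(w)
for t in range(1, T + 1):
    # stream sample with covariance I + 4 * v1 v1^T (top eigenvalue 5 vs. 1)
    x = rng.standard_normal(d) + 2.0 * rng.standard_normal() * v1
    eta = 1.0 / (t + 100)                          # illustrative step-size schedule
    w += eta * ((x @ w) * x - (x @ w) ** 2 * w)    # Oja's Hebbian-type update
    w /= np.linalg.norm(w)                         # safeguard; Oja's rule self-normalizes in the limit

print("alignment |<w, v1>| =", round(float(abs(w @ v1)), 4))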

Closure of $\VP$ under taking factors: a short and simple proof
with Mrinal Kumar, Noam Solomon, in submission.
[arxiv] [eccc]

@unpublished{CKS19,
  title={Closure of VP under taking factors: a short and simple proof},
  author={Chou, Chi-Ning and Kumar, Mrinal and Solomon, Noam},
  note={In submission to Theory of Computing},
  eprint={1903.02366},
  archivePrefix={arXiv},
  primaryClass={cs.CC}
}

In this note, we give a short, simple, and almost completely self-contained proof of a classical result of Kaltofen which shows that if an $n$-variate degree-$d$ polynomial $f$ can be computed by an arithmetic circuit of size $s$, then each of its factors can be computed by an arithmetic circuit of size at most $\poly(s, n, d)$.

However, unlike Kaltofen's argument, our proof does not directly give an efficient algorithm for computing the circuits for the factors of $f$.