
Research

I am a scientist and theorist working at the intersection of computer science, physics, and neuroscience. My main research interest is understanding the fundamental limits of computation (and, beyond that, of intelligence) in different complex systems: artificial and mathematical, biological, and physical.

My academic journey began in theoretical computer science, specifically in computational complexity theory. This field adopts a bottom-up approach, striving for provable, asymptotic understanding within computational models such as Turing machines and circuits. In recent years, my interests have expanded to (statistical) physics and neuroscience. These disciplines lean more toward a top-down approach, constructing observables and phenomenological models to elucidate nature and the workings of the brain. I value both approaches for tackling challenging scientific questions and firmly believe in the symbiotic relationship between theory and practice.

At present, my primary research area is neuroscience. For a deeper dive into my work, please see the introductions below, which detail each of my current research directions.

In January 2022, I organized and taught a mini-course, “What is Computation? From Turing Machines to Black Holes and Neurons”. My mid-term research vision and agenda are embodied in this mini-course, and all the resources are available here. Also, in 2023, I wrote a book (in Mandarin) based on the mini-course; the draft can be found here.

Neural Representational Geometry

Neuroscience

Neurons in the brain generate patterns of activity in response to external stimuli or while an animal is performing a task. These patterns, known as neural representations, are pivotal for understanding how the brain encodes information and performs computations. As we transition into an era of large-scale neural recordings, deciphering complex, high-dimensional, and noisy neural representations demands new modeling and analysis frameworks to support quantitative investigation.

The central theme of this research direction will be the development of quantitative foundations for neural representations, which emerge from learning and dynamics in neural circuits. We will utilize principles and methods from geometry, computation, and physics, and apply them to design testable hypotheses for experiments aimed at investigating the brain’s mechanisms at the population level.


Representational learning by optimization of neural manifolds in an olfactory memory network
Bo Hu, Nesibe Z. Temiz, Chi-Ning Chou, Peter Rupprecht, Claire Meissner-Bernard, Benjamin Titze, SueYeon Chung, Rainer W Friedrich
Manuscript
[bioRxiv]

@article{Hu2024.11.17.623906,
  author = {Hu, Bo and Temiz, Nesibe Z. and Chou, Chi-Ning and Rupprecht, Peter and Meissner-Bernard, Claire and Titze, Benjamin and Chung, SueYeon and Friedrich, Rainer W},
  title = {Representational learning by optimization of neural manifolds in an olfactory memory network},
  elocation-id = {2024.11.17.623906},
  year = {2024},
  doi = {10.1101/2024.11.17.623906},
  publisher = {Cold Spring Harbor Laboratory},
  journal = {bioRxiv}
}

Higher brain functions depend on experience-dependent representations of relevant information that may be organized by attractor dynamics or by geometrical modifications of continuous "neural manifolds". To explore these scenarios we analyzed odor-evoked activity in telencephalic area pDp of juvenile and adult zebrafish, the homolog of piriform cortex. No obvious signatures of attractor dynamics were detected. Rather, olfactory discrimination training selectively enhanced the separation of neural manifolds representing task-relevant odors from other representations, consistent with predictions of autoassociative network models endowed with precise synaptic balance. Analytical approaches using the framework of manifold capacity revealed multiple geometrical modifications of representational manifolds that supported the classification of task-relevant sensory information. Manifold capacity predicted odor discrimination across individuals, indicating a close link between manifold geometry and behavior. Hence, pDp and possibly related recurrent networks store information in the geometry of representational manifolds, resulting in joint sensory and semantic maps that may support distributed learning processes.

Nonlinear Classification of Neural Manifolds with Contextual Information
Francesca Mignacco, Chi-Ning Chou, SueYeon Chung
Manuscript
[arxiv]

@article{MCC24,
  title={Nonlinear classification of neural manifolds with contextual information},
  author={Francesca Mignacco and Chi-Ning Chou and SueYeon Chung},
  year={2024},
  eprint={2405.06851},
  archivePrefix={arXiv},
  primaryClass={q-bio.NC}
}

Understanding how neural systems efficiently process information through distributed representations is a fundamental challenge at the interface of neuroscience and machine learning. Recent approaches analyze the statistical and geometrical attributes of neural representations as population-level mechanistic descriptors of task implementation. In particular, manifold capacity has emerged as a promising framework linking population geometry to the separability of neural manifolds. However, this metric has been limited to linear readouts. Here, we propose a theoretical framework that overcomes this limitation by leveraging contextual input information. We derive an exact formula for the context-dependent capacity that depends on manifold geometry and context correlations, and validate it on synthetic and real data. Our framework's increased expressivity captures representation untanglement in deep networks at early stages of the layer hierarchy, previously inaccessible to analysis. As context-dependent nonlinearity is ubiquitous in neural systems, our data-driven and theoretically grounded approach promises to elucidate context-dependent computation across scales, datasets, and models.

Neural Manifold Capacity Captures Representation Geometry, Correlations, and Task-Efficiency Across Species and Behaviors
Chi-Ning Chou, Luke Arend, Albert J. Wakhloo, Royoung Kim, Will Slatton, SueYeon Chung
Manuscript
[bioRxiv]

@article {Chou2024.02.26.582157,
  author = {Chi-Ning Chou and Luke Arend and Albert J Wakhloo and Royoung Kim and Will Slatton and SueYeon Chung},
  title = {Neural Manifold Capacity Captures Representation Geometry, Correlations, and Task-Efficiency Across Species and Behaviors},
  elocation-id = {2024.02.26.582157},
  year = {2024},
  doi = {10.1101/2024.02.26.582157},
  publisher = {Cold Spring Harbor Laboratory},
  journal = {bioRxiv}
}

The study of the brain encompasses multiple scales, including temporal, spatial, and functional aspects. Integrating understanding across these different levels and modalities requires developing quantification methods and frameworks. Here, we present effective Geometric measures from Correlated Manifold Capacity theory (GCMC) for probing the functional structure in neural representations. We utilize a statistical physics approach to establish analytical connections between neural co-variabilities and downstream read-out efficiency. These effective geometric measures capture both stimulus-driven and behavior-driven structures in neural population activities, while extracting computationally relevant information from neural data into intuitive and interpretable analysis descriptors. We apply GCMC to a diverse collection of datasets with different recording methods, various model organisms, and multiple task modalities. Specifically, we demonstrate that GCMC enables a wide range of multi-scale data analysis. This includes quantifying the spatial progression of encoding efficiency across brain regions, revealing the temporal dynamics of task-relevant manifold geometry in information processing, and characterizing variances as well as invariances in neural representations throughout learning. Lastly, the effective manifold geometric measures may be viewed as order parameters for phases related to computational efficiency, facilitating data-driven hypothesis generation and latent embedding.
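As a toy illustration of the notion of "capacity" underlying this line of work (a classical point-separability result, not the correlated-manifold theory used in the paper), Cover's function-counting theorem gives the probability that P points in general position in N dimensions, with uniformly random binary labels, are linearly separable; this probability drops sharply around the load P/N = 2. A minimal sketch:

```python
from math import comb

def cover_separability_prob(P: int, N: int) -> float:
    """Probability that P points in general position in R^N, with
    uniformly random +/-1 labels, are linearly separable (Cover, 1965):
    2^(1-P) * sum_{k=0}^{N-1} C(P-1, k)."""
    return 2.0 ** (1 - P) * sum(comb(P - 1, k) for k in range(N))

# Sharp transition near load alpha = P/N = 2:
N = 50
for P in (N, 2 * N, 4 * N):
    print(f"alpha = {P / N:.1f}: P(separable) = {cover_separability_prob(P, N):.4f}")
```

Below the transition (P = N) separability is certain, exactly at P = 2N the probability is 1/2, and above it separability becomes vanishingly unlikely, which is one precise sense in which a readout's capacity limits how much a population can encode linearly.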

Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds
Michael Kuoch, Chi-Ning Chou, Nikhil Parthasarathy, Joel Dapello, James J DiCarlo, Haim Sompolinsky, SueYeon Chung.
Conference on Parsimony and Learning (CPAL 2024)
[arxiv] [conference version]

@inproceedings{KCPDDSC24,
  title={Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds},
  author={Michael Kuoch and Chi-Ning Chou and Nikhil Parthasarathy and Joel Dapello and James J. DiCarlo and Haim Sompolinsky and SueYeon Chung},
  booktitle={Conference on Parsimony and Learning (Proceedings Track)},
  year={2023},
  url={https://openreview.net/forum?id=MxBS6aw5Gd}
}

In recent years, growth in our understanding of the computations performed in both biological and artificial neural networks has largely been driven by either low-level mechanistic studies or global normative approaches. However, concrete methodologies for bridging the gap between these levels of abstraction remain elusive. In this work, we investigate the internal mechanisms of neural networks through the lens of neural population geometry, aiming to provide understanding at an intermediate level of abstraction, as a way to bridge that gap. Utilizing manifold capacity theory (MCT) from statistical physics and manifold alignment analysis (MAA) from high-dimensional statistics, we probe the underlying organization of task-dependent manifolds in deep neural networks and neural recordings from the macaque visual cortex. Specifically, we quantitatively characterize how different learning objectives lead to differences in the organizational strategies of these models and demonstrate how these geometric analyses are connected to the decodability of task-relevant information. Furthermore, these metrics show that macaque visual cortex data are more similar to unsupervised DNNs in terms of geometrical properties such as manifold position and manifold alignment. These analyses present a strong direction for bridging mechanistic and normative theories in neural networks through neural population geometry, potentially opening up many future research avenues in both machine learning and neuroscience.
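One elementary geometric comparison in the spirit of manifold alignment (a generic linear-algebra sketch, not the paper's exact MAA procedure) is to compute the principal angles between the subspaces spanned by two sets of population responses: orthonormalize each response matrix and take the singular values of the product of the bases, which are the cosines of the principal angles.

```python
import numpy as np

def subspace_alignment(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Cosines of the principal angles between the column spaces of X and Y.

    X, Y: (n_neurons, n_components) matrices whose columns span each
    subspace. Returns the singular values of Qx^T Qy in descending
    order; values near 1 indicate aligned directions, near 0
    orthogonal ones.
    """
    Qx, _ = np.linalg.qr(X)  # orthonormal basis for col(X)
    Qy, _ = np.linalg.qr(Y)  # orthonormal basis for col(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

# Identical subspaces -> cosines all 1; orthogonal subspaces -> all 0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(subspace_alignment(A, A))
print(subspace_alignment(A, B))
```

In practice the columns of X and Y might be leading principal components of responses under two learning objectives or two brain areas; the principal-angle spectrum then summarizes how much representational structure the two systems share.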

Sensory cortex plasticity supports auditory social learning
Nihaad Paraouty, Justin D. Yao, Léo Varnet, Chi-Ning Chou, SueYeon Chung, Dan H. Sanes.
Nature Communications (2023).
[journal version]

@article{paraouty2023sensory,
  title={Sensory cortex plasticity supports auditory social learning},
  author={Paraouty, Nihaad and Yao, Justin D and Varnet, L{\'e}o and Chou, Chi-Ning and Chung, SueYeon and Sanes, Dan H},
  journal={Nature Communications},
  volume={14},
  number={1},
  pages={5828},
  year={2023},
  publisher={Nature Publishing Group UK London}
}

Social learning (SL) through experience with conspecifics can facilitate the acquisition of many behaviors. Thus, when Mongolian gerbils are exposed to a demonstrator performing an auditory discrimination task, their subsequent task acquisition is facilitated, even in the absence of visual cues. Here, we show that transient inactivation of auditory cortex (AC) during exposure caused a significant delay in task acquisition during the subsequent practice phase, suggesting that AC activity is necessary for SL. Moreover, social exposure induced an improvement in AC neuron sensitivity to auditory task cues. The magnitude of neural change during exposure correlated with task acquisition during practice. In contrast, exposure to only auditory task cues led to poorer neurometric and behavioral outcomes. Finally, social information during exposure was encoded in the AC of observer animals. Together, our results suggest that auditory SL is supported by AC neuron plasticity occurring during social exposure and prior to behavioral performance.

Quantum Advantage

Quantum

There is a large gap between the theoretical study and the practical development of quantum computation. Quantum computational advantage serves as a bridge: it proposes near-term milestones for demonstrating the realizable computational power of quantum devices. I’m interested both in better understanding current quantum advantage proposals and in proposing potentially useful quantum advantage tasks.

[ ]

Limitations of Linear Cross-Entropy as a Measure for Quantum Advantage
Xun Gao, Marcin Kalinowski, Chi-Ning Chou, Mikhail D. Lukin, Boaz Barak, Soonwon Choi
PRX Quantum (2024)
[journal version] [arxiv] [slides] [ ] [ ]

@article{PRXQuantum.5.010334,
  title = {Limitations of Linear Cross-Entropy as a Measure for Quantum Advantage},
  author = {Gao, Xun and Kalinowski, Marcin and Chou, Chi-Ning and Lukin, Mikhail D. and Barak, Boaz and Choi, Soonwon},
  journal = {PRX Quantum},
  volume = {5},
  issue = {1},
  pages = {010334},
  numpages = {27},
  year = {2024},
  month = {Feb},
  publisher = {American Physical Society},
  doi = {10.1103/PRXQuantum.5.010334},
  url = {https://link.aps.org/doi/10.1103/PRXQuantum.5.010334}
}

Demonstrating quantum advantage requires experimental implementation of a computational task that is hard to achieve using state-of-the-art classical systems. One approach is to perform sampling from a probability distribution associated with a class of highly entangled many-body wavefunctions. It has been suggested that this approach can be certified with the Linear Cross-Entropy Benchmark (XEB). We critically examine this notion. First, in a "benign" setting where an honest implementation of noisy quantum circuits is assumed, we characterize the conditions under which the XEB approximates the fidelity. Second, in an "adversarial" setting where all possible classical algorithms are considered for comparison, we show that achieving relatively high XEB values does not imply faithful simulation of quantum dynamics. We present an efficient classical algorithm that, with 1 GPU within 2s, yields high XEB values, namely 2-12% of those obtained in experiments. By identifying and exploiting several vulnerabilities of the XEB, we achieve high XEB values without full simulation of quantum circuits. Remarkably, our algorithm features better scaling with the system size than noisy quantum devices for commonly studied random circuit ensembles. To quantitatively explain the success of our algorithm and the limitations of the XEB, we use a theoretical framework in which the average XEB and fidelity are mapped to statistical models. We illustrate the relation between the XEB and the fidelity for quantum circuits in various architectures, with different gate choices, and in the presence of noise. Our results show that XEB's utility as a proxy for fidelity hinges on several conditions, which must be checked in the benign setting but cannot be assumed in the adversarial setting. Thus, the XEB alone has limited utility as a benchmark for quantum advantage. We discuss ways to overcome these limitations.

Limitations of Local Quantum Algorithms on Maximum Cuts of Sparse Hypergraphs and Beyond
with Peter J. Love, Juspreet Singh Sandhu, Jonathan Shi.
International Colloquium on Automata, Languages and Programming (ICALP 2022)
[arxiv] [conference version] [ ] [ ]

@InProceedings{CLSS22,
  author =	{Chou, Chi-Ning and Love, Peter J. and Sandhu, Juspreet Singh and Shi, Jonathan},
  title =	{Limitations of Local Quantum Algorithms on Random MAX-k-XOR and Beyond},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{41:1--41:20},
  year =	{2022}
}

In this work, we study the limitations of the Quantum Approximate Optimization Algorithm (QAOA) through the lens of statistical physics and show that there exists $\epsilon > 0$, such that $\epsilon \log(n)$ depth QAOA cannot arbitrarily-well approximate the ground state energy of random diluted $k$-spin glasses when $k \geq 4$ is even. This is equivalent to the weak approximation resistance of logarithmic depth QAOA to the Max-$k$-XOR problem. We further extend the limitation to other boolean constraint satisfaction problems as long as the problem satisfies a combinatorial property called the coupled overlap-gap property (OGP) [Chen et al., Annals of Probability, 47(3), 2019]. As a consequence of our techniques, we confirm a conjecture of Brandao et al. [arXiv:1812.04170, 2018] asserting that the landscape independence of QAOA extends to logarithmic depth: in other words, for every fixed choice of QAOA angle parameters, the algorithm at logarithmic depth performs almost equally well on almost all instances. Our results provide a new way to study the power and limitations of QAOA through statistical physics methods and combinatorial properties.

Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits
with Boaz Barak and Xun Gao.
Innovations in Theoretical Computer Science Conference (ITCS 2021)
[arxiv] [conference version] [slides] [ ] [ ]

@InProceedings{BCG21,
  author =	{Boaz Barak and Chi-Ning Chou and Xun Gao},
  title =	{Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{30:1--30:20},
  year =	{2021},
  archivePrefix = "arXiv",
  eprint = "2005.02421",
  primaryClass={quant-ph}
}

The linear cross-entropy benchmark (Linear XEB) has been used as a test for procedures simulating quantum circuits. Given a quantum circuit $C$ with $n$ inputs and outputs and a purported simulator whose output is distributed according to a distribution $p$ over $\{0,1\}^n$, the linear XEB fidelity of the simulator is $\mathcal{F}_C(p)=2^n\mathbb{E}_{x\sim p}q_C(x)-1$ where $q_C(x)$ is the probability that $x$ is output from the distribution $C|0^n\rangle$. A trivial simulator (e.g., the uniform distribution) satisfies $\mathcal{F}_C(p)=0$, while Google's noisy quantum simulation of a 53-qubit circuit $C$ achieved a fidelity value of $(2.24\pm0.21)\times 10^{-3}$ (Arute et al., Nature'19).

In this work we give a classical randomized algorithm that for a given circuit $C$ of depth $d$ with Haar random 2-qubit gates achieves in expectation a fidelity value of $\Omega(\frac{n}{L}\cdot15^{-d})$ in running time $\mathrm{poly}(n,2^L)$. Here $L$ is the size of the *light cone* of $C$: the maximum number of input bits that each output bit depends on. In particular, we obtain a polynomial-time algorithm that achieves a large fidelity of $\omega(1)$ for depth $O(\sqrt{\log n})$ two-dimensional circuits. To our knowledge, this is the first such result for two-dimensional circuits of super-constant depth. Our results can be considered as evidence that fooling the linear XEB test might be easier than achieving a full simulation of the quantum circuit.
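As a toy illustration of the quantity being spoofed, the linear XEB fidelity can be estimated from samples alone. The sketch below is not the paper's construction: it replaces a real circuit's output probabilities with a Porter-Thomas-style distribution (exponentially distributed weights, an assumption chosen to mimic deep random circuits) and compares a perfect simulator against the trivial uniform one.

```python
import numpy as np

# Toy estimate of the linear XEB fidelity F_C(p) = 2^n * E_{x~p}[q_C(x)] - 1.
# Instead of simulating a circuit, draw q_C from a Porter-Thomas-like ensemble
# (an illustrative assumption, not the paper's algorithm).

rng = np.random.default_rng(0)
n = 10                       # number of qubits
N = 2 ** n                   # number of bitstrings

q = rng.exponential(size=N)
q /= q.sum()                 # "ideal" output distribution q_C

def xeb(samples, q, N):
    """Estimate F_C(p) from bitstring samples drawn from a simulator p."""
    return N * q[samples].mean() - 1.0

m = 200_000
ideal_samples = rng.choice(N, size=m, p=q)   # perfect simulator: p = q_C
uniform_samples = rng.integers(N, size=m)    # trivial simulator: p uniform

print(xeb(ideal_samples, q, N))    # ≈ 1 under Porter-Thomas statistics
print(xeb(uniform_samples, q, N))  # ≈ 0
```

The gap between the two estimates is exactly what the benchmark measures; the paper's point is that intermediate values can also be reached classically without simulating the circuit.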

Streaming Complexity

TCS

Streaming models capture the scenario where the input data is presented as a stream. We study the computational complexity of various problems in this model, with a focus on constraint satisfaction problems.

[ ]

Sketching Approximability of All Finite CSPs
with Alexander Golovnev, Madhu Sudan, Santhoshini Velusamy.
Journal of the ACM (2024).
[journal version] [arxiv] [ ] [ ]

@article{CGSV24J,
  author = {Chou, Chi-Ning and Golovnev, Alexander and Sudan, Madhu and Velusamy, Santhoshini},
  title = {Sketching Approximability of All Finite CSPs},
  year = {2024},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  issn = {0004-5411},
  url = {https://doi.org/10.1145/3649435},
  doi = {10.1145/3649435},
  journal = {J. ACM},
}

A constraint satisfaction problem (CSP), $\textsf{Max-CSP}(\mathcal{F})$, is specified by a finite set of constraints $\mathcal{F} \subseteq \{[q]^k \to \{0,1\}\}$ for positive integers $q$ and $k$. An instance of the problem on $n$ variables is given by $m$ applications of constraints from $\mathcal{F}$ to subsequences of the $n$ variables, and the goal is to find an assignment to the variables that satisfies the maximum number of constraints. In the $(\gamma,\beta)$-approximation version of the problem for parameters $0 \leq \beta < \gamma \leq 1$, the goal is to distinguish instances where at least $\gamma$ fraction of the constraints can be satisfied from instances where at most $\beta$ fraction of the constraints can be satisfied. In this work we consider the approximability of this problem in the context of sketching algorithms and give a dichotomy result. Specifically, for every family $\mathcal{F}$ and every $\beta < \gamma$, we show that either a linear sketching algorithm solves the problem in polylogarithmic space, or the problem is not solvable by any sketching algorithm in $o(\sqrt{n})$ space.
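For intuition, here is a tiny brute-force reference for the Max-CSP($\mathcal{F}$) value as defined above; it is exponential in $n$ and purely illustrative (the predicate and instance below are hypothetical examples, unrelated to the sketching algorithms studied in the paper).

```python
from itertools import product

# Brute-force value of a Max-CSP instance: m applications of constraints
# f: [q]^k -> {0,1} to tuples of the n variables; the value is the maximum
# fraction of satisfied constraints over all q^n assignments.

def max_csp_value(n, q, instance):
    """instance: list of (f, vars), f a predicate on value-tuples,
    vars a tuple of variable indices. Returns the max satisfied fraction."""
    best = 0
    for assignment in product(range(q), repeat=n):
        sat = sum(f(tuple(assignment[v] for v in vs)) for f, vs in instance)
        best = max(best, sat)
    return best / len(instance)

# Example: Max-CUT on a triangle (predicate x != y); at most 2 of 3 edges cut.
neq = lambda t: int(t[0] != t[1])
triangle = [(neq, (0, 1)), (neq, (1, 2)), (neq, (2, 0))]
print(max_csp_value(3, 2, triangle))  # 2/3
```

The $(\gamma,\beta)$-approximation version asks only to distinguish value $\geq \gamma$ from value $\leq \beta$, which is what the sketching dichotomy addresses.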

Linear Space Streaming Lower Bounds for Approximating CSPs
with Alexander Golovnev, Madhu Sudan, Ameya Velingker, Santhoshini Velusamy.
Symposium on Theory of Computing (STOC 2022)
[arxiv] [eccc] [conference version] [ ] [ ]

@article{CGSVV21,
  title={Linear Space Streaming Lower Bounds for Approximating CSPs},
  author={Chou, Chi-Ning and Golovnev, Alexander and Sudan, Madhu and Velingker, Ameya and Velusamy, Santhoshini},
  journal={arXiv preprint},
  year={2021},
  archivePrefix = "arXiv",
  eprint = "2106.13078",
  primaryClass={cs.CC}
}

We consider the approximability of constraint satisfaction problems in the streaming setting. For every constraint satisfaction problem (CSP) on $n$ variables taking values in $\{0,\dots,q-1\}$, we prove that improving over the trivial approximability by a factor of $q$ requires $\Omega(n)$ space even on instances with $O(n)$ constraints. We also identify a broad subclass of problems for which any improvement over the trivial approximability requires $\Omega(n)$ space. The key technical core is an optimal, $q^{-(k-1)}$-inapproximability for the case where every constraint is given by a system of $k-1$ linear equations mod $q$ over $k$ variables. Prior to our work, no such hardness was known for an approximation factor less than $1/2$ for any CSP. Our work builds on and extends the work of Kapralov and Krachun (Proc. STOC 2019) who showed a linear lower bound on any non-trivial approximation of the max cut in graphs. This corresponds roughly to the case of Max $k$-LIN-mod-$q$ with $k=q=2$. Each one of the extensions provides non-trivial technical challenges that we overcome in this work.

Sketching Approximability of (Weak) Monarchy Predicates
with Alexander Golovnev, Amirbehshad Shahrasbi, Madhu Sudan, Santhoshini Velusamy.
Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)
[arxiv] [conference version] [ ] [ ]

@article{CGSSV22,
  title={Sketching Approximability of (Weak) Monarchy Predicates},
  author={Chou, Chi-Ning and Golovnev, Alexander and Shahrasbi, Amirbehshad and Sudan, Madhu and Velusamy, Santhoshini},
  eprint={2205.02345},
  archivePrefix={arXiv},
  primaryClass={cs.CC}
}

We analyze the sketching approximability of constraint satisfaction problems on Boolean domains, where the constraints are balanced linear threshold functions applied to literals. In particular, we explore the approximability of monarchy-like functions where the value of the function is determined by a weighted combination of the vote of the first variable (the president) and the sum of the votes of all remaining variables. In the pure version of this function, the president can only be overruled when all remaining variables agree. For every $k \geq 5$, we show that CSPs where the underlying predicate is a pure monarchy function on $k$ variables have no non-trivial sketching approximation algorithm in $o(\sqrt{n})$ space. We also show infinitely many weaker monarchy functions for which CSPs using such constraints are non-trivially approximable by $O(\log(n))$ space sketching algorithms. Moreover, we give the first example of sketching approximable asymmetric Boolean CSPs.
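Concretely, the pure monarchy predicate on $k$ variables over $\{-1,1\}$ can be realized as the balanced linear threshold function $\mathrm{sign}((k-2)x_1 + x_2 + \cdots + x_k)$: the president's weight $k-2$ beats any $k-3$ opponents but loses to all $k-1$ of them. A minimal sketch:

```python
# Pure monarchy as a linear threshold function on +/-1 votes:
# the president (x[0], weight k-2) wins unless all other voters oppose.

def pure_monarchy(x):
    k = len(x)
    return int((k - 2) * x[0] + sum(x[1:]) > 0)

# President votes +1; a single supporter suffices to keep the outcome 1 ...
print(pure_monarchy([1, -1, -1, 1, -1]))   # 1
# ... but unanimous opposition overrules the president.
print(pure_monarchy([1, -1, -1, -1, -1]))  # 0
```

Since $(k-2)+(k-1)$ is odd, the weighted sum is never zero, so the threshold function is well defined and balanced.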

Our results work within the framework of Chou, Golovnev, Sudan, and Velusamy (FOCS 2021) that characterizes the sketching approximability of all CSPs. Their framework can be applied naturally to get a computer-aided analysis of the approximability of any specific constraint satisfaction problem. The novelty of our work is in using their work to get an analysis that applies to infinitely many problems simultaneously.

Approximability of all Finite CSPs with Linear Sketches
with Alexander Golovnev, Madhu Sudan, Santhoshini Velusamy.
Symposium on Foundations of Computer Science (FOCS 2021)
[arxiv] [eccc] [conference version] [slides] [ ] [ ]

@article{CGSV21b,
  title={Approximability of all finite CSPs in the dynamic streaming setting},
  author={Chou, Chi-Ning and Golovnev, Alexander and Sudan, Madhu and Velusamy, Santhoshini},
  journal={arXiv preprint},
  year={2021},
  archivePrefix = "arXiv",
  eprint = "2105.01161",
  primaryClass={cs.CC}
}

A constraint satisfaction problem (CSP), Max-CSP($\mathcal{F}$), is specified by a finite set of constraints $\mathcal{F}\subseteq\{[q]^k\to\{0,1\}\}$ for positive integers $q$ and $k$. An instance of the problem on n variables is given by m applications of constraints from $\mathcal{F}$ to subsequences of the n variables, and the goal is to find an assignment to the variables that satisfies the maximum number of constraints. In the $(\gamma,\beta)$-approximation version of the problem for parameters $0\leq\beta<\gamma\leq1$, the goal is to distinguish instances where at least $\gamma$ fraction of the constraints can be satisfied from instances where at most $\beta$ fraction of the constraints can be satisfied.

In this work we consider the approximability of this problem in the context of sketching algorithms and give a dichotomy result. Specifically, for every family $\mathcal{F}$ and every $\beta < \gamma$, we show that either a linear sketching algorithm solves the problem in polylogarithmic space, or the problem is not solvable by any sketching algorithm in $o(\sqrt{n})$ space.

Optimal Streaming Approximations for all Boolean Max 2-CSPs and Max k-SAT
with Alexander Golovnev, Santhoshini Velusamy.
Symposium on Foundations of Computer Science (FOCS 2020)
[arxiv] [conference version] [slides] [ ] [ ]

@InProceedings{CGV20,
  title={Optimal Streaming Approximations for all Boolean Max-2CSPs and Max-ksat},
  author={Chou, Chi-Ning and Golovnev, Alexander and Velusamy, Santhoshini},
  booktitle={2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS)},
  year={2020},
  pages={330-341},
  doi={10.1109/FOCS46700.2020.00039},
  archivePrefix = "arXiv",
  eprint = "2004.11796",
  primaryClass={cs.CC}
}

We prove tight upper and lower bounds on approximation ratios of all Boolean Max-2CSP problems in the streaming model. Specifically, for every type of Max-2CSP problem, we give an explicit constant $\alpha$, s.t. for any $\epsilon>0$ (i) there is an $(\alpha-\epsilon)$-streaming approximation using space $O(\log n)$; and (ii) any $(\alpha-\epsilon)$-streaming approximation requires space $\Omega(\sqrt{n})$. This generalizes the celebrated work of [Kapralov, Khanna, Sudan SODA 2015; Kapralov, Krachun STOC 2019], who showed that the optimal approximation ratio for Max-CUT was 1/2.

Prior to this work, the problem of determining this ratio was open for all other Max-2CSPs. Our results are quite surprising for some specific Max-2CSPs. For the Max-DCUT problem, there was a gap between an upper bound of 1/2 and a lower bound of 2/5 [Guruswami, Velingker, Velusamy APPROX 2017]. We show that neither of these bounds is tight, and the optimal ratio for Max-DCUT is 4/9. We also establish that the tight approximation for Max-2SAT is $\sqrt{2}/2$, and for Exact Max-2SAT it is 3/4. As a byproduct, our result gives a separation between space-efficient approximations for Max-2SAT and Exact Max-2SAT. This is in sharp contrast to the setting of polynomial-time algorithms with polynomial space, where the two problems are known to be equally hard to approximate.

Tracking the $\ell_2$ Norm with Constant Update Time
with Zhixian Lei, Preetum Nakkiran
Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019).
[arxiv] [conference version] [slides] [ ] [ ]

@InProceedings{CLN19,
  author =	{Chi-Ning Chou and Zhixian Lei and Preetum Nakkiran},
  title =	{Tracking the l_2 Norm with Constant Update Time},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{2:1--2:15},
  ISSN =	{1868-8969},
  year =	{2019},
  URL =		{http://drops.dagstuhl.de/opus/volltexte/2019/11217},
  URN =		{urn:nbn:de:0030-drops-112175},
  annote =	{Keywords: Streaming algorithms, Sketching algorithms, Tracking, CountSketch},
  archivePrefix = "arXiv",
  eprint = "1807.06479",
  primaryClass={cs.DS}
}

The $\ell_2$ tracking problem is the task of obtaining a streaming algorithm that, given access to a stream of items $a_1,a_2,a_3,\ldots$ from a universe $[n]$, outputs at each time $t$ an estimate to the $\ell_2$ norm of the frequency vector $f^{(t)}\in \mathbb{R}^n$ (where $f^{(t)}_i$ is the number of occurrences of item $i$ in the stream up to time $t$). The previous work [Braverman-Chestnut-Ivkin-Nelson-Wang-Woodruff, PODS 2017] gave a streaming algorithm using the optimal $O(\epsilon^{-2}\log(1/\delta))$ words of space and $O(\epsilon^{-2}\log(1/\delta))$ update time to obtain an $\epsilon$-accurate estimate with probability at least $1-\delta$.

We give the first algorithm that achieves an update time of $O(\log(1/\delta))$, which is independent of the accuracy parameter $\epsilon$, together with nearly optimal space of $O(\epsilon^{-2}\log(1/\delta))$ words. Our algorithm is obtained using the Count Sketch of [Charikar-Chen-Farach-Colton, ICALP 2002].
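A minimal CountSketch-style estimator for $F_2 = \|f\|_2^2$, in the spirit of the Charikar-Chen-Farach-Colton sketch used above, might look as follows. All parameters are illustrative, and the tracking-specific machinery of the paper (estimating at every time $t$ with small update time) is omitted.

```python
import random

# CountSketch-style F_2 estimator: r independent rows of b signed counters.
# Each row hashes every item to one bucket with a random sign; the row's sum
# of squared counters is an unbiased estimate of F_2, and the median over
# rows concentrates around the true value.

random.seed(0)
U = 1000                        # universe size
r, b = 9, 256                   # rows, buckets per row
bucket = [[random.randrange(b) for _ in range(U)] for _ in range(r)]
sign = [[random.choice((-1, 1)) for _ in range(U)] for _ in range(r)]
table = [[0] * b for _ in range(r)]

def update(i):
    """Process one stream item i: O(r) counter updates."""
    for row in range(r):
        table[row][bucket[row][i]] += sign[row][i]

def estimate():
    """Median over rows of the per-row F_2 estimates."""
    ests = sorted(sum(c * c for c in row) for row in table)
    return ests[r // 2]

stream = [random.randrange(50) for _ in range(20000)]
for item in stream:
    update(item)

f2_true = sum(stream.count(i) ** 2 for i in set(stream))
print(estimate(), f2_true)      # the two should agree to within ~10-20%
```

With $b$ buckets per row, each row's estimate has relative standard deviation about $\sqrt{2/b}$, and the median over rows boosts the success probability.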

Quantum Complexity

Quantum TCS

Complexity theory studies the quantitative relations between different resources (e.g., time, space, entanglement, circuit size, number of samples). I’m interested in studying various quantum systems through a complexity-theoretic lens and revealing new physical insights.

[ ]

Quantum Meets the Minimum Circuit Size Problem
with Nai-Hui Chia, Jiayu Zhang, Ruizhe Zhang.
Innovations in Theoretical Computer Science Conference (ITCS 2022)
[arxiv] [eccc] [conference version] [slides] [ ] [ ]

@inproceedings{CCZZ21,
  title={Quantum Meets the Minimum Circuit Size Problem},
  author={Chia, Nai-Hui and Chou, Chi-Ning and Zhang, Jiayu and Zhang, Ruizhe},
  booktitle={13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  year={2022}
}

In this work, we initiate the study of the Minimum Circuit Size Problem (MCSP) in the quantum setting. MCSP is the problem of computing the circuit complexity of Boolean functions. It is a fascinating problem in complexity theory---its hardness is mysterious, and a better understanding of its hardness can have surprising implications for many fields in computer science.

We first define and investigate the basic complexity-theoretic properties of minimum quantum circuit size problems for three natural objects: Boolean functions, unitaries, and quantum states. We show that these problems are not trivially in NP but in QCMA (or have QCMA protocols). Next, we explore the relations between the three quantum MCSPs and their variants. We discover that some reductions that are not known for classical MCSP exist for quantum MCSPs for unitaries and states, e.g., search-to-decision reduction and self-reduction. Finally, we systematically generalize results known for classical MCSP to the quantum setting (including quantum cryptography, quantum learning theory, quantum circuit lower bounds, and quantum fine-grained complexity) and also find new connections to tomography and quantum gravity. Due to the fundamental differences between classical and quantum circuits, most of our results require extra care and reveal properties and phenomena unique to the quantum setting. Our findings could be of interest for future studies, and we post several open problems for further exploration along this direction.

Algorithmic Neuroscience

Neuroscience TCS

Can algorithmic thinking deepen our understanding of the brain? In this new research direction, which I call “algorithmic neuroscience”, we study neuroscience by modeling systems and circuits as performing certain computations and by exploring the underlying algorithmic ideas.

[ ]

ODE-Inspired Analysis for the Biological Version of Oja’s Rule in Solving Streaming PCA
with Mien Brabeeba Wang.
Conference on Learning Theory (COLT 2020).
[arxiv] [conference version (extended abstract)] [slides] [ ] [ ]

@InProceedings{CW20,
  title = 	 {ODE-Inspired Analysis for the Biological Version of Oja{'}s Rule in Solving Streaming PCA},
  author = 	 {Chou, Chi-Ning and Wang, Mien Brabeeba},
  booktitle = 	 {Proceedings of Thirty Third Conference on Learning Theory (COLT 2020)},
  pages = 	 {1339--1343},
  year = 	 {2020},
  editor = 	 {Abernethy, Jacob and Agarwal, Shivani},
  volume = 	 {125},
  series = 	 {Proceedings of Machine Learning Research},
  month = 	 {09--12 Jul},
  publisher = 	 {PMLR},
  pdf = 	 {http://proceedings.mlr.press/v125/chou20a/chou20a.pdf},
  url = 	 {http://proceedings.mlr.press/v125/chou20a.html},
  archivePrefix = "arXiv",
  eprint = "1911.02363",
  primaryClass={q-bio.NC}
}

Oja's rule [Oja, Journal of mathematical biology 1982] is a well-known biologically-plausible algorithm using a Hebbian-type synaptic update rule to solve streaming principal component analysis (PCA). Computational neuroscientists have known that this biological version of Oja's rule converges to the top eigenvector of the covariance matrix of the input in the limit. However, prior to this work, no convergence rate guarantee had been proven.

In this work, we give the first convergence rate analysis for the biological version of Oja's rule in solving streaming PCA. Moreover, our convergence rate matches the information theoretical lower bound up to logarithmic factors and outperforms the state-of-the-art upper bound for streaming PCA. Furthermore, we develop a novel framework inspired by ordinary differential equations (ODE) to analyze general stochastic dynamics. The framework abandons the traditional step-by-step analysis and instead analyzes a stochastic dynamic in one-shot by giving a closed-form solution to the entire dynamic. The one-shot framework allows us to apply stopping time and martingale techniques to have a flexible and precise control on the dynamic. We believe that this general framework is powerful and should lead to effective yet simple analysis for a large class of problems with stochastic dynamics.
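For reference, the rule analyzed here is the simple Hebbian update $w \leftarrow w + \eta\, y(x - yw)$ with $y = \langle w, x\rangle$. A toy run on synthetic streaming data (the data model, step size, and horizon below are illustrative assumptions, not the paper's setting):

```python
import numpy as np

# Oja's rule on a synthetic stream: inputs have strong variance along the
# planted direction e_1 and isotropic noise elsewhere, so the top eigenvector
# of the covariance is e_1. The weight vector w should align with it.

rng = np.random.default_rng(1)
d = 20
top = np.zeros(d); top[0] = 1.0      # planted principal direction e_1

w = rng.normal(size=d)
w /= np.linalg.norm(w)
eta = 0.01                           # illustrative step size

for _ in range(20000):
    x = 3.0 * rng.normal() * top + 0.5 * rng.normal(size=d)  # stream sample
    y = w @ x
    w += eta * y * (x - y * w)       # Hebbian term plus Oja's decay

alignment = abs(w @ top) / np.linalg.norm(w)
print(alignment)                     # close to 1: w aligns with e_1
```

The decay term $-\eta y^2 w$ keeps $\|w\|$ near 1 without any explicit normalization step, which is what makes the rule biologically plausible.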

On the Algorithmic Power of Spiking Neural Networks
with Kai-Min Chung, Chi-Jen Lu
Innovations in Theoretical Computer Science (ITCS 2019).
[arxiv] [conference version] [slides] [poster] [ ] [ ]

@InProceedings{CCL19,
  author ={Chi-Ning Chou and Kai-Min Chung and Chi-Jen Lu},
  title ={On the Algorithmic Power of Spiking Neural Networks},
  booktitle ={10th Innovations in Theoretical Computer Science  Conference (ITCS 2019)},
  pages ={26:1--26:20},
  year ={2019},
  URL ={http://drops.dagstuhl.de/opus/volltexte/2018/10119},
  archivePrefix = "arXiv",
  eprint = "1803.10375",
  primaryClass={cs.NE}
}

Photo by Pennstatenews.

Spiking neural networks (SNNs) are mathematical models for biological neural networks such as our brain. In this work, we study SNNs through the lens of algorithms. In particular, we show that the firing rate of the integrate-and-fire SNN can efficiently solve the non-negative least squares problem and $\ell_1$ minimization problem. Further, our proof gives new interpretations on the integrate-and-fire SNN where the external charging and spiking effects can be viewed as gradient and projection respectively in the dual space.

Spiking Neural Networks

Neuroscience

One of the main differences between biological neural networks and artificial neural networks (ANNs) is the way neurons interact with each other: the former use instantaneous spikes, while the latter use continuous (digital) signals. The main research questions I’m exploring are: what are the fundamental differences between spiking neural networks (SNNs) and ANNs, and what different roles can spikes play in the brain? On the practical side, I’m also very interested in demonstrating a spiking computational advantage, e.g., through neuromorphic computing.

[ ]

A Superconducting Nanowire-based Architecture for Neuromorphic Computing
Andres E. Lombo, Jesus E. Lares, Matteo Castellani, Chi-Ning Chou, Nancy Lynch, Karl K. Berggren.
Neuromorphic Computing and Engineering (2022)
NCE Highlights of 2022
[journal version] [arxiv] [ ] [ ]

@article{LLCCLB22,
  title={A Superconducting Nanowire-based Architecture for Neuromorphic Computing},
  author={Lombo, Andres E. and Lares, Jesus E. and Castellani, Matteo and Chou, Chi-Ning and Lynch, Nancy and Berggren, Karl K.},
  journal={Neuromorphic Computing and Engineering},
  year={2022},
  publisher={IOP Publishing},
  url={http://iopscience.iop.org/article/10.1088/2634-4386/ac86ef}
}

Neuromorphic computing is poised to further the success of software-based neural networks by utilizing improved customized hardware. However, the translation of neuromorphic algorithms to hardware specifications is a problem that has been seldom explored. Building superconducting neuromorphic systems requires extensive expertise in both superconducting physics and theoretical neuroscience. In this work, we aim to bridge this gap by presenting a tool and methodology to translate algorithmic parameters into circuit specifications. We first show the correspondence between theoretical neuroscience models and the dynamics of our circuit topologies. We then apply this tool to solve linear systems by implementing a spiking neural network with our superconducting nanowire-based hardware.

On the Algorithmic Power of Spiking Neural Networks
with Kai-Min Chung, Chi-Jen Lu
Innovations in Theoretical Computer Science (ITCS 2019).
[arxiv] [conference version] [slides] [poster] [ ] [ ]

@InProceedings{CCL19,
  author ={Chi-Ning Chou and Kai-Min Chung and Chi-Jen Lu},
  title ={On the Algorithmic Power of Spiking Neural Networks},
  booktitle ={10th Innovations in Theoretical Computer Science  Conference (ITCS 2019)},
  pages ={26:1--26:20},
  year ={2019},
  URL ={http://drops.dagstuhl.de/opus/volltexte/2018/10119},
  archivePrefix = "arXiv",
  eprint = "1803.10375",
  primaryClass={cs.NE}
}

Photo by Pennstatenews.

Spiking neural networks (SNNs) are mathematical models for biological neural networks such as our brain. In this work, we study SNNs through the lens of algorithms. In particular, we show that the firing rate of the integrate-and-fire SNN can efficiently solve the non-negative least squares problem and $\ell_1$ minimization problem. Further, our proof gives new interpretations on the integrate-and-fire SNN where the external charging and spiking effects can be viewed as gradient and projection respectively in the dual space.

Circuit Complexity

TCS

Circuits are one of the fundamental computational models in the theory of computation. I’m interested in various circuit models (e.g., boolean circuits, arithmetic circuits, quantum circuits, and their special variants) and in studying the relation between circuit complexity and other aspects of computational problems.

[ ]

Closure Results for Polynomial Factorization
with Mrinal Kumar, Noam Solomon
Theory of Computing (2019).
[journal version] [slides] [video] [ ] [ ]

@article{CKS19J,
  author = {Chou, Chi-Ning and Kumar, Mrinal and Solomon, Noam},
  title = {Closure Results for Polynomial Factorization},
  year = {2019},
  pages = {1--34},
  doi = {10.4086/toc.2019.v015a013},
  publisher = {Theory of Computing},
  journal = {Theory of Computing},
  volume = {15},
  number = {13},
  URL = {http://www.theoryofcomputing.org/articles/v015a013},
  archivePrefix = "arXiv",
  eprint = "1803.05933",
  primaryClass={cs.CC}
}

In a sequence of fundamental results in the 1980s, Kaltofen (SICOMP 1985, STOC'86, STOC'87, RANDOM'89) showed that factors of multivariate polynomials with small arithmetic circuits have small arithmetic circuits. In other words, the complexity class $\VP$ is closed under taking factors. A natural question in this context is to understand if other natural classes of multivariate polynomials, for instance, arithmetic formulas, algebraic branching programs, bounded-depth arithmetic circuits or the class $\VNP$, are closed under taking factors.

In this paper, we show that all factors of degree $\log^a n$ of polynomials with $\poly(n)$-size depth-$k$ circuits have $\poly(n)$-size circuits of depth $O(k + a)$. This partially answers a question of Shpilka--Yehudayoff (Found. Trends in TCS, 2010) and has applications to hardness-randomness tradeoffs for bounded-depth arithmetic circuits.

As direct applications of our techniques, we also obtain simple proofs of the following results.

$\bullet$ The complexity class $\VNP$ is closed under taking factors. This confirms Conjecture 2.1 in Bürgisser's monograph (2000) and improves upon a recent result of Dutta, Saxena and Sinhababu (STOC'18) who showed a quasipolynomial upper bound on the number of auxiliary variables and the complexity of the verifier circuit of factors of polynomials in $\VNP$.

$\bullet$ A factor of degree $d$ of a polynomial $P$ which can be computed by an arithmetic formula (or an algebraic branching program) of size $s$ has a formula (an algebraic branching program, resp.) of size $\poly(s, d^{\log d}, \deg(P))$. This result was first shown by Dutta et al. (STOC'18) and we obtain a slightly different proof as an easy consequence of our techniques.

Our proofs rely on a combination of the ideas, based on Hensel lifting, developed in the polynomial factoring literature, and the depth-reduction results for arithmetic circuits, and hold over fields of characteristic zero or of sufficiently large characteristic.

Closure of $\VP$ under taking factors: a short and simple proof
with Mrinal Kumar, Noam Solomon.
Manuscript
[arxiv] [eccc] [ ] [ ]

@unpublished{CKS19,
  title={Closure of VP under taking factors: a short and simple proof},
  author={Chou, Chi-Ning and Kumar, Mrinal and Solomon, Noam},
  note={In submission to Theory of Computing},
  eprint={1903.02366},
  archivePrefix={arXiv},
  primaryClass={cs.CC}
}

In this note, we give a short, simple, and almost completely self-contained proof of a classical result of Kaltofen which shows that if an $n$-variate degree-$d$ polynomial $f$ can be computed by an arithmetic circuit of size $s$, then each of its factors can be computed by an arithmetic circuit of size at most $\poly\left(s, n, d\right)$.

However, unlike Kaltofen's argument, our proof does not directly give an efficient algorithm for computing the circuits for the factors of $f$.

Hardness vs Randomness for Bounded Depth Arithmetic Circuits
with Mrinal Kumar, Noam Solomon
Computational Complexity Conference (CCC 2018).
[arxiv] [eccc] [conference version] [slides] [ ] [ ]

@InProceedings{CKS18,
  author ={Chi-Ning Chou and Mrinal Kumar and Noam Solomon},
  title ={Hardness vs Randomness for Bounded Depth Arithmetic Circuits},
  booktitle ={33rd Computational Complexity Conference (CCC 2018)},
  pages ={13:1--13:17},
  ISSN ={1868-8969},
  year ={2018},
  URL ={http://drops.dagstuhl.de/opus/volltexte/2018/8876},
  URN ={urn:nbn:de:0030-drops-88765},
  annote ={Keywords: Algebraic Complexity, Polynomial Factorization Circuit Lower Bounds, Polynomial Identity Testing}
}

In this paper, we study the question of hardness-randomness tradeoffs for bounded depth arithmetic circuits. We show that if there is a family of explicit polynomials $\{f_n\}$, where $f_n$ is of degree $O(\log^2n/\log^2\log n)$ in $n$ variables such that $f_n$ cannot be computed by depth-$\Delta$ arithmetic circuits of size $\poly(n)$, then there is a deterministic sub-exponential time algorithm for polynomial identity testing of arithmetic circuits of depth $\Delta-5$.

This is incomparable to a beautiful result of Dvir et al. [SICOMP, 2009], where they showed that super-polynomial lower bounds for depth $\Delta$ circuits for any explicit family of polynomials (of potentially high degree) imply sub-exponential time deterministic PIT for depth $\Delta-5$ circuits of bounded individual degree. Thus, we remove the ``bounded individual degree" condition in the work of Dvir et al. at the cost of strengthening the hardness assumption to hold for polynomials of low degree.

The key technical ingredient of our proof is the following property of roots of polynomials computable by a bounded depth arithmetic circuit: if $f(x_1, x_2, \ldots, x_n)$ and $P(x_1, x_2, \ldots, x_n, y)$ are polynomials of degree $d$ and $r$ respectively, such that $P$ can be computed by a circuit of size $s$ and depth $\Delta$ and $P(x_1, x_2, \ldots, x_n, f) \equiv 0$, then $f$ can be computed by a circuit of size $\poly(n, s, r, d^{O(\sqrt{d})})$ and depth $\Delta + 3$. In comparison, Dvir et al. showed that $f$ can be computed by a circuit of depth $\Delta + 3$ and size $\poly(n, s, r, d^{t})$, where $t$ is the degree of $P$ in $y$. Thus, the size upper bound in the work of Dvir et al. is non-trivial when $t$ is small but $d$ could be large, whereas our size upper bound is non-trivial when $d$ is small, but $t$ could be large.