$ \newcommand{\undefined}{} \newcommand{\hfill}{} \newcommand{\qedhere}{\square} \newcommand{\qed}{\square} \newcommand{\ensuremath}[1]{#1} \newcommand{\bit}{\{0,1\}} \newcommand{\Bit}{\{-1,1\}} \newcommand{\Stab}{\mathbf{Stab}} \newcommand{\NS}{\mathbf{NS}} \newcommand{\ba}{\mathbf{a}} \newcommand{\bc}{\mathbf{c}} \newcommand{\bd}{\mathbf{d}} \newcommand{\be}{\mathbf{e}} \newcommand{\bh}{\mathbf{h}} \newcommand{\br}{\mathbf{r}} \newcommand{\bs}{\mathbf{s}} \newcommand{\bx}{\mathbf{x}} \newcommand{\by}{\mathbf{y}} \newcommand{\bz}{\mathbf{z}} \newcommand{\Var}{\mathbf{Var}} \newcommand{\dist}{\text{dist}} \newcommand{\norm}[1]{\\|#1\\|} \newcommand{\etal} \newcommand{\ie} \newcommand{\eg} \newcommand{\cf} \newcommand{\rank}{\text{rank}} \newcommand{\tr}{\text{tr}} \newcommand{\mor}{\text{Mor}} \newcommand{\hom}{\text{Hom}} \newcommand{\id}{\text{id}} \newcommand{\obj}{\text{obj}} \newcommand{\pr}{\text{pr}} \newcommand{\ker}{\text{ker}} \newcommand{\coker}{\text{coker}} \newcommand{\im}{\text{im}} \newcommand{\vol}{\text{vol}} \newcommand{\disc}{\text{disc}} \newcommand{\bbA}{\mathbb A} \newcommand{\bbB}{\mathbb B} \newcommand{\bbC}{\mathbb C} \newcommand{\bbD}{\mathbb D} \newcommand{\bbE}{\mathbb E} \newcommand{\bbF}{\mathbb F} \newcommand{\bbG}{\mathbb G} \newcommand{\bbH}{\mathbb H} \newcommand{\bbI}{\mathbb I} \newcommand{\bbJ}{\mathbb J} \newcommand{\bbK}{\mathbb K} \newcommand{\bbL}{\mathbb L} \newcommand{\bbM}{\mathbb M} \newcommand{\bbN}{\mathbb N} \newcommand{\bbO}{\mathbb O} \newcommand{\bbP}{\mathbb P} \newcommand{\bbQ}{\mathbb Q} \newcommand{\bbR}{\mathbb R} \newcommand{\bbS}{\mathbb S} \newcommand{\bbT}{\mathbb T} \newcommand{\bbU}{\mathbb U} \newcommand{\bbV}{\mathbb V} \newcommand{\bbW}{\mathbb W} \newcommand{\bbX}{\mathbb X} \newcommand{\bbY}{\mathbb Y} \newcommand{\bbZ}{\mathbb Z} \newcommand{\sA}{\mathscr A} \newcommand{\sB}{\mathscr B} \newcommand{\sC}{\mathscr C} \newcommand{\sD}{\mathscr D} \newcommand{\sE}{\mathscr E} \newcommand{\sF}{\mathscr 
F} \newcommand{\sG}{\mathscr G} \newcommand{\sH}{\mathscr H} \newcommand{\sI}{\mathscr I} \newcommand{\sJ}{\mathscr J} \newcommand{\sK}{\mathscr K} \newcommand{\sL}{\mathscr L} \newcommand{\sM}{\mathscr M} \newcommand{\sN}{\mathscr N} \newcommand{\sO}{\mathscr O} \newcommand{\sP}{\mathscr P} \newcommand{\sQ}{\mathscr Q} \newcommand{\sR}{\mathscr R} \newcommand{\sS}{\mathscr S} \newcommand{\sT}{\mathscr T} \newcommand{\sU}{\mathscr U} \newcommand{\sV}{\mathscr V} \newcommand{\sW}{\mathscr W} \newcommand{\sX}{\mathscr X} \newcommand{\sY}{\mathscr Y} \newcommand{\sZ}{\mathscr Z} \newcommand{\sfA}{\mathsf A} \newcommand{\sfB}{\mathsf B} \newcommand{\sfC}{\mathsf C} \newcommand{\sfD}{\mathsf D} \newcommand{\sfE}{\mathsf E} \newcommand{\sfF}{\mathsf F} \newcommand{\sfG}{\mathsf G} \newcommand{\sfH}{\mathsf H} \newcommand{\sfI}{\mathsf I} \newcommand{\sfJ}{\mathsf J} \newcommand{\sfK}{\mathsf K} \newcommand{\sfL}{\mathsf L} \newcommand{\sfM}{\mathsf M} \newcommand{\sfN}{\mathsf N} \newcommand{\sfO}{\mathsf O} \newcommand{\sfP}{\mathsf P} \newcommand{\sfQ}{\mathsf Q} \newcommand{\sfR}{\mathsf R} \newcommand{\sfS}{\mathsf S} \newcommand{\sfT}{\mathsf T} \newcommand{\sfU}{\mathsf U} \newcommand{\sfV}{\mathsf V} \newcommand{\sfW}{\mathsf W} \newcommand{\sfX}{\mathsf X} \newcommand{\sfY}{\mathsf Y} \newcommand{\sfZ}{\mathsf Z} \newcommand{\cA}{\mathcal A} \newcommand{\cB}{\mathcal B} \newcommand{\cC}{\mathcal C} \newcommand{\cD}{\mathcal D} \newcommand{\cE}{\mathcal E} \newcommand{\cF}{\mathcal F} \newcommand{\cG}{\mathcal G} \newcommand{\cH}{\mathcal H} \newcommand{\cI}{\mathcal I} \newcommand{\cJ}{\mathcal J} \newcommand{\cK}{\mathcal K} \newcommand{\cL}{\mathcal L} \newcommand{\cM}{\mathcal M} \newcommand{\cN}{\mathcal N} \newcommand{\cO}{\mathcal O} \newcommand{\cP}{\mathcal P} \newcommand{\cQ}{\mathcal Q} \newcommand{\cR}{\mathcal R} \newcommand{\cS}{\mathcal S} \newcommand{\cT}{\mathcal T} \newcommand{\cU}{\mathcal U} \newcommand{\cV}{\mathcal V} 
\newcommand{\cW}{\mathcal W} \newcommand{\cX}{\mathcal X} \newcommand{\cY}{\mathcal Y} \newcommand{\cZ}{\mathcal Z} \newcommand{\bfA}{\mathbf A} \newcommand{\bfB}{\mathbf B} \newcommand{\bfC}{\mathbf C} \newcommand{\bfD}{\mathbf D} \newcommand{\bfE}{\mathbf E} \newcommand{\bfF}{\mathbf F} \newcommand{\bfG}{\mathbf G} \newcommand{\bfH}{\mathbf H} \newcommand{\bfI}{\mathbf I} \newcommand{\bfJ}{\mathbf J} \newcommand{\bfK}{\mathbf K} \newcommand{\bfL}{\mathbf L} \newcommand{\bfM}{\mathbf M} \newcommand{\bfN}{\mathbf N} \newcommand{\bfO}{\mathbf O} \newcommand{\bfP}{\mathbf P} \newcommand{\bfQ}{\mathbf Q} \newcommand{\bfR}{\mathbf R} \newcommand{\bfS}{\mathbf S} \newcommand{\bfT}{\mathbf T} \newcommand{\bfU}{\mathbf U} \newcommand{\bfV}{\mathbf V} \newcommand{\bfW}{\mathbf W} \newcommand{\bfX}{\mathbf X} \newcommand{\bfY}{\mathbf Y} \newcommand{\bfZ}{\mathbf Z} \newcommand{\rmA}{\mathrm A} \newcommand{\rmB}{\mathrm B} \newcommand{\rmC}{\mathrm C} \newcommand{\rmD}{\mathrm D} \newcommand{\rmE}{\mathrm E} \newcommand{\rmF}{\mathrm F} \newcommand{\rmG}{\mathrm G} \newcommand{\rmH}{\mathrm H} \newcommand{\rmI}{\mathrm I} \newcommand{\rmJ}{\mathrm J} \newcommand{\rmK}{\mathrm K} \newcommand{\rmL}{\mathrm L} \newcommand{\rmM}{\mathrm M} \newcommand{\rmN}{\mathrm N} \newcommand{\rmO}{\mathrm O} \newcommand{\rmP}{\mathrm P} \newcommand{\rmQ}{\mathrm Q} \newcommand{\rmR}{\mathrm R} \newcommand{\rmS}{\mathrm S} \newcommand{\rmT}{\mathrm T} \newcommand{\rmU}{\mathrm U} \newcommand{\rmV}{\mathrm V} \newcommand{\rmW}{\mathrm W} \newcommand{\rmX}{\mathrm X} \newcommand{\rmY}{\mathrm Y} \newcommand{\rmZ}{\mathrm Z} \newcommand{\bb}{\mathbf{b}} \newcommand{\bv}{\mathbf{v}} \newcommand{\bw}{\mathbf{w}} \newcommand{\bx}{\mathbf{x}} \newcommand{\by}{\mathbf{y}} \newcommand{\bz}{\mathbf{z}} \newcommand{\paren}[1]{( #1 )} \newcommand{\Paren}[1]{\left( #1 \right)} \newcommand{\bigparen}[1]{\bigl( #1 \bigr)} \newcommand{\Bigparen}[1]{\Bigl( #1 \Bigr)} \newcommand{\biggparen}[1]{\biggl( #1 
\biggr)} \newcommand{\Biggparen}[1]{\Biggl( #1 \Biggr)} \newcommand{\abs}[1]{\lvert #1 \rvert} \newcommand{\Abs}[1]{\left\lvert #1 \right\rvert} \newcommand{\bigabs}[1]{\bigl\lvert #1 \bigr\rvert} \newcommand{\Bigabs}[1]{\Bigl\lvert #1 \Bigr\rvert} \newcommand{\biggabs}[1]{\biggl\lvert #1 \biggr\rvert} \newcommand{\Biggabs}[1]{\Biggl\lvert #1 \Biggr\rvert} \newcommand{\card}[1]{\left| #1 \right|} \newcommand{\Card}[1]{\left\lvert #1 \right\rvert} \newcommand{\bigcard}[1]{\bigl\lvert #1 \bigr\rvert} \newcommand{\Bigcard}[1]{\Bigl\lvert #1 \Bigr\rvert} \newcommand{\biggcard}[1]{\biggl\lvert #1 \biggr\rvert} \newcommand{\Biggcard}[1]{\Biggl\lvert #1 \Biggr\rvert} \newcommand{\norm}[1]{\lVert #1 \rVert} \newcommand{\Norm}[1]{\left\lVert #1 \right\rVert} \newcommand{\bignorm}[1]{\bigl\lVert #1 \bigr\rVert} \newcommand{\Bignorm}[1]{\Bigl\lVert #1 \Bigr\rVert} \newcommand{\biggnorm}[1]{\biggl\lVert #1 \biggr\rVert} \newcommand{\Biggnorm}[1]{\Biggl\lVert #1 \Biggr\rVert} \newcommand{\iprod}[1]{\langle #1 \rangle} \newcommand{\Iprod}[1]{\left\langle #1 \right\rangle} \newcommand{\bigiprod}[1]{\bigl\langle #1 \bigr\rangle} \newcommand{\Bigiprod}[1]{\Bigl\langle #1 \Bigr\rangle} \newcommand{\biggiprod}[1]{\biggl\langle #1 \biggr\rangle} \newcommand{\Biggiprod}[1]{\Biggl\langle #1 \Biggr\rangle} \newcommand{\set}[1]{\lbrace #1 \rbrace} \newcommand{\Set}[1]{\left\lbrace #1 \right\rbrace} \newcommand{\bigset}[1]{\bigl\lbrace #1 \bigr\rbrace} \newcommand{\Bigset}[1]{\Bigl\lbrace #1 \Bigr\rbrace} \newcommand{\biggset}[1]{\biggl\lbrace #1 \biggr\rbrace} \newcommand{\Biggset}[1]{\Biggl\lbrace #1 \Biggr\rbrace} \newcommand{\bracket}[1]{\lbrack #1 \rbrack} \newcommand{\Bracket}[1]{\left\lbrack #1 \right\rbrack} \newcommand{\bigbracket}[1]{\bigl\lbrack #1 \bigr\rbrack} \newcommand{\Bigbracket}[1]{\Bigl\lbrack #1 \Bigr\rbrack} \newcommand{\biggbracket}[1]{\biggl\lbrack #1 \biggr\rbrack} \newcommand{\Biggbracket}[1]{\Biggl\lbrack #1 \Biggr\rbrack} \newcommand{\ucorner}[1]{\ulcorner #1 
\urcorner} \newcommand{\Ucorner}[1]{\left\ulcorner #1 \right\urcorner} \newcommand{\bigucorner}[1]{\bigl\ulcorner #1 \bigr\urcorner} \newcommand{\Bigucorner}[1]{\Bigl\ulcorner #1 \Bigr\urcorner} \newcommand{\biggucorner}[1]{\biggl\ulcorner #1 \biggr\urcorner} \newcommand{\Biggucorner}[1]{\Biggl\ulcorner #1 \Biggr\urcorner} \newcommand{\ceil}[1]{\lceil #1 \rceil} \newcommand{\Ceil}[1]{\left\lceil #1 \right\rceil} \newcommand{\bigceil}[1]{\bigl\lceil #1 \bigr\rceil} \newcommand{\Bigceil}[1]{\Bigl\lceil #1 \Bigr\rceil} \newcommand{\biggceil}[1]{\biggl\lceil #1 \biggr\rceil} \newcommand{\Biggceil}[1]{\Biggl\lceil #1 \Biggr\rceil} \newcommand{\floor}[1]{\lfloor #1 \rfloor} \newcommand{\Floor}[1]{\left\lfloor #1 \right\rfloor} \newcommand{\bigfloor}[1]{\bigl\lfloor #1 \bigr\rfloor} \newcommand{\Bigfloor}[1]{\Bigl\lfloor #1 \Bigr\rfloor} \newcommand{\biggfloor}[1]{\biggl\lfloor #1 \biggr\rfloor} \newcommand{\Biggfloor}[1]{\Biggl\lfloor #1 \Biggr\rfloor} \newcommand{\lcorner}[1]{\llcorner #1 \lrcorner} \newcommand{\Lcorner}[1]{\left\llcorner #1 \right\lrcorner} \newcommand{\biglcorner}[1]{\bigl\llcorner #1 \bigr\lrcorner} \newcommand{\Biglcorner}[1]{\Bigl\llcorner #1 \Bigr\lrcorner} \newcommand{\bigglcorner}[1]{\biggl\llcorner #1 \biggr\lrcorner} \newcommand{\Bigglcorner}[1]{\Biggl\llcorner #1 \Biggr\lrcorner} \newcommand{\ket}[1]{| #1 \rangle} \newcommand{\bra}[1]{\langle #1 |} \newcommand{\braket}[2]{\langle #1 | #2 \rangle} \newcommand{\ketbra}[1]{| #1 \rangle\langle #1 |} \newcommand{\e}{\varepsilon} \newcommand{\eps}{\varepsilon} \newcommand{\from}{\colon} \newcommand{\super}[2]{#1^{(#2)}} \newcommand{\varsuper}[2]{#1^{\scriptscriptstyle (#2)}} \newcommand{\tensor}{\otimes} \newcommand{\eset}{\emptyset} \newcommand{\sse}{\subseteq} \newcommand{\sst}{\substack} \newcommand{\ot}{\otimes} \newcommand{\Esst}[1]{\bbE_{\substack{#1}}} \newcommand{\vbig}{\vphantom{\bigoplus}} \newcommand{\seteq}{\mathrel{\mathop:}=} \newcommand{\defeq}{\stackrel{\mathrm{def}}=} 
\newcommand{\Mid}{\mathrel{}\middle|\mathrel{}} \newcommand{\Ind}{\mathbf 1} \newcommand{\bits}{\{0,1\}} \newcommand{\sbits}{\{\pm 1\}} \newcommand{\R}{\mathbb R} \newcommand{\Rnn}{\R_{\ge 0}} \newcommand{\N}{\mathbb N} \newcommand{\Z}{\mathbb Z} \newcommand{\Q}{\mathbb Q} \newcommand{\C}{\mathbb C} \newcommand{\A}{\mathbb A} \newcommand{\Real}{\mathbb R} \newcommand{\mper}{\,.} \newcommand{\mcom}{\,,} \DeclareMathOperator{\Id}{Id} \DeclareMathOperator{\cone}{cone} \DeclareMathOperator{\vol}{vol} \DeclareMathOperator{\val}{val} \DeclareMathOperator{\opt}{opt} \DeclareMathOperator{\Opt}{Opt} \DeclareMathOperator{\Val}{Val} \DeclareMathOperator{\LP}{LP} \DeclareMathOperator{\SDP}{SDP} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\Inf}{Inf} \DeclareMathOperator{\size}{size} \DeclareMathOperator{\poly}{poly} \DeclareMathOperator{\polylog}{polylog} \DeclareMathOperator{\min}{min} \DeclareMathOperator{\max}{max} \DeclareMathOperator{\argmax}{arg\,max} \DeclareMathOperator{\argmin}{arg\,min} \DeclareMathOperator{\qpoly}{qpoly} \DeclareMathOperator{\qqpoly}{qqpoly} \DeclareMathOperator{\conv}{conv} \DeclareMathOperator{\Conv}{Conv} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\perm}{perm} \DeclareMathOperator{\mspan}{span} \DeclareMathOperator{\mrank}{rank} \DeclareMathOperator{\E}{\mathbb E} \DeclareMathOperator{\pE}{\tilde{\mathbb E}} \DeclareMathOperator{\Pr}{\mathbb P} \DeclareMathOperator{\Span}{Span} \DeclareMathOperator{\Cone}{Cone} \DeclareMathOperator{\junta}{junta} \DeclareMathOperator{\NSS}{NSS} \DeclareMathOperator{\SA}{SA} \DeclareMathOperator{\SOS}{SOS} \DeclareMathOperator{\Stab}{\mathbf Stab} \DeclareMathOperator{\Det}{\textbf{Det}} \DeclareMathOperator{\Perm}{\textbf{Perm}} \DeclareMathOperator{\Sym}{\textbf{Sym}} \DeclareMathOperator{\Pow}{\textbf{Pow}} \DeclareMathOperator{\Gal}{\textbf{Gal}} \DeclareMathOperator{\Aut}{\textbf{Aut}} \newcommand{\iprod}[1]{\langle #1 \rangle} \newcommand{\cE}{\mathcal{E}} 
\newcommand{\E}{\mathbb{E}} \newcommand{\pE}{\tilde{\mathbb{E}}} \newcommand{\N}{\mathbb{N}} \renewcommand{\P}{\mathcal{P}} \notag $
$ \newcommand{\sleq}{\ensuremath{\preceq}} \newcommand{\sgeq}{\ensuremath{\succeq}} \newcommand{\diag}{\ensuremath{\mathrm{diag}}} \newcommand{\support}{\ensuremath{\mathrm{support}}} \newcommand{\zo}{\ensuremath{\{0,1\}}} \newcommand{\pmo}{\ensuremath{\{\pm 1\}}} \newcommand{\uppersos}{\ensuremath{\overline{\mathrm{sos}}}} \newcommand{\lambdamax}{\ensuremath{\lambda_{\mathrm{max}}}} \newcommand{\rank}{\ensuremath{\mathrm{rank}}} \newcommand{\Mslow}{\ensuremath{M_{\mathrm{slow}}}} \newcommand{\Mfast}{\ensuremath{M_{\mathrm{fast}}}} \newcommand{\Mdiag}{\ensuremath{M_{\mathrm{diag}}}} \newcommand{\Mcross}{\ensuremath{M_{\mathrm{cross}}}} \newcommand{\eqdef}{\ensuremath{ =^{def}}} \newcommand{\threshold}{\ensuremath{\mathrm{threshold}}} \newcommand{\vbls}{\ensuremath{\mathrm{vbls}}} \newcommand{\cons}{\ensuremath{\mathrm{cons}}} \newcommand{\edges}{\ensuremath{\mathrm{edges}}} \newcommand{\cl}{\ensuremath{\mathrm{cl}}} \newcommand{\xor}{\ensuremath{\oplus}} \newcommand{\1}{\ensuremath{\mathrm{1}}} \notag $
$ \newcommand{\transpose}[1]{\ensuremath{#1{}^{\mkern-2mu\intercal}}} \newcommand{\dyad}[1]{\ensuremath{#1#1{}^{\mkern-2mu\intercal}}} \newcommand{\nchoose}[1]{\ensuremath} \newcommand{\generated}[1]{\ensuremath{\langle #1 \rangle}} \notag $
$ \newcommand{\eqdef}{\mathbin{\stackrel{\rm def}{=}}} \newcommand{\R} % real numbers \newcommand{\N}} % natural numbers \newcommand{\Z} % integers \newcommand{\F} % a field \newcommand{\Q} % the rationals \newcommand{\C}{\mathbb{C}} % the complexes \newcommand{\poly}} \newcommand{\polylog}} \newcommand{\loglog}}} \newcommand{\zo}{\{0,1\}} \newcommand{\suchthat} \newcommand{\pr}[1]{\Pr\left[#1\right]} \newcommand{\deffont}{\em} \newcommand{\getsr}{\mathbin{\stackrel{\mbox{\tiny R}}{\gets}}} \newcommand{\Exp}{\mathop{\mathrm E}\displaylimits} % expectation \newcommand{\Var}{\mathop{\mathrm Var}\displaylimits} % variance \newcommand{\xor}{\oplus} \newcommand{\GF}{\mathrm{GF}} \newcommand{\eps}{\varepsilon} \notag $
$ \newcommand{\class}[1]{\mathbf{#1}} \newcommand{\coclass}[1]{\mathbf{co\mbox{-}#1}} % and their complements \newcommand{\BPP}{\class{BPP}} \newcommand{\NP}{\class{NP}} \newcommand{\RP}{\class{RP}} \newcommand{\coRP}{\coclass{RP}} \newcommand{\ZPP}{\class{ZPP}} \newcommand{\BQP}{\class{BQP}} \newcommand{\FP}{\class{FP}} \newcommand{\QP}{\class{QuasiP}} \newcommand{\VF}{\class{VF}} \newcommand{\VBP}{\class{VBP}} \newcommand{\VP}{\class{VP}} \newcommand{\VNP}{\class{VNP}} \newcommand{\RNC}{\class{RNC}} \newcommand{\RL}{\class{RL}} \newcommand{\BPL}{\class{BPL}} \newcommand{\coRL}{\coclass{RL}} \newcommand{\IP}{\class{IP}} \newcommand{\AM}{\class{AM}} \newcommand{\MA}{\class{MA}} \newcommand{\QMA}{\class{QMA}} \newcommand{\SBP}{\class{SBP}} \newcommand{\coAM}{\class{coAM}} \newcommand{\coMA}{\class{coMA}} \renewcommand{\P}{\class{P}} \newcommand\prBPP{\class{prBPP}} \newcommand\prRP{\class{prRP}} \newcommand\prP{\class{prP}} \newcommand{\Ppoly}{\class{P/poly}} \newcommand{\NPpoly}{\class{NP/poly}} \newcommand{\coNPpoly}{\class{coNP/poly}} \newcommand{\DTIME}{\class{DTIME}} \newcommand{\TIME}{\class{TIME}} \newcommand{\SIZE}{\class{SIZE}} \newcommand{\SPACE}{\class{SPACE}} \newcommand{\ETIME}{\class{E}} \newcommand{\BPTIME}{\class{BPTIME}} \newcommand{\RPTIME}{\class{RPTIME}} \newcommand{\ZPTIME}{\class{ZPTIME}} \newcommand{\EXP}{\class{EXP}} \newcommand{\ZPEXP}{\class{ZPEXP}} \newcommand{\RPEXP}{\class{RPEXP}} \newcommand{\BPEXP}{\class{BPEXP}} \newcommand{\SUBEXP}{\class{SUBEXP}} \newcommand{\NTIME}{\class{NTIME}} \newcommand{\NL}{\class{NL}} \renewcommand{\L}{\class{L}} \newcommand{\NQP}{\class{NQP}} \newcommand{\NEXP}{\class{NEXP}} \newcommand{\coNEXP}{\coclass{NEXP}} \newcommand{\NPSPACE}{\class{NPSPACE}} \newcommand{\PSPACE}{\class{PSPACE}} \newcommand{\NSPACE}{\class{NSPACE}} \newcommand{\coNSPACE}{\coclass{NSPACE}} \newcommand{\coL}{\coclass{L}} \newcommand{\coP}{\coclass{P}} \newcommand{\coNP}{\coclass{NP}} \newcommand{\coNL}{\coclass{NL}} 
\newcommand{\coNPSPACE}{\coclass{NPSPACE}} \newcommand{\APSPACE}{\class{APSPACE}} \newcommand{\LINSPACE}{\class{LINSPACE}} \newcommand{\qP}{\class{\tilde{P}}} \newcommand{\PH}{\class{PH}} \newcommand{\EXPSPACE}{\class{EXPSPACE}} \newcommand{\SigmaTIME}[1]{\class{\Sigma_{#1}TIME}} \newcommand{\PiTIME}[1]{\class{\Pi_{#1}TIME}} \newcommand{\SigmaP}[1]{\class{\Sigma_{#1}P}} \newcommand{\PiP}[1]{\class{\Pi_{#1}P}} \newcommand{\DeltaP}[1]{\class{\Delta_{#1}P}} \newcommand{\ATIME}{\class{ATIME}} \newcommand{\ASPACE}{\class{ASPACE}} \newcommand{\AP}{\class{AP}} \newcommand{\AL}{\class{AL}} \newcommand{\APSPACE}{\class{APSPACE}} \newcommand{\VNC}[1]{\class{VNC^{#1}}} \newcommand{\NC}[1]{\class{NC^{#1}}} \newcommand{\AC}[1]{\class{AC^{#1}}} \newcommand{\ACC}[1]{\class{ACC^{#1}}} \newcommand{\TC}[1]{\class{TC^{#1}}} \newcommand{\ShP}{\class{\# P}} \newcommand{\PaP}{\class{\oplus P}} \newcommand{\PCP}{\class{PCP}} \newcommand{\kMIP}[1]{\class{#1\mbox{-}MIP}} \newcommand{\MIP}{\class{MIP}} $
$ \newcommand{\textprob}[1]{\text{#1}} \newcommand{\mathprob}[1]{\textbf{#1}} \newcommand{\Satisfiability}{\textprob{Satisfiability}} \newcommand{\SAT}{\textprob{SAT}} \newcommand{\TSAT}{\textprob{3SAT}} \newcommand{\USAT}{\textprob{USAT}} \newcommand{\UNSAT}{\textprob{UNSAT}} \newcommand{\QPSAT}{\textprob{QPSAT}} \newcommand{\TQBF}{\textprob{TQBF}} \newcommand{\LinProg}{\textprob{Linear Programming}} \newcommand{\LP}{\mathprob{LP}} \newcommand{\Factor}{\textprob{Factoring}} \newcommand{\CircVal}{\textprob{Circuit Value}} \newcommand{\CVAL}{\mathprob{CVAL}} \newcommand{\CircSat}{\textprob{Circuit Satisfiability}} \newcommand{\CSAT}{\textprob{CSAT}} \newcommand{\CycleCovers}{\textprob{Cycle Covers}} \newcommand{\MonCircVal}{\textprob{Monotone Circuit Value}} \newcommand{\Reachability}{\textprob{Reachability}} \newcommand{\Unreachability}{\textprob{Unreachability}} \newcommand{\RCH}{\mathprob{RCH}} \newcommand{\BddHalt}{\textprob{Bounded Halting}} \newcommand{\BH}{\mathprob{BH}} \newcommand{\DiscreteLog}{\textprob{Discrete Log}} \newcommand{\REE}{\mathprob{REE}} \newcommand{\QBF}{\mathprob{QBF}} \newcommand{\MCSP}{\mathprob{MCSP}} \newcommand{\GGEO}{\mathprob{GGEO}} \newcommand{\CKTMIN}{\mathprob{CKT-MIN}} \newcommand{\MINCKT}{\mathprob{MIN-CKT}} \newcommand{\IdentityTest}{\textprob{Identity Testing}} \newcommand{\Majority}{\textprob{Majority}} \newcommand{\CountIndSets}{\textprob{\#Independent Sets}} \newcommand{\Parity}{\textprob{Parity}} \newcommand{\Clique}{\textprob{Clique}} \newcommand{\CountCycles}{\textprob{#Cycles}} \newcommand{\CountPerfMatchings}{\textprob{\#Perfect Matchings}} \newcommand{\CountMatchings}{\textprob{\#Matchings}} \newcommand{\CountMatch}{\mathprob{\#Matchings}} \newcommand{\ECSAT}{\mathprob{E#SAT}} \newcommand{\ShSAT}{\mathprob{#SAT}} \newcommand{\ShTSAT}{\mathprob{#3SAT}} \newcommand{\HamCycle}{\textprob{Hamiltonian Cycle}} \newcommand{\Permanent}{\textprob{Permanent}} \newcommand{\ModPermanent}{\textprob{Modular Permanent}} 
\newcommand{\GraphNoniso}{\textprob{Graph Nonisomorphism}} \newcommand{\GI}{\mathprob{GI}} \newcommand{\GNI}{\mathprob{GNI}} \newcommand{\GraphIso}{\textprob{Graph Isomorphism}} \newcommand{\QuantBoolForm}{\textprob{Quantified Boolean Formulae}} \newcommand{\GenGeography}{\textprob{Generalized Geography}} \newcommand{\MAXTSAT}{\mathprob{Max3SAT}} \newcommand{\GapMaxTSAT}{\mathprob{GapMax3SAT}} \newcommand{\ELIN}{\mathprob{E3LIN2}} \newcommand{\CSP}{\mathprob{CSP}} \newcommand{\Lin}{\mathprob{Lin}} \newcommand{\ONE}{\mathbf{ONE}} \newcommand{\ZERO}{\mathbf{ZERO}} \newcommand{\yes} \newcommand{\no} $

Basic Counting Complexity


Introduction

Motivation

Maximum likelihood Bayes net

$\ShP$ and $\FP$

Now, let’s formally define the complexity classes for counting problems.

Let $f:\bit^{*}\rightarrow\N$. We say $f\in\FP$ if there exists a Turing machine $M$ such that for any $x\in\bit^{*}$, $M(x)=f(x)$ and $M(x)$ terminates in $p(\card{x})$ time, where $p(\cdot)$ is some polynomial.

Let $f:\bit^{*}\rightarrow\N$. We say $f\in\ShP$ if there exists a Turing machine $M:\bit^{*}\times\bit^{*}\rightarrow\bit$ such that for any $x\in\bit^{*}$, $f(x) = \card{\{y\in\bit^{m(\card{x})}:\ M(x,y)=1\}}$, where $m(\cdot)$ is some polynomial and $M(x,y)$ terminates in $p(\card{x})$ time for some polynomial $p(\cdot)$.

Note that $\FP$ and $\ShP$ are counting analogs of the decision complexity classes $\P$ and $\NP$. That is, $\FP$ contains the counting problems that can be computed efficiently, while $\ShP$ contains the problems of counting the witnesses of an efficient verifier.
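As a concrete illustration (my own sketch, not part of the notes), the canonical $\ShP$-style problem counts witnesses of an efficiently checkable predicate. The brute-force counter below enumerates all $2^n$ assignments of a CNF formula; the clause encoding (signed integers for literals) is an assumption of this sketch.

```python
import itertools

def count_sat(clauses, n):
    """Count satisfying assignments of a CNF over variables x1..xn.

    A clause is a list of nonzero ints: +i stands for x_i, -i for (not x_i).
    Brute-force enumeration: O(2^n * |phi|) time.
    """
    count = 0
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or not x3 or x4) and (not x1 or not x2 or x4)
print(count_sat([[1, -3, 4], [-1, -2, 4]], 4))  # prints 12
```

The point of the definitions above is exactly that $\ShP$ membership only requires the per-witness check (the inner `all(...)`) to run in polynomial time; the enumeration itself may take exponential time.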

Is $\ShP$ harder than $\NP$?

If $\CountCycles\in\FP$, then $\P=\NP$.


We prove the theorem by reducing $\HamCycle$ to $\CountCycles$. Namely, we are going to construct a polynomial-time computable mapping $f:\mathcal{G}\rightarrow\mathcal{G}$, where $\mathcal{G}$ is the set of graphs, such that for any graph $G=(V,E)$ with $\card{V}=n$, we have

  • If $G\in\HamCycle$, then the number of cycles in $f(G)\geq2^{n^3}$.
  • If $G\notin\HamCycle$, then the number of cycles in $f(G)<2^{n^3}$.

One can easily see that if $\CountCycles$ is in $\FP$, then we can compute the number of cycles in $f(G)$ in $\poly(n)$ time, i.e., decide whether $G$ has a Hamiltonian cycle in $\poly(n)$ time.

Now, let’s construct $f$ as follows: for any edge $e\in E$, we replace $e$ with the gadget below.

Observe that there are $2^m$ distinct paths from $u$ to $v$ in $f(G)$. That is, when a cycle in $G$ passes through $e$, the cycle is duplicated into $2^m$ distinct cycles in $f(G)$. As we replace every edge in $G$ with the gadget above, a cycle $C$ in $G$ is uniquely mapped to $2^{\card{C}\cdot m}$ cycles in $f(G)$.

Now, we can estimate the number of cycles in $f(G)$.

  • If $G\in\HamCycle$, then there’s a cycle of length $n$ in $G$, thus there are at least $2^{n\cdot m}$ cycles in $f(G)$.
  • If $G\notin\HamCycle$, then every cycle in $G$ has length at most $n-1$, and there are at most $n!$ distinct cycles in $G$. That is, there are at most $n!\cdot2^{(n-1)\cdot m}$ cycles in $f(G)$.

As a result, once $2^{n\cdot m}>n!\cdot2^{(n-1)\cdot m}$, i.e., $2^m>n!$, we can distinguish the two cases. By choosing $m=n^2$, we have $2^{m}>n!$ and thus

  • If $G\in\HamCycle$, then the number of cycles in $f(G)\geq2^{n^3}$.
  • If $G\notin\HamCycle$, then the number of cycles in $f(G)<2^{n^3}$.
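The parameter choice can be sanity-checked numerically. The small script below (my own sketch, not part of the notes) verifies for a range of $n$ that $m=n^2$ makes the Hamiltonian lower bound strictly dominate the non-Hamiltonian upper bound.

```python
import math

# Case split from the reduction, with gadget parameter m = n^2:
#   Hamiltonian:     at least 2^(n*m) cycles in f(G)
#   non-Hamiltonian: at most  n! * 2^((n-1)*m) cycles in f(G)
# Separation holds exactly when 2^m > n!.
for n in range(2, 12):
    m = n * n
    assert 2**m > math.factorial(n)
    assert math.factorial(n) * 2**((n - 1) * m) < 2**(n * m)
```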

$\ShP$-completeness

Let $f:\bit^{*}\rightarrow\N$. We say $f$ is $\ShP$-hard if for any $g\in\ShP$, $g\in\FP^{f}$.

Furthermore, if $f\in\ShP$, then we say $f$ is $\ShP$-complete.

$\ShSAT$ is $\ShP$-complete

Recall that the first problem shown to be $\NP$-complete is the satisfiability problem. So it’s natural to consider $\ShSAT$ as a candidate $\ShP$-complete problem, and it turns out that it is.

$\ShSAT$ is $\ShP$-complete.

The idea is based on an observation about the Cook-Levin theorem. Concretely, one can see that the reduction in the Cook-Levin theorem is parsimonious, i.e., it induces a one-to-one correspondence between witnesses and satisfying assignments, and thus preserves their number.


In the Cook-Levin theorem, we consider an arbitrary problem $L\in\NP$. By definition, there exists a polynomial-time deterministic Turing machine $M$ such that on input $x\in\bit^{*}$ and witness $w\in\bit^{p(\card{x})}$ for some polynomial $p$, $M(x,w)=1$ iff $w$ is a valid witness for $x$.

Next, encode the tableau of $M$ on input $(x,w)$ as an assignment, and turn the local verification of the rules of $M$ into logical gates. We then obtain a circuit $C_x$ whose satisfying assignments are in one-to-one correspondence with the witnesses of $x$: an assignment $A$ satisfies $C_x$ iff its first $p(\card{x})$ bits encode a witness $w$ for $x$ and the remaining bits encode the resulting accepting tableau. That is, the number of satisfying assignments of $C_x$ is exactly the number of witnesses of $x$.

Finally, turn $C_x$ into a boolean formula $\phi_x$ whose satisfying assignments are in one-to-one correspondence with those of $C_x$ (the auxiliary variables introduced for the gates are determined by the wire values of $C_x$). As a result, we conclude that the number of witnesses of $x$ equals the number of satisfying assignments of $\phi_x$. In other words, by solving $\ShSAT$, one can solve any problem in $\ShP$.

Valiant’s theorem

$\Permanent$ is $\ShP$-complete.

At first glance, it seems that $\Permanent$ is not a counting problem. However, if we treat the input matrix $A$ as the adjacency matrix of a graph $G_A$, the permanent of $A$ is exactly the sum of the weights of the cycle covers of $G_A$.

Thus, by constructing special gadgets to reduce $\ShTSAT$ to counting cycle covers, we can prove that $\Permanent$ is $\ShP$-complete.

First, let’s recall the definition of the permanent and see its relation to cycle covers.

Let $A\in\bbR^{n\times n}$. Define the permanent of $A$ as \begin{equation} \perm(A) := \sum_{\sigma\in S_n}\prod_{i\in[n]}A_{i,\sigma(i)}, \end{equation} where $S_n$ is the group of all permutations of $[n]$.

Let $G=(V,E)$ be a graph. We say $C=\{C_1,\dots,C_k\}$ is a cycle cover of $G$ if

  • $C_i$ is a cycle in $G$ for any $i\in[k]$.
  • For any $v\in V$ there exists $C_i$ such that $v\in C_i$.
  • $C_1,\dots,C_k$ are pairwise disjoint.

Observe that each $\sigma\in S_n$ corresponds to a potential cycle cover of a graph on $n$ vertices, and vice versa. Concretely, define $C_{\sigma}$ as follows: \begin{equation} C_{\sigma} := \Bigl\{(i,\sigma(i),\sigma^2(i),\dots,\sigma^{k-1}(i))\ \Big|\ i\in[n],\ k=\min\{t\geq1:\ \sigma^t(i)=i\},\ \text{and there is no } 1\leq j<i \text{ with } \sigma^t(j)=i \text{ for some } t\Bigr\}. \end{equation}

See the following figure as an example.
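The correspondence is just the cycle decomposition of a permutation. A minimal sketch (the function name is my own): given $\sigma$ on $\{0,\dots,n-1\}$, recover $C_\sigma$ by walking each cycle starting from its smallest unvisited element.

```python
def cycle_decomposition(sigma):
    """Return C_sigma: the cycles of a permutation of range(n),
    each written starting from its smallest element."""
    seen, cycles = set(), []
    for i in range(len(sigma)):
        if i in seen:
            continue
        cycle, j = [], i
        while j not in seen:  # walk i -> sigma(i) -> sigma(sigma(i)) -> ...
            seen.add(j)
            cycle.append(j)
            j = sigma[j]
        cycles.append(tuple(cycle))
    return cycles

# sigma maps 0->1, 1->2, 2->0, 3->4, 4->3
print(cycle_decomposition([1, 2, 0, 4, 3]))  # prints [(0, 1, 2), (3, 4)]
```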

From the above observation, we have the following connection between cycle covers and permanent.

Let $A\in\R^{n\times n}$ and $G$ be the weighted graph with adjacency matrix $A$. Then, \begin{equation} \perm(A) = \sum_{C=(C_1,\dots,C_k):\ C\text{ is a cycle cover of }G}\prod_{i\in[k]}\prod_{e=(j_1,j_2)\in C_i}A_{j_1,j_2}. \end{equation}


\begin{align} \perm(A) &= \sum_{\sigma\in S_n}\prod_{i\in[n]}A_{i,\sigma(i)}\\
&=\sum_{\sigma\in S_n,\ C_{\sigma}=(C_1,\dots,C_k)}\prod_{i\in[k]}\prod_{e=(j_1,j_2)\in C_i}A_{j_1,j_2}\\
&= \sum_{C=(C_1,\dots,C_k):\ C\text{ is a cycle cover of }G}\prod_{i\in[k]}\prod_{e=(j_1,j_2)\in C_i}A_{j_1,j_2}. \end{align}
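The lemma can be checked on a small matrix. The sketch below (helper names are mine) computes the permanent directly from the definition, and again by grouping each permutation's product along the cycles of its decomposition, i.e., the edge weights of the corresponding cycle cover; the two sums agree term by term.

```python
import itertools
import math

def perm_by_definition(A):
    """perm(A) = sum over sigma in S_n of prod_i A[i][sigma(i)]."""
    n = len(A)
    return sum(math.prod(A[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

def cycle_cover_sum(A):
    """Total weight of cycle covers of the graph with adjacency matrix A:
    for each permutation, multiply edge weights cycle by cycle."""
    n = len(A)
    total = 0
    for s in itertools.permutations(range(n)):
        weight, seen = 1, set()
        for i in range(n):
            j = i
            while j not in seen:       # walk the cycle containing i
                seen.add(j)
                weight *= A[j][s[j]]   # edge (j, s(j)) of the cover
                j = s[j]
        total += weight
    return total

A = [[1, 2, 0],
     [0, 1, 3],
     [4, 0, 1]]
assert perm_by_definition(A) == cycle_cover_sum(A) == 25
```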

Define $\CycleCovers$ to be the problem of computing the sum of the weights of all cycle covers of a given weighted graph.

By Lemma (connection between permanent and cycle covers), we have the following corollary.

$\CycleCovers\in\FP^{\Permanent}$.

Now, we are going to show that $\ShTSAT\in\FP^{\CycleCovers}$; combining this with Corollary (Permanent is not easier than Cycle Covers), we get that $\Permanent$ is $\ShP$-hard.

$\ShTSAT\in\FP^{\CycleCovers}$.


Let $\phi$ be a $\TSAT$ instance with $n$ variables $x_1,\dots,x_n$ and $m$ clauses $C_1,\dots,C_m$. We are going to define a reduction $f$ from 3CNF formulas to graphs such that \begin{equation} \text{number of cycle covers in }f(\phi) = 9^{m}\cdot\bigl(\text{number of satisfying assignments of }\phi\bigr). \end{equation}

There are four gadgets in the reduction:

  • Variable gadget: forces each variable to be either true or false

Note that there are only two possible cycle covers in the above gadget. We use a solid arrow to indicate that an edge is chosen and a dotted arrow to indicate that it is not. The figure illustrates that all the self-loops on the same side must be chosen simultaneously, and that exactly one side is chosen.

We call the number of vertices on each side the width of the gadget.

  • Clause gadget: a cycle cover exists only when at least one of the literals is set to true

First, note that to have a cycle cover, the three outer edges of a clause gadget cannot all be chosen simultaneously; otherwise, no cycle covers the node in the center. Thus, there are three cases:

  1. No outer edge chosen: there are nine possible cycle covers, each contributing weight 1.
  2. One outer edge chosen: there are three possible cycle covers, each contributing weight 3.
  3. Two outer edges chosen: there is one possible cycle cover, contributing weight 9.
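The key invariant is that all three cases contribute the same total weight, $9\cdot1=3\cdot3=1\cdot9=9$; this is why each satisfying assignment ends up counted exactly $9^m$ times, one factor of $9$ per clause. A one-line sanity check of the bookkeeping (case values taken from the list above):

```python
# (number of cycle covers, weight per cover) for a satisfied clause gadget
cases = [(9, 1), (3, 3), (1, 9)]
assert all(covers * weight == 9 for covers, weight in cases)
```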

Finally, to replace edges of weight 3 with edges of weight 1, we use the following gadget.

  • NOT-XOR gadget: makes sure that the two edges are either both chosen or both not chosen

One can see that the two red paths (corresponding to $e_1$ and $e_2$) are either both chosen or both not chosen.

  • XOR gadget: makes sure the variable gadgets and the clause gadgets are consistent

One can see that exactly one of $(u,u')$ and $(v,v')$ will be chosen in a valid cycle cover.

Finally, write $\phi(x_1,\dots,x_n)=C_1\wedge\cdots\wedge C_m$, where $C_j=(L_{j,1}\vee L_{j,2}\vee L_{j,3})$ for each $j\in[m]$ and each literal $L_{j,k}$ is $x_i$ or $\neg x_i$ for some $i\in[n]$. We reduce $\phi$ to a graph $G_{\phi}$ as follows.

  • For each variable $x_i$, $i\in[n]$, associate $x_i$ with a variable gadget of width $m$.
  • For each clause $C_j$, $j\in[m]$, associate $C_j$ with a clause gadget.
  • For each $j\in[m]$ and $k\in[3]$, if $L_{j,k}=x_i$, use the XOR gadget to connect one of the edges on the true side of $x_i$’s gadget with one of the outer edges of $C_j$’s gadget; if $L_{j,k}=\neg x_i$, connect an edge on the false side instead.

The following is an example of $\phi(x_1,x_2,x_3,x_4)=(x_1\vee\neg x_3\vee x_4)\wedge(\neg x_1\vee\neg x_2\vee x_4)$.

Observe that

  • The number of vertices in $G_{\phi}$ is $n\cdot(2m+2)+m\cdot(4+3\cdot3)+m\cdot3\cdot(6+2\cdot10)=2mn+2n+91m=\poly(n+m)$.
  • A valid cycle cover in $G_{\phi}$ must correspond to a satisfying assignment of $\phi$.
  • A satisfying assignment of $\phi$ contributes exactly $9^m$ distinct cycle covers in $G_{\phi}$.

To sum up, we have \begin{equation} \text{number of cycle covers in }f(\phi) = 9^{m}\cdot\bigl(\text{number of satisfying assignments of }\phi\bigr). \end{equation}

As a result, we conclude that $\ShTSAT\in\FP^{\CycleCovers}$.

Combining Corollary (Permanent is not easier than Cycle Covers) with Lemma (Cycle Covers is #P-hard), we conclude that $\Permanent$ is $\ShP$-hard. Moreover, as the edges of the graph produced by the reduction from $\ShTSAT$ have 0/1 weights, it also follows that computing the permanent of a 0/1 matrix is $\ShP$-hard.

Toda’s theorem

It’s natural to compare $\ShP$ with decisional complexity classes. However, as the output formats are different, we need a way to compare them; a natural candidate is $\P^{\ShP}$. Surprisingly, it turns out that $\P^{\ShP}$ contains the whole polynomial hierarchy!

$\PH\subseteq\P^{\ShP}$.

Roadmap

The proof of Toda’s theorem is inspired by a seemingly irrelevant theorem: the Valiant-Vazirani theorem.

$\SAT\leq_r^{1/8n}\USAT$: Valiant-Vazirani theorem

There exists a PPT reduction $A$ such that for any unquantified boolean formula $\phi$ over $n$ variables $\bx$, $A(\phi)$ is an unquantified boolean formula over the variables $\bx$ and $\by$, where $\by$ has $\poly(n)$ bits. We have

  • If $\phi\in\SAT$, then $\mathbb{P}[A(\phi)\in\USAT]\geq\frac{1}{8n}$.
  • If $\phi\notin\SAT$, then $\mathbb{P}[A(\phi)\in\SAT]=0$. Moreover, we can write $A(\phi) = \tau(\bx,\by)\wedge\phi(\bx)$ where the size of $\tau$ is polynomial in $n$.


Given a boolean formula $\phi$, let $S:=\{x:\ \phi(x)=1\}$ be the set of satisfying assignments. The idea of the Valiant-Vazirani theorem is to probabilistically construct a predicate $B(x)$ such that, with non-negligible probability, exactly one element in $S$ satisfies the predicate. As a result, $\phi(x)\wedge B(x)$ has exactly one satisfying assignment.

To construct such a $B$, let’s start with a simple attempt.

  1. Randomly pick $y\in\bit^n$.
  2. Let $B_{\text{simple}}=(x=y)$ and output $A_{\text{simple}}(\phi)=\phi\wedge B_{\text{simple}}$.

One can see that

  • If $\phi\in\SAT$, then $\mathbb{P}[A_{\text{simple}}(\phi)\in\USAT]\geq\frac{\card{S}}{2^n}$.
  • If $\phi\notin\SAT$, then $\mathbb{P}[A_{\text{simple}}(\phi)\in\SAT]=0$.

Note that the completeness here could be exponentially small, and such a small success probability cannot be amplified with polynomially many repetitions.

As a result, a hash family turns out to be a natural tool, since it maps elements of the domain $\bit^n$ uniformly into the range $\bit^m$, where $1\leq m\leq n$.

Now, we can construct the predicate as follows.

  1. Randomly pick $h\in\mathcal{H}_{n,m}$.
  2. Let $B_{\text{hash}}=(h(x)=0^m)$ and $A_{\text{hash}}=\phi\wedge B_{\text{hash}}$.

One can see that

  • If $\phi\in\SAT$, then $\mathbb{E}[\card{\{x:\ A_{\text{hash}}(x)=1\}}]=\frac{\card{S}}{2^{m}}$.
  • If $\phi\notin\SAT$, then $\mathbb{P}[A_{\text{hash}}\in\SAT]=0$.

Note that here we only computed the expected size of the intersection of $S$ with the pre-image of the predicate in the completeness part. To compute the probability, we need to specify which hash family we are using. To theoretical computer scientists, the pairwise independent hash family is a randomness-efficient and good enough hash family.

Recall the definition of pairwise independent hash family as follows.

Let $n,m\in\bbN$, $\mathcal{H}_{n,m}\subseteq\{f:\bit^n\rightarrow\bit^m \}$ is a pairwise independent hash family from $\bit^n$ to $\bit^m$ if for any $x\neq x'\in\bit^n$ and $y,y'\in\bit^m$, \begin{equation} \mathbb{P}_{h\leftarrow\mathcal{H}_{n,m}}[h(x)=y\text{ and }h(x')=y']=\frac{1}{2^{2m}}. \end{equation}
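As a concrete sanity check, the standard affine family $h_{A,b}(x)=Ax\oplus b$ over $\mathrm{GF}(2)$ is pairwise independent; the following sketch (with hypothetical small parameters $n=3$, $m=2$) verifies the definition by exhaustive enumeration.

```python
from itertools import product

n, m = 3, 2  # hypothetical small parameters

def h(A, b, x):
    # h_{A,b}(x) = A x XOR b over GF(2); A has m rows of n bits, b has m bits
    return tuple((sum(A[i][j] & x[j] for j in range(n)) + b[i]) % 2
                 for i in range(m))

# enumerate the whole family
hashes = [(A, b)
          for A in product(product((0, 1), repeat=n), repeat=m)
          for b in product((0, 1), repeat=m)]

# for any fixed pair of distinct inputs, (h(x), h(x')) should be uniform
x, xp = (0, 0, 1), (1, 0, 1)
counts = {}
for A, b in hashes:
    key = (h(A, b, x), h(A, b, xp))
    counts[key] = counts.get(key, 0) + 1

# each of the 2^{2m} output pairs occurs with probability exactly 2^{-2m}
assert all(c == len(hashes) // 2 ** (2 * m) for c in counts.values())
```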

Having fixed the hash family, we can bound the completeness as follows.

Take $m\in[n]$ such that $2^{m-2}\leq\card{S}\leq 2^{m-1}$. If $\phi\in\SAT$, then $\mathbb{P}[A_{\text{hash}}\in\USAT]\geq\frac{1}{8}$.


First, for each $x\in S$, let $U_x$ denote the event that $h(x)=0^m$ while $h(x')\neq 0^m$ for every $x'\in S\setminus\{x\}$. These events are disjoint, and their union is exactly the event $A_{\text{hash}}\in\USAT$. By a union bound, \begin{align} \mathbb{P}[U_x] &\geq \mathbb{P}[h(x)=0^m] - \sum_{x'\in S\setminus\{x\}}\mathbb{P}[h(x)=0^m\text{ and }h(x')=0^m]\\
&= \frac{1}{2^m}-\frac{\card{S}-1}{2^{2m}}, \end{align} where the equality uses pairwise independence. Summing over $x\in S$ and writing $p:=\frac{\card{S}}{2^m}\in[\frac{1}{4},\frac{1}{2}]$, \begin{align} \mathbb{P}[A_{\text{hash}}\in\USAT] &\geq \frac{\card{S}}{2^m}-\frac{\card{S}(\card{S}-1)}{2^{2m}}\geq p-p^2\\
&= p(1-p)\geq\frac{1}{4}\cdot\frac{1}{2}=\frac{1}{8}. \end{align}

Finally, by uniformly picking $m\in[n]$, with probability at least $\frac{1}{n}$ we have $2^{m-2}\leq\card{S}\leq2^{m-1}$, and thus the theorem holds with success probability at least $\frac{1}{8n}$.
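The bound can also be checked empirically. A brute-force sketch (hypothetical instance: $S$ is a random set of satisfying assignments with $2^{m-2}\leq\card{S}\leq2^{m-1}$, and the hash family is the affine family $h(x)=Ax\oplus b$ over $\mathrm{GF}(2)$):

```python
from itertools import product
from random import sample, seed

n, m = 4, 3  # hypothetical small instance
seed(0)

def h(A, b, x):
    # affine hash h(x) = A x XOR b over GF(2)
    return tuple((sum(A[i][j] & x[j] for j in range(n)) + b[i]) % 2
                 for i in range(m))

# S plays the role of the satisfying assignments of phi,
# with 2^{m-2} <= |S| <= 2^{m-1}
S = sample(list(product((0, 1), repeat=n)), 3)

# exact probability (over the whole family) that exactly one element
# of S is hashed to 0^m, i.e. that A_hash is uniquely satisfiable
total = unique = 0
for A in product(product((0, 1), repeat=n), repeat=m):
    for b in product((0, 1), repeat=m):
        total += 1
        unique += sum(h(A, b, x) == (0,) * m for x in S) == 1

assert unique / total >= 1 / 8  # the lemma's bound
```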

$\SAT\leq_r^{1-2^{-O(n)}}\oplus\SAT$

Though we have a randomized reduction from $\SAT$ to $\USAT$, its completeness cannot be amplified: taking the OR of independent copies does not preserve unique satisfiability, since two copies may each contribute a satisfying assignment.

But this is not the end of the world: if we instead consider the parity of the number of satisfying assignments, the above obstacle is no longer an issue!

Let $f:\bit^{*}\rightarrow\bit$. We say $f\in\oplus\P$ if there exists a Turing machine $M:\bit^{*}\times\bit^{*}\rightarrow\bit$ such that for any $x\in\bit^{*}$, $f(x) = \card{\{y\in\bit^{m(\card{x})}:\ M(x,y)=1 \}}$ mod 2, where $m(\cdot)$ is some polynomial and $M(x,y)$ terminates in $p(\card{x})$ time for some polynomial $p(\cdot)$.
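In other words, membership in $\oplus\SAT$ is just the parity of the number of satisfying assignments. A minimal brute-force sketch (representing a formula as a hypothetical Python predicate):

```python
from itertools import product

def parity_of_sat(phi, n):
    """Parity of the number of satisfying assignments of an n-variable predicate."""
    return sum(bool(phi(x)) for x in product((0, 1), repeat=n)) % 2

# x1 XOR x2 has two satisfying assignments: even parity, so not in ⊕SAT
assert parity_of_sat(lambda x: x[0] ^ x[1], 2) == 0
# x1 AND x2 has exactly one: odd parity, so in ⊕SAT
assert parity_of_sat(lambda x: x[0] & x[1], 2) == 1
```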

The goal of this step is to show a randomized reduction from $\SAT$ to $\oplus\SAT$.

There exists a PPT $A$ such that for any boolean formula $\phi$,

  • If $\phi\in\SAT$, then $\mathbb{P}[A(\phi)\in\oplus\SAT]\geq1-2^{-O(n)}$.
  • If $\phi\notin\SAT$, then $\mathbb{P}[A(\phi)\in\oplus\SAT]\leq2^{-O(n)}$.

That is, $\SAT\leq_r^{1-2^{-O(n)}}\oplus\SAT$.


First, given two boolean formulas $\phi(x),\phi'(y)$ that share no common variables (i.e., the variable sets $x$ and $y$ are disjoint), we have

| $\phi$ | $\phi'$ | $\phi\vee\phi'$ |
| --- | --- | --- |
| $\in\oplus\SAT$ | $\in\oplus\SAT$ | $\in\oplus\SAT$ |
| $\in\oplus\SAT$ | $\notin\oplus\SAT$ | $\notin\oplus\SAT$ |
| $\notin\oplus\SAT$ | $\in\oplus\SAT$ | $\notin\oplus\SAT$ |
| $\notin\oplus\SAT$ | $\notin\oplus\SAT$ | $\notin\oplus\SAT$ |

Note that the OR operation does not act as disjunction on parity: over disjoint variable sets, $\#(\phi\vee\phi')=\#\phi\cdot2^{\card{y}}+\#\phi'\cdot2^{\card{x}}-\#\phi\cdot\#\phi'$, whose parity equals that of $\#\phi\cdot\#\phi'$, so OR behaves like AND on parities. As a result, we would like to construct operations that have the intended effect on parity.

Define the following operations.

  • (negation) $\neg^{\oplus}\phi(x,z):=(\phi(x)\wedge(z=1))\vee((z=0)\wedge\bigwedge_{i}(x_i=1))$, which has exactly $\#\phi+1$ satisfying assignments and hence flips the parity
  • (OR) $\phi(x)\vee^{\oplus}\phi'(y):=\neg^{\oplus}(\neg^{\oplus}\phi(x)\wedge^{\oplus}\neg^{\oplus}\phi'(y))$
  • (AND) $\phi(x)\wedge^{\oplus}\phi'(y):=\phi(x)\wedge \phi'(y)$

We have

| $\phi$ | $\phi'$ | $\neg^{\oplus}\phi$ | $\phi\vee^{\oplus}\phi'$ | $\phi\wedge^{\oplus}\phi'$ |
| --- | --- | --- | --- | --- |
| $\in\oplus\SAT$ | $\in\oplus\SAT$ | $\notin\oplus\SAT$ | $\in\oplus\SAT$ | $\in\oplus\SAT$ |
| $\in\oplus\SAT$ | $\notin\oplus\SAT$ | $\notin\oplus\SAT$ | $\in\oplus\SAT$ | $\notin\oplus\SAT$ |
| $\notin\oplus\SAT$ | $\in\oplus\SAT$ | $\in\oplus\SAT$ | $\in\oplus\SAT$ | $\notin\oplus\SAT$ |
| $\notin\oplus\SAT$ | $\notin\oplus\SAT$ | $\in\oplus\SAT$ | $\notin\oplus\SAT$ | $\notin\oplus\SAT$ |

As a remark, note that it is not clear whether the same can be done for $\USAT$.
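The table can be verified at the level of counts: assuming $\neg^{\oplus}$ adds exactly one satisfying assignment and $\wedge^{\oplus}$ over disjoint variables multiplies counts, the parities behave as claimed. A small sketch:

```python
# Track only the number of satisfying assignments of each formula:
# negation adds exactly one assignment, AND over disjoint variables
# multiplies the counts, and OR is defined via De Morgan.
def neg(c):
    return c + 1          # #(neg⊕ φ) = #φ + 1

def and_(c, cp):
    return c * cp         # #(φ ∧⊕ φ') = #φ · #φ'

def or_(c, cp):
    return neg(and_(neg(c), neg(cp)))  # φ ∨⊕ φ' := ¬⊕(¬⊕φ ∧⊕ ¬⊕φ')

# the parities behave as negation, AND, and OR of parities
for c in range(8):
    for cp in range(8):
        assert neg(c) % 2 == 1 - c % 2
        assert and_(c, cp) % 2 == (c % 2) & (cp % 2)
        assert or_(c, cp) % 2 == (c % 2) | (cp % 2)
```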

With the above new operations, we can use the construction from the Valiant-Vazirani theorem to amplify the success probability for $\oplus\SAT$. Pick $h_1,\dots,h_t$ as in the Valiant-Vazirani theorem and make $t$ copies $x_1,\dots,x_t$ of the variables $x$; then for any $i\in[t]$,

  • If $\phi\in\SAT$, then $\mathbb{P}[\phi(x_i)\wedge(h_i(x_i)=0)\in\oplus\SAT ]\geq\frac{1}{8n}$.
  • If $\phi\notin\SAT$, then $\mathbb{P}[\phi(x_i)\wedge(h_i(x_i)=0)\in\oplus\SAT ]=0$.

Thus, by taking $A(x_1,\dots,x_t)=\bigvee^{\oplus}_{i\in[t]}\left(\phi(x_i)\wedge(h_i(x_i)=0)\right)$, we have

  • If $\phi\in\SAT$, then $\mathbb{P}[A(x_1,\dots,x_t)\in\oplus\SAT]\geq1-(1-\frac{1}{8n})^t$.
  • If $\phi\notin\SAT$, then $\mathbb{P}[A(x_1,\dots,x_t)\in\oplus\SAT]=0$.

Finally, picking $t=\Theta(n^2)$, we have $(1-\frac{1}{8n})^t\leq 2^{-O(n)}$ and thus the reduction holds.
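A choice of $t=\Theta(n^2)$ copies is enough for an exponentially small failure probability, since $(1-\frac{1}{8n})^{8n}\leq e^{-1}$. A quick numeric check (hypothetical $n=20$):

```python
import math

n = 20            # hypothetical input size
t = 8 * n * n     # t = Θ(n²) independent copies

# each copy succeeds with probability at least 1/(8n),
# so all t copies fail with probability at most (1 - 1/(8n))^t <= e^{-n}
failure = (1 - 1 / (8 * n)) ** t
assert failure <= math.exp(-n)
```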

$\Sigma_k\SAT\leq_r^{1-2^{-O(n)}}\oplus\SAT$

Now we know that $\SAT\leq_r^{1-2^{-O(n)}}\oplus\SAT$. As $\oplus\P$ is closed under complement, it is natural to expect that $\PH$ can be reduced to $\oplus\SAT$, since $\neg\forall=\exists\neg$.

The goal of this step is to show a randomized reduction from $\PH$ to $\oplus\SAT$.

There exists a PPT $A$ such that for any quantified boolean formula $\psi=\exists x_1\forall x_2\cdots Q_kx_k\phi(x_1,\dots,x_k)$ for some constant $k$ and unquantified boolean formula $\phi$, $A(\psi)$ is an unquantified boolean formula and

  • If $\psi\in\Sigma_k\SAT$, then $\mathbb{P}[A(\psi)\in\oplus\SAT]\geq1-2^{-O(n)}$.
  • If $\psi\notin\Sigma_k\SAT$, then $\mathbb{P}[A(\psi)\in\oplus\SAT]\leq2^{-O(n)}$.

That is, for any $L\in\PH$, $L\leq_r^{1-2^{-O(n)}}\oplus\SAT$.


We prove by induction on the number of quantifiers. We proved the case of a single quantifier in the previous subsection. Now, assume the induction hypothesis holds for $k-1$ for some $k>1$, and consider an arbitrary quantified boolean formula $\psi=\exists x_1\forall x_2\cdots Q_kx_k\phi(x_1,\dots,x_k)$ for some constant $k$ and unquantified boolean formula $\phi$. In the following, we use $x$ to denote a variable and $\alpha$ when we fix $x$ to some value.

Define $\psi'(x_1):=\forall x_2\cdots Q_kx_k\phi(x_1,\dots,x_k)$. We have

  • If $\psi\in\Sigma_k\SAT$, then $\exists \alpha_1$ such that $\psi'(\alpha_1)\in\Pi_{k-1}\SAT$.
  • If $\psi\notin\Sigma_k\SAT$, then $\forall \alpha_1$, $\psi'(\alpha_1)\notin\Pi_{k-1}\SAT$.

By the induction hypothesis, there exists a PPT $A(\cdot)$ such that for any $\alpha_1$, $A(\psi'(\alpha_1))$ is an unquantified boolean formula and

  • If $\psi'(\alpha_1)\in\Pi_{k-1}\SAT$, then $\mathbb{P}[A(\psi'(\alpha_1))\in\oplus\SAT]\geq1-2^{-O(n)}$.
  • If $\psi'(\alpha_1)\notin\Pi_{k-1}\SAT$, then $\mathbb{P}[A(\psi'(\alpha_1))\in\oplus\SAT]\leq2^{-O(n)}$.

Let $A(\psi)=\neg^{\oplus}A(\neg^{\oplus}\psi'(x_1))$, observe that \begin{equation} 1-\prod_{\alpha_1}\oplus(\neg^{\oplus}A(\psi'(\alpha_1))) = \oplus(A(\psi)). \end{equation}

As a result, we have

  • If $\psi\in\Sigma_k\SAT$, then there exists $\alpha_1$ such that $\mathbb{P}[\neg^{\oplus}A(\psi'(\alpha_1))\notin\oplus\SAT]\geq1-2^{-O(n)}$. That is, $\mathbb{P}[A(\psi)\in\oplus\SAT]\geq1-2^{-O(n)}$.
  • If $\psi\notin\Sigma_k\SAT$, then for any $\alpha_1$, $\mathbb{P}[\neg^{\oplus}A(\psi'(\alpha_1))\in\oplus\SAT]\leq2^{-O(n)}$. That is, by a union bound over the $\alpha_1$'s (taking the constant in the exponent large enough), $\mathbb{P}[A(\psi)\notin\oplus\SAT]\leq\sum_{\alpha_1}2^{-O(n)}=2^{-O(n)}$.

Make sure you understand when $x_1$ is treated as a free variable and when it is treated as a quantified variable.

Wrap up: $\PH\leq\P^{\ShP}$

Finally, we are going to derandomize the above reduction from $\Sigma_k\SAT$ to $\oplus\SAT$ by counting satisfying assignments modulo a power of two. As such modular counting can clearly be done in $\P^{\ShP}$, this completes the proof of Toda's theorem.

For any constant $k\in\bbN^+$, $\Sigma_k\SAT\in\P^{\ShP}$. That is, $\PH\subseteq\P^{\ShP}$.


Note that the number of random bits used in the above randomized reduction is polynomial in $n$. Thus, it is natural to augment the boolean formula and treat the random bits as variables. That is, for any instance $\psi$ of $\Sigma_k\SAT$, write the output of the above reduction as $A(\psi)(\br)$ such that

  • If $\psi\in\Sigma_k\SAT$, then $\mathbb{P}_{\br}[A(\psi)(\br)\in\oplus\SAT]\geq1-2^{-O(n)}$.
  • If $\psi\notin\Sigma_k\SAT$, then $\mathbb{P}_{\br}[A(\psi)(\br)\in\oplus\SAT]\leq2^{-O(n)}$.

Note that here we use the bold character $\br$ to denote the variable and the normal character $r$ to denote a value.

A natural idea is to count the number of satisfying assignments of $A(\psi)(\br)$ (now treating the random bits as variables) and look at its parity. However, this parity tells us nothing by itself: parity takes the count modulo 2, and since the count is summed over all choices of the random bits, even a small fraction of bad random strings can easily flip the result.

As a result, one way to circumvent this issue is to enlarge the modulo space!

Let $S>1$ be an integer and $g(t)=4t^3-3t^4$. For any $x\in\bbN$,

  • If $x$ mod $S$ = 1, then $g(x)$ mod $S^2$ = 1.
  • If $x$ mod $S$ = 0, then $g(x)$ mod $S^2$ = 0.


  • If $x$ mod $S$ = 1, write $x=kS+1$ for some $k\in\bbN$. Then $x^3\equiv 3kS+1$ and $x^4\equiv 4kS+1$ modulo $S^2$, so \begin{align} g(x)\text{ mod }S^2 &= 4\cdot(3kS+1) - 3\cdot(4kS+1)\text{ mod }S^2\\
    &= 1\text{ mod }S^2. \end{align}

  • If $x$ mod $S$ = 0, write $x=kS$ for some $k\in\bbN$. Since $S^2$ divides both $x^3$ and $x^4$, we have \begin{align} g(x)\text{ mod }S^2 = 0. \end{align}
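The two cases of the lemma are easy to check numerically (note that $g(x)$ may be negative; the congruences still hold):

```python
def g(t):
    # the amplifying polynomial of the lemma
    return 4 * t ** 3 - 3 * t ** 4

for S in range(2, 20):
    for k in range(5):
        # x ≡ 1 (mod S)  =>  g(x) ≡ 1 (mod S²)
        assert g(k * S + 1) % S ** 2 == 1
        # x ≡ 0 (mod S)  =>  g(x) ≡ 0 (mod S²)
        assert g(k * S) % S ** 2 == 0
```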

Suppose the reduction uses $\ell$ random bits. Starting from modulus $2$, each application of $g$ squares the modulus, so applying $g$ $\lceil\log_2(\ell+2)\rceil$ times to $A(\psi)$ yields $A'(\psi)$ such that for any $r$,

  • If $A(\psi)(r)\in\oplus\SAT$, then $\# A'(\psi)(r)$ mod $2^{\ell+2}$ = 1.
  • If $A(\psi)(r)\notin\oplus\SAT$, then $\# A'(\psi)(r)$ mod $2^{\ell+2}$ = 0.

Thus, as there are only $2^{\ell}$ choices of $r$ and each term contributes $0$ or $1$, there is no wraparound modulo $2^{\ell+2}$ and we have \begin{align} \# A'(\psi)\text{ mod }2^{\ell+2} &= \sum_{r}\# A'(\psi)(r)\text{ mod }2^{\ell+2}\\
&= \card{\{r:\ A(\psi)(r)\in\oplus\SAT\}} = 2^{\ell}\cdot \mathbb{P}_{\br}[A(\psi)(\br)\in\oplus\SAT]. \end{align}
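The modulus-boosting step can be sketched numerically: since each application of $g$ squares the modulus, $\lceil\log_2(\ell+2)\rceil$ applications take a count that is $0$ or $1$ modulo $2$ to one that is $0$ or $1$ modulo $2^{\ell+2}$ (the `boost` helper below is a hypothetical name for illustration):

```python
import math

def g(t):
    return 4 * t ** 3 - 3 * t ** 4

def boost(count, ell):
    # each application of g squares the modulus we control, so
    # ceil(log2(ell + 2)) applications turn "0 or 1 mod 2" into
    # "0 or 1 mod 2^(ell + 2)"
    for _ in range(math.ceil(math.log2(ell + 2))):
        count = g(count)
    return count

ell = 6
for count in (1, 3, 7, 101):    # odd counts: in ⊕SAT
    assert boost(count, ell) % 2 ** (ell + 2) == 1
for count in (0, 2, 8, 100):    # even counts: not in ⊕SAT
    assert boost(count, ell) % 2 ** (ell + 2) == 0
```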

As a result, taking $c=\ell-1$, for large enough $n$ we have

  • If $\psi\in\Sigma_k\SAT$, $\sum_{r}\# A'(\psi)(r)\text{ mod }2^{\ell+2}>2^c$.
  • If $\psi\notin\Sigma_k\SAT$, $\sum_{r}\# A'(\psi)(r)\text{ mod }2^{\ell+2}<2^c$.

Finally, as we can compute $\sum_{r}\# A'(\psi)(r)\text{ mod }2^{\ell+2}$ in $\P^{\ShP}$, we conclude that $\Sigma_k\SAT\in\P^{\ShP}$.