TY - GEN
T1 - On White-Box Learning and Public-Key Encryption
AU - Liu, Yanyi
AU - Mazor, Noam
AU - Pass, Rafael
N1 - Publisher Copyright:
© Yanyi Liu, Noam Mazor, and Rafael Pass.
PY - 2025/2/11
Y1 - 2025/2/11
AB - We consider a generalization of the Learning With Errors problem, referred to as the white-box learning problem: you are given the code of a sampler that with high probability produces samples of the form (y, f(y) + ϵ), where ϵ is small and f is computable by a polynomial-size circuit, and the computational task consists of outputting a polynomial-size circuit C that, with probability, say, 1/3 over a new sample y′ drawn from the same distribution, approximates f(y′) (i.e., |C(y′) - f(y′)| is small). This problem can be thought of as generalizing the Learning with Errors problem (LWE) from linear functions f to polynomial-size computable functions. We demonstrate that worst-case hardness of the white-box learning problem, conditioned on the instances satisfying a notion of computational shallowness (a concept from the study of Kolmogorov complexity), not only suffices to get public-key encryption but is also necessary; as such, this yields the first problem whose worst-case hardness characterizes the existence of public-key encryption. Additionally, our results highlight to what extent LWE “overshoots” the task of public-key encryption. We complement these results by noting that worst-case hardness of the same problem, but restricting the learner to only black-box access to the sampler, characterizes one-way functions.
KW - Public-Key Encryption
KW - White-Box Learning
UR - http://www.scopus.com/inward/record.url?scp=85218356662&partnerID=8YFLogxK
U2 - 10.4230/LIPIcs.ITCS.2025.73
DO - 10.4230/LIPIcs.ITCS.2025.73
M3 - Conference contribution
AN - SCOPUS:85218356662
T3 - Leibniz International Proceedings in Informatics, LIPIcs
BT - 16th Innovations in Theoretical Computer Science Conference, ITCS 2025
A2 - Meka, Raghu
PB - Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
T2 - 16th Innovations in Theoretical Computer Science Conference, ITCS 2025
Y2 - 7 January 2025 through 10 January 2025
ER -