



COM S 687 Introduction to Cryptography August 31, 2006
Instructor: Rafael Pass Scribe: Lucja Kot
We have seen that no encryption scheme can be perfectly secure if the keys it uses are shorter than the messages, even if the length difference is just one bit. We proved:
Theorem 1 Let E = (M, K, Gen, Enc, Dec) be a deterministic private-key encryption scheme where M = {0,1}^n and K = {0,1}^{n−1}. Then E is not more than 1/2-statistically secret.
This was proved by demonstrating the existence of m0, m1 ∈ M such that, if we let T = {c | ∃k. Enc_k(m0) = c}, then

Pr[Enc_k(m0) ∈ T] − Pr[Enc_k(m1) ∈ T] ≥ 1/2.
We saw an attack that exploits the above vulnerability. Suppose the adversary (Eve) receives a ciphertext c. Eve knows that c is an encryption of either m0 or m1, and these two messages were sent with equal probability. She can compute as follows: for every key k ∈ K she checks whether Enc_k(m0) = c, i.e., whether c ∈ T; if so, she outputs m0, and otherwise she outputs m1.
We argue that this algorithm will output the message that was sent with probability ≥ 3/4. Consider first the case where m0 was sent. Then the attack algorithm will output m0 with probability 1. On the other hand, suppose m1 was sent. Then, by the above theorem, the algorithm will output m1 with probability ≥ 1/2. As we assumed m0 and m1 have an equal probability of being sent, we see that Eve can indeed find the message with probability ≥ 1/2 · 1 + 1/2 · 1/2 = 3/4.
We closed last time by noting that the above attack requires exponential time, because deciding whether c ∈ T requires a computation for each of the 2^{n−1} keys k ∈ K. Consequently the attack, while worrying, is not in fact computationally feasible, particularly when n is large enough. This motivates our introduction of an adversary model where computation time is a bounded resource.
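To make the attack concrete, here is a small simulation sketch in Python (an illustration added here, not part of the lecture; the toy scheme enc, the message pair, and all parameters are assumptions chosen only to mirror the setting M = {0,1}^n, K = {0,1}^{n−1}):

    import random

    n = 3                                                      # toy message length
    KEYS = [format(k, "0{}b".format(n - 1)) for k in range(2 ** (n - 1))]

    def enc(key, m):
        # Toy deterministic scheme (an assumption for illustration):
        # XOR the message with the key padded by a single 0 bit,
        # so |K| = 2^(n-1) < |M| = 2^n.
        pad = key + "0"
        return "".join(str(int(a) ^ int(b)) for a, b in zip(m, pad))

    m0, m1 = "0" * n, "1" * n
    T = {enc(k, m0) for k in KEYS}        # all possible encryptions of m0

    def eve(c):
        # Brute-force attack: check every key (exponentially many in general);
        # if c could be an encryption of m0, guess m0, otherwise guess m1.
        return m0 if c in T else m1

    trials, wins = 100000, 0
    for _ in range(trials):
        m = random.choice([m0, m1])       # each message sent with probability 1/2
        k = random.choice(KEYS)           # uniformly random key
        if eve(enc(k, m)) == m:
            wins += 1
    print(wins / trials)  # at least 3/4 as argued below; for this toy scheme it is 1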
We start by formalizing what it means for an algorithm to compute a function.
Definition 1 (Algorithm) An algorithm is a (deterministic) Turing machine whose input and output are strings over some alphabet Σ. We usually have Σ = {0,1}.
Definition 2 (Running-time of Algorithms) A runs in time T(n) if for all x ∈ {0,1}∗, A(x) halts within T(|x|) steps. A runs in polynomial time (or is an efficient algorithm) if ∃c such that A runs in time T(n) = n^c.
Definition 3 (Deterministic Computation) Algorithm A is said to compute a function f : {0,1}∗ → {0,1}∗ if A, on input x, outputs f(x), for all x ∈ {0,1}∗.
Remark: One can argue against the choice of polynomial time as the cutoff for “efficiency”, and indeed if the polynomial involved is large, computation may not be efficient in practice. There are, however, strong arguments for using the polynomial-time definition of efficiency: the class of polynomial-time algorithms is robust under reasonable changes to the machine model, and it is closed under composition, so an efficient algorithm that calls another efficient algorithm as a subroutine remains efficient.
Remark: Note that our treatment of computation is an asymptotic one. In practice, actual running time needs to be considered carefully, as do other “hidden” factors such as the size of the description of A. Thus, we will need to instantiate our formulae with numerical values that make sense in practice.
Definition 4 (Randomized Algorithm) A randomized algorithm is a Turing machine that, in addition to its input tape, has access to a random tape containing uniformly and independently distributed bits.

Definition 5 (Running-time of Randomized Algorithms) A randomized Turing machine A runs in time T(n) if for all x ∈ {0,1}∗, A(x) halts within T(|x|) steps (independent of the content of A’s random tape). A runs in polynomial time (or is an efficient randomized algorithm) if ∃c such that A runs in time T(n) = n^c.
We extend our definition of computation to randomized algorithms.
Definition 6 Algorithm A is said to compute a function f : {0,1}∗ → {0,1}∗ if A, on input x, outputs f(x) with probability ≥ 2/3 for all x ∈ {0,1}∗.
At first sight the bound 2/3 might seem arbitrary. However, it can be shown (as in homework 1) that the same class of functions is computable by efficient randomized algorithms even if the bound is replaced by either 1/2 + 1/poly(|x|) or 1 − 2^{−|x|}. In other words, given a polynomial-time randomized algorithm A that computes a function with probability 1/2 + 1/poly(n), it is possible to obtain another polynomial-time randomized machine A′ that computes the function with probability 1 − 2^{−n}. (A′ simply takes multiple runs of A and finally outputs the most frequent output of A. The Chernoff bound can then be used to analyze the probability with which such a “majority” rule works.)
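The amplification step can be illustrated with a short sketch (again Python, added here for illustration; the “weak” procedure weak_parity and its success probability 0.55 are stand-ins, not anything from the course):

    import random
    from collections import Counter

    def weak_parity(x, p=0.55):
        # Stand-in for a weak randomized algorithm A: it returns the correct
        # answer (the parity of x) only with probability p = 1/2 + 1/poly.
        correct = x % 2
        return correct if random.random() < p else 1 - correct

    def amplified_parity(x, runs=1001):
        # A': run A many times and output the most frequent answer.
        # By a Chernoff bound the majority is wrong with probability 2^(-Omega(runs)).
        votes = Counter(weak_parity(x) for _ in range(runs))
        return votes.most_common(1)[0][0]

    # Empirical check: a single run errs about 45% of the time, the majority almost never.
    trials = 1000
    errors = sum(amplified_parity(x) != x % 2 for x in range(trials))
    print(errors / trials)   # close to 0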
Efficient Adversaries. Polynomial-time randomized algorithms will be the principal model of efficient computation considered in this course. In the sequel, we will use the terms polynomial-time randomized algorithm, probabilistic polynomial-time Turing machine (p.p.t. or PPT), efficient randomized algorithm, and simply feasible algorithm interchangeably.
It is worthwhile to revisit the three above-mentioned “hard” problems with respect to randomized computation.
Computationally hard functions are essential for producing encryption schemes, but not (to our knowledge) sufficient. It turns out that we require functions with specific properties, hardness being one of them.
At a high level, there are two basic desiderata for any encryption scheme: it must be easy for the legitimate parties, who hold the key, to encrypt and decrypt, and it must be hard for an adversary, who does not hold the key, to recover the message from the ciphertext.
[Figure: x is mapped to f(x); the forward direction (computing f(x) from x) is easy, the reverse direction (recovering x from f(x)) is hard.]
This suggests that we require functions that are easy to compute but hard to invert.
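As a rough illustration of this asymmetry (the example and the numbers below are chosen here for illustration; they are not taken from the lecture): multiplying two numbers takes time polynomial in their bit length, whereas recovering the factors from the product by naive search takes time exponential in the bit length.

    def forward(p, q):
        # Easy direction: multiplication is polynomial in the bit length.
        return p * q

    def invert_by_search(N):
        # Naive inversion: trial division up to sqrt(N), i.e. roughly 2^(bits/2)
        # steps, which is exponential in the bit length of N.
        d = 2
        while d * d <= N:
            if N % d == 0:
                return d, N // d
            d += 1
        return None

    N = forward(999983, 1000003)    # product of two primes (illustrative choice)
    print(N)                        # computed instantly
    print(invert_by_search(N))      # already ~10^6 trial divisions for a ~40-bit N

Of course, this only shows that one particular inversion algorithm is slow; one-wayness requires that every efficient algorithm fails to invert, which is exactly what the definitions below make precise.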
There are several ways that the notion of one-wayness can be defined formally. We start with a definition that formalizes our intuition in the simplest way.
Definition 7 (Worst-case One-way Function) A function f : {0,1}∗ → {0,1}∗ is (worst-case) one-way if:
1. Easy to compute: there is a p.p.t. algorithm that computes f, and
2. Hard to invert: there is no p.p.t. adversary A such that for all x ∈ {0,1}∗, A(f(x)) ∈ f^{−1}(f(x)).
We will see that, assuming SAT ∉ BPP, one-way functions according to the above definition must exist (in fact, you will show that these two assumptions are equivalent). Note, however, that this definition allows for certain pathological functions: ones where inverting the function is easy for most x values, as long as every machine fails to invert f(x) for infinitely many x's. It is an open question whether such functions can still be used for good encryption schemes. This observation motivates us to refine our requirements: we want functions for which, for a randomly chosen x, the probability that we are able to invert the function is very small. With this new definition in mind, we begin by formalizing the notion of very small.