Memory-hard functions (MHFs) are a class of hash functions whose computational cost is dominated by memory cost, i.e., an evaluation that uses less memory must incur a much larger time penalty. Memory-hardness is particularly useful in the settings of password hashing and cryptocurrencies. Since Colin Percival first proposed the notion in 2009, many memory-hard hash heuristics have been proposed, and the notion and design of MHFs have received a considerable amount of theoretical scrutiny as well. However, a large gap remains between theory and practice. On the one hand, most practical schemes are heuristics without formal analysis; on the other hand, theoretical analyses usually rest on unrealistic assumptions: MHFs are treated as modes of operation of some underlying wide-block hash function H, modeled as a random oracle. Unfortunately, in practice, H is usually a heuristic design built from simpler primitives, which is far from a random oracle.
In this talk, I will present our progress in addressing both of these problems. Our main contributions are threefold. First, we prove that a widely used MHF candidate, Scrypt, is provably and optimally memory-hard. Second, we model simple cryptographic tools (e.g., AES) as the underlying ideal primitives and present a generic, provably secure MHF construction from hard-to-pebble graphs. The resulting scheme significantly narrows the efficiency gap between legitimate servers and ASIC-equipped attackers. Finally, given that practice demands H to have large outputs (to increase memory-hardness without changing the description size of MHFs), we return to the framework of constructing MHFs from H (with large output). Unlike previous work, we take the finer granularity of the hash function H into account and provide the first provably secure design of H from simpler primitives (e.g., fixed-key AES).
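To make the memory-hardness mechanism concrete, here is a minimal sketch in the style of Scrypt's ROMix core: a first phase fills a large table with a hash chain, and a second phase performs input-dependent lookups into that table, so an evaluator storing far fewer than n blocks must repeatedly recompute chain prefixes. SHA-256 stands in for the wide-block hash H (the real design uses BlockMix over Salsa20/8), and the parameters are illustrative only.

```python
import hashlib

def H(x: bytes) -> bytes:
    # Stand-in for Scrypt's wide-block hash H; SHA-256 is used here
    # purely for illustration, not as part of the real construction.
    return hashlib.sha256(x).digest()

def romix(x: bytes, n: int) -> bytes:
    """Sketch of a data-dependent memory-hard core (ROMix-style).

    Phase 1 stores V[i] = H^i(x) for i = 0..n-1.
    Phase 2 makes n pseudo-random, input-dependent accesses into V,
    which is what forces the time/memory trade-off.
    """
    # Phase 1: fill the table with a hash chain.
    v = []
    for _ in range(n):
        v.append(x)
        x = H(x)
    # Phase 2: each lookup index depends on the current state,
    # so indices cannot be predicted without evaluating the chain.
    for _ in range(n):
        j = int.from_bytes(x[:8], "little") % n
        x = H(bytes(a ^ b for a, b in zip(x, v[j])))
    return x
```

The data-dependent indices in phase 2 are the crux: an attacker who discards part of V learns only at lookup time which block is needed, and must then recompute it from the start of the chain.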