Transcript of Barthe.2009.POPL

    Formal Certification of Code-Based Cryptographic Proofs

    Gilles Barthe¹,² Benjamin Grégoire¹,³ Santiago Zanella¹,³

    ¹ Microsoft Research - INRIA Joint Centre, France
    ² IMDEA Software, Madrid, Spain
    ³ INRIA Sophia Antipolis - Méditerranée

    [email protected] {Benjamin.Gregoire,Santiago.Zanella}@sophia.inria.fr

    Abstract

    As cryptographic proofs have become essentially unverifiable, cryptographers have argued in favor of developing techniques that help tame the complexity of their proofs. Game-based techniques provide a popular approach in which proofs are structured as sequences of games, and in which proof steps establish the validity of transitions between successive games. Code-based techniques form an instance of this approach that takes a code-centric view of games, and that relies on programming language theory to justify proof steps. While code-based techniques contribute to formalize the security statements precisely and to carry out proofs systematically, typical proofs are so long and involved that formal verification is necessary to achieve a high degree of confidence. We present CertiCrypt, a framework that enables the machine-checked construction and verification of code-based proofs. CertiCrypt is built upon the general-purpose proof assistant Coq, and draws on many areas, including probability, complexity, algebra, and semantics of programming languages. CertiCrypt provides certified tools to reason about the equivalence of probabilistic programs, including a relational Hoare logic, a theory of observational equivalence, verified program transformations, and game-based techniques such as reasoning about failure events. The usefulness of CertiCrypt is demonstrated through various examples, including a proof of semantic security of OAEP (with a bound that improves upon existing published results), and a proof of existential unforgeability of FDH signatures. Our work provides a first yet significant step towards Halevi's ambitious programme of providing tool support for cryptographic proofs.

    Categories and Subject Descriptors D.3.1 [Programming Languages]: Formal Definitions and Theory; D.3.4 [Programming Languages]: Processors: Compilers, Optimization; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages: Operational semantics, Denotational semantics, Program analysis.

    General Terms Languages, Security, Verification

    1. Introduction

    Provable security [33], whose origins can be traced back to the pioneering work of Goldwasser and Micali [18], advocates a mathematical approach based on complexity theory in which the goals and requirements of cryptosystems are specified precisely, and where security proofs are carried out rigorously and make all underlying assumptions explicit. In a typical provable security setting, one reasons about effective adversaries, modeled as arbitrary probabilistic polynomial-time Turing machines, and about their probability of thwarting a security objective, e.g. secrecy. In a similar fashion, security assumptions about cryptographic primitives bound the probability of polynomial algorithms to solve hard problems, e.g. computing discrete logarithms. The security proof is performed by reduction, by showing that the existence of an effective adversary with a certain advantage in breaking security implies the existence of an effective algorithm contradicting the security assumptions. Although the adoption of provable security has significantly enhanced confidence in security proofs, several published proofs have been found incorrect (cf. [30]), and the cryptographic community is increasingly wary that the field may be approaching a crisis of rigor [8, 19].

    Copyright © ACM, 2009. This is the authors' version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 36th annual ACM SIGPLAN-SIGACT symposium on Principles of programming languages, pp. 90-101. http://doi.acm.org/10.1145/1480881.1480894

    The game-playing technique [8, 19, 31] is a general method to structure and unify cryptographic proofs, thus making them less error-prone. Its central idea is to view the interaction between an adversary and the cryptosystem as a game, and to study transformations that preserve security. In a typical game-based proof, one considers transitions of the form G, A →ʰ G′, A′, where G and G′ are games, A and A′ are events, and h is a monotonic function such that Pr_G[A] ≤ h(Pr_{G′}[A′]). One can obtain an upper bound for the probability of an event A0 in some initial game G0 by successively refining G0, A0 into a game/event pair Gn, An,

    G0, A0 →^{h1} G1, A1 →^{h2} ... →^{hn} Gn, An

    and then bounding the probability of the event An in Gn.

    Code-based techniques [8] are an instance of the game-playing technique whose distinguishing feature is to take a code-centric view of games, security hypotheses and computational assumptions, which are expressed using (probabilistic, imperative, polynomial) programs. Under this view, game transformations become program transformations, and can be justified rigorously by semantic means; in particular, many transformations can be viewed as common program optimizations, and are justified by proving that the original and transformed programs are observationally equivalent. Although code-based proofs are easier to verify, they go far beyond established theories of program equivalence and exhibit a surprisingly rich and broad set of reasoning principles that draw on program verification, algebraic reasoning, and probability and complexity theory. Thus, despite the beneficial effect of their underlying framework, code-based proofs remain inherently complex. Whereas Bellare and Rogaway [8] already observed that code-based proofs could be more easily amenable to machine-checking, Halevi [19] argued that formal verification techniques should be used to improve trust in cryptographic proofs, and set up a programme for building a tool that could be used by the cryptographic community to mechanize their proofs.

    This article reports on a first yet significant step towards Halevi's programme. We describe CertiCrypt, a framework to construct machine-checked code-based proofs in the Coq proof assistant [34], supporting:

    Faithful and rigorous encoding of games. In order to be readily accessible to cryptographers, we have chosen a formalism that is commonly used to describe games. Concretely, the lowest layer of CertiCrypt is the formalization of pWHILE, an imperative programming language with random assignments, structured datatypes, and procedure calls. We provide a deep and dependently-typed embedding of the syntax; thanks to dependent types, the typability of pWHILE programs is obtained for free. We also provide a small-step operational semantics using the measure monad of Audebaud and Paulin [4]. The semantics is instrumented to calculate the cost of running programs; this offers the means to define complexity classes, and in particular to define formally the notion of effective (probabilistic polynomial-time) adversary. In addition, we also model non-standard features, such as policies on variable accesses and procedure calls, and use them to capture many assumptions left informal in cryptographic proofs.

    Exact security. Many security proofs establish an asymptotic behavior for adversaries and show that the advantage of any effective adversary is negligible w.r.t. a security parameter (which typically determines the length of keys or messages). However, the cryptographic community is increasingly focused on exact security, a much more useful result since it gives hints as to how to choose system parameters in practice to satisfy a security guarantee. The goal of exact security is to provide concrete bounds both for the advantage of the adversary and for its execution time. CertiCrypt supports the former (but for the time being, not the latter).

    Full and independently verifiable proofs. CertiCrypt adopts a formal semanticist perspective and goes beyond Halevi's vision in two respects. First, it provides a unified framework to carry out full proofs; all intermediate steps of reasoning can be justified formally, including complex side conditions that justify the correctness of transformations (about probabilities, groups, polynomials, etc). Second, one notable feature of Coq, and thus CertiCrypt, is to support independent verifiability of proofs, which is an important motivation behind game-based proofs. More concretely, every proof is represented by a proof object, which can be checked automatically by a (small and trustworthy) proof checking engine. In order to trust a cryptographic proof, one only needs to check its statement, and not its details.

    Powerful and automated reasoning methods. CertiCrypt formalizes a Relational Hoare Logic and a theory of observational equivalence, and uses them as stepping stones to support the main tools of code-based reasoning through certified, reflective tactics. In particular, CertiCrypt shows that many transformations used in code-based proofs, including common optimizations, are semantics-preserving. One of its specific contributions is to prove formally the correctness of a variant of lazy sampling, which is used ubiquitously in cryptographic proofs. In addition, CertiCrypt supports methods based on failure events (the so-called fundamental lemma of game-playing).

    We have successfully conducted nontrivial case studies that validate our design, show the feasibility of formally verifying cryptographic proofs, and confirm the plausibility of Halevi's programme.

    Contents The purpose of this article is to provide an overview of the CertiCrypt project, and to stir further interest in machine-checked cryptographic proofs. Additional details on design choices, on formalizing the semantics of probabilistic programs, and on case studies, will be provided elsewhere. In consequence, the paper is organized as follows: we begin in Section 2 with two introductory examples of game-based proofs, namely the semantic security of ElGamal encryption, and the PRP/PRF switching lemma; in Section 3 we introduce the language we use to represent games and its semantics, and we discuss the notions of complexity and termination; in Section ?? we present a probabilistic relational Hoare logic that forms the core of our framework; in Sections 5 and 6 we overview the formulation and automation of game transformations in CertiCrypt; in Section 7 we report on two significant case studies we have formalized in CertiCrypt: existential unforgeability of the FDH signature scheme, and semantic security of OAEP; we finish with a discussion of related work and concluding remarks.

    2. Basic examples

    This section illustrates the principles of CertiCrypt on two basic examples of game-based proofs: semantic security of ElGamal encryption and the PRP/PRF switching lemma. The language used to represent games is formally introduced in the next section. We begin with some basic definitions.

    The Random Oracle Model is a model of cryptography extensively used in security proofs in which some cryptographic primitives, e.g. hash functions, are assumed to be indistinguishable from random functions (despite the fact that no real function can implement a truly random function [13]). Such primitives are modeled by oracles that return random values in response to queries. The sole condition is that queries are answered consistently: if some value is queried twice, the same response must be given. Our formalism captures the notion of random oracle using stateful procedures that store queries and their results, e.g.

    Oracle O(x) :
      if x ∉ dom(L) then y ←$ {0,1}^ℓ; L ← (x, y) :: L
      return L[x]
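    In an operational reading, such a stateful oracle can be sketched as follows (a Python model for illustration only; the class and method names are ours, not part of the formalism):

    ```python
    import secrets

    class RandomOracle:
        """Lazily sampled random function into {0,1}^l_bits.

        Mirrors the oracle O(x) above: on a fresh query, sample y
        uniformly and record (x, y) in the log L; repeated queries
        are answered consistently from L.
        """

        def __init__(self, l_bits: int):
            self.l_bits = l_bits
            self.L: dict[bytes, int] = {}  # the query log L

        def query(self, x: bytes) -> int:
            if x not in self.L:                         # if x not in dom(L)
                self.L[x] = secrets.randbits(self.l_bits)  # y <-$ {0,1}^l
            return self.L[x]                            # return L[x]
    ```

    Querying the same point twice returns the same value, which is exactly the consistency condition stated above.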

    An asymmetric encryption scheme is composed of three algorithms: key generation KG(η), where η is the security parameter; encryption Enc(pk, m), where pk is a public key and m a plaintext; and decryption (not relevant here). An asymmetric encryption scheme is said to be semantically secure (equivalently, IND-CPA secure) if it is infeasible to gain significant information about a plaintext given only a corresponding ciphertext and the public key. This is formally defined using the following game, where A and A′ are allowed to share state via global variables and thus are regarded as a single adaptive adversary:

    Game IND-CPA :
      (sk, pk) ← KG(η); (m0, m1) ← A(pk); b ←$ {0,1}; ζ ← Enc(pk, mb); b′ ← A′(pk, ζ)

    The game first generates a new key pair and gives the public key to the adversary, who returns two plaintexts m0, m1 of his choice. Then, the challenger tosses a fair coin b and gives the encryption of mb back to the adversary, whose goal is to guess which message has been encrypted. The scheme is IND-CPA if for every effective adversary (A, A′), |Pr_IND-CPA[b = b′] − 1/2| is negligible in the security parameter, i.e. the adversary cannot do much better than a blind guess. Formally, a function ν : N → R is negligible iff ∀c. ∃nc. ∀n. n ≥ nc ⇒ |ν(n)| ≤ n^(−c).

    2.1 The ElGamal encryption scheme

    ElGamal is a widely used asymmetric encryption scheme, and an emblematic example of game-based proofs, as it embodies many of the techniques described in Sections ?? and 5. The proof follows [31]; all games are defined in Fig. 1.

    Game ElGamal :
      (x, α) ← KG();
      (m0, m1) ← A(α);
      b ←$ {0,1};
      (β, ζ) ← Enc(α, mb);
      b′ ← A′(α, β, ζ);
      d ← b = b′

    Game ElGamal0 :
      x ←$ Zq; y ←$ Zq;
      (m0, m1) ← A(g^x);
      b ←$ {0,1};
      ζ ← g^(xy) × mb;
      b′ ← A′(g^x, g^y, ζ);
      d ← b = b′

    Game ElGamal1 :
      x ←$ Zq; y ←$ Zq;
      (m0, m1) ← A(g^x);
      b ←$ {0,1};
      z ←$ Zq; ζ ← g^z × mb;
      b′ ← A′(g^x, g^y, ζ);
      d ← b = b′

    Game ElGamal2 :
      x ←$ Zq; y ←$ Zq;
      (m0, m1) ← A(g^x);
      z ←$ Zq; ζ ← g^z;
      b′ ← A′(g^x, g^y, ζ);
      b ←$ {0,1};
      d ← b = b′

    Game DDH0 :
      x ←$ Zq; y ←$ Zq;
      d ← B(g^x, g^y, g^(xy))

    Game DDH1 :
      x ←$ Zq; y ←$ Zq; z ←$ Zq;
      d ← B(g^x, g^y, g^z)

    Adversary B(α, β, γ) :
      (m0, m1) ← A(α);
      b ←$ {0,1};
      b′ ← A′(α, β, γ × mb);
      return b = b′

    Lemma B_PPT : PPT B. Proof. PPT_tac. Qed.
    Lemma B_wf : WFAdv B. Proof. ... Qed.

    Tactic scripts proving the transitions (1)-(5):
      inline_l KG. inline_l Enc. ep. deadcode. swap. eqobs_in.
      inline_r B. ep. deadcode. eqobs_in.
      inline_r B. ep. deadcode. swap. eqobs_in.
      swap. eqobs_hd 4. eqobs_tl 2. apply mult_pad.

    Figure 1. Code-based proof of ElGamal semantic security

    Given a cyclic group of order q, and a generator g, we define:¹

    Key generation: KG() def= x ←$ Zq; return (x, g^x)

    Encryption: Enc(α, m) def= y ←$ Zq; return (g^y, α^y × m)
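    These two algorithms can be exercised concretely in Python over a toy group (an illustrative instantiation with a tiny safe prime; the parameters p, q, g and the helper names are ours and far too small for any real security):

    ```python
    import secrets

    # Toy cyclic group: the order-q subgroup of Z_p^* with p = 2q + 1
    # a safe prime. Illustrative parameters only.
    p = 23   # safe prime, p = 2*11 + 1
    q = 11   # group order
    g = 4    # generator of the order-11 subgroup (4 = 2^2 mod 23)

    def keygen():
        """KG() = x <-$ Z_q; return (x, g^x)."""
        x = secrets.randbelow(q)
        return x, pow(g, x, p)

    def encrypt(alpha, m):
        """Enc(alpha, m) = y <-$ Z_q; return (g^y, alpha^y * m)."""
        y = secrets.randbelow(q)
        return pow(g, y, p), (pow(alpha, y, p) * m) % p

    def decrypt(x, beta, zeta):
        """Recover m = zeta * beta^(-x); beta has order dividing q,
        so beta^(q-x) is its inverse raised to x."""
        return (zeta * pow(beta, q - x, p)) % p
    ```

    Decryption inverts encryption because zeta = g^(xy) * m and beta^x = g^(xy), so dividing out the shared secret recovers m.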

    ElGamal is IND-CPA secure under the Decisional Diffie-Hellman (DDH) assumption, which states that it is hard to distinguish between triples of the form (g^x, g^y, g^(xy)) and (g^x, g^y, g^z) where x, y, z are uniformly sampled in Zq. In our setting, DDH is formulated precisely by stating that for any polynomial-time and well-formed adversary B, |Pr_DDH0[d] − Pr_DDH1[d]| is negligible in the security parameter. Figure 1 presents a high level view of the proof: the square boxes represent games, whereas the rounded boxes represent proof sketches of the transitions between games; the tactics that appear in these boxes hopefully have self-explanatory names, but are explained in more detail in Section 5. The rounded grey boxes represent proof sketches of side conditions that guarantee that the DDH assumption is correctly applied. The proof proceeds by constructing an adversary B against DDH such that the distribution of b = b′ (i.e. d) after running the IND-CPA game ElGamal is exactly the same as the distribution of d after running DDH0. Furthermore, we show that the probability of d being true in DDH1 is 1/2 for the same adversary B. The proof is summarized by the following equations:

    |Pr_ElGamal[b = b′] − 1/2|
      = |Pr_ElGamal0[d] − 1/2|                  (1)
      = |Pr_DDH0[d] − 1/2|                      (2)
      = |Pr_DDH0[d] − Pr_ElGamal2[d]|           (3)
      = |Pr_DDH0[d] − Pr_ElGamal1[d]|           (4)
      = |Pr_DDH0[d] − Pr_DDH1[d]|               (5)

    ¹ The security parameter, implicit in this presentation, determines this cyclic group by indexing a family of groups where the DDH problem is believed intractable.

    Equation (1) is justified because ElGamal and ElGamal0 induce the same distribution on d (ElGamal ≃_d ElGamal0). To prove this, we inline the calls to KG and Enc, and then perform expression propagation and dead code elimination (ep, deadcode). At this point we are left with two almost identical games, except that the sampling of y is done later in one game than in the other. The tactic swap is used to hoist instructions whenever possible in order to obtain a common prefix, and allows us to hoist the sampling of y to the right place. We conclude by applying eqobs_in, which decides observational equivalence of a program with itself. Equations (2) and (5) are obtained similarly, while (3) holds because b′ is independent from the sampling of b in ElGamal2. Finally, to prove equation (4) we begin by removing the common part of the two games with the exception of the instruction z ←$ Zq (eqobs_hd, eqobs_tl). We then apply an algebraic property of cyclic groups (mult_pad): when multiplying a uniformly distributed element of the group by another element, the result is uniformly distributed. This allows us to prove that z ←$ Zq; ζ ← g^z × mb and z ←$ Zq; ζ ← g^z induce the same distribution on ζ.

    The proof concludes by applying the DDH assumption. We prove that the adversary B is strict probabilistic polynomial-time and well-formed (under the assumption that A and A′ are so). The proof of the former condition is automated in CertiCrypt.

    2.2 The PRP/PRF switching lemma

    In cryptographic proofs, particularly those dealing with blockciphers, it is often convenient to replace a pseudo-random permutation (PRP) by a pseudo-random function (PRF). The PRP/PRF switching lemma establishes that such a replacement does not change significantly the advantage of an effective adversary. In a code-based setting, the Switching Lemma states that

    |Pr_GPRP[d] − Pr_GPRF[d]| ≤ q(q − 1) / 2^(ℓ+1)


    Game G_PRP : L ← [ ]; d ← A()
      Oracle O(x) :
        if x ∉ dom(L) then
          y ←$ {0,1}^ℓ;
          while y ∈ img(L) do y ←$ {0,1}^ℓ;
          L ← (x, y) :: L
        return L[x]

    Game G^bad_PRP : bad ← false; L ← [ ]; d ← A()
      Oracle O(x) :
        if x ∉ dom(L) then
          y ←$ {0,1}^ℓ;
          if y ∈ img(L) then
            bad ← true;
            while y ∈ img(L) do y ←$ {0,1}^ℓ
          L ← (x, y) :: L
        return L[x]

    Game G^bad_PRF : bad ← false; L ← [ ]; d ← A()
      Oracle O(x) :
        if x ∉ dom(L) then
          y ←$ {0,1}^ℓ;
          if y ∈ img(L) then bad ← true;
          L ← (x, y) :: L
        return L[x]

    Game G_PRF : L ← [ ]; d ← A()
      Oracle O(x) :
        if x ∉ dom(L) then
          y ←$ {0,1}^ℓ; L ← (x, y) :: L
        return L[x]

    Game G^Y_PRF : Y ← [ ]; while |Y| < q do (y ←$ {0,1}^ℓ; Y ← y :: Y); Y′ ← Y; L ← [ ]; d ← A()
      Oracle O(x) :
        if x ∉ dom(L) then
          if 0 < |Y| then y ← hd(Y); Y ← tl(Y)
          else y ←$ {0,1}^ℓ;
          L ← (x, y) :: L
        return L[x]

    By Fundamental Lemma: |Pr_{G^bad_PRP}[d] − Pr_{G^bad_PRF}[d]| ≤ Pr_{G^bad_PRF}[bad]
    By lazy sampling: G_PRF is equivalent to G^Y_PRF

    Figure 2. Code-based proof of the PRP/PRF switching lemma

    where games G_PRP and G_PRF give the adversary access to an oracle that represents a random permutation and a random function respectively, and where q bounds the number of oracle queries made by A.
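    The bound is a birthday bound: a union bound over the at most q(q − 1)/2 pairs of queries, each colliding with probability 2^(−ℓ). A quick Python check (illustrative; the function names are ours) compares the exact collision probability for q uniform samples against it:

    ```python
    from fractions import Fraction

    def collision_prob(q: int, l: int) -> Fraction:
        """Exact probability that q independent uniform samples from
        {0,1}^l are *not* all distinct (i.e. a collision occurs)."""
        n = 2 ** l
        no_coll = Fraction(1)
        for i in range(q):
            no_coll *= Fraction(n - i, n)
        return 1 - no_coll

    def switching_bound(q: int, l: int) -> Fraction:
        """The bound q(q-1)/2^(l+1) from the Switching Lemma."""
        return Fraction(q * (q - 1), 2 ** (l + 1))

    # The exact collision probability never exceeds the bound:
    for q in range(20):
        assert collision_prob(q, 8) <= switching_bound(q, 8)
    ```

    For q = 2 the two quantities coincide (a single pair collides with probability exactly 2^(−ℓ)); for larger q the bound is slightly loose.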

    The proof is split in two parts: the first part formalizes the intuition that the probability of the adversary outputting a given value is the same if a PRP is replaced by a PRF and no collisions are observed; it uses the Fundamental Lemma of game-playing (Lemma 2 in Sec. 6). The second part provides an upper bound to the probability of a collision; it uses lazy sampling (Lemma 1 in Sec. 5.2).

    Figure 2 provides a high-level view of the proof. To apply the Fundamental Lemma, we introduce in the game G_PRF a variable bad that is set to true whenever a collision is found; we reformulate G_PRP accordingly to be syntactically equal until bad is set. Using deadcode to eliminate the variable bad, we show that the resulting games G^bad_PRF and G^bad_PRP are just semantics-preserving reformulations of the games G_PRF and G_PRP respectively. Then, we apply the Fundamental Lemma to conclude that the difference in the probability of d = true between the two games is at most the probability of bad being set to true in game G^bad_PRF.

    We then prove that the probability of bad being set to true in game G^bad_PRF is upper bounded by the probability of an element appearing twice in the range of L in G_PRF. The proof uses the Relational Hoare Logic and Lemma () of Section ?? with the following postcondition: if bad is set in G^bad_PRF then some element appears twice in the range of L in G_PRF. Next, we introduce a game G^Y_PRF where the answers to the first q queries to the oracle are sampled at the beginning of the game and stored in a list Y. Using lazy sampling, we prove by induction on q that the game G_PRF is equivalent to G^Y_PRF w.r.t. L. Finally, we bound the probability of having a collision in L in G^Y_PRF. To that end, we prove that any collision in L is also present in Y′ provided the length of L is less than or equal to q (we use Y′ as a ghost variable to store the value of Y after being initialized). We conclude by bounding the probability of sampling some value twice in Y by q(q − 1)/2^(ℓ+1).

    3. Games as programs

    The essence of code-based cryptographic proofs is to express in a unified semantic framework games, hypotheses, and results. This semanticist perspective allows a precise specification of the interaction between the adversary and the challenger in a game, and to readily answer questions such as: Which oracles does the adversary have access to? Can the adversary read/write this variable? How many queries can the adversary make to a given oracle? What is the length of a bitstring returned by the adversary? Can the adversary repeat a query? Furthermore, other notions such as probabilistic polynomial-time complexity or termination fit naturally in the same framework and complete the specification of adversaries and games.

    3.1 The pWHILE language

    Games are formalized in pWHILE, a probabilistic imperative language with procedure calls. Given a set V of variable identifiers, a set P of procedure identifiers, a set E of expressions, and a set D of distribution expressions, the set of commands can be defined inductively by the clauses:

    I ::= V ← E                  assignment
        | V ←$ D                 random sampling
        | if E then C else C     conditional
        | while E do C           while loop
        | V ← P(E, ..., E)       procedure call

    C ::= nil                    nop
        | I; C                   sequence
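    As a sketch, this grammar can be deep-embedded as an ordinary algebraic datatype; the following Python dataclasses are an illustrative, untyped analogue of the dependently-typed Coq embedding described below (all names are ours):

    ```python
    from dataclasses import dataclass

    # A deep embedding of the pWHILE grammar as Python dataclasses.
    # Unlike the Coq embedding, this version does not enforce
    # well-typedness of expressions in the types themselves.

    @dataclass
    class Assign:          # V <- E
        var: str
        expr: object

    @dataclass
    class Sample:          # V <-$ D
        var: str
        dist: object

    @dataclass
    class Cond:            # if E then C else C
        guard: object
        then_branch: list
        else_branch: list

    @dataclass
    class While:           # while E do C
        guard: object
        body: list

    @dataclass
    class Call:            # V <- P(E, ..., E)
        var: str
        proc: str
        args: list

    # A command C is a list of instructions (nil = the empty list).
    Command = list
    ```

    A program such as b ←$ {0,1}; if b then x ← 1 else nil is then the value [Sample("b", "{0,1}"), Cond("b", [Assign("x", 1)], [])].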

    Rather than adopting the above definition, we impose that programs in pWHILE are typed. Thus, x ← e is well-formed only if the types of x and e coincide, and if e then c1 else c2 is well-formed only if e is a boolean expression. In practice, we assume that variables and values are typed, and define a dependently typed syntax of programs. An immediate benefit of using dependent types is that the type system of Coq ensures for free the well-typedness of expressions and commands. Although the formalization is carefully designed to be extensible w.r.t. user-defined types and operators (and we do exploit this in practice), it is sufficient for the purpose of this paper to consider an instance in which values are booleans, bitstrings, natural numbers, pairs, lists, and elements of a cyclic group. Similarly, we instantiate D so that values can be uniformly sampled from the set of booleans, natural intervals of the form [0..n], and bitstrings of a certain length. It is important to note that the formalization of expressions is not restricted to many-sorted algebra: we make a critical use of dependent types to record the length of bitstrings. This is used e.g. in the definition of the IND-CPA game for OAEP in Sec. 7.2 to constrain the adversary to return two bitstrings of equal length.

    Definition 1 (Program). A program consists of a command and an environment, which maps a procedure identifier to its declaration, consisting of its formal parameters, its body, and a return expression (we use an explicit return when writing games, though),

    decl def= { params : list V; body : C; re : E } .

    The environment specifies the type of the parameters and the return expression, so that procedure calls are always well-typed.


    In a typical formalization, the environment will map procedures to closed commands, with the exception of the adversaries, whose code is unknown and thus modeled by variables of type C. This is a standard trick to deal with uninterpreted functions in a deep embedding. In the remainder of this section we assume an environment E implicitly given.

    In the rest of this paper we let ci range over C; xi over V; ei over E; di over D; and Gi over programs. The operator ⊕ denotes the bitwise exclusive or on bitstrings of equal length, and ∥ the concatenation of two bitstrings.

    3.2 Operational semantics

    Programs in pWHILE are given a small-step semantics using the measure monad M(X), whose type constructor is defined as

    M(X) def= (X → [0,1]) → [0,1]

    and whose operators unit and bind are defined as

    unit : X → M(X)                      def= λx. λf. f x
    bind : M(X) → (X → M(Y)) → M(Y)     def= λμ. λM. λf. μ (λx. M x f)

    The monad M(X) was proposed by Audebaud and Paulin [4] as a variant of the expectation monad used by Ramsey and Pfeffer [27], and builds on earlier work by Kozen [22]. The formalization of the semantics heavily relies on Paulin's axiomatization in Coq of the [0,1] real interval; for our purposes, it has been necessary to add division to the library.
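    The definitions of unit and bind transcribe directly into Python as higher-order functions (an executable sketch over exact rationals; CertiCrypt works instead in Coq over an axiomatized [0,1] interval, and the helper uniform is ours):

    ```python
    from fractions import Fraction

    # The measure monad M(X) = (X -> [0,1]) -> [0,1]: a distribution is
    # represented by the functional that maps a "continuation" f to the
    # expected value of f under the distribution.

    def unit(x):
        """unit x = lambda f. f x  (the Dirac distribution at x)."""
        return lambda f: f(x)

    def bind(mu, M):
        """bind mu M = lambda f. mu (lambda x. M x f)."""
        return lambda f: mu(lambda x: M(x)(f))

    def uniform(values):
        """Uniform distribution over a finite list, as a measure."""
        w = Fraction(1, len(values))
        return lambda f: sum((w * f(v) for v in values), Fraction(0))

    # Sample x uniformly from {0,1,2,3}, then deterministically double it;
    # measuring with the identity continuation yields the expected value.
    mu = bind(uniform([0, 1, 2, 3]), lambda x: unit(2 * x))
    mean = mu(lambda v: Fraction(v))   # 2 * (0+1+2+3)/4 = 3
    ```

    Measuring with a characteristic function instead of the identity yields a probability, which is how events are handled below.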

    The semantics of commands and expressions are defined relative to a given memory, i.e. a mapping from variables to values. We let M denote the set of memories. Expressions are deterministic; their semantics is standard and given by a function ⟦·⟧_expr that evaluates an expression in a given memory and returns a value. The semantics of distribution expressions is given by a function ⟦·⟧_distr. For a distribution expression d of type T, we have that ⟦d⟧_distr : M → M(X), where X is the interpretation of type T. For instance, in the previous section we have used {0,1}^ℓ to denote the uniform distribution on bitstrings of length ℓ (the security parameter); formally, we have

    ⟦{0,1}^ℓ⟧_distr def= λm. λf. Σ_{bs ∈ {0,1}^ℓ} 2^(−ℓ) f(bs).

    Thanks to dependent types, the semantics of expressions and distribution expressions is total. In the following, and whenever there is no confusion, we will drop the subscripts in ⟦·⟧_expr and ⟦·⟧_distr.

    The semantics of commands relates a deterministic state to a (sub-)probability distribution over deterministic states and uses a frame stack to deal with procedure calls. Formally, a deterministic state is a triple consisting of the current command c : C, a memory m : M, and a frame stack F : list frame. We let S denote the set of deterministic states. One-step execution ⇝₁ : S → M(S) is defined by the rules of Fig. 3. In the figure, we use a ⇝ μ as a notation for a⇝₁ = μ, and loc and glob to project memories on local and global variables respectively.

    We briefly comment on the transition rules for calling a procedure (3rd rule) and returning from a call (2nd rule). Upon a call, a new frame is appended to the stack, containing the destination variable, the return expression of the called procedure, the continuation to the call, and the local memory of the caller. The state resulting from the call contains the body of the called procedure, the global part of the memory, a local memory initialized to map the formal parameters to the value of the actual parameters just before the call, and the updated stack. When returning from a call with a non-empty stack, the top frame is popped, the return expression is evaluated, and the resulting value is assigned to the destination variable after restoring the local memory of the caller; the continuation taken from the frame becomes the current command. If the stack is empty when returning from a call, the execution of the program terminates and the final state is embedded into the monad using the unit operator.

    Using the monadic constructions, one can define an n-step execution function ⇝_n:

    s⇝₀ def= unit s
    s⇝_{n+1} def= bind s⇝_n ⇝₁

    Finally, the denotation of a command c in an initial memory m is defined to be the (limit) distribution of reachable final memories:

    ⟦c⟧ m : M(M) def= λf. sup { (c, m, [ ])⇝_n f|final | n ∈ N }

    where f|final : S → [0,1] is the function that when applied to a state (c, m, F) gives f(m) if it is a final state and 0 otherwise. Since the sequence (c, m, [ ])⇝_n f|final is increasing and upper bounded by 1, this least upper bound always exists and corresponds to the limit of the sequence.

    We have shown that the semantics is discrete; we use this to apply a variant of Fubini's theorem for proving the rules [R-Comp] and [R-Trans] in the next section.

    Computing probabilities The advantage of using this monadic semantics is that, if we use an arbitrary function as a continuation to the denotation of a program, what we get (for free) as a result is its expected value w.r.t. the distribution of final memories. In particular, we can compute the probability of an event A in the distribution obtained after executing a command c in an initial memory m by measuring its characteristic function 1_A: Pr_{c,m}[A] def= ⟦c⟧ m 1_A. For instance, one can verify that the denotation of x ←$ [0..1]; y ←$ [0..1] in the memory m is

    λf. 1/4 (f(m{0,0/x,y}) + f(m{0,1/x,y}) + f(m{1,0/x,y}) + f(m{1,1/x,y}))

    and conclude that the probability of the event x ≤ y after executing the command above is 3/4.
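    This calculation can be replayed executably (an illustrative enumeration over exact rationals; the helper denote is ours and hard-codes the two samplings of this particular command):

    ```python
    from fractions import Fraction
    from itertools import product

    # Denotation of  x <-$ [0..1]; y <-$ [0..1]  as a measure over final
    # memories: a function that takes a continuation f on memories and
    # returns its expected value, exactly as in the formula above.

    def denote(m0):
        def mu(f):
            total = Fraction(0)
            for x, y in product([0, 1], repeat=2):
                m = dict(m0, x=x, y=y)          # the memory m{x,y/x,y}
                total += Fraction(1, 4) * f(m)  # each of the 4 memories has weight 1/4
            return total
        return mu

    # Pr[x <= y] = [[c]] m 1_{x <= y}: measure the characteristic function.
    mu = denote({})
    pr = mu(lambda m: Fraction(1) if m["x"] <= m["y"] else Fraction(0))
    # pr == Fraction(3, 4): the event holds in 3 of the 4 equiprobable memories.
    ```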

    3.3 Probabilistic polynomial-time programs

    In general, cryptographic proofs reason about effective adversaries, which can only use a polynomially bounded number of resources. The complexity notion that captures this intuition, and which is pervasive in cryptographic proofs, is that of strict probabilistic polynomial-time. Concretely, a program is said to be strict probabilistic polynomial-time (PPT) whenever there exists a polynomial bound (on some security parameter η) on the cost of each possible execution, regardless of the outcome of its coin tosses. Otherwise said, a probabilistic program is PPT whenever the same program, seen as a non-deterministic program, terminates and the cost of each possible run is bounded by a polynomial.

    Termination and efficiency are orthogonal. Consider, for instance, the following two programs:

    b ← true; while b do b ←$ {0,1}

    b ←$ {0,1}; if b then while true do nil

    The former terminates with probability 1 (it terminates within n iterations with probability 1 − 2^(−n)), but may take an arbitrarily large number of iterations to terminate. The latter terminates with probability 1/2, but when it does, it takes only a constant time. We deal with termination and efficiency separately.

    Definition 2 (Termination). The probability that a program c terminates starting from an initial memory m is ⟦c⟧ m 1_true. We say that a program c is absolutely terminating, and note it Lossless(c), iff it terminates with probability 1 in any initial memory.

    To deal with efficiency, we non-intrusively instrument the semantics of our language to compute the cost of running a program. The instrumented semantics ranges over M(M × N) instead of simply M(M). We recall that our semantics is implicitly


    (nil, m, [ ]) → unit(nil, m, [ ])

    (nil, m, (x, e, c, l) :: F) → unit(c, (l, m.glob){⟦e⟧ m/x}, F)

    (x ← p(e); c, m, F) → unit(E(p).body, ({⟦e⟧ m/E(p).params}, m.glob), (x, E(p).re, c, m.loc) :: F)

    (if e then c1 else c2; c, m, F) → unit(c1; c, m, F)   if ⟦e⟧ m = true

    (if e then c1 else c2; c, m, F) → unit(c2; c, m, F)   if ⟦e⟧ m = false

    (while e do c; c′, m, F) → unit(c; while e do c; c′, m, F)   if ⟦e⟧ m = true

    (while e do c; c′, m, F) → unit(c′, m, F)   if ⟦e⟧ m = false

    (x ← e; c, m, F) → unit(c, m{⟦e⟧ m/x}, F)

    (x ←$ d; c, m, F) → bind (⟦d⟧ m) (λv. unit(c, m{v/x}, F))

    Figure 3. Probabilistic semantics of pWHILE programs
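    The unit/bind structure used in Figure 3 is the standard measure monad. As a quick illustration (a hypothetical Python sketch over finite sub-distributions, not CertiCrypt's Coq formalization), here is how a random sampling followed by a deterministic conditional composes:

```python
from fractions import Fraction
from collections import defaultdict

# A finite (sub-)distribution: outcome -> probability.
def unit(x):
    return {x: Fraction(1)}

def bind(mu, f):
    out = defaultdict(Fraction)
    for x, p in mu.items():
        for y, q in f(x).items():
            out[y] += p * q
    return dict(out)

# Memories as pairs (x, y); interpret, in the spirit of Fig. 3,
#   x <-$ {0,1}; if x = 1 then y <- x + 1 else y <- 0
# Sampling binds against the denotation of the distribution expression;
# the deterministic remainder is injected with unit.
d = {0: Fraction(1, 2), 1: Fraction(1, 2)}
after_sample = bind(d, lambda v: unit((v, None)))
final = bind(after_sample,
             lambda m: unit((m[0], m[0] + 1 if m[0] == 1 else 0)))
print(final)  # two final memories, each with probability 1/2
```

    The resulting distribution gives probability 1/2 to each of the two final memories, mirroring how the semantics threads distributions through sequencing.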

    parametrized by a security parameter η, on which we base our notion of complexity.

    Definition 3 (Polynomially bounded distribution). We say that a distribution μ : M(M × N) is (p, q)-bounded, where p and q are polynomials, whenever for every (m, n) occurring with non-zero probability in μ, the size of every value in the memory m is bounded by p(η) and the cost n is bounded by q(η). This notion is formally defined by means of the range predicate introduced in Sec. ??.

    Definition 4 (Strict probabilistic polynomial-time program). A program c is strict probabilistic polynomial-time (PPT) iff it terminates absolutely, and there exist polynomial transformers F, G such that for every (p, q)-bounded distribution μ, the distribution (bind μ ⟦c⟧) is (F(p), q + G(p))-bounded.

    We can recover some intuition by taking μ = unit(m, 0) in the above definition. In this case, we may paraphrase the condition as follows: if the size of values in m is bounded by some polynomial p, and an execution of the program in m terminates with non-zero probability in memory m′, then the size of values in m′ is bounded by the polynomial F(p), and the cost of the execution is bounded by the polynomial G(p). It is this latter polynomial, bounding the cost of executing the program, that we are ultimately interested in. The increased complexity in the definition is required for proving compositionality results (e.g. the sequential composition of two PPT programs results in a PPT program).
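    The compositionality claim for sequences can be made concrete with a small sketch in which polynomials are ordinary functions of the security parameter. The transformers F1, G1, F2, G2 below are made-up examples, not drawn from the paper; the point is only the shape of the composed bound for c1; c2:

```python
# Illustrative sketch (not from the paper): polynomials in the security
# parameter eta are plain functions eta -> int, and a polynomial transformer
# maps a polynomial bound to a polynomial bound.
F1 = lambda p: (lambda eta: 2 * p(eta))   # c1 at most doubles value sizes...
G1 = lambda p: (lambda eta: p(eta))       # ...at cost p(eta)
F2 = lambda p: (lambda eta: p(eta) ** 2)  # c2 at most squares value sizes...
G2 = lambda p: (lambda eta: 3 * p(eta))   # ...at cost 3*p(eta)

def seq_transformers(F1, G1, F2, G2):
    """Transformers witnessing that c1; c2 is PPT: sizes pass through F1 then
    F2, and c2's cost is paid on memories already grown to F1(p)."""
    F = lambda p: F2(F1(p))
    G = lambda p: (lambda eta: G1(p)(eta) + G2(F1(p))(eta))
    return F, G

F, G = seq_transformers(F1, G1, F2, G2)
p = lambda eta: eta + 1
print(F(p)(10), G(p)(10))  # 484 77
```

    Note that c2's cost transformer is applied to F1(p), not p: the second program runs on memories already enlarged by the first, which is exactly why the definition quantifies over arbitrary (p, q)-bounded input distributions.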

    Although our formalizations of termination and efficiency rely on semantic definitions, it is not necessary for users to reason directly about the semantics of a program to prove it meets those definitions. CertiCrypt implements a certified algorithm showing that every program without loops and recursive calls is lossless.²

    CertiCrypt also provides another algorithm that, together with the first, establishes that a program is PPT provided that, additionally, the program does not contain expressions that might generate values of superpolynomial size or take a superpolynomial time when evaluated in a polynomially bounded memory.

    3.4 Adversaries

    In order to reason formally about security, we make explicit which variables and procedures are accessible to adversaries, and provide a simple analysis to check whether an adversary respects its policy. Given a set of procedure identifiers O (the procedures that may be called by the adversary), and sets of global variables G_A (those

    ² It is of course a weak result in terms of termination of probabilistic programs, but nevertheless sufficient as regards cryptographic applications. Extending our formalization to a certified termination analysis for loops is interesting, but orthogonal to our main goals, and left for future work.

    I ⊢ nil : I

    I ⊢ i : I′    I′ ⊢ c : O
    ————————————————————————
    I ⊢ i; c : O

    Writable(x)    fv(e) ⊆ I          Writable(x)    fv(d) ⊆ I
    ————————————————————————          ————————————————————————
    I ⊢ x ← e : I ∪ {x}               I ⊢ x ←$ d : I ∪ {x}

    fv(e) ⊆ I    I ⊢ cᵢ : Oᵢ   i = 1, 2
    ———————————————————————————————————
    I ⊢ if e then c1 else c2 : O1 ∩ O2

    fv(e) ⊆ I    I ⊢ c : I
    ——————————————————————
    I ⊢ while e do c : I

    fv(e) ⊆ I    Writable(x)    o ∈ O
    —————————————————————————————————
    I ⊢ x ← o(e) : I ∪ {x}

    fv(e) ⊆ I    Writable(x)    wf A
    ————————————————————————————————
    I ⊢ x ← A(e) : I ∪ {x}

    G_A ∪ G_ro ∪ A_E.params ⊢ A_E.body : O    fv(A_E.re) ⊆ O
    ————————————————————————————————————————————————————————
    wf A

    Writable(x) def= Local(x) ∨ x ∈ G_A        A_E def= E(A)

    Figure 4. Static analysis for well-formedness of adversaries

    that can be read and written by the adversary) and G_ro (those that the adversary can only read), we say that an adversary A is well-formed in an environment E if the judgment wf A can be derived using the rules in Fig. 4. These rules guarantee that each time a variable is written by the adversary, the adversary has the right to do so; and that each time a variable is read by the adversary, it is either a global variable in G_A ∪ G_ro or a local variable previously initialized. A well-formed adversary is free to call oracles, but any other procedure it calls must be a well-formed adversary itself.
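    To make the flavor of the analysis concrete, here is an illustrative Python sketch of the initialized-variables check of Fig. 4 for a toy command representation (the AST shape, oracle names, and variable sets are hypothetical; CertiCrypt's actual implementation is a certified Coq function over the full language):

```python
# Hypothetical AST: each command carries the free variables fv of its
# expression as a Python set.
#   ("assign", x, fv)  ("sample", x, fv)  ("call", x, proc, fv)
#   ("if", fv, c1, c2)  ("while", fv, body)

ORACLES = {"H", "Sign"}   # procedures the adversary may call
GA      = {"g_rw"}        # globals readable and writable by the adversary
GRO     = {"g_ro"}        # globals the adversary may only read
LOCALS  = {"x", "y", "m"} # the adversary's local variables

def writable(x):
    return x in LOCALS or x in GA

def check(cmds, I):
    """Return the variables initialized after cmds, or raise AssertionError."""
    for c in cmds:
        if c[0] in ("assign", "sample"):          # x <- e  /  x <-$ d
            _, x, fv = c
            assert writable(x) and fv <= I, c
            I = I | {x}
        elif c[0] == "call":                      # x <- o(e)
            _, x, o, fv = c
            assert writable(x) and o in ORACLES and fv <= I, c
            I = I | {x}
        elif c[0] == "if":                        # if e then c1 else c2
            _, fv, c1, c2 = c
            assert fv <= I, c
            I = check(c1, I) & check(c2, I)       # O1 intersect O2
        elif c[0] == "while":                     # while e do c
            _, fv, body = c
            assert fv <= I, c
            check(body, I)                        # loop output stays I
    return I

# A well-formed adversary body: reads g_ro, queries an oracle, writes g_rw.
body = [("assign", "x", {"g_ro"}),
        ("call", "y", "H", {"x"}),
        ("assign", "g_rw", {"y"})]
print(check(body, GA | GRO))
```

    An ill-formed body, such as one writing a variable outside LOCALS and GA or reading an uninitialized local, fails the corresponding assertion, mirroring how the judgment of Fig. 4 becomes underivable.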

    Additional constraints may be imposed on adversaries. For example, exact security proofs usually impose an upper bound on the number of calls adversaries can make to a given oracle, whereas for some properties such as IND-CCA2 there are restrictions on the parameters with which the oracles may be called. Likewise, some proofs impose extra conditions such as forbidding repeated or malformed queries. These kinds of properties can be formalized using lists that record the oracle calls, and verifying as a postcondition that the calls were legitimate.

    4. Relational Hoare Logic

    Shoup [31] classifies proof steps into three categories: transitions based on indistinguishability, which typically involve applying a security hypothesis, e.g. the DDH assumption; transitions based on failure events, which typically amount to bounding the probability of bad, as in the Switching Lemma; and bridging steps, which correspond to replacing or reorganizing code in a way that is not observable by adversaries. In some circumstances, a bridging transition from G1 to G2 may replace a program fragment P by another fragment P′ observationally equivalent to P. In general, however,


    ⊨ E1, nil ∼ E2, nil : Φ ⇒ Φ   [R-Skip]

    ⊨ E1, c1 ∼ E2, c2 : Ψ ⇒ Φ′    ⊨ E1, c1′ ∼ E2, c2′ : Φ′ ⇒ Φ
    ——————————————————————————————————————————————————————————— [R-Seq]
    ⊨ E1, c1; c1′ ∼ E2, c2; c2′ : Ψ ⇒ Φ

    ⊨ E1, x1 ← e1 ∼ E2, x2 ← e2 : (λm1 m2. Φ (m1{⟦e1⟧ m1/x1}) (m2{⟦e2⟧ m2/x2})) ⇒ Φ   [R-Ass]

    Ψ m1 m2 def= ⟦d1⟧ m1 =_(f m1 m2) ⟦d2⟧ m2 ∧ ∀v ∈ supp(⟦d1⟧ m1). Φ (m1{v/x1}) (m2{f m1 m2 v/x2})
    ——————————————————————————————————————————————————————————————————————————————————————————— [R-Rand]
    ⊨ E1, x1 ←$ d1 ∼ E2, x2 ←$ d2 : Ψ ⇒ Φ

    ∀m1 m2. Ψ m1 m2 ⇒ ⟦e1⟧ m1 = ⟦e2⟧ m2
    ⊨ E1, c1 ∼ E2, c2 : Ψ ∧ ⟦e1⟧⟨1⟩ ⇒ Φ    ⊨ E1, c1′ ∼ E2, c2′ : Ψ ∧ ¬⟦e1⟧⟨1⟩ ⇒ Φ
    —————————————————————————————————————————————————————————————————————————— [R-Cond]
    ⊨ E1, if e1 then c1 else c1′ ∼ E2, if e2 then c2 else c2′ : Ψ ⇒ Φ

    ∀m1 m2. Φ m1 m2 ⇒ ⟦e1⟧ m1 = ⟦e2⟧ m2    ⊨ E1, c1 ∼ E2, c2 : Φ ∧ ⟦e1⟧⟨1⟩ ⇒ Φ
    —————————————————————————————————————————————————————————————————————————— [R-Whl]
    ⊨ E1, while e1 do c1 ∼ E2, while e2 do c2 : Φ ⇒ Φ ∧ ¬⟦e1⟧⟨1⟩

    ⊨ G1 ∼ G2 : Ψ′ ⇒ Φ′    ∀m1 m2. Ψ m1 m2 ⇒ Ψ′ m1 m2    ∀m1 m2. Φ′ m1 m2 ⇒ Φ m1 m2
    ——————————————————————————————————————————————————————————————————————————————— [R-Sub]
    ⊨ G1 ∼ G2 : Ψ ⇒ Φ

    ⊨ G1 ∼ G2 : Ψ ⇒ Φ    SYM(Ψ)    SYM(Φ)
    ————————————————————————————————————— [R-Sym]
    ⊨ G2 ∼ G1 : Ψ ⇒ Φ

    ⊨ G1 ∼ G : Ψ ⇒ Φ    ⊨ G ∼ G2 : Ψ ⇒ Φ    PER(Ψ)    PER(Φ)
    ————————————————————————————————————————————————————————— [R-Tr]
    ⊨ G1 ∼ G2 : Ψ ⇒ Φ

    ⊨ G1 ∼ G2 : Ψ ∧ Ψ′ ⇒ Φ    ⊨ G1 ∼ G2 : Ψ ∧ ¬Ψ′ ⇒ Φ
    —————————————————————————————————————————————————— [R-Case]
    ⊨ G1 ∼ G2 : Ψ ⇒ Φ

    Figure 5. Selection of derived rules of pRHL

    By instantiating f and g to 𝟙true, one can observe that observational equivalence enjoys some form of termination sensitivity:

    (⊨ G1 ∼ G2 : Ψ ⇒ Φ) ∧ Ψ m1 m2  ⟹  ⟦G1⟧ m1 𝟙true = ⟦G2⟧ m2 𝟙true

    This interpretation of pRHL judgments is strongly connected to the relation between relational logics and information flow [3, 9]. We extensively use (≃), and its variant (≲) below, to fall back from the world of pRHL into the world of probabilities, in which security statements are expressed:

    ⊨ G1 ∼ G2 : Ψ ⇒ (f ≐ g)  ∧  Ψ m1 m2   ⟹   ⟦G1⟧ m1 f = ⟦G2⟧ m2 g    (≃)

    where f ≐ g def= λm1 m2. f(m1) = g(m2).

    We conclude with an example that nicely illustrates some of the intricacies of pRHL. Let c = b ←$ {0, 1} and define

    m1 Φ m2 def= m1 b = m2 b

    Then ⊨ c ∼ c : true ⇒ Φ. Indeed, take μ such that μ 𝟙(0,0) = μ 𝟙(1,1) = 1/2 and μ 𝟙(0,1) = μ 𝟙(1,0) = 0. One can check that μ ensures ⟦c⟧ m ∼Φ ⟦c⟧ m′ for all m and m′. This example shows why the lifting of a binary relation involves an existential quantification, and why it is not possible to always instantiate μ as the product distribution in the definition of ∼ (one cannot establish the above judgment using the product distribution). Perhaps more surprisingly,

    ⊨ c ∼ c : true ⇒ ¬Φ

    also holds. Indeed, take μ such that μ 𝟙(0,0) = μ 𝟙(1,1) = 0 and μ 𝟙(0,1) = μ 𝟙(1,0) = 1/2. One can check that μ ensures ⟦c⟧ m ∼¬Φ ⟦c⟧ m′ for all m and m′. Thus, the obvious rule

    ⊨ G1 ∼ G2 : Ψ ⇒ Φ    ⊨ G1 ∼ G2 : Ψ ⇒ Φ′
    ———————————————————————————————————————
    ⊨ G1 ∼ G2 : Ψ ⇒ Φ ∧ Φ′

    is unsound. While this example may seem unintuitive or even inconsistent if one reasons in terms of deterministic states, its intuitive significance in a probabilistic setting is that neither of the relations Φ and ¬Φ is enough to tell apart the two final distributions.
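    The two witness distributions μ can be checked mechanically. A small Python sketch (illustrative only) with exact rationals verifies that both are couplings of the coin distribution with itself, that each supports its respective relation, and that the product distribution supports neither:

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)
coin = {0: half, 1: half}

def is_coupling(mu, d1, d2):
    """mu is a distribution over pairs whose marginals are d1 and d2."""
    ok1 = all(sum(mu.get((a, b), 0) for b in d2) == d1[a] for a in d1)
    ok2 = all(sum(mu.get((a, b), 0) for a in d1) == d2[b] for b in d2)
    return ok1 and ok2

def lifts(mu, rel):
    """mu witnesses the lifting of rel: its support satisfies rel."""
    return all(rel(a, b) for (a, b), p in mu.items() if p > 0)

eq  = lambda a, b: a == b
neq = lambda a, b: a != b

mu_eq  = {(0, 0): half, (1, 1): half}   # witnesses  c ~ c : true ==> Phi
mu_neq = {(0, 1): half, (1, 0): half}   # witnesses  c ~ c : true ==> not Phi
prod   = {p: Fraction(1, 4) for p in product(coin, coin)}

print(is_coupling(mu_eq, coin, coin) and lifts(mu_eq, eq))    # True
print(is_coupling(mu_neq, coin, coin) and lifts(mu_neq, neq)) # True
print(lifts(prod, eq) or lifts(prod, neq))                    # False
```

    The last line shows concretely why the existential quantification over μ matters: the product distribution is a coupling, but it establishes neither postcondition.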

    4.2 Observational equivalence

    Observational equivalence is derived as an instance of relational Hoare judgments in which pre- and postconditions are restricted to relations based on equality over a subset of variables. Given a set X of variables, we define =X as

    m1 =X m2 def= ∀x ∈ X. m1 x = m2 x

    Then, the observational equivalence of G1 and G2 w.r.t. an input set I and an output set O is defined as

    ⊨ G1 ≃ᴵ_O G2  def=  ⊨ G1 ∼ G2 : =I ⇒ =O

    All derived rules for pRHL can be specialized to the case of observational equivalence. For example, we have

    ⟦e1⟧ ≡I ⟦e2⟧    ⊨ E1, c1 ≃ᴵ_O E2, c2    ⊨ E1, c1′ ≃ᴵ_O E2, c2′
    —————————————————————————————————————————————————————————————
    ⊨ E1, if e1 then c1 else c1′ ≃ᴵ_O E2, if e2 then c2 else c2′

    where ⟦e1⟧ ≡I ⟦e2⟧ iff for all memories m1 and m2, m1 =I m2 implies ⟦e1⟧ m1 = ⟦e2⟧ m2.

    To support automation, CertiCrypt implements a calculus of variable dependencies and provides two functions, eqobsIn and eqobsOut, that given a command c and a set O (respectively I) of output (input) variables compute a set I (O) of input (output) variables such that ⊨ E1, c ≃ᴵ_O E2, c. CertiCrypt also provides

    tactics for two variants of observational equivalence that are widely used in game-based proofs, extending ≃ with an additional precondition Ψ or postcondition Φ, namely

    ⊨ G1 ≃^(I∧Ψ)_O G2  def=  ⊨ G1 ∼ G2 : =I ∧ Ψ ⇒ =O

    ⊨ G1 ≃ᴵ_(O∧Φ) G2  def=  ⊨ G1 ∼ G2 : =I ⇒ =O ∧ Φ

    These tactics use a (sound but incomplete) weakest precondition calculus for relational judgments.
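    The dependency calculus behind eqobsIn can be illustrated for straight-line code by a single backward pass (a simplified, hypothetical Python sketch; CertiCrypt's certified version covers the full language):

```python
# Hypothetical AST for straight-line code:
#   ("assign", x, fv)  -- x <- e, with fv the free variables of e
#   ("sample", x)      -- x <-$ d, for a fixed distribution d

def eqobs_in(cmds, O):
    """Compute I such that two runs of the same program agreeing on I
    terminate agreeing on O (the eqobsIn direction)."""
    I = set(O)
    for c in reversed(cmds):
        if c[0] == "assign":
            _, x, fv = c
            if x in I:
                I = (I - {x}) | fv   # x now depends on the rhs variables
        elif c[0] == "sample":
            _, x = c
            # the same program runs on both sides, so the two samplings can
            # be coupled identically: no input agreement is needed for x
            I = I - {x}
    return I

prog = [("sample", "r"),
        ("assign", "y", {"x", "r"}),
        ("assign", "z", {"y", "w"})]
print(eqobs_in(prog, {"z"}))  # the input set {'x', 'w'}
```

    Walking backwards from O = {z}: z depends on y and w, y depends on x and the sampled r, and r needs no input agreement, leaving I = {x, w}.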

    5. Proof methods for bridging steps

    CertiCrypt provides a powerful set of tactics and algebraic equivalences to automate bridging steps. Most tactics rely on an implementation of a certified optimizer for pWHILE, with the exception of lazy sampling, which has an ad hoc implementation. Algebraic


    and O_e, respectively. Lemma 1 can be applied taking e = x ∉ dom(L), z = y, and c1 = O_e(x), c2 = O_l(x) to safely replace oracle O_l by oracle O_e in the environment of a program, sampling y at the beginning. Whenever a bound for the number of queries to a random oracle is known in advance, the above trick can be applied iteratively to completely remove randomness from the oracle code, as is done in the proof of the Switching Lemma in Sec. 2.2.

    5.3 Algebraic equivalences

    Bridging steps frequently use algebraic properties of language operators. The proof of semantic security of ElGamal uses the fact that in a cyclic multiplicative group, multiplication by a uniformly sampled element acts as a one-time pad:

    x ←$ Zq; α ← gˣ × β  ≃^{β}_{α}  y ←$ Zq; α ← g^y

    In the proof of security of OAEP we use optimistic sampling:

    x ←$ {0,1}ᵏ; y ← x ⊕ z  ≃^{z}_{x,y,z}  y ←$ {0,1}ᵏ; x ← y ⊕ z

    and incremental sampling modulo a permutation f:

    x ←$ {0,1}ᵏ; y ←$ {0,1}ˡ; z ← f(x∥y)  ≃_{z}  z ←$ {0,1}ᵏ⁺ˡ

    We show the usefulness of [R-Rand] by sketching the proof of optimistic sampling, as promised in Section ??. For readability, we let e⟨i⟩ denote ⟦e⟧ m_i. Define

    Ψ def= z⟨1⟩ = z⟨2⟩        Φ def= x⟨1⟩ = x⟨2⟩ ∧ y⟨1⟩ = y⟨2⟩ ∧ z⟨1⟩ = z⟨2⟩

    Let Φ′ def= Φ {x⟨1⟩ ⊕ z⟨1⟩/y⟨1⟩, y⟨2⟩ ⊕ z⟨2⟩/x⟨2⟩}. Then, by [R-Ass] we have

    ⊨ y ← x ⊕ z ∼ x ← y ⊕ z : Φ′ ⇒ Φ

    Now take f m1 m2 v def= v ⊕ z⟨2⟩, and apply [R-Rand] and [R-Sub] to obtain ⊨ x ←$ {0,1}ᵏ ∼ y ←$ {0,1}ᵏ : Ψ ⇒ Φ′. Note that the precondition we obtain after applying [R-Rand] is equivalent to Ψ because f is a bijection on {0,1}ᵏ, and because ∀v. Φ′{v/x⟨1⟩, v ⊕ z⟨2⟩/y⟨2⟩} is equivalent to z⟨1⟩ = z⟨2⟩ by algebraic properties of the ⊕ operator. We conclude by applying [R-Seq].
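    The optimistic sampling equivalence can also be checked by brute force for small k. The following sketch enumerates both sides (an illustration only; the pRHL derivation above is what CertiCrypt actually certifies):

```python
k = 3
BITS = range(2 ** k)

def lhs(z):
    # x <-$ {0,1}^k; y <- x xor z : outcomes (x, y, z), each equally likely
    return sorted((x, x ^ z, z) for x in BITS)

def rhs(z):
    # y <-$ {0,1}^k; x <- y xor z : outcomes (x, y, z), each equally likely
    return sorted((y ^ z, y, z) for y in BITS)

# For every value of the common input z, the two uniform distributions over
# (x, y, z) coincide, as the optimistic-sampling equivalence claims.
print(all(lhs(z) == rhs(z) for z in BITS))  # True
```

    Each side yields exactly the 2ᵏ triples (x, y, z) with y = x ⊕ z, uniformly, so the final memories agree on {x, y, z} whenever the initial memories agree on {z}.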

    6. Proof methods for failure events

    A technique used very often to relate two games is based on what cryptographers call failure events. This technique relies on a fundamental lemma that allows one to bound the difference in the probability of a given event in two games: one identifies a failure event and argues that both games behave identically until this event occurs. One can then bound the difference in probability of another event by the probability of occurrence of the failure event in either game.

    Lemma 2 (Fundamental lemma). Let G1 and G2 be two games, A an event defined on G1, B an event defined on G2, and F an event defined in both games. If

    Pr_G1[A ∧ ¬F] = Pr_G2[B ∧ ¬F]   and   Pr_G1[F] ≤ Pr_G2[F]

    then |Pr_G1[A] − Pr_G2[B]| ≤ Pr_G2[F].⁶

    In code-based proofs, the failure condition is generally indicated by setting a global flag variable (usually called bad) to true. This specialization allows us to define a syntactic criterion for deciding whether two games behave equivalently up to the raising of the failure condition: we say that two games G1 and G2 are equal up to bad, and note it uptobad(G1, G2), whenever they are syntactically equal up to every point where the bad flag is set to true, and they do not reset the bad flag to false afterward. For instance, games G^bad_PRP and

    ⁶ The second hypothesis is usually omitted in the literature under the assumption that both games are absolutely terminating. In that case, either G1 or G2 will do on the right-hand side.

    Oracle Sign(m) :
      S ← m :: S;
      r ← H(m);
      return f⁻¹(r)

    Oracle H(m) :
      if m ∉ dom(L) then
        r ←$ {0,1}ᵏ; L ← (m, r) :: L
      return L[m]

    Game EUF_FDH :
      L ← [ ]; S ← [ ];
      (m, x) ← A();
      d ← H(m)

    Figure 6. Initial game in the proof of FDH unforgeability

    G^bad_PRF in Fig. 2 satisfy this condition. We have used this syntactic criterion to implement a specialization of the fundamental lemma for game-based proofs.

    Lemma 3 (Syntactic criterion for fundamental lemma).

    ∀G1 G2 A. uptobad(G1, G2) ⟹ Pr_G1[A ∧ ¬bad] = Pr_G2[A ∧ ¬bad]

    The first hypothesis in Lemma 2 may be proved automatically by using this syntactic criterion. To prove the second hypothesis it suffices to show that game G2 is absolutely terminating, for which we have already implemented a semi-decision procedure (see Sec. 3.3).
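    A toy instance of the fundamental lemma can be computed exactly. In the hypothetical pair of games below, the only code that differs is what runs after bad is set, so Lemma 2 applies with A = B = (x = 1) and F = bad:

```python
from fractions import Fraction

quarter = Fraction(1, 4)

def game(variant):
    """Two games equal up to bad: r is uniform on {0..3}, bad is raised on
    r = 0, and only the code guarded by bad differs between the variants."""
    dist = {}  # (x, bad) -> probability
    for r in range(4):
        if r == 0:
            bad, x = True, (1 if variant == 1 else 0)  # the differing code
        else:
            bad, x = False, r % 2                      # the common code
        key = (x, bad)
        dist[key] = dist.get(key, 0) + quarter
    return dist

def pr(dist, pred):
    return sum(p for outcome, p in dist.items() if pred(*outcome))

g1, g2 = game(1), game(2)
a1 = pr(g1, lambda x, bad: x == 1)
a2 = pr(g2, lambda x, bad: x == 1)
f  = pr(g2, lambda x, bad: bad)
# A and not-bad coincides in both games; |Pr[A] - Pr[B]| <= Pr[bad].
print(pr(g1, lambda x, bad: x == 1 and not bad) ==
      pr(g2, lambda x, bad: x == 1 and not bad))  # True
print(abs(a1 - a2) <= f)                          # True
```

    Here Pr_G1[A] = 3/4, Pr_G2[B] = 1/2, and Pr[bad] = 1/4, so the bound of Lemma 2 is tight in this instance.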

    7. Case studies

    The purpose of this section is to announce the successful completion of two experiments that validate the design and usability of CertiCrypt. A detailed presentation of these works will be given elsewhere.

    7.1 Existential unforgeability of FDH

    The Full Domain Hash (FDH) scheme is a hash-and-sign signature scheme based on the RSA family of trapdoor permutations, in which the message is hashed onto the full domain of the RSA function. However, the same construction (and the reduction that proves its security) remains valid for any family of trapdoor permutations. We have proved that FDH is existentially unforgeable under adaptive chosen-message attacks in the random oracle model, for the improved bound in [15]. The proof is about 3,500 lines long.

    We will consider a generic family of trapdoor permutations f (and their inverses f⁻¹) indexed by the security parameter, and an ideal hash function H : {0,1}* → {0,1}ᵏ, which we model as a random oracle. The initial game of the proof is shown in Fig. 6.

    Definition 6 (Trapdoor permutation security). We say that a trapdoor permutation is (t, ε)-secure if an inverter running within time t(k), when given a challenge y uniformly drawn from {0,1}ᵏ, succeeds in finding f⁻¹(y) with probability at most ε(k), i.e. if for any well-formed adversary B running within time t(k) in G_f we have Pr_Gf[x = f⁻¹(y)] ≤ ε(k), where

    Game G_f : y ←$ {0,1}ᵏ; x ← B(y)

    Theorem 1 (FDH exact security). Assume the underlying trapdoor permutation is (t, ε)-secure. Then, a forger that makes at most q_hash and q_sign queries to the hash and signing oracles respectively, and runs within time t′(k) = t(k) − (q_hash(k) + q_sign(k) + 1) · O(T_f), succeeds in forging a signature for a new message (different from the ones asked to the signing oracle) with probability at most

    ε′(k) = q_sign(k) · (1 − 1/(q_sign(k) + 1))^−(q_sign(k)+1) · ε(k)  ≈  exp(1) · q_sign(k) · ε(k)

    i.e. for a well-formed adversary A running within t′(k) we have Pr_EUF_FDH[d = f(x)] ≤ ε′(k).
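    The success factor in the bound can be checked numerically: (1 − 1/(q+1))^−(q+1) decreases towards exp(1) as q grows, which gives the familiar ε′ ≈ exp(1) · q_sign · ε reading of Coron's bound:

```python
import math

# The per-query factor in Theorem 1: (1 - 1/(q+1))**(-(q+1)).
def factor(q):
    return (1 - 1 / (q + 1)) ** (-(q + 1))

for q in (1, 10, 1000):
    print(q, factor(q))  # 4.0 at q = 1, decreasing towards e ~ 2.71828
```

    Already at q_sign = 10 the factor is below 2.86, so the multiplicative loss of the reduction is essentially e · q_sign for realistic query budgets.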


    Future work Numerous research directions remain to be explored. Our first objective is to strengthen our result for OAEP. It would also be useful to formalize cryptographic meta-results such as the equivalence between IND-CPA and IND-CCA2 under plaintext awareness. Another objective would be to formalize proofs of computational soundness of the symbolic model, see e.g. [1], and proofs of automated methods for proving security of primitives and protocols, see e.g. [16, 23]. Finally, it would also be worthwhile to explore applications of CertiCrypt outside cryptography, in particular to randomized algorithms and complexity.

    Acknowledgments

    We would like to acknowledge Romain Janvier and Federico Olmedo for their enthusiastic participation in the project, Christine Paulin for the support provided in using her library, and the anonymous reviewers for their insightful comments.

    References

    [1] M. Abadi and P. Rogaway. Reconciling two views of cryptography (the computational soundness of formal encryption). Journal of Cryptology, 15(2):103–127, 2002.

    [2] R. Affeldt, M. Tanaka, and N. Marti. Formal proof of provable security by game-playing in a proof assistant. In Proceedings of the International Conference on Provable Security, volume 4784 of Lecture Notes in Computer Science, pages 151–168. Springer, 2007.

    [3] T. Amtoft, S. Bandhakavi, and A. Banerjee. A logic for information flow in object-oriented programs. In Proceedings of the 33rd ACM Symposium on Principles of Programming Languages, pages 91–102. ACM Press, 2006.

    [4] P. Audebaud and C. Paulin-Mohring. Proofs of randomized algorithms in Coq. Science of Computer Programming, 2008.

    [5] M. Backes and P. Laud. Computationally sound secrecy proofs by mechanized flow analysis. In Proceedings of the 13th ACM Conference on Computer and Communications Security, pages 370–379. ACM Press, 2006.

    [6] G. Barthe, J. Cederquist, and S. Tarento. A machine-checked formalization of the generic model and the random oracle model. In 2nd International Joint Conference on Automated Reasoning, pages 385–399. Springer-Verlag, 2004.

    [7] M. Bellare and P. Rogaway. Optimal asymmetric encryption – How to encrypt with RSA. In Advances in Cryptology – EUROCRYPT'94, volume 950 of Lecture Notes in Computer Science, pages 92–111. Springer-Verlag, 1995.

    [8] M. Bellare and P. Rogaway. The security of triple encryption and a framework for code-based game-playing proofs. In Advances in Cryptology – EUROCRYPT'06, volume 4004 of Lecture Notes in Computer Science, pages 409–426, 2006.

    [9] N. Benton. Simple relational correctness proofs for static analyses and program transformations. In Proceedings of the 31st ACM Symposium on Principles of Programming Languages, pages 14–25. ACM Press, 2004.

    [10] Y. Bertot, B. Grégoire, and X. Leroy. A structured approach to proving compiler optimizations based on dataflow analysis. In International Workshop on Types for Proofs and Programs, volume 3839 of LNCS, pages 66–81. Springer-Verlag, 2006.

    [11] B. Blanchet. A computationally sound mechanized prover for security protocols. In IEEE Symposium on Security and Privacy, pages 140–154, 2006.

    [12] B. Blanchet and D. Pointcheval. Automated security proofs with sequences of games. In Advances in Cryptology – CRYPTO'06, volume 4117 of Lecture Notes in Computer Science, pages 537–554. Springer-Verlag, 2006.

    [13] R. Canetti, O. Goldreich, and S. Halevi. The random oracle methodology, revisited. J. ACM, 51(4):557–594, 2004.

    [14] R. Corin and J. den Hartog. A probabilistic Hoare-style logic for game-based cryptographic proofs. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming, volume 4052 of LNCS, pages 252–263, 2006.

    [15] J.-S. Coron. On the exact security of Full Domain Hash. In Advances in Cryptology, volume 1880 of Lecture Notes in Computer Science, pages 229–235. Springer-Verlag, 2000.

    [16] J. Courant, M. Daubignard, C. Ene, P. Lafourcade, and Y. Lakhnech. Towards automated proofs for asymmetric encryption in the random oracle model. In Computer and Communications Security. ACM Press, 2008.

    [17] E. Fujisaki, T. Okamoto, D. Pointcheval, and J. Stern. RSA-OAEP is secure under the RSA assumption. Journal of Cryptology, 17(2):81–104, 2004.

    [18] S. Goldwasser and S. Micali. Probabilistic encryption. J. Comput. Syst. Sci., 28(2):270–299, 1984.

    [19] S. Halevi. A plausible approach to computer-aided cryptographic proofs. Cryptology ePrint Archive, Report 2005/181, 2005.

    [20] J. Hurd, A. McIver, and C. Morgan. Probabilistic guarded commands mechanized in HOL. Theor. Comput. Sci., 346(1):96–112, 2005.

    [21] B. Jonsson, K. G. Larsen, and W. Yi. Probabilistic extensions of process algebras. In Handbook of Process Algebra, pages 685–711. Elsevier, 2001.

    [22] D. Kozen. Semantics of probabilistic programs. J. Comput. Syst. Sci., 22:328–350, 1981.

    [23] P. Laud. Semantics and program analysis of computationally secure information flow. In European Symposium on Programming, volume 2028 of Lecture Notes in Computer Science, pages 77–91. Springer-Verlag, 2001.

    [24] X. Leroy. Formal certification of a compiler back-end, or: programming a compiler with a proof assistant. In Proceedings of the 33rd ACM Symposium on Principles of Programming Languages, pages 42–54. ACM Press, 2006.

    [25] C. Meadows. Formal methods for cryptographic protocol analysis: Emerging issues and trends. IEEE Journal on Selected Areas in Communications, 21(1):44–54, 2003.

    [26] D. Nowak. A framework for game-based security proofs. In Information and Communications Security, volume 4861, pages 319–333. Springer-Verlag, 2007.

    [27] N. Ramsey and A. Pfeffer. Stochastic lambda calculus and monads of probability distributions. In Proceedings of the 29th ACM Symposium on Principles of Programming Languages, pages 154–165. ACM Press, 2002.

    [28] A. Roy, A. Datta, A. Derek, and J. C. Mitchell. Inductive proofs of computational secrecy. In European Symposium On Research In Computer Security, volume 4734 of Lecture Notes in Computer Science, pages 219–234. Springer-Verlag, 2007.

    [29] A. Sabelfeld and D. Sands. A per model of secure information flow in sequential programs. Higher-Order and Symbolic Computation, 14(1):59–91, 2001.

    [30] V. Shoup. OAEP reconsidered. In Advances in Cryptology – CRYPTO'01, volume 2139 of Lecture Notes in Computer Science, pages 239–259. Springer-Verlag, 2001.

    [31] V. Shoup. Sequences of games: a tool for taming complexity in security proofs. Cryptology ePrint Archive, Report 2004/332, 2004.

    [32] C. Sprenger and D. Basin. Cryptographically-sound protocol-model abstractions. In Proceedings of CSF'08, pages 115–129. IEEE Computer Society, 2008.

    [33] J. Stern. Why provable security matters? In Advances in Cryptology – EUROCRYPT'03, volume 2656 of Lecture Notes in Computer Science. Springer-Verlag, 2003.

    [34] The Coq development team. The Coq Proof Assistant Reference Manual v8.1, 2006. Available at http://coq.inria.fr.
