(1)
Non-Laziness in Implicit Computational Complexity and Probabilistic λ-calculus

Final PhD defense

Gianluca Curzi

Università di Torino, Dipartimento di Informatica

June 12, 2020

(2)

Abstract

(3)

Thesis Summary

The thesis explores the benefits of “non-laziness” both in Implicit Computational Complexity and in probabilistic computation.

In the first part we analyse the mechanisms of linear erasure and duplication, and we introduce the type-assignment systems LEM (Linearly Exponential Multiplicative Type Assignment) and LAM (Linearly Additive Multiplicative Type Assignment). The former has “weaker” exponential rules and satisfies a cubic cut-elimination theorem, while the latter has “weaker” additive rules, called linear additives, and satisfies strong linear normalization. Linear normalization in the presence of additive rules is thus recovered without introducing lazy reduction strategies.

We also study a probabilistic version of linear additives in the system STA, which yields an implicit characterization of the probabilistic polynomial-time functions that does not depend on the reduction strategy. This characterization therefore requires no lazy reduction.

In the second part we show that bisimilarity and contextual equivalence in the probabilistic λ-calculus coincide with respect to the non-lazy, call-by-name operational semantics (full abstraction). This result witnesses the discriminating power of non-laziness, since the correspondence fails in the lazy case.


(5)

Introduction

(6)

A genealogical tree. . .

[Diagram of topics: Computational Complexity and Probabilistic computation, leading to Probabilistic complexity (e.g. PP, BPP, . . . ), the ICC Program, and program equivalence in Λ; with a dependency on the choice of the reduction strategy, i.e. laziness.]

(12)

Goal of this thesis

Benefits of “non-laziness” in ICC and program equivalence in Λ:

▶ (Part I) System LEM to compactly express linear erasure/duplication in Multiplicative Linear Logic. New tools for ICC: “linear” additive rules?

▶ (Part II)

(i) System LAM with linear additives has strong linear normalization (no laziness).

(ii) System STA with probabilistic linear additives captures probabilistic polytime functions in the style of ICC (no laziness).

▶ (Part III) Non-laziness: key step to recover the missing matching between bisimilarity and contextual equivalence in Λcbn.


(15)

Part I

A type Assignment of Linear Erasure and Duplication

Gianluca Curzi and Luca Roversi. A type-assignment of linear erasure and duplication. In Theoretical Computer Science, 2020.

(16)

Introduction

▶ IMLL2 as a type assignment for the linear λ-calculus.

▶ Encoding boolean circuits in IMLL2 [Mairson 03, Mairson&Terui 03].

[Diagram: fan-out gates duplicating a bit 0 or 1 into n ≥ 0 copies.]

▶ Linear erasure/duplication: a way to “linearly” express fan-out.

▶ General linear erasure/duplication: boolean data type ↦ finite data types.

▶ This thesis: system LEM to compactly express linear erasure/duplication of IMLL2. Cut-elimination? Expressiveness?


(21)

Linear erasure and duplication in IMLL2

▶ “Linear” Booleans:

B ≜ ∀α. α ⊸ α ⊸ α ⊗ α    0 ≜ λx.λy. x ⊗ y    1 ≜ λx.λy. y ⊗ x

▶ Linear erasure by consumption of data:

EB ≜ λz. let z I I be x, y in (let y be I in x)

▶ Example, reducing EB 0 step by step:

EB 0 ≜ (λz. let z I I be x, y in (let y be I in x)) (λx.λy. x ⊗ y)
→ let (λx.λy. x ⊗ y) I I be x, y in (let y be I in x)
→ let I ⊗ I be x, y in (let y be I in x)
→ I

▶ Linear duplication by selection and erasure:

DB ≜ λz. π1(z (0 ⊗ 0) (1 ⊗ 1))

▶ Example, reducing DB 1 step by step:

DB 1 ≜ (λz. π1(z (0 ⊗ 0) (1 ⊗ 1))) (λx.λy. y ⊗ x)
→ π1((λx.λy. y ⊗ x) (0 ⊗ 0) (1 ⊗ 1))
→ π1((1 ⊗ 1) ⊗ (0 ⊗ 0))
→ 1 ⊗ 1
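The erasure and duplication combinators above behave like ordinary functions once ⊗ is read as pairing. A minimal Python sketch of this untyped behaviour (an illustration only; it does not enforce IMLL2's linearity):

```python
# Linear Booleans as pair-building functions: 0 = λx.λy. x ⊗ y,
# 1 = λx.λy. y ⊗ x, with ⊗ rendered as a Python pair.
I = lambda x: x                      # identity combinator

zero = lambda x: lambda y: (x, y)
one  = lambda x: lambda y: (y, x)

def erase(z):
    # EB = λz. let z I I be x, y in (let y be I in x):
    # consume the Boolean by applying it to two identities.
    x, y = z(I)(I)
    return x                         # y (= I) is consumed by the let

def duplicate(z):
    # DB = λz. π1(z (0 ⊗ 0) (1 ⊗ 1)):
    # select a pre-built pair of copies, discarding the other.
    return z((zero, zero))((one, one))[0]

print(erase(zero) is I)              # True
print(duplicate(one) == (one, one))  # True: DB 1 → 1 ⊗ 1
```

Running `duplicate(zero)` likewise yields `(zero, zero)`, mirroring DB 0 → 0 ⊗ 0.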

(30)

The system LEM (this thesis)

▶ Generalization in IMLL2: from B to closed Π1-types (no negative ∀):

A closed Π1 ⟶ EA
A closed Π1 + inhabited ⟶ DA

▶ How to express general linear erasure/duplication of IMLL2?

▶ LEM = IMLL2 + “weaker” exponential rules:

p: from x1 : σ1, . . . , xn : σn ⊢ M : σ derive x1 : σ1, . . . , xn : σn ⊢ M : ´σ

d: from Γ, x : σ ⊢ M : τ derive Γ, y : ´σ ⊢ M[y/x] : τ

w: from Γ ⊢ M : τ derive Γ, x : ´σ ⊢ discard_σ x in M : τ

c: from Γ, y : ´σ, z : ´σ ⊢ M : τ and ⊢ V : σ derive Γ, x : ´σ ⊢ copy^V_σ x as y, z in M : τ

where σ is closed and lazy (no negative ∀) and V is a value (a closed normal linear λ-term).


(32)

Cut-elimination for lazy types (this thesis)

▶ Reduction rules:

(λx.M) N → M[N/x]
discard_σ V in M → M
copy^{V′}_σ V as x, y in M → M[V/x, V/y]

where V is a value (a closed normal linear λ-term).

▶ “Weaker” cut-elimination rules: the cut of a derivation D of ⊢ V : ´σ against the contraction

Γ = ∆, y : ´σ, z : ´σ ⊢ M : τ   ⊢ V′ : σ  ⟹  ∆, x : ´σ ⊢ copy^{V′}_σ x as y, z in M : τ

first yields ∆ ⊢ copy^{V′}_σ V as y, z in M : τ, and is then eliminated by substituting V twice: ∆, z : ´σ ⊢ M[V/y] : τ, hence ∆ ⊢ M[V/y, V/z] : τ.

Theorem (Cut-elimination for lazy types)

Any derivation of a lazy type (no negative ∀) can be rewritten into a cut-free one by the “weaker” cut-elimination rules in cubic time.


(35)

Exponential compression and applications (this thesis)

▶ Translation (·) : LEM → IMLL2:

σ (closed lazy) ↦ σ (closed Π1)
discard_σ ↦ E_σ
copy^V_σ ↦ D_σ

▶ Since size(DA) ∈ O(2^(size(A)^2)):

Theorem (Exponential compression)

If Γ ⊢_LEM M : σ, then the size of the IMLL2 translation of M can be exponential w.r.t. size(M).

▶ Applications: compact encodings of boolean circuits and natural numbers:

2 ≜ λfx. copy^I_1 f as f1, f2 in f1(f2 x)
succ ≜ λnfx. copy^I_1 f as f1, f2 in f1(n f2 x)
add ≜ λmnfx. copy^I_1 f as f1, f2 in m f1(n f2 x)
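These encodings can be simulated in Python by reading `copy f as f1, f2 in …` as an explicit rebinding of f (a behavioural sketch only; Python does not track linearity, and the copy annotations are elided):

```python
# Church numeral 2 and arithmetic with explicit duplication of f,
# mimicking the shape of the LEM terms above.
def two(f):
    def g(x):
        f1, f2 = f, f                # copy f as f1, f2
        return f1(f2(x))
    return g

def succ(n):
    return lambda f: lambda x: (lambda f1, f2: f1(n(f2)(x)))(f, f)

def add(m, n):
    return lambda f: lambda x: (lambda f1, f2: m(f1)(n(f2)(x)))(f, f)

to_int = lambda n: n(lambda k: k + 1)(0)  # decode a Church numeral
print(to_int(two), to_int(succ(two)), to_int(add(two, two)))  # 2 3 4
```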


(38)

Part II

Linear Additives and Probabilistic Polynomial Time

Gianluca Curzi. Linear Additives. To be presented at the TLLA workshop, 2020.

(39)

Introduction

▶ Additive rules of Linear Logic:

⟨M, N⟩ : A & B    π1 : A & B ⊸ A    π2 : A & B ⊸ B

▶ Variants of & to capture NP [Maurel 03, Matsuoka 04, Gaboardi et al. 08].

▶ Drawback: exponential blow-up, e.g. (λx.⟨x, x⟩) M → ⟨M, M⟩.

▶ Lazy reduction to “freeze” evaluation [Girard 96].

▶ This thesis: exploit linear erasure/duplication of IMLL2 to obtain

▶ linear additive rules for strong linear normalization;

▶ probabilistic linear additive rules to capture the probabilistic polytime functions.


(44)

Linear additives (this thesis)

▶ Linear additives (“weaker” additives), e.g. the rule ∧R:

x1 : A ⊢ M1 : B1   x2 : A ⊢ M2 : B2   ⊢ V : A  ⟹  x : A ⊢ copy^V_A x as x1, x2 in ⟨M1, M2⟩ : B1 ∧ B2

with A closed and lazy (no negative ∀) and V a value (a closed normal linear λ-term).

▶ System LAM = IMLL2 + linear additives:

Theorem (Strong linear normalization)

If Γ ⊢_LAM M : A, then M normalizes in a linear number of steps.

▶ No need of laziness to recover linear normalization with additives!

▶ Probabilistic linear additives to capture probabilistic polytime functions?


(47)

The system STA (this thesis)

▶ STA (polynomial time) [Gaboardi&Ronchi 09].

▶ Linear Lambda Calculus (confluence) [Simpson 05].

▶ Explicit dereliction (subject reduction) [Ronchi&Roversi 97].

▶ Linear additives + type-dependency (non-determinism) [Diaz-Caro 13].

Rules (with explicit dereliction d(·)):

m: from Γ, x1 : σ, . . . , xn : σ ⊢ M : τ (n ≥ 0) derive Γ, x : !σ ⊢ M[d(x)/x1, . . . , d(x)/xn] : τ

sp: from x1 : σ1, . . . , xn : σn ⊢ M : τ derive y1 : !σ1, . . . , yn : !σn ⊢ !M[d(y1)/x1, . . . , d(yn)/xn] : !τ

∧E: from Γ ⊢ M : A1 ∧ A2 and C ∈ {A1, A2} derive Γ ⊢ proj^{A1∧A2}_C(M) : C

(51)

Probabilistic features from type-dependency (this thesis)

▶ (1) Let non-determinism arise naturally from type-dependency:

M1 ← proj^{A∧A}_A ⟨M1, M2⟩ → M2

under surface reduction (no evaluation under the scope of !).

▶ (2) From non-determinism to probability distributions [Dal Lago&Toldin 15]: M ⇒ D.

▶ Starting from [Gaboardi&Ronchi 09] and [Dal Lago&Toldin 15]:

Theorem (Probabilistic Polytime Characterization)

STA captures the probabilistic polytime functions (no lazy reduction strategy).

▶ Similar characterizations for the classes PP (≤ 1/2) and BPP (< 1/2).
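The move from the non-deterministic projection step to distributions can be sketched as follows, giving each branch of a proj node probability 1/2 in the style of [Dal Lago&Toldin 15] (the term representation here is hypothetical, not the system's syntax):

```python
from fractions import Fraction
from collections import Counter

def distribution(term):
    # term is a leaf value or ("proj", left, right): each projection
    # branch is taken with probability 1/2, and probabilities multiply
    # along nested projections.
    if isinstance(term, tuple) and term[0] == "proj":
        out = Counter()
        for branch in term[1:]:
            for leaf, p in distribution(branch).items():
                out[leaf] += p / 2
        return out
    return Counter({term: Fraction(1)})

coin = ("proj", "heads", "tails")
print(dict(distribution(("proj", coin, "heads"))))
# {'heads': Fraction(3, 4), 'tails': Fraction(1, 4)}
```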

(56)

Part III

The Benefit of Being Non-Lazy in Probabilistic λ-calculus

Gianluca Curzi and Michele Pagani. The Benefit of Being Non-lazy in Probabilistic λ-calculus. In LICS, 2020.

(57)

Introduction

▶ Program equivalence: contextual equivalence vs bisimilarity.

▶ Applicative bisimilarity [Abramsky 93]: Λcbn seen as a labelled transition system (LTS).

▶ Probabilistic applicative bisimilarity (PAB) [Dal Lago et al. 13]: Λcbn seen as a labelled Markov chain (LMC).


(61)

▶ Contextual equivalence (=cxt) vs PAB (∼).

▶ Lazy vs non-lazy: in a lazy semantics a value is any function (e.g. λx.M), so λx.(λy.y)x is not reduced further; non-lazily, λx.(λy.y)x →β λx.x.

▶ cbn vs cbv: (λx.xx)(II) →β (II)(II) in cbn, while (λx.xx)(II) →β (λx.xx)I →β II in cbv.

▶ Previous results and this thesis:

                          lazy                       non-lazy
                          cbn     cbv     cbn+let    head
Soundness (∼ ⊆ =cxt)       ✓       ✓        ✓          ✓
Completeness (=cxt ⊆ ∼)    ✗       ✓        ✓          ✓

cbn: [Dal Lago et al. 14]; cbv: [Crubillé&Dal Lago 14]; cbn+let: [Kasterovic&Pagani 19]; head: this thesis, with soundness via a Context Lemma (no Howe's method) and completeness via the Separation Theorem [Leventis 18] (no testing equivalence).

Full Abstraction = Soundness + Completeness.

(72)

Probabilistic λ-calculus Λ and operational semantics ⟦·⟧

▶ Probabilistic λ-calculus Λ:

M := x | λx.M | (MM) | M ⊕ M
H := λx1 . . . xn. y M1 . . . Mm    (head normal forms)

▶ Big-step approximation ⇓ ⊆ Λ × D(HEAD), e.g. rule s5 for applications: from M ⇓ D and H[N/x] ⇓ E_{H,N} for every λx.H ∈ supp(D), derive

MN ⇓ Σ_{λx.H ∈ supp(D)} D(λx.H) · E_{H,N} + Σ_{H ∈ supp(D), H neutral} D(H) · HN

▶ Big-step semantics: ⟦M⟧ := sup{D | M ⇓ D}

Theorem (Head = Spine)

∀M ∈ Λ, ∀H ∈ HEAD, ∀n ∈ N: Prob^n_head[M, H] = Prob^n_spine[M, H].


(76)

Contextual equivalence (=cxt)

▶ Contextual equivalence (=cxt):

M =cxt N iff for every context C, Σ⟦C[M]⟧ = Σ⟦C[N]⟧,

where C is a term with a “hole” [·], e.g. λx.λy.[·]xy, and Σ⟦P⟧ is the total probability of convergence of P.

▶ Example: λz.z(Ω ⊕ I) ≠cxt λz.(zΩ ⊕ zI). If C ≜ [·]∆, then (λz.z(Ω ⊕ I))∆ converges to I with probability 0.25, while (λz.(zΩ ⊕ zI))∆ converges to I with probability 0.50, where I ≜ λx.x, ∆ ≜ λx.xx, and Ω ≜ ∆∆.
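The two convergence probabilities can be checked with a small Monte-Carlo sketch (an informal simulation, not the big-step semantics): each ⊕ is a fair coin, and the Ω branch never converges.

```python
import random

random.seed(0)

def left_term():
    # (λz. z(Ω ⊕ I))∆ → (Ω ⊕ I)(Ω ⊕ I): the head coin must pick I,
    # and then the remaining Ω ⊕ I must pick I as well.
    return random.random() < 0.5 and random.random() < 0.5

def right_term():
    # (λz. (zΩ ⊕ zI))∆: one coin; ∆Ω diverges, while ∆I → II → I.
    return random.random() < 0.5

n = 100_000
p_left  = sum(left_term()  for _ in range(n)) / n
p_right = sum(right_term() for _ in range(n)) / n
print(p_left, p_right)   # ≈ 0.25 and ≈ 0.50
```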


(78)

Probabilistic Applicative Bisimilarity (∼)

▶ Λhead as an LMC: a term M performs an internal action τ reaching a head normal form H with probability p = ⟦M⟧(H); an abstraction λx.H performs the action M reaching H[M/x] with probability 1.

▶ Probabilistic applicative bisimulation: an equivalence relation R such that whenever M R N, for every action and every H, the terms M and N reach the equivalence class {H′ | H′ R H} with the same probability. PAB (∼) is the largest such bisimulation.

▶ Example: if fix ≜ (λy.I ⊕ yy)(λy.I ⊕ yy), then λx.x ⊕ x ∼ fix, since both reach I by a τ-action with probability 1, so

R ≜ {(λx.x ⊕ x, fix), (fix, λx.x ⊕ x)} ∪ {(N, N) | N ∈ Λ}

is a bisimulation.
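The claim that fix reaches I with probability 1 can be made concrete by unfolding fix → I ⊕ fix: the mass on I after n unfoldings is 1 - 2^(-n). A short sketch of this calculation (an informal reading of the example, not the LMC machinery):

```python
from fractions import Fraction

def fix_mass_on_I(n):
    # fix = (λy. I ⊕ yy)(λy. I ⊕ yy) unfolds to I ⊕ fix: each unfolding
    # stops at I with probability 1/2, otherwise recurses.
    mass, survive = Fraction(0), Fraction(1)
    for _ in range(n):
        mass += survive / 2          # this unfolding picks I
        survive /= 2                 # otherwise keep unfolding
    return mass

print(fix_mass_on_I(10))             # 1023/1024
print(1 - fix_mass_on_I(20))         # residual mass 1/2^20, vanishing as n grows
```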


(82)

Probabilistic Nakajima trees [Leventis 18]

▶ Separation Theorem [Leventis 18]: M =cxt N implies PT(M) = PT(N).

▶ Böhm tree (BT): BT(λx1 . . . xn. y M1 . . . Mk) has root λx1 . . . xn. y and children BT(M1), . . . , BT(Mk).

▶ Nakajima tree (BTη): the infinitely η-expanded Böhm tree, with root λx1 . . . xn xn+1 . . . . y and children BTη(M1), . . . , BTη(Mk), BTη(xn+1), . . .

▶ Probabilistic Nakajima tree (PT): PT(M) assigns probability p to the value tree VT(H) of each head normal form H in the support of ⟦M⟧.
