
In this section we formalize the ideas we outlined in the introduction. We start with the definition of types, followed by the language and its reduction semantics. The static semantics is the core of our work: we first present a declarative type system that deduces (possibly many) types for well-typed expressions and then the algorithms to decide whether an expression is well-typed or not.

\subsection{Types}

...

...

intuitively, $\lambda^{\wedge_{i\in I}s_i\to t_i} x.e$ is a well-typed

value if for all $i{\in} I$ the hypothesis that $x$ is of type $s_i$

implies that the body $e$ has type $t_i$, that is to say, it is well

typed if $\lambda^{\wedge_{i\in I}s_i\to t_i} x.e$ has type $s_i\to


t_i$ for all $i\in I$. Every value is associated with a type: we

described above the types of constants and abstractions and, inductively,

the type of a pair of values is the product of the types of the

values.
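For readers who want a concrete point of comparison, this correspondence between values and types can be illustrated in TypeScript (this is our analogy, not the paper's calculus; all names below are ours): constants, pairs, and overloaded functions each carry a type built inductively as just described.

```typescript
// Rough TypeScript analogues of values and their types (our
// illustration; names are ours, not the paper's).
const c: 3 = 3;                          // a constant with its (singleton) type
const p: [number, boolean] = [3, true];  // a pair typed by the product of
                                         // the types of its components

// The analogue of an abstraction annotated with an intersection of
// arrows: the body must type-check under *each* arrow of the interface.
function g(x: number): number;
function g(x: string): boolean;
function g(x: number | string): number | boolean {
  return typeof x === "number" ? x + 1 : x.length > 0;
}
```

Here $g$ plays the role of an abstraction $\lambda^{(s_1\to t_1)\wedge(s_2\to t_2)}x.e$, well typed because each hypothesis on $x$ makes the body well typed at the corresponding return type.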

...

...


\subsection{Dynamic semantics}


The dynamic semantics is defined as a classic left-to-right call-by-value reduction for a $\lambda$-calculus with pairs, enriched with a specific rule for type-cases. We have the following notions of reduction:

\[

\begin{array}{rcll}

(\lambda^{\wedge_{i\in I}s_i\to t_i} x.e)v &\reduces& e\subst x v\\

...

...


While the syntax and reduction semantics are on the whole pretty

standard, for the type system we will have to introduce several

unconventional features that we anticipated in


Section~\ref{sec:challenges} and that are at the core of our work. Let

us start with the standard part, that is the typing of the functional

core and the use of subtyping given by the following typing rules:

\begin{mathpar}

...

...


{}

\qquad

\end{mathpar}


These rules are quite standard and need no explanations besides those already given in Section~\ref{sec:syntax}. Just notice that we used a classic subsumption rule (\emph{ie}, \Rule{Subs}) to embed subtyping in the type system. Let us next focus on the unconventional aspects of our system, from the simplest to the hardest.


The first one is that, as explained in

Section~\ref{sec:challenges}, our type assumptions are about

expressions. Therefore, the type environments in our rules, ranged over

by $\Gamma$, map \emph{expressions}---rather than just variables---into


types. This explains why the classic typing rule for variables is replaced by a more general \Rule{Env} rule defined below:\beppe{Changed the name of Occ into Env since it seems more appropriate}

\begin{mathpar}

\Infer[Env]

{}

...

...

environment $\Gamma$ with the static type deduced for the same

expression by using the other typing rules. This same intersection

rule is also used to infer the second unconventional aspect of our

system, that is, the fact that $\lambda$-abstractions can have negated


arrow types, as long as these negated types do not make their type empty:

type $(\Int\to\Int)\wedge\neg(\Int\to\neg\Int)$, that is,

$(\Int\to\Int)\setminus(\Int\to\neg\Int)$). But the rule \Rule{Abs+}

above does not allow us to deduce negations of

arrows for abstractions: the rule \Rule{Abs-} makes this possible. As an aside, note that this kind


of deduction was already present in the system by~\citet{Frisch2008}

though in that system this deduction was motivated by the semantics of types rather than

by the soundness of the type system.
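To fix ideas, here is a sketch (our reconstruction, reusing only ingredients stated above) of how \Rule{Abs+}, \Rule{Abs-}, and the intersection rule combine to deduce such a type, for instance for an annotated successor function:

```latex
\[
\begin{array}{ll}
\vdash \lambda^{\Int\to\Int}x.(x+1) \;:\; \Int\to\Int
   & \text{by \Rule{Abs+}}\\
\vdash \lambda^{\Int\to\Int}x.(x+1) \;:\; \neg(\Int\to\neg\Int)
   & \text{by \Rule{Abs-}}\\
\vdash \lambda^{\Int\to\Int}x.(x+1) \;:\; (\Int\to\Int)\wedge\neg(\Int\to\neg\Int)
   & \text{by intersection}
\end{array}
\]
```

where the side condition of \Rule{Abs-} is satisfied because $(\Int\to\Int)\wedge\neg(\Int\to\neg\Int)$ is not empty.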

Rules \Rule{Abs+} and \Rule{Abs-} are not enough to deduce for

$\lambda$-abstractions all the types we wish. In particular, these

rules alone are not enough to type general overloaded functions. For


instance, consider this simple example of a function that, applied to an

integer, returns its successor and, applied to anything else, returns

\textsf{true}:

\[

...

...

typing) the expression $x+1$ is of type \Int{} (which holds) and that under the

hypothesis that $x$ has type $\Int\setminus\Int$, that is $\Empty$

(we apply once more occurrence typing), \textsf{true} is of type \Int{}

(which \emph{does not} hold). The problem is that we are trying to type the


second case of a type-case even if we know that there is no chance that, when $x$ is bound to an $\Int$,

that case will ever be selected. The fact that it is never selected is witnessed

by the presence of a type hypothesis with $\Empty$ type. To

avoid this problem (and type the term above) we add the rule

...

...


Once more, this kind of deduction was already present in the system

by~\citet{Frisch2008} to type full fledged overloaded functions,

though it was embedded in the typing rule for the type-case. Here we


need the rule \Rule{Efq}, which is more general, to ensure the

property of subject reduction.\beppe{Example?}
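As an informal illustration (our TypeScript analogy, where \textsf{never} plays the role of $\Empty$), flow-sensitive checkers accept any code in a branch whose assumptions have become empty, exactly in the spirit of \Rule{Efq}:

```typescript
// A branch guarded by an impossible test: inside it, x is narrowed to
// never (the analogue of a type hypothesis with Empty type), so the
// branch type-checks vacuously even though its body would otherwise be
// ill typed at the declared return type.
function f(x: number): number {
  if (typeof x === "string") {
    // x : never here; never is a subtype of every type,
    // so returning it at type number is accepted
    return x;
  }
  return x + 1;
}
```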

Finally, there is one last rule in our type system, the one that

...

...

the environments $\Gamma_i$'s under whose hypotheses the expressions

$e_i$'s are typed. The production of these environments is represented by the judgment

$\Gamma\evdash p e t \Gamma_i$ (with $p$ either $+$ or $-$). The

intuition is that when $\Gamma\evdash p e t \Gamma_i$ is provable


then $\Gamma_i$ is a version of $\Gamma$ extended with type hypotheses for all

expressions occurring in $e$, type hypotheses that can be deduced

assuming that the test $e\in t$ succeeds (for $p=+$) or fails (for

$p=-$).
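Again by analogy (our example, not the paper's calculus), TypeScript's flow-based narrowing produces exactly such a pair of refined environments, one per branch of a type-case:

```typescript
// The test typeof x === "number" plays the role of e ∈ t: the "then"
// branch is checked under the environment refined positively (p = +),
// the "else" branch under the one refined negatively (p = −).
function describe(x: number | string): string {
  if (typeof x === "number") {
    return (x + 1).toFixed(0);  // here x : number
  }
  return x.toUpperCase();       // here x : string
}
```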

All that remains to do is to show how to deduce judgments of the form

$\Gamma\evdash p e t \Gamma'$. For that we first have to define how


to denote occurrences of an expression. These are identified by paths in the

syntax tree of the expressions, that is, by possibly empty strings of

characters denoting directions starting from the root of the tree (we

use $\epsilon$ for the empty string/path, which corresponds to the

root of the tree).
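A small sketch (our encoding; the constructor names are ours, while the direction letters match those the text assigns to each construct) of occurrences as paths: the function below follows a possibly empty string of directions from the root of the syntax tree, and is undefined when the path does not exist.

```typescript
// Occurrences as paths (our encoding): ε is the empty path; 0/1 select
// the function/argument of an application, l/r the components of a
// pair, f/s the argument of a first/second projection.
type Expr =
  | { tag: "var"; name: string }
  | { tag: "app"; fn: Expr; arg: Expr }
  | { tag: "pair"; left: Expr; right: Expr }
  | { tag: "fst"; arg: Expr }
  | { tag: "snd"; arg: Expr };

// occ(e, path): the subexpression of e at that occurrence, if defined
function occ(e: Expr, path: string): Expr | undefined {
  if (path === "") return e;               // the root (path ε)
  const d = path[0], rest = path.slice(1);
  switch (e.tag) {
    case "app":
      if (d === "0") return occ(e.fn, rest);
      if (d === "1") return occ(e.arg, rest);
      return undefined;
    case "pair":
      if (d === "l") return occ(e.left, rest);
      if (d === "r") return occ(e.right, rest);
      return undefined;
    case "fst":
      return d === "f" ? occ(e.arg, rest) : undefined;
    case "snd":
      return d === "s" ? occ(e.arg, rest) : undefined;
    default:
      return undefined;                     // undefined otherwise
  }
}
```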

...

...


To ease our analysis we used different directions for each kind of

term. So we have $0$ and $1$ for the function and argument of an

application, $l$ and $r$ for the $l$eft and $r$ight expressions forming a pair


and $f$ and $s$ for the argument of a $f$irst or of a $s$econd projection. The judgments $\Gamma\evdash p e t \Gamma'$ are then deduced by the following two rules:

\begin{mathpar}

% \Infer[Base]

% { \Gamma \vdash e':t' }

...

...


occurrence $\varpi$ of the expression $e$ being checked, then we can add this

hypothesis to the produced type environment (rule \Rule{Path}). The rule

\Rule{Path} uses a (last) auxiliary judgment $\pvdash{\Gamma} p e t


\varpi:t'$ to deduce the type $t'$ of the occurrence $\occ e \varpi$ when

checking $e$ against $t$ under the hypotheses $\Gamma$. This rule \Rule{Path} is subtler than it may appear at

first sight, insofar as the deduction of the type for $\varpi$ may already use

some hypothesis on $\occ e \varpi$ (in $\Gamma'$) and, from an