$\Empty$ and $\Any$ respectively denote the empty (that types no value)

and top (that types all values) types. Coinduction accounts for

...

...

and to $\lor$, $\land$, $\lnot$, and $\setminus$

as \emph{type connectives}.

The subtyping relation for these types, noted $\leq$, is the one defined

by~\citet{Frisch2008} to which the reader may refer. A detailed description of the algorithm to decide it can be found in~\cite{Cas15}.

For this presentation it suffices to consider that

types are interpreted as sets of \emph{values} ({i.e., either

constants, $\lambda$-abstractions, or pairs of values: see

...

...
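Under this interpretation, subtyping is containment of the corresponding sets of values. A few concrete instances (our illustrative sketch, using only the constructions introduced above):
\[
\Empty \leq \Int \leq \Any
\qquad
\Int\setminus\Int \simeq \Empty
\qquad
\Int \leq \Int\lor(\arrow{\Empty}{\Any})
\]
The last inequality holds because every integer value belongs to the union, whatever the set of functions denoted by $\arrow{\Empty}{\Any}$.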

The dynamic semantics is defined as a classic left-to-right call-by-value reduction

...

\]

The semantics of type-cases uses the relation $v\in t$ that we

informally defined in the previous section. We delay its formal

definition to Section~\ref{sec:type-schemes} (where it deals with some corner cases for negated arrow types). Contextual reductions are

defined by the following reduction contexts:

core and the use of subtyping, given by the following typing rules:

...

\end{mathpar}

These rules are quite standard and need no particular explanations besides those already given in Section~\ref{sec:syntax}. Just notice that we used a classic subsumption rule (i.e., \Rule{Subs}) to embed subtyping in the type system. Let us next focus on the unconventional aspects of our system, from the simplest to the hardest.
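As an illustration of how \Rule{Subs} embeds subtyping in the system, here is a hypothetical instance of the rule (our example, chosen for its simplicity):
\begin{mathpar}
\Infer[Subs]
{\Gamma\vdash e:\Int \\ \Int\leq\Any}
{\Gamma\vdash e:\Any}
{}
\end{mathpar}
Any type derived for an expression can thus be weakened along the subtyping relation.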

The first unconventional aspect is that, as explained in

Section~\ref{sec:challenges}, our type assumptions are about

expressions. Therefore, in our rules the type environments, ranged over

by $\Gamma$, map \emph{expressions}---rather than just variables---into

...

...

Clearly, the expression above is well typed, but the rule \Rule{Abs+} alone

is not enough to type it. In particular, according to \Rule{Abs+} we

have to prove that under the hypothesis that $x$ is of type $\Int$ the expression

$(\tcase{x}{\Int}{x+1}{\textsf{true}})$ is of type $\Int$, too. That is, that under the

hypothesis that $x$ has type $\Int\wedge\Int$ (we apply occurrence

typing) the expression $x+1$ is of type \Int{} (which holds) and that under the

hypothesis that $x$ has type $\Int\setminus\Int$, that is $\Empty$

(we apply once more occurrence typing), \textsf{true} is of type \Int{}

...
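This deduction pattern is reminiscent of flow-based narrowing in languages such as TypeScript. The following analogue (ours, not the paper's calculus) shows the two branches of the type-case: in the first the tested variable is narrowed to `number` (cf.\ $\Int\wedge\Int$), in the second to `never`, the analogue of $\Empty$, so that branch is dead code and trivially well typed:

```typescript
// Rough analogue of (x ∈ Int) ? x+1 : true when x has type Int.
// In the else branch x is narrowed to never (cf. Int \ Int ≃ Empty),
// so the branch is unreachable and any returned value type-checks.
function addOneIfNumber(x: number): number | boolean {
  if (typeof x === "number") {
    return x + 1;   // x narrowed to number here
  } else {
    return true;    // x has type never here: dead code
  }
}

console.log(addOneIfNumber(41));   // 42
```

The analogy is only partial: TypeScript narrows variables, whereas the system presented here refines the types of arbitrary expressions.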

...

Once more, this kind of deduction was already present in the system

by~\citet{Frisch2008} to type full-fledged overloaded functions,

though it was embedded in the typing rule for the type-case. Here we

need the rule \Rule{Efq}, which is more general, to ensure the

property of subject reduction.

Finally, there is one last rule in our type system, the one that

implements occurrence typing, that is, the rule for the

...

...

$\Gamma$ with the hypothesis deduced assuming that $e\in\neg t$, that

is, for when the test $e\in t$ fails.

All that remains to do is to show how to deduce judgments of the form

$\Gamma\evdash e t \Gamma'$. For that we first define how

to denote occurrences of an expression. These are identified by paths in the

syntax tree of the expressions, that is, by possibly empty strings of

characters denoting directions starting from the root of the tree (we

...
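As a sketch of the intended notation (our example; $0$ and $1$ select the function and the argument of an application, as in the rule \Rule{PAppR} below, and $\epsilon$ is the empty path denoting the whole expression):
\[
\occ{(e_1\,e_2)}{\epsilon} = e_1\,e_2
\qquad
\occ{(e_1\,e_2)}{0} = e_1
\qquad
\occ{(e_1\,e_2)}{1} = e_2
\]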

...

of rules.

{\pvdash\Gamma e t \varpi:t_1 \\\pvdash\Gamma e t \varpi:t_2 }

{\pvdash\Gamma e t \varpi:t_1\land t_2 }

{}

\\

\Infer[PTypeof]

{\Gamma\vdash\occ e \varpi:t' }

{\pvdash\Gamma e t \varpi:t' }

...

...


{\pvdash\Gamma e t \epsilon:t }

{}


\end{mathpar}

\begin{mathpar}


\Infer[PAppR]

{\pvdash\Gamma e t \varpi.0:\arrow{t_1}{t_2}\\\pvdash\Gamma e t \varpi:t_2'}

{\pvdash\Gamma e t \varpi.1:\neg t_1 }

...

...


{\pvdash\Gamma e t \varpi:t' }

{\pvdash\Gamma e t \varpi.s:\pair\Any{t'}}

{}


\end{mathpar}
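For instance, instantiating \Rule{PAppR} with $t_1=t_2=\Int$ and $t_2'=\neg\Int$ (a hypothetical instance of ours): if the function occurring at $\varpi.0$ has type $\arrow{\Int}{\Int}$ while the application at $\varpi$ is assumed to produce a non-integer, then the argument at $\varpi.1$ cannot be an integer, since the function maps integers to integers:
\begin{mathpar}
\Infer[PAppR]
{\pvdash\Gamma e t \varpi.0:\arrow{\Int}{\Int}\\\pvdash\Gamma e t \varpi:\neg\Int}
{\pvdash\Gamma e t \varpi.1:\neg\Int}
{}
\end{mathpar}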

These rules implement the analysis described in

Section~\ref{sec:ideas} for functions and extend it to products. Let