
The previous analysis already covers a large gamut of realistic
cases. For instance, it handles list data
structures, since products and recursive types can encode them as
right-associative nested pairs, as is done in the language
CDuce~\cite{BCF03} (e.g., $X =\textsf{Nil}\vee(\Int\times X)$ is the
type of the lists of integers): see Code 8 in Table~\ref{tab:implem} of Section~\ref{sec:practical} for a concrete example. Moreover, the presence of
union types makes it possible to type heterogeneous lists whose
content is described by regular expressions on types, as proposed
by~\citet{hosoya00regular}. Since the main application of occurrence
typing is to type dynamic languages, it is worth showing how to
extend our work to records. We use record types as they are
defined in CDuce, which are obtained by extending types with the

following two type constructors:\\[1.4mm]

%

\centerline{\(\textbf{Types} ~~ t ~ ::= ~ \record{\ell_1=t \ldots\ell_n=t}{t}\alt\Undef\)}\\[1.4mm]
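To make the list encoding concrete in a dynamic-language setting, the following is a minimal sketch of ours (the names and the value representation are our own choices, not part of the formal system): lists are encoded as right-associative nested pairs, and membership in $X =\textsf{Nil}\vee(\Int\times X)$ is checked structurally.

```python
# Toy illustration (not the paper's formal system): encode lists as
# right-associative nested pairs, with None playing the role of Nil,
# and check membership in X = Nil ∨ (Int × X) by structural recursion.

def is_int_list(v):
    """Check membership in X = Nil ∨ (Int × X), structurally."""
    if v is None:                      # the Nil case
        return True
    return (isinstance(v, tuple) and len(v) == 2
            and isinstance(v[0], int)
            and not isinstance(v[0], bool)  # bool is a subtype of int in Python
            and is_int_list(v[1]))     # the Int × X case

nested = (1, (2, (3, None)))           # encodes the list 1, 2, 3
print(is_int_list(nested))             # True
print(is_int_list((1, ("two", None)))) # False: "two" is not an Int
```

A heterogeneous list type described by a regular expression on types would be checked in the same recursive style, with a union of admissible head types at each position.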

In this section we formalize the ideas we outlined in the introduction.

\subsection{Types}


\begin{definition}[Types]\label{def:types} The set of types \types{} is formed by the terms $t$ coinductively produced by the grammar:\vspace{-1.45mm}

\[

\begin{array}{lrcl}

\textbf{Types}& t & ::=& b\alt t\to t\alt t\times t\alt t\vee t \alt\neg t \alt\Empty

\end{array}

\]


and that satisfy the following conditions\vspace{-.85mm}

\begin{itemize}

\item (regularity) every term has a finite number of different sub-terms;

\item (contractivity) every infinite branch of a term contains an infinite number of occurrences of the

...

...

of functions that diverge on every argument). Type connectives

(i.e., union, intersection, negation) are interpreted as the

corresponding set-theoretic operators (e.g.,~$s\vee t$ is the

union of the values of the two types). We use $\simeq$ to denote the

symmetric closure of $\leq$: thus $s\simeq t$ (read, $s$ is equivalent to $t$) means that $s$ and $t$ denote the same set of values and, as such, they are semantically the same type.
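The set-theoretic reading can be pictured in a toy finite-universe model (an assumption of ours, not the paper's formal interpretation): types are sets of values, connectives are set operations, $\leq$ is inclusion, and $\simeq$ is its symmetric closure.

```python
# Toy finite-universe sketch (our assumption, not the formal model):
# types are sets of values, union/intersection/negation are set
# operations, subtyping is inclusion, equivalence is mutual inclusion.

UNIVERSE = frozenset({10, 11, 12, "true", "false"})

Int  = frozenset({10, 11, 12})        # toy interpretation of Int
Bool = frozenset({"true", "false"})   # toy interpretation of Bool

def neg(t):            # ¬t : complement with respect to the universe
    return UNIVERSE - t

def leq(s, t):         # s ≤ t : set inclusion
    return s <= t

def equiv(s, t):       # s ≃ t : symmetric closure of ≤
    return leq(s, t) and leq(t, s)

# (Int ∨ Bool) ∧ ¬Bool ≃ Int : both sides denote the same set of values
print(equiv((Int | Bool) & neg(Bool), Int))  # True
```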

\subsection{Syntax}\label{sec:syntax}


The expressions $e$ and values $v$ of our language are inductively generated by the following grammars:\vspace{-1mm}

for $j=1,2$. In~\eqref{expressions}, $c$ ranges over constants

...

...

values.

\subsection{Dynamic semantics}\label{sec:opsem}


The dynamic semantics is defined as a classic left-to-right call-by-value reduction for a $\lambda$-calculus with pairs, enriched with specific rules for type-cases. We have the following notions of reduction:\vspace{-1.2mm}

\[

\begin{array}{rcll}

(\lambda^{\wedge_{i\in I}s_i\to t_i} x.e)v &\reduces& e\subst x v\\[-.4mm]


...

...
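The reduction of a type-case (elided above) can be mirrored operationally in a dynamic language. The following is a minimal sketch of ours, with hypothetical names: the branch is selected according to the dynamic type of the tested value.

```python
# Minimal sketch (hypothetical names, not the paper's reduction rules):
# a type-case (v ∈ t) ? e1 : e2 reduces to e1 when the value v has
# type t, and to e2 otherwise.

def typecase(v, t, then_branch, else_branch):
    """Pick the branch according to v's dynamic type."""
    return then_branch(v) if isinstance(v, t) else else_branch(v)

# (x ∈ Int) ? x + 1 : 0, applied first to 41 and then to "hi"
print(typecase(41, int, lambda x: x + 1, lambda x: 0))    # 42
print(typecase("hi", int, lambda x: x + 1, lambda x: 0))  # 0
```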

need the rule \Rule{Efq}, which is more general, to ensure the

property of subject reduction.



Finally, there remains one last rule in our type system, the one that

implements occurrence typing, that is, the rule for the

type-case:\vspace{-1mm}

\begin{mathpar}

...

...

auxiliary deduction system for paths.

Multiple types have two distinct origins each requiring a distinct

technical solution. The first origin is the rule \Rule{Abs-} by which


it is possible to deduce for every well-typed lambda abstraction

infinitely many types, namely the annotation of the function
intersected with as many negations of arrow types as possible

without making the type empty. To handle this multiplicity we use and

...

...
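For instance (an illustration of ours, not an example taken from the formal development), consider the identity function annotated with $\Int\to\Int$: besides its annotation, one may also deduce for it, among infinitely many other types, the type $(\Int\to\Int)\wedge\neg(\Any\to\Empty)$, and this intersection is not empty since the identity on integers does not diverge on every argument and, thus, is not of type $\Any\to\Empty$.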

straightforward: $(1)$ corresponds to checking that the function has

an arrow type, $(2)$ corresponds to checking that the argument is in

the domain of the arrow deduced for the function and $(3)$ corresponds

to returning the codomain of that same arrow. With set-theoretic types

things get more difficult, since a function can be typed by, say, a
union of intersections of arrows and negations of types. Checking that

the function has a functional type is easy since it corresponds to

checking that it has a type that is a subtype of $\Empty{\to}\Any$. Determining

its domain and the type of the application is more complicated and needs the operators $\dom{}$ and $\circ$ that we informally described in Section~\ref{sec:ideas}, where we also introduced the operator $\worra{}{}$. These three operators are used by our algorithm and are formally defined as follows:\vspace{-2mm}

\begin{eqnarray}

\dom t & = &\max\{ u \alt t\leq u\to\Any\}

\\[-1mm]

...

...
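The behaviour of the three operators can be approximated in a toy finite model (a sketch of ours, not the paper's definitions): an intersection of arrows is represented as a list of (domain, codomain) pairs over finite sets of values, $\dom{}$ is the union of the domains, and the other two operators are computed by simple unions.

```python
# Toy sketch (our own finite model, not the formal definitions): an
# intersection of arrows is a list of (domain, codomain) pairs, with
# domains and codomains given as frozensets of values.

Int = frozenset({1, 2, 3})
Str = frozenset({"a", "b"})

def dom(arrows):
    """dom t: union of the arrows' domains (all accepted arguments)."""
    d = frozenset()
    for s, _ in arrows:
        d |= s
    return d

def apply_ty(arrows, arg):
    """t ∘ s: union of codomains of arrows whose domain meets arg."""
    r = frozenset()
    for s, c in arrows:
        if s & arg:
            r |= c
    return r

def worra(arrows, out):
    """worra t s: arguments that t may map into s (union of the
    domains of arrows whose codomain meets s)."""
    r = frozenset()
    for s, c in arrows:
        if c & out:
            r |= s
    return r

# t = (Int -> Int) ∧ (Str -> Str)
t = [(Int, Int), (Str, Str)]
print(dom(t) == Int | Str)       # True
print(apply_ty(t, Int) == Int)   # True
print(worra(t, Str) == Str)      # True
```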

extends $\Gamma$ with hypotheses on the occurrences of $e$ that are

the most general that can be deduced by assuming that $e\in t$ succeeds. For that we need the notation $\tyof{e}{\Gamma}$ which denotes the type scheme deduced for $e$ under the type environment $\Gamma$ in the algorithmic type system of Section~\ref{sec:algorules}.

That is, $\tyof{e}{\Gamma}=\ts$ if and only if $\Gamma\vdashA e:\ts$ is provable.


We start by defining the algorithm for each single occurrence, that is for the deduction of $\pvdash\Gamma e t \varpi:t'$. This is obtained by defining two mutually recursive functions $\constrf$ and $\env{}{}$:\vspace{-1.3mm}
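The core refinement that such an algorithm computes for a tested occurrence can be pictured as follows (a sketch of ours with hypothetical names, in a toy model where types are finite sets of values): when an occurrence of type $t$ is tested against $s$, the positive branch may assume $t\wedge s$ and the negative branch $t\wedge\neg s$.

```python
# Toy sketch (hypothetical names, not the functions constr/env of the
# paper): with types as finite sets of values, the two refinements of
# a tested occurrence are plain set operations.

UNIVERSE = frozenset({1, 2, "a", "b"})
Int = frozenset({1, 2})
Str = frozenset({"a", "b"})

def refine(t, s):
    """Types of the occurrence in the two branches of a test against s."""
    return t & s, t & (UNIVERSE - s)   # t ∧ s  and  t ∧ ¬s

pos, neg = refine(Int | Str, Int)
print(pos == Int)  # True: in the positive branch the occurrence is Int
print(neg == Str)  # True: in the negative branch it is Str
```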