which corresponds, in the quasi-constant function notation, to using the
field $\ell= t \vee\Undef$.
As far as expressions are concerned, we adapt CDuce records to our analysis. In particular, records are built starting from the empty record expression \erecord{} by adding, updating, or removing fields:\vspace{-1mm}
\[
\begin{array}{lrcl}
\textbf{Expr}& e & ::=&\erecord{}\alt\recupd e \ell e \alt\recdel e \ell\alt e.\ell
\end{array}\vspace{-1mm}
\]
in particular $\recdel e \ell$ deletes the field $\ell$ from $e$, $\recupd e \ell e'$ adds the field $\ell=e'$ to the record $e$ (deleting any existing $\ell$ field), while $e.\ell$ is field selection with the reduction:
\(\erecord{...,\ell=e,...}.\ell\ \reduces\ e\).
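For instance (a small illustrative example of ours, where $3$ and $42$ are assumed to be integer constants of the language), the expression $\recupd{(\recupd{\erecord{}}{a}{3})}{b}{42}$ builds a two-field record by extending the empty record twice, $\recdel{(\recupd{\erecord{}}{a}{3})}{a}$ adds the field $a$ and then removes it, and selection on a record value reduces as just stated:\vspace{-1mm}
\[
\erecord{a=3,\, b=42}.b\ \reduces\ 42
\]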
...
...
Expressions are then typed by the following rules (already in algorithmic form).
\Infer[Proj]
{\Gamma\vdash e:t\and t\leq\orecord{\ell = \Any}}
{\Gamma\vdash e.\ell:\proj\ell{t}}
{e.\ell\not\in\dom\Gamma}\vspace{-2mm}
\end{mathpar}
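For instance (a hypothetical instance of \Rule{Proj}, not taken from the formal development), if $\Gamma\vdash e:\orecord{a=\Int, b=\Bool}\vee\orecord{a=\Bool, b=\Int}$, then the subtyping premise holds because both summands are subtypes of $\orecord{a = \Any}$ and, provided $e.a\not\in\dom\Gamma$, the rule deduces\vspace{-1mm}
\[
\Gamma\vdash e.a \,:\, \proj{a}{(\orecord{a=\Int, b=\Bool}\vee\orecord{a=\Bool, b=\Int})}
\]
a type that is equivalent to $\Int\vee\Bool$.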
To extend occurrence typing to records we add to the paths the following values: $\varpi\in\{\ldots,a_\ell,u_\ell^1,u_\ell^2,r_\ell\}^*$, with
\(e.\ell\downarrow a_\ell.\varpi=\occ{e}\varpi\),
...
...
and add the following rules for the new paths:
{\pvdash\Gamma e t \varpi:t'}
{\pvdash\Gamma e t \varpi.r_\ell: (\recdel{t'}\ell) + \crecord{\ell\eqq\Any}}
{}
\vspace{-2mm}\\
\Infer[PUpd1]
{\pvdash\Gamma e t \varpi:t'}
{\pvdash\Gamma e t \varpi.u_\ell^1: (\recdel{t'}\ell) + \crecord{\ell\eqq\Any}}
...
...
$t$ to the type $\Any\vee\Undef$, that is, to the type of all
undefined fields in an open record. So \Rule{PDel} and \Rule{PUpd1}
mean that if we remove, add, or redefine a field $\ell$ in an expression $e$
then all we can deduce for $e$ is that its field $\ell$ is undefined: since the original field was destroyed we do not have any information on it apart from the static one.
...
By $\constr{\varpi.u_\ell^1}{\Gamma,e,t}$---i.e., by \Rule{Ext1}, \Rule{PTypeof}, and \Rule{PInter}---the type deduced for $x$ in the positive branch is $((\orecord{a=\Int, b=\Bool}\vee\orecord{a=\Bool, b=\Int})\land\orecord{a=\Int})+\crecord{a\eqq\Any}$.
This type is equivalent to $\orecord{b=\Bool}$, and thus we can deduce that $x.b$ has type $\Bool$.
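In more detail, here is how this equivalence unfolds (a step we spell out for the reader; it follows from the set-theoretic interpretation of record types): the second summand of the union is incompatible with $\orecord{a=\Int}$, so the intersection simplifies to $\orecord{a=\Int, b=\Bool}$, and updating the field $a$ to $\Any\vee\Undef$ leaves $b$ as the only constrained field:\vspace{-1mm}
\[
\begin{array}{rcl}
(\orecord{a=\Int, b=\Bool}\vee\orecord{a=\Bool, b=\Int})\land\orecord{a=\Int} & \simeq & \orecord{a=\Int, b=\Bool}\\
\orecord{a=\Int, b=\Bool}+\crecord{a\eqq\Any} & \simeq & \orecord{b=\Bool}
\end{array}
\]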
...
where {\Cast{$t$}{$e$}} is a type-cast that dynamically checks whether the value returned by $e$ has type $t$.\footnote{Intuitively, \code{\Cast{$t$}{$e$}} is
syntactic sugar for \code{(typeof($e$)==="$t$")\,?\,$e$\,:\,(throw "Type
...
for $j=1,2$. In~\eqref{expressions}, $c$ ranges over constants
...
...
\subsection{Dynamic semantics}\label{sec:opsem}
The dynamic semantics is defined as a classic left-to-right call-by-value reduction for a $\lambda$-calculus with pairs, enriched with specific rules for type-cases. We have the following notions of reduction:\vspace{-2mm}
\[
\begin{array}{rcll}
(\lambda^{\wedge_{i\in I}s_i\to t_i} x.e)v &\reduces& e\subst x v\\
\pi_i(v_1,v_2)&\reduces& v_i & i=1,2\\
\tcase{v}{t}{e_1}{e_2}&\reduces& e_1&v\in t\\
\tcase{v}{t}{e_1}{e_2}&\reduces& e_2&v\not\in t\\
\end{array}
\]
As usual, we denote by $\Cx[e]$ the term obtained by replacing the hole in
the context $\Cx$ with $e$, and we have that $e\reduces e'$ implies
$\Cx[e]\reduces\Cx[e']$.
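As a small illustration (our own example, assuming that the integer constants $3$ and $42$ have type $\Int$), a type-case applied to a pair of values first selects its branch, and the chosen branch then reduces in turn:\vspace{-1mm}
\[
\tcase{(3,42)}{\pair{\Int}{\Any}}{\pi_1(3,42)}{\pi_2(3,42)}\ \reduces\ \pi_1(3,42)\ \reduces\ 3
\]
where the first step applies since the value $(3,42)$ belongs to $\pair{\Int}{\Any}$.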
...
...
standard, for the type system we will have to introduce several
unconventional features that we anticipated in
Section~\ref{sec:challenges} and that are at the core of our work. Let
us start with the standard part, that is, the typing of the functional
core and the use of subtyping, given by the following typing rules:\vspace{-1mm}
\begin{mathpar}
\Infer[Const]
{}
...
...
% \Gamma \cvdash - e t e_2:t'}
% {\Gamma\vdash \ite {e} t {e_1}{e_2}: t'}
% { }
\vspace{-2mm}\\
\Infer[Sel]
{\Gamma\vdash e:\pair{t_1}{t_2}}
{\Gamma\vdash\pi_i e:t_i}
...
...
{\Gamma\vdash e:t\\t\leq t' }
{\Gamma\vdash e: t' }
{}
\qquad\vspace{-3mm}
\end{mathpar}
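For example, here is a minimal derivation (our own illustration; we assume a constant $3$ whose basic type is $\Int$ and that $\Int\leq\Real$, an assumption also used later on): \Rule{Const} assigns $3$ its basic type and \Rule{Subs} then weakens it to any supertype:\vspace{-1mm}
\begin{mathpar}
\Infer[Subs]
{\Gamma\vdash 3:\Int\\\Int\leq\Real}
{\Gamma\vdash 3:\Real}
{}
\end{mathpar}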
These rules are quite standard and do not need any particular explanation besides what was already given in Section~\ref{sec:syntax}. Just notice that we used a classic subsumption rule (i.e., \Rule{Subs}) to embed subtyping in the type system. Let us next focus on the unconventional aspects of our system, from the simplest to the hardest.
...
...
The first unconventional aspect is that, as explained in
Section~\ref{sec:challenges}, our type assumptions are about
expressions. Therefore, in our rules the type environments, ranged over
by $\Gamma$, map \emph{expressions}---rather than just variables---into
types. This explains why the classic typing rule for variables is replaced by a more general \Rule{Env} rule defined below:\vspace{-1mm}
\begin{mathpar}
\Infer[Env]
{}
...
...
\Infer[Inter]
{\Gamma\vdash e:t_1\\\Gamma\vdash e:t_2 }
{\Gamma\vdash e: t_1 \wedge t_2 }
{}\vspace{-3mm}
\end{mathpar}
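To see these two rules at work, consider a hypothetical instance of ours (assuming, as suggested above, that \Rule{Env} deduces for an expression in $\dom\Gamma$ the type that $\Gamma$ assigns to it): if $\Gamma$ maps the expression $x.\ell$ to $\Int$, and the other rules independently assign $x.\ell$ some type $t$, then \Rule{Inter} combines the two pieces of information:\vspace{-1mm}
\begin{mathpar}
\Infer[Inter]
{\Gamma\vdash x.\ell:\Int\\\Gamma\vdash x.\ell:t}
{\Gamma\vdash x.\ell:\Int\wedge t}
{}
\end{mathpar}
where the first premise is obtained by \Rule{Env}.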
The \Rule{Env} rule is coupled with the standard intersection introduction rule \Rule{Inter}
which allows us to deduce for a complex expression the intersection of
...
...
environment $\Gamma$ with the static type deduced for the same
expression by using the other typing rules. This same intersection
rule is also used to infer the second unconventional aspect of our
system, that is, the fact that $\lambda$-abstractions can have negated
arrow types, as long as these negated types do not make the type deduced for the function empty:\vspace{-1mm}
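for instance (our own illustration, not an excerpt of the deduction system), the identity function $\lambda^{\Int\to\Int}x.x$ can also be assigned the negated arrow type $\neg(\Bool\to\Bool)$, yielding\vspace{-1mm}
\[
\Gamma\vdash \lambda^{\Int\to\Int}x.x \,:\, (\Int\to\Int)\wedge\neg(\Bool\to\Bool)
\]
since this intersection is not empty, whereas $\neg(\Int\to\Int)$ can never be added, because $(\Int\to\Int)\wedge\neg(\Int\to\Int)$ is empty.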
...
We introduce the new syntactic category of \emph{type schemes}, which are the terms~$\ts$ inductively produced by the following grammar.
\[
\begin{array}{lrcl}
\textbf{Type schemes}&\ts& ::=& t \alt\tsfun{\arrow t t ; \cdots ; \arrow t t}\alt\ts\tstimes\ts\alt\ts\tsor\ts\alt\tsempty
...
...
We also need to perform intersections of type schemes so as to intersect the static
...
\end{lemma}
Finally, given a type scheme $\ts$ it is straightforward to choose in its interpretation a type $\tsrep\ts$ which serves as the canonical representative of the set (i.e., $\tsrep\ts\in\tsint\ts$):
\begin{definition}[Representative]
We define a function $\tsrep{\_}$ that maps every non-empty type scheme into a type, \textit{representative} of the set of types denoted by the scheme.\vspace{-2mm}
\begin{align*}
\begin{array}{lcllcl}
\tsrep t &=& t &\tsrep{\ts_1 \tstimes\ts_2}&=&\pair{\tsrep{\ts_1}}{\tsrep{\ts_2}}\\
...
...
a precision that the algorithm loses by applying \tsrep{} in the
definition of \constrf{} (case~\eqref{due} is the critical
one). Completeness is recovered by $(i)$ limiting the depth of the
derivations and $(ii)$ forbidding nested negated arrows on the
left-hand side of negated arrows.\vspace{-.7mm}
\begin{definition}[Rank-0 negation]
A derivation of $\Gamma\vdash e:t$ is \emph{rank-0 negated} if \Rule{Abs--} never occurs in the derivation of a left premise of a \Rule{PAppL} rule.\vspace{-.7mm}
\end{definition}
\noindent The use of this terminology is borrowed from the ranking of higher-order
types, since, intuitively, it corresponds to typing a language in
...
As we explained in the introduction, both TypeScript and Flow deduce the type
\code{(number$\vee$string) $\to$ (number$\vee$string)} for the first definition of the function \code{foo} in~\eqref{foo}, and the more precise type\vspace{-3pt}
\begin{equation}\label{tyinter}
...
...
In this section we show how to do it by using the theory of occurrence
typing we developed in the first part of the paper. In particular we
collect the different types the parameter of a function is assigned in
its body to determine a partition of the domain types against which
the body of the function is checked. Consider a more involved example:\vspace{-.4mm}
\begin{alltt}\color{darkblue}\morecompact
function (x \textcolor{darkred}{: \(\tau\)}) \{
(x \(\in\) Real) ? \{ (x \(\in\) Int) ? succ x : sqrt x \} : \{\(\neg\)x \}
\}\negspace
\end{alltt}
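For comparison, here is a rough TypeScript analogue of this function (a hypothetical transcription of ours: \code{number} and \code{boolean} play the roles of \code{Real} and \code{Bool}, and \code{Number.isInteger} plays the role of the \code{Int} test):\vspace{-.4mm}
\begin{alltt}\color{darkblue}
function f(x: number | boolean) \{
  return (typeof x === "number")
    ? (Number.isInteger(x) ? x + 1 : Math.sqrt(x))
    : !x;
\}
\end{alltt}
For this definition TypeScript infers the return type \code{number | boolean}, which does not capture the correlation between the type of the argument and that of the result.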
When $\tau$ is \code{Real|Bool} (we assume that \code{Int} is a
subtype of \code{Real}) we want to deduce for this function the
...
...
then the function must be rejected (since it tries to type
\code{$\neg\Real$}). Notice that typing the function under the
hypothesis that $\tau$ is \Any{}
allows us to capture user-defined discrimination as defined
by~\citet{THF10} since, for instance,\vspace{-.4mm}
\begin{alltt}\color{darkblue}\morecompact
let is_int x = (x\(\in\)Int)? true : false
in if is_int z then z+1 else 42\negspace
\end{alltt}
is well typed since the function \code{is\_int} is given type
$(\Int\to\True)\wedge(\neg\Int\to\False)$. We choose a more general
...
...
binders):\vspace{-2mm}
\Gamma\evdash e t \Gamma_1\\\Gamma_1\vdash e_1\triangleright\psi_1\\
\Gamma\evdash e {\neg t}\Gamma_2 \\\Gamma_2\vdash e_2\triangleright\psi_2}