Commit bdce70de authored by Giuseppe Castagna

harmonized section 1 and 2

parent cd2ab74d
@@ -283,16 +283,16 @@ either diverge or return a value in $s$. As such the interpretation of $t\to s$
all the functions that diverge (at least) on $t$. Therefore removing $t\to s$ from a type $u$ removes from $u$ not only all the functions that when applied to a $t$ argument return a result in $s$, but also all the functions that diverge on $t$.
Ergo $t_1\setminus (t_2^+\to \neg(t_1\!\circ t_2^+))$ removes, among others, all functions in $t_1$ that diverge on $t_2^+$.
Let us see all this on our example \eqref{exptre}, in particular, by showing how this technique deduces that the type of $x_1$ in the positive branch is (a subtype of) $\Int{\vee}\String\to\Int$. Take the static type of $x_1$, that is $(\Int{\vee}\String\to\Int)\vee(\Bool{\vee}\String\to\Bool)$, and intersect it with
$(t_2^+\to \neg(t_1\!\circ t_2^+))$, that is, $\String\to\neg\Int$. Since intersection distributes over unions we
obtain
%
\[((\Int{\vee}\String{\to}\Int)\wedge\neg(\String{\to}\neg\Int))\vee((\Bool{\vee}\String{\to}\Bool)\wedge\neg(\String{\to}\neg\Int))\]
%
and since
$(\Bool{\vee}\String{\to}\Bool)\wedge\neg(\String{\to}\neg\Int)$ is empty
(because $\String\to\neg\Int$ contains $\Bool{\vee}\String\to\Bool$),
what we obtain is the left summand, a strict subtype of $\Int{\vee}\String\to\Int$, namely the functions of type $\Int{\vee}\String{\to}\Int$ minus those
that diverge on all \String{} arguments.
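To make the emptiness claim above easy to check, here is a small Python sketch (ours, not part of the paper) that models the base types as finite sets of coarse value tags and decides containment between single arrows by the standard decomposition rule $s_1{\to}t_1\leq s_2{\to}t_2$ iff $s_2\leq s_1$ and $t_1\leq t_2$ (the empty-domain corner case is deliberately ignored):

```python
# Sketch (not from the paper): base types as finite sets of coarse value
# tags; subtyping on base types is plain set containment.
ANY = frozenset({"int", "bool", "str"})
INT, BOOL, STR = (frozenset({a}) for a in ("int", "bool", "str"))

def neg(t):
    # set-theoretic negation with respect to the finite universe
    return ANY - t

def arrow_subtype(a1, a2):
    """s1->t1 <= s2->t2 iff s2 <= s1 and t1 <= t2 (domains are
    contravariant, codomains covariant); empty domains are ignored."""
    (s1, t1), (s2, t2) = a1, a2
    return s2 <= s1 and t1 <= t2

# (Bool v Str -> Bool) is contained in (Str -> not Int):
print(arrow_subtype((BOOL | STR, BOOL), (STR, neg(INT))))  # True
# while (Int v Str -> Int) is not, so the left summand survives:
print(arrow_subtype((INT | STR, INT), (STR, neg(INT))))    # False
```

The two checks mirror the computation above: the right summand is subsumed by the negated arrow and vanishes, the left one is not.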
...
In this section we formalize the ideas we outlined in the introduction. We start with the definition of types, followed by the language and its reduction semantics. The static semantics is the core of our work: we first present a declarative type system that deduces (possibly many) types for well-typed expressions and then the algorithms to decide whether an expression is well typed or not.
\subsection{Types}
@@ -19,7 +19,7 @@ We use the following abbreviations for types: $
t_1 \land t_2 \eqdef \neg (\neg t_1 \vee \neg t_2)$,
$t_1 \setminus t_2 \eqdef t_1 \wedge \neg t_2$, $\Any \eqdef \neg \Empty$.
In the definition, $b$ ranges over basic types
(e.g., \Int, \Bool),
$\Empty$ and $\Any$ respectively denote the empty (that types no value)
and top (that types all values) types. Coinduction accounts for
recursive types and the condition on infinite branches bars out
@@ -40,20 +40,20 @@ as \emph{type connectives}.
The subtyping relation for these types, noted $\leq$, is the one defined
by~\citet{Frisch2008}, to which the reader may refer for more details. Its formal definition is recalled in Appendix~\ref{} and a detailed description of the algorithm to decide it can be found in~\cite{Cas15}.
For this presentation it suffices to consider that
types are interpreted as sets of \emph{values} (i.e., either
constants, $\lambda$-abstractions, or pairs of values: see
Section~\ref{sec:syntax} right below) that have that type, and that subtyping is set
containment (i.e., a type $s$ is a subtype of a type $t$ if and only if $t$
contains all the values of type $s$). In particular, $s\to t$
contains all $\lambda$-abstractions that, when applied to a value of
type $s$, if their computation terminates, return a result of
type $t$ (e.g., $\Empty\to\Any$ is the set of all
functions\footnote{\label{allfunctions}Actually, for every type $t$,
all types of the form $\Empty{\to}t$ are equivalent and each of them
denotes the set of all functions.} and $\Any\to\Empty$ is the set
of functions that diverge on every argument). Type connectives
(i.e., union, intersection, negation) are interpreted as the
corresponding set-theoretic operators (e.g.,~$s\vee t$ is the
union of the values of the two types). We use $\simeq$ to denote the
symmetric closure of $\leq$: thus $s\simeq t$ means that $s$ and $t$ denote the same set of values and, as such, they are semantically the same type.
@@ -66,7 +66,7 @@ The expressions $e$ and values $v$ of our language are the terms inductively gen
\end{array}
\end{equation}
for $i=1,2$. In~\eqref{expressions}, $c$ ranges over constants
(e.g., \texttt{true}, \texttt{false}, \texttt{1}, \texttt{2},
...) which are values of basic types (we use $\basic{c}$ to denote the
basic type of the constant $c$); $x$ ranges over variables; $(e,e)$
denotes pairs and $\pi_i e$ their projections; $\tcase{e}{t}{e_1}{e_2}$
@@ -100,7 +100,7 @@ The dynamic semantics is defined as a classic left-to-right call-by-value reduct
\]
The semantics of type-cases uses the relation $v\in t$ that we
informally defined in the previous section. We delay its formal
definition to Section~\ref{sec:type-schemes} (where it deals with some corner cases for negated arrow types). Context reductions are
defined by the following reduction contexts:
\[
\Cx[] ::= [\,]\alt \Cx e\alt v\Cx \alt (\Cx,e)\alt (v,\Cx)\alt \pi_i\Cx\alt \tcase{\Cx}tee
@@ -116,7 +116,7 @@ standard, for the type system we will have to introduce several
unconventional features that we anticipated in
Section~\ref{sec:challenges} and are at the core of our work. Let
us start with the standard part, that is, the typing of the functional
core and the use of subtyping, given by the following typing rules:
\begin{mathpar}
\Infer[Const]
{ }
@@ -162,11 +162,11 @@ core and the use of subtyping given by the following typing rules:
{ }
\qquad
\end{mathpar}
These rules are quite standard and do not need any particular explanation besides those already given in Section~\ref{sec:syntax}. Just notice that we used a classic subsumption rule (i.e., \Rule{Subs}) to embed subtyping in the type system. Let us next focus on the unconventional aspects of our system, from the simplest to the hardest.
The first one is that, as explained in
Section~\ref{sec:challenges}, our type assumptions are about
expressions. Therefore, in our rules the type environments, ranged over
by $\Gamma$, map \emph{expressions}---rather than just variables---into
types. This explains why the classic typing rule for variables is replaced by a more general \Rule{Env} rule defined below:
\begin{mathpar}
@@ -205,12 +205,12 @@ As explained in Section~\ref{sec:challenges}, we need to be able to
deduce for, say, the function $\lambda^{\Int\to\Int} x.x$ a type such
as $(\Int\to\Int)\wedge\neg(\Bool\to\Bool)$ (in particular, if this is
the term $e$ in equation \eqref{bistwo} we need to deduce for it the
type $(\Int\to t)\wedge\neg(\Int\to\neg\Bool)$, that is,
$(\Int\to t)\setminus(\Int\to\neg\Bool)$). But the sole rule \Rule{Abs+}
above does not allow us to deduce negations of
arrows for abstractions: the rule \Rule{Abs-} makes this possible. As an aside, note that this kind
of deduction was already present in the system by~\citet{Frisch2008},
though in that system its presence was motivated by the semantics of types rather than, as in our case,
by the soundness of the type system.
Rules \Rule{Abs+} and \Rule{Abs-} are not enough to deduce for
@@ -407,7 +407,7 @@ the deduction $\vdashp$. The rule \Rule{PInter} combined with
\Rule{PTypeof} allows the system to deduce for an occurrence $\varpi$
the intersection of the static type of $\occ e \varpi$ (deduced by
\Rule{PTypeof}) with the type deduced for $\varpi$ by the other $\vdashp$ rules. The
rule \Rule{PEps} is the starting point of the analysis: if we are assuming that the test $e\in t$ succeeds, then we can assume that $e$ (i.e.,
$\occ e\epsilon$) has type $t$ (recall that assuming that the test $e\in t$ fails corresponds to having $\neg t$ at the index of the turnstile).
The rule \Rule{PApprR} implements occurrence typing for
the arguments of applications, since it states that if a function maps
@@ -466,15 +466,14 @@ to the presence of structural rules\footnote{\label{fo:rules}In logic, logical r
contraction) do not.} such as \Rule{Subs} and \Rule{Inter}. We
handle it in the classic way: we define an algorithmic system that
tracks the minimum type---actually, the minimum \emph{type scheme}---of an
expression; this system is obtained from the original system by removing
the two structural rules and by distributing suitable checks of the subtyping
relation in the remaining rules. To do that in the presence of
set-theoretic types we need to define some operators on types, which are given
in Section~\ref{sec:typeops}.
Concerning the use of the auxiliary derivation for the $ $
judgments, we present in Section~\ref{sec:typenv} an algorithm that is sound and satisfies a limited form of
completeness. All these notions are then used in the algorithmic typing
system given in Section~\ref{sec:algorules}.
@@ -525,36 +524,33 @@ the set of all types.
We can now formally define the relation $v\in t$ used in
Section~\ref{sec:opsem} to define the dynamic semantics of
the language. First, we associate each (possibly, not well-typed)
value to a type scheme representing the best type information about
the value. By induction on the definition of values: $\tyof c {} =
\basic c$, $\tyof {\lambda^{\wedge_{i\in I}s_i\to t_i} x.e}{}=
\tsfun{s_i\to t_i}_{i\in I}$, $\tyof {(v_1,v_2)}{} = \tyof
{v_1}{}\tstimes\tyof {v_2}{}$. Then we have $v\in t\iffdef \tyof
v{}\leq t$.
We also need to perform intersections of type schemes so as to intersect the static type of an expression (i.e., the one deduced by conventional rules) with the one deduced by occurrence typing (i.e., the one derived by $\vdashp$). For our algorithmic system (see \Rule{Env$_{\scriptscriptstyle\mathcal{A}}$} in Section~\ref{sec:algorules}) all we need to define is the intersection of a type scheme with a type:
\begin{lemma}[\cite{Frisch2008}]
Let $\ts$ be a type scheme and $t$ a type. We can compute a type scheme, written $t \tsand \ts$, such that
\(\tsint {t \tsand \ts} = \{s \alt \exists t' \in \tsint \ts.\ t \land t' \leq s \}\).
\end{lemma}
Finally, given a type scheme $\ts$ it is straightforward to choose in its interpretation a type $\tsrep\ts$ which serves as the canonical representative of the set (i.e., $\tsrep \ts \in \tsint \ts$):
\begin{definition}[Representative]
We define a function $\tsrep {\_}$ that maps every non-empty type scheme into a type, \textit{representative} of the set of types denoted by the scheme.
\begin{align*}
\begin{array}{lcllcl}
\tsrep t &=& t & \tsrep {\ts_1 \tstimes \ts_2} &=& \pair {\tsrep {\ts_1}} {\tsrep {\ts_2}}\\
\tsrep {\tsfunone {t_i} {s_i}_{i\in I}} &=& \bigwedge_{i\in I} \arrow {t_i} {s_i} \qquad& \tsrep {\ts_1 \tsor \ts_2} &=& \tsrep {\ts_1} \vee \tsrep {\ts_2}\\
\tsrep \tsempty && \textit{undefined}
\end{array}
\end{align*}
\end{definition}
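The definition above is directly algorithmic. As a quick illustration, here is a Python sketch of $\tsrep{\_}$; the tagged-tuple encoding of type schemes is our own hypothetical one, not the paper's:

```python
# Hypothetical encoding of type schemes as tagged tuples:
#   ("type", t) | ("fun", [(s, t), ...]) | ("prod", ts1, ts2)
#   ("or", ts1, ts2) | ("empty",)
def tsrep(ts):
    tag = ts[0]
    if tag == "type":   # a plain type is its own representative
        return ts[1]
    if tag == "fun":    # {s_i -> t_i}_i  ~>  /\_i (s_i -> t_i)
        return ("and", [("arrow", s, t) for (s, t) in ts[1]])
    if tag == "prod":   # representative computed componentwise
        return ("pair", tsrep(ts[1]), tsrep(ts[2]))
    if tag == "or":     # union of the two representatives
        return ("or", tsrep(ts[1]), tsrep(ts[2]))
    raise ValueError("tsrep is undefined on the empty scheme")

print(tsrep(("prod", ("type", "Int"), ("type", "Bool"))))
```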
\subsubsection{Operators for type constructors}\label{sec:typeops}
@@ -572,40 +568,22 @@ to returning the codomain of that same arrow. With set-theoretic types
things are more difficult since a function can be typed by, say, a
union of intersections of arrows and negations of types. Checking that
the function has a functional type is easy, since it corresponds to
checking that it has a type subtype of $\Empty{\to}\Any$. Determining
its domain and the type of the application is more complicated and needs the operators $\dom{}$ and $\circ$ that we informally described in Section~\ref{sec:ideas}, together with the operator $\worra{}{}$. These three operators are used by our algorithm and are formally defined as follows:
\begin{eqnarray}
\dom t & = & \max \{ u \alt t\leq u\to \Any\}
\\[-.5mm]
\apply t s & = &\,\min \{ u \alt t\leq s\to u\}
\\[-.5mm]
\worra t s & = &\,\min\{u \alt t\circ(\dom t\setminus u)\leq \neg s\}\label{worra}
\end{eqnarray}
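As an illustration only, the following Python sketch computes these three operators in a toy model of our own (hypothetical, not the paper's algorithm): types are finite sets of value tags and a function type is an intersection of arrows, so unions and negations of arrows are not handled.

```python
# Sketch (ours): dom, o (apply_) and worra restricted to intersections of
# arrows over a finite atom model. An arrow is a pair (domain, codomain).
ANY = frozenset({"int", "bool", "str"})
INT, BOOL, STR = (frozenset({a}) for a in ("int", "bool", "str"))

def dom(arrows):
    """dom t = max{u | t <= u -> Any}: union of the arrows' domains."""
    return frozenset().union(*(s for s, _ in arrows))

def apply_(arrows, s):
    """t o s = min{u | t <= s -> u}: for each input atom, the result must
    satisfy every arrow whose domain covers that atom."""
    out = frozenset()
    for a in s:
        cods = [t for (si, t) in arrows if a in si]
        if cods:                      # atom lies inside dom t
            r = cods[0]
            for t in cods[1:]:
                r &= t                # intersect all applicable codomains
            out |= r
    return out

def worra(arrows, s):
    """worra t s = min{u | t o (dom t minus u) <= not s}: the inputs that
    can still yield a result in s."""
    return frozenset(a for a in dom(arrows)
                     if apply_(arrows, frozenset({a})) & s)

t = [(INT, INT), (BOOL, BOOL)]        # (Int -> Int) ^ (Bool -> Bool)
print(dom(t) == INT | BOOL)           # True
print(apply_(t, INT | BOOL) == INT | BOOL)  # True
print(worra(t, INT) == INT)           # True
```

In the full system these operators are of course computed on arbitrary set-theoretic types via the subtyping algorithm of~\cite{Frisch2008}; the sketch only shows the intended input/output behaviour on the running example.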
We need similar operators for projections since the type $t$ of $e$ in $\pi_i e$ may not be a single product type but, say, a union of products: all we know is that $t$ must be a subtype of $\pair\Any\Any$. So let $t$ be a type such that $t\leq\pair\Any\Any$; then we define:
\begin{equation}
\begin{array}{lcrlcr}
\bpl t & = & \min \{ u \alt t\leq \pair u\Any\}\qquad&\qquad
\bpr t & = & \min \{ u \alt t\leq \pair \Any u\}
\end{array}
\end{equation}
All the operators above but $\worra{}{}$ are already present in the
theory of semantic subtyping: the reader can find how to compute them
and how to extend their definition to type schemes in~\cite[Section
@@ -727,7 +705,7 @@ that we want our analysis to be able to capture all the information
available from nested checks. If we gave up this kind of precision
then the definition of $\Refinef$ would be pretty easy: it must map
each subexpression of $e$ to the intersection of the types deduced by
$\vdashp$ (i.e., by $\env{}{}$) for each of its occurrences. That
is, for each expression $e'$ occurring in $e$, $\Refine {e,t}\Gamma$
is a type environment that maps $e'$ into $\bigwedge_{\{\varpi \alt
\occ e \varpi \equiv e'\}} \env {\Gamma,e,t} (\varpi)$. As we
@@ -750,7 +728,7 @@ $\Refinef$ yields for $x_1$ a type strictly more precise than the type deduced i
previous iteration.
The solution we adopt here is to bound the number of iterations to some number $n_o$.
From a formal point of view, this means giving up the completeness of the algorithm: $\Refinef$ will be complete (i.e., it will find all solutions) for the deductions of $\Gamma \evdash e t \Gamma'$ of depth at most $n_o$. This is obtained by the following definition of $\Refinef$:
\[
\begin{array}{rcl}
\Refinef_{e,t} \eqdef (\RefineStep{e,t})^{n_o}\\
@@ -851,14 +829,13 @@ We now have all notions needed for our typing algorithm, which is defined by th
{ \tcase {e} {t\!} {\!e_1\!}{\!e_2}\not\in\dom\Gamma}
\\
\Infer[Proj\Aa]
{\Gamma \vdashA e:\ts\and \!\!\ts\leq\pair{\Any\!}{\!\Any}}
{\Gamma \vdashA \pi_i e:\bpi_{\mathbf{i}}(\ts)}
{\pi_i e{\not\in}\dom\Gamma}\hfill
\Infer[Pair\Aa]
{\Gamma \vdashA e_1:\ts_1 \and \!\!\Gamma \vdashA e_2:\ts_2}
{\Gamma \vdashA (e_1,e_2):{\ts_1}\tstimes{\ts_2}}%\pair{t_1}{t_2}}
{(e_1,e_2){\not\in}\dom\Gamma}
\end{mathpar}
The side conditions of the rules ensure that the system is syntax
directed, that is, that at most one rule applies when typing a term: