Commit 637f5fa9 authored by Giuseppe Castagna

Presentation of the declarative part of the system

parent a00adb5d
@@ -152,7 +152,9 @@ and $e''$ as long as they are applications, and deduce distinct types for all su
form applications. How to do it precisely is explained in the rest of
the paper but the key ideas are pretty simple and are explained next.
\subsection{Key ideas}\label{sec:ideas}
First of all, in a strict language we can consider a type as denoting the set of values of that type and subtyping as set-containment of the denoted values.
Imagine we are testing whether the result of an application $e_1e_2$
is of type $t$ or not, and suppose we know that the static types of
$e_1$ and $e_2$ are $t_1$ and $t_2$ respectively. If the application $e_1e_2$ is
In this section we present the formalization of the ideas we hinted at in the introduction. We start with the definition of types. Next, we define the language and its reduction semantics. The static semantics is the core of our presentation: we first present a declarative type system that deduces possibly many types for each well-typed expression, and then the algorithms that decide whether an expression is well typed or not.
\subsection{Types}
\begin{definition}[Types]\label{def:types} The set of types \types{} is formed by the terms $t$ coinductively produced by the following grammar
\[
\begin{array}{lrcl}
\textbf{Types} & t & ::= & b\alt t\to t\alt t\times t\alt t\vee t \alt \neg t \alt \Empty
@@ -54,7 +54,8 @@ denotes the set of all functions.} and $\Any\to\Empty$ is the set
of functions that diverge on every argument). Type connectives
(\emph{ie}, union, intersection, negation) are interpreted as the
corresponding set-theoretic operators (\emph{eg},~$s\vee t$ is the
union of the values of the two types). We use $\simeq$ to denote the
equivalence induced by $\leq$ (\emph{ie}, $s\simeq t$ if and only if both $s\leq t$ and $t\leq s$ hold): thus $s\simeq t$ means that $s$ and $t$ denote the same set of values and, as such, they are semantically the same type.
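For instance (a small illustration of this set-theoretic reading, relying on the fact that the basic types $\Int$ and $\Bool$ denote disjoint sets of values):
\[
\Int \leq \Int\vee\Bool
\qquad
\Int\wedge\Bool \simeq \Empty
\qquad
(\Int\vee\Bool)\wedge\neg\Bool \simeq \Int
\]
The last equivalence is the kind of reasoning the occurrence typing analysis described below performs: intersecting the static type of an expression with (the negation of) a tested type refines it.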
\subsection{Syntax}\label{sec:syntax}
The expressions $e$ and values $v$ of our language are the terms inductively generated by the following grammars:
@@ -161,14 +162,14 @@ core and the use of subtyping given by the following typing rules:
{ }
\qquad
\end{mathpar}
These rules are quite standard and do not need any particular explanation. Just notice that we used a classic subsumption rule (\emph{ie}, \Rule{Subs}) to embed subtyping in the type system.
Let us now focus on the unconventional aspects of our system, starting
from the simplest ones. The first one is that, as explained in
Section~\ref{sec:challenges}, our type assumptions are about
expressions. Therefore, the type environments in our rules, ranged over
by $\Gamma$, map \emph{expressions}---rather than just variables---into
types. This explains why the classic typing rule for variables is replaced by a more general \Rule{Env} rule defined as follows:\beppe{Changed the name of Occ into Env since it seems more appropriate}
\begin{mathpar}
\Infer[Env]
{ }
@@ -180,21 +181,21 @@ types. This explains why the classic typing rule for variables is replace by the
{ \Gamma \vdash e: t_1 \wedge t_2 }
{ }
\end{mathpar}
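For instance (a hypothetical instance of these two rules): if the occurrence typing analysis described below has recorded in $\Gamma$ the hypothesis $(e_1e_2:\neg\Bool)$ for some application $e_1e_2$, and the other typing rules give $\Gamma\vdash e_1e_2:\Int\vee\Bool$, then the two rules above allow us to deduce
\[
\Gamma\vdash e_1e_2 : (\Int\vee\Bool)\wedge\neg\Bool
\]
that is, a type equivalent to $\Int$.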
The \Rule{Env} rule is coupled with the classic intersection introduction rule
which allows us to deduce for a complex expression the intersection of
the types recorded by the occurrence typing analysis in the
environment $\Gamma$ with the static type deduced for the same
expression by using the other typing rules. This same intersection
rule is also used to infer the second unconventional aspect of our
system, that is, the fact that $\lambda$-abstractions can have negated
arrow types, as long as these negated arrow types do not make their type empty:
\begin{mathpar}
\Infer[Abs-]
{\Gamma \vdash \lambda^{\wedge_{i\in I}\arrow {s_i} {t_i}}x.e:t}
{ \Gamma \vdash\lambda^{\wedge_{i\in I}\arrow {s_i} {t_i}}x.e:\neg(t_1\to t_2) }
{ (t\wedge\neg(t_1\to t_2))\not\simeq\Empty }
\end{mathpar}
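For instance, a possible instance of this rule (using the function discussed next) is
\begin{mathpar}
\Infer[Abs-]
{\Gamma \vdash\lambda^{\Int\to\Int}x.x : \Int\to\Int}
{\Gamma \vdash\lambda^{\Int\to\Int}x.x : \neg(\Bool\to\Bool)}
{ ((\Int\to\Int)\wedge\neg(\Bool\to\Bool))\not\simeq\Empty }
\end{mathpar}
which, combined with the intersection rule, yields the type $(\Int\to\Int)\wedge\neg(\Bool\to\Bool)$ for this function.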
As explained in Section~\ref{sec:challenges}, we need to be able to
deduce for, say, the function $\lambda^{\Int\to\Int} x.x$ a type such
as $(\Int\to\Int)\wedge\neg(\Bool\to\Bool)$ (in particular, if this is
the term $e$ in equation \eqref{bistwo} we need to deduce for it the
@@ -211,35 +212,41 @@ $\lambda$-abstractions all the types we wish. In particular, these
rules alone are not enough to type general overloaded functions. For
example, consider a function that applied to an
integer returns its successor and applied to anything else returns
\textsf{true}:
\[
\lambda^{(\Int\to\Int)\wedge(\neg\Int\to\Bool)} x\,.\,\tcase{x}{\Int}{x+1}{\textsf{true}}
\]
Clearly, the expression above is well typed, but the rule \Rule{Abs+} alone
is not enough to type it. In particular, according to \Rule{Abs+} we
have to prove that under the hypothesis that $x$ is of type $\Int$ the expression
$(\tcase{x}{\Int}{x+1}{\textsf{true}})$ is of type $\Int$, too. That is, that under the
hypothesis that $x$ has type $\Int\wedge\Int$ (we apply occurrence
typing) the expression $x+1$ is of type \Int{} (which holds) and that under the
hypothesis that $x$ has type $\Int\setminus\Int$, that is $\Empty$
(we apply once more occurrence typing), \textsf{true} is of type \Int{}
(which \emph{does not} hold). The problem is that we are trying to type the
second case of a type-case even if we know that there is no chance that, under the hypothesis that $x$ is of type $\Int$,
that case will ever be selected. The fact that it is never selected is witnessed
by the presence of a type hypothesis with $\Empty$ type. To
avoid this problem (and type the term above) we add the rule
\Rule{Efq} (\emph{ex falso quodlibet}) that allows the system to deduce any type
for an expression that will never be selected, that is, for an
expression whose type environment contains an empty assumption:
\begin{mathpar}
\Infer[Efq]
{ }
{ \Gamma, (e:\Empty) \vdash e': t }
{ }
\end{mathpar}
Once more, this kind of deduction was already present in the system
by~\citet{Frisch2008} to type full-fledged overloaded functions,
though it was embedded in the typing rule for the type-case. Here we
need the more general rule \Rule{Efq} to ensure the
property of subject reduction.\beppe{Example?}
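For instance (a sketch of how \Rule{Efq} completes the typing of the overloaded function above): the second branch of its type-case is checked under a type environment that contains the hypothesis that $x$ has type $\Int\setminus\Int$, that is, $\Empty$, and \Rule{Efq} then yields
\begin{mathpar}
\Infer[Efq]
{ }
{ \Gamma, (x:\Empty) \vdash \textsf{true}: \Int }
{ }
\end{mathpar}
so that the whole type-case can be given the type $\Int$ required by \Rule{Abs+}.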
Finally, there is one last rule in our type system, the one that
implements occurrence typing, that is, the rule for the
type-case:\beppe{Changed to \Rule{If} to \Rule{Case}}
\begin{mathpar}
\Infer[Case]
@@ -251,10 +258,10 @@ Finally, there is one final rule that is missing in our type system and which is
{\Gamma\vdash \tcase {e} t {e_1}{e_2}: t'}
{ }
\end{mathpar}
The rule \Rule{Case} checks whether the expression $e$, whose type is being tested, is
well-typed and then performs the occurrence typing analysis that produces
the environments $\Gamma_i$'s under whose hypotheses the expressions
$e_i$'s are typed. The production of these environments is represented by the judgment
$\Gamma \evdash p e t \Gamma_i$ (with $p$ either $+$ or $-$). The
intuition is that when $\Gamma \evdash p e t \Gamma_i$ is provable
then $\Gamma_i$ is $\Gamma$ extended with type hypotheses for all
@@ -262,9 +269,17 @@ expressions occurring in $e$, hypotheses that can be deduced
assuming that the test $e\in t$ succeeds (for $p=+$) or fails (for
$p=-$).
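For instance (a sketch on the overloaded function of the previous subsection): when its body is checked under a type environment $\Gamma$ that contains the hypothesis $(x:\Int)$, rule \Rule{Case} requires the two judgments
\[
\Gamma \evdash + x \Int \Gamma_1
\qquad\mbox{and}\qquad
\Gamma \evdash - x \Int \Gamma_2
\]
and the rules given next produce a $\Gamma_1$ that contains $(x:\Int\wedge\Int)$ and a $\Gamma_2$ that contains $(x:\Int\wedge\neg\Int)$, that is, $(x:\Empty)$: the first branch is thus checked knowing that $x$ is an integer, while the second one is typed by \Rule{Efq}.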
All that remains to do is to show how to deduce judgments of the form
$\Gamma \evdash p e t \Gamma'$. For that we first have to define how
to denote occurrences of an expression. These are just paths in the
syntax trees of the expressions, that is, possibly empty strings of
characters denoting directions starting from the root of the tree (we
use $\epsilon$ for the empty string/path, which corresponds to the
root of the tree).
Let $e$ be an expression and $\varpi\in\{0,1,l,r,f,s\}^*$ a
\emph{path}; we denote $\occ e\varpi$ the occurrence of $e$ reached by
the path $\varpi$, that is
\[
\begin{array}{r@{\downarrow}l@{\quad=\quad}l}
e&\epsilon & e\\
@@ -279,8 +294,8 @@ undefined otherwise
To ease our analysis we use different directions for each kind of
term. So we have $0$ and $1$ for the function and argument of an
application, $l$ and $r$ for the $l$eft and $r$ight expressions forming a pair
and $f$ and $s$ for the argument of a $f$irst or of a $s$econd projection. The judgments $\Gamma \evdash p e t \Gamma'$ are deduced by these two rules:
\begin{mathpar}
% \Infer[Base]
% { \Gamma \vdash e':t' }
@@ -301,33 +316,33 @@ and $f$ and $s$ for the argument of a $f$irst or a $s$econd projection. The the
{ \Gamma \evdash p e t \Gamma',(\occ e \varpi:t') }
{ }
\end{mathpar}
These rules describe how to produce by occurrence typing the type
environments while checking that an expression $e$ has type $t$. They state that $(i)$ we can
deduce from $\Gamma$ all the hypotheses already in $\Gamma$ (rule
\Rule{Base}) and that $(ii)$ if we can deduce a given type $t'$ for a particular
occurrence $\varpi$ of the expression $e$ being checked, then we can add this
hypothesis to the produced type environment (rule \Rule{Path}). The rule
\Rule{Path} uses a (last) auxiliary judgement $\pvdash {\Gamma} p e t
\varpi:t'$ to deduce the type of the occurrence $\occ e \varpi$ when
checking $e$ against $t$ under the hypotheses $\Gamma$. This rule \Rule{Path} is subtler than it may appear at
first sight, insofar as the deduction of the type for $\varpi$ may already use
some hypothesis on $\occ e \varpi$ (in $\Gamma'$) and, from an
algorithmic viewpoint, this will imply the computation of a fix-point
(see Section~\ref{sec:typenv}). The last ingredient for our type system is the deduction of the
judgments of the form $\pvdash {\Gamma} p e t \varpi:t'$ where
$\varpi$ is a path to an expression occurring in $e$. This is given by the following set
of rules.
\begin{mathpar}
\Infer[PSubs]
{ \pvdash \Gamma p e t \varpi:t_1 \\ t_1\leq t_2 }
{ \pvdash \Gamma p e t \varpi:t_2 }
{ }
\qquad
\Infer[PIntersect]
{ \pvdash \Gamma p e t \varpi:t_1 \\ \pvdash \Gamma p e t \varpi:t_2 }
{ \pvdash \Gamma p e t \varpi:t_1\land t_2 }
{ }
\\
\Infer[PTypeof]
{ \Gamma \vdash \occ e \varpi:t' }
......@@ -345,9 +360,9 @@ of rules.
{ }
\\
\Infer[PAppR]
{ \pvdash \Gamma p e t \varpi.0:\arrow{t_1}{t_2} \\ \pvdash \Gamma p e t \varpi:t_2'}
{ \pvdash \Gamma p e t \varpi.1:\neg t_1 }
{ t_2\land t_2' \simeq \Empty }
\\
\Infer[PAppL]
{ \pvdash \Gamma p e t \varpi.1:t_1 \\ \pvdash \Gamma p e t \varpi:t_2 }
@@ -375,7 +390,41 @@ of rules.
{ }
\qquad
\end{mathpar}
These rules implement the analysis described in
Section~\ref{sec:ideas} for functions and extend it to products. Let
us comment on each rule in detail. \Rule{PSubs} is just subsumption for
the deduction $\vdashp$. The rule \Rule{PIntersect} combined with
\Rule{PTypeof} allows the system to deduce for an occurrence $\varpi$
the intersection of the static type of $\occ e \varpi$ (deduced by
\Rule{PTypeof}) and the type deduced for $\varpi$ by the other $\vdashp$ rules. The
rules \Rule{PEps\,$p$} are the starting points of the analysis: when
checking whether $e$ has type $t$ we can assume that $e$ (\emph{ie},
$\occ e\epsilon$) has type $t$ in the positive branch (rule
\Rule{PEps+}) and type $\neg t$ in the negative branch (rule
\Rule{PEps-}). The rule \Rule{PAppR} implements occurrence typing for
the arguments of applications, since it states that if a function maps
arguments of type $t_1$ to results of type $t_2$ and an application of
this function yields results (in $t'_2$) that cannot be in $t_2$
(since $t_2\land t_2' \simeq \Empty$), then the argument of this application cannot be of type $t_1$. \Rule{PAppL} performs the
occurrence typing analysis for the function part of an application,
since it states that if an application has type $t_2$ and the argument
of this application has type $t_1$, then the function in this
application cannot have type $t_1\to\neg t_2$. Rules \Rule{PPair\_}
are straightforward since they state that the $i$-th projection of a pair
of type $\pair{t_1}{t_2}$ must be of type $t_i$. So are the last two
rules that essentially state that if $\pi_1 e$ (respectively, $\pi_2
e$) is of type $t'$, then the type of $e$ must be of the form
$\pair{t'}\Any$ (respectively, $\pair\Any{t'}$).
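To illustrate these rules at work, here is a sketch (our own worked example, with hypothetical variables $x_1$ and $x_2$) of the analysis hinted at in Section~\ref{sec:ideas}: take $e=x_1x_2$ and $\Gamma=(x_1:(\Int\to\Int)\wedge(\Bool\to\Bool)),(x_2:\Int\vee\Bool)$, and check $e$ against $t=\Int$ in the positive branch ($p=+$):
\[
\begin{array}{ll}
\pvdash \Gamma + {x_1x_2} \Int \epsilon:\Int & \mbox{by \Rule{PEps+}}\\[1mm]
\pvdash \Gamma + {x_1x_2} \Int 0:\Bool\to\Bool & \mbox{by \Rule{PTypeof} and \Rule{PSubs}}\\[1mm]
\pvdash \Gamma + {x_1x_2} \Int 1:\neg\Bool & \mbox{by \Rule{PAppR}, since $\Bool\wedge\Int\simeq\Empty$}\\[1mm]
\pvdash \Gamma + {x_1x_2} \Int 1:(\Int\vee\Bool)\wedge\neg\Bool & \mbox{by \Rule{PTypeof} and \Rule{PIntersect}}
\end{array}
\]
By rule \Rule{Path} the produced environment thus extends $\Gamma$ with the hypothesis that $x_2$ has type $(\Int\vee\Bool)\wedge\neg\Bool$, that is, a type equivalent to $\Int$: in the branch where the test $x_1x_2\in\Int$ succeeds, the argument $x_2$ can only be an integer.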
This concludes the presentation of our type system, which satisfies
the property of soundness, deduced, as customary, from the properties
of progress and subject reduction, whose proofs are given in the
appendix.
\begin{theorem}[Soundness]
For every expression $e$ such that $\varnothing\vdash e:t$, either there
exists a value $v$ of type $t$ such that $e\reduces^* v$ or $e$
diverges.
\end{theorem}
@@ -390,7 +439,9 @@ Let us comment each rule in detail.
\item type of functions -> type schemes
\item elimination rules (app, proj, if) -> operations on types and how to compute them
\item not syntax directed: rules Subs, Intersect, Env.
\item Compute the environments for occurrence typing. Algorithm to compute $\Gamma\vdash\Gamma$
\end{enumerate}
@@ -408,7 +459,7 @@ change the definition of typeof to take into account type-schemes for lambda abs
\paragraph{Algorithmic version}
\subsubsection{Type environments for occurrence typing}\label{sec:typenv}
@@ -155,5 +155,6 @@
\newcommand{\tsint}[1]{\textbf{\textup{\{}}#1\textbf{\textup{\}}}}
\newcommand{\tsrep}[1]{\textsf{Repr}(#1)}
\newcommand{\vdashp}{\vdash^{\texttt{Path}}}
\newcommand{\pvdash}[4]{\vdashp_{#1,#3,#4,#2}}%\mathcal{P}
\newcommand{\evdash}[3]{\vdash^{\texttt{Env}}_{#2,#3,#1}}