Commit f844b515 authored by Giuseppe Castagna

space savings

parent 2c6b5d0c
......@@ -25,11 +25,11 @@ which corresponds to using in the quasi-constant function notation the
field $\ell = t \vee \Undef$.
As for expressions, we adapt CDuce records to our analysis. In particular, records are built starting from the empty record expression \erecord{} by adding, updating, or removing fields:
As for expressions, we adapt CDuce records to our analysis. In particular, records are built starting from the empty record expression \erecord{} by adding, updating, or removing fields:\vspace{-1mm}
\[
\begin{array}{lrcl}
\textbf{Expr} & e & ::= & \erecord {} \alt \recupd e \ell e \alt \recdel e \ell \alt e.\ell
\end{array}
\end{array}\vspace{-1mm}
\]
where $\recdel e \ell$ deletes the field $\ell$ from $e$, $\recupd e \ell e'$ adds the field $\ell=e'$ to the record $e$ (deleting any existing $\ell$ field), and $e.\ell$ is field selection, with the reduction:
\(\erecord{...,\ell=e,...}.\ell\ \reduces\ e\).
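For instance, $\recupd{\recupd{\erecord{}}{a}{3}}{b}{\textsf{true}}$ builds the record $\erecord{a=3, b=\textsf{true}}$, $\recdel{\erecord{a=3, b=\textsf{true}}}{a}$ removes its field $a$, and, by the reduction above, \(\erecord{a=3, b=\textsf{true}}.b\ \reduces\ \textsf{true}\).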
......@@ -61,7 +61,7 @@ Expressions are then typed by the following rules (already in algorithmic form).
{\Gamma \vdash e_1:t_1\and t_1\leq\orecord {\!\!} \and \Gamma \vdash e_2:t_2}
{\Gamma \vdash \recupd{e_1}{\ell}{e_2}: t_1 + \crecord{\ell=t_2}}
{\recupd{e_1\!\!}{\!\!\ell}{e_2} \not\in\dom\Gamma}
\vspace{-2mm}\\
\Infer[Delete]
{\Gamma \vdash e:t\and t\leq\orecord {\!\!}}
{\Gamma \vdash \recdel e \ell: \recdel t \ell}
......@@ -70,8 +70,7 @@ Expressions are then typed by the following rules (already in algorithmic form).
\Infer[Proj]
{\Gamma \vdash e:t\and t\leq\orecord{\ell = \Any}}
{\Gamma \vdash e.\ell:\proj \ell {t}}
{e.\ell\not\in\dom\Gamma}
{e.\ell\not\in\dom\Gamma}\vspace{-2mm}
\end{mathpar}
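For instance, if $\Gamma\vdash e:\orecord{a=\Int}$ then (assuming the side conditions on $\dom\Gamma$ hold) rule \Rule{Update} deduces $\Gamma\vdash\recupd{e}{b}{\textsf{true}}:\orecord{a=\Int}+\crecord{b=\Bool}$; since this type is a subtype of $\orecord{b=\Any}$ and its projection on $b$ is $\Bool$, rule \Rule{Proj} then yields $\Gamma\vdash(\recupd{e}{b}{\textsf{true}}).b:\Bool$.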
To extend occurrence typing to records we add the following values to paths: $\varpi\in\{\ldots,a_\ell,u_\ell^1,u_\ell^2,r_\ell\}^*$, with
\(e.\ell\downarrow a_\ell.\varpi =\occ{e}\varpi\),
......@@ -88,9 +87,7 @@ and add the following rules for the new paths:
{ \pvdash \Gamma e t \varpi:t'}
{ \pvdash \Gamma e t \varpi.r_\ell: (\recdel {t'} \ell) + \crecord{\ell \eqq \Any}}
{ }
\\
\vspace{-2mm}\\
\Infer[PUpd1]
{ \pvdash \Gamma e t \varpi:t'}
{ \pvdash \Gamma e t \varpi.u_\ell^1: (\recdel {t'} \ell) + \crecord{\ell \eqq \Any}}
......@@ -129,8 +126,8 @@ $t$ to the type $\Any\vee\Undef$, that is, to the type of all
undefined fields in an open record. So \Rule{PDel} and \Rule{PUpd1}
mean that if we remove, add, or redefine a field $\ell$ in an expression $e$
then all we can deduce for $e$ is that its field $\ell$ is undefined: since the original field was destroyed we do not have any information on it apart from the static one.
For instance, consider the test:
\[\ifty{\recupd{x}{a}{0}}{\orecord{a=\Int, b=\Bool}\vee \orecord{a=\Bool, b=\Int}}{x.b}{\False}\]
For instance, consider the test:\\[1mm]
\centerline{\(\ifty{\recupd{x}{a}{0}}{\orecord{a=\Int, b=\Bool}\vee \orecord{a=\Bool, b=\Int}}{x.b}{\False}\)}\\[1mm]
By $\constr{\varpi.u_\ell^1}{\Gamma,e,t}$---i.e., by \Rule{Ext1}, \Rule{PTypeof}, and \Rule{PInter}---the type for $x$ in the positive branch is $((\orecord{a=\Int, b=\Bool}\vee \orecord{a=\Bool, b=\Int}) \land \orecord{a=\Int}) + \crecord{a\eqq \Any}$.
It is equivalent to the type $\orecord{b=\Bool}$, and thus we can deduce that $x.b$ has the type $\Bool$.
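To see why, notice that $\orecord{a=\Bool, b=\Int}\wedge\orecord{a=\Int}$ is empty (the field $a$ cannot be both a Boolean and an integer), so the intersection above is equivalent to $\orecord{a=\Int, b=\Bool}$; the update $+\,\crecord{a\eqq \Any}$ then erases all information about the field $a$, yielding a type equivalent to $\orecord{b=\Bool}$.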
......
......@@ -15,17 +15,17 @@ In a sense, occurrence typing is a
discipline designed to push forward the frontiers beyond which gradual
typing is needed, thus reducing the number of runtime checks. For
instance, the JavaScript code of~\eqref{foo} and~\eqref{foo2} in the introduction can be
typed by using gradual typing:
typed by using gradual typing:\vspace{-.4mm}
\begin{alltt}\color{darkblue}\morecompact
function foo(x\textcolor{darkred}{ : \pmb{\dyn}}) \{
(typeof(x) === "number")? x++ : x.trim() \refstepcounter{equation} \mbox{\color{black}\rm(\theequation)}\label{foo3}
\}
\}\negspace
\end{alltt}
``Standard'' gradual typing inserts two dynamic checks since it compiles the code above into:
\begin{alltt}\color{darkblue}\morecompact
function foo(x) \{
(typeof(x) === "number")? (\textcolor{darkred}{\Cast{number}{{\color{darkblue}x}}})++ : (\textcolor{darkred}{\Cast{string}{{\color{darkblue}x}}}).trim()
\}
\}\negspace
\end{alltt}
where {\Cast{$t$}{$e$}} is a type-cast that dynamically checks whether the value returned by $e$ has type $t$.\footnote{Intuitively, \code{\Cast{$t$}{$e$}} is
syntactic sugar for \code{(typeof($e$)==="$t$")\,?\,$e$\,:\,(throw "Type
......
......@@ -60,11 +60,11 @@ union of the values of the two types). We use $\simeq$ to denote the
symmetric closure of $\leq$: thus $s\simeq t$ means that $s$ and $t$ denote the same set of values and, as such, they are semantically the same type.
\subsection{Syntax}\label{sec:syntax}
The expressions $e$ and values $v$ of our language are inductively generated by the following grammars:
The expressions $e$ and values $v$ of our language are inductively generated by the following grammars:\vspace{-2mm}
\begin{equation}\label{expressions}
\begin{array}{lrclr}
\textbf{Expr} &e &::=& c\alt x\alt ee\alt\lambda^{\wedge_{i\in I}s_i\to t_i} x.e\alt \pi_j e\alt(e,e)\alt\tcase{e}{t}{e}{e}\\[1mm]
\textbf{Values} &v &::=& c\alt\lambda^{\wedge_{i\in I}s_i\to t_i} x.e\alt (v,v)
\textbf{Expr} &e &::=& c\alt x\alt ee\alt\lambda^{\wedge_{i\in I}s_i\to t_i} x.e\alt \pi_j e\alt(e,e)\alt\tcase{e}{t}{e}{e}\\%[1mm]
\textbf{Values} &v &::=& c\alt\lambda^{\wedge_{i\in I}s_i\to t_i} x.e\alt (v,v)\\[-2mm]
\end{array}
\end{equation}
for $j=1,2$. In~\eqref{expressions}, $c$ ranges over constants
......@@ -91,22 +91,24 @@ values.
\subsection{Dynamic semantics}\label{sec:opsem}
The dynamic semantics is defined as a classic left-to-right call-by-value reduction for a $\lambda$-calculus with pairs, enriched with specific rules for type-cases. We have the following notions of reduction:
The dynamic semantics is defined as a classic left-to-right call-by-value reduction for a $\lambda$-calculus with pairs, enriched with specific rules for type-cases. We have the following notions of reduction:\vspace{-2mm}
\[
\begin{array}{rcll}
(\lambda^{\wedge_{i\in I}s_i\to t_i} x.e)v &\reduces& e\subst x v\\
\pi_i(v_1,v_2) &\reduces& v_i & i=1,2\\
\tcase{v}{t}{e_1}{e_2} &\reduces& e_1 &v\in t\\
\tcase{v}{t}{e_1}{e_2} &\reduces& e_2 &v\not\in t\\
(\lambda^{\wedge_{i\in I}s_i\to t_i} x.e)v &\reduces& e\subst x v\\[-.4mm]
\pi_i(v_1,v_2) &\reduces& v_i & i=1,2\\[-.4mm]
\tcase{v}{t}{e_1}{e_2} &\reduces& e_1 &v\in t\\[-.4mm]
\tcase{v}{t}{e_1}{e_2} &\reduces& e_2 &v\not\in t\\[-2mm]
\end{array}
\]
The semantics of type-cases uses the relation $v\in t$ that we
informally defined in the previous section. We delay its formal
definition to Section~\ref{sec:type-schemes} (where it deals with some corner cases for negated arrow types). Contextual reductions are
defined by the following evaluation contexts:
\[
defined by the following evaluation contexts:\\[1mm]
\centerline{\(
%\[
\Cx[] ::= [\,]\alt \Cx e\alt v\Cx \alt (\Cx,e)\alt (v,\Cx)\alt \pi_i\Cx\alt \tcase{\Cx}tee
\]
%\]
\)}\\[1mm]
As usual we denote by $\Cx[e]$ the term obtained by filling the hole of
the context $\Cx$ with $e$, and we have that $e\reduces e'$ implies
$\Cx[e]\reduces\Cx[e']$.
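For instance, taking $\Cx=\tcase{[\,]}{\Int}{e_1}{e_2}$ we have \(\Cx[\pi_1(3,\textsf{true})]\reduces\Cx[3]\reduces e_1\): the first step reduces the projection inside the context, while the second is the type-case reduction, since $3\in\Int$.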
......@@ -118,7 +120,7 @@ standard, for the type system we will have to introduce several
unconventional features that we anticipated in
Section~\ref{sec:challenges} and are at the core of our work. Let
us start with the standard part, that is the typing of the functional
core and the use of subtyping, given by the following typing rules:
core and the use of subtyping, given by the following typing rules:\vspace{-1mm}
\begin{mathpar}
\Infer[Const]
{ }
......@@ -147,7 +149,7 @@ core and the use of subtyping, given by the following typing rules:
% \Gamma \cvdash - e t e_2:t'}
% {\Gamma\vdash \ite {e} t {e_1}{e_2}: t'}
% { }
\\
\vspace{-2mm} \\
\Infer[Sel]
{\Gamma \vdash e:\pair{t_1}{t_2}}
{\Gamma \vdash \pi_i e:t_i}
......@@ -162,7 +164,7 @@ core and the use of subtyping, given by the following typing rules:
{ \Gamma \vdash e:t\\t\leq t' }
{ \Gamma \vdash e: t' }
{ }
\qquad
\qquad\vspace{-3mm}
\end{mathpar}
These rules are quite standard and do not need any particular explanation besides the comments already given in Section~\ref{sec:syntax}. Just notice that we used a classic subsumption rule (i.e., \Rule{Subs}) to embed subtyping in the type system. Let us next focus on the unconventional aspects of our system, from the simplest to the hardest.
......@@ -170,7 +172,7 @@ The first unconventional aspect is that, as explained in
Section~\ref{sec:challenges}, our type assumptions are about
expressions. Therefore, in our rules the type environments, ranged over
by $\Gamma$, map \emph{expressions}---rather than just variables---into
types. This explains why the classic typing rule for variables is replaced by a more general \Rule{Env} rule defined below:
types. This explains why the classic typing rule for variables is replaced by a more general \Rule{Env} rule defined below:\vspace{-1mm}
\begin{mathpar}
\Infer[Env]
{ }
......@@ -180,7 +182,7 @@ types. This explains why the classic typing rule for variables is replaced by a
\Infer[Inter]
{ \Gamma \vdash e:t_1\\\Gamma \vdash e:t_2 }
{ \Gamma \vdash e: t_1 \wedge t_2 }
{ }
{ }\vspace{-3mm}
\end{mathpar}
The \Rule{Env} rule is coupled with the standard intersection introduction rule \Rule{Inter}
which allows us to deduce for a complex expression the intersection of
......@@ -189,12 +191,12 @@ environment $\Gamma$ with the static type deduced for the same
expression by using the other typing rules. This same intersection
rule is also used to infer the second unconventional aspect of our
system, that is, the fact that $\lambda$-abstractions can have negated
arrow types, as long as these negated types do not make the type deduced for the function empty:
arrow types, as long as these negated types do not make the type deduced for the function empty:\vspace{-1mm}
\begin{mathpar}
\Infer[Abs-]
{\Gamma \vdash \lambda^{\wedge_{i\in I}\arrow {s_i} {t_i}}x.e:t}
{ \Gamma \vdash\lambda^{\wedge_{i\in I}\arrow {s_i} {t_i}}x.e:\neg(t_1\to t_2) }
{ (t\wedge\neg(t_1\to t_2))\not\simeq\Empty }
{ (t\wedge\neg(t_1\to t_2))\not\simeq\Empty }\vspace{-3mm}
\end{mathpar}
%\beppe{I have doubt: is this safe or should we play it safer and
% deduce $t\wedge\neg(t_1\to t_2)$? In other terms is is possible to
......@@ -220,10 +222,12 @@ $\lambda$-abstractions all the types we wish. In particular, these
rules alone are not enough to type general overloaded functions. For
instance, consider this simple example of a function that applied to an
integer returns its successor and applied to anything else returns
\textsf{true}:
\[
\textsf{true}:\\[1mm]
\centerline{\(
%\[
\lambda^{(\Int\to\Int)\wedge(\neg\Int\to\Bool)} x\,.\,\tcase{x}{\Int}{x+1}{\textsf{true}}
\]
%\]
\)}\\[1mm]
Clearly, the expression above is well typed, but the rule \Rule{Abs+} alone
is not enough to type it. In particular, according to \Rule{Abs+} we
have to prove that under the hypothesis that $x$ is of type $\Int$ the expression
......@@ -244,7 +248,7 @@ expression whose type environment contains an empty assumption:
\Infer[Efq]
{ }
{ \Gamma, (e:\Empty) \vdash e': t }
{ }
{ }\vspace{-3mm}
\end{mathpar}
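For instance, in the overloaded function above, when the body is checked against the arrow $\neg\Int\to\Bool$ the branch $x+1$ is typed under an assumption for $x$ that is equivalent to $\Int\wedge\neg\Int\simeq\Empty$; rule \Rule{Efq} then allows us to deduce any type for that branch, in particular $\Bool$, so that the whole body can be given type $\Bool$.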
Once more, this kind of deduction was already present in the system
by~\citet{Frisch2008} to type full fledged overloaded functions,
......@@ -255,8 +259,7 @@ property of subject reduction.
Finally, there is one last rule in our type system, the one that
implements occurrence typing, that is, the rule for the
type-case:
type-case:\vspace{-1mm}
\begin{mathpar}
\Infer[Case]
{\Gamma\vdash e:t_0\\
......@@ -265,7 +268,7 @@ type-case:
%t_0\not\leq t \Rightarrow
\Gamma \evdash e {\neg t} \Gamma_2 \\ \Gamma_2 \vdash e_2:t'}
{\Gamma\vdash \tcase {e} t {e_1}{e_2}: t'}
{ }
{ }\vspace{-3mm}
\end{mathpar}
The rule \Rule{Case} checks whether the expression $e$, whose type is
being tested, is well-typed and then performs the occurrence typing
......@@ -363,7 +366,7 @@ of rules.
{ \pvdash \Gamma e t \varpi:t_1 \\ \pvdash \Gamma e t \varpi:t_2 }
{ \pvdash \Gamma e t \varpi:t_1\land t_2 }
{ }
\\
\vspace{-1mm}\\
\Infer[PTypeof]
{ \Gamma \vdash \occ e \varpi:t' }
{ \pvdash \Gamma e t \varpi:t' }
......@@ -374,14 +377,12 @@ of rules.
{ \pvdash \Gamma e t \epsilon:t }
{ }
\qquad
\end{mathpar}
\begin{mathpar}
%\\ \\
\vspace{-1mm}\\
\Infer[PAppR]
{ \pvdash \Gamma e t \varpi.0:\arrow{t_1}{t_2} \\ \pvdash \Gamma e t \varpi:t_2'}
{ \pvdash \Gamma e t \varpi.1:\neg t_1 }
{ t_2\land t_2' \simeq \Empty }
\\
\vspace{-1mm}\\
\Infer[PAppL]
{ \pvdash \Gamma e t \varpi.1:t_1 \\ \pvdash \Gamma e t \varpi:t_2 }
{ \pvdash \Gamma e t \varpi.0:\neg (\arrow {t_1} {\neg t_2}) }
......@@ -391,7 +392,7 @@ of rules.
{ \pvdash \Gamma e t \varpi:\pair{t_1}{t_2} }
{ \pvdash \Gamma e t \varpi.l:t_1 }
{ }
\\
\vspace{-1mm}\\
\Infer[PPairR]
{ \pvdash \Gamma e t \varpi:\pair{t_1}{t_2} }
{ \pvdash \Gamma e t \varpi.r:t_2 }
......@@ -405,7 +406,7 @@ of rules.
\Infer[PSnd]
{ \pvdash \Gamma e t \varpi:t' }
{ \pvdash \Gamma e t \varpi.s:\pair \Any {t'} }
{ }
{ }\vspace{-3mm}
\end{mathpar}
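To see these rules at work, suppose that $\Gamma\vdash f:(\Int\to\Int)\wedge(\Bool\to\Bool)$ and consider the test $\tcase{f x}{\Int}{e_1}{e_2}$, that is, $e=f x$ and $t=\Int$ for the first branch. By the axiom for the empty path we have $\pvdash \Gamma {f x} \Int \epsilon:\Int$; by \Rule{PTypeof} (and subsumption) $\pvdash \Gamma {f x} \Int {\epsilon.0}:\Bool\to\Bool$; since $\Bool\wedge\Int\simeq\Empty$, rule \Rule{PAppR} deduces $\pvdash \Gamma {f x} \Int {\epsilon.1}:\neg\Bool$, that is, if the test succeeds, then the argument $x$ cannot be a Boolean.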
These rules implement the analysis described in
Section~\ref{sec:ideas} for functions and extend it to products. Let
......@@ -495,6 +496,7 @@ system given in Section~\ref{sec:algorules}.
\subsubsection{Type schemes}\label{sec:type-schemes}
We introduce the new syntactic category of \emph{type schemes}, which are the terms~$\ts$ inductively produced by the following grammar.
\[
\begin{array}{lrcl}
\textbf{Type schemes} & \ts & ::= & t \alt \tsfun {\arrow t t ; \cdots ; \arrow t t} \alt \ts \tstimes \ts \alt \ts \tsor \ts \alt \tsempty
......@@ -545,7 +547,7 @@ We also need to perform intersections of type schemes so as to intersect the sta
\end{lemma}
Finally, given a type scheme $\ts$ it is straightforward to choose in its interpretation a type $\tsrep\ts$ which serves as the canonical representative of the set (i.e., $\tsrep \ts \in \tsint \ts$):
\begin{definition}[Representative]
We define a function $\tsrep {\_}$ that maps every non-empty type scheme into a type, \textit{representative} of the set of types denoted by the scheme.\vspace{-3mm}
We define a function $\tsrep {\_}$ that maps every non-empty type scheme into a type, \textit{representative} of the set of types denoted by the scheme.\vspace{-2mm}
\begin{align*}
\begin{array}{lcllcl}
\tsrep t &=& t & \tsrep {\ts_1 \tstimes \ts_2} &=& \pair {\tsrep {\ts_1}} {\tsrep {\ts_2}}\\
......@@ -862,9 +864,9 @@ a precision that the algorithm loses by applying \tsrep{} in the
definition of \constrf{} (case~\eqref{due} is the critical
one). Completeness is recovered by $(i)$ limiting the depth of the
derivations and $(ii)$ forbidding nested negated arrows on the
left-hand side of negated arrows.
left-hand side of negated arrows.\vspace{-.7mm}
\begin{definition}[Rank-0 negation]
A derivation of $\Gamma \vdash e:t$ is \emph{rank-0 negated} if \Rule{Abs--} never occurs in the derivation of a left premise of a \Rule{PAppL} rule.
A derivation of $\Gamma \vdash e:t$ is \emph{rank-0 negated} if \Rule{Abs--} never occurs in the derivation of a left premise of a \Rule{PAppL} rule.\vspace{-.7mm}
\end{definition}
\noindent The use of this terminology is borrowed from the ranking of higher-order
types, since, intuitively, it corresponds to typing a language in
......
\newcommand{\negspace}{\vspace{-.5mm}}
As we explained in the introduction, both TypeScript and Flow deduce the type
\code{(number$\vee$string) $\to$ (number$\vee$string)} for the first definition of the function \code{foo} in~\eqref{foo}, and the more precise type\vspace{-3pt}
\begin{equation}\label{tyinter}
......@@ -16,11 +17,11 @@ In this section we show how to do it by using the theory of occurrence
typing we developed in the first part of the paper. In particular we
collect the different types the parameter of a function is assigned in
its body to determine a partition of the domain types against which
the body of the function is checked. Consider a more involved example:
\begin{alltt}\color{darkblue}
the body of the function is checked. Consider a more involved example:\vspace{-.4mm}
\begin{alltt}\color{darkblue}\morecompact
function (x \textcolor{darkred}{: \(\tau\)}) \{
(x \(\in\) Real) ? \{ (x \(\in\) Int) ? succ x : sqrt x \} : \{ \(\neg\)x \}
\}
\}\negspace
\end{alltt}
When $\tau$ is \code{Real|Bool} (we assume that \code{Int} is a
subtype of \code{Real}) we want to deduce for this function the
......@@ -32,10 +33,10 @@ then the function must be rejected (since it tries to type
\code{$\neg\Real$}). Notice that typing the function under the
hypothesis that $\tau$ is \Any
allows us to capture user-defined discrimination as defined
by~\citet{THF10} since, for instance
by~\citet{THF10} since, for instance\vspace{-.4mm}
\begin{alltt}\color{darkblue}\morecompact
let is_int x = (x\(\in\)Int)? true : false
in if is_int z then z+1 else 42
in if is_int z then z+1 else 42\negspace
\end{alltt}
is well typed since the function \code{is\_int} is given type
$(\Int\to\True)\wedge(\neg\Int\to\False)$. We choose a more general
......@@ -101,7 +102,7 @@ binders):\vspace{-2mm}
\Gamma \evdash e t \Gamma_1\\ \Gamma_1\vdash e_1\triangleright\psi_1\\
\Gamma \evdash e {\neg t} \Gamma_2 \\ \Gamma_2\vdash e_2\triangleright\psi_2}
{\Gamma\vdash \ifty{e}{t}{e_1}{e_2}\triangleright\psi_\circ\cup\psi_1\cup\psi_2}
{}
{}\vspace{-2mm}
\end{mathpar}
Where $\psi\setminus\{x\}$ is the function defined as $\psi$ but undefined on $x$, and $\psi_1\cup \psi_2$ denotes the component-wise union%
\iflongversion%
......@@ -118,7 +119,7 @@ Where $\psi\setminus\{x\}$ is the function defined as $\psi$ but undefined on $x
\noindent
\else.~\fi
All that remains to do is replace the rule [{\sc Abs}+] with the
following rule:
following rule:\vspace{-1mm}
\begin{mathpar}
\Infer[AbsInf+]
{\Gamma,x:s\vdash e\triangleright\psi
......@@ -130,7 +131,7 @@ following rule
{
\Gamma\vdash\lambda x:s.e:\textstyle\bigwedge_{(s',t') \in T}s'\to t'
}
{}
{}\vspace{-2mm}
\end{mathpar}
Note the invariant that the domain of
$\psi$ is always equal to the
......@@ -153,7 +154,7 @@ It is not too difficult to generalize this rule when the lambda is
typed by an intersection type. Indeed, even if the programmer has
specified a particular intersection type for the function we may want
to refine each of its arrows. This is exactly what the following rule
does:
does:\vspace{-1mm}
\begin{mathpar}
\Infer[AbsInf+] {\forall i\in I~\Gamma,x:s_i\vdash
e\triangleright\psi_i
......@@ -164,7 +165,7 @@ does:
T_i = \{ (u, v) ~|~ u\in\psi_i(x) \land \Gamma, x:u\vdash e : v\}
} {\textstyle \Gamma\vdash\lambda^{\bigwedge_{i\in
I}s_i\to t_i} x.e:\bigwedge_{i\in I}(s_i\to
t_i)\land\bigwedge_{(u, v)\in T_i}(u\to v) } {}
t_i)\land\bigwedge_{(u, v)\in T_i}(u\to v) } {}\vspace{-2mm}
\end{mathpar}
Here, for each arrow declared in the interface of the function, we
first typecheck the body of the function as usual (to check that the
......