Commit 448de750 authored by Giuseppe Castagna's avatar Giuseppe Castagna

spell-check

parent 42b60d34
@@ -6,7 +6,7 @@
decides whether there exists a type $t$ such that $\Gamma\vdash e:t$
is provable. For that we need to solve essentially two problems:
$(i)$~how to handle the fact that it is possible to deduce several
types for the same well-typed expression and $(ii)$~how to compute the
auxiliary deduction system for paths.
auxiliary deduction system $ \pvdash \Gamma e t$ for paths.
$(i)$. Multiple types have two distinct origins, each requiring a distinct
technical solution. The first origin is the presence of
@@ -39,7 +39,7 @@
that we propose in this section implies (i.e., it is less precise than) the one
Lemma~\ref{soundness_simple_ts}) and it is thus sound, too. The algorithm of this
section is not only simpler but, as we discuss in Section~\ref{sec:algoprop},
is also the one that should be used in practice. This is why we preferred to
present it here and relegate the presentation of type schemes to
present it here and relegate the presentation of the system with type schemes to
Appendix~\ref{app:typeschemes}.
$(ii)$. For what concerns the use of the auxiliary derivation for the $\Gamma \evdash e t \Gamma' $ and $\pvdash \Gamma e t \varpi:t'$
@@ -63,11 +63,12 @@
system given in Section~\ref{sec:algorules}.
In order to define the algorithmic typing of expressions like
applications and projections we need to define the operators on
types we used in Section~\ref{sec:ideas}. Consider the classic rule \Rule{App} for applications. It essentially
does three things: $(i)$ it checks that the function has functional
does three things: $(i)$ it checks that the expression in the function
position has a functional
type; $(ii)$ it checks that the argument is in the domain of the
function, and $(iii)$ it returns the type of the application. In systems
without set-theoretic types these operations are quite
straightforward: $(i)$ corresponds to checking that the function has
straightforward: $(i)$ corresponds to checking that the expression has
an arrow type, $(ii)$ corresponds to checking that the argument is in
the domain of the arrow deduced for the function, and $(iii)$ corresponds
to returning the codomain of that same arrow. With set-theoretic types
@@ -75,7 +76,7 @@
things get more difficult, since a function can be typed by, say, a
union of intersections of arrows and negations of types. Checking that
the function has a functional type is easy, since it corresponds to
checking that its type is a subtype of $\Empty{\to}\Any$. Determining
its domain and the type of the application is more complicated and needs the operators $\dom{}$ and $\circ$ that we informally described in Section~\ref{sec:ideas}, where we also introduced the operator $\worra{}{}$. These three operators are used by our algorithm and formally defined as:\vspace{-3.5mm}
its domain and the type of the application is more complicated and needs the operators $\dom{}$ and $\circ$ that we informally described in Section~\ref{sec:ideas}, where we also introduced the operator $\worra{}{}$. These three operators are used by our algorithm and formally defined as:\vspace{-0.5mm}
\begin{eqnarray}
\dom t & = & \max \{ u \alt t\leq u\to \Any\}
\\[-1mm]
@@ -93,12 +94,12 @@
We need similar operators for projections since the type $t$
of $e$ in $\pi_i e$ may not be a single product type but, say, a union
of products: all we know is that $t$ must be a subtype of
$\pair\Any\Any$. So let $t$ be a type such that $t\leq\pair\Any\Any$,
then we define:\vspace{-1.2mm}
then we define:\vspace{-0.7mm}
\begin{equation}
\begin{array}{lcrlcr}
\bpl t & = & \min \{ u \alt t\leq \pair u\Any\}\qquad&\qquad
\bpr t & = & \min \{ u \alt t\leq \pair \Any u\}
\end{array}\vspace{-1.2mm}
\end{array}\vspace{-0.7mm}
\end{equation}
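As a quick illustration (not the paper's implementation), the two projection operators can be sketched in a toy model where a type is a finite set of values, a product type is a set of pairs, and subtyping is set inclusion; the names `INT`, `BOOL`, `bpl`, and `bpr` below are all hypothetical:

```python
# Toy model: a type is a frozenset of values, a product type a set of
# pairs, and subtyping is set inclusion (hypothetical names throughout).
INT  = frozenset({"1", "2"})
BOOL = frozenset({"true", "false"})

def product(t1, t2):
    # the product type t1 x t2
    return frozenset((v1, v2) for v1 in t1 for v2 in t2)

def bpl(t):
    # smallest u such that t <= u x Any: collect the first components
    return frozenset(v1 for (v1, _) in t)

def bpr(t):
    # smallest u such that t <= Any x u: collect the second components
    return frozenset(v2 for (_, v2) in t)

# a union of products, as in the text: (INT x BOOL) v (BOOL x INT)
t = product(INT, BOOL) | product(BOOL, INT)
```

In this model both projections of the union evaluate to `INT | BOOL`, reflecting that each component may come from either summand.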
All the operators above but $\worra{}{}$ are already present in the
theory of semantic subtyping: the reader can find how to compute them
@@ -111,10 +112,10 @@
furthermore $t\leq\Empty\to\Any$, then $t \simeq \bigvee_{i\in
I}\left(\bigwedge_{p\in P_i}(s_p\to t_p)\bigwedge_{n\in
N_i}\neg(s_n'\to t_n')\right)$ with $\bigwedge_{p\in P_i}(s_p\to
t_p)\bigwedge_{n\in N_i}\neg(s_n'\to t_n') \not\simeq \Empty$ for all
$i$ in $I$. For such a $t$ and any type $s$ we have:\vspace{-1.7mm}
$i$ in $I$. For such a $t$ and any type $s$ we have:\vspace{-1.0mm}
%
\begin{equation}
\worra t s = \dom t \wedge\bigvee_{i\in I}\left(\bigwedge_{\{P\subseteq P_i\alt s\leq \bigvee_{p \in P} \neg t_p\}}\left(\bigvee_{p \in P} \neg s_p\right) \right)\vspace{-1.7mm}
\worra t s = \dom t \wedge\bigvee_{i\in I}\left(\bigwedge_{\{P\subseteq P_i\alt s\leq \bigvee_{p \in P} \neg t_p\}}\left(\bigvee_{p \in P} \neg s_p\right) \right)\vspace{-1.0mm}
\end{equation}
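To make the combinatorics of this formula concrete, here is a small executable sketch in a toy model where base types are finite sets of values, so that union, intersection, negation, and subtyping are plain set operations. The function type is given directly in disjunctive normal form as a list of summands, each listed by its positive arrows, since the formula ignores negative arrows. All names (`dom`, `worra`, `INT`, ...) are hypothetical illustrations, not the paper's implementation:

```python
from itertools import chain, combinations

# Toy universe of values; base types are frozensets of these values.
U    = frozenset({"1", "2", "true", "false"})
INT  = frozenset({"1", "2"})
BOOL = frozenset({"true", "false"})

def neg(x):
    # set-theoretic negation with respect to the universe U
    return U - x

def subsets(xs):
    # all subsets P of a list of arrows
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def dom(t):
    # dom(\/_i /\_p s_p -> t_p) = /\_i \/_p s_p
    d = U
    for arrows in t:
        d &= frozenset().union(*(s for (s, _) in arrows))
    return d

def worra(t, s):
    # worra t s = dom t /\ \/_i /\_{P<=P_i, s <= \/_{p in P} neg t_p} \/_{p in P} neg s_p
    result = frozenset()
    for arrows in t:
        summand = U                      # the empty intersection is Any
        for P in subsets(arrows):
            if s <= frozenset().union(*(neg(tp) for (_, tp) in P)):
                summand &= frozenset().union(*(neg(sp) for (sp, _) in P))
        result |= summand
    return dom(t) & result

# an overloaded function type (Int -> Int) /\ (Bool -> Bool):
# one summand with two positive arrows
t = [[(INT, INT), (BOOL, BOOL)]]
```

For this overloaded type, requiring a result in `INT` narrows the argument type down to `INT`, which is exactly the refinement the occurrence-typing rules exploit.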
The formula considers only the positive arrows of each summand that
forms $t$ and states that, for each summand, whenever you take a subset
......
@@ -24,7 +24,7 @@
to reuse some of their ideas to perform an information flow analysis on top of
our system to remove these limitations.
%
\iflongversion%
Some of the exensions we hinted to in Section~\ref{sec:practical}
Some of the extensions we hinted to in Section~\ref{sec:practical}
warrant a formal treatment. In particular, the rule [{\sc OverApp}]
only detects the application of an overloaded function once, when
type-checking the body of the function against the coarse input type
......
@@ -64,11 +64,11 @@
field $\ell = t \vee \Undef$%
For what concerns \emph{expressions}, we cannot use CDuce record expressions
as they are, but we must adapt them to our analysis. In particular, we
consider records that are built starting from the empty record expression \erecord{} by adding, updating, or removing fields:\vspace{-1.5mm}
consider records that are built starting from the empty record expression \erecord{} by adding, updating, or removing fields:\vspace{-0.75mm}
\[
\begin{array}{lrcl}
\textbf{Expr} & e & ::= & \erecord {} \alt \recupd e \ell e \alt \recdel e \ell \alt e.\ell
\end{array}\vspace{-1.5mm}
\textbf{Expr} & e & ::= & \erecord {} ~\alt~ \recupd e \ell e ~\alt~ \recdel e \ell ~\alt~ e.\ell
\end{array}\vspace{-.75mm}
\]
in particular $\recdel e \ell$ deletes the field $\ell$ from $e$, $\recupd e \ell e'$ adds the field $\ell=e'$ to the record $e$ (deleting any existing $\ell$ field), while $e.\ell$ is field selection with the reduction:
\(\erecord{...,\ell=e,...}.\ell\ \reduces\ e\).
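Operationally, these four record expressions behave like finite-map operations; the following toy sketch (hypothetical helper names, standing in for the formal reduction rules) makes that reading explicit:

```python
# Toy model of the record expressions as finite-map operations
# (hypothetical names; the paper defines these by reduction rules).
def erecord():
    # {} : the empty record
    return {}

def recupd(e, label, e2):
    # e with field label = e2 (overwriting any existing label field)
    r = dict(e)
    r[label] = e2
    return r

def recdel(e, label):
    # e with field label deleted
    r = dict(e)
    r.pop(label, None)
    return r

def select(e, label):
    # field selection e.label, with the reduction {...,l=v,...}.l ~> v
    return e[label]

# build the record <x=1, y=2> starting from the empty record
r = recupd(recupd(erecord(), "x", 1), "y", 2)
```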
@@ -77,7 +77,7 @@
To define record type subtyping and record expression type inference we need thr
%
Then two record types $t_1$ and $t_2$ are in subtyping relation, $t_1 \leq t_2$, if and only if for all $\ell \in \Labels$ we have $\proj \ell {t_1} \leq \proj \ell {t_2}$. In particular $\orecord{\!\!}$ is the largest record type.
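This label-wise definition of subtyping can be sketched in a toy model where field types are finite sets of values and a record type is a quasi-constant function from labels to types, represented as an explicit field map plus a default type for all unmentioned labels; `UNDEF`, `ANY`, and `record_subtype` are hypothetical names:

```python
# Toy model (hypothetical names): field types are frozensets of values;
# UNDEF is the singleton type of the special "undefined" value and ANY
# the whole universe.  A record type is (field map, default type).
UNDEF = frozenset({"undef"})
INT   = frozenset({"1", "2"})
ANY   = INT | UNDEF

def proj(label, rec):
    # proj l rec: the type of field l, falling back to the default
    fields, default = rec
    return fields.get(label, default)

def record_subtype(r1, r2):
    # r1 <= r2 iff proj l r1 <= proj l r2 for every label l; checking the
    # labels mentioned in either record plus the two defaults suffices,
    # since both functions are constant elsewhere.
    labels = set(r1[0]) | set(r2[0])
    return all(proj(l, r1) <= proj(l, r2) for l in labels) and r1[1] <= r2[1]

closed_x_int = ({"x": INT}, UNDEF)   # closed record: other fields undefined
open_empty   = ({}, ANY)             # open empty record: the largest record type
```

In particular the closed record with a single integer field is a subtype of the open empty record, but not vice versa, matching the claim that the open empty record type is the largest one.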
Expressions are then typed by the following rules (already in algorithmic form).\vspace{-3mm}
Expressions are then typed by the following rules (already in algorithmic form).\vspace{-.1mm}
\begin{mathpar}
\Infer[Record]
{~}
@@ -89,7 +89,8 @@
Expressions are then typed by the following rules (already in algorithmic form).
{\Gamma \vdash \recupd{e_1}{\ell}{e_2}: t_1 + \crecord{\ell=t_2}}
{\recupd{e_1\!\!}{\!\!\ell}{e_2} \not\in\dom\Gamma}
%\end{mathpar}
%\begin{mathpar}\vspace{-4mm}
%\begin{mathpar}
\vspace{-1.9mm}
\\
\Infer[Delete]
{\Gamma \vdash e:t\and t\leq\orecord {\!\!}}
......
@@ -78,7 +78,7 @@
types they present go well beyond what can be expressed with the set-theoretic
types we use (as they allow almost any pure expression to occur in
types). However, the system forgoes any notion (or even just a characterization) of completeness,
and the
subtyping algorithm is largely dependent on the subtle behaviour of
subtyping algorithm is largely dependent on the subtle behavior of
the SMT solver (which may time out or return an incorrect model that
cannot be used as a counter-example to explain the type error).
As with our work, the typing rule for the {\tt if~$e$ then $e_1$ else
@@ -92,7 +92,7 @@
on ``base types''). This is done in the context of a dynamic
language and their approach is extended with polymorphism, dynamic
dispatch and record types.
\citet{Kent16} bridge the gap between prior work on occurence typing
\citet{Kent16} bridge the gap between prior work on occurrence typing
and SMT-based (sub)typing. They introduce the $\lambda_{RTR}$ core calculus, an
extension of $\lambda_{TR}$ where the logical formulæ embedded in
types are not limited to built-in type predicates, but accept
......