Commit fd19bfb6 authored by Giuseppe Castagna

reduced section records

parent 2440530c
\subsection{Record types}
\label{ssec:struct}
\iflongversion%%%%%%%%%%%%%%
The previous analysis already covers a large gamut of realistic
cases. For instance, it already handles list data
structures, since products and recursive types can encode them as
@@ -14,7 +15,19 @@ extend our work to records. Although we use the record \emph{types} as they are
defined in CDuce, we cannot do the same for CDuce record \emph{expressions},
which require non-trivial modifications for occurrence typing,
especially because we want to capture the typing of record field extension and field
deletion, two widely used record operations that current gradual
typing systems fail to capture.
\else%%%%%%%%%%%
Since the main application of occurrence
typing is to type dynamic languages, it is important to show that our
work can handle records. We cannot handle them via a simple encoding into
pairs, since we need a precise type analysis for operations such
as field addition, update, and deletion. We therefore extend our approach to deal
with records (types and expressions) as they are found in the aforementioned
CDuce language, where these operations can be precisely typed.
\fi%%%%%
CDuce record \emph{types} can be embedded in our types by adding the
following two type constructors:\\[1.4mm]
%
\centerline{\(\textbf{Types} \quad t ~ ::= ~ \record{\ell_1=t \ldots \ell_n=t}{t}\alt \Undef\)}\\[1.4mm]
@@ -49,11 +62,13 @@ field $\ell = t \vee \Undef$%
\ (note that ``$\ldots{}$'' is meta-syntax while ``{\large\textbf{..}}'' is syntax).
\fi
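For instance (a sketch of the intended reading, assuming---as in CDuce---that the tail type of the constructor specifies the type of all the remaining labels), the open and closed record types used in the rules below can be read as:\\[1mm]
\centerline{\(\orecord{\ell=t} \simeq \record{\ell=t}{(\Any\vee\Undef)} \qquad\qquad \crecord{\ell=t} \simeq \record{\ell=t}{\Undef}\)}\\[1mm]
that is, an open record type leaves every other field unconstrained (present with any type, or absent), while a closed one requires every other field to be absent.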
As for \emph{expressions}, we cannot use CDuce record expressions
as they are: we must adapt them to our analysis. In particular, we
consider records built from the empty record expression \erecord{} by adding, updating, or removing fields:\vspace{-1.5mm}
\[
\begin{array}{lrcl}
\textbf{Expr} & e & ::= & \erecord {} \alt \recupd e \ell e \alt \recdel e \ell \alt e.\ell
\end{array}\vspace{-1.5mm}
\]
in particular $\recdel e \ell$ deletes the field $\ell$ from $e$, $\recupd e \ell e'$ adds the field $\ell=e'$ to the record $e$ (deleting any existing $\ell$ field), while $e.\ell$ is field selection with the reduction:
\(\erecord{...,\ell=e,...}.\ell\ \reduces\ e\).
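For instance (a small worked reduction, assuming field addition on a record value reduces to the extended record value):
\[
(\recupd{\erecord{}}{\ell}{42}).\ell \;\reduces\; \erecord{\ell=42}.\ell \;\reduces\; 42
\]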
@@ -62,7 +77,7 @@ To define record type subtyping and record expression type inference we need thr
%
Then two record types $t_1$ and $t_2$ are in subtyping relation, $t_1 \leq t_2$, if and only if for all $\ell \in \Labels$ we have $\proj \ell {t_1} \leq \proj \ell {t_2}$. In particular $\orecord{\!\!}$ is the largest record type.
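For example (assuming, as in CDuce, that $\proj \ell t$ yields $\Undef$ for a field that a closed record type forbids and $\Any\vee\Undef$ for a field that an open record type leaves unconstrained), writing $\Int$ and $\Bool$ for the basic types of integers and Booleans we have
\[
\crecord{a=\Int,\ b=\Bool} \;\leq\; \orecord{a=\Int} \;\leq\; \orecord{\!\!}
\]
since the projections on $a$ yield $\Int\leq\Int\leq\Any\vee\Undef$ and those on every other label are contained in $\Any\vee\Undef$.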
Expressions are then typed by the following rules (already in algorithmic form).\vspace{-3mm}
\begin{mathpar}
\Infer[Record]
{~}
@@ -73,8 +88,9 @@ Expressions are then typed by the following rules (already in algorithmic form).
{\Gamma \vdash e_1:t_1\and t_1\leq\orecord {\!\!} \and \Gamma \vdash e_2:t_2}
{\Gamma \vdash \recupd{e_1}{\ell}{e_2}: t_1 + \crecord{\ell=t_2}}
{\recupd{e_1\!\!}{\!\!\ell}{e_2} \not\in\dom\Gamma}
%\end{mathpar}
%\begin{mathpar}\vspace{-4mm}
\\
\Infer[Delete]
{\Gamma \vdash e:t\and t\leq\orecord {\!\!}}
{\Gamma \vdash \recdel e \ell: \recdel t \ell}
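As a small worked example (a sketch, assuming that [Record] assigns to the empty record the closed empty record type $\crecord{}$ and that $+$ is the record concatenation operator, which gives priority to its right argument), from $\Gamma\vdash\erecord{}:\crecord{}$ and $\Gamma\vdash 42:\Int$, rule [Update] yields
\[
\Gamma \vdash \recupd{\erecord{}}{\ell}{42} \;:\; \crecord{} + \crecord{\ell=\Int} \;\simeq\; \crecord{\ell=\Int}
\]
provided that $\recupd{\erecord{}}{\ell}{42}\not\in\dom\Gamma$.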
@@ -90,7 +106,8 @@ To extend occurrence typing to records we add the following values to paths: $\v
\(\recdel e \ell\downarrow r_\ell.\varpi = \occ{e}\varpi\), and
\(\recupd{e_1}\ell{e_2}\downarrow u_\ell^i.\varpi = \occ{e_i}\varpi\)
and add the following rules for the new paths:
\begin{mathpar}\vspace{-8.7mm}\\
\Infer[PSel]
{ \pvdash \Gamma e t \varpi:t'}
{ \pvdash \Gamma e t {\varpi.a_\ell}:\orecord {\ell:t'} }
@@ -100,7 +117,7 @@ and add the following rules for the new paths:
{ \pvdash \Gamma e t \varpi:t'}
{ \pvdash \Gamma e t \varpi.r_\ell: (\recdel {t'} \ell) + \crecord{\ell \eqq \Any}}
{ }
\vspace{-3mm}\\
\Infer[PUpd1]
{ \pvdash \Gamma e t \varpi:t'}
{ \pvdash \Gamma e t \varpi.u_\ell^1: (\recdel {t'} \ell) + \crecord{\ell \eqq \Any}}
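As a sketch of how these paths act (assuming, as for the other paths of the system, that the type deduced for an occurrence is intersected with its statically known type, and writing $\epsilon$ for the empty path): if $x:\orecord{\ell=\Int\vee\Bool}$ and we test $x.\ell\in\Int$, then $x.\ell\downarrow a_\ell=x$ and, in the branch where $x.\ell$ is assumed to have type $\Int$, rule [PSel] deduces the type $\orecord{\ell:\Int}$ for $x$, whence
\[
\orecord{\ell=\Int\vee\Bool} \;\wedge\; \orecord{\ell:\Int} \;\simeq\; \orecord{\ell=\Int}.
\]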
@@ -58,7 +58,8 @@ as \emph{type connectives}.
The subtyping relation for these types, denoted $\leq$, is the one defined
by~\citet{Frisch2008} to which the reader may refer for the formal
definition (see~\cite{types18post} for a simpler alternative
definition that we recall in Appendix~\ref{sec:subtyping} for the
reader's convenience). A detailed description of the algorithm to decide it can be found in~\cite{Cas15}.
For this presentation it suffices to consider that
types are interpreted as sets of \emph{values} ({i.e., either
constants, $\lambda$-abstractions, or pairs of values: see
@@ -129,22 +129,17 @@ url={http://www.cduce.org/papers/frisch_phd.pdf}
url = {http://doi.acm.org/10.1145/1391289.1391293},
publisher = {{ACM}},
}
@article{Cas15,
author = {G. Castagna},
title = {Covariance and Contravariance: a fresh look at an
old issue (a primer in advanced type systems for
learning functional programmers)},
journal = {Logical Methods in Computer Science},
volume = {16},
number = {1},
pages = {15:1--15:58},
year = {2020},
}
@@ -290,6 +290,7 @@ The authors thank Paul-André Melliès for his help on type ranking.
\end{acks}
%% Bibliography
\bibliography{main}
@@ -38,7 +38,10 @@ semantics to higher-order functions.
We define the interpretation $ \TypeInter{t} $ of a type $ t $
so that it satisfies the following equalities,
where $ \Pf $ denotes the restriction of the powerset to finite
subsets and $ \ConstantsInBasicType{()}$ denotes the function
that assigns to each basic type the set of constants of that type, so
that $ c\in \ConstantsInBasicType(\basic {c})$:
\begin{align*}
\TypeInter{\Empty} & = \emptyset&
\TypeInter{t_1 \lor t_2} & = \TypeInter{t_1} \cup \TypeInter{t_2} &
@@ -51,10 +54,6 @@ where $ \Pf $ denotes the restriction of the powerset to finite subsets:
\span\span
\span\span
\end{align*}
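For instance (a sketch, assuming the equality $\TypeInter{b}=\ConstantsInBasicType(b)$ for every basic type $b$, which is elided above, and writing $\Bool$ for the basic type whose only constants are the Booleans):
\[
\TypeInter{\Bool\lor\Empty} \;=\; \TypeInter{\Bool}\cup\TypeInter{\Empty} \;=\; \ConstantsInBasicType(\Bool)\cup\emptyset \;=\; \{\textsf{true},\textsf{false}\}.
\]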
We cannot take the equations above
directly as an inductive definition of $ \TypeInter{} $
because types are not defined inductively but coinductively.
@@ -64,7 +63,7 @@ Definition~\ref{def:types} ensures that the binary relation $\vartriangleright
t_i$, $t_1 \land t_2 \vartriangleright
t_i$, $\neg t \vartriangleright t$ is Noetherian.
This gives an induction principle\footnote{In a nutshell, we can do proofs by induction on the structure of unions and negations---and, thus, intersections---but arrows, products, and basic types are the base cases for the induction.} on $\types$ that we
use combined with structural induction on $\Domain$ to give the following definition (due to~\citet{types18post}
and equivalent to but simpler than the definition in~\cite{Frisch2008}),
which validates these equalities.
@@ -89,10 +88,6 @@ We define the \emph{set-theoretic interpretation}
$ \TypeInter{} : \types \to \Pd(\Domain) $
as $ \TypeInter{t} = \{d \in \Domain \mid (d : t)\} $.
\end{definition}
Notice that, since the binary predicate $ (d : t) $ is defined on
$\Domain\times\types$ and $\Omega\not\in\Domain$, we have
$\Omega\not\in\TypeInter t$.
Finally,
we define the subtyping preorder and its associated equivalence relation
as follows.