Commit 04c7b218 authored by Giuseppe Castagna's avatar Giuseppe Castagna

added paragraph on gradual typing

parent 129254cb
TypeScript and Flow are extensions of JavaScript that allow the programmer to specify type annotations in the code, which are used to statically type-check the program. For instance, the following function definition is valid in both languages:
\begin{alltt}\color{darkblue}
function foo(x\textcolor{darkred}{ : number | string}) \{
return (typeof(x) === "number")? x+1 : x.trim(); \refstepcounter{equation} \mbox{\color{black}\rm(\theequation)}\label{foo}
@@ -89,7 +89,7 @@ left or the right expressions of an application. In other words, we aim at estab
extract as much static information as possible from a type test. We leverage our
analysis on the presence of full-fledged set-theoretic types
connectives provided by the theory of semantic subtyping. Our analysis
will also yield two important byproducts. First, to refine the type of the
variables we have to refine the type of the expressions they occur in and
we can use this information to improve our analysis. Therefore our
occurrence typing approach will refine not only the types of variables
@@ -98,12 +98,28 @@ inference). Second, and most importantly, the result of our analysis can be used
intersection types for functions, even in the absence of precise type
annotations such as the one in the definition of \code{foo} in~\eqref{foo2}: to put it simply, we are able to
infer the type~\eqref{eq:inter} for the unannotated pure JavaScript
code of \code{foo} (i.e., with no type annotation at all), while in TypeScript and Flow (and any other
formalism we are aware of)
this requires an explicit and full type annotation such as the one given in~\eqref{foo2}.
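For comparison, the annotation TypeScript requires can be written as explicit overload signatures; the following sketch is ours (not taken from the paper) and makes the intersection of arrow types explicit:

```typescript
// TypeScript needs explicit overload signatures to give foo the
// intersection-like type (number -> number) & (string -> string).
// This sketch is ours, not the paper's formal development.
function foo(x: number): number;
function foo(x: string): string;
function foo(x: number | string): number | string {
  return typeof x === "number" ? x + 1 : x.trim();
}
```

With these overloads, `foo(3)` is statically typed `number` and `foo(" a ")` is statically typed `string`, which is exactly the information the paper's analysis infers without any annotation.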
Finally, the natural targets for occurrence typing are languages with
dynamic type tests, in particular, dynamic languages. To type such
languages, occurrence typing is often combined not only with
set-theoretic types, as discussed above, but also with extensible
record types (to type objects) and with gradual typing (to combine
static and dynamic typing), two features that we study in
Section~\ref{sec:extensions} as extensions of our core formalism. Of
particular interest is the latter. \citet{Gre19} singles out
occurrence typing and gradual typing as \emph{the} two ``lineages'' that
partition the research on combining static and dynamic typing: he
identifies the former as the ``pragmatic,
implementation-oriented dynamic-first'' lineage and the latter as the
``formal, type-theoretic, static-first'' lineage. Here we demonstrate that
these two ``lineages'' are not mutually independent: we combine
occurrence typing with gradual typing and show, in particular, how
the former can be used to optimize the compilation of the latter.
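As a rough illustration of this interplay, in a gradually typed program a value of unknown type must in general be checked at runtime; an occurrence-typing test makes that check explicit once, after which no further casts are needed in the branch. A minimal sketch of ours, using TypeScript's `unknown` as a stand-in for the gradual type:

```typescript
// Sketch (ours): `unknown` plays the role of the gradual/dynamic
// type; the typeof test narrows it, so the positive branch needs
// no further runtime cast.
function succ(x: unknown): number {
  if (typeof x === "number") {
    return x + 1; // x is statically a number here
  }
  throw new TypeError("expected a number");
}
```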
\subsection{Motivating examples}
We focus our study on conditionals that test types and consider the following syntax:
\(
\ifty{e}{t}{e}{e}
@@ -398,7 +414,7 @@ second. Thus we would type the positive branch of \eqref{nest1} under the hypoth
that $x$ is of type $\Int$. But if we use the hypothesis generated by
the test of $f\,x$, that is, that $x$ is of type \Int, to check $g\,x$ against \Bool,
then the type deduced for $x$ is $\Empty$---i.e., the branch is never selected. In other words, we want to produce type
environments for occurrence typing by taking into account all
the available hypotheses, even when these hypotheses are formulated later
in the flow of control. This will be done in the type systems of
Section~\ref{sec:language} by the rule \Rule{Path} and will require at
@@ -459,11 +475,11 @@ available online.
\iflongversion
\subsubsection*{Contributions}
\fi
\noindent The main contributions of our work can be summarized as follows:
\begin{itemize}[left=0pt .. \parindent,nosep]
\item We provide a theoretical framework to refine the type of
expressions occurring in type tests, thus removing the limitations
of current occurrence typing approaches which require both the tests and the refinement
to take place on variables.
\item We define a type-theoretic approach alternative to the
@@ -473,14 +489,17 @@ The main contributions of our work can be summarized as follows:
\item We use our analysis to define a formal framework that
reconstructs intersection types for unannotated or
partially-annotated functions, something that, to our knowledge, no
other current system can do.
\item We prove the soundness of our system. We define type-inference
algorithms that we prove to be sound, and show several completeness results
which in practice yield the completeness of any reasonable implementation.
\item We show how to extend our approach to records with field
addition, update, and deletion operations.
\item We show how occurrence typing can be extended to and combined with
gradual typing, and apply our results to optimize the compilation of the
latter.
% \item We show how it is possible to extend our framework to
@@ -488,7 +507,7 @@ The main contributions of our work can be summarized as follows:
\end{itemize}
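To ground the record-extension item above, field addition, update, and deletion can be pictured in plain TypeScript via spread and rest patterns; this is our analogy, not the paper's record calculus:

```typescript
// Record operations sketched with spread/rest (our analogy, not
// the paper's extensible record types):
type Point = { x: number; y: number };
const p: Point = { x: 1, y: 2 };
const added = { ...p, z: 0 };    // field addition: {x, y, z}
const updated = { ...p, x: 10 }; // field update: x becomes 10
const { y, ...deleted } = p;     // field deletion: only {x} remains
void y; // y was split off; `deleted` no longer has it
```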
We end this introduction by stressing the practical
implications of our work: a perfunctory inspection may give the
wrong impression that the only interest of the heavy formalization
that follows is to have generic expressions, rather than just variables, in type
cases: this would be a bad trade-off. The important point is,
instead, that our formalization is what makes analyses such as those
@@ -102,7 +102,7 @@ The expressions $e$ and values $v$ of our language are inductively generated by
\begin{equation}\label{expressions}
\begin{array}{lrclr}
\textbf{Expr} &e &::=& c\alt x\alt ee\alt\lambda^{\wedge_{i\in I}s_i\to t_i} x.e\alt \pi_j e\alt(e,e)\alt\tcase{e}{t}{e}{e}\\[.3mm]
\textbf{Values} &v &::=& c\alt\lambda^{\wedge_{i\in I}s_i\to t_i} x.e\alt (v,v)\\
\end{array}
\end{equation}
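The grammar above can be transcribed as an ordinary discriminated-union AST; the sketch below uses our own constructor names and leaves set-theoretic types opaque:

```typescript
// AST for the expression grammar in (expressions); constructor
// names are ours, and types are left opaque as strings.
type Ty = string;
type Expr =
  | { kind: "const"; value: unknown }              // c
  | { kind: "var"; name: string }                  // x
  | { kind: "app"; fn: Expr; arg: Expr }           // e e
  | { kind: "abs";                                 // λ^{∧ s_i→t_i} x.e
      iface: { dom: Ty; cod: Ty }[];
      param: string; body: Expr }
  | { kind: "proj"; index: 1 | 2; of: Expr }       // π_j e
  | { kind: "pair"; fst: Expr; snd: Expr }         // (e, e)
  | { kind: "tcase";                               // (e ∈ t) ? e : e
      scrut: Expr; ty: Ty; then: Expr; else: Expr };
```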
for $j=1,2$. In~\eqref{expressions}, $c$ ranges over constants
@@ -150,12 +150,12 @@ call-by-value weak reduction for a $\lambda$-calculus with pairs, enriched with
\end{array}
\]
Contextual reductions are
defined by the following evaluation contexts:
\[
\Cx[] ::= [\,]\alt \Cx e\alt v\Cx \alt (\Cx,e)\alt (v,\Cx)\alt \pi_i\Cx\alt \tcase{\Cx}tee
\]
As usual we denote by $\Cx[e]$ the term obtained by replacing $e$ for
the hole in the context $\Cx$ and we have that $e\reduces e'$ implies
$\Cx[e]\reduces\Cx[e']$.
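The typecase reduction can be mimicked at runtime with a user-defined type guard standing in for the test ``$v$ has type $t$''; this is a simplification of ours, not the calculus itself:

```typescript
// tcase(v, guard, e1, e2) mimics (v ∈ t) ? e1 : e2: the guard
// plays the role of the type test on the value (our simplification).
function tcase<T>(
  v: unknown,
  guard: (v: unknown) => v is T,
  then_: (v: T) => unknown,
  else_: (v: unknown) => unknown
): unknown {
  return guard(v) ? then_(v) : else_(v);
}

const isNumber = (v: unknown): v is number => typeof v === "number";
```

For instance, `tcase(3, isNumber, n => n + 1, () => "not a number")` selects the first branch, with `n` statically known to be a number inside it.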
@@ -207,7 +207,7 @@
\label{sec:extensions}
\input{extensions}
\section{Implementation}
\label{sec:practical}
\input{practical}
@inproceedings{Gre19,
author = {Michael Greenberg},
editor = {Benjamin S. Lerner and
Rastislav Bod{\'{\i}}k and
Shriram Krishnamurthi},
title = {The Dynamic Practice and Static Theory of Gradual Typing},
booktitle = {3rd Summit on Advances in Programming Languages, {SNAPL} 2019, May
16-17, 2019, Providence, RI, {USA}},
series = {LIPIcs},
volume = {136},
pages = {6:1--6:20},
publisher = {Schloss Dagstuhl - Leibniz-Zentrum f{\"{u}}r Informatik},
year = {2019},
url = {https://doi.org/10.4230/LIPIcs.SNAPL.2019.6},
doi = {10.4230/LIPIcs.SNAPL.2019.6},
}
In order of relevance:
Matthias Felleisen (Northeastern)
Avik Chaudhuri (Facebook)
Andrew M. Kent (Galois Inc)
Panagiotis Vekris (Facebook)
Ranjit Jhala (UC San Diego)
Gavin Bierman (Oracle)
Catalin Hritcu (INRIA)
Jeremy Siek (Indiana) (co-author 2 years ago)
Michael Greenberg (Pomona College)
Jan Vitek (Northeastern)
@@ -21,7 +21,8 @@
\definecolor{darkblue}{rgb}{0,0.2,0.4}
%\definecolor{darkred}{rgb}{0.7,0.4,0.4}
\definecolor{darkred}{rgb}{0.8,0.2,0.2}
\ifwithcomments
\newcommand{\ignore}[1]{{\color{red}\textbf{DELETED:}#1}}