Commit d193f703 (parent eaf6d4de), authored by Giuseppe Castagna: First pass over
In this work we presented the core of our analysis of occurrence
typing, extended it to record types, and proposed a couple of novel
applications of the theory, namely the inference of
intersection types for functions and a static analysis to reduce the number of
casts inserted when compiling gradually-typed programs.
There is still a lot of work to do to fill the gap with real-world
programming languages. Some of it should be quite routine, such as the
encoding of specific language constructs (e.g., \code{isInt},
\code{typeof},...) and the handling of more complex
kinds of checks (e.g., generic Boolean expressions, multi-case
type checks), or even sophisticated type matching such as the one
performed by the language CDuce. Some other aspects will require more
work. For example, our analysis cannot handle the flow of information. In
particular, the result of a type test can flow only to the branches
but not outside the test. As a consequence, the current system cannot
type a let binding such as
let x = (y\(\in\)Int)?`yes:`no in (x\(\in\)`yes)?y+1:not(y)
which is clearly safe when $y:\Int\vee\Bool$. Nor can this example be solved by partial evaluation, since we do not handle nesting of tests in the condition: \code{( ((y\(\in\)Int)?`yes:`no)\(\in\)`yes )? y+1 : not(y)}.
Both are issues that the system by~\citet{THF10} can handle. We think it is possible
to reuse some of their ideas to perform an information flow analysis on top of
our system to remove these limitations.
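For intuition, the let binding above is indeed safe to execute. Here is a hypothetical transliteration into Python, with \code{isinstance} playing the role of the dynamic type test (a sketch for illustration only; we must explicitly exclude \code{bool} from the integer test, since in Python \code{bool} is a subclass of \code{int}, whereas the paper's \Int{} and \Bool{} are disjoint):

```python
def f(y):
    # let x = (y ∈ Int) ? `yes : `no in (x ∈ `yes) ? y + 1 : not(y)
    # Exclude bool from the "Int" test to model disjoint Int and Bool.
    x = 'yes' if isinstance(y, int) and not isinstance(y, bool) else 'no'
    return y + 1 if x == 'yes' else (not y)
```

Both branches are well-defined for any $y$ of type $\Int\vee\Bool$: \code{f(3)} yields \code{4}, and \code{f(True)} yields \code{False}.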
But the real challenges that lie ahead are the handling of side
effects and the addition of polymorphic types. Our analysis works in a
pure system, and extending it to cope with side effects is not
immediate. We plan to do so by defining an effect system or by
performing some information flow analysis, typically by enriching the
one we plan to develop to lift the limitations above; but our plans are
not more definite than that. For polymorphism, instead, we can easily adapt
the main idea of this work to the polymorphic setting. Indeed, the
main idea is to remove from the type of an expression all
the results of the expression that would make some test fail (or
succeed, if we are typing a negative branch). This is done by
applying an intersection to the type of the expression, so as to keep
only the values that may yield success (or failure) of the test. For
polymorphism the idea is the same, with the only difference that
besides applying an intersection we can also apply an
instantiation. The idea is to single out the two most general type
substitutions for which some test may succeed and fail, respectively, and apply these
substitutions to refine the types of the corresponding occurrences
in the ``then'' and ``else'' branches. Concretely, consider the test
$x_1x_2\in \tau$ where $\tau$ is a closed type and $x_1$, $x_2$ are
variables of type $x_1: s\to t$ and $x_2: u$ with $u\leq s$. For the
positive branch we first check whether there exists a type
substitution $\sigma$ such that $t\sigma\leq\neg\tau$. If no such
substitution exists, then for all possible assignments of the
polymorphic type variables of $s\to t$ the test may succeed; that is,
the success of the test does not depend on the particular instance of
$s\to t$ and, thus, it is not possible to pick a substitution to
refine the occurrence typing. If such a substitution exists, then
we find a type substitution $\sigma_\circ$ such that $\tau\leq
t\sigma_\circ$ and we refine the types of $x_1$ and $x_2$ for the
positive branch by applying $\sigma_\circ$ to their types. While the
idea is clear (see Appendix~\ref{app:roadmap} for a more detailed explanation),
the technical details are quite involved, especially when considering
functions typed by intersection types and/or when integrating gradual
typing. This deserves a whole strand of non-trivial research that we plan to
develop in the near future.
\begin{definition}[Type syntax]\label{def:types} The set of types is the set of terms $t$ coinductively produced by the following grammar
\textbf{Types} & t & ::= & b\alt t\to t\alt t\times t\alt t\vee t \alt \neg t \alt \Empty
and that satisfy the following conditions
\item (regularity) the term has a finite number of different sub-terms;
\item (contractivity) every infinite branch of a type contains an infinite number of occurrences of the
arrows or product type constructors.
We introduce the following abbreviations for types: $
t_1 \land t_2 \eqdef \neg (\neg t_1 \vee \neg t_2)$,
$t_1 \setminus t_2 \eqdef t_1 \wedge \neg t_2$, $\Any \eqdef \neg \Empty$.
We refer to $ b $ and $ \to $ as \emph{type constructors}
and to $ \lor $, $ \land $, $ \lnot $, and $ \setminus $
as \emph{type connectives}.
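To keep intuitions concrete, the connectives and abbreviations can be executed literally in a toy set-theoretic model (our own illustration, not part of the formal development): types are interpreted as subsets of a small finite universe, so that $\vee$ is union, $\neg$ is complement, and the abbreviation for $\wedge$ computes to plain intersection.

```python
# Toy set-theoretic interpretation of the type connectives.
ANY = frozenset(range(8))        # plays the role of Any (¬Empty)
EMPTY = frozenset()              # plays the role of Empty

def neg(t):           return ANY - t                      # ¬t
def vee(t1, t2):      return t1 | t2                      # t1 ∨ t2
def wedge(t1, t2):    return neg(vee(neg(t1), neg(t2)))   # t1 ∧ t2, via the abbreviation
def setminus(t1, t2): return wedge(t1, neg(t2))           # t1 \ t2

t1, t2 = frozenset({1, 2, 3}), frozenset({2, 3, 4})
assert wedge(t1, t2) == t1 & t2      # the De Morgan encoding is intersection
assert setminus(t1, t2) == t1 - t2   # and t1 \ t2 is set difference
```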
\subsubsection{Operators for type constructors}
Let $t$ be a functional type (i.e., $t\leq\Empty\to\Any$) then
\dom t & = & \max \{ u \alt t\leq u\to \Any\}
\apply t s & = &\min \{ u \alt t\leq s\to u\}
\worra t s & = &\min\{u\leq \dom t\alt t\circ(\dom t\setminus u)\leq \neg s\}
\subsubsection{Type schemes}
Given a value $v$, the set of types $t$ such that $v \in t$ has no smallest element in general.
Indeed, the standard derivation rule for $\lambda$-abstraction values is:
{t=(\wedge_{i\in I}\arrow {s_i} {t_i})\land (\wedge_{j\in J} \neg (\arrow {s'_j} {t'_j}))\\t\not\leq \Empty}
{\vdash \lambda^{\wedge_{i\in I}\arrow {s_i} {t_i}}x.e : t}
In the next sections we will need an algorithmic type system. In order
for it to be complete with respect to $\lambda$-abstractions, we need to introduce a
new syntactic category, called \textbf{type schemes}, whose elements denote sets of types.
\textbf{Type schemes} & \ts & ::= & t \alt \tsfun {\arrow t t ; \cdots ; \arrow t t} \alt \ts \tstimes \ts \alt \ts \tsor \ts \alt \tsempty
\begin{definition}[Interpretation of type schemes]
We define a function $\tsint {\_}$ that maps type schemes to sets of types.
\tsint t &=& \{s\alt t \leq s\}\\
\tsint {\tsfunone {t_i} {s_i}_{i=1..n}} &=& \{s\alt
\exists s_0 = \bigwedge_{i=1..n} \arrow {t_i} {s_i}
\land \bigwedge_{j=1..m} \neg (\arrow {t_j'} {s_j'}).\
\Empty \not\simeq s_0 \leq s \}\\
\tsint {\ts_1 \tstimes \ts_2} &=& \{s\alt \exists t_1 \in \tsint {\ts_1}\
\exists t_2 \in \tsint {\ts_2}.\ \pair {t_1} {t_2} \leq s\}\\
\tsint {\ts_1 \tsor \ts_2} &=& \{s\alt \exists t_1 \in \tsint {\ts_1}\
\exists t_2 \in \tsint {\ts_2}.\ {t_1} \vee {t_2} \leq s\}\\
\tsint \tsempty &=& \varnothing
Note that $\tsint \ts$ is closed under subsumption and intersection (straightforward induction).
We define a function $\tsrep {\_}$ that maps every non-empty type scheme into a type that is a \textit{representative} of the set of types denoted by the scheme.
\tsrep t &=& t\\
\tsrep {\tsfunone {t_i} {s_i}_{i\in I}} &=& \bigwedge_{i\in I} \arrow {t_i} {s_i}\\
\tsrep {\ts_1 \tstimes \ts_2} &=& \pair {\tsrep {\ts_1}} {\tsrep {\ts_2}}\\
\tsrep {\ts_1 \tsor \ts_2} &=& \tsrep {\ts_1} \vee \tsrep {\ts_2}\\
\tsrep \tsempty && \textit{undefined}
Note that $\tsrep \ts \in \tsint \ts$.
Let $\ts$ be a type scheme and $t$ a type. We can compute a type scheme, written $t \tsand \ts$, such that:
\[\tsint {t \tsand \ts} = \{s \alt \exists t' \in \tsint \ts.\ t \land t' \leq s \}\]
Let $\ts$ be a type scheme and $t$ a type. We can decide the assertion $t \in \tsint \ts$,
which we also write $\ts \leq t$.
\textbf{Expressions} & e & ::= & c\alt x \alt \lambda^{\bigwedge \arrow t t}x.e \alt \ite k t e e \alt e e\\
\textbf{Conditions} & k & ::= & c\alt x \alt \lambda^{\bigwedge \arrow t t}x.e \alt k k\\
\textbf{Values} & v & ::= & c \alt \lambda^{\bigwedge \arrow t t}x.e\\
Let $e$ be an expression and $\varpi\in\{0,1\}^*$ a \emph{path}; we denote by $\occ e\varpi$ the occurrence of $e$ reached by the path $\varpi$, that is
e&\epsilon & e\\
e_0e_1& i.\varpi & \occ{e_i}\varpi\qquad i=0,1\\
undefined otherwise
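The path function above can be sketched concretely. In this minimal model (names are ours, for illustration), applications are pairs and every other expression is an opaque leaf; a path is a tuple over $\{0,1\}$, and we return \code{None} where the definition is undefined:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class App:
    fun: object   # e0
    arg: object   # e1

def occ(e, path):
    """Return the occurrence of e reached by `path` (a tuple over {0,1}),
    or None where the paper's definition is undefined."""
    for i in path:
        if not isinstance(e, App):
            return None          # undefined otherwise
        e = e.fun if i == 0 else e.arg
    return e
```

For example, \code{occ(App('f', App('g', 'x')), (1, 0))} returns \code{'g'}: path $1.0$ first selects the argument, then its function part.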
A type environment $\Gamma$ is a mapping from occurrences (i.e., expressions) to types, up to alpha-renaming (i.e., $\lambda^tx.x$ and $\lambda^ty.y$ are mapped to the same type, if any).
It is necessary to map alpha-equivalent expressions to the same type in order for our type system to be invariant by alpha-renaming.
We write $\Gamma_1,\Gamma_2$ for the type environment obtained as the union of the two type environments, where $\Gamma_2$ takes priority (to be defined formally).
We suppose w.l.o.g.\ that all variables bound by $\lambda$-abstractions are distinct (otherwise we can alpha-rename the $\lambda$-abstractions).
\subsection{Declarative type system}
{ }
{ \Gamma \vdash e: \Gamma(e) }
{ e\in\dom\Gamma }
{ \Gamma \vdash e:t_1\\\Gamma \vdash e:t_2 }
{ \Gamma \vdash e: t_1 \wedge t_2 }
{ }
{ \Gamma \vdash e:t\\t\leq t' }
{ \Gamma \vdash e: t' }
{ }
{ }
{ \Gamma, (e:\Empty) \vdash e': t }
{ }
{ }
{\Gamma\vdash c:\basic{c}}
{ }
\Gamma \vdash e_1: \arrow {t_1}{t_2}\\
\Gamma \vdash e_2: t_1
{ \Gamma \vdash {e_1}{e_2}: t_2 }
{ }
{\Gamma,x:s_i\vdash e:t_i\\t\simeq \left(\bigwedge_{i\in I} \arrow {s_i} {t_i}\right)
\land \left(\bigwedge_{j\in J} \neg (\arrow {s'_j} {t'_j})\right)\\t\not\simeq\Empty}
\Gamma\vdash\lambda^{\wedge_{i\in I}\arrow {s_i} {t_i}}x.e:\textstyle t
{ }
% \Infer[If]
% {\Gamma\vdash e:t_0\\
% %t_0\not\leq \neg t \Rightarrow
% \Gamma \cvdash + e t e_1:t'\\
% %t_0\not\leq t \Rightarrow
% \Gamma \cvdash - e t e_2:t'}
% {\Gamma\vdash \ite {e} t {e_1}{e_2}: t'}
% { }
{\Gamma\vdash e:t_0\\
%t_0\not\leq \neg t \Rightarrow
\Gamma \evdash + e t \Gamma^+ \\ \Gamma^+ \vdash e_1:t'\\
%t_0\not\leq t \Rightarrow
\Gamma \evdash - e t \Gamma^- \\ \Gamma^- \vdash e_2:t'}
{\Gamma\vdash \ite {e} t {e_1}{e_2}: t'}
{ }
{\Gamma \vdash e:\pair{t_1}{t_2}}
{\Gamma \vdash \pi_i e:t_i}
{ }
{\Gamma \vdash e_1:t_1 \and \Gamma \vdash e_2:t_2}
{\Gamma \vdash (e_1,e_2):\pair {t_1} {t_2}}
{ }
\begin{center} \line(1,0){300} \end{center}
% \Infer[Base]
% { \Gamma \vdash e':t' }
% { \Gamma \cvdash p e t e':t' }
% { }
% \qquad
% \Infer[Path]
% { \pvdash \Gamma p e t \varpi:t_1 \\ \Gamma,(\occ e \varpi:t_1) \cvdash p e t e':t_2 }
% { \Gamma \cvdash p e t e':t_2 }
% { }
{ }
{ \Gamma \evdash p e t \Gamma }
{ }
{ \pvdash {\Gamma'} p e t \varpi:t' \\ \Gamma \evdash p e t \Gamma' }
{ \Gamma \evdash p e t \Gamma',(\occ e \varpi:t') }
{ }
\begin{center} \line(1,0){300} \end{center}
{ \pvdash \Gamma p e t \varpi:t_1 \\ \pvdash \Gamma p e t \varpi:t_2 }
{ \pvdash \Gamma p e t \varpi:t_1\land t_2 }
{ }
{ \pvdash \Gamma p e t \varpi:t_1 \\ t_1\leq t_2 }
{ \pvdash \Gamma p e t \varpi:t_2 }
{ }
{ \Gamma \vdash \occ e \varpi:t' }
{ \pvdash \Gamma p e t \varpi:t' }
{ }
{ }
{ \pvdash \Gamma + e t \epsilon:t }
{ }
{ }
{ \pvdash \Gamma - e t \epsilon:\neg t }
{ }
{ \pvdash \Gamma p e t \varpi.0:\arrow{t_1}{t_2} \\ \pvdash \Gamma p e t \varpi:t_2' \\ t_2\land t_2' \simeq \Empty }
{ \pvdash \Gamma p e t \varpi.1:\neg t_1 }
{ }
{ \pvdash \Gamma p e t \varpi.1:t_1 \\ \pvdash \Gamma p e t \varpi:t_2 }
{ \pvdash \Gamma p e t \varpi.0:\neg (\arrow {t_1} {\neg t_2}) }
{ }
{ \pvdash \Gamma p e t \varpi:\pair{t_1}{t_2} }
{ \pvdash \Gamma p e t \varpi.l:t_1 }
{ }
{ \pvdash \Gamma p e t \varpi:\pair{t_1}{t_2} }
{ \pvdash \Gamma p e t \varpi.r:t_2 }
{ }
{ \pvdash \Gamma p e t \varpi:t' }
{ \pvdash \Gamma p e t \varpi.f:\pair {t'} \Any }
{ }
{ \pvdash \Gamma p e t \varpi:t' }
{ \pvdash \Gamma p e t \varpi.s:\pair \Any {t'} }
{ }
\subsection{Algorithmic type system}
{ }
{ \Gamma, (e:\Empty) \vdash e': \Empty }
{ \text{With priority over other rules} }
{ \Gamma\setminus\{e\} \vdash e : \ts }
{ \Gamma \vdash e: \Gamma(e) \tsand \ts }
{ e\in\dom\Gamma}
{ }
{\Gamma\vdash c:\basic{c}}
{\Gamma,x:s_i\vdash e:\ts_i'\\ \ts_i'\leq t_i}
\Gamma\vdash\lambda^{\wedge_{i\in I}\arrow {s_i} {t_i}}x.e:\textstyle\tsfun {\arrow {s_i} {t_i}}_{i\in I}
{\lambda^{\wedge_{i\in I}\arrow {s_i} {t_i}}x.e\not\in\dom\Gamma}
\Gamma \vdash e_1: \ts_1\\
\Gamma \vdash e_2: \ts_2\\
\ts_1 \leq \arrow \Empty \Any\\
\ts_2 \leq \dom {\ts_1}
{ \Gamma \vdash {e_1}{e_2}: \ts_1 \circ \ts_2 }
{ {e_1}{e_2}\not\in\dom\Gamma}
{\Gamma\vdash e:\ts_0\\
% \left\{
% \begin{array}{ll} %\Gamma,
% \Refine + {e,t} \Gamma \vdash e_1 : \ts_1 & \text{ if } \ts_0 \not\leq \neg t\\
% \ts_1 = \Empty & \text{ otherwise}
% \end{array}\right.\\
% \left\{
% \begin{array}{ll} %\Gamma,
% \Refine - {e,t} \Gamma \vdash e_2 : \ts_2 & \text{ if } \ts_0 \not\leq t\\
% \ts_2 = \Empty & \text{ otherwise}
% \end{array}\right.
\Refine + {e,t} \Gamma \vdash e_1 : \ts_1\\
\Refine - {e,t} \Gamma \vdash e_2 : \ts_2}
{\Gamma\vdash \ite {e} t {e_1}{e_2}: \ts_1\tsor \ts_2}
%{\ite {e} t {e_1}{e_2}\not\in\dom\Gamma}
{\Gamma \vdash e:\ts\and \ts\leq\pair\Any\Any}
{\Gamma \vdash \pi_i e:\bpi_{\mathbf{i}}(\ts)}
{\pi_i e\not\in\dom\Gamma}
{\Gamma \vdash e_1:\ts_1 \and \Gamma \vdash e_2:\ts_2}
{\Gamma \vdash (e_1,e_2):{\ts_1}\tstimes{\ts_2}}%\pair{t_1}{t_2}}
\typep+\epsilon{\Gamma,e,t} & = & t\\
\typep-\epsilon{\Gamma,e,t} & = & \neg t\\
\typep{p}{\varpi.0}{\Gamma,e,t} & = & \neg(\arrow{\Gp p{\Gamma,e,t}{(\varpi.1)}}{\neg \Gp p {\Gamma,e,t} (\varpi)})\\
\typep{p}{\varpi.1}{\Gamma,e,t} & = & \worra{\tsrep {\tyof{\occ e{\varpi.0}}\Gamma}}{\Gp p {\Gamma,e,t} (\varpi)}\\
\typep{p}{\varpi.l}{\Gamma,e,t} & = & \bpl{\Gp p {\Gamma,e,t} (\varpi)}\\
\typep{p}{\varpi.r}{\Gamma,e,t} & = & \bpr{\Gp p {\Gamma,e,t} (\varpi)}\\
\typep{p}{\varpi.f}{\Gamma,e,t} & = & \pair{\Gp p {\Gamma,e,t} (\varpi)}\Any\\
\typep{p}{\varpi.s}{\Gamma,e,t} & = & \pair\Any{\Gp p {\Gamma,e,t} (\varpi)}\\ \\
\Gp p {\Gamma,e,t} (\varpi) & = & \typep p \varpi {\Gamma,e,t} \land \tsrep {\tyof {\occ e \varpi} \Gamma}\\
%\underbrace{\land \tyof {\occ e \varpi} {\Gamma\setminus\{\occ e \varpi\}}}_{\text{if $\occ e \varpi$ is not a variable}}}\\
&(\RefineStep p {e,t} (\Gamma))(e') =
%\tyof {e'} \Gamma \tsand
\bigwedge_{\{\varpi \alt \occ e \varpi \equiv e'\}}
\Gp p {\Gamma,e,t} (\varpi) & \text{if } \exists \varpi.\ \occ e \varpi \equiv e' \\
\Gamma(e') & \text{otherwise, if } e' \in \dom \Gamma\\
\text{undefined} & \text{otherwise}
&\Refine p {e,t} \Gamma=\fixpoint_\Gamma (\RefineStep p {e,t})\qquad \text{(for the order $\leq$ on environments, defined in the proofs section)}
All the functions above are defined if and only if all their subexpressions are (e.g., $\occ e{\varpi.i}$ must be defined).
The notation $\tyof{o}{\Gamma}$ denotes the type that can be deduced for the occurrence $o$ under the type environment $\Gamma$.
That is, $\tyof{o}{\Gamma}=\ts$ if and only if $\Gamma\vdash o:\ts$ can be deduced by the typing rules.
\footnote{Note that the definition is well-founded.
This can be seen by analyzing the rule \Rule{If}: the definitions of $\Refine + {e,t} \Gamma$ and $\Refine - {e,t} \Gamma$ use
$\tyof{\occ e{\varpi}}\Gamma$, and this is defined for all $\varpi$ since the first premise of \Rule{If} states that
$\Gamma\vdash e:\ts_0$ (and this is possible only if we were able to deduce, under the hypotheses $\Gamma$, the type of every occurrence of $e$).}
The reason for the definition of $\RefineStep p {e,t}$ is that the same subterm of $e$
may occur several times in $e$ and therefore we can collect for every
occurrence a different type constraint. Since all the constraints must
hold, we take their intersection. For instance, if $f$ is a function
of type $(s_1\times s_2\to t)\wedge( s_3\to\neg t)$ and we test
whether $f(x,x)$ is of type $t$ or not, then the test succeeds only if $(x,x)$ is of
type $s_1\times s_2$, that is, only if $x$ has both type $s_1$ and type
$s_2$, and thus their intersection $s_1{\wedge}s_2$.\\
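This example can be spelled out in the notation of the algorithmic system (a sketch: we assume for simplicity $s_3\wedge(s_1\times s_2)\simeq\Empty$ and elide the intersections with the statically deduced types of the occurrences):
\[
\begin{array}{lcl}
\typep{+}{1}{\Gamma,f(x,x),t} & = & \worra{(s_1\times s_2\to t)\wedge(s_3\to\neg t)}{t} \;=\; (s_1\times s_2)\setminus s_3 \;\simeq\; s_1\times s_2\\[1mm]
\typep{+}{1.l}{\Gamma,f(x,x),t} & = & s_1 \qquad\qquad \typep{+}{1.r}{\Gamma,f(x,x),t} \;=\; s_2\\[1mm]
(\RefineStep{+}{f(x,x),t}(\Gamma))(x) & = & s_1\wedge s_2
\end{array}
\]
The two occurrences of $x$ (at paths $1.l$ and $1.r$) each contribute one constraint, and $\RefineStep{}{}{}$ intersects them.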
We now define how to compute $t\circ s$ and $\worra t s$. Every functional type $t$ can be decomposed as
\[ t \simeq \bigvee_{i\in I}\left(\bigwedge_{p\in P_i}(s_p\to t_p)\bigwedge_{n\in N_i}\neg(s_n'\to t_n')\right) \]
with \[\forall i\in I.\ \bigwedge_{p\in P_i}(s_p\to t_p)\bigwedge_{n\in N_i}\neg(s_n'\to t_n') \not\simeq \Empty\]
\dom{t} & = & \bigwedge_{i\in I}\bigvee_{p\in P_i}s_p\\[4mm]
t\circ s & = & \bigvee_{i\in I}\left(\bigvee_{\{Q\subsetneq P_i\alt s\not\leq\bigvee_{q\in Q}s_q\}}\left(\bigwedge_{p\in P_i\setminus Q}t_p\right)\right)\hspace*{1cm}\makebox[0cm][l]{(for $s\leq\dom{t}$)}\\[4mm]
\worra t s & = & \dom t \wedge\bigvee_{i\in I}\left(\bigwedge_{\{P\subset P_i\alt s\leq \bigvee_{p \in P} \neg t_p\}}\left(\bigvee_{p \in P} \neg s_p\right) \right)
\beppe{Explain, especially the last one}
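For a single summand ($|I|=1$), the formula for $t\circ s$ above can be executed directly in a toy model where base types are finite sets of values, so that $\vee$ is union and $\wedge$ is intersection (a sketch under that simplifying assumption; names are ours):

```python
from functools import reduce
from itertools import chain, combinations

def proper_subsets(P):
    # all Q ⊊ P, as tuples of indices
    return chain.from_iterable(combinations(P, r) for r in range(len(P)))

def dom(arrows):
    # dom(∧_p s_p→t_p) = ∨_p s_p
    return frozenset().union(*(s for s, _ in arrows))

def apply(arrows, s):
    # t∘s = ∨ over Q ⊊ P with s ⊄ ∨_{q∈Q} s_q of ∧_{p∈P∖Q} t_p
    # (assumes s ≤ dom(arrows); types modeled as finite sets of values)
    P = list(range(len(arrows)))
    result = frozenset()
    for Q in proper_subsets(P):
        dom_Q = frozenset().union(*(arrows[q][0] for q in Q))
        if not s <= dom_Q:                          # s ⊄ ∨_{q∈Q} s_q
            rest = [arrows[p][1] for p in P if p not in Q]
            result |= reduce(lambda a, b: a & b, rest)
    return result

# f : (Int1 → A) ∧ (Int2 → B), base types modeled as value sets
f = [(frozenset({1}), frozenset({'a'})),
     (frozenset({2}), frozenset({'b'}))]
assert dom(f) == frozenset({1, 2})
assert apply(f, frozenset({1})) == frozenset({'a'})         # precise result
assert apply(f, frozenset({1, 2})) == frozenset({'a', 'b'})
```

Note how the quantification over the sets $Q$ makes the application precise: an argument known to be in the first domain only yields the first codomain, not the union of both.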
volume = {35(9)},
series = {SIGPLAN Notices},
year = "2000",
url = ""
AUTHOR = {V. Benzaken and G. Castagna and A. Frisch},
TITLE = {{CD}uce: an {XML}-Centric General-Purpose Language},
BOOKTITLE = {ICFP~'03, 8th ACM International Conference on Functional Programming},
url = {},
PAGES ={51--63},
ADDRESS = {Uppsala, Sweden},
DMI-CATEGORY = {intc},
author = {A. Frisch},
title = {Th\'eorie, conception et r\'ealisation d'un langage de programmation adapt\'e \`a {XML}.},
school = {Universit\'e Paris 7 Denis Diderot},
year = {2004}
year = {2004},
year = {2008},
issn = {0004-5411},
pages = {19:1--19:64},
url = {},
publisher = {{ACM}},
SLIDES = polydeuces-slides.pdf,
author = {Castagna, Giuseppe and Lanvin, Victor},
title = {Gradual Typing with Union and Intersection Types},
journal = {Proc. ACM Program. Lang.},
issue_date = {September 2017},
volume = {1},
number = {ICFP},
month = aug,
year = {2017},
issn = {2475-1421},
pages = {41:1--41:28},
articleno = {41},
numpages = {28},
url = {},
doi = {10.1145/3110285},
acmid = {3110285},
publisher = {ACM},
AUTHOR = {Giuseppe Castagna and Victor Lanvin and Tommaso Petrucciani and Jeremy G.~Siek},
TITLE = {Gradual Typing: a New Perspective},
JOURNAL = {Proc.\ ACM Program.\ Lang.},
VOLUME = {3, Article 16},
NUMBER = {POPL\,'19 46th ACM Symposium on Principles of Programming Languages},
url = {},
YEAR = {2019},
\section{Related work}
\subsection{A roadmap to polymorphic types}
TODO: update rules (add Empty rule, etc.)
Extending our work to the case of a polymorphic language is far from
trivial. Let us go back to our typical example
expression~\eqref{typical} of the introduction:
we have seen that occurrence typing for $x_1$ and $x_2$ was possible
only in very specific cases which depended on the form of the type of
$x_1$: $(1)$ the type of $x_2$ may be specialized only if the type of $x_1$
is an intersection of arrows and $(2)$ the type of $x_1$ may be specialized
only if the type of $x_1$ is a union of arrows. With polymorphic types
the first assertion is, strictly speaking, no longer true. The
simplest possible example to show this is when $x_1$ is of type $\alpha\to\alpha$
and $x_2$ is of type $\alpha$. Then it is clear that we can assume
that $x_2$ has type $t$ when typing $e_1$ and that it has type $\neg
t$ when typing $e_2$.
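In terms of the type substitutions evoked in the conclusion, this deduction can be sketched as follows (our notation): for the ``then'' branch we instantiate the type of $x_1$ by $\sigma_+=\{\alpha\mapsto t\}$, and for the ``else'' branch by $\sigma_-=\{\alpha\mapsto\neg t\}$, yielding
\[
x_1: (\alpha\to\alpha)\sigma_+ = t\to t,\quad x_2: \alpha\sigma_+ = t
\qquad\text{and}\qquad
x_1: (\alpha\to\alpha)\sigma_- = \neg t\to\neg t,\quad x_2: \alpha\sigma_- = \neg t .
\]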
The deduction becomes much more difficult when one adds subtyping and
set-theoretic types. Let us consider a couple more examples to
illustrate the key cases.
Take again the expression in~\eqref{typagain} and imagine that $t$ is
\Int{} and that $x_1$ has type $\alpha\to(\alpha\vee\Bool)$.\footnote{A non-trivial example of an expression
of this type is the function $\lambda^{\alpha\to
(\alpha\vee\Bool)}x. \ite e u x \texttt{true}$.}
We suppose the expression $x_1x_2$ to be well-typed and
therefore that $x_2$ is typed by a subtype of $\alpha$, say,
$\alpha\wedge t_\circ$. The case for the ``then'' is not very
different from the previous one when $x_1$ had type $\alpha\to\alpha$:
the application $x_1x_2$ has type $\alpha\vee\Bool$ so if the test $x_1x_2\in\Int$ succeeds, then it
is because the value yielded by the application is an integer and
this integer must come from the $\alpha$ summand of the union. Since
we do not know exactly which integer we may obtain, we include all of them
in $\alpha$, which yields \Int{} as the best possible approximation for
$\alpha$. So when typing $e_1$ we can safely assume that $x_2$ has
type \Int{} ---or, more precisely, $\Int\wedge t_\circ$, since we use the static type information about $x_2$---. The case for