Commit 3ccd510e authored by Giuseppe Castagna's avatar Giuseppe Castagna

finished presentation of iteration

parent afe52f23
@@ -718,7 +718,7 @@
hypotheses so they are contravariant: the larger the type the better
the hypothesis). Recall that in Section~\ref{sec:challenges} we said
that we want our analysis to be able to capture all the information
available from nested checks. If we gave up such a kind of precision
then the definition of $\Refinef$ would be straightforward: it must map
each subexpression of $e$ to the intersection of the types deduced by
$\vdashp$ (\emph{ie}, by $\env{}{}$) for each of its occurrences. That
is, for each expression $e'$ occurring in $e$, $\Refine {e,t}\Gamma$
@@ -733,41 +733,51 @@
In order to capture most of the type information from nested queries
the rule \Rule{Path} allows the deduction of the type of some
occurrence $\varpi$ to use a type environment $\Gamma'$ that may
contain information about some suboccurrences of $\varpi$. On the
In the algorithm this corresponds to applying the $\Refinef$ defined
above to a result of $\Refinef$, and so on. Therefore, ideally our
algorithm should compute the type environment as a fixpoint of the
function $X\mapsto\Refine{e,t}{X}$. Unfortunately, an iteration of $\Refinef$ may
not converge. As an example consider the (dumb) expression $\tcase
{x_1x_2}{\Any}{e_1}$: if $x_1:\Any\to\Any$, then every iteration of
$\Refinef$ yields for $x_1$ a type strictly more precise than the
previous iteration.
The solution we adopt here is to bound the number of iterations to some number $n_o$.
From a formal point of view, this means giving up the completeness of the algorithm: $\Refinef$ will be complete (\emph{ie}, it will find all solutions) for deductions of $\Gamma \evdash e t \Gamma'$ of depth at most $n_o$. This is obtained by the following definition of $\Refinef$:
\[
\begin{array}{rcl}
\Refinef_{e,t} & \eqdef & (\RefineStep{e,t})^{n_o}\\
\text{where }\RefineStep {e,t}(\Gamma)(e') &=& \left\{\begin{array}{ll}
%\tyof {e'} \Gamma \tsand
\bigwedge_{\{\varpi \alt \occ e \varpi \equiv e'\}}
\env {\Gamma,e,t} (\varpi) & \text{if } \exists \varpi.\ \occ e \varpi \equiv e' \\
\Gamma(e') & \text{otherwise}
\end{array}\right.
\end{array}
\]
Note in particular that $\Refine{e,t}\Gamma$ extends $\Gamma$ with hypotheses on the expressions occurring in $e$, since
$\dom{\Refine{e,t}\Gamma} = \dom{\RefineStep {e,t}(\Gamma)} = \dom{\Gamma} \cup \{e' \alt \exists \varpi.\ \occ e \varpi \equiv e'\}$.
In other terms, we try to find a fixpoint of $\RefineStep{e,t}$, but we
bound our search to $n_o$ iterations. Since $\RefineStep {e,t}$ is
monotone (with respect to the pointwise order on type environments
induced by subtyping), every iteration yields a more precise solution.
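The bounded iteration can be sketched as follows. This is an illustrative model only, not the paper's implementation: type environments are mocked as plain dictionaries, the names are hypothetical, and the divergent $\tcase{x_1x_2}{\Any}{e_1}$ example is mimicked by a refinement step that always yields a strictly smaller interval, so that no true fixpoint exists.

```python
# Sketch of Refine_{e,t} = (RefineStep_{e,t})^{n_o}: apply a step function
# at most n_o times, stopping early if a genuine fixpoint is reached.
# Environments are mocked as dicts; all names here are hypothetical.

def bounded_fixpoint(step, env, n_o):
    """Iterate `step` at most n_o times, returning early on a fixpoint."""
    for _ in range(n_o):
        nxt = step(env)
        if nxt == env:          # fixpoint: further iterations change nothing
            return env
        env = nxt
    return env                  # bound hit: complete only up to depth n_o

# A step mimicking the divergent example: every round produces a strictly
# more precise (smaller-interval) hypothesis for x1, so iteration would
# never stabilize; the n_o bound guarantees termination anyway.
def shrinking_step(env):
    lo, hi = env["x1"]
    return {"x1": (lo, (lo + hi) / 2.0)}

print(bounded_fixpoint(shrinking_step, {"x1": (0.0, 1.0)}, n_o=5))
# -> {'x1': (0.0, 0.03125)}
```

The early-exit test is what makes the bound harmless in the common case: when the step is idempotent after a few rounds (as it is for realistic programs), the result coincides with the true fixpoint.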
While this is unsatisfactory from a formal point of view, in practice
the problem is a very mild one. Divergence may happen only when
refining the type of a function in an application: not only is such a
refinement meaningful only when the function is typed by a union
type (which is quite rare in practice)\footnote{The only impact of
adding a negated arrow type to the type of a function is when we
test whether the function has a given arrow type: in practice this
never happens, since programming languages test whether a value
\emph{is} a function, rather than testing the type of a given function.},
but also the expression that causes the divergence must be built in
quite an ad hoc way, which makes divergence even more unlikely: setting
$n_o$ to twice the depth of the syntax tree of the outermost type case
should capture all realistic cases \beppe{can we give an estimate
based on benchmarking the prototype?}
@@ -118,7 +118,8 @@
\newcommand{\Odd} {\textsf{Odd}}
\newcommand{\subst}[2]{\{#1 \mapsto #2\}}
\newcommand{\Refinef}{\textsf{Refine}}
\newcommand{\Refine}[2]{\ensuremath{\Refinef_{#1}(#2)}}
\newcommand{\RefineStep}[1]{\textsf{RefineStep}_{#1}}
\newcommand{\constr}[2]{\textsf{Constr}_{#2}(#1)}