Commit ddf58338 authored by Giuseppe Castagna

more revisions

parent 4e306958
@@ -77,9 +77,15 @@ following two type constructors:\\[1.4mm]
%% \textbf{Types} & t & ::= & \record{\ell_1=t \ldots \ell_n=t}{t}\alt \Undef
%% \end{array}
%% \]
\rev{%%%
where $\ell$ ranges over an infinite set of labels $\Labels$ and $\Undef$
is a special singleton type whose only value is a constant
$\undefcst$ which is not in $\Domain$ (in this respect it is a constant akin
to $\Omega$): as a consequence, $\Undef$ and $\Any$ are distinct types,
the interpretation of the former being the singleton $\{\undefcst\}$ and
that of the latter the set of all other values.
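Writing $\Empty$ for the empty type and $\simeq$ for semantic
equivalence (notation we assume here), an immediate consequence is
that $\Undef \wedge \Any \simeq \Empty$: the constant $\undefcst$ is
not a value of the language.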
}%%%
The type
$\record{\ell_1=t_1 \ldots \ell_n=t_n}{t}$ is a \emph{quasi-constant
function} that maps every $\ell_i$ to the type $t_i$ and every other
$\ell \in \Labels$ to the type $t$ (all the $\ell_i$'s must be
......
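To fix intuitions, a quasi-constant function can be pictured as a
finite map plus a default. The sketch below (in TypeScript, purely
illustrative and not the paper's formalism; all names are ours) models
a record type and the lookup of a label:
\begin{verbatim}
// A record type as a quasi-constant function from labels to types:
// finitely many explicit bindings l_i -> t_i, plus a default type t
// for every other label.
type Ty = string;                          // stand-in for types t

interface RecordType {
  explicit: Map<string, Ty>;               // the l_i -> t_i part
  dflt: Ty;                                // the default t
}

function lookup(r: RecordType, label: string): Ty {
  return r.explicit.get(label) ?? r.dflt;  // t_i if present, else t
}

// A record type mapping "name" to string and every other label to
// Undef (i.e., any other field may be absent).
const person: RecordType = {
  explicit: new Map([["name", "string"]]),
  dflt: "Undef",
};
lookup(person, "name");  // "string"
lookup(person, "age");   // "Undef"
\end{verbatim}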
@@ -33,7 +33,13 @@ is \emph{not} of type \code{number} (and thus deduce from the type annotation th
\code{string}) in the ``else'' branch.
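The following sketch, in TypeScript (whose \code{typeof}-based
narrowing implements a form of occurrence typing; it is an
illustration, not the paper's calculus), shows the kind of deduction
just described:
\begin{verbatim}
// x is declared with the union type number | string
function f(x: number | string): number {
  if (typeof x === "number") {
    return x + 1;      // here x is narrowed to number
  } else {
    return x.length;   // here x is narrowed to string
  }
}
\end{verbatim}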
Occurrence typing was first defined and formally studied by
\citet{THF08} to statically type-check untyped Scheme
programs,\footnote{%
According to Sam Tobin-Hochstadt, the terminology \emph{occurrence
typing} was first used, in a simpler form, by~\citet{Komon05},
although he was not aware of that work at the time of
writing~\cite{THF08}.}
and later extended by \citet{THF10}
yielding the development of Typed Racket. From its inception,
occurrence typing was intimately tied to type systems with
set-theoretic types: unions, intersections, and negation of
......
@inproceedings{Komon05,
author = {Raghavan Komondoor and
Ganesan Ramalingam and
Satish Chandra and
John Field},
title = {Dependent Types for Program Understanding},
booktitle = {Tools and Algorithms for the Construction and Analysis of Systems,
11th International Conference, {TACAS} 2005},
series = {Lecture Notes in Computer Science},
volume = {3440},
pages = {157--173},
publisher = {Springer},
year = {2005},
url = {https://doi.org/10.1007/978-3-540-31980-1\_11},
doi = {10.1007/978-3-540-31980-1\_11},
}
@inproceedings{Gre19,
author = {Michael Greenberg},
editor = {Benjamin S. Lerner and
......
@@ -69,48 +69,49 @@ for recursive types.}
\rev{%%%
Concerning the first work, by~\citet{THF08}, it is interesting to
compare it with our work because the comparison shows two rather
different approaches to dealing with the property of type
preservation. \citet{THF08} define a first type system that does not
satisfy type preservation. The reason is that this first type
system checks all the branches of a type-case expression,
independently of whether they are selectable or not; this may
result in a well-typed expression reducing to an expression that
is not well-typed because it contains a type-case expression with
a branch that, due to the reduction, became both non-selectable
and ill-typed (see \cite[Section 3.3]{THF08}). To obviate this problem
they introduce a \emph{second} type system that extends the previous
one with some auxiliary typing rules that type type-case expressions
by skipping the typing of non-selectable branches. They use this
second type system only to prove type preservation and, thus, obtain
the soundness of their type system. In our work, instead, we prefer to
start directly with a system that satisfies type preservation. Our
system does not have the problem of the first system of~\cite{THF08}
thanks to the presence of the \Rule{Efq} rule, which we included for
that very purpose, that is, to skip non-selectable branches during
typing. The choice of one or the other approach is mostly a matter of
taste and, in this specific case, boils down to deciding whether some
typing problems must be signaled at compile time by an error or by a
warning. The approach of~\citet{THF08} ensures that every
subexpression of a program is well-typed and, if not, it generates a
type error. Our approach allows some subexpressions of a program to be
ill-typed, but only if they occur in dead branches of type-cases: in
that case any reasonable implementation would flag a warning to signal
the presence of the dead branches. The very same reasons that explain
the presence in our system of \Rule{Efq} explain why from the
beginning we included in our system the typing rule \Rule{Abs-} that
deduces negated arrow types: we wanted a system that satisfied type
preservation (albeit for a parallel reduction:
cf.\ \Appendix\ref{app:parallel}). We then defined an algorithmic
system that is not \emph{complete} with respect to the type system but
inherits its soundness from it. Of course, we could have proceeded
as~\citet{THF08} did: start directly with a type system corresponding
to the algorithm (i.e., omit the rule \Rule{Abs-}) and later extend
this system with the rule to infer negated arrows, the only purpose of
this extension being to prove type preservation. We preferred not to,
not only because we favor type-preserving systems, but also because in
this way we were able to characterize different subsystems that are
complete with respect to the algorithmic system, thus exploring
different language designs and arguing about their usefulness.
}%%%rev
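To give a concrete intuition of what \Rule{Efq} allows, consider the
following sketch in TypeScript rather than in our calculus (TypeScript
behaves analogously, narrowing the tested variable to its \code{never}
type in a branch made unreachable by the variable's declared type):
\begin{verbatim}
// x is a string, so the "then" branch is dead: there x is narrowed to
// never (string & number), hence it can be used at any type and the
// branch type-checks -- the analogue of skipping it via Efq.
function g(x: string): number {
  if (typeof x === "number") {
    return x;          // x : never here, assignable to number
  }
  return x.length;     // x : string
}
\end{verbatim}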
Another direction of research related to ours is that of semantic
......
@@ -109,23 +109,24 @@ there).
The approach of generalizing the type environment to prove
a more general set of propositions (here, about arbitrary expressions,
there, about expressions with a limited set of pure functions, but see
below) is also present in both.
>> Kim: check this
Similarly the need for an explosion
rule in order to circumvent difficulties in proofs of type soundness
is discussed in [22].
>> Kim: compare the rules added in Figure 6 of THF2008 and justified
>> in section 3.3 with the fact that we use ex falso quodlibet. We can
>> add that, like THF2008, we use a different system, since we have to
>> use type schemes (see Appendix A.3.2), but also a different
>> reduction, since we use parallel reductions
**DONE** we added a long discussion on this point in the related work
section when comparing with [22] (see lines: ??-??)
## Discussion of pure expressions
>> Beppe: say that our approach in some sense subsumes the two others. Add
>> a part of related work discussing pure expressions. Use (not verbatim)
>> the rebuttal of POPL. Thanks for the insight
Fundamentally, any system in which the type system reasons about the
@@ -165,9 +166,13 @@ seem unneeded:
schemes. Why not simply present the system that's actually
implemented, which as the paper says is all that is really needed.
>> Beppe: answer on the interest of the fully complete system (and
>> stress en passant that type schemes are needed to prove
>> type preservation)
**Answer**: we added a long discussion about it in the related work
section (lines ??-??) when comparing with [22] (TH-Felleisen
2008). In a nutshell, we prefer starting with a system that satisfies
type preservation, defining a sound but incomplete algorithm, and
exploring subsystems for which completeness holds, rather than (as in
[22]) starting with a system that does not satisfy type preservation
and then adding auxiliary rules to prove it.
- the treatment of interdependent tests. Is this needed for any real
@@ -213,10 +218,14 @@ test will make inference succeed.
understanding, TACAS 2005], although I did not know about that work
when I independently developed the idea and the name.
**Done** we added a footnote about it. Thanks.
* Is repeated re-checking of function bodies costly? Especially in the
nested-function case, this could be asymptotically expensive.
>> Kim: it is not costlier than CDuce, just one extra pass to "infer" the annotation
* On page 10, line 20, there's a reference to an explanation in
section 1.3 that does not seem to exist.
@@ -226,7 +235,6 @@ Indeed, it is not very clear what we referred to (in the specific case
we meant the need to deduce negated arrow types for type preservation). We
rewrote the sentence to be clearer.
>> Beppe: rewrite to say that in section 1.3 we sometimes need to deduce negated arrows
* The claim at the end of page 15 seems too strong. Some languages
@@ -241,7 +249,16 @@ rewrote the sentence to be more clear.
* on p17, Undef is chosen to be a constant not in 1. But 1 was
previously described to be \not{0}, which if 0 is empty means that
Undef must be in 1. Can this be explained further?
>> Beppe ... (add: constant not in D) page 8 (like \Omega)
**Answer:** By "a constant not in 1" we meant "a constant not in the
(interpretation of) 1" or, if you prefer, not in the domain D
(e.g., like \Omega). Indeed, according to Section 2.2, [[\not{0}]] =
D \ [[0]] = D. So [[0]] is empty, [[1]] = D, and since undef is not in
D, it is not in (the interpretation of) 1. We added an explanation
(see lines ???-???). In short, `Undef` is a special singleton type
whose interpretation contains only the constant `undef`, which is not
in D.
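In symbols (same notation as above, with {} for the empty set):

    [[1]] = [[\not{0}]] = D \ [[0]] = D \ {} = D;  undef not in D  =>  undef not in [[1]];  [[Undef]] = {undef}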
* In the definition of AbsInf+ on p21, could we exclude from $T$ the
elements \sigma where typechecking succeeds for $\sigmap\Uparrow$?
......