@@ -94,6 +94,9 @@ relationship is needed in this paper.

The thesis is available at https://pnwamk.github.io/docs/dissertation.pdf

%% Mickael: first draft and comparison with worra

### Formal connections to prior work

The paper does a good job comparing the capabilities of the presented

...

...

@@ -101,16 +104,23 @@ system to past work, but it would be nice to characterize the

technical connections as well. For example, the

proof system for the $\vdash^{Path}$ judgment is very

similar to the proof system used in [23] (Figures 4 & 7

there).

%% Kim: compare [23] = THF2010

The approach of generalizing the type environment to prove

a more general set of propositions (here, about arbitrary expressions,

there, about expressions with a limited set of pure functions, but see

below) is also present in both. Similarly the need for an explosion

rule in order to circumvent difficulties in proofs of type soundness

is discussed in [22].

%% Kim: compare the rules added in Figure 6 of THF2008 and justified in section 3.3 with

%% the fact that we use ex-falso quodlibet

## Discussion of pure expressions

%% Beppe: say that our approach in some sense subsumes the two others. Add

%% a part of related work discussing pure expressions. Use (not verbatim)

%% the POPL rebuttal

Fundamentally, any system in which the type system reasons about the

potential dynamic behavior of terms in the way done with occurrence

typing will need to decide _which_ terms the type system should take

...

...
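
To make the design question concrete, here is an illustrative TypeScript sketch (names and types are mine, not the paper's) of two kinds of terms a system could choose to track: a bare variable, and a field-selection path, which already forces reasoning about non-variable expressions.

```typescript
// Illustrative sketch (not from the paper): two kinds of testable terms.
type V = number | string;

// Easy case: the tested term is a plain variable.
function lenOrZero(v: V): number {
  if (typeof v === "string") return v.length; // v narrowed to string
  return 0;
}

// Harder case: the tested term is a path (a field selection), which
// already requires the system to track non-variable expressions.
interface Box { val: V }
function boxLen(b: Box): number {
  if (typeof b.val === "string") return b.val.length; // path b.val narrowed
  return 0;
}
```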

@@ -148,10 +158,15 @@ seem unneeded:

schemes. Why not simply present the system that's actually

implemented, which, as the paper says, is all that is really needed?

%% Beppe: answer on the interest of the fully complete system

- the treatment of interdependent tests. Is this needed for any real

example? I've looked at a lot of untyped programs that use type

tests and have never seen this pattern.

%% Mickael: you take care of this
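
For concreteness, a hypothetical sketch (TypeScript; not taken from the paper) of what an interdependent test looks like: a single condition narrows two variables at once, so the failing branch carries a disjunctive fact about both.

```typescript
// Hypothetical interdependent test: one condition constrains x and y
// together; in the else-branch the checker is left with a disjunction
// ("x is not a number OR y is not a number"), which most systems drop.
function sumOrLengths(x: number | string, y: number | string): number {
  if (typeof x === "number" && typeof y === "number") {
    return x + y;                                  // both narrowed to number
  }
  return String(x).length + String(y).length;      // fallback on the else-branch
}
```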

This paper makes a number of significant advances, which are somewhat

obscured by the complexity.

...

...

@@ -180,6 +195,8 @@ for `y` it will simply be `String \/ Number` (and the initial

typechecking will fail). As in `example14_alt` on page 27, a redundant

test will make inference succeed.

%% Yes indeed
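
The workaround can be paraphrased like this (a loose TypeScript analogue, not the paper's `example14_alt`): a test that is logically redundant at runtime still re-establishes the narrowed type where inference lost track of it.

```typescript
// Loose analogue of the "redundant test" workaround: the inner typeof
// check adds no information at runtime, but it makes the narrowing of y
// locally visible again, so checking/inference can succeed.
function describe(y: string | number): string {
  if (typeof y !== "number") {
    // Redundant: y can only be a string here, yet re-testing spells
    // that fact out for the checker.
    if (typeof y === "string") return "len " + y.length;
  }
  return "num " + String(y);
}
```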

## Smaller Comments

* "occurrence typing" both in simplistic form and with that name was

...

...

@@ -189,9 +206,13 @@ test will make inference succeed.

* Is repeated re-checking of function bodies costly? Especially in the

nested-function case, this could be asymptotically expensive.

%% Kim: it is not costlier than CDuce, just one extra pass to "infer" the annotation

* On page 10, line 20, there's a reference to an explanation in

section 1.3 that does not seem to exist.

%% Beppe: rewrite to say that in section 1.3 we sometimes need to deduce negated arrows

* The claim at the end of page 15 seems too strong. Some languages

reify instantiations of generic types at runtime, C# is a good

...

...

@@ -199,15 +220,18 @@ test will make inference succeed.

whether a value has a specific function type. That would of course

make the "counterintuitive result" happen but this seems not so

counterintuitive in a nominal type system.

%% Kim: Yes, indeed, in case of nominal typing you can test values (e.g., objects) with functional components

* on p17, Undef is chosen to be a constant not in 1. But 1 was

previously described to be \not{0}, which if 0 is empty means that

Undef must be in 1. Can this be explained further?

%% Beppe: ... (add constant not in D) page 8 (like \Omega)
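
Spelling out the reviewer's concern under the usual set-theoretic reading (writing $\mathcal{D}$ for the domain of interpretation; this is a reconstruction, not the paper's exact notation):

```latex
\mathbb{1} = \lnot\mathbb{0}, \qquad
\llbracket \mathbb{0} \rrbracket = \emptyset
\;\Longrightarrow\;
\llbracket \mathbb{1} \rrbracket
  = \mathcal{D} \setminus \llbracket \mathbb{0} \rrbracket
  = \mathcal{D}
```

so any constant interpreted inside $\mathcal{D}$ necessarily inhabits $\mathbb{1}$; escaping this requires interpreting `Undef` outside $\mathcal{D}$, in the style of $\Omega$.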

* In the definition of AbsInf+ on p21, could we exclude from $T$ the

elements \sigma where typechecking succeeds for $\sigmap\Uparrow$?

That seems like it would result in strictly more-precise types.

%% Beppe: TO BE DONE, but in general the fewer the arrows the less precise the type (but maybe it is not necessary; check)

...

...

@@ -231,7 +255,10 @@ The work comes with both the metatheory (that shows soundness of the system) and

1) Missing Related Work.

The authors provide a detailed comparison with the existing occurrence typing work, but barely mention related work on refinement types. There, type checking is also case sensitive, leading to similar problems (and related solutions) to the ones discussed in this work. For example,

- "Compositional reasoning and decidable checking for dependent contract types" by Knowles and Flanagan [1] address the problem of refined types appearing in arbitrary places, which is very similar to the first contribution of this work.

%% Kim

An alternative approach followed by refinement type systems is to ANF-transform the programs. A discussion is missing on how these two alternatives relate to the proposed solution. It appears that the proposed solution leads to non-terminating algorithmic checking, while the two alternatives used by refinement types preserve decidability.

%% ANF, say that it is a completely different work that we explore in a different work (not published yet ... which is why we did not comment on it)

- Intersection and refinement types have been extensively studied [e.g., in 2 - 4]. Even though the set-up and the goals of such systems are not identical to the ones presented here (e.g., they do not aim for inference), there are many similarities (e.g., [3] also presents a set-theoretic interpretation); thus a comparison is missing.
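
On the ANF alternative mentioned in the note above: a minimal sketch (TypeScript, names mine) of how A-normalizing turns a test on a compound expression into a test on a variable, which is all a variable-only refinement system needs.

```typescript
// Minimal sketch of the ANF alternative: instead of testing the compound
// expression f(x) directly, bind it to a variable first; the refinement
// then attaches to the variable t.
function f(x: number): number | string {
  return x > 0 ? x : "non-positive";
}
function anfStyle(x: number): number {
  const t = f(x);                        // administrative let-binding (ANF)
  if (typeof t === "number") return t;   // the test is now on a variable
  return t.length;                       // t narrowed to string here
}
```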

...

...

@@ -308,6 +335,8 @@ study the much more detailed type tests enabled by the formal framework, if

it's unlikely to be useful in practice. Is there a smaller/simpler formal

framework which preserves the usefulness of these ideas?

%% Beppe: see answer to reviewer 1

Another relevant limitation is that the basic idea of registering the

dynamically tested type of a non-variable expression (an application, `e1 e2`)

will not work in a language with mutable state (we can't assume that `e1 e2`

...

...

@@ -315,6 +344,8 @@ will yield the same result if invoked twice) --- and the connections to

gradual typing and dynamically typed languages seems to make at least some

amount of mutation unavoidable in practice.

%% Beppe: see answer on side effects
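
To illustrate the point about mutable state (a deliberately artificial TypeScript sketch, not from the paper): once `e1` has a side effect, two occurrences of the same application `e1 e2` need not produce values of the same type.

```typescript
// Why remembering the tested type of an application `e1 e2` is unsound
// under mutation: the same call can yield differently-typed results.
let flag = true;
function e1(_arg: unknown): number | string {
  flag = !flag;                          // side effect on each invocation
  return flag ? 42 : "not a number";
}
const first = e1(0);                     // flag: true -> false, returns a string
const second = e1(0);                    // flag: false -> true, returns 42
// A successful type test on `first` tells us nothing about `second`,
// even though both are syntactically the same expression e1(0).
```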

It is also worth noting that the paper contains many different elements, and

perhaps it could be made more readable by expressing the same elements in more

than one paper: There is a calculus, there is an implementation, there are

...

...

@@ -323,18 +354,26 @@ discussion of an extension with records, with (functional) field modification

support, and all of this amounts to several rather different language/calculus

designs.

%% SKIP

You could perhaps say that the type system is somewhat accidental: Whenever

something doesn't work, we'll fiddle with it. The outcome is a set of rules

with some extra bells and whistles, and look: Now we can type check one more

example! I guess this criticism is somewhat unfair, but it may still be useful

to consider how and to which degree it can be refuted.

%% BOH???? (no idea)

Finally, the implementation section makes me think that the flow analysis is

more powerful than the type-theoretic approach used in this paper (cf. [23],

example 14). Could your ideas be restated using flow analysis, and would that

yield a similar improvement of the typing/inference? (that is, would you still

be able to produce the function types which are intersections of arrow types?)

%% Beppe: one does not exclude the other ... see answer about the different approaches of reviewer 1

All in all, this is a very interesting paper, technically convincing,

providing results that are potentially quite useful, and improving on the

typing precision offered by earlier approaches. At the same time, it does

...

...

@@ -386,3 +425,4 @@ implementation). Then we have the related work parts, esp. relative to [23],

but also relative to TypeScript and Flow. Couldn't you organize this section

in a clearer way, perhaps splitting it into an implementation section and an

evaluation section and a discussion (about the relationship to other work)?