Reviewers' comments:

Handling Editor's Comments:

We have now received all three reviews for your submission. Overall, the reviewers see a lot of value in your submission, but they also raise several issues that should be addressed in a revision of the paper. In particular, please expand the discussion of related work to cover Andrew Kent's thesis work as well as related work on refinement types. Moreover, please add an explicit discussion of the limitations and practical applications of the proposed system. In addition to those specific requests, please also take the other comments into account when you revise your submission.


Reviewer #1:

# Summary

This paper shows how existing work on set-theoretic types with complete semantic subtyping can be re-purposed to support _occurrence typing_, a typing discipline in which information from runtime type tests is used to refine the types of variables. Notably, existing systems that feature occurrence typing also typically feature set-theoretic type connectives, so combining them makes sense.

Fundamentally, the paper presents the following technical innovations:

1. A proof system for refining the types of expressions $\vdash_{Path}$
2. A way to compute the refined type of a function argument given information about the type of the function result in the presence of full set-theoretic types.
3. An approach to inferring intersection types for functions by using the fact that occurrence typing may refine the type of the formal parameter.
4. An integration of (3) with gradual typing which among other things allows static removal of some casts.
Basically, the \Uparrow operation of Pierce & Turner's Local Type Inference paper is used to replace gradual types with non-gradual types, and then (3) is applied. There is extensive metatheory for the first two contributions; less for the others. There is also a simple implementation of the calculus with OCaml-style syntax.

# Evaluation

I think it's not possible for me to write this review both accurately and blindly, so: this is by Sam Tobin-Hochstadt, author of Typed Racket (and several of the citations in the bibliography). This paper significantly advances the state of the art in occurrence typing. I have reservations about parts of it, and recommend changes, but overall I like this paper a lot.

## Positive Comments

Before describing changes I recommend, I'll start with the things I like about this work.

1. In many ways, this is a big simplification of prior work on occurrence typing, which can with effort accomplish many of the same goals.
2. Occurrence typing and set-theoretic types belong together. I've said many times that occurrence typing is the elimination rule for union types, and this work demonstrates their close connection. Furthermore, types for existing untyped languages need both occurrence typing and set-theoretic types, and thus they should work together.
3. The work here is done to a high standard of precision, in particular in the metatheory, and in a discipline where we can rarely re-use theorems, does a nice job of extending prior work while also keeping the prior results.
4. The idea about inference of intersections using different typings of the formal parameter is very simple and very clever, and is likely to be useful in other contexts as well. It would be immediately applicable to Typed Racket, for example.

## Recommendations for revision

### Relation to Andrew Kent's thesis

Andrew Kent's 2019 PhD thesis is highly related to the present work.
In particular, chapter 5 of that thesis is entitled "A set-theoretic foundation for occurrence typing", and figure 5.2 presents a definition for an inversion operator closely related to the "worra" operator in this paper. Many other parts of the work differ, but detailed discussion of the relationship is needed in this paper. The thesis is available at https://pnwamk.github.io/docs/dissertation.pdf

>> Mickael: first draft and comparison with worra

**ANSWER:**

A. Kent's approach is based on control flow. As soon as a variable is defined, it computes and remembers under which constraints this variable can be in True or ~True (Beppe: isn't it rather False and ¬False?). Typecases can only be done on variables (Beppe: really???), and this information is used to refine the context of the then and else branches. A new idea of A. Kent's thesis compared to previous works on occurrence typing is the use of semantic types to store these constraints, and the use of an application inversion function to compute them. We also use these ideas in our paper.
(Beppe: where do we use it other than for False and ¬False?) (Beppe: with is_int (f x) we can learn nothing about x)

However, our approach is not based on control flow. We do not store any additional information about variables except their types. Instead, our typecases can be applied to arbitrary expressions e, and only at that moment do we perform a *backward analysis* to refine the type of the subexpressions composing e. One advantage of our method is that we can test any type, whereas A.
Kent's system can only test whether an expression is True or not (because that is the only information that has been stored). Although some built-in functions such as is_int can be introduced in order to test whether a variable has type int or not, the system will not be able to refine the subexpressions composing x: if x is defined as f y, no refinement can be made about f or y when testing whether is_int x is True or not. One drawback of our method, however, is the lack of control-flow analysis. In particular, it forces us to inline the let bindings of an expression before typing it.

>> Should we put some of this answer in the paper?

*Concerning the application inversion function*

The application inversion function from A. Kent's thesis is supposed to compute exactly the same type as our *worra* operator. However, we believe that there is a mistake (typo?) in its algorithmic definition (p108, fig 5.2). Indeed, the formula for \tau_a seems incorrect: we think the first union should be an intersection. Here is an example for which the current definition seems unsound:

```
inv (((True->False) /\ (False->True)) \/ ((True->True) /\ (False->False)), True)
  = Bool \ (True\/False)
  = Empty
```

However, a function with type ((True->False) /\ (False->True)) \/ ((True->True) /\ (False->False)) could definitely give a result in True. The correct result should be Bool. If we change this union into an intersection, it becomes equivalent to our algorithmic definition of *worra* (p13) with a simple application of De Morgan.
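The unsoundness claim above can be sanity-checked in a finite model: interpret each boolean→boolean arrow type as the set of concrete functions it denotes, and compute directly which arguments may produce a result in the target type. The sketch below uses our own naive set-of-functions encoding (the function `worra` here is a toy for this one example, not the paper's algorithm):

```python
# A finite toy model of the inversion example: Bool -> Bool function
# types are modelled as sets of concrete functions, and we compute
# which arguments *may* produce a result in a target type.

from itertools import product

TRUE, FALSE = True, False
BOOL = {TRUE, FALSE}

# All four functions from Bool to Bool, each encoded as a dict.
ALL_FUNS = [dict(zip((TRUE, FALSE), outs)) for outs in product((TRUE, FALSE), repeat=2)]

def in_arrow(f, dom, cod):
    """f : dom -> cod iff f maps every element of dom into cod."""
    return all(f[x] in cod for x in dom)

def worra(funs, target):
    """Arguments on which some function in `funs` may land in `target`."""
    return {x for f in funs for x in BOOL if f[x] in target}

# ((True->False) /\ (False->True)) \/ ((True->True) /\ (False->False)):
# exactly the negation function and the identity function.
neg_or_id = [f for f in ALL_FUNS
             if (in_arrow(f, {TRUE}, {FALSE}) and in_arrow(f, {FALSE}, {TRUE}))
             or (in_arrow(f, {TRUE}, {TRUE}) and in_arrow(f, {FALSE}, {FALSE}))]

# Negation returns True on False, identity returns True on True,
# so the inversion should be Bool, not Empty.
print(worra(neg_or_id, {TRUE}))  # -> {False, True}
```

In this model the inversion of the example type against True is the whole of Bool, agreeing with the answer above and disagreeing with the Empty computed by the formula of fig 5.2 as printed.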
**(END OF ANSWER)**

### Formal connections to prior work

The paper does a good job comparing the capabilities of the presented system to past work, but it would be nice to characterize the technical connections as well. For example, the proof system for the $\vdash^{Path}$ judgment is very similar to the proof system used in [23] (Figures 4 & 7 there).

>> Kim: Compare [23]=THF2010

The approach of generalizing the type environment to prove a more general set of propositions (here, about arbitrary expressions; there, about expressions with a limited set of pure functions, but see below) is also present in both.

>> Kim: check this

Similarly, the need for an explosion rule in order to circumvent difficulties in proofs of type soundness is discussed in [22].

**DONE** we added a long discussion on this point in the related work section when comparing with [22] (see lines: ??-??)

## Discussion of pure expressions

>> Beppe: say that our approach in some sense subsumes the two others. Add a part of related work discussing pure expressions. Use (not verbatim) the rebuttal of POPL.
Thanks for the insight.

Fundamentally, any system in which the type system reasons about the potential dynamic behavior of terms in the way done with occurrence typing will need to decide _which_ terms the type system should take note of, and what equations it will use over those terms. For example, if we have the expression (f x) \in (1, 1), should that refine the type of (g x)? Possible questions to ask here are: is f pure? Is f syntactically equal to g? Can we prove that f and g are equivalent?

This paper takes the following approach: all expressions are considered, and the equations considered are simply syntactic equality. The paper also suggests that considering all expressions is a significant advance in expressiveness. However, as the paper makes clear, this is only possible since all terms in the language are pure, and in an impure system some limits will be required. In contrast, Flow has a complex effect system to determine pure expressions, but also reasons only about syntactic equality. Yet differently, Typed Racket reasons about a limited set of pure operations (all data structure accessors) but reflects those in the type system so that the semantics of the path considered are reflected in the type.

These are all reasonable choices, but I think a clearer presentation of the particular point in the design space chosen here would make the paper clearer.

## Complexity

The system presented here is complicated, but it's not clear that it needs to be nearly so complex. The following sources of complexity seem unneeded:

- the (unimplemented) fully complete system in terms of type schemes.
Why not simply present the system that's actually implemented, which as the paper says is all that is really needed?

**Answer**: we added a long discussion about it in the related work section (lines ??-??) when comparing with [22] (TH-Felleisen 2008). In a nutshell, we prefer to start with a system that satisfies type preservation, define a sound but incomplete algorithm, and explore subsystems for which completeness holds, rather than (as in [22]) start with a system that does not satisfy type preservation and then add auxiliary rules to prove type preservation.

- the treatment of interdependent tests. Is this needed for any real example? I've looked at a lot of untyped programs that use type tests and have never seen this pattern.

>> Mickael: you take care of this one

**ANSWER:**

We agree with the reviewer that interdependent tests do not seem to appear in real examples. The example we give on page 6 to justify the need for interdependent tests is specifically built for this purpose, and we did not find this pattern in the real examples we studied. Still, from a theoretical point of view, we think that it is important to note the existence of interdependent tests. It helps to understand why the parameter n_0 introduced in the algorithmic type system is required for completeness. In a sense, it is similar to the use of type schemes in our algorithmic type system: it makes our algorithmic type system more complex but also more complete with respect to the declarative one.
Then, once these formal results have been established and the roles of type schemes and n_0 are understood, we can choose whether or not to include them in a practical implementation depending on our needs. The remark of the reviewer seems to imply that a value of n_0 = 1 is enough in practice, and we agree with this.

>> Maybe we should add this remark in the paper? We already say that choosing n_0 = 2*depth seems enough, but we could also say that n_0 = 1 is enough in most cases.

**(END OF ANSWER)**

This paper makes a number of significant advances, which are somewhat obscured by the complexity.

## A note on inference

Note that the inference approach also suffers when handling multi-argument functions with only one test, just as in the discussion of Example 14 from [23]. Consider:

```
function (x, y) {
  if (typeof x === "string") {
    return x.append(y); // using a method to be clear about string ops
  } else {
    return x + y;
  }
}
```

If we annotate this function with the type (String, String) -> String /\ (Number, Number) -> Number it will obviously typecheck. However, if we annotate the bound variables both with the type String \/ Number, then the set of types for x will be String, Number, but for y it will simply be String \/ Number (and the initial typechecking will fail). As in example14_alt on page 27, a redundant test will make inference succeed.
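The failure mode described here can be rendered concretely: the candidate splits tried for a parameter come from the typecases that mention it, so a parameter that is never tested keeps its whole declared union. The toy sketch below is our own illustrative rendering of that idea (names and representation are ours, not the paper's algorithm):

```python
# Candidate type splits for a parameter, derived from the typecases
# in which it occurs; an untested parameter is never refined.

def candidate_splits(declared, tested_types):
    """Split a declared union by the types a parameter is tested against."""
    if not tested_types:
        return [declared]                # never tested: no refinement
    splits = []
    for t in tested_types:
        splits.append(declared & t)      # "then" branch of the typecase
        declared = declared - t          # remaining "else" part
    if declared:
        splits.append(declared)
    return [s for s in splits if s]

union = frozenset({"String", "Number"})

# Only x is tested (typeof x === "string"), so x splits but y does not:
# no (Number, Number) -> Number arrow can be tried for y alone.
print([sorted(s) for s in candidate_splits(union, [frozenset({"String"})])])
# -> [['String'], ['Number']]
print([sorted(s) for s in candidate_splits(union, [])])
# -> [['Number', 'String']]
```

Adding the redundant test on y gives y the same two-way split as x, which is exactly why example14_alt succeeds.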
>> Yes indeed, we can add a remark on that

## Smaller Comments

* "occurrence typing" both in simplistic form and with that name was originally done by [Komondoor et al., Dependent types for program understanding, TACAS 2005], although I did not know about that work when I independently developed the idea and the name.

**Done** we added a footnote about it. Thanks.

* Is repeated re-checking of function bodies costly? Especially in the nested-function case, this could be asymptotically expensive.

>> Kim: it is not costlier than CDuce, just one extra pass to "infer" the annotation

* On page 10, line 20, there's a reference to an explanation in section 1.3 that does not seem to exist.

**Answer:** Indeed, it is not very clear what we referred to (in this specific case we meant the need to deduce negated arrow types for type preservation). We rewrote the sentence to make it clearer.

* The claim at the end of page 15 seems too strong. Some languages reify instantiations of generic types at runtime; C# is a good example of this. In this case, it's easy and constant-time to test whether a value has a specific function type. That would of course make the "counterintuitive result" happen, but this seems not so counter-intuitive in a nominal type system.
>> Kim: Yes, indeed, in case of nominal typing you can test values (e.g., objects) with functional components

* On p17, Undef is chosen to be a constant not in 1. But 1 was previously described to be \not{0}, which if 0 is empty means that Undef must be in 1. Can this be explained further?

**Answer:** By "a constant not in 1" we meant "a constant not in the (interpretation of) 1" or, if you prefer, not in the domain D (e.g., like \Omega). Indeed, according to Section 2.2, [[\not{0}]] = D\[[0]] = D. So [[0]] is empty and undef is not in (the interpretation of) 1. We added an explanation (see lines ???-???). In short, Undef is a special singleton type whose interpretation contains only the constant undef, which is not in D.

* In the definition of AbsInf+ on p21, could we exclude from $T$ the elements \sigma where typechecking succeeds for $\sigma\Uparrow$? That seems like it would result in strictly more-precise types.

**Answer**: proceeding as suggested would result in *less* precise types. This can be understood in general, since removing types from an intersection yields larger (i.e., less precise) types. But it can also be shown by slightly modifying the foo example as follows

```
function foo(x : ?) {
  return (typeof(x) === "number") ? x : x.trim();
}
```

where we replaced x+1 by x in the "then" branch. Repeating the reasoning of the paper we would obtain the type

(number ∧ ? → number ∧ ?) ∧ (number → number) ∧ ((?\number) → string).
Removing the first arrow of the intersection would yield a less precise type: applying the function to an argument of type number ∧ ? would yield a result of the same type, which by a cast could be used in every context where a strict subtype of number is expected; this becomes impossible if we remove that arrow.


Reviewer #2:

Summary of the submission:
--------------------------------

The paper presents four extensions to occurrence typing. First, it allows refinement of arbitrary expressions (thus removing the restriction that the refined variable can only be at top level). Second, it allows for inference of intersection types. Third, it combines occurrence typing with dynamic tests and, in particular, gradual typing. Finally, it formalises these concepts in a language with records and proves meta-theoretical properties.

The paper provides a motivation for the above suggested extensions and the technical challenges they impose. Then, it defines a core language with occurrence and intersection types and defines the runtime and (set-theoretic) static semantics. The static semantics is proved safe (via subject reduction) and approximated by an algorithmic system that is proved sound and complete. Next, the system is extended with records and integrated with gradual types. Finally, the implementation of the system is discussed with 12 benchmarks that showcase the novel occurrence typing features of the system and 14 that compare against the system of [23].

Analysis of the submission:
--------------------------------

This is a very well written paper that addresses an interesting subject, i.e., practical features of occurrence typing. The work comes with both the metatheory (that shows soundness of the system) and the implementation (that showcases the applicability of the new features).
But, as detailed below, existing related work is not compared against the new contributions.

1) Missing Related Work.

The authors provide a detailed comparison with the existing occurrence typing work, but barely mention related work on refinement types. There, type checking is also case sensitive, leading to similar problems (and related solutions) to the ones discussed in this work. For example,

- "Compositional reasoning and decidable checking for dependent contract types" by Knowles and Flanagan [1] addresses the problem of refined types appearing in arbitrary places, which is very similar to the first contribution of this work.

>> Kim

An alternative approach followed by refinement type systems is to ANF-transform the programs. A discussion is missing on how these two alternatives relate to the proposed solution. It appears that the proposed solution leads to non-terminating algorithmic checking, while the two alternatives used by refinement types preserve decidability.

>> ANF: say that it is a completely different approach that we explore in separate work (not published yet, which is why we did not comment on it)

- Intersection and refinement types have been extensively studied [e.g., in 2-4].
Even though the set-up and the goals of such systems are not identical to the ones presented here (e.g., they do not aim for inference), there are many similarities (e.g., [3] also presents a set-theoretic interpretation), thus a comparison is missing.

- In 3.3 it is mentioned that "we are not aware of any study in this sense", i.e., the integration of occurrence and gradual types. [5] studies the integration of liquid types with gradual types while preserving the gradual guarantees.

2) Meta-theory Clarifications

Some formal statement on the expected running time of the algorithmic type checking is missing. Of course, the addressed problem (inference of intersection types) is difficult and the proposed type checking can diverge. The authors state (at pg 14) that "in practise the problem is a very mild one", but some discussion of the expected running time of the algorithm in practice is missing.

3) Evaluation

Since the paper is motivated by the TypeScript and Flow systems, I expected that the system would be evaluated against these. Instead, section 4 evaluates against [23] and states that the new system "goes beyond what languages like TypeScript and Flow do". Yet, the cost of the new features is not discussed. Is the presented system much slower than [23, TypeScript, Flow]? Are the users required to write more annotations?

Minor Comments:

- pg 11: In the [BASE] rule, is the comparison of the environments only syntactic, or is subtyping also allowed?
- Theorem 2.7 mentions n_0 which does not appear in the theorem statement.
- Section 3: Have you shown soundness and completeness for the extended systems?
- pg 27: "the the function"

References used in this review:
-----------------------------------
[1] https://dl.acm.org/doi/abs/10.1145/1481848.1481853
[2] https://prosecco.gforge.inria.fr/personal/hritcu/publications/rcf-and-or-coq-jcs2014.pdf
[3] http://noamz.org/oplss16/refinements-notes.pdf
[4] https://arxiv.org/abs/1503.04908


Reviewer #3:

* Summary of the submission:

This paper explores occurrence typing, which is a technique that assigns different (refined) types to different occurrences of a variable or application, based on the connection between the outcome of a dynamic type test and control flow. The paper relies on a set-theoretic approach to types (where the semantics of a type is considered to be the set of values having that type), which allows for union and intersection types as set union and set intersection, and negation types as set complement, yielding a very fine-grained type lattice.

Apart from this, the paper differs from earlier work on occurrence typing (esp. [22, 23], which are based on flow analysis) by offering the ability to give more precise types (union of function types, rather than function types containing union types) to some functions in applications, and it keeps track of the dynamically tested type of application expressions as well as variables, thus revealing more precise type information than earlier approaches. The results also differ in some other ways, which may be seen as an opportunity to combine this approach with flow analysis.

The paper presents a general typing framework that generalizes several aspects of earlier occurrence typing proposals and that can be applied to handle other problems, such as the detection of intersection types for unannotated functions. Finally, the paper presents extensions to the framework to cover records with field addition, update, and deletion, and it describes connections to gradually typed languages.
* Analysis of the submission:

The main contribution of this paper is the ability to yield more precise typings, in particular for some functions in application expressions whose type is tested. It also presents the novel idea that type tests may give rise to occurrence typings for non-variable expressions. In particular: a type test on a function application can yield an improved typing of the result of the application as well as an improved typing of the function, and this applies recursively for application expressions containing application expressions.

The paper is technically convincing. The detailed comments below contain a couple of questions about technical details, but everything else looks precise and correct (I have not studied the proofs in the appendices in detail, but I did not notice anything surprising there, either).

However, the paper does raise some questions about the connection to practical languages. For example, it is noted (p15, line 53) that a practical language would not support dynamic type tests of functions using any non-trivial function type (only f \in 0 -> 1 would be supported; that is, we can test whether a given value is a function or not a function, but we can't test its type in any detail). This seems to justify a question about why it is useful to study the much more detailed type tests enabled by the formal framework, if it's unlikely to be useful in practice. Is there a smaller/simpler formal framework which preserves the usefulness of these ideas?
>> Beppe: see response to reviewer 1

Another relevant limitation is that the basic idea of registering the dynamically tested type of a non-variable expression (an application, e1 e2) will not work in a language with mutable state (we can't assume that e1 e2 will yield the same result if invoked twice) --- and the connection to gradual typing and dynamically typed languages seems to make at least some amount of mutation unavoidable in practice.

>> Beppe: see response about side effects

It is also worth noting that the paper contains many different elements, and perhaps it could be made more readable by expressing the same elements in more than one paper: There is a calculus, there is an implementation, there are some restrictions that "any practical language would apply", there is a brief discussion of an extension with records, with (functional) field modification support, and all of this amounts to several rather different language/calculus designs.

>> SKIP

You could perhaps say that the type system is somewhat accidental: Whenever something doesn't work, we'll fiddle with it. The outcome is a set of rules with some extra bells and whistles, and look: Now we can type check one more example! I guess this criticism is somewhat unfair, but it may still be useful to consider how and to which degree it can be refuted.

>> BOH????
Finally, the implementation section makes me think that the flow analysis is more powerful than the type-theoretic approach used in this paper (cf. [23], example 14). Could your ideas be restated using flow analysis, and would that yield a similar improvement of the typing/inference? (That is, would you still be able to produce the function types which are intersections of arrow types?)

>> Beppe: one does not exclude the other ... see the answer to reviewer 1 about the different approaches

All in all, this is a very interesting paper, technically convincing, providing results that are potentially quite useful, and improving on the typing precision offered by earlier approaches. At the same time, it does bring together several rather different language designs, and it might be possible to adjust the paper such that it cuts down on the variation and presents a simpler and more compelling unified story. Nevertheless, I'll not hesitate to recommend that it can be published as is, or with small adjustments.

** Detailed comments:

p3,10, 'generic expressions': You have used this phrase a couple of times at this point, please define it.

p5,11, 'intersect it with': Surely the next term (t^+ -> \not t) should be negated.

p7,44, typo: 'andjump'

p8,33, '\partial \in [[t_2]]': It seems likely that the codomain of [[_]] is a subset of {\cal D}. Wouldn't this prevent any function that diverges on any argument from the domain t_1 from belonging to [[t1 -> t2]], for any t2?
I would expect that any function in [[t1 -> t2]] would be allowed to diverge on any input.

p8,47: Same problem: it looks like a function cannot have a function type if it diverges on any argument in the domain. Is that really intended?

p9: It seems likely that your calculus is somewhat similar to a simply typed lambda calculus (assuming that the type that is used to annotate each \lambda can't be infinite). So maybe all that non-termination stuff doesn't matter because all well-typed programs terminate. What do you think?

p10,17: It would be nice if you could say something about why [Abs-] should be sound. In general, if you know that x \in A and (A /\ \not B) \not= \emptyset, then you certainly can't conclude that x \in \not B.

p16,41: If the main application of occurrence typing is dynamic languages, how would you describe the relationship between occurrence typing and 'smart casts' in Kotlin, resp. type promotion in Dart? Those languages use a mostly sound static type system, and smart casts / type promotion play a big role in practice.

p27: Having read the implementation section, I noted that it is dealing with several different things: There are a few sentences about the implementation. Then we have a small evaluation (coming up with examples, including the existing examples from [23], then typing them using your implementation). Then we have the related work parts, esp. relative to [23], but also relative to TypeScript and Flow. Couldn't you organize this section in a clearer way, perhaps splitting it into an implementation section, an evaluation section, and a discussion (about the relationship to other work)?

>> Kim