Hacker News
6 hours ago by DanRosenwasser

Hi all, original author of the wiki page here. Please take the advice with a grain of salt. Specifically:

- union types aren't always bad, DON'T avoid them

- DON'T feel the need to annotate every single thing

Please apply a critical lens as you read through this page. The document was meant for users who've hit rough perf issues, and we tried pretty hard to explain the nuances of each piece of advice.

an hour ago by presentation

Do you have any tips on diagnosing what the problem might be? I don't know how to turn the diagnostics flag output into actionable changes to my company's code, and while I can blindly do what the wiki article suggests (I found it a while back when trying to figure out what to do), I'd much prefer not to try time-consuming changes to our large codebase at random... I've been stuck with slow TypeScript compile performance for almost a year now, and I can't tell what I'm supposed to do, or whether the TypeScript compiler is just too slow.

9 hours ago by jamamp

Of the three code-related sections, I think only Using Type Annotations makes sense. While the compiler _can_ infer the return type, and the user can mouse over the function to see what the language server has determined the type to be, I feel that explicitly noting the return type is preferable. Yes, the compiler can act more quickly, but it also makes it immediately clear to others working on the same project what the function does. Even in languages like Swift, which are happy to use type inference, you still must annotate your functions.
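
As a sketch of the trade-off the wiki describes (function and type names here are made up, not from the article):

    // Inferred return type: readers have to hover or read the body to learn
    // the shape of the result.
    function getConfig() {
      return { retries: 3, timeoutMs: 5000, verbose: false };
    }

    // Explicit, named return type: the signature documents itself, and per the
    // wiki's reasoning a named type can also save the compiler some work
    // compared to an anonymous one.
    interface Config {
      retries: number;
      timeoutMs: number;
      verbose: boolean;
    }

    function getConfigAnnotated(): Config {
      return { retries: 3, timeoutMs: 5000, verbose: false };
    }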

The other two code-related sections seem odd: writing your code in a particular style to improve compile-time performance. It would be beneficial to see compile-duration differences between projects that heavily use union types and projects that don't. Otherwise, changing your coding style and not using explicit features of a language that are hard to find in other languages seems counter-productive.

That said, the actual compiler configuration changes that follow seem very useful, from someone who doesn't write much TS.

8 hours ago by brundolf

> Otherwise, changing your coding style and not using explicit features of a language that are hard to find in other languages seems counter-productive

As with most optimization suggestions, I take these to be intended as a remedy when you're actually running into problems, not something to be done eagerly. I've never run into significant cross-project TypeScript performance issues personally, but I have heard of that happening to some people.

6 hours ago by Vinnl

It does explicitly say this at the top:

> The earlier on these practices are adopted, the better.

5 hours ago by DanRosenwasser

Technically, yes, but we don't expect most users to stumble onto this page unless they're already hitting perf issues.

6 hours ago by brundolf

Hum. I missed that. That does seem unideal.

3 hours ago by breatheoften

I'd love it if TypeScript could one day grow a mechanism to rewrite closure definitions to contain the inferred type declarations -- some kind of keyword to indicate "this return type will be re-inferred by the compiler based on the call sites within local scope" -- and have it integrate with IDE "rewrite on file save" infrastructure.

That way you'd get minimal keyboard typing when passing around inline closures -- you wouldn't have to write the types yourself or maintain them as the code changes -- and any changes to the inferred return types would be visible in source control, with diffs that provide quite rich information about changes that might have done something unexpected or propagated further than realized.

9 hours ago by corytheboyd

My only issue with this is that it introduces the possibility of human error. It's rare, but if the returned object fits more than one type (say, a superclass vs. a concrete class instance), the incorrect one could be selected and the code would still compile. Is this even a valid concern?

The only time I consistently see manual type annotation cause problems is with React.FC<Props> vs. (props: Props). People don't always remember to provide their props interface as the generic, and instead directly annotate the props argument of the function. This is a subtle issue that breaks the "magic" props added by React (like children), leading to people adding their own children definitions to their props interfaces, d'oh.
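
A rough sketch of the two patterns in a .tsx file (component names made up; this assumes React type definitions where React.FC still includes children implicitly):

    import * as React from "react";

    interface CardProps {
      title: string;
    }

    // Annotating the props argument directly: children is not part of CardProps,
    // so passing children to <Card> is a type error unless you declare a
    // children field yourself.
    const Card = (props: CardProps) => <div>{props.title}</div>;

    // Using React.FC<CardProps>: with typings that include implicit children,
    // props.children is available without declaring it in CardProps.
    const CardFC: React.FC<CardProps> = (props) => (
      <div>
        {props.title}
        {props.children}
      </div>
    );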

8 hours ago by codeflo

I personally find that manual return type annotations actually prevent some errors. A common case: I forget the return statement in one branch, and TypeScript is happy to infer something like number|undefined as the return type.
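
A minimal sketch of that failure mode (assuming strictNullChecks; names made up):

    // No annotation: the missing return in the fall-through path silently
    // widens the inferred return type to number | undefined.
    function parsePort(input: string) {
      if (/^\d+$/.test(input)) {
        return Number(input);
      }
      // forgot to return here
    }

    // With an annotation, forgetting that final return would instead be a
    // compile error ("Function lacks ending return statement and return type
    // does not include 'undefined'").
    function parsePortAnnotated(input: string): number {
      if (/^\d+$/.test(input)) {
        return Number(input);
      }
      return NaN;
    }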

6 hours ago by Klathmon

>leading to people adding their own children definitions to their props interfaces

IMO this is a feature not a bug. Type definitions aren't just for the compiler, they're also for the developer. Being able to see at a glance which components expect children and which don't is really valuable. Not to mention that there are situations where I want to restrict what kinds of children can be passed in (think render-props, or named slot projection patterns).

In other words, just because React supports an implicit definition of what a "child" can be doesn't mean that my specific component supports all of those same possibilities.
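
A hypothetical example of restricting children to a render function (all names made up):

    import * as React from "react";

    interface ItemListProps<T> {
      items: T[];
      // children must be a render function, not arbitrary JSX
      children: (item: T) => React.ReactElement;
    }

    function ItemList<T>(props: ItemListProps<T>) {
      return (
        <ul>
          {props.items.map((item, i) => (
            <li key={i}>{props.children(item)}</li>
          ))}
        </ul>
      );
    }

    // Usage: <ItemList items={[1, 2, 3]}>{(n) => <strong>{n}</strong>}</ItemList>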

6 hours ago by corytheboyd

I see your point, and agree under the condition that I trust everyone contributing to the codebase knows it. But, in reality, they don’t. I’d rather have the children type available but unused most of the time than one-off type definitions of children.

Maybe for my own projects I'll employ your approach, because I agree with it from a fundamentals standpoint.

7 hours ago by spion

Would be really helpful if there were an autofix suggestion available so that the LSP could add the return type for you.

10 hours ago by joubert

Worth emphasizing the first sentence:

> faster compilations and editing experiences

i.e. not runtime perf.

9 hours ago by corytheboyd

I assumed it was going to be about compilation as the subject is TypeScript, which feels like a safe assumption. I suppose there could actually be runtime consequences of TypeScript too, as you’re still writing runtime code, just through a game of telephone with TypeScript :p

10 hours ago by tantalor

Well, of course not. You can't "run" TypeScript.

10 hours ago by karmakaze

No--but certain constructs could translate to slower than expected execution speed.

9 hours ago by cogman10

That's something that needs proof from a profiler. Intuition on what's "slower" is really awful, particularly for JITed languages.

I don't think you could give general advice on what is slower as that's a constantly moving target.

9 hours ago by staticassertion

You can't "run" C, or Go, or Javascript, etc.

8 hours ago by the_af

I think the parent comment meant that typescript is a static checker, which means it's not exercised at run time but in a prior step. Therefore "performant typescript" means to shorten the time it takes to perform these static checks, not the time it takes to run your code. In contrast, when people talk about optimizing C code they most often (but not always, of course) mean to write code that runs fast.

10 hours ago by arc0

Deno[0] supports running TypeScript without needing to compile it to JS.

[0] -- https://deno.land/

10 hours ago by nicoburns

Deno still compiles it to JS. It just does it for you.

10 hours ago by asutekku

That’s a topic not related to typescript, it’s javascript after all.

10 hours ago by nicoburns

In practice I don't think there are, but it's not inconceivable that there could be performance considerations specific to how TypeScript generates JavaScript.
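
One often-cited example of this kind of thing (not from the article, just an illustration, assuming default compiler settings):

    // A regular enum emits a real object at runtime (built by an IIFE), and
    // every use is a property lookup on that object:
    enum Direction {
      Up,
      Down,
    }
    const d = Direction.Up;

    // A const enum is erased by default and its members are inlined as
    // literals in the emitted JavaScript, e.g. `var a = 0 /* X */;`
    const enum Axis {
      X,
      Y,
    }
    const a = Axis.X;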

6 hours ago by brundolf

If performance is enough of a problem to warrant a post like this, I'm a little surprised that tsc is itself still written in TypeScript. I get the benefits of that, but it doesn't seem super uncommon that the project gets pushed to its limits these days.

10 hours ago by renke1

That's rather sad; union types are really what I like most about TypeScript. This might explain why VS Code sometimes feels so slow when type checking, since I have a few types that rely heavily on unions.

10 hours ago by ojosilva

> However, if your union has more than a dozen elements, it can cause real problems in compilation speed. For instance, to eliminate redundant members from a union, the elements have to be compared pairwise, which is quadratic. This sort of check might occur when intersecting large unions, where intersecting over each union member can result in enormous types that then need to be reduced.

This statement makes me think... how come the TS compiler is not using something like a hash/map (object) of union members to basically ignore redundancy?

Or any other strategy really. The union of unique values in 2 or more arrays is a classic CS problem for which there are many, many performant solutions.

Anyone familiar with the TS internals? Maybe I'm not seeing the forest.

9 hours ago by jitl

What you’re not seeing is that TS is a structurally typed language, not a nominally typed one. Two type signatures defined in different places with different names may be assignable to each other, so TypeScript usually has to deep-compare the two types, recursively, until a conflicting field is found.
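
A small illustration of that (types made up):

    // These two types are declared independently, but because TypeScript is
    // structural, they are mutually assignable: the checker has to compare
    // their shapes, not just their names.
    interface PointA {
      x: number;
      y: number;
    }

    interface PointB {
      x: number;
      y: number;
    }

    const a: PointA = { x: 1, y: 2 };
    const b: PointB = a; // fine: same structure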

7 hours ago by wizzwizz4

So just hash the structures?

5 hours ago by ben509

> This statement makes me think... how come the TS compiler is not using something like a hash/map (object) of union members to basically ignore redundancy?

The trouble is the operation isn't "is X a member of Y", rather it's "does X match any values of Y according to predicate P."

You can break that out if you have knowledge of possible X's and P, as is the case with type matching.

Say we are checking G[] against {str, "foo literal", int[]}. I have no idea how TS implements these internally, but say the underlying type expressions are:

    [{head: String}, 
     {head: String, constraint:"foo literal"}, 
     {head: Array, element:{head: Integer}}]

And G[] is {head: Array, element: {head: Generic, name: "G"}}.

We could reasonably require that heads are all simple values, and then group types as such:

    {String: [{head: String},
              {head: String, constraint:"foo literal"}],
     Array: [{head: Array, element: {head: Integer}}]}

You'd still have to try to match that generic parameter against all the possible Arrays, but you could add more levels of hashing.

The downside is, of course, it's quite tricky to group types like this and prove that it returns the same results as checking all pairs, especially when you have a complex type system.

8 hours ago by smt88

I've found VS Code to be pretty slow regardless of what I've used it for. The UI is snappy, but the hinting/linting/type-checking is sometimes shockingly slow.

The only reason I've used VS Code for more than an hour in the last few years is because its Svelte plugin was much better, but now JetBrains has a good Svelte plugin, and I'm back to JetBrains 100% of the time. It's worth every penny.

an hour ago by arcturus17

Yea, I've been using VSCode these past couple of weeks as I'm now coding in Python + JS after a few years of only JS with WebStorm... and I'm buying a PyCharm license tomorrow.

VSCode is fantastic in many ways, and I'll keep using it for my markdown dev notes and occasionally for general-purpose programming, but for anything of substance I'm a JetBrains convert.

8 hours ago by breck

id run some benchmarks first on your codebase. would be surprised if union types were a top bottleneck.

an hour ago by presentation

How do you do that? My company’s 60kLOC TypeScript codebase takes several minutes to compile, and I don’t know how to diagnose what the problem might be; the diagnostics flag exists, but I don’t understand how to take action on its output. The current plan is to break the project into a lot of smaller project references, but yeah, compile speed has probably taxed my company significantly in productivity and I don’t feel confident about addressing it.

4 hours ago by kostarelo

> Type inference is very convenient, so there's no need to do this universally - however, it can be a useful thing to try if you've identified a slow section of your code.

Can we actually identify which piece takes longer to compile?

2 hours ago by nandi95

I think in 4.1 you can add --generateTrace and an output path; the result can be inspected in the browser. There's also --extendedDiagnostics, if I remember correctly, which tells you how long each part of the compilation process takes.
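
If I have the flags right, usage looks roughly like this (the output directory name is arbitrary); the trace files can then be loaded in a trace viewer such as chrome://tracing or Perfetto:

    # writes trace.json / types.json into the given directory (TypeScript 4.1+)
    tsc -p tsconfig.json --generateTrace trace_output

    # prints timing breakdowns (parse, bind, check, emit) and memory usage
    tsc -p tsconfig.json --extendedDiagnostics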

9 hours ago by stupidcar

We've encountered huge performance problems using TypeScript with the styled-components library. Something about how that library is typed results in multi-second delays before the VSCode intellisense updates on any file in our project. It's absolutely agonising.

9 hours ago by brendanmc6

Yup! Among other type definition problems. The types are very poorly maintained.

I’m planning to give Emotion a try as it has the same ‘styled’ api.

9 hours ago by renke1

Yeah, I had the same problem, kind of made me switch to Tailwind, actually. The places where I used dynamic rules based on props were replaced with dynamic classes.

9 hours ago by Etheryte

Note how some of these are directly at odds with writing easily maintainable code. For example, using type annotations for return types [0]:

  - import { otherFunc } from "other";
  + import { otherFunc, otherType } from "other";
  
  - export function func() {
  + export function func(): otherType {
        return otherFunc();
    }

Not manually annotating the return type here reduces both the work you need to do when refactoring and the visual overload when working with the code. In my opinion, both of those are far more important than small changes in compile time.

[0] https://github.com/microsoft/TypeScript/wiki/Performance#usi...

9 hours ago by staticassertion

I honestly find it so annoying when return types aren't annotated. It's considerably less overhead for me when I can look at the signature and see the return type, even if a few characters are added.

9 hours ago by jasonhansel

Agreed. I think Rust had the right approach here: require type annotations for function parameters & return values, but perform type inference within functions. That makes it obvious when a function's public interface (so to speak) has changed.

9 hours ago by aszen

Not really; Rust is unusable for interactive programming for this reason. I understand that's not Rust's target domain, but there are still downsides to their approach.

9 hours ago by Xenoamorphous

Exactly, and if I declare that a function returns a string, and then edit its code and add a return statement that returns a number by mistake, the compiler will tell me right away.

If I didn’t declare the return type, the compiler will now silently infer the return type to be “string | number” (which might or might not break compilation elsewhere).
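
A small sketch of that (hypothetical function):

    // Annotated: the stray numeric return is flagged at the definition site
    // ("Type 'number' is not assignable to type 'string'").
    function describe(n: number): string {
      // return 0; // <-- would be a compile error right here
      return `value: ${n}`;
    }

    // Unannotated: the same edit silently changes the inferred return type to
    // string | number, and any breakage only shows up at the call sites.
    function describeLoose(n: number) {
      if (n === 0) return 0;
      return `value: ${n}`;
    }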

9 hours ago by aszen

My experience with F# and, recently, with the new Haskell language server has been that not having written type annotations is a non-issue, because the language tooling still shows them above the function definition and, better yet, can auto-generate them.

So I think this is more of a tooling issue. Do keep in mind that global type inference is really handy for interactive programming in the REPL and for short scripts.

8 hours ago by breck

VSCode shows return type on hover, and could easily show it all the time via an inline secondary notation, if desired. no need for more tokens.

6 hours ago by int_19h

The problem is that the type it infers is not necessarily the type you want exposed on that export. And then it's quite possible to have a bug in the implementation of the function that results in a type that's outright wrong.

Types are a subset of contracts. Contracts are best explicit at the API boundary - which is to say, on exported functions and members of exported classes.

9 hours ago by lucideer

Annotated return types are one of the single most prominent readability benefits of typed syntax. You know with certainty what type will be returned in a single brief glance, without the cognitive overhead of parsing the function body and checking the various return statements (and that's the best case: a return statement that returns a variable obtained from an external source may obscure the type even further).

For me this also greatly improves my ability to refactor quickly and confidently (no need to check additional places/files during a refactor for edge cases in the expected type).

Yes, any type-related refactor mistakes should in theory be picked up at compile time or by smart IDEs, but having the context in front of you still greatly improves speed when writing.

It's also great for code reviews where grokking context is trickier and IDE goodies are typically not as rich.

8 hours ago by breck

> in a single brief glance

this is something VSCode/editors could always show if desired via secondary notation, without requiring annotations.

9 hours ago by cogman10

I'm going to have to disagree here.

The return type is part of the contract for what a method does. Changing it is just as dangerous as changing the type on a method parameter.

I agree that using type inference locally is a boon to productivity. I disagree that it's a negative for a method's return type.

Keeping it fixed prevents a future refactor from breaking the method expectations and contracts down the line.

9 hours ago by blauditore

I would argue the opposite: Omitting the return type makes refactorings significantly riskier, and code less readable in my opinion. Leaving out types should only be done in trivial, small-scope areas of code. When done on an exported, widely-used function, this smells like "write-only" code and is hard to maintain.
