
Road to Fable 2.0: Lightweight types? #1318

Closed
alfonsogarciacaro opened this issue Jan 10, 2018 · 57 comments

@alfonsogarciacaro
Member

One of the guidelines when designing Fable was to generate JS that is as standard as possible and can be easily consumed from other JS code. Because of this, F# types (including records and unions) are compiled to the closest thing to real types in modern JS: ES2015 classes (with reflection info).

This is working well, and it also means you can export a type to JS and call all its properties and methods normally. However, it also means types become somewhat expensive (mainly in terms of code size) in JS, where it's more usual to define fewer classes and use more literal objects (aka plain old JS objects, or pojos). Here is an example where 2 lines of F# result in 36 lines of JS. This is becoming particularly obvious in Elmish apps, where you usually declare lots of small types to define messages and models. Fulma is an example of a very useful library that adds many KBs to the app bundle, which can be a big drawback for some users.

To counter this we are trying to optimize the compilation of certain types with special attributes (see #1308 #1316). This is giving good results, but it forces users to remember to use the attribute and its special rules, and also to tolerate slightly different runtime semantics for the same type depending on the decorator (and we already have too many such attributes: Erase, Pojo...). A different approach for a new major version would be to compile all records and unions in a more lightweight fashion, so the semantics are more predictable and every user gets the benefits without using special attributes. An initial proposal could be to do something similar to what Reason/Bucklescript are doing:

  • Compile unions as JS arrays: So for example Bar(6, "foo") becomes [0, 6, "foo"] (the first element is the tag index). Unions without any data fields (e.g. type Foo = A | B | C) could be compiled as integers (same as enums). This is what Reason/Bucklescript do, and given how often unions are used in F# programs to model messages etc., it should shave off a lot of code, as array syntax is very lightweight in JS. It should also bring performance gains (JS engines are very optimized for dealing with arrays).

It's also possible to compile just the data-less cases of a union to integers (I think Bucklescript does that), but all the JS performance guides I've read recommend avoiding polymorphic functions, so if a function expects a union type it's probably better to always feed it either an array or an integer, not one or the other depending on the case.
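To make the proposal concrete, here is a hand-written sketch of what union-as-array output could look like. This is hypothetical illustration code (the `Shape` type and helper names are invented for this example), not actual Fable codegen:

```javascript
// Hypothetical sketch of the proposed output, not actual Fable codegen.
// F# source:  type Shape = Circle of radius: float | Rect of w: float * h: float
// Each case becomes a plain array whose first element is the case's tag index.
const Circle = (radius) => [0, radius];
const Rect = (w, h) => [1, w, h];

// Pattern matching then compiles down to a switch on the tag.
function area(shape) {
  switch (shape[0]) {
    case 0: return Math.PI * shape[1] * shape[1]; // Circle
    case 1: return shape[1] * shape[2];           // Rect
  }
}

console.log(area(Rect(3, 4))); // 12
```

Note that no class, no prototype and no reflection metadata is emitted; the whole type costs two arrow functions.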

  • Compile records as JS arrays (as Reason/Bucklescript do) or as pojos: The first option should provide better performance and smaller code (and easier runtime equality checks, as we already have structural equality for arrays), while the second should make for a better debugging experience (you can read the object fields in the browser devtools) and better compatibility with libraries like React.

In the second case, we'll probably need to add an extra property to tell F# records (with structural equality) apart from normal JS objects, unless we want to implement structural equality by default for all objects in Fable.
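As a rough illustration of that extra property, a hidden marker could distinguish records from ordinary objects. Again, this is a hypothetical sketch (the `__fsRecord` marker and helper names are invented here), not Fable's actual design:

```javascript
// Hypothetical sketch: records as plain objects (pojos), with a non-enumerable
// marker so runtime helpers can tell F# records apart from normal JS objects.
function makeRecord(fields) {
  const obj = Object.assign({}, fields);
  Object.defineProperty(obj, "__fsRecord", { value: true, enumerable: false });
  return obj;
}

// Shallow structural equality, applied only when both sides carry the marker.
function recordEquals(a, b) {
  if (!a.__fsRecord || !b.__fsRecord) return a === b;
  const ka = Object.keys(a), kb = Object.keys(b);
  return ka.length === kb.length && ka.every(k => a[k] === b[k]);
}

const p1 = makeRecord({ X: 1, Y: 2 });
const p2 = makeRecord({ X: 1, Y: 2 });
console.log(recordEquals(p1, p2)); // true
console.log(JSON.stringify(p1));   // {"X":1,"Y":2} — the marker doesn't leak
```

Because the marker is non-enumerable, it stays out of `Object.keys`, `JSON.stringify` and React props iteration.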

Notice this means there won't be a class/function generated in JS to represent the type. Instance and static members of the type will become functions of the enclosing module. Actually, this is what the F# compiler does by default, but Fable currently mounts the members on the type.

Tuples will still be compiled as JS arrays and classes will likely remain more or less as they are now.

This will take a considerable amount of work and will be incompatible with some current Fable features, especially around reflection and type testing. We have to decide carefully whether the pros outweigh the cons:

Pros

  • Reduced bundle sizes, improving download and parse times for your app
  • Better tree shaking (as type methods become module functions)
  • (Likely) better performance, as indexing an array should be faster than accessing a named property

Cons

  • Type testing and getting reflection info from a boxed object won't be possible anymore for DUs and records

The same thing will happen when sending a type to JS: the external code won't be able to access type members. This may not be a big deal, as Fable is not used much to send F# types to JS, AFAIK.

  • There will be challenges with runtime equality/comparison and string conversion
  • An object will have to be generated when a type is cast to an interface (to attach the interface methods)
  • Less readable JS code?
@MangelMaxime
Member

In all my use cases, I have never needed reflection when using Fable, so for me it's a yes.

That said, perhaps we can still support reflection via attributes if it's really needed. But in general I prefer to avoid reflection usage; I don't think it's good practice, and it doesn't help in writing clean, robust code.

In order to make the JS code more readable, we could generate comments, something like: [0 /* Bar */, 6, "foo"].
We could either generate these only in DEBUG mode, or always, and let tools like UglifyJS remove the comments from the output.

I am not sure I understand this part:

There will be challenges with runtime equality/comparison and string conversion

@Zaid-Ajaj
Member

@MangelMaxime

But in general I prefer to avoid reflection usage; I don't think it's good practice, and it doesn't help in writing clean, robust code.

Not a good practice? Any kind of metaprogramming (outside of unsafe interop) is impossible without reflection, and since Fable doesn't support quotations, reflection is the only way. Many library implementers use reflection and expect to be able to read the metadata of types provided by their users.

@MangelMaxime
Member

@Zaid-Ajaj I just said that because every time I've used reflection, I eventually switched back to another solution. :)

Because reflection usage is in general less frequent, I propose to support it via an attribute if we can't support it any other way. :)
That way, the code is optimized by default.

@alfonsogarciacaro
Member Author

alfonsogarciacaro commented Jan 10, 2018

Some notes :)

  • Reflection will only be disabled for obj; it will still be available for types known at compile time or for generics (if you inline the function or use the PassGenerics attribute). I think this should cover most of the current use cases. It may be tricky for some fable-core functions that rely on reflection info at runtime, like ofJson and keyValueList, but I hope we can find a workaround.

  • Yes, comments are a good idea. They are automatically removed in production by UglifyJS so it should be fine.

  • About string conversion, for example: right now Fable first checks whether the object implements ToString, and then checks the type of the object. I'm not sure how this could be done without type info at runtime (and whether it will be possible to override ToString or not). We can get some info for root objects at compile time, but for nested members it's more difficult.
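The dispatch described in that last bullet can be sketched roughly like this. This is a hypothetical helper written for illustration, not the actual fable-core implementation:

```javascript
// Hypothetical sketch of runtime toString dispatch. With classes, a
// user-defined ToString can be found on the instance at runtime; with bare
// arrays/pojos there is no per-type method left to discover.
function fsToString(value) {
  if (value != null && typeof value.ToString === "function") {
    return value.ToString(); // user override wins
  }
  if (Array.isArray(value)) {
    // structural fallback, F#-style separators
    return "[" + value.map(fsToString).join("; ") + "]";
  }
  return String(value);
}

console.log(fsToString({ ToString: () => "custom" })); // custom
console.log(fsToString([1, 2]));                       // [1; 2]
```

The problem the comment points at: once unions are bare arrays, the first branch can never fire, so an overridden ToString would have to be resolved at compile time instead.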

@davidtme
Contributor

Compile records as JS arrays (as Reason/Bucklescript do) or as pojos

Pojos are the winner for me, as "compatibility with libraries like React" and debugging are big pluses.

@xtuc
Contributor

xtuc commented Jan 10, 2018

as Reason/Bucklescript do

Bucklescript is a very good source for that kind of stuff.

So for example Bar(6, "foo") becomes [0, 6, "foo"]

One of the key selling points of Fable is code readability, and someone who relies on this won't ever upgrade.

In Babel we usually have options to tune the output; what if Fable added a compilation flag which turns the optimizations on? The downside is having to maintain two code paths doing basically the same thing. To avoid that, I would recommend using another AST pass (if optimization is enabled) to transform Bar(6, "foo") into [0, 6, "foo"].

In the case of the optimization being turned off, you could still debug or read the code as you currently can.

@et1975
Member

et1975 commented Jan 10, 2018

  • I like the opt-in optimization approach.
  • Structural equality - I'd keep the F# semantics; if F# docs say that records are structurally equal by default Fable shouldn't change that (there's already an attribute for that, if you want to opt-out).
  • Unions - unions are already harder to read at runtime; perhaps we could go back to the original Fable implementation (where the case was human-readable) for Debug/non-optimized output and arrays for Release/optimized?
  • Records = Pojos + module members 👍
  • Reflection/Metaprogramming - I'd take quotations support over reflection any day. If we need reflection for existing features - fine, but otherwise I'd be ok with not having it.

@whitetigle

In Babel we usually have options to tune the output; what if Fable added a compilation flag which turns the optimizations on?

Well I think it just summarizes my point of view. 😉

Fulma is an example of a very useful library that adds many KBs to the app bundle, which can be a big drawback for some users

We need numbers 😉

Anyway, as long as Fable apps are easy to debug it will be OK for me, because in the long run it's always maintenance of old code that becomes pricey.

@MangelMaxime
Member

@whitetigle Quoting @alfonsogarciacaro

Approximately, a normal Fable SPA app is around 100KB minified but Fulma adds 200KB extra

And we have some work in progress to remove around 100KB from Fulma with a new version of Fable, by manually adding [<Fable.Core.CompileAsArray>] (a new attribute)

@jgrund
Member

jgrund commented Jan 10, 2018

Don't know if you've seen this, but recent V8 versions have changed their approach to optimizing code:

https://medium.com/the-node-js-collection/get-ready-a-new-v8-is-coming-node-js-performance-is-changing-46a63d6da4de

@evilz

evilz commented Jan 10, 2018

Hi all,

  • Adding specific attributes should be avoided; opt-in optimization looks like a better approach

  • Structural equality: it should work as in standard F#

  • Array/Obj
    Array or object is maybe not so important, since we still have good source maps for debugging.
    I don't want to debug/read JS.

  • Bundle size is important
    Users browse more often now on mobile devices with 3G or 4G; it's always a pain to wait one or two minutes to get the app.

@jgrund
Member

jgrund commented Jan 10, 2018

FWIW, I've only used reflection recently and only as a last resort. I'm only using it in a generic function (with PassGenerics), so I think these changes won't affect that.

👍 on making compiled union / record types lighter

@inosik
Contributor

inosik commented Jan 10, 2018

If this is going to be controlled with a global switch, we probably need something to override this behavior per type, module or even project (the *.fsproj file a source file comes from), in case you explicitly want certain types to be classes.

Would it make sense to introduce this feature in Fable 1.4 (?) for non-public types only and then extend it to all public APIs in Fable 2?

@alfonsogarciacaro
Member Author

Thanks a lot for the comments! They are very useful for shaping the design of Fable 2.0; I'll write a summary later 👍 Thanks also for the link @jgrund, the post was very good.

About compilation flags for optimizations: I think this may increase the maintenance cost too much if it changes the runtime semantics, which could make some functions in fable-core not work as expected. However, it should be possible to make some specific changes to ease debugging, like using strings for union tags during development and integers in production.
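That last idea could look something like the following. This is a hypothetical sketch (the `T` helper and `Msg` cases are invented); the real compiler would emit one representation or the other at build time rather than branching at runtime:

```javascript
// Hypothetical sketch: readable string tags in development, integer tags in
// production. A helper picks the representation; the same helper is used in
// case labels, so matching stays consistent in either mode.
const DEBUG = process.env.NODE_ENV !== "production";
const T = (index, name) => (DEBUG ? name : index);

const Increment = [T(0, "Increment")];
const Decrement = [T(1, "Decrement")];

function update(count, msg) {
  switch (msg[0]) {
    case T(0, "Increment"): return count + 1;
    case T(1, "Decrement"): return count - 1;
  }
}

console.log(update(5, Increment)); // 6
```

In a devtools inspector the development value reads as ["Increment"] instead of [0], which addresses the readability concern without changing production size.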

@stweb

stweb commented Jan 11, 2018

Performance is a feature and I think Fable should offer better runtime optimisations. Opt-in via attributes makes the code less readable and also harder to share with .NET server-side code. Thus my preference would be compiler/project switches for optimisation features, while retaining the old behaviour (at least until it's proven obsolete). Opt-in is still a good choice for selected applications.

I also thought that F# arrays and lists could optionally be optimised to plain JS arrays for performance/size reasons.

@gerardtoconnor

@alfonsogarciacaro performance is not that important, so why bother? A wise man once told me dev experience is more important. 😝

Messing aside, reflection is the root of so much evil; you should wean people off it. If they need it, they can stay on an older version while they figure out a better way to implement things without reflection. If it weren't for reflection and a few other .NET gems, we would be able to do much better flattening and inlining of the execution graph in the compiler (fsc).

The project switches are going to be a nightmare to maintain and will split the project in two directions anyway. There are always the historical releases as an alternate version for those who need the old way, but I think you need to pick a direction and go with it.

@Krzysztof-Cieslak
Member

performance is not that important, so why bother? A wise man once told me dev experience is more important

was that me? ;P

Performance optimizations are important when anyone is actually using the project... and they actually have a problem with performance. I'd be very careful about introducing premature optimizations that may make the developer experience worse just for the sake of some benchmarks.

@MangelMaxime
Member

MangelMaxime commented Jan 11, 2018

The main point of the "Lightweight types" proposal isn't performance. It's bundle size, if I understood my discussion with @alfonsogarciacaro correctly.

The performance gains are just a consequence of that point :)

In general, I rarely look at performance problems, because it's kind of rare to see them. (I am just speaking from my experience :) )

@gerardtoconnor

gerardtoconnor commented Jan 11, 2018

@Krzysztof-Cieslak, well, you have in some guise at some stage, but I was referring to @alfonsogarciacaro in this instance ... don't worry, you're a wise man too.

I fully agree: performance is important in libraries that people use, since some users will inevitably run into performance issues. I think the important balance is to make the dev API as simple/elegant as possible and do all the hard performance work behind the scenes. It's not a LOB app where we can/should have simple, clear abstractions over complex logic ... it's a base library that should be magic and performant under a simple/elegant API, hiding the complexity from devs.

The main point of the "Lightweight types" proposal isn't performance. It's bundle size, if I understood my discussion with @alfonsogarciacaro correctly.
The performance gains are just a consequence of that point :)

Due to CPU cache sizes and IO bandwidth, the two are highly correlated 😄

@alfonsogarciacaro
Member Author

@gerardtoconnor Hehe, dev experience is very important; that's what we've been focusing on for the past two years, and thanks to that we now have awesome projects like Ionide, Elmish or Fulma. Now we can focus on performance 😉 Also, now that we have a wider user base, we can see better how Fable is used in the wild and which features we can drop: these optimizations will mean you cannot do type testing on obj anymore (and probably also no custom equality for DUs), but if this is OK for most users we can give it a go.

In any case, you're right about the difficulty of maintaining different code generations; we'll have to define a clear path for Fable 2.0 instead. About reflection: Fable only allows a small part of it, but it's useful for serialization and other tricks (like converting DUs into JS objects or dropdown options). This will be maintained (for types known at compile time, or that use PassGenerics), but it will be available through module functions so it can be removed by tree shaking if not invoked.

@michaelsg

michaelsg commented Jan 13, 2018

I am passing objects as messages between Fable and Akka.NET/F# on the server. Because the type of a message identifies the message, being able to include the type in the serialization, and being able to deserialize something in Fable without knowing in advance what the type might be, will remain important to me. After deserialization of a message on the client, its runtime type (:?, i.e. instanceof) is used to route the message.
It would be fine to have to opt into keeping the type info and the true JS/runtime type so this keeps working. The majority of types don't need this, however, so removing the proliferation of [<Pojo>] would be welcome.

@kunjee17

kunjee17 commented Jan 14, 2018

I just reached 1MB for the prod bundle... Now, that's OK with high-speed internet but not OK for anything else. And it is indeed a big size. So if anything helps reduce the size, I am up for it.

Just to give an idea, here is a case where I can't even use [<Pojo>]:

    type Validate = {
        IsValid : bool
        ErrMsg : string
    } with
        static member isValid =
            (fun m -> m.IsValid), (fun n m -> {m with IsValid = n})
        static member errMsg =
            (fun m -> m.ErrMsg), (fun n m -> {m with ErrMsg = n})

    type DocumentStatus = Select = 0 | All = 1 | New = 2 | InProcess = 3 | Processed = 4 | Error = 5

    type DocumentModel = {
        File : string
        FileName : string
        Label : string
        Status : DocumentStatus
        Patient : string
        Date : string
        Notes : string
    }with
        static member file = (fun m -> m.File), (fun n m -> {m with File = n})
        static member fileName = (fun m -> m.FileName), (fun n m -> {m with FileName = n})
        static member label = (fun m -> m.Label), (fun n m -> {m with Label = n})
        static member status = (fun m -> m.Status), (fun n m -> {m with Status = n})
        static member patient = (fun m -> m.Patient), (fun n m -> {m with Patient = n})
        static member date = (fun m -> m.Date), (fun n m -> {m with Date = n})
        static member note = (fun m -> m.Notes), (fun n m -> {m with Notes = n})


    type ErrorModel = {
        File : Validate
        FileName : Validate
        Label : Validate
        Patient : Validate
        Date : Validate
        Notes : Validate
    }with
        static member file = (fun m -> m.File), (fun n (m:ErrorModel) -> {m with File = n})
        static member fileName = (fun m -> m.FileName), (fun n (m:ErrorModel) -> {m with FileName = n})
        static member label = (fun m -> m.Label), (fun n (m:ErrorModel) -> {m with Label = n})
        static member patient = (fun m -> m.Patient), (fun n (m:ErrorModel) -> {m with Patient = n})
        static member date = (fun m -> m.Date), (fun n (m:ErrorModel) -> {m with Date = n})
        static member notes = (fun m -> m.Notes), (fun n (m:ErrorModel) -> {m with Notes = n})


    type Model = {
        Id : int
        ShowUploadModal : bool
        DocumentModel : DocumentModel
        ErrorModel : ErrorModel
        IsValid : bool
    }with
        static member documentModel = (fun m -> m.DocumentModel),(fun n m -> {m with DocumentModel = n})
        static member errorModel = (fun m -> m.ErrorModel), (fun n m -> {m with ErrorModel = n})

Now this is going to increase the size like anything. The static members are there to use lenses (Aether) to modify the record types.

And obviously many fish operators are missing, for readability...
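For readers unfamiliar with the pattern above: each of those static members is a lens, i.e. a getter/setter pair where the setter returns a new record instead of mutating. In JS terms, one pair boils down to something like this (a hypothetical sketch, not actual compiled output):

```javascript
// Hypothetical sketch of one lens pair from the Validate type above:
// element 0 reads the field, element 1 returns an updated copy (no mutation).
const isValidLens = [
  (m) => m.IsValid,
  (n, m) => ({ ...m, IsValid: n }),
];

const model = { IsValid: false, ErrMsg: "required" };
const [get, set] = isValidLens;
const updated = set(true, model);

console.log(get(updated)); // true
console.log(get(model));   // false — the original record is untouched
```

The size problem in the comment above comes from each such setter spelling out every field of the record, once per lens.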

@MangelMaxime
Member

@kunjee17 You don't have to use static members to get lens support. You can write them like:

type Model = {
    Id : int
}

let modelIdLens = ...

// or

module ModelLens =
    let id = ...

And then you'll be able to use the [<Pojo>] attribute.

Do you use a CDN in your application, or are you bundling everything, your code and libraries too?

@kunjee17

kunjee17 commented Jan 14, 2018

@MangelMaxime I am bundling everything as of now. Here are the libraries I am using:

"dependencies": {
    "@servicestack/client": "^1.0.0",
    "animate.css": "^3.5.2",
    "babel-polyfill": "^6.26.0",
    "babel-runtime": "^6.26.0",
    "flatpickr": "^4.0.6",
    "font-awesome": "^4.7.0",
    "izitoast": "^1.1.5",
    "lowdb": "^1.0.0",
    "preact": "^8.2.1",
    "preact-compat": "^3.16.0",
    "remotedev": "^0.2.7"
  },
  "devDependencies": {
    "@types/lowdb": "^0.15.0",
    "babel-core": "^6.26.0",
    "babel-loader": "^7.1.2",
    "babel-plugin-transform-runtime": "^6.23.0",
    "babel-preset-env": "^1.6.1",
    "bulma": "0.5.2",
    "bulma-extensions": "^0.5.2",
    "css-loader": "^0.28.7",
    "fable-loader": "^1.1.2",
    "fable-utils": "^1.0.6",
    "file-loader": "^0.11.1",
    "loglevel": "^1.5.0",
    "node-sass": "^4.5.3",
    "sass-loader": "^6.0.6",
    "style-loader": "^0.18.2",
    "webpack": "^3.6.0",
    "webpack-dev-server": "^2.8.2"
  }
}

I'll give a shot at what you said, putting things in modules instead of static members.

Update 1:
@MangelMaxime For the same code I shared in the comment above, the generated code is 50 lines shorter in the Fable REPL, and there were also no imports when I used modules with Pojo instead of static members on the type.

For the above code, here is the generated code:

import { setType } from "fable-core/Symbol";
import _Symbol from "fable-core/Symbol";
import { compareRecords, equalsRecords } from "fable-core/Util";
export class Validate {
  constructor(isValid, errMsg) {
    this.IsValid = isValid;
    this.ErrMsg = errMsg;
  }

  [_Symbol.reflection]() {
    return {
      type: "Test.Validate",
      interfaces: ["FSharpRecord", "System.IEquatable", "System.IComparable"],
      properties: {
        IsValid: "boolean",
        ErrMsg: "string"
      }
    };
  }

  Equals(other) {
    return equalsRecords(this, other);
  }

  CompareTo(other) {
    return compareRecords(this, other) | 0;
  }

  static get isValid() {
    return [function (m) {
      return m.IsValid;
    }, function (n, m_1) {
      return new Validate(n, m_1.ErrMsg);
    }];
  }

  static get errMsg() {
    return [function (m) {
      return m.ErrMsg;
    }, function (n, m_1) {
      return new Validate(m_1.IsValid, n);
    }];
  }

}
setType("Test.Validate", Validate);
export class DocumentModel {
  constructor(file, fileName, label, status, patient, date, notes) {
    this.File = file;
    this.FileName = fileName;
    this.Label = label;
    this.Status = status | 0;
    this.Patient = patient;
    this.Date = date;
    this.Notes = notes;
  }

  [_Symbol.reflection]() {
    return {
      type: "Test.DocumentModel",
      interfaces: ["FSharpRecord", "System.IEquatable", "System.IComparable"],
      properties: {
        File: "string",
        FileName: "string",
        Label: "string",
        Status: "number",
        Patient: "string",
        Date: "string",
        Notes: "string"
      }
    };
  }

  Equals(other) {
    return equalsRecords(this, other);
  }

  CompareTo(other) {
    return compareRecords(this, other) | 0;
  }

  static get file() {
    return [function (m) {
      return m.File;
    }, function (n, m_1) {
      return new DocumentModel(n, m_1.FileName, m_1.Label, m_1.Status, m_1.Patient, m_1.Date, m_1.Notes);
    }];
  }

  static get fileName() {
    return [function (m) {
      return m.FileName;
    }, function (n, m_1) {
      return new DocumentModel(m_1.File, n, m_1.Label, m_1.Status, m_1.Patient, m_1.Date, m_1.Notes);
    }];
  }

  static get label() {
    return [function (m) {
      return m.Label;
    }, function (n, m_1) {
      return new DocumentModel(m_1.File, m_1.FileName, n, m_1.Status, m_1.Patient, m_1.Date, m_1.Notes);
    }];
  }

  static get status() {
    return [function (m) {
      return m.Status;
    }, function (n, m_1) {
      return new DocumentModel(m_1.File, m_1.FileName, m_1.Label, n, m_1.Patient, m_1.Date, m_1.Notes);
    }];
  }

  static get patient() {
    return [function (m) {
      return m.Patient;
    }, function (n, m_1) {
      return new DocumentModel(m_1.File, m_1.FileName, m_1.Label, m_1.Status, n, m_1.Date, m_1.Notes);
    }];
  }

  static get date() {
    return [function (m) {
      return m.Date;
    }, function (n, m_1) {
      return new DocumentModel(m_1.File, m_1.FileName, m_1.Label, m_1.Status, m_1.Patient, n, m_1.Notes);
    }];
  }

  static get note() {
    return [function (m) {
      return m.Notes;
    }, function (n, m_1) {
      return new DocumentModel(m_1.File, m_1.FileName, m_1.Label, m_1.Status, m_1.Patient, m_1.Date, n);
    }];
  }

}
setType("Test.DocumentModel", DocumentModel);
export class ErrorModel {
  constructor(file, fileName, label, patient, date, notes) {
    this.File = file;
    this.FileName = fileName;
    this.Label = label;
    this.Patient = patient;
    this.Date = date;
    this.Notes = notes;
  }

  [_Symbol.reflection]() {
    return {
      type: "Test.ErrorModel",
      interfaces: ["FSharpRecord", "System.IEquatable", "System.IComparable"],
      properties: {
        File: Validate,
        FileName: Validate,
        Label: Validate,
        Patient: Validate,
        Date: Validate,
        Notes: Validate
      }
    };
  }

  Equals(other) {
    return equalsRecords(this, other);
  }

  CompareTo(other) {
    return compareRecords(this, other) | 0;
  }

  static get file() {
    return [function (m) {
      return m.File;
    }, function (n, m_1) {
      return new ErrorModel(n, m_1.FileName, m_1.Label, m_1.Patient, m_1.Date, m_1.Notes);
    }];
  }

  static get fileName() {
    return [function (m) {
      return m.FileName;
    }, function (n, m_1) {
      return new ErrorModel(m_1.File, n, m_1.Label, m_1.Patient, m_1.Date, m_1.Notes);
    }];
  }

  static get label() {
    return [function (m) {
      return m.Label;
    }, function (n, m_1) {
      return new ErrorModel(m_1.File, m_1.FileName, n, m_1.Patient, m_1.Date, m_1.Notes);
    }];
  }

  static get patient() {
    return [function (m) {
      return m.Patient;
    }, function (n, m_1) {
      return new ErrorModel(m_1.File, m_1.FileName, m_1.Label, n, m_1.Date, m_1.Notes);
    }];
  }

  static get date() {
    return [function (m) {
      return m.Date;
    }, function (n, m_1) {
      return new ErrorModel(m_1.File, m_1.FileName, m_1.Label, m_1.Patient, n, m_1.Notes);
    }];
  }

  static get notes() {
    return [function (m) {
      return m.Notes;
    }, function (n, m_1) {
      return new ErrorModel(m_1.File, m_1.FileName, m_1.Label, m_1.Patient, m_1.Date, n);
    }];
  }

}
setType("Test.ErrorModel", ErrorModel);
export class Model {
  constructor(id, showUploadModal, documentModel, errorModel, isValid) {
    this.Id = id | 0;
    this.ShowUploadModal = showUploadModal;
    this.DocumentModel = documentModel;
    this.ErrorModel = errorModel;
    this.IsValid = isValid;
  }

  [_Symbol.reflection]() {
    return {
      type: "Test.Model",
      interfaces: ["FSharpRecord", "System.IEquatable", "System.IComparable"],
      properties: {
        Id: "number",
        ShowUploadModal: "boolean",
        DocumentModel: DocumentModel,
        ErrorModel: ErrorModel,
        IsValid: "boolean"
      }
    };
  }

  Equals(other) {
    return equalsRecords(this, other);
  }

  CompareTo(other) {
    return compareRecords(this, other) | 0;
  }

  static get documentModel() {
    return [function (m) {
      return m.DocumentModel;
    }, function (n, m_1) {
      return new Model(m_1.Id, m_1.ShowUploadModal, n, m_1.ErrorModel, m_1.IsValid);
    }];
  }

  static get errorModel() {
    return [function (m) {
      return m.ErrorModel;
    }, function (n, m_1) {
      return new Model(m_1.Id, m_1.ShowUploadModal, m_1.DocumentModel, n, m_1.IsValid);
    }];
  }

}
setType("Test.Model", Model);

And here is the updated code:

open Fable.Core
open Fable.Core.JsInterop

[<Pojo>]
type Validate = {
           IsValid : bool
           ErrMsg : string
        }

module ValidateLens = 
    let isValid =
        (fun m -> m.IsValid), (fun n m -> {m with IsValid = n})
    let errMsg =
        (fun m -> m.ErrMsg), (fun n m -> {m with ErrMsg = n})

type DocumentStatus = Select = 0 | All = 1 | New = 2 | InProcess = 3 | Processed = 4 | Error = 5

[<Pojo>]
type DocumentModel = {
    File : string
    FileName : string
    Label : string
    Status : DocumentStatus
    Patient : string
    Date : string
    Notes : string
}

module DocumentModelLens =
    let file = (fun m -> m.File), (fun n m -> {m with File = n})
    let fileName = (fun m -> m.FileName), (fun n m -> {m with FileName = n})
    let label = (fun m -> m.Label), (fun n m -> {m with Label = n})
    let status = (fun m -> m.Status), (fun n m -> {m with Status = n})
    let patient = (fun m -> m.Patient), (fun n m -> {m with Patient = n})
    let date = (fun m -> m.Date), (fun n m -> {m with Date = n})
    let note = (fun m -> m.Notes), (fun n m -> {m with Notes = n})

[<Pojo>]
type ErrorModel = {
    File : Validate
    FileName : Validate
    Label : Validate
    Patient : Validate
    Date : Validate
    Notes : Validate
}

module ErrorModelLens =
    let file = (fun m -> m.File), (fun n (m:ErrorModel) -> {m with File = n})
    let fileName = (fun m -> m.FileName), (fun n (m:ErrorModel) -> {m with FileName = n})
    let label = (fun m -> m.Label), (fun n (m:ErrorModel) -> {m with Label = n})
    let patient = (fun m -> m.Patient), (fun n (m:ErrorModel) -> {m with Patient = n})
    let date = (fun m -> m.Date), (fun n (m:ErrorModel) -> {m with Date = n})
    let notes = (fun m -> m.Notes), (fun n (m:ErrorModel) -> {m with Notes = n})

[<Pojo>]
type Model = {
    Id : int
    ShowUploadModal : bool
    DocumentModel : DocumentModel
    ErrorModel : ErrorModel
    IsValid : bool
}

module ModelLens =
    let documentModel = (fun m -> m.DocumentModel),(fun n m -> {m with DocumentModel = n})
    let errorModel = (fun m -> m.ErrorModel), (fun n m -> {m with ErrorModel = n})

It generates more JavaScript-y code:

export const ValidateLens = function (__exports) {
  const isValid = __exports.isValid = [function (m) {
    return m.IsValid;
  }, function (n, m_1) {
    return {
      IsValid: n,
      ErrMsg: m_1.ErrMsg
    };
  }];
  const errMsg = __exports.errMsg = [function (m) {
    return m.ErrMsg;
  }, function (n, m_1) {
    return {
      IsValid: m_1.IsValid,
      ErrMsg: n
    };
  }];
  return __exports;
}({});
export const DocumentModelLens = function (__exports) {
  const file = __exports.file = [function (m) {
    return m.File;
  }, function (n, m_1) {
    return {
      File: n,
      FileName: m_1.FileName,
      Label: m_1.Label,
      Status: m_1.Status,
      Patient: m_1.Patient,
      Date: m_1.Date,
      Notes: m_1.Notes
    };
  }];
  const fileName = __exports.fileName = [function (m) {
    return m.FileName;
  }, function (n, m_1) {
    return {
      File: m_1.File,
      FileName: n,
      Label: m_1.Label,
      Status: m_1.Status,
      Patient: m_1.Patient,
      Date: m_1.Date,
      Notes: m_1.Notes
    };
  }];
  const label = __exports.label = [function (m) {
    return m.Label;
  }, function (n, m_1) {
    return {
      File: m_1.File,
      FileName: m_1.FileName,
      Label: n,
      Status: m_1.Status,
      Patient: m_1.Patient,
      Date: m_1.Date,
      Notes: m_1.Notes
    };
  }];
  const status = __exports.status = [function (m) {
    return m.Status;
  }, function (n, m_1) {
    return {
      File: m_1.File,
      FileName: m_1.FileName,
      Label: m_1.Label,
      Status: n,
      Patient: m_1.Patient,
      Date: m_1.Date,
      Notes: m_1.Notes
    };
  }];
  const patient = __exports.patient = [function (m) {
    return m.Patient;
  }, function (n, m_1) {
    return {
      File: m_1.File,
      FileName: m_1.FileName,
      Label: m_1.Label,
      Status: m_1.Status,
      Patient: n,
      Date: m_1.Date,
      Notes: m_1.Notes
    };
  }];
  const date = __exports.date = [function (m) {
    return m.Date;
  }, function (n, m_1) {
    return {
      File: m_1.File,
      FileName: m_1.FileName,
      Label: m_1.Label,
      Status: m_1.Status,
      Patient: m_1.Patient,
      Date: n,
      Notes: m_1.Notes
    };
  }];
  const note = __exports.note = [function (m) {
    return m.Notes;
  }, function (n, m_1) {
    return {
      File: m_1.File,
      FileName: m_1.FileName,
      Label: m_1.Label,
      Status: m_1.Status,
      Patient: m_1.Patient,
      Date: m_1.Date,
      Notes: n
    };
  }];
  return __exports;
}({});
export const ErrorModelLens = function (__exports) {
  const file_1 = __exports.file = [function (m) {
    return m.File;
  }, function (n, m_1) {
    return {
      File: n,
      FileName: m_1.FileName,
      Label: m_1.Label,
      Patient: m_1.Patient,
      Date: m_1.Date,
      Notes: m_1.Notes
    };
  }];
  const fileName_1 = __exports.fileName = [function (m) {
    return m.FileName;
  }, function (n, m_1) {
    return {
      File: m_1.File,
      FileName: n,
      Label: m_1.Label,
      Patient: m_1.Patient,
      Date: m_1.Date,
      Notes: m_1.Notes
    };
  }];
  const label_1 = __exports.label = [function (m) {
    return m.Label;
  }, function (n, m_1) {
    return {
      File: m_1.File,
      FileName: m_1.FileName,
      Label: n,
      Patient: m_1.Patient,
      Date: m_1.Date,
      Notes: m_1.Notes
    };
  }];
  const patient_1 = __exports.patient = [function (m) {
    return m.Patient;
  }, function (n, m_1) {
    return {
      File: m_1.File,
      FileName: m_1.FileName,
      Label: m_1.Label,
      Patient: n,
      Date: m_1.Date,
      Notes: m_1.Notes
    };
  }];
  const date_1 = __exports.date = [function (m) {
    return m.Date;
  }, function (n, m_1) {
    return {
      File: m_1.File,
      FileName: m_1.FileName,
      Label: m_1.Label,
      Patient: m_1.Patient,
      Date: n,
      Notes: m_1.Notes
    };
  }];
  const notes = __exports.notes = [function (m) {
    return m.Notes;
  }, function (n, m_1) {
    return {
      File: m_1.File,
      FileName: m_1.FileName,
      Label: m_1.Label,
      Patient: m_1.Patient,
      Date: m_1.Date,
      Notes: n
    };
  }];
  return __exports;
}({});
export const ModelLens = function (__exports) {
  const documentModel = __exports.documentModel = [function (m) {
    return m.DocumentModel;
  }, function (n, m_1) {
    return {
      Id: m_1.Id,
      ShowUploadModal: m_1.ShowUploadModal,
      DocumentModel: n,
      ErrorModel: m_1.ErrorModel,
      IsValid: m_1.IsValid
    };
  }];
  const errorModel = __exports.errorModel = [function (m) {
    return m.ErrorModel;
  }, function (n, m_1) {
    return {
      Id: m_1.Id,
      ShowUploadModal: m_1.ShowUploadModal,
      DocumentModel: m_1.DocumentModel,
      ErrorModel: n,
      IsValid: m_1.IsValid
    };
  }];
  return __exports;
}({});

So, what would be a good route to take? Can this become the default in Fable? @alfonsogarciacaro

@kunjee17

Update2

I am halfway through modifying all models with Pojo. It is reducing the code, but only by a little. I will keep posting updates.

@MangelMaxime
Member

I think it would be better to move this to a separate issue @kunjee17, and keep this one a general discussion :)

@kunjee17

@MangelMaxime sorry, got carried away. But yes, we need lightweight types, or guidelines on how to achieve them...

@alfonsogarciacaro
Member Author

alfonsogarciacaro commented Jan 14, 2018

@michaelsg I think we had a similar discussion when releasing Fable 0.7 or 1.0. To support deserialization with type info we added a global variable to hold a type dictionary. This is the only global variable in fable-core and I'd like to remove it for the next version, as I haven't heard of anybody else using this feature.

Generics will still be available so you could do something like the following to tag your JSON with the typename:

let inline toJsonWithTypeName (o: 'T) =
    toJson (typeof<'T>.FullName, o) // On server side you would use Newtonsoft.Json

let ofJsonWithTypeName json =
    let (typeName, parsed): string * obj = ofJson json
    if typeName = typeof<MyType1>.FullName then
        let x = inflate<MyType1> parsed
        // Do something with MyType1
        ()
    elif typeName = typeof<MyType2>.FullName then
        let x = inflate<MyType2> parsed
        // Do something with MyType2
        ()
    else
        failwithf "Cannot handle type %s" typeName

This code is not very elegant (we could probably improve it using active patterns or a type dictionary) but it's a small price to pay to improve the rest of your (and everybody else's) generated code. As Fable apps grow we're noticing that bundles are getting quite big, so we need to remove some runtime info in order to remain competitive against alternative solutions (like Reason).
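A minimal JS sketch of the type-dictionary idea mentioned above (the type names and handlers are invented for illustration). Since F# tuples compile to JS arrays, the serialized payload is a `[typeName, value]` pair, so a lookup table can replace the if/elif chain:

```javascript
// Hypothetical handlers keyed by full type name (names are made up here).
const handlers = {
  "Test.MyType1": (o) => "handled MyType1 with value " + o.Value,
  "Test.MyType2": (o) => "handled MyType2 with value " + o.Value,
};

function ofJsonWithTypeName(json) {
  // The payload is a [typeName, value] tuple serialized as a JS array.
  const [typeName, parsed] = JSON.parse(json);
  const handler = handlers[typeName];
  if (handler === undefined) {
    throw new Error(`Cannot handle type ${typeName}`);
  }
  return handler(parsed);
}
```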

One note of caution, inflate is currently not recursive. But I guess we can fix that if necessary.

Two other notes: unions and records will be plain objects, so you will be able to parse them with the native browser API JSON.parse. F# classes will still be translated to JS classes, so type testing will remain possible at runtime.
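For instance (reusing field names from the Model record above), a record compiled to a plain object survives a native round-trip with no reviver or inflate step:

```javascript
// A lightweight record is just a plain object...
const model = { Id: 1, ShowUploadModal: false, IsValid: true };

// ...so the browser's built-in JSON API round-trips it as-is.
const roundTripped = JSON.parse(JSON.stringify(model));
```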

@kunjee17 Yes, it's better to move the discussion to Fulma or open a new issue :) Please check the size of the gzipped bundle, as that is what is downloaded if your server gzips JS files, which it should be doing ;) Fable 2.0 will improve the situation, but if your bundle grows too much, at some point you may need to split your app.

@zpodlovics
Contributor

zpodlovics commented Aug 27, 2018

@alfonsogarciacaro I am afraid the property inlining will not help Fable 2 here, as it looks like the property itself introduces lots of additional operations and indirection (and will thrash the data and instruction caches) instead of a simple object field access: (https://gist.github.com/zpodlovics/a454ad7521945f367bc13b69a7230118#file-fable2-profile-txt-L362). And inlining is not always a win, as you usually have to execute a lot more code, instead of making a call that reuses the hot instruction & uop cache. Small, tight code (e.g. the K interpreter) can be really, really fast: (http://tech.marksblogg.com/billion-nyc-taxi-kdb.html).

Property related new functionality from the Fable2 profile:

 [C++ entry points]:
   ticks    cpp   total   name
  45882   43.0%   28.1%  v8::internal::Runtime_DefineDataPropertyInLiteral(int, v8::internal::Object**, v8::internal::Isolate*)

A few property related functionality overhead from the Fable2 profile:

   5137    3.1%    3.1%  v8::internal::JSObject::MigrateToMap(v8::internal::Handle<v8::internal::JSObject>, v8::internal::Handle<v8::internal::Map>, int)
[..]
   4623    2.8%    2.8%  v8::internal::Runtime_DefineDataPropertyInLiteral(int, v8::internal::Object**, v8::internal::Isolate*)
[..]
   3595    2.2%    2.2%  v8::internal::LookupIterator::WriteDataValue(v8::internal::Handle<v8::internal::Object>, bool)
[..]
   2734    1.7%    1.7%  v8::internal::LookupIterator::ApplyTransitionToDataProperty(v8::internal::Handle<v8::internal::JSReceiver>)
[..]
   2722    1.7%    1.7%  v8::internal::Map::TransitionToDataProperty(v8::internal::Handle<v8::internal::Map>, v8::internal::Handle<v8::internal::Name>, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::PropertyConstness, v8::internal::Object::StoreFromKeyed)
[..]
   2582    1.6%    1.6%  v8::internal::LookupIterator::PrepareTransitionToDataProperty(v8::internal::Handle<v8::internal::JSReceiver>, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::Object::StoreFromKeyed)
[..]
   2399    1.5%    1.5%  v8::internal::TransitionsAccessor::SearchTransition(v8::internal::Name*, v8::internal::PropertyKind, v8::internal::PropertyAttributes)
[..]
   2066    1.3%    1.3%  v8::internal::LookupIterator::State v8::internal::LookupIterator::LookupInRegularHolder<false>(v8::internal::Map*, v8::internal::JSReceiver*)
[..]
   1822    1.1%    1.1%  v8::internal::(anonymous namespace)::UpdateDescriptorForValue(v8::internal::Handle<v8::internal::Map>, int, v8::internal::PropertyConstness, v8::internal::Handle<v8::internal::Object>)
[..]
   1777    1.1%    1.1%  v8::internal::Object::AddDataProperty(v8::internal::LookupIterator*, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::ShouldThrow, v8::internal::Object::StoreFromKeyed)
[..]
   1743    1.1%    1.1%  v8::internal::Handle<v8::internal::PropertyArray> v8::internal::Factory::CopyArrayAndGrow<v8::internal::PropertyArray>(v8::internal::Handle<v8::internal::PropertyArray>, int, v8::internal::PretenureFlag)
[..]
   1587    1.0%    1.0%  v8::internal::LookupIterator::UpdateProtector() [clone .part.349]
[..]
   1283    0.8%    0.8%  v8::internal::JSObject::DefineOwnPropertyIgnoreAttributes(v8::internal::LookupIterator*, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::ShouldThrow, v8::internal::JSObject::AccessorInfoHandling)
[..]
   1270    0.8%    0.8%  int v8::internal::BinarySearch<(v8::internal::SearchMode)1, v8::internal::DescriptorArray>(v8::internal::DescriptorArray*, v8::internal::Name*, int, int*)
[..]
   1062    0.7%    0.7%  void v8::internal::LookupIterator::Start<false>()
[..]
   1049    0.6%    0.6%  void v8::internal::LookupIterator::NextInternal<false>(v8::internal::Map*, v8::internal::JSReceiver*)
[..]
    917    0.6%    0.6%  v8::internal::HeapObject::synchronized_set_map(v8::internal::Map*)

A few % here and another few % there, and this small list is suddenly responsible for ~22.6%.

@alfonsogarciacaro
Member Author

alfonsogarciacaro commented Aug 28, 2018

Thanks for the new data @zpodlovics. To be sure I'm understanding it correctly, is this the change in the generated JS code that is causing the performance issue?

// Fable 1
class Foo {
  get MyProperty() {
     return costlyOperation(this);
  }
}
x.MyProperty // Use example

// Fable 2
function Foo$$get_MyProperty(this$) {
  return costlyOperation(this$);
}
Foo$$get_MyProperty(x) // Use example

I was hoping that using a function (besides better compatibility with JS tree shaking and minification) would help the JS runtime make a static dispatch, but maybe I was wrong. However, in both cases costlyOperation is evaluated every time the property is accessed. We cannot change that because it's part of F# semantics. If you need to cache the value you need to do it explicitly or use an auto-implemented property: member val MyProperty = costlyOperation()
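As a sketch of that explicit-caching alternative in plain JS (costlyOperation and the class names are stand-ins, not Fable output): a getter re-runs the computation on every access, while caching in the constructor, which is roughly what `member val` amounts to, runs it once.

```javascript
// Counter to observe how often the "costly" computation runs.
let calls = 0;
function costlyOperation(self) { calls++; return self.seed * 2; }

class FooGetter {
  constructor(seed) { this.seed = seed; }
  // Evaluated on every property access, matching F# property semantics.
  get MyProperty() { return costlyOperation(this); }
}

class FooCached {
  constructor(seed) {
    this.seed = seed;
    // Evaluated once at construction, like `member val`.
    this.MyProperty = costlyOperation(this);
  }
}
```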

In any case I'm open to attach properties again to the prototype if this can improve performance. If we create a branch with this change, would you test it in your project?

@zpodlovics
Contributor

zpodlovics commented Aug 28, 2018

The performance hit, according to my profiling, comes from the Fable 1 object -> Fable 2 property change. There are no costly operations in the encoding/decoding path. Each call is nothing more than a primitive type read/write (+ sometimes a small (< 100 byte) array read/write) + a minimal transformation (usually integer bitness expansion/truncation) if needed. However, there are lots of fields in an encoder/decoder (not only primitive types, but also structures, including nested structures and variable-length data, built from these primitive types). The encoder/decoder is allocated once (with a nested encoder/decoder for each non-primitive field/structure), so message encoding/decoding can be done without allocation on the hot path.

A trivial header decoder/encoder with two fields:

Fable1 generated js:
https://gist.github.com/zpodlovics/96013570fe8130628004ee8ed05cbfb1

Fable2 generated js:
https://gist.github.com/zpodlovics/917dd235b1c726921d44ba16ade3271f

Flamegraphs (using the earlier benchmarks; is there any better way than this to share an SVG?):

Fable1 flamegraph:
https://gist.github.com/zpodlovics/8b7a3fa3890388f1ad1e233e85275283

Fable2 flamegraph:
https://gist.github.com/zpodlovics/96b4b4ff169e477d16328abe28138c94

The field read/write operation must be evaluated every time, as expected. In a concurrent environment (JS shared buffers + workers) not even local CPU caching is allowed; reads/writes are done against physical memory (volatile ops).

Thank you for your support! I am happy to help to improve Fable, so please create the branch and I will test it in my project.

Update:
"This article describes some key fundamentals that are common to all JavaScript engines — and not just V8, the engine the authors (Benedikt and Mathias) work on. As a JavaScript developer, having a deeper understanding of how JavaScript engines work helps you reason about the performance characteristics of your code."
https://mathiasbynens.be/notes/shapes-ics
https://mathiasbynens.be/notes/prototypes

@zpodlovics
Contributor

@alfonsogarciacaro Based on my earlier updates (Mathias Bynens' posts + conference videos) it seems that the performance hit comes from the multiple shape definitions and the shape walking (plus the additional indirection introduced by the standalone functions?). The defineProperty usage also seems problematic. In theory V8 has some inline cache tracing functionality (at least in debug mode, with --trace-ic); I will check it out later.

const object1 = { x: 1, y: 2 };

vs

const object = {};
object.x = 5;
object.y = 6;
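To make the contrast concrete, a small sketch of the behaviour described in those articles (values chosen arbitrarily): both functions below produce objects with the same fields, but V8 gives the literal its final hidden class in one step, while the incremental version walks one shape transition per assignment.

```javascript
function makeLiteral() {
  // One step: the object is born with its final shape {x, y}.
  return { x: 1, y: 2 };
}

function makeIncremental() {
  const o = {}; // shape: {}
  o.x = 1;      // transition to shape {x}
  o.y = 2;      // transition to shape {x, y}
  return o;
}
```

The observable values are identical; only the engine-internal shape history differs, which is what the profiles above are measuring.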

Probably also worth checking this:
https://medium.com/@bmeurer/surprising-polymorphism-in-react-applications-63015b50abc

@alfonsogarciacaro
Member Author

@zpodlovics I'm not very familiar with JS profiling tools, but I was aware that it's better to avoid polymorphism to help the JS engine optimize the code. However, I was hoping that the standalone functions could be optimized because they're always used with the same type.

In any case, I've created the properties branch to experiment with attaching getters and setters (without indices) to the prototype. Can you please check it out and test it? You can find instructions on how to test local builds here: https://github.com/fable-compiler/Fable/tree/properties#using-your-local-build-in-your-projects

Maybe @ncave can also give it a go?

@zpodlovics
Contributor

zpodlovics commented Aug 30, 2018

@alfonsogarciacaro My build fails with some Map/Set test error:

/usr/bin/yarn run test 
yarn run v1.5.1
$ node ../node_modules/mocha/bin/mocha ../build/tests --reporter dot -t 10000
[..]
  1447 passing (7s)
  24 failing
 TypeError: (0 , _Map.FSharpMap$$get_IsEmpty) is not a function
      at Context.<anonymous> (/tmp/fable-compiler/build/tests/MapTests.js:43:69)

Full error list:

https://gist.github.com/zpodlovics/a19bbfba7c1d47796c160604af3e90f3

Update: commented out the tests, now fighting with #1413

@alfonsogarciacaro
Member Author

Sorry, I'm an idiot. Forgot to rebuild fable-core before running the tests. Let me fix this.

@alfonsogarciacaro
Member Author

Still waiting for CI to finish, but hopefully it's fixed now 🙏

@alfonsogarciacaro
Member Author

alfonsogarciacaro commented Aug 30, 2018

Tests are passing but the REPL is not building :/ Anyway, can you please give it a go @zpodlovics? If it works in your case and shows a performance improvement, we can work on top of it to fix the remaining issues.

@zpodlovics
Contributor

@alfonsogarciacaro Thank you! I have managed to get it working (it's now based on fable-core.2.0.0-beta-002). While the code looks different, the performance stays roughly the same. I am afraid there is no other way than to dig deeper with the JS profiling tools.

0: 7.343000(s) time, 7343.000000(ns/msg) average latency, 136184.120931(msg/s) average throughput - message size: 146 - GC count: 1
0: 7.554000(s) time, 7554.000000(ns/msg) average latency, 132380.195923(msg/s) average throughput - message size: 146 - GC count: 1
1: 7.593000(s) time, 7593.000000(ns/msg) average latency, 131700.250230(msg/s) average throughput - message size: 146 - GC count: 1
1: 7.636000(s) time, 7636.000000(ns/msg) average latency, 130958.617077(msg/s) average throughput - message size: 146 - GC count: 1
2: 7.542000(s) time, 7542.000000(ns/msg) average latency, 132590.824715(msg/s) average throughput - message size: 146 - GC count: 1
2: 7.700000(s) time, 7700.000000(ns/msg) average latency, 129870.129870(msg/s) average throughput - message size: 146 - GC count: 1
3: 7.701000(s) time, 7701.000000(ns/msg) average latency, 129853.265810(msg/s) average throughput - message size: 146 - GC count: 1
3: 7.652000(s) time, 7652.000000(ns/msg) average latency, 130684.788291(msg/s) average throughput - message size: 146 - GC count: 1
4: 7.707000(s) time, 7707.000000(ns/msg) average latency, 129752.173349(msg/s) average throughput - message size: 146 - GC count: 1
4: 7.805000(s) time, 7805.000000(ns/msg) average latency, 128122.998078(msg/s) average throughput - message size: 146 - GC count: 1
5: 7.751000(s) time, 7751.000000(ns/msg) average latency, 129015.610889(msg/s) average throughput - message size: 146 - GC count: 1
5: 7.778000(s) time, 7778.000000(ns/msg) average latency, 128567.755207(msg/s) average throughput - message size: 146 - GC count: 1
6: 7.673000(s) time, 7673.000000(ns/msg) average latency, 130327.121074(msg/s) average throughput - message size: 146 - GC count: 1
6: 7.772000(s) time, 7772.000000(ns/msg) average latency, 128667.009779(msg/s) average throughput - message size: 146 - GC count: 1
7: 7.675000(s) time, 7675.000000(ns/msg) average latency, 130293.159609(msg/s) average throughput - message size: 146 - GC count: 1
7: 8.168000(s) time, 8168.000000(ns/msg) average latency, 122428.991185(msg/s) average throughput - message size: 146 - GC count: 1
8: 7.753000(s) time, 7753.000000(ns/msg) average latency, 128982.329421(msg/s) average throughput - message size: 146 - GC count: 1
8: 7.634000(s) time, 7634.000000(ns/msg) average latency, 130992.926382(msg/s) average throughput - message size: 146 - GC count: 1
9: 7.616000(s) time, 7616.000000(ns/msg) average latency, 131302.521008(msg/s) average throughput - message size: 146 - GC count: 1
9: 7.654000(s) time, 7654.000000(ns/msg) average latency, 130650.640188(msg/s) average throughput - message size: 146 - GC count: 1
10: 7.653000(s) time, 7653.000000(ns/msg) average latency, 130667.712008(msg/s) average throughput - message size: 146 - GC count: 1
10: 7.598000(s) time, 7598.000000(ns/msg) average latency, 131613.582522(msg/s) average throughput - message size: 146 - GC count: 1

https://gist.github.com/zpodlovics/266d35c2064af1ecad1733ecb21a2d56

@zpodlovics
Contributor

@alfonsogarciacaro Your generic parameters / interface explanation here (#1547 (comment)) gave me an idea: try it without any interface-related "magic" in Fable 2.

In the codec I replaced the interface type with a concrete implementation type, and voilà:

0: 3.847000(s) time, 3847.000000(ns/msg) average latency, 259942.812581(msg/s) average throughput - message size: 146 - GC count: 1
0: 3.824000(s) time, 3824.000000(ns/msg) average latency, 261506.276151(msg/s) average throughput - message size: 146 - GC count: 1
1: 3.822000(s) time, 3822.000000(ns/msg) average latency, 261643.118786(msg/s) average throughput - message size: 146 - GC count: 1
1: 3.821000(s) time, 3821.000000(ns/msg) average latency, 261711.593824(msg/s) average throughput - message size: 146 - GC count: 1
2: 3.827000(s) time, 3827.000000(ns/msg) average latency, 261301.280376(msg/s) average throughput - message size: 146 - GC count: 1
2: 3.855000(s) time, 3855.000000(ns/msg) average latency, 259403.372244(msg/s) average throughput - message size: 146 - GC count: 1
3: 3.925000(s) time, 3925.000000(ns/msg) average latency, 254777.070064(msg/s) average throughput - message size: 146 - GC count: 1
3: 3.833000(s) time, 3833.000000(ns/msg) average latency, 260892.251500(msg/s) average throughput - message size: 146 - GC count: 1
4: 3.825000(s) time, 3825.000000(ns/msg) average latency, 261437.908497(msg/s) average throughput - message size: 146 - GC count: 1
4: 3.827000(s) time, 3827.000000(ns/msg) average latency, 261301.280376(msg/s) average throughput - message size: 146 - GC count: 1
5: 3.841000(s) time, 3841.000000(ns/msg) average latency, 260348.867482(msg/s) average throughput - message size: 146 - GC count: 1
5: 3.818000(s) time, 3818.000000(ns/msg) average latency, 261917.234154(msg/s) average throughput - message size: 146 - GC count: 1
6: 3.803000(s) time, 3803.000000(ns/msg) average latency, 262950.302393(msg/s) average throughput - message size: 146 - GC count: 1
6: 3.794000(s) time, 3794.000000(ns/msg) average latency, 263574.064312(msg/s) average throughput - message size: 146 - GC count: 1
7: 3.825000(s) time, 3825.000000(ns/msg) average latency, 261437.908497(msg/s) average throughput - message size: 146 - GC count: 1
7: 3.836000(s) time, 3836.000000(ns/msg) average latency, 260688.216893(msg/s) average throughput - message size: 146 - GC count: 1
8: 3.853000(s) time, 3853.000000(ns/msg) average latency, 259538.022320(msg/s) average throughput - message size: 146 - GC count: 1
8: 3.838000(s) time, 3838.000000(ns/msg) average latency, 260552.371027(msg/s) average throughput - message size: 146 - GC count: 1
9: 3.851000(s) time, 3851.000000(ns/msg) average latency, 259672.812257(msg/s) average throughput - message size: 146 - GC count: 1
9: 3.833000(s) time, 3833.000000(ns/msg) average latency, 260892.251500(msg/s) average throughput - message size: 146 - GC count: 1
10: 3.849000(s) time, 3849.000000(ns/msg) average latency, 259807.742271(msg/s) average throughput - message size: 146 - GC count: 1
10: 3.904000(s) time, 3904.000000(ns/msg) average latency, 256147.540984(msg/s) average throughput - message size: 146 - GC count: 1

@MangelMaxime
Member

MangelMaxime commented Aug 31, 2018

Oh, really cool @zpodlovics. If I read correctly, you even have a performance improvement here.

Can I ask you to share a before/after code example related to your point:

without any interface related "magic" with Fable2

That way we can refer to it later, and perhaps other people will find it a useful "trick" :)

@zpodlovics
Contributor

zpodlovics commented Aug 31, 2018

@MangelMaxime Nothing special really, I just replaced the interface type with a concrete type (but you have to do more complex things than this to measure the call overhead correctly).

[<Interface>]
type IFunc1 =
    abstract member Func1: int32 -> int32

type MyFunc1() = 
  member this.Func1(x:int32) = x+2
  interface IFunc1 with
    member this.Func1(x: int32) = this.Func1(x)

[<Class>]
type TestInterface(instance: IFunc1) =
    member this.Call(x: int32) =
        instance.Func1(x)

type TestGeneric<'T when 'T:> IFunc1>(instance: 'T) =
    member this.Call(x: int32) =
        instance.Func1(x)

type TestSpecialized(instance: MyFunc1) =
    member this.Call(x: int32) =
        instance.Func1(x)

let myF = MyFunc1()
let myCI = TestInterface(myF)
let myCG = TestGeneric(myF)
let myCS = TestSpecialized(myF)

@et1975
Member

et1975 commented Aug 31, 2018

Interesting, dynamic dispatch rears its ugly head :)
SRTPs to the rescue!

@zpodlovics
Contributor

@et1975 @realvictorprm Inline interfaces to the rescue: fsharp/fslang-suggestions#641 and dotnet/fsharp#4726

@alfonsogarciacaro
Member Author

Thanks for investigating this further @zpodlovics. So the performance loss comes from interface casting (in Fable 1 there was no casting, as interface members were attached directly to the object) rather than from the standalone properties. This is somewhat expected, but we need to make users aware of it.

Another alternative could be to use object expressions to implement the interface instead of a type; these don't need casting.
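For illustration only (this is not Fable's actual generated code, and the function and member names are invented), the per-cast wrapping being described behaves roughly like this sketch, where every cast allocates a fresh wrapper object that forwards to the concrete implementation:

```javascript
// Hypothetical: each "cast" to the interface allocates a new wrapper,
// so a call-heavy path pays one allocation plus one extra indirection
// per cast, and the engine sees many short-lived object shapes.
function castToIFunc1(target) {
  return { Func1: (x) => target.Func1(x) };
}

// Concrete implementation (direct call, no wrapper).
const concrete = { Func1: (x) => x + 2 };
const viaInterface = castToIFunc1(concrete);
```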

@et1975
Member

et1975 commented Aug 31, 2018

Is it casting or dynamic dispatch? The distinction seems important to know.

@zpodlovics
Contributor

@et1975 It's more like shape creation and shape migration/transformation, at least based on the profiling/flamegraph.

@alfonsogarciacaro
Member Author

Yes, we don't have something like virtual tables, but Fable 2 transforms (actually wraps) objects when they are cast to interfaces. As this operation is implicit when passing an object to a function accepting the interface, it may happen too many times, hurting performance. Maybe we can find a way to detect these cases and either find a workaround or inform the user.

@Nhowka
Contributor

Nhowka commented Sep 1, 2018

Would it be possible to make it like a lazy value, calculated just once?

@alfonsogarciacaro
Member Author

Great idea @Nhowka! I've created an interface-map branch that adds a map to objects to cache the interface wrappers, so they don't need to be generated multiple times. @zpodlovics If you have time, can you please check out that branch and give it a go with your app (using interfaces as originally)?
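A rough sketch of what such wrapper caching could look like (all names are hypothetical, not the branch's actual code): the first cast builds the wrapper and stores it on the object, and subsequent casts return the cached one instead of allocating again.

```javascript
// Hypothetical hidden key for the per-object wrapper cache.
const WRAPPERS = Symbol("interfaceWrappers");

function castToIFunc1(target) {
  // Lazily attach a wrapper cache to the object itself.
  let cache = target[WRAPPERS];
  if (cache === undefined) {
    cache = new Map();
    target[WRAPPERS] = cache;
  }
  // Build the interface wrapper only on the first cast.
  let wrapper = cache.get("IFunc1");
  if (wrapper === undefined) {
    wrapper = { Func1: (x) => target.Func1(x) };
    cache.set("IFunc1", wrapper);
  }
  return wrapper;
}
```

As the follow-up benchmark shows, the map lookup itself has a cost, so caching like this is not automatically a win on hot paths.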

@zpodlovics
Contributor

@alfonsogarciacaro It looks roughly the same:

0: 7.440000(s) time, 7440.000000(ns/msg) average latency, 134408.602151(msg/s) average throughput - message size: 146 - GC count: 1
0: 7.578000(s) time, 7578.000000(ns/msg) average latency, 131960.939562(msg/s) average throughput - message size: 146 - GC count: 1
1: 7.634000(s) time, 7634.000000(ns/msg) average latency, 130992.926382(msg/s) average throughput - message size: 146 - GC count: 1
1: 7.642000(s) time, 7642.000000(ns/msg) average latency, 130855.796912(msg/s) average throughput - message size: 146 - GC count: 1
2: 7.903000(s) time, 7903.000000(ns/msg) average latency, 126534.227509(msg/s) average throughput - message size: 146 - GC count: 1
2: 7.785000(s) time, 7785.000000(ns/msg) average latency, 128452.151574(msg/s) average throughput - message size: 146 - GC count: 1
3: 7.620000(s) time, 7620.000000(ns/msg) average latency, 131233.595801(msg/s) average throughput - message size: 146 - GC count: 1
3: 7.655000(s) time, 7655.000000(ns/msg) average latency, 130633.572828(msg/s) average throughput - message size: 146 - GC count: 1
4: 7.612000(s) time, 7612.000000(ns/msg) average latency, 131371.518655(msg/s) average throughput - message size: 146 - GC count: 1
4: 7.629000(s) time, 7629.000000(ns/msg) average latency, 131078.778346(msg/s) average throughput - message size: 146 - GC count: 1
5: 7.567000(s) time, 7567.000000(ns/msg) average latency, 132152.768601(msg/s) average throughput - message size: 146 - GC count: 1
5: 7.676000(s) time, 7676.000000(ns/msg) average latency, 130276.185513(msg/s) average throughput - message size: 146 - GC count: 1
6: 7.585000(s) time, 7585.000000(ns/msg) average latency, 131839.156229(msg/s) average throughput - message size: 146 - GC count: 1
6: 7.679000(s) time, 7679.000000(ns/msg) average latency, 130225.289751(msg/s) average throughput - message size: 146 - GC count: 1
7: 7.705000(s) time, 7705.000000(ns/msg) average latency, 129785.853342(msg/s) average throughput - message size: 146 - GC count: 1
7: 7.665000(s) time, 7665.000000(ns/msg) average latency, 130463.144162(msg/s) average throughput - message size: 146 - GC count: 1
8: 7.767000(s) time, 7767.000000(ns/msg) average latency, 128749.839063(msg/s) average throughput - message size: 146 - GC count: 1
8: 7.771000(s) time, 7771.000000(ns/msg) average latency, 128683.567108(msg/s) average throughput - message size: 146 - GC count: 1
9: 8.000000(s) time, 8000.000000(ns/msg) average latency, 125000.000000(msg/s) average throughput - message size: 146 - GC count: 1
9: 7.819000(s) time, 7819.000000(ns/msg) average latency, 127893.592531(msg/s) average throughput - message size: 146 - GC count: 1
10: 7.670000(s) time, 7670.000000(ns/msg) average latency, 130378.096480(msg/s) average throughput - message size: 146 - GC count: 1
10: 7.632000(s) time, 7632.000000(ns/msg) average latency, 131027.253669(msg/s) average throughput - message size: 146 - GC count: 1

The flamegraph also changed a bit:
https://gist.github.com/zpodlovics/8990c388b390aef83ef357d607d8683c

@alfonsogarciacaro
Member Author

Hmm, so maybe the cost of using a map outweighs the benefit of having a cache. Well, the important thing is that we have identified a potential bottleneck, so we can document it and users can try replacing interfaces with concrete types, as in your case.

@ncave Maybe we could check with the REPL in case it makes a difference there. I've fixed the bundle, but the bench project probably needs fixing too.

@zpodlovics
Contributor

@alfonsogarciacaro It seems that any introduced indirection will trash performance in a call-heavy environment. However, rewriting lots of existing and future shared (.NET + Fable) code by hand is not really a solution.

Generic type parameters / flexible types should allow this kind of specialization (roughly the same way .NET does):
#1547 (comment)

@zpodlovics
Contributor

zpodlovics commented Sep 6, 2018

Update: the new attach-interfaces branch now has roughly the same performance:

No interface (interface names replaced with the concrete implementation type):

0: 3.703000(s) time, 3703.000000(ns/msg) average latency, 270051.309749(msg/s) average throughput - message size: 146 - GC count: 1
0: 3.776000(s) time, 3776.000000(ns/msg) average latency, 264830.508475(msg/s) average throughput - message size: 146 - GC count: 1
1: 3.716000(s) time, 3716.000000(ns/msg) average latency, 269106.566200(msg/s) average throughput - message size: 146 - GC count: 1
1: 3.734000(s) time, 3734.000000(ns/msg) average latency, 267809.319764(msg/s) average throughput - message size: 146 - GC count: 1
2: 3.759000(s) time, 3759.000000(ns/msg) average latency, 266028.198989(msg/s) average throughput - message size: 146 - GC count: 1
2: 3.800000(s) time, 3800.000000(ns/msg) average latency, 263157.894737(msg/s) average throughput - message size: 146 - GC count: 1
3: 3.726000(s) time, 3726.000000(ns/msg) average latency, 268384.326355(msg/s) average throughput - message size: 146 - GC count: 1
3: 3.704000(s) time, 3704.000000(ns/msg) average latency, 269978.401728(msg/s) average throughput - message size: 146 - GC count: 1
4: 3.695000(s) time, 3695.000000(ns/msg) average latency, 270635.994587(msg/s) average throughput - message size: 146 - GC count: 1
4: 3.704000(s) time, 3704.000000(ns/msg) average latency, 269978.401728(msg/s) average throughput - message size: 146 - GC count: 1
5: 3.702000(s) time, 3702.000000(ns/msg) average latency, 270124.257158(msg/s) average throughput - message size: 146 - GC count: 1
5: 3.702000(s) time, 3702.000000(ns/msg) average latency, 270124.257158(msg/s) average throughput - message size: 146 - GC count: 1
6: 3.702000(s) time, 3702.000000(ns/msg) average latency, 270124.257158(msg/s) average throughput - message size: 146 - GC count: 1
6: 3.702000(s) time, 3702.000000(ns/msg) average latency, 270124.257158(msg/s) average throughput - message size: 146 - GC count: 1
7: 3.735000(s) time, 3735.000000(ns/msg) average latency, 267737.617135(msg/s) average throughput - message size: 146 - GC count: 1
7: 3.834000(s) time, 3834.000000(ns/msg) average latency, 260824.204486(msg/s) average throughput - message size: 146 - GC count: 1
8: 3.782000(s) time, 3782.000000(ns/msg) average latency, 264410.364886(msg/s) average throughput - message size: 146 - GC count: 1
8: 3.774000(s) time, 3774.000000(ns/msg) average latency, 264970.853206(msg/s) average throughput - message size: 146 - GC count: 1
9: 3.714000(s) time, 3714.000000(ns/msg) average latency, 269251.480883(msg/s) average throughput - message size: 146 - GC count: 1
9: 3.779000(s) time, 3779.000000(ns/msg) average latency, 264620.269913(msg/s) average throughput - message size: 146 - GC count: 1
10: 3.797000(s) time, 3797.000000(ns/msg) average latency, 263365.815117(msg/s) average throughput - message size: 146 - GC count: 1
10: 3.804000(s) time, 3804.000000(ns/msg) average latency, 262881.177708(msg/s) average throughput - message size: 146 - GC count: 1

perf stat:

 Performance counter stats for '/usr/local/stow/node-v10.9.0-linux-x64/bin/node out/App.js':

      82064.278305      task-clock (msec)         #    0.995 CPUs utilized          
            87,527      context-switches          #    0.001 M/sec                  
               722      cpu-migrations            #    0.009 K/sec                  
             5,982      page-faults               #    0.073 K/sec                  
   280,989,430,699      cycles                    #    3.424 GHz                      (83.33%)
     6,108,103,111      stalled-cycles-frontend   #    2.17% frontend cycles idle     (83.35%)
    64,063,336,685      stalled-cycles-backend    #   22.80% backend cycles idle      (33.32%)
   513,504,567,372      instructions              #    1.83  insn per cycle         
                                                  #    0.12  stalled cycles per insn  (49.98%)
   112,707,374,241      branches                  # 1373.404 M/sec                    (66.64%)
       314,186,288      branch-misses             #    0.28% of all branches          (83.32%)

      82.496763182 seconds time elapsed

Interface (`type TestInterface(instance: IFunc1)` style) using the `attach-interfaces` branch:

0: 3.610000(s) time, 3610.000000(ns/msg) average latency, 277008.310249(msg/s) average throughput - message size: 146 - GC count: 1
0: 3.733000(s) time, 3733.000000(ns/msg) average latency, 267881.060809(msg/s) average throughput - message size: 146 - GC count: 1
1: 3.747000(s) time, 3747.000000(ns/msg) average latency, 266880.170803(msg/s) average throughput - message size: 146 - GC count: 1
1: 3.702000(s) time, 3702.000000(ns/msg) average latency, 270124.257158(msg/s) average throughput - message size: 146 - GC count: 1
2: 3.757000(s) time, 3757.000000(ns/msg) average latency, 266169.816343(msg/s) average throughput - message size: 146 - GC count: 1
2: 3.776000(s) time, 3776.000000(ns/msg) average latency, 264830.508475(msg/s) average throughput - message size: 146 - GC count: 1
3: 3.712000(s) time, 3712.000000(ns/msg) average latency, 269396.551724(msg/s) average throughput - message size: 146 - GC count: 1
3: 3.676000(s) time, 3676.000000(ns/msg) average latency, 272034.820457(msg/s) average throughput - message size: 146 - GC count: 1
4: 3.777000(s) time, 3777.000000(ns/msg) average latency, 264760.391845(msg/s) average throughput - message size: 146 - GC count: 1
4: 3.743000(s) time, 3743.000000(ns/msg) average latency, 267165.375367(msg/s) average throughput - message size: 146 - GC count: 1
5: 3.824000(s) time, 3824.000000(ns/msg) average latency, 261506.276151(msg/s) average throughput - message size: 146 - GC count: 1
5: 3.767000(s) time, 3767.000000(ns/msg) average latency, 265463.233342(msg/s) average throughput - message size: 146 - GC count: 1
6: 3.796000(s) time, 3796.000000(ns/msg) average latency, 263435.194942(msg/s) average throughput - message size: 146 - GC count: 1
6: 3.721000(s) time, 3721.000000(ns/msg) average latency, 268744.961032(msg/s) average throughput - message size: 146 - GC count: 1
7: 3.824000(s) time, 3824.000000(ns/msg) average latency, 261506.276151(msg/s) average throughput - message size: 146 - GC count: 1
7: 3.811000(s) time, 3811.000000(ns/msg) average latency, 262398.320651(msg/s) average throughput - message size: 146 - GC count: 1
8: 3.811000(s) time, 3811.000000(ns/msg) average latency, 262398.320651(msg/s) average throughput - message size: 146 - GC count: 1
8: 3.807000(s) time, 3807.000000(ns/msg) average latency, 262674.021539(msg/s) average throughput - message size: 146 - GC count: 1
9: 3.866000(s) time, 3866.000000(ns/msg) average latency, 258665.287118(msg/s) average throughput - message size: 146 - GC count: 1
9: 3.822000(s) time, 3822.000000(ns/msg) average latency, 261643.118786(msg/s) average throughput - message size: 146 - GC count: 1
10: 3.861000(s) time, 3861.000000(ns/msg) average latency, 259000.259000(msg/s) average throughput - message size: 146 - GC count: 1
10: 3.817000(s) time, 3817.000000(ns/msg) average latency, 261985.852764(msg/s) average throughput - message size: 146 - GC count: 1

perf stat:

 Performance counter stats for '/usr/local/stow/node-v10.9.0-linux-x64/bin/node ../out/App.js':

      82623.523614      task-clock (msec)         #    0.994 CPUs utilized          
            89,557      context-switches          #    0.001 M/sec                  
               841      cpu-migrations            #    0.010 K/sec                  
             6,313      page-faults               #    0.076 K/sec                  
   281,714,381,415      cycles                    #    3.410 GHz                      (83.33%)
     6,714,688,151      stalled-cycles-frontend   #    2.38% frontend cycles idle     (83.34%)
    64,922,007,973      stalled-cycles-backend    #   23.05% backend cycles idle      (33.33%)
   513,781,341,985      instructions              #    1.82  insn per cycle         
                                                  #    0.13  stalled cycles per insn  (49.98%)
   112,755,544,325      branches                  # 1364.691 M/sec                    (66.64%)
       330,090,577      branch-misses             #    0.29% of all branches          (83.32%)

      83.108369641 seconds time elapsed

#1547 (comment)

@MangelMaxime
Member

@alfonsogarciacaro Can we close this issue?
