TS: use camel case in object field names #41

Open · wants to merge 1 commit into master

Conversation

@kklas (Contributor) commented Sep 2, 2020

We should use camel case in the interfaces generated from the schema, as that's the TS/JS convention. This implements a decoder and an encoder that convert fields in API objects between snake_case and camelCase.

Resolves #40
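
A minimal sketch, assuming a generic recursive converter of the kind described above (illustrative only, not the PR's actual code; the helper names are invented):

function snakeToCamel(s: string): string {
  return s.replace(/_([a-z])/g, (_m: string, c: string) => c.toUpperCase())
}

function camelizeKeys(value: unknown): unknown {
  // recurse into arrays and plain objects, rewriting snake_case keys to camelCase
  if (Array.isArray(value)) {
    return value.map(camelizeKeys)
  }
  if (value !== null && typeof value === 'object') {
    const out: Record<string, unknown> = {}
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[snakeToCamel(k)] = camelizeKeys(v)
    }
    return out
  }
  return value
}

// camelizeKeys({ created_at: '2020-09-02T00:00:00Z' }) => { createdAt: '2020-09-02T00:00:00Z' }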

@tj (Member) commented Sep 4, 2020

Awesome! I'll take a look soon

@tj (Member) commented Sep 8, 2020

Looks pretty good to me. I think the only tricky part is when, say, a nested object passed to a call should not be camel-cased. In my case the user_id in add_events({ events: [{ fields: { user_id: xxx } }] }) should be left as-is, so I think just blindly recursing could lead to errors.

We could generate some functions specific to the generated types, but I'm still not sure it's worth it.
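
To make the concern concrete, a hypothetical illustration (values invented), building on the generic converter sketched above:

// Blind recursion would also rewrite the user-defined key inside `fields`:
const input = { events: [{ fields: { user_id: 123 } }] }
// a naive recursive encoder would send { events: [{ fields: { userId: 123 } }] },
// but user_id is user data, not a schema field, and must be left untouched.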

@kklas (Contributor, Author) commented Sep 8, 2020

Hmm, probably not in that case. But should it be possible to pass object fields that aren't defined in the schema through the API in the first place? I assumed the schema would prevent that. I.e., such a use case could be covered with:
add_events({ events: [ [{ field: 'user_id', value: xxx }] ] })

@tj (Member) commented Sep 8, 2020

At the moment it allows arbitrary objects:

        {
          "name": "fields",
          "description": "the log fields.",
          "type": "object"
        },

Which I'm using for user-defined event fields. I hadn't thought about that before, but to convert between the cases we'd pretty much have to generate the conversion functions from the types, I guess. It feels a bit ugly, though.

@kklas (Contributor, Author) commented Sep 8, 2020

I agree about the conversion functions, but I'd argue the schema shouldn't allow arbitrary fields, because the whole benefit of having a schema is that you know the fields and types.
I do see a point for the object types, though, as sometimes you will want arbitrary fields, but those should be defined as "maps" or something and not mixed with schema fields, IMO. Maps could be map[string]interface{} in Go and Maps in ES6, for example, and they would allow this key/value/type flexibility for the cases where it's needed.

The downside, as you mentioned, is that the decoder/encoder would need knowledge of the schema, which means more code, but I think we should consider it.

I'm willing to have a hack at it for TS/JS; maybe there's an elegant way to do it. But I'm not sure how you imagined it. Is your plan to keep it absolutely simple even considering these trade-offs? Personally I'm OK with some boilerplate.
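
A hypothetical sketch of how such a map-typed field could look in the generated TypeScript types (type and field names are invented for illustration; the current schema just uses a plain "object" type):

// Schema-defined fields are camelCased by the decoder, while user data is
// confined to an explicit map-typed field whose keys are never rewritten.
interface Event {
  timestamp?: Date                  // schema field: converted
  fields?: Record<string, unknown>  // "map" field: keys passed through as-is
}

interface AddEventsInput {
  events?: Event[]
}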

@tj (Member) commented Sep 8, 2020

Snake case doesn't bother me here, but if we're solving for timestamps anyway we might as well. I still kind of like plain objects over using Maps personally, more from a UX perspective; from a technical perspective I definitely agree.

I suppose the simplest would just be some per-type encode/decode functions. It could be fancier, but basically:

// Encode a request body: copy only the schema-defined input fields.
function encodeGetAlertsInput(o) {
  const v = {}

  if (Object.hasOwnProperty.call(o, 'projectId')) {
    v.projectId = o.projectId
  }

  return v
}

// Decode a response body, delegating nested types to their own decoders.
function decodeGetAlertsOutput(o) {
  const v = {}
  // ... map o.alerts with decodeAlert blah blah
  return v
}

// Decode a single Alert, renaming wire fields to camelCase.
function decodeAlert(o) {
  const v = {}

  if (Object.hasOwnProperty.call(o, 'created_at')) {
    v.createdAt = o.created_at
  }

  if (Object.hasOwnProperty.call(o, 'updated_at')) {
    v.updatedAt = o.updated_at
  }

  // ... other fields

  return v
}

And sneak the timestamp -> Date conversion in there as well. Any thoughts on a cleaner solution there? Since we're generating, we might as well make it as performant as possible, I guess.
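
One possible shape for that, shown as a revision of the decodeAlert sketch above (assuming timestamps arrive as RFC 3339 strings on the wire, which may not match the actual API):

function decodeAlert(o: Record<string, unknown>) {
  const v: { createdAt?: Date; updatedAt?: Date } = {}

  if (Object.hasOwnProperty.call(o, 'created_at')) {
    v.createdAt = new Date(o.created_at as string)
  }

  if (Object.hasOwnProperty.call(o, 'updated_at')) {
    v.updatedAt = new Date(o.updated_at as string)
  }

  // ... other fields

  return v
}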

@kklas (Contributor, Author) commented Sep 15, 2020

I have no strong preference with regard to objects vs maps, but I agree objects feel more natural. We can keep objects for now and maybe later add an option to the generator to generate maps if anyone ever needs it.

I guess the only alternative to per-type encode/decode functions is to paste the schema into the generated file and have just a single decoder and encoder that traverse it.
We need to traverse the schema one way or another anyway, because we need to handle nested types (inside arrays and objects); the only difference is whether we do it in the Go generator or at client run time.

It's not clear to me which of these approaches is better. I'd lean toward whatever is easier to implement, and to me the "schema in JS" approach seems easier, as I can imagine the Go codegen for the other approach getting nasty.
Looking at a few examples, traversing the schema in JS shouldn't be too difficult even for the more complex ones.
But maybe I'm missing something. Any thoughts on this?
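
A rough sketch of the "schema in JS" idea (the FieldSchema shape and helpers are invented for illustration, not how the generator actually represents the schema): the generator embeds a compact schema description and a single decoder walks it at run time.

// One generic decoder driven by an embedded schema instead of per-type functions.
type FieldSchema =
  | { kind: 'scalar' }                                       // passed through
  | { kind: 'timestamp' }                                    // converted to Date
  | { kind: 'object' }                                       // user data, untouched
  | { kind: 'array'; items: FieldSchema }
  | { kind: 'struct'; fields: Record<string, FieldSchema> }  // keys are snake_case

const snakeToCamel = (s: string) =>
  s.replace(/_([a-z])/g, (_m: string, c: string) => c.toUpperCase())

function decode(schema: FieldSchema, value: unknown): unknown {
  switch (schema.kind) {
    case 'scalar':
    case 'object':
      return value
    case 'timestamp':
      return new Date(value as string)
    case 'array':
      return (value as unknown[]).map(v => decode(schema.items, v))
    case 'struct': {
      const src = value as Record<string, unknown>
      const out: Record<string, unknown> = {}
      for (const [name, field] of Object.entries(schema.fields)) {
        if (Object.hasOwnProperty.call(src, name)) {
          out[snakeToCamel(name)] = decode(field, src[name])
        }
      }
      return out
    }
  }
}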

@tj (Member) commented Sep 24, 2020

The generated approach is what I did for validation in the Go server portion, and it works pretty nicely: https://github.com/apex/rpc/blob/master/generators/gotypes/gotypes.go#L155-L168.

The schemas can get pretty huge, so I don't think that would be ideal, especially for the browser JS client. I guess the JS client would be another reason to consider just leaving the fields as-is (the timestamp stuff still applies, though), but the extra bulk isn't really worth it there just for camel case, IMO.

Successfully merging this pull request may close these issues.

[TS types] can we have camel casing for field names?