
Time quantization #14

Closed

laughedelic opened this issue May 19, 2016 · 8 comments

Comments

@laughedelic

laughedelic commented May 19, 2016

Based on asciinema/asciinema#156, which was recently merged, I want to propose the following feature: the --max-wait option could take a sequence of maximum wait bounds (or there could be a separate option for that). For example:

asciinema rec -w 0.4 0.8 1 3

(the format is open to discussion; something with a non-space separator is probably easier to parse: 0.4,0.8,1,3)

This would mean that

  • time delays between 400ms and 800ms are cut to 400ms
  • time delays between 800ms and 1s are cut to 800ms
  • time delays between 1s and 3s are cut to 1s
  • time delays over 3s are cut to 3s (as with the single-valued max-wait)

This would allow finer adjustments to the time flow of the recording, such as minimizing typing delays (making playback more fluent) while still keeping short and long pauses (to point something out).
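As a sketch of the rule above (the function name and shape are mine, not a proposed API), the mapping from a raw delay to its quantized value might look like:

```python
def quantize(delay, bounds):
    """Cut `delay` down to the largest bound it meets or exceeds.

    `bounds` is the ascending list from the hypothetical
    `-w 0.4 0.8 1 3` invocation; delays below the smallest
    bound are left unchanged.
    """
    cut = delay
    for b in sorted(bounds):
        if delay >= b:
            cut = b
        else:
            break
    return cut
```

For example, with bounds `[0.4, 0.8, 1, 3]`, a 2-second delay falls between 1s and 3s and is cut to 1s.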

What do you think about this feature? I think it's not hard to do and I could implement it.

@laughedelic
Author

@sickill any opinion about this?

@ku1ik
Contributor

ku1ik commented Jun 6, 2016

That's an interesting idea. I have a feeling that this would be used by a very small number of users. -w as it is today is already very useful, but from what I know not many people use it - people just stick with the defaults.

I'm on the fence here... On one hand I really like the idea, on the other I know it would add a fair amount of code, code that will need to be maintained for probably 1% of users (prove me wrong on the estimates here ;))

What about this:

I've had this idea for a while to create a separate set of tools for processing asciicasts. Stuff like speeding a recording up 2x, applying the -w algorithm to an already recorded asciicast file, locating and erasing arbitrary text (visible passwords).

We could have a mechanism for adding extra commands to asciinema, done the same way as in git - when you run asciinema foo it checks if foo is its internal command, if not it looks for asciinema-foo binary in $PATH and runs it instead. You could write extra commands in any language.
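A rough sketch of that git-style lookup, assuming Python and a made-up set of internal command names (the real command set and dispatch details would differ):

```python
import shutil

# Illustrative only; not the actual asciinema command list.
INTERNAL_COMMANDS = {"rec", "play", "cat", "upload", "auth"}

def resolve_command(name):
    """Decide how `asciinema <name>` would run: as an internal command,
    as an external `asciinema-<name>` binary found on $PATH (the git
    convention), or not at all."""
    if name in INTERNAL_COMMANDS:
        return ("internal", name)
    path = shutil.which(f"asciinema-{name}")
    if path is not None:
        return ("external", path)
    return ("unknown", name)
```

The nice property of this convention is that extra commands can be written in any language, since the dispatcher only needs an executable on `$PATH`.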

Given the above, we could create asciinema-quantize, or a more general asciinema-process, which could support various switches (like an improved -w, maybe -s to change the speed of the whole recording). It would read input JSON, process it according to the options, and write output JSON. If it proved popular it could be promoted to an internal command (or shipped in asciinema packages as asciinema-* binaries).

I'm open to other suggestions!

@laughedelic
Author

Actually, I was thinking about the same: doing it externally, just processing the recorded JSON file. The other thing is that I don't have much time right now to learn Go, but if this external-command integration worked with any executable following the naming convention, it could make extending asciinema much easier.

Btw, I guess you know about asciinema2gif, which is quite useful and just works. I saw some discussion here about GIF conversion and that it's not in the plans, and I perfectly understand that, but since users may have different needs, such extensibility could be a very nice way to let them fulfill their specific needs themselves.

@ku1ik
Contributor

ku1ik commented Jun 7, 2016

Glad to hear a similar opinion. Are you more familiar with Python? Which language would you implement this in?

@laughedelic
Author

@sickill well, actually I was using jq with some primitive filters so far. For example:

jq '.stdout |= map(.[0] *= 0.5)' record.json > record.twice-faster.json

will produce a JSON file that asciinema play plays back twice as fast. Or

jq '.stdout |= map(.[0] |= ([., 1.234] | min))' record.json > record.cut.json

is the same as setting the max-wait time to 1.234. I'm not a jq guru, so it can probably be done more simply, but this is quite straightforward (if one is familiar with jq in general) and it works. Implementing the proposed time quantization feature this way is a bit more involved, but still doable.

This can, of course, be wrapped in any kind of shell script exposing the desired options. But if you want tighter integration, it can also be done in a very similar manner using JMESPath, which has implementations in various languages, including Python and Go.
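For comparison, the two jq filters above could be expressed in Python like this (a sketch assuming the asciicast v1 layout, where `stdout` is a list of `[delay, text]` pairs):

```python
def transform(cast, speed=1.0, max_wait=None):
    """Scale every inter-frame delay by 1/speed and optionally cap it
    at max_wait, mirroring the two jq one-liners above.

    `cast` is a parsed asciicast v1 dict whose "stdout" key holds
    [delay, text] pairs; it is modified in place and returned.
    """
    for frame in cast["stdout"]:
        frame[0] /= speed
        if max_wait is not None:
            frame[0] = min(frame[0], max_wait)
    return cast
```

Wrapping this in a small CLI would give roughly the `asciinema-process` tool sketched earlier in the thread.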

@cirocosta

Hey @laughedelic ,

I had pretty much the same requirement as yours, so I ended up creating this https://github.com/cirocosta/asciinema-edit

It takes an asciinema cast (v2) and then mutates the event stream according to what you need.

I just finished adding quantization in the way you described, btw 👍

Hope it's useful for you!

Thx!

@laughedelic
Author

Hi @cirocosta! Thanks for pinging me! It's awesome that you've made it into a tool and that it works with the v2 format. I'll try it next time I record an asciinema cast.

@bkil

bkil commented Jul 24, 2022

If you also prescribed that any delay below the shortest specified bound be annulled (i.e., all content displayed with less delay merged into a single frame), it could improve both storage efficiency and compressibility (as per asciinema/asciinema#515) and potentially mitigate privacy concerns about publishing keystroke-timing biometrics.
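One way to sketch that merging step (frame shape borrowed from asciicast v1; the function name and threshold handling are mine):

```python
def merge_fast_frames(frames, min_delay):
    """Merge each [delay, text] frame whose delay falls below min_delay
    into the preceding frame, annulling sub-threshold gaps entirely.
    Fewer frames means better storage efficiency and no fine-grained
    keystroke timing left in the file."""
    merged = []
    for delay, text in frames:
        if merged and delay < min_delay:
            merged[-1][1] += text  # drop the tiny delay, keep the output
        else:
            merged.append([delay, text])
    return merged
```

A burst of keystrokes typed faster than the threshold thus collapses into one frame, so the published cast no longer encodes per-key typing rhythm.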

@ku1ik ku1ik transferred this issue from asciinema/asciinema Apr 25, 2023
@asciinema asciinema locked and limited conversation to collaborators Apr 25, 2023
@ku1ik ku1ik converted this issue into discussion #15 Apr 25, 2023

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
