
Particle Release Bug #80

Open
aSqrd-eSqrd opened this issue Jul 10, 2017 · 4 comments

Comments

@aSqrd-eSqrd
Contributor

I have found a bug regarding the release of particles. It is most noticeable the first time you enter a region; specifically, look at the high volume connections. It causes clumps of particles to be released.

I've attached a test case (JSON) file that exhibits this behavior. The region's maxVolume is 5505. The highest volume connection is 4375 and the next highest volume is 345. Visually, the 4375 connection appears to not be carrying as much volume as the 345 connection for the first 30 seconds or so.

Additionally, if you comment out the addition of the random value to the velocities of each particle in /src/base/connectionView.js so that all particles have the same velocity, the release bug is much more easily seen. In fact, the high volume (4375) connection never appears as even a slightly continuous release of particles. It just bursts out a clump of particles approximately every 7.5 seconds, while the 345 connection is a solid stream of particles. It was when I gave all particles the same velocity that I actually found this bug. I need my end-users to stop thinking that some packets are "faster" than others when interpreting the Vizceral traffic display.


AFTER THOUGHT: After writing all this I realized that I could make it easier to debug and see the release bug by making an even simpler "slower" JSON test file. So I've attached that file and a quick description of it. I wish I had started with it, so if you're following along I suggest you use it. The release bug is very noticeable when using this file and static velocities.


I read the comments in connectionView.js for updateVolume() about not using a logarithmic scale for particle release/density and was trying to follow why the times used were selected...
The first value is the volume and the second the max-particles-released-per-tick, right?

    this.rateMap = [
      [0, 0],
      [Number.MIN_VALUE, secondsPerReleaseToReleasesPerTick(10)],
      [1, secondsPerReleaseToReleasesPerTick(7)],
      [10, secondsPerReleaseToReleasesPerTick(5)],
    ];
    if (maxVolume > 0) {
      this.rateMap.push([100, 100 * linearRatio]);
    }
    if (maxVolume > 100) {
      this.rateMap.push([maxVolume, maxReleasesPerTick]);
    }

In my reasoning, I can see the granularity of rateMap as being a reason why higher volume connections are likely indistinguishable visually, but I wouldn't expect it to result in high volume connections not releasing particles on every tick. Still, even with another entry in the rateMap for, say, half of the maxVolume, it doesn't seem likely to address the particle clumping on release. I didn't try adding another entry to rateMap as I can't figure out the pattern (?) to the secondsPerReleaseToReleasesPerTick() input.

Why 0 seconds, 10 seconds, 7 seconds, and 5 seconds?
Why use the linear ratio just for the volume-100 entry?
Why use the linear interpolation (interpolateY in mapVolume) instead of the linear ratio?


Side Note

It appears to me that secondsPerReleaseToReleasesPerTick requires that the frame rate be 60fps, but targetFrameRate could cause this to be something totally different. I'm basing this on the assumption that rptToRPS is short for "releases per tick to releases per second" and rptToSPR for "releases per tick to seconds per release", so I'm more asking for clarification than anything.
In all my testing/debugging I have made sure the targetFrameRate was set to 60.
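
To make that 60 fps assumption concrete, here is a minimal sketch of the conversions I believe are happening. The function bodies below are my reconstruction from the names and from the numbers further down, not copied from the source:

    // My working assumption (names/bodies are my reconstruction, not the actual helpers):
    const TICKS_PER_SECOND = 60; // assumes targetFrameRate === 60

    const secondsPerReleaseToReleasesPerTick = secondsPerRelease =>
      1 / (secondsPerRelease * TICKS_PER_SECOND);
    const releasesPerTickToReleasesPerSecond = releasesPerTick =>   // my reading of rptToRPS
      releasesPerTick * TICKS_PER_SECOND;
    const releasesPerTickToSecondsPerRelease = releasesPerTick =>   // my reading of rptToSPR
      1 / (releasesPerTick * TICKS_PER_SECOND);

    secondsPerReleaseToReleasesPerTick(7);        // 0.0023809..., matches rateMap entry [1, ...]
    releasesPerTickToReleasesPerSecond(15.0999);  // 905.99..., matches the 4375 row below
    releasesPerTickToSecondsPerRelease(15.0999);  // 0.001103..., matches the 4375 row below

These reproduce the numbers in the tables below, which is why I think the 60 fps assumption is baked in.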


Empirically, for the provided test data, I have found that the break point where a noticeable gap shows up is around 600 (at 600 there is no noticeable gap, at 700 there IS a noticeable gap). This is with the random velocities! It does not hold true for constant velocities! When using constant velocities the behavior is still clumped, but it is interesting to note that as you vary the volume, the closer you get to 100 the longer the clumps are; e.g. the 4375 clump is quite short in length, but if you set the next highest volume to say 500 the clump is about 9/10ths of the length of the connection itself. And at 400 it is just barely not long enough, or the next release not soon enough, to appear as a continuous stream of particles when using static velocities.

For the provided test data, the particle release "parameters" are:

Volume   Releases per Second   Releases per Tick   Seconds per Release
4375     905.9945              15.0999             0.001103
345      71.4441               1.190736            0.013997
1        0.14285               0.002381            7

The rateMap is:

Index   [x, y]
0       [0, 0]
1       [5e-324, 0.0016667]
2       [1, 0.0023809]
3       [10, 0.0033333]
4       [100, 0.3451407]
5       [5505, 19]

So for the 4375, the releases per tick seem reasonable since in rateMap 5505 gets 19 releases per tick, but it sure doesn't do that.
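
For what it's worth, the 15.0999 does fall straight out of linear interpolation over the last two rateMap entries. This is just my own sanity check, not vizceral code:

    // Straight-line interpolation between the two rateMap points that bracket volume 4375.
    function interpolate(x, [x0, y0], [x1, y1]) {
      return y0 + ((y1 - y0) * (x - x0)) / (x1 - x0);
    }

    interpolate(4375, [100, 0.3451407], [5505, 19]); // ≈ 15.0999 releases per tick

Also, [100, 0.3451407] itself looks like 100 * (19 / 5505), so I'm guessing linearRatio is maxReleasesPerTick / maxVolume.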


FILES

particle_release_bug.txt <-- Change file ending to JSON

particle_release_bug_second.txt <-- Change file ending to JSON
This file only has 6 nodes, and 5 connections. The maxVolume is only 60 and the connection volumes are 1, 7, 10, 25, and 45. The highest volume connections are still released as clumps, but you can also clearly see that the 1-RPS connection often looks/releases faster than the 7-RPS and 10-RPS connections.


I know you (@jrsquared) are busy, so I'm more than happy to run this down. But I'm at the point where I need guidance and information from the author(s). Stuff like the general concept of how nextFreeParticleIndex is supposed to work, how it gets to be -1, and, when it is -1, how it ever becomes positive again.

@aaronblohowiak
Contributor

Hi! I touched this stuff last, so I think it is on me to address this. Awesome Issue. Thanks for the detail.

The job of connectionView is to manage the dots so you can compare the relative busyness of connections by seeing dots flying between nodes. The state we want to get to is one where the number of dots observed flying on a connection is proportional to the adjusted connection volume. We also don't want dots to disappear mid-flight.

Our services tend to fall into different buckets of popularity and we wanted to make each of those buckets apparent while still being able to see the differences between members of a bucket. I think this is a common thing, and so we should have the ability to tweak the relationship between relative connection volume and the amount of traffic on a connection. This "tweaking" happens through rateMap, discussed more below.

Once we have the "adjusted" volume, we need to translate that to dots flying across the screen. In the code, we modeled this problem as setting up a target number of dots released per tick per connection. So, for each connection we need to know how many dots to release per tick.

With a particle system, you have a number of particles, and there is a performance penalty for using more particles, so you want to use the particles you have most efficiently. So, we create a pool of particles that we start small and let grow until it hits a maximum size (this.maxParticles). Since we have a pool of things that we want to use and release, I maintain a stack of indexes (this.freeIndexes) of particles that are not in flight. When we launch a particle, we pop the next index off the stack and set up its velocity and position. On each tick, the particle's position is progressed according to its velocity, and when it reaches the destination node, we add it back to the free list.

nextFreeParticleIndex() is -1 when the free list is empty and we have already reached the maximum pool size. When the next particle completes its flight, it will be added to the free list and then nextFreeParticleIndex will return its index instead of -1.
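
A stripped-down sketch of that scheme, to make the flow concrete (this is just the shape of it, not the actual connectionView code, and the method names are illustrative):

    // Simplified illustration of the free-index pool, not the real implementation.
    class ParticlePool {
      constructor(maxParticles) {
        this.maxParticles = maxParticles;
        this.particleSystemSize = 0; // how many particle slots exist so far
        this.freeIndexes = [];       // stack of indexes not currently in flight
      }

      nextFreeParticleIndex() {
        if (this.freeIndexes.length > 0) { return this.freeIndexes.pop(); }
        if (this.particleSystemSize >= this.maxParticles) { return -1; } // pool exhausted
        return this.particleSystemSize++; // grow the pool by one slot
      }

      releaseParticle(index) {
        this.freeIndexes.push(index); // particle reached the destination node
      }
    }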

So, that is a lot of detail about how the thing works. Now, why do you get a big burst at the beginning? Well, in the first few ticks we are releasing the target amount of dots per second, and after we do that we exhaust the pool and then have to wait for more dots to hit the destination node before we can start releasing more.

The "gap" is when we have exhausted the pool and are waiting for more particles to reach the destination.

We really want to make sure the total particles released per tick on any given connection is less than the average velocity times the length of the connection. Unfortunately, this means that the length of the connection impacts the release rate, which breaks the ability to compare between connections of different lengths.

In order to accurately set the maximum release rate and guarantee that we wouldn't run out of particles on a connection, we'd need to ensure that the max number of particles is about equal to the total release rate times the length divided by the velocity. If we would exceed that limit in any of the connections, we'd need to scale the release rate for all connections.
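
In back-of-the-envelope form (illustrative arithmetic only, not code from the repo):

    // velocity = distance per tick, length = connection length in the same units.
    function particlesNeeded(releasesPerTick, length, velocity) {
      const ticksInFlight = length / velocity; // how long each particle stays in use
      return releasesPerTick * ticksInFlight;  // ≈ particles in flight at steady state
    }

    // If this exceeds the pool size on any connection, every connection's release rate
    // would need to be scaled by the same factor to keep them comparable:
    // const scale = Math.min(1, maxParticles / particlesNeeded(rate, length, velocity));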

One way to ensure that we never run out of particles is to get rid of the max particles per connection and just let the particle system grow unbounded. The limit came from when we statically sized the particle system. Since I made it dynamically growing, most connections would stay small -- only the busiest connection would grow as the rates are all scaled to maxVolume.

rateMap

Unfortunately, we encoded "what looks good to us" for our particular data into the code. It was created through trial and error on our data. The whole rateMap business should have been configurable. I am not sure when the target FPS stuff was added to Vizceral -- I shouldn't have hardcoded 60 FPS. Unfortunately, the speed of dots is related to the assumption of 60 fps desired and also the actual rate of requestAnimationFrame being fired -- we don't modulate the distance to travel based on velocity * time actually elapsed, we just move it by a set amount every time RAF fires.
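
For reference, frame-time-based movement would look roughly like this (a sketch of what we don't do today, not existing code):

    // Move by velocity * elapsed time instead of a fixed amount per RAF callback.
    const particle = { x: 0, velocity: 120 }; // velocity in units per *second*, not per tick

    let lastTime = null;
    function tick(now) { // `now` is the timestamp requestAnimationFrame passes in
      const dt = lastTime === null ? 0 : (now - lastTime) / 1000;
      lastTime = now;
      particle.x += particle.velocity * dt; // distance covered = velocity * time actually elapsed
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);

That would decouple dot speed from both the 60 fps assumption and the actual rate at which RAF fires.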

@aaronblohowiak
Contributor

aaronblohowiak commented Jul 25, 2017

^^^ This is a long-winded way to say I believe the fix is to delete

if (this.particleSystemSize >= this.maxParticles) {
  missedLaunches++;
  return -1;
}

We should also update regionConnectionView to have a different maxParticleReleasedPerTick.

@aaronblohowiak
Contributor

I'll put together a PR later today.

@aSqrd-eSqrd
Contributor Author

aSqrd-eSqrd commented Jul 25, 2017

@aaronblohowiak,
Thanks so much for the details. That information was exactly the stuff I was looking to learn. The explanation of how and why, and this chunk:

In order to accurately set the maximum release rate and guarantee that we wouldn't run out of particles on a connection, we'd need to ensure that the max number of particles is about equal to the total release rate times the length divided by the velocity. If we would exceed that limit in any of the connections, we'd need to scale the release rate for all connections.

are gold. I can totally accept trial-and-error till it looked good, and knowing that helps me not beat my head against the wall unnecessarily.

There are a couple of things I found/did since I initially posted this issue that might need a look from you or be of interest.

  1. Was it intended that normalDistribution() draw from a normal probability distribution whose values are between -0.5 and 1.5?

    1. Summing six uniform [0, 1] draws yields values from an (approximately) normal distribution covering the range [0, 6]; shifting by -3 yields [-3, 3], dividing by 3 gives [-1, 1], and adding 0.5 ends up with [-0.5, 1.5]... so values drawn are in the range [-0.5, 1.5].
    2. It doesn't seem to have too much effect whether it is drawn from [-1, 1] (no +0.5), or even [0, 1] (just divide by 6), especially since the circle nodes are on top of the connection endpoints and so the particles aren't visible till they pass out from behind the node circle, which has a radius of at least 10.
  2. I kind of found a workaround that makes the gap between the "blobs" of particles less noticeable (not unnoticeable, but significantly less so), especially when using a static velocity.

What I did was to add a connectionLength property to the ConnectionView constructor. I then passed this into generateParticleSystem and used it and Math.random() to set the location at which a particle will initially be spawned:

function generateParticleSystem (size, customWidth, connectionWidth, connectionDepth, connectionLength) {
  // ...
  for (let i = 0; i < size; i++) {
    // Position
    vertices[i * 3] = Math.random() * connectionLength;
    vertices[(i * 3) + 1] = customWidth ? connectionWidth - (normalDistribution() * connectionWidth * 2) : 1;
    // ...

Then, so that this initial starting position (which could be anywhere along the full length of the connection) only applies on a particle's first launch, I added a simple check in launchParticles():

// Get/set the x position for the last particle index
if (this.positionAttr.getX(nextFreeParticleIndex) === 0) {  // <--- NEW
    this.positionAttr.setX(nextFreeParticleIndex, startX + rand);
    this.positionAttr.needsUpdate = true;
}                                                           // <--- NEW

This works nicely, because update() checks the length of the connection, and if the new x-position vx is greater than or equal to the connection's length, it sets the x-position to zero and frees that particle's index.

The end effect is that instead of particles shooting down a connection like a faucet was just turned on, they appear distributed across the full length, and therefore a static particle velocity doesn't result in a big brick of particles on a high volume connection that launches the maximum allowed number of particles. As particles reach the end of the connection they are freed and relaunched from the starting point, same as before. This also solves a visually confusing issue of people assuming that there are different kinds of "particles" and that some are faster and pass others. Likewise, they see that the connection is in progress and running... they aren't witnessing the "birth" of the connection that the faucet-on launch was conveying.

Though, I haven't actually tried out what a brand new high volume connection appearing looks like. In that case the bursting faucet-on visual would be good... man, can't have my cake and eat it too.

There is also one other downside to the initial random population along the connection that I've noticed at the global view/renderer level: it is slower to launch all of its particles, so you can see the particles appearing for several seconds.

  3. The code snippet in your "delete-this" post on this issue isn't something in the public OSS release. I searched the repo for missedLaunches and it doesn't occur in the public stuff. It must be part of your internal stuff.

Well, I thought I would share my findings. I'm not sure how interested the rest of the world is in a "non-bursting" initial particle launch and static velocities, but if you're interested I can push it to my public fork of vizceral and/or make a pull request.
