Tessellation (thesis)

v1.2.0. See the live application at https://xaviervia.github.io/tessellation/

If you are looking for the library to build apps using this architecture, go to the tessellation library package instead.

tessellation screenshot

This app was tested on and intended for a recent version of Chrome. It might not work elsewhere.

This project is a thesis on how to build front end applications.

Tessellation is a simple but not trivial application that keeps a set of points in the state and renders them as a Voronoi tessellation with a little help from d3 and React.


Features of the demonstration application

  • Saves automatically to local storage.
  • Synchronizes across browser tabs/windows.
  • Logs updates to the console.
  • Renders an SVG diagram and allows you to interact with it.
  • Seeds the state with random data.

The point is to explore a simple way of dealing with distinct types of side effects. These side effects are real requirements of many front end applications.

They also cover the full spectrum of effect directionalities, so in theory any side effect could be implemented with the same APIs. More on that later.

Principles

  • The core application logic is purely functional: that is, it’s done entirely with pure functions, and it’s implemented using functional programming patterns.
  • The core application logic is composable.
  • Side effects are wired to the state using a reactive programming approach.
  • All side effects are treated in the same way.
  • Few libraries are used and they are used as little as possible.

A note on the status of this thesis

Please take this at face value and don’t assume that I’m completely sold on the ideas put together here. Tessellation is an experiment, and while I’m rather happy with the results, there are no simple answers in programming. It will also likely evolve, and if that’s the case I’ll continue publishing the new versions as I refine the ideas that make up the architecture.

Let’s get started

First, open the live app at https://xaviervia.github.io/tessellation/.

You are watching a very simple example of a Voronoi tessellation, a kind of cell diagram automatically generated from points spread across a plane. There are 9 points in total.

If this is the first time you open the application, the distribution of dots that you see is random, and was generated by the Seed effect.

Let’s play around:

1. Reload the application

reload

The distribution of dots doesn’t change! This is because each time the state is updated, the new distribution of dots is saved by the LocalStorage effect into, well, localStorage. When you reload, the same LocalStorage effect recovers the points from there.

2. Resize the window

resize

The tessellation changes its proportions to fit the new window size. Each time you resize the window, the Resize effect pushes a new action. The app recalculates the positions of the dots and lines by skewing the 100x100 grid.

3. Click around

click

One dot is removed, and another dot is added in roughly the position where you clicked.

It will be placed roughly where the mouse was because positions are normalized from whatever size your window is to a 100x100 grid, so the application’s grid doesn’t match the pixels that you see. This action is sent from the View effect––which is a React component.

4. Open the console and click around

console

Each new distribution of dots is logged to the console. The Log effect is responsible for this.

Try clicking several times in the same place without moving the mouse: you will see that the distribution of dots is printed only once. The state management part of the application is taking care of not sending the same state twice to the effects. We will see this later in more detail.

5. Click the (undo) button

undo

The point distribution goes back to the previous one. How this is achieved is an interesting part of the state manipulation and state logic composition exploration.

6. Click the (reseed) button

reseed

A new random distribution of dots appears.

7. See two windows synchronized

sync

Open another browser window, go to https://xaviervia.github.io/tessellation/, put it side by side with the current one, and click around in one and the other: you will see that both update at the same time, completely synchronized. This is also done by the LocalStorage effect.

With the windows side by side, there is a bug in the application that is fairly easy to reproduce. It was left there on purpose; we will discuss it later, but you can try to find it and guess what’s causing it if you want.

Core application logic

The application logic is contained in the files placed directly under the app/src and the app/src/reducers folders.

  • actions.js contains the constant declarations for the action types. I’m trying out using Symbols for the action type constants (see the sketch after this list). Symbols make for great action types because each Symbol is totally unique, which means that collisions between them are impossible. I took inspiration for this from Keith Cirkel’s Metaprogramming in ES6: Symbols and why they're awesome. Take this with a grain of salt: I haven’t tried it outside Chrome or in a real-life app.

  • index.html is the HTML entry point. Sagui will configure Webpack to load it with the corresponding index.js.

  • index.js is the JavaScript entry point. It’s the only place where the effects meet the store, and the place where the application wiring is done. Note that, unlike in many other React applications, the index.js is not in charge of rendering: rendering the view is considered just another effect. Rendering is not part of the core of the application.

    This doesn’t mean that rendering is not important. As stated below, effects are the whole point of an application. Saying that rendering is not part of the core means only that rendering is not state management logic.

  • lenses.js is the only file that is, to some extent, a Ramda artifact. Ramda provides a really cool functionality for selecting values in nested object structures via lensPath, which combined with view and set makes for a great way of querying and immutably setting values in a complex state object. I still haven’t decided what to do with this file.

  • selectors.js contains the functions that make it easy to query the state blob.

  • store.js contains the initial state blob and the reducer, which is in turn built from all the higher-order reducers in reducers/.

  • reducers/ contains files with each of the higher-order reducers.
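
As a rough sketch of what actions.js could look like––the Symbol descriptions below are assumptions based on the action names used throughout this document:

// One unique Symbol per action type: collisions are impossible
export const APP_RESIZE = Symbol('app/RESIZE')
export const APP_SEED = Symbol('app/SEED')
export const APP_SETUP = Symbol('app/SETUP')
export const APP_SYNC = Symbol('app/SYNC')
export const APP_UNDO = Symbol('app/UNDO')
export const POINTS_ADD = Symbol('points/ADD')
export const POINTS_CLEANUP = Symbol('points/CLEANUP')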

It’s important to notice that, with the exception of index.js, all of the above contain only pure functions. There is nothing weird going on in them, and any complexity in the code is derived mostly from the complexity of the application functionality itself. I consider this one of the biggest achievements of this proposed architecture.

The debuggable higher-order reducer is not pure, but the intent of that function is just to introspect the state while in development, so it doesn't really count.

The whole application state is contained in a single object blob, Redux style. Actually, the whole architecture is heavily inspired by Redux––with two differences:

  • To prove that the "single object" state approach transcends libraries and is easily reproducible with any reactive programming library, Tessellation is not using Redux itself.
  • Tessellation’s store is highly decoupled from the UI: Redux was originally meant to represent the data necessary to drive the UI, and the patterns that emerged around it reflect that intent. The thesis here is that the way Redux drives state can be used to manage any type of side effect, and for that purpose I introduced a generalized wiring interface that––so the argument goes––can be used for any side effect.

To emphasize that the architecture is meant to be a generalization of Redux and not another Flux implementation––as in, another way of doing React––the nomenclature is slightly different:

  • dispatch is called push to represent the fact that the actual operation of dispatching an action is analogous to pushing into an array structure. As a matter of fact, it’s pushing data into a stream that gets reduced on each addition – or scanned in Flyd lingo.

  • There is no getState. State is pushed to the effects as an object each time the store gets updated.

Composing the state management logic from smaller pieces

Redux reducer composition is sometimes done with the combineReducers utility. combineReducers segments the state into several disconnected namespaces that the reducers target separately. In the past I had real problems with this approach: long story short, keeping reducers from affecting each other led me and my colleagues to manufacture artificial actions whose only purpose was to allow some reducers to affect parts of the state outside their namespace. I even built a middleware for that. It was messy. I take it as a cautionary tale.

The documentation itself warns us that combineReducers is just a convenience. My colleagues and I eventually noticed that, and since then I have explored alternative ways of working with several reducers.

I came to favor a very different approach:

Higher-order reducers

In this project I wanted to explore composition of reducers using a pattern that I saw presented in React Europe for undoing: higher-order reducers. In a nutshell, it means that instead of implementing your most basic reducer as:

const reducer = (state, action) => {
  switch (action.type) {
    case 'ADD':
      return state + 1

    default:
      return state
  }
}

…you take another reducer as the first argument and pass it through when you are not interested in the current action:

- const reducer = (state, action) => {
+ const highOrderReducer = (reducer) => (state, action) => {
  switch (action.type) {
    case 'ADD':
      return state + 1

    default:
-      return state
+      return reducer(state, action)
  }
}

The advantages are plenty, but the main hidden one I found is that it is very natural to make your higher-order reducers parametrizable, which means they can be reusable. As a silly example, let's say that you want to reuse the addition reducer shown above, and in each specific application you want to use a different name for the action type and a different name for the property of the state that has to be affected. Then you could write your HOR (higher-order reducer) as follows:

const additionHighOrderReducer = (property, actionType) => (reducer) => (state, action) => {
  switch (action.type) {
    case actionType:
      return {
        ...state,
        [property]: state[property] + 1
      }

    default:
      return reducer(state, action)
  }
}
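
To make that concrete, here is how the parametrized HOR above might be instantiated and used––the property and action type names are made up for the example:

// Hypothetical usage: count clicks under `state.clicks` whenever a 'CLICK_ADD' action arrives
const clicksReducer = additionHighOrderReducer('clicks', 'CLICK_ADD')((x) => x)

clicksReducer({clicks: 0}, {type: 'CLICK_ADD'})       // → {clicks: 1}
clicksReducer({clicks: 3}, {type: 'SOMETHING_ELSE'})  // → {clicks: 3}, untouched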

The other obvious advantage is that a HOR can manipulate the result of another reducer, introducing the intriguing possibility of performing meta actions on the state.

The undoable higher-order reducer

The following is a very simple implementation of support for the undo operation as a higher-order reducer. It can be found in app/src/reducers/undoable.js:

export default (actionType) => (reducer) => (state, {type, payload}) => {
  const {undo, ...restOfState} = state

  switch (type) {
    case actionType:
      return state.undo

    default:
      const newState = reducer(state, {type, payload})

      return newState !== state
        ? {
          ...newState,
          undo: restOfState
        }
        : newState
  }
}

Notice how the default case runs the reducer that it received as an argument and checks whether the result differs from the previous state. If it is different, it will not just return the new state: it will also add an undo property to it––with a reference to the old state. Performing the undo operation is then as simple as returning state.undo.
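
Triggering the undo from the outside is then just a matter of pushing the configured action type. A sketch, mirroring the Reseed button shown later in the View effect:

<Button
  onClick={() => push({ type: APP_UNDO })}
  title='undo'></Button>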

Higher-order reducer composition

undoable demonstrates the kind of power that HORs bring, but of course not all HORs need to be meta: HORs are valuable because you can chain them into a single reducer. You can’t do this with plain reducers.

What would that composition look like? Well, simple enough:

const reducer = compose(
  undoable(APP_UNDO),
  appHOR,
  pointsHOR
)((x) => x)

Note that:

  1. To get an actual reducer out of a HOR, you need to call it with another reducer. The simplest possible reducer is the identity function––that is, the function that returns whatever argument it gets. Think about it: a reducer that gets a state and returns that same state is still a reducer.

We could build the composition with a useful reducer instead of using identity, but that would break the symmetry of the whole thing, since one of the parts of the application would not be a HOR.

  2. Since the undoable HOR intercepts the result of the other reducers, it needs to be the outermost wrapper––the first argument to compose, and therefore the last one applied. Otherwise it would only operate on the identity function. Remember: compose works from right to left, so the above expression translates to:
const reducer = undoable(APP_UNDO)(appHOR(pointsHOR((x) => x)))

You can see a variation of this in app/src/store.js. The actual implementation is more complex because it requires the full action dictionary to be passed in to each higher-order reducer, but that’s an implementation detail that I'm not completely sure about.

Any number of HORs can be chained in this way. Hopefully we can explore this further: it may be that, by adding configuration detailing what properties of the state tree the HORs should modify and what action types they should respond to, it becomes possible to have truly reusable pieces of application logic.

Effects

Now to the fun part.

The application store can be seen as a stream, a stream that gets updated each time that something is pushed into it, and then gets split into several other streams that get finally sent down to the different side effects.

That is the conceptual picture: the actual implementation will vary. For example: while this schema is a good analogy of how Redux works, the Redux store is not an actual reactive stream, and even in this application the store stream is not literally split and sent down to the effects. But the analogy still holds.

Here comes a semantic side note: I’m calling these effects instead of side effects because, as many pointed out, an application without side effects is just a way of transforming electricity into heat. Effects is a correct name: the purpose of applications is in fact to perform effects. Side effects on the other hand refer to the unintended consequences of performing an otherwise functional operation. In informal contexts I will use effect and side effect interchangeably, but I’ll try to stick to effects whenever nomenclature is important.

Effect directionality

Effects come in different flavors:

  • Incoming: Effects that only inject data into the application. Resize and Setup are examples of this type.

  • Outgoing: Effects that react to the new state by performing some operation, but never inject anything back. Log is an Outgoing Effect.

  • Bidirectional: Effects that react to the state and inject information back into it via actions. LocalStorage, Seed and View are of this type.

Notice that there is no implication whatsoever that incoming and outgoing effects need to be synchronous. As a matter of fact, incoming effects are like hardware interrupts: they come from the outside world at an arbitrary time. This is how the architecture proposed here manages asynchronous operations: if the result of an operation is important to the application state, an asynchronous effect will capture some state change, perform some operation somewhere, and push an action back into the state whenever (and if) the operation is completed. Notice that, as far as the application state is concerned, the temporality between those two events is completely irrelevant.

You can easily imagine constructing REST requests with this approach. Say that a blog post was just submitted by the user: somewhere in the application state there will be a sign that the REST communication effect will read, realizing that a backend call needs to be performed. When that backend call is completed, the effect will push an action into the state signifying that the post was saved, or that a retry is necessary. A sketch of such an effect is included at the end of the Effect Wiring API section below.

But I digress.

Categorizing the effects into these three types is not terribly useful: knowing if an effect is Incoming, Outgoing or Bidirectional doesn’t say much about what it does. The reason I talk about it is that it sheds some light on why the Effect Wiring API looks the way it does.

Effect Wiring API

Effects are wired to the store with a simple API: each effect is a function with the signature––in Flowtype annotations:

type FluxAction = {
  type: Symbol,
  payload: any
}

type Effect = (push: (action: FluxAction) => void) => ?(state: any) => void

Logging to the console each time the state is updated is easy enough:

export default () => (state) => console.log(state)

Another thing that you can do is to push an action when the application is being set up. This is useful for initializing values on startup. In this case we can use a little API sugar: since the (state: any) => void part of the signature is optional (preceded by a ?) we can push directly:

import {RANDOM_VALUE} from 'actions'

export default (push) => {
  push({
    type: RANDOM_VALUE,
    payload: Math.random()
  })
}
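
A bidirectional effect does both: it reads the state and pushes actions back. Here is a hedged sketch of the REST scenario described earlier––the action types, endpoint and state shape are all made up for illustration, and the reducer is assumed to clear the pendingPost flag when POST_SAVED arrives:

// Hypothetical effect: sends pending posts to a backend and reports back
import {POST_SAVED, POST_SAVE_FAILED} from 'actions'

export default (push) => (state) => {
  if (state.pendingPost) {
    window.fetch('/api/posts', {
      method: 'POST',
      body: JSON.stringify(state.pendingPost)
    })
      .then(() => push({type: POST_SAVED}))
      .catch(() => push({type: POST_SAVE_FAILED}))
  }
}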

LocalStorage effect

code

import {APP_SYNC} from 'actions'

export default (push) => {
  // Right after invocation, it checks if there is an entry in `localStorage`
  // for `tessellation`, and if there is, it immediately pushes an action
  // with the previously saved state.
  if (window.localStorage.getItem('tessellation')) {
    push({
      type: APP_SYNC,
      payload: JSON.parse(window.localStorage.getItem('tessellation'))
    })
  }

  // It sets up a listener on the window `storage` event, and whenever the
  // `tessellation` entry is updated it pushes another state override,
  // thus keeping it in sync with whatever other instance of the
  // application is running in a different window or tab.
  window.addEventListener('storage', ({key, newValue}) => key === 'tessellation' &&
    push({
      type: APP_SYNC,
      payload: JSON.parse(newValue)
    })
  )

  // Whenever there is a new state, it serializes it and stores it
  // in `localStorage`
  return (state) => window.localStorage.setItem(
    'tessellation',
    JSON.stringify(state.shared)
  )
}

The cross window/tab synchronization will conspire with another effect and come back to bite us in the gotchas.

Log effect

code

Logs the current points with a nice format.

import leftPad from 'left-pad'
import {point} from 'selectors'
import {compose, join, map} from 'ramda'

const p = (index) => compose(
  join(','),
  map((value) => leftPad(value, 2, 0)),
  point(index)
)

export default () => (state) => point(0)(state) &&
  console.log(
`
${p(0)(state)} ${p(1)(state)} ${p(2)(state)}
${p(3)(state)} ${p(4)(state)} ${p(5)(state)}
${p(6)(state)} ${p(7)(state)} ${p(8)(state)}

`
)

Resize effect

code

import {APP_RESIZE} from 'actions'

export default (push) => {
  // It immediately sets the size data
  push({
    type: APP_RESIZE,
    payload: {
      height: window.innerHeight,
      width: window.innerWidth
    }
  })

  // Sets up a listener so that each time that the window gets
  // resized an action is pushed with the new window size value
  window.addEventListener('resize', () => push({
    type: APP_RESIZE,
    payload: {
      height: window.innerHeight,
      width: window.innerWidth
    }
  }))
}

Seed effect

code

If the state doesn’t already have any points, it seeds the application with 9 random points.

import {map, range} from 'ramda'
import {APP_SEED} from 'actions'

const {floor, random} = Math

// This effect needs to be loaded last
// to avoid colliding with any state recovered from localStorage
export default (push) => (state) => state.shared.points.length === 0 &&
  push({
    type: APP_SEED,
    payload: map(
      () => [floor(random() * 100), floor(random() * 100)],
      range(0, 9)
    )
  })

Setup effect

code

When initialized, it immediately pushes an action with a newly generated UUID to identify the instance of the application. There is an interesting gotcha here: I initially modeled this as being part of the initialState object in the store, but you can see how that violates the purity of the store implementation.

It's a common temptation to perform "one off" side effects in the creation of the initialState, but doing that means that the initial state is no longer deterministic, which in turn is likely to create problems down the line. It's a good litmus test for the store implementation that no libraries with side effects are used within it.

import uuid from 'uuid'
import {APP_SETUP} from 'actions'

export default (push) => {
  push({
    type: APP_SETUP,
    payload: {
      id: uuid.v4()
    }
  })
}

View effect

code

Too long to display inline

The view effect is simply a React component that is rendered using ReactDOM into the actual DOM tree. The little d3 magic responsible for getting the polygon information for the Voronoi diagram lies in there, but other than that it's a pretty regular React component. And that's the point here: the React integration doesn't look weird in any way, but also it doesn’t look different from the integration of any other effect.

The interesting part in the React component is the small wiring that is done to tie the state of the Container component with the state passed in from the outside:

let onState

class Container extends Component {
  componentDidMount () {
    onState = (state) => this.setState({
      storeData: state
    })
  }

  render () {
    if (this.state == null) {
      return false
    }

    const {storeData} = this.state
    ...
  }
}

...

return onState

That is literally it. The onState function is defined on componentDidMount, and it’s set up to update the storeData property of the Container state each time the application calls it. React will figure out whether the state actually changed. Inside the render function it’s just a matter of checking whether the state is defined (which could easily be avoided by setting an initial state, but I didn’t want to cater for that scenario), and then the whole store state is available, one destructuring away, in storeData. Selectors can be readily used to query that data.
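
Condensed into the shape required by the Effect Wiring API, the whole View effect boils down to something like the following sketch. The mount node, the POINTS_ADD payload shape and the omission of the grid normalization are simplifying assumptions; the real file is longer:

import React, {Component} from 'react'
import {render} from 'react-dom'
import {POINTS_ADD} from 'actions'

let onState

export default (push) => {
  class Container extends Component {
    componentDidMount () {
      onState = (state) => this.setState({storeData: state})
    }

    render () {
      if (this.state == null) {
        return false
      }

      const {storeData} = this.state

      // In the real component, d3-voronoi turns `storeData` into polygons;
      // here a bare SVG stands in for it.
      return (
        <svg onClick={({clientX, clientY}) =>
          push({type: POINTS_ADD, payload: [clientX, clientY]})
        } />
      )
    }
  }

  // `render` is synchronous, so `componentDidMount` has already assigned
  // `onState` by the time it is returned as the state listener.
  render(<Container />, document.getElementById('root'))

  return onState
}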

For pushing information back into the state, this is what we do:

<Button
  onClick={() => push({ type: POINTS_CLEANUP })}
  title='reseed'></Button>

Again, nothing differs between this and how any other effect is wired.

Other components that are not the container can (and should) be kept generic. Sending push or "untreated" chunks of the state down the React tree is a nasty antipattern, but this is also true when sending dispatch and/or getState in React Redux, so there is nothing new under the sun here.

Putting it all together

Now that we have all our effects and state logic in place, let’s wire them together.

A word of caution: this is the most library-opinionated piece of the Tessellation code. There is a good reason for it: the index.js where this wiring up happens is the point of convergence of the functional and the reactive universes. On one side you have the purely functional code that doesn’t deal with operations and instead only declares possible actions and state transformations without actually doing any of them. On the other side you have the effects, which can make all kinds of asynchronous and nondeterministic operations happen. To make things easier, they are exposed through a uniform and reactive-friendly interface.

The keystone that makes this convergence possible is one of the crucial challenges that needed solving. But fear not, because the solution is simple.

The store

Interestingly enough, the Redux approach of a stateful store that exposes a dispatch function and makes a new state available once the update function (reducer) is run is completely analogous to a reactive programming stream. A stream is an object to which you can push data and on which you can then do all sorts of operations, such as mapping, reducing, etc. The Redux dispatch function is just the original stream, while the getState function is just the stream that you get after running reduce on dispatch.

This is exactly what we will do. Let’s take a look at the store implementation from the index, using Flyd:

import {on, stream, scan} from 'flyd'
import {initialState, reducer} from 'store'

const push = stream()

const store = scan(reducer, initialState, push)

You can read more about Flyd in the documentation, but the gist of it is that a Flyd stream is a function that, if you call it without arguments, returns the last value that was put into it, and if you call it with an argument, pushes that value. It also implements map and many other array-like functions, and those functions return a new stream. In the above case, scan is just reduce: it calls the argument function for every value, in order, passing down the previous accumulated value and the new one. It also takes an initial value for the accumulation. This is exactly what the Redux store is: a reducer over the many actions that happened in the application.
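
A minimal illustration of that behavior (the values are arbitrary):

import {scan, stream} from 'flyd'

const push = stream()
const total = scan((accumulated, value) => accumulated + value, 0, push)

push(2) // calling with an argument pushes a value into the stream
push(3)

push()  // → 3, the last value pushed
total() // → 5, the running reduction over everything pushed so far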

Wiring the effects

Now we need to wire the effects, which are exposed following the Effect Wiring API. Conceptually, what we want to do is invoke all effects with the push function on startup, store the resulting listeners, and subscribe to the store stream so that each time a new state is available, all the listeners are called with it.

Flyd’s API is unfortunately not extremely pretty here, which means that the result will not be too concise. Let’s explore the rest of the index.js code together:

import {on, stream, scan} from 'flyd'
import {initialState, reducer} from 'store'

import {compose, filter, map} from 'ramda'

import localStorage from 'effects/localStorage'
import log from 'effects/log'
import resize from 'effects/resize'
import seed from 'effects/seed'
import setup from 'effects/setup'
import view from 'effects/view'

const push = stream()

const store = scan(reducer, initialState, push)

const effects = [localStorage, log, resize, seed, setup, view]

const deduplicatedStore = stream()

let prevState
on((nextState) => {
  prevState !== nextState && deduplicatedStore(nextState)
  prevState = nextState
}, store)

const listeners = compose(
  filter((listener) => listener != null),
  map((effect) => effect(push)),
)(effects)

on(
  (state) => listeners.forEach((listener) => listener(state)),
  deduplicatedStore
)

We added the imports for all the effects and for some functions from Ramda that we are going to need. The interesting part starts with const effects …: we gather all our effects into an array so that we can operate on them in bulk. Because they are exposed with the Effect Wiring API, we can treat them all the same way, and because they are independent from one another, we don’t care about the order in which they are called.

The deduplicatedStore is an artifact of Flyd––even when the new state is exactly the same object as the previous one, the stream will call all of its subscribers. There is a filter function in Flyd but I couldn’t make it work for this scenario, so instead I implemented the deduplicatedStore stream so that it is only called when the state actually changed.

To get the listeners for the various effects, we first map over all the effects and initialize them with the push stream: following the wiring API, they will return either functions or undefined values––incoming effects that are not interested in the state will return undefined. We need to filter those out, and we do so with filter((listener) => listener != null). There: we now have our array of listeners.

In the last lines, we subscribe a function to the deduplicatedStore stream, and this function will take the state and call each of the listeners with it. Easy. We could totally use map instead of on, but using on makes it visible that we expect side effects to happen in the listeners and that we are not interested in their return values.

That’s all there is to it. Note that if the wiring API forced effects to always return a listener function, it would be even simpler:

-const listeners = compose(
-  filter((listener) => listener != null),
-  map((effect) => effect(push)),
-)(effects)
+const listeners = map(
+  (effect) => effect(push),
+  effects
+)

…and if Flyd had a deduplication function out of the box, we could even remove the deduplicatedStore.

We implemented all the wiring in a very small amount of code. That’s a good thing: our index.js is almost entirely generic, which means it is boilerplate, and boilerplate should be kept small.

If all the operations in index.js are generic, can I get it as a function instead of writing boilerplate? Well, we could write such a function, but I meant to demonstrate how simple the integration is, without using any magic, and also to prove a point about how the same integration can be done with different libraries. If this architecture is successful, such a library should most certainly be published.
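
As a sketch only––the name wireEffects is made up and this helper is not part of the repository––such a function could look like this:

import {on, scan, stream} from 'flyd'

// Hypothetical helper: wires an array of effects to a reducer and an
// initial state, returning the `push` function
const wireEffects = (effects, reducer, initialState) => {
  const push = stream()
  const store = scan(reducer, initialState, push)

  // Skip updates where the state object did not actually change
  const deduplicatedStore = stream()
  let prevState
  on((nextState) => {
    prevState !== nextState && deduplicatedStore(nextState)
    prevState = nextState
  }, store)

  // Initialize the effects and keep only the ones that return a listener
  const listeners = effects
    .map((effect) => effect(push))
    .filter((listener) => listener != null)

  on((state) => listeners.forEach((listener) => listener(state)), deduplicatedStore)

  return push
}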

Implementations with other libraries

Debugging

How do I get something like the magnificent Redux Chrome DevTools?

I’m glad you asked. Actually, you can’t get exactly that. The tools and documentation accompanying Redux are a great achievement of their developers and an excellent reason to keep using Redux in the short and medium term.

However! It’s pretty easy to get useful debug information about the evolution of the state in any architecture that uses reducers as the cornerstone of state management. In the reducers folder, there is a debuggable HOR that can be used to decorate our reducer function and get more insight.

The implementation of debuggable is simple:

import {diff} from 'jiff'

export default (reducer) => (prevState, action) => {
  const nextState = reducer(prevState, action)

  console.log('action', action)
  console.log('state', nextState)
  console.log('diff', diff(prevState, nextState))

  return nextState
}
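
Since debuggable takes a reducer and returns a reducer, plugging it in is just a matter of wrapping the composed reducer. A sketch, reusing the simplified composition shown earlier (the import path is an assumption):

import debuggable from 'reducers/debuggable'

const reducer = debuggable(compose(
  undoable(APP_UNDO),
  appHOR,
  pointsHOR
)((x) => x))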

There is a working version of debuggable being used in variations/debuggable.

Extending this debugging tool should be rather straightforward.

Libraries

Several libraries are used throughout this project. The architecture is built so that none of them is indispensable to the underlying thesis. Don’t get too fixated on the choice of libraries––libraries change depending on the context, while hopefully the overall approach will not.

Functional: Ramda

Ramda is the Swiss army knife of the functional programming community in JavaScript. It supports a close analog of the Prelude standard library of Haskell, and takes type signatures, performance, consistency and the functional principles seriously. It provides a good foundation of functions that is lacking in the JavaScript standard library. This makes it extremely useful for building applications using functional programming principles.
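
For instance, the lens helpers mentioned earlier for lenses.js work roughly like this (the state shape is simplified for the example):

import {lensPath, set, view} from 'ramda'

const pointsLens = lensPath(['shared', 'points'])
const state = {shared: {points: [[10, 20]], seed: 'abc'}}

view(pointsLens, state)    // → [[10, 20]]
set(pointsLens, [], state) // → {shared: {points: [], seed: 'abc'}}: a new object,
                           //   the original `state` is left untouched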

That said, plain modern ES (which you can get with the Babel polyfills) already contains a fair subset of Ramda’s functions. You can also find many of the same functions in lodash, 1-liners, etc.

Reactive: Flyd

Flyd is the most minimalistic and elegant reactive programming library that I could find. It provides an extremely easy way of creating streams and a no-nonsense way of dealing with them. It follows the fantasy-land specification, which means Flyd streams interoperate fantastically with Ramda (although I’m not making use of that at all in this project).

There are many alternatives to Flyd out there. For the architecture suggested here, the most relevant might be Redux itself, but if you are looking for a more complete reactive programming toolkit you can take a look at most or Rx.

Rendering: React

I’m assuming React needs no introduction. The point here is that not even React is necessary for this architecture to work. Of course, as long as the side effects are treated as functions that are called every time that new state is generated, using a reactive UI library will make the implementation simpler.

React is not the only tool around for that: you can also try out Preact, Act, or virtual-dom, among many others.

Gotchas and easter eggs

Even with a reasonable architecture, nondeterministic operations are hard. The best we can hope for––at least for now––is to have an architecture that makes it easy to find and solve the issues arising from them.

The application has two bugs that were left there to demonstrate the kind of quirks that can emerge in this type of application, and ways to fix them within the approach of the architecture. Hopefully they will serve as a source of inspiration for how to fix analogous bugs in functional-reactive applications.

“Reseed then undo” bug

Seed-then-undo bug

Pay close attention to the tiles before the Reseed button is pressed and the ones right after pressing Undo and you will notice that they are not the same. But they should be: you are undoing to the state before reseeding.

What went wrong? Let’s enable the debuggable HOR and take a look at the action log for this operation:

Seed-then-undo log

As you can see, the app/SEED and app/UNDO are not the only operations happening. Why?

If we take a look at the implementation of the Reseed button:

<Button
  onClick={() => push({ type: POINTS_CLEANUP })}
  title='reseed'></Button>

…we can see that it’s not actually telling the application to reseed. Instead, it’s just removing the current points from the state. It turns out that the Seed effect checks the number of points in the state and, finding that it is zero, repopulates it with a random set automatically.

When I implemented the Reseed button action as just an invocation of the POINTS_CLEANUP action, I felt clever. I thought:

“Hey, I already have the functionality for reseeding an empty set of points, what if for the button I just clear the state and let that other effect do its job?”

This was an honest-to-God bug that I introduced when originally building this application, because I couldn’t foresee any issues with cleaning up instead of reseeding directly, and I didn’t want to reimplement the random number generation or extract it out from the Seed effect.

I could have taken the cleverness of the move as a warning sign––on the other hand, reusing the effects is one of the motivations for writing applications with this kind of architecture. So I wouldn’t say that “never being clever" is a solution either.

But I digress.

How does the bug happen? Well, the Reseed button does not actually seed but rather clears the points, which get immediately reseeded, so the Undo operation removes the reseeding and goes back to a clean state without points. This, in turn, causes the Seed effect to activate and send a freshly baked set of random points again.

How could this be solved?

I can come up with two different strategies for solving this issue. Each has its own merits, but I find the second one more intriguing: it introduces a very state-centric way of thinking about the problem. A way of thinking that would have prevented the need for the “clever” move in the first place.

The helper way

You can see an implementation of this fix in fix-helper-way.

“Extract the random point generator and reuse it in the Reseed button action handler.”

A very obvious solution would be to go to the effects/seed.js file:

import {map, range} from 'ramda'
import {APP_SEED} from 'actions'

const {floor, random} = Math

// This effect needs to be loaded last
// to avoid colliding with any state recovered from localStorage
export default (push) => (state) => state.shared.points.length === 0 &&
  push({
    type: APP_SEED,
    payload: map(
      () => [floor(random() * 100), floor(random() * 100)],
      range(0, 9)
    )
  })

…extract the number generation as a helper:

-import {map, range} from 'ramda'
+import getRandomizedPoints from 'lib/getRandomizedPoints'
import {APP_SEED} from 'actions'

-const {floor, random} = Math

// This effect needs to be loaded last
// to avoid colliding with any state recovered from localStorage
export default (push) => (state) => state.shared.points.length === 0 &&
  push({
    type: APP_SEED,
-    payload: map(
-      () => [floor(random() * 100), floor(random() * 100)],
-      range(0, 9)
-    )
+    payload: getRandomizedPoints()
  })

…and reuse that helper in the Reseed button:

<Button
-  onClick={() => push({ type: POINTS_CLEANUP })}
+  onClick={() => push({
+    type: APP_SEED, payload: getRandomizedPoints()
+  })}
  title='reseed'>
  ✕
</Button>

The downside: a relevant part of the application’s functionality gets lost by being converted into a snowflake helper. While the getRandomizedPoints function is reusable in principle, it’s unlikely to be useful in any other application. It is not a good abstraction: it’s application-specific logic hidden away as a helper.

The upside: this way of refactoring is familiar. It’s a close analog of Extract Method.

This is not a great upside. If we take familiarity as a litmus test for good architecture, we shouldn’t be trying a functional-reactive approach, we should just stick with whatever we were doing before.

The state way

You can see an implementation of this fix in fix-state-way.

I’d love to call this one, “The ironic way”. The idea is to:

Make randomness deterministic, and complete the seed operation in the reducer for APP_SEED

This sounds crazy. The point of randomness is that it is not deterministic.

Well, not quite.

The point of randomness is that you can’t predict it. Determinism is a different matter: depending on your worldview, randomness might not exist at all or might be embedded in everything; entropy, however, does exist from a statistical point of view, and it’s what we use to get values that we can’t predict in otherwise predictable systems.

I’ll try not to digress too much, but an important component of (pseudo)randomness in computer software is mathematical functions that return values between 0 and 1 so that, for any 𝑓(x), 𝑓(x + 1) is not easily predictable without knowing the actual function beforehand. Moreover, if you were to plot a vast number of values of 𝑓(x) you would not find any discernible pattern, nor be able to extract any statistical clustering of values around a specific number. That is in an ideal world; in reality there are only better or worse approximations.

It is worth noting that unpredictability is a core subject in the work around architectures built to operate with asynchronous interactions with the outside world.

Either way, a pseudo-random number generation function is not random per se: if you send the same sequence of xs to it, you get the same values of 𝑓(x). This proves useful in real life, since to test the behavior of a program you might want to bombard it with random numbers, but if it fails, you will want to reproduce the random sequence to see what went wrong. Many pseudo-random number generation libraries provide functions that can be invoked with arbitrary values and will then return another function which will produce a sequence of pseudo-random numbers; in these libraries, calling the “factory function” with the same value will result in a set of functions that will produce the same sequence of numbers, every time. This is called seeding a pseudo-random number sequence.

Well, we could just do that: add a seed and a counter in the state, then invoke the function with the same seed and the counter value to get a pseudo-random number out of it. To get the initial seed––if there is none available from localStorage––we can reuse the application’s window id that we create for the APP_SETUP.

First we need a seedable random number generation library. Unfortunately JavaScript doesn’t come with one––Math.random is not seedable. Since we are not trying to come up with a trustworthy randomness generation function that can be used in security or in casinos, we can make do with (a variation of) Antti Sykäry’s answer on Stack Overflow:

// New file: app/src/lib/seedableRandom.js
import {reduce} from 'ramda'
const {floor, sin} = Math

export default (seed, counter) => {
  const x = sin(
    reduce((value, char) => value + char.charCodeAt(0), 0, seed.split('')) +
    counter
  ) * 10000
  return x - floor(x)
}
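
The key property, and the reason the reducer can use it while staying pure, is that the same inputs always produce the same output:

import seedableRandom from 'lib/seedableRandom'

// Same seed and counter: same number, every time
seedableRandom('some-seed', 7) === seedableRandom('some-seed', 7) // → true

// Advancing the counter walks through a different but equally reproducible value
seedableRandom('some-seed', 8)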

…then in the reducers/app.js:

-import {set} from 'ramda'
+import {map, range, set} from 'ramda'

import * as lenses from 'lenses'
+import seedableRandom from 'lib/seedableRandom'
+
+const {floor} = Math

export default (actions) => (reducer) => (state, {type, payload}) => {
  switch (type) {
    case actions.APP_RESIZE:
      return set(
        lenses.size,
        payload,
        state
      )

    case actions.APP_SEED:
-      return set(
-        lenses.points,
-        payload,
-        state
-      )
+      return {
+        ...state,
+        shared: {
+          ...state.shared,
+          // `18` is the amount of numbers that we will use,
+          // two numbers for each dot.
+          counter: (state.shared.counter || 0) + 18,
+          points: map(
+            (index) => [
+              floor(
+                seedableRandom(
+                  state.shared.seed,
+                  state.shared.counter + (index * 2)
+                ) * 100
+              ),
+              floor(
+                seedableRandom(
+                  state.shared.seed,
+                  state.shared.counter + (index * 2) + 1
+                ) * 100
+              )
+            ],
+            range(0, 9)
+          )
+        }
+      }

    case actions.APP_SETUP:
-      return set(
-        lenses.id,
-        payload.id,
-        state
-      )
+      return {
+        ...state,
+        local: {
+          ...state.local,
+          id: payload.id
+        },
+        shared: state.shared.seed
+          ? state.shared
+          : {
+            ...state.shared,
+            seed: payload.id,
+            counter: 0
+          }
+      }

    case actions.APP_SYNC:
      return {
        ...state,
        shared: payload
      }

    default:
      return reducer(state, {type, payload})
  }
}

…in effects/seed.js:

-import {map, range} from 'ramda'
import {APP_SEED} from 'actions'

-const {floor, random} = Math
-
// This effect needs to be loaded last
// to avoid colliding with any state recovered from localStorage
export default (push) => (state) => state.shared.points.length === 0 &&
  push({
-    type: APP_SEED,
-    payload: map(
-      () => [floor(random() * 100), floor(random() * 100)],
-      range(0, 9)
-    )
+    type: APP_SEED
  })

…and in effects/view.js:

import {voronoi} from 'd3-voronoi'
import Button from 'components/Button'

import * as selectors from 'selectors'
-import {APP_UNDO, POINTS_ADD, POINTS_CLEANUP} from 'actions'
+import {APP_UNDO, POINTS_ADD, APP_SEED} from 'actions'

...

          <Button
-            onClick={() => push({ type: POINTS_CLEANUP })}
+            onClick={() => push({ type: APP_SEED })}
            title='reseed'>
            ✕
          </Button>

...

There is some repetition that could be refactored out, but if we were to do it now, we could do it as a higher-order reducer instead of a helper.

The downside: it requires us to understand how pseudo-random number generation works and why Math.random is not good enough. This is only partially a downside though: learning this is actually pretty useful.

The upside: this approach is far more powerful than the helper one. By moving the logic upstream from the effect to the state, we got the actual generation embedded in the application update function, which means that now any effect can use it.

“But the number generation became coupled with the reducer! Isn’t it better to have it in a decoupled helper?” you say. Yes and no. The fact that this logic is now being done in a reducer is absolutely correct. It’s application logic, so it should be part of the state update function one way or another. We can still extract a helper for the point generation, but it will have a completely different signature from the getRandomizedPoints helper.

Sometimes, decoupling just means coupling with the right thing.

Note that rewriting a nondeterministic operation to be deterministic is a staple of the functional way. Functional programming is completely deterministic and cannot handle (side) effects happening inside its wiring: to be able to perform nondeterministic operations, functional programming puts these operations in safe containers called monads. The result is that effects are pushed out, away from the main logic.

In a way, the whole architecture presented here aims at reproducing the success of the monads, but using a reactive programming interface to make it easier to understand, since purely functional programming code is highly abstract, very formalized and consequently hard to follow.

A fault I often see in purely functional code is arcane idioms taking the spotlight and making the application’s purpose harder and harder to discern beneath them: this relates to Cheng Lou’s React Europe talk about solving problems at the right level of abstraction.

My point of view is that the purely functional approach abstracts way too high. My hope is that this approach hits closer to the sweet spot.

Some corollaries of this approach that show how robust it can be:

  • Reseeding after an Undo will result in the same sequence being created. Since the Undo resets the counter to the previous value, it stands to reason that the same values will be generated when seeding again.
  • For any particular state, Reseeding will give the same value in every window.

“Sync then reseed” bug

This bug is a variation of the previous one. The fix is the same, because the underlying problem is the same. Why mention it at all?

  1. This variation of the bug illustrates a race condition: a typical buggy scenario for applications that synchronize data. While undo is a fairly rare feature to support in web applications, synchronization is commonplace. Synchronization issues are more confusing and far harder to debug, so I preferred explaining the issue and the solution with the more discrete manifestation in the “Reseed then undo” form.
  2. It illustrates something unrelated but interesting anyway: when things go wild in terms of synchronization, the behavior of applications can be affected by unassuming things such as whether or not the mouse pointer is on top of them. Take a look at the animation carefully (or try it yourself): you will notice that the window where the mouse is being hovered changes slower than the other one. My hypothesis is that the presence of a hovering mouse causes extra computations to be done in the window, effectively slowing down its JavaScript thread.
  3. It’s kind of pretty.

Sync then reseed bug

In imperative architectures, there is no elegant solution to these issues. I can’t stress this enough: all imperative solutions to this set of problems require some sort of “trusted”, top-of-the-pyramid source of truth that all the nodes recognize as authoritative, or an extra set of events created specifically to indicate that the stream of changes is terminated, which will usually provide no guarantees anyway.

My personal takeaways

  • Symbols make for pretty cool action types.
  • Higher-order reducers open up a world of possibilities for reusable application state management functions.
  • The streams could be useful in the effects as well. The point here was to keep the API library agnostic and minimize the use of Flyd, but I’m pretty sure streams can help out a lot when dealing with more complex incoming side effects, while still keeping the wiring API agnostic.
  • JSON Patch diffs can be very useful for debugging and possibly for storing or sharing a log of transformations. For example it could be useful to send the patches over a network to keep clients and servers synchronized.

Credits and references


  • Sagui is a great way of bootstrapping and building a modern web application without thinking about configuration.
  • Tachyons made the styling a breeze.
  • d3’s heavily mathematical approach made it a great tool to work together with Ramda and React.
  • jiff is very useful for getting JSON diffs between states; diffs that, since they follow the JSON Patch RFC 6902, can then be applied to the state structure to reproduce the transformation.

License

The Unlicense
