Readings.html and Files.html

jobchong edited this page Oct 9, 2015 · 2 revisions

Readings and Files

These two pages hold links to all the files and URLs we share on Slack. How is this information stored?

Files.html

The Files page shows the list of files we have shared. This is done via the file-retrieval API supplied by Slack. If you are logged in to Slack in your browser, the fields will autopopulate with legalese and your username. The URL generated below points to a JSON document containing the details of the files.

How this method is then called is shown in the HTML itself, and is self-explanatory.
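As a rough sketch of what the page does (the token value and pagination parameter here are illustrative assumptions, not taken from the actual HTML), the request URL for Slack's `files.list` method can be built like this:

```javascript
// Sketch: build the request URL for Slack's files.list API method.
// The token value is a placeholder -- Slack issues the real one.
function filesListUrl(token, page) {
  return 'https://slack.com/api/files.list?token=' +
         encodeURIComponent(token) + '&page=' + page;
}

// The page then fetches this URL (e.g. with $.getJSON) and reads the
// `files` array out of the JSON response.
console.log(filesListUrl('xoxp-YOUR-TOKEN', 1));
// https://slack.com/api/files.list?token=xoxp-YOUR-TOKEN&page=1
```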

Readings.html

We can export Slack chat history here. This will download a batch of JSON files in separate folders according to channel.

We then have to concatenate all these JSON files so that any links inside can be extracted easily. If you install the `json` CLI, you can `cd` into any channel folder and run `for i in yearnumber*.json; do cat $i; echo ""; done | json -g > ~/**/myfile.json`, but this only concatenates the files within that one folder.

To concatenate all the JSON files across all the folders, we can use Grunt, an automation tool that is immensely useful in this situation.

Note: install node.js first.

Install Grunt. Open Terminal/Command prompt:

npm install -g grunt-cli

Create a package.json file and gruntfile.js file in the root directory of the unzipped archive. Hence, if your unzipped folder is in ~/Downloads, put these two files in ~/Downloads/yourfolderhere.

What goes inside package.json:

{
  "name": "my-project-name",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.5"
  }
}

This lists the development dependencies your Grunt setup needs.

What goes inside gruntfile.js:

module.exports = function(grunt) {

  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    concat: {
      options: {
        separator: ''
      },
      dist: {
        src: ['**/*.json'],
        dest: 'concatenated.json'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.registerTask('default', ['concat']);
};

This is the instruction manual for grunt when it runs.

Install contrib-concat:

npm install grunt-contrib-concat --save-dev

This installs the concatenation functionality for Grunt. The concat block you saw above in your gruntfile uses it.

How to concatenate multiple jsons:

Go back to your gruntfile. See the src section? Insert all the folders in your downloaded archive which you want to concatenate in this form:

src: ['your-folder-here/*.json', 'your-folder-2/*.json']

etc. So one entry might be general/*.json. The pattern /*.json takes every file in that folder whose name ends in .json. Make sure each pattern is surrounded by quotation marks and followed by a comma.

Make sure the separator option is the empty string '', not ';'. Inserting a semicolon after every file would make the result invalid JSON.

After that go back to Terminal:

grunt concat

concatenated.json will be in the root folder of your archive. Of course, you can change the name to whatever you want in your gruntfile.

Now open it with your text editor. You need to replace all instances of ][ with ,. Emacs provides this functionality with query-replace. Then copy the file to your local Legalese repo.

This file is finally valid JSON. The call to $.getJSON in <script> will be able to parse it. Be aware that cross-origin requests for JSON are not allowed for local files, so you will have to push the changes to file.json to GitHub first and access legalese.io/file.json.

The regexp in

var url = value.match(/<(?!@)(.*?)>/)[1];
var itt = url.replace("|", "/");

searches for a string beginning with < (not followed by @) and ending with >, and returns the characters between < and >. The @ is excluded because username tags start with <@ and would otherwise be matched too. For some reason some URLs in the JSON also contain |, which also needs to be replaced.
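To see this behaviour concretely (the sample message strings below are made up for illustration):

```javascript
// A Slack-formatted link: matched, with the |-separated label included.
var value = 'see <https://example.com|example.com> for details';
var url = value.match(/<(?!@)(.*?)>/)[1];
console.log(url);                    // https://example.com|example.com
console.log(url.replace('|', '/')); // https://example.com/example.com

// A username tag: the (?!@) lookahead rejects it, so match returns null.
console.log('<@U024BE7LH> pushed a commit'.match(/<(?!@)(.*?)>/)); // null
```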

The problem with this approach is that it is static: concatenated.json needs to be regenerated regularly with grunt and the query-replace fix-up.