
Resources #1

Open
hugovk opened this issue Oct 1, 2019 · 8 comments

hugovk commented Oct 1, 2019

This is an open issue where you can comment and add resources that might come in handy for NaNoGenMo.

There are already a ton of resources on the old resources threads for the 2013 edition, the 2014 edition, the 2015 edition, the 2016 edition, the 2017 edition and the 2018 edition.

@hugovk hugovk added the admin label Oct 1, 2019
@hugovk hugovk pinned this issue Nov 2, 2019

MineRobber9000 commented Nov 3, 2019

If you're looking to gather text from other stories, I would definitely suggest using Fanfiction.net as a source.

There's a Python scraping library you can install; downloading stories, given a list of IDs, is quite simple.

```python
import fanfiction

scraper = fanfiction.Scraper()
ids = [
    13412502,  # Steven: The Regular Boy
    13387579,  # Everyone-Needs-A-Friend
    13321479,  # He Did It For Me: Steven's Legacy
    13395849,  # Never Alone
    13414274,  # Healing Tears
    13420579,  # When Connie met Steven
    13420570,  # Flares
    13420510,  # A New Age Rewritten
    13420225,  # The Great Diamond Authority
    13419458,  # Spindle the Swindler
    13212617,  # A Battle in the Mind
    13399620,  # Bittersweet Memories
    13419881,  # A Nightmarish Storm
    13178650,  # When the past repeats itself
    13380008,  # Amber
    13394195,  # I don't need the world to see, I've been the best I can be
    11872183,  # Burn Bright
    13391150,  # Era 4: A New Gem
    13245084,  # An easter to remember
    13419524,  # Never the Same
    13419488,  # Caught In The Grey (Oneshot)
    13178229,  # Steven Universe: Advent of the Gara Droids
    13390525,  # The Diamond, the Pearl, and Spinel
    13332598,  # A Diamond in the rough
    13386343,  # Greg Universe of Pearl Stars
]

stories = [scraper.scrape_story(story_id) for story_id in ids]

text = ""
for story in stories:
    for chapter in story["chapters"]:
        # chapter text comes back as bytes
        text += story["chapters"][chapter].decode("utf-8")

# handle text
```

To gather lists of IDs, search for what you want and use this bookmarklet to copy the IDs to your clipboard (some assembly required to deal with empty-text links):

```
javascript:void%20function(){function%20e(e){var%20n=document.createElement(%22textarea%22);n.value=e,document.body.appendChild(n),n.select(),document.execCommand(%22copy%22),document.body.removeChild(n)}var%20n=document.getElementsByTagName(%22a%22),t=%22%22;for(i=0;i%3Cn.length;i++){var%20a=n[i].pathname.match(/^\/s\/([0-9]+)/);null!==a%26%26(t=t+a[1]+%22,%20%23%20%22+n[i].innerText+%22\n%22)}e(t)}();
```

EDIT: The old API I used doesn't work anymore (at least not in Python 3), so I changed libraries. The rest of the comment is still up-to-date.
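The "assembly required" comes from empty-text links, which the bookmarklet copies as lines with a blank title. A small helper (the function name is mine, not from the comment) to turn the clipboard text back into an id list:

```python
import re

def parse_id_lines(clipboard_text):
    """Parse lines like '13412502, # Story Title' into a list of ints,
    skipping duplicate ids and empty-text links (blank titles)."""
    ids = []
    seen = set()
    for line in clipboard_text.splitlines():
        m = re.match(r"\s*(\d+),\s*#\s*(.*)", line)
        if not m:
            continue
        story_id, title = int(m.group(1)), m.group(2).strip()
        if title and story_id not in seen:
            seen.add(story_id)
            ids.append(story_id)
    return ids
```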


dhasenan commented Nov 3, 2019

Wikimedia data dumps: https://dumps.wikimedia.org/

Useful if you want, for instance, to build a dictionary with pronunciations: you can grab the whole of Wiktionary for the desired language, then parse each entry to get its IPA transcription. Then you can use that to do a Finnegans Wake on your work.
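On the English edition, entries mark pronunciations with an `{{IPA|en|...}}` template, so a regex gets you surprisingly far. A rough sketch (the template format varies by language edition and has more fields than shown here, so treat this as a starting point, not a parser):

```python
import re

# Matches the first transcription in {{IPA|en|/.../}} templates as used
# on English Wiktionary; other editions name the template differently.
IPA_RE = re.compile(r"\{\{IPA\|en\|([^}|]+)")

def extract_ipa(wikitext):
    """Return the IPA transcriptions found in one entry's wikitext."""
    return IPA_RE.findall(wikitext)

sample = "===Pronunciation===\n* {{IPA|en|/ˈmʌskræt/}}\n"
```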


dhasenan commented Nov 3, 2019

Wikidata: https://dumps.wikimedia.org/wikidatawiki/entities/

This has a[n annoyingly indirect] data model holding information about a lot of entities in machine-readable format. If you want a travelog mentioning the various types of plants and animals a traveler came across, this can give you a lot of low-detail data about plants and animals. It won't tell you whether muskrats have tails or are semiaquatic, but it will tell you that they're mammals of the species Ondatra zibethicus, known as Razh-musk in Breton, have a conservation status of Least Concern, and have a range described by this map.
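The entity dumps are one JSON object per line, wrapped in a single array, so you can stream them without loading the whole multi-gigabyte file. A sketch of pulling each entity's label and its "instance of" (P31) classes; the field paths follow the documented dump format, but verify against a real dump before trusting them:

```python
import json

def iter_entities(path):
    """Stream entities from a Wikidata JSON dump: one object per line,
    each line ending in a comma, with '[' and ']' on their own lines."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip().rstrip(",")
            if line in ("[", "]", ""):
                continue
            yield json.loads(line)

def label_and_classes(entity, lang="en"):
    """Return (label, list of 'instance of' target ids) for one entity."""
    label = entity.get("labels", {}).get(lang, {}).get("value")
    classes = [
        claim["mainsnak"]["datavalue"]["value"]["id"]
        for claim in entity.get("claims", {}).get("P31", [])
        if claim["mainsnak"].get("datavalue")
    ]
    return label, classes
```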

@cjeller1592

For a resource to preview and share your NaNoGenMo project, I'd like to recommend Write.as. It offers an anonymous API, giving your output a cleaner presentation than, say, a GitHub gist.

Here is a simple example in Python using the 50,000 repetitions of the word "meow" idea. Check out additional documentation if you'd like to learn more.
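The linked example didn't survive the thread, so here's a sketch of what an anonymous post might look like, assuming Write.as's documented endpoint (`POST https://write.as/api/posts` with a JSON `title` and `body`); double-check their API docs before relying on the field names. The 50,000-meow novel from the comment:

```python
import json
import urllib.request

def make_request(title, body):
    """Build an anonymous-post request for Write.as; endpoint and
    fields are my reading of their API docs, so verify before use."""
    payload = json.dumps({"title": title, "body": body}).encode("utf-8")
    return urllib.request.Request(
        "https://write.as/api/posts",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

novel = " ".join(["meow"] * 50000)
req = make_request("Meow: A Novel", novel)
# resp = urllib.request.urlopen(req)  # uncomment to actually publish
```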

If you'd like to use the command line instead, there is a CLI available; the repo for that is here.


dhasenan commented Nov 5, 2019

For a resource to preview and share your NaNoGenMo project, I'd like to recommend Write.as.

It seems to support Markdown, which is pretty handy. If you're using Markdown or HTML, you can also use Calibre to convert it to ebook format for that extra bit of professionalism, which is all-important here. (Though that doesn't get you automatic hosting.) With a bit of tinkering, you can include custom stylesheets and the like.

https://pypi.org/project/EbookLib/ is a direct epub generation library for Python if you're going whole hog. https://github.com/dhasenan/epub for D.


hugovk commented Nov 6, 2019

Easy way to convert Markdown to PDF:
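The link above got lost in extraction; one common route (not necessarily the one meant here) is pandoc, which infers the conversion from the file extensions, driven from Python:

```python
import shutil
import subprocess

def pandoc_cmd(md_path, pdf_path):
    """Build the pandoc invocation; pandoc plus a LaTeX engine must be
    installed for PDF output."""
    return ["pandoc", md_path, "-o", pdf_path]

def markdown_to_pdf(md_path, pdf_path):
    if shutil.which("pandoc") is None:
        raise RuntimeError("pandoc is not on PATH")
    subprocess.run(pandoc_cmd(md_path, pdf_path), check=True)
```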


jaredly commented Nov 13, 2019

I've been looking into food database things (hoping to include some sort of recipe something in my simulator), and here are the two best resources I've found:


serin-delaunay commented Dec 1, 2019

https://www.naturalreaders.com/online/ can do reasonably nice-sounding text-to-speech on large quantities of text without breaking. The 20 minute limit won't be enough for 50,000 words, but will give a decent sample of an NNGM novel spoken aloud.

I checked some other services, and they mostly had much stricter limits for free use, wouldn't play any sound, or sounded no better than my OS's built-in default.
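To work within that 20-minute cap, you can pre-chunk the novel. Assuming roughly 150 spoken words per minute (a ballpark of my own, not a NaturalReaders figure), 20 minutes is about 3,000 words, so a 50,000-word novel splits into 17 clips:

```python
def chunk_for_tts(text, minutes=20, wpm=150):
    """Split text into chunks fitting a per-request time limit,
    assuming ~150 spoken words per minute (a rough average)."""
    limit = minutes * wpm
    words = text.split()
    return [
        " ".join(words[i:i + limit])
        for i in range(0, len(words), limit)
    ]

novel = "meow " * 50000
chunks = chunk_for_tts(novel)
```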
