
Data in NEWS

Data in NEWS portal

How journalists use data in their reporting and writing can be viewed as a reflection of a country’s demand for statistics and data. In 2020, I developed an indicator model for The World Bank to assess the use of data by Nepali news portals, based on the methodology proposed by Klein, Galdin, and Mohamedou (2016).

The results from the project were used in a research article published at http://documents1.worldbank.org/curated/en/805261601023506163/pdf/Use-of-Data-in-the-Private-Sector-of-Nepal-The-Current-State-and-Opportunities-in-Finance-Education-and-the-Media.pdf

As of 9th February 2020, the results showed that very few news articles indicate the source of data or mention development indicators. However, articles that discuss data, reports, research, statistics, and related topics do critically engage with these ideas.

Unfortunately, the data collection process was halted because we ran out of our AWS Activate for Startups credits.

TODOS

  • create a simple architecture diagram
  • curate a list of Twitter handles of Nepali newspapers that include article URLs in their tweets
  • use twint to create historic datasets (sketch after this list)
  • check that a URL is present, does not 404, and belongs to the publisher (sketch after this list)
  • use Newspaper to collect important information (author, etc.) and complete the dataset (sketch after this list)
  • research dash, swifter, and pandarallel
  • optimize the regular expression functions (if possible, this should happen during scraping; sketch after this list)
  • create an SQLite database to hold access information (and design the schema around other metadata), or find an alternative (sketch after this list)
  • add checkpoints (using twint's resume option, or build them manually; covered in the twint sketch after this list)
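
A minimal twint sketch for the historic-dataset and checkpoint items above; the handle, date range, and file names are placeholders, not the project's actual configuration:

```python
import twint

# Collect historic tweets for one handle; twint's Resume file acts as a checkpoint
# so an interrupted run can pick up where it left off.
c = twint.Config()
c.Username = "kathmandupost"              # placeholder: one of the curated handles
c.Since = "2020-01-01"                    # placeholder start date
c.Pandas = True                           # also keep the results as a pandas DataFrame
c.Store_csv = True
c.Output = "tweets_kathmandupost.csv"     # placeholder output path
c.Resume = "resume_kathmandupost.txt"     # twint stores the last scroll id here
twint.run.Search(c)

tweets_df = twint.storage.panda.Tweets_df  # tweet text, date, urls, id, ...
```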
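
A sketch of the URL check, assuming requests is used for the HTTP probe; the publisher-domain set below is illustrative only:

```python
from urllib.parse import urlparse
import requests

# Placeholder domain set; the real mapping would come from the curated handles.
PUBLISHER_DOMAINS = {"kathmandupost.com", "thehimalayantimes.com"}

def is_valid_article_url(url: str) -> bool:
    """URL is present, does not 404, and belongs to a known publisher."""
    if not url:
        return False
    domain = urlparse(url).netloc.lower().removeprefix("www.")  # Python 3.9+
    if domain not in PUBLISHER_DOMAINS:
        return False
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
    except requests.RequestException:
        return False
    return resp.status_code != 404
```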
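
A sketch of the metadata step using newspaper3k's Article API; the wrapper function and returned fields are only one possible shape for the dataset:

```python
from newspaper import Article

def extract_article(url: str) -> dict:
    """Download and parse one article, returning fields for the dataset."""
    article = Article(url)
    article.download()
    article.parse()
    return {
        "url": url,
        "title": article.title,
        "authors": article.authors,
        "published": article.publish_date,
        "text": article.text,
    }
```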
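
One way to speed up the tagging step is to compile the patterns once instead of per article; the keyword lists here are illustrative, not the project's actual indicators:

```python
import re

# Compiled once at import time rather than inside the per-article loop.
DATA_KEYWORDS = re.compile(r"\b(data|statistics|report|research|survey|indicator)\b", re.IGNORECASE)
SOURCE_MENTION = re.compile(r"\b(according to|source)\b", re.IGNORECASE)

def tag_article(text: str) -> dict:
    """Scan each article once per compiled pattern."""
    return {
        "mentions_data": bool(DATA_KEYWORDS.search(text)),
        "cites_source": bool(SOURCE_MENTION.search(text)),
    }
```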
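
A possible SQLite schema for the access metadata; the table and column names are assumptions, not a final design:

```python
import sqlite3

conn = sqlite3.connect("datainnews.db")   # placeholder database file
conn.executescript("""
CREATE TABLE IF NOT EXISTS tweets (
    tweet_id   TEXT PRIMARY KEY,
    handle     TEXT NOT NULL,
    created_at TEXT,
    url        TEXT
);
CREATE TABLE IF NOT EXISTS articles (
    url         TEXT PRIMARY KEY,
    tweet_id    TEXT REFERENCES tweets(tweet_id),
    authors     TEXT,
    published   TEXT,
    http_status INTEGER,
    scraped_at  TEXT
);
""")
conn.commit()
conn.close()
```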

POC

  • create a full working version of the product
  • research downloading twint results as a pandas DataFrame to merge with previously collected data (sketch below)
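
A sketch of the twint-to-pandas merge, assuming earlier runs were stored as CSV; the handle and file path are placeholders:

```python
import pandas as pd
import twint

c = twint.Config()
c.Username = "NepaliTimes"        # placeholder handle
c.Pandas = True
twint.run.Search(c)

new_df = twint.storage.panda.Tweets_df
old_df = pd.read_csv("tweets_nepalitimes.csv")          # placeholder path from an earlier run
merged = (
    pd.concat([old_df, new_df], ignore_index=True)
      .drop_duplicates(subset="id")                     # twint exposes the tweet id as "id"
)
merged.to_csv("tweets_nepalitimes.csv", index=False)
```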

Twitter handles

  • @kathmandupost
  • @thehimalayan
  • @NepaliTimes
  • @OnlineKhabar_En
  • @RepublicaNepal

The Frugal way

  • This project leverages GitHub Actions for scheduling and scraping.
  • This project is hosted on the Heroku free tier (free SSL).
  • It uses amazing open source projects, primarily Twint and Newspaper.