
Why there are no descriptions? #1109

Open
agzam opened this issue Mar 7, 2023 · 1 comment

Comments

agzam commented Mar 7, 2023

Can descriptions be added? Otherwise this collection of URLs (albeit alphabetized) has little use. Maybe a script that goes through them and retrieves document.title would do?
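
A minimal sketch of that idea (fetch_title and the example URL below are placeholders, not part of the repository) could pull each page's <title> with requests and BeautifulSoup:

import requests
from bs4 import BeautifulSoup

def fetch_title(url):
    # Download the page and return the contents of its <title> tag
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.content, 'html.parser')
    # Fall back to the URL itself if the page has no usable <title>
    return soup.title.string.strip() if soup.title and soup.title.string else url

for url in ['https://example.com/']:  # placeholder list of links
    try:
        print(f"- <{url}>: {fetch_title(url)}")
    except Exception as e:
        print(f"Error fetching {url}: {e}")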


sysadmin-info commented Dec 1, 2023

Can descriptions be added? Otherwise this collection of URLs (albeit alphabetized) has little use. Maybe a script that goes through them and retrieves document.title would do?

I agree. For technologies it is fine as it is, and for companies as well, but for the individuals category at least a short description should be added. I do not know 100% of them, and checking each site one by one is a nightmare. I am also not sure whether crawling them is legal. Something like this should do the job.

import requests
from bs4 import BeautifulSoup
from transformers import pipeline

# Initialize a summarization pipeline
summarizer = pipeline("summarization")

def crawl_and_summarize(url):
    # Crawl the webpage
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.content, 'html.parser')

    # Extract the main text; this could be made more specific based on site structure
    text = soup.get_text(separator=' ', strip=True)

    # Summarize the text (truncation keeps long pages within the model's input limit)
    summary = summarizer(text, max_length=130, min_length=30, do_sample=False, truncation=True)
    return summary[0]['summary_text']

def read_urls_from_file(file_path):
    with open(file_path, 'r') as file:
        return [line.strip() for line in file if line.strip()]

# File containing URLs, one per line
file_path = 'urls.txt'

# Read URLs from the file
urls = read_urls_from_file(file_path)

# Crawl and summarize each URL
for url in urls:
    try:
        summary = crawl_and_summarize(url)
        print(f"URL: {url}\nSummary: {summary}\n")
    except Exception as e:
        print(f"Error processing {url}: {e}")

In this script:

  • URLs are read from a file named 'urls.txt', but you can change the file_path variable to the actual path of your file.
  • The script reads each line from the file, strips any leading/trailing whitespace, and ignores empty lines.
  • Error handling is added to continue processing even if an error occurs with a specific URL.

Remember to place the 'urls.txt' file in the same directory as your script, or provide the absolute path to the file. Also, ensure that each URL in the file is on a new line.
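
If the goal is to fold these summaries back into the collection itself, the loop above could write markdown list entries instead of printing them. A minimal sketch, reusing crawl_and_summarize and urls from the script above (descriptions.md is only a placeholder output name):

# Write one markdown bullet per URL instead of printing to stdout
with open('descriptions.md', 'w') as out:
    for url in urls:
        try:
            summary = crawl_and_summarize(url)
            out.write(f"- <{url}>: {summary}\n")
        except Exception as e:
            print(f"Error processing {url}: {e}")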
