
13_Webframework_to_Predict_FakeNews

Group Members:

  1. Yash Jain
  2. Jaagrut Shah
  3. Jigar Desai

Project Description:

Fake news has been a problem ever since the web boomed. The news outlets and social media networks that let us learn about events around the world can be contaminated with fake news pushed to serve a political or personal agenda. Combating fake news is vital because the world's view is shaped by information: people not only make important decisions based on information but also form their opinions from it, and if that information is fake, the consequences can be devastating. Verifying every news item manually is infeasible, so this project proposes a system that attempts to consistently classify and predict whether a news item is fake, speeding up the process of identifying fake news.

Machine learning techniques such as Naive Bayes, Logistic Regression, Random Forest, and an SVM classifier were trained and tested on datasets obtained from sources such as Kaggle. The difficult task of detecting fake news becomes simpler when the right models are paired with the right tools. After experimenting with the algorithms above, we opted for Naive Bayes, which gave the most reliable accuracy on new data. The text was cleaned and converted from raw text to numeric form with a TF-IDF vectorizer, which supplies both the term weights and the numeric representation the model needs for training.

Because the trained model cannot be served from a Jupyter notebook, we built a Flask app, which required creating a pipeline. We exported the model as a model.sav file so that the pipeline follows a fixed set of processing steps and predicts on text submitted through the web app.
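The steps above (TF-IDF vectorization, Naive Bayes training, and saving the model as model.sav for the Flask app) can be sketched roughly as follows. The toy headlines, labels, and column layout are illustrative assumptions, not data from the actual repository:

```python
# Minimal sketch of the training pipeline described above, assuming
# scikit-learn. The example texts/labels are toy placeholders.
import pickle

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

texts = [
    "breaking: miracle cure found overnight",
    "senate passes annual budget bill",
    "celebrity spotted with aliens in desert",
    "central bank raises interest rates",
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = real (toy labels for illustration)

# Bundling the vectorizer and classifier in one Pipeline means the
# Flask app can later call predict() on raw text directly.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),
    ("clf", MultinomialNB()),
])
pipeline.fit(texts, labels)

# Persist the whole pipeline, as the project does with model.sav.
with open("model.sav", "wb") as f:
    pickle.dump(pipeline, f)
```

Serializing the entire `Pipeline` (rather than the classifier alone) is what lets the web app reuse the exact same preprocessing at prediction time.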
In recent years, the World Wide Web (WWW) has grown into a massive repository of user-generated content and opinionated data. Users readily share their thoughts and sentiments on social media platforms such as Twitter, Facebook, and WhatsApp, where millions of people express opinions about various topics in their everyday interactions. This ever-growing body of subjective data is, without a doubt, an especially rich source of information for any careful decision-making process. Sentiment Analysis has emerged as a way to automate the analysis of such data: its goal is to find opinionated material on the Internet and classify it by polarity, whether it carries a positive or negative connotation or tends toward a neutral statement. To make the system more analytical, we therefore integrated a sentiment analyser alongside the fake news classifier. The expanding scope of this data necessitates automated analysis techniques such as Natural Language Processing to process the text. An in-depth study of several methodologies used in Fake News Prediction & Sentiment Analysis is conducted in this project ❤️.
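The polarity classification described above (positive, negative, or neutral) can be illustrated with a minimal lexicon-based scorer. The word lists here are toy assumptions for demonstration only, not the lexicon or analyser the project actually uses:

```python
# Toy lexicon-based sentiment scorer: counts positive vs. negative
# words and maps the net score to a polarity label. The word sets
# are illustrative assumptions.
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad"}

def polarity(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this great product"))  # prints "positive"
```

A production analyser would use a trained model or a full lexicon (e.g. VADER) rather than a hand-written word list, but the classification idea is the same.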