No request token to open this page #147

Open
ddten opened this issue Apr 13, 2018 · 0 comments

ddten commented Apr 13, 2018

I have implemented a Twitter application on my website based on OAuth.
But I get this error: Whoa there!
There is no request token for this page. That’s the special key we need from applications asking to use your Twitter account. Please go back to the site or application that sent you here and try again; it was probably just a mistake.

-> I reset the consumer key & consumer secret, but nothing changed. I also reset my tokens, but still nothing.
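(As a side note, I understand that newer twitteR versions replace the manual ROAuth handshake I use below with the httr-based setup_twitter_oauth(). A minimal sketch of that flow, with placeholder keys and tokens, would be roughly:)

#minimal sketch, assuming the httr-based flow of newer twitteR versions;
#all four values are placeholders taken from the app's "Keys and Access Tokens" page
library(twitteR)
consumerKey <- 'CONSUMER_KEY' #placeholder
consumerSecret <- 'CONSUMER_SECRET' #placeholder
accessToken <- 'ACCESS_TOKEN' #placeholder
accessSecret <- 'ACCESS_TOKEN_SECRET' #placeholder
setup_twitter_oauth(consumerKey, consumerSecret, accessToken, accessSecret) #registers the credentials for the whole session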
My code is as follows:
#connect all libraries
library(twitteR)
library(ROAuth)
library(plyr)
library(dplyr)
library(stringr)
library(ggplot2)

#connect to API
download.file(url='https://curl.haxx.se/ca/cacert.pem', destfile='cacert.pem')
reqURL <- 'https://api.twitter.com/oauth/request_token'
accessURL <- 'https://api.twitter.com/oauth/access_token'
authURL <- 'https://api.twitter.com/oauth/authenticate'
consumerKey <- 'use29u2CLpp9Pb9aeALPBgKwf' #put the Consumer Key from Twitter Application
consumerSecret <- 'YRYvAExvschnj2fnPw4iIGC58xuk9xJyqDWG8zTTH3OBqDyuUY' #put the Consumer Secret from Twitter Application
Cred <- OAuthFactory$new(consumerKey=consumerKey,
consumerSecret=consumerSecret,
requestURL=reqURL,
accessURL=accessURL,
authURL=authURL)
Cred$handshake(cainfo = system.file('CurlSSL', 'cacert.pem', package = 'RCurl')) #a URL is printed in the console; open it, authorize the app, and enter the PIN code back in the console

save(Cred, file='twitter authentication.Rdata')
load('twitter authentication.Rdata') #once the code above has been run once, future sessions can start from this line (the libraries still need to be loaded)
registerTwitterOAuth(Cred)

#the function for extracting and analyzing tweets
search <- function(searchterm)
{
#extract tweets and create the storage file

list <- searchTwitter(searchterm, cainfo='cacert.pem', n=1500)
df <- twListToDF(list)
df <- df[, order(names(df))]
df$created <- strftime(df$created, '%Y-%m-%d')
if (file.exists(paste(searchterm, '_stack.csv'))==FALSE) write.csv(df, file=paste(searchterm, '_stack.csv'), row.names=F)

#merge the last extraction with storage file and remove duplicates
stack <- read.csv(file=paste(searchterm, '_stack.csv'))
stack <- rbind(stack, df)
stack <- subset(stack, !duplicated(stack$text))
write.csv(stack, file=paste(searchterm, '_stack.csv'), row.names=F)

#tweets evaluation function
score.sentiment <- function(sentences, pos.words, neg.words, .progress='none')
{
require(plyr)
require(stringr)
scores <- laply(sentences, function(sentence, pos.words, neg.words){
sentence <- gsub('[[:punct:]]', "", sentence)
sentence <- gsub('[[:cntrl:]]', "", sentence)
sentence <- gsub('\\d+', "", sentence) #note the doubled backslash: '\d+' is an invalid escape in an R string
sentence <- tolower(sentence)
word.list <- str_split(sentence, '\\s+')
words <- unlist(word.list)
pos.matches <- match(words, pos.words)
neg.matches <- match(words, neg.words)
pos.matches <- !is.na(pos.matches)
neg.matches <- !is.na(neg.matches)
score <- sum(pos.matches) - sum(neg.matches)
return(score)
}, pos.words, neg.words, .progress=.progress)
scores.df <- data.frame(score=scores, text=sentences)
return(scores.df)
}

pos <- scan('C://positive-words.txt', what='character', comment.char=';') #path to the positive words dictionary
neg <- scan('C://negative-words.txt', what='character', comment.char=';') #path to the negative words dictionary
pos.words <- c(pos, 'upgrade')
neg.words <- c(neg, 'wtf', 'wait', 'waiting', 'epicfail')

Dataset <- stack
Dataset$text <- as.factor(Dataset$text)
scores <- score.sentiment(Dataset$text, pos.words, neg.words, .progress='text')
write.csv(scores, file=paste(searchterm, '_scores.csv'), row.names=TRUE) #save evaluation results

#total score calculation: positive / negative / neutral
stat <- scores
stat$created <- stack$created
stat$created <- as.Date(stat$created)
stat <- mutate(stat, tweet=ifelse(stat$score > 0, 'positive', ifelse(stat$score < 0, 'negative', 'neutral')))
by.tweet <- group_by(stat, tweet, created)
by.tweet <- summarise(by.tweet, number=n())
write.csv(by.tweet, file=paste(searchterm, '_opin.csv'), row.names=TRUE)

#chart
ggplot(by.tweet, aes(created, number)) + geom_line(aes(group=tweet, color=tweet), size=2) +
geom_point(aes(group=tweet, color=tweet), size=4) +
theme(text = element_text(size=18), axis.text.x = element_text(angle=90, vjust=1)) +
#stat_summary(fun.y = 'sum', fun.ymin='sum', fun.ymax='sum', colour = 'yellow', size=2, geom = 'line') +
ggtitle(searchterm)

ggsave(file=paste(searchterm, '_plot.jpeg'))

}

search("stockmarket") #enter keyword

Can anyone help? ASAP, please.
