Perplexity ask #875 (Draft)

Keyrxng wants to merge 6 commits into development
Conversation

@Keyrxng (Contributor) commented Oct 25, 2023

Resolves #866

Quality Assurance:

So after a lot of trial and error I managed to get our token estimates pretty much dead on, but adding in the additional context from linked sources is throwing them off again.

I tried to create tiered formats where, if we had the space, we'd select the next format up and consume tokens that way, but it wasn't very fruitful. So as it stands right now we are having GPT dictate the context that gets fed into Perplexity.

I think it's lacking a lot compared directly against GPT, plus it has required us to bring another package into the build, which I know you want to avoid at all costs.

If funded, I'd spend the time and hack together a tiktoken-based tokenizer for us to use.
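
For illustration only, here is a minimal sketch of what a tiktoken-based budget check for the linked-source context could look like. The @dqbd/tiktoken package, the budget constant, and the helper name are assumptions for the sketch, not what this PR ships:

```ts
// Sketch: trim linked-source context to a token budget using tiktoken.
// Assumes the @dqbd/tiktoken package; CONTEXT_TOKEN_BUDGET and the helper name are hypothetical.
import { encoding_for_model } from "@dqbd/tiktoken";

const CONTEXT_TOKEN_BUDGET = 3000; // hypothetical budget left over after the prompt itself

export function trimContextToBudget(linkedSources: string[], budget = CONTEXT_TOKEN_BUDGET): string {
  const enc = encoding_for_model("gpt-3.5-turbo");
  try {
    const kept: string[] = [];
    let used = 0;
    for (const source of linkedSources) {
      const tokens = enc.encode(source).length; // exact token count for this source
      if (used + tokens > budget) break; // stop before blowing the budget
      kept.push(source);
      used += tokens;
    }
    return kept.join("\n\n");
  } finally {
    enc.free(); // the WASM encoder must be freed explicitly
  }
}
```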

@netlify bot commented Oct 25, 2023

Deploy Preview for ubiquibot-staging failed.

🔨 Latest commit: 4134d3f
🔍 Latest deploy log: https://app.netlify.com/sites/ubiquibot-staging/deploys/6539455800ea8f0008f901ab

@0x4007 (Member) commented Nov 8, 2023

I think it's lacking a lot compared directly against GPT, plus it has required us to bring another package into the build, which I know you want to avoid at all costs.

To clarify, in my opinion Perplexity is superior as a research agent, but not necessarily as a thinking agent. It is great at retrieving information quickly from many sources and summarizing them.

However, if asked to think through something logically, it is worse than GPT-4.

If there were a better way to leverage these capabilities ergonomically for the user, that would be ideal.

So as it stands right now we are having GPT dictate the context that gets fed into Perplexity.

This seems like a nice conclusion. Perhaps using Perplexity as the researcher and ChatGPT as the thinker?

Maybe using Perplexity for /ask adds unnecessary complexity?

@Keyrxng (Contributor, Author) commented Nov 9, 2023

Perhaps using Perplexity as the researcher and ChatGPT as the thinker?

This would kick ass, I agree. I'm unsure how best to apply it efficiently given the context sizes; if the token limits were flipped, it would be workable. Until they release the bigger context window, I think using it adds too much complexity to /ask, where gpt-3.5 performs just fine on its own.

But I could see feeding Perplexity all of our additional context and having it research and summarize, then having GPT act on that information; that would probably be very effective.
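
As a rough sketch of that researcher/thinker split, purely to show the shape of it (the openai v4 client, the Perplexity base URL, and the model names are assumptions, not something this PR implements):

```ts
// Sketch: Perplexity condenses the linked context, then GPT reasons over the summary.
// Assumes the openai v4 SDK; the Perplexity endpoint and model names are placeholders.
import OpenAI from "openai";

const perplexity = new OpenAI({
  apiKey: process.env.PERPLEXITY_API_KEY,
  baseURL: "https://api.perplexity.ai", // Perplexity exposes an OpenAI-compatible API
});
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function askWithResearch(question: string, linkedContext: string): Promise<string> {
  // Stage 1: Perplexity as the researcher, condensing linked issues/PRs into a short brief.
  const research = await perplexity.chat.completions.create({
    model: "sonar-small-online", // placeholder model name; check Perplexity's current list
    messages: [
      { role: "system", content: "Summarize the following context so a teammate can answer the question." },
      { role: "user", content: `${question}\n\n${linkedContext}` },
    ],
  });

  // Stage 2: GPT as the thinker, answering from the condensed research brief only.
  const answer = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "Answer the contributor's question using the research brief." },
      { role: "user", content: `${question}\n\nResearch brief:\n${research.choices[0].message.content ?? ""}` },
    ],
  });

  return answer.choices[0].message.content ?? "";
}
```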

Labels: none yet
Projects: none yet
Linked issues (may be closed by merging this pull request): /research using perplexity api
2 participants