
Releases: steamship-core/steamship-langchain

0.0.14

09 Mar 01:22
c1e3c32

⚠️ Breaking Changes ⚠️

  • Removed memory.ConversationBufferWindowMemory and memory.ConversationBufferMemory. They have been replaced with memory.ChatMessageHistory.

To keep pace with changes in upstream LangChain, we have converted our Memory classes into a single ChatMessageHistory that can be used with the upstream conversational memory classes. This is a bit unfortunate, but it should allow us to stay in step with the latest LangChain developments.

We apologize for the impact.
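A rough migration sketch follows. It assumes the `ChatMessageHistory` class lives in `steamship_langchain.memory` and accepts a workspace `key` for persistence, and that the upstream `ConversationBufferMemory` accepts a `chat_memory` argument, as it did in LangChain at the time; check the current docs before copying, since the exact constructor parameters may differ. Running it also requires a configured Steamship client.

```python
# Migration sketch (assumed API -- verify names against your installed versions).
from steamship import Steamship
from langchain.memory import ConversationBufferMemory

from steamship_langchain.memory import ChatMessageHistory

client = Steamship()

# Before 0.0.14 (now removed):
#   from steamship_langchain.memory import ConversationBufferMemory

# After 0.0.14: back an upstream memory class with Steamship-persisted history.
# The "key" identifies the stored conversation; the name here is illustrative.
history = ChatMessageHistory(client=client, key="my-conversation")
memory = ConversationBufferMemory(chat_memory=history)
```

The same pattern applies to the windowed variant: wrap the Steamship-backed history in upstream `ConversationBufferWindowMemory` instead.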

What's Changed

Full Changelog: 0.0.13...0.0.14

v0.0.13 - Hotfix for VectorStore deployments

27 Feb 18:27
f6a0deb

What's Changed

Full Changelog: 0.0.12...0.0.13

v0.0.12

24 Feb 23:40
a8378af

What's Changed

Full Changelog: 0.0.11...0.0.12

Add logging callback with doc fixes

24 Feb 17:08
b7ee8a3
Pre-release
Merge pull request #16 from steamship-core/doc-fix

docs: fix document_loaders index

v0.0.11 - VectorStore and Loaders

23 Feb 17:16

What's Changed

New Contributors

Full Changelog: 0.0.10...0.0.11

Full OpenAI support

14 Feb 19:23
d3093ef

This release updates the LLM support to provide a full drop-in replacement for the OpenAI LLM in LangChain, letting users route LLM calls through the Steamship backend with only a package-name change. Thanks @EniasCailliau for the contribution.
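A minimal sketch of the "package name change" this describes, assuming the Steamship class is exposed as `steamship_langchain.llms.OpenAI` and takes a `client` argument; the prompt and `temperature` value are illustrative, and running this requires a configured Steamship client.

```python
# Drop-in replacement sketch (assumed module path -- verify locally).
# Before:
#   from langchain.llms import OpenAI
# After:
from steamship import Steamship
from steamship_langchain.llms import OpenAI

client = Steamship()

# Same constructor shape as upstream, plus the Steamship client that
# routes the actual OpenAI calls through the Steamship backend.
llm = OpenAI(client=client, temperature=0.8)
print(llm("Tell me a joke"))
```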

Better support for plugin retry logic in client

08 Feb 22:18
0a45bc9

Includes a more forgiving wait() timeout in Task handling around generation. This works better with the updated retry/backoff behavior in the LLM plugin.