Releases: rethinkdb/rethinkdb
2.1.5 — Forbidden Planet
Bug fix release
Note: If you are building RethinkDB from source and you have built older versions of RethinkDB before, you might need to run `make clean` before building RethinkDB 2.1.5.
Compatibility
- RethinkDB 2.1.5 servers cannot be mixed with servers running RethinkDB 2.1.4 or earlier
in the same cluster
Bug fixes
- Fixed a memory corruption bug that caused segmentation faults on some systems (#4917)
- Made the build system compatible with OS X El Capitan (#4602)
- Fixed spurious "Query terminated by `rethinkdb.jobs` table" errors (#4819)
- Fixed an issue that caused changefeeds to keep failing after a table finished reconfiguring (#4838)
- Fixed a race condition that resulted in a crash with the message `std::terminate() called without any exception.` when losing a cluster connection (#4878)
- Fixed a segmentation fault in the `mark_ready()` function that could occur when reconfiguring a table (#4875)
- Fixed a segmentation fault when using changefeeds on `orderBy.limit` queries (#4850)
- Made the Data Explorer handle changefeeds on `orderBy.limit` queries correctly (#4852)
- Fixed a "Branch history is incomplete" crash when reconfiguring a table repeatedly in quick succession (#4866)
- Fixed a problem that caused `indexStatus` to report results for additional indexes that were not specified in its arguments (#4868)
- Fixed a segmentation fault when running RethinkDB on certain ARM systems (#4839)
- Fixed a compilation issue in the UTF-8 unit test with recent versions of Xcode (#4861)
- Fixed an `Assertion failed: [ptr_]` error when reconfiguring tables quickly with a debug-mode binary (#4871)
- Improved the detection of unsupported values in `r.js` functions to avoid a `Guarantee failed: [!key.IsEmpty() && !val.IsEmpty()]` crash in the worker process (#4879)
- Fixed an uninitialized data access issue on shutdown (#4918)
Performance improvements
2.1.4 — Forbidden Planet
Bug fix release
Compatibility
- RethinkDB 2.1.4 servers cannot be mixed with servers running RethinkDB 2.1.1 or earlier
in the same cluster
Bug fixes
- Fixed a data corruption bug that could occur when deleting documents (#4769)
- The web UI no longer ignores errors during table configuration (#4811)
- Added a check in case `reconfigure` is called with a non-existent server tag (#4840)
- Removed a spurious debug-mode assertion that caused a server crash when trying to write to the `stats` system table (#4837)
- The `rethinkdb restore` and `rethinkdb import` commands now wait for secondary indexes to become ready before beginning the data import (#4832)
2.1.3 — Forbidden Planet
Bug fix release
Compatibility
- RethinkDB 2.1.3 servers cannot be mixed with servers running RethinkDB 2.1.1 or earlier
in the same cluster
Bug fixes
- Fixed a data corruption bug in the b-tree implementation (#4769)
- Fixed the `ssl` option in the JavaScript driver (#4786)
- Made the Ruby driver compatible with Ruby on Rails 3.2 (#4753)
- Added the `backports.ssl_match_hostname` library to the Python driver package (#4683)
- Changed the update check to use an encrypted https connection (#3988, #4643)
- Fixed access to `https` sources in `r.http` on OS X (#3112)
- Fixed an `Unexpected exception` error (#4758)
- Fixed a `Guarantee failed: [pair.second]` crash that could occur during resharding (#4774)
- Fixed a bug that caused some queries to not report an error when interrupted (#4762)
- Added a new `"_debug_recommit"` recovery option to `emergency_repair` (#4720)
- Made error reporting in the Python driver compatible with `celery` and `nose` (#4764)
- Changed the handling of outdated indexes from RethinkDB 1.13 during an import to no longer terminate the server (#4766)
Performance improvements
- Improved the latency when reading from a system table in `r.db('rethinkdb')` while the server is under load (#4773)
- Improved the parallelism of JSON encoding on the server to utilize multiple CPU cores
- Refactored JSON decoding in the Python driver to allow the use of custom JSON parsers and to speed up pseudo type conversion (#4585)
- Improved the prefetching logic in the Python driver to increase the throughput of cursors
- Changed the Python driver to use a more efficient data structure to store cursor results (#4782)
Contributors
Many thanks to external contributors from the RethinkDB community for helping
us ship RethinkDB 2.1.3. In no particular order:
- Adam Grandquist (@grandquista)
- ajose01 (@ajose01)
- Paulius Uza (@pauliusuza)
2.0.5 — Yojimbo
Deprecated version: This release is a bug fix release for the deprecated 2.0.x version branch. For new installations, we recommend using the latest version of RethinkDB (currently 2.1.2).
Bug fix release
- Added precautions to avoid secondary index migration issues in subsequent releases
- Fixed a memory leak in `r.js` (#4663)
- Added a workaround for an `eglibc` bug that caused an `unexpected address family` error on startup (#4470)
- Fixed a bug in the changefeed code that could cause crashes with the message `Guarantee failed: [active()]` (#4678)
- Fixed a bug that caused intermittent server crashes with the message `Guarantee failed: [fn_id != __null]` in combination with the `r.js` command (#4611)
- Improved the performance of the `is_empty` term (#4592)
2.1.2 — Forbidden Planet
Bug fix release
Compatibility
- RethinkDB 2.1.2 servers cannot be mixed with servers running earlier versions in the same cluster
- Changefeeds on a `get_all` query no longer return initial values. This restores the behavior from RethinkDB 2.0
Bug fixes
- Fixed an issue where writes could be acknowledged before all necessary data was written to disk
- Restored the 2.0 behavior for changefeeds on `get_all` queries to avoid various issues and incompatibilities
- Fixed an issue that caused previously migrated tables to be shown as unavailable (#4723)
- Made outdated secondary index warnings disappear once the problem is resolved (#4664)
- Made `index_create` atomic to avoid race conditions when multiple indexes were created in quick succession (#4694)
- Improved how query execution times are reported in the Data Explorer (#4752)
- Fixed a memory leak in `r.js` (#4663)
- Fixed the `Branch history is missing pieces` error (#4721)
- Fixed a race condition causing a crash with `Guarantee failed: [!send_mutex.is_locked()]` (#4710)
- Fixed a bug in the changefeed code that could cause crashes with the message `Guarantee failed: [active()]` (#4678)
- Fixed various race conditions that could cause crashes if changefeeds were present during resharding (#4735, #4734, #4678)
- Fixed a race condition causing a crash with `Guarantee failed: [val.has()]` (#4736)
- Fixed an `Assertion failed` issue when running a debug-mode binary (#4685)
- Added a workaround for an `eglibc` bug that caused an `unexpected address family` error on startup (#4470)
- Added precautions to avoid secondary index migration issues in subsequent releases
- Out-of-memory errors in the server's JSON parser are now correctly reported (#4751)
2.1.1 — Forbidden Planet
Bug fix release
- Fixed a problem where after migration, some replicas remained unavailable when reconfiguring a table (#4668)
- Removed the defunct `--migrate-inconsistent-data` command line argument (#4665)
- Fixed the slider for setting write durability during table creation in the web UI (#4660)
- Fixed a race condition in the clustering subsystem (#4670)
- Improved the handling of error messages in the testing system (#4657)
2.1.0 — Forbidden Planet
Release highlights:
- Automatic failover using a Raft-based protocol
- More flexible administration for servers and tables
- Advanced recovery features
Read the blog post for more details.
Compatibility
Data files from RethinkDB versions 1.14.0 onward will be automatically migrated.
As with any major release, back up your data files before performing the upgrade.
If you're upgrading directly from RethinkDB 1.13 or earlier, you will need to manually upgrade using `rethinkdb dump`.
Note that files from the RethinkDB 2.1.0 beta release are not compatible with this
version.
Changed handling of server failures
This release introduces a new system for dealing with server failures and network
partitions based on the Raft consensus algorithm.
Previously, unreachable servers had to be manually removed from the cluster in order to
restore availability. RethinkDB 2.1 can resolve many cases of availability loss
automatically, and keeps the cluster in an administrable state even while servers are
missing.
There are three important scenarios in RethinkDB 2.1 when it comes to restoring the
availability of a given table after a server failure:
- The table has three or more replicas, and a majority of the servers that are hosting these replicas are connected. RethinkDB 2.1 automatically elects new primary replicas to replace unavailable servers and restore availability. No manual intervention is required, and data consistency is maintained.
- A majority of the servers for the table are connected, regardless of the number of replicas. The table can be manually reconfigured using the usual commands, and data consistency is always maintained.
- A majority of servers for the table are unavailable. The new `emergency_repair` option to `table.reconfigure` can be used to restore table availability in this case.
System table changes
To reflect changes in the underlying cluster administration logic, some of the tables in
the `rethinkdb` database changed.
Changes to `table_config`:
- Each shard subdocument now has a new field `nonvoting_replicas`, which can be set to a subset of the servers in the `replicas` field.
- `write_acks` must now be either `"single"` or `"majority"`. Custom write ack specifications are no longer supported. Instead, non-voting replicas can be used to set up replicas that do not count towards the write ack requirements.
- Tables that have all of their replicas disconnected are now listed as special documents with an `"error"` field.
- Servers that are disconnected from the cluster are no longer included in the table.
- The new `indexes` field lists the secondary indexes on the given table.
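The new fields can be inspected (and `nonvoting_replicas` adjusted) through ordinary queries on the system table. A sketch, assuming a server on localhost and a hypothetical table named `users`:

```python
import rethinkdb as r  # official Python driver

conn = r.connect('localhost', 28015)

config = (r.db('rethinkdb').table('table_config')
          .filter({'name': 'users'}).nth(0).run(conn))

# Each shard document now carries 'nonvoting_replicas' alongside
# 'replicas' and 'primary_replica':
print(config['shards'][0]['nonvoting_replicas'])

# 'write_acks' is now restricted to "single" or "majority":
print(config['write_acks'])
```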
Changes to `table_status`:
- The `primary_replica` field is now called `primary_replicas` and has an array of current primary replicas as its value. While under normal circumstances only a single server will be serving as the primary replica for a given shard, there can temporarily be multiple primary replicas during handover or while data is being transferred between servers.
- The possible values of the `state` field now are `"ready"`, `"transitioning"`, `"backfilling"`, `"disconnected"`, `"waiting_for_primary"` and `"waiting_for_quorum"`.
- Servers that are disconnected from the cluster are no longer included in the table.
Changes to `current_issues`:
- The issue types `"table_needs_primary"`, `"data_lost"`, `"write_acks"`, `"server_ghost"` and `"server_disconnected"` can no longer occur.
- A new issue type `"table_availability"` was added and appears whenever a table is missing at least one server. Note that no issue is generated if a server which is not hosting any replicas disconnects.
Changes to `cluster_config`:
- A new document with the id `"heartbeat"` allows configuring the heartbeat timeout for intracluster connections.
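The timeout can be changed with a regular `update` on that document. A sketch, assuming a server on localhost and the field name documented for this release:

```python
import rethinkdb as r  # official Python driver

conn = r.connect('localhost', 28015)

# Raise the intracluster heartbeat timeout to 30 seconds:
r.db('rethinkdb').table('cluster_config') \
    .get('heartbeat') \
    .update({'heartbeat_timeout_secs': 30}).run(conn)
```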
New ReQL error types
RethinkDB 2.1 introduces new error types that allow you to handle different error classes
separately in your application if you need to. You can find the
complete list of new error types in the documentation.
As part of this change, ReQL error types now use the `Reql` name prefix instead of `Rql` (for example `ReqlRuntimeError` instead of `RqlRuntimeError`).
The old type names are still supported in our drivers for backwards compatibility.
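As an illustration, the finer-grained types let availability problems be handled separately from other runtime errors. A sketch in Python (the exact classes worth catching should be checked against the error-types documentation; a server on localhost is assumed):

```python
import rethinkdb as r  # official Python driver

conn = r.connect('localhost', 28015)

try:
    r.table('missing_table').count().run(conn)
except r.errors.ReqlOpFailedError as err:
    # e.g. a table or server being unavailable
    print('operation failed:', err)
except r.errors.ReqlRuntimeError as err:
    # any other runtime error; the old Rql* names remain as aliases
    print('runtime error:', err)
```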
Other API-breaking changes
- `.split('')` now treats the input as UTF-8 instead of an array of bytes
- `null` values in compound indexes are no longer discarded
- The new `read_mode="outdated"` optional argument replaces `use_outdated=True`
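A sketch of the first and third changes in Python (table name hypothetical; a server on localhost is assumed):

```python
import rethinkdb as r  # official Python driver

conn = r.connect('localhost', 28015)

# read_mode replaces the old use_outdated=True flag:
r.table('users', read_mode='outdated').run(conn)

# split('') now splits on UTF-8 characters rather than raw bytes,
# so multi-byte characters come back intact:
r.expr(u'héllo').split('').run(conn)
```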
Deprecated functionality
The older protocol-buffer-based client protocol is deprecated in this release. RethinkDB
2.2 will no longer support clients that still use it. All "current" drivers listed on
the drivers page use the new JSON-based protocol and will continue to work
with RethinkDB 2.2.
New features
- Server
  - Added automatic failover and semi-lossless rebalance based on Raft (#223)
  - Backfills are now interruptible and reversible (#3886, #3885)
  - `table.reconfigure` now works even if some servers are disconnected (#3913)
  - Replicas can now be marked as voting or non-voting (#3891)
  - Added an emergency repair feature to restore table availability if consensus is lost (#3893)
  - Reads can now be made against a majority of replicas (#3895)
  - Added an emergency read mode that extracts data directly from a given replica for data recovery purposes (#4388)
  - Servers with no responsibilities can now be removed from clusters without raising an issue (#1790)
  - Made the intracluster heartbeat timeout configurable (#4449)
- ReQL
- All drivers
- Python driver
Improvements
- Server
  - Improved the handling of cluster membership and removal of servers (#3262, #3897, #1790)
  - Changed the formatting of the `table_status` system table (#3882, #4196)
  - Added an `indexes` field to the `table_config` system table (#4525)
  - Improved efficiency by making `datum_t` movable (#4056)
  - ReQL backtraces are now faster and smaller (#2900)
  - Replaced cJSON with rapidjson (#3844)
  - Failed meta operations are now transparently retried (#4199)
  - Added more detailed logging of cluster events (#3878)
  - Improved unsaved data limit throttling to increase write performance (#4441)
  - Improved the performance of the `is_empty` term (#4592)
  - Small backfills are now prioritized to make tables available more quickly after a server restart (#4383)
  - Reduced the memory requirements when backfilling large documents (#4474)
  - Changefeeds using the `squash` option now send batches early if the changefeed queue gets too full (#3942)
- ReQL
  - `.split('')` is now UTF-8 aware (#2518)
  - Improved the behaviour of compound index values containing `null` (#4146)
  - Errors now distinguish failed writes from indeterminate writes (#4296)
  - `r.union` is now a top-level term (#4030)
  - `condition.branch(...)` now works just like `r.branch(condition, ...)` (#4438)
  - Improved the detection of non-atomic `update` and `replace` arguments (#4582)
- Web UI
- JavaScript driver
- Python driver
- Ruby driver
  - TCP keepalive is now enabled for all connections (#4572)
Bug fixes
- `time_of_date` and `date` now respect timezones (#4149)
- Added code to work around a bug in some versions of GLIBC and EGLIBC (#4470)
- Updated the OS X uninstall script to avoid spurious error messages (#3773)
- Fixed a starvation issue with squashing changefeeds (#3903)
- `has_fields` now returns a selection when called on a table (#2609)
- Fixed a bug that caused intermittent server crashes with the message `Guarantee failed: [fn_id != __null]` in combination with the `r.js` command (#4611)
command (#4611) - Web UI
- Python driver
  - Fixed a missing argument error (#4402)
- JavaScript driver
- Ruby driver
  - Made the EventMachine API raise an error when a connection is closed while handlers are active (#4626)
Contributors
Many thanks to external contributors from the RethinkDB community for helping
us ship RethinkDB 2.1. In no particular order:
- Thomas Kluyver (@takluyver)
- Jonathan Phillips (@jipperinbham)
- Yohan Graterol (@yograterol)
- Adam Grandquist (@grandquista)
- Peter Hamilton (@hamiltop)
- Marshall Cottrell (@marshall007)
- Elias Levy (@eliaslevy)
- Ian Beringer (@ianberinger)
- Jason Dobry (@jmdobry)
...
2.1.0 beta
This is a beta release for RethinkDB 2.1. It is not for production use and has known
bugs. Please do not use this version for production data.
We are looking forward to your bug reports on GitHub
or on our mailing list.
Release highlights:
- Automatic failover using a Raft-based protocol
- More flexible administration for servers and tables
- Advanced recovery features
Read the blog post for more details.
Download
Update 07/27/2015: The server downloads have been updated to include additional bug fixes and improvements.
1. Download the server
- Source tarball
- OS X 64 bit dmg
- CentOS 6 and 7 64 bit | 32 bit
- Ubuntu 10.04 lucid 64 bit | 32 bit
- Ubuntu 12.04 precise 64 bit | 32 bit
- Ubuntu 13.10 saucy 64 bit | 32 bit
- Ubuntu 14.04 trusty 64 bit | 32 bit
- Ubuntu 14.10 utopic 64 bit | 32 bit
- Ubuntu 15.04 vivid 64 bit | 32 bit
- Debian wheezy 64 bit | 32 bit
- Debian jessie 64 bit | 32 bit
2. Download a driver
JavaScript
$ npm install http://download.rethinkdb.com/dev/2.1.0-0BETA1/rethinkdb-2.1.0-BETA1.nodejs.tgz
Python
$ pip install http://download.rethinkdb.com/dev/2.1.0-0BETA1/rethinkdb-2.1.0beta1.python.tar.gz
Ruby
$ wget http://download.rethinkdb.com/dev/2.1.0-0BETA1/rethinkdb-2.1.0.beta.1.gem
$ gem install rethinkdb-2.1.0.beta.1.gem
Compatibility
This beta release does not include automatic migration of data
directories from older versions of RethinkDB. The final release of RethinkDB 2.1 will
automatically migrate data from RethinkDB 1.14 and up.
If you're upgrading directly from RethinkDB 1.13 or earlier, you will need to manually upgrade using `rethinkdb dump`.
Changed handling of server failures
This release introduces a new system for dealing with server failures and network
partitions based on the Raft consensus algorithm.
Previously, unreachable servers had to be manually removed from the cluster in order to
restore availability. RethinkDB 2.1 can resolve many cases of availability loss
automatically, and keeps the cluster in an administrable state even while servers are
missing.
There are three important scenarios in RethinkDB 2.1 when it comes to restoring the
availability of a given table after a server failure:
- The table has three or more replicas, and a majority of the servers that are hosting these replicas are connected. RethinkDB 2.1 automatically elects new primary replicas to replace unavailable servers and restore availability. No manual intervention is required, and data consistency is maintained.
- A majority of the servers for the table are connected, regardless of the number of replicas. The table can be manually reconfigured using the usual commands, and data consistency is always maintained.
- A majority of servers for the table are unavailable. The new `emergency_repair` option to `table.reconfigure` can be used to restore table availability in this case.
System table changes
To reflect changes in the underlying cluster administration logic, some of the tables in
the `rethinkdb` database changed.
Changes to `table_config`:
- Each shard subdocument now has a new field `nonvoting_replicas`, which can be set to a subset of the servers in the `replicas` field.
- `write_acks` must now be either `"single"` or `"majority"`. Custom write ack specifications are no longer supported. Instead, non-voting replicas can be used to set up replicas that do not count towards the write ack requirements.
- Tables that have all of their replicas disconnected are now listed as special documents with an `"error"` field.
- Servers that are disconnected from the cluster are no longer included in the table.
Changes to `table_status`:
- The `primary_replica` field is now called `primary_replicas` and has an array of current primary replicas as its value. While under normal circumstances only a single server will be serving as the primary replica for a given shard, there can temporarily be multiple primary replicas during handover or while data is being transferred between servers.
- The possible values of the `state` field now are `"ready"`, `"transitioning"`, `"backfilling"`, `"disconnected"`, `"waiting_for_primary"` and `"waiting_for_quorum"`.
- Servers that are disconnected from the cluster are no longer included in the table.
Changes to `current_issues`:
- The issue types `"table_needs_primary"`, `"data_lost"`, `"write_acks"`, `"server_ghost"` and `"server_disconnected"` can no longer occur.
- A new issue type `"table_availability"` was added and appears whenever a table is missing at least one server. Note that no issue is generated if a server which is not hosting any replicas disconnects.
Other API-breaking changes
- `.split('')` now treats the input as UTF-8 instead of an array of bytes
- `null` values in compound indexes are no longer discarded
- The new `read_mode="outdated"` optional argument replaces `use_outdated=True`
New features
- Server
  - Added automatic failover and semi-lossless rebalance based on Raft (#223)
  - Backfills are now interruptible and reversible (#3886, #3885)
  - `table.reconfigure` now works even if some servers are disconnected (#3913)
  - Replicas can now be marked as voting or non-voting (#3891)
  - Added an emergency repair feature to restore table availability if consensus is lost (#3893)
  - Reads can now be made against a majority of replicas (#3895)
  - Added an emergency read mode that extracts data directly from a given replica for data recovery purposes (#4388)
  - Servers with no responsibilities can now be removed from clusters without raising an issue (#1790)
- ReQL
  - Added `ceil`, `floor` and `round` (#866)
- All drivers
- Python driver
Improvements
- Server
  - Improved the handling of cluster membership and removal of servers (#3262, #3897, #1790)
  - Changed the formatting of the `table_status` system table (#3882, #4196)
  - Added an `indexes` field to the `table_config` system table (#4525)
  - Improved efficiency by making `datum_t` movable (#4056)
  - ReQL backtraces are now faster and smaller (#2900)
  - Replaced cJSON with rapidjson (#3844)
  - Failed meta operations are now transparently retried (#4199)
  - Added more detailed logging of cluster events (#3878)
  - Improved unsaved data limit throttling (#4441)
- ReQL
- Web UI
- JavaScript driver
- Python driver
  - Added an `r.__version__` property (#3100)
Bug fixes
- `time_of_date` and `date` now respect timezones (#4149)
- Added code to work around a bug in some versions of GLIBC and EGLIBC (#4470)
- Python driver
  - Fixed a missing argument error (#4402)
- JavaScript driver
  - Made the handling of the `db` optional argument to `run` consistent with the Ruby and Python drivers (#4347)
Contributors
Many thanks to external contributors from the RethinkDB community for helping
us ship RethinkDB 2.1. In no particular order:
- Thomas Kluyver (@takluyver)
- Jonathan Phillips (@jipperinbham)
- Yohan Graterol (@yograterol)
- Adam Grandquist (@grandquista)
- Peter Hamil...
2.0.4 — Yojimbo
Bug fix release
- Fixed the version number used by the JavaScript driver (#4436)
- Fixed a bug that caused crashes with a "Guarantee failed: [stop]" error (#4430)
- Fixed a latency issue when processing indexed `distinct` queries over low-cardinality data sets (#4362)
- Changed the implementation of compile time assertions (#4346)
- Changed the Data Explorer to render empty results more clearly (#4110)
- Fixed a linking issue on ARM (#4064)
- Improved the message showing the query execution time in the Data Explorer (#3454, #3927)
- Fixed an error that happened when calling `info` on an ordered table stream (#4242)
- Fixed a bug that caused an error to be thrown for certain streams in the Data Explorer (#4242)
- Increased the coroutine stack safety buffer to detect stack overflows in optarg processing (#4473)
2.0.3 — Yojimbo
Bug fix release
- Fixed a bug that broke autocompletion in the Data Explorer (#4261)
- No longer crash for certain types of stack overflows during query execution (#2639)
- No longer crash when returning a function from `r.js` (#4190)
- Fixed a race condition when closing cursors in the JavaScript driver (#4240)
- Fixed a race condition when closing connections in the JavaScript driver (#4250)
- Added support for building with GCC 5.1 (#4264)
- Improved handling of coroutine stack overflows on OS X (#4299)
- Removed an invalid assertion in the server (#4313)