DolphinDB Release Notes

============================================================================================

Note: This README file contains release notes for DolphinDB Server version 1.30.22 and earlier. As of version 1.30.23, this file is no longer maintained. For release notes and documentation on the latest DolphinDB Server, please refer to the new DolphinDB Documentation.

============================================================================================

DolphinDB Server

Version: 1.30.22     Compatibility Level 1 with 1.30.21     For details, see Compatibility Changes in Version 1.30.22

Release Date: 2023-07-20

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary | Linux ARM64

Version: 1.30.21     Compatibility Level 2 with 1.30.20

Release Date: 2023-02-15

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary | Linux ARM64

Version: 1.30.20     Compatibility Level 2 with 1.30.19

Release Date: 2022-09-30

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary | Linux ARM64

Version: 1.30.19     Compatibility Level 1 with 1.30.18

Release Date: 2022-07-14

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary | Linux ARM64

Version: 1.30.18     Compatibility Level 2 with 1.30.17

Release Date: 2022-05-09

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.17     Compatibility Level 3 with 1.30.16

Release Date: 2022-03-29

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.16

Release Date: 2022-01-10

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.15

Release Date: 2021-11-22

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.14

Release Date: 2021-11-05

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.13

Release Date: 2021-08-25

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.12

Release Date: 2021-07-31

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.11

Release Date: 2021-06-15

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.10

Release Date: 2021-05-31

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.9

Release Date: 2021-05-17

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.8

Release Date: 2021-04-28

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.7

Release Date: 2021-04-21

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.6

Release Date: 2021-04-07

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary

Version: 1.30.5

Release Date: 2021-03-30

Linux64 binary

Version: 1.30.4

Release Date: 2021-03-29

Linux64 binary

Version: 1.30.3

Release Date: 2021-02-28

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.2

Release Date: 2021-01-25

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.1

Release Date: 2021-01-12

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

Version: 1.30.0

Release Date: 2020-12-29

Linux64 binary | Linux64 JIT binary | Linux64 ABI binary | Windows64 binary | Windows64 JIT binary

New Features

  • Added new function appendTuple! to append a tuple to another. (1.30.22.4)

  • Added new configuration parameter appendTupleAsAWhole to specify whether the tuple should be appended as an embedded tuple element, or if each of its elements should be appended independently to the target tuple. (1.30.22.4)
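
A minimal sketch of how these two additions interact (the default value of appendTupleAsAWhole is not stated above and is left as an assumption):

```
x = (1 2, 3 4)       // a tuple holding two vectors
y = (5 6, 7 8)
// with appendTupleAsAWhole enabled, y is appended as one embedded tuple element;
// otherwise each element of y is appended to x independently
appendTuple!(x, y)
```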

  • Added login information in logs, including login user, IP, port, status, etc. (1.30.22.4)

  • Added privilege VIEW_OWNER to support a user/group to create function views using addFunctionView. (1.30.22.4)

  • Support for partition pruning when the partitioning column is of the NANOTIMESTAMP type. (1.30.22.4)

  • Added new parameter isSequential to the plugin.txt to mark a function as order-sensitive or not. (1.30.22.4)

  • Added the cumdenseRank function to perform dense ranking of elements within cumulative windows. (1.30.22.3)

  • Added a new "dataInterval" option to the triggeringPattern parameter of the createCrossSectionalEngine function. This option enables calculations to be triggered based on timestamps from the input data. (1.30.22.3)

  • Added function parseJsonTable to parse a JSON object to an in-memory table. (1.30.22.2)
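
A hedged example of parseJsonTable based on the description (column type inference is assumed):

```
s = '[{"sym":"AAPL","price":182.5},{"sym":"MSFT","price":331.2}]'
t = parseJsonTable(s)              // in-memory table with columns sym and price
select * from t where price > 200
```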

  • Added new configuration parameter tcpUserTimeout to set the socket option TCP_USER_TIMEOUT. (1.30.22.2)

  • Removed function getClusterReplicationMetrics and added function getSlaveReplicationQueueStatus as its replacement. getSlaveReplicationQueueStatus retrieves the status of each execution queue in the slave clusters. (1.30.22.2)

  • Added configuration parameter clusterReplicationQueue to set the number of execution queues on each controller of the slave clusters. (1.30.22.2)

  • Added configuration parameter clusterReplicationWorkerNum to set the number of workers on each data node of the slave clusters. (1.30.22.2)

  • Added configuration parameter enableCoreDump to enable core dumps. It is only supported on Linux. (1.30.22)

  • Added configuration parameter disableCoreDumpOnShutdown to specify whether to generate core dumps on a graceful shutdown. It is only supported on Linux. (1.30.22)

  • Added configuration parameter allowMissingPartitions to specify the behavior when incoming data contains new partition values that do not match any existing partitions. (1.30.22)

  • Added function listRemotePlugins to obtain a list of available plugins. Added function installPlugin to download a plugin. (1.30.22)

  • Added configuration parameter volumeUsageThreshold to set the upper limit of the disk usage of a data node. (1.30.22)

  • Added function writeLogLevel to write logs of the specified level to the log file. (1.30.22)

  • Added function sessionWindow to group time-series data based on the session intervals. (1.30.22)

  • Added function summary to generate summary statistics of input data, including min, max, count, avg, std, and percentiles. (1.30.22)

  • Added functions encodeShortGenomeSeq and decodeShortGenomeSeq to encode and decode DNA sequences. (1.30.22)

  • Added function genShortGenomeSeq to perform DNA sequence encoding within a sliding window. (1.30.22)

  • Added function gramSchmidt to implement the Gram–Schmidt orthonormalization. (1.30.22)

  • Added function lassoBasic, which is functionally equivalent to lasso but takes vectors as input arguments. (1.30.22)

  • Added 26 TopN functions: (1.30.22)

    • m-functions: mskewTopN, mkurtosisTopN
    • cum-functions: cumsumTopN, cumavgTopN, cumstdTopN, cumstdpTopN, cumvarTopN, cumvarpTopN, cumbetaTopN, cumcorrTopN, cumcovarTopN, cumwsumTopN, cumskewTopN, cumkurtosisTopN
    • tm-functions: tmsumTopN, tmavgTopN, tmstdTopN, tmstdpTopN, tmvarTopN, tmvarpTopN, tmbetaTopN, tmcorrTopN, tmcovarTopN, tmwsumTopN, tmskewTopN, tmkurtosisTopN
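
As a calling-convention sketch for this family (the argument order X, S, top for the cum-functions is an assumption, made by analogy with the moving TopN functions introduced in 1.30.16):

```
price = 10.2 10.5 9.8 10.9 11.2 10.1
vol   = 100 300 200 500 400 250
// assumed signature cumsumTopN(X, S, top): cumulative sum of the top-3
// price values ranked by vol within each cumulative window
cumsumTopN(price, vol, 3)
```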
  • Added function initcap to set the first letter of each word in a string to uppercase and the rest to lowercase. (1.30.22)
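
For example (expected output per the description):

```
initcap("hello WORLD of dolphindb")   // => "Hello World Of Dolphindb"
```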

  • Added functions splrep and splev for cubic spline interpolation. (1.30.22)

  • Added function scs to compute the optimal solution of linearly constrained linear or quadratic programming problems. (1.30.22)

  • Added function temporalSeq to generate time series at specified intervals. (1.30.22)

  • Added functions base64Encode and base64Decode to encode and decode Base64 digits. (1.30.22)
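
A quick round trip (the decoded return type is assumed to be a string):

```
enc = base64Encode("DolphinDB")   // => "RG9scGhpbkRC"
base64Decode(enc)                 // => "DolphinDB"
```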

  • Added function addFunctionTypeInferenceRule to specify the inference rule of user-defined functions in DolphinDB JIT version. (1.30.22)

  • Added support for COMPLEX data type in DolphinDB JIT version. (1.30.22)

  • Added function createStreamDispatchEngine to create a streaming data dispatch engine. (1.30.22)

  • Added new configuration parameter logicOrIgnoreNull. The default value is true, which means to ignore NULL values in the operands. It should be set to false if you need the behavior of the or function to be consistent with old versions. (1.30.21.4)

  • is null is now supported in a case when clause. (1.30.21.4)
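
A sketch of the newly supported predicate (the table and its columns are hypothetical):

```
t = table(1 2 3 as id, 10 NULL 30 as qty)
select id, case when qty is null then 0 else qty end as qtyFilled from t
```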

  • Added configuration parameter mvccCheckpointThreshold to set the threshold for the operations to trigger a checkpoint. (1.30.21.3)

  • Added function forceMvccCheckpoint to manually trigger a checkpoint. (1.30.21.3)

  • Added license server to manage resources for nodes specified by license. (1.30.21)

    • Related functions: getLicenseServerResourceInfo, getRegisteredNodeInfo.
    • Related configuration parameters: licenseServerSite, bindCores.
  • Added configuration parameter thirdPartyAuthenticator to authenticate user login over the third-party system. (1.30.21)

  • Server version is now automatically checked when a plugin is loaded. (1.30.21)

  • Support asynchronous replication across clusters for data consistency and offsite disaster recovery. (1.30.21)

  • Support Apache Arrow format. (1.30.21)

  • Added command setMaxConnections to dynamically configure the maximum connections on the current node. (1.30.21)

  • Added function demean to center a data set. This function can be used as a state function in the reactive state engine. (1.30.21)

  • Added parameter ordered for functions dict and syncDict to create an ordered dictionary where the key-value pairs are sorted in the same order as the input. (1.30.21)

  • Added support for the following binary operations (1.30.21):

    • dictionary and dictionary
    • scalar and dictionary
    • vector and dictionary
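
A hedged sketch of the dictionary arithmetic above (key alignment behavior is assumed from the description):

```
d1 = dict(`a`b`c, 1 2 3)
d2 = dict(`b`c`d, 10 20 30)
d1 + d2    // dictionary-dictionary operation, assumed to align on keys
d1 * 2     // scalar-dictionary operation applied to every value
```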
  • Added cumulative function cumnunique to obtain the cumulative count of unique elements. This function can be used as a state function in the reactive state engine. (1.30.21)
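
For example (expected output per the description):

```
cumnunique(3 1 3 2 1)   // => [1,2,2,3,3]
```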

  • Added function stringFormat to generate strings with specified values and placeholders. (1.30.21)

  • Added function nanInfFill to replace NaN and Inf values. (1.30.21)

  • Added function byColumn to apply functions to each column of a matrix. The function is also supported in stream processing. (1.30.21)

  • Added function volumeBar for data grouping based on the cumulative sum. (1.30.21)

  • Added function enlist to return a vector (or tuple) with a scalar (or vector) as its element. (1.30.21)
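
For example (per the description above):

```
enlist(5)       // => a vector whose single element is 5
enlist(1 2 3)   // => a tuple whose single element is the vector [1,2,3]
```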

  • Added operator eachAt(@) to access elements of vector/tuple/matrix/table/dictionary by index. (1.30.21)

  • Added new functions latestKeyedTable and latestIndexedTable to create a keyed table or indexed table with a time column. When a new record is appended to the table, it only overwrites the existing record with the same primary key if its timestamp is larger than the original one. (1.30.21)

  • Enhanced support for standard SQL features (1.30.21):

    • Clauses: drop, alter, case when, union/union all, join on, with as, create local temporary table
    • Predicates: (not) between and, is null/is not null, exists/not exists, any/all
    • Functions: nullIf, coalesce
    • Keywords: distinct
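
A hedged sketch combining several of the constructs listed above (the table name and column definitions are hypothetical, and the exact create-table column syntax is an assumption):

```
create local temporary table tmp (id INT, val DOUBLE)
insert into tmp values(1, 1.5)
insert into tmp values(2, double(NULL))

select distinct id from tmp
select id, coalesce(val, 0.0) as val from tmp
select * from tmp where id between 1 and 10 and val is not null
```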
  • Support multiple joins, join with table aliases, and join with a table object returned by a SQL subquery. (1.30.21)

  • SQL select can select a constant without specifying an alias, and the value will be used as the column name. (1.30.21)

  • SQL predicates and operators can be applied to the result table returned by a SQL subquery. (1.30.21)

  • Added configuration parameter oldChunkVersionRetentionTime to specify the retention time for old chunk versions in the system. (1.30.21)

  • Support built-in trading calendars of major exchanges and user-defined trading calendars. These calendars can be used in functions temporalAdd, resample, asFreq, and transFreq for frequency conversion. (1.30.21)

    • Related configuration parameter: marketHolidayDir
    • Related functions: addMarketHoliday, updateMarketHoliday, and getMarketCalendar to add, update and get user-defined trading calendars.
  • Added functions genericStateIterate and genericTStateIterate to iterate over streaming data with a sliding window. (1.30.21)

  • Support if-else statement in the reactive state engine. (1.30.21)

  • Added new function truncate for deleting the data in a DFS table while keeping the table schema. (1.30.20)

  • Added new function checkBackup for checking the integrity of backup files. Added new function getBackupStatus for displaying the detailed information about database backup and restore jobs. (1.30.20)

  • Added new functions backupDB, restoreDB, backupTable, and restoreTable for backing up / restoring an entire database or table. (1.30.20)

  • Added new configuration parameter logRetentionTime for specifying the system log retention period. (1.30.20)

  • Added new function triggerNodeReport for triggering a chunk information report for the specified data node. (1.30.20)

  • Added new function getUnresolvedTxn for getting the transactions in the resolution phase. (1.30.20)

  • The stream engine parser (streamEngineParser) now supports specifying user-defined functions with nested function as its metrics. (1.30.20)

  • Added new function conditionalIterate for recursive computation of metrics through conditional iteration. This function can only be used in the reactive state stream engine (createReactiveStateEngine). (1.30.20)

  • Added new function stateMavg for calculating the moving average based on previous results. This function can only be used in the reactive state stream engine (createReactiveStateEngine). (1.30.20)

  • Added new function stateIterate for linear recursion by linear iteration. This function can only be used in the reactive state stream engine (createReactiveStateEngine). (1.30.20)

  • Function mmaxPositiveStreak can now be used in the reactive state stream engine (createReactiveStateEngine). (1.30.20)

  • Window join engine (createWindowJoinEngine): When the parameter window=0:0, the size of the calculation window over the right table is determined by the difference between the timestamps of the corresponding record in the left table and its most recent record. (1.30.20)

  • Added new function regroup for grouped aggregation over a matrix based on user-specified column and/or row labels. (1.30.20)

  • Added new functions mifirstNot and milastNot for returning the index of the first/last non-NULL element in a sliding window. (1.30.20)

  • Added new function loc for accessing the rows and columns of a matrix by label(s) or a Boolean vector. (1.30.20)

  • Added new function til for creating a vector of consecutive integers starting from 0. (1.30.20)
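
For example:

```
til(5)   // => [0,1,2,3,4]
```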

  • Added new functions pack and unpack for packing and unpacking binary data. (1.30.20)

  • Added new function align for aligning two matrices based on row labels and/or column labels using the specified join method. (1.30.20)

  • DolphinDB (JIT) now supports accessing vector elements by index which can be a vector or a pair. (1.30.20)

  • Added new configuration parameters memLimitOfQueryResult and memLimitOfTaskGroupResult to restrict the memory usage of the intermediate and final results of queries; new function getQueryStatus to monitor the memory usage and execution status of the query. (1.30.19)

  • Added new functions isPeak and isValley to determine if the current element is the peak/valley of the neighboring elements. (1.30.19)

  • Added new function rowAt(X, Y), which returns the element in each row of X at the index specified by the corresponding element of Y. (1.30.19)

  • Added new functions rowImin and rowImax to get the index of the extreme value in each row. (1.30.19)

  • Added new machine learning function gmm to support Gaussian mixture model (GMM) clustering algorithms. (1.30.19)

  • Added new function valueChanged to detect the change between elements by comparing the current element with adjacent elements. (1.30.19)

  • Added new functions msum2 and tmsum2 to calculate the sum of squares in a sliding window. (1.30.19)

  • Added new functions prevState and nextState to find the element with a different state before/after the current element. (Consecutive elements with the same value are considered to be of the same state.) (1.30.19)

  • Added new function getSupportBundle. It returns a support bundle file containing system configuration and database information. (1.30.19)

  • Added new functions topRange and lowRange. For each element in X, return the maximum length of a window to the left of X where it is the max/min. The functions are also supported in the reactive state engine (createReactiveStateEngine). (1.30.19)

  • Added new state function cumPositiveStreak, supported in the reactive state engine (createReactiveStateEngine). (1.30.19)

  • Added a new streaming engine, the dual ownership reactive state engine (createDualOwnershipReactiveStateEngine), which supports parallel computing of data with 2 grouping methods and different metrics. (1.30.19)

  • Introduced new table object "IPCInMemoryTable", an interprocess in-memory table. Added related functions createIPCInMemoryTable, loadIPCInMemoryTable, dropIPCInMemoryTable and readIPCInMemoryTable. Interprocess in-memory tables can be used in streaming scenarios to enable efficient data transfer between the DolphinDB server and a client on the same machine. (1.30.19)

  • Added new function stretch to stretch a vector evenly to the specified length. (1.30.19)
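
A small example consistent with the description (how elements are distributed when the target length is not an exact multiple is an assumption):

```
stretch(1 2 3, 6)   // => [1,1,2,2,3,3]
```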

  • Added new function getTransactionStatus to get the status of transactions. Added new command imtForceGCRedolog to skip the garbage collection of a transaction with the specified ID. (1.30.19)

  • Added new module "ops" for database operations. This module contains some commonly-used scripts for operations such as cancelling unfinished jobs in the cluster, viewing disk usage of a DFS table, deleting recovering partitions, closing inactive sessions, etc. (1.30.19)

  • Added new function setLogLevel to dynamically adjust the log level on the current node. (1.30.19)

  • Added new function cells to retrieve multiple cells from a matrix by the specified row and col indices. (1.30.18)

  • Added new function randDiscrete for sampling from a discrete probability distribution. (1.30.18)

  • Added new functions dynamicGroupCumsum and dynamicGroupCumcount, and their state functions in the reactive state streaming engine. (1.30.18)

  • Added new function createDistributedInMemoryTable to create a distributed in-memory table. (1.30.18)

  • Support tiered storage to store cold data on slow hard disks or object storage (Amazon S3). Tiered data is read-only. (1.30.18)

  • Added transaction statement to encapsulate multiple SQL statements on an in-memory table or a shared table into one transaction. (1.30.18)
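
A hedged sketch of the transaction statement on a shared in-memory table (the block syntax is assumed from the description):

```
t = table(1..3 as id, 10 20 30 as qty)
share t as st
transaction {
    insert into st values(4, 40)
    update st set qty = qty + 1 where id = 1
}
```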

  • Added new function getRecoveryWorkerNum to obtain the number of recovery workers. (1.30.17)

  • Added new command resetRecoveryWorkerNum to dynamically modify the number of recovery workers online. (1.30.17)

  • You can execute command kill -15 $PID or use the Web-based Cluster Manager for a graceful shutdown. (1.30.17)

  • Added new function imtUpdateChunkVersionOnDataNode to update the version number of the specified chunk on the data node. It is used to maintain the version consistency among multiple chunk replicas or between the data node and controller in a cluster. (1.30.17)

  • You can replay multiple tables with different schemas to a stream table and sort the data in chronological order. (1.30.17)

  • Added new function concatMatrix to concatenate multiple matrices horizontally or vertically. (1.30.17)

  • Added new higher-order functions firstHit and ifirstHit to obtain the first element in X that satisfies the specified condition. (1.30.17)

  • Added new function getCurrentSessionAndUser to obtain the sessionID and userID of the current session. (1.30.17)

  • Support standard SQL Join syntax. (1.30.17)

  • Added SQL keyword alter to add a new column to an existing table. (1.30.17)

  • Added SQL keyword create to create a database or a table. (1.30.17)

  • The type of data written into a temporal column of an in-memory table will be automatically converted to the data type of the column. (1.30.16)

  • Tables in the same partition can now be updated, written and deleted concurrently. (1.30.16)

  • Data node can now be recovered online with the latest data from other nodes. You can also enable asynchronous recovery via configuration. (1.30.16)

  • You can now use a new tag [HINT_EXPLAIN] in your SQL statement to return a JSON string indicating the execution process of the statement. (1.30.16)
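
For instance (the database and table are hypothetical):

```
select [HINT_EXPLAIN] * from loadTable("dfs://demo", "pt") where id = 100
```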

  • You can now manage and query all computing tasks in a cluster. (1.30.16)

  • Added new function streamEngineParser to decompose a cross-sectional factor into a pipeline with multiple built-in streaming engines. This function parses the specified metrics to construct a calculation pipeline, where multiple built-in engines process the streaming data in sequence. (1.30.16)

  • Added new function existSubscriptionTopic to check whether a subscription topic has been created. (1.30.16)

  • Added new function createLookupJoinEngine to support left join of a stream table and a table with infrequent data updates. (1.30.16)

  • Added new function moveChunksAcrossVolume. When a new volume is added, use this function to move chunks in the old volume to the new one. (1.30.16)

  • When a new volume is added, you can now use function resetDBDirMeta to move the old volume's meta log to the new one. (1.30.16)

  • Added 10 new TopN functions, msumTopN, mavgTopN, mstdpTopN, mstdTopN, mvarTopN, mvarpTopN, mcorrTopN, mbetaTopN, mcovarTopN, and mwsumTopN. These functions can be used as state functions in the reactive state engine. (1.30.16)

  • Added new functions makeKey and makeOrderedKey to combine multiple columns into a BLOB column, so it can be used as the key of a dictionary or a set. (1.30.16)

  • Added new higher-order function aggrTopN, which sorts data based on the specified sorting column and returns aggregate calculation results of the first N rows in the table. (1.30.16)
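
A hedged sketch of aggrTopN (the parameter order func, funcArgs, sortingCol, top, ascending is an assumption):

```
ret = 0.010 0.020 0.030 0.015 0.005
vol = 100 500 200 400 300
// average of ret over the 3 rows with the largest vol
aggrTopN(avg, ret, vol, 3, false)
```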

  • Added new higher-order functions window and twindow for more general scenarios than move and tmove, with slightly different handling of window boundaries. (1.30.16)

  • Added new configuration parameter raftElectionTick, which specifies the waiting time to receive a heartbeat from the leader in a raft group before the followers switch to a new leader. Added new functions setCacheEngineMemSize, setTimeoutTick, setMaxMemSize, setReservedMemSize and setMaxBlockSizeForReservedMemory to support modifying the associated configuration online. (1.30.16)

  • Added new function loadNpz to import .npz files from NumPy. (1.30.16)

  • Added new function suspendRecovery to suspend node recovery tasks and added resumeRecovery to resume recovery tasks. (1.30.16)

  • Added new functions rowCorr, rowCovar, rowBeta, rowWsum and rowWavg for row-based calculation. (1.30.16)

  • Added new function fflush to flush buffer data to your file system. (1.30.16)

  • User-defined anonymous aggregate functions are now available. (1.30.15)

  • Added new configuration parameter postStart to support a postStart.dos file, enabling the server to load scheduled jobs upon startup. (1.30.15)

  • Added new configuration parameters reservedMemSize and maxBlockSizeForReservedMemory to specify the maximum size of memory blocks that can be allocated to the application when the memory usage is close to maxMemSize to avoid crashes due to OOM. (1.30.15)

  • Added new functions such as cumfirstNot, cumlastNot, mfirst and mlast. They can be used as state functions in the reactive state engine. (1.30.15)

  • Added new function oneHot to implement one hot encoding. (1.30.15)

  • Added new command setAtomicLevel to set the atomic parameter of a database. If atomic is set to 'CHUNK', concurrent writes to a chunk are allowed, but atomicity is then guaranteed only at the chunk level, not at the transaction level. (1.30.15)

  • Streaming engines ReactiveStateEngine, AnomalyDetectionEngine, SessionWindowEngine, DailyTimeSeriesEngine and TimeSeriesEngine now support high availability. (1.30.14)

  • Asynchronous replication across clusters is now available. (1.30.14)

  • Added new functions covarMatrix and corrMatrix to calculate the pairwise covariance and correlation. The calculation is 1 to 2 orders of magnitude faster than using functions cross and covar/corr. (1.30.14)

  • Added new configuration parameter stdoutLog. Set it to true (or 1) to output the system log to stdout rather than a file. (1.30.14)

  • Added new higher-order function tmoving for sliding window calculation. The state functions moving and tmoving are now supported in the reactive state engine. (1.30.14)

  • Added new function runScript to run a script. (1.30.14)

  • Added new functions makeUnifiedCall, binaryExpr and unifiedExpr for metaprogramming. (1.30.14)

  • Added 23 new functions for calculation in a time-based sliding window: tmove, tmfirst, tmlast, tmsum, tmavg, tmcount, tmvar, tmvarp, tmstd, tmstdp, tmprod, tmskew, tmkurtosis, tmmin, tmmax, tmmed, tmpercentile, tmrank, tmcovar, tmbeta, tmcorr, tmwavg and tmwsum. These functions can be used as state functions in the reactive state engine. (1.30.14)

  • Added new functions sma, wma, dema, tema, trima, talib, talibNull and linearTimeTrend. (1.30.14)

  • Added new functions countNanInf and isNanInf to count the number of NaN and Inf in a scalar, vector or matrix. (1.30.14)

  • Added window join engine for stream tables. (1.30.14)

  • Optimized the following functions in time series engine: count, firstNot, ifirstNot, ilastNot, imax, imin, lastNot, nunique, prod, quantile, sem, sum3, sum4, mode and searchK. (1.30.14)

  • Added new function getConfigure. If the argument is a parameter name, it returns the value of the parameter; if the argument value is NULL, it returns all parameter values. (1.30.14)

  • Added new command clearCachedModules to forcibly clear the cached modules. Adopt the use clause to reload the module without restarting the node after cache clearing. Only administrators can execute this command. (1.30.14)

  • Added new functions rowSize, rowStdp, rowVarp, rowSkew and rowKurtosis for row-based aggregate operations. (1.30.14)

  • Added new function percentileRank to calculate the percentile of a value in a vector. (1.30.13)

  • Added new function zigzag to calculate the extreme values. (1.30.13)

  • Added new functions lowDouble and highDouble to decompose 16-byte data such as POINT and COMPLEX into the low or high 8 bytes, returned as DOUBLE values. (1.30.13)

  • Added new function rdp to implement the Ramer-Douglas-Peucker line simplification algorithm. (1.30.13)

  • Added new function wls to perform weighted least squares regression. (1.30.13)

  • Added equal join engine for stream tables. (1.30.13)

  • Added new functions ifNull and ifValid. (1.30.13)

  • Added new configuration parameter enableConcurrentDimensionalTableWrite to enable concurrent writes to dimension tables. (1.30.13)

  • Added new function isDataNodeInitialized to check whether the data node has completed initialization. (1.30.12)

  • Function replay now supports replaying data from multiple sources with identical schema to one stream table in chronological order. (1.30.12)

  • Added new function distance to calculate the distance in meters between two points with latitude and longitude coordinates. (1.30.12)

  • The streaming engines of subscribers now support high availability. (1.30.12)

  • The in-memory table now supports very long string type BLOB. (1.30.11)

  • Added new functions denseRank and rowDenseRank. (1.30.11)

  • After replacing the license file, execute function updateLicense to update the license without restarting the system. (1.30.11)

  • Added new configuration parameter datanodeRestartInterval in controller.cfg, indicating how long (in seconds) the agent node waits before restarting a data node automatically after it shuts down unexpectedly. (1.30.11)

  • Added new functions createAsofJoinEngine and appendForJoin to join two stream tables. (1.30.10)

  • Added new function getConnections to obtain information about all network connections on the local node. (1.30.10)

  • Added new parameter range to function interval to specify the interpolation range. (1.30.10)

  • Added new high-order function unifiedCall to specify the parameters in a tuple. (1.30.10)

  • Added new functions spearmanr and mutualInfo about correlation. (1.30.9)

  • Added new function createDailyTimeSeriesEngine to create a time series engine. Set parameters sessionBegin and sessionEnd to create the engine with a specified session window. Users can specify parameter mergeSessionEnd to choose whether to include the data at the end of each session in the calculation. (1.30.9)

  • Added new function getStreamTableFilterColumn to obtain the filter column name of a stream table. (1.30.9)

  • Added new functions varp and stdp. (1.30.9)

  • Added new parameter lastBatchOnly to function createCrossSectionalEngine. (1.30.8)

  • Added new optional parameter keepOrder to function createReactiveStateEngine. (1.30.8)

  • Added new data type DURATION to express time range concisely. (1.30.7)

  • Transferring compressed data between nodes is now available. (1.30.7)

  • Functions temporalAdd, dailyAlignedBar, bar, wj and pwj now support DURATION type parameters. (1.30.7)

  • Function keyedStreamTable now supports multiple key columns. (1.30.7)

  • Added SQL clause interval to perform interpolation. (1.30.7)
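
A hedged example of the interval clause (the fill-method literal "prev" is an assumption):

```
t = table(2021.01.01T09:00:00 + 0 30 95 as ts, 1 2 3 as val)
select last(val) from t group by interval(ts, 60s, "prev")
```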

  • The function upsert! supports modifying the data in DFS tables (dimension tables and distributed tables included). (1.30.6)

  • SQL update and delete clauses can be used on DFS tables (dimension tables and distributed tables). (1.30.6)

  • Added new functions sqlUpdate and sqlDelete to dynamically create SQL update and delete clauses. (1.30.6)

  • Added new function createSessionWindowEngine to create a session window streaming engine. (1.30.6)

  • Transferring compressed data from the client to the server or vice versa is now available. (1.30.6)

  • Added new data types: COMPLEX and POINT. Use function complex and point to create these data types. (1.30.3)
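
For example (per the description above):

```
z = complex(2, 3)        // => 2+3i, COMPLEX type
p = point(116.4, 39.9)   // => a POINT value holding (x, y)
```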

  • Added new function getRequiredAPIVersion to check if it is necessary to update the API version to be compatible with DolphinDB server. (1.30.3)

  • The reactive state engine and the time series engine support snapshots to preserve the engine state for quick recovery from the previous snapshot if exceptions occur in streaming engines. (1.30.3)

  • Added new functions mskew, mkurtosis, mvarp, mstdp, cumvarp and cumstdp. These functions are also supported in the reactive state engine. (1.30.3)

  • Added new function winsorize. (1.30.3)

  • Added new higher-order function byRow for calculations on a matrix by each row. (1.30.3)
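
For example (rows of a 3x2 matrix summed):

```
m = matrix(1 2 3, 4 5 6)   // two columns, three rows
byRow(sum, m)              // => [5,7,9]
```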

  • Built-in streaming engines including the time series engine, cross sectional engine, anomaly detection engine and reactive state engine can be used sequentially. (1.30.3)

  • Added reactive state engine for streaming with 2 new functions: createReactiveStateEngine and warmupStreamEngine. (1.30.2)

  • Added new functions ema, mskew, mkurtosis and mslr. (1.30.2)

  • Added new data forms indexed matrix and indexed series for panel data processing. Automatic alignment by row and column labels is now available for binary operations between indexed matrices, between indexed series and between an indexed matrix and an indexed series. (1.30.1)

  • Added new functions panel, indexedSeries, setIndexedMatrix!, setIndexedSeries!, isIndexedMatrix, isIndexedSeries, dropna, resample, asFreq, and merge for the indexed matrix or indexed series operations. (1.30.1)

  • Added new function renameTable to modify the name of a dimension table or a distributed table. (1.30.1)

  • Added new higher-order function withNullFill. (1.30.1)

  • Added new privilege DB_OWNER, with which users can create and manage databases. (1.30.1)

  • Added new function upsert! to write new records to or update old records in a keyed or indexed table. (1.30.1)

  • The result of function schema now includes the data type of the partitioning column: partitionColumnType. (1.30.1)

Improvements

  • Optimized the performance of function dropTable when deleting a partitioned table with over 100,000 partitions. (1.30.22.4)

  • The divisor of div/mod can now be a negative number. (1.30.22.4)

  • Optimized transactions on compute nodes. (1.30.22.2)

  • Added parameter keepRootDir to function rmdir to specify whether to keep the root directory when deleting files. (1.30.22.2)

  • The license function obtains license information from memory by default. (1.30.22.2)

  • An empty table can be backed up by copying files. (1.30.22.2)

  • Optimized asynchronous replication (1.30.22.2):

    • After asynchronous replication is enabled globally, the system now allows operations on slave cluster databases which are not included in the replication scope.
    • The mechanism for pulling replication tasks from the master to the slave clusters has been improved.
  • <DataNodeNotAvail> error message now provides more details. (1.30.22.2)

  • A user-defined function allows the default value of a parameter to be an empty tuple (represented as []). (1.30.22.1)

  • Added user access control to the loadText function. (1.30.22.1)

  • Modifications made to user access privileges are logged. (1.30.22.1)

  • The resample function can take a matrix with non-strictly increasing row labels as an input argument. (1.30.22.1)

  • Optimized the join behavior for tuples. (1.30.22.1)

  • A ternary function can be passed as an input argument to the template accumulate in a reactive state engine. (1.30.22.1)

  • Added parameter validation to streamEngineParser: If triggeringPattern='keyCount', then keepOrder must be true. (1.30.22.1)

  • Configuration parameters localExecutors and maxDynamicLocalExecutor are now deprecated. (1.30.22)

  • Functions window and percentChange can be used as state functions in the reactive state engine. (1.30.22)

  • Support JOIN on multiple partitioned tables. (1.30.22)

  • Optimized the performance when using the dropTable function to delete a table with a large number of partitions. (1.30.22)

  • Support SQL keywords in all uppercase or lowercase. (1.30.22)

  • Support comma (,) to CROSS JOIN tables. (1.30.22)

  • Support line breaks in SQL statements; however, multi-word keywords such as ORDER BY, GROUP BY, UNION ALL, and INNER JOIN cannot be split across two lines. (1.30.22)

  • The implementation of select * from a join b is changed from select * from join(a, b) to select * from cj(a, b). (1.30.22)

  • Support operator <> in SQL statements, which is equivalent to !=. (1.30.22)

  • Support keyword NOT LIKE in SQL statements. (1.30.22)

  • When performing LEFT JOIN, LEFT SEMI JOIN, FULL JOIN or EQUI JOIN on columns containing NULL values: (1.30.22)

    • In previous versions: a NULL value is matched to another NULL.
    • Since the current version: a NULL value cannot be matched to another NULL.
  • For function sqlDS, a DFS table partitioned by DATEHOUR selected in sqlObj will now be correctly filtered by date. (1.30.22)

  • For the "Status" column returned by function getRecoveryTaskStatus, the previous status "Finish" is now changed to "Finished", "Abort" to "Aborted". (1.30.22)

  • Added inplace optimization fields, i.e., inplaceOptimization and optimizedColumns, when using HINT_EXPLAIN to check the execution plan of a GROUP BY clause when algo is "sort". (1.30.22)

  • The column names specified with the rename!, replaceColumn!, and dropColumns! functions are no longer case-sensitive. (1.30.22)

  • Added new parameters swColName and checkInput for the lasso and elasticNet functions to specify the sample weight and validation check, respectively. Added new parameter swColName for the ridge function. (1.30.22)

  • Added parameters x0, c, eps, and alpha for function qclp to specify absolute value constraints, solving accuracy, and relaxation parameters. (1.30.22)

  • Functions loadText, pLoadText, and extractTextSchema now can load a data file that contains a record with multiple newlines. (1.30.22)

  • The delimiter parameter of the loadText, pLoadText, loadTextEx, textChunkDS, extractTextSchema functions can be specified as one or more characters. (1.30.22)

  • When importing a table using function loadTextEx, an error will be reported if the table schema does not match the schema of the target database. (1.30.22)

  • Added check for the schema parameter of function loadTextEx. Since this version, the table specified by schema MUST NOT be empty, and the "name" and "type" columns must be of STRING type. (1.30.22)

  • Added new parameter tiesMethod, which is used to process the group of records with the same value, for the following moving TopN functions: mstdTopN, mstdpTopN, mvarTopN, mvarpTopN, msumTopN, mavgTopN, mwsumTopN, mbetaTopN, mcorrTopN, mcovarTopN. (1.30.22)

  • Optimized the prediction performance of function knn. (1.30.22)

  • The time series engine and daily time series engine now can output columns holding array vectors. (1.30.22)

  • Optimized the performance of the moving function used in the reactive state engine. (1.30.22)

  • The anomaly detection engine now can specify multiple grouping columns for parameter keyColumn. (1.30.22)

  • Added new parameter sortByTime for the createWindowJoinEngine and createAsOfJoinEngine functions to determine whether the result is returned in the order of timestamps globally. (1.30.22)

  • The streaming engine can now be shared with the share function/statement for concurrent writes. (1.30.22)

  • An error will be reported when using the share function/statement or the enableTableShareAndPersistence function to share the same table multiple times. (1.30.22)

  • An error will be reported if the data of INT type is appended to a SYMBOL column of the left table of a window join engine. (1.30.22)

  • DolphinDB JIT version supports the join operator (<-). (1.30.22)

  • The isort function in JIT version can take a tuple with vectors of equal length as input. (1.30.22)

  • The if expression in JIT version supports the in operator. (1.30.22)

  • Vectors can be accessed with Boolean index in JIT version. (1.30.22)

  • Support comments with multiple /**/ sections in one line. (1.30.22)
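
For example:

```
x = 1 /* first */ + 2 /* second */ + 3   // multiple block comments on one line
```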

  • The function stringFormat now supports: data type matching, format alignment, decimal digits, and base conversion. (1.30.22)

  • The second parameter of function concat can be NULL. (1.30.22)

  • Function take can take a tuple or table as input. (1.30.22)

  • Function stretch can take a matrix or table as input. (1.30.22)

  • Functions in and find support table with one column. (1.30.22)

  • When the parameter moduleDir is configured as a relative path, the system searches the modules under the homeDir/modules directory. (1.30.22)

  • The result of function in, binsrch, find, or asof takes the same format as the input argument Y. (1.30.22)

  • An error is raised when passing a tuple to function rank. (1.30.22)

  • Added keyword distinct to eliminate duplicate records. It is currently not supported to be used with group by, context by, or pivot by. (1.30.21.6)

  • The outputElapsedInMicroseconds parameter of function createTimeSeriesEngine is renamed to outputElapsedMicroseconds. (1.30.21.4)

  • The fields "createTime" and "lastActiveTime" returned by function getSessionMemoryStat are now displayed in local time. (1.30.21.4)

  • Enhanced standard SQL support for the between and predicate. (1.30.21.4)

  • More operations on the IPC in-memory tables are logged for better tracking and debugging. (1.30.21.4)

  • Function getClusterDFSTables returns DFS tables to which the user has access. (1.30.21.3)

  • Parameter handler of function subscribeTable supports shared in-memory table, keyed table, and indexed table. (1.30.21.3)

  • Function cut now supports tables/matrices. (1.30.21.3)

  • Ordered Dictionary now supports unary window functions. (1.30.21.3)

  • Support checksum for the metadata files. (1.30.21)

  • To avoid excessive disk usage of recovery log, recovery tasks now will be cleared for the follower which has been switched from the leader. (1.30.21)

  • All backup and restore activities are fully logged. (1.30.21)

  • Added new parameters close, label, origin for function interval. (1.30.21)

  • Function getRecentJobs returns "clientIp" and "clientPort" indicating the client IP and port. (1.30.21)

  • Added new parameter warmup for function ema. If set to true, elements in the first (window-1) windows are calculated. (1.30.21)

  • The unary and ternary functions now can be specified for higher-order functions accumulate and reduce. (1.30.21)

  • Added new parameter outputElapsedMicroseconds for the reactive state engine and time-series engine to output the elapsed time. (1.30.21)

  • Added new parameter precision for functions rank and rowRank to set the precision of the values to be sorted. (1.30.21)

  • Parameter mode of function groups supports "vector" and "tuple". (1.30.21)

  • Function linearTimeTrend supports calculations of a matrix or table. (1.30.21)

  • Added support for iterations using multiple higher-order functions. Added new parameter consistent for higher-order functions eachLeft, eachRight, eachPre, eachPost, and reduce to determine the results' data type and form of tasks. (1.30.21)

  • Parameter tiesMethod of function rank and rowRank supports "first" to assign ranks to equal values in the order they appear in the vector. (1.30.21)

  • Function cut now supports scalar. (1.30.21)

  • Rows of a matrix can now be accessed with slice. (1.30.21)

  • The size of a tuple is no longer limited to 1048576. (1.30.21)

  • The defaultValue argument passed to function array now supports STRING type. (1.30.21)

  • Function memSize now returns the memory usage of tuple. (1.30.21)

  • Query results of different partitions now can be combined through multiple threads to reduce the elapsed time of merge phase. (1.30.21)

  • Function getSessionMemoryStat now returns related cache information. (1.30.21)

  • Column comments now can be added to mvcc tables with setColumnComment. (1.30.21)

  • Matrices now can be accessed with pair and vector as index. (1.30.21)

  • Modified the actual available memory configured by regularArrayMemoryLimit. (1.30.21)

  • Modified the upper limit on the number of DFS databases and tables. (1.30.21)

  • The filter condition of function streamfilter supports built-in functions. (1.30.21)

  • Added new parameter sortColumns for function replay to sort the data with the same timestamp. (1.30.21)

  • Support automatic alignment of data sources for N-to-1 replay. (1.30.21)

  • The window size is capped at 102400 when m-functions are used in the streaming engines. (1.30.21)

  • Optimized the performance of heterogeneous replay. (1.30.21)

  • Function streamEngineParser now supports function byRow nested with function contextby as metrics for the cross-section engine. (1.30.21)

  • Support higher-order function accumulate in streaming. (1.30.21)

  • Optimized the performance of function genericTStateIterate. (1.30.21)

  • Optimized the performance of function streamEngineParser. (1.30.21)

  • append / insert into operations on shared tables can now be encapsulated in a transaction statement. (1.30.21)

  • Optimized the performance of ej on partitioned tables. (1.30.21)

  • The select statement now supports using column alias or new column name as the filter condition in the where clause. (1.30.21)

  • Optimized the performance of keyword pivot by when the last column is the partitioning column. (1.30.21)

  • The keyword context by now supports specifying matrix and table. (1.30.21)

  • Optimized the performance of context by and group by. (1.30.21)

  • Optimized the performance of lsj at large data volumes. (1.30.21)

  • The temporal data types in a SQL where clause can now be automatically converted when interval is used to group data. (1.30.21)

  • The size of a tuple is no longer limited when used in SQL in condition. (1.30.21)

  • Modified the return value of function getSystemCpuUsage. (1.30.21)

  • Enhanced support for access control (1.30.21):

    • Extended privilege types at table level (TABLE_INSERT/TABLE_UPDATE/TABLE_DELETE) and database level (DB_INSERT/DB_UPDATE/DB_DELETE).

    • Modified DB_MANAGE privilege, which no longer permits database creation. Users with this privilege can only perform DDL operations on databases.

    • Modified DB_OWNER privilege which enables users to create databases with specified prefixes.

    • Added privilege types QUERY_RESULT_MEM_LIMIT and TASK_GROUP_MEM_LIMIT to set the upper limit of the memory usage of queries.

    • Access control-related functions now can be called on data nodes.

    • Modified the permission verification mechanism of DDL/DML operations.

    • Added parameter validation for access control:

      • An error is reported if the granularity of objs does not match the accessType of grant, deny, or revoke.
      • When the TABLE_READ/TABLE_WRITE/DBOBJ_*/VIEW_EXEC permission is granted, the existence of the applied object (database/table/function view) is checked first. If it does not exist, an error is reported.
      • When an object (database/table/function view) is deleted, the applied permissions are revoked. If a new object with the same name is created later, the permissions must be reassigned.
      • Permissions are retained for renamed tables.
  • Optimized the performance of user-defined functions in streaming engines in DolphinDB (JIT). (1.30.21)

  • DolphinDB (JIT) supports operator ratio. (1.30.21)

  • DolphinDB (JIT) supports more built-in functions: sum, avg, count, size, min, max, iif, moving. (1.30.21)

  • Functions backup, restore, and migrate support backup and restore of database partitions by copying files. (1.30.20)

  • Metacode of SQL statements can be passed to the parameter obj of function saveText. Partitions can be queried in parallel and written with a single thread. (1.30.20)

  • The system raises an error message if you specify the configuration parameter volumes for a single node using macro variable <ALIAS>. (1.30.20)

  • The parameter timeRepartitionSchema of function replayDS supports more temporal types. (1.30.20)

  • Optimized the garbage collection logic of window join engine. (1.30.20)

  • Identical expressions using user-defined functions are only calculated once in the reactive state stream engine. (1.30.20)

  • Added SQL keyword HINT_VECTORIZED to enable vectorization for data grouping. (1.30.20)

  • Optimized the computing performance of function rolling. (1.30.20)

  • Function getBackupList returns column "updateTime" for the last update time and column "rows" for the number of records in a partition. (1.30.20)

  • Added a new key "rows" to the dictionary returned by function getBackupMeta to show the number of rows in a partition. (1.30.20)

  • Added access control to 31 functions, which can only be executed by a logged-in user or administrator. (1.30.20)

  • updateLicense throws an exception if the authorization mode changes. (1.30.20)

  • When accessing a vector by index, NULL values are returned if the indices are out of bounds in DolphinDB (standard and JIT version). (1.30.20)

  • Optimized crc32 algorithm. (1.30.20)

  • Optimized function mrank. (1.30.20)

  • The maximum length for the data converted by function toJson is no longer limited to 1000. (1.30.20)

  • getClusterPerf(true) returns the information on all controllers in a high-availability cluster. This function also adds a return value isLeader to indicate whether the controller is the leader of the raft group. (1.30.19)

  • When using function restore, loadBackup, or getBackupMeta to access the backup partitions in a database whose chunk granularity is at TABLE level, the physical index is no longer required when specifying the parameter partition. (1.30.19)

  • Function getRecoveryTaskStatus adds a new return value FailureReason to display the reason for the recovery task failure. (1.30.19)

  • Optimized the compression algorithm for backup. (1.30.19)

  • If a jobId does not exist when using cancelJob, the system no longer throws an exception. Instead, it outputs the error message with the jobId to the log. (1.30.19)

  • The configuration parameter persistenceWorkerNum can now be specified for a high-availability stream table. (1.30.19)

  • Added new parameter forceTriggerTime to createSessionWindowEngine to trigger the calculation in the last window if useSystemTime=false. (1.30.19)

  • When processing standard stream tables with streamFilter, you can now specify metacode of Boolean expressions for the filter condition. (1.30.19)

  • You can include the time column and/or join column from the left or right table as the output column(s) in the parameter metrics of functions createEqualJoinEngine, createAsofJoinEngine and createLookupJoinEngine. (1.30.19)

  • The parameter keyPurgeFilter of createReactiveStateEngine must be metacode of Boolean expressions, otherwise an error will be raised. (1.30.19)

  • The parameter metrics of createLookupJoinEngine can be a tuple. (1.30.19)

  • Optimized the performance of select count(*) when the time granularity of a group by clause is more coarse-grained than that of a partition. (1.30.19)

  • Optimized the performance of the following functions when calling function rolling: cumsum, cummax, cummin, cumprod, and mcount. (1.30.19)

  • Provided a tar.gz file for offline server installation. (1.30.19)

  • A subscription starts from the latest incoming data if the persisted offset cannot be found. (1.30.19)

  • You can specify 00:00:00 for the parameter sessionEnd of function createDailyTimeSeriesEngine to indicate the end time is 00:00:00 of the next day (i.e., 24:00:00 of the day). (1.30.19)

  • The number of rows in the result set of fj is limited to a maximum of 2 billion rows. (1.30.19)

  • Function trueRange can be used as state function in the reactive state engine. (1.30.18)

  • Improved the performance of writing and reading SYMBOL type of data. (1.30.18)

  • The window functions cummed and cumpercentile can now be used as state functions in the reactive state streaming engine (createReactiveStateEngine). (1.30.18)

  • Added new parameter closed to time-series streaming engines (createTimeSeriesEngine and createDailyTimeSeriesEngine) to specify whether the left boundary or right boundary of the calculation window is inclusive. (1.30.18)

  • The keyColumn parameter of streamEngineParser is now case-insensitive. (1.30.18)

  • Added new parameter keyPurgeFreqInSec to time-series streaming engines (createTimeSeriesEngine and createDailyTimeSeriesEngine) to remove groups with no incoming data for a long time. (1.30.18)

  • Optimized the performance for using user-defined functions in time-series streaming engines (createTimeSeriesEngine and createDailyTimeSeriesEngine). (1.30.18)

  • streamFilter now supports processing columns of standard stream tables. Previously it only processed the output of heterogeneous replay(). (1.30.18)

  • The metrics parameter of createTimeSeriesEngine and createDailyTimeSeriesEngine now supports matrices. (1.30.18)

  • The rule parameter of resample now supports "H", "L", "U", "min", "N", and "S". Added new parameters closed, label, and origin to set the interval of groups. (1.30.18)

  • If an error is raised during the execution of the replay function, a runtime exception will be thrown. (1.30.18)

  • Optimized the performance of generating random integers. (1.30.18)

  • You can enable dataSync for the OLAP engine by setting configuration parameter dataSync = 1 on Windows. (1.30.17)

  • Added new parameters userId and password to function subscribeTable. The system will attempt to log in after a user is logged out accidentally to make sure the subscribed data can be written to a DFS table. (1.30.17)

  • Function getStreamingStat().subWorkers returns throttle in milliseconds. (1.30.17)

  • You can specify multiple matching columns for the asof join engine. (1.30.17)

  • Added new parameters snapshotDir and snapshotIntervalInMsgCount to the cross-sectional streaming engine to enable snapshot; Added new parameter raftGroup to enable high availability. (1.30.17)

  • Added new functions getLeftStream and getRightStream to support cascade of join engines. (1.30.17)

  • If a function with multiple returns is specified for the parameter metrics of a cross-sectional streaming engine (createCrossSectionalEngine) or a time-series streaming engine (createTimeSeriesEngine), the returned column names can be unspecified when creating the streaming engine. (1.30.17)

  • Added new command addAccessControl to add access control on a shared in-memory table (stream table included) or the streaming engine object. (1.30.17)

  • The SQL pivot by clause supports columns of UUID type. (1.30.17)

  • The upper limit of the result of function ceil or floor is raised to 2^53. (1.30.17)

  • If the last column of a pivot by clause is a partitioning column, and no aggregate or order-sensitive functions are included in the select clause, the performance has been optimized by nearly five times. (1.30.17)

  • Functions med, kama and wma now support vectors of BOOL type. (1.30.17)

  • The parameter colNames of command addColumn can start with a digit. (1.30.17)

  • When loading a csv file with function loadText or loadTextEx, the upper limit for the first row of data is raised to 256 KB. (1.30.17)

  • Aggregate functions, window functions and vectorized functions support table as an input. (1.30.17)

  • The parameter count of function rand or normal supports a pair to specify the dimension of a matrix. (1.30.17)

  • Row-based logic functions (rowAnd, rowOr and rowXor) support input of INT type. (1.30.17)

  • Added new parameter closed to function bar to specify whether the left or right boundary is inclusive in each group. (1.30.17)

  • For a moving function, when X is an indexed series or an indexed matrix and window is a positive integer, the window slides over the index. (1.30.17)

  • The parameters of a user-defined function can be defined across multiple lines and separated by a comma. (1.30.17)

  • For a SQL order by clause, you can refer to a column by its name or the as alias. (1.30.17)

  • You can now specify a vector of SECOND, TIME or NANOTIME type for the parameter X of function dailyAlignedBar. (1.30.17)

  • Added new parameter forceTriggerSessionEndTime to the daily time-series streaming engine to specify the waiting time to trigger calculation in the window containing sessionEnd. (1.30.17)

  • Modified the parameter forceTriggerTime of the daily time-series streaming engine and the time-series streaming engine. The calculation in an uncompleted window of a group can be triggered by the latest ingested data of any other group. If the parameter fill is set, the specified filling methods are used to fill the results of the empty windows in the meantime. (1.30.17)

  • When updating in-memory tables with assignment statements, you can now use BOOL arrays in row filters. For example, t[`y, t[`y]>0] = 0 where t is a table and y is a column of t. (1.30.16)

  • New optional parameter sortColumns is added to function upsert!. Use this parameter to specify the sorting columns based on which the updated table will be sorted. (1.30.16)

  • cancelJob and cancelConsoleJob now support cancelling multiple jobs. Job cancelling is also faster when the cluster is stuck. (1.30.16)

  • Function set now supports the BLOB data type. (1.30.16)

  • Multiple records with identical key values can no longer be inserted into a keyed stream table in one batch. (1.30.16)

  • Faster execution when using functions atImin and atImax in the window join parameter aggs. (1.30.16)

  • Added new optional parameter clean to the command run to control whether to clear the variables in the current session. (1.30.16)
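
    For example (the script path is hypothetical and the keyword form is assumed):

    ```
    run("/home/dolphindb/scripts/init.dos", clean=true)   // clear session variables when running the script
    ```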

  • The window parameter in function wj now supports duration types y (year), M (month) and B (business day). (1.30.16)

  • Function loadText now supports strings containing characters with ASCII value 0. (1.30.16)

  • You can now use conditional assignments on matrices. (1.30.16)
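
    For example, assuming the same form as conditional assignment on vectors:

    ```
    m = matrix(1 -2 3, -4 5 -6)
    m[m < 0] = 0    // replace all negative elements with 0
    ```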

  • Added a new optional parameter atomic to function loadTextEx. The default value is "false". When loading a large file, specify this parameter as "false" to split the loading transaction into multiple transactions. (1.30.16)
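
    A minimal sketch using keyword arguments (the database path and file name are hypothetical):

    ```
    db = database("dfs://textDemo", VALUE, 2023.01.01..2023.12.31)
    loadTextEx(dbHandle=db, tableName=`quotes, partitionColumns=`date,
        filename="/data/quotes.csv", atomic=false)   // split a large load into multiple transactions
    ```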

  • Added new column remoteIP to the return value of functions getCompletedQueries and getRunningQueries. (1.30.16)

  • Now you can specify the configuration parameter stdoutLog as "2" to print system log to both stdout and the log file. (1.30.16)

  • The parameter metrics in anomaly detection engines can now contain sequence-related functions. (1.30.16)

  • When the time-series engine parameter windowSize is a vector, its elements may contain duplicate values. (1.30.16)

  • Cross-sectional engine parameter keyColumn now supports vector type. (1.30.16)

  • Records in tuple form can now be inserted into streaming engines. (1.30.16)

  • New field memoryUsed is added to the return value of function getStreamEngineStat().CrossSectionalEngine, indicating the memory consumed by the cross-sectional engine. (1.30.16)

  • In asof join engines, parameter metrics can now include the right table's temporal column. (1.30.16)

  • Added read-only privilege for shared stream tables. (1.30.16)

  • Improved stability of controller nodes in high availability clusters. (1.30.16)

  • Information on delete and update operations can be printed in the log. (1.30.16)

  • Added subscription topic information to the error messages of the stream subscription task in the log. (1.30.16)

  • Modified the parameter forceTriggerTime in the time-series engine. (1.30.15)

  • If updateTime in the time-series engine is specified, the output table is no longer restricted to a keyed table. (1.30.15)

  • Added new parameter triggeringInterval in the cross sectional engine (createCrossSectionalEngine) to specify the interval or keyCount at the latest timestamp to trigger calculation. (1.30.15)

  • The reactive state engine supports state function mmad. (1.30.15)

  • The time-series engine supports alignment of nanotimestamp type. (1.30.15)

  • Added access privilege control for shared stream table. (1.30.15)

  • Added parameters msgAsTable, batchSize, throttle, hash, filter, persistOffset, timeTrigger, handlerNeedMsgId and raftGroup to the output table of getStreamingStat().subWorkers. (1.30.15)

  • Functions sma, wma, dema, tema, trima, t3, ma, talib, talibNull and linearTimeTrend can be used as state functions in the reactive state engine. (1.30.15)

  • The dimension table supports concurrent deletes. (1.30.15)

  • Comparing STRING to NULL is now supported. (1.30.15)

  • Increased the precision of the following functions: stdp, std, varp, var, skew, kurtosis, mskew, mkurtosis, tmskew, tmkurtosis, as well as skew and kurtosis in window join. (1.30.15)

  • The first argument of a higher-order function is parsed as a function. (1.30.15)

  • User-defined functions support keyword arguments. (1.30.15)
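
    For example, assuming the name=value call form:

    ```
    def linComb(x, a, b) { return a * x + b }
    linComb(x=1..5, a=2.0, b=0.5)   // arguments can be passed by name ...
    linComb(b=0.5, a=2.0, x=1..5)   // ... in any order
    ```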

  • Added input check to functions qr, ols and dot to prohibit a matrix with 0 rows or columns as an input. (1.30.15)

  • Function iif now returns NULL for a NULL condition. In the previous versions, a NULL condition is equivalent to a false condition. (1.30.14)
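
    For example:

    ```
    iif(true, 1, 0)   // 1
    iif(NULL, 1, 0)   // now returns NULL; earlier versions treated a NULL condition as false and returned 0
    ```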

  • The parameter Y of function in can be a scalar including NULL. If Y is an untyped NULL, it returns false or a vector of false. (1.30.14)

  • Matrices are no longer limited to 8192 rows in calculations such as accumulate. (1.30.14)

  • Transaction conflict exception is no longer thrown for concurrent write, update and delete of dimension tables. (1.30.14)

  • The detailed replay progress of the redo log will be output in the log when a data node restarts. (1.30.14)

  • Removed the parameter range from function interval in SQL statement and added an optional parameter explicitOffset. When it is set to true, the starting time of the first window will be aligned to (t/step)*step where t is the starting time specified by the where clause. The default value of explicitOffset is false, indicating the starting time of the first window will not be aligned. (1.30.14)

  • The output of functions ols and wls when mode=2 now includes a new element "Residual" containing the regression residuals. (1.30.14)

  • Added a new function residual for regression residuals. (1.30.14)

  • The compute node supports all client operations. (1.30.14)

  • If the second column in the SQL pivot by clause is of integral types, the column names of the result are now the second column values (converted into STRING). In previous versions, the prefix "C" is added to the column names. (1.30.14)

  • Added a new parameter *percent" to functions rank, denseRank, cumrank, rowRank and rowDenseRank to display the results in percentile. (1.30.14)

  • Added an optional parameter init to function kmeans to define the initial cluster centers. (1.30.14)

  • Optimized the performance of window join (wj) when used with the following aggregate functions: varp, stdp, prod, skew and kurtosis. (1.30.14)

  • Function unpivot supports non-numeric data types. (1.30.14)

  • DURATION window is supported in some sliding window functions such as msum with an indexed matrix or indexed series. (1.30.14)

  • Added an optional parameter atomic to function database. If atomic parameter is set to 'CHUNK', it means concurrent writes to a partition (chunk) are allowed, but the atomicity at transaction level is no longer guaranteed. Only the atomicity at chunk level is guaranteed. (1.30.14)
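
    A minimal sketch (the keyword form, path and partition scheme are assumptions):

    ```
    db = database(directory="dfs://tick", partitionType=VALUE,
        partitionScheme=2023.01.01..2023.12.31, atomic='CHUNK')
    ```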

  • Added new parameter forceTriggerTime to functions createTimeSeriesEngine and createDailyTimeSeriesEngine. (1.30.13)

  • Shortened the minimum interval between tasks specified by the parameter scheduleTime of function scheduleJob to 5 minutes. (1.30.13)

  • Functions createTimeSeriesEngine and createDailyTimeSeriesEngine support multiple keyColumns. (1.30.13)

  • The parameter ignoreNull of the function upsert! can be specified for a distributed table. (1.30.13)

  • Added two optional parameters modules and overloadedOperators to function parseExpr to load modules and to assign values to variables in an expression with a dictionary, respectively. (1.30.13)

  • Added an optional parameter exec to function sql to generate an exec statement. (1.30.13)
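
    A minimal sketch of the metaprogramming form:

    ```
    t = table(1 2 3 as id, 10.5 20.5 30.5 as price)
    q = sql(select=sqlCol("price"), from=t, exec=true)   // builds: exec price from t
    q.eval()                                             // returns a vector rather than a table
    ```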

  • Function temporalAdd supports adding or subtracting business days (BusinessDay) and supports DATEHOUR type. (1.30.13)
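
    For example (assuming "B" denotes business day and "H" denotes hour):

    ```
    temporalAdd(2023.07.14, 1, "B")                      // Friday + 1 business day = 2023.07.17
    temporalAdd(datehour(2023.07.14T09:00:00), 3, "H")   // DATEHOUR input is now supported
    ```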

  • Added an optional parameter step to function interval in SQL group by clause for sliding-window calculation. (1.30.13)

  • The map clause is supported in delete statement to execute the delete statement in each partition. (1.30.13)

  • User-defined functions support default parameter values. (1.30.13)
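
    For example:

    ```
    def power2(x, n=2) { return pow(x, n) }
    power2(3)      // 9, n defaults to 2
    power2(3, 3)   // 27
    ```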

  • Added parameter raftGroup to function createTimeSeriesEngine to enable high availability for the time series engine. (1.30.13)

  • The parameter groupby of function sql supports multiple columns when groupFlag = PIVOTBY. (1.30.13)

  • Added two new parameters keyPurgeFilter and keyPurgeFreqInSecond to the reactive state engine (createReactiveStateEngine) for automatic key cleanup. (1.30.13)

  • The reactive state engine (createReactiveStateEngine) supports outputting results to distributed tables and stream tables. It can also accept the result of function replay as an input. (1.30.13)

  • Removed the default configuration parameter redoLog from configuration files dolphindb.cfg, controller.cfg, cluster.cfg on Windows. (1.30.13)

  • Optimized the performance of replaying redo log. With a large number of small transactions, the performance is improved by more than 10 times. (1.30.13)

  • To be compatible with standard SQL, no exception will be thrown when the select clause contains a column with the same name as a column specified by group by. (1.30.12)

  • The size of the script submitted by API and GUI is no longer capped at 64KB. (1.30.12)

  • The parameter aggs of function window join can use a tuple where each element represents an aggregate function. (1.30.12)

  • The cross sectional engine (createCrossSectionalEngine) will discard out-of-order data. (1.30.12)

  • Function setStreamTableFilterColumn supports high-availability stream tables. (1.30.12)

  • Added checks to function addVolumes to prevent it from being executed on a controller. (1.30.12)

  • Optimized the performance of functions skew and kurtosis in the time-series engine. (1.30.11)

  • Optimized the performance of the time-series engine when useSystemTime=true. (1.30.11)

  • Functions createTimeSeriesEngine, createCrossSectionalEngine, createDailyTimeSeriesEngine, createAnomalyDetectionEngine and createSessionWindowEngine can output a distributed table. (1.30.11)

  • Added an optional parameter contextByColumn to the cross sectional engine (createCrossSectionalEngine). If it is specified, the calculation is conducted within each contextByColumn group. (1.30.11)

  • Reworked server logging: reclassified some transaction processing details as DEBUG to avoid excessive log growth. (1.30.11)

  • Function haStreamTable supports multiple keyColumns. (1.30.10)

  • The parameter metrics of createTimeSeriesEngine, createDailyTimeSeriesEngine, createCrossSectionalEngine, createAnomalyDetectionEngine and createSessionWindowEngine can be a tuple. (1.30.10)

  • Added a new configuration parameter useHardLink to determine whether to enable hardlink in function sqlUpdate. (1.30.10)

  • Optimized the query performance of SQL context by clause when used together with limit clause. (1.30.10)

  • Each metric in functions createTimeSeriesEngine and createDailyTimeSeriesEngine can have its own filling method for empty windows by specifying parameter fill as a vector of strings. (1.30.10)

  • An exception will be thrown when the time column of outputTable is inconsistent with that of dummyTable in the anomaly detection engine (createAnomalyDetectionEngine). (1.30.10)

  • Bias correction is now available for functions skew and kurtosis on a distributed table. (1.30.9)

  • An exception will be thrown if one column is updated multiple times in the same update clause. (1.30.9)

  • Two columns (such as date and time) can now be specified for the parameter timeColumn of the time-series engine. (1.30.9)

  • The where clause of function sqlUpdate supports function call. (1.30.9)

  • The second parameter of functions firstNot and lastNot can be specified for distributed tables. (1.30.9)

  • Changed the return value of function tableInsert from the number of records written to the number of records successfully written when inserting data into a distributed table. (1.30.9)

  • A warning will be reported in the log if the data is not written successfully as it is out of the partitioning range. (1.30.9)

  • A high-availability stream table can be dropped by function dropStreamTable after it is undefined with function undef. (1.30.9)

  • Optimized SQL performance on wide tables. (1.30.8)

  • Reactive state engine (createReactiveStateEngine) supports multiple keyColumns. (1.30.8)

  • The metrics of a cross sectional engine (createCrossSectionalEngine) can use both aggregate functions and non-aggregate functions. (1.30.8)

  • Changed the default value of maxConnections to 512. (1.30.7)

  • Renamed function createTimeSeriesAggregator to createTimeSeriesEngine. The former is now used as an alias. (1.30.7)

  • In addition to the function name, fixed parameters are displayed in partial application. (1.30.6)

  • Functions date, month and datehour can be used on a temporal type partitioning column for partition pruning. Suppose a database is partitioned by date (DATE type) and the data type of the partitioning column of a table is TIMESTAMP, we can use the following filtering condition: where date(time) = 2021.03.02. (1.30.6)
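
    A sketch of the example above (the database and table names are hypothetical):

    ```
    t = loadTable("dfs://tickDB", "trades")           // database partitioned by date (DATE type)
    select * from t where date(time) = 2021.03.02     // time is TIMESTAMP; date() enables pruning
    ```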

  • When a table is partitioned with a VALUE or RANGE domain on its time column, partitions are pruned according to the filter conditions in the where clause to improve query performance. If the time range of a partition necessarily satisfies a filter condition, that condition is removed from the subquery for that partition. (1.30.6)

  • Improved the efficiency of cache engine to avoid unnecessary OOM. (1.30.6)

  • Added an optional parameter fill to function createTimeSeriesEngine. Its value can be "none", "null" or "ffill". The default value is "none", indicating that no result is returned for an empty window. (1.30.6)

  • Functions compress and decompress support table type input. (1.30.6)

  • Increased the maximum metadata size involved in a single transaction from 16 MB to 128 MB to avoid the situation where some large tables cannot be deleted. (1.30.6)

  • Added two parameters snapshotDir and snapshotIntervalInMsgCount to the anomaly detection engine (createAnomalyDetectionEngine) to support snapshot. (1.30.6)

  • Renamed function createCrossSectionalAggregator to createCrossSectionalEngine. The former is now used as an alias. (1.30.6)

  • Adjusted the frequency of the system automatically checking the log file size to prevent it from exceeding the set limit. (1.30.3)

  • Added an optional parameter stacking to function plot, which only takes effect when chartType = LINE/BAR/AREA. (1.30.3)

  • In function subscribeTable, when the optional parameter offset is set to -2, no exception will be thrown if the offset of the persistent file is not found. Instead, the offset will be modified to -1, starting the subscription from the latest data. (1.30.3)

  • Increased the CPU bound limit from 32 cores to 64 cores on Windows. (1.30.3)

  • When calling the function addMetrics to add metrics to the time series engine dynamically, an exception will be thrown if parameter windowSize differs from the original specification. (1.30.3)

  • The parameter filter of function subscribeTable supports hash filter and range filter in addition to value filter. (1.30.3)

  • The cross sectional engine (createCrossSectionalEngine) now supports vectorized functions in addition to aggregate functions. (1.30.3)

  • The aggregate functions specified by parameter aggs of window join support returning multiple values. (1.30.2)

  • String concatenated by function concat will be automatically converted to BLOB if the length is over 65535 bytes or it contains ASCII 0 (NULL). (1.30.2)

  • The aggregate functions specified by parameter metrics of the time series engine support returning multiple values. Metrics with aliases such as avg(price) as price are also supported. (1.30.2)

  • Improved the performance of serialization and deserialization of data type SYMBOL by 5 to 10 times between DolphinDB data nodes or between data node and API. (1.30.1)

  • Enhanced the stability of functions for parallel and distributed jobs. (1.30.1)

  • Now support matrices with 0 rows or columns. (1.30.1)

  • Added an optional parameter timeTrigger to function subscribeTable for time-triggered message processing. (1.30.1)

  • STRING longer than 65535 bytes will be automatically converted to BLOB during serialization. (1.30.1)

Issues Fixed

  • Data contention when updating a table schema led to OOM problem and server crash. (1.30.22.4)

  • The backup might get stuck when the backup directory (backupDir) is on NFS. (1.30.22.4)

  • An out-of-bounds memory access error occurred when attempting to close a connection that was created after setting the maximum number of connections using setMaxConnections. (1.30.22.4)

  • When joining partitioned tables using a statement that did not conform to SQL standards, referencing a column from the left table in the where clause caused the server to crash. (1.30.22.4)

  • If creating an IPC in-memory table failed, creating another one with the same name caused the server to crash. (1.30.22.4)

  • An error was reported when the filtering condition in a distributed query contained a comparison between operands of SECOND and INT type. (1.30.22.4)

  • The SYMBOL type in an IPC in-memory table was not compatible with the STRING type. (1.30.22.4)

  • An error message "getSubChunks failed, path'/xx' does not exist" was reported when restoring data to a newly-created database. (1.30.22.2)

  • The elements accessed based on labels by loc function were incorrect. This issue was introduced in version 1.30.22. (1.30.22.2)

  • Scale loss occurred when restoring DECIMAL data. (1.30.22.2)

  • If the parameter atomic of function database was set to 'CHUNK', the versions of metadata on the controller and data nodes may be inconsistent if a transaction involved multiple chunks. (1.30.22.2)

  • Passing a non-string variable to the parameter label of function interval crashed the server. (1.30.22.2)

  • For a table partitioned by temporal values, queries with where conditions on the partitioning column were slow. This issue was introduced in version 1.30.22. (1.30.22.2)

  • The overflowed intermediate result of function mprod caused server crash. (1.30.22.2)

  • Concurrent execution of restore (and other) transactions may result in inconsistent metadata after server restart. (1.30.22.2)

  • On Windows, the files function returned inaccurate fileSize values for files exceeding 2 GB. (1.30.22.1)

  • In a high-availability cluster, if an error occurred during serialization when using addFunctionView, the function was not cleared from memory. (1.30.22.1)

  • In a high-availability cluster, adding a function view containing plugin methods to a controller caused failures in other controllers. (1.30.22.1)

  • Users with DB_MANAGE privilege failed to grant permissions to other users. (1.30.22.1)

  • Adding a node may cause backup errors. (1.30.22.1)

  • Queries on DFS tables using COMPO partitioning may cause data loss if the query: (1.30.22.1)

    • Did not use aggregate functions, order-sensitive functions, row reduce functions (such as rowSum), or fill functions (such as ffill) in the select statement.

    • Used one of the partitioning columns (except the last one for COMPO partitioning) as a pivot-by column.

  • If an error occurred in a symbol base file, reloading the file caused server crash. (1.30.22.1)

  • Specifying a tuple containing functions or expressions with multiple returns for the metrics parameter of createReactiveStateEngine caused the server to crash. (1.30.22.1)

  • When querying a large DFS table using the SQL keyword TOP or GROUP BY, an error was potentially raised. (1.30.22)

  • When a SQL query specified a column name that couldn't be recognized, the error message returned contained an incorrect column name instead of the actual unrecognized column name from the query. (1.30.22)

  • Failures to write to a partition of a DFS table with many columns could cause the server to crash. (1.30.22)

  • Concurrently loading and deleting multiple tables in a database could cause subsequent loadTable operations to fail with an error reporting it cannot find the .tbl file. (1.30.22)

  • The head and tail functions could not be used in aggregate functions. This bug was introduced in DolphinDB 1.30.18. (1.30.22)

  • A deadlock could occur when concurrently renaming a dimension table via renameTable and querying the same table. (1.30.22)

  • When querying a table with a large number of partitions using a SQL query with BETWEEN...AND... for partition pruning, the error The number of partitions [xxxxx] relevant to the query is too large could be raised. (1.30.22)

  • Using calculations or functions in a CASE WHEN condition could crash the server. (1.30.22)

  • Using the DISTINCT keyword in SQL queries could return incorrect results. (1.30.22)

  • When querying a VALUE or RANGE partitioned DFS table, if the SELECT clause and GROUP BY clause both applied the same time conversion function (e.g. date()) to the partitioning column, but used different aliases for that column, incorrect results could be returned. (1.30.22)

  • When deleting data from a partitioned table using a SQL DELETE statement, if all nodes storing the replicas for the relevant partition were offline, the error chunktype mismatched for path could be raised. (1.30.22)

  • The use of local executors could lead to deadlock situations during task scheduling. (1.30.22)

  • In the DolphinDB JIT version, when appending large amounts of data to a reactive state engine (createReactiveStateEngine) that used user-defined functions, incorrect results could be returned. (1.30.22)

  • A deadlock may occur when unsubscribeTable was called from multiple nodes simultaneously. (1.30.22)

  • Server crashed when the capitalization of the column names specified in metrics and input tables of a left semi join engine (createLeftSemiJoinEngine) was inconsistent. (1.30.22)

  • Server crashed when appending data to a stream table and persisting the table at the same time. (1.30.22)

  • After DROP table was called to delete a stream table, the table could not be deleted or unsubscribed from. (1.30.22)

  • Syntax parsing issues: statements such as "/" == "a" could not be parsed correctly. (1.30.22)

  • An additional column was output when the second parameter of function ols consisted solely of zeros. (1.30.22)

  • Server crashed due to parsing failure when the parameter aggs of function wj was not compliant. (1.30.22)

  • The result of function expr was incorrect if a DATEHOUR argument was passed. (1.30.22)

  • The web interface could not be accessed properly if the parameter webLoginRequired was configured to true. (1.30.22)

  • Incorrect results were returned when using cast to convert SYMBOL data. (1.30.22)

  • Function nullFill failed to fill the NULL values returned by function bucket. (1.30.22)

  • When a user-defined anonymous aggregate function was called with twindow in another user-defined function, an error "func must be an aggregate function" was raised. (1.30.22)

  • When a DolphinDB process was started, server crashed if a script (as configured with parameter run) containing function submitJob was executed. (1.30.22)

  • A function name conflict occurred between a function view and a module function at server restart when the following conditions were satisfied at the same time (1.30.21.6):

    • the server ran in standalone mode;
    • a function defined in a module was added as a function view with addFunctionView, and the function view was then dropped;
    • the module was specified in the configuration parameter preloadModules to be preloaded.

    The error messages reported for other naming conflicts were also enhanced.

  • In cluster mode, when SSL was enabled (enableHTTPS=true) for connections, the session could be disconnected if a large amount of data was transferred from the server to the client. (1.30.21.6)

  • In cluster mode, when joining tables under the same database (atomic = 'CHUNK') but located on different nodes, incorrect results could be returned. (1.30.21.6)

  • The reactive state engine did not handle the namespaces defined in metrics. (1.30.21.6)

  • Incorrect results were returned by function mskew or mkurtosis if the input X contained consecutive identical values and their number was greater than the window size. (1.30.21.6)

  • An error occurred when using order by on columns of STRING type with limit 0, k or limit k on MVCC tables. (1.30.21.5)

  • When deleting a function view with dropFunctionView, a server crash may occur due to the absence of locking during log writing. (1.30.21.5)

  • When joining two tables with equi join or inner join, incorrect results were returned if the two matching columns are of STRING and NANOTIMESTAMP types. (1.30.21.5)

  • When loading tables with loadTable, data loss may occur on the cold storage tier if the table names were improperly verified. (1.30.21.5)

  • The select distinct statement is disabled. The keyword "distinct" is recognized as function distinct, i.e., the order of the elements in the result is not guaranteed to be the same as the input, and the column name is distinct_xxx. (1.30.21.5)

  • When the configuration parameter datanodeRestartInterval was set to a time less than 100 seconds, the data node was immediately restarted by the controller in a graceful shutdown situation or after the cluster was restarted. (1.30.21.4)

  • Incorrect conversion when the input of function toJson was a tuple which contains numeric scalars. (1.30.21.4)

  • Incorrect conversion when the input of function toJson was a dictionary with its values being vectors of ANY type. (1.30.21.4)

  • A server crash may occur when function bar with parameter interval set to 0 was used to query a partitioned table. (1.30.21.4)

  • For N-to-1 replay, an error was reported when the key of the dictionary (set by parameter inputTables) was specified as SYMBOL type. This bug occurred since version 1.30.21. (1.30.21.4)

  • Scheduled jobs failed to be executed due to the unsuccessful deserialization of file jobEditlog.meta at node startup. (1.30.21.4)

  • Scheduled jobs were still executed until the next server startup, even though the serialization was unsuccessful when they were created. (1.30.21.4)

  • A server crash occurred when the defaultValue parameter of function array is specified as a vector. (1.30.21.4)

  • Passing non-table data to the newData parameter of upsert! could crash the DolphinDB server. (1.30.21.4)

  • The upsert!() function would fail when the following three conditions were satisfied at the same time: (1.30.21.4)

    • Only one record was to be updated
    • The newData parameter contained NULL values
    • The ignoreNull parameter was set to true

  • Attempting to add multiple new columns to an MVCC table in an update statement would result in a data type error. (1.30.21.4)

  • When specifying a column containing special characters such as control characters, punctuation marks, and mathematical symbols in the group by clause of a query, these special characters were improperly ignored. (1.30.21.4)

  • dropColumns! could not delete in-memory tables with sequential partitions. (1.30.21.4)

  • A controller may crash when loading a partitioned table from the local disk. (1.30.21.4)

  • Function getClusterDFSTables may return tables that have been deleted or do not exist. (1.30.21.4)

  • The physical paths of partitions may not match the metadata after new data nodes are added and moveReplicas() is executed. (1.30.21.4)

  • For N-to-N replay, if an element of the input data source for a table was empty, data in the output table may be misplaced. (1.30.21.4)

  • Occasional failures of creating a streaming engine due to uninitialized internal variables. (1.30.21.4)

  • For operations involving data flushing, data may be lost or the operations may get stuck if the physical directory of a partition did not exist (e.g., it had been manually deleted). (1.30.21.4)

  • Incorrect result of the temporalAdd function when specifying the parameter unit as "M". (1.30.21.4)

  • Data storage error may occur when different operations were performed on the same partition. (1.30.21.3)

  • Users who are given DB_READ or TABLE_READ privileges may not be able to execute queries. (1.30.21.3)

  • Server crashed in a high-availability cluster when reading raft logs at the reboot of the controller. (1.30.21.3)

  • Server crashed when using loadText to load a CSV file that contains unpaired double quotes (""). (1.30.21.3)

  • After new columns were added to an MVCC table, a server crash occurred when checking the table schema with function schema or adding comments to the new columns with function setColumnComment. (1.30.21.3)

  • Invisible characters in the partitioning column resulted in inconsistent versions between controller and data node. (1.30.21.3)

  • Server crashed when updating the in-memory table with index out of bounds. (1.30.21.3)

  • A server crash may occur when users log in frequently in high-concurrency scenarios. (1.30.21.3)

  • Server crashed when a user-defined function was specified for the metric of function streamEngineParser. (1.30.21.3)

  • When the stream table was not defined on the publisher, the reconnection on the subscriber resulted in file descriptor leaks. (1.30.21.3)

  • Server crashed when using function parseExpr to convert a string containing a lambda function. (1.30.21.3)

  • An error was reported when using function parseExpr to convert a string ending with a semicolon. (1.30.21.3)

  • Server crashed when using function repartitionDS to repartition a joined table and the parameter partitionType was specified as VALUE. (1.30.21.3)

  • If the matching columns of two partitioned tables were not partitioning columns, and some partitioning columns of the two tables had the same name(s), an incorrect result was returned when filtering the data with the partitioning columns of the right table. (1.30.21.3)

  • For a DFS table value-partitioned by month, an incorrect result was output when filtering the data on the first day of a month by the where condition. (1.30.21.3)

  • Server crashed when using order by in conjunction with limit to sort a column named "DATE" (case-sensitive) in reverse order. (1.30.21.3)

  • An incorrect result was output when performing time-series aggregate functions (e.g., pre, rank) on multiple columns. (1.30.21.3)

  • The data in the in-memory table returned by an aggregate function was changed when it was subsequently processed by moving functions such as move. (1.30.21)

  • An error was reported when anonymous aggregate function was specified for aggs of window join. (1.30.21)

  • For a DFS table value-partitioned by month, an incorrect result was output if the temporal type specified in the where clause was inconsistent with that of table columns, and the where condition contained the last day of the month. (1.30.21)

  • Server crashed when the independent variable (parameter X) of function ols was specified as a string. (1.30.21)

  • Server crashed when using function loadText to import data of string type. (1.30.21)

  • Server crashed when an MVCC table is used in a transaction statement. (1.30.21)

  • Incorrect results were returned when using the as clause with function deltas in conjunction with function corr. (1.30.21)

  • Terminating the DolphinDB process with the kill -9 command may cause redo logs not to be removed. (1.30.21)

  • A crash may occur when a table containing string columns was calculated in a reactive state engine. (1.30.21)

  • Submitting a metacode containing undefined variables via submitJob resulted in a crash. (1.30.21)

  • A node crash may occur after recovery from network failure in a cluster. (1.30.21)

  • Using partial application in metaprogramming with context by could obtain incorrect results. (1.30.21)

  • sqlObj could not be recognized as metacode in replayDS. (1.30.21)

  • If the left table of lj is an in-memory table and the right one is a DFS table which is located under a multilevel directory (e.g., dfs://mydbs/quotedb), an error would be reported. (1.30.21)

  • An error was reported when the metric of function createTimeSeriesAggregator contained a keyColumn. (1.30.21)

  • The getClusterPerf function caused deadlocks when executed by two nodes at the same time in a high-availability cluster. (1.30.21)

  • A crash may occur when the accumulate function was executed multiple times. (1.30.21)

  • After function createDailyTimeSeriesEngine finished execution, an error could be reported for temporal data in query results in some scenarios. (1.30.21)

  • Unexpected result returned by function isValid when adding two empty strings. (1.30.21)

  • Null values were returned when more than 128 filtering conditions connected with keyword or were specified in the where clause. (1.30.21)

  • An exception thrown by function loadText may lead to deadlocks under high load. (1.30.21)

  • After a function was added as a function view, the body returned by function getFunctionView had one fewer pair of brackets. (1.30.21)

  • Server crashed when a string vector was retrieved by slicing with index out of bounds. (1.30.21)

  • A crash may occur when using higher-order function each to apply a user-defined function to a table. (1.30.21)

  • No exception was thrown when the data appended to the cross sectional engine did not match the schema of dummy table. (1.30.21)

  • When joining DFS tables with the matching column different from the partitioning column, if the join result was queried by a select top clause and order by partitioning column, an incorrect result was returned. (1.30.21)

  • An error was reported when using function rpc or remoteRun to call a partially applied function. (1.30.21)

  • The file storing job logs was lost when it reached 1 GB. (1.30.21)

  • The number of OpenBLAS threads is now determined by the configuration parameter openblasThreads rather than the number of CPU cores. (1.30.21)

  • Serialization on a data node failed if the metadata of a partition exceeded 128 MB. (1.30.20)

  • Failure of replaying redo log resulted in partition status errors on the data node. (1.30.20)

  • A partition was wrongly deleted due to migration failure when the partition was moved to the configured coldVolume of tiered storage. (1.30.20)

  • When creating a database with atomic='CHUNK', slow startup occurred due to excessive metadata on the controller. (1.30.20)

  • Old data was prematurely reclaimed after update. (1.30.20)

  • The original table was not immediately reclaimed after the renameTable operation was performed. (1.30.20)

  • Server crashed when specifying a partition path ended with a "/" for function dropPartition. (1.30.20)

  • Users could not delete partitions that were automatically added when creating a DFS table with VALUE-based partitions by specifying conditions for dropPartition. (1.30.20)

  • Repeated deletions on an empty table caused cid errors in the metadata stored on the data node. (1.30.20)

  • The parameter dfsRecoveryConcurrency did not take effect after configuration. (1.30.20)

  • createReactiveStateEngine failed when specifying factor talibNull for the metrics. (1.30.20)

  • Server crashed when specifying external variables for the metrics of streamEngineParser. (1.30.20)

  • Server crashed when the row count of the left table was smaller than the window size in window join. (1.30.20)

  • Server crashed when using exec with limit and the number of returned rows was less than the limit. (1.30.20)

  • The isDuplicated and nunique functions returned incorrect results when working with DOUBLE and FLOAT data types. (1.30.20)

  • Calling parseExpr in user-defined functions caused parsing failure. (1.30.20)

  • The function getClusterPerf returned incorrect value of maxRunningQueryTime. (1.30.20)

  • Server crashed when using loadNpy to read excessively large npy files. (1.30.20)

  • Variables defined within a for-loop could not be accessed outside the loop using DolphinDB JIT version. (1.30.20)

  • Garbage collection of redo log got stuck when data was continuously written to an OLAP database. (1.30.19)

  • A node was wrongly considered alive by the controller after graceful shutdown. (1.30.19)

  • Partition locks were prematurely released due to timeout before a transaction was resolved, which led to write failure. (1.30.19)

  • Query conditions were wrongly processed when backing up data by specifying the conditions. (1.30.19)

  • Server crashed when the machine load was excessively high. (1.30.19)

  • When a cluster was restarted after a DFS table was updated, the original physical directories may not be recycled. (1.30.19)

  • Serialization failure of symbol base caused read errors. (1.30.19)

  • Streaming subscription failed to obtain data that was in memory but had been deleted from disk. (1.30.19)

  • Server crashed when the number of columns inserted by appendForJoin did not match the table schema of the left or right table of a join engine. (1.30.19)

  • When the parameter updateTime of function createSessionWindowEngine was specified and the output table was not a keyed table, the calculation could not be triggered after 2 * updateTime when a record arrived. (1.30.19)

  • Server crashed when data was continuously ingested to a daily time series engine after the session end. (1.30.19)

  • Failed to create a lookup join engine when the right table was specified as a shared keyed table. (1.30.19)

  • If a node was restarted when a stream table was persisted to disk, data loss and decompression failure may occur. (1.30.19)

  • Server crashed when specifying a time column (of a big array form) for dateColumn and timeColumn of replay and setting absoluteRate=false. (1.30.19)

  • No error was reported when specifying a user-defined function with a constant return value for the metrics of a reactive state engine. (1.30.19)

  • Server crashed when specifying temporary variables for metrics of createAnomalyDetectionEngine. (1.30.19)

  • When using SQL update with context by, if the first column was set to integral type and the subsequent columns were set to floating-point types, values in the floating-point columns were rounded. (1.30.19)

  • Concurrent pivot by queries may get stuck when the number of worker threads is small. (1.30.19)

  • Server crashed when using HINT_EXPLAIN to query data from a three-level partitioned table. (1.30.19)

  • Incorrect results when using function binsrch on a subarray of STRING type. (1.30.19)

  • Function cast returned empty when converting a vector of STRING type to a tuple. (1.30.19)

  • When aggregating multiple INT128 columns of an in-memory table, an error "The function min for reductive operations does not support data type INT128" occurred. (1.30.19)

  • getFunctionView did not return some function view bodies. (1.30.19)

  • Server crashed when an empty tuple was appended to itself and then loaded. (1.30.19)

  • Server crashed when using interval interpolation in a SQL query, if the granularity of the data type specified for the time range in the where clause is greater than the time granularity specified by the duration parameter of interval. (1.30.19)

  • Server crashed when using the function twindow in a SQL query. (1.30.19)

  • update on a DFS table failed when the set column names did not match the original column names (including case inconsistencies). (1.30.19)

  • When the function iterate was included in the metrics of a reactive state engine and the data cleaning mechanism was enabled, if data was inserted while the historical data was being cleaned, an error "vector::_M_default_append" was reported. (1.30.19)

  • When calling matrix([[datehour(0)],[datehour(0)]]) to create a matrix, an error "The data object for matrix function can't be string or any type" was reported. (1.30.19)

  • When specifying countNanInf for the parameter aggs of function wj, an error "An window join function must be an aggregate function" was reported. (1.30.19)

  • If no group was specified for createDailyTimeSeriesEngine, any uncalculated data from the previous day would be merged into the first window of the following day. (1.30.19)

  • The first window's calculation result across days in a daily time series engine was incorrect. Additionally, when the function fill was used to fill in NULL values, data outside the session was output. (1.30.19)

  • Tasks in In-Progress state could not be recovered during online recovery. (1.30.19)

  • Server crashed when specifying useSystemTime=true and mode for metrics of createTimeSeriesEngine. (1.30.19)

  • When specifying the tmove or move function for the metrics of createReactiveStateEngine, the server would crash if X was of the STRING or SYMBOL type. (1.30.19)

  • Failed to insert data with tableInsert when splitting and assigning a stream table with streamFilter. (1.30.19)

  • The insert failure of streamFilter may cause session disconnection due to excessively long error messages. (1.30.19)

  • When specifying function firstNot or lastNot for the metrics of createTimeSeriesEngine or createReactiveStateEngine and setting fill=`ffill, the output did not match expectations. (1.30.19)

  • Server crashed when specifying function mfirst or mlast for the metrics of createReactiveStateEngine, and X was of FLOAT, SHORT, CHAR, BOOL, INT128, STRING, or SYMBOL type. (1.30.19)

  • Executing function tableInsert changed the atomic level of a database from 'CHUNK' to 'TRANS'. (1.30.19)

  • When specifying tm-functions for the metrics of createReactiveStateEngine and the window parameter of the function is set to y, M or B, the calculation result was incorrect. (1.30.19)

  • Inconsistent STRING columns between replicas after online recovery. (1.30.19)

  • When passing TIME type data to index of resample and setting origin="end" and rule="D", an error "Invalid value for HourOfDay (valid values 0 - 23): 39" was reported. (1.30.19)

  • An error was reported when administrators (except the super admin) granted/denied/revoked permissions to themselves. (1.30.19)

  • Calculating imin or imax with byRow on a matrix with an empty row returned incorrect results. (1.30.19)

  • Function getControllerPerf returned an incorrect agent site when a controller crashed. (1.30.19)

  • When dataSync was not configured, an error occurred when dynamically calling the addNode function to add a node. (1.30.19)

  • An OOM error caused by concurrent writes, queries or calculations may lead to a server hang-up. (1.30.18)

  • When writes and reads are conducted in the OLAP engine at the same time, the result may be incorrect. (1.30.18)

  • When writing to an OLAP cache engine, if an exception other than OOM occurs, the system will repeatedly attempt to rewrite, which leads to a server hang-up. (1.30.18)

  • If moveReplicas is called after executing suspendRecovery, it fails to move some of the chunks. (1.30.18)

  • If a cluster is rebooted after submitting concurrent tasks, some chunks are always in the status of RECOVERING. (1.30.18)

  • If delete is used to delete a large amount of data, incorrect information may be written to the checkpoint file, causing a node to crash and preventing it from being restarted. (1.30.18)

  • If snapshot is enabled in the reactive state streaming engine, resubscription to a table with different metrics causes a server crash. (1.30.18)

  • Appending a single record to the lookup join engine may cause a server crash. (1.30.18)

  • If data written to a high-availability stream table has a different schema, it can still enter the persistence queue, and the error "Can't find the object with name" is reported after a leader switch. (1.30.18)

  • If the parameter fill is specified for createDailyTimeSeriesEngine, results are filled for dates without data. (1.30.18)

  • A non-admin user can use function createUser. (1.30.18)

  • The command changePwd does not limit the length of the new password. (1.30.18)

  • matrix([],[]) leads to a server crash. (1.30.18)

  • When using exec with pivot by, if no function is used on the exec column, the statement will generate a table, rather than a matrix. (1.30.18)

  • If numJobs > 1 or numJobs = -1 is set in function randomForestClassifier for concurrent jobs, a repeated use of dataSource leads to a server crash. (1.30.18)

  • If the parameter duration of function interval has a different precision from that of the parameter X, a crash occurs. (1.30.18)

  • When creating an in-memory keyed table with keyedTable(keyColumns, table), if the keyColumns in the table have duplicate values, a memory leak occurs. (1.30.18)

  • For moving functions that can be used on matrices, such as mcorr and mwavg, when calculating on indexed matrices, the label column may be lost in the result. (1.30.18)

  • A redo log recovery timeout may occur when a transaction is slowly synchronized to disk, resulting in data loss after the server restarts. (1.30.16)

  • A data node fails to start and reports the error message "Failed to open public key file. No such file or directory" when starting multiple DolphinDB clusters on one server. (1.30.16)

  • A scheduled job on a high availability cluster may fail to execute due to authentication failure of the switched leader because the initial UUIDs are different between controllers. (1.30.16)

  • After a data node crashes, the agent node reboots the offline node every second instead of at specified intervals configured in parameter datanodeRestartInterval. (1.30.16)

  • When the parameter jobFunc of scheduleJob contains a SQL update statement and a where clause with functions, the deserialization fails after the system restarts. (1.30.16)

  • After updating a distributed table, the original physical directories are not recycled if the transaction fails to commit. (1.30.16)

  • In a high concurrency scenario, a fully-occupied disk of the redo log may cause deadlock in the redo log recovery thread. (1.30.16)

  • If an OOM event occurs when writing to a data node, the node crashes after being stuck for a while instead of reporting the error. (1.30.16)

  • The chunk state is wrongly set when a transaction rollback times out, causing persistence failure in the stream table. (1.30.16)

  • Parameter validation error with function createCrossSectionalEngine: When timeColumn is specified without setting useSystemTime to be true, the error is not raised. (1.30.16)

  • For a time-series streaming engine, when useSystemTime is set to be true and outputTable is a distributed table, an exception of data type mismatch may be raised. (1.30.16)

  • If delayedTime is specified in the asof join streaming engine, write operations may lead to server crash. (1.30.16)

  • When appending more than 65536 rows of records to a high availability stream table twice and rollback occurs, the index.log reports an error "index.log contains invalid data" as two identical indices are written. (1.30.16)

  • Writing to a time-series streaming engine, daily time-series streaming engine or asof join streaming engine may lead to a server crash on Windows. (1.30.16)

  • When the subExecutor thread still has tasks in progress, after successfully executing unsubscribeTable, getStreamingStat().subWorkers still returns the unsubscribed topics. (1.30.16)

  • Loading a high availability stream table may fail after a node restarts. (1.30.16)

  • Reconnection fails after switching leaders in the raft groups of the stream table and the subscriber. (1.30.16)

  • When triggeringPattern = 'keyCount' and triggeringInterval is a tuple, createCrossSectionalEngine returns duplicate results. (1.30.16)

  • Error "Failed to decompress the vector. Invalid message format" occurs when loading a persisted stream table containing BLOB data. (1.30.16)

  • Crash occurs when writing BLOB data to a stream table and a single record exceeds 64KB. (1.30.16)

  • After assigning values to an added column in an in-memory table with select statement instead of exec statement, the node crashes when loading the table. (1.30.16)

  • An error "Read only object or object without ownership can't be applied to mutable function readRecord!" occurs when loading binary files with function readRecord!. (1.30.16)

  • The parser may report an error for function calls when the right bracket is not placed on the same line as the left bracket. (1.30.16)

  • When querying a partitioned table with value domain for the last k rows of records in each group (context by partitionCol limit -k), some of the results do not satisfy the where conditions if no eligible data exists in a partition. (1.30.16)

  • An error "More than one column has the duplicated name" occurs when calling function rolling or moving in SQL statement without specifying the generated column name. (1.30.16)

  • NULL values are generated if no data is found in a step duration of function interval. (1.30.16)

  • A wrongly specified parameter rowKeys of function sliceByKey leads to a server crash. (1.30.16)

  • The null flag is not set after calling replace! on a vector with NULL values. (1.30.16)

Client Tools

GUI

Note: The version number of DolphinDB GUI has been adjusted to improve user experience. The new version number aligns with DolphinDB Server 2.00 and is forward compatible.

  • Added support for uploading a CSV file as an in-memory table, with a popup CSV editor for confirming the schema before creating the table. (1.30.22.2)

  • Added autosave feature to save scripts at scheduled intervals of 5 seconds. (1.30.22.2)

  • Added support for displaying the entire record by double-clicking the cell in the Data Browser. (1.30.22.2)

  • Added support for automatically wrapping excessively long log messages. (1.30.22.2)

  • Enhanced the error messages when the GUI fails to connect to the server. (1.30.22.2)

  • Enhanced the error messages when synchronizing modules to the server. (1.30.22.2)

  • Optimized the plot function for generating graphs by (1.30.22.2):

    • Retaining the right numerical axis when the x-axis is of temporal type.
    • Removing default symbols from the breakpoints in line charts.

  • When displaying negative numbers or DECIMAL32/64/128 data in the Log or Data Browser, commas are now used as the thousands separator for the integer part of values. (1.30.22.2)

  • Fixed an error with creating dimension tables using the Python Parser. (1.30.22.2)

  • Fixed an issue where negative values copied from cells in the Data Browser would be pasted as wrong values. (1.30.22.2)

  • Fixed an issue where the GUI would freeze after failing to connect to server. A limit has been added for the reconnection time. (1.30.22.2)

  • Fixed an issue where renaming a file in GUI would not actually change the file name in the file system as expected. (1.30.22.2)

  • Added support for DECIMAL128 data type. (1.30.22.1)

  • Added "Oracle" and "MySQL" options to the language dropdown for SQL parsing. (1.30.22.1)

  • Added "refresh" option to refresh variables. (1.30.22.1)

  • Added "Encrypt and synchronize to server" option to synchronize a module to server and save it as an encrypted .dom file. (1.30.22.1)

  • Support for the select NULL statement. (1.30.22.1)

  • The following keywords are highlighted (1.30.22.1): JOIN, FULL JOIN, LEFT SEMI JOIN, DECIMAL128, DATEHOUR, IS, CREATE DATABASE, create database, inner join, sub, full outer join, right outer join, left outer join, drop table, if exists, drop database, update...set, alter xxx drop/rename/add, create table, and nulls first

  • Optimized the script import logic of #include. (1.30.22.1)

  • Fixed an issue of incorrect target path when synchronizing modules. (1.30.21.3)

  • Enhanced syntax highlighting for some SQL keywords. (1.30.21.1)

  • Data of DECIMAL type can now be displayed. (1.30.21.1)

  • Enhanced syntax highlighting for DECIMAL32 and DECIMAL64. (1.30.21.1)

  • Tables containing tuples can now be viewed from the Variable Explorer. (1.30.21.1)

  • To save a variable locally (or upload a file to the DolphinDB server), right click a variable in the Variable Explorer and select "download" or "upload". (1.30.21.1)

  • Fixed the display of NULL values in the Data Browser. (1.30.21.1)

  • Fixed incorrect month data when GUI parses month(0)~month(11) returned from server. (1.30.21.1)

Web-Based User Interface

New Features

  • If your DolphinDB license is expiring in two weeks or less, a warning message is displayed when connecting to DolphinDB databases. (1.30.22)

  • When creating new tables through the graphical user interface, the column data type dropdown now includes the DECIMAL128 data type option. (1.30.22)

  • You can now create databases and tables through a graphical user interface. (1.30.22)

  • You can now view all databases and tables (including the schema, columns and partitions) in the Database view. (1.30.22)

  • If no code is selected, clicking the "Execute" button will now execute all of the code in the editor. (1.30.22)

  • Support for select NULL SQL statement. (1.30.22)

  • SQL keywords can now be recognized in all-capitalization. (1.30.22)

  • Support for creation and display of DECIMAL data type. (1.30.22)

  • Added switches for enabling/disabling code minimap (code outline) and code completion to the toolbar at the top of the editor. The toolbar also displays the code execution status and you can cancel a long running job by clicking "Executing". (1.30.21)

  • Added shortcuts to copy line up/down. (1.30.21)

  • Support for displaying Decimal32/64 values. (1.30.21)

  • In the Dataview, you can now select the text in a dictionary. (1.30.21)

  • A new menu is added to each DFS table in the Database view. It allows you to view the table schema, preview the first 100 records, and add columns to the table. (1.30.21)

  • In the Database view, you can now expand a DFS table to view its columns in a list and edit the comment of each column. (1.30.21)

  • Support for colored output in the terminal. (1.30.21)

  • Added new settings menu where you can customize the number of decimal places. For example, enter "2" to display numbers with 2 decimal places. (1.30.20)

  • Added support for visualization of dictionaries. (1.30.20)

  • You can now navigate to the associated documentation by clicking the error code (e.g., 'RefId: S00001'). (1.30.20)

  • "Shell" tab: Added new "Database" view for checking databases and tables. (1.30.20)

Improvements

  • Enhanced the design of the popups displaying version and node information in the upper-right corner. (1.30.22)

  • When a table is expanded in the Database view, the contents of that table are automatically displayed in the Data view. (1.30.22)

  • To prevent the interface from getting stuck, the Data view now limits displayed content to a maximum of 10,000 characters for tables, vectors, and dictionaries. (1.30.22)

  • The dropdown lists in the Controller/Nodes Configuration dialogs now include all available configuration parameters. (1.30.22)

  • The input for the regularArrayMemoryLimit configuration parameter has been changed from a dropdown menu to an input field in the Nodes Configuration dialog. (1.30.22)

  • Unnecessary input validation checks have been removed for certain configuration parameters in the Controller/Nodes Configuration dialogs. (1.30.22)

  • In the Nodes Configuration dialog, configuration parameters with an empty value field will not be included in the configuration file. (1.30.22)

  • Removed the DFS tab from the left navigation pane. Database and table information is now integrated into the Shell tab Database view. (1.30.22)

  • Rearranged page layout: Moved the terminal view to the top right and the data view to the bottom. (1.30.22)

  • The terminal view now displays up to 100,000 entries. (1.30.22)

  • Layout enhancements - table preview is now displayed at the bottom of the editor to fit more columns. (1.30.21)

  • Code in the editor is now auto saved. (1.30.21)

  • Improved page load speed. (1.30.21)

  • Data view enhancements: (1) column, row and data type information is displayed below each table; (2) enhanced horizontal scroll bar to display full table. (1.30.21)

  • Enhanced the fonts to reduce file size. (1.30.21)

  • Reduced line height in the Local Variables and Shared Variables views. (1.30.21)

  • The type of the connected node is now displayed at the top navigation bar. (1.30.21)

  • Enhanced Dataview display. (1.30.21)

  • Enhanced error messages for insufficient privileges to access database. (1.30.21)

  • If the path of a DFS database contains dots (e.g., dfs://aaa.bbb.ccc), it is recognized as its directory structure. The database is displayed under a directory tree in the Database view. (1.30.21)

  • The function documentation popup is now up to date with the DolphinDB official manual online. (1.30.21)

  • Users must log in to check the data node logs. (1.30.21)

  • Enhanced code highlighting to keep it consistent with the DolphinDB extension for Visual Studio Code. (1.30.20)

  • Numeric values are formatted with comma (,) as the thousands separator, e.g., 1,000,000,000. (1.30.20)

  • Updated keywords, code completion, and function documentation. (1.30.20)

  • The execution information is displayed in a more compact layout. (1.30.20)

  • Enhanced the "status" popover view to display status information in different categories. (1.30.20)

  • Enhanced table pagination design and added tooltips for icon buttons. (1.30.20)

  • "Job" tab enhancements: Adjusted the field names; Added support for job search by client IP. (1.30.20)

  • Now when connecting to a controller of a high-availability cluster on the web-based cluster manager, you will be redirected to the leader, where information on all nodes is displayed. (1.30.19)

  • With the integrated user interface, you can now view, suspend and cancel jobs (running, submitted or scheduled) in DolphinDB. Note that after you have upgraded the server version, the "web" folder must be updated as well. The new version of Web-Based Cluster Manager uses the WebSocket protocol to enhance its support for binary protocols. Your web browser may need to be updated to the latest version. We recommend using the latest version of Chrome or Edge. (1.30.16)

Issues Fixed

  • Fixed an issue where partitions were incorrectly shown in the menu for dimension tables in the Database view. (1.30.22)

  • The system now displays an error message if you try to start data nodes without first logging in. (1.30.22)

  • Fixed an issue in the Data view where null values in DATE columns were incorrectly displayed as the string "null". Now, null DATE values are properly displayed as empty cells. (1.30.22)

  • Fixed incorrect syntax coloring in the Editor view for vectors containing empty string elements represented by a single back quote `. (1.30.22)

  • Fixed an issue where array vector columns were not included in the columns menu in the Database view. (1.30.22)

  • Clicking an empty symbol in the Local Variables view no longer triggers an error. (1.30.22)

  • The terminal view and editor now use a fixed-width font. (1.30.22)

  • The Local Variables view now has a scroll bar. Previously, content overflowed into the Shared Variables view without a scroll bar. (1.30.22)

  • Fixed incorrect configuration parameter names related to asynchronous replication. (1.30.22)

  • Fixed function documentation display issue when you hover over functions such as append!. (1.30.21)

  • Fixed the matrix display issue. (1.30.21)

  • Fixed the REFID links in the terminal. (1.30.21)

  • Fixed font display issues in the terminal. (1.30.21)

  • Enhanced syntax highlighting logic; Fixed highlighting issues with set() and values(). (1.30.21)

  • Fixed the horizontal axis display issue when plotting an OHLC chart. (1.30.21)

  • Fixed the function documentation popup display issue. (1.30.21)

  • Fixed the date and time display issue in the terminal and Data view. (1.30.21)

  • Enhanced the messages on login failures. (1.30.21)

  • The height of the Database view can now be resized. (1.30.21)

  • Fixed the lag issues in the database list. (1.30.21)

  • Fixed an issue where the temporal labels were not correctly formatted in a plot. (1.30.20)

  • Fixed an issue where defunct processes were generated when a data node was started via the web interface or a cluster was restarted repeatedly. (1.30.20)

API

Java

  • Added support for DECIMAL128 data type. (1.30.22.1)

  • Added parameter sqlStd to DBConnection for SQL parsing. (1.30.22.1)

  • Optimized the performance of BasicDBTask. (1.30.22.1)

  • The getString method will no longer return scientific notation for Float and Double data types when the absolute value is less than 0.000001 or greater than 1000000.0. (1.30.21.3)

  • Added method append to all vector classes. (1.30.21.1)

  • Added class AutoFitTableUpsert. (1.30.21.1)

  • The write mode of method MultithreadedTableWriter can now be upsert. (1.30.21.1)

  • Added callback method callbackHandler to MultithreadedTableWriter. (1.30.21.1)

  • Array vectors can be written to DFS tables with PartitionedTableAppender. (1.30.21.1)

  • Added support for DECIMAL data type. (1.30.21.1)

  • Subscribed data can be pushed through the connection established by the API subscriber. (1.30.21.1)

  • Modified the results of applying getRowJson to array vectors for JSON compatibility. (1.30.21.1)

  • Fixed the recurrent data submission issue in HA environment. (1.30.21.1)

  • Fixed incorrect month data when Java API parses month(0)~month(11) returned from server. (1.30.21.1)

  • Fixed an issue where "connection has been closed" is reported when array vectors are written to a DFS table with MultithreadedTableWriter. (1.30.21.1)

  • Fixed data corruption when the size of results exceeds 268,435,455 (i.e., 2^28 - 1). (1.30.21.1)

  • Achieved load balancing of requests when connecting to the cluster through the API. (1.30.19.1)

  • Added new class streamDeserializer to parse heterogeneous stream tables. Added the streamDeserializer parameter for function subscribe to receive data parsed by streamDeserializer. (1.30.19.1)

  • Added new parameters userName and passWord to function subscribe. (1.30.19.1)

  • Support the reconnect parameter for DBConnection.connect to reconnect nodes automatically in scenarios where high availability is not enabled. (1.30.19.1)

  • Improved the ExclusiveDBConnectionPool class to run multiple DBConnection instances concurrently in the background. (1.30.19.1)

  • Added new parameters highAvailabilitySites, initialScript, compress, useSSL, and usePython to ExclusiveDBConnectionPool. (1.30.19.1)

  • Added new parameter usePython to method DBConnection.connect to parse scripts with Python parser. (1.30.19.1)

  • Added class BasicTableSchema to store the schema (including rows, cols, colName, colType, etc.) information of BasicTable. (1.30.19.1)

  • Added new parameter tableName to DBConnection.run to obtain the schema of an in-memory table. (1.30.19.1)

  • When writing to an in-memory table with MultithreadedTableWriter, dbPath must be set to NULL, and tableName must be specified as the in-memory table name. (1.30.19.1)

  • Support batch processing in streaming data subscription. (1.30.19.1)

  • Support COMPLEX, POINT, and SYMBOL types. (1.30.17.2)

  • Support array vectors. (1.30.17.2)

  • Added class MultithreadedTableWriter for multi-threaded writes to DFS tables, in-memory tables and dimension tables. (1.30.17.2)

  • Added parameter compress to DBConnection to support upload and download of compressed data. (1.30.17.2)

  • Fixed the issue where Java API fails to switch to another data node after the current node is shut down gracefully in high-availability mode. (1.30.17.2)

  • Fixed the following issues:

    • When subscribing to a stream table published on Windows, an error "Connection reset" is reported;
    • When subscribing to a stream table published on Linux, data cannot be ingested after API gets stuck. (1.30.17.2)

Python

  • The PROTOCOL_ARROW serialization protocol is now supported on all operating systems for the DolphinDB Python API. (1.30.22.4)

  • Added support for uploading Pandas 2.0 DataFrames that use PyArrow as the storage backend. (1.30.22.4)

  • Explicit type conversions to Decimal32 and Decimal64 now support specifying scale. (1.30.22.4)

  • The data transmission protocol PROTOCOL_DDB now supports uploading and downloading array vectors of Decimal32 and Decimal64 data types. (1.30.22.3)

  • The run() method for Session and DBConnection classes has new parameters for specifying task parallelism and priority. (1.30.22.2)

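    A minimal sketch of the new task controls (the keyword names priority and parallelism are assumptions based on this release note; the connection details and query are placeholders):

    ```python
    import dolphindb as ddb

    s = ddb.Session()
    s.connect("localhost", 8848, "admin", "123456")

    # Run a script with an assumed job priority and degree of parallelism.
    result = s.run("select count(*) from loadTable('dfs://demo', 'trades')",
                   priority=4, parallelism=8)
    ```
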
  • Session objects now implement locks to ensure thread safety. (1.30.22.2)

  • The Table where method now generates SQL queries without adding parentheses around the passed condition when only a single condition is provided. (1.30.22.2)

  • Constructing a Table object with the dbPath and data parameters now loads the table using the same logic as the Session loadTable method. (1.30.22.2)

  • The DolphinDB Python API now supports NumPy 1.24.4. The previous version was compatible with NumPy 1.18 - 1.23.4. (1.30.22.2)

  • Fixed SQL query generation when combining multiple filtering conditions with the Table.where method to match the intended logic. (1.30.22.2)

  • Fixed an issue that previously caused the Table.drop method to not execute as expected under certain conditions. (1.30.22.2)

  • The Table.showSQL method now generates the correct SQL statement after applying the where method to TableUpdate and TableDelete objects (which are returned by update and delete). (1.30.22.2)

  • Fixed an issue where non-Table objects were incorrectly compressed when being uploaded using the Session.upload method. (1.30.22.2)

  • Fixed an issue where the SQL statements generated from Table methods were incorrectly concatenated. (1.30.22.2)

  • Fixed internal parameter handling issues that previously caused the Table.showSQL method to generate SQL statements with incorrect logic. (1.30.22.2)

  • The old README file for the DolphinDB Python API is deprecated and no longer maintained. Please use the new DolphinDB Python API manual going forward for up-to-date documentation on the latest version. (1.30.22.1)

  • Bugfix: Fixed an issue where a TableAppender (previously known as tableAppender), TableUpserter (previously known as tableUpsert), or PartitionedTableAppender failed to obtain a reference to a Session or a DBConnectionPool because the object had already been destructed. (1.30.22.1)

  • Bugfix: Fixed an issue where DBConnectionPool was destructed without calling shutDown(), causing the process to get stuck. (1.30.22.1)

  • Bugfix: Fixed occasional message parsing failures when subscribing to data streams. (1.30.22.1)

  • Bugfix: Removed unnecessary warning messages when uploading array vectors of BLOB, INT128, UUID, or IPADDR type using a TableAppender (previously known as tableAppender), TableUpserter (previously known as tableUpsert), or PartitionedTableAppender. (1.30.22.1)

  • Improvement: Enhanced message texts. (1.30.22.1)

  • Bugfix: Fixed an issue with the order of execution for queries containing multiple "where" conditions. (1.30.22.1)

  • Bugfix: Fixed memory leak when uploading DataFrames containing DECIMAL columns. (1.30.22.1)

  • New feature: The MultithreadedTableWriter now supports automatic reconnection when writing to stream tables. (1.30.22.1)

  • Improvement: The class "session" has been renamed to "Session". The old class name is still supported for backwards compatibility. (1.30.22.1)

  • Improvement: The class previously named "tableAppender" has been renamed to "TableAppender". The old class name is still supported for backwards compatibility. (1.30.22.1)

  • Improvement: The class previously named "tableUpsert" has been renamed to "TableUpserter". The old class name is still supported for backwards compatibility. (1.30.22.1)

  • New feature: A __DolphinDB_Type__ attribute can now be specified when uploading DataFrames to explicitly cast columns into particular DolphinDB data types. (1.30.22.1)

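    A minimal sketch, assuming the attribute maps column names to type constants from dolphindb.settings (the table and column names are placeholders):

    ```python
    import dolphindb as ddb
    import dolphindb.settings as keys
    import pandas as pd

    s = ddb.Session()
    s.connect("localhost", 8848, "admin", "123456")

    df = pd.DataFrame({"sym": ["A", "B"],
                       "day": pd.to_datetime(["2023-01-01", "2023-01-02"])})
    # Upload "day" as a DolphinDB DATE column instead of the default
    # nanosecond-timestamp mapping.
    df.__DolphinDB_Type__ = {"day": keys.DT_DATE}
    s.upload({"t": df})
    ```
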
  • New feature: The Session and DBConnectionPool classes now support a new show_output parameter to specify whether to print the output of script executions in the Python terminal. (1.30.22.1)

  • New feature: Support for uploading and downloading NumPy data in C order. (1.30.22.1)

  • Improvement: An error message will be reported if something goes wrong with the handler during stream data subscription. (1.30.22.1)

  • Improvement: The logic for handling garbled or corrupted string data has been improved. (1.30.22.1)

  • New feature: Classes TableAppender (previously known as tableAppender), TableUpserter (previously known as tableUpsert), and PartitionedTableAppender now support automatic data type conversion based on the table schema. (1.30.22.1)

  • Improvement: The message that was previously printed when a Table object is destructed has been removed. (1.30.22.1)

  • pandas version 1.0.0 or higher is now required as a dependency. (1.30.21.2)

  • Fixed a segmentation fault when calling the getUnwrittenData method after the MultithreadedTableWriter failed to insert data. (1.30.21.2)

  • Downloading of BLOB data larger than 64 KB is now supported. (1.30.21.2)

  • Fixed an issue where out-of-bounds access occurred when subscribing to data from DolphinDB server version 1.30.21/2.00.9 or later on macOS ARM. (1.30.21.2)

  • Fixed incorrect data type conversion when uploading null values of np.datetime64 type. (1.30.21.2)

  • Fixed decimal overflow when uploading a vector with the first element being a Decimal("NaN"). (1.30.21.2)

  • Fixed a segmentation fault when downloading BLOB sets using the PROTOCOL_DDB protocol. (1.30.21.2)

  • Fixed an issue where a session variable named "db" was overwritten when calling the loadTableBySQL method. (1.30.21.2)

  • Fixed an issue where the process would get stuck when data was not retrieved after calling the addTask method of DBConnectionPool. (1.30.21.2)

  • Updated pybind11 to v2.9.2. (1.30.21.1)

  • Added support for Python 3.10. (1.30.21.1)

  • Added a new parameter protocol to Session and DBConnectionPool constructors to specify the data transfer protocol. (1.30.21.1)

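    A minimal sketch, assuming the protocol constants are exposed in dolphindb.settings:

    ```python
    import dolphindb as ddb
    import dolphindb.settings as keys

    # Exchange data with the server over the DolphinDB-native protocol;
    # PROTOCOL_PICKLE and PROTOCOL_ARROW are the other options named in these notes.
    s = ddb.session(protocol=keys.PROTOCOL_DDB)
    s.connect("localhost", 8848, "admin", "123456")
    ```
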
  • Subscribed data can now be pushed using the connection initiated by the subscriber through Python API. (1.30.21.1)

  • Added a new parameter args to pass user-defined objects to DBConnectionPool.addTask. (1.30.21.1)

  • tableAppender, tableUpsert and PartitionedTableAppender now support uploading IPaddr, UUID, and INT128 values. (1.30.21.1)

  • Added support for downloading data using the Apache Arrow format. (1.30.21.1)

  • Added support for downloading and uploading DECIMAL values using the DolphinDB-customized data communication protocol. (1.30.21.1)

  • Error message enhancements. (1.30.21.1)

  • Fixed semaphore creation error which is raised after multiple creations of MultithreadedTableWriter in macOS. (1.30.21.1)

  • Fixed the "unmarshall failed" error when downloading an empty table containing STRING columns with pickle enabled. (1.30.21.1)

  • Fixed the issue where the request is aborted when the subscribed data contains array vectors. (1.30.21.1)

  • Fixed the issue in uWSGI when executing a SQL query with the Python API. (1.30.21.1)

  • Fixed an issue where uploaded np.nan values were displayed as "NaN" on the server instead of being converted to NULL values. (1.30.21.1)

  • When subscribing to a stream table, if msgAsTable=True and batchSize is a positive integer, the messages will be processed by block. (1.30.19.4)

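    A minimal sketch of block-based handling (host, ports, and table names are placeholders):

    ```python
    import dolphindb as ddb

    s = ddb.session()
    s.connect("localhost", 8848, "admin", "123456")
    s.enableStreaming(8900)   # local port that receives the pushed messages

    def handler(block):
        # With msgAsTable=True and a positive batchSize, each callback receives
        # a whole block of rows as a pandas DataFrame instead of single records.
        print(len(block))

    s.subscribe("localhost", 8848, handler, "trades", "blockAction",
                offset=0, msgAsTable=True, batchSize=1000)
    ```
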
  • Python API now supports NumPy up to version 1.23.4, and pandas 1.5.2. (1.30.19.4)

  • Error message enhancements for data uploads. (1.30.19.4)

  • Error message enhancements for Python API on macOS. (1.30.19.4)

  • Fixed an error when downloading data containing timestamps before 1970. (1.30.19.4)

  • Fixed a failure when writing data containing columns of type INT128/IPADDR/UUID/BLOB through tableAppender, tableUpsert and PartitionedTableAppender. (1.30.19.4)

  • Added error message when the specified value for batchSize is a decimal in stream subscription. (1.30.19.4)

  • Fixed server memory leak caused by undestroyed temporary database handle or table handle when deleting a partition with s.dropPartition or loading a table with s.loadTable. (1.30.19.4)

  • Added new setTimeOut method to the session class for configuring the TCP connection option TCP_USER_TIMEOUT. The method is only available on Linux. (1.30.19.3)

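    A minimal sketch (calling the method at the class level and passing the timeout in seconds are assumptions; per the note above, the method only takes effect on Linux):

    ```python
    import dolphindb as ddb

    # Configure the TCP_USER_TIMEOUT option before connecting (Linux only).
    ddb.session.setTimeOut(30)

    s = ddb.session()
    s.connect("localhost", 8848, "admin", "123456")
    ```
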
  • Added new parameter sortKeyMappingFunction to the createPartitionedTable method for dimensionality reduction of sort keys. (1.30.19.3)

  • You can now upload a DataFrame with specified DolphinDB data types by setting its __DolphinDB_Type__ attribute. (1.30.19.3)

  • Fixed an issue where the uploading result of a Boolean object was incorrect. (1.30.19.3)

  • Support function hints. (1.30.19.2)

  • Support official Python 3.8-3.9 on Windows. (1.30.19.2)

  • Support uploading data with function runTaskAsync of DBConnectionPool. (1.30.19.2)

  • Added new method enableJobCancellation to session on Linux. You can use Ctrl+C to cancel all tasks of session.run() that are being executed. (1.30.19.2)

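    A minimal sketch (Linux only, per the note above; the long-running call is a placeholder):

    ```python
    import dolphindb as ddb

    # Allow Ctrl+C to cancel all session.run() calls that are currently executing.
    ddb.session.enableJobCancellation()

    s = ddb.session()
    s.connect("localhost", 8848, "admin", "123456")
    # Press Ctrl+C in the terminal to cancel this call while it runs.
    s.run("sleep(60000)")
    ```
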
  • Fixed an issue where the server did not automatically release resources after a Table object was deleted. (1.30.19.2)

  • Support Python 3.7-3.9 in conda environment on Linux aarch64. (1.30.19.2)

  • The enableASYN parameter of the session object is deprecated. Please use enableASYNC instead. (1.30.19.1)

  • Added new system variable __version__. You can check the version number of the API through dolphindb.__version__. (1.30.19.1)

  • When writing to an in-memory table with MultithreadedTableWriter, dbPath must be set to NULL, and tableName must be specified as the in-memory table name. (1.30.19.1)

  • When calling print() with s.run(), the result can now be displayed on the API side. (1.30.19.1)

  • Added new object tableUpsert. (1.30.19.1)

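    A minimal sketch of the new object (the constructor keywords ddbSession and keyColNames are assumptions based on the API manual; the database path, table name, and key column are placeholders):

    ```python
    import dolphindb as ddb
    import pandas as pd

    s = ddb.session()
    s.connect("localhost", 8848, "admin", "123456")

    # Update existing rows by key and insert the rest.
    upserter = ddb.tableUpsert(dbPath="dfs://demo", tableName="pt",
                               ddbSession=s, keyColNames=["id"])
    upserter.upsert(pd.DataFrame({"id": [1, 2], "val": [10.0, 20.0]}))
    ```
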
  • Added new parameters mode and modeOption for MultithreadedTableWriter to update an indexed table, keyed table, or DFS table through upsert. (1.30.19.1)

  • Support uploading and reading array vectors of INT128, UUID, and IP types. Please set enablePickle=false before you upload or read array vectors of these types. (1.30.19.1)

  • Standardized the handling of null values. (1.30.19.1)

  • Support the reconnect parameter for session.connect to reconnect nodes automatically in scenarios where high availability is not enabled. (1.30.19.1)

  • Added new class streamDeserializer to parse the heterogeneous stream table. Added the streamDeserializer parameter for function subscribe to receive data parsed by streamDeserializer. (1.30.19.1)

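    A minimal sketch, assuming the heterogeneous stream table multiplexes two source tables under the keys "msg1" and "msg2" (all table names are placeholders):

    ```python
    import dolphindb as ddb

    s = ddb.session()
    s.connect("localhost", 8848, "admin", "123456")
    s.enableStreaming(8900)

    # Map each message key in the heterogeneous stream back to its source schema.
    sd = ddb.streamDeserializer({
        "msg1": ["dfs://demo", "trades"],
        "msg2": ["dfs://demo", "quotes"],
    }, session=s)

    def handler(row):
        # The last element of each parsed row identifies the key ("msg1"/"msg2").
        print(row)

    s.subscribe("localhost", 8848, handler, "outTable", "action1",
                offset=0, streamDeserializer=sd)
    ```
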
  • Fixed an issue where data containing garbled characters could not be downloaded through the API. (1.30.19.1)

  • Fixed an issue where the port was not released promptly after a session was closed. (1.30.19.1)

  • tableAppender now supports array vectors. (1.30.19.1)

  • Achieved load balancing of requests when connecting to the cluster through the API. (1.30.19.1)

  • Fixed the issue where uploading a DataFrame fails if the first row of a string column is None. (1.30.17.4)

  • Fixed the issue where the creation of objects in class DBConnectionPool fails when the parameter loadBalance is set to True. (1.30.17.3)

  • Support NumPy 1.22.3 and Pandas 1.4.2 (excluding Pandas 1.3.0). (1.30.17.2)

  • You can import a DataFrame with arrays to DolphinDB as a table with array vectors. (1.30.17.2)

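    A minimal sketch, assuming each cell of the array column holds a NumPy array of same-typed elements (the variable name t is a placeholder):

    ```python
    import dolphindb as ddb
    import numpy as np
    import pandas as pd

    s = ddb.session()
    s.connect("localhost", 8848, "admin", "123456")

    # A column whose cells are arrays is imported as an array-vector column.
    df = pd.DataFrame({
        "id": [1, 2],
        "prices": [np.array([9.9, 10.1]), np.array([10.0, 10.2, 10.3])],
    })
    s.upload({"t": df})
    ```
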
  • Fixed issues with uploading and downloading ANY vectors. (1.30.17.2)

  • Changed the data type of errorCode of class ErrorCodeInfo from int to string. (1.30.17.2)

  • Added new methods hasError and succeed to check whether the data is written properly. (1.30.17.2)

  • Added new class MultithreadedTableWriter for multi-threaded writes to partitioned DFS tables, in-memory tables and dimension tables, with support for SSL communication, compressed communication, high-availability data writes and more. (1.30.17.1)

  • Added new parameter compress for session objects for downloading compressed data. (1.30.17.1)

  • Reduced the time taken by the Python Global Interpreter Lock (GIL) in a session. (1.30.17.1)

  • Added new method toList for table objects for converting array vectors to 2D arrays. (1.30.17.1)

  • PartitionedTableAppender now supports automatic temporal type conversion for table writes. (1.30.17.1)

  • Added new parameters engine, atomic, enableChunkGranularityConfig for session.database. Note that these parameters are only supported in the TSDB engine (provided in DolphinDB server 2.00.0 and later). (1.30.17.1)

  • Added new parameters compressMethods, sortColumns, keepDuplicates to database.createPartitionedTable. Note that these parameters are only supported in the TSDB engine (provided in DolphinDB server 2.00.0 and later). (1.30.17.1)

  • Fixed the data loss issue with session.subscribe. (1.30.17.1)

  • Adjusted the version numbering scheme of Python API to keep it consistent with that of the DolphinDB server. (1.30.16.1)

  • Connection to 2.00 and later versions of DolphinDB server is now supported. (1.30.16.1)

  • Upload and download of array vectors are now supported. (1.30.16.1)

  • Added new parameter keepAliveTime for the Session class method connect() to specify the duration between two keepalive transmissions. The default value is 30 (seconds). Specify a greater value for this parameter when querying large amounts of data to avoid disconnections. (1.30.0.15)

  • orca: Fixed the issue with the function orca.panel. (1.30.0.10)

  • orca: New function for calculating the rolling rank. (1.30.0.9)

  • orca: Support for calculating weighted rolling mean. (1.30.0.9)

  • orca: New function orca.read_in_memory_table for querying DolphinDB in-memory tables. (1.30.0.9)

  • orca: New function orca.panel. (1.30.0.9)

  • orca: Fixed the issue that the specified where condition took no effect in window join. (1.30.0.9)

  • orca: Removed the lazy parameter from groupby as it only supports lazy evaluation now. (1.30.0.9)

  • New method runTaskAsyn for DBConnectionPool to provide a convenient way to execute asynchronous tasks concurrently. (1.30.0.8)

  • Fixed the issue that the where condition specified in update took no effect. (1.30.0.8)

  • Fixed a client crash issue when appending data asynchronously to a database via Python API. (1.30.0.8)

  • Fixed an error that occurred when an object uploaded via the Python API had the same name as an object from a previous upload. (1.30.0.8)

  • Removed the restriction that the pandas version must be lower than 1.0 for Python API installation. (1.30.0.7)

  • Added partitionedTableAppender for concurrent writes to a partitioned DFS table. (1.30.0.6)

  • Added new parameter fetchSize to run to specify the number of rows retrieved each time. (1.30.0.6)

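    A minimal sketch, assuming that with fetchSize set, run returns a block reader exposing hasNext and read (connection details and the query are placeholders):

    ```python
    import dolphindb as ddb

    s = ddb.session()
    s.connect("localhost", 8848, "admin", "123456")

    # Results stream back in blocks of up to fetchSize rows each.
    block = s.run("select * from loadTable('dfs://demo', 'trades')", fetchSize=10000)
    while block.hasNext():
        df = block.read()   # one block as a pandas DataFrame
        print(len(df))
    ```
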
  • Support for batch processing in streaming data subscription. (1.30.0.6)

  • Added new parameter clearMemory to automatically clear the variables generated within a session once the execution of run is completed. (1.30.0.6)

  • Version compatibility check is performed when connecting to DolphinDB server. (1.30.0.6)

  • When writing a DataFrame to DolphinDB via tableAppender, the date and time types in the DataFrame are automatically converted to the date and time types specified by the target table. (1.30.0.6)

  • Optimized data transmission performance. To connect to the latest DolphinDB server, upgrade Python API to 1.30.0.5 via pip3 install dolphindb==1.30.0.5. (1.30.0.5)