Releases: apache/seatunnel
2.1.1 Release
[Feature]
- Support JSON-format config files
- Add partition support to the JDBC connector
- Add ClickhouseFile sink on the Spark engine
- Support compiling with JDK 11
- Add Elasticsearch 7.x plugin on the Flink engine
- Add Feishu plugin on the Spark engine
- Add HTTP source plugin on the Spark engine
- Add ClickHouse sink plugin on the Flink engine
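With JSON-format config support, a job definition previously written in HOCON can also be expressed as plain JSON. A minimal, hypothetical sketch (the env/source/transform/sink sections mirror the usual SeaTunnel config layout; the plugin names and options shown are illustrative only and not verified against this release):

```json
{
  "env": { "execution.parallelism": 1 },
  "source": [
    { "plugin_name": "FakeSource", "result_table_name": "fake" }
  ],
  "transform": [],
  "sink": [
    { "plugin_name": "Console" }
  ]
}
```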
[Bugfix]
- Fix Flink ConsoleSink not printing results
- Fix JDBC type and dialect compatibility between JdbcSource and JdbcSink
- Fix transforms not executing when the data source is empty
- Fix datetime/date strings failing to convert to timestamp/date
- Fix tableExists not covering temporary tables
- Fix FileSink not working in Flink stream mode
- Fix config parameter issues in the Spark Redis sink
- Fix SQL table name parsing error
- Fix data not being sent to Kafka
- Fix file resource leak
- Fix ClassCastException when outputting data to Doris
[Improvement]
- Change JDBC-related dependency scope to default
- Use different commands to execute tasks
- Automatically identify the Spark Hive plugin and add enableHiveSupport
- Print config in its original order
- Remove the unused job name from JobInfo
- Add console limit and batching to the Flink fake source
- Add Flink e2e module
- Add Spark e2e module
- Optimize plugin loading and rename plugin package names
- Rewrite the Spark and Flink start scripts in code
- Quickly locate the offending SQL statement in Flink SQL transforms
- Upgrade log4j to 2.17.1
- Unify version management of third-party dependencies
- Use revision to manage the project version
- Add Sonar check
- Add SSL/TLS parameters to the Spark email connector
- Remove the return result of sink plugins
- Add flink-runtime-web to the Flink example
Please go to the official channel to download: https://seatunnel.apache.org/download
2.1.0 Release (first Apache version)
- Use JCommander for command-line parameter parsing, letting developers focus on the logic itself.
- Flink is upgraded from 1.9 to 1.13.5, keeping compatibility with older versions and preparing for subsequent CDC support.
- Support for Doris, Hudi, Phoenix, Druid, and other connector plugins; the complete list of supported plugins is available at plugins-supported-by-seatunnel.
- Extremely fast local development startup: the example module can be used without modifying any code, which is convenient for local debugging.
- Support for installing and trying out Apache SeaTunnel (Incubating) via Docker containers.
- The SQL component supports SET statements and configuration variables.
- Config module refactored to make it easier for contributors to understand while ensuring the project's license compliance.
- Project structure realigned to fit the new roadmap.
- CI/CD support and automated code-quality control (more plans will follow to support CI/CD development).
Please go to the official channel to download: https://seatunnel.apache.org/download
[Stable] v1.5.7
[Stable] v1.5.6
What's Changed
- [project rename] Changed start-waterdrop.sh to start-seatunnel.sh and changed the logo ASCII art from Waterdrop to SeaTunnel, by @garyelephant
- [Feature] Added the BaseAction abstraction, by @garyelephant in #810
- [feature] Allow users to customize log4j.properties, by @garyelephant in #267 (comment)
- [bugfix] Fixed a Kerberos configuration bug in the Spark config, by @garyelephant in #590
- [bugfix] Fixed bug #719, by @RickyHuo in #743
[Stable] v1.5.3
- [Feature] Added hive output plugin, see documentation
- [Feature] Added redis input plugin, see documentation
Note: Waterdrop ships as a ready-to-run package; there is no need to compile the source code yourself. Please download waterdrop-1.5.3.zip below.
Spark version requirement: >= 2.3, < 3.0
[Stable] v1.5.2
- [Feature] Added redis input plugin, see documentation
Note: Waterdrop ships as a ready-to-run package; there is no need to compile the source code yourself. Please download waterdrop-1.5.2.zip below.
Spark version requirement: >= 2.3, < 3.0
[Stable] v2.0.4
- Published API modules to the Maven Central repository.
- Added waterdrop-config source code.
- Added the build.md guide.
- Refined project code and pom.xml structure.
[Stable] v1.5.1
- [Feature] Added redisStream input plugin.
- [Feature] The mongoDB input plugin adds a schema parameter, supporting user-specified schemas.
- [Enhancement] Support the Nullable(Decimal(P, S)) type in the clickhouse output plugin.
- [Enhancement] Output plugins use the format parameter rather than serializer.
- [Bugfix] Fixed #492 #517 #534
Note: Waterdrop ships as a ready-to-run package; there is no need to compile the source code yourself. Please download waterdrop-1.5.1.zip below.
If the GitHub download is slow, you can download directly via Baidu Cloud (link: https://pan.baidu.com/s/19GUwZPC2YBG9Pt7iuF9TNw password: upeb).
Note: for spark >= 2.3 download waterdrop-1.5.1.zip; for spark < 2.3 download waterdrop-1.5.1-with-spark.zip
[Stable] v1.5.0
- [Enhancement] Support Chinese column names in the ClickHouse output.
- [Enhancement] Remove useless antlr4 code.
- [Enhancement] Support specifying a queue with -q or --queue.
- [Enhancement] Optimize batch processing and drop unnecessary code.
- [Feature] Replace the third-party jar (config-1.3.3-SNAPSHOT.jar) with the waterdrop-config module.
- [Feature] Add urldecode and urlencode filter plugins.
- [Bugfix] Fix #392 #411 (config parse bug).
- [Bugfix] Support specifying --driver-memory in the spark section of the Waterdrop config file (#507).
Note: Waterdrop ships as a ready-to-run package; there is no need to compile the source code yourself. Please download waterdrop-1.5.0.zip below.
If the GitHub download is slow, you can download directly via Baidu Cloud (link: https://pan.baidu.com/s/1vCpGUcpSdyetLMMB39J2fg password: ullf).
Note: for spark >= 2.3 download waterdrop-1.5.0.zip; for spark < 2.3 download waterdrop-1.5.0-with-spark.zip
Upgrade Guide
- If you upgrade from a previous version, you must update all plugin dependencies that you developed yourself.
[Stable] v1.4.3
- [Feature] Support ClickHouse cluster mode using the cluster parameter, reading the ClickHouse system.clusters table
- [Bugfix] Fix a bug in checkConfig when using result_table_name rather than table_name
- [Bugfix] Fix a MongoDB bug in Spark Structured Streaming output
- [Enhancement] Update ElasticSearch dependency to 7.6.2
- [Enhancement] Update clickhouse-jdbc dependency to 0.2.4
Note: Waterdrop ships as a ready-to-run package; there is no need to compile the source code yourself. Please download waterdrop-1.4.3.zip below.
If the GitHub download is slow, you can download directly via Baidu Cloud (link: https://pan.baidu.com/s/1Qik5I1IGsgx1u26plSOFDg password: fqkr).
Note: for spark >= 2.3 download waterdrop-1.4.3.zip; for spark < 2.3 download waterdrop-1.4.3-with-spark.zip