Want to build Stargate locally or even start contributing to the project? This is the right place to get started.
If you're developing on macOS, we've added notes throughout to highlight a few specific differences.
The fastest way to build your own copy of the Stargate coordinator code and Docker images involves the following steps:
- Make sure you are in the coordinator directory and have JAVA_HOME set to point to a JDK 1.8 installation.
- Do a local build of the coordinator jar files (include the -P dse option if you want to build the DSE version, but note this requires access to DataStax's Maven repository):
./mvnw clean install -DskipTests -P dse
- Generate Docker images (the image tag defaults to the Stargate version specified in the pom.xml):
./build_docker_images.sh
You can then use the docker-compose scripts to start Stargate locally. See also the apis README for information on compiling and building images for the API services.
We use google-java-format for Java code, and xml-format-maven-plugin for XML.
Both are integrated with Maven: the build will fail if some files are not formatted correctly.
To fix formatting issues from the command line, run the following:
./mvnw xml-format:xml-format fmt:format
Stargate uses multiple JDKs for its various components, as described in the sections below.
NOTE: The coordinator-related project is located in the coordinator/ directory.
The coordinator currently runs on Java 8 due to its backend dependencies. It's important to ensure that you have the correct JDK 8 installed before you can successfully compile the Stargate project. There are a number of versions of JDK 8 and a number of different ways to install them, but not all of them will work successfully with Stargate. For comparison, you can reference the JDK version used in our CI workflow.
Download JDK 8 from this link: https://adoptopenjdk.net/?variant=openjdk8&jvmVariant=hotspot
Install the JDK and add it to your path.
For example, if you are using a newer version of macOS, you are likely using Z shell (zsh) by default, so open your ~/.zshrc file and add the path there:
export JAVA_HOME="/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home"
export PATH="$JAVA_HOME/bin:$PATH"
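Since not every JDK 8 build works with Stargate, it's worth confirming that the java on your path really is a JDK 8 before building. A minimal sketch of such a check; the check_jdk8 helper and the sample version banners are illustrative, not part of the project:

```shell
# Illustrative helper: succeeds if a `java -version` banner reports JDK 8.
check_jdk8() {
  case "$1" in
    *'"1.8'*) return 0 ;;   # e.g. openjdk version "1.8.0_372"
    *)        return 1 ;;
  esac
}

# Example usage with canned banners; in practice feed it:
#   "$JAVA_HOME/bin/java" -version 2>&1 | head -n 1
check_jdk8 'openjdk version "1.8.0_372"' && echo "JDK 8 detected"
check_jdk8 'openjdk version "17.0.8"'    || echo "not JDK 8"
```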
NOTE: API related projects are located in the apis/ directory.
The Stargate API services that run externally to the coordinator node are located under the apis directory. These services require a more modern JDK in order to take advantage of the latest tools and frameworks. See the APIs README for information on compiling and running the API services, including the required JDK.
Scripts below assume you are located in the coordinator/ directory.
Stargate uses Maven for builds. You can download and install Maven from this link, or use the included Maven wrapper script as you would ordinarily use the mvn command.
To build locally run the following:
./mvnw clean package
You can also build a single module like this:
./mvnw package -pl cql -am
- NOTE: If you get a Could not find or load main class org.apache.maven.wrapper.MavenWrapperMain exception on Linux, upgrade your local wget.
Recognizing that users will have different preferences on how to run Stargate, multiple options are supported.
We've provided Docker Compose scripts that can be used to run Stargate locally. These scripts can use Stargate Docker images created from a local build. Alternatively you can reference a released Stargate version to use containers from Docker Hub, without requiring a local build.
There are two options available for running Stargate in Kubernetes. If you already have a Cassandra cluster, we provide a Helm chart you can use to install Stargate alongside that cluster.
For a more complete distribution including Stargate, Cassandra, and operational tools such as Medusa and Reaper, see the K8ssandra project. K8ssandra includes multiple Kubernetes operators and more advanced features such as multi-cluster deployments.
Before starting Stargate locally, you will need an instance of Apache Cassandra®. The easiest way to do this is with a Docker image (see Cassandra docker images).
NOTE: due to the way networking works with Docker for Mac, the Docker method only works on Linux. We recommend CCM (see below) for use with macOS.
Docker: Start a Cassandra 4.0 instance:
docker run --name local-cassandra \
--net=host \
-e CASSANDRA_CLUSTER_NAME=stargate \
-d cassandra:4.0
Cassandra Cluster Manager: Start a Cassandra 4.0 instance using ccm (note it's typically preferable to specify a patch version number such as 4.0.7):
ccm create stargate -v 4.0.7 -n 1 -s -b
NOTE: Before starting Stargate on macOS you'll need to add a loopback:
sudo ifconfig lo0 alias 127.0.0.2
Start Stargate from the command line as follows:
./starctl --cluster-name stargate --cluster-seed 127.0.0.1 --cluster-version 4.0 --listen 127.0.0.2 --simple-snitch
# See all cli options with -h
Or use a pre-built image from Docker Hub (see the image page to list the available versions):
docker pull stargateio/coordinator-4_0:v2.0.0-ALPHA-17
docker run --name stargate -d stargateio/coordinator-4_0:v2.0.0-ALPHA-17 --cluster-name stargate --cluster-seed 127.0.0.1 --cluster-version 4.0 --listen 127.0.0.2 --simple-snitch
The starctl script respects the JAVA_OPTS environment variable. For example, to set a Java system property with spaces in its value, run starctl as shown below. Note the double quotes embedded in the environment variable value: the value is re-evaluated (once) as a bash token before being passed to the JVM, and the embedded quotes are required to break the single value of JAVA_OPTS into a sequence of tokens. This kind of processing is not required for ordinary command line arguments, so they do not need any extra quoting.
env JAVA_OPTS='-Dmy_property="some value"' ./starctl --cluster-name 'Some Cluster' ...
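To see why the embedded quotes matter, here is a minimal sketch of the token-splitting behavior, assuming a single round of shell re-evaluation as described above (this demo uses eval and is not starctl's actual code):

```shell
# Demonstration only: re-evaluate JAVA_OPTS once, the way a launcher script might.
JAVA_OPTS='-Dmy_property="some value" -Xmx256m'
eval "set -- $JAVA_OPTS"   # one round of shell evaluation splits the value into tokens
NUM_TOKENS=$#
FIRST_TOKEN=$1
echo "$NUM_TOKENS"    # 2 tokens, not 3: the quotes kept "some value" together
echo "$FIRST_TOKEN"   # -Dmy_property=some value
```

Without the embedded double quotes, the same value would split into three tokens and the JVM would see a truncated property value.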
The instructions above describe how to start up a Stargate coordinator node and backing Cassandra cluster. If you are only using the CQL or gRPC interfaces to Stargate, these are the only components you need to start. Additional APIs including REST, GraphQL and Docs API are implemented as separate microservices which can be started independently using instructions found under the apis directory.
If you're an IntelliJ user, you can create a JAR Application run configuration pointing to stargate-lib/stargate-starter-[VERSION].jar and specifying stargate-lib/ as the working directory. Then disable the instrumenting agent in Settings | Build, Execution, Deployment | Debugger | Async Stacktraces. This will allow you to debug directly using the IntelliJ debug run option. You can debug any run configuration and tests as well.
java -jar -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005 -Dstargate.libdir=./stargate-lib stargate-lib/stargate-starter-1.0-SNAPSHOT.jar
Alternatively, use the JAVA_OPTS environment variable to pass debugging options to the JVM:
env JAVA_OPTS='-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005' ./starctl --cluster-name stargate ...
Then follow the steps found here.
Connect to CQL as normal on port 9042:
$ cqlsh 127.0.0.2 9042
Connected to stargate at 127.0.0.2:9042.
[cqlsh 6.0.0 | Cassandra 4.0.7 | CQL spec 3.4.5 | Native protocol v4]
Use HELP for help.
First, get an auth token to use on subsequent requests:
# Generate an auth token
curl -L -X POST 'http://127.0.0.2:8081/v1/auth' \
-H 'Content-Type: application/json' \
--data-raw '{
"username": "cassandra",
"password": "cassandra"
}'
Then use the token when accessing the REST API:
# Get all keyspaces using the auth token from the previous request
curl -L -X GET '127.0.0.2:8082/v1/keyspaces' \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--header 'X-Cassandra-Token: <AUTH_TOKEN>'
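The auth response is a small JSON object, so the token can be captured into a shell variable for the follow-up request. A sketch assuming the response carries the token in an authToken field; the canned RESPONSE below stands in for the curl output, and sed is used to avoid a jq dependency:

```shell
# In practice: RESPONSE=$(curl -s -L -X POST 'http://127.0.0.2:8081/v1/auth' ...)
RESPONSE='{"authToken":"3f9a6bf5-fake-token-for-illustration"}'
# Pull the value of the authToken field out of the JSON
AUTH_TOKEN=$(printf '%s' "$RESPONSE" | sed 's/.*"authToken":"\([^"]*\)".*/\1/')
echo "$AUTH_TOKEN"
# ...then pass it along as:  --header "X-Cassandra-Token: $AUTH_TOKEN"
```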
Integration tests require that Cassandra Cluster Manager (ccm) be installed and accessible via the OS PATH. The tests use ccm to start transient storage nodes that are normally destroyed at the end of the test run. However, if the test JVM is killed during execution, the external storage node may continue running and interfere with subsequent test executions. In that case, the transient storage process needs to be stopped manually (e.g. by using the kill command).
NOTE: to run integration tests on macOS, you'll need to enable several loopback addresses using the instructions below.
To run integration tests in the default configuration, run:
./mvnw verify
This will run integration tests for Cassandra 3.11 and 4.0. On a reasonably powerful laptop it takes about 40 minutes.
Note: Support for DSE is disabled by default. To build and test Stargate with the DSE 6.8 persistence module, specify the dse and it-dse-6.8 Maven profiles:
./mvnw verify -P dse -P it-dse-6.8
To run integration tests with all Cassandra and DSE persistence modules, run:
./mvnw verify -P it-cassandra-3.11 -P it-cassandra-4.0 -P dse -P it-dse-6.8
Note: Enabling one of the it-* Maven profiles will automatically disable the others.
If you're working with a single test to get something working or adding a new test, you may want to run with just that one test rather than waiting for the entire IT suite to complete. To do this, first make sure you have done a recent build, for example:
./mvnw clean install -DskipTests
Then you can run an individual test using the -Dit.test option. For example, this runs one of the CQL integration tests:
mvn -pl testing -Pit-cassandra-4.0 verify -Dit.test=SimpleStatementTest
You can even run a single test case (method):
mvn -pl testing -Pit-cassandra-4.0 verify -Dit.test="SimpleStatementTest#namedValuesTest"
When debugging integration tests, you may prefer to control the storage node manually. It does not matter exactly how the storage node is started (Docker, ccm, or a manual run) as long as port 7000 is properly forwarded from 127.0.0.1 to the storage node. If you are managing the storage node manually, use the following options to convey connection information to the test JVM:
-Dstargate.test.backend.use.external=true
-Dstargate.test.backend.cluster_name=<CLUSTER_NAME>
-Dstargate.test.backend.dc=<DATA_CENTER_NAME>
When integration tests run with debugging options, the related Stargate nodes will also be started with debugging options (using consecutive ports starting with 5100), for example:
-agentlib:jdwp=transport=dt_socket,server=n,suspend=y,address=localhost:5100
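Since the debug ports are assigned consecutively starting at 5100, the port for a given Stargate node is easy to compute. A trivial sketch; the 0-based node numbering used here is an assumption for illustration:

```shell
# Debug port for the N-th Stargate node started by a test (0-based index).
stargate_debug_port() { echo $((5100 + $1)); }

stargate_debug_port 0   # first node  -> 5100
stargate_debug_port 1   # second node -> 5101
```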
You can run multiple Java debuggers waiting for connections on ports 510N and up, one for each Stargate node required by the test. Note that most of the tests start only one Stargate node.
The picture below shows the remote-listening debug run configuration in IntelliJ. That configuration must be started before running the integration test in debug mode. You will observe two or more JVMs in the debugger: one running the integration tests and at least one running Stargate.
You can start and debug integration tests individually in an IDE.
If you're using ccm to manage storage nodes during tests, it must be accessible from the IDE's execution environment (PATH).
When tests are started manually via an IDE or JUnit Console Launcher, you can specify the type and version of the storage backend using the following Java system properties:
-Dccm.version=<version> - the version of the storage cluster (e.g. 4.0.7)
-Dccm.dse=<true|false> - specifies whether the storage cluster is DSE or OSS Cassandra. If false, this option can be omitted.
There are two custom JUnit 5 extensions used when running integration tests.
- ExternalStorage - manages starting and stopping storage nodes (Cassandra and/or DSE) through ccm. This extension is defined in the persistence-test module. The @ClusterSpec annotation works in conjunction with ExternalStorage and defines parameters of the external storage nodes. When this extension is active, it will automatically inject test method parameters of type ClusterConnectionInfo.
- StargateCoordinator - manages starting and stopping Stargate nodes. This extension is defined in the testing module. The @StargateSpec annotation works in conjunction with StargateCoordinator and defines parameters of the Stargate nodes. When this extension is active, it will automatically inject test method parameters of type StargateConnectionInfo and StargateEnvironmentInfo.
Integration tests that do not need Stargate nodes (e.g. Cassandra40PersistenceIT) can use only the ExternalStorage extension by having the @ExtendWith(ExternalStorage.class) annotation either directly on the test class or on one of its super-classes.
Integration tests that need both storage and Stargate nodes should use the @UseStargateCoordinator annotation to activate both extensions in the right order.
The code element holding @ClusterSpec or @StargateSpec annotations controls the lifecycle of the nodes they define. If the spec is present at the class level (inherited), the corresponding nodes will be started/stopped according to @BeforeAll / @AfterAll JUnit 5 callbacks. Similarly, if the spec is present at the method level, each node's lifecycle will follow @BeforeEach / @AfterEach callbacks. An exception to this rule is when the spec has the shared property set to true, in which case the corresponding nodes will not be stopped until another test is executed that requests different node parameters (when that happens, the old nodes will be stopped and the new node(s) will be started before executing the new test). If the spec annotations are not present on the code element, no action is taken by the extensions, and storage / Stargate nodes may or may not be available to the test depending on what happened earlier in the test execution context.
Parameter injection works with any method where JUnit 5 supports parameter injection (e.g. constructors, @Test methods, @Before* methods) if the corresponding storage / Stargate nodes are available.
The integration tests use multiple loopback addresses, which you will need to create individually on macOS. We recommend persisting the network aliases using a RunAtLoad launch daemon, which macOS automatically loads on startup. For example:
Create a shell script:
sudo vim /Library/LaunchDaemons/com.ccm.lo0.alias.sh
Contents of the script:
#!/bin/sh
# create loopback addresses used by Stargate integration tests
# 127.0.0.2 - 127.0.0.11
for ((i=2;i<12;i++))
do
sudo /sbin/ifconfig lo0 alias 127.0.0.$i;
done
# 127.0.1.11 - 127.0.1.12
for ((i=11;i<13;i++))
do
sudo /sbin/ifconfig lo0 alias 127.0.1.$i;
done
# 127.0.2.1 - 127.0.2.60
for ((i=1;i<61;i++))
do
sudo /sbin/ifconfig lo0 alias 127.0.2.$i;
done
sudo /sbin/ifconfig lo0 alias 127.0.3.1;
Set access of the script:
sudo chmod 755 /Library/LaunchDaemons/com.ccm.lo0.alias.sh
Create a plist to launch the script:
sudo vim /Library/LaunchDaemons/com.ccm.lo0.alias.plist
Contents of the plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.ccm.lo0.alias</string>
<key>RunAtLoad</key>
<true/>
<key>ProgramArguments</key>
<array>
<string>/Library/LaunchDaemons/com.ccm.lo0.alias.sh</string>
</array>
<key>StandardErrorPath</key>
<string>/var/log/loopback-alias.log</string>
<key>StandardOutPath</key>
<string>/var/log/loopback-alias.log</string>
</dict>
</plist>
Set access of the plist:
sudo chmod 0644 /Library/LaunchDaemons/com.ccm.lo0.alias.plist
sudo chown root:staff /Library/LaunchDaemons/com.ccm.lo0.alias.plist
Launch the daemon now; macOS will automatically reload it on startup.
sudo launchctl load /Library/LaunchDaemons/com.ccm.lo0.alias.plist
Verify you can ping 127.0.0.2 and 127.0.0.3, etc.
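For reference, the launch-daemon script above creates 73 aliases in total. A quick way to print the same list (for eyeballing, or for scripting a full ping check) without touching ifconfig; the list_aliases helper is illustrative and simply mirrors the ranges in the script:

```shell
# Mirror of the alias ranges created by com.ccm.lo0.alias.sh
list_aliases() {
  for i in $(seq 2 11);  do echo "127.0.0.$i"; done
  for i in $(seq 11 12); do echo "127.0.1.$i"; done
  for i in $(seq 1 60);  do echo "127.0.2.$i"; done
  echo "127.0.3.1"
}
list_aliases | wc -l   # 73 addresses in total
```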
If you ever want to permanently kill the daemon, simply delete its plist from /Library/LaunchDaemons/.
To update the licenses-report.txt you'll need to install fossa-cli. Once you have that installed locally, run the following from the root stargate directory.
FOSSA_API_KEY=<TOKEN> fossa
FOSSA_API_KEY=<TOKEN> fossa report licenses > foo.txt
It's best to write the report to a temporary file and use your diff tool of choice to merge the two together since fossa-cli generates a ton of duplicates.
Finally, before committing your changes you'll want to clean up:
rm foo.txt .fossa.yml