Merge pull request #256 from dragonchain/master
Release 4.2.0
cheeseandcereal committed Nov 20, 2019
2 parents 2f5d245 + 2be318c commit 4dc93ec
Showing 65 changed files with 1,535 additions and 249 deletions.
2 changes: 1 addition & 1 deletion .version
@@ -1 +1 @@
-4.1.0
+4.2.0
2 changes: 1 addition & 1 deletion .vscode/settings.json
@@ -9,7 +9,7 @@
"-l",
"150",
"-t",
"py37"
"py38"
],
"editor.formatOnSave": true,
"restructuredtext.confPath": "${workspaceFolder}/docs"
27 changes: 27 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,32 @@
# Changelog

+## 4.2.0
+
+- **Feature:**
+  - Add performance improvements when creating transactions and processing L1 blocks
+  - Add interchain support for Binance
+- **Bugs:**
+  - Change L5 block redisearch insert to upsert to prevent an occasional edge-case error which could cause an L5 to get stuck
+  - Don't require tail to be explicitly provided when requesting smart contract logs
+  - Fix a bug where L2+ chains could have the transaction processor go into a failure loop if a block failed to write to storage at some point
+  - Fix a bug where Ethereum L5 nodes could estimate a gas price of 0 for low-activity networks
+  - Fix a bug where an open-source chain couldn't build smart contracts due to a bad environment variable
+  - Fix a bug where a chain could infinitely retry to connect to Dragon Net
+  - Fix a bug with storage deletion using the disk storage interface which could cause unexpected failures
+  - Fix a bug with private docker registry delete when deleting smart contracts
+  - Fix a bug with smart contract heap get where prepending an extra '/' could give bad results
+  - Fix a bug where a smart contract key wouldn't get properly cleaned up on smart contract delete
+  - Fix a bug when updating/deleting a smart contract where Dragonchain could remove a docker image still being used by other contracts
+  - Fix a bug where updating a smart contract with the same image tag wouldn't always pull the latest version
+- **Packaging:**
+  - Update redisearch, boto3, apscheduler, web3, and gunicorn dependencies
+  - Add bnb-tx, pycoin, and mnemonic dependencies for Binance
+  - Add `binutils` and `musl-dev` alpine dependencies in Docker container temporarily [for gunicorn 20.0.0](https://github.com/benoitc/gunicorn/issues/2160)
+- **Development:**
+  - Revert manual redisearch fixes with dependency fixes
+  - Change the way that transaction 404 stubbing is handled for pending transactions
+  - Update to python 3.8
+
## 4.1.0

Note this update adds the invoker tag field for indexing smart contract
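The L5 redisearch insert-to-upsert fix listed in this changelog is easiest to see with a small sketch. The snippet below is a minimal illustration using the redisearch-py client this release depends on; the index name, document id scheme, and fields are assumptions for the example, not the actual Dragonchain schema.

```python
# Minimal sketch of an insert-to-upsert change with redisearch-py.
# Index and field names are hypothetical, not Dragonchain's real schema.
from redisearch import Client

client = Client("l5-block-index")  # connects to localhost:6379 by default


def store_l5_block_record(block_id: str, fields: dict) -> None:
    # Without replace=True, FT.ADD fails with "Document already exists" if
    # this record was written once before (e.g. on a retried broadcast),
    # which is the edge case that could leave an L5 stuck.
    # replace=True turns the insert into an upsert.
    client.add_document(f"block-{block_id}", replace=True, **fields)
```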
7 changes: 4 additions & 3 deletions Dockerfile
@@ -1,8 +1,9 @@
-FROM python:3.7-alpine as base
+FROM python:3.8-alpine as base

WORKDIR /usr/src/core
# Install necessary base dependencies and set UTC timezone for apscheduler
-RUN apk --no-cache upgrade && apk --no-cache add libffi libstdc++ gmp && echo "UTC" > /etc/timezone
+RUN apk --no-cache upgrade && apk --no-cache add libffi libstdc++ gmp && echo "UTC" > /etc/timezone && apk --no-cache add binutils musl-dev
+# apk --no-cache add binutils musl-dev is required for gunicorn 20.0.0 until https://github.com/benoitc/gunicorn/issues/2160 is fixed

FROM base AS builder
# Install build dependencies
@@ -15,7 +16,7 @@ RUN python3 -m pip install -r requirements.txt

FROM base AS release
# Copy the installed python dependencies from the builder
-COPY --from=builder /usr/local/lib/python3.7/site-packages /usr/local/lib/python3.7/site-packages
+COPY --from=builder /usr/local/lib/python3.8/site-packages /usr/local/lib/python3.8/site-packages
COPY --from=builder /usr/local/bin/gunicorn /usr/local/bin/gunicorn
# Copy our actual application
COPY --chown=1000:1000 . .
6 changes: 3 additions & 3 deletions README.md
@@ -1,5 +1,5 @@
<div align="center">
-  <img width=300px height=300px src="https://dragonchain.com/static/media/dragonchain-logo-treasure.png" alt="Dragonchain Logo">
+  <img width=300px height=300px src="https://dragonchain-assets.s3.amazonaws.com/jojo.png" alt="Dragonchain Logo">

# Dragonchain

@@ -56,7 +56,7 @@ can do.

In order to develop locally you should be able to run `./tools.sh full-test` and have all checks pass. For this, a few requirements should be met:

-1. Ensure that you have python 3.7 installed locally
+1. Ensure that you have python 3.8 installed locally
1. Install OS dependencies for building various python package dependencies:
- On an arch linux system (with pacman): `./tools.sh arch-install`
- On a debian-based linux system (with apt): `./tools.sh deb-install` (Note on newer Ubuntu installations
@@ -76,7 +76,7 @@ to be separated from the rest of the (potentially conflicting) packages from the

In order to do this, instead of step 3 above, perform the following steps:

-1. Ensure you have python venv installed, and run `python3.7 -m venv .venv`
+1. Ensure you have python venv installed, and run `python3.8 -m venv .venv`
1. Activate the virtual environment in your shell by running `source .venv/bin/activate`
1. Upgrade the setup dependencies for the virtual environment: `pip install -U pip setuptools`
1. Install the core dependencies: `pip install -r requirements.txt`
5 changes: 3 additions & 2 deletions cicd/Dockerfile.dependencies
@@ -1,5 +1,5 @@
# This container is used as a base by Dockerfile.test in order to speed up dependency install for testing purposes only
-FROM python:3.7-alpine
+FROM python:3.8-alpine

# Install helm for linting chart, and yq for building docs
RUN wget -O helm-v2.14.3-linux-amd64.tar.gz 'https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz' && \
@@ -9,7 +9,8 @@ RUN wget -O helm-v2.14.3-linux-amd64.tar.gz 'https://get.helm.sh/helm-v2.14.3-li
chmod +x yq && mv yq /usr/local/bin/

# Install dev build dependencies
-RUN apk upgrade && apk add g++ make gmp-dev libffi-dev automake autoconf libtool && echo "UTC" > /etc/timezone
+RUN apk --no-cache upgrade && apk --no-cache add g++ make gmp-dev libffi-dev automake autoconf libtool && echo "UTC" > /etc/timezone && apk --no-cache add binutils musl-dev
+# apk --no-cache add binutils musl-dev is required for gunicorn 20.0.0 until https://github.com/benoitc/gunicorn/issues/2160 is fixed
# Install python dev dependencies
ENV SECP_BUNDLED_EXPERIMENTAL 1
ENV SECP_BUNDLED_WITH_BIGNUM 1
7 changes: 4 additions & 3 deletions cicd/Dockerfile.test
@@ -1,4 +1,4 @@
-# Change FROM to python:3.7-alpine to test without the dependencies container
+# Change FROM to python:3.8-alpine to test without the dependencies container
FROM dragonchain/dragonchain_core_dependencies:latest as base

# Install Helm for chart linting and/or yq for doc builds if it doesn't exist
@@ -13,7 +13,8 @@ RUN if ! command -v helm; then \

WORKDIR /usr/src/core
# Install necessary base dependencies and set UTC timezone for apscheduler
-RUN apk --no-cache upgrade && apk --no-cache add libffi libstdc++ gmp && echo "UTC" > /etc/timezone
+RUN apk --no-cache upgrade && apk --no-cache add libffi libstdc++ gmp && echo "UTC" > /etc/timezone && apk --no-cache add binutils musl-dev
+# apk --no-cache add binutils musl-dev is required for gunicorn 20.0.0 until https://github.com/benoitc/gunicorn/issues/2160 is fixed

FROM base AS builder
# Install dev build dependencies
@@ -28,7 +29,7 @@ RUN python3 -m pip install --upgrade -r dev_requirements.txt

FROM base AS release
# Copy the installed python dependencies from the builder
-COPY --from=builder /usr/local/lib/python3.7/site-packages /usr/local/lib/python3.7/site-packages
+COPY --from=builder /usr/local/lib/python3.8/site-packages /usr/local/lib/python3.8/site-packages
# Sphinx is needed to build the docs
COPY --from=builder /usr/local/bin/sphinx-build /usr/local/bin/sphinx-build
# Copy our actual application
2 changes: 1 addition & 1 deletion docs/deployment/deploying.md
@@ -72,7 +72,7 @@ Once the values are set, install the helm chart with:
```sh
helm repo add dragonchain https://dragonchain-charts.s3.amazonaws.com
helm repo update
-helm upgrade --install my-dragonchain --values opensource-config.yaml --namespace dragonchain dragonchain/dragonchain-k8s --version 1.0.2
+helm upgrade --install my-dragonchain --values opensource-config.yaml --namespace dragonchain dragonchain/dragonchain-k8s --version 1.0.3
```

If you need to change any values AFTER the helm chart has already been
7 changes: 3 additions & 4 deletions dragonchain/broadcast_processor/broadcast_functions_utest.py
@@ -25,11 +25,10 @@
from dragonchain import exceptions


-def async_test(function):
+def async_test(coro):
    def wrapper(*args, **kwargs):
-        coro = asyncio.coroutine(function)
-        future = coro(*args, **kwargs)
-        asyncio.get_event_loop().run_until_complete(future)
+        loop = asyncio.get_event_loop()
+        return loop.run_until_complete(coro(*args, **kwargs))

    return wrapper

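For context on the change above: `asyncio.coroutine`, which the old helper relied on, is deprecated as of Python 3.8 (the version this release moves to), so the decorator now takes an `async def` test directly. A self-contained usage sketch, with a hypothetical test class:

```python
import asyncio
import unittest


def async_test(coro):
    # Wrap an async test method so unittest can call it synchronously.
    def wrapper(*args, **kwargs):
        loop = asyncio.get_event_loop()
        return loop.run_until_complete(coro(*args, **kwargs))

    return wrapper


class ExampleAsyncTest(unittest.TestCase):
    @async_test
    async def test_sleep_returns_none(self):
        # The coroutine runs to completion on the event loop, so normal
        # unittest assertions work inside the async body.
        self.assertIsNone(await asyncio.sleep(0))


if __name__ == "__main__":
    unittest.main()
```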
7 changes: 3 additions & 4 deletions dragonchain/broadcast_processor/broadcast_processor_utest.py
@@ -25,11 +25,10 @@
from dragonchain import exceptions


-def async_test(function):
+def async_test(coro):
    def wrapper(*args, **kwargs):
-        coro = asyncio.coroutine(function)
-        future = coro(*args, **kwargs)
-        asyncio.get_event_loop().run_until_complete(future)
+        loop = asyncio.get_event_loop()
+        return loop.run_until_complete(coro(*args, **kwargs))

    return wrapper

6 changes: 3 additions & 3 deletions dragonchain/exceptions.py
@@ -151,11 +151,11 @@ class SanityCheckFailure(DragonchainException):
"""Exception raised when sanity check fails"""


-class RPCError(DragonchainException):
-    """Exception raise when RPC has an error"""
+class InterchainConnectionError(DragonchainException):
+    """Exception raised when an RPC / API call has an error"""


-class RPCTransactionNotFound(DragonchainException):
+class TransactionNotFound(DragonchainException):
    """Exception raised when a transaction is not found on an interchain network"""


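A hypothetical call site showing how the renamed exceptions read in practice; the `client` object and its `get_transaction` method are illustrative assumptions, not part of this commit:

```python
from dragonchain import exceptions


def fetch_interchain_transaction(client, txn_hash: str):
    try:
        return client.get_transaction(txn_hash)
    except exceptions.TransactionNotFound:
        return None  # not (yet) found on the interchain network
    except exceptions.InterchainConnectionError:
        raise  # RPC / API failure; let the caller decide whether to retry
```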
28 changes: 12 additions & 16 deletions dragonchain/job_processor/contract_job.py
@@ -19,6 +19,8 @@
import os
import base64
import copy
+import string
+import secrets
from typing import cast

import requests
@@ -40,7 +42,6 @@

EVENT = os.environ["EVENT"]
STAGE = os.environ["STAGE"]
-IAM_ROLE = os.environ["IAM_ROLE"]
FAAS_GATEWAY = os.environ["FAAS_GATEWAY"]
FAAS_REGISTRY = os.environ["FAAS_REGISTRY"]
INTERNAL_ID = os.environ["INTERNAL_ID"]
@@ -142,11 +143,12 @@ def create_dockerfile(self) -> str:
        with open(template_path) as file:
            template = file.read()

-        # Interpolate with base image name
+        entropy = "".join(secrets.choice(string.ascii_letters) for _ in range(43))
+        # Interpolate with base image name and random entropy
        if self.update_model:
-            dockerfile = template.format(customerBaseImage=self.update_model.image)
+            dockerfile = template.format(customerBaseImage=self.update_model.image, entropy=entropy)
        else:
-            dockerfile = template.format(customerBaseImage=self.model.image)
+            dockerfile = template.format(customerBaseImage=self.model.image, entropy=entropy)

        # Save as Dockerfile and return context directory
        with open(dockerfile_path, "w") as file:
@@ -224,7 +226,7 @@ def pull_image(self, image_name: str) -> docker.models.images.Image:
        return image

    def delete_contract_image(self, image_digest: str) -> None:
-        _log.info("Deleting contract image")
+        _log.info(f"Deleting contract image {image_digest}")
        try:
            registry_interface.delete_image(repository="customer-contracts", image_digest=image_digest)
        except Exception:
@@ -260,16 +262,14 @@ def build_contract_image(self) -> None:
        try:
            dockerfile_path = self.create_dockerfile()
            _log.info(f"Building OpenFaaS image {self.faas_image}")
-            self.docker.images.build(path=dockerfile_path, tag=self.faas_image, rm=True, timeout=30)
+            self.docker.images.build(path=dockerfile_path, tag=self.faas_image, rm=True, timeout=45, pull=True)
        except (docker.errors.APIError, docker.errors.BuildError):
            _log.exception("Docker error")
            self.model.set_state(state=self.end_error_state, msg="Docker build error")
            raise exceptions.BadImageError("Docker build error")

-        _log.info(f"Pushing to ECR {self.faas_image}")
+        _log.info(f"Pushing to docker registry {self.faas_image}")
        try:
-            # For on prem, the auth will need an abstraction layer so the customer can maintain a private registry for their contracts
-            # For now, we default to using ECR auth. This works in minikube as the localhost:5000 registry is unauthenticated, so auth_config gets ignored.
            self.docker.images.push(f"{FAAS_REGISTRY}/customer-contracts", tag=self.model.id, auth_config=registry_interface.get_login())
            image = self.docker.images.get(self.faas_image)
            _log.debug(f"Built image attrs: {image.attrs}")
@@ -368,17 +368,13 @@ def delete_openfaas_function(self) -> None:
            _log.info("OpenFaaS delete failure")

    def delete_contract_data(self) -> None:
-        """Remove all stored information on this smart contract
-        Returns:
-            None
-        """
-        _log.info("Deleting contract data")
+        """Remove all stored information on this smart contract"""
        try:
+            _log.info(f"Deleting contract data for contract {self.model.id}")
            storage.delete_directory(f"SMARTCONTRACT/{self.model.id}")
            _log.info("Removing index")
            smart_contract_dao.remove_smart_contract_index(self.model.id)
-            _log.info("Deleting txn type")
+            _log.info(f"Deleting txn type {self.model.txn_type}")
            transaction_type_dao.remove_existing_transaction_type(self.model.txn_type)
            key = f"KEYS/{self.model.auth_key_id}"
            _log.info(f"Deleting HMAC key {key}")
18 changes: 7 additions & 11 deletions dragonchain/job_processor/job_processor.py
@@ -32,12 +32,13 @@
INTERNAL_ID = os.environ["INTERNAL_ID"]
STAGE = os.environ["STAGE"]
REGISTRY = os.environ["REGISTRY"]
-IAM_ROLE = os.environ["IAM_ROLE"]
+IAM_ROLE = os.environ.get("IAM_ROLE")
NAMESPACE = os.environ["NAMESPACE"]
DEPLOYMENT_NAME = os.environ["DEPLOYMENT_NAME"]
STORAGE_TYPE = os.environ["STORAGE_TYPE"]
STORAGE_LOCATION = os.environ["STORAGE_LOCATION"]
SECRET_LOCATION = os.environ["SECRET_LOCATION"]
+DRAGONCHAIN_IMAGE = os.environ["DRAGONCHAIN_IMAGE"]

_log = logger.get_logger()
_kube: kubernetes.client.BatchV1Api = cast(kubernetes.client.BatchV1Api, None) # This will always be defined before starting by being set in start()
@@ -58,14 +59,6 @@ def start() -> None:
start_task()


-def get_image_name() -> str:
-    """Get the image name of this version of Dragonchain
-    Returns:
-        A string path to the image being used in this Dragonchain.
-    """
-    return f"{REGISTRY}/dragonchain_core:{STAGE}-{DRAGONCHAIN_VERSION}"


def get_job_name(contract_id: str) -> str:
"""Get the name of a kubernetes contract job
Args:
@@ -182,6 +175,9 @@ def attempt_job_launch(event: dict, retry: int = 0) -> None:
persistent_volume_claim=kubernetes.client.V1PersistentVolumeClaimVolumeSource(claim_name=f"{DEPLOYMENT_NAME}-main-storage"),
)
)
+    annotations = {}
+    if IAM_ROLE:
+        annotations["iam.amazonaws.com/role"] = IAM_ROLE

resp = _kube.create_namespaced_job(
namespace=NAMESPACE,
Expand All @@ -193,12 +189,12 @@ def attempt_job_launch(event: dict, retry: int = 0) -> None:
backoff_limit=1, # This is not respected in k8s v1.11 (https://github.com/kubernetes/kubernetes/issues/54870)
active_deadline_seconds=600,
template=kubernetes.client.V1PodTemplateSpec(
-            metadata=kubernetes.client.V1ObjectMeta(annotations={"iam.amazonaws.com/role": IAM_ROLE}, labels=get_job_labels(event)),
+            metadata=kubernetes.client.V1ObjectMeta(annotations=annotations, labels=get_job_labels(event)),
spec=kubernetes.client.V1PodSpec(
containers=[
kubernetes.client.V1Container(
name=get_job_name(event["id"]),
-                        image=get_image_name(),
+                        image=DRAGONCHAIN_IMAGE,
security_context=kubernetes.client.V1SecurityContext(privileged=True),
volume_mounts=volume_mounts,
command=["sh"],
9 changes: 2 additions & 7 deletions dragonchain/job_processor/job_processor_utest.py
@@ -41,9 +41,6 @@


class TestJobPoller(unittest.TestCase):
-    def test_get_image_name(self):
-        self.assertEqual(job_processor.get_image_name(), "/dragonchain_core:test-")

def test_get_job_name(self):
self.assertEqual(job_processor.get_job_name("my-id"), "contract-my-id")

@@ -197,14 +194,12 @@ def test_attempt_job_launch_launches_job_correctly(self, mock_kube):
backoff_limit=1,
active_deadline_seconds=600,
template=kubernetes.client.V1PodTemplateSpec(
-            metadata=kubernetes.client.V1ObjectMeta(
-                annotations={"iam.amazonaws.com/role": ""}, labels=job_processor.get_job_labels(valid_task_definition)
-            ),
+            metadata=kubernetes.client.V1ObjectMeta(annotations={}, labels=job_processor.get_job_labels(valid_task_definition)),
spec=kubernetes.client.V1PodSpec(
containers=[
kubernetes.client.V1Container(
name=job_processor.get_job_name(valid_task_definition["id"]),
-                        image=job_processor.get_image_name(),
+                        image=job_processor.DRAGONCHAIN_IMAGE,
security_context=kubernetes.client.V1SecurityContext(privileged=True),
volume_mounts=[
kubernetes.client.V1VolumeMount(name="dockersock", mount_path="/var/run/docker.sock"),
3 changes: 3 additions & 0 deletions dragonchain/job_processor/templates/dockerfile.template
@@ -16,3 +16,6 @@ ENTRYPOINT []
EXPOSE 8080
HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
CMD [ "/usr/bin/fwatchdog" ]
+
+# Add random entropy to container to guarantee that it has a unique digest hash
+ENV {entropy}=""
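A minimal sketch tying this template change to the `create_dockerfile()` change in contract_job.py above. The template string here is abbreviated to just the relevant lines:

```python
import secrets
import string

# Abbreviated template; only the ENV line mirrors the real dockerfile.template.
template = 'FROM {customerBaseImage}\nENV {entropy}=""\n'

entropy = "".join(secrets.choice(string.ascii_letters) for _ in range(43))
print(template.format(customerBaseImage="python:3.8-alpine", entropy=entropy))
# FROM python:3.8-alpine
# ENV <43 random letters>=""
```

Because the ENV key differs on every build, each contract image gets a unique digest, which is what prevents deleting one contract's image from removing a digest-identical image still used by another contract.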
11 changes: 11 additions & 0 deletions dragonchain/lib/crypto.py
@@ -374,7 +374,18 @@ def encrypt_message(encryption_type: SupportedEncryption, priv_key: Union["Priva
        sig_bytes = priv_key.ecdsa_serialize(priv_key.ecdsa_signature_normalize(priv_key.ecdsa_sign(msg=message_bytes, raw=True))[1])
    else:
        raise NotImplementedError("Unsupported encryption type")
    return base64.b64encode(sig_bytes).decode("ascii")


+def encrypt_secp256k1_message_compact(priv_key: "PrivateKey", message_bytes: bytes) -> str:
+    """Encrypt a 32-byte message (typically a hash, to use as a signature) in its compact form
+    Args:
+        priv_key: private key object of encryption type secp256k1
+        message_bytes: 32 byte python bytes object to encrypt
+    Returns:
+        Base 64 encoded signature string
+    """
+    sig_bytes = priv_key.ecdsa_serialize_compact(priv_key.ecdsa_signature_normalize(priv_key.ecdsa_sign(msg=message_bytes, raw=True))[1])
+    return base64.b64encode(sig_bytes).decode("ascii")


6 changes: 6 additions & 0 deletions dragonchain/lib/crypto_utest.py
@@ -799,6 +799,12 @@ def test_generic_signature(self):
        sig = crypto.make_generic_signature(secp256k1, sha256, key, content)
        self.assertTrue(crypto.check_generic_signature(secp256k1, sha256, key.pubkey, content, b64decode(sig)))

+    def test_encrypt_secp256k1_message_compact(self):
+        content = b"Ndj\x8e`tH\x06\x9f\xe2=\xb7\xc0\x85K\xcf4{;,Wn\x7fi\xba=A\x1b\xee<d\xbb"
+        encrypted = crypto.encrypt_secp256k1_message_compact(key, content)
+        expected_result = "ozzVHgADaCfxa2jO+nfwmpeaw836dKutXvtOVngkS6U2kimcFC3JRng4Dta0d+JMOSbEhZ8dX59epnaQmxrNHQ=="
+        self.assertEqual(encrypted, expected_result)

    def test_unsupported_crypto(self):
        l1block = make_l1_block()
        l1block.proof = "sig="
