docs: fix spelling (#15996)
jbampton committed May 15, 2024
1 parent dc306bf commit 8d6e9ee
Showing 7 changed files with 12 additions and 12 deletions.
4 changes: 2 additions & 2 deletions docs/docs/en/guide/resource/configuration.md
@@ -2,8 +2,8 @@
 
 - You could use `Resource Center` to upload text files, UDFs and other task-related files.
 - You could configure `Resource Center` to use distributed file system like [Hadoop](https://hadoop.apache.org/docs/r2.7.0/) (2.6+), [MinIO](https://github.com/minio/minio) cluster or remote storage products like [AWS S3](https://aws.amazon.com/s3/), [Alibaba Cloud OSS](https://www.aliyun.com/product/oss), [Huawei Cloud OBS](https://support.huaweicloud.com/obs/index.html) etc.
-- You could configure `Resource Center` to use local file system. If you deploy `DolphinScheduler` in `Standalone` mode, you could configure it to use local file system for `Resouce Center` without the need of an external `HDFS` system or `S3`.
-- Furthermore, if you deploy `DolphinScheduler` in `Cluster` mode, you could use [S3FS-FUSE](https://github.com/s3fs-fuse/s3fs-fuse) to mount `S3` or [JINDO-FUSE](https://help.aliyun.com/document_detail/187410.html) to mount `OSS` to your machines and use the local file system for `Resouce Center`. In this way, you could operate remote files as if on your local machines.
+- You could configure `Resource Center` to use local file system. If you deploy `DolphinScheduler` in `Standalone` mode, you could configure it to use local file system for `Resource Center` without the need of an external `HDFS` system or `S3`.
+- Furthermore, if you deploy `DolphinScheduler` in `Cluster` mode, you could use [S3FS-FUSE](https://github.com/s3fs-fuse/s3fs-fuse) to mount `S3` or [JINDO-FUSE](https://help.aliyun.com/document_detail/187410.html) to mount `OSS` to your machines and use the local file system for `Resource Center`. In this way, you could operate remote files as if on your local machines.
 
 ## Use Local File System
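The Cluster-mode mounting approach mentioned in this file's diff can be sketched with s3fs-fuse. This is only an illustration: the bucket name, mount point, and credential-file path below are assumptions, not values from the DolphinScheduler docs.

```shell
# Sketch: mount an S3 bucket so Resource Center can treat it as a local
# directory. Bucket "ds-resources" and both paths are hypothetical.
BUCKET=ds-resources
MOUNT_POINT=/mnt/ds-resources

if command -v s3fs >/dev/null 2>&1; then
  mkdir -p "$MOUNT_POINT"
  # s3fs reads credentials from a passwd file (ACCESS_KEY:SECRET_KEY, mode 600)
  s3fs "$BUCKET" "$MOUNT_POINT" -o passwd_file="$HOME/.passwd-s3fs"
  echo "mounted $BUCKET at $MOUNT_POINT"
else
  echo "s3fs-fuse not installed; see https://github.com/s3fs-fuse/s3fs-fuse"
fi
```

Once mounted, the Resource Center's local-file-system configuration can simply point at the mount point.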
2 changes: 1 addition & 1 deletion docs/docs/en/guide/start/docker.md
@@ -128,7 +128,7 @@ and use `admin` and `dolphinscheduler123` as default username and password in th
 ![login](../../../../img/new_ui/dev/quick-start/login.png)
 
 > Note: If you start the services by the way [using exists PostgreSQL ZooKeeper](#using-exists-postgresql-zookeeper), and
-> strating with multiple machine, you should change URL domain from `localhost` to IP or hostname the api server running.
+> starting with multiple machine, you should change URL domain from `localhost` to IP or hostname the api server running.
 
 ## Change Environment Variable
6 changes: 3 additions & 3 deletions docs/docs/en/guide/task/datafactory.md
@@ -19,11 +19,11 @@ DolphinScheduler DataFactory functions:
 
 ### Application Permission Setting
 
-First, visit the `Subcription` page and choose `Access control (IAM)`, then click `Add role assignment` to the authorization page.
-![Subcription-IAM](../../../../img/tasks/demo/datafactory_auth1.png)
+First, visit the `Subscription` page and choose `Access control (IAM)`, then click `Add role assignment` to the authorization page.
+![Subscription-IAM](../../../../img/tasks/demo/datafactory_auth1.png)
 After that, select `Contributor` role which satisfy functions calls in data factory. Then click `Members` page, and click `Select members`.
 Search application name or application `Object ID` to assign `Contributor` role to application.
-![Subcription-Role](../../../../img/tasks/demo/datafactory_auth2.png)
+![Subscription-Role](../../../../img/tasks/demo/datafactory_auth2.png)
 
 ## Configurations
2 changes: 1 addition & 1 deletion docs/docs/en/guide/task/kubernetes.md
@@ -26,7 +26,7 @@ K8S task type used to execute a batch task. In this task, the worker submits the
 | Command | The container execution command (yaml-style array), for example: ["printenv"] |
 | Args | The args of execution command (yaml-style array), for example: ["HOSTNAME", "KUBERNETES_PORT"] |
 | Custom label | The customized labels for k8s Job. |
-| Node selector | The label selectors for running k8s pod. Different value in value set should be seperated by comma, for example: `value1,value2`. You can refer to https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/node-selector-requirement/ for configuration of different operators. |
+| Node selector | The label selectors for running k8s pod. Different value in value set should be separated by comma, for example: `value1,value2`. You can refer to https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/node-selector-requirement/ for configuration of different operators. |
 | Custom parameter | It is a local user-defined parameter for K8S task, these params will pass to container as environment variables. |
 
 ## Task Example
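The node-selector format in this file's parameter table corresponds to a Kubernetes `nodeSelectorRequirement`. As a sketch, the `value1,value2` example from the table expands to a requirement like the following (the label key `disktype` is hypothetical; `key`, `operator`, and `values` are the fields defined by the linked Kubernetes API):

```yaml
# One node-selector requirement; comma-separated values become the list below
key: disktype
operator: In
values:
  - value1
  - value2
```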
2 changes: 1 addition & 1 deletion docs/docs/en/guide/task/mlflow.md
@@ -148,7 +148,7 @@ After this, you can visit the MLflow service (`http://localhost:5000`) page to v
 
 ### Preset Algorithm Repository Configuration
 
-If you can't access github, you can modify the following fields in the `commom.properties` configuration file to replace the github address with an accessible address.
+If you can't access github, you can modify the following fields in the `common.properties` configuration file to replace the github address with an accessible address.
 
 ```yaml
 # mlflow task plugin preset repository
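For context, the `common.properties` fields this hunk refers to look roughly like the fragment below. The key names are drawn from recent DolphinScheduler releases but should be verified against your own `common.properties`; the idea is to swap the GitHub URL for a reachable mirror.

```yaml
# mlflow task plugin preset repository
ml.mlflow.preset_repository=https://github.com/apache/dolphinscheduler-mlflow
# mlflow task plugin preset repository version
ml.mlflow.preset_repository_version="main"
```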
6 changes: 3 additions & 3 deletions docs/docs/zh/guide/task/datafactory.md
@@ -19,10 +19,10 @@ DolphinScheduler DataFactory 组件的功能:
 
 ### 应用权限设置
 
-首先打开当前`Subcription`页面,点击`Access control (IAM)`,再点击`Add role assignment`进入授权页面。
-![Subcription-IAM](../../../../img/tasks/demo/datafactory_auth1.png)
+首先打开当前`Subscription`页面,点击`Access control (IAM)`,再点击`Add role assignment`进入授权页面。
+![Subscription-IAM](../../../../img/tasks/demo/datafactory_auth1.png)
 首先选择`Contributor`角色足够满足调用数据工厂。然后选择`Members`页面,再选择`Select members`,检索APP名称或APP的`Object ID`并添加,从给指定APP添加权限.
-![Subcription-Role](../../../../img/tasks/demo/datafactory_auth2.png)
+![Subscription-Role](../../../../img/tasks/demo/datafactory_auth2.png)
 
 ## 环境配置
2 changes: 1 addition & 1 deletion docs/docs/zh/guide/task/mlflow.md
@@ -139,7 +139,7 @@ mlflow server -h 0.0.0.0 -p 5000 --serve-artifacts --backend-store-uri sqlite://
 
 ### 内置算法仓库配置
 
-如果遇到github无法访问的情况,可以修改`commom.properties`配置文件的以下字段,将github地址替换能访问的地址。
+如果遇到github无法访问的情况,可以修改`common.properties`配置文件的以下字段,将github地址替换能访问的地址。
 
 ```yaml
 # mlflow task plugin preset repository