The current way to deploy ByConity to physical servers is via a Docker wrapper. The Docker wrapper image can be upgraded by following this guide. Please follow the steps below:

  • Deploy FoundationDB. You can refer to the installation guide here. After this step you will have an FDB cluster config file, located by default at /etc/foundationdb/fdb.cluster. Copy this file to ./config/fdb.cluster.
  • Deploy an HDFS cluster consisting of a name node and data nodes, and create the directory /user/clickhouse in HDFS to store data. You can refer to the installation guide here. After this step, you will have the name node URL, which is usually the value of fs.defaultFS found in the core-site.xml config.
  • Update the file config/cnch_config.xml (see the config sketch after this list):
    • Update the value of the hdfs_nnproxy tag to your HDFS name node URL.
    • Update the host tag inside the service_discovery tags with the appropriate address, usually the IP address of the machine where you plan to install ByConity.
  • Execute make image_pull to pull the Docker image to your local machine.
  • On the machine where you plan to deploy TSO, execute ./run.sh tso to run the TSO service. In the same way, go to each machine where you plan to run another component and execute the command below to start it (a condensed command walkthrough is given after this list).
  • Execute ./run.sh server on one machine to run the server.
  • Execute ./run.sh read_worker on one machine to run a read worker. You can have many workers by repeating this command on different machines. Add the workers' information to the service_discovery tags in config/cnch_config.xml so that the server knows about them.
  • Execute ./run.sh write_worker on one machine to run a write worker.
  • Execute ./run.sh dm on one machine to run the daemon manager. TSO and DM are lightweight services and can run on the same machine as a server or worker for resource efficiency.
  • Execute ./run.sh cli on the machine that runs the server, or ./run.sh cli2 {server_address} from any machine, to connect to the server using the clickhouse-cli interface.
  • To stop any component, execute ./run.sh stop {component_name}, where component_name can be tso, server, and so on. After stopping, if you want to run the component again, use ./run.sh start {component_name}; otherwise you will get an error from Docker about the container name already being in use (see the stop/start example after this list).
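
The fragment below sketches the two config edits described above. It is illustrative only: hdfs_nnproxy, service_discovery, and host are the tags named in the steps, while the surrounding layout and the example addresses are placeholders, so edit the values in the shipped config/cnch_config.xml rather than copying this fragment verbatim.

```xml
<!-- Illustrative sketch only: example addresses are placeholders and the
     surrounding layout should follow the shipped config/cnch_config.xml. -->

<!-- Point ByConity at your HDFS name node (usually the fs.defaultFS value
     from core-site.xml). -->
<hdfs_nnproxy>hdfs://10.0.0.10:9000</hdfs_nnproxy>

<!-- Inside service_discovery, set each host to the address of the machine
     where that component (server, tso, workers, dm) runs. -->
<service_discovery>
    ...
    <host>10.0.0.11</host>
    ...
</service_discovery>
```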
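
Putting the steps together, a condensed command walkthrough might look like the sketch below. It assumes the FoundationDB and HDFS clusters from the first two steps are already running, that each ./run.sh command is executed on the machine chosen for that component, and that the paths shown are the defaults mentioned above.

```bash
# On the machine where FoundationDB was installed: copy its cluster file
# into the wrapper's config directory.
cp /etc/foundationdb/fdb.cluster ./config/fdb.cluster

# On the HDFS name node: create the directory ByConity uses to store data.
hdfs dfs -mkdir -p /user/clickhouse

# On every ByConity machine: pull the Docker image locally.
make image_pull

# Start each component on its designated machine.
./run.sh tso           # timestamp oracle
./run.sh server        # server
./run.sh read_worker   # repeat on more machines for extra read workers
./run.sh write_worker  # write worker
./run.sh dm            # daemon manager (can share a machine with server/worker)
```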
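
Connecting a client and stopping or restarting a component follows the last two steps directly; the server address and component name below are placeholders.

```bash
# Connect with the CLI: locally on the server machine, or remotely by address.
./run.sh cli
./run.sh cli2 10.0.0.11   # placeholder server address

# Stop a component, then restart the existing container later with `start`
# (re-running `./run.sh server` would fail because the container name is
# already in use).
./run.sh stop server
./run.sh start server
```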