
Docker


Custom Bridges · "Docker: From Beginner to Practice", Traditional Chinese edition (《Docker —— 從入門到實踐》正體中文版).

Assigning a real IP address to a Docker container – using pipework

By default a container's network sits behind NAT, so for the outside world to reach a port inside the container you must start it with the -p flag to forward that port onto the HOST. If several containers all serve on port 80, each has to be forwarded to a different port on the HOST, and in production we don't want visitors to have to append a port number such as :8080 to the URL. In that case, you want the container to have a reachable IP address of its own, much like a VirtualBox guest OS. At first I followed the Docker documentation: I created a bridge interface on the HOST, linked it to the physical NIC, and then swapped Docker's default bridge for this newly configured one. The container did get an externally reachable IP address, but the setup is rather tedious and becomes hard to manage once there are many containers… [TIL] Learning note about Docker Swarm Mode. Swarm Mode is specific to Docker Swarm version 2 and is only available from Docker 1.12 onward.
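The port-forwarding pain and the pipework alternative described above look like this in practice. A sketch only: the container names, host interface, and addresses are illustrative, and pipework refers to jpetazzo's pipework script:

```shell
# Two NAT'd containers cannot both claim port 80 on the HOST, so the
# second one ends up reachable only at :8080.
docker run -d --name web1 -p 80:80   nginx
docker run -d --name web2 -p 8080:80 nginx

# With pipework, a container can instead be handed its own LAN address
# (syntax: host interface, container, IP/netmask@gateway; all illustrative):
pipework eth0 web2 192.168.1.50/24@192.168.1.1
```

After the pipework line, web2 answers directly on 192.168.1.50:80 with no HOST port mapping at all.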

[TIL] Learning note about Docker Swarm Mode
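Swarm mode is built into the Docker engine starting with 1.12, so enabling it takes only a couple of commands. A minimal sketch; the IP address and join token are placeholders:

```shell
# On the manager node, initialize a new swarm (IP is illustrative).
docker swarm init --advertise-addr 192.168.99.100

# "swarm init" prints a join command with a token; on each worker node,
# run something of this shape (token and address are placeholders):
docker swarm join --token <worker-token> 192.168.99.100:2377

# Back on the manager, verify that all nodes have joined.
docker node ls
```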

It is a cluster-management system for Docker.

GitHub - wongnai/docker-spark-standalone: Spark Standalone (on Docker). GitHub - SingularitiesCR/spark-docker: Apache Spark Docker Image. Big-data-europe/docker-spark: Apache Spark docker image. Wongnai/spark-standalone - Docker Hub.

Running Apache Spark 2.0 on Docker · Spark examples

spark-master:
  image: spark-2
  command: bin/spark-class org.apache.spark.deploy.master.Master -h spark-master
  hostname: spark-master
  environment:
    MASTER:
    SPARK_CONF_DIR: /conf
    SPARK_PUBLIC_DNS: 127.0.0.1
  expose:
    - 7001
    - 7002
    - 7003
    - 7004
    - 7005
    - 7006
    - 7077
    - 6066
  ports:
    - 4040:4040
    - 6066:6066
    - 7077:7077
    - 8080:8080
  volumes:
    - ./conf/spark-master:/conf
    - ./data:/tmp/data

spark-worker-1:
  image: spark-2
  command: bin/spark-class org.apache.spark.deploy.worker.Worker
  hostname: spark-worker-1
  environment:
    SPARK_CONF_DIR: /conf
    SPARK_PUBLIC_DNS: 127.0.0.1
    SPARK_WORKER_CORES: 2
    SPARK_WORKER_MEMORY: 2g
    SPARK_WORKER_PORT: 8881
    SPARK_WORKER_WEBUI_PORT: 8081
  links:
    - spark-master
  expose:
    - 7012
    - 7013
    - 7014
    - 7015
    - 7016
    - 8881
  ports:
    - 8081:8081
  volumes:
    - ./conf/spark-worker-1:/conf
    - ./data:/tmp/data

Kitematic. Blog.docker. Docker - Build, Ship, and Run Any App, Anywhere. 用docker搭建spark集群 (Building a Spark cluster with Docker). Cloudera/clusterdock - Docker Hub. To enable a multi-node cluster deployment on the same Docker host (as requested by CDH users for testing and self-learning), we have created a CDH topology for Apache HBase's clusterdock framework, a simple, Python-based library designed to orchestrate multi-node cluster deployments on a single host.
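Assuming the spark-master/spark-worker-1 compose services shown under "Running Apache Spark 2.0 on Docker" are saved as docker-compose.yml and a spark-2 image is available, bringing the cluster up would look roughly like this. The SparkPi class and jar path are assumptions about a stock Spark 2.0 layout inside the image:

```shell
# Start the master and one worker in the background.
docker-compose up -d

# The master web UI is published on the HOST at :8080 and the
# worker UI at :8081, per the ports: mappings above.

# Submit the bundled SparkPi example from inside the master container
# (jar path assumes a standard Spark 2.0 distribution in the image).
docker-compose exec spark-master bin/spark-submit \
  --master spark://spark-master:7077 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.0.0.jar 100
```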

cloudera/clusterdock - Docker Hub

Unlike existing tools like Docker Compose, which are great at managing microservice architectures, clusterdock orchestrates multiple containers to act more like traditional hosts. In this paradigm, a four-node Apache Hadoop cluster uses four containers. Inside Cloudera, we’ve found it to be a great tool for testing and prototyping (but not intended nor supported for production use). To begin, install Docker on your host. Older versions of Docker lack the embedded DNS server and correct reverse hostname lookup required by Cloudera Manager, so ensure you’re running Docker 1.11.0 or newer.
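Getting started then follows this general shape. This is a sketch only: the clusterdock.sh bootstrap URL, the clusterdock_run/clusterdock_ssh helpers, and the node flags are recalled from Cloudera's clusterdock write-up and should be checked against the current documentation:

```shell
# Load the clusterdock helper functions into the current shell.
source /dev/stdin <<< "$(curl -sL http://tiny.cloudera.com/clusterdock.sh)"

# Start a four-node CDH cluster on this single Docker host
# (node names and flags are illustrative).
clusterdock_run ./bin/start_cluster cdh \
    --primary-node=node-1 --secondary-nodes='node-{2..4}'

# Open a shell on one of the cluster "nodes".
clusterdock_ssh node-1
```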