
Docker


Kubernetes in Five Minutes – Michael Hsu. Although the Kubernetes architecture diagram looks complex at first glance, you really only need to recognize a few basic components. Cluster: the purple Master manages the orange Nodes beneath it.

Kubernetes in Five Minutes – Michael Hsu

Pod: a further layer of packaging around Containers. When you scale later, Pods are added and removed automatically; the Pod is the smallest unit of immutable deployment. Service: once Pods are running, you must define a Service before external users can reach them.

Install: the quickest and easiest way to learn is to install the single-node Minikube cluster on your own machine; it runs inside VirtualBox, so you can experiment freely.

# Install virtualbox
$ brew cask install minikube
$ brew install kubectl
$ minikube start
$ minikube dashboard

Deploying a Docker image: you can pull an image from Docker Hub into your cluster. Here the author demonstrates with a Docker image from one of his side projects; at the end you will see a new running item in the Deployment / Pod lists.

Running Docker containers on Bash on Windows - Jayway. When the Windows Subsystem for Linux (WSL) – or, as most people even at Microsoft often refer to it, Bash on Ubuntu on Windows – was announced at Microsoft's Build conference in 2016, a world of new tools opened up to us Windows devs.
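The Minikube walkthrough above can be sketched end to end. A sketch only: the deployment name demo and the image path are placeholders, since the excerpt doesn't name the author's side-project image.

```shell
# "demo" and <user>/<image> are placeholders; substitute your own names.
$ kubectl create deployment demo --image=<user>/<image>   # pull from Docker Hub into the cluster

# Pods are the unit that scales; Kubernetes adds/removes them to match:
$ kubectl scale deployment demo --replicas=3

# A Service must be defined before outside users can reach the Pods:
$ kubectl expose deployment demo --type=NodePort --port=80

# Minikube can resolve and open the Service's NodePort URL for you:
$ minikube service demo
```

After these steps, the new Deployment and its Pods show up in the dashboard lists the article describes.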

Running Docker containers on Bash on Windows - Jayway

Personally, I love being able to choose between PowerShell, Bash, or plain old cmd when I want to script something. And it's always bugged me that I couldn't get Docker working from Bash on Windows – until now. The original title of this post was "Running Docker from Bash on Windows", but that would have been a slight overstatement. Docker requires access to quite a lot of system calls, which aren't necessarily all implemented on Windows, so getting the engine running under the WSL is probably not so easy.

Instead, we'll run the Docker Engine on Windows, and connect to it from Bash. Here's how to do it: 1. To install the Docker engine on Windows, just go to docker.com and download the appropriate distribution.

Custom bridges · Docker — From Beginner to Practice, Traditional Chinese edition. Assigning a real IP address to a Container in Docker – using pipework. By default a Container's network sits behind NAT: for the outside world to reach a port inside the Container, you must pass -p when starting the Container to forward that port to the host. If several containers all use port 80 at the same time, each must be forwarded to a different host port, and in production we don't want visitors to have to append a port number such as :8080 to the URL.
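Once the engine is installed on the Windows side, connecting from Bash boils down to pointing the client at it. A minimal sketch, assuming the "expose daemon on tcp://localhost:2375 without TLS" option is enabled in the Docker for Windows settings (the port and setting name are that era's Docker for Windows defaults, not taken from the excerpt):

```shell
# Inside WSL, install only the Docker *client* (package name varies by distro),
# e.g.: sudo apt-get install docker.io

# Tell the client to talk to the engine running on the Windows side.
# 2375 is Docker's conventional unencrypted TCP port; this assumes the
# "expose daemon without TLS" setting is ticked in Docker for Windows.
export DOCKER_HOST="tcp://localhost:2375"

# Every docker command in Bash now runs against the Windows engine:
#   docker info
#   docker run --rm hello-world
```

Putting the export in your .bashrc makes the connection permanent for future shells.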

Assigning a real IP address to a Container in Docker – using pipework

At that point, you wish the Container had a directly reachable IP address, just like a VirtualBox guest OS. At first, following Docker's documentation, I created a bridge interface on the host, linked it to the physical NIC, and then swapped Docker's default bridge for this new one. That did give Containers a real external IP address, but the setup is rather long-winded, and with many containers it becomes hard to manage... A quick Google search turned up a shell script on GitHub called pipework that automates these fiddly steps – with a single command.

First download the whole pipework project and unzip it (wget …, then unzip master.zip -d …). A /pipework-master folder appears in your current directory, and the pipework script is already marked executable. This example combines the mysql:latest and wordpress:latest images. Start the containers exactly the way you always have – no changes needed.

[TIL] Learning note about Docker Swarm Mode. Start Docker Swarm Mode: Docker Swarm Mode refers to Docker Swarm version 2, which is only available from Docker 1.12 onwards.
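The pipework step itself really is one line. A sketch, assuming the script came from the jpetazzo/pipework repository and using illustrative values – the interface name, IP, and gateway below are examples, not taken from the article:

```shell
# Start the containers exactly as usual:
$ docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:latest
$ docker run -d --name blog --link db:mysql wordpress:latest

# pipework <host-interface> <container> <ip>/<prefix>[@gateway]
# puts the container directly on the host's physical network segment:
$ sudo ./pipework-master/pipework eth0 blog 192.168.1.50/24@192.168.1.1
```

After this, the wordpress container answers on 192.168.1.50:80 with no -p forwarding and no :8080 in the URL.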

[TIL] Learning note about Docker Swarm Mode

It is a cluster management system for Docker.

GitHub - wongnai/docker-spark-standalone: Spark Standalone (on Docker). GitHub - SingularitiesCR/spark-docker: Apache Spark Docker Image. big-data-europe/docker-spark: Apache Spark docker image. wongnai/spark-standalone - Docker Hub. Running Apache Spark 2.0 on Docker · Spark examples.

spark-master:
  image: spark-2
  command: bin/spark-class org.apache.spark.deploy.master.Master -h spark-master
  hostname: spark-master
  environment:
    MASTER:
    SPARK_CONF_DIR: /conf
    SPARK_PUBLIC_DNS: 127.0.0.1
  expose:
    - 7001
    - 7002
    - 7003
    - 7004
    - 7005
    - 7006
    - 7077
    - 6066
  ports:
    - 4040:4040
    - 6066:6066
    - 7077:7077
    - 8080:8080
  volumes:
    - .
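For the Swarm Mode note above, bootstrapping a cluster is only a couple of commands on Docker 1.12+. A sketch with placeholder addresses and token (none of these values come from the note):

```shell
# On the manager node (the IP is an example):
$ docker swarm init --advertise-addr 192.168.99.100
# ...prints a ready-made "docker swarm join --token SWMTKN-1-..." command.

# Run that printed command on each worker node:
$ docker swarm join --token SWMTKN-1-<token> 192.168.99.100:2377

# Back on the manager, verify the nodes and deploy a replicated service:
$ docker node ls
$ docker service create --name web --replicas 3 nginx
$ docker service ls
```

Swarm then schedules the three nginx replicas across the joined nodes and reschedules them if a node fails.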

Running Apache Spark 2.0 on Docker · Spark examples

Kitematic. Blog.docker. Docker - Build, Ship, and Run Any App, Anywhere. Building a Spark cluster with Docker. cloudera/clusterdock - Docker Hub. To enable a multi-node cluster deployment on the same Docker host (as requested by CDH users for testing and self-learning), we have created a CDH topology for Apache HBase's clusterdock framework, a simple, Python-based library designed to orchestrate multi-node cluster deployments on a single host.

cloudera/clusterdock - Docker Hub

Unlike existing tools like Docker Compose, which are great at managing microservice architectures, clusterdock orchestrates multiple containers to act more like traditional hosts. In this paradigm, a four-node Apache Hadoop cluster uses four containers. Inside Cloudera, we've found it to be a great tool for testing and prototyping (though it is neither intended nor supported for production use).

To begin, install Docker on your host. Older versions of Docker lack the embedded DNS server and correct reverse hostname lookup required by Cloudera Manager, so ensure you're running Docker 1.11.0 or newer. Then source the clusterdock helper script:

source /dev/stdin <<< "$(curl -sL …)"

With everything ready to go, let's get started! Basic Usage. Advanced Usage.
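The "Docker 1.11.0 or newer" requirement can be checked mechanically before bootstrapping clusterdock. A small sketch: the current version is hard-coded here for illustration, whereas a real script would read it from the daemon as shown in the comment.

```shell
required="1.11.0"
# Illustrative value only; in a real check you would use:
#   current="$(docker version --format '{{.Server.Version}}')"
current="1.12.6"

# sort -V orders version strings numerically; if the required version sorts
# first (or is equal), the running daemon is new enough for Cloudera Manager.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  new_enough=yes
else
  new_enough=no
fi
echo "Docker $current new enough: $new_enough"
```

The same pattern works for any dotted version comparison where plain string comparison would misorder, e.g. 1.9 vs 1.11.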