Docker environment at D4Science
D4Science Docker infrastructure
A production cluster based on Docker Swarm [https://docs.docker.com/engine/swarm/] is available. The cluster consists of:
- three manager nodes
- five worker nodes (at the time of writing)
The running services are exposed through two tiers of HAPROXY load balancers:
- An L4 layer, used to reach the http/https services exposed by the L7 layer
- An L7 layer, running inside the swarm, configured to dynamically resolve the backend names using the Docker internal DNS service
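As an illustration of how the L7 layer can reach the Docker internal DNS, a minimal HAPROXY resolvers section is sketched below. The resolver name docker matches the one referenced by the backend example later on this page; the retry and hold values are assumptions, not the production settings.

 # Docker's embedded DNS server listens on 127.0.0.11:53 inside any
 # container attached to a user-defined network
 resolvers docker
     nameserver dns1 127.0.0.11:53
     resolve_retries 3
     timeout resolve 1s
     timeout retry   1s
     hold valid      10s
     hold obsolete   10s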
Provisioning of the Docker Swarm
The Swarm, together with the portainer [2] and L7 HAPROXY [3] installations, is managed by Ansible, starting from the role [4].
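Purely as a sketch of how such a role could be applied (the host groups and role name below are hypothetical; the real ones live in the referenced Ansible repository):

 # playbook.yml: hypothetical names, for illustration only
 - hosts: docker_swarm_managers:docker_swarm_workers
   become: true
   roles:
     - role: docker-swarm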
The load balancer architecture
- Describe how the L4 and L7 HAPROXY service work together
- Describe how the L7 HAPROXY talks to the backend services
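As a starting point, a minimal sketch of the L4 tier forwarding traffic to the L7 instances. Plain TCP passthrough is assumed here, and the node addresses are hypothetical:

 # L4 tier: TCP passthrough towards the L7 HAPROXY instances
 # (hypothetical addresses; the real ones are managed by Ansible)
 frontend l4_https_in
     mode tcp
     bind *:443
     default_backend l7_haproxy_bck

 backend l7_haproxy_bck
     mode tcp
     balance roundrobin
     server swarm-w1 10.0.0.11:443 check
     server swarm-w2 10.0.0.12:443 check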
Docker Stack
- The services can be deployed into the Docker cluster as stacks [https://docs.docker.com/engine/swarm/stack-deploy/] using a specially crafted compose file. The Open ASFA case can be used as a working example [https://code-repo.d4science.org/InfraScience/ansible-role-open-asfa]
- The web services must be connected to the haproxy-public network and set up to use the dnsrr deploy mode, so that they are discoverable by HAPROXY (see the compose sketch under Docker compose example below)
- HAPROXY must be configured to expose the service. Example of a two-instance shinyproxy service:
 backend shinyproxy_bck
     mode http
     option httpchk
     balance leastconn
     # health check: a HEAD request to / must return a 2xx or 3xx status
     http-check send meth HEAD uri / ver HTTP/1.1 hdr Host localhost
     http-check expect rstatus (2|3)[0-9][0-9]
     # sticky sessions, keyed on the client source IP
     stick on src
     stick-table type ip size 2m expire 180m
     # two server slots, resolved at runtime through the Docker internal DNS
     server-template shinyproxy- 2 shinyproxy_shinyproxy:8080 check resolvers docker init-addr libc,none
Docker compose example
- Use the Open ASFA compose file as a reference [https://code-repo.d4science.org/InfraScience/ansible-role-open-asfa]
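A minimal sketch of a compose file satisfying the two requirements listed above (the haproxy-public network and the dnsrr endpoint mode). The image name is hypothetical; the real values are in the Open ASFA repository:

 version: '3.7'

 services:
   shinyproxy:
     # hypothetical image, for illustration only
     image: example/shinyproxy:latest
     networks:
       - haproxy-public
     deploy:
       replicas: 2
       # dnsrr makes every task resolvable through the Docker internal DNS,
       # so the HAPROXY server-template above can discover the instances
       endpoint_mode: dnsrr

 networks:
   haproxy-public:
     # the network is created during the swarm provisioning, not by this stack
     external: true

Deploying it with docker stack deploy -c docker-compose.yml shinyproxy yields the shinyproxy_shinyproxy name resolved by the HAPROXY server-template above.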