This guide provides step-by-step instructions for deploying CCP on bare metal or a virtual machine.
CCP was tested on Ubuntu 16.04 x64. It will probably work on other OSes, but they are not officially supported.
CCP was tested on an environment created by Kargo (via fuel-ccp-installer), which manages k8s, Calico, Docker and many other things. It will probably work on a different setup, but such setups are not officially supported.
The currently tested versions of the components are:
Component | Min Version | Max Version | Comment |
---|---|---|---|
Kubernetes | 1.2.4 | 1.3.5 | 1.3.0 to 1.3.3 won’t work |
Docker | 1.10.0 | 1.12.0 | |
Calico-node | 0.20.0 | 0.21.0 | |
Additionally, you will need a working kube-proxy, kube-dns and Docker registry.
If you don’t have a running k8s environment, please check out this guide
Warning
All further steps assume that you already have a working k8s installation.
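A quick sanity check of the cluster before proceeding (assuming kubectl is already configured to talk to it):
kubectl get nodes
kubectl get pods --namespace kube-system
All nodes should report Ready and the kube-system pods should be Running.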
Note
Some commands below may require root permissions
To clone the CCP CLI repo:
git clone https://git.openstack.org/openstack/fuel-ccp
To install the CCP CLI and its Python dependencies, use:
pip install fuel-ccp/
Create CCP CLI configuration file:
mkdir /etc/ccp
cat > /etc/ccp/ccp.yaml << EOF
builder:
  push: True
registry:
  address: "127.0.0.1:31500"
repositories:
  skip_empty: True
EOF
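If you are not sure the registry at 127.0.0.1:31500 is up, you can query the standard Docker Registry v2 API (assuming the registry is exposed on that address):
curl http://127.0.0.1:31500/v2/_catalog
An empty catalog, {"repositories":[]}, is fine at this point; the ccp build step below will populate it.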
Append default topology and edit it, if needed:
cat fuel-ccp/etc/topology-example.yaml >> /etc/ccp/ccp.yaml
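The topology maps k8s nodes to roles, and roles to CCP components. A minimal hypothetical sketch of its shape (node and role names here are illustrative; keep the values from topology-example.yaml unless you need to change them):
nodes:
  node1:
    roles:
      - controller
  node[2-3]:
    roles:
      - compute
roles:
  controller:
    - etcd
    - mariadb
    - keystone
  compute:
    - nova-compute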
Append global CCP configuration:
cat >> /etc/ccp/ccp.yaml << EOF
configs:
  private_interface: eth0
  public_interface: eth1
  neutron_external_interface: eth2
EOF
Make sure to adjust these interface names to your environment, since its network configuration may differ.
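After these steps, /etc/ccp/ccp.yaml should look roughly like this (the nodes and roles sections come from topology-example.yaml and are elided here):
builder:
  push: True
registry:
  address: "127.0.0.1:31500"
repositories:
  skip_empty: True
nodes:
  ...
roles:
  ...
configs:
  private_interface: eth0
  public_interface: eth1
  neutron_external_interface: eth2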
Fetch CCP components repos:
ccp fetch
Build CCP components and push them into the Docker Registry:
ccp build
Deploy OpenStack:
ccp deploy
If you want to deploy only specific components use:
ccp deploy -c COMPONENT_NAME1 COMPONENT_NAME2
For example:
ccp deploy -c etcd mariadb keystone
By default, CCP deploys all components into the “ccp” k8s namespace. You can set the context for all kubectl commands to use this namespace:
kubectl config set-context ccp --namespace ccp
kubectl config use-context ccp
Get all running pods:
kubectl get pod -o wide
Get all running jobs:
kubectl get job -o wide
Note
Deployment is successful when all jobs show “1” in the SUCCESSFUL column.
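One way to wait for that state is to poll the jobs list, for example:
watch kubectl get job -o wide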
Install openstack-client:
pip install python-openstackclient
An openrc file for the current deployment was created in the current working directory. To use it, run:
source openrc-ccp
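To verify that the cloud answers API requests, a couple of read-only commands can be used, for example:
openstack service list
openstack compute service list
Both should return populated tables without authentication errors.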
Run test environment deploy script:
bash fuel-ccp/tools/deploy-test-vms.sh -a create -n NUMBER_OF_VMS
This script will create a flavor, upload a CirrOS image to Glance, create a network and a subnet, and launch a bunch of CirrOS-based VMs.
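You can check the result with:
openstack server list
All created VMs should eventually reach the ACTIVE status.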
Currently, we don’t have any external proxy (like Ingress), so for now we have to use the k8s service “nodePort” feature to be able to access internal services.
Get nodePort of horizon service:
kubectl get service horizon -o yaml | awk '/nodePort: / {print $NF}'
Use the external IP of any node in the cluster plus this port to access Horizon.
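For example, a small sketch that builds the Horizon URL (NODE_IP is a placeholder for the external IP of any cluster node):
NODE_IP=1.2.3.4  # replace with the external IP of any k8s node
HORIZON_PORT=$(kubectl get service horizon -o yaml | awk '/nodePort: / {print $NF}')
echo "http://${NODE_IP}:${HORIZON_PORT}/"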
Get nodePort of nova-novncproxy service:
kubectl get service nova-novncproxy -o yaml | awk '/nodePort: / {print $NF}'
Take the URL from the Horizon console and replace the “nova-novncproxy” string with the external IP of any node in the cluster, plus the nodePort of the service.
To cleanup your environment run:
ccp cleanup
This will delete all VMs created by OpenStack and destroy all Neutron networks. After that is done, it will delete all k8s pods in this deployment.
This section provides information on deploying StackLight, the monitoring and logging system for CCP.
Warning
StackLight requires Kubernetes 1.4 or higher, and its deployment will fail with Kubernetes 1.3 and lower. So before deploying StackLight make sure you use an appropriate version of Kubernetes.
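You can check the server version with:
kubectl version
and make sure the reported Server Version is 1.4 or higher.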
StackLight is composed of several components. Some components are related to logging, and others are related to monitoring.
The “logging” components: Elasticsearch, Kibana, Heka and cron (see the deployment examples below).
The “monitoring” components: InfluxDB, Grafana and stacklight-collector.
For fetching the StackLight repo (fuel-ccp-stacklight) and building the StackLight Docker images, please refer to the Quick Start section, as StackLight is no different from other CCP components in that respect. If you followed the Quick Start, the StackLight images may be built already.
The StackLight Docker images are the following:
The StackLight components are regular CCP components, so the deployment of StackLight is done through the CCP CLI like any other CCP component. Please read the Quick Start section and make sure the CCP CLI is installed and you know how to use it.
StackLight may be deployed together with other CCP components, or independently as a separate deployment process. You may also want to deploy just the “logging” components of StackLight, or just the “monitoring” components. Or you may want to deploy all the StackLight components at once.
In any case you will need to create StackLight-related roles in your CCP configuration file (e.g. /etc/ccp/ccp.yaml) and you will need to assign these roles to nodes.
For example:
nodes:
  node1:
    roles:
      - stacklight-backend
      - stacklight-collector
  node[2-3]:
    roles:
      - stacklight-collector
roles:
  stacklight-backend:
    - influxdb
    - grafana
  stacklight-collector:
    - stacklight-collector
In this example we define two roles: stacklight-backend and stacklight-collector. The role stacklight-backend is assigned to node1, and it defines where influxdb and grafana will run. The role stacklight-collector is assigned to all the nodes (node1, node2 and node3), and it defines where stacklight-collector will run. In most cases you will want stacklight-collector to run on every cluster node, for node-level metrics to be collected for every node.
With this, you can now deploy influxdb, grafana and stacklight-collector with the following CCP command:
ccp deploy -c influxdb grafana stacklight-collector
Here is another example, in which both the “monitoring” and “logging” components will be deployed:
nodes:
  node1:
    roles:
      - stacklight-backend
      - stacklight-collector
  node[2-3]:
    roles:
      - stacklight-collector
roles:
  stacklight-backend:
    - influxdb
    - grafana
    - elasticsearch
    - kibana
  stacklight-collector:
    - stacklight-collector
    - heka
    - cron
And this is the command to use to deploy all the StackLight services:
ccp deploy -c influxdb grafana elasticsearch kibana stacklight-collector heka cron
To check the deployment status you can run:
kubectl --namespace ccp get pod -o wide
and check that all the StackLight-related pods have the RUNNING status.
As already explained in Quick Start, CCP does not currently include an external proxy (such as Ingress), so for now the Kubernetes nodePort feature is used to be able to access services such as Grafana and Kibana from outside the Kubernetes cluster.
This is how you can get the node port for Grafana:
$ kubectl get service grafana -o yaml | awk '/nodePort: / {print $NF}'
31124
And for Kibana:
$ kubectl get service kibana -o yaml | awk '/nodePort: / {print $NF}'
31426
This guide covers CCP specific requirements for defining Docker images.
All Docker files should be located in the docker/<component_name> directory, for example:
docker/horizon
docker/keystone
The docker directory may contain multiple components.
Each docker directory should contain a Dockerfile.j2 file. Dockerfile.j2 is a file which contains Docker build instructions in a Jinja2 template format. You can add additional files, which will be used in Dockerfile.j2, but only Dockerfile.j2 can be a Jinja2 template in this directory.
Please refer to the official Docker documentation, which covers the Dockerfile format. CCP has some additional requirements, which are:
Here is an example of a valid Dockerfile.j2: Keystone Dockerfile
Only specific variables can actually be used in Dockerfile.j2:
The CCP CLI provides an additional feature for Docker image creation, called copy_sources, which makes it easier to use git repositories inside a Dockerfile.
This feature uses the configuration from service/files/defaults.yaml in the same repository or from the global config; please refer to the Application definition contribution guide for details.
After making any changes in the docker directory, you should test them via build and deploy.
To test building, please run:
ccp build -c <component_name>
For example:
ccp build -c keystone
Make sure that the image builds without errors.
To test the deployment, build new images using the steps above, and then run:
ccp deploy
Please refer to Quick Start for additional information.
This guide covers CCP specific DSL, which is used by the CCP CLI to populate k8s objects.
All application definition files should be located in the service/ directory, as a component_name.yaml file, for example:
service/keystone.yaml
All templates, such as configs, scripts, etc, which will be used for this service, should be located in service/files, for example:
service/files/keystone.conf.j2
All files inside this directory are Jinja2 templates. Default variables for these templates should be located in service/files/defaults.yaml inside the configs key.
There are three config locations, which the CCP CLI uses:
1. Global defaults - project-wide defaults, kept in the fuel-ccp repository in the fuel_ccp/resources/defaults.yaml file. This file defines global variables, that is, variables that are not specific to any component, like eth interface names.
2. Component defaults - each component repository could contain a service/files/defaults.yaml file with the default config for this component only.
3. Global config - an optional config with global overrides for all services. Use it only if you need to override some defaults.
Before deployment, CCP will merge all these files into one dict, using the order above, so “component defaults” will override “global defaults”, and “global config” will override everything.
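For example, if a component's service/files/defaults.yaml and the global config both set the same variable, the global value wins. A minimal sketch using the keystone_debug variable from the configs example below:
# service/files/defaults.yaml (component defaults)
configs:
  keystone_debug: false
# /etc/ccp/ccp.yaml (global config)
configs:
  keystone_debug: true
# After merging, {{ keystone_debug }} renders as "true".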
Each config could contain 3 keys: configs, versions and sources.
Each key has its own purpose and isolation, so you have to add your variable to the right key to make it work.
configs
Isolation:
Allowed content:
Example:
configs:
  keystone_debug: false
So you could add the “{{ keystone_debug }}” variable to your templates, which will be rendered into “false” in this case.
versions
Isolation:
Allowed content:
For example:
versions:
  influxdb_version: "0.13.0"
So you could add this to influxdb Dockerfile.j2:
curl https://dl.influxdata.com/influxdb/releases/influxdb_{{ influxdb_version }}_amd64.deb
sources
Isolation:
Allowed content:
Remote git repository example:
sources:
  openstack/keystone:
    git_url: https://github.com/openstack/keystone.git
    git_ref: master
Local git repository example:
sources:
  openstack/keystone:
    source_dir: /tmp/keystone
So you could add this to Dockerfile.j2:
{{ copy_sources("openstack/keystone", "/keystone") }}
CCP will use the chosen configuration to copy the git repository into the Docker container, so you can use it later.
network_topology
Isolation:
Allowed content:
You could use it to get the private and public interface IP addresses. For example:
bind = network_topology["private"]["address"]
listen = network_topology["public"]["address"]
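For illustration, if the pod's private address were 10.0.0.5 and its public address 172.16.0.5, the two lines above would render to:
bind = 10.0.0.5
listen = 172.16.0.5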
env
Isolation:
Allowed content:
This is mainly used to pass some k8s-related information to the container. For example, you could use it to pass the k8s node hostname to the container via this variable:
Create the env key:
env:
  - name: CCP_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
Use this variable in some config:
{{ CCP_NODE_NAME }}
Please refer to Application definition language for detailed description of CCP DSL syntax.
This is a description of the current syntax of the application definition framework.
service:
  name: service-name
  ports:
    - internal-port:external-port
  daemonset: true
  host-net: true
  containers:
    - name: container-name
      image: container-image
      probes:
        readiness: readiness.sh
        liveness: liveness.sh
      volumes:
        - name: volume-name
          type: host
          path: /path
      pre:
        - name: service-bootstrap
          dependencies:
            - some-service
            - some-other-service
          type: single
          command: /tmp/bootstrap.sh
          files:
            - bootstrap.sh
          user: user
        - name: db-sync
          dependencies:
            - some-dep
          command: some command
          user: user
      daemon:
        dependencies:
          - daemon-dep
        command: daemon.sh
        files:
          - config.conf
        user: user
      post:
        - name: post-command
          dependencies:
            - some-service
            - some-other-service
          type: single
          command: post.sh
          files:
            - config.conf
files:
  config.conf:
    path: /etc/service/config.conf
    content: config.conf.j2
    perm: "0600"
    user: user
  bootstrap.sh:
    path: /tmp/bootstrap.sh
    content: bootstrap.sh.j2
    perm: "0755"
service
Name | Description | Required | Schema | Default |
---|---|---|---|---|
name | Name of the service. | true | string | |
containers | List of containers under multi-container pod | true | container array | |
ports | k8s Service will be created if specified (with NodePort type for now). Only internal or both internal:external ports can be specified | false | internal-port:external-port array | |
daemonset | Create DaemonSet instead of Deployment | false | boolean | false |
host-net | Use the host network for the pod | false | boolean | false |
container
Name | Description | Required | Schema | Default |
---|---|---|---|---|
name | Name of the container. It will be used to track status in etcd | true | string | |
image | Name of the image; registry, namespace and tag will be added by the framework | true | string | |
probes | Readiness, liveness or both checks can be defined. Exec action will be used for both checks | false | dict with two keys: readiness and liveness | |
volumes | List of volumes to mount into the container | false | volume array | |
pre | List of commands that need to be executed before the daemon process starts | false | command array | |
daemon | The daemon process to run in the container | true | command | |
post | The same as for “pre”, except that post commands will be executed after the daemon process has been started | false | command array | |
env | An array of environment variables defined in the kubernetes way | false | env array | |
volume
Name | Description | Required | Schema | Default |
---|---|---|---|---|
name | Name of the volume | true | string | |
type | host and empty-dir types are supported for now | true | one of: [“host”, “empty-dir”] | |
path | Host path that should be mounted (only if type = “host”) | false | string | |
mount-path | Mount path in container | false | string | path |
readOnly | Mount mode of the volume | false | bool | False |
command
Name | Description | Required | Schema | Default |
---|---|---|---|---|
name | Name of the command. Required only for pre and post with type single | – | string | |
command | Command to execute | true | string | |
dependencies | These keys will be polled from etcd before command execution | false | string array | |
type | type: single means that this command should be executed once per OpenStack deployment; for commands with type: single a Job object will be created. type: local (or no type specified) means that the command will be executed inside the same container as the daemon process | false | one of: [“single”, “local”] | local |
files | List of files that map to the keys of the files dict; it defines which files will be rendered inside a container | false | file keys array | |
user | User to run the command as | false | string | |
files
Name | Description | Required | Schema | Default |
---|---|---|---|---|
<file name> | Name of the file to refer to in the files list of commands | false | file array | |
file
Name | Description | Required | Schema | Default |
---|---|---|---|---|
path | Destination path inside a container | true | string | |
content | Name of the file under {{ service_repo }}/service/files directory. This file will be rendered inside a container and moved to the destination defined with path | true | string | |
perm | File permissions, e.g. “0600” | false | string | |
user | Owner of the file | false | string | |