Kubespray allows you to deploy a production-ready Kubernetes cluster (using Ansible or Vagrant), and since v2.3 it can work together with kubeadm.
kubeplay is a tool for offline deployment of kubernetes clusters based on kubespray.
Features
All dependencies included, installing offline with one single command
amd64 and arm64 CPU architectures supported
Validity of certificate generated by kubeadm extended to 10 years
No docker dependency, seamless migrating container runtime to containerd
Ideal for toB privatized deployment scenarios, since all rpm/deb packages needed to bootstrap the cluster (e.g. storage clients) can be installed offline
Multi-cluster deployment supported: a new kubernetes cluster can be deployed from Job Pods running inside an existing kubernetes cluster
Offline installer built with GitHub Actions, free of charge, 100% open source
compose
Runs nginx and registry via nerdctl (https://github.com/containerd/nerdctl) compose on the deploy node where the deployment tool runs, providing offline resource download and image distribution services.
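As a rough illustration of what such a compose setup could look like, here is a hypothetical compose file for the deploy node; the image tags, volume paths, and service layout are illustrative assumptions, not kubeplay's actual bundled file (only the ports match the config.yaml defaults documented below):

```yaml
# Hypothetical compose file for the deploy node (illustrative only):
# an nginx file server for offline packages plus a local image registry.
services:
  nginx:
    image: nginx:alpine        # assumed image tag
    ports:
      - "8080:80"              # matches nginx_http_port in config.yaml
    volumes:
      - ./resources:/usr/share/nginx/html:ro
  registry:
    image: registry:2          # assumed image tag
    ports:
      - "443:443"              # registry_https_port
      - "127.0.0.1:5000:5000"  # registry_push_port, bound to localhost only
```

A file like this would be started on the deploy node with `nerdctl compose up -d`.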
```
kubeplay-v0.1.0-alpha.3-centos-7.sha256sum.txt   # checksum file
kubeplay-v0.1.0-alpha.3-centos-7-amd64.tar.gz    # for CentOS 7 amd64
kubeplay-v0.1.0-alpha.3-centos-7-arm64.tar.gz    # for CentOS 7 arm64
```
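Before extracting, it is worth verifying the download against the checksum file. A self-contained sketch of the `sha256sum` workflow, using a stand-in file since the real tarball names are release-specific:

```shell
# Create a stand-in artifact and its checksum file, then verify it.
# The same "sha256sum -c <name>.sha256sum.txt" check applies to the
# real kubeplay tarballs downloaded from the release page.
echo "demo payload" > kubeplay-demo.tar.gz
sha256sum kubeplay-demo.tar.gz > kubeplay-demo.sha256sum.txt
sha256sum -c kubeplay-demo.sha256sum.txt
```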
Configuration
```shell
$ tar -xpf kubeplay-x.y.z-xxx-xxx.tar.gz
$ cd kubeplay
$ cp config-sample.yaml config.yaml
$ vi config.yaml
```
The config.yaml configuration file is divided into the following sections:
compose: config for nginx and registry on the current deploy node
kubespray: kubespray deployment config
inventory: ssh config for the nodes of the kubernetes cluster
default: default config values
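Putting the four sections together, a trimmed config.yaml skeleton looks roughly like this (values are the illustrative ones used in the per-section examples below, and most keys are omitted):

```yaml
compose:
  internal_ip: 172.20.0.25
  nginx_http_port: 8080
  registry_domain: kube.registry.local
kubespray:
  kube_version: v1.21.3
inventory:
  all:
    hosts:
      node1:
        ansible_host: 172.20.0.21
default:
  registry_https_port: 443
```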
compose
```yaml
compose:
  # Compose bootstrap node ip, default is local internal ip
  internal_ip: 172.20.0.25
  # Nginx http server bind port for download files and packages
  nginx_http_port: 8080
  # Registry domain for CRI runtime download images
  registry_domain: kube.registry.local
```
kubespray
```yaml
kubespray:
  # Kubernetes version deployed by default
  kube_version: v1.21.3
  # For deploying an HA cluster you must configure an external apiserver access ip
  external_apiserver_access_ip: 127.0.0.1
  # Set network plugin to calico with vxlan mode by default
  kube_network_plugin: calico
  # Container runtime, only containerd is supported for offline deploy
  container_manager: containerd
  # Only host is supported if containerd is used as the CRI runtime
  etcd_deployment_type: host
  # Settings for the etcd events cluster
  etcd_events_cluster_setup: true
  etcd_events_cluster_enabled: true
```
inventory
inventory is the ssh login configuration for the nodes of the kubernetes cluster, supporting yaml, json, and ini formats.
```yaml
# Cluster nodes inventory info
inventory:
  all:
    vars:
      ansible_port: 22
      ansible_user: root
      ansible_ssh_pass: Password
      # Remember to put your id_rsa into the ./kubespray/config dir.
      # ansible_ssh_private_key_file: /kubespray/config/id_rsa
    hosts:
      node1:
        ansible_host: 172.20.0.21
      node2:
        ansible_host: 172.20.0.22
      node3:
        ansible_host: 172.20.0.23
      node4:
        ansible_host: 172.20.0.24
    children:
      kube_control_plane:
        hosts:
          node1:
          node2:
          node3:
      kube_node:
        hosts:
          node1:
          node2:
          node3:
          node4:
      etcd:
        hosts:
          node1:
          node2:
          node3:
      k8s_cluster:
        children:
          kube_control_plane:
          kube_node:
      gpu:
        hosts: {}
      calico_rr:
        hosts: {}
```
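Since ini is also accepted, the same hosts could be written in Ansible's ini inventory style. This is a sketch assuming the standard kubespray group names; it mirrors the yaml inventory above but is not taken from kubeplay's samples:

```ini
; Hypothetical ini rendering of the same inventory
[all]
node1 ansible_host=172.20.0.21
node2 ansible_host=172.20.0.22
node3 ansible_host=172.20.0.23
node4 ansible_host=172.20.0.24

[kube_control_plane]
node1
node2
node3

[kube_node]
node1
node2
node3
node4

[etcd]
node1
node2
node3

[k8s_cluster:children]
kube_control_plane
kube_node
```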
default
The following default parameters should not be modified unless you have special requirements; just leave them at their defaults. If left unmodified, the ntp_server value will be overridden by internal_ip from the compose section, and registry_ip and offline_resources_url are generated automatically from the parameters in the compose section, so they do not need to be modified either.
```yaml
default:
  # NTP server ip address or domain, default is internal_ip
  ntp_server:
    - internal_ip
  # Registry ip address, default is internal_ip
  registry_ip: internal_ip
  # Offline resource url for downloading files, default is internal_ip:nginx_http_port
  offline_resources_url: internal_ip:nginx_http_port
  # Use nginx and registry to provide all offline resources
  offline_resources_enabled: true
  # Image repo in registry
  image_repository: library
  # Kubespray container image for deploying user clusters or scaling
  kubespray_image: "kubespray"
  # Auto-generate a self-signed certificate for the registry domain
  generate_domain_crt: true
  # Port nodes pull images from, 443 by default
  registry_https_port: 443
  # Port for pushing images to this registry, 5000 by default, bound only at 127.0.0.1
  registry_push_port: 5000
  # Set false to disable downloading all container images on all nodes
  download_container: false
  # Enable hubble support in cilium
  cilium_enable_hubble: false
  # Install hubble-relay and hubble-ui
  cilium_hubble_install: false
  # Install hubble-certgen and generate certificates
  cilium_hubble_tls_generate: false
  # Kube Proxy Replacement mode (strict/probe/partial)
  cilium_kube_proxy_replacement: probe
```
Usage
Deploy a new cluster
```shell
$ bash install.sh
```
Add node to existing cluster
```shell
$ bash install.sh add-node $NODE_NAMES
```
Delete node from cluster
```shell
$ bash install.sh remove-node $NODE_NAME
```
Remove cluster
```shell
$ bash install.sh remove-cluster
```
Remove all components
```shell
$ bash install.sh remove
```
FAQs
msg: 'Failed to download metadata for repo ''docker-ce'': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried'
```shell
# yum update
Yum offline resources
Errors during downloading metadata for repository 'docker-ce':
  - Status code: 404 for http://192.168.8.120:8080/centos/8/os/x86_64/repodata/repomd.xml (IP: 192.168.8.121)
Error: Failed to download metadata for repo 'docker-ce': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
```