Starship is the minimal, blazing-fast, and infinitely customizable prompt for any shell!
Starship is similar to Oh My Zsh, but it is a cross-shell prompt.
Works on the most common shells on the most common operating systems. Use it everywhere!
It brings the best-in-class speed and safety of Rust to make your prompt as quick and reliable as possible.
Every little detail is customizable to your liking, to make this prompt as minimal or feature-rich as you’d like it to be.
A Nerd Font must be installed and enabled in your terminal. For instance, install the 3270 Nerd Font:

```shell
brew tap homebrew/cask-fonts && …
```

For more details, refer to Nerd Fonts - Iconic font aggregator, glyphs/icons collection, & fonts patcher - https://www.nerdfonts.com/font-downloads.
Install Starship using the Homebrew package manager:

```shell
brew install starship
```
For more details refer to https://starship.rs/guide/#step-1-install-starship.
zsh is the default shell on macOS. Add Starship's init script to the end of the ~/.zshrc file. Create the ~/.zshrc file first if it does not exist:

```shell
touch ~/.zshrc
```

For more details refer to https://starship.rs/guide/#step-2-set-up-your-shell-to-use-starship
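The init snippet itself was lost in extraction; per the Starship guide linked above, the line to append to ~/.zshrc is:

```shell
eval "$(starship init zsh)"
```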
Start a new shell instance, and you should see your beautiful new shell prompt. If you’re happy with the defaults, enjoy!
If you’re looking to further customize Starship:
Configuration – learn how to configure Starship to tweak your prompt to your liking
Presets – get inspired by the pre-built configurations of others
```
[WARN] - (starship::modules::battery): Unable to access battery information:
```

Disable the battery module in the starship.toml file.
1 | # ~/.config/starship.toml |
Create the starship.toml file if it does not exist:

```shell
touch ~/.config/starship.toml
```
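The starship.toml content did not survive extraction; per the Starship configuration docs, a minimal fragment that disables the battery module is:

```toml
# ~/.config/starship.toml
[battery]
disabled = true
```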
For more details refer to Battery module is disabled in resolved config, even if toggled in custom config · Issue #5368 · starship/starship - https://github.com/starship/starship/issues/5368
[2] Starship: Cross-Shell Prompt - https://starship.rs/
[3] Homebrew — The Missing Package Manager for macOS (or Linux) - https://brew.sh/
[4] Oh My Zsh - a delightful & open source framework for Zsh - https://ohmyz.sh/
]]>col_active_importer_starter is a starter (or wrapper) for the active_importer gem - https://github.com/continuum/active_importer.
col_active_importer_starter makes full use of the active_importer gem to import tabular data from spreadsheets or similar sources into Active Record models.
Add this line to your application's Gemfile:
```ruby
gem 'col_active_importer_starter'
```

And then execute:

```shell
bundle
```

Or install it yourself as:

```shell
gem install col_active_importer_starter
```
Suppose there is an ActiveRecord model Article:

```ruby
class Article < ApplicationRecord
end
```

and a tabular data file data/Articles.xlsx:

Title | Body
--- | ---
Article.1.title | Article.1.body
Article.2.title | Article.2.body
Define an ArticleImporter (in app/importers/article_import.rb) that extends ColActiveImporterStarter::BaseImporter, then execute it:

```ruby
ArticleImporter.execute("#{Rails.root}/data/Articles.1.xlsx")
```
Or specify more arguments.
1 | params = { |
Check the tmp/importers directory to find the result file:

Title | Body | Result ID | Result Message
--- | --- | --- | ---
Article.1.title | Article.1.body | 1 | success
Article.2.title | Article.2.body | 2 | success
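The importer class body itself was lost in extraction; a hypothetical sketch following the active_importer DSL (column names taken from the table above; the exact ColActiveImporterStarter API may differ) might look like:

```ruby
# app/importers/article_import.rb (hypothetical sketch)
class ArticleImporter < ActiveImporter::Base
  extend ColActiveImporterStarter::BaseImporter

  imports Article            # the target ActiveRecord model
  column 'Title', :title     # spreadsheet header -> model attribute
  column 'Body', :body
end
```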
Inspired by active_importer - https://github.com/continuum/active_importer.
Contributions are welcome! Take a look at our contributions guide for details.
The basic workflow for contributing is the following:
The gem is available as open source under the terms of the MIT License - https://opensource.org/licenses/MIT.
[1] CloudoLife-RoR/col_active_importer_starter: col_active_importer_starter is a starter (or wrapper) to active_importer. - https://github.com/CloudoLife-RoR/col_active_importer_starter
[4] RubyGems.org | your community gem host - https://rubygems.org/
]]>Setting up Rails for the first time with all the dependencies necessary can be daunting for beginners. Docked Rails uses a Rails CLI Docker image to make it much easier, requiring only Docker to be installed.
First install Docker (and WSL on Windows). Then copy and paste into your terminal:

```shell
docker volume create ruby-bundle-cache
```
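The Docked Rails README also defines a shell alias as part of this copy-and-paste block (reproduced here from the upstream README; verify the image name against it):

```shell
alias docked='docker run --rm -it -v ${PWD}:/rails -v ruby-bundle-cache:/bundle ghcr.io/rails/cli'
```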
Then create your Rails app:

```shell
rails new weblog
```
That’s it! You’re running Rails on http://localhost:3000/posts.
Of course, you can also choose to enter a Docker container to create and run your Rails application. (Note the location of the --entrypoint /bin/bash parameter.)
```shell
docker run --rm -it --entrypoint /bin/bash -v $PWD:/rails -v ruby-bundle-cache:/bundle ghcr.io/rails/cli
```

Run the following command after entering the container. (Note that the -b 0.0.0.0 parameter is specified in the last command.)

```shell
rails new weblog
```
Note that using a Docker container to run a Rails application in a virtualized Linux environment (such as on macOS or Windows) may be slower than running the same application directly.
[3] Package cli - https://github.com/orgs/rails/packages/container/package/cli
]]>Recently, Google released the open source vulnerability scanner OSV-Scanner. OSV-Scanner is an officially supported front-end tool for the open source OSV database, written in Go, designed to scan open source applications to assess the security of any merged dependencies.
You can use OSV-Scanner to find vulnerabilities in Rails application dependencies, including lockfiles such as Gemfile.lock, package-lock.json, and yarn.lock, the latest commits in .git directories, and Debian-based images.
There are several ways to install OSV-Scanner.
The latest released binaries can be downloaded from Releases · google/osv-scanner - https://github.com/google/osv-scanner/releases.
Or install it via a package manager such as Scoop (Windows) or Homebrew (brew).
For more information on Scoop, see Scoop - https://scoop.sh/.
For more information on Homebrew, see The Missing Package Manager for macOS (or Linux) — Homebrew - https://brew.sh/.
Alternatively, you can install from source by running:

```shell
go install github.com/google/osv-scanner/cmd/osv-scanner@v1
```

This requires Go 1.18+.
OSV-Scanner collects a list of dependencies and versions used in a project, then matches this list with the OSV database via the OSV.dev API. You can have OSV-Scanner scan your application directory, import a version dependency lock file, scan Debian-based Docker images (preview feature), or scan SBOM software bill of materials files.
Traverse the directory listing to find:
Version dependency lockfiles (such as Gemfile.lock, package-lock.json, yarn.lock, etc.)
SBOM Software Bill of Materials
the latest commit record of the .git directory
Can be configured to traverse subdirectories recursively using the --recursive / -r flag.
Example:

```shell
osv-scanner -r .
```
OSV-Scanner uses the lockfile package to support a wide range of lockfiles. Here is a list of currently supported lockfiles:
Example:

```shell
osv-scanner --lockfile=Gemfile.lock
```
This tool will grab the list of installed packages in a Debian image and query them for vulnerabilities.
Currently only Debian-based Docker image scanning is supported.
Requires Docker to be installed and the tool to have permissions to invoke it.
Filesystems of Docker containers are not currently scanned, and there are various other limitations. Please follow this issue - https://github.com/google/osv-scanner/issues/64 for updates on container scanning!
Example:

```shell
osv-scanner --docker image_name:latest
```

image_name is your Debian-based Rails application image.
OSV-Scanner supports the SPDX - https://spdx.dev/ and CycloneDX - https://cyclonedx.org/ SBOM formats. The format is automatically detected based on the input file content.
Example:

```shell
osv-scanner --sbom=sbom.json
```
To configure scanning, place an osv-scanner.toml file in the directory where the scanned files are located. To override this file, pass the --config=/path/to/config.toml argument.
Currently, there is only one configuration option:
To ignore a vulnerability, enter its ID under the IgnoredVulns key. (Optional) Add an expiration date or reason.
Example:

```toml
[[IgnoredVulns]]
```
By default, osv-scanner outputs a human-readable table. To have osv-scanner output JSON instead, pass the --json flag when calling it.
When the --json flag is used, only JSON output is printed to stdout; all other output is directed to stderr. So, to save only the JSON output to a file, redirect stdout: osv-scanner --json ... > /path/to/file.json
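The stdout/stderr split can be sketched with a stand-in command (fake_scanner below is a hypothetical placeholder for osv-scanner):

```shell
# stand-in that mimics osv-scanner's two output streams
fake_scanner() {
  echo '{"results": []}'                 # machine-readable report -> stdout
  echo 'Scanning current directory' >&2  # human-readable progress -> stderr
}

# only the JSON lands in the file; progress stays on stderr
fake_scanner > /tmp/report.json 2> /tmp/progress.log
cat /tmp/report.json
```

Replacing fake_scanner with osv-scanner --json gives the same separation.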
[5] Docker: Accelerated, Containerized Application Development - https://www.docker.com/
[6] The Missing Package Manager for macOS (or Linux) — Homebrew - https://brew.sh/
]]>The Ruby pg gem depends on the operating system's compilation tools and libraries. If these change (for example, during an operating system upgrade), the pg gem may fail to access the database.
After a recent upgrade to macOS Monterey 12.6, due to an update to the Xcode license agreement, a Rails project using the pg gem could not run normally:
```shell
rails c
```

Try installing the pg gem again:

```shell
gem install pg
```

Check the mkmf.log file for errors:

```
~/.asdf/installs/ruby/3.0.0/lib/ruby/gems/3.0.0/extensions/x86_64-darwin-20/3.0.0/pg-1.4.3/mkmf.log
```
This confirms that the problem was caused by the Xcode license agreement update that came with the macOS Monterey 12.6 upgrade.
There are two solutions:
Open Xcode and click to agree to the license agreement.
Or open the Terminal and run sudo xcodebuild -license to agree to the license agreement.
Then reinstall the pg gem:

```shell
gem uninstall pg
```

Finally, check that the pg gem is working when Rails starts:

```shell
rails c
```
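As an extra sanity check (a hypothetical one-liner, assuming the pg gem is installed), you can load pg directly and print the client library version:

```shell
ruby -r pg -e 'puts PG.library_version'
```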
[1] ged/ruby-pg: A PostgreSQL client library for Ruby - https://github.com/ged/ruby-pg
[2] pg | RubyGems.org | your community gem host - https://rubygems.org/gems/pg
]]>Crossplane is an open source Kubernetes add-on that enables platform teams to assemble infrastructure from multiple vendors, and expose higher level self-service APIs for application teams to consume, without having to write any code.
Provision and manage cloud infrastructure and services using kubectl
Created to power a more open cloud
There is a flavor of infrastructure for everyone on Crossplane
Publish simplified infrastructure abstractions for your applications
The Universal Cloud API
Run Crossplane anywhere
Kubernetes (K8S)
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
For more information about installing and using Kubernetes (K8s), see the Kubernetes (K8s) Docs.
Helm
Helm is the best way to find, share, and use software built for Kubernetes.
```shell
brew install helm
```
For more information about installing and using Helm, see the Helm Docs.
Use Helm 3 to install the latest official stable release of Crossplane, suitable for community use and testing:

```shell
kubectl create namespace crossplane-system
```
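The rest of the Helm sequence was truncated in extraction; following the Crossplane install docs (the chart repository URL is taken from upstream and should be verified), it is roughly:

```shell
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane --namespace crossplane-system crossplane-stable/crossplane
```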
```shell
helm list -n crossplane-system
```

The Crossplane CLI extends kubectl with functionality to build, push, and install Crossplane packages:

```shell
curl -sL https://raw.githubusercontent.com/crossplane/crossplane/master/install.sh | sh
```
[1] Crossplane - https://crossplane.io/
[2] Crossplane Docs - https://crossplane.io/docs/v1.7/getting-started/install-configure.html
[3] crossplane/crossplane: Your Universal Control Plane - https://github.com/crossplane/crossplane
]]>KubeVela is a modern application delivery platform that makes deploying and operating applications across today’s hybrid, multi-cloud environments easier, faster and more reliable.
KubeVela is infrastructure agnostic, programmable, yet most importantly, application-centric.
Application Centric
Programmable Workflow
Infrastructure Agnostic
Kubernetes (K8S)
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
For more information about installing and using Kubernetes (K8s), see the Kubernetes (K8s) Docs.
Helm
Helm is the best way to find, share, and use software built for Kubernetes.
```shell
brew install helm
```
For more information about installing and using Helm, see the Helm Docs.
```shell
curl -fsSl https://kubevela.io/script/install.sh | bash -s 1.3.0
```

```shell
helm repo add kubevela https://charts.kubevela.net/core
```

```shell
vela addon enable velaux
```

```shell
vela install
```

Install KubeVela 1.3.0+:

```shell
curl -fsSl https://kubevela.io/script/install.sh | bash -s 1.3.0
```

```shell
vela addon enable velaux
```
TODO
[1] Make shipping applications more enjoyable. | KubeVela - https://kubevela.io/
[2] oam-dev/kubevela: The Modern Application Platform. - https://github.com/oam-dev/kubevela
[3] Releases · oam-dev/kubevela - https://github.com/oam-dev/kubevela/releases
]]>JumpServer is a Privileged Access Management (PAM) system complying with the 4A protocol for operation and security auditing. JumpServer provides features including authentication, authorization, accounting, and auditing.
This article is about how to use Helm to install JumpServer on Kubernetes (K8S).
Kubernetes (K8S)
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
For more information about installing and using Kubernetes (K8s), see the Kubernetes (K8s) Docs.
StorageClass
A StorageClass provides a way for administrators to describe the “classes” of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called “profiles” in other storage systems.
Storage Classes | Kubernetes - https://kubernetes.io/docs/concepts/storage/storage-classes/
Helm
Helm is the best way to find, share, and use software built for Kubernetes.
```shell
brew install helm
```
For more information about installing and using Helm, see the Helm Docs.
First, install a MySQL database to persist data and Redis to cache data.
Edit mysql/values.yaml and replace the content within {{ }}.
1 | # mysql/values.yaml |
Use Helm to install MySQL into the bitnami-mysql namespace:

```shell
# create namespace
kubectl create namespace bitnami-mysql
```

Check the MySQL pods:

```shell
kubectl get pods -n bitnami-mysql
```
Remember to replace <Your JumpServer Database Password> with your own password.
1 | Enter into bitnami-mysql container |
Edit redis/values.yaml and replace the content within {{ }}.
1 | # redis/values.yaml |
Use Helm to install Redis into the bitnami-redis-jumpserver namespace:

```shell
# create namespace
kubectl create namespace bitnami-redis-jumpserver
```

Check the Redis pods:

```shell
kubectl get pods -n bitnami-redis-jumpserver
```
Edit jumpserver/values.yaml and replace the content within {{ }}.
1 | # jumpserver/values.yaml |
Use Helm to install JumpServer into the jumpserver namespace:

```shell
# create namespace
kubectl create namespace jumpserver
```

Check the JumpServer pods:

```shell
kubectl get pods -n jumpserver
```

Destroy the release created by Helm:

```shell
helm uninstall jumpserver -n jumpserver
```
Remember to set the storageClassName for the koko and lion pods within values.yaml.
1 | + koko: |
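A hypothetical values.yaml fragment (the key names are assumptions; check the chart's values reference) might look like:

```yaml
koko:
  persistence:
    storageClassName: <Your StorageClass>
lion:
  persistence:
    storageClassName: <Your StorageClass>
```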
[1] jumpserver/helm-charts - https://github.com/jumpserver/helm-charts
[5] Kubernetes Getting Started | Pulumi - https://www.pulumi.com/docs/get-started/kubernetes/
[6] Kubernetes - https://kubernetes.io/
[8] Storage Classes | Kubernetes - https://kubernetes.io/docs/concepts/storage/storage-classes/
]]>To install Docker on macOS, see the official documentation: Install Docker Desktop on Mac - https://docs.docker.com/docker-for-mac/install/.
Docker can also be installed through the brew package manager:

```shell
brew install --cask docker
```
For more information on brew, see The Missing Package Manager for macOS (or Linux) — Homebrew - https://brew.sh/.
Execute the following command to pull the SQL Server image:

```shell
sudo docker pull mcr.microsoft.com/mssql/server:2017-latest
```
Execute the following command to use the mcr.microsoft.com/mssql/server:2017-latest image to create a container named sqlserver listening on port 1433:

```shell
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Passw0rd" \
```
-e "ACCEPT_EULA=Y" sets the ACCEPT_EULA variable to an arbitrary value to confirm acceptance of the End User License Agreement.
-e "SA_PASSWORD=Passw0rd" specifies a strong password of at least 8 characters that meets SQL Server password requirements.
-p 1433:1433 maps a TCP port in the host environment (first value) to a TCP port in the container (second value).
--rm deletes the container after it exits, which is convenient for temporary testing.
--name sqlserver specifies a custom name for the container instead of a randomly generated one.
For more information about SQL Server images, please refer to Microsoft SQL Server - Ubuntu based images | Microsoft Artifact Registry - https://mcr.microsoft.com/en-us/product/mssql/server/about
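Assembled from the flags explained above, the full command looks roughly like this (a sketch; the -d flag to run detached is an addition not shown in the original):

```shell
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Passw0rd" \
  -p 1433:1433 --rm --name sqlserver \
  -d mcr.microsoft.com/mssql/server:2017-latest
```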
```shell
ps -e | grep sqlserver
```

You can check the SQL Server running log by executing the docker logs command:

```shell
docker logs sqlserver
```
Use the docker exec -it command to start an interactive Bash shell inside the running container:

```shell
docker exec -it sqlserver bash
```

sqlserver is the name specified by the --name parameter when creating the container.
Use sqlcmd for local connections inside the container. By default, sqlcmd is not on the PATH, so the full path needs to be specified:

```shell
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "Passw0rd"
```

If successful, the sqlcmd command prompt 1> should be displayed.
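A single query can also be run non-interactively with sqlcmd's -Q flag (the query here is just an example):

```shell
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "Passw0rd" -Q "SELECT @@VERSION"
```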
To install DBeaver on macOS, refer to the official documentation: Download | DBeaver Community - https://dbeaver.io/download/.
DBeaver can also be installed via the brew package manager:

```shell
brew install --cask dbeaver-community
```
[1] Install on Mac | Docker Documentation - https://docs.docker.com/desktop/install/mac-install/
[4] Microsoft Data Platform | Microsoft - https://www.microsoft.com/en-us/sql-server/
[5] docker — Homebrew Formulae - https://formulae.brew.sh/cask/docker
[6] dbeaver-community — Homebrew Formulae - https://formulae.brew.sh/cask/dbeaver-community
[7] Download | DBeaver Community - https://dbeaver.io/download/
[8] DBeaver Community | Free Universal Database Tool - https://dbeaver.io/
[9] The Missing Package Manager for macOS (or Linux) — Homebrew - https://brew.sh/
]]>Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files.
Teleport is a Certificate Authority and an Access Plane for your infrastructure. With Teleport you can:
Terraform relies on plugins called “providers” to interact with cloud providers, SaaS providers, and other APIs.
Terraform configurations must declare which providers they require so that Terraform can install and use them. Additionally, some providers require configuration (like endpoint URLs or cloud regions) before they can be used.
This article describes how to use Terraform to manage Teleport resources.
Use Infrastructure as Code to provision and manage any cloud, infrastructure, or service
Teleport: Easiest, most secure way to access infrastructure | Teleport - https://goteleport.com/
See Installing Teleport | Teleport Docs - https://goteleport.com/docs/installation/ to learn more.
Create a folder teleport-terraform to hold some temporary files:

```shell
mkdir -p teleport-terraform
```
Then, install the Terraform provider.
1 | MacOS |
In order for Terraform to manage resources in your Teleport cluster, it needs a signed identity file from the cluster’s certificate authority. The Terraform user cannot request this itself, and requires another user to impersonate this account in order to request a certificate.
Create a role that enables your user to impersonate the Terraform user. First, paste the following YAML document into a file called terraform-impersonator.yaml:
1 | # terraform-impersonator.yaml |
Next, create the role:

```shell
tctl create terraform-impersonator.yaml
```
Assign this role to the current user. Log in to your Teleport cluster to assume the new role.
Put the following content into terraform.yaml:
1 | # terraform.yaml |
Create the terraform user and role:

```shell
tctl create terraform.yaml
```
Next, request a signed certificate for the Terraform user:
1 | Self-Hosted |
This command should result in three PEM-encoded files: auth.crt, auth.key, and auth.cas (certificate, private key, and CA certs, respectively).

```shell
tctl auth sign --user=terraform --out=terraform-identity
```
The above sequence should result in one PEM-encoded file: terraform-identity.
Paste the following into a file called main.tf to define an example user and role using Terraform.
Teleport Cloud
1 | # main.tf |
1 | # main.tf |
Check the contents of the teleport-terraform folder:
1 | ls |
1 | ls |
Initialize Terraform and apply the spec:

```shell
terraform init
```
[1] Terraform Provider | Teleport Docs - https://goteleport.com/docs/setup/guides/terraform-provider/
[2] Terraform by HashiCorp - https://www.terraform.io/
[3] Teleport: Easiest, most secure way to access infrastructure | Teleport - https://goteleport.com/
]]>Teleport is a Certificate Authority and an Access Plane for your infrastructure. With Teleport you can:
Set up Single Sign-On and have one place to access your SSH servers, Kubernetes, Databases, Desktops, and Web Apps.
Use your favorite programming language to define access policies to your infrastructure.
Share and record interactive sessions across all environments.
This article will help you understand how Teleport works using Docker Compose. It will also show you how to use Teleport with OpenSSH, Ansible, and Teleport’s native client, tsh.
Teleport v9.0.0 Open Source or Enterprise.
Docker v20.10.7 or later and docker-compose v1.25.0 or later.
1 | docker-compose version |
Teleport uses the YAML file format for configuration. A full configuration reference file is shown below; it provides comments and all available options for teleport.yaml. By default, it is stored in /etc/teleport.yaml.
1 | docker run --hostname localhost --rm \ |
It will generate a teleport.yaml file in the ./runtime/teleport/etc/teleport directory for the container to use later.
1 | # /etc/teleport.yaml |
See Teleport Configuration Reference | Teleport Docs - https://goteleport.com/docs/setup/reference/config/ to learn more.
First, create a docker-compose.yaml file.
1 | # docker-compose.yaml |
Then, run docker-compose up:

```shell
docker-compose up
```
Let’s jump into a container with the clients set up and explore Teleport.
From your local terminal:

```shell
docker exec -ti teleport /bin/bash
```
We will run all future commands from the teleport container.
You can see Teleport’s nodes registered in the cluster using the tsh ls command:
1 | From teleport container |
Create a Teleport user called cloudolife which is allowed to log in as either operating system user root or ubuntu.
1 | From term container |
Teleport will output a URL that you must open to complete the user sign-up process:
User “cloudolife” has been created but requires a password. Share this URL with the user to complete user setup; the link is valid for 1h:
https://localhost:3080/web/invite/your-token-here
NOTE: Make sure proxy.teleport:443 points at a Teleport proxy which users can access. Port 443 on the Teleport container is published to the local host, so you can access the invitation page at https://localhost/web/invite/your-token-here.
If you encounter an “Insecure Certificate Error” (or equivalent warning) that prevents the Teleport Web UI from opening, you can perform one of the following actions depending on your browser:
In Safari’s “This Connection Is Not Private” page, click “Show Details,” then click “visit this website.”
In Firefox, click “Advanced” from the warning page, then click “Accept the Risk and Continue.”
In Chrome’s warning page, type thisisunsafe
to ignore certificate validation for the Teleport Web UI.
Use tsh ssh to log in and run tctl within Teleport:

```shell
tsh --proxy=localhost login --user=cloudolife --insecure
```
Remember to use the --insecure option to ignore the x509: certificate signed by unknown authority error.
List nodes within Teleport:

```shell
tsh ls
```
Run tsh ssh to log in to the localhost host within Teleport:

```shell
tsh ssh root@localhost
```
See Using TSH | Teleport Docs - https://goteleport.com/docs/server-access/guides/tsh/, Teleport CLI Reference | Teleport Docs - https://goteleport.com/docs/setup/reference/cli/#tsh to learn more.
Create another Teleport user called col which is allowed to log in as either operating system user root or ubuntu.
1 | From root@localhost within Teleport |
It will generate an invite token to add a node:

```shell
tctl nodes add
```
Run teleport start to register a host into Teleport as a node.
1 | On your host |
See Server Access Getting Started Guide | Teleport Docs - https://goteleport.com/docs/server-access/getting-started/ to learn more.
Check nodes:

```shell
tctl nodes ls
```
Run tsh ssh to log in to your host through Teleport:

```shell
tsh ssh <Your User Name>@<Your Host>.local
```
See Teleport CLI Reference | Teleport Docs - https://goteleport.com/docs/setup/reference/cli/#tctl to learn more.
You can stop Teleport using:

```shell
docker-compose down
```
```shell
tsh --proxy=localhost login --user=blogbin
```

Remember to use the --insecure option to ignore the x509: certificate signed by unknown authority error:

```shell
tsh --proxy=localhost login --user=cloudolife --insecure
```
[3] Using TSH | Teleport Docs - https://goteleport.com/docs/server-access/guides/tsh/
[4] Teleport CLI Reference | Teleport Docs - https://goteleport.com/docs/setup/reference/cli/#tsh
[5] Teleport CLI Reference | Teleport Docs - https://goteleport.com/docs/setup/reference/cli/#tctl
[7] Teleport: Easiest, most secure way to access infrastructure | Teleport - https://goteleport.com/
[8] Overview of Docker Compose | Docker Documentation - https://docs.docker.com/compose/
]]>JumpServer is the world’s first open-source Bastion Host and is licensed under the GPLv3. It is a 4A-compliant professional operation and maintenance security audit system.
JumpServer uses Python / Django for development, follows Web 2.0 specifications, and is equipped with an industry-leading Web Terminal solution that provides a beautiful user interface and a great user experience.
JumpServer adopts a distributed architecture to support multi-branch deployment across multiple cross-regional areas. The central node provides APIs, and login nodes are deployed in each branch. It can be scaled horizontally without concurrency restrictions.
```shell
docker-compose version
```

```shell
git clone --depth=1 https://github.com/wojiushixiaobai/Dockerfile.git
```
The user name and password of the administrator are admin and admin by default.
First, open your browser and visit http://localhost to update the password, then log in to JumpServer to manage accounts and assets.
Alternatively, you can log in to JumpServer with SSH:

```shell
ssh -p 2222 admin@<Your JumpServer>
```
1 | # .ssh/ssh_config |
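The ssh_config block above was truncated in extraction; a hypothetical ~/.ssh/ssh_config fragment matching the command shown earlier (the jumpserver host alias is made up) would be:

```
# ~/.ssh/ssh_config
Host jumpserver
  HostName <Your JumpServer>
  Port 2222
  User admin
```

With this in place, ssh jumpserver is equivalent to ssh -p 2222 admin@<Your JumpServer>.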
[1] jumpserver/Dockerfile: Jumpserver all in one Dockerfile - https://github.com/jumpserver/Dockerfile
[2] JumpServer - 开源堡垒机 - 官网 - https://www.jumpserver.org/
[4] FIT2CLOUD 飞致云 - 多云时代技术领先的企业级软件提供商 - https://www.fit2cloud.com/
]]>build-push-action is a GitHub Action to build and push Docker images with Buildx, with full support for the features provided by the Moby BuildKit builder toolkit. This includes multi-platform builds, secrets, remote cache, etc., and different builder deployment/namespacing options.
1 | name: ci |
You can build multi-platform images using the platforms input as described below.
💡 List of available platforms will be displayed and available through our setup-buildx - https://github.com/docker/setup-buildx-action#about action.
💡 If you want support for more platforms, you can use QEMU with our setup-qemu - https://github.com/docker/setup-qemu-action action.
1 | - |
See build-push-action/multi-platform.md at master · docker/build-push-action - https://github.com/docker/build-push-action/blob/master/docs/advanced/multi-platform.md to learn more.
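A minimal workflow step using the platforms input could look like this (a sketch based on the action's documented inputs; the image name user/app is a placeholder):

```yaml
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    platforms: linux/amd64,linux/arm64
    push: true
    tags: user/app:latest
```

The setup-qemu and setup-buildx steps linked above are expected to run before this step.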
If you want an “automatic” tag management and OCI Image Format Specification for labels, you can do it in a dedicated step. The following workflow will use the Docker metadata action to handle tags and labels based on GitHub actions events and Git metadata.
See build-push-action/tags-labels.md at master · docker/build-push-action - https://github.com/docker/build-push-action/blob/master/docs/advanced/tags-labels.md to learn more.
[4] Learn GitHub Actions - GitHub Docs - https://docs.github.com/en/actions/learn-github-actions
[5] Encrypted secrets - GitHub Docs - https://docs.github.com/en/actions/reference/encrypted-secrets
]]>Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
Fedora Linux is a Linux distribution developed by the Fedora Project which is sponsored primarily by Red Hat (an IBM subsidiary) with additional support and sponsors from other companies and organizations. Fedora contains software distributed under various free and open-source licenses and aims to be on the leading edge of open-source technologies. Fedora is the upstream source for Red Hat Enterprise Linux.
To get started with Docker Engine on Fedora, make sure you meet the prerequisites, then install Docker.
To install Docker Engine, you need the 64-bit version of one of these Fedora versions:
Fedora 34
Fedora 35
Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.
Install the dnf-plugins-core package (which provides the commands to manage your DNF repositories) and set up the stable repository.
```shell
sudo dnf -y install dnf-plugins-core
```

```shell
sudo dnf install docker-ce docker-ce-cli containerd.io
```

```shell
sudo systemctl start docker
```

```shell
sudo docker run hello-world
```

```shell
sudo docker version
```
On Linux, you can download the Docker Compose binary from the Compose repository release page on GitHub. Follow the instructions from the link, which involve running the curl command in your terminal to download the binaries. These step-by-step instructions are also included below.
Run this command to download the current stable release of Docker Compose:

```shell
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
```
Apply executable permissions to the binary:

```shell
sudo chmod +x /usr/local/bin/docker-compose
```

Test the installation:

```shell
docker-compose version
```
Docker images can support multiple architectures, which means that a single image may contain variants for different architectures, and sometimes for different operating systems, such as Windows.
When running an image with multi-architecture support, docker automatically selects the image variant that matches your OS and architecture.
Most of the Docker Official Images on Docker Hub provide a variety of architectures. For example, the busybox image supports amd64, arm32v5, arm32v6, arm32v7, arm64v8, i386, ppc64le, and s390x. When running this image on an x86_64 / amd64 machine, the amd64 variant is pulled and run.
Docker Desktop provides binfmt_misc multi-architecture support, which means you can run containers for different Linux architectures such as arm, mips, ppc64le, and even s390x.
This does not require any special configuration in the container itself, as it uses qemu-static from the Docker for Mac VM. Because of this, you can run an ARM container, like the arm32v7 or ppc64le variants of the busybox image.
// daemon.json
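The daemon.json snippet above was lost in extraction. Historically, some multi-architecture tooling required Docker's experimental features to be switched on; a minimal sketch of such a config (the original contents are unknown) might be:

```json
{
  "experimental": true
}
```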
Docker is now making it easier than ever to develop containers on, and for Arm servers and devices. Using the standard Docker tooling and processes, you can start to build, push, pull, and run images seamlessly on different compute architectures. In most cases, you don’t have to make any changes to Dockerfiles or source code to start building for Arm.
Docker introduces a new CLI command called buildx. You can use the buildx command on Docker Desktop for Mac and Windows to build multi-arch images, link them together with a manifest file, and push them all to a registry using a single command. With the included emulation, you can transparently build more than just native images. Buildx accomplishes this by adding new builder instances based on BuildKit, and leveraging Docker Desktop’s technology stack to run non-native binaries.
For more information about the Buildx CLI command, see Buildx and the docker buildx command line reference.
Run the docker buildx ls command to list the existing builders. This displays the default builder, which is our old builder.
docker buildx ls
Create a new builder which gives access to the new multi-architecture features.
docker buildx create --name mybuilder
Alternatively, run docker buildx create --name mybuilder --use to create a new builder and switch to it in a single command.
Switch to the new builder and inspect it.
docker buildx use mybuilder
docker buildx inspect --bootstrap
Test the workflow to ensure you can build, push, and run multi-architecture images. Create a simple example Dockerfile, build a couple of image variants, and push them to Docker Hub.
The following example uses a single Dockerfile to build an Ubuntu image with cURL installed for multiple architectures.
Create a Dockerfile with the following:
# Dockerfile
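The Dockerfile body did not survive extraction. A minimal sketch matching the description (an Ubuntu image with cURL installed; the base tag and entrypoint are assumptions, not the original file):

```dockerfile
# Hypothetical reconstruction: Ubuntu base with cURL installed
FROM ubuntu:20.04
RUN apt-get update \
    && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["curl"]
CMD ["--version"]
```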
Build the Dockerfile with buildx, passing the list of architectures to build for:
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo:latest --push .
where username is a valid Docker Hub username.
Notes:
The --platform flag informs buildx to generate Linux images for AMD 64-bit, Arm 64-bit, and Armv7 architectures.
The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub.
Inspect the image using docker buildx imagetools.
docker buildx imagetools inspect username/demo:latest
The image is now available on Docker Hub with the tag username/demo:latest. You can use this image to run a container on Intel laptops, Amazon EC2 A1 instances, Raspberry Pis, and other architectures. Docker pulls the correct image for the current architecture, so Raspberry Pis run the 32-bit Arm version and EC2 A1 instances run 64-bit Arm. The SHA tags identify a fully qualified image variant. You can also run images targeted for a different architecture on Docker Desktop.
You can run the images using the SHA tag, and verify the architecture. For example, when you run the following on macOS:
docker run --rm docker.io/username/demo:latest@sha256:2b77acdfea5dc5baa489ffab2a0b4a387666d1d526490e31845eb64e3e73ed20 uname -m
In the above example, uname -m returns aarch64 or armv7l (depending on the variant you run) as expected, even when running the commands on a native macOS or Windows developer machine.
[4] QEMU - https://wiki.qemu.org/Main_Page
[6] Learn GitHub Actions - GitHub Docs - https://docs.github.com/en/actions/learn-github-actions
[7] Encrypted secrets - GitHub Docs - https://docs.github.com/en/actions/reference/encrypted-secrets
Fly.io is a global application distribution platform. We run your code in Firecracker microVMs around the world.
In this work-through we’re going to deploy an application to Fly. In this example, the application will come from a Docker image, but first we want to install the one tool you need to work with Fly: flyctl.
Flyctl is a command-line utility that lets you work with Fly, from creating your account to deploying your applications. It runs on your local device so you’ll want to install the version that’s appropriate for your operating system:
If you have the Homebrew package manager installed, flyctl can be installed by running:
brew install superfly/tap/flyctl
If not, you can run the install script:
curl -L https://fly.io/install.sh | sh
Run the install script:
curl -L https://fly.io/install.sh | sh
Run the Powershell install script:
iwr https://fly.io/install.ps1 -useb | iex
If it’s your first time using Fly, you’ll need to sign up for an account. To do so, run:
flyctl auth signup
This will take you to the sign-up page where you can either:
Sign up with email: Enter your name, email, and password.
Sign up with GitHub: If you have a GitHub account, you can use that to sign up. Look out for the confirmatory email we’ll send you, which will give you a link to set a password; you’ll need a password set so we can actively verify that it is you for some Fly operations.
You will also be prompted for credit card payment information, required for charges outside the free tier on Fly. See Pricing - https://fly.io/docs/about/pricing for more details on what is included in the free tier. If you do not enter details here, you will be unable to create a new application on Fly until you add a credit card to your account.
Whichever route you take you will be signed up, signed in, and returned to your command line, ready to use Fly.
If you already have a Fly account, all you need to do is sign in with Flyctl. Simply run:
flyctl auth login
Your browser will open up with the Fly sign-in screen; enter your user name and password to sign in. If you signed up with GitHub, use the Sign in with GitHub button to sign in.
Whichever route you take you will be returned to your command line, ready to use Fly.
Fly allows you to deploy any kind of app as long as it is packaged in a Docker image. That also means you can just deploy a Docker image, and as it happens we have one ready to go in flyio/hellofly:latest.
Each Fly application needs a fly.toml file to tell the system how we’d like to deploy it. That file can be automatically generated with the flyctl launch command.
flyctl launch --image flyio/hellofly:latest
Organizations: Organizations are a way of sharing applications and resources between Fly users. Every Fly account has a personal organization, called personal, which is only visible to your account. Let’s select that for this guide.
Next, you’ll be prompted to select a region to deploy in. The closest region to you is selected by default. You can use this or change to another region.
? Select region: ord (Chicago, Illinois (US))
At this point, flyctl creates an app for you and writes your configuration to a fly.toml file. The fly.toml file now contains a default configuration for deploying your app.
# fly.toml
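The generated fly.toml contents were lost here. A sketch of what flyctl launch typically writes for this image (the field values are assumptions based on the surrounding text, not the original file):

```toml
# Hypothetical sketch of the generated fly.toml
app = "hellofly"

[build]
  image = "flyio/hellofly:latest"

[[services]]
  internal_port = 8080
  protocol = "tcp"

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443
```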
The flyctl command will always refer to this file in the current directory if it exists, specifically for the app name value at the start. That name will be used to identify the application on the Fly platform. You can also see how the app will be built and that internal port setting. The rest of the file contains settings to be applied to the application when it deploys.
We’ll have more details about these properties as we progress, but for now, it’s enough to say that they mostly configure which ports the application will be visible on.
We are now ready to deploy our containerized app to the Fly platform. At the command line, just run:
flyctl deploy
This will look up our fly.toml file, and get the app name hellofly from there. Then flyctl will start the process of deploying our application to the Fly platform. Flyctl will return you to the command line when it’s done.
Now that the application has been deployed, let’s find out more about its deployment. The command flyctl status will give you all the essential details.
flyctl status
As you can see, the application has been deployed with a DNS hostname of hellofly.fly.dev. Your deployment’s name will, of course, be different. We can also see that one instance of the app is now running in the fra (Frankfurt) region. Next, we connect to it.
The quickest way to connect to your deployed app is with the flyctl open command. This will open a browser on the HTTP version of the site, which is automatically upgraded to an HTTPS-secured connection (when using the fly.dev domain). Add /name to flyctl open and it’ll be appended to the app’s path, and you’ll get an extra greeting from the hellofly application.
flyctl open /fred
Opening http://hellofly.fly.dev/fred
You have successfully deployed and connected to your first Fly application.
[1] Deploy app servers close to your users · Fly - https://fly.io/
[2] Hands-on with Fly - https://fly.io/docs/hands-on/start/
[3] Deploy Your Application via Dockerfile - https://fly.io/docs/getting-started/dockerfile/
Do you already have a project wrapped up in a Docker container - https://docs.docker.com/engine/reference/builder/ ? Great! Just deploy that!
The fly launch command detects your Dockerfile and builds it. If you have Docker running locally, it builds it on your machine. If not, it builds it on a Fly build machine. Once your container is built, it’s deployed! Need some extra config? No sweat, we’ve got you covered. Let’s take a look.
fly launch
Let fly launch generate an app name for you or pick your own.
Select the Fly.io region - https://fly.io/docs/reference/regions/ to deploy to. It defaults to the one closest to you.
The launch command generates a fly.toml file for your project with the settings. You can deploy right away, or add some config first.
Most Dockerfiles expect some configuration settings through ENV. The generated fly.toml file has a place for you to add your custom ENV settings: the [env] block.
# fly.toml
Add whatever values your Dockerfile or container requires.
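The [env] snippet above was truncated in extraction. A sketch of the block (the variable names here are placeholders, not from the original):

```toml
# Hypothetical [env] block in fly.toml
[env]
  LOG_LEVEL = "info"
  PORT = "8080"
```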
Sometimes you have secrets that shouldn’t be checked in to git or shared publicly. For those settings, you can set them using fly secrets.
flyctl secrets set MY_SECRET=romance
You can list the secrets you’ve already set using fly secrets list:
fly secrets list
The values aren’t displayed, since they are secret!
If you didn’t previously deploy the app, you can do that now.
fly deploy
If you have Docker running locally, it builds it on your machine. If you don’t have Docker running locally, it builds it on a Fly build machine. Once your container is built, it’s deployed!
Run fly open to open your deployed app in a browser.
fly open
You’re off and running!
Lots of applications deployed in a container have some state that they want to keep. Here are a couple of resources to check out for ways to do that.
Persistent Volumes - https://fly.io/docs/reference/volumes/: You can create persistent volumes that you can mount into your container for reading and writing data that changes but isn’t blown away when you deploy again.
Postgres Database: Deploy a Fly Postgres database. It automatically creates a DATABASE_URL environment variable when you attach it to your app.
[1] Deploy Your Application via Dockerfile - https://fly.io/docs/getting-started/dockerfile/
[2] Deploy app servers close to your users · Fly - https://fly.io/
[3] Hands-on with Fly - https://fly.io/docs/hands-on/start/
[4] Deploy Your Application via Dockerfile - https://fly.io/docs/getting-started/dockerfile/
[5] Volumes - https://fly.io/docs/reference/volumes/
[6] Postgres on Fly - https://fly.io/docs/reference/postgres/#about-postgres-on-fly
[8] docker container - https://docs.docker.com/engine/reference/builder/
Fly.io is a global application distribution platform. We run your code in Firecracker microVMs around the world.
Getting an application running on Fly is essentially working out how to package it as a deployable image. Once packaged it can be deployed to the Fly infrastructure to run on the global application platform.
In this guide we’ll learn how to deploy a static site on Fly.
To be fair, a static site isn’t an app. So we’re really talking about deploying an app to serve some static content.
In this demonstration, we’ll use goStatic - https://hub.docker.com/r/pierrezemb/gostatic, a tiny web server written in Go that lets us serve static files with very little configuration. We’ll provide a Dockerfile and our content for Fly to transmogrify into a web server running in a VM.
You can clone all the files needed for this example from the hello-static GitHub repository - https://github.com/fly-apps/hello-static to a local directory:
git clone https://github.com/fly-apps/hello-static
Alternatively, you can create all the files manually as you work through this guide.
To configure our application and deploy it on Fly.io, we need flyctl, our CLI app for managing apps on Fly. If you’ve already installed it, carry on. If not, hop over to our installation guide - https://fly.io/docs/getting-started/installing-flyctl/. Once that’s installed, you’ll want to log in to Fly.
At this point, if you have a local clone of the hello-static repository, you could go ahead and run fly launch from its root directory and get the static site deployed without further ado. But that wouldn’t be very illuminating. Let’s go through what’s included in the example repository and why.
If you cloned the repository, your new app already has its own directory. Otherwise, create one. This isn’t just for tidiness, or for letting flyctl detect your app by the fly.toml in the working directory (although these are good reasons). It also ensures that no extra files get included in the build context - https://docs.docker.com/engine/reference/commandline/build/ when the Docker image gets built.
We’ll do everything from within this directory:
cd hello-static
Our example will be a simple static site. That can be as trivial as a single index.html file. Let’s make it only slightly more complicated by writing two HTML files and having them link to each other.
Put these HTML files into a subdirectory of their own, called public. Files in this directory are the ones our goStatic server will serve. Create the hello-static/public directory if needed.
Here’s index.html, which is the landing page:
<html>
Here’s goodbye.html.
<html>
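Both HTML bodies were truncated in extraction. Minimal sketches of the two pages, linked to each other (the wording is an assumption; the real files live in the hello-static repository):

```html
<!-- public/index.html (hypothetical reconstruction) -->
<html>
  <head><title>Hello from Fly</title></head>
  <body>
    <h1>Hello from Fly</h1>
    <p><a href="/goodbye.html">Say goodbye</a></p>
  </body>
</html>

<!-- public/goodbye.html (hypothetical reconstruction) -->
<html>
  <head><title>Goodbye from Fly</title></head>
  <body>
    <h1>Goodbye from Fly</h1>
    <p><a href="/index.html">Say hello again</a></p>
  </body>
</html>
```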
goStatic is designed to run in a container, and the image is available at Docker Hub. This is super convenient for us, because Fly apps need container images too!
We can use the goStatic image as a base image. We just have to copy our site’s files to /srv/http/ in the image.
Here’s our Dockerfile to do that:
FROM pierrezemb/gostatic
COPY ./public/ /srv/http/
The Dockerfile should be placed in the working directory (here, hello-static).
We set the initial configuration for the app by running flyctl launch. This takes care of setting the app name, the Fly.io organization it belongs to, and a region to deploy to. It also generates a fly.toml file with more configuration settings.
The hello-static repository contains a fly.toml that will be detected by flyctl launch; you can use it to configure your app. Otherwise, a new fly.toml will be generated with flyctl launch, and we can edit it.
flyctl launch
This has configured the app with some default parameters, and generated a fly.toml configuration file for us.
Before deploying, we need to do one more thing. goStatic listens on port 8043 by default, but the default fly.toml assumes port 8080. Edit internal_port in the services section to reflect this:
[[services]]
  internal_port = 8043
Now we’re ready to deploy:
flyctl deploy
The output should end something like this, if everything has gone well:
==> Monitoring deployment
The quickest way to browse your newly deployed application is with the flyctl open command.
flyctl open
Your browser will be sent to the displayed URL. Fly will auto-upgrade this URL to an HTTPS secured URL.
[1] - https://fly.io/docs/getting-started/static/
[2] Deploy app servers close to your users · Fly - https://fly.io/
[3] Hands-on with Fly - https://fly.io/docs/hands-on/start/
[4] Deploy Your Application via Dockerfile - https://fly.io/docs/getting-started/dockerfile/
[5] Installing flyctl - https://fly.io/docs/getting-started/installing-flyctl/
[7] docker build | Docker Documentation - https://docs.docker.com/engine/reference/commandline/build/
[8] pierrezemb/gostatic - Docker Image | Docker Hub - https://hub.docker.com/r/pierrezemb/gostatic
go mod tidy
go: errors parsing go.mod:
// go.mod
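The go.mod contents were lost in extraction. For reference, a minimal well-formed go.mod (the module path and Go version below are placeholders, not the original values) looks like:

```
// go.mod — hypothetical minimal example
module example.com/hello

go 1.17
```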
GraalVM is a high-performance Java runtime with new compiler optimizations to accelerate application performance and lower infrastructure costs, on premises and in the cloud. It provides new innovation for the Java platform:
Makes Java applications go faster
True microservice solution for Java
Perfect for containerized workloads
asdf is a single CLI tool for managing multiple runtime versions. It extends via a simple plugin system to install your favourite language: Dart, Elixir, Flutter, Golang (Go), Java, Node.js, Python, Ruby, and more.
This article is about how to use asdf and its Java plugin to install GraalVM Java on macOS with the Homebrew package manager.
Xcode Command Line Tools
The Xcode Command Line Tools package is a small self-contained package, available for download separately from Xcode, that allows you to do command-line development on macOS. It consists of the macOS SDK and command-line tools such as Clang.
For more information about installing and using the Xcode Command Line Tools, see the Xcode Command Line Tools documentation.
Homebrew
Homebrew is the Missing Package Manager for macOS (or Linux).
For more information about installing and using Homebrew, see the Homebrew - https://brew.sh/.
Install asdf dependencies.
Install Java plugin requirements.
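The command bodies for the two steps above were lost in extraction. Based on the asdf documentation and the halcyon/asdf-java plugin, the steps are typically the following (the exact dependency package set is an assumption):

```shell
# Install asdf plus common dependencies via Homebrew (assumed package set)
brew install coreutils curl git asdf

# Add the Java plugin used in the steps below (halcyon/asdf-java)
asdf plugin add java https://github.com/halcyon/asdf-java.git
```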
Update Java plugin.
asdf plugin update java
List all Java versions.
asdf list all java | grep graalvm
Install a Java version manually.
asdf install java graalvm-21.3.0+java17
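After installing, the version still has to be selected. With asdf this is typically done with `asdf global java graalvm-21.3.0+java17`, which records the choice in ~/.tool-versions, roughly:

```
# ~/.tool-versions (written by `asdf global`)
java graalvm-21.3.0+java17
```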
[1] Manage asdf - https://asdf-vm.com/#/core-manage-asdf
[2] GitHub - halcyon/asdf-java: A Java plugin for asdf-vm. - https://github.com/halcyon/asdf-java
[3] Home · rbenv/java-build Wiki - https://github.com/rbenv/java-build/wiki#suggested-build-environment
[4] Homebrew - https://brew.sh/
[5] Java | GraalVM - https://www.graalvm.org/java/
]]>