Error: listen tcp :10250: bind: address already in use on Alibaba Cloud Serverless Kubernetes (ASK)
Encountered a Back-off restarting failed container error when deploying cert-manager-webhook on Alibaba Cloud Serverless Kubernetes (ASK).
```shell
shell@Alicloud:~$ kubectl logs -f cert-manager-webhook-7d6d4c78bc-lr9zh -n cert-manager
W0603 09:48:28.403062       1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0603 09:48:28.404700       1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0603 09:48:28.404863       1 webhook.go:69] cert-manager/webhook "msg"="using dynamic certificate generating using CA stored in Secret resource" "secret_name"="cert-manager-webhook-ca" "secret_namespace"="cert-manager"
I0603 09:48:28.405047       1 server.go:148] cert-manager/webhook "msg"="listening for insecure healthz connections" "address"=":6080"
Error: listen tcp :10250: bind: address already in use
...
```
Perhaps ASK uses a single worker node by default, which makes port conflicts easy to hit; 10250 is also the kubelet's default port.
We can change 10250 to another port to avoid the conflict.
```yaml
# values.yaml
webhook:
  # The port that the webhook should listen on for requests.
  # In GKE private clusters, by default kubernetes apiservers are allowed to
  # talk to the cluster nodes only on 443 and 10250, so configuring
  # securePort: 10250 will work out of the box without needing to add firewall
  # rules or requiring NET_BIND_SERVICE capabilities to bind port numbers <1000.
  # securePort: 10250
  securePort: 10251 # Change 10250 to another port.
```
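After updating values.yaml, reapply the chart. A minimal sketch, assuming cert-manager was installed via Helm from the jetstack repository under the release name cert-manager (adjust names to your setup):

```shell
# Upgrade the existing release with the overridden webhook port
$ helm upgrade cert-manager jetstack/cert-manager \
    --namespace cert-manager \
    -f values.yaml

# Watch the webhook pod restart cleanly
$ kubectl get pods -n cert-manager -w
```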
SSHFS
SSHFS allows you to mount a remote filesystem using SFTP. Most SSH servers support and enable this SFTP access by default, so SSHFS is very simple to use - there’s nothing to do on the server-side.
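For example, a minimal mount-and-unmount sketch (the host, user, and paths below are placeholders):

```shell
# Mount the remote home directory of user@example.com at ~/remote
$ mkdir -p ~/remote
$ sshfs user@example.com:/home/user ~/remote

# Work with the files as if they were local, then unmount
$ fusermount -u ~/remote    # Linux
$ umount ~/remote           # macOS
```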
GNU Wget
GNU Wget is a free software package for retrieving files using HTTP, HTTPS, FTP and FTPS, the most widely used Internet protocols. It is a non-interactive command-line tool, so it may easily be called from scripts, cron jobs, terminals without X Window System support, etc.
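For instance, a couple of common invocations (the URL is a placeholder):

```shell
# Download a single file, resuming if the transfer was interrupted
$ wget -c https://example.com/archive.tar.gz

# Mirror a site for offline reading, converting links to work locally
$ wget --mirror --convert-links --page-requisites https://example.com/docs/
```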
Dive
Dive is a tool for exploring a docker image, layer contents, and discovering ways to shrink the size of your Docker/OCI image.
Basic Features
Show Docker image contents broken down by layer
As you select a layer on the left, you are shown the contents of that layer combined with all previous layers on the right. Also, you can fully explore the file tree with the arrow keys.
Indicate what’s changed in each layer
Files that have changed, been modified, added, or removed are indicated in the file tree. This can be adjusted to show changes for a specific layer, or aggregated changes up to this layer.
Estimate “image efficiency”
The lower left pane shows basic layer info and an experimental metric that will guess how much wasted space your image contains. This might be from duplicating files across layers, moving files across layers, or not fully removing files. Both a percentage “score” and total wasted file space is provided.
Quick build/analysis cycles
You can build a Docker image and do an immediate analysis with one command: dive build -t some-tag .
You only need to replace your docker build command with the same dive build command.
CI Integration
Analyze an image and get a pass/fail result based on the image efficiency and wasted space. Simply set CI=true in the environment when invoking any valid dive command.
Multiple Image Sources and Container Engines Supported
With the --source option, you can select where to fetch the container image from, e.g. the docker daemon, podman, or a docker-archive tarball; see the sketch below.
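A couple of hedged examples of the source syntax (the image name and tarball path are placeholders):

```shell
# Fetch the image from podman instead of the docker daemon
$ dive nginx:latest --source podman

# Analyze an image tarball exported with `docker save`
$ dive docker-archive://my-image.tar
```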
Installation

```shell
# macOS
$ brew install dive

# Ubuntu/Debian
$ wget https://github.com/wagoodman/dive/releases/download/v0.9.2/dive_0.9.2_linux_amd64.deb
$ sudo apt install ./dive_0.9.2_linux_amd64.deb

# RHEL/CentOS
$ curl -OL https://github.com/wagoodman/dive/releases/download/v0.9.2/dive_0.9.2_linux_amd64.rpm
$ rpm -i dive_0.9.2_linux_amd64.rpm

# Arch Linux
# Available as dive in the Arch User Repository (AUR).
$ yay -S dive
```
See [Installation | wagoodman/dive](https://github.com/wagoodman/dive#installation) to learn more.

Usages
To analyze a Docker image simply run dive with an image `tag/id/digest`:
```shell
$ dive <your-image-tag>
```
or if you want to build your image then jump straight into analyzing it:
```shell
$ dive build -t <some-tag> .
```
CI Integration
Additionally you can run this in your CI pipeline to ensure you’re keeping wasted space to a minimum (this skips the UI):
```shell
$ CI=true dive <your-image>
```
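The pass/fail thresholds can be tuned. A sketch of a .dive-ci rules file, following the dive README (treat the exact values as placeholders to adjust per project):

```yaml
# .dive-ci
rules:
  # Fail if the image efficiency falls below 95%
  lowestEfficiency: 0.95
  # Fail if more than 20 MB is wasted across layers
  highestWastedBytes: 20MB
  # Fail if wasted space exceeds 20% of the image size
  highestUserWastedPercent: 0.20
```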
UI Configuration
No configuration is necessary; however, you can create a config file to override values:
```yaml
# supported options are "docker" and "podman"
container-engine: docker
# continue with analysis even if there are errors parsing the image archive
ignore-errors: false
log:
  enabled: true
  path: ./dive.log
  level: info

# Note: you can specify multiple bindings by separating values with a comma.
# Note: UI hinting is derived from the first binding
keybinding:
  # Global bindings
  quit: ctrl+c
  toggle-view: tab
  filter-files: ctrl+f, ctrl+slash

  # Layer view specific bindings
  compare-all: ctrl+a
  compare-layer: ctrl+l

diff:
  # You can change the default files shown in the filetree (right pane). All diff types are shown by default.
  hide:
    - added
    - removed
    - modified
    - unmodified

filetree:
  # The default directory-collapse state
  collapse-dir: false

  # The percentage of screen width the filetree should take on the screen (must be >0 and <1)
  pane-width: 0.5

  # Show the file attributes next to the filetree
  show-attributes: true

layer:
  # Enable showing all changes from this layer and every previous layer
  show-aggregated-changes: false
```
dive will search for configs in a few default locations, such as $XDG_CONFIG_HOME/dive/*.yaml and ~/.dive.yaml; see the dive README for the full list.
Can’t save in background: fork: Cannot allocate memory
APP log
```shell
# ${RAILS_ROOT}/log/production.log
...
MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
...
```
Redis Log - Can’t save in background: fork: Cannot allocate memory
```shell
[root@cloudolife /root]# tail -f /var/log/redis/redis.log
1573:M 14 Jan 16:56:04.011 * 1 changes in 900 seconds. Saving...
1573:M 14 Jan 16:56:04.011 # Can't save in background: fork: Cannot allocate memory
1573:M 14 Feb 16:56:10.045 * 1 changes in 900 seconds. Saving...
```
Traditionally, Linux gives you three options for what happens when a process tries to allocate more memory, controlled by the value of the vm.overcommit_memory sysctl:
The kernel gives you the memory unless it thinks you would clearly overcommit the system (mode 0, the default, ‘heuristic overcommit’).
The kernel always gives you the memory (mode 1, 'always overcommit').
The kernel refuses to give you more memory if it would take the committed address space over the commit limit (mode 2, 'strict overcommit').
(Disclaimer: all of this assumes a relatively recent 2.6 kernel.)
These settings control how Linux handles virtual memory limits.
After changing this setting from 0 to 1, Redis started persisting the data immediately, and overall performance increased dramatically.
To do so, either open the file /proc/sys/vm/overcommit_memory and change the 0 to 1, or run the following command.
Editing the file directly might not work, as the file may already be in use by the system.
```shell
$ echo 1 > /proc/sys/vm/overcommit_memory
```
Another way to do the same is to add vm.overcommit_memory = 1 to /etc/sysctl.conf and then reboot, or run the command sysctl vm.overcommit_memory=1 for it to take effect.
```shell
$ vi /etc/sysctl.conf

# /etc/sysctl.conf
vm.overcommit_memory=1
```
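To confirm the setting took effect, you can read the current value back (a quick generic check, not specific to Redis):

```shell
# Both commands should now report 1
$ sysctl vm.overcommit_memory
vm.overcommit_memory = 1
$ cat /proc/sys/vm/overcommit_memory
1
```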
Register the BrowserTracing integration when initializing Sentry:

```js
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder; use your project's DSN
  integrations: [
    // Registers and configures the Tracing integration,
    // which automatically instruments your application to monitor its
    // performance, including custom Angular routing instrumentation
    new Integrations.BrowserTracing({
      tracingOrigins: ["localhost", "https://yourserver.io/api"],
      routingInstrumentation: Sentry.routingInstrumentation,
    }),
  ],

  // Set tracesSampleRate to 1.0 to capture 100%
  // of transactions for performance monitoring.
  // We recommend adjusting this value in production
  tracesSampleRate: 1.0,
});
```
You can also configure @sentry/angular to catch any Angular-specific exceptions reported through the @angular/core/ErrorHandler provider.
@sentry/angular exports a Trace Service, Directive, and Decorators that leverage @sentry/tracing, Sentry's Tracing integration, to add Angular-related spans to transactions. The service itself tracks route changes and durations, while the directive and decorators track component initializations.
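A minimal sketch of registering the Trace Service, following the pattern in the @sentry/angular docs (the module and router names are the standard Angular ones):

```ts
import { NgModule } from "@angular/core";
import { Router } from "@angular/router";
import * as Sentry from "@sentry/angular";

@NgModule({
  // ...
  providers: [
    {
      // Let Sentry's TraceService observe Angular router events
      provide: Sentry.TraceService,
      deps: [Router],
    },
  ],
})
export class AppModule {
  // Injecting TraceService forces its instantiation,
  // so route changes are traced from application start.
  constructor(trace: Sentry.TraceService) {}
}
```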
Automatically Send Errors with ErrorHandler
@sentry/angular exports a function to instantiate an ErrorHandler provider that will automatically send JavaScript errors captured by Angular's error handler.
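A hedged sketch of wiring it up with the createErrorHandler helper from @sentry/angular (showDialog is optional):

```ts
import { NgModule, ErrorHandler } from "@angular/core";
import * as Sentry from "@sentry/angular";

@NgModule({
  // ...
  providers: [
    {
      // Route uncaught Angular errors through Sentry
      provide: ErrorHandler,
      useValue: Sentry.createErrorHandler({
        // Show the Sentry report dialog when an error is captured
        showDialog: true,
      }),
    },
  ],
})
export class AppModule {}
```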
This snippet includes an intentional error, so you can test that everything is working as soon as you set it up:
```js
myUndefinedFunction();
```
Errors triggered from within Browser DevTools are sandboxed, so they will not trigger an error handler. Place the snippet directly in your code instead.
Learn more about manually capturing an error or message in our Usage documentation.
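For reference, a minimal manual-capture sketch using the standard Sentry capture APIs:

```ts
import * as Sentry from "@sentry/angular";

try {
  myUndefinedFunction();
} catch (error) {
  // Report a caught exception to Sentry
  Sentry.captureException(error);
}

// Or send a plain informational message
Sentry.captureMessage("Something went wrong");
```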
To view and resolve the recorded error, log in to sentry.io (or your self-hosted Sentry instance) and open your project. Clicking on the error's title will open a page where you can see detailed information and mark it as resolved.