I like to keep a cheat sheet of things I regularly use, so I can read it repeatedly to help keep myself familiar with the tools. There is an always up-to-date version here.

Loftwah’s Cheatsheet


This is a repo with a bunch of stuff I use regularly.

| AWS ap-southeast-2 | AWS Guide | Canva | Cloudflare | CyberChef


| Admin Console | Cloud Platform | Drive | Gmail | Sheets | How Search Works |



# To start the Docker daemon (the old `docker -d` flag is gone):
sudo systemctl start docker
# To start a container with an interactive shell:
docker run -ti <image-name> /bin/bash
# To "shell" into a running container (docker-1.3+):
docker exec -ti <container-name> bash
# To inspect a running container:
docker inspect <container-name> (or <container-id>)
# To get the process ID for a container:
docker inspect --format '{{.State.Pid}}' <container-name-or-id>
# To list (and pretty-print) the current mounted volumes for a container:
docker inspect --format='{{json .Mounts}}' <container-id> | python3 -m json.tool
# To copy files/folders between a container and your host:
docker cp foo.txt mycontainer:/foo.txt
# To list currently running containers:
docker ps
# To list all containers:
docker ps -a
# To remove all stopped containers:
docker rm $(docker ps -qa)
# To list all images:
docker images
# To remove all untagged images:
docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
# To remove all volumes not used by at least one container:
docker volume prune
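One quoting subtlety in the untagged-image cleanup above: the awk program must be in single quotes, otherwise the shell expands `$3` before awk ever sees it. The field extraction can be sanity-checked on fake `docker images` output (the image IDs below are made up):

```shell
# Two fake rows in `docker images` layout: REPOSITORY TAG IMAGE-ID
printf '<none> <none> abc123\nubuntu latest def456\n' \
  | grep '^<none>' | awk '{print $3}'
# abc123
```

With double quotes around the awk program, `$3` would expand to an empty shell variable and awk would print the whole matching line instead of the image ID.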

Migrate image without registry

#Step1 - Save the Docker image as a tar file
docker save -o <path for generated tar file> <image name>

docker save -o c:/myfile.tar centos:7

#Step2 - copy your image to a new system with regular file transfer tools such as cp, scp or rsync (preferred for big files)

#Step3 - load the image into Docker
docker load -i <path to image tar file>

Clean up Docker before starting

docker kill $(docker ps -q)
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)

Install Docker on Amazon Linux 2

Docker basics for Amazon ECS

sudo yum update -y
sudo amazon-linux-extras install docker -y
sudo service docker start
sudo systemctl enable docker
sudo usermod -aG docker ec2-user

Install Docker on Ubuntu

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
sudo service docker start
sudo systemctl enable docker
sudo usermod -aG docker <username>

Install Docker-Compose

Note: use bash for this one, with zsh I get a parse error and I don’t know why yet.

sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose version

Shell Scripts

  • Shell Script to Install Docker on Ubuntu
set -e
#Uninstall old versions
sudo apt-get remove docker docker-engine docker.io containerd runc
#Update the apt package index:
sudo apt-get update
#Install packages to allow apt to use a repository over HTTPS:
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent
# Add docker's package signing key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add repository
sudo add-apt-repository -y \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable"
# Install latest stable docker stable version
sudo apt-get update
sudo apt-get -y install docker-ce
# Enable & start docker
sudo systemctl enable docker
sudo systemctl start docker
# add current user to the docker group to avoid using sudo when running docker
sudo usermod -a -G docker $USER
# Output current version
docker -v
  • Shell Script to Install Docker on Centos
#Get Docker Engine - Community for CentOS + docker compose
set -e
#Uninstall old versions
sudo yum remove docker docker-common docker-selinux docker-engine-selinux docker-engine docker-ce
#Update the packages:
sudo yum update -y
#Install needed packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure the docker-ce repo:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install the latest docker-ce
sudo yum install -y docker-ce
# Enable & start docker
sudo systemctl enable docker.service
sudo systemctl start docker.service
# add current user to the docker group to avoid using sudo when running docker
sudo usermod -a -G docker $(whoami)
# Output current version
docker -v
  • Shell Script to Install Docker on AWS linux
#Get Docker Engine - Community for CentOS + docker compose
set -e
#Uninstall old versions
sudo yum remove docker docker-common docker-selinux docker-engine-selinux docker-engine docker-ce
#Update the packages:
sudo yum update -y
#Install the most recent Docker Community Edition package.
sudo amazon-linux-extras install docker -y
# Enable & start docker
sudo service docker start
# add current user to the docker group to avoid using sudo when running docker
#sudo usermod -a -G docker ec2-user
sudo usermod -a -G docker $(whoami)
# Output current version
docker -v

Docker Compose

  • Shell Script to Install the latest version of docker-compose
# get latest docker compose released tag
COMPOSE_VERSION=$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep 'tag_name' | cut -d'"' -f4)
sudo curl -L "https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod a+x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
# Output the version
docker-compose -v


  • Dockerizing a simple Node.js app
FROM node:latest
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]
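Since the Dockerfile copies the local build context into the image, it's common to add a `.dockerignore` file next to it so that `node_modules` and other local clutter stay out of the image. A minimal sketch (entries are typical defaults, adjust to taste):

```
node_modules
npm-debug.log
.git
```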

Git and GitHub

Initial Git configuration

git config --global user.name "Dean Lofts"
git config --global user.email "[email protected]"


  • Remove deleted files from repo – git rm $(git ls-files --deleted)
  • Reset git repo (dangerous) – git reset --hard HEAD
  • Reset and remove untracked changes in repo – git clean -xdf
  • Ignore certificates when cloning via HTTPS – git config --global http.sslVerify false
  • Pull changes and remove stale branches – git pull --prune
  • Grab the diff of a previous version of a file – git diff HEAD@{1} ../../production.hosts
  • Grab the diff of a staged change – git diff --cached <file>
  • Undo a commit to a branch – git reset --soft HEAD~1
  • View files changed in a commit – git log --stat
  • Pull latest changes stashing changes first – git pull --autostash
  • Make an empty commit (good for CI) – git commit --allow-empty -m "Trigger notification"
  • Change remote repository URL – git remote set-url origin git://new.location
  • Fix the ".gitignore not working" issue
Update .gitignore with the folder/file name you want to ignore. You can use any one of the formats mentioned below (prefer Format1).
### Format1  ###

### Format2  ###

Commit all the changes to git, excluding the folders/files you don't want to commit (in my case node_modules).
Execute the following command to clear the cache

git rm -r --cached .

Execute the git status command; it should show node_modules and its subdirectories marked for deletion.
Now execute

git add .

git commit -m "fixed untracked files"

git push
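Assuming `git` is installed, the whole fix can be rehearsed end-to-end in a throwaway repo (the paths and file names here are made up for illustration):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "you@example.com" && git config user.name "Example"

# Accidentally commit node_modules
mkdir node_modules && echo 'lib' > node_modules/lib.js
git add . && git commit -qm "oops: node_modules tracked"

# Ignore it, clear the cache, recommit
echo 'node_modules/' > .gitignore
git rm -r -q --cached node_modules
git add . && git commit -qm "fixed untracked files"

git ls-files
# .gitignore
```

After the cache clear, `node_modules/lib.js` still exists on disk but is no longer tracked.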

Generate and auto-sign your commits with GPG

You can automatically sign your commits with a gpg key giving you a verified badge on your GitHub commits.

# Generate a key
gpg --gen-key
# See your gpg public key:
gpg --armor --export YOUR_KEY_ID
# YOUR_KEY_ID is the hash in front of `sec` in previous command. (for example sec 4096R/234FAA343232333 => key id is: 234FAA343232333)
# Set a gpg key for git:
git config --global user.signingkey your_key_id
# To sign a single commit:
git commit -S -a -m "Test a signed commit"
# Auto-sign all commits globally
git config --global commit.gpgsign true


How to install ArgoCD

Install by script

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl port-forward svc/argocd-server -n argocd 8080:443
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
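The last command above base64-decodes the `password` field of the initial admin secret. The decode step on its own is plain `base64 -d`; for example, with a placeholder value (not a real ArgoCD secret):

```shell
# Kubernetes stores secret data base64-encoded
printf 'cGFzc3dvcmQ=' | base64 -d; echo
# password
```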

Install by helm

helm repo add argo https://argoproj.github.io/argo-helm
helm install my-release argo/argo-cd


| Ubuntu Server Download | Linux Basics | Set Up SSH Keys | SSH Shortcut | Self Signed Certificate with Custom Root CA |

I like to install a few applications on top of Linux with my own custom configuration.


Install Linuxbrew by following these instructions:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"


Install zsh by following these instructions:

sudo apt install zsh -y
chsh -s $(which zsh)
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
sudo apt install fonts-powerline -y
git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting

You need to update the ~/.zshrc file to add the plugins. If you’re using a file stored somewhere else create a link to the file in your home directory with ln -s /dotfiles/.zshrc ~/.zshrc. This will create a symlink to the file so you can store it in a Git repository.



| Config Tool | Nginx Proxy Manager |

  • Check installed modules – nginx -V
  • Pretty print installed modules – nginx -V 2>&1 | xargs -n1
  • Test a configuration without reloading – nginx -t
  • Stop all nginx processes – nginx -s stop
  • Start nginx – nginx (or systemctl start nginx)
  • Restart all nginx processes – systemctl restart nginx
  • Reload nginx configuration (without restarting) – nginx -s reload

Node Version Manager

Install NVM

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
## Only use below if you need it in ~/.zshrc or something
export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
nvm install --lts

Piping Server

Piping Server transfers data sent to POST /hello or PUT /hello on to GET /hello. The path /hello can be anything, such as /mypath or /mypath/123/. A sender and receiver who specify the same path can transfer; either side can start first, and whoever starts first waits for the other.

You can also use Web UI like https://ppng.io on your browser. A more modern UI is found at https://piping-ui.org, which supports E2E encryption.

The most important thing is that the data are streamed, so you can transfer any amount of data. For example, you can pipe an infinite text stream from seq inf through the server.

Install or Run the Piping Server

docker run -p 8181:8080 nwtgck/piping-server-rust

How to use the Piping Server

Piping Server is simple. You can transfer as follows.

# Transmit
cat <filename> | curl -T - https://pipe.apptizle.io/<filename>

# Receive
curl https://pipe.apptizle.io/<filename> > <filename>


# List s3 bucket permissions and keys
aws s3api get-bucket-acl --bucket examples3bucketname
aws s3api get-object-acl --bucket examples3bucketname --key dir/file.ext
aws s3api list-objects --bucket examples3bucketname
aws s3api list-objects-v2 --bucket examples3bucketname
aws s3api get-object --bucket examples3bucketname --key dir/file.ext localfilename.ext
aws s3api put-object --bucket examples3bucketname --key dir/file.ext --body localfilename.ext

Visual Studio Code

To keep things consistent I will be using Visual Studio Code for this. You can use any text editor, but this is what the examples will expect.

Download Visual Studio Code – Mac, Linux, Windows

Visual Studio Code for the Web


Checking Ports

# Show port and PID
netstat -tulpn
# Show process and listening port
ss -ltp
# Show ports that are listening
ss -ltn
# Show real time TCP and UDP ports
ss -stplu
# Show all established connections
lsof -i
# Show listening connections
lsof -ni | grep LISTEN

General Linux

Crontab Guru Examples

# Copy the contents of a folder to an existing folder
cp -a /source/. /dest/
# Delete everything in a directory
rm /path/to/dir/*
# Remove all sub-directories and files
rm -r /path/to/dir/*
# Find and replace whole words in vim
:%s/\<oldstring\>/newstring/g
# Replace all occurrences of a string in a directory (GNU sed; on macOS use `sed -i ''`)
grep -rl "oldstring" ./ | xargs sed -i "s/oldstring/newstring/g"
# Listing Running Services Under SystemD in Linux
systemctl list-units --type=service
# Sort disk usage by most first
df -h | tail -n +2 | sort -rk5
# Check the size of a top level directory
du -h --max-depth=1 /tmp/
# Top 50 file sizes
du -ah / | sort -rh | head -n 50
# Show directory sizes (must not be in root directory)
du -sh *
# Check disk usage per directory
du -h <dir> | grep '[0-9\.]\+G'
# Look for growing directories
watch -n 10 df -ah
# Ncurses based disk usage
ncdu -q
# Exclude directories in find
find /tmp -not \( -path /tmp/dir -prune \) -type p -o -type b
# Remove files over 30 days old
find . -mtime +30 | xargs rm -rf
# Remove files older than 7 day starting with 'backup'
find . -type f -name "backup*" -mtime +7 -exec rm {} \;
# Generate generic ssh key pair
ssh-keygen -q -t rsa -f ~/.ssh/<name> -N '' -C <name>
# AWS PEM key to ssh PUB key
ssh-keygen -y -f eliarms.pem > eliarms.pub
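A note on sorting sizes: when the input carries human-readable units (as `du -h` and `df -h` produce), use GNU sort's `-h` (human-numeric) flag rather than `-n`, which misreads the K/M/G suffixes:

```shell
printf '1.5G\n200M\n3G\n900K\n' | sort -rh
# 3G
# 1.5G
# 200M
# 900K
```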


  • Look through all files in the current dir for the word "foo" – grep -R "foo" .
  • Show 10 lines of context around matches – grep -i -C 10 "invalid view source" /var/log/info.log
  • Display the line number of matches – grep -n "pattern" <file>
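The `-n` flag is easy to confirm on a few lines of sample text:

```shell
printf 'alpha\nbeta\ngamma\n' | grep -n 'beta'
# 2:beta
```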


  • Show process tree of all PIDs – ps auxwf
  • Show all process info and hierarchy (same as above) – ps -efH
  • Show orphaned processes matching 'pandora' – ps -ef | awk '$3=="1" && /pandora/ { print $2 }'
  • Show all orphaned processes (could be daemons) – ps -elf | awk '{if ($5 == 1){print $4" "$5" "$15}}'
  • Show zombie processes – ps aux | grep Z
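The orphan-hunting one-liners above key off the PPID column (`$3` in `ps -ef` output); the awk condition can be checked on canned input (the PIDs below are fake):

```shell
# Columns mimic `ps -ef`: UID PID PPID C CMD. A process re-parented
# to PID 1 is an orphan (or a daemon).
printf 'root 100 1 0 pandora-agent\nroot 200 100 0 bash\n' \
  | awk '$3=="1" && /pandora/ { print $2 }'
# 100
```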


Packetlife Cheatsheets

# Check your public IP
curl http://whatismyip.org/
curl ifconfig.me
curl icanhazip.com
# Return the IP of an interface
ifconfig en0 | grep --word-regexp inet | awk '{print $2}'
ip add show eth0 | awk '/inet/ {print $2}' | cut -d/ -f1 | head -1
ip -br a sh eth0 | awk '{ print $3 }'   # returns IP with prefix length
ip route show dev eth0 | awk '{print $7}'
hostname -I   # returns IP(s) only
# Check a domain against a specific nameserver
dig <example.com> @<ns-server>
# Get NS records for a site
dig <example.com> ns
# Check nat rules for ip redirection
iptables -nvL -t nat


# Download a file and specify a new filename.
curl http://example.com/file.zip -o new_file.zip
# Download multiple files.
curl -O URLOfFirstFile -O URLOfSecondFile
# Download all sequentially-numbered files (1-24).
curl http://example.com/pic[1-24].jpg
# Download a file, following [L]ocation redirects, and automatically [C]ontinuing (resuming) a previous file transfer:
curl -O -L -C - http://example.com/filename
# Download a file and follow redirects.
curl -L http://example.com/file
# Download a file and pass HTTP Authentication.
curl -u username:password URL
# Download a file with a Proxy.
curl -x proxyserver.server.com:PORT http://addressiwantto.access
# Download a file from FTP.
curl -u username:password -O ftp://example.com/pub/file.zip
# Resume a previously failed download.
curl -C - -o partial_file.zip http://example.com/file.zip
# Fetch only the HTTP headers from a response.
curl -I http://example.com
# Fetch your external IP and network info as JSON.
curl http://ifconfig.me/all/json
# Limit the rate of a download.
curl --limit-rate 1000B -O http://path.to.the/file
# POST to a form.
curl -F "name=user" -F "password=test" http://example.com
curl -H "Content-Type: application/json" -X POST -d '{"user":"bob","pass":"123"}' http://example.com
# Send a request with an extra header, using a custom HTTP method:
curl -H 'X-My-Header: 123' -X PUT http://example.com
# Pass a user name and password for server authentication:
curl -u myusername:mypassword http://example.com
# Pass client certificate and key for a resource, skipping certificate validation:
curl --cert client.pem --key key.pem --insecure https://example.com
# Check single port on single host
nmap -p <port> <host/IP>
# Intrusive port scan on a single host
nmap -sS <host/IP>
# Top ten port on a single host
nmap --top-ports 10 <host/IP>
# Scan from a list of targets:
nmap -iL [list.txt]
# OS detection:
nmap -O --osscan-guess [target]
# Save output to text file:
nmap -oN [output.txt] [target]
# Save output to xml file:
nmap -oX [output.xml] [target]
# Scan a specific port:
nmap -p [port] [target]
# Do an aggressive scan:
nmap -A [target]
# Speedup your scan: # -n => disable ReverseDNS # --min-rate=X => min X packets / sec
nmap -T5 --min-parallelism=50 -n --min-rate=300 [target]
# Traceroute:
nmap --traceroute [target]
# Example: Ping scan all machines on a class C network
nmap -sP 192.168.0.0/24
# Force TCP scan: -sT # Force UDP scan: -sU
# Discover DHCP information on an interface
nmap --script broadcast-dhcp-discover -e eth0
## Port Status Information
- Open: This indicates that an application is listening for connections on this port.
- Closed: This indicates that the probes were received but there is no application listening on this port.
- Filtered: This indicates that the probes were not received and the state could not be established. It also indicates that the probes are being dropped by some kind of filtering.
- Unfiltered: This indicates that the probes were received but a state could not be established.
- Open/Filtered: This indicates that the port was filtered or open but Nmap couldn’t establish the state.
- Closed/Filtered: This indicates that the port was filtered or closed but Nmap couldn’t establish the state.
## Additional Scan Types
nmap -sn: Probe only (host discovery, not port scan)
nmap -sS: SYN Scan
nmap -sT: TCP Connect Scan
nmap -sU: UDP Scan
nmap -sV: Version Scan
nmap -O: Used for OS Detection/fingerprinting
nmap --scanflags: Sets a custom list of TCP flags using `URG ACK PSH RST SYN FIN` in any order
### Nmap Scripting Engine Categories
The most common Nmap scripting engine categories:
- auth: Utilize credentials or bypass authentication on target hosts.
- broadcast: Discover hosts not included on command line by broadcasting on local network.
- brute: Attempt to guess passwords on target systems, for a variety of protocols, including http, SNMP, IAX, MySQL, VNC, etc.
- default: Scripts run automatically when -sC or -A are used.
- discovery: Try to learn more information about target hosts through public sources of information, SNMP, directory services, and more.
- dos: May cause denial of service conditions in target hosts.
- exploit: Attempt to exploit target systems.
- external: Interact with third-party systems not included in target list.
- fuzzer: Send unexpected input in network protocol fields.
- intrusive: May crash target, consume excessive resources, or otherwise impact target machines in a malicious fashion.
- malware: Look for signs of malware infection on the target hosts.
- safe: Designed not to impact target in a negative fashion.
- version: Measure the version of software or protocols on the target hosts.
- vuln: Measure whether target systems have a known vulnerability.

Password generation

  • Create a hash from password – openssl passwd -crypt <password>
  • Generate a random 8-character password (Ubuntu) – makepasswd -count 1 -minchars 8
  • Create a .passwd file with the user and random password – sudo htpasswd -c /etc/nginx/.htpasswd <user>
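`makepasswd` is Debian/Ubuntu-specific; on systems without it, `openssl rand` works as a stand-in (12 random bytes base64-encode to a 16-character string):

```shell
# Generate a random 16-character password
openssl rand -base64 12
```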


# verify if TLS 1.2 is supported
openssl s_client -connect google.com:443 -tls1_2
# Generate a new private key and Certificate Signing Request
openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key
# Generate a self-signed certificate
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt
# Generate a certificate signing request (CSR) for an existing private key
openssl req -out CSR.csr -key privateKey.key -new
# Generate a certificate signing request based on an existing certificate
openssl x509 -x509toreq -in certificate.crt -out CSR.csr -signkey privateKey.key
# Remove a passphrase from a private key
openssl rsa -in privateKey.pem -out newPrivateKey.pem
# Check a Certificate Signing Request (CSR)
openssl req -text -noout -verify -in CSR.csr
# Check a private key
openssl rsa -in privateKey.key -check
# Check a certificate
openssl x509 -in certificate.crt -text -noout
# Check a PKCS#12 file (.pfx or .p12)
openssl pkcs12 -info -in keyStore.p12
# Convert a DER file (.crt .cer .der) to PEM
openssl x509 -inform der -in certificate.cer -out certificate.pem
# Convert a PEM file to DER
openssl x509 -outform der -in certificate.pem -out certificate.der
# Convert a PKCS#12 file (.pfx .p12) containing a private key and certificates to PEM
openssl pkcs12 -in keyStore.pfx -out keyStore.pem -nodes
# Convert a PEM certificate file and a private key to PKCS#12 (.pfx .p12)
openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt -certfile CACert.crt
# Convert a .PEM file to a value that can be passed in a JSON string
awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' your_private_key.pem > output.txt
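That last awk one-liner replaces each real newline with the two-character sequence `\n`, which is what JSON expects inside a string. Its effect shows on any dummy input:

```shell
printf 'line1\nline2\n' | awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}'; echo
# line1\nline2\n
```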

Tail log with coloured output

  • grc tail -f /var/log/filename


GitHub repositories with more than 2500 stars sorted by recently updated

Test a WebSocket using cURL

curl --include \
     --no-buffer \
     --header "Connection: Upgrade" \
     --header "Upgrade: websocket" \
     --header "Host: example.com:80" \
     --header "Origin: http://example.com:80" \
     --header "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
     --header "Sec-WebSocket-Version: 13" \
     http://example.com:80/

Minimal safe Bash script template

#!/usr/bin/env bash

set -Eeuo pipefail
trap cleanup SIGINT SIGTERM ERR EXIT

script_dir=$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd -P)

usage() {
  cat <<EOF
Usage: $(basename "${BASH_SOURCE[0]}") [-h] [-v] [-f] -p param_value arg1 [arg2...]

Script description here.

Available options:

-h, --help      Print this help and exit
-v, --verbose   Print script debug info
-f, --flag      Some flag description
-p, --param     Some param description
EOF
  exit
}

cleanup() {
  trap - SIGINT SIGTERM ERR EXIT
  # script cleanup here
}

setup_colors() {
  if [[ -t 2 ]] && [[ -z "${NO_COLOR-}" ]] && [[ "${TERM-}" != "dumb" ]]; then
    NOFORMAT='\033[0m' RED='\033[0;31m' GREEN='\033[0;32m' ORANGE='\033[0;33m' BLUE='\033[0;34m' PURPLE='\033[0;35m' CYAN='\033[0;36m' YELLOW='\033[1;33m'
  else
    NOFORMAT='' RED='' GREEN='' ORANGE='' BLUE='' PURPLE='' CYAN='' YELLOW=''
  fi
}

msg() {
  echo >&2 -e "${1-}"
}

die() {
  local msg=$1
  local code=${2-1} # default exit status 1
  msg "$msg"
  exit "$code"
}

parse_params() {
  # default values of variables set from params
  flag=0
  param=''

  while :; do
    case "${1-}" in
    -h | --help) usage ;;
    -v | --verbose) set -x ;;
    --no-color) NO_COLOR=1 ;;
    -f | --flag) flag=1 ;; # example flag
    -p | --param) # example named parameter
      param="${2-}"
      shift
      ;;
    -?*) die "Unknown option: $1" ;;
    *) break ;;
    esac
    shift
  done

  args=("$@")

  # check required params and arguments
  [[ -z "${param-}" ]] && die "Missing required parameter: param"
  [[ ${#args[@]} -eq 0 ]] && die "Missing script arguments"

  return 0
}

parse_params "$@"
setup_colors

# script logic here

msg "${RED}Read parameters:${NOFORMAT}"
msg "- flag: ${flag}"
msg "- param: ${param}"
msg "- arguments: ${args[*]-}"

Docker-Compose in GitHub Action

name: Test

on:
  push:
    branches:
      - main
      - features/**
      - dependabot/**
  pull_request:
    branches:
      - main

jobs:
  test:
    timeout-minutes: 10
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Start containers
        run: docker-compose -f "docker-compose.yml" up -d --build

      - name: Install node
        uses: actions/setup-node@v2
        with:
          node-version: 14.x

      - name: Install dependencies
        run: npm install

      - name: Run tests
        run: npm run test

      - name: Stop containers
        if: always()
        run: docker-compose -f "docker-compose.yml" down

A stack data structure in JavaScript

class Stack {
    constructor() {
        this.items = []
        this.count = 0
    }

    // Add element to top of stack
    push(element) {
        this.items[this.count] = element
        console.log(`${element} added to ${this.count}`)
        this.count += 1
        return this.count - 1
    }

    // Return and remove top element in stack
    // Return undefined if stack is empty
    pop() {
        if(this.count == 0) return undefined
        let deleteItem = this.items[this.count - 1]
        this.count -= 1
        console.log(`${deleteItem} removed`)
        return deleteItem
    }

    // Check top element in stack
    peek() {
        console.log(`Top element is ${this.items[this.count - 1]}`)
        return this.items[this.count - 1]
    }

    // Check if stack is empty
    isEmpty() {
        console.log(this.count == 0 ? 'Stack is empty' : 'Stack is NOT empty')
        return this.count == 0
    }

    // Check size of stack
    size() {
        console.log(`${this.count} elements in stack`)
        return this.count
    }

    // Print elements in stack
    print() {
        let str = ''
        for(let i = 0; i < this.count; i++) {
            str += this.items[i] + ' '
        }
        return str
    }

    // Clear stack
    clear() {
        this.items = []
        this.count = 0
        console.log('Stack cleared..')
        return this.items
    }
}

const stack = new Stack()


HTTPS Request with Node.js and TypeScript

import https, { RequestOptions } from 'https';

export interface HttpRequest extends RequestOptions {
  url: string;
  body?: string;
  method: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH' | 'OPTIONS';
}

interface Response<T = Buffer | any> {
  body: T;
  statusCode: number;
}

export default async function httpRequest<T = Buffer | any>(
  options: HttpRequest,
): Promise<Response<T>> {
  const { url, body, ...rest } = options;

  if (!rest.method) {
    rest.method = 'GET';
  }

  const response = await new Promise<Response>((resolve, reject) => {
    const chunks: any[] = [];

    const request = https.request(url, rest, httpResponse => {
      httpResponse.on('error', reject);
      httpResponse.on('data', chunk => chunks.push(chunk));

      httpResponse.on('end', () => {
        let buffer = Buffer.concat(chunks);

        try {
          buffer = JSON.parse(buffer.toString());
        } catch {
          // not JSON; keep the raw buffer
        }

        resolve({
          body: buffer,
          statusCode: httpResponse.statusCode ?? 500,
        });
      });
    });

    request.on('error', reject);

    if (['POST', 'PUT', 'PATCH'].includes(rest.method) && body) {
      request.write(body);
    }

    request.end();
  });

  if (response.statusCode >= 400) {
    throw new Error(`Unable to make request in ([${rest.method}] ${url}).`);
  }

  return response as Response<T>;
}

Python function to calculate the aspect ratio of an image

def calculate_aspect(width: int, height: int) -> str:
    def gcd(a, b):
        """The GCD (greatest common divisor) is the highest number that evenly divides both width and height."""
        return a if b == 0 else gcd(b, a % b)

    r = gcd(width, height)
    x = int(width / r)
    y = int(height / r)

    return f"{x}:{y}"

calculate_aspect(1920, 1080) # '16:9'

HTML Simple Maintenance Page

<!DOCTYPE html>
<html>
<head>
<title>Site Maintenance</title>
<style>
  body {
    text-align: center;
    padding: 150px;
    font: 20px Helvetica, sans-serif;
    color: #333;
  }
  h1 {
    font-size: 50px;
  }
  article {
    display: block;
    text-align: left;
    width: 650px;
    margin: 0 auto;
  }
  a {
    color: #dc8100;
    text-decoration: none;
  }
  a:hover {
    color: #333;
    text-decoration: none;
  }
</style>
</head>
<body>
<article>
  <h1>We&rsquo;ll be back soon!</h1>
  <div>
    <p>
      Sorry for the inconvenience but we&rsquo;re performing some maintenance at
      the moment. If you need to you can always
      <a href="mailto:#">contact us</a>, otherwise we&rsquo;ll be back online
      shortly!
    </p>
    <p>&mdash; The Team</p>
  </div>
</article>
</body>
</html>

Download the latest release from GitHub

curl -s https://api.github.com/repos/jgm/pandoc/releases/latest \
| grep "browser_download_url.*deb" \
| cut -d : -f 2,3 \
| tr -d \" \
| wget -qi -
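The grep/cut/tr stages just carve the `.deb` URL out of the API's JSON. On a captured sample line (the version number here is made up), the pipeline extracts the bare URL that wget then fetches (wget doesn't mind the leading space left by `cut`):

```shell
sample='    "browser_download_url": "https://github.com/jgm/pandoc/releases/download/2.0/pandoc-2.0-amd64.deb"'
printf '%s\n' "$sample" | grep "browser_download_url.*deb" | cut -d : -f 2,3 | tr -d \"
```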

SMTP Settings for common providers

Microsoft 365

To send emails using the Office365 server, enter these details:

SMTP Host: smtp.office365.com
SMTP Port: 587
SSL Protocol: OFF
TLS Protocol: ON
SMTP Username: (your Office365 username)
SMTP Password: (your Office365 password)

POP3 Host: outlook.office365.com
POP3 Port: 995
TLS Protocol: ON
POP3 Username: (your Office365 username)
POP3 Password: (your Office365 password)

IMAP Host: outlook.office365.com
IMAP Port: 993
Encryption: SSL
IMAP Username: (your Office365 username)
IMAP Password: (your Office365 password)

Amazon SES

Just go to the docs.

Google GSuite | Workspace (why did they rename this? lol)

IMAP (imap.gmail.com):
Requires SSL: Yes
Port: 993

SMTP (smtp.gmail.com):
Requires SSL: Yes
Requires TLS: Yes (if available)
Requires Authentication: Yes (I got this off the website, I use it without auth all the time. You can set it up in Gmail, or in the Google Admin Console)
Port for SSL: 465
Port for TLS/STARTTLS: 587

Environmental Variables


# Azure

# DigitalOcean

# Dockerhub

# Facebook

# GitHub

# Google Cloud

# Gitlab

# Mailgun

# MongoDB


# Sentry

# Slack

# Square

# Stripe

# Twilio

# Twitter

# Travis CI

# HashiCorp Vault

# Vultr

Awesome (Topic)

| A11Y | Agile | Authentication | Automation Scripts | Argo | AWS | Azure Policy | Bash | Books | Business Intelligence | Chaos Engineering |Cheatsheets | CI | Cloud Native | Cloud Security | CTO | Data Science | Dataset Tools | DevOps | DevSecOps | Discord Communities | Docker | Docker Compose | eBPF | GitHub Actions | Golang | GraphQL | gRPC | Guidelines | Home Kubernetes | JavaScript Mini Projects | json | k8s Resources | Kubernetes | Linux Containers | Linux Software | Leading and managing | Naming | Network Automation | Newsletters | No login web apps | No/Low Code | Nodejs | Nodejs Security | Notebooks | OSINT | PaaS | Pentest | Personal Security Checklist | Privacy | Product Design | Productivity | Prometheus Alerts | Python | Python Scripts | Raspberry Pi | README Template | REST API | Rust | SaaS Boilerplates | Scalability | Security Hardening | Self Hosted | Shell | Scripts | Software Architecture | Stacks | Startpage | Startup | Storage | Tech Blogs | Technical Writing | Threat Detection | Tips | UUID | VSCode | WP Speed Up |

Google dork cheatsheet

Search filters

| Filter | Description | Example |
| allintext | Searches for occurrences of all the keywords given. | allintext:"keyword" |
| intext | Searches for the occurrences of keywords all at once or one at a time. | intext:"keyword" |
| inurl | Searches for a URL matching one of the keywords. | inurl:"keyword" |
| allinurl | Searches for a URL matching all the keywords in the query. | allinurl:"keyword" |
| intitle | Searches for occurrences of keywords in title all or one. | intitle:"keyword" |
| allintitle | Searches for occurrences of keywords all at a time. | allintitle:"keyword" |
| site | Specifically searches that particular site and lists all the results for that site. | site:"www.google.com" |
| filetype | Searches for a particular filetype mentioned in the query. | filetype:"pdf" |
| link | Searches for external links to pages. | link:"keyword" |
| numrange | Used to locate specific numbers in your searches. | numrange:321-325 |
| before/after | Used to search within a particular date range. | filetype:pdf & (before:2000-01-01 after:2001-01-01) |
| allinanchor (and also inanchor) | Shows sites which have the keyterms in links pointing to them, in order of the most links. | inanchor:rat |
| allinpostauthor (and also inpostauthor) | Exclusive to blog search; picks out blog posts written by specific individuals. | allinpostauthor:"keyword" |
| related | Lists web pages that are "similar" to a specified web page. | related:www.google.com |
| cache | Shows the version of the web page that Google has in its cache. | cache:www.google.com |


intext:"index of /"
Nina Simone intitle:"index.of" "parent directory" "size" "last modified" "description" I Put A Spell On You (mp4|mp3|avi|flac|aac|ape|ogg) -inurl:(jsp|php|html|aspx|htm|cf|shtml|lyrics-realm|mp3-collection) -site:.info
Bill Gates intitle:"index.of" "parent directory" "size" "last modified" "description" Microsoft (pdf|txt|epub|doc|docx) -inurl:(jsp|php|html|aspx|htm|cf|shtml|ebooks|ebook) -site:.info
parent directory DVDRip -xxx -html -htm -php -shtml -opendivx -md5 -md5sums
parent directory MP3 -xxx -html -htm -php -shtml -opendivx -md5 -md5sums
parent directory Name of Singer or album -xxx -html -htm -php -shtml -opendivx -md5 -md5sums
filetype:config inurl:web.config inurl:ftp
“Windows XP Professional” 94FBR
ext:(doc | pdf | xls | txt | ps | rtf | odt | sxw | psw | ppt | pps | xml) (intext:confidential salary | intext:"budget approved") inurl:confidential


Search Term

This operator searches for the exact phrase within speech marks only. This is ideal when the phrase you are using to search is ambiguous and could be easily confused with something else or when you’re not quite getting relevant enough results back. For example:

"Tinned Sandwiches"


This self-explanatory operator searches for a given search term OR an equivalent term.

site:facebook.com | site:twitter.com


site:facebook.com & site:twitter.com

Combining operators

(site:facebook.com | site:twitter.com) & intext:"login"
(site:facebook.com | site:twitter.com) (intext:"login")

Include results

This will order results by the number of occurrences of the keyword.

-site:facebook.com +site:facebook.*

Exclude results

site:facebook.* -site:facebook.com


Adding a tilde to a search word tells Google that you want it to bring back synonyms for the term as well. For example, entering “~set” will bring back results that include words like “configure”, “collection”, and “change”, which are all synonyms of “set”. Fun fact: “set” has the most definitions of any word in the dictionary.


Glob pattern (*)

Putting an asterisk in a search tells Google, "I don't know what goes here." It's perfect for finding half-remembered song lyrics or names of things.