Cool shell tricks

This is an aggregation (so I don't lose them 😅️) of different shell tricks I have learned.

Other resources

Other resources - others

Other resources - others - websites

Other resources - mine

Other resources - mine - blogs

Other resources - mine - videos

This is a project that I am doing at work to teach people how awesome bash is and some cool/nuanced things you can use it for (e.g. easy prompt setup, shortcuts in the bash shell, built-in commands, etc...). Also, the repo: https://github.com/ProfessionallyEvil/bash_tricks/

Bash Tricks
Alex Rodriguez shares bash techniques for beginners and more advanced operators to be effective in a Linux/Unix environment.

bash

bash - General

bash - General - process query

ps -uxq <pid>
  • I am always curious about finding more information about a specific process, and this is how you can backtrack from something like an ss -plant to identify more information; for example:
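
A minimal sketch of that backtracking (the port below is just an example; <pid> is whatever pid ss reports):

sudo ss -plant | grep ':443 '   # the owning pid shows up in the last column as pid=<pid>
ps -uxq <pid>                   # then query that pid for the full process details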

bash - General - disk space

sudo du -hs ./* 2>/dev/null | sort -h
  • Based on feedback from a comment below, I used `-hs` instead of just `-s`, so it is human readable ( sample output below ).
  • explanation
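
Illustrative output (made-up sizes and directory names), sorted smallest to largest thanks to sort -h:

4.0K    ./Desktop
256M    ./Downloads
1.2G    ./Documents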

bash - AWS

bash - AWS - aliases

bash - AWS - SSM

SSM stands for Systems Manager ( originally Amazon Simple Systems Manager )

alias aws_ssm_list="aws ec2 describe-instances --query \"Reservations[].Instances[].[InstanceId,Tags[?Key=='Name'].Value]\" --filter 'Name=instance-state-name,Values=running' | jq -r '.[][]'| grep -A 2 '^i' | sed 's/^/ /' | grep -vE '\[|--'"
  • this is a command that will list out all your running AWS EC2 instances (for whatever region you specified in your AWS CLI config), and output each instance id above its name

then chain that with this:

alias aws_ssm_join="aws ssm start-session --target "
  • this will (once you install the SSM plugin for the AWS CLI: here) let you create and join an SSM session with whatever EC2 instance id you specify afterwards (i.e. aws_ssm_join i-447e3abcef9123455)

bash - AWS - CloudFront

distri_id='<distribution_id_for_cloudfront>' ; invalidation_id="$(aws cloudfront create-invalidation --distribution-id "${distri_id}" --paths "/*" | jq -r '.Invalidation.Id')" ; watch -n 1 aws cloudfront get-invalidation --distribution-id "${distri_id}" --id  "${invalidation_id}"
  • explanation
  • pre-reqs:
  • jq
  • aws cli
  • the only thing you have to input is the CloudFront distribution id at the beginning of the one-liner, and then it will invalidate the distribution and continuously check the status of its invalidation

bash - AWS - CloudFormation

bash - AWS - CloudFormation - parameter overrides
# pull secrets from AWS secrets manager service by the name
infra_params=$( aws secretsmanager get-secret-value --secret-id "${secret_name}" )

# parse our secrets format (json) and add it to paramz array
mapfile -t paramz < <(jq -r '.SecretString' <<< "${infra_params}" | jq -r '.params[] | "\(.ParameterKey)=\(.ParameterValue)"')

# deploy cloudformation stack w/expanding the paramz array
aws cloudformation deploy --stack-name "${stack_name}" --template-file <template_name> --parameter-overrides "${paramz[@]}"
  • pre-reqs:
  • jq
  • aws cli
  • bash >= 4.4
  • We use this at work, Secure Ideas, in a bash script to take a few arguments, but I have stripped them out and left in the main logic of the script.
  • We use this to dynamically grab a secret from secrets manager (so we aren't hard coding anything into our source code)
  • then parse the jq results into an array with the mapfile bash builtin, which handles the quoting for keys/values with spaces (so you don't have to escape the spaces from the jq results, i.e. how people appear to have solved it in the past).
  • which passes the properly quoted parameters to the aws cloudformation deploy command when we expand the array that mapfile created ( example secret format below ).
  • special thanks to @OchaunM and @84d93r for working with me on this.
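
For reference, the jq above expects the SecretString to itself be JSON with a params list of ParameterKey/ParameterValue pairs; an illustrative (made-up) payload would look like this (the space in "my cool app" is exactly the case the mapfile quoting handles):

jq . <<'EOF'
{
  "params": [
    { "ParameterKey": "VpcId",   "ParameterValue": "vpc-0abc123" },
    { "ParameterKey": "AppName", "ParameterValue": "my cool app" }
  ]
}
EOF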

bash - AWS - EC2

bash - AWS - EC2 - Security Group IP formatting
grep -oP '\d+\.\d+\.\d+\.\d+(\/\d+|)' <<'EOF' | sed 's,$,&/32,g' | tr '\n' ','
8.8.8.8 - personal home IP
1.1.1.1 - alice's ip address
EOF
  • Using the logic from this shell trick, we can convert the IP addresses we get into the exact format that is needed for AWS security groups (SG). Logic breakdown:
  • grepping out IP addresses
  • reading them in from heredoc
  • pipe to sed and append the /32 to every line
  • and then comma delimit them so we can just paste them into the SGs asking for IP addresses for each service ( example output below ).
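
With the heredoc input above, that produces something like the following (note the trailing comma, which you can drop before pasting):

8.8.8.8/32,1.1.1.1/32,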

bash - Scripting

bash - Scripting - misc

mapfile -t env_vars <<< "$(env | cut -d '=' -f 1 | grep -vE '^(PATH|SHELL|HOME|PWD|USER)')"
unset "${env_vars[@]}"

so...this command is something I thought of to try to avoid colliding variables (yes, I know in theory environment variables shouldn't collide with script variables, because environment variables should be all caps while script variables should be lowercase), but it got me thinking...

How could I wipe out all my environment variables except ones that are expected by a script...

so, that is what the code snippet above does...it wipes out all the environment variables that are stored in the array that is created ( read more as to why I chose mapfile here ), and the array excludes all the environment variables that are listed in that grep command ( check out the command that makes the array here (at explainshell) ).
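
If a script does expect a few extra variables, just extend the grep's exclusion list (AWS_PROFILE below is only an example of a variable you might want to keep):

mapfile -t env_vars <<< "$(env | cut -d '=' -f 1 | grep -vE '^(PATH|SHELL|HOME|PWD|USER|AWS_PROFILE)')"
unset "${env_vars[@]}"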

bash - Scripting - bash's "main"

function main(){
  your_function_here
}

# https://elrey.casa/bash/scripting/main
if [[ "${0}" = "${BASH_SOURCE[0]:-bash}" ]] ; then
  main "${@}"
fi

This is bash's equivalent to python's main function. It is really helpful if you want to source the shell script and use its functions inside of another script, or you just want to run the script directly. Without this guard, one of two situations will happen:

  1. everything will just execute when you source the script, instead of allowing you to re-use its functions
  2. or you won't be able to execute the script directly, because there is nothing initiating all the functions inside of it.

The :-bash fallback was added to allow you to curl the script straight into bash and still have main run ( check out how to do this securely here ).

Check out bash - Scripting - source - set safety for more info on how to do this without affecting your shell environment.
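
A quick sketch of both ways to use a script built this way (myscript.sh is hypothetical):

# executed directly: the guard is true, so main runs
./myscript.sh

# sourced: the guard is false, nothing runs, and the functions are just loaded for you to call
source ./myscript.sh
your_function_here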

credit & more robust example

bash - Scripting - source

bash - Scripting - source - set safety
source <(grep -v '^set' <scripts_file.sh>)

so, this is how you can source a script without it affecting the shell settings you have configured with set in your current shell.
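
For example (helpers.sh here is a hypothetical script that starts with set -euo pipefail):

source <(grep -v '^set' helpers.sh)   # its functions get loaded, but your shell's set options stay untouched
set -o | grep pipefail                # sanity check that your own shell options are still what you expect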

bash - Scripting - grep

bash - Scripting - grep - ip addresses
grep -oP '\d+\.\d+\.\d+\.\d+(\/\d+|)' file_with_ips
bash - Scripting - grep - heredoc w/ip addresses
grep -oP '\d+\.\d+\.\d+\.\d+(\/\d+|)' <<'EOF'
8.8.8.8 - personal home IP
1.1.1.1 - alice's ip address
EOF
bash - Scripting - grep - ip addresses + sort + uniq
grep -oP '\d+\.\d+\.\d+\.\d+(\/\d+|)' file_with_ips | sort -ut '.' -k1,1n -k2,2n -k3,3n -k4,4n
bash - Scripting - grep - no lines with ip addresses
grep -vP '\d+\.\d+\.\d+\.\d+(\/\d+|)' file_with_ips
bash - Scripting - grep - no lines with ip addresses + only domains
grep -vP '\d+\.\d+\.\d+\.\d+(\/\d+|)' file_with_ips | grep -oP '^(\s+|)(https://|)[\w\-\.]+'

bash - Scripting - nmap

bash - Scripting - nmap - CIDR calculations...
nmap -iL ip_ranges -sL -n | grep report | awk '{print $5}' | tee ip_addresses
  • explanation

    • so, I like automation, and if you have ever been handed a list of ip ranges by a client to do testing on, and they say

    Hit all those ranges except these subsets...

    then you know the struggle of trying to exclude ip addresses...well, no more...

    • nmap is awesome! as you can see from the explainshell link above, there is an option to do a -sL (List Scan), which essentially just lists out all the ip addresses that you would scan.
    • so, the command above by itself is awesome for outputting ip addresses without you having to calculate weird CIDRs, but the following command can also automatically exclude IP addresses that you provide in a file ( a tiny worked example follows ):
      nmap -iL ip_ranges -sL -n --excludefile exclude.txt | grep report | awk '{print $5}'
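
      A tiny worked example (made-up ranges):

      printf '10.0.0.0/30\n' > ip_ranges
      printf '10.0.0.2\n'    > exclude.txt
      nmap -iL ip_ranges -sL -n --excludefile exclude.txt | grep report | awk '{print $5}'
      # 10.0.0.0
      # 10.0.0.1
      # 10.0.0.3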
      

bash - Scripting - zfs

bash - Scripting - zfs - deleting zfs-auto-snapshot snapshots

I am sure there is a better way to do this, and I will figure that out eventually, but for now, when I need to make sure I clear out enough of the zfs-auto-snapshot snapshots, I run this one-liner/script:

for pool in $(zpool list -Hg -o name) ; do
  log_file="${HOME}/deletion-${pool}.log"
  rm -f "${log_file}" &&
    for i in $(zfs list -t snapshot | grep "^${pool}" | grep 'zfs-auto-snap'| awk '{print $1}') ; do
      sudo zfs destroy -v "$i" | tee -a "${log_file}"
    done
done
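
For context, zfs-auto-snapshot typically names its snapshots with a zfs-auto-snap_<label>-<timestamp> suffix, which is what that grep is matching; for example (pool/dataset names are made up):

zfs list -t snapshot -o name | grep 'zfs-auto-snap' | head -n 2
# rpool/home@zfs-auto-snap_hourly-2024-01-01-0000
# rpool/home@zfs-auto-snap_daily-2024-01-01-0000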

bash - Scripting - Network Manager

If you like this section you should check out my other blog post that enables random wifi MAC addresses through Network Manager: https://blog.elreydetoda.site/ubuntu-install/#networkmanager

bash - Scripting - Network Manager - Disable autoconnect

Read more here about what device probing is, but the following script is used to disable auto connect for all your current wireless network profiles that are in Network Manager.

NOTE: If you are running zsh (i.e. oh-my-zsh) then you need to drop down to a bash shell for running this script. Even running emulate -L sh doesn't properly recognize the mapfile command.

mapfile -t networks_array < <(nmcli -t -e no -f NAME,TYPE connection show | grep '802-11-wireless' | rev |cut -d ':' -f 2- | rev) &&
  for connection in "${networks_array[@]}" ; do
    printf 'currently modifying this profile: %s\n' "${connection}" && 
      sudo nmcli connection modify "${connection}" connection.autoconnect false
  done &&
    sudo systemctl restart NetworkManager
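
Afterwards, a quick sanity check (the AUTOCONNECT column should now show no for your wireless profiles):

nmcli -f NAME,TYPE,AUTOCONNECT connection show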

bash - Scripting - Modified script "hardening"

# https://elrey.casa/bash/scripting/harden
set -${-//[sc]/}eu${DEBUG:+xv}o pipefail
  • srcs
  • explanations:
    • this explains most of the options: https://explainshell.com/explain?cmd=set+-uexvo+pipefail
    • the $- is just saying apply whatever is currently set for your shell environment (apparently according to this: https://ss64.com/bash/set.html)
    • the last thing that isn't explained is the ${DEBUG:+ part, which means that if the environment variable DEBUG exists and is not empty then add the vx flags which are used for debugging things.
      • this means you can dynamically debug different scripts based on your environment variables set (i.e. DEBUG) instead of going in and commenting out that part when you want to debug (what I had been doing for a while now... 😅️)
  • takeaways
    • I put that command at the top of all my scripts now (after the shebang); it helps make you aware sooner of when your scripts are messing up, so you can catch issues with them before declaring them "ready for production" ( see the sketch at the end of this section )

    • I also created an alias of: alias dbgz="export DEBUG='true'" so I can just prepend that alias before my script like so: dbgz; <myscript> and it will "activate" the debugging part of my script.

    • lastly another example of using the debug functionality for a script would be something like this:

      DEBUG=true <myscript.sh>
      
      • this helps you not have to export DEBUG before you run the script
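
Putting it all together, the top of one of my scripts ends up looking roughly like this (a minimal sketch):

#!/usr/bin/env bash

# https://elrey.casa/bash/scripting/harden
set -${-//[sc]/}eu${DEBUG:+xv}o pipefail

# ... rest of the script ...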

bash - Scripting - dependency installation

# https://elrey.casa/bash/scripting/deps_check
function deps_install() {

  install_cmd=()

  if [[ "${EUID}" -ne 0 ]]; then
    install_cmd+=('sudo')
  fi

  # shellcheck disable=SC1091
  ID="$( source /etc/os-release ; [ -n "${ID_LIKE:-}" ] && echo "${ID_LIKE}" || echo "${ID:-}" )"
  case "${ID}" in
    *debian*)
      packages=( 'handbrake-cli|HandBrakeCLI' )
      package_manager='apt-get'
      package_manager_install_cmd=('install')
      ;;
    alpine)
      packages=()
      package_manager='apk'
      package_manager_install_cmd=('--update' '--no-cache' 'add')
      ;;
    *rhel*)
      packages=()
      package_manager='dnf'
      package_manager_install_cmd=('install')
      ;;
    *)
      echo "This script doesn't officially support your distro"
      exit 1
  esac

  need_to_install=''
  needs=()
  for package in "${packages[@]}"; do
    bin_provided="${package##*|}"
    package_name="${package%%|*}"
    if ! command -v "${bin_provided:-${package_name}}" > /dev/null ; then
      needs+=("${package_name}")
      need_to_install='true'
    fi
  done

  install_cmd=("${install_cmd[@]}" "${package_manager}" "${package_manager_install_cmd[@]}")

  if [[ -n "${need_to_install}" ]]; then
    printf 'need to install: %s\n' "${needs[@]}"
    printf '\nusing this command to install it: %s %s\n' "${install_cmd[*]}" "${needs[*]}"
    "${install_cmd[@]}" "${needs[@]}"
  fi

}
  • Used to list out your packages (depending on the platform), check if they are already installed, and install them based on your distro's package manager and whether you are the root user or not (to decide if sudo is needed).
  • This is a longer one, so I will get around to explaining it eventually... 😁
  • One thing of note is that the package listing accepts any of these three formats and they will all work: "jq", "jq|" or "jq|jq" ( see the sketch after this list ).
    1. list only the package name
    2. list the package name followed by | with the binary name left empty
    3. list <package>|<binary_name>, which is used when the binary you want to check for is named differently than the package ( i.e. handbrake-cli provides HandBrakeCLI )
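
A hedged usage sketch ( the package names below are just examples ): the packages array inside the matching case arm can mix all three formats, and then you call the function early in your script:

packages=( 'jq' 'curl|' 'handbrake-cli|HandBrakeCLI' )   # formats 1, 2, and 3 mixed; this line lives inside the case arm for your distro
deps_install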

bash - Scripting - privileged command

# https://elrey.casa/bash/scripting/priv_cmd
function privileged_cmd(){
  if [ "$EUID" -eq 0 ] ; then
    "${@}"
  else
    sudo "${@}"
  fi
}
  • checks if the user is root, and if not runs the passed command with sudo, else executes the command as is
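
For instance ( apt-get update here is just an example command ):

privileged_cmd apt-get update   # becomes "sudo apt-get update" for a normal user, plain "apt-get update" when already root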

bash - Scripting - Y/n prompt

# https://elrey.casa/bash/scripting/yn-prompt
read -rN 1 -p 'Does this look good?[Y/n]: '
[[ "${REPLY,,}" == "n" ]] && echo "answer was no" || (echo "yes"; echo "really yes...")
  • the read command:
    • takes user input
    • -r is a best practice: https://www.shellcheck.net/wiki/SC2162
    • -N 1 means only one character response is allowed
    • -p 'Does this look good?[Y/n]: ' is the prompt provided to the user
    • automatically saves its response to the REPLY variable if not provided a name
  • the rest:
    • only doing [[ ]] is shorthand for an if statement, but you could do a full if [[ ]] ; then as well ( expanded version below )
    • "${REPLY,,}" - this is variable substitution, and I can't remember where I learned it but ,, lowercases the variable output while ^^ uppercases it
    • "${REPLY,,}" == "n" checks to see if the answer was an n or N (since reply is lowercased)
    • && echo "answer was no" if the answer was an n then do this echo
    • || (echo "yes"; echo "really yes...") else do these chain of commands inside the ()
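
The same thing written out as a full if statement, for readability:

read -rN 1 -p 'Does this look good?[Y/n]: '
if [[ "${REPLY,,}" == "n" ]] ; then
  echo "answer was no"
else
  echo "yes"
  echo "really yes..."
fi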

bash - misc

bash - misc - config grep

grep -vP '^((\s+|)(\/\/|#)|$)'
  • explaination:
    • so, whenever you have a config file that is riddled with comments ( as all good configs should be, so you know what each line is doing ), this lets you filter out the comment and blank lines and see only the lines that actually affect you ( example below ).
    • grep flags explanation
    • ^((\s+|)(\/\/|#)|$): https://regex101.com/r/VsFGBQ/
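
For example, to see only the active lines of your sshd config ( any config file works the same way ):

grep -vP '^((\s+|)(\/\/|#)|$)' /etc/ssh/sshd_config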

bash - misc - ls glob

ls ./*
  • explaination:
    • so, sometimes you can come across weird filenames, especially when doing wargames, so this is a generally useful shell feature that can help handle those weird characters.
  • summary:
    • ./ - directing the location for bash to expand ( your current directory )
    • * uses bash's globbing feature which just expands to all files in the current directory
  • more info:
    • Check out the asterisk section of this article for more info

sed

sed - parsing for urls

curl -fsSL "${current_terraform_url}" | sed -n '/href=".*linux_amd64.zip"/p' | awk -F'["]' '{print $10}'

Vagrant

Vagrant - Aliases

If you have used vagrant at all, or know how I love to automate things 😀️, then you know typing vagrant up, vagrant ssh, and vagrant destroy can get a little cumbersome...heck, I got tired of typing them while writing this blog post up...🙃️

So, I created a few shell aliases for chaining vagrant commands together, and I even have an alias for printing out my aliases 😀️. One of the other notable aliases is the v-config alias, which is used to grep out all comments and blank lines of a Vagrantfile so that way you can see what your current Vagrantfile's configuration is.

alias v-newz_conn='vagrant destroy -f && vagrant up && vagrant ssh'
alias v-newz_up_conn='vagrant destroy -f && vagrant box update && vagrant up && vagrant ssh'
alias v-newz_up_snap_conn='vagrant destroy -f && vagrant box update && vagrant up && vagrant halt && vagrant snapshot push && vagrant up && vagrant ssh'
alias v-newz_snap_conn='vagrant destroy -f && vagrant up && vagrant halt && vagrant snapshot push && vagrant up && vagrant ssh'
alias v-snap_conn='vagrant halt && vagrant snapshot push && vagrant up && vagrant ssh'
alias v-reboot='vagrant halt && vagrant up'
alias v-reboot_conn='vagrant halt && vagrant up && vagrant ssh'
alias v-connect='vagrant up && vagrant ssh'
alias v-revert_conn='vagrant snapshot pop --no-delete && vagrant ssh'
alias v-revert_prov_conn='vagrant snapshot pop --no-delete --provision && vagrant ssh'
alias v-config="grep -vP '^\s+#|^#|^$' Vagrantfile"
alias v-aliases="grep '^alias v-' ~/.zshrc"

Another trick to help chain even my aliases together: if you see an alias that ends with conn or connect (i.e. one that finishes with vagrant ssh), you can pass -c exit to keep the command chain going instead of ssh'ing into the vagrant box.

A great example of this is when I revert a box to a snapshot (i.e. v-revert_conn) and then want to provision it with a specific provisioner I have declared in the Vagrantfile. Like this command here:

v-revert_conn -c exit && vagrant provision --provision-with static-analysis

Vagrant - Functions

Sometimes when you want to do something a little more complex than an alias you have to put the command in a shell function inside your rc file.

So, I currently have this vagrant function that I use to iterate over all my running vagrant boxes to stop all of them. This is really useful for two situations:

  1. I am getting ready to reboot/turn off my machine, and while I know that the vms should be fine, the sysadmin side of me says turn them off...it doesn't hurt
  2. If I want to start using a different hypervisor. Since I run Linux, I have both VirtualBox and QEMU/Libvirt (combined with virt-manager), which is a more Linux-native hypervisor, but you can't run both of those hypervisors at the same time. So, I will stop all my VirtualBox VMs to then switch over to libvirt.
v-stop_all () {
	for i in $(vagrant global-status | grep -oP '\srunning\s+/.*' | cut -d ' ' -f 4-)
	do
		pushd "$i" || return 1 && vagrant halt && popd
	done
}

ansible-playbooks

I promise this has to do with shell scripting 😁️: it uses bash scripting to help extend your ansible playbooks, giving you more enrichment through bash/shell scripting.

ansible-playbooks - semver sorting

here is the best explanation for what the below code does and why: https://github.com/diodonfrost/ansible-role-vagrant/pull/1

ansible-semver.yml - an ansible playbook that does semantic version sorting ( see the PR linked above for more detail )

recording example here ( full screen to make it look right )