init - PoC

This commit is contained in:
Caffeine Fueled 2025-10-27 20:12:00 +01:00
commit 3484b45045
Signed by: cf7
GPG key ID: CA295D643074C68C
146 changed files with 10657 additions and 0 deletions

View file

@ -0,0 +1,13 @@
# CyberChef - How to remove empty lines
So, since I am too stupid to remove empty lines easily, I present to you my overcomplicated solution.
Search for the `Find / Replace` in the operations and replace `^(?:[\t ]*(?:\r?\n|\r))+` with nothing.
![screenshot of the cyberchef interface with the described workflow](/images/blog/cyberchef-remove-empty-lines.png)
It uses a Perl-style regex to find the line breaks that indicate an empty line. Regex is still magic to me.
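For what it's worth, the pattern breaks down roughly like this, and the same cleanup works in a shell as well (a sketch, assuming GNU sed is available):
```bash
# ^                 start of a line
# (?:[\t ]*         any number of tabs or spaces ...
# (?:\r?\n|\r))+    ... followed by one or more Unix, Windows, or old-Mac line endings
# Roughly the same cleanup on the command line:
sed -E '/^[[:space:]]*$/d' input.txt
```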
[Reference](https://www.ultraedit.com/support/tutorials-power-tips/ultraedit/remove-blank-lines.html)
---

View file

@ -0,0 +1,19 @@
# Nginx - check your public IP
Sometimes you just need your public IP, and nothing more. A simple config change in nginx can offer you exactly this.
Add the following `location` segment to the `server` segment of your choice. *You could replace `/ip` with another term*.
`location /ip { default_type text/plain; return 200 $remote_addr;}`
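In the context of a full `server` segment, it could look like this (a minimal sketch; the server name is a placeholder and TLS is left out):
```
server {
    listen 80;
    server_name example.com;

    location /ip {
        default_type text/plain;
        return 200 $remote_addr;
    }
}
```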
Now, if you visit the destination of the `server` segment with the subdirectory `/ip`, you'll find your IP. Try it out and visit [https://brrl.net/ip](https://brrl.net/ip).
The neat part is that it works well in the CLI too:
: `curl brrl.net/ip`
: `wget -qO- brrl.net/ip`
: PowerShell
: `Invoke-RestMethod brrl.net/ip` or `irm brrl.net/ip`
Depending on your setup, some tweaking may be necessary with regard to TLS, redirects, and so on.
---

View file

@ -0,0 +1,21 @@
# Tmux - synchronize the input of all panes within a window
So, you've got a tmux window with 10 panes, and you want to clear the panes, switch to a different directory, stop multiple processes, and so on. There is a simple way to do it:
`Prefix` + `:set synchronize-panes on`
*Just in case: the default `prefix` is `CTRL` + `b`.*
The input of all panes within a window will be synchronized until you turn it off again:
`Prefix` + `:set synchronize-panes off`
### Create keybinding
If you need this function often, you could create a simple keybinding for it. For example, if you want to bind it to `Prefix` + `e`, add this to your config file:
`bind e set-window-option synchronize-panes`
Load this config with `Prefix` + `:source-file ~/.tmux.conf` (or wherever your config file is located) and you can turn pane synchronization on and off with `Prefix` + `e`.
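If you want visual feedback, the binding can also display the current state after toggling (a sketch; assumes a reasonably recent tmux):
```
bind e set-window-option synchronize-panes \; display-message "synchronize-panes: #{?pane_synchronized,on,off}"
```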
---

View file

@ -0,0 +1,128 @@
# Getting started with tmux
Tmux is a terminal multiplexer. It allows you to work with multiple terminal sessions at once.
## Installation
It is easy to install, and there are many guides already out there, so I won't cover it in this blog post.
## Tmux terminology
So, let us start with the basics.
`tmux server (program) > session > window > pane`
The tmux server starts after running tmux. You can work on the attached sessions or detach them so they run in the background. Every server can have multiple sessions, every session can have multiple windows, and we can split a window into multiple panes. In the end, a pane is just a normal terminal.
![tmux-overview](/images/blog/tmux-primer-1.png)
There are a lot of use cases for it. Tmux makes it easy to separate projects into different windows or sessions.
#### The prefix or lead and meta-key
The default prefix (sometimes called 'lead') is `CTRL` + `b` (or `C-b`), and it is usually the start of a tmux shortcut or command. When you see something like `Prefix` + `c`, press `CTRL` + `b`, and then `c`. I prefer `CTRL` + `s`, for example. I'll explain how to change it in the next section.
As a side note: you'll find some shortcuts with an `M` in them. This is the `meta` key. It is `ALT` on Linux, I think `CMD` on macOS, and sometimes even `ESC`. The `meta` key is rarely used, but worth looking up.
#### The config file
You can temporarily change your tmux config by entering the setting:
`Prefix` + `:set -g prefix C-s`
This would change the `Prefix` as described before. If you want to make changes permanent, edit the config file. On Linux, the config file is usually in the home directory of the user: `~/.tmux.conf`. If there is no config file, simply create it and restart tmux - or reload it (I'll show how at the end of this section). Just put `set -g prefix C-s` into the config file and tmux will use it after the restart.
There are many ways to customize tmux. Some examples: Vim-like bindings for pane movements, enabling mouse support, setting keyboard shortcuts, and so on.
The easiest way to reload the config file after changes is to use the following tmux command: `Prefix` + `:source-file ~/.tmux.conf` *(change the path accordingly)*.
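A small example config could look like this (a sketch - the prefix change and mouse support are just illustrations, not recommendations):
```
# ~/.tmux.conf
set -g prefix C-s                 # use CTRL + s as the prefix
unbind C-b                        # free the old default prefix
set -g mouse on                   # enable mouse support (tmux 2.1 and newer)
bind r source-file ~/.tmux.conf \; display-message "config reloaded"
```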
## Working with panes
As mentioned before, you can split a window into multiple panes. You can split the window vertically or horizontally as you wish and change it as much as you want. I won't cover everything in this post, but I'll show you the basics.
Split horizontally:
: `Prefix` + `%`
Split vertically:
: `Prefix` + `"`
Move to another pane:
: `Prefix` + `ARROW KEY`
Convert the current pane into a new window:
: `Prefix` + `!`
Close current pane:
: `Prefix` + `x`
There are shortcuts for resizing, moving panes around, and so on, but those aren't that important for this primer.
Side note: I just wrote a separate post about sending input to all panes within a window. Feel free to check it out [here](https://ittavern.com/tmux-synchronize-the-input-of-all-panes-within-a-window/).
## Working with windows
I prefer to separate my projects with windows instead of sessions, but that is my personal preference.
Create a new window:
: `Prefix` + `c`
Rename current window:
: `Prefix` + `,`
Close current window:
: `Prefix` + `&`
Switch to next window:
: `Prefix` + `n`
Switch to previous window:
: `Prefix` + `p`
Switch to window by number:
: `Prefix` + `0`-`9`
## Working with sessions
Let me start with a shortcut that I just learned recently.
Overview:
: `Prefix` + `w`
This gives you a quick overview of all sessions and windows and lets you switch quickly.
Show all sessions:
: `tmux ls`
: `Prefix` + `s`
Create new session:
: `tmux new -s new-session`
: `:new -s new-session`
Rename session:
: `Prefix` + `$`
Kill the session:
: `:kill-session`
Detach session (will be active in the background):
: `Prefix` + `d`
Close a session:
: `tmux kill-session -t old-session`
Attach session:
: `tmux attach -t old-session`
Move to next session:
: `Prefix` + `)`
Move to previous session:
: `Prefix` + `(`
# Conclusion
Hopefully, this post helps you get started with tmux. I'll cover more topics and features of tmux in the future.
Any notes or questions? - Feel free to reach out.
---

View file

@ -0,0 +1,62 @@
# Nginx - simple permanent or temporary redirects
## Temporary or permanent redirect
First, you have to decide whether the redirect will be permanent (`301`) or just temporary (`302`). If you are uncertain, just pick temporary and switch later.
Use cases from my understanding:
Permanent `301` redirects:
: switching to another domain
: merging multiple domains
: switching from HTTP to HTTPS
: better SEO experience
Temporary `302` redirect:
: testing (A/B testing, etc)
: single redirects to another domain
: redirect to a maintenance page
: redirect traffic for load balancing
Both do the same thing, but each has its use cases. Mixing them up could cause problems with search engine indexes, SEO, being wrongly flagged as a spammer, and so on.
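For example, a permanent redirect from HTTP to HTTPS - one of the `301` use cases above - is usually just a one-liner (a sketch; the server name is a placeholder):
```
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```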
## Simple redirects in nginx
For this example, I am going to use temporary `302` redirects.
#### Simple redirect of a sub-domain to a single URL
```
server {
listen 80;
listen 443;
server_name test2.brrl.net;
location / {
return 302 https://www.youtube.com/watch?v=dQw4w9WgXcQ;
}
}
```
That is a simple redirect of the root of the sub-domain. Try it out: [https://test2.brrl.net](https://test2.brrl.net).
If you want to create a redirection of a subdirectory like `/status`, simply change it accordingly:
```
server {
listen 80;
listen 443;
server_name brrl.net;
location /status {
return 302 https://status.brrl.net/status/overview;
}
}
```
With this config block, only the subdirectory `/status` would be redirected. For example: [https://brrl.net/status](https://brrl.net/status) redirects to [https://status.brrl.net/status/overview](https://status.brrl.net/status/overview).
#### Other redirects
There are many more forms of redirects, but I am not familiar enough with them to write about that yet. I might add more redirects later on, but I'll have to test them beforehand.
---

View file

@ -0,0 +1,141 @@
# My use cases for CyberChef
### Formatting MAC addresses
Cisco seems to require a different format for every solution they have. I use this almost daily to change the format of one or multiple MAC addresses.
Input:
`aa-aa-aa-bb-bb-bb`
Output:
```
aaaaaabbbbbb
AAAAAABBBBBB
aa-aa-aa-bb-bb-bb
AA-AA-AA-BB-BB-BB
aa:aa:aa:bb:bb:bb
AA:AA:AA:BB:BB:BB
aaaa.aabb.bbbb
AAAA.AABB.BBBB
```
[Try it yourself](https://baked.brrl.net/#recipe=Format_MAC_addresses('Both',true,true,true,true,false)&input=YWEtYWEtYWEtYmItYmItYmI)
**Tip**: the easiest way to convert multiple MAC addresses is to choose the desired format, input one MAC address per line, and remove the empty lines with a `Find / Replace` operation using the regex search `^(?:[\t ]*(?:\r?\n|\r))+`. For more information, visit [this post](https://ittavern.com/cyberchef-how-to-remove-empty-lines/).
### Looking up Linux permissions
A simple way to switch between the various representations and see the individual permissions.
Input:
`-rw-r--r--`
Output:
```
Textual representation: -rw-r--r--
Octal representation: 0644
File type: Regular file
+---------+-------+-------+-------+
| | User | Group | Other |
+---------+-------+-------+-------+
| Read | X | X | X |
+---------+-------+-------+-------+
| Write | X | | |
+---------+-------+-------+-------+
| Execute | | | |
+---------+-------+-------+-------+
```
[Try it yourself](https://baked.brrl.net/#recipe=Parse_UNIX_file_permissions\(\)&input=LXJ3LXItLXItLQo)
### Working with IP subnets
This function makes my life easier. It shows me the general network information and the range of IP addresses for a subnet.
Input:
`10.121.10.8/28`
Output:
```
Network: 10.121.10.8
CIDR: 28
Mask: 255.255.255.240
Range: 10.121.10.0 - 10.121.10.15
Total addresses in range: 16
10.121.10.0
10.121.10.1
10.121.10.2
10.121.10.3
10.121.10.4
[...]
```
[Try it yourself](https://baked.brrl.net/#recipe=Parse_IP_range\(true,true,false\)&input=MTAuMTIxLjEwLjgvMjg)
### Converting blog titles to a URL-friendly format
I've created a small 'Recipe' to convert my titles into a URL- and filename-friendly format.
Input:
`My use cases for CyberChef`
Output:
`my-use-cases-for-cyberchef`
[Try it yourself](https://baked.brrl.net/#recipe=Find_/_Replace\(%7B'option':'Regex','string':'-'%7D,'',true,false,true,false\)Find_/_Replace\(%7B'option':'Regex','string':'%20%20'%7D,'%20',true,false,true,false\)Find_/_Replace\(%7B'option':'Regex','string':'%5C%5C.'%7D,'',true,false,true,false\)Find_/_Replace\(%7B'option':'Regex','string':'%20'%7D,'-',true,false,true,false\)To_Lower_case\(\)&input=TXkgdXNlIGNhc2VzIGZvciBDeWJlckNoZWY)
### Finding the difference in text
I only use this function for small configuration files or texts. For larger ones, I prefer vimdiff or Notepad++.
[Try it yourself](https://baked.brrl.net/#recipe=Diff\('%5C%5Cn%5C%5Cn','Character',true,true,false,true\)&input=SSBzd2VhciwgdGhlcmUgaXMgbm90aGluZyBtaXNzaW5nLgoKSSBzd2VhciwgdGhlcmUgaXMgbWlzc2luZy4)
### Changing chars to upper/lower case
I rarely use this function, but it has its use cases. Some passwords contain many characters that can be difficult to differentiate, like `l`, `I`, `1`, `O`, `0`, and so on. I tend to use this feature if I only have 1 more try left, just to make sure.
And I know that copy+paste exists, but that isn't always an option.
[Try it yourself](https://baked.brrl.net/#recipe=To_Upper_case\('All'\)&input=VEhsU18xU19hX1A0UzV3b3JE)
### Adding or removing line numbers
This is self-explanatory. I do not need this feature that often, but it comes in handy from time to time.
### Hashing things
If you need a hash of a string or file, CyberChef offers many algorithms. SHA, MD, bcrypt, and so on.
[Try it yourself](https://baked.brrl.net/#recipe=SHA2\('512',64,160\)&input=VEhsU18xU19hX1A0UzV3b3JE)
### Generating QR codes
I use it monthly to generate the QR code for our guest WLAN. Add `WIFI:S:MySSID;T:WPA;P:TH1S_P455W0RD;;` into the input field and it generates the QR code for you. I regularly use it for URLs too.
[Try it yourself](https://baked.brrl.net/#recipe=Generate_QR_Code\('PNG',8,2,'Medium'\)&input=V0lGSTpTOk15U1NJRDtUOldQQTtQOlRIMVNfUDQ1NVcwUkQ7Ow)
### Generating dummy texts / Lorem Ipsum
Really helpful to generate dummy text for all kinds of mock-ups.
[Try it yourself](https://baked.brrl.net/#recipe=Generate_Lorem_Ipsum\(3,'Paragraphs'\))
### Various utilities
I won't go into too much detail since it is fairly self-explanatory. Sorting lines, converting masses or distances, removing whitespace, Find/Replace, finding unique strings, converting hexdumps, converting date/time formats, and many more.
## Conclusion
CyberChef has become a great tool with many use cases. It is more of a quick-and-dirty solution, but that is often all I need.
The source code can be found [here](https://github.com/gchq/CyberChef).
---

View file

@ -0,0 +1,22 @@
# Tmux - reload .tmux.conf configuration file
Restarting the tmux server every time you change the configuration is tedious and unnecessary.
From the shell:
: `tmux source-file ~/.tmux.conf`
As a tmux command:
: `Prefix` + `:source-file ~/.tmux.conf`
: *Just in case: the default prefix is `CTRL` + `b`*
Those methods reload the tmux configuration without affecting the sessions or windows.
**Info**: some changes still require a restart of the tmux server. If you were to remove a key bind, you would need to restart the tmux server or explicitly unbind the key.
The server stops running if all sessions are closed or you kill it with `tmux kill-server` or kill the process with `pkill`/`kill`. `tmux kill-server` will send a `SIGTERM`, whereas `tmux kill-pane/kill-window/kill-session` will send a `SIGHUP`.
#### **Side note** on the location of the configuration file:
Tmux looks in `/etc/tmux.conf` for a system-wide configuration file, and then for a configuration file in the current user's home directory, e.g. `~/.tmux.conf`. If these files don't exist, tmux uses the default settings.
---

View file

@ -0,0 +1,21 @@
# Podman / Docker - expose port only to the localhost of the host machine
There are good reasons to expose a port of a docker container only to the localhost of the host machine. Security reasons or the use of a reverse proxy are only 2 of them (please don't ask for more). And it is fairly easy.
It is a simple modification to the argument of the `-p` flag when running `podman run`:
`podman run -d -p 8080:80/tcp docker.io/library/httpd`
From the manual:
`-p, --publish strings Publish a container's port, or a range of ports, to the host (default [])`
This is a quick example which sets up a web server. The first part before the colon - in this case `8080` - is the exposed port on the host machine, on which the container would be reachable. The second part after the colon - `80/tcp` - is the used port within the container.
To limit the exposed port to the localhost of the host machine, just add the host loopback address in front of the host part like: `127.0.0.1:`. The new command would then be:
`podman run -d -p 127.0.0.1:8080:80/tcp docker.io/library/httpd`
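To double-check that the port is really only bound to the loopback interface, you can look at the listening sockets and test locally (a sketch; assumes `ss` from iproute2 is installed):
```bash
# The listener should show up as 127.0.0.1:8080, not 0.0.0.0:8080
ss -tln | grep 8080
# Reachable from the host itself ...
curl -s http://127.0.0.1:8080
# ... but not from other machines on the network.
```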
That's it.
---

View file

@ -0,0 +1,57 @@
# Linux - connect to a serial port with screen
There are a bunch of programs out there that can connect you to the serial port of a switch, but using `screen` was the best and easiest solution I've found. It works perfectly in the CLI, can be run in the background, and is easy to set up - if it is not already installed.
It worked with various combinations of serial-to-usb-cables, Cisco switches, and Linux machines. Let us start with the command itself:
* `sudo screen /dev/ttyUSB0 9600`
* `sudo screen` - run `screen` as sudo
* `/dev/ttyUSB0` - the tty device of the USB cable / adapter
* `9600` - the speed of the serial connection
You can kill the session with `CTRL` + `a`, then `k`, and confirm it with `y`.
### Finding the device / the tty number
Find the tty number while you are already connected:
`sudo dmesg | grep tty`
Output:
```markdown
kuser@pleasejustwork:~$ sudo dmesg | grep tty
[ 0.134050] printk: console [tty0] enabled
[1724834.635665] usb 3-1: FTDI USB Serial Device converter now attached to ttyUSB0
```
Show the device as you plug it in:
`sudo dmesg -wH | grep tty`
Output:
```markdown
kuser@pleasejustwork:~$ sudo dmesg -wH | grep tty
[sudo] password for kuser:
[ +0,000022] printk: console [tty0] enabled
[ +0,001283] usb 3-1: FTDI USB Serial Device converter now attached to ttyUSB0
```
This is helpful if you are connected to multiple devices.
### Finding the correct speed
I haven't had to change this yet, but just in case:
`sudo stty -F /dev/ttyUSB0`
Output:
```markdown
kuser@pleasejustwork:~$ sudo stty -F /dev/ttyUSB0
speed 9600 baud; line = 0;
-brkint -imaxbel
```
---

View file

@ -0,0 +1,34 @@
# EICAR test file - riskless method to test your antivirus and firewall solution
Disclaimer: There are more meaningful and more advanced ways to test your security solutions, but for a quick, simple, and riskless test, the upcoming test files are more than enough.
## EICAR test file
The most common test file to test said solutions is the [EICAR Anti-Virus Test File](https://en.wikipedia.org/wiki/EICAR_test_file). The European Institute for Computer Antivirus Research (EICAR) and the Computer Antivirus Research Organization (CARO) developed the test file, which is in the end a simple text file with a plain string of ASCII characters.
`X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*`
Most solutions will prevent you from downloading it or put it into quarantine, since it will be treated as a threat. That said, some providers - for example Malwarebytes [[1]](https://forums.malwarebytes.com/topic/9994-malwarebytes-cant-detect-eicar-test-virus/)[[2]](https://forums.malwarebytes.com/topic/191650-malwarebytes-3-frequently-asked-questions/?do=findComment&comment=1077438) - refused to add fake malware / test files to their database since they don't see any benefits.
More information and the download link can be found [here](https://www.eicar.org/download-anti-malware-testfile/).
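If you just want a quick hands-on test, you can write the string to a file yourself or try to download the official copy (a sketch; the exact eicar.org URL may have changed, and your endpoint protection will likely flag the file immediately):
```bash
# Create the test file locally (single quotes keep all special characters literal)
printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > eicar.txt
# Or test download filtering on your firewall / proxy
curl -O https://secure.eicar.org/eicar.com.txt
```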
Some additional information about the EICAR test file:
* [Anatomy of the EICAR Antivirus Test File](https://blog.nintechnet.com/anatomy-of-the-eicar-antivirus-test-file/)
* [EICARs TEST FILE HISTORY](https://web.archive.org/web/20151216140407/https://www.eicar.org/files/01_-_eicar_test_file_history.pdf)
* [The Use and Misuse of Test Files in Anti-Malware Testing](https://www.amtso.org/wp-content/uploads/2018/05/AMTSO-Use-and-Misuse-of-Test-Files-in-Anti-Malware-Testing-FINAL.pdf)
#### Vendor-specific test files
Various vendors have specific test files for their solutions, but I am not too familiar with them.
* [Broadcom SOCAR cloud test file](https://knowledge.broadcom.com/external/article?legacyId=TECH216647)
* [Cisco AMP test file](https://docs.umbrella.com/umbrella-user-guide/docs/test-file-analysis)
* [FireEye test files](https://community.fireeye.dev/t/testing-sample-files/33)
* [McAfee](https://www.mcafee.com/support/?locale=en-US&articleId=TS101121&page=shell&shell=article-view)
* [Palo Alto Networks test file](https://docs.paloaltonetworks.com/wildfire/9-1/wildfire-admin/submit-files-for-wildfire-analysis/verify-wildfire-submissions/test-a-sample-malware-file) + [Additional Malware Test Files](https://docs.paloaltonetworks.com/wildfire/u-v/wildfire-whats-new/latest-wildfire-cloud-features/additional-malware-test-files)
* [Panda cloud test file](https://www.pandasecurity.com/en/support/card?Id=40066)
Just use your favorite search engine to look for `<name of your solution>` + 'test file'. For more advanced tests, reach out to the vendor of choice.
---

View file

@ -0,0 +1,55 @@
# Linux - How to work with complex commands
It can be frustrating to work on complex commands in the terminal. I'll present some tips on how to manage them. If you have another tip, I'd appreciate a quick message.
### Use a backslash `\` to add a line break
This is fairly simple. Having one or multiple long lines with no structure can be messy and confusing. Adding `\` for a line break gives the command more structure. A really simple example:
`podman run -d --restart=always -p 127.0.0.1:3001:3001 -v /path/data:/app/data --name status.brrl.net docker.io/louislam/uptime-kuma:latest`
With line breaks:
```markdown
podman run -d \
--restart=always \
-p 127.0.0.1:3001:3001 \
-v /path/data:/app/data \
--name status.brrl.net \
docker.io/louislam/uptime-kuma:latest
```
It is easier to read and work with, at least in my opinion.
### Work on complex commands in your favorite $EDITOR
I'll now show you how to edit complex commands in your favorite CLI editor.
Enter the command `fc`, or keep `CTRL` pressed and press `x` and then `e` as a keyboard shortcut. This will open your default CLI editor. After you finish working on the command you want to run, simply 'save and close', and the command will run right after.
*I am going to show you how to set your default editor at the end of the post.*
The `fc` command is normally used to show the command history or re-edit already entered commands, but we can use it to work on complex commands. Run `fc --help` to find out more.
### Set default editor in the CLI
There are various ways to set the default editors, so you might have to look it up for your setup.
In general, it works to set the `$EDITOR` environment variable with the editor of choice. On most distros it should be 'nano', but you might prefer something else.
If we want to change our default editor to 'vim' temporarily, we can enter this command:
`export EDITOR="/bin/vim"`
You can double-check with:
`echo $EDITOR` or `env | grep EDITOR`
and
`$EDITOR test.txt`
**Important:** To change the default editor permanently, add `export EDITOR="/bin/vim"` to your `.bashrc` or whatever config file you use.
From now on, whenever you want to edit a command with `fc`, your favorite editor will open.
---

View file

@ -0,0 +1,74 @@
# nginx - simple and native authentication function
**Important disclaimer**: This solution is not secure! - It is fine as a quick and temporary solution for your local network, but it is not a secure solution for important resources that are available over the internet.
As a side note: without TLS (HTTPS), the credentials will be sent in plain text and are easily accessible.
### Creating the user
Even though you could do it by hand, it is recommended to use the Apache utility to create the user.
The package needed is called `apache2-utils` for Debian derivatives and `httpd-tools` for RHEL derivatives.
`sudo htpasswd -c /etc/nginx/htpasswd AzureDiamond` *# The username is case-sensitive and the path and name of the password file can be changed*
Now it is time to choose a secure password:
```markdown
New password:
Re-type new password:
Adding password for user AzureDiamond
```
You now can find the password file with the hashed password in the location of your choice:
```markdown
cat /etc/nginx/htpasswd
AzureDiamond:$apr1$8xZ0m9Yq$NVBN9veofzoV9vBoBK7z40
```
**Side note:** You can remove a user with the following command:
`sudo htpasswd -D /etc/nginx/htpasswd AzureDiamond` *# remember to choose the correct file*
### Change your nginx config
We can now add 2 lines to our `server` or `location` segment to activate the authentication feature:
```markdown
auth_basic "You shall not pass!";
auth_basic_user_file /etc/nginx/htpasswd;
```
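In context, placed inside a `location` segment, it could look like this (a sketch; the server name and path are placeholders):
```
server {
    listen 80;                  # in production you would serve this over HTTPS (see the plain-text note above)
    server_name example.com;

    location /private/ {
        auth_basic           "You shall not pass!";
        auth_basic_user_file /etc/nginx/htpasswd;
    }
}
```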
Check the nginx config with `sudo nginx -t` and if it confirms the correct syntax, restart the nginx service with `sudo systemctl restart nginx`.
[You can test it here: https://ittavern.com/azurediamond](https://ittavern.com/azurediamond)
### Exclude subdirectories
If you, for example, add the authentication to the root directory of your site, you can exclude chosen subdirectories by adding the following line to the `location` segment:
```markdown
location /api/ {
auth_basic off;
}
```
### White- / blacklist IPs
One step further: you can work with white- and blacklists by adding chosen IPs like this to the segment of your choice:
```markdown
deny 8.8.8.8;
allow 9.9.9.9;
allow 10.10.10.0/24;
deny all;
```
---
Special thanks to ruffy, for informing me about the processes behind it and the security risks.
---

View file

@ -0,0 +1,273 @@
# Getting started with nmap
**Disclaimer**: Only scan networks you have permission for. Many VPS providers do not allow scanning other networks, and doing so can cause you trouble. Please be aware of it.
## Installation
I won't cover the installation of nmap in this blog post. It is available for many OSs, and a simple lookup with your favorite search engine will give you enough results to get it done.
## What is nmap?
Nmap (Network mapper) is an open-source network and security auditing tool. It is used for network host and service discovery and has a wide range of use cases. It can scan ports, discover live hosts, detect service and OS versions, run vulnerability scans, and be used with many scripts.
I'll show you the basics of nmap in this post. This is more than enough to get started.
**Important**: I recommend using nmap as **root** since not all scans are available for non-root users. The kernel constrains standard users from using all functions of the NIC.
## Specify the hosts or networks to scan <a href="#target" id="target">#</a>
You'll start by defining the range of the scan. This is mandatory and there are multiple ways to do it.
Single address / host name:
: `nmap 10.10.20.1`
: `nmap scanme.nmap.org` *# You have permission to scan this domain / host. Visit [this page](http://scanme.nmap.org/) for more information. As mentioned before, be aware that many server providers prohibit the scan of other networks.*
There are several ways to define a range of targets:
: `nmap 10.10.10.1 10.10.10.2 10.10.10.3`
: `nmap 10.10.10.1,2,3`
: `nmap 10.10.10.1-50`
: `nmap 10.10.10.0/24`
Use a file with a list of targets (hosts/network):
: `nmap -iL /path/to/file.txt`
**Side note**: The list can have various formats. You can put all hosts on one single line separated by spaces, put every host on a separate line, or even combine both like this:
```markdown
10.10.10.1 10.10.20.2
10.10.30.3
```
Nmap would scan 3 hosts.
Scan a number of random targets:
: `nmap -iR 5` *# note: `-iR` picks random hosts from the whole Internet, not from a chosen range*
#### Exclude hosts and networks from scans <a href="#target-exclusion" id="target-exclusion">#</a>
Choose hosts or networks that should be excluded:
: `nmap 192.168.0.0/24 --exclude 192.168.0.2`
Use a file with a list of exclusions:
: `nmap 10.10.10.0/24 --excludefile /path/to/file.txt`
## Specify port ranges <a href="#ports" id="ports">#</a>
**Side note**: Without a flag, nmap scans the 1000 most common TCP ports by default. [Source](https://nmap.org/book/port-scanning.html)
For a quick scan that only covers the 100 most common ports, use the `-F` flag:
: `nmap 10.10.10.1 -F`
Scan of a single port:
: `nmap 10.10.10.0/24 -p 22`
Scan of several ports:
: `nmap 10.10.10.0/24 -p 22,80`
: `nmap 10.10.10.0/24 -p 1-100`
: `nmap 10.10.10.0/24 -p 80,90-100`
`-p-` would scan ALL ports (1 to 65535):
: `nmap 10.10.10.0/24 -p-`
TCP is the default protocol. You can specifically choose TCP or UDP like this:
TCP *(default)*:
: `nmap 10.10.10.0/24 -p T:53`
UDP:
: `nmap 10.10.10.0/24 -p U:53`
Combine both:
: `nmap 10.10.10.0/24 -p T:53,U:53`
**Important**: the `T:` and `U:` must be capitalized since they are case-sensitive.
If you only want to scan UDP ports, use the `-sU` flag to do so.
I am not too familiar with it, but you can work with service names like this:
: `nmap 10.10.10.0/24 -p smtp` *# Thanks to k3vinw*
#### Exclude ports from the scan <a href="#ports-exclusion" id="ports-exclusion">#</a>
Simply use the `--exclude-ports` option and the ports / port range:
: `nmap 10.10.10.1 -p 1-100 --exclude-ports 22,53`
#### Set the source port
Use the `-g` flag to specify the source port of the scan:
: `nmap 10.10.10.1 -g 12345`
## Save output to file <a href="#output" id="output">#</a>
There are 3 formats you can pick between:
Console output:
: `-oN results.txt`
'Grepable' console output:
: `-oG results.txt`
XML format:
: `-oX results.txt`
Saves output in ALL 3 formats at once:
: `-oA results` *# nmap appends the matching extension for each format*
If you want to append the results to a file, simply add the `--append-output` option to the command.
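Putting several of the options above together, a typical command could look like this (a sketch - as always, only scan networks you have permission for):
```bash
# Scan the whole /24 except one host, ports 1 to 1024, saving all three output formats
sudo nmap 10.10.10.0/24 --exclude 10.10.10.1 -p 1-1024 -oA results
```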
## Port states <a href="#port-states" id="port-states">#</a>
Nmap distinguishes the state of the port in six categories. This section is copied from the [official documentation](https://nmap.org/book/man-port-scanning-basics.html) since it is explained really well.
**open**
> An application is actively accepting TCP connections, UDP datagrams or SCTP associations on this port. Finding these is often the primary goal of port scanning. Security-minded people know that each open port is an avenue for attack. Attackers and pen-testers want to exploit the open ports, while administrators try to close or protect them with firewalls without thwarting legitimate users. Open ports are also interesting for non-security scans because they show services available for use on the network.
**closed**
> A closed port is accessible (it receives and responds to Nmap probe packets), but there is no application listening on it. They can be helpful in showing that a host is up on an IP address (host discovery, or ping scanning), and as part of OS detection. Because closed ports are reachable, it may be worth scanning later in case some open up. Administrators may want to consider blocking such ports with a firewall. Then they would appear in the filtered state, discussed next.
**filtered**
> Nmap cannot determine whether the port is open because packet filtering prevents its probes from reaching the port. The filtering could be from a dedicated firewall device, router rules, or host-based firewall software. These ports frustrate attackers because they provide so little information. Sometimes they respond with ICMP error messages such as type 3 code 13 (destination unreachable: communication administratively prohibited), but filters that simply drop probes without responding are far more common. This forces Nmap to retry several times just in case the probe was dropped due to network congestion rather than filtering. This slows down the scan dramatically.
**unfiltered**
> The unfiltered state means that a port is accessible, but Nmap is unable to determine whether it is open or closed. Only the ACK scan, which is used to map firewall rulesets, classifies ports into this state. Scanning unfiltered ports with other scan types such as Window scan, SYN scan, or FIN scan, may help resolve whether the port is open.
**open|filtered**
> Nmap places ports in this state when it is unable to determine whether a port is open or filtered. This occurs for scan types in which open ports give no response. The lack of response could also mean that a packet filter dropped the probe or any response it elicited. So Nmap does not know for sure whether the port is open or being filtered. The UDP, IP protocol, FIN, NULL, and Xmas scans classify ports this way.
**closed|filtered**
> This state is used when Nmap is unable to determine whether a port is closed or filtered. It is only used for the IP ID idle scan.
## Scan timing / timing templates <a href="#scan-timing" id="scan-timing">#</a>
With these timing templates, you can decide how aggressively and fast you want to scan your targets. The lower the number, the slower the scan, and vice versa. You can choose them with the `-T` flag like this:
: `-T0` paranoid
: `-T1` sneaky
: `-T2` polite
: `-T3` normal (default)
: `-T4` aggressive
: `-T5` insane
`-T0` and `-T1`, for example, are used for IDS evasion. The scans are less aggressive, have more delay, look more random, and so on. `-T5` is really aggressive, fast, and rather unreliable due to packet loss.
A detailed table of differences can be found in the [official documentation](https://nmap.org/book/performance-timing-templates.html)
## Scripts <a href="#scripts" id="scripts">#</a>
**Disclaimer + Important:** Scripts are not run in a sandbox and thus could accidentally or maliciously damage your system or invade your privacy. Never run scripts from third parties unless you trust the authors or have carefully audited the scripts yourself.
The Nmap Scripting Engine (NSE) allows you to use and share various scripts. The scripts are written in Lua.
There are different categories of scripts. The current categories are: auth, broadcast, default, discovery, dos, exploit, external, fuzzer, intrusive, malware, safe, version, and vuln.
Run a script:
: `--script filename / category / directory`
: *all scripts in the category or directory would be loaded*
Nmap scripting is way beyond the scope of this post, and since I am not too familiar with it, I'd rather keep it short. I mostly use scripts for finding SMBv1 servers (`smb-os-discovery`), displaying SSH authentication information (`ssh-auth-methods`), or listing all available DHCP servers (`broadcast-dhcp-discover`). The last one is great for debugging DHCP problems or finding rogue DHCP servers.
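Invoking one of these could look like this (a sketch; adjust the target range to your own network):
```bash
# Report OS details of hosts that expose SMB via the smb-os-discovery script
sudo nmap -p 445 --script smb-os-discovery 10.10.10.0/24
```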
Scripts are often used to find vulnerabilities. One example can be found [on Github](https://github.com/Diverto/nse-log4shell): a helpful script to check against **log4shell or LogJam vulnerabilities** (CVE-2021-44228).
For more information about scripts for nmap, check out the following blog post: [Getting started with nmap scripts](https://ittavern.com/getting-started-with-nmap-scripts/)
## Helpful additional scan options <a href="#more-options" id="more-options">#</a>
Verbosity of the scan:
: `-v` / `-vv` / `-vvv`
Increase verbosity on debug level:
: `-d` / `-dd` / ... or `-d1` to `-d9`
: often used if a bug in nmap is suspected
Choose the interface for the scan:
: `-e interfacename`
skip reverse DNS look-up:
: `-n`
force reverse DNS, even when the host is offline:
: `-R`
use the DNS resolver of the system:
: `--system-dns`
use a specific DNS server for requests:
: `--dns-servers <server1>[,<server2>[,...]]`
show the results every X seconds/minutes:
: `--stats-every 1m / 10s`
: really great for long scans to check the progress
Scan IPv6 addresses:
: `-6 ::ffff:1234:abcd`
detecting the version of services running on the target:
: `-sV`
detecting operating system of the target by fingerprinting:
: `-O`
TCP Syn scan - Stealth mode:
: `-sS`
: sends a TCP SYN packet and waits for the SYN/ACK without completing the handshake; less noisy than a full connect
TCP full connect - 3-way-handshake:
: `-sT`
: it is more accurate, but slower and noisier
ICMP echo request / ping for a quick scan:
: `-sP` *# newer nmap versions use `-sn`*
No ICMP echo request / ping, nmap assumes the host is up:
: `-Pn`
ICMP echo request:
: `-PE`
ICMP Timestamp request:
: `-PP`
ICMP netmask request:
: `-PM`
TCP SYN ping:
: `-PS PORTNUMBER`
: *Port 80 is the default, if no port is entered*
TCP ACK ping:
: `-PA PORTNUMBER`
: *Port 80 is the default, if no port is entered*
#### IDS/ FW Evasion <a href="#evasion" id="evasion">#</a>
This is a topic for another time and unnecessary for beginners, but here are just some IDS/FW evasion methods.
Decoy mode - tries to hide your IP in a pool of other IPs
: `nmap -D 10.10.10.22,10.10.10.44,10.10.10.66 10.10.10.1`
: `10.10.10.22` *# your own IP*
: `10.10.10.44` *# decoy IP*
: `10.10.10.66` *# decoy IP*
: `10.10.10.1` *# IP of target*
Change the source IP:
: `-S`
Spoof another MAC address:
: `--spoof-mac MAC-ADDRESS / prefix / vendor name`
Using a HTTP/SOCKS4 proxy:
: `--proxies URL,[url2],...`
# Conclusion
Nmap is unbelievably powerful and invaluable for my day-to-day work. I hope I could provide you with some insight into the possibilities of nmap. If you think I forgot something, feel free to reach out.
---

View file

@ -0,0 +1,61 @@
# Ways to support open-source projects
There are many ways to support your favorite open-source project. Even though code contributions are the most obvious method, not everyone - including me - can do so. I just want to share some ideas on how someone can support the open-source space.
#### Coding
As mentioned before, the most obvious contribution to an open-source project might be to code yourself. This can be a small bug fix, a new feature, or even becoming a maintainer of the whole project, depending on your time and capabilities.
#### Financial support & self-hosting
Consider donating money to the project. A lot of open-source projects are maintained by people who spend their spare time to code. Even small contributions help to pay the bills for hosting, coffee, pizza, and so on.
Check the project for the following options to donate money: [Patreon](https://www.patreon.com/search?q=open-source), [Liberapay](https://liberapay.com/explore/), [Open Collective](https://opencollective.com/discover?show=open%20source), ["buy me a coffee"](https://www.buymeacoffee.com/explore/opensource), PayPal (+ credit cards), direct wire transfer or cryptocurrencies.
*Just for the record: donations != claims / commissions. Please do not donate money and demand or expect a feature you have requested. That is not how it works.*
Not everyone is in the position to donate money, but I would consider it one of the easiest ways to support a project.
Another method is to self-host a service. An example would be to host a Gitea instance, and keep it open for public use. This opens the door for new people to try it out and get used to it.
#### Provide feedback, bug reports, & more
Found a bug? Got an idea for a new feature or improvements? Found a security vulnerability? Reach out to the project team respectfully. Please be clear about what you mean and read the docs before you do.
Use your individual skills to improve the project.
#### Translations
Providing a multilingual program or service can be challenging - from the technical side of the localization to the actual translation itself.
There are various ways for the technical implementation. From managed services like [crowdin](https://crowdin.com/) or [Transifex](https://www.transifex.com/), to simple text files within the git repo. The how-to should be described in the documentation.
Helping to translate your favorite project into another language makes it more accessible to new people.
#### Provide help to the community
Being an active member of the community is an important part. Helping new users solve problems or answering questions is a great way to build a healthy community. A significant side effect is that team members have more time to tackle coding-related problems instead of answering questions. Some projects have forums, some use their bug trackers, some mailing lists, some their social media accounts.
#### Create and share content
It doesn't matter what format you choose, but creating content about your favorite project is a great way to grow the community. Share your favorite functions, your use cases, exciting stories, or tutorials and guides. As mentioned, the format plays a secondary role: videos, blog posts, infographics, social media posts, and so on.
#### Send some appreciation
As mentioned before, many open-source projects are maintained by people who spend their free time working on them. Sending them a simple 'Thank you' and 1-2 sentences about what you use the project for can bring some joy and motivation.
#### Spread the word
Talk about it. Tell people why it is your favorite project, recommend it respectfully to others, and spread the word. I use Vim by the way. This is fairly similar to a previous point and is self-explanatory anyway.
## and ...
I bet there are many more ways to support your favorite projects. Feel free to let me know.
---

View file

@ -0,0 +1,204 @@
# SSH - How to use public key authentication on Linux
**Disclaimer**:
* Please read the whole post before you start. This will help you avoid a lock-out
## Generating a secure key pair
SSH keys use asymmetric cryptographic algorithms that generate a pair of separate keys (a key pair). A private and a public key.
We are using the command `ssh-keygen` to generate our secure key pair. There are 3 common algorithms to choose from.
We are going to create a private and public key with the name `nameofthekey` in the `.ssh` directory of the current user. You should choose an expressive name, which makes it easier to work with multiple keys. Please make sure that the directory `~/.ssh/` exists.
**Important**: Please do use a secure password for the key generation.
[RSA](https://en.wikipedia.org/wiki/RSA_(cryptosystem)) *(Rivest-Shamir-Adleman)*
: `ssh-keygen -t rsa -b 4096 -f ~/.ssh/nameofthekey`
[ECDSA](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm) *(Elliptic Curve Digital Signature Algorithm)*
: `ssh-keygen -t ecdsa -b 521 -f ~/.ssh/nameofthekey`
[EdDSA ed25519](https://en.wikipedia.org/wiki/EdDSA#Ed25519):
: `ssh-keygen -t ed25519 -f ~/.ssh/nameofthekey`
Explanation:
: `ssh-keygen` # can be run as a standard user, man ssh-keygen for more information
: `-t [dsa | ecdsa | ecdsa-sk | ed25519 | ed25519-sk | rsa]` *# choose Algorithm*
: `-b bits` *# number of bits to use*
: `-f /path/and/name-of-keypair` *# choose a name for the keys*
```markdown
ssh-keygen -t rsa -b 4096 -f ~/.ssh/nameofthekey
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in name
Your public key has been saved in name.pub
The key fingerprint is:
SHA256:8KkCBz2GFXusy6URXF4Z/8xVl+6dFhYV0MoDtqIqBfA kuser@pleasejustwork
The key's randomart image is:
+---[RSA 4096]----+
| o.. oo .o.B|
| . = = ... o =.|
| = B = o + + .|
| E = o o = = + |
| . = . S . + + +|
| + * o +.|
| * o . |
| . o |
| . |
+----[SHA256]-----+
```
This would give us 2 files: private key `nameofthekey`, and public key `nameofthekey.pub`.
**nameofthekey.pub** - public key
Example:
```markdown
ssh-rsa ktLfCNsABzCw9wE4U3JS8mn1t8jw2Q01wRvCaexpuE2adZYxgw4sNJfBOp3SmLEYeF3rcP1u9ffb2J8FOqFWj3egwjVvVrlDHwi6Jr1aTxOmNlGtNHfJiKuJxD3HxPFAuSImsR5IZF6Bki0LxQGxM4jx8NgDFQ5BWO0tJ0pNzSJdXOLwW0jqbdqdEHELnYZLmll6oeJ9j1LZx6GY5vjYxzeCxZTrHoFQPE2vdYsx7ajIKDzQpNdM9zhYRO10OM kuser@pleasejustwork
```
**nameofthekey** - private, password protected
Example:
```markdown
-----BEGIN OPENSSH PRIVATE KEY-----
iEnCTyTmiYVhFvUIYhlq07FZV3EaVpQalFqSRicpeaDqifcDLqdp5NAx11JT17iNhgRDMrTM7Pcs6kLFbXC8LWbhlJVTkhu9k5wIG9Ec6qBthyAzmnO7SpqFCtKAXmuG8uFJF9SeyLsXTFiIuK8UqfgG9SLvXSrhPFqSVWFVxQqmXiXL5MQ7iKOKAAAlwisfwrJ1DTNkd2C9nel7sorAU3gWQGh2beuEjzkRsYucR9lxO6jzLEejNSwyS7TNuOiEnCTyTmiYVhFvUIYhlq07FZV3EaVpQalFqSRicpeaDqifcDLqdp5NAx11JT17iNhgRDMrTM7Pcs6kLFbXC8LWbhlJVTkhu9k5wIG9Ec6qBthyAzmnO7SpqFCtKAXmuG8uFJF9SeyLsXTFiIuK8UqfgG9SLvXSrhPFqSVWFVxQqmXiXL5MQ7iKOKAAAlwisfwrJ1DTNkd2C9nel7sorAU3gWQGh2beuEjzkRsYucR9lxO6jzLEejNSwyS7TNuO
-----END OPENSSH PRIVATE KEY-----
```
## The correct permissions on the client
It is important to set the correct permissions on your key files, for two reasons: it restricts access by other users, and the SSH client refuses to use a private key file that is readable by others. (The server performs similar checks on its side when `StrictModes` is enabled - more on that later.)
```
sudo chmod 700 ~/.ssh
sudo chmod 644 ~/.ssh/authorized_keys
sudo chmod 644 ~/.ssh/known_hosts
sudo chmod 644 ~/.ssh/config
sudo chmod 600 ~/.ssh/nameofthekey # private key
sudo chmod 644 ~/.ssh/nameofthekey.pub # public key
```
## Get your public key on the server
You need access to the destination server in one way or another to add the newly generated **public** key. There are multiple ways.
In the end, the public key must be added to the `~/.ssh/authorized_keys` file. If it does not exist, it must be created. There can be multiple public keys in this file - one line per key, and there can be multiple `authorized_keys`, IF it is configured on the server.
#### No direct access to the server
Ask someone with access to add your public key to the `~/.ssh/authorized_keys` file.
#### Direct access via ssh and password auth
You most likely already have access to the server via ssh and normal password authentication. There are now multiple ways to add your public key to the server.
Simply use `ssh-copy-id`:
: `ssh-copy-id -i ~/.ssh/nameofthekey.pub remote-user@remote-server`
: This does everything for you, and adds your public key to the `authorized_keys` file on the remote machine.
A different way would be to copy the public key to the remote machine via `scp` / `rsync` (or something else) and append it with `>>` to `~/.ssh/authorized_keys`. Another way would be to connect to the server and copy-paste the content of the public key into `~/.ssh/authorized_keys`. Remember, if the path or file does not exist, just create it.
In the end, your chosen public key must be in the file `~/.ssh/authorized_keys` before you should continue.
## Configuration of the ssh server
**Important**: Some tips on how to work on the configuration file on the remote machine.
* make a backup of the configuration file before you make any changes!
* create 2 ssh sessions - 1 for working and testing, the other one as a backup.
* reload the config of the ssh server, rather than restarting the service. This does not kill the backup session.
* test the public key authentication before you turn off password authentication
---
We now have to edit the ssh server config file on the remote machine: `/etc/ssh/sshd_config` or in the config directory `/etc/ssh/sshd_conf.d`. It depends on your setup.
#### Enabling public key authentication on the server
Enable public key authentication in the config file:
: `PubkeyAuthentication yes`
Now, reload the config of the ssh server. Assuming you are using `systemd`:
: `sudo systemctl reload sshd`
Before we continue, please do try to connect to the remote machine with your ssh key:
: `ssh -i ~/.ssh/nameofthekey remote-user@remote-server` *# choose the private key!*
: enter the password for your private key, and you should be connected.
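If you do not want to pass `-i` every time, an entry in the client's `~/.ssh/config` can point to the key for you (a sketch; the host alias, host name, and user are placeholders):
```
Host myserver
    HostName remote-server
    User remote-user
    IdentityFile ~/.ssh/nameofthekey
```
With that in place, a plain `ssh myserver` picks the right key automatically.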
#### Enable the strict mode
Open the `sshd_config` file and add:
: `StrictModes yes`
: this makes sure that the permissions and ownership of the user's files on the server (home directory, `~/.ssh`, `authorized_keys`) are correct. You won't be able to connect to the server if the permissions are not correct!
Now, reload the config of the ssh server:
: `sudo systemctl reload sshd`
**Important**: Please test the connection once more!
If you successfully connected to the remote machine, you can proceed to turn off password authentication.
#### Disable password authentication
**Last chance**: make sure that you have tested the public key authentication, and / or have another option to access the machine.
Open the `sshd_config` file and change one option:
: `PasswordAuthentication no`
This will disable the possibility to authenticate with a password, but you should still be able to log in with your public key, after reloading the config.
Reload the config of the ssh server:
: `sudo systemctl reload sshd`
**This should be it!**
[More SSH hardening options can be found here.](https://ittavern.com/ssh-server-hardening/)
## Debugging
Some debugging options on client:
: `-v` / `-vv` / `-vvv`
: `ssh -vvv -i ~/.ssh/nameofthekey remote-user@remote-server`
Some debugging options on server:
: `sudo journalctl -u ssh`
: `sudo grep ip.of.your.machine /var/log/auth.log`
You can change the log level of the server by editing the config file:
: `LogLevel INFO` *# default*
: `LogLevel DEBUG` *# enable DEBUG mode*
Don't forget to turn it off again before it fills up your storage.
## Manage private key identities with an agent
Nobody wants to enter the password for their private key every time they connect to a server. By using `ssh-add` to load your private key into the OpenSSH authentication agent, you add it once for the session and do not have to enter your private key password every time.
Check for identities:
: `ssh-add -L`
Add private key identity:
: `ssh-add ~/.ssh/nameofthekey` *# choose the private key and enter the password*
Remove all identities:
: `ssh-add -D`
#### Troubleshooting
If you run into:
: `Could not open a connection to your authentication agent.`
Just run `eval "$(ssh-agent)"` (or, alternatively, `exec ssh-agent bash`). This starts the agent and sets the correct environment variables in your current shell, from my understanding.
---

View file

@ -0,0 +1,257 @@
# 10 prompts - 1000 AI generated images - openAI Dall-E
## Table of contents
* <a href="#cats">1 - Cats</a>
* <a href="#robot">2 - Robot</a>
* <a href="#donut">3 - Donut</a>
* <a href="#dackel">4 - Dackel</a>
* <a href="#poster">5 - Poster</a>
* <a href="#citylife">6 - Citylife</a>
* <a href="#dolphin">7 - Dolphin</a>
* <a href="#light">8 - Light</a>
* <a href="#monster">9 - Monster</a>
* <a href="#cyberpunk">10 - Cyberpunk</a>
* <a href="#tech">Technical write-up</a>
## What is this all about?
We were curious about how much variance the AI has. So, what would be the results if we were to request 100 images with the same prompt? - I won't review the results and will rather just present them to you.
These **prompts** are the result of a quick **brainstorming** session. If you have suggestions, please let me know. I might create more posts like this in the future. The goal was to have a wide range of motifs, styles, and so on.
These **images are unedited**. Generated - downloaded - created a montage; that is it. **These images are free for personal or commercial use and do not require any form of attribution**. [Dall-e](https://labs.openai.com/about) gives ownership of the images to me, and I give you permission to do with them whatever you want.
The resolution of the originals is 1024x1024 and I might provide a download link at some point. If you want a single image, feel free to reach out.
With testing, the **total costs** were around **20 EUR**. I'd say that it is acceptable.
You can find a **technical write-up** at the end of the post. But as a disclaimer: not best-practice. Feedback is still appreciated.
# Gallery
**So, enjoy!**
## 1 - Cats <a href="#cats" id="cats">#</a>
> photo of a kitten on a carpet in the living room, digital art
<img src="/images/ai/1/montage_cats.jpg"
alt="cats"
style="width: 100%;">
---
## 2 - Robot <a href="#robot" id="robot">#</a>
> small robot wandering around in an post-apocalyptic world, digital art
<img src="/images/ai/1/montage_robot.jpg"
alt="robot"
style="width: 100%;">
---
## 3 - Donut <a href="#donut" id="donut">#</a>
> minimalist logo of a donut shop
<img src="/images/ai/1/montage_donut.jpg"
alt="donut"
style="width: 100%;">
---
## 4 - Dackel <a href="#dackel" id="dackel">#</a>
> dackel in a suit in a library, digital art
<img src="/images/ai/1/montage_dackel.jpg"
alt="dackel"
style="width: 100%;">
---
## 5 - Poster <a href="#poster" id="poster">#</a>
> movie poster for an action movie from the 80s, digital art
<img src="/images/ai/1/montage_poster.jpg"
alt="poster"
style="width: 100%;">
---
## 6 - Citylife <a href="#citylife" id="citylife">#</a>
> a black and white photo of the life in new york
<img src="/images/ai/1/montage_citylife.jpg"
alt="citylife"
style="width: 100%;">
---
## 7 - Dolphin <a href="#dolphin" id="dolphin">#</a>
> sticker illustration of a cute dolphin
<img src="/images/ai/1/montage_dolphin.jpg"
alt="dolphin"
style="width: 100%;">
---
## 8 - Light <a href="#light" id="light">#</a>
> area view of a city with street lights at night, digital art
<img src="/images/ai/1/montage_light.jpg"
alt="light"
style="width: 100%;">
---
## 9 - Monster <a href="#monster" id="monster">#</a>
> detailed sketch of an evil monster, digital art
<img src="/images/ai/1/montage_monster.jpg"
alt="monster"
style="width: 100%;">
---
## 10 - Cyberpunk <a href="#cyberpunk" id="cyberpunk">#</a>
> realistic photo of a colorful cyberpunk city in the rain at night, digital art
<img src="/images/ai/1/montage_cyberpunk.jpg"
alt="cyberpunk"
style="width: 100%;">
---
## Tech write-up <a href="#tech" id="tech">#</a>
**Side note**: To be clear, this is not best-practice. It got its job done, and that is all I needed. Still, feel free to reach out, happy to learn!
First, the OpenAI Dall-E API offers the following sizes, at 3 different prices:
```
Resolution Price
1024×1024 $0.020 / image
512×512 $0.018 / image
256×256 $0.016 / image
```
I've generated the largest resolution.
### Limitations
So, I decided to use the API via curl. The first limit I encountered was the '10 images per request' cap.
```markdown
{
"error": {
"code": null,
"message": "20 is greater than the maximum of 10 - 'n'",
"param": null,
"type": "invalid_request_error"
}
}
```
The next one would be the rate limit of 50 images per 5 minutes.
```markdown
{
"error": {
"code": null,
"message": "Rate limit reached for images per minute. Limit: 50/5min. Current: 60/5min. Please visit https://help.openai.com/en/articles/68839691 to learn how to increase your rate limit.",
"param": null,
"type": "requests"
}
}
```
In the end, the download of the generated images was limited too. After every category I had to switch to another VPN server location to bypass the limit.
### Script to download them all!
I took a small break after every category to check the result of the script and whether all images were generated and downloaded.
I'll add some comments later, but in short:
: generate images and put curl response to file
: get URL from output file and remove the quotation marks `"`
: download images via curl
: wait one minute to avoid rate limit
```bash
#!/bin/bash
# One for-loop around the whole script due to the limitations
# Curl request to generate the images via the API, and save the output to a file via the -o flag
for i in {1..10};
do
echo $i
curl -o output.txt https://api.openai.com/v1/images/generations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk-sdsdskdsdsdsdeefefe" \
-d '{
"prompt": "small robot wandering around in an post-apocalyptic world, digital art",
"n":10,
"size":"1024x1024"
}'
# Get the URLs of the generated images, remove the quotation marks, and save them to a new file (one URL per line)
cat output.txt | jq '.data[].url' | sed 's/"//g' > output_url.txt
# Finally, download the images with curl to the current directory. I was told that this is not best practice, but it worked.
cat output_url.txt | while read f; do curl "${f}" -O; done;
# wait 60 seconds before we start it all over again
sleep 60
done
```
Things to improve: start/stop, logs, error and information notification, speed
### Rename everything
In the next step, I had to rename all the files. The file names were cryptic and difficult to work with.
```bash
#!/bin/bash
a=1
n=cats
for i in ./1_cats/*; do
new=$(printf "./1_cats/"$n"_%04d.jpg" "$a")
mv -i -- "$i" "$new"
let a=a+1
done
```
The name scheme would look like: `cats_0001.jpg`
### Create montage with imagemagick
In the last step, I used `imagemagick` to create a montage with the following command.
`montage -geometry 200x200+2+2 -tile 4x -set label '%f' *.jpg montag.jpg`
Explanation:
: `montage` *# imagemagick function to create montages*
: `-geometry 200x200+2+2` *# size per image + min size of the padding between the images*
: `-tile 4x` *# setting for the layout, 4 columns, unlimited rows. 3x4 would be a limit of 3 columns and 4 rows*
: `-set label '%f'` *# adds the filename of the image on the montage*
: `*.jpg` *# use ALL `.jpg` file within this directory for the montage*
: `montag.jpg` *# name and format of the final montage*
---

View file

@ -0,0 +1,145 @@
# My IT EDC tool kit v2212
**Side note:** This is not an ad, and there are no affiliate links. Just a show case of my current EDC kit for professional and private use.
![edc-kit-preview](/images/blog/edc/2022-12/preview.png)
## What is an EDC kit?
EDC stands for 'Every Day Carry'. It is - as the name implies - a kit that you carry with you every day. As someone who likes to watch EDC kit showcases or read blog posts about EDCs, there is an unlimited range of use cases, tools, sizes, combinations, and so on. I recently bought a new bag and switched out various tools, so I thought it would be great timing to show you the status quo.
For me personally, I like to be prepared. If I have the right tool with me, it will save me time and headaches. I am a Network Administrator for a living, and especially at work, it is a sign of professionalism to have certain tools at hand, and get the job done quickly. I have to carry more weight around, and some tools are rarely used, but it is worth it. I like to compare my EDC kit with others, I like to do research on new tools, and I like to use my tools!
I most often use my kit at work. As mentioned before - I am working as a Network Administrator. Installing and working on switches / servers, UPS, server racks, and sometimes even those machines from hell - printers.
As a side note: there is no 'perfect EDC kit'. Times change, tools get replaced, new situations come up, and it is impossible to have a tool for every (!) situation.
## General information
![edc-kit-overview](/images/blog/edc/2022-12/overview.png)
The **bag** is the Maxpedition BEEFY Pocket Organizer.
**Fully packed:**
**Length**: 21 cm
**Width**: 16 cm
**Height**: 14 cm
**Weight**: 2,5 kg
It is as clunky as it looks, but it fits perfectly into my 2 bags.
# Categories
I don't think it is necessary to show every tool separately, so I've categorised them. I've added some notes, but most should be self-explanatory.
## Building / dismantling
I split this category into two parts, since working on a server rack is different from working on a smartphone.
### Heavy
![edc-kit-tools](/images/blog/edc/2022-12/inst-big.png)
1. **bits and socket set** *(Wera Tool-Checks Plus)*
2. **pliers** *(Knipex Cobra 87 01 150)*
3. **wrenches** - size 10 and 12, I might add size 8 at some point
4. **screwdriver handle for bits** (Wera 05051462001)
5. **Multitool** *(Gerber Suspension)*
6. **decent 1/4-inch square ratchet** - the Wera set comes with a small ratchet, but I destroyed at least 3 already
7. **various 1/4-inch square/hexagon adapters and extensions** *(Milwaukee, Wera, Bosch)*
### Fine
![edc-kit-tools](/images/blog/edc/2022-12/inst-sma.png)
1. **various plastic tools**
2. **mini crowbar**
3. **thin ratchet wrench** - a recent addition. Would have been helpful in the past. Has extra short bits, but works with normal bits too
4. **precision screwdriver set** *(HOTO)*
## Connecting things
### Network
![edc-kit-tools](/images/blog/edc/2022-12/conn-net.png)
1. **7,5m RJ45 network cable** - it is clipped to the bag
2. **spare 1m RJ45 network cable** - just in case
3. **USB-to-RJ45 adapter** - there are multiple use cases for it: connecting to an additional device, troubleshooting a different network route, and as a spare part (just let your laptop drop while the RJ45 cable is plugged in)
### USB
![edc-kit-tools](/images/blog/edc/2022-12/conn-usb.png)
Welcome to dongle hell! My idea is to have 1 long cable and adapters for every situation, since a bunch of cables takes up a certain volume. I have had no problems with this solution, **yet**.
1. **3m USB Type-C/Type-C cable** - clipped to the bag
2. **USB HDMI capture card** - Troubleshooting session / server etc
3. **spare 32GB USB stick** - I lose 1 per month
4. **SD- and micro-SD USB adapter**
5. **micro-SD-to-SD adapter**
6. **charging USB protection** - not sure what those are called, but they prevent data transfers, so I can charge my devices securely at unknown USB sockets
7. **female USB Type-C to male micro-USB adapter**
8. **female USB Type-C to male mini-USB adapter** - yep, MINI USB, and yeah, I use them fairly often. Cisco switches use mini-USB for console interfaces on the front.
9. **2 spare USB wireless cards** - same as the USB to RJ45 adapter, but with more driver problems
10. **female standard A USB to male USB Type-C adapter** - USB Type-C-only devices are way too common
11. **female USB Type-C to male standard A USB adapter**
12. **female micro-USB to female USB Type-C adapter**
Forgot the 1m USB Type-C to Type-C cable. Too lazy to re-shoot. Sue me.
## Fixating - keeping things together
![edc-kit-tools](/images/blog/edc/2022-12/fixating.png)
1. **around 5m of paracord**
2. **zip ties**
3. **superglue**
4. **velcro cable ties**
5. **duct tape**
## Light - Let there be light
![edc-kit-tools](/images/blog/edc/2022-12/light.png)
1. **Brennenstuhl LED torch PL200**
2. **EMOS Ultimate 50 flashlight**
Seems redundant, but they have their own use cases. The magnet on the torch is great!
## Misc
![edc-kit-tools](/images/blog/edc/2022-12/misc.png)
1. **snip** *(Klein)* - great for stripping cables
2. **telescope magnet** - this is a recent addition. The last screw recovery took way too long without it
3. **plastic razor blades** - if you have to scrape something off a sensitive surface (stickers, glue, etc)
4. **lighter**
5. **1m mini measuring tape** *(Stanley)*
6. **female USB Type-C to various DC connector adapter**
And the gloves in the front. I am a fan of mechanixx gloves, and this pair is great. Thin enough to enjoy precise work, and protective enough against evil cut cable tie ends.
### Spares
![edc-kit-tools](/images/blog/edc/2022-12/spares.png)
1. **various types of batteries**
2. **spare mask, disinfection wipes, plaster**
3. **cash**
4. **small notebook**
5. **spare cage nuts** - nothing is more annoying than forgetting cage nuts, nothing
## Conclusion
So, this is it. All in all, I am pretty happy with my current EDC kit, and I won't change much any time soon - at least that is what I am telling myself.
You have tips, tool suggestions, or questions? - Feel free to reach out!
---

# Online Security Guide
## What is this about?
Let me start with this: **there is no perfect security**. Your goal is to make it as difficult as possible to 'break in', so it is simply not worth it. There is a balance between security and usability, and you must find a good middle ground.
I keep it as short as possible and focus on the 'what' and 'why', not the 'how'. There are many ways to achieve the goals, but this is a topic for itself, and depends on the circumstances.
## "I am not a target" <a href="#i-am-not-a-target" id="i-am-not-a-target">#</a>
Unfortunately, everyone is, and yes, ANYONE can become a victim of cybercrime. Cybercrime is highly lucrative, and criminals become more creative every year. Automation makes it simple to find easy targets or attack a large group of targets.
I'll try to provide you with enough information for safe internet use. If you feel overwhelmed, tackle one topic at a time, and keep improving. **It is never too late to care about your online security**.
## TLDR - 5 most crucial tips <a href="#tldr" id="tldr">#</a>
If you only take away these five things, I will be more than happy. These steps alone take your security to the next level and are crucial. I'll go into more detail later in the post.
1. **Password hygiene**; unique password for every account and a password length of at least 16 characters
2. enable **Multifactor-authentication** (MFA, or 2FA) wherever you can
3. **check twice, click once**; be more careful about what you click
4. **keep your device and software up-to-date**
5. **do not overshare**; everything can and will be used against you
The rest of the post contains the reasoning, examples, and further points.
## Account Security <a href="#account-security" id="account-security">#</a>
### Delete accounts that are no longer required
Archive and delete the account of the service. The account can't get hacked if it does not exist.
### Never share your credentials
You lose control over the account when you share your credentials. Even if you trust the other side, you often enough have no control over the security measures on the other side.
If you need to share credentials, change them as soon as the other side doesn't need them anymore.
### Use a separate email address for logins only
The theory is to treat the secondary email as some kind of password. Communicate 'contact@yourdomain.com' publicly, keep 'wehjcejn@anotherdomain.org' private, and use this second email address only for logins. It is up to you how far you go: different alias, different domain, different account, different provider,...
Having separate email addresses has multiple benefits, but the most important is that brute-force attacks and other methods with your public email address are pointless. The attacker needs the private email address and your password (and your MFA, obviously).
### Provide wrong answers to security questions
Name of your first pet? Keyboard. Childhood nickname? 1513sd_!rg. Be creative.
Answering security questions truthfully makes you vulnerable to social engineering attacks. If you answer them truthfully, the attacker could gather information via social media and other platforms to answer those 'security questions'. Please keep in mind to document your fake answers in a secure place and do backups.
---
## Password Security <a href="#password-security" id="password-security">#</a>
**Summarized: Generate and store a random and unique 16+ characters password for every account in your password manager.**
### Use a unique password per account
**Account breaches are inevitable**. There will be leaks, and user data will go public, which is out of your control. Vulnerabilities, rogue employees, misconfiguration, and a thousand ways how that can happen.
Imagine you have the same email and password on every service. If only one service leaks your credentials, attackers gain access to all your accounts. As mentioned before, automation makes it easy to find out and lock you out quickly.
Having a unique password for every service **limits the damage to the breached service**. Another benefit is that you do not have to change the credentials of all accounts if a single service leaks your credentials.
**Side note**: variations of a secure password don't count. `securepassword1`, `securepassword2` and `securepassword3` might be unique, but not secure. Just generate them randomly with your password manager.
### Use a sufficient password length
Obligatory xkcd comic:
![xkcd-password-936](/images/blog/xkcd-password-936.png)
[Source](https://xkcd.com/936/)
Complexity is good, length is great, and the combination of both is king. No matter the complexity, every password with fewer than 10 characters should be considered insecure. 12+ characters is a must, and I'd rather recommend 16+ characters. And why not more? - If you use a password generator, nothing speaks against a 30+ character password.
**Side note**: passphrases are great too, and they can be used for temporary passwords, where copy and paste is not an option. `dolphin chase mall nightmare` as a passphrase is secure enough, and easy to remember or share over the phone (I know, I know, not best practice, but sometimes there is no other way).
### Use a password manager
There are various solutions for every use case. Know your needs: offline availability, mobile-friendly, self-hosted or managed solution, open-source or proprietary, and so on.
Every solution has pros and cons. Knowing them is half the battle.
**Important**: Do regular backups of your password database. Most solutions provide such an option - use it. Don't forget to keep the backups encrypted.
### Generate random passwords
I think I've mentioned it before, but just to be sure: generate random and long passwords. Using personal information for password creation makes the password easy to guess.
The same applies to passphrases; `firstname lastname 2022` is long, but not secure (assuming the attacker knows a little about you).
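If you ever need a quick throwaway password and no password manager is at hand, the command line can help. A minimal example - one option among many, and the length is just an example:

```bash
# 24 random bytes, Base64-encoded - results in a 32-character password
openssl rand -base64 24
```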
### Keep it in a secure place
Self-explanatory; even the password manager needs a master password, which should not be written on a post-it and stuck on the monitor.
### *Controversial*: changing passwords regularly
Companies love - or sometimes have - to force their employees to change their passwords every `n` months. Anyone who had to endure it knows that this rather encourages bad password choices: `winter2022`,`spring2023`,`summer2023`, and so on.
It does not hurt to change passwords regularly, but it is not worth the hassle, and you should be fine if you follow the other tips.
---
## Multi-/2-factor authentication <a href="#mfa" id="mfa">#</a>
This authentication method requires the user to provide two or more factors to access the desired service. Those factors can be: **knowledge** (something you know (e.x. pin, password, security question)), **possession** (something you have (e.x. security token, security key, second device)), and **inherence** (something you are (e.x. fingerprint, iris)).
MFA protects you from various attacks and risks. Even if the attacker knows your email/username and password, they wouldn't be able to log into your account without the second factor.
#### Something you have
**Side note**: this applies to digital and hardware access.
**Recommended**: TOTP (Time-based One-time password):
: in short: the service provides you with a secure string, this secure string must be inserted into a TOTP generator, and that generator generates a new PIN every 30 seconds based on the current time and the secure string. There are mobile apps, password managers, and desktop programs that can do it (a minimal command-line sketch follows below).
: **Important**: keep the secure string private, and do your backups!
: Another way to generate TOTPs is to use hardware tokens. The process is slightly different, depending on the vendor you use.
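Just to make the mechanism more tangible: if `oathtool` (from the oath-toolkit package) is installed, you can generate the current PIN for a secure string on the command line. The secret below is only a placeholder:

```bash
# prints the current 6-digit TOTP for the Base32-encoded secret (placeholder value)
oathtool --totp -b "JBSWY3DPEHPK3PXP"
```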
**Recommended**: Hardware keys:
: plug it into the device, add the key to the service of your choice, and with the next login, the service will request you to press the button on the key to verify that you are in possession of the authorized key.
: **Important**: I recommend buying a second one as a backup. Some vendors provide tools to copy the configuration/ secrets to another key, or simply add both keys to the service.
Email-based MFA:
: maybe the most common method is MFA over email. You either get sent a verification link or a PIN to confirm your access to the email address. It has its own risks, since a breached email account could cause more 'damage'.
MFA over text message:
: same as Email-based MFA, but over text. It is **not** recommended to use this, when other options are available. Still, better than no MFA.
Push notifications to other devices/ sessions:
: in this case, you have to confirm a new login or activity on another device or session already verified in the system.
Certificates:
: user or device certificates can be created, and installed on a device. You can now limit access to a service to devices with a valid certificate that the service trusts. You can rarely find this on personal services, but I wanted to add it.
Smart cards:
: there can be special smart cards for your device, or USB smart cards. You add the smart card to the service as a trusted smart card, and you can log in as long as the smart card is connected.
: **Side note**: some hardware keys can be configured to act like a smart card, but it depends on the model.
#### Something you are
I won't go into detail, but here are some ways of biometric authentications: fingerprint scanning, facial recognition, voice recognition, iris/retinal scan, vein scan, hand geometry, and there are many more.
I've read somewhere that **biometric features should be considered usernames** rather than passwords and I agree.
First, they are more or less **not private**. There are multiple presentations in which they show how to get enough information about a fingerprint from a picture (!) to reconstruct it and successfully authorize a login with it. (I can't find the link to the video, sorry!) Second, you **can't change it**. You can't change your fingerprint, your iris, and so on.
A 'password' that is not private and cannot be changed is not secure.
There are more security, accessibility, and privacy concerns, but those are out of the scope of this post.
#### Something you know
Security questions:
: you set answers to security questions in advance and have to provide those answers to gain access to certain resources and so on.
PIN:
: just a simple PIN, besides the password.
---
**Important**: I cannot stress enough how important backups are. Even though MFA is a must and brings your online security to the next level, there is a legit risk of getting locked out if you lose access to the second factor.
## Do not overshare <a href="#over-sharing" id="over-sharing">#</a>
I might be paranoid, but the internet can be a dangerous place. As the police would say: '**everything you say can and will be used against you**'. This section relates to targeted rather than automated attacks.
In the age of social media, we do not speak enough about oversharing. The danger of getting doxxed or targeted increases with every piece of information you share. The easiest example would be someone who brags about cryptocurrency earnings and immediately gets targeted by group X, which specializes in certain attacks.
Something you can do is **lie, share wrong information about yourself, use an alias**, and so on. It depends on the platform, but regularly **deleting old posts** can prevent further information gathering in the future.
Be skeptical and keep in mind: **the internet does not forget**.
## Check twice, click once <a href="#check-twice" id="check-twice">#</a>
The best security strategy is worthless if someone clicks and downloads anything negligently.
It also applies here: be skeptical. If it seems too good to be true, it often is.
To provide some examples, here are two ways to deal with suspicious messages. First, **verify the request over a different channel and do not use the contact information from the suspicious message**. For example, ask your boss over the phone whether you really should send the money to this new client - just in case their email account is compromised. Second, if you receive a suspicious message from service provider X, **do not click on any links**. Instead, open your browser, log in to provider X's service, and confirm the request there, or simply call them. Only click on links if it is really necessary.
**Side note**: anything you did not expect or that is out of the norm can be considered suspicious.
Being careful is an important part of being secure online.
## Secure your device <a href="#secure-device" id="secure-device">#</a>
**Keep your operating system, browser, antivirus, and everything else up-to-date**. I cannot stress enough how important that is.
Use **firewalls, antivirus, and ad-blockers** to block unwanted connections and content.
**Encrypt** everything you can to limit the damage of a security incident and protect your critical data.
Do **regular backups** to prevent data loss. That includes hardware damage, mal-/ransomware, theft, and so on. Store them in a secure place.
So, **VPN services**. In the end, it is a paid man-in-the-middle that masks/hides your activity from your ISP and your origin from the destination. But everything you hide from the ISP can be seen by the chosen VPN provider. It is simply a shift of trust.
I personally would recommend the use of a VPN, since the benefits outweigh the risks, but a VPN is not the high-end security solution that many providers promise to deliver. You can still download malware, your credit card information can still be stolen, and you can still be tracked.
Do your research. There are good and bad VPN providers, and NEVER use free VPN or proxy providers!
In the end, I have to mention **Tor**. Tor routes your traffic through a network of nodes and makes it almost impossible to trace back. It is an important tool, but I am afraid that a detailed description is out of the scope of this post.
## Conclusion <a href="#conclusion" id="conclusion">#</a>
So, I hope I could provide some new ideas on how to protect your online activity. Just start with the five most important points that I showed at the start, and tackle other topics later. And keep in mind, there is no perfect security - just making attacks more difficult and limiting the damage in case of a security incident.
Questions:
: Should I add more examples, or is it already too long?
: Should I add recommendations, or should this be a neutral guide? Could be seen as bias and promotion.
: Should I write more about Tor?
: Should I write more about the risks of doxing, ransomware, theft, and how the tips help against it?
: Should I add labels like 'must', 'important', 'optional', and so on?
**Feel free to reach out to send questions, more tips, different topics, and so on. I'd appreciate your feedback. The guide will be updated accordingly.**
---

# Guide to Wireshark display filters
# The goal of this post
This post is a quick reference for using the display filters in Wireshark. The display filter is used to filter a packet capture file or live traffic, and it is essential to know at least the basics if you want to use Wireshark for troubleshooting and other evaluations.
In this post, I'll focus on the display filters for IPv4 only. Wireshark offers a wide range of tools that are out of this post's scope. IPv6 will be added at some point.
There is no way to list every filter, and I try to concentrate on the most commonly used ones. In general, it is recommended to use the right-click function to add specific protocols/ fields/ values, etc, to the filter.
![filter-selection](/images/blog/wireshark-filter-selection.png)
Nevertheless, a list of all display filters can be found [here](https://www.wireshark.org/docs/dfref/). I've added links to the specific category to every protocol in the rest of the post.
If you think I forgot something important or want to share more tips, feel free to reach out. I'd appreciate it, and I am happy to learn.
In an attempt to keep it to the basics, I left out topics like functions, variables, macros, arithmetic operators, and some other advanced things. As mentioned before, I'll add IPv6 filters, some more context for when I use certain filters, more topics like OSPF, HTTP/S, and others, and some more functions.
## Difference display filter and capture filter
### Capture filter
![capture-filter](/images/blog/wireshark-capture-filter.png)
The capture filter - as the name suggests - is a filter for the capturing of packets itself. With this filter turned on, you can start a packet capture, and everything filtered out won't be saved. This is mainly helpful for long packet captures or for connections/devices with a lot of traffic, and often enough necessary. Capture filters have a different syntax and won't be tackled in this post.
### Display filter
![display-filter](/images/blog/wireshark-display-filter.png)
The display filter hides filtered packets and is mainly used on already saved packet capture files or live traffic.
---
Just so you know the difference when you search for more commands.
## Saving display filters <a href="#saving" id="saving">#</a>
There are two common ways to save filters. They can then be used in later sessions or help you switch between different filters, especially since certain filters can get very long.
### Display filter bookmark
![filter-bookmark](/images/blog/wireshark-display-filter-bookmark.png)
### Display filter buttons
![filter-buttons](/images/blog/wireshark-display-filter-button.png)
## Color of the display filter bar <a href="#color" id="color">#</a>
Green:
: Filter is accepted, syntax is ok
Red:
: Filter is NOT accepted, syntax is wrong
Yellow:
: Filter is accepted, syntax is ok, BUT the filter results might not be clear, e.x. if you reference a field that is present in multiple protocols
: *(haven't found too much information about it)*
## Operators <a href="#operators" id="operators">#</a>
### Logical operators
Evaluation runs from left to right, and expressions can be grouped with parentheses `()`.
Logical `AND`:
: `and` / `&&`
Logical `OR`:
: `or` / `||`
Logical `NOT`:
: `not` / `!`
: e.x. `!ip.src == 10.10.10.1` - this would filter out everything with the source IP of `10.10.10.1`
(Logical `XOR`):
: `xor` / `^^`
: **Side note**: I've read about it multiple times, but it does not work for me. I just 'craft' something like this instead:
: `(x and !y)or(!x and y)`
### Comparison operators
Equal:
: `eq` / `==`
Not Equal:
: `ne` / `!=`
Greater Than:
: `gt` / `>`
Less Than:
: `lt` / `<`
Greater than or Equal to:
: `ge` / `>=`
Less than or Equal to:
: `le` / `<=`
### Content filter
Filters for protocol, field, or slice that contains a specific value:
: `contains`
'Does the protocol or text string match the given case-insensitive Perl-compatible regular expression':
: `matches` / `~`
### Boolean
The following formats are accepted:
```
option == 1
option == True
option == TRUE
option == 0
option == False
option == FALSE
```
### Escape characters
I prefer to use the 'raw string' function, instead of fighting with escape characters:
: `smb.path contains r"\\SERVER\SHARE"`
List of escape sequences:
```
smb.path contains "\\\\SERVER\\SHARE"
\' single quote
\" double quote
\\ backslash
\a audible bell
\b backspace
\f form feed
\n line feed
\r carriage return
\t horizontal tab
\v vertical tab
\NNN arbitrary octal value
\xNN arbitrary hexadecimal value
\uNNNN Unicode codepoint U+NNNN
\UNNNNNNNN Unicode codepoint U+NNNNNNNN
```
# Time filter <a href="#time-filter" id="time-filter">#</a>
`frame.time >= "Dec 23, 2022 17:00:00" && frame.time <= "Dec 23, 2022 17:05:00"`
This filter is a simple time filter. Right-click on `frame.time` / Arrival time in the frame, and add it to the filter to work with it. Directly right-clicking on the 'time' column and applying the filter won't work since it inserts another format. I bet you can configure this, but I never bothered to try.
If you want to add more filters, simply put the time segment into parentheses, and add the new filter after or before it.
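For example, to look only at one client's traffic within that time window, something like this should work (the IP address is just a placeholder):
: `(frame.time >= "Dec 23, 2022 17:00:00" && frame.time <= "Dec 23, 2022 17:05:00") && ip.addr == 10.10.20.50`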
---
**Side note**: I am not sure if I am happy with the following format, and I might change it at some point. It is good enough for now, though.
## Ethernet <a href="#ethernet" id="ethernet">#</a>
[Full reference (eth)](https://www.wireshark.org/docs/dfref/e/eth.html)
You can choose between multiple MAC address formats:
: `aa-bb-cc-dd-ee-ff` *# dash delimiter*
: `aa:bb:cc:dd:ee:ff` *# colon delimiter*
: `aabb.ccdd.eeff` *# Cisco style*
MAC / Ethernet address:
: `eth.addr==aa-bb-cc-dd-ee-ff` *# Source+Destination MAC address*
: `eth.src==aa-bb-cc-dd-ee-ff` *# Source MAC address*
: `eth.dst==aa-bb-cc-dd-ee-ff` *# Destination MAC address*
VLAN:
: `eth.vlan.id==1`
## IP <a href="#ip" id="ip">#</a>
[Full reference (ip)](https://www.wireshark.org/docs/dfref/i/ip.html)
Filter for IP protocol:
: `ip`
Filter IP addresses:
: `ip.addr == 10.10.10.10` *# source+destination IP address*
: `ip.src == 10.10.20.50` *# source IP address*
: `ip.dst == 10.10.20.50` *# destination IP address*
**Side note**: You can filter whole subnets with CIDR notation like `10.10.20.0/24` too.
Filter packet TTL:
: `ip.ttl == 64`
## ICMP <a href="#ICMP" id="ICMP">#</a>
[Full reference (icmp)](https://www.wireshark.org/docs/dfref/i/icmp.html)
Filter for `ICMP`:
: `icmp`
ICMP echo request (ping):
: `icmp.type == 8`
ICMP echo reply (ping):
: `icmp.type == 0`
## ARP <a href="#arp" id="arp">#</a>
[Full reference (arp)](https://www.wireshark.org/docs/dfref/a/arp.html)
Target MAC address:
: `arp.dst.hw_mac`
Sender hardware address:
: `arp.src.hw`
Target IP address:
: `arp.dst.proto_ipv4`
Sender IP address:
: `arp.src.proto_ipv4`
## TCP <a href="#tcp" id="tcp">#</a>
[Full reference (tcp)](https://www.wireshark.org/docs/dfref/t/tcp.html)
Filter for TCP:
: `tcp`
Filter TCP ports:
: `tcp.port == 53` *# source+destination TCP port*
: `tcp.srcport == 68` *# source TCP port*
: `tcp.dstport == 68` *# destination TCP port*
**Side note**: filtering 'TCP streams' is helpful, but it is easier to right-click on the TCP segment and filter there instead of typing in a filter.
### Examples
General troubleshooting for packet loss:
: `tcp.analysis.flags && !tcp.analysis.window_update`
: displays all retransmissions, duplicate ACKs, and other TCP errors. I use this in combination with IP filters to get a feeling for the connection quality.
Look for 3-way-handshakes:
: `((tcp.flags == 0x02) || (tcp.flags == 0x12) ) || ((tcp.flags == 0x10) && (tcp.ack==1) && (tcp.len==0))`
Filter for the TCP reset flag:
: `tcp.flags.reset==1`
## UDP <a href="#udp" id="udp">#</a>
[Full reference (udp)](https://www.wireshark.org/docs/dfref/u/udp.html)
Filter for UDP:
: `udp`
Filter UDP ports:
: `udp.port == 53` *# source+destination UDP port*
: `udp.srcport == 68` *# source UDP port*
: `udp.dstport == 68` *# destination UDP port*
## DHCP <a href="#dhcp" id="dhcp">#</a>
[Full reference (dhcp)](https://www.wireshark.org/docs/dfref/d/dhcp.html)
Filter for DHCP:
: `dhcp`
Filter for the message type (DORA):
: `dhcp.option.dhcp == 1` *# Discover*
: `dhcp.option.dhcp == 2` *# Offer*
: `dhcp.option.dhcp == 3` *# Request*
: `dhcp.option.dhcp == 5` *# Acknowledgement (ACK)*
Search for `hostname`:
: `dhcp.option.hostname == "pleasejustwork"`
Search for various options:
: `dhcp.option.type == 3` *# Search for a specific option number*
: `dhcp.option.dhcp_server_id == 10.10.20.1` *# Option: (54) DHCP Server Identifier*
: `dhcp.option.type == 51` *# Option: (51) IP Address Lease Time*
: `dhcp.option.subnet_mask == 255.255.255.0` *# Option: (1) Subnet Mask (255.255.255.0)*
: `dhcp.option.router == 10.10.20.1` *# Option: (3) Router*
: `dhcp.option.domain_name_server == 9.9.9.9` *# Option: (6) Domain Name Server*
: I won't list all of them, but you can find all options [here](https://www.wireshark.org/docs/dfref/d/dhcp.html).
### Examples
Search for a DHCP discover message of specific MAC address:
: `(dhcp.hw.mac_addr == aa:bb:cc:dd:ee:ff) && (dhcp.option.dhcp == 1)`
: `(eth.src == aa:bb:cc:dd:ee:ff) && (dhcp.option.dhcp == 1)`
Finding rogue DHCP server:
: `dhcp && !dhcp.option.dhcp == 1 && !dhcp.option.dhcp_server_id == 10.10.20.1`
: it is DHCP, it is not a discover message, and is not our DHCP server for this network
: `(udp.dstport == 68) && !(dhcp.option.dhcp_server_id == 10.10.20.1)`
: this is another option to check for the dst port '68' and filter out our DHCP server
Check whether other DNS servers are being handed out:
: `dhcp.option.dhcp == 2 && !(dhcp.option.domain_name_server == 9.9.9.9) && !(dhcp.option.domain_name_server == 149.112.112.112)`
## DNS <a href="#dns" id="dns">#</a>
[Full reference (dns)](https://www.wireshark.org/docs/dfref/d/dns.html)
Filter for DNS traffic:
: `dns`
Filter for DNS queries:
: `dns.flags.response == 0`
Filter for DNS responses:
: `dns.flags.response == 1`
Filter for the domain in DNS queries:
: `dns.qry.name == "ittavern.com"`
Filter common DNS records:
: `dns.qry.type == 1` *# `A` record*
: `dns.qry.type == 28` *# `AAAA` record*
: `dns.qry.type == 16` *# `TXT` record*
: `dns.qry.type == 5` *# `CNAME` record*
: `dns.qry.type == 33` *# `SRV` record*
: `dns.qry.type == 15` *# `MX` record*
: `dns.qry.type == 2` *# `NS` record*
Filter for the DNS server answer:
: `dns.a == 94.130.76.189` *# answer of an `A` record*
: `dns.txt == "v=spf1 include:spf.protection.outlook.com -all"` *# answer of a `TXT` record request*
: and so on
### Examples
Look up what DNS servers are used:
: `(ip.dst == 10.64.0.1) && (dns)`
Show only DNS traffic of one client:
: `dns && (ip.dst==10.10.20.1 or ip.src==10.10.20.1)`
Check for slow responses:
: `dns.flags.rcode == 0 && dns.time > .3` *# might need some fine-tuning depending on the environment*
Show DNS requests that couldn't be resolved:
: `dns.flags.rcode != 0`
---

# Visual guide to SSH tunneling and port forwarding
To make it quick, I wish I had known about port forwarding and tunneling earlier. With this blog post, I try to understand it better myself and share some experiences and tips with you.
**Topics**: use cases, configuration, SSH jumphosts, local/remote/dynamic port forwarding, and limitations
## Use cases <a href="#use-cases" id="use-cases">#</a>
SSH tunneling and port forwarding can be used to forward TCP traffic over a secure SSH connection from the SSH client to the SSH server, or vice versa. TCP ports or UNIX sockets can be used, but in this post I'll focus on TCP ports only.
I won't go into details, but the following post should show enough examples and options to find use in your day-to-day work.
Security:
: encrypt insecure connections (FTP, other legacy protocols)
: access web admin panels via secure SSH tunnel (Pub Key Authentication)
: having potentially fewer ports exposed (only 22, instead of an additional 80/443)
Troubleshooting:
: bypassing firewalls/content filters
: choosing different routes
Connection:
: reach server behind NAT
: use jumphost to reach internal servers over the internet
: exposing local ports to the internet
There are many more use cases, but this overview should give you a sense of possibilities.
# Port forwarding
Before we start: the options in the following examples can be combined and configured to suit your setup. As a side note: if the `bind_address` isn't set, localhost will be the default.
## Configuration / Preparation <a href="#configuration" id="configuration">#</a>
* The **local and remote users must have the necessary permissions** on the local and remote machines respectively to open ports. **Ports below 1024 require root privileges** - if not configured differently - and the rest of the ports can be used by standard users.
* **configure clients and network firewalls accordingly**
SSH port forwarding must be enabled on the server:
: `AllowTcpForwarding yes`
: *It is enabled by default, if I recall correctly*
If you forward ports on interfaces other than 127.0.0.1, then you'll need to enable `GatewayPorts` on the SSH server:
: `GatewayPorts yes`
Remember to **restart the ssh server service**.
## SSH jumphost / SSH tunnel <a href="#jumphost" id="jumphost">#</a>
Transparently connecting to a remote host through one or more hosts.
`ssh -J user@REMOTE-MACHINE:22 -p 22 user@10.99.99.1`
![ssh-port-forwarding-info](/images/blog/ssh-jh-1.png)
**Side note**: The port specification can be removed if the default port 22 is used!
On REMOTE-MACHINE as jumphost:
```markdown
[user@REMOTE-MACHINE]$ ss | grep -i ssh
tcp ESTAB 0 0 167.135.173.108:ssh 192.160.140.207:45960
tcp ESTAB 0 0 10.99.99.2:49770 10.99.99.1:ssh
```
Explanation:
: `167.135.173.108` - public IP of REMOTE-MACHINE
: `192.160.140.207` - public IP of LOCAL-MACHINE
: `10.99.99.2` - internal IP of REMOTE-MACHINE
: `10.99.99.1` - internal IP of REMOTE-WEBAPP
#### Using multiple jumphosts
Jumphosts must be separated by commas:
: `ssh -J user@REMOTE-MACHINE:22,user@ANOTHER-REMOTE-MACHINE:22 -p 22 user@10.99.99.1`
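If you use a jumphost regularly, the same setup can be made persistent in `~/.ssh/config` via `ProxyJump`. A minimal sketch with the names from the examples above (the host alias is made up):

```
# ~/.ssh/config
Host webapp-via-jump
    HostName 10.99.99.1
    User user
    ProxyJump user@REMOTE-MACHINE:22
```

Afterwards, `ssh webapp-via-jump` behaves like the `-J` example above, and multiple jumphosts can be chained as a comma-separated list in `ProxyJump` as well.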
## Local Port Forwarding <a href="#local-port-forwarding" id="local-port-forwarding">#</a>
#### Example 1
`ssh -L 10.10.10.1:8001:localhost:8000 user@REMOTE-MACHINE`
![ssh-port-forwarding-info](/images/blog/ssh-lpf-1.png)
Access logs of the webserver on REMOTE-MACHINE that only listens on 127.0.0.1:
: `127.0.0.1 - - [30/Dec/2022 18:05:15] "GET / HTTP/1.1" 200`
: the request originates from LOCAL-MACHINE
#### Example 2
`ssh -L 8001:10.99.99.1:8000 user@REMOTE-MACHINE`
![ssh-port-forwarding-info](/images/blog/ssh-lpf-2.png)
Access logs of the webserver on REMOTE-WEBAPP:
: `10.99.99.2 - - [30/Dec/2022 21:28:42] "GET / HTTP/1.1" 200`
: the request originates from the intern IP of LOCAL-MACHINE (10.99.99.2)
## Remote Port Forwarding <a href="#remote-port-forwarding" id="remote-port-forwarding">#</a>
#### Example 1+2
`ssh -R 8000:localhost:8001 user@REMOTE-MACHINE`
![ssh-port-forwarding-info](/images/blog/ssh-rpf-1.png)
`ssh -R 8000:10.10.10.2:8001 user@REMOTE-MACHINE`
![ssh-port-forwarding-info](/images/blog/ssh-rpf-2.png)
#### Example 3
`ssh -R 10.99.99.2:8000:10.10.10.2:8001 user@REMOTE-MACHINE`
![ssh-port-forwarding-info](/images/blog/ssh-rpf-3.png)
**Important**: `GatewayPorts yes` must be enabled on the SSH server to listen on another interface than the loopback interface.
## Dynamic port forwarding <a href="#dynamic-port-forwarding" id="dynamic-port-forwarding">#</a>
To forward more than one port, SSH uses the [SOCKS](https://en.wikipedia.org/wiki/SOCKS) protocol. This is a transparent proxy protocol, and SSH makes use of the most recent version, SOCKS5.
The default port for a SOCKS5 server is 1080, as defined in [RFC 1928](https://datatracker.ietf.org/doc/html/rfc1928).
The client must be configured correctly to use a SOCKS proxy, either on the application or OS level.
#### Example
`ssh -D 10.10.10.1:5555 user@REMOTE-MACHINE`
![ssh-port-forwarding-info](/images/blog/ssh-dpf-1.png)
Use `curl` on a 'LOCAL' client to test the correct connection/path:
: `curl -L -x socks5://10.10.10.1:5555 brrl.net/ip`
: *If everything works out, you should get the public IP of the REMOTE-MACHINE back*
## SSH TUN/TAP tunneling
I won't go into detail, but you can create a bi-directional, VPN-like tunnel over TUN devices with the `-w` flag. The interfaces must be created beforehand, and I haven't tested it yet.
`-w local_tun[:remote_tun]`
## How to run SSH in the background <a href="#background" id="background">#</a>
The native way to run the tunnel in the background would be `-fN`:
: `-f` - run in the background
: `-N` - no shell
`ssh -fN -L 8001:127.0.0.1:8000 user@REMOTE-MACHINE`
Other than that: use screen, tmux, or some other tool.
#### Stop the SSH running in the background
```markdown
user@pleasejustwork:~$ ps -ef | grep ssh
[...]
user 19255 1 0 11:40 ? 00:00:00 ssh -fN -L 8001:127.0.0.1:8000 user@REMOTE-MACHINE
[...]
```
Kill the process with the PID:
: `kill 19255`
## Keep SSH connection alive
I won't go into detail, but there are different ways to keep the SSH connection alive.
#### Handle timeouts with heartbeats
The following options are set on the server side in `sshd_config`; the client-side counterparts are `ServerAliveInterval` and `ServerAliveCountMax` in `ssh_config` (see the sketch after this list).
`ClientAliveInterval` will send a request every `n` seconds to keep the connection alive:
: `ClientAliveInterval 15`
`ClientAliveCountMax` is the number of heartbeat requests sent without receiving a response from the other side before the connection is terminated:
: `ClientAliveCountMax 3`
: `3` is the default, and setting it to `0` will disable connection termination. In this example, the connection would drop after around 45 seconds without any response.
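A minimal sketch of both sides with the values from above (adjust them to your needs):

```
# /etc/ssh/sshd_config - server side
ClientAliveInterval 15
ClientAliveCountMax 3

# ~/.ssh/config - client-side counterpart
Host *
    ServerAliveInterval 15
    ServerAliveCountMax 3
```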
#### Reconnecting after termination
There are multiple ways to do it: autossh, scripts, cronjobs, and so on.
This is beyond the scope of this post, and I might write about it in the future.
## Limitations <a href="#limitations" id="limitations">#</a>
#### UDP
SSH depends on reliable delivery to be able to decrypt everything correctly. UDP does not offer any reliability, so it is not supported, and tunneling it over SSH is not recommended.
That said, there are ways to do it as described in [this post](http://zarb.org/~gc/html/udp-in-ssh-tunneling.html). I still need to test it.
#### TCP-over-TCP
It lowers the throughput due to more overhead and increases the latency. On connections with packet loss or high latencies (e.x. satellite) it can cause a [TCP meltdown](https://openvpn.net/faq/what-is-tcp-meltdown/).
[This post](http://sites.inka.de/sites/bigred/devel/tcp-tcp.html) is a great write-up.
Nevertheless, I'd been using OpenVPN-over-TCP for a while, and it worked flawlessly. Less throughput than UDP, but reliable. So, it highly depends on your setup.
#### Not a VPN replacement
Overall, it is not a VPN replacement. SSH tunneling can be used as such, but a VPN is better suited and offers better performance.
#### Potential security risk
If you do not need those features, it is recommended to turn them off. Threat actors could use said features to bypass firewalls and other security measures.
---
General links:
: [SSH manual](https://www.man7.org/linux/man-pages/man1/ssh.1.html)
: [sshd_config manual](https://www.man7.org/linux/man-pages/man5/sshd_config.5.html)
The inspiration for this blog post came from the following [unix.stackexchange answer](https://unix.stackexchange.com/a/115906) and [blog post of Dirk Loss](http://dirk-loss.de/ssh-port-forwarding.htm).
Thanks to Frank and ruffy for valuable feedback!
---

# Linux - unmount a busy target safely
# Goal - removing a target without data loss
Unplugging or `umount -l` (lazy unmount) can cause data loss. I want to share a way to avoid that.
**Side note**: `umount -l` will let you unmount the device, but as far as I know it only 'hides' the mount point, and active processes can still write to said device.
#### The problem
`Error unmounting /dev/sdc1: target is busy`
So, there are different ways to unmount the target safely.
**Side note**: the most common case is that you are still in a directory of said target. It happened way too often to me.
## Preparation
Those steps are not necessary, but help you troubleshoot.
#### Finding the mount point
We are going to use `df -h` to find the mount point of the busy target. It is often not necessary, but it can be helpful.
```bash
kuser@pleasejustwork:~$ df -h
[...]
/dev/sdc1 59G 25G 35G 42% /media/kuser/hdd-target
```
#### Check if the device is still actively in use
Additionally, you can check the activity of said device with [iostat](https://www.man7.org/linux/man-pages/man1/iostat.1.html).
```bash
kuser@pleasejustwork:~$ iostat 1 -p sdc
vg-cpu: %user %nice %system %iowait %steal %idle
1,14 0,00 0,63 2,66 0,00 95,56
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
sdc 13,00 0,00 7,50 0,00 0 7 0
sdc1 13,00 0,00 7,50 0,00 0 7 0
```
`iostat` is powerful, but in this case the most important columns are `kB_read/s kB_wrtn/s`. If there is anything but `0,00`, the device is in use.
If there is any activity and you unplug or unmount the device forcefully, data loss will most likely occur.
## Finding the process
### Using 'fuser'
More information can be found in the [manual of 'fuser'](https://linux.die.net/man/1/fuser).
```bash
kuser@pleasejustwork:~$ fuser -vm /dev/sdc1
USER PID ACCESS COMMAND
/dev/sdc1: root kernel mount /media/kuser/hdd-target
kuser 43966 F.c.. kate
kuser 44842 ..c.. kate
```
I prefer 'fuser' since it is installed on most systems and does the job too.
### Using 'lsof'
More information can be found in the [manual of 'lsof' (list open files)](https://linux.die.net/man/8/lsof).
```bash
kuser@pleasejustwork:~$ lsof /dev/sdc1
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kate 43966 kuser cwd DIR 8,33 32768 1 /media/kuser/hdd-target
kate 43966 kuser 24w REG 8,33 142 2176 /media/kuser/hdd-target/.busybusy.txt.kate-swp
```
or
```bash
kuser@pleasejustwork:~$ lsof /media/kuser/hdd-target
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kate 43966 kuser cwd DIR 8,33 32768 1 /media/kuser/hdd-target
kate 43966 kuser 24w REG 8,33 142 2176 /media/kuser/hdd-target/.busybusy.txt.kate-swp
```
### Kill process / close program
Kill the process by PID - consider a plain `kill` (SIGTERM) first so the program can exit cleanly, and use `-9` (SIGKILL) only as a last resort:
: `sudo kill -9 43966`
or simply close the program that is using the file in the GUI.
### Unmount
Try to unmount again.
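For example, with the device and mount point from the earlier output:

```bash
# flush cached writes first, then unmount by device or by mount point
sync
umount /dev/sdc1
# or: umount /media/kuser/hdd-target
```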
## Some other methods
I have not run into these cases yet, but they are worth mentioning.
Some other things to look into:
: check the swap partition: `cat /proc/swaps`
: stop nfs-kernel-server
: stop samba/smb server
: check for symbolic links
There are more scenarios in which a target can be busy, but this should cover 95% of cases.
---

# SSH - run script or command at login
There are multiple use cases for running a script at login: configuration, starting services, logging, sending a notification, and so on. I want to show you different ways to do so.
#### Example script
The example script will notify me via push notification on my smartphone as soon as a new SSH connection is established. You can use a simple command or a script, and I will use a script for this blog post.
`/path/to/script/notify-at-login.sh`
```bash
#!/bin/bash
# 1 - Script without output!
# IMPORTANT: Scripts with output break non-interactive sessions (scp, rsync, etc)
curl -d "\"$SSH_CONNECTION\" - \"$USER\" logged in" ntfy.sh/reallyecurestringfornotifications >/dev/null 2>&1
# If you only want to run the script for an interactive SSH login and need the output displayed, place the script right after section 2 and remove the redirect.
# 2 - Check if session is non-interactive (remote command, rsync, scp, etc)
if [[ $SSH_ORIGINAL_COMMAND ]]; then
eval "$SSH_ORIGINAL_COMMAND"
exit
fi
# 3 - choose your favorite shell for the SSH session
/bin/bash
```
Remember to make it executable:
: `sudo chmod +x /path/to/script/notify-at-login.sh`
**Side note**: I am using [ntfy](https://github.com/binwiederhier/ntfy) to send push notifications to my smartphone. In this example, the push notification would look like this:
`92.160.50.201 40248 195.21.0.14 22 - <user> logged in`
#### Output on non-interactive connections
Just a reminder that you have to avoid any output of your script or command on non-interactive connections like rsync. Either prevent output from being displayed for non-interactive connections or all connections. The example script shows you one way to do so.
## ForceCommand
I prefer this method, and it has been working pretty well so far. The command is run by the logging-in user and can't really be bypassed by the client.
Use the `ForceCommand` option in your `/etc/ssh/sshd_config` file to run the script:
: `ForceCommand /path/to/script/notify-at-login.sh`
`ForceCommand` ignores any command or script supplied by the client and `~/.ssh/rc` by default.
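If you don't want the script to run for every login, `ForceCommand` can also be placed inside a `Match` block - a sketch, assuming a group called `notify-users` exists:

```
# /etc/ssh/sshd_config
Match Group notify-users
    ForceCommand /path/to/script/notify-at-login.sh
```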
## PAM_exec
Put the script into a new directory `/etc/pam_scripts`, set the directory's permissions to `0755`, and make `root` the owner and group. The file's permissions must be `0700`, it must be executable, and the owner and group must be `root` as well.
Directory:
: `sudo mkdir /etc/pam_scripts`
: `sudo chmod 0755 /etc/pam_scripts`
: `sudo chown root:root /etc/pam_scripts`
Script:
: `sudo chmod 0700 /etc/pam_scripts/notify-at-login.sh`
: `sudo chown root:root /etc/pam_scripts/notify-at-login.sh`
Enable `UsePAM` in the `/etc/ssh/sshd_config`:
: `UsePAM yes`
Tell PAM to run the script at SSH login by adding the following line to `/etc/pam.d/sshd`:
: `session required pam_exec.so /etc/pam_scripts/notify-at-login.sh`
All scripts added to the `/etc/pam_scripts/` directory will be run as `root` at login.
## Shell startup & sshrc file
You can run the script from your preferred startup file (`.profile` / `.bashrc`, etc) or use the SSH-specific rc files that additionally run before the user's shell is loaded.
For all users:
: `/etc/ssh/sshrc` *# runs only if there is no user-specific configuration file `~/.ssh/rc`*
Per user configuration in home dir:
: `~/.ssh/rc`
```markdown
~/.ssh/rc
Commands in this file are executed by ssh when the user
logs in, just before the user's shell (or command) is
started. See the sshd(8) manual page for more information.
```
Run the script via the startup file by adding the following line to it:
: `. /path/to/script/notify-at-login.sh`
Both the shell startup and sshrc files will be run by the user.
**Side note**: if security is a concern - like a login notification - it is not recommended to use this method. Profile config files can be bypassed with `ssh user@server bash --norc --noprofile`, and `~/.ssh/rc` can be changed by the user after the first login.
---

# Backup Guide - how to secure crucial data
This guide tries to share thoughts about various backup strategies, risks, storage mediums, and other things to consider. I won't go into technical details or suggest any tools since every backup strategy must be created individually, and there is a wide range of requirements. I rather want to give you some kind of checklist with things to think about. **There is no perfect strategy, solution, or template that fits all needs.**
I've tried to keep this guide accessible for personal and corporate backups.
# WHY DO YOU NEED BACKUPS
The main goal of backups is data loss prevention. There are numerous risks that could cause data loss, and we try to prevent them with a backup strategy that fits our needs. I'll go into more detail in the next section.
#### Risks <a href="#risks" id="risks">#</a>
The following risks exist for data in production and for your backups! - There are many more, but this section will give you a feeling of the most common risks.
Environment threats:
: flooding/water/humidity
: fire/high temperature
: earthquake/shock
: EMP/Electricity
Human errors:
: loss of a device
: misconfiguration
: unintentional `sudo rm -rf / --no-preserve-root`
: lost access (password,key,...)
Threat actors:
: ransomware
: rogue employees
: hardware theft
: data tampering
Hardware/software:
: hardware failure
: bugs
: bitrot
Some 'disasters' affect only a single hard drive, some devices, or the whole network. A decent backup strategy mitigates those risks and helps to recover as fast as possible.
**Side note**: Backups do not prevent those risks, but minimize the damage and help to recover from them.
#### RAID/snapshots are no backups! <a href="#raid-is-not-a-backup" id="raid-is-not-a-backup">#</a>
**RAID** - *redundant array of independent disks* - is a method to either increase the performance, the availability and resiliency, or both. Misconfigured, it can even cause more damage; for example, a RAID0 can make the whole array useless after a disk failure. Don't let me get started with broken hardware RAID controllers or RAID expansions.
It protects against one of the most common data loss reasons: disk failure. It does not help you in case of human errors, ransomware, file corruption, and other use cases in which a backup would normally help you. And yeah, data recovery, in general, is not a function of RAID.
**A RAID is not a backup.**
**Snapshots** are short-term roll-back solutions in case of an update failure, system misconfiguration, and other critical measures. They are not independent of the VM environment, they are often stored on the same disk as the server, and they are still a single point of failure. Since most snapshot solutions are not application-aware, data corruption of databases or other applications can occur if writes are in progress while the snapshot is created.
Snapshots, therefore, should not be considered a valid backup!
Both solutions can be part of your backup strategy but can't replace a regular backup.
# Determine what to backup and why <a href="#what-to-backup" id="what-to-backup">#</a>
What and why you backup specific files highly depend on your needs. It is helpful to have an inventory of critical infrastructure to determine what to backup.
Furthermore, it is helpful to categorize data: **System data** (e.g. operating system), **application data** (e.g. configuration files), and **operational data** (e.g. data sheets, databases, emails). Operational data is the most important since it is necessary for daily business. In this step, it is also recommended to check the size and kind of data to plan the backup storage requirements.
**Side note**: I am not too familiar with certain laws or compliances like HIPAA, SOC2, PCI DSS, and so on, but talking with legal might be a good idea.
It is essential to know some processes and communicate with different departments. What is business critical, and what could wait in case of a disaster? And nobody needs a working frontend when the databases are not up and running. Knowing the processes will help you to avoid problems in the recovery phase. That said, those problems should be apparent when you test your recovery procedure.
Another category is the frequency with which the data gets updated. An example would be: frequent (e.g. databases), rare (e.g. static content like an intranet or docs), and already archived data (important, but doesn't have to be recovered immediately).
Remember to provide some kind of backup solution for devices like laptops and smartphones.
# Data Retention Policy <a href="#data-retention-policy" id="data-retention-policy">#</a>
With the Data Retention Policy, we try to specify how long to retain certain data. There are various factors you should consider: usefulness, compliance, laws, and so on.
Some system data, like old configuration files, can be deleted after a short time, but operational data, like invoices or contracts, must be stored for five or more years.
**Side note**: as mentioned before, this highly depends on your setup, and speaking to the relevant departments is recommended.
#### Backup/data deletion <a href="#data-deletion" id="data-deletion">#</a>
Deleting data or backups seems hardly worth talking about, but data can easily be recovered if it is not done correctly.
The methods differ from medium to medium. The most secure way would be to destroy the medium properly. Re-writing the medium with random ones and zeros multiple times and/or doing full encryption and destroying the key would be options if you want to resell the medium. Other than that, special tools can be used that differ from medium to medium.
Some laws/compliances require you to destroy data in a certain way. To make sure, speak with legal or a specific contact person.
To be secure, store your backups encrypted in the first place.
# Decide the backup frequency <a href="#data-frequency" id="data-frequency">#</a>
The frequency of your backups will determine the impact of a disaster in terms of data loss. The more frequently you do backups, the less data is lost when a disaster occurs. There are two metrics you could consider: **RTO** and **RPO**.
![explanation-rto-rpo](/images/blog/backup-rto-rpo.png)
#### RPO (Recovery Point Objective)
With the RPO, we want to determine how much data loss can be tolerated in case of a disaster. An RPO of 12 hours requires you to do two backups a day to fulfill this requirement. Daily backups wouldn't be sufficient if the RPO were 3 hours.
Every system can have its own RPO.
#### RTO (Recovery Time Objective)
With the RTO, we want to determine the maximum tolerable amount of downtime after **any** disaster. An RTO of 3 hours means that the system needs to be productive again within 3 hours after a disaster. Some metrics to consider: cost per hour of downtime and external requirements like laws or contracts.
Like the RPO, every system can have its own RTO, and the RTO clock stops when the data is recovered and the system is up again.
# Document everything <a href="#documentaion" id="documentaion">#</a>
As in so many areas: documentation is king.
- what and why do you backup
- the frequency of backups
- the backup process
- the access to the backups
- the recovery process
It will be hectic and stressful if the disaster recovery plan (DRP) or backup plan is needed, so the better the documentation is, the faster you can recover your systems.
Something that should not be overlooked is a **contact list**. Which people must be contacted to recover data, and how can we reach them? Where is the offsite backup stored, and, for example, how can we reach the bank? This will save a lot of time.
Don't forget to **store** the documents **securely, but accessibly** - detached from the backup, e.g. printed out or on a USB stick in a safe.
# How to backup! <a href="#how-to-backup" id="how-to-backup">#</a>
As mentioned before, there is no perfect solution, and you must find a backup strategy that works for you. Like everything, it has pros and cons, and you have to decide what works for you. I'll show you some points to consider.
#### 3-2-1 rule <a href="#3-2-1-rule" id="3-2-1-rule">#</a>
I want to start with the well-known **3-2-1 rule**:
: have **3** copies of your data
: have **2** different storage methods/mediums
: have **1** copy offsite
The 3-2-1 rule should be considered the bare minimum of every backup strategy. I'll go into more detail in the following points.
#### Have multiple copies of your data <a href="#multiple-copies" id="multiple-copies">#</a>
Who would have known? But just to be sure, consider some points.
Sounds obvious, but avoid storing backups of a system on the same system or storage.
Spread copies over multiple mediums and use different methods. Every storage medium/method has its risks, and having copies on multiple mediums increases the resiliency overall.
#### Locations <a href="#locations" id="locations">#</a>
Make use of **different locations**.
Some examples would be:
- store a full backup in a bank vault or a different trusted location
- store backups from data center A in data center B, and vice versa
- store a backup in the cloud
Just make sure that you can access the offsite backups whenever you need to, and factor this into your strategy.
#### Encrypt backup storage and transfer <a href="#encryption" id="encryption">#</a>
This is especially important for offsite backups but can be necessary for local backups too. Make sure that you use a **secure encryption method**, **use a secure password/passphrase** or another method, and **encrypt the data in transit and at rest**! This will protect the integrity of your data from tampering by a third party and makes your data worthless in case a third party gets access to the backups.
**Important**: **Do not lose the keys!** - Backup your decryption method, store it securely (not with your backups), and ensure that the decryption key is **accessible in any disaster scenario**!
#### Think about the right tools <a href="#right-tools" id="right-tools">#</a>
Could you access your backups in 10 years? Is the technology still around? Is the de-/encryption service provider still in business?
It is recommended to use **well-known open-source services**. Niche and proprietary services can be attractive short term, but they add a layer of dependency.
**Side note**: store an unencrypted version of the encryption tool with your backups, so it will be available if it is needed.
Try to **automate** as much as possible, so backups won't be forgotten, and make sure that the **backup process doesn't disrupt** the daily business.
#### Store backups immutable/read-only <a href="#immutable-storage" id="immutable-storage">#</a>
Keeping the backup storage immutable prevents anyone from tampering with the backups and increases the data integrity. There are cases in which you have to delete certain data from backups, but in general, it is recommended to store them immutable.
#### Choose the right storage medium <a href="#storage-medium" id="storage-medium">#</a>
There are multiple factors that will play into the choice of a storage medium.
- How much data do you have to store?
- How long do you need to store it?
- How much money do you want to spend?
- and many more
One example could be M-Discs: they claim to have a lifetime of [1000 years](http://www.mdisc.com/) and have a capacity between 25 and 100 GB. They can be an option for personal backups or small but critical company backups, but a 10 TB backup of operational data? - That is not a practical solution.
Things to consider:
- lifetime/sustainability
- accessibility
- future proof technology
- setup costs
- operational cost over time/per GB/TB
- practicality
- write/read speed
- ...
The choice of medium will affect the recovery process and speed and is overall important.
#### Have the recovery process in mind <a href="#recovery-process" id="recovery-process">#</a>
Think backward from a recovery standpoint. You have to recover system 'A' - what else must be up and running to get system 'A' working again? This might give you another perspective.
#### Avoid single points of failure <a href="#single-point-of-failure" id="single-point-of-failure">#</a>
![explanation-single-point](/images/blog/backup-single-point.png)
There are plenty of examples: single backup server, a single person with access to backups, single internet connection with cloud backups only, and so on.
#### Use different backup types <a href="#backup-types" id="backup-types">#</a>
I won't go into detail, but the main goal is to save time and storage.
**Full backups** - as the name implies - are backups of all data.
![explanation-dif](/images/blog/backup-diff-backup.png)
**Differential backups** store the changes from the last full backup.
![explanation-inc-backup](/images/blog/backup-inc-backup.png)
**Incremental backups** store the changes since the last backup, whether that was a full or another incremental backup.
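As a hedged example of the concept, GNU `tar` can create full and incremental backups with a snapshot file (all paths are placeholders):

```bash
# full (level 0) backup - the snapshot file records what has already been backed up
tar --listed-incremental=/backups/data.snar -czf /backups/full.tar.gz /path/to/data

# later runs with the same snapshot file only store the changes since the last run
tar --listed-incremental=/backups/data.snar -czf /backups/incremental-1.tar.gz /path/to/data
```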
#### Restrict and secure access to the backups <a href="#backup-access" id="backup-access">#</a>
Backups should only be accessible by trusted parties. Admins only, a separate network, MFA, and other security measures are recommended. The goal is to further limit the risks of tampering, theft, or deletion.
**Side note**: make sure that you do not lock yourself out. This is critical and should be tested regularly.
# Trust but verify <a href="#verification" id="verification">#</a>
![explanation-monitoring](/images/blog/backup-monitoring.png)
**Monitor** your backup process and backup storage. Check the **logs** regularly and implement some kind of **alerting/notification** system.
Things to look for: failed backup jobs, unusual activities, access attempts, and so on.
**Side note**: More details follow in the recovery section, but make sure to monitor and test the health of the backup medium too.
Let a third party or external experts **audit** your backup strategy. It is easy to overlook certain things, and it can be beneficial to have another perspective.
# Test recoverability regularly <a href="#test-recover-process" id="test-recover-process">#</a>
![explanation-recovery](/images/blog/backup-recovery.png)
Test your backups regularly - from start to finish - and play through various scenarios.
- do you still have access to everything?
- can you decrypt the backup?
- can you recover the needed system/data?
- Did a process change?
- is the documentation/manual still up to date?
- are we still in our required recovery time?
- is the contact list still up-to-date?
Something you should do too:
- update the contact list
- update the manual/documentation
- include new coworkers to show them the process
- check the health of the hardware (storage, etc.)
It is recommended to test it with **different hardware/software** to increase the resilience. If this is not an option, keep backup hardware and spare parts around.
#### Re-evaluate the backup strategy regularly
Systems, processes, people, requirements, and almost anything else change over time. This requires re-evaluating the backup strategy regularly. Notes from the test recoveries and conversations with contact persons should help to adjust the strategy accordingly.
# Conclusion
Creating a good backup strategy can be challenging, but it is crucial in the end.
This is the first version of this guide, and I'll try to go into more detail in the future.
---
# SSH Troubleshooting Guide
I won't go into specific cases in this blog post. This is a general guide on how to gather the necessary information that will help you to get your problem fixed.
In this post, I'll use a **Linux** client and server as a reference.
## Logging <a href="#logging" id="logging">#</a>
**Client**
Get the verbose logging with the `-v` flag. This normally is enough, but if you need even more information, use `-vv` and `-vvv`.
**Server**
You can find the logs for your SSH Server here `/var/log/auth.log` or `/var/log/secure`.
For troubleshooting sessions, it is recommended to increase the log level from the default `LogLevel INFO` to `LogLevel DEBUG1` in your SSH server configuration `sshd_config`. This will give you all the necessary information. The following log levels are available: `QUIET, FATAL, ERROR, INFO, VERBOSE, DEBUG, DEBUG1, DEBUG2, and DEBUG3`. Remember to **restart the SSH server** after changing this setting.
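As a small sketch, assuming a systemd-based system (the service is called `ssh` or `sshd` depending on the distribution):

```bash
# in /etc/ssh/sshd_config, set (or uncomment) the log level:
#   LogLevel DEBUG1

# then restart the SSH server so the change takes effect
sudo systemctl restart sshd
```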
Another method is to check `journalctl` if you use systemd. The logs should be available via `sudo journalctl -r -u ssh -u sshd`.
Often enough, restarting the server is not an option. You can simply start another `sshd` process with the same options but an increased debug level and a different port. This allows you to monitor the logs for a specific client without interrupting the main SSH server.
`sudo /usr/sbin/sshd -dDp 2222`
**Side note**: make sure to use the absolute path or you will be greeted by the following error message `sshd re-exec requires execution with an absolute path`.
Thanks to [youRFate on Lobste.rs](https://lobste.rs/s/wombsw/ssh_troubleshooting_guide#c_fia3jk) for the tip!
## Common errors
As mentioned, there are many more, but the following list will give you a great starting point.
#### Hostname resolution <a href="#hostname" id="hostname">#</a>
```markdown
error output
ssh: Could not resolve hostname example.com: Name or service not known
```
This error message implies a problem with the DNS.
- check that the hostname is correct
- use the IP instead to test general connectivity
- check hostname resolution with `nslookup`, `dig`, or other tools (see the examples below)
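For example, a quick check from the client could look like this (assuming `nslookup`/`dig` are installed):

```bash
# test name resolution with the system resolver
nslookup example.com
dig +short example.com

# query a specific DNS server to rule out a broken local resolver
dig +short example.com @9.9.9.9
```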
#### Connection timeout <a href="#timeout" id="timeout">#</a>
```markdown
Error output
ssh: connect to host 10.10.10.10 port 22: connection timed out
```
This error tells you that you can't reach the server at all.
Wrong destination IP:
: verify that the destination IP is correct
Routing:
: can the client reach the destination? Check the routing table and use ICMP to double-check (ping and traceroute). Consider that ICMP sometimes is blocked by network firewalls!
Firewalls:
: check the firewalls on the client, server, and network firewalls and make sure that the connection is allowed.
#### Connection refused <a href="#refused" id="refused">#</a>
```markdown
Error output
ssh: connect to host 10.10.10.10 port 22: connection refused
```
You can reach the server, but the server refuses the connection.
Wrong destination IP:
: verify that the destination IP is correct
Listening SSH server port:
: is the default SSH port `22` used? You can check it with the `Port` directive in the `/etc/ssh/sshd_config` file on the server.
: is the server listening on the communicated port? Check on the server with `ss -tulpen | grep -i :22` (use `netstat` on older Linux versions) or use tools like `nmap` to find the listening port (disclaimer: do not scan servers you do not have permission to scan)
SSH server running:
: make sure that the SSH server is running, e.g. with `systemctl status sshd`
#### Permission denied <a href="#permission" id="permission">#</a>
`Permission denied (publickey,password)`
Most likely a problem with the authentication.
Wrong user credentials:
: make sure that you use the correct username and password or private key.
: as a side note: the login as `root` is often forbidden by common security measures.
Missing permissions on the server:
: make sure that the user is allowed to log in via SSH.
: `/etc/ssh/sshd_config` > `AllowUsers` or `AllowGroups`
Wrong authentication method:
: most commonly, you'd log in via password or public key authentication.
: use the `-v` flag on the client to look for the following entry: `debug1: Authentications that can continue: password,publickey`. This gives you information on what the server accepts.
: to force an authentication option on the client, you could use the `-o` flag with SSH options. To force the login via password you could use something like this: `ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no user@10.10.10.10`.
: if the desired option is unavailable, it must be configured on the server. `/etc/ssh/sshd_config`: `PubkeyAuthentication yes` and `PasswordAuthentication yes`. [It is recommended to use public key authentication only](https://ittavern.com/ssh-how-to-use-public-key-authentication-on-linux/).
Wrong permission and/or ownership of SSH-related files:
: most SSH servers check how permissive, e.g., the SSH keys are, and can deny access if they are too permissive.
```markdown
sudo chmod 700 ~/.ssh
sudo chmod 644 ~/.ssh/authorized_keys
sudo chmod 644 ~/.ssh/known_hosts
sudo chmod 644 ~/.ssh/config
sudo chmod 600 ~/.ssh/nameofthekey # private key
sudo chmod 644 ~/.ssh/nameofthekey.pub # public key
```
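If the ownership is off as well (for example after copying keys around as root), something along these lines should fix it, assuming the files live in the user's home directory:

```bash
# make sure the user owns their own SSH directory and files
# (the trailing colon sets the group to the user's primary group)
sudo chown -R "$USER": ~/.ssh
```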
Public key is missing in the `~/.ssh/authorized_keys` file:
: the public key must be added to the aforementioned file. A how-to can be found in [this post](https://ittavern.com/ssh-how-to-use-public-key-authentication-on-linux/).
Private key no longer accepted on the server:
: some private keys are no longer considered secure, so the server could refuse the login with those keys.
: the best solution would be to update the SSH applications and generate new keys.
: a workaround would be to add the insecure key algorithm to the accepted key types (`PubkeyAcceptedKeyTypes`) in the SSH server config; see the sketch below.
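A sketch of that server-side workaround, assuming the legacy algorithm is `ssh-rsa` (newer OpenSSH versions call the directive `PubkeyAcceptedAlgorithms`):

```bash
# in /etc/ssh/sshd_config, re-allow the legacy algorithm (not recommended long-term):
#   PubkeyAcceptedKeyTypes +ssh-rsa

# restart the SSH server afterwards
sudo systemctl restart sshd
```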
#### SSH protocol version <a href="#ssh-version" id="ssh-version">#</a>
`Protocol major versions differ: 1 vs. 2`
The client and server do not work with the same protocol version. That said, you should only use SSHv2 and disable SSHv1.
**Client**
With the `-v` flag you can see what the server offers:
: `debug1: Remote protocol version 2.0 [...]`
With the flags `-1` and `-2` you can decide whether the client should use SSH protocol version 1 or 2, respectively.
**Server**
On the server, you can check the provided SSH protocol version in the configuration file:
: `grep Protocol /etc/ssh/sshd_config`
: `Protocol 1` *# SSHv1*
: `Protocol 2` *# SSHv2*
: `Protocol 1,2` *# SSHv1 + SSHv2*
If this option is missing, a modern SSH server will use SSHv2 by default. It is worth adding it just to be sure and to have it documented.
#### Failed host key verification <a href="#hostkey" id="hostkey">#</a>
```markdown
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
```
Clear the host key from `~/.ssh/known_hosts` or use `ssh-keygen -R <ip-of-destination>`. You should then be able to connect normally.
If you were not informed about any changes, please contact the SSH server administrator to verify that everything is still secure.
#### Unable to negotiate ciphers, MACs, or KexAlgorithms <a href="#ciphers" id="ciphers">#</a>
```
Unable to negotiate with 10.10.10.10: no matching key exchange method found.
Their offer: diffie-hellman-group1-sha1
```
Use the `-vv` flag on the client to output the necessary information. On the server, you can see the information with the `LogLevel DEBUG2` and check with the following commands what is accepted by the server.
[Ciphers](https://man.openbsd.org/ssh_config#Ciphers):
: `ssh -Q cipher`
[MACs](https://man.openbsd.org/ssh_config#MACs):
: `ssh -Q mac`
[KexAlgorithms](https://man.openbsd.org/ssh_config#KexAlgorithms):
: `ssh -Q kex`
Most commonly, old SSH software is the reason for these errors. It still supports old and insecure methods, which are no longer supported by modern applications.
There are workarounds with the `-o` flag to set temporary options, but I am not too familiar with it.
`ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 user@10.10.10.10`
#### Connect without startup file <a href="#startup-file" id="startup-file">#</a>
This is not that common, but there are ways to lock yourself out after changes to the startup files like `.bashrc`, `.profile`, and so on. You can simply avoid loading those profile files with the following command.
`ssh -t user@host bash --norc --noprofile`
#### Handling SSH sessions with escape sequences <a href="#escape-sequence" id="escape-sequence">#</a>
SSH provides some **escape sequences** with which you can kill the session on the client.
```markdown
Supported escape sequences:
~. - terminate connection (and any multiplexed sessions)
~B - send a BREAK to the remote system
~C - open a command line
~R - request rekey
~V/v - decrease/increase verbosity (LogLevel)
~^Z - suspend ssh
~# - list forwarded connections
~& - background ssh (when waiting for connections to terminate)
~? - this message
~~ - send the escape character by typing it twice
(Note that escapes are only recognized immediately after newline.)
```
**Side note**: Start with a `RETURN` and keep `SHIFT` pressed while typing `~` and, e.g., `?` to get this message. This depends on your keyboard layout.
You can send the sequence through one or more nested **SSH sessions** by adding an additional `~` in front of the sequence.
---
# Difference between RSS and Atom
I was curious about what the difference between RSS and Atom was. This blog post is a small primer on RSS and Atom feeds and describes the differences between both. I've added links to the technical specifications at the end of this post.
# General
[RSS](https://en.wikipedia.org/wiki/RSS) (Really Simple Syndication) and [Atom](https://en.wikipedia.org/wiki/Atom_(web_standard)) are often used interchangeably, and most feed readers can process both formats. Both use an open dialect of XML, which is computer-readable and allows feed-/RSS-/Atom readers to subscribe to a feed and pull new content to the client. RSS was released in 1999, and Atom followed a little bit later in 2005. The RSS specification (current version 2.0) is maintained at Harvard University; Atom is an IETF standard, and its current version is 1.0.
Those feeds provide a privacy-friendly way to consume the content. The content provider can't track the behavior on your feed reader, and most tracking methods do not work either.
The most common **use case** is to stay up-to-date on blogs/news sites and podcasts. But you can use RSS for even more: e.g. stay up-to-date on your favorite YouTube channels without an account. Simply visit the YouTube channel, open the homepage source code, and search for `rssURL`. Just copy the link like this `https://www.youtube.com/feeds/videos.xml?channel_id=UCW6xlqxSY3gGur4PkGPEUeA` into your feed reader, and you get notified when a new video is published.
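As a quick sketch of consuming such a feed from the command line (using the channel ID from above):

```bash
# fetch the channel feed and list the contained titles
curl -s "https://www.youtube.com/feeds/videos.xml?channel_id=UCW6xlqxSY3gGur4PkGPEUeA" \
  | grep -o "<title>[^<]*</title>"
```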
RSS uses either `.rss` or `.xml` as **file extension**, and Atom uses `.atom` or `.xml`.
## Main differences
In general: RSS has broader adoption, but Atom provides more features. I will try to describe some of them.
#### Content payloads
RSS only provides escaped HTML or plain text. Atom can provide various types of content within the same payload.
Atom is therefore recommended for more complex content.
#### Internationalization
RSS provides internationalization at the feed level and Atom at every individual element level. This means that you only need one feed per language for RSS and only one link for all languages for Atom.
Furthermore, Atom provides better support for international characters.
#### Markup format
RSS does not support custom XML markup. Almost all text formatting gets lost, which is especially troublesome in long-form text content since it hinders accessibility. It is recommended to use Atom if you want to preserve as much formatting as possible.
#### Autodiscovery
Both support **autodiscovery** which allows browsers and feed readers to automatically detect the RSS feed.
RSS:
```
<link rel="alternate" type="application/rss+xml"
title="The Title of your blog or whatever"
href="/rss/" />
```
Atom:
```
<link rel="alternate" type="application/atom+xml"
title="The Title of your blog or whatever"
href="/rss/" />
```
#### Implementation
Without having worked with them myself: Atom seems easier to work with since the code is reusable and more strict, while RSS is less strict but a bit more complex. I won't be able to quantify this, but I've read this multiple times.
# Further reading
I won't go into technical details, but there are great resources, such as the blog post of Sam Ruby:
: [Rss20AndAtom10Compared](http://www.intertwingly.net/wiki/pie/Rss20AndAtom10Compared)
: [RSS 2.0 Specification](https://cyber.harvard.edu/rss/rss.html)
: [RFC5023 The Atom Publishing Protocol](https://www.rfc-editor.org/rfc/rfc5023)
: [RFC4287 The Atom Syndication Format](https://www.rfc-editor.org/rfc/rfc4287)
: [XML RSS](https://www.w3schools.com/XML/xml_rss.asp)
# Conclusion
It is recommended to use Atom, since it is simpler to work with and has a wider feature set. Nevertheless, RSS will do the trick too.
I am currently using RSS and am probably going to switch to Atom in the future.
---
# Basics of Power over Ethernet (PoE)
**Power over Ethernet - or short 'PoE'** - allows you to supply DC power for another device over the ethernet network cable. The most common **Power Source Equipment (PSE)** types are switches and routers (**endspan**), but you could just as well put a PoE-injector (**midspan**) between a standard switch and the **Powered Device (PD)**. Especially in corporate environments, PoE devices are growing in popularity, and just to list some **examples of PDs**: VoIP hardware, wireless access points, access control terminals, security cameras, and many more.
The main advantage is that you only need one cable for data and power for each device, don't need an extra power outlet at the location of the device, and can control the power supply over the PSE interface. For example, this is great for access points mounted under the ceiling where 'simply unplug it' is not an option.
On the other hand, devices that provide PoE functionality are usually more expensive, get warmer due to the power supply, and consume more electricity. That said, if the switch is dead, so are the connected PoE devices. This makes the use of a UPS almost inevitable, especially if you power critical infrastructure with it.
PoE generally requires Cat5+ cables and has a normal working distance of 100m. An extender can be used to increase the distance.
The usage of PoE over a connection should not have any effect on the transfer or latency of the data connection. That said, cheap hardware can still cause issues, and I had 3 cases in which explicitly turning off PoE on a switch port helped to solve a problem with a disconnecting non-PoE device. I still blame the printers.
# Specification <a href="#specification" id="specification">#</a>
The following standards were created by the Institute of Electrical and Electronics Engineers (IEEE), and the following overview should give you a quick insight into the differences between the common standards.
**Important**: 802.3at/PoE+ and 802.3bt/PoE++ are backward compatible, as long as the PSE supports the higher standard (802.3at PSE supports 802.3af PD, but it does not work the other way around).
[IEEE 802.3af-2003](https://standards.ieee.org/ieee/802.3af/1090/):
: known as **PoE**
: **Type 1**
: max power delivered by PSE 15.4W / max power available at PD 12.95W
: <a href="#power-classes">power management classes</a> 1-3
: supported cabling Cat3 and Cat5+
: supported <a href="#modes">modes</a> are A and B
[IEEE 802.3at-2009](https://standards.ieee.org/ieee/802.3at/4553/):
: known as **PoE+** or **PoE Plus**
: **Type 2**
: max power delivered by PSE 30W / max power available at PD 25.5W
: <a href="#power-classes">power management classes</a> 1-4
: supported cabling Cat5+
: supported <a href="#modes">modes</a> are A and B
[IEEE 802.3bt-2018](https://standards.ieee.org/ieee/802.3bt/6749/):
: known as **PoE++** or **4PPoE**
: **Type 3**
: max power delivered by PSE 60W / max power available at PD 51W
: <a href="#power-classes">power management classes</a> 1-6
: supported cabling Cat5+
: supported <a href="#modes">modes</a> are A,B and 4PPoE
: **Type 4**
: max power delivered by PSE 100W / max power available at PD 71.3W
: <a href="#power-classes">power management classes</a> 1-8
: supported cabling Cat5+
: supported <a href="#modes">mode</a> is only 4PPoE *(as all 4 pairs are required)*
**UPoE/UPoE+** are Cisco proprietary and I won't go into detail. I think it is still worth mentioning.
# Active PoE / Passive PoE <a href="#active-passive" id="active-passive">#</a>
Active and passive PoE are **not inter-compatible** and PSE and PD must support the same type.
A PSE with **active PoE** does a handshake with the PD to determine how much power the PD requires, and only after this handshake will power be sent to the PD. Furthermore, active PoE connections are often monitored, and the PSE can turn off the power if there are any risks. Active PoE is more expensive but more common, reliable, and secure.
**Side note**: the above-mentioned standards are active PoE. Passive PoE has no standards.
**Passive PoE** - or 'Always-On PoE' - does not require any handshake and sends the configured power immediately. This means you need to know the requirements of the PD; otherwise, you could easily destroy your hardware. It was often used before the IEEE standards and is less expensive, but it is not recommended anymore since most modern PDs support active PoE.
**Side note**: some passive PoE PSEs can have a shorter distance and be limited to 100Mb/s.
# Power management classes <a href="#power-classes" id="power-classes">#</a>
Power management classes prevent the over-powering of PDs.
Class - power at PD:
: **Class 0** - 0W - 12.95W *(default)*
: **Class 1** - 0W - 3.84W *(802.3af,802.3at,802.3bt)*
: **Class 2** - 3.84W - 6.49W *(802.3af,802.3at,802.3bt)*
: **Class 3** - 6.49W - 12.95W *(802.3af,802.3at,802.3bt)*
: **Class 4** - 12.95W - 25.5W *(802.3at,802.3bt)*
: **Class 5** - 40W *(802.3bt Type 3+4)*
: **Class 6** - 51W *(802.3bt Type 3+4)*
: **Class 7** - 62W *(802.3bt Type 4)*
: **Class 8** - 71.3W *(802.3bt Type 4)*
# Modes <a href="#modes" id="modes">#</a>
There are three modes available. The mode determines over which pairs the power will be delivered to the PD. **Mode A** provides the power over the same pairs that are used for the data transfer (pins 1-2 and 3-6), and **Mode B** delivers the power over the spare pairs (pins 4-5 and 7-8). **4PPoE** stands for 4-pair Power over Ethernet - and as the name implies - uses all four pairs to deliver the power to the PD.
The **PSE decides** which mode will be used, and PDs have to support at least mode A **and** B per the IEEE standard.
#### Compatible vs compliant
'Compliant' means that the required standards are met by the PD, and 'compatible' means that it can work with a standard but doesn't have to. 'Compatible' often means mode B only, but this depends on the PD.
That said, I don't think this is the case 100% of the time. I've seen multiple devices that are 'compliant' but are marked 'compatible'. I've read about this multiple times and thought it would be worth mentioning.
---
# Getting started with GNU screen - Beginners Guide
[Screen](https://www.gnu.org/software/screen/) is a terminal multiplexer and has a wide feature set. It allows you to split your terminal window into multiple windows (split screen feature), detach sessions to let commands run in the background, connect to a device via serial interface, and many more.
Screen sessions keep running even if you disconnect, which is especially great for unreliable connections. There are more advanced use cases, but we will focus on the basics.
# Basics <a href="#basics" id="basics">#</a>
You can have multiple **sessions** within screen, and each session can contain multiple **windows**. When you use the split screen function, each panel is a window displayed in a so-called **region** in screen.
```markdown
screen
├───── session 29324.x
│ │
│ ├────── window 0: name x
│ │
│ └────── window 1: name y
└───── session 29399.a
├───── window 0: name a
└───── window 1: name b
```
#### Escape combination (Prefix) <a href="#prefix" id="prefix">#</a>
In this blog post, I'll call the **escape combination** 'prefix', but there are multiple names for it: meta key, leading key, escape combination, and some others.
The prefix tells the terminal that the following command or shortcut will be used in the screen context. Almost every shortcut starts with it and the **default prefix** is `CTRL` + `a`. So, if you see `Prefix` in the reference section, I mean this key combination. I'll show you how to change the prefix as an example in the configuration section.
A list of all default key bindings can be found in the [official documentation](https://www.gnu.org/software/screen/manual/html_node/Default-Key-Bindings.html).
## Configuration files <a href="#configuration" id="configuration">#</a>
Screen won't create a startup configuration file by default but will look for these two files when it starts.
`~/.screenrc` / `/usr/local/etc/screenrc`
**Comments** in the configuration file start with a `#`.
The following two sections will show some simple examples of different configurations.
#### Example 1: change prefix for screen
Adding the following line to your configuration file changes the prefix to `CTRL` + `f`:
: `escape ^Ff`
You can change it to a different key combination, especially as the default prefix key combination is commonly used otherwise.
#### Example 2: turn off the copyright message at the start
```markdown
#Do not show copyright msg at the start of a session
startup_message off
```
Simply add these lines to your configuration file, and the copyright message won't appear again.
## Logging <a href="#logging" id="logging">#</a>
Before we start with the sessions and windows, it might be beneficial to talk about logging. For most troubleshooting sessions, it is required to save the logs. I am going to show you some ways to do it.
#### Hardcopy
Use `Prefix` + `h` to create an output file with the content of the current screen window you are in. It will be saved as `hardcopy.n` (*'n' being the number of the current window*) in the directory from where you started screen initially. If you repeat the shortcut, the initial file will be overwritten.
If you want to **append** the output to a file, you can add `hardcopy_append on` to your configuration file.
If you want to change the directory in which the hardcopy files will be saved, simply add `hardcopydir /your/dir/` to your configuration file.
#### Continuous logging
Logging is disabled by default.
You can start a logged screen session with the `-L` flag + `-Logfile /path/to/logfile.txt`. If you are already in a session, you can activate it with `Prefix` + `SHIFT` + `h`. The output file will be called `screenlog.n`, where 'n' is the number of the current window.
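Putting both flags together, a named and fully logged troubleshooting session could be started like this (path and session name are placeholders):

```bash
# start a new session called 'serial-debug' and log everything to a custom file
screen -L -Logfile ~/logs/serial-debug.log -S serial-debug
```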
## Working with sessions <a href="#sessions" id="sessions">#</a>
Show all sessions:
: `screen -ls`
```markdown
kuser@pleasejustwork:~$ screen -ls
There are screens on:
29265.demo-session (27.01.2023 02:19:51) (Detached)
26508.pts-8.pleasejustwork (26.01.2023 23:20:50) (Detached)
2 Sockets in /run/screen/S-kuser.
```
Start a new session:
: `screen`
Start a new session with a specific name:
: `screen -S nameofthissession`
Start a new detached session and run a command:
: `screen -d -m ping 10.10.10.10`
Detach the current session:
: `Prefix` + `d`
Create new session if there is none, or re-attach the last session:
: `screen -d -RR`
Re-attach session in terminal:
: `screen -r 2232` *# screen will auto-complete if the session ID is unique*
: `screen -r nameofthissession` *# either use the session number or the name*
Kill session in terminal:
: `screen -X -S nameofsession quit`
: `screen -X 269 quit` *# auto-completes if unique*
Rename session in terminal:
: `screen -S OLDSESSIONNAME -X sessionname NEWSESSIONNAME`
: `Prefix` + `:sessionname NEW-NAME` *# screen command to change the current session name*
## Working with Windows <a href="#windows" id="windows">#</a>
Show list of all windows of current session:
: `Prefix` + `SHIFT` + `w`
Rename the current windows:
: `Prefix` + `SHIFT` + `a`
Jump to the next window:
: `Prefix` + `SPACE`
Jump to the previous window:
: `Prefix` + `p`
Kill the current window:
: `exit`
: `Prefix` + `k`
## Working with Regions / Split screen <a href="#split-screen" id="split-screen">#</a>
Screen has the feature to show multiple windows in a split screen. Every window is then shown in a so-called 'region' in screen.
Horizontally split window into two regions:
: `Prefix` + `SHIFT` +`s`
Vertically split window into two regions:
: `Prefix` + `|`
Jump to the next region:
: `Prefix` + `Tab`
Close the current region:
: `Prefix` + `x`
: *the window won't be terminated and just the split screen will be removed.*
Close all but the current region:
: `Prefix` + `q`
Fit the regions to a resized terminal window:
: `Prefix` + `SHIFT` +`f`
#### Layouts
You could create layouts, and save and reuse them later. This topic is out of the scope of this post, and I am going to write about it later. You can get a reference and further information in the [official documentation](https://www.gnu.org/software/screen/manual/screen.html#Layout).
## Screen commands <a href="#commands" id="commands">#</a>
The screen command prompt can be used to try out configurations and screen-specific commands.
`Prefix` + `:` + config
`Prefix` + `:logfile ~/path/to/new/logfile.txt`
I am not too familiar with screen commands, so I won't go into detail. A list of all commands can be found in the [official documentation](https://www.gnu.org/software/screen/manual/screen.html#Command-Summary).
# Check if you are still in a screen session <a href="#active-session" id="active-session">#</a>
Screen sets an environment variable `STY`. If the output is empty, you are not in a screen session.
```markdown
kuser@pleasejustwork:~$ echo $STY
22829.demo
```
This won't work if you start up screen and SSH into a remote machine. Without further configuration, the variables stay local.
---
Another environment variable you could try is `TERM`.
```markdown
kuser@pleasejustwork:~$ screen
kuser@pleasejustwork:~$ echo $TERM
screen.xterm-256color
kuser@pleasejustwork:~$ exit
kuser@pleasejustwork:~$ echo $TERM
xterm-256color
```
Screen will add the prefix `screen.` in front of it.
This works even after connecting to a remote machine but presumes that you didn't mess with the `TERM` variable.
---
Another method would be to work with the screen prefix. You could simply use `Prefix` + `CTRL` + `t` to let screen tell you the time in the bottom left corner.
![screen-time](/images/blog/screen-show-time.png)
---
# Basics of the Linux Bash Command History with Examples
The bash command history shows the previously used commands. By default, the history is saved in memory per session and can be saved to a file for later sessions. We will explore ways to show, search and modify the history in this blog post.
I use RHEL and Debian-based Linux distributions and bash in this blog post as a reference.
# Configuration <a href="#configuration" id="configuration">#</a>
I want to start with ways to configure the behavior of the bash history.
The configuration of the history can be changed in the bash startup files. Those can typically be found in the home directory of the user.
System-wide in `/etc/profile`, or in the home directory in `~/.bashrc` or `~/.profile`. There are more, but these are the most common examples.
If you want to use an option for one session only, you can just type it in like this:
`HISTFILE=/dev/null` or `unset HISTFILE`
In both ways, you would disable the history for the current bash session.
# The basics <a href="#basics" id="basics">#</a>
The bash history should be enabled by default, but you might want to change some settings.
The history file that is stored on disk can be found with the following command:
: `echo $HISTFILE`
The default location for the history on disk is `~/.bash_history`.
Add the following option to change the name and storage location of the history file on disk:
: `HISTFILE=/path/to/the/history.txt`
Show the complete history from memory:
: `history`
Just show the last number of commands:
: `history 20`
Read history from disk to memory:
: `history -r`
Append history entries from memory to disk:
: `history -a`
Overwrite the disk history with the memory history:
: `history -w`
Since I am used to working with multiple sessions and want to share the history between them, I've added the following line to my startup file to append every entry to the history on disk.
`export PROMPT_COMMAND='history -a'`
Delete a specific history entry or range:
: `history -d 20` *# one specific entry*
: `history -d 15-20` *# range*
: `history -d -5` *# last 'n' of entries*
#### Disabling bash command history <a href="#disable" id="disable">#</a>
As mentioned above, there are multiple options to disable the bash command history.
`HISTFILE=/dev/null` or `unset HISTFILE`
#### Number of history entries
The following option sets the number of entries that are displayed if you enter `history`:
: `HISTSIZE=20`
The following option sets the maximum number of entries in the history on disk:
: `HISTFILESIZE=2000`
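A hedged example of how these options could look combined in a startup file (the values are just placeholders):

```bash
# example ~/.bashrc snippet - adjust to your needs
HISTFILE=~/.bash_history   # location of the history file on disk
HISTSIZE=5000              # entries kept in memory per session
HISTFILESIZE=20000         # maximum number of entries in the history file on disk
```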
# Search function <a href="#search" id="search">#</a>
You can start a **reversed search** through the history by pressing `CTRL` + `r` and entering the search term. You can jump to the next result by pressing `CTRL` + `r` again. After finding the desired command, you can press `TAB` to get filled to the current command line or press `ENTER` to run the command immediately.
If you skipped through your desired command, you can cancel the current search request with `CTRL` + `g` and start from the top again.
There is no native way to jump forward again - but you could add a forward search by adding `stty -ixon` to your startup file. The keyboard shortcut for the forward search is `CTRL` + `s`.
#### Using 'grep'
I prefer to use grep to find commands. Simply use one of these examples to do so.
```bash
history | grep SEARCHTERM
or
grep SEARCHTERM $HISTFILE
```
**Side note**: use the `-i` flag if you want to search case-insensitively.
#### Add comments to commands
You can add comments to commands with `#`. This makes it easy to find commands again or document the thoughts behind the command in troubleshooting sessions and later reviews.
```bash
kuser@pleasejustwork:$ echo nightmare # dolphins chasing me in a mall
nightmare
```
# Exclusions <a href="#exclusion" id="exclusion">#</a>
We can add exclusions with the `HISTCONTROL` and `HISTIGNORE` options in your startup file. This can be useful for privacy and security reasons.
There are some predefined values for `HISTCONTROL` we can choose from:
: `ignorespace` - if the command starts with a `SPACE`, it will be excluded
: `ignoredups` - duplicate commands will be excluded
: `ignoreboth` - both above-mentioned options together
If those values are not enough, you can create your own rules with `HISTIGNORE`. For example, the `ignoreboth` behavior could be written like this:
`HISTIGNORE="&:[ ]*"` *# the ampersand `&` means no duplicates, `:` is the separator, `[ ]*` checks if the command begins with a `SPACE`*
You can add commands too.
`HISTIGNORE="ls:pwd:cd"`
# Timestamps <a href="#timestamps" id="timestamps">#</a>
Timestamps are often important for reviews of troubleshooting sessions. With the `HISTTIMEFORMAT` option, you can add timestamps in various formats to your history.
The default history looks like this:
```bash
kuser@pleasejustwork:$ history 6
1150 history
1151 vim .bash_history
1152 vim .bashrc
1153 source .bashrc
1154 ls
1155 history
```
And the same lines look like this after adding `HISTTIMEFORMAT="%d/%m/%y %T "` to the configuration:
```bash
kuser@pleasejustwork:$ history 6
1150 02/02/23 18:03:32 history
1151 02/02/23 18:03:45 vim .bash_history
1152 02/02/23 18:05:03 vim .bashrc
1153 02/02/23 18:05:22 source .bashrc
1154 02/02/23 18:05:26 ls
1155 02/02/23 18:05:30 history
```
You can adjust the format with the following placeholders:
```bash
%d: day
%m: month
%y: year
%H: hour
%M: minutes
%S: seconds
%F: full date (Y-M-D format)
%T: time (H:M:S format)
%c: complete date and timestamp (day-D-M-Y H:M:S format)
```
# Re-run commands <a href="#rerun" id="rerun">#</a>
`!!` is a variable for the previous command and, for example, can be used to re-run the last command with 'sudo'.
```bash
kuser@pleasejustwork:$ whoami
kuser
kuser@pleasejustwork:$ sudo !!
[sudo] password for kuser:
root
```
---
`!` can be used to re-run the last command starting with a chosen term.
```bash
kuser@pleasejustwork:$ history 5
41 ping ittavern.com # !ping
42 whoami # !whoami
43 nmap -sP 10.10.10.0/24 # !nmap
44 vim .bashrc # !vim
45 history
kuser@pleasejustwork:$ !who # runs immediately and auto-completes
kuser
```
---
`!n` would run the n-th command in the history, and `!-n` refers to the current command minus 'n'.
```bash
kuser@pleasejustwork:$ history 5
41 ping ittavern.com # !41 / !-5
42 whoami # !42 / !-4
43 nmap -sP 10.10.10.0/24 # !43 / !-3
44 vim .bashrc # !44 / !-2
45 history # !45 / !-1 / !!
kuser@pleasejustwork:$ !-4 # runs immediately
kuser
```
#### Modify and re-run the previous command
With the following syntax, you can replace keywords from the previous command and run it again.
`^OLD KEY WORD^NEW KEY WORD OR PHRASE^`
**Example:**
```bash
kuser@pleasejustwork:$ sudo nmap -T3 10.10.22.0/24 -p 80,443
kuser@pleasejustwork:$ ^22^50^ # the command will be executed immediately
kuser@pleasejustwork:$ sudo nmap -T3 10.10.50.0/24 -p 80,443
```
Use a backslash `\` as an escape character if you need to find or replace a `^`.
---
# Detecting Rogue DHCP Server
# What is a rogue DHCP server <a href="#what-is-a-rogue-dhcp-server" id="what-is-a-rogue-dhcp-server">#</a>
A rogue DHCP server is an unauthorized DHCP server that **knowingly or unknowingly distributes wrong or malicious information** to clients that send DHCP discover packets within a network. The following section lists some examples of rogue DHCP servers.
Devices with integrated DHCP server:
: most commonly routers that are newly connected to the network. Especially some mobile WLAN routers for hotspots can cause problems if they are connected to a network for a longer time. Non-tech people are often not aware of the consequences.
Threat actors:
: threat actors could spin up a DHCP server in your network to reroute traffic, distribute malicious information, e.g. the IP of a malicious DNS server, and cause a lot of damage after a short time.
Misconfiguration:
: there are many scenarios in which a misconfiguration could cause a rogue DHCP server to cause trouble. An easy example would be to accidentally activate the DHCP server on a firewall.
**Side note**: Every network should have measures to prevent a rogue DHCP server from causing trouble. I'll list some methods at the end of this post.
![dhcp-rogue-server](/images/blog/dhcp-rogue-server.png)
# Signs of a Rogue DHCP server <a href="#signs" id="signs">#</a>
Some signs of having a rogue DHCP server on your network are listed below:
- a client receives an IP from another subnet
- a client receives a duplicate IP within the network
- IP reservations do not work
- a client receives different network information (DNS, NTP, PXE, etc.)
- more DHCP traffic than usual
- DHCP traffic from new/unknown IPs
# What is DHCP <a href="#dhcp" id="dhcp">#</a>
I won't go into too much detail on how DHCP works. In a nutshell, DHCP stands for Dynamic Host Configuration Protocol and allows automatic assignment of IP addresses to devices and provides more information about the network, like the default gateway, subnet mask, DNS server, NTP server, and more.
The 'DORA' process is essential and should be basic knowledge when a DHCP troubleshooting session starts.
![dhcp-dora](/images/blog/dhcp-dora.png)
---
The following screenshots show a rough overview of the DORA process. Since this is not the main topic of this post, we don't need to go into detail.
**DHCPDISCOVER**
![DHCP-discover](/images/blog/dhcp-d.png)
**DHCPOFFER**
![DHCP-offer](/images/blog/dhcp-o.png)
**DHCPREQUEST**
![DHCP-request](/images/blog/dhcp-r.png)
**DHCPACK**
![DHCP-ack](/images/blog/dhcp-a.png)
---
So, enough theory; let us detect the rogue DHCP server.
# Detecting a rogue DHCP server <a href="#detecting" id="detecting">#</a>
There are various ways to detect a rogue DHCP server. Some work on the client or network level, or both.
In the following sections, we assume that we only have **one legitimate DHCP server on an IPv4 network**. Larger environments can have multiple of course, but this is not relevant, and the following detection methods work even if you have multiple servers.
**Side note**: You can **release the old and request a new IP** on **Windows** via command line `ipconfig /release` and `ipconfig /renew` and on **Linux** with `sudo dhclient -v -r` and `sudo dhclient -v`. Don't forget to specify the interface if you use multiple.
## Packet capture <a href="#packet-capture" id="packet-capture">#</a>
![DHCP-discover](/images/blog/dhcp-d.png)
It is important that the packet capture is taken on a client or intermediate device on the same network as the suspected rogue DHCP server. Wireshark and tcpdump are common tools to do so, and intermediate devices have their own tools.
You should look for **UDP traffic on ports 67 and 68**. It makes it easier to detect rogue DHCP servers if you are familiar with the above-mentioned 'DORA' process. Having **multiple 'Offer' packets** for a single 'Discover' packet from one or more IPs is an indicator of a rogue DHCP server. We have to keep IP spoofing in mind. Another option is to check on the server side: does the authorized DHCP server send more 'Offers' than usual without receiving a 'Request'? - This is somewhat vague, but it could help to find a rogue DHCP server.
You can find more DHCP display filters for Wireshark in this [post](https://ittavern.com/guide-to-wireshark-display-filters/#dhcp).
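As a hedged example, such a capture could be taken with `tcpdump` on a client (the interface name is a placeholder):

```bash
# capture DHCP traffic (UDP 67/68) on eth0 and print it verbosely
sudo tcpdump -i eth0 -n -vv 'udp port 67 or udp port 68'

# or write it to a file for later analysis in Wireshark
sudo tcpdump -i eth0 -n 'udp port 67 or udp port 68' -w dhcp-capture.pcap
```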
## Using nmap <a href="#nmap" id="nmap">#</a>
Scan for IPs that listen on the UDP port 67 in your network:
: `sudo nmap -sU -p 67 -d 10.10.20.0/24`
: `-sU` - limits scan to UDP ports
: `-p 67` - destination port
: `-d` - optional: increase debug level. `-dd` for even more information
: `10.10.20.0/24` - your network
```bash
[...]
Completed UDP Scan at 23:22, 0.24s elapsed (2 total ports)
Overall sending rates: 16.82 packets / s, 470.93 bytes / s.
Nmap scan report for _gateway (10.10.20.1)
Host is up, received arp-response (0.00041s latency).
Scanned at 2023-02-06 23:21:58 CET for 2s
PORT STATE SERVICE REASON
67/udp open|filtered dhcps no-response
MAC Address: 90:6C:AC:78:80:FB (Fortinet)
Final times for host: srtt: 406 rttvar: 3765 to: 100000
[...]
```
This gives you a quick overview of your network.
#### nmap Scripts <a href="#nmap-scripts" id="nmap-scripts">#</a>
The required NSE script `broadcast-dhcp-discover` should be installed by default together with nmap. More information to the script can be found in the [official documentation](https://nmap.org/nsedoc/scripts/broadcast-dhcp-discover.html).
**Side note**: If you are using Linux, you can find the interface's name with `ip -br a` or `ip -br l`.
The default command looks like this:
: `sudo nmap --script broadcast-dhcp-discover -e eth0`
: by default, this script will ask for an IP for the MAC address `de:ad:c0:de:ca:fe`. Decent threat actors will sort those requests out to stay undetected. It is recommended to change the MAC address like in the following commands.
Nmap command to use a fixed or random MAC address:
: `sudo nmap --script broadcast-dhcp-discover --script-args broadcast-dhcp-discover.mac=aa:bb:cc:dd:ee:ff -e enp0s31f6`
: `sudo nmap --script broadcast-dhcp-discover --script-args broadcast-dhcp-discover.mac=random -e enp0s31f6`
**Sample output**
```bash
user@pleasejustwork:~$ sudo nmap --script broadcast-dhcp-discover --script-args broadcast-dhcp-discover.mac=aa:bb:cc:dd:ee:ff -e enp0s31f6
[sudo] password for kuser:
Starting Nmap 7.80 ( https://nmap.org ) at 2023-02-06 17:22 CET
Pre-scan script results:
| broadcast-dhcp-discover:
| Response 1 of 2:
| IP Offered: 10.10.20.57
| DHCP Message Type: DHCPOFFER
| Server Identifier: 10.10.20.1
| IP Address Lease Time: 7d00h00m00s
| Subnet Mask: 255.255.255.0
| Router: 10.10.20.1
| Domain Name Server: 9.9.9.9, 149.112.112.112
| Renewal Time Value: 3d12h00m00s
|_ Rebinding Time Value: 6d03h00m00s
| Response 2 of 2:
| IP Offered: 192.168.178.242
| DHCP Message Type: DHCPOFFER
| Server Identifier: 192.168.178.51
| IP Address Lease Time: 2m00s
| Renewal Time Value: 1m00s
| Rebinding Time Value: 1m45s
| Subnet Mask: 255.255.255.0
| Broadcast Address: 192.168.178.255
| Domain Name Server: 192.168.178.51
|_ Router: 192.168.178.1
WARNING: No targets were specified, so 0 hosts scanned.
Nmap done: 0 IP addresses (0 hosts up) scanned in 1.23 seconds
```
---
For more information about `nmap` visit the [nmap guide](https://ittavern.com/getting-started-with-nmap/) or other `nmap` [posts](https://ittavern.com/tags/nmap/).
## Windows DHCP server event logs <a href="#windows-event-logs" id="windows-event-logs">#</a>
The following event logs on the authorized Windows DHCP server can indicate a rogue DHCP server on a network.
| Event ID | Source | Message |
|--------------|-----------|------------|
| 1042 | Microsoft-Windows-DHCP-Server | The DHCP/BINL service running on this computer has detected a server on the network. If the server does not belong to any domain, the domain is listed as empty. The IP address of the server is listed in parentheses. |
| 1098 | Microsoft-Windows-DHCP-Server | Unreachable Domain |
| 1100 | Microsoft-Windows-DHCP-Server | Server Upgraded |
| 1101 | Microsoft-Windows-DHCP-Server | Cached authorization |
| 1103 | Microsoft-Windows-DHCP-Server | Authorized(servicing) |
| 1105 | Microsoft-Windows-DHCP-Server | Server found in our domain |
| 1107 | Microsoft-Windows-DHCP-Server | Network failure |
| 1109 | Microsoft-Windows-DHCP-Server | Server found that belongs to DS domain |
| 1110 | Microsoft-Windows-DHCP-Server | Another server was found |
| 1111 | Microsoft-Windows-DHCP-Server | Restarting rogue detection |
The source can be found on [microsoft.com](https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-r2-and-2008/cc726899(v=ws.10)).
You can check the logs regularly or add those events to your monitoring solution.
## Microsoft Rogue DHCP Checker <a href="#microsoft-roguechecker" id="microsoft-roguechecker">#</a>
Microsoft provided a tool to detect rogue DHCP servers, but this blog post from 2009 is no longer available. But thanks to archive.org we can find the [blog post](https://web.archive.org/web/20140812200404/http://blogs.technet.com/b/teamdhcp/archive/2009/07/03/rogue-dhcp-server-detection.aspx) and download the 'RogueChecker' there.
![dhcp-ms-roguechecker](/images/blog/dhcp-ms-roguechecker.png)
I installed it on Windows 10 and it seems to work.
## Turn off your own DHCP server <a href="#turn-of-legitimate-dhcp-server" id="turn-of-legitimate-dhcp-server">#</a>
Especially in larger networks, this often enough is not a solution, but I thought it would still be noteworthy. Disable the legitimate DHCP server in some way, release the IP on the client and ask for another IP. You shouldn't get a new legitimate IP address! - In case you receive a new IP address, the chances are high that there is a rogue DHCP server.
You can now check the DHCP server on the client and use other methods to find the rogue DHCP server on your network.
## Intrusion Detection Systems <a href="#ids" id="ids">#</a>
There are many solutions that cover the detection of rogue DHCP servers, but not all companies have the capacities to maintain such a system. Therefore, we do not need to go into detail, but it is still worth mentioning.
# Preventing actions of a rogue DHCP server <a href="#prevention" id="prevention">#</a>
Detecting is one thing; preventing any damage from a rogue DHCP server is another. This post focuses on detection, but I thought it wouldn't hurt to list some prevention measures.
- DHCP snooping/guarding on intermediate devices
- firewall policies that allow communication via UDP 67 and 68 only with authorized DHCP servers
- client management solution to check the correct DHCP server; does not work for printers and so on
- authorize DHCP servers in Active Directory and other services
---
# Simulate an unreliable network connection with tc and netem on Linux
**Side note**: Using a secondary network interface is recommended since the following commands could make a remote machine unreachable.
This is a blog post about the basics of `netem` and `tc` and how to modify **outgoing** traffic. You could modify incoming traffic with an Intermediate Functional Block pseudo-device in Linux, but I am not too familiar with it, and it is out of scope for now.
# Reasons to simulate an unreliable network connection <a href="#reason" id="reason">#</a>
There are various reasons why you would want to modify the traffic between devices. One time, we had to ensure that a streaming server in Frankfurt could handle incoming video streams with high latency over an unreliable connection from the US. Another time, we had to provide proof that some SAP modules can't handle the additional latency of a VPN and that the problem is on their side and not ours.
Some additional reasons besides troubleshooting could be:
: testing your network monitoring solution
: whether your application or server handles unreliable connections well
: simply do some research.
This post tries to provide enough information to help you troubleshoot various problems quickly and simulate certain scenarios.
#### Network shaping options
`tc` and `netem` provide a variety of options to shape the outgoing traffic.
We are going to cover the basics of the following options in this post:
: adding latency
: adding jitter
: sending duplicated packets
: adding a percentage of packet loss
: sending corrupt packets
Those options can be combined and will cover most of the cases.
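To give an idea where this is going, a hedged example that combines several of these options on one interface (the values are arbitrary; the syntax is explained below):

```bash
# 100ms latency with 20ms jitter, 5% packet loss and 1% duplicated packets on eth0
sudo tc qdisc add dev eth0 root netem delay 100ms 20ms loss 5% duplicate 1%
```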
# Basics of tc <a href="#basics" id="basics">#</a>
`tc` stands for 'traffic control' and, as the name implies, is used to configure the traffic control of the Linux kernel and is part of the `iproute2` package. [`Netem`](https://man7.org/linux/man-pages/man8/tc-netem.8.html) stands for 'network emulator' and is controlled by the `tc` command.
You can check quickly whether `tc` is available by typing `tc -V`.
```bash
kuser@pleasejustwork:~$ tc -V
tc utility, iproute2-5.15.0, libbpf 0.5.0
```
#### Show or delete current options
Show the currently applied options for an interface:
: `tc -p qdisc ls dev eth0`
```bash
kuser@pleasejustwork:~$ sudo tc -p qdisc ls dev eth0
qdisc netem 8001: root refcnt 2 limit 1000 delay 100ms 50ms
```
You can delete all options for a specific interface with the following command:
: `tc qdisc del dev eth0 root`
**Side note**: the following options are temporary and don't survive a reboot!
A breakdown of a common `tc` command can be found in the first example below.
#### Limiting to a specific IP or port
Unfortunately, it is not that easy to limit the applied options to a specific IP or port. It is possible, but outside the scope of this basic guide. To avoid problems, it is therefore recommended to use a secondary network interface.
I might rework this section at some point. For further reading, feel free to check the [official documentation](https://man7.org/linux/man-pages/man8/tc-ematch.8.html) for the filters.
# Units used for Parameters for the netem options <a href="#units" id="units">#</a>
Almost every 'netem' option can have one or more parameters. I thought it would make sense to show you the available units before we start with the practical part.
#### Rates
The bandwidths can be specified with the following units:
: `bit` - Bits per second
: `kbit` - Kilobits per second
: `mbit` - Megabits per second
: `gbit` - Gigabits per second
: `tbit` - Terabits per second
: `bps` - Bytes per second
: `kbps` - Kilobytes per second
: `mbps` - Megabytes per second
: `gbps` - Gigabytes per second
: `tbps` - Terabytes per second
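As a side note, these bandwidth units are used by netem's `rate` option; a hedged example of throttling the outgoing bandwidth on an interface:

```bash
# limit the outgoing bandwidth on eth0 to 1 Mbit/s
sudo tc qdisc add dev eth0 root netem rate 1mbit
```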
#### Time
The time for latency and other options can be specified as follows:
: `us` - Microseconds
: `ms` - Milliseconds
: `s` - Seconds
# Netem Options <a href="#options" id="options">#</a>
I am going to explain the syntax in the first scenario.
As a **reference**, this is the normal ICMP request/ ping over a separate interface.
```bash
kuser@pleasejustwork:~$ ping -c 10 -I eth0 10.10.22.1
PING 10.10.22.1 (10.10.22.1) from 10.10.22.51 eth0: 56(84) bytes of data.
64 bytes from 10.10.22.1: icmp_seq=1 ttl=255 time=0.458 ms
64 bytes from 10.10.22.1: icmp_seq=2 ttl=255 time=0.520 ms
64 bytes from 10.10.22.1: icmp_seq=3 ttl=255 time=0.453 ms
64 bytes from 10.10.22.1: icmp_seq=4 ttl=255 time=0.420 ms
64 bytes from 10.10.22.1: icmp_seq=5 ttl=255 time=0.513 ms
64 bytes from 10.10.22.1: icmp_seq=6 ttl=255 time=0.412 ms
64 bytes from 10.10.22.1: icmp_seq=7 ttl=255 time=0.550 ms
64 bytes from 10.10.22.1: icmp_seq=8 ttl=255 time=0.548 ms
64 bytes from 10.10.22.1: icmp_seq=9 ttl=255 time=0.402 ms
64 bytes from 10.10.22.1: icmp_seq=10 ttl=255 time=0.376 ms
--- 10.10.22.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9202ms
rtt min/avg/max/mdev = 0.376/0.465/0.550/0.060 ms
```
## Add Latency / Delay <a href="#latency" id="latency">#</a>
The netem latency will be added to the normal latency of the connection.
`DELAY := delay TIME [ JITTER [ CORRELATION ]]`
`sudo tc qdisc add dev eth0 root netem delay 100ms`
: `sudo` *# run command as `sudo`*
: `tc` *# command stands for 'traffic control'*
: `qdisc` *# stands for 'Queue discipline'*
: `add|change|del` *# the action that `tc` should perform on the qdisc*
: `dev eth0` *# choosing the network interface*
: `root` *# attach the qdisc at the root (egress) of the interface*
: `netem delay 100ms` *# 'netem' option + parameter*
**Results**
```bash
kuser@pleasejustwork:~$ sudo tc qdisc add dev eth0 root netem delay 100ms
[sudo] password for kuser:
kuser@pleasejustwork:~$ ping -c 10 -I eth0 10.10.22.1
PING 10.10.22.1 (10.10.22.1) from 10.10.22.51 eth0: 56(84) bytes of data.
64 bytes from 10.10.22.1: icmp_seq=1 ttl=255 time=101 ms
64 bytes from 10.10.22.1: icmp_seq=2 ttl=255 time=100 ms
64 bytes from 10.10.22.1: icmp_seq=3 ttl=255 time=100 ms
64 bytes from 10.10.22.1: icmp_seq=4 ttl=255 time=100 ms
64 bytes from 10.10.22.1: icmp_seq=5 ttl=255 time=100 ms
64 bytes from 10.10.22.1: icmp_seq=6 ttl=255 time=100 ms
64 bytes from 10.10.22.1: icmp_seq=7 ttl=255 time=101 ms
64 bytes from 10.10.22.1: icmp_seq=8 ttl=255 time=100 ms
64 bytes from 10.10.22.1: icmp_seq=9 ttl=255 time=100 ms
64 bytes from 10.10.22.1: icmp_seq=10 ttl=255 time=100 ms
--- 10.10.22.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9013ms
rtt min/avg/max/mdev = 100.416/100.466/100.586/0.050 ms
```
To **remove** this `tc` rule, send the same command again, but replace `add` with `del`.
`sudo tc qdisc del dev eth0 root netem delay 100ms`
#### Add Jitter <a href="#jitter" id="jitter">#</a>
If you want to add more jitter - or in other words, variance in latency - add another parameter at the end. This is a plus/minus value.
`sudo tc qdisc change dev eth0 root netem delay 100ms 50ms`
**Results**
```bash
kuser@pleasejustwork:~$ ping -c 10 -I eth0 10.10.22.1
PING 10.10.22.1 (10.10.22.1) from 10.10.22.51 eth0: 56(84) bytes of data.
64 bytes from 10.10.22.1: icmp_seq=1 ttl=255 time=105 ms
64 bytes from 10.10.22.1: icmp_seq=2 ttl=255 time=88.6 ms
64 bytes from 10.10.22.1: icmp_seq=3 ttl=255 time=108 ms
64 bytes from 10.10.22.1: icmp_seq=4 ttl=255 time=109 ms
64 bytes from 10.10.22.1: icmp_seq=5 ttl=255 time=130 ms
64 bytes from 10.10.22.1: icmp_seq=6 ttl=255 time=54.5 ms
64 bytes from 10.10.22.1: icmp_seq=7 ttl=255 time=141 ms
64 bytes from 10.10.22.1: icmp_seq=8 ttl=255 time=102 ms
64 bytes from 10.10.22.1: icmp_seq=9 ttl=255 time=124 ms
64 bytes from 10.10.22.1: icmp_seq=10 ttl=255 time=146 ms
--- 10.10.22.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9011ms
rtt min/avg/max/mdev = 54.495/110.797/145.590/25.366 ms
```
The added latency will now be in a range of **50-150ms**.
#### Send duplicate packets <a href="#duplicate" id="duplicate">#</a>
Sending random duplicate packets over a specific interface:
: `sudo tc qdisc change dev eth0 root netem duplicate 1%`
## Simulate Packet loss <a href="#packet-loss" id="packet-loss">#</a>
There are various reasons for packet loss: an unreliable network connection, network congestion, bugs, and so on.
To drop random packets on a specific interface, simply use the following command:
: `sudo tc qdisc add dev eth0 root netem loss 20%`
**Results**
```bash
kuser@pleasejustwork:~$ ping -c 10 -I eth0 10.10.22.1
PING 10.10.22.1 (10.10.22.1) from 10.10.22.51 eth0: 56(84) bytes of data.
64 bytes from 10.10.22.1: icmp_seq=1 ttl=255 time=0.833 ms
64 bytes from 10.10.22.1: icmp_seq=2 ttl=255 time=0.414 ms
64 bytes from 10.10.22.1: icmp_seq=3 ttl=255 time=0.576 ms
64 bytes from 10.10.22.1: icmp_seq=4 ttl=255 time=0.443 ms
64 bytes from 10.10.22.1: icmp_seq=5 ttl=255 time=0.449 ms
64 bytes from 10.10.22.1: icmp_seq=6 ttl=255 time=0.510 ms
64 bytes from 10.10.22.1: icmp_seq=8 ttl=255 time=0.515 ms
64 bytes from 10.10.22.1: icmp_seq=10 ttl=255 time=0.302 ms
--- 10.10.22.1 ping statistics ---
10 packets transmitted, 8 received, 20% packet loss, time 9221ms
rtt min/avg/max/mdev = 0.302/0.505/0.833/0.145 ms
```
#### Corrupt packets <a href="#corrupt" id="corrupt">#</a>
Introduces an error at a random position in the packet.
`sudo tc qdisc change dev eth0 root netem corrupt 10%`
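The netem parameters can also be combined in a single rule. Just as a sketch with arbitrary values, mixing the options from this post:
```bash
# Delay with jitter, packet loss, duplication, and corruption in one netem rule
sudo tc qdisc add dev eth0 root netem delay 100ms 20ms loss 5% duplicate 1% corrupt 1%
```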
# Conclusions
As mentioned before, there are more advanced options, but this blog post should cover the basics.
---
# ICMP echo requests on Linux and Windows - Reference Guide
Just as a heads-up, this is going to be a quick reference guide for the use of the ICMP echo request - better known as `ping`. I have to look up some options multiple times a week, so I thought it would be beneficial to write them up in a post like this. I might add more options at some point, but these are the most important ones in my experience.
In a nutshell: ICMP echo requests can be used to check the reachability of two hosts on layer 3. This is indispensable in any troubleshooting session if the network is involved.
**Side note**: All Linux references should work on **MacOS** too.
# Simple ping without any options <a href="#ping" id="ping">#</a>
Linux:
: `ping 10.10.20.1`
**Results**
```markdown
kuser@pleasejustwork:~$ ping 10.10.20.1
PING 10.10.20.1 (10.10.20.1) 56(84) bytes of data.
64 bytes from 10.10.20.1: icmp_seq=1 ttl=255 time=0.594 ms
64 bytes from 10.10.20.1: icmp_seq=2 ttl=255 time=0.489 ms
64 bytes from 10.10.20.1: icmp_seq=3 ttl=255 time=0.501 ms
64 bytes from 10.10.20.1: icmp_seq=4 ttl=255 time=0.504 ms
64 bytes from 10.10.20.1: icmp_seq=5 ttl=255 time=0.534 ms
^C
--- 10.10.20.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4075ms
rtt min/avg/max/mdev = 0.489/0.524/0.594/0.037 ms
```
Windows - Cmd Line:
: `ping 10.10.20.1`
**Results**
```markdown
C:\Users\windows-sucks>ping 10.10.20.1
Pinging 10.10.20.1 with 32 bytes of data:
Reply from 10.10.20.1: bytes=32 time<1ms TTL=255
Reply from 10.10.20.1: bytes=32 time<1ms TTL=255
Reply from 10.10.20.1: bytes=32 time<1ms TTL=255
Reply from 10.10.20.1: bytes=32 time<1ms TTL=255
Ping statistics for 10.10.20.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
```
Windows - Powershell - Test-Connection:
: `Test-Connection 10.10.20.1`
**Side note**: this will take longer. The common explanation is that the output is a Win32_PingStatus object you can work with, which adds overhead. You can get a quick `True` or `False` with the `-Quiet` argument.
**Side note**: Not all options are available for PS 5.1. You can check your current version with `$PSVersionTable.PSVersion`.
**Results**
```markdown
PS C:\Users\windows-sucks> Test-Connection -Computername 10.10.20.1
Source Destination IPV4Address IPV6Address Bytes Time(ms)
------ ----------- ----------- ----------- ----- --------
DESKTOP-GP... 10.10.20.1 32 0
DESKTOP-GP... 10.10.20.1 32 0
DESKTOP-GP... 10.10.20.1 32 0
DESKTOP-GP... 10.10.20.1 32 0
```
Notable Mention: Windows - Powershell 5.1+ - Test-NetConnection:
: `Test-NetConnection 10.10.20.1` / `tnc 10.10.20.1`
: `Test-NetConnection` can be abbreviated with `tnc`
`Test-NetConnection` is only suited for ping requests without any options. I'll write about `Test-Connection` in the rest of the post since it offers more options.
**Results**
```markdown
PS C:\Users\windows-sucks> Test-NetConnection 10.10.20.1
ComputerName : 10.10.20.1
RemoteAddress : 10.10.20.1
InterfaceAlias : Ethernet
SourceAddress : 10.10.20.54
PingSucceeded : True
PingReplyDetails (RTT) : 0 ms
```
## Continuous ping requests <a href="#cont" id="cont">#</a>
Linux:
: *continuous pings by default*
Windows - Cmd Line:
: `/t` or `-t`
: *Can be interrupted with `CTRL` + `c`*
Windows - Powershell 7.2+ - Test-Connection:
: `-Repeat`
## Number of ping requests <a href="#number" id="number">#</a>
Sets the number of pings
Linux:
: `-c NUMBER`
: Default is continuous ping
Windows - Cmd Line:
: `/n NUMBER` / `-n NUMBER`
: Default is 4
Windows - Powershell 5.1+ - Test-Connection:
: `-Count NUMBER`
: Default is 4
## Using a specific interface <a href="#interface" id="interface">#</a>
Linux:
: `-I INTERFACE-NAME`
: *just use the name of the specific interface you want to use*
Windows - Cmd Line:
: `-S SOURCE-IP`
: *you have to choose the IP of the interface to use it for a ping*
## Domain name resolution <a href="#resolution" id="resolution">#</a>
You get results faster if you can avoid domain name resolution.
Linux:
: *does the name resolution by default. Use `-n` to avoid it*
: `-n`
Windows - Cmd Line:
: *resolves the IP address to a hostname; Windows does not resolve names by default*
: `/a` / `-a`
## Avoid output / quiet mode <a href="#quiet" id="quiet">#</a>
Linux:
: `-q`
: only shows the start and end summary
Windows - Cmd Line:
: `ping 10.10.20.2 > nul 2>&1`
: *no output at all*
Windows - Powershell 5.1+ - Test-Connection:
: `-Quiet`
: Just outputs `True` / `False`
## Add timestamp <a href="#timestamp" id="timestamp">#</a>
Linux:
: `-D`
: adds a UNIX timestamp in front of each line.
Windows:
: *haven't found a built-in option. There are multiple ways to add one with PowerShell scripting*
## Packet Size <a href="#size" id="size">#</a>
Linux:
: `-s NUMBER`
: data bytes. The default is 56 bytes + 8 bytes ICMP header data.
Windows - Cmd Line:
: `/l NUMBER` / `-l NUMBER`
: data bytes. The default is 32 bytes + 8 bytes ICMP header data. Max is 65527.
Windows - Powershell 5.1+ - Test-Connection:
: `-BufferSize NUMBER`
: data bytes. The default is 32 bytes + 8 bytes ICMP header data.
## TTL / Time to live <a href="#ttl" id="ttl">#</a>
Sets the IP Time to live!
Linux:
: `-t NUMBER`
Windows - Cmd Line:
: `/i NUMBER` / `-i NUMBER`
Windows - Powershell 5.1+ - Test-Connection:
: `-MaxHops NUMBER`
: *default is 128*
## Sets "Don't Fragment" bit <a href="#df" id="df">#</a>
Sets the DF flag in the IP header.
Linux:
: `-M do` *# prohibits fragmentation and sets the DF bit*
Windows - Cmd Line:
: `/f` / `-f`
Windows - Powershell 7.2+ - Test-Connection:
: `-DontFragment`
## IP Protocol 4 or 6 <a href="#protocol" id="protocol">#</a>
Linux:
: `-4` *# IPv4*
: `-6` *# IPv6*
Windows - Cmd Line:
: `/4` / `-4` *# IPv4*
: `/6` / `-6` *# IPv6*
Windows - Powershell 7.2+ - Test-Connection:
: `-IPv4` *# IPv4*
: `-IPv6` *# IPv6*
---
# Getting started with iperf3 - Network Troubleshooting
iperf3 is available for all kinds of operating systems. The download page is on their [official homepage](https://iperf.fr/iperf-download.php). I'll use **Linux** as a reference for the server and client.
# Basic usage
**iperf3** is a tool to **measure the throughput** between hosts in a network and can test TCP, UDP, and SCTP, whereby TCP is the default. iperf3 must be installed and active on two hosts, in which one host acts as a server and the other one as a client. By default, you measure the upload from the client to the server, but you can test the download from the client with the `-R` flag.
**Side note**: You can use the network stack of your host (localhost) or public iperf3 servers for testing.
Server-side:
: `iperf3 -s -B 10.10.20.51 -p 5555`
: `-s` *# starts iperf3 server*
: `-B 10.10.20.51` *# binds the server to the IP on which it should listen*
: `-p 5555` *# sets listening port to `5555`. The default port is `5201`*
: TCP connection will be tested by default
: the measurement interval is 1 sec by default
Stop the server by pressing `CTRL` + `c`.
Client-side:
: `iperf3 -c 10.10.20.51 -p 5555`
: `-c` *# starts iperf3 as a client*
: `10.10.20.51` *# sets the server destination*
: `-p 5555` *# sets the port on which the server is listening*
**Output on the client**
```markdown
kuser@pleasejustwork:~$ iperf3 -c 10.10.20.51 -p 5555
Connecting to host 10.10.20.51, port 5555
[ 5] local 10.10.20.50 port 53512 connected to 10.10.20.51 port 5555
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 5.35 MBytes 44.9 Mbits/sec 9 197 KBytes
[ 5] 1.00-2.00 sec 4.72 MBytes 39.6 Mbits/sec 0 227 KBytes
[ 5] 2.00-3.00 sec 5.09 MBytes 42.7 Mbits/sec 1 172 KBytes
[ 5] 3.00-4.00 sec 4.65 MBytes 39.0 Mbits/sec 0 192 KBytes
[ 5] 4.00-5.00 sec 4.96 MBytes 41.6 Mbits/sec 0 206 KBytes
[ 5] 5.00-6.00 sec 4.96 MBytes 41.6 Mbits/sec 0 223 KBytes
[ 5] 6.00-7.00 sec 4.65 MBytes 39.0 Mbits/sec 0 237 KBytes
[ 5] 7.00-8.00 sec 4.84 MBytes 40.6 Mbits/sec 2 198 KBytes
[ 5] 8.00-9.00 sec 4.96 MBytes 41.6 Mbits/sec 0 220 KBytes
[ 5] 9.00-10.00 sec 4.96 MBytes 41.6 Mbits/sec 0 233 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 49.2 MBytes 41.2 Mbits/sec 12 sender
[ 5] 0.00-10.08 sec 48.7 MBytes 40.5 Mbits/sec receiver
iperf Done.
```
Side notes on the results of a TCP connection:
: `ID` column shows the internal ID per stream (bi-directional, parallel, etc.)
: `Transfer` column - how much data was transferred per interval
: `Bitrate` column - throughput per interval
: `Retr` column - retries needed in case of packet loss
: The throughput can be found in the summary
Results are shown on both sides. If you want to retrieve the server-side output, you can use the `--get-server-output` flag.
iperf3 initiates a so-called 'control connection' to exchange parameters and test results; the test data itself is sent over a separate TCP connection, a flow of UDP packets, or an SCTP connection. If not otherwise stated, random data is used for the test.
**Side note**: To check the connection for **Jitter**, choose UDP for your test. The summary will have an additional column for jitter.
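As a quick sketch, reusing the server from the basic usage section, a UDP test could look like this - `-b` sets a target bandwidth, since UDP tests otherwise run at a low default rate:
```bash
# UDP test with a target bandwidth of 50 Mbit/s against the example server
iperf3 -c 10.10.20.51 -p 5555 -u -b 50M
```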
It is important to know that every **server instance can only handle 1 test at a time**. The server will refuse all new attempts if it is currently in use.
`iperf3: error - unable to send control message: Connection reset by peer`
# General options
The following options can be used on the server and client sides.
#### Use a specific interface
Use a specific interface for iperf3:
: `-B IPADDRESS` / `--bind IPADDRESS`
#### Measurement interval
Set the length of the reporting interval:
: `-i NUMBER` / `--interval NUMBER` in seconds
: the default is 1 second, and the periodic reports can be disabled with `0`
#### Results / Output
Get **verbose output** with `-V` / `--verbose` to get a more detailed output on your test.
You can use `--logfile FILENAME` to save the **output to a logfile**. Note that iperf3 won't display any output on the console then, and by using the same file again you **append to the already existing logfile**.
The results are shown on both sides. If you want to retrieve the output of the server-side, you can use the `--get-server-output` flag.
Determine the format of the throughput:
: `-f k`/`-f m`/`-f g`/`-f t` - Kbits/Mbits/Gbits/Tbits
: `-f K`/`-f M`/`-f G`/`-f T` - KBytes/MBytes/GBytes/TBytes
Output the results as **JSON** with the `-J` flag.
#### Reference data for the transfer
Use a specific file to simulate the transfer:
: `-F FILENAME` / `--file FILENAME`
: otherwise, random data will be used
: the actual file won't be transferred and will only be used as a reference
# Server-specific options
As mentioned in the basic usage section, you can start an iperf3 server with the `-s` flag.
Start iperf3 server:
: `-s` / `--server`
Run the server in the background as a daemon:
: `-D` / `--daemon`
# Client-specific options
Initiate iperf3 connection as a client:
: `-c` / `--client`
As mentioned before, by **default you test the upload from the client to the server**. Use the `-R` / `--reverse` flag to test the **download from the server to the client**.
You can use the `--bidir` flag to test down- and upload at the same time. Just a sample of how the output looks:
```markdown
[...]
[ ID][Role] Interval Transfer Bitrate Retr Cwnd
[ 5][TX-C] 0.00-1.00 sec 4.17 MBytes 35.0 Mbits/sec 14 157 KBytes
[ 7][RX-C] 0.00-1.00 sec 19.9 MBytes 167 Mbits/sec
[ 5][TX-C] 1.00-2.00 sec 4.10 MBytes 34.4 Mbits/sec 0 185 KBytes
[ 7][RX-C] 1.00-2.00 sec 20.1 MBytes 169 Mbits/sec
[ 5][TX-C] 2.00-3.00 sec 4.47 MBytes 37.5 Mbits/sec 0 202 KBytes
[ 7][RX-C] 2.00-3.00 sec 17.6 MBytes 147 Mbits/sec
[...]
[ 5][TX-C] 0.00-10.00 sec 41.5 MBytes 34.8 Mbits/sec 17 sender
[ 5][TX-C] 0.00-10.06 sec 41.2 MBytes 34.3 Mbits/sec receiver
[ 7][RX-C] 0.00-10.00 sec 205 MBytes 172 Mbits/sec 285 sender
[ 7][RX-C] 0.00-10.06 sec 201 MBytes 168 Mbits/sec receiver
```
Specific transport protocol:
: TCP is used by default
: `-u` / `--udp` for a flow of UDP packets
: `--sctp` for a SCTP connection
Choose between IPv4 and IPv6:
: `-4` / `--version4` *# IPv4*
: `-6` / `--version6` *# IPv6*
Determine the length of the test:
: `-t NUMBER` / `--time NUMBER` *# time in seconds; default is 10 seconds.*
Sets number of parallel client streams:
: `-P NUMBER` / `--parallel NUMBER` *# default is 1*
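Putting a few of these client options together - again just a sketch against the server from the basic usage section - a 30-second download test with four parallel streams could look like this:
```bash
# Reverse (download) test with 4 parallel streams for 30 seconds
iperf3 -c 10.10.20.51 -p 5555 -R -P 4 -t 30
```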
#### Protocol specific
The following options are used for specific problems or troubleshooting sessions, but they are worth mentioning for sure.
MSS / Maximum segment size:
: `-M NUMBER` / `--set-mss NUMBER` *# in bytes*
: MTU / maximum transmission unit minus 40 bytes = MSS
Window size:
: `-w NUMBER[kmgt]` / `--window NUMBER[kmgt]`
: `k`/`m`/`g`/`t` - KBytes/MBytes/GBytes/TBytes
No TCP delay:
: `-N` / `--no-delay`
: disables "Nagle's Algorithm"
Omit the first `n` seconds of the test:
: `-O NUMBER` / `--omit NUMBER` in seconds
: used to ignore the TCP ramp-up phase
**Sample output**
```markdown
[...]
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 5.57 MBytes 46.7 Mbits/sec 13 196 KBytes (omitted)
[ 5] 1.00-2.00 sec 4.65 MBytes 39.0 Mbits/sec 0 227 KBytes (omitted)
[ 5] 0.00-1.00 sec 5.09 MBytes 42.7 Mbits/sec 1 174 KBytes
[ 5] 1.00-2.00 sec 4.65 MBytes 39.0 Mbits/sec 0 192 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-2.00 sec 9.74 MBytes 40.9 Mbits/sec 1 sender
[ 5] 0.00-2.07 sec 10.1 MBytes 41.1 Mbits/sec receiver
```
---
# My Offsite Backup - March 2023
![mob-2303-setup-1.jpg](/images/blog/mob-2303-setup-1.jpg)
While I was on a business trip the other day, I thought about a scenario in which my home would burn down or get robbed. A simple but essential question emerged:
What could I recover?
I already saved backups in the cloud, but I figured that I could not recover my data from them without my private laptop (which I did not have with me this time).
At this point, I knew I had to change some things to ensure that my important data was properly backed up.
# The goal
Having a disaster recovery strategy for my most important data that is easy to maintain.
The **offline backup** should be stored **offsite in a secure and trustworthy location**. The data must be saved on at least **two mediums** to **reduce the risk of data loss due to hardware failure**. The data must be **encrypted** to secure my data in case of theft. **The case** should be easily transported and protect the mediums against common risks like shock and water. The **frequency of the offsite backup** should be around every 1-2 weeks.
For more information, please visit my [backup guide](https://ittavern.com/backup-guide/).
One of the main things to consider is: **I must be able to recover everything with just this one offsite backup**.
# The data
I am currently aggregating a ton of data to a local server to make future backups easier. It is spread over multiple devices, which can be a pain in the ass.
For now, **I only back up important data**, which can be subdivided further into '**frequently**' and '**rarely**' used or changed.
Some **examples of frequently used data** would be: SSH & PGP keys, password & 2FA database, configuration files, notes, and so on.
Some **examples of rarely used data** would be family photos & videos, ebooks, documents, and so on.
At this point, the frequently used data is around **10GB**, and the rarely used data is around **90GB**. This will increase by a factor of two or three after I get everything sorted and stored in one place.
# The Strategy
I've decided to use a **rotational system** in which I have **two identical cases** with storage mediums for the backups. With this setup, I can do the backups at home, swap the case with the recently done backups for the one at the offsite location, and rotate like this repeatedly. It is more expensive, but it saves a lot of time, brings more comfort, and even adds more resilience.
I won't go into detail on what **location** I have chosen for my offsite backup, but I can say that I've found someone so kind as to store it for a couple of beers a month.
# The hardware
![mob-2303-setup-1.jpg](/images/blog/mob-2303-setup-1.jpg)
Case:
: waterproof and shock-resistant **case**
: **cable tie**, to keep case closed in case of a fall
: **seal** sticker with ID, makes sure that I know if the case was opened at the offsite location
Content:
: **1TB HDD** in an anti-static bag and silica dehumidifier bags
: **128GB USB Stick**
: **YubiKey** (MFA)
The seal sticker can be removed without any residues, and a re-applied seal looks like this:
![mob-2303-seal.jpg](/images/blog/mob-2303-seal.jpg)
#### Upcoming Improvements
- Swap USB stick with SSD + anti-static bag
- swap the current case with a fire-proof case
- add a recovery manual to the case
# The software
I am already using [borg](https://www.borgbackup.org/) for my cloud backups, so I've also decided to use it for my offsite backups. I can encrypt my data, recover everything or single files only, save space, and can automate many things.
I will write about it in a separate blog post and link it here as soon as I have everything set up correctly. It works for now, but it isn't pretty.
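Just as a rough sketch of what a minimal borg workflow can look like - repository path and source directory are placeholders, not my actual setup:
```bash
# Initialize an encrypted repository on the backup drive (placeholder path)
borg init --encryption=repokey /mnt/offsite-hdd/borg-repo
# Create a compressed, timestamped archive of the important data
borg create --stats --compression lz4 /mnt/offsite-hdd/borg-repo::backup-{now} ~/important-data
# List all archives stored in the repository
borg list /mnt/offsite-hdd/borg-repo
```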
#### Upcoming Improvements
- automate all the things
- document the process
# The routine
![mob-2303-routine.jpg](/images/blog/mob-2303-routine.jpg)
So, there's currently no routine. I've printed a template where I document backups with the case number, seal ID, changes I've made, and so on.
Backups and tests are done manually. It takes some time, but I can make sure that everything works and I will change it in the future.
#### Upcoming Improvements
- combine routine with cloud backups
- create a better documentation
- check backups automatically
- check the health of the hardware
# Conclusion
This backup strategy is relatively new and not battle-tested, but at this point I am happy with it. I can tell you that I sleep better!
I am going to modify the strategy over time and give you all an update every couple of months.
---
# Getting started with nmap scripts
# Disclaimer
**Scripts are not run in a sandbox and thus could accidentally or maliciously damage your system or invade your privacy**. Never run scripts from third parties unless you trust the authors or have carefully audited the scripts yourself.
**Only scan networks and hosts you have permission for**. Many hosting providers do not allow the scanning of other networks, and doing it anyways could cause you trouble. Please be aware of it.
---
This blog post will cover the general usage of nmap scripts, not the scripting itself. Check out the [getting started with nmap post](https://ittavern.com/getting-started-with-nmap/) if you are new to nmap.
# Basics usage <a href="#usage" id="usage">#</a>
The **Nmap Scripting Engine (NSE)** allows you to run and share pre-made and custom scripts. Scripts are written in Lua and use the file extension `.nse`. NSE will enable you to scan and analyze any host and network in-depth and according to your needs. Automation, vulnerability scans, and many other functions are possible with the NSE.
A list of all scripts that are included by default can be found in their [official docs](https://nmap.org/nsedoc/scripts/).
I mainly use scripts to find, enumerate, and check SMB shares and SSH servers, to find potential rogue DHCP servers (consumer routers ftw), and for some specific vuln scans, like for log4j and other recent attacks.
Run an nmap scan with a script:
: `nmap --script=SCRIPTNAME TARGETNETWORK/HOST`
: *multiple syntaxes are allowed, as I'll show in the next example*
Example with different syntaxes:
: `nmap --script http-title scanme.nmap.org`
: `nmap --script=http-title scanme.nmap.org`
: `nmap --script 'http-title' scanme.nmap.org`
: `nmap --script "http-title" scanme.nmap.org`
: `nmap --script="http-title" scanme.nmap.org`
: *and I bet there are more - you can even add the file extension `.nse` right after the name*
**Output:**
```bash
[...]
80/tcp open http
|_http-title: Go ahead and ScanMe!
[...]
```
**Side note**: Scanning the domain `scanme.nmap.org` is permitted in low volumes as stated on [their page](http://scanme.nmap.org/), but please do not abuse it!
#### Using multiple scripts <a href="#multiple-scripts" id="multiple-scripts">#</a>
There are various ways to use multiple scripts at once. The easiest way would be to separate them with a **comma**.
`nmap -p 80 --script=http-title,http-headers scanme.nmap.org`
Another way would be to pass a whole directory to the `--script` argument, in which case all scripts within the chosen directory will be run.
The last way is to pick a whole category of scripts. I'll write about categories further down in this post.
#### Script help page
You can use `--script-help` to get additional information about a script.
`nmap --script-help http-title.nse`
```markdown
Starting Nmap 7.80 ( https://nmap.org ) at 2023-04-07 16:23 CEST
http-title
Categories: default discovery safe
https://nmap.org/nsedoc/scripts/http-title.html
Shows the title of the default page of a web server.
The script will follow up to 5 HTTP redirects using the default rules in the
http library.
```
#### Script arguments
Some scripts require arguments. You can find them with `--script-help` or on the official page of the script.
The official syntax is:
: `--script-args <n1>=<v1>,<n2>={<n3>=<v3>},<n4>={<v4>,<v5>}`
: and it often enough takes me 1-2 tries to get everything right, depending on the script.
If you have many arguments to run, you can call them from a file with `--script-args-file FILENAME`.
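A hedged example: the `http` NSE library accepts a `http.useragent` argument, so running `http-title` with a custom user agent could look like this - double-check the exact argument names with `--script-help`:
```bash
# Pass a library argument to the http-title script via --script-args
nmap -p 80 --script http-title --script-args http.useragent="Mozilla/5.0" scanme.nmap.org
```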
# Script directory <a href="#directory" id="directory">#</a>
You can usually find the default scripts in the following directories.
Linux:
: `/usr/local/share/nmap/scripts` or `/usr/share/nmap/scripts`, or somewhere else, depending on the installation method
: or look for them via `locate *.nse`
Windows:
: `C:\Program Files\Nmap\scripts`
You can choose a different directory with the `--datadir` argument.
`nmap --datadir /some/random/path/to/scripts/ -sC -sV TARGETNETWORK`
NSE will look for the script in the following places until found:
: `--datadir`
: `$NMAPDIR`
: `~/.nmap` (Linux)
: `<APPDATA>\nmap` (Windows)
: directory containing the `nmap` executable + `../share/nmap` in Linux
: `NMAPDATADIR`
: and the current directory
#### NSE data directory
More complex scripts require separate data sets, databases, and other things. Those must be placed in the NSE data directory. It works similarly to the script directory but is out of this post's scope. Most scripts that require this function will let you know. I just thought it would be beneficial to mention.
# Custom scripts <a href="#custom-scripts" id="custom-scripts">#</a>
It is straightforward to use and add custom scripts that are either created by yourself or downloaded from the internet.
**I want to point to the disclaimer at the top of the post: only run scripts that you trust!**
Run a custom script in nmap:
: `nmap --script /path/to/script.nse TARGET`
Using the absolute path of a script would be the easiest way to do so. If the script works and you plan to use it more often, you can add it to the `script.db`, which contains all scripts and lets you call the script by its name only. This file is generally in the same directory as the already included scripts.
Add the `.nse` file to the script directory and run the following command to add the script to `script.db`:
`sudo nmap --script-updatedb`
You should now be able to run the script with the name only.
# Script categories <a href="#script-categories" id="script-categories">#</a>
NSE categorizes its scripts, so you can run a bunch of them at once. The following categories currently exist:
`auth, broadcast, default, discovery, dos, exploit, external, fuzzer, intrusive, malware, safe, version, and vuln`
Most names are self-explanatory, and for more information, I'd like to refer you to the [official docs](https://nmap.org/book/nse-usage.html#nse-categories).
You can run nmap with all `default` scripts with the following command:
: `nmap --script=default TARGET`
: `nmap -sC TARGET` *# `-sC` is the short form and no other category has one to my knowledge*
Like the scripts, you could run multiple categories. Simply separate them with a **comma**.
#### Scripts in a category
I bet there are easier ways to check what scripts are in a category, but I'd just check the `script.db` for the specific category:
`grep -i 'default' script.db`
**Output**
```markdown
Entry { filename = "address-info.nse", categories = { "default", "safe", } }
Entry { filename = "afp-serverinfo.nse", categories = { "default", "discovery", "safe", } }
Entry { filename = "ajp-auth.nse", categories = { "auth", "default", "safe", } }
Entry { filename = "ajp-methods.nse", categories = { "default", "safe", } }
Entry { filename = "amqp-info.nse", categories = { "default", "discovery", "safe", "version", } }
[...]
```
---
Sources:
: [nmap Official Docs](https://nmap.org/book/man-nse.html)
---
# Curl on Linux - Reference Guide
Curl is a powerful tool that is mainly used to transfer data. It has way more functions, but I won't be able to cover everything. This blog post is mainly a reference for later use and not a step-by-step guide. Therefore I won't cover everything in depth.
Most of it should work on other operating systems too, but I'll use **Linux** as reference. I'll keep this page up-to-date and add more topics in the future.
# General <a href="#general" id="general">#</a>
**Side note**: put the URL into single or double quotes if it contains special characters.
By default, curl writes the received data to **stdout** and does not encode or decode received data.
A quick example to get your public IP:
: `curl brrl.net/ip`
: `curl -L brrl.net/ip` # `-L` follows the HTTP>HTTPS redirect if necessary
#### Saving to disk <a href="#download" id="download">#</a>
You can redirect the content from stdout to another application, save it as a file or download the target file.
Download content to the file:
: `curl -L -o myip.txt brrl.net/ip` # save your public IP to a file called `myip.txt` in the current directory
If you want to **download a file** and keep the **original name**, use the `-O` (capital 'o') or `--remote-name` option.
If you want to create a **new directory**, you can use `--create-dirs` like this:
`curl -L --create-dirs -o path/from/current/dir/myip.txt brrl.net/ip`
The **permissions** used for created directories are 0750.
#### Specific interface <a href="#interface" id="interface">#</a>
You can use the `--interface` option to use one specific interface. You are free to use the interface name, the IP address, or the hostname.
#### Specific DNS server <a href="#dns-server" id="dns-server">#</a>
You can choose a specific DNS server with the following option. Multiple DNS servers can be chosen and must be separated by a comma.
`--dns-servers 9.9.9.9:53,149.112.112.112:53`
#### Redirects <a href="#redirects" id="redirects">#</a>
If you want curl to follow redirects, simply use the `-L` flag.
#### Import curl options and targets from the file <a href="#import-options" id="import-options">#</a>
Some tasks require many options. To keep it organized, you can import those options from a file with `-K` or `--config`, followed by the name of the file.
Example:
: `curl --config curl-options.txt https://example.com`
#### Data transfer limits <a href="#transfer-limits" id="transfer-limits">#</a>
You can set up- and download limits with `--limit-rate`. The default unit is bytes/second, and you can use `K`,`M`,`G`,`T` for Kilo-, Mega-, Giga- and Terabyte, respectively.
```markdown
--limit-rate 10K
--limit-rate 1000
--limit-rate 10M
```
#### Parallel function <a href="#parallel" id="parallel">#</a>
To let curl transfer data in parallel, you can use `-Z` or `--parallel`, and add `--parallel-immediate` to start the transfers immediately.
`-Z --parallel-immediate`
The **default is 50** parallel transfers but can be set with `--parallel-max NUMBER`.
#### Continue downloads automatically
Unreliable connections are a pain, and you can tell curl to retry and continue downloads:
: `--retry 999 --retry-max-time 0 -C -`
: `--retry 999` # retry it 999 times
: `--retry-max-time 0` # prevent the default timeout between retries
: `-C -` # continue the transfer when you run the command again, and let curl figure out where to continue
[Source from StackExchange](https://superuser.com/a/142480)
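Put together, a download that survives an unreliable connection could look like this - the URL is just a placeholder:
```bash
# Keep retrying and resume the download where it stopped, keeping the remote file name
curl -L -O -C - --retry 999 --retry-max-time 0 https://example.com/big-file.iso
```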
# Wildcards / Multiple downloads <a href="#wildcards" id="wildcards">#</a>
**Side note**: make sure to put the full URL into single or double quotes if you work with wildcards and sequences.
#### Sets <a href="#sets" id="sets">#</a>
You can tell curl to transfer multiple files by putting the names into curly braces `{}`.
Keep the original name:
: `curl -O 'http://{domain1,domain2,domain3}.com'`
: `curl -O 'http://domain.com/{uri1,uri2,uri3}'`
Rename files:
: `curl "http://{one,two}.example.com" -o "file_#1.txt"`
And you can use multiple sets, as shown in this example:
```bash
kuser@pleasejustwork:~/temp/curl$ curl "http://example.com/{1,2}/{3,4}" -o "file_#1_#2.txt"
[1/4]: http://example.com/1/3 --> file_1_3.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1256 100 1256 0 0 6404 0 --:--:-- --:--:-- --:--:-- 6375
[2/4]: http://example.com/1/4 --> file_1_4.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1256 100 1256 0 0 12753 0 --:--:-- --:--:-- --:--:-- 12816
[3/4]: http://example.com/2/3 --> file_2_3.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1256 100 1256 0 0 12765 0 --:--:-- --:--:-- --:--:-- 12816
[4/4]: http://example.com/2/4 --> file_2_4.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1256 100 1256 0 0 12804 0 --:--:-- --:--:-- --:--:-- 12816l
kuser@pleasejustwork:~/temp/curl$ ls
file_1_3.txt file_1_4.txt file_2_3.txt file_2_4.txt
```
#### Sequence <a href="#sequences" id="sequences">#</a>
Use `[]` for alphanumeric sequences:
: `curl -O 'http://example.com/picture-[1-51].img'`
: `curl -O 'http://example.com/attach-[a-z].img'`
It works with **leading zeros** too.
**Nested sequences** are not supported!
Adding steps:
: `curl -O 'http://example.com/picture-[1-50:2].img'` # every second picture
# Proxies <a href="#proxy" id="proxy">#</a>
I am not too familiar with the proxy functions. I normally just use it to download things from Tor.
If you are connected to Tor, you can reach the network through a SOCKS5 socket:
: `--socks5-hostname localhost:9050` # Tor service / daemon (default)
: `--socks5-hostname localhost:9150` # Tor Browser Bundle
: with this addition, you can reach the Tor network via curl.
For normal SOCKS5 sockets, you could simply use `--socks5` instead of `--socks5-hostname`, but with `--socks5-hostname`, the DNS resolution runs on the proxy.
The usual syntax for proxies looks like this, according to the manual:
: `-x, --proxy [protocol://]host[:port]`
: `curl --proxy http://proxy.example https://example.com`
: `curl --proxy socks5://proxy.example:12345 https://example.com`
Another example of HTTP basic auth proxy:
: `curl --proxy-basic --proxy-user user:password -x http://proxy.example https://example.com`
# Authentication <a href="#authentication" id="authentication">#</a>
Example for basic authentication:
: `curl -u name:password --basic https://example.com`
Example with bearer token:
: `curl http://username:password@example.com/api/ -H "Authorization: Bearer reallysecuretoken"`
Example with oauth2 bearer:
: `curl --oauth2-bearer "mF_9.B5f-4.1JqM" https://example.com`
Example with ssh public key authentication:
: `curl --pass secret --key file https://example.com`
# What else
And there is so much more, but I'll leave it like that for now. Things I am going to add in the future:
HTTP post/get requests, certificates troubleshooting, up- and downloading data through FTP, sftp, etc., mail /SMTP. I am unfamiliar with those, so I'll test a bunch before I add those topics here.
---
# Getting started with tcpdump
In this blog post, I assume that `tcpdump` is already installed since the installation method can vary from system to system, and basic Linux and CLI skills already exist. I'll try to keep it as short as possible while providing all the necessary information.
# General <a href="#general" id="general">#</a>
`tcpdump` is a CLI tool to capture network traffic to help you troubleshoot specific issues. I'll use a Linux system as a reference system.
To start a packet capture, simply type `sudo tcpdump` in your terminal. This will show you all packets from and to all interfaces, but it won't be saved anywhere. You can end the capture by pressing `CTRL` + `C`.
You can get more help with the `-h` / `--help` or get the current version of `tcpdump` with `--version`.
The following sections show you how to filter the traffic and save your packet captures to disk. For more advanced filters, you can use logical operators to combine filters.
# Limit the hosts or networks <a href="#host-filter" id="host-filter">#</a>
There are many ways to filter the packets you want to capture, and we are going to start with the host and network filters. Here are some examples:
Get all traffic for one specific IP address in both directions:
: `sudo tcpdump host 10.10.20.1`
**Results for a simple ping:**
```bash
kuser@pleasejustwork:~/9_temp/tcpdump$ sudo tcpdump host 10.10.20.1
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on wlp9s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
17:51:24.396399 IP pleasejustwork > _gateway: ICMP echo request, id 15, seq 4, length 64
17:51:24.402897 IP _gateway > pleasejustwork: ICMP echo reply, id 15, seq 4, length 64
17:51:25.398088 IP pleasejustwork > _gateway: ICMP echo request, id 15, seq 5, length 64
17:51:25.404880 IP _gateway > pleasejustwork: ICMP echo reply, id 15, seq 5, length 64
17:51:26.400067 IP pleasejustwork > _gateway: ICMP echo request, id 15, seq 6, length 64
17:51:26.404658 IP _gateway > pleasejustwork: ICMP echo reply, id 15, seq 6, length 64
17:51:27.401819 IP pleasejustwork > _gateway: ICMP echo request, id 15, seq 7, length 64
17:51:27.408093 IP _gateway > pleasejustwork: ICMP echo reply, id 15, seq 7, length 64
^C
8 packets captured
8 packets received by filter
0 packets dropped by kernel
```
**Side note:** In this example, I've pinged my gateway. If you don't want to resolve hostnames, simply use the `-n` flag.
You can specify whether the IP should be the source or destination instead of bi-directional traffic with `src` or `dst`:
: `sudo tcpdump src 10.10.10.10`
: `sudo tcpdump dst 10.10.10.10`
Use logical operators to filter for more than one host.
#### Network filter <a href="#network-filter" id="network-filter">#</a>
If you want to capture traffic for a **specific network**, you can use the `net` option together with the **network address** and **CIDR notation**.
Bi-directional packet capture of a specific network:
: `sudo tcpdump net 10.10.10.0/24`
You could combine this option with `src` or `dst` to see only the incoming or outgoing traffic:
: `sudo tcpdump src net 10.10.10.0/24`
: `sudo tcpdump dst net 10.10.10.0/24`
#### MAC address filter <a href="#mac-filter" id="mac-filter">#</a>
If you need to filter captures for a specific MAC address, you can simply use the previous filters with `ether`.
Packet capture filter by a specific MAC address:
: `sudo tcpdump ether host aa:aa:aa:bb:bb:bb` # bi-directional
: `sudo tcpdump ether src aa:aa:aa:bb:bb:bb` # source
: `sudo tcpdump ether dst aa:aa:aa:bb:bb:bb` # destination
`tcpdump` supports the most common formats and maybe even more:
: `aaaaaabbbbbb`
: `aa-aa-aa-bb-bb-bb`
: `aa:aa:aa:bb:bb:bb`
: `aaaa.aabb.bbbb`
#### Directional traffic filter
I've never used this option, but you can use a filter for incoming or outgoing traffic without any hosts with the `-Q` / `--direction` options:
: `sudo tcpdump -Q in` / `sudo tcpdump --direction=in` # all incoming traffic
: `sudo tcpdump -Q out` / `sudo tcpdump --direction=out` # all outgoing traffic
# Port filters <a href="#port-filter" id="port-filter">#</a>
Packet capture filter for a specific port:
: `sudo tcpdump port 53` # source or destination port
: `sudo tcpdump src port 53` # source port
: `sudo tcpdump dst port 53` # destination port
Use logical operators to filter for more than one host:
: `sudo tcpdump port 80 or port 443`
Use `portrange` instead if you want to filter a range of ports:
: `sudo tcpdump portrange 50-60` # source or destination port within the range
: `src` and `dst` can be used too!
# Protocol filters <a href="#protocol-filter" id="protocol-filter">#</a>
The most common protocol filters are:
: `tcp`
: `udp`
: `icmp`
: `ip`
: `ip6`
: `arp`
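The protocol filters can be combined with the host and port filters from above, for example:
```bash
# Capture only ICMP traffic to or from a single host, without name resolution
sudo tcpdump -n icmp and host 10.10.20.1
```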
# Using a specific interface <a href="#interface" id="interface">#</a>
Choosing the proper interface is one of my most used options to keep the pcap file as small as possible. Most servers have multiple NICs, and many troubleshooting sessions require me to be connected to multiple networks, so choosing a single interface keeps things sorted.
You can **list the available interfaces** with the `-D` / `--list-interfaces` options.
**Example of `tcpdump -D`:**
```bash
kuser@pleasejustwork:~/9_temp/tcpdump$ tcpdump -D
1.eth0 [Up, Running, Connected]
2.wg-mullvad [Up, Running]
3.any (Pseudo-device that captures on all interfaces) [Up, Running]
4.lo [Up, Running, Loopback]
[...]
```
To choose an interface, you could use the name of the interface or the number in front of it in the list.
---
To choose an interface for your packet capture, simply use `-i` / `--interface` like this:
: `sudo tcpdump -i 1` or
: `sudo tcpdump --interface=eth0`
You could use `any` as an interface for all interfaces, which is the current default anyway.
# Miscellaneous options <a href="#misc-options" id="misc-options">#</a>
These are just some filters that are important to know.
Tell `tcpdump` not to resolve hostnames:
: `-n`
Tell `tcpdump` not to resolve host or port names:
: `-nn`
Limit the number of packets captured and stop the capture if the limit is reached:
: `-c NUMBER`
Print absolute TCP sequence numbers:
: `-S` / `--absolute-tcp-sequence-numbers`
Import filter expression from file:
: `-F FILENAME`
: `sudo tcpdump -i 2 -F filterfile` # example
```bash
kuser@pleasejustwork:~/9_temp/tcpdump$ cat filterfile
net 10.10.20.0/24 and port 53
```
**Important:** Some options - like the choice of the interface - cannot be put into this file, and the `tcpdump` user must be the owner or in the owner group of the file with the filters to get it working. Additional filters provided in the CLI will be ignored!
# Logical operators <a href="#logical-operators" id="logical-operators">#</a>
As mentioned before, filters can be combined, and logical operators can be used for more advanced filter combinations.
Here is the list of logical operators:
: `and` / `&&`
: `or` / `||`
: `not` / `!`
: `<`
: `>`
A more complex `tcpdump` with more options could look like this:
: `sudo tcpdump -n -i 2 "host 10.10.21.1 and (port 80 or port 443)"`
**Side note:** You need to place the filters in quotes if you want to use parentheses.
# Display options <a href="#display-options" id="display-options">#</a>
You've got various options to adjust the display of the captured packets in the terminal. This won't affect the raw packet capture that you would write to disk.
Increase the verbosity of the output:
: `-v` / `-vv` / `-vvv`
Decrease the verbosity of the output:
: `-q`
Print the packet number at the beginning of the line:
: `-#` / `--number`
Various options for timestamps at the beginning of the line:
: Default is `20:03:46.735488`
: `-t` # no timestamp
: `-tt` # as seconds since Jan 1, 1970, 00:00:00, UTC > `1686506678.821116`
: `-ttt` # print the delta to the previous packet in microseconds per default > `00:00:00.006491`
: `-tttt` # human readable with date and time > `2023-06-11 20:11:56.431621`
: `-ttttt` # delta between current and the first packet of this capture in microseconds per default > `00:00:04.013707`
# Saving capture to a file on disk <a href="#saving-to-disk" id="saving-to-disk">#</a>
Before we start: `tcpdump` overwrites files and does not append to existing files. There is no option to change that, to my knowledge.
Use the `-w` flag to write the raw packets to disk:
: `-w traffic.pcap` # saves file to the current directory
: `-w -` # sends output to `stdout`
**Side note:** The file will contain the raw binary packet data and won't be human-readable. To change that, you can import the data with the `-r` option that I will explain later in this post.
The **default owner and group of the file** will be `tcpdump` but can be changed with the `-Z USERNAME` option. This drops the privileges, and the provided user will be the new owner of this file.
When you write the raw information to a file, `tcpdump` won't show any packets in the terminal. This can be changed with the `--print` option. That said, in older versions, `--print` is not available, and the same can be achieved with the following oneliner.
`--print` replacement for older versions:
: `sudo tcpdump -w - -U | tee traffic.pcap | tcpdump -r -`
: `-w -` # sends binary data to `stdout`
: `-U` # no buffering between packets
: `tee traffic.pcap` # write binary data to file and its own `stdout`
: `-r -` # reads binary data from `stdin` and presents it in a human-readable form
: [Source - Thanks to cnicutar on stackoverflow](https://stackoverflow.com/a/25604237)
#### Working with big files / long packet captures
Sometimes it is necessary to capture a lot of packets. There are some options on how to handle large pcap files.
Set the **maximum size of the pcap file** and **creates a new pcap file** if the limit is reached:
: `-C NUMBER` # default unit is 1.000.000 bytes or 1 MB. According to the docs, it can be changed to kilo- and gigabyte by adding `k/K` and `g/G`, respectively, after the number, but it did not work in my tests.
: `tcpdump` will just add a number behind the filename and count up.
**Side note:** If you encounter the `Permission denied` error, make sure that the user `tcpdump` is the owner or part of the owner group of the directory or use the `-Z USERNAME` option to choose another user for the file!
**Example:**
```bash
kuser@pleasejustwork:~/9_temp/tcpdump$ sudo tcpdump -n -w traffic.pcap -C 1 -Z kuser
tcpdump: listening on wlp9s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
^C3040 packets captured
3040 packets received by filter
0 packets dropped by kernel
kuser@pleasejustwork:~/9_temp/tcpdump$ ls -l
total 3096
-rw-r--r-- 1 kuser kuser 1000460 Jun 11 21:18 traffic.pcap
-rw-r--r-- 1 kuser kuser 1000708 Jun 11 21:18 traffic.pcap1
-rw-r--r-- 1 kuser kuser 1000574 Jun 11 21:18 traffic.pcap2
-rw-r--r-- 1 kuser kuser 157610 Jun 11 21:18 traffic.pcap3
```
If you want to **limit the number of files**, you can create a **rotating buffer** with `-W NUMBER`. If the chosen number of files is reached, `tcpdump` starts to overwrite the first file again. It must be combined with the `-C` option.
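As a sketch, a rotating capture with ten files of roughly 100 MB each - file name and user are just examples - could look like this:
```bash
# Rotating buffer: 10 files of ~100 MB each, owned by user kuser
sudo tcpdump -n -w ring.pcap -C 100 -W 10 -Z kuser
```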
# Reading PCAPs <a href="#reading-pcap" id="reading-pcap">#</a>
As mentioned before, `tcpdump` saves everything raw in binary in a file that is not human readable. You can read this file again, **make it human readable again**, and **apply new filters again**.
Use the `-r` flag to read the pcap file:
: `-r traffic.pcap` # reads the pcap file from the current directory
: `-r -` # reads the raw data from `stdin`
Almost all filters that are mentioned in this post can be applied to already existing pcap files.
---
I'll keep this blog post up to date. Feedback and tips are welcome.
---
# Create tmux layouts using bash scripts
I am a big fan of tmux, but there is - without adding plugins - no way to save and restore sessions or layouts. For this reason, I've decided to work on a bash script that restores and builds my favorite tmux layout at the start of the session. I've been testing it for the last couple of days, and it works well and has saved me a ton of time.
Everyone has different requirements. I explain the commands so you can rebuild your favorite layout or session. And please remember that there are multiple ways to solve this.
The demo layout can be found at the bottom of <a href="#demo" >this post.</a>
# The start
I wanted to create a new tmux session for every project I'd start working on. Let us begin with the following framework.
**`tmux-project-x.sh`**
```bash
#!/bin/bash
session_name='project-x'
tmux has-session -t $session_name
if [ $? != 0 ]
then
### Create session ###
tmux new-session -ds $session_name
fi
tmux attach -t $session_name
```
In short, we define the session's name as a variable, check if this session exists, and if it does not, we create it. In the last step, we try to attach the tmux session.
Within this script, we can create our layout. Don't forget to make the script executable with `sudo chmod +x tmux-project-x.sh`.
# Before we start - choose the right window and panel
The creation of the script involves a lot of trial and error. I hope I can provide you with some tips to make things easier.
**Side note:** just in case, here is a link to the [tmux primer](https://ittavern.com/getting-started-with-tmux/).
#### Session and window overview <a href="#overview" id="overview">#</a>
Get an overview of all tmux sessions, windows, and panes by pressing the `Prefix` + `w` shortcut. This allows you to get a quick overview and move quickly within your tmux environment.
To get a quick overview of the panes of the current window, press `Prefix` + `q` to get the panes numbered like this:
![tmux-overview](/images/blog/tmux-primer-1.png)
#### Syntax of the tmux commands <a href="#syntax" id="syntax">#</a>
Just to provide you with a quick explanation of the syntax of the following commands.
Run a command in a specific pane:
: `tmux send-keys -t $session_name:1.0 'cd /random/path/' C-m C-l`
: `tmux send-keys` *# this is the tmux option, in this case it sends keystrokes*
: `-t $session_name` *# selects the session, in this case, the provided variable in this script*
: `:1.0` *# chooses the window and pane, in this case, window number 1 and pane number 0*
: `'cd /random/path/'` *# the keystrokes, in this case a simple command - at this point, it would not run the command*
: `C-m` *# which means `CTRL` + `m`, the same as Enter - this runs the previously sent command*
: `C-l` *# send the final keystroke, which means `CTRL` + `l` - to clear the current command line - looks way cleaner, but is optional*
Just to give you an idea of how a simple command can look and what everything means.
# The essential commands <a href="#commands" id="commands">#</a>
Create a new window:
: `tmux new-window -t $session_name:1`
Rename a specific window:
: `tmux rename-window -t $session_name:1 GENERAL`
Select a specific window:
: `tmux select-window -t $session_name:1`
Select a specific pane on the current window:
: `tmux select-pane -t 0`
#### Spliting windows <a href="#splitting" id="splitting">#</a>
Split current pane horizontally:
: `tmux split-window -h -p 50 -t $session_name:0`
: `-p 50` *# means 50% of the way*
```
┌────────────────┐      ┌───────┬────────┐
│                │      │       │        │
│                │      │       │        │
│                ├───►  │       │        │
│                │      │       │        │
│                │      │       │        │
└────────────────┘      └───────┴────────┘
```
Split current pane vertically:
: `tmux split-window -v -p 50 -t $session_name:0`
: `-p 50` *# means 50% of the way*
```
┌────────────────┐      ┌────────────────┐
│                │      │                │
│                │      │                │
│                ├───►  ├────────────────┤
│                │      │                │
│                │      │                │
└────────────────┘      └────────────────┘
```
---
**Side note:** Since there is a lot of trial and error involved, you can kill a tmux session with `Prefix` + `:kill-session`.
# Send keystrokes to pane <a href="#key-stroke" id="key-stroke">#</a>
There are many things you could do with this one. Toy around and see what works for you: changing directories, creating temp files, opening specific files, running commands, starting scripts or programs, and so on.
Some examples:
: `tmux send-keys -t $session_name:1.2 'cd /random/path/' C-m` *# change to another directory*
: `tmux send-keys -t $session_name:0.2 '~/random-script/'` *# without `C-m` the command will not be executed and only sent to the terminal*
: `tmux send-keys -t $session_name:2.2 'htop' C-m` *# start `htop` in the third window (starts with 0) and pane number 2*
# Design / customization <a href="#design" id="design">#</a>
You can use color names like `red` or hex color codes like `#ff1900`.
Background color of status bar:
: `tmux set -g status-style bg=red`
: `tmux set -g status-style bg=#dc23a6`
Set default border color of pane:
: `tmux set -g pane-border-style fg=magenta`
Set color of currently active pane in the window:
: `tmux set -g pane-active-border-style "bg=default fg=magenta"`
Set background color of panes:
: `tmux set -g window-style "fg=#E4E2E1 bg=#332E33"`
Set the background color of the currently active pane:
: `tmux set -g window-active-style "fg=white bg=black"`
# Demo <a href="#demo" id="demo">#</a>
![tmux-demo-layout](/images/blog/tmux-demo-layout.png)
**`demo.sh`**
```bash
#!/bin/bash
session_name='demo'
tmux has-session -t $session_name
if [ $? != 0 ]
then
# SESSION
cd ~
tmux new-session -ds $session_name
tmux set-window-option -t $session_name allow-rename off
tmux rename-window -t $session_name:0 random-overview
# GENERAL OPTIONS / DESIGN
tmux set -g status-style bg=red
tmux set -g status-style bg=red
tmux set -g pane-border-style "fg=white"
tmux set -g pane-active-border-style "bg=default fg=red"
tmux set -g window-style "fg=#E4E2E1 bg=#332E33"
tmux set -g window-active-style "fg=white bg=black"
# FIRST WINDOW
tmux send-keys -t $session_name:0.0 'htop' C-m
tmux split-window -v -p 25 -t $session_name:0
tmux send-keys -t $session_name:0.1 'echo hello' C-m
# SECOND WINDOW
tmux new-window -t $session_name:1
tmux set-window-option -t $session_name:1 allow-rename off
tmux rename-window -t $session_name:1 another-overview
tmux split-window -h -p 50 -t $session_name:1
tmux split-window -v -p 50 -t $session_name:1
tmux split-window -v -p 50 -t $session_name:1
tmux select-pane -t 0
tmux split-window -v -p 50 -t $session_name:1
# SELECT DEFAULT PANE AFTER OPENING
tmux select-window -t $session_name:1
tmux select-pane -t 0
fi
tmux attach -t $session_name
```
# Conclusion <a href="#conclusion" id="conclusion">#</a>
As I mentioned before, there are multiple ways to do it. From the config file to random plugins. I am still using it since it provides me with a lot of flexibility and per-project customizability. If you have any questions or tips, feel free to reach out.
---
# Troubleshooting - Asking The Right Questions
# Asking the right question
In this post, I want to present some simple questions on how to start any troubleshooting session. The **main goal** is to gather enough information to narrow down the root cause of the problem, let you grasp the impact of this incident and set a priority, and decide what the next steps of the actual troubleshooting work will look like.
It should be clear that not all questions are needed for every session, but it can give you some ideas, and you can modify them to your needs. I bet I forgot some essential questions, so please let me know, and I'd be happy to add them to the post.
The primary motivation for this post is work-related. I've just celebrated the 100th ticket containing nothing more than `"It doesn't work"` that was forwarded to me, and I decided to write this post as a reference for the minimum of information any ticket should contain before it gets sent to the next level (besides restarting the device or checking DNS).
# 'W'-Questions
From my experience, you can sort many questions into categories of 'w'-questions:
- What?
- Where?
- Who?
- When?
To provide you with a quick example with one follow-up question each.
```markdown
What is the issue?
Do you see any error messages?
From where are you working right now?
Have you encountered this issue at other locations too?
Who is this issue affecting besides you?
Only the colleagues in the office or those working from home too?
When did it start?
Does the issue occur consistently or intermittently?
```
This is a basic example, but it can already provide enough information for your next steps. As mentioned before, there won't be a perfect template, the order of questions can be changed at any point, and the questions should be based on the already known information. Quick example: if the whole location can't access an internal service all of a sudden, you might not need to ask for the current version of the application on a single device.
The following questions are examples and can serve as a basis for your pool of questions. I've decided not to explain them since they are fairly self-explanatory.
**Side note:** Some questions could be in multiple categories, but I've listed them once.
# WHAT / ISSUE & IMPACT
- What is the issue? What are you trying to do? What are you trying to accomplish?
- Has it ever worked before? Is this a new issue for you?
- Are there any error messages? Could you please provide us with a screenshot?
- Is it reproducible, or is it random?
- How does the issue affect your work? Can you continue at all? Is there a workaround available?
# WHERE / ENVIRONMENT
- Where are you working right now? Home office, location, office, etc.
- Has anything changed before the issue? Location? Network Updates? Hardware/Software?
- How are your devices connected to the network? Wifi/Cable? Guest network? Hotspot?
# WHEN / TIMELINE
- Since when has the problem existed? Is it the first occurrence?
- Does the issue occur consistently or intermittently?
- Does the issue occur sporadically or at a specific time?
**Side note:** Timezones are your friend, please remember them. Please.
# WHO / IMPACT
- Is this issue affecting anyone else besides you? Are those involved at the same or different locations?
# Remarks
- Try to **avoid assumptions** and instead ask if you are not sure. This can save you some headaches later on.
- **Document along** the session and share your findings with the team.
---
# URL explained - The Fundamentals
In this post, I'll try to explain the syntax and use of a URL and the difference between URI, URL, URN, and URC.
# URL explained <a href="#url-explained" id="url-explained">#</a>
![url-explained](/images/blog/url-explained.png)
This will be our example for this post:
`https://username:password@www.example.com:443/path/to/page.html?query=file#fragment`
The format of this URL is built upon the URI generic syntax that looks like this [2]:
`URI = scheme ":" ["//" authority] path ["?" query] ["#" fragment]`
Note that the 'authority' can have the following syntax:
`authority = [userinfo "@"] host [":" port]`
More information follows in the sections below.
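Mapped onto the example URL from above, the individual parts look like this:
```
scheme   : https
userinfo : username:password
host     : www.example.com
port     : 443
path     : /path/to/page.html
query    : query=file
fragment : fragment
```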
## URI Scheme <a href="#scheme" id="scheme">#</a>
Always required, but often hidden by the application - most commonly in browsers, where `http` or `https` is the default and implied.
`https://`
`ssh://`
`tel://`
Also commonly called 'protocol', which is an indicator of how the resource can be accessed.
The official register of URI scheme names is maintained by IANA at [http://www.iana.org/assignments/uri-schemes](http://www.iana.org/assignments/uri-schemes). IANA shows registered schemes as well as reserved schemes that were never formally registered [1].
There is a large - but now retired - list of [Public registered and un-registered Schemes](https://www.w3.org/Addressing/schemes) by Dan Connolly. In addition, a large and unknown number of private schemes are used internally within companies.
[RFC4395](https://www.rfc-editor.org/rfc/rfc4395.txt) explains the registration procedures and provides some guidelines. The old versions were [RFC2717](https://www.ietf.org/rfc/rfc2717.txt) for the registrations and [RFC2718](https://www.ietf.org/rfc/rfc2718.txt) for the guidelines.
As a side note, the double slashes were a choice of [Tim Berners-Lee, which he regrets since they have no other purpose](https://archive.nytimes.com/bits.blogs.nytimes.com/2009/10/12/the-webs-inventor-regrets-one-small-thing/?partner=rss&emc=rss).
## UserInfo <a href="#userinfo" id="userinfo">#</a>
The UserInfo is optional, and often enough gets discarded by applications. Most browsers will ignore that information or warn you since it is a security risk.
An example where it is used normally:
`ssh://username@example.com:2222`
## Host <a href="#host" id="host">#</a>
This is the host section. It can be the **same system, a hostname, an IP, or a domain**.
Examples:
: `ldap://[2001:db8::10]/c=GB?objectClass?one` # it is required to put the IPv6 address into square brackets
: `https://ittavern.com/url-explained-the-fundamentals/`
: `vnc://10.10.20.57:5900`
#### Domains <a href="#domains" id="domains">#</a>
Just a short digression into the world of domains.
Example:
: `www.example.com` # **full domain name**
: `www` # **subdomain**
: `example` # second-level domain (**SLD**)
: `com` # top-level domain (**TLD**), also called 'domain suffix' or 'domain extension'
: `.` # reference: **root zone**, won't go into detail
A second-level domain may only contain letters (a-z), numbers (0-9), and dashes ('-'), but must not start with a dash. Furthermore, domains are case-insensitive, which means `ITTAVERN.COM` is the same as `ittavern.com`. The max length of the second-level domain is 63 characters. Subdomains are subject to the same rules, but can additionally contain underscores (`_`) - it is not recommended, but some services require it, for example, some Microsoft SRV DNS records like `_sipfederationtls._tcp.example.com`. Browsers can accept it, but there is no guarantee.
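You can verify the case-insensitivity yourself with any DNS lookup tool, for example with `dig`:
```bash
# both queries return the same record because DNS ignores the case of the name
dig +short ittavern.com
dig +short ITTAVERN.COM
```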
Each string between the dots is called **label**, and the maximum length of one label is 63 characters. The **max length of a full domain name is 253 characters**, including the dots.
There are currently almost 1500 TLDs registered. **1470 TLDs** at the time of the creation of this post, to be more specific.
```bash
kuser@pleasejustwork:~$ curl https://data.iana.org/TLD/tlds-alpha-by-domain.txt | sed '1d' | wc -l
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9828 100 9828 0 0 11506 0 --:--:-- --:--:-- --:--:-- 11494
1470
```
The **list of all TLDs** can be found in the [docs of IANA](https://data.iana.org/TLD/tlds-alpha-by-domain.txt).
There are two kinds of TLDs - **Generic top-level domain (gTLD)** like .com .info .net and **Country-code top-level domain (ccTLD)** like .nl .de .us and some **combinations** like .co.uk or .com.au.
## Port <a href="#port" id="port">#</a>
Many schemes have a default port number, allowing most programs to hide it to avoid confusing their users: `http` uses port 80, `https` port 443, `ssh` port 22, and so on. The same applies to the transport protocol, for example, `TCP` or `UDP`. Ports are required, but most applications hide them when the default port is used - browsers, for example, hide `:443` for `https` but will show a non-default port like `:10443`.
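A quick way to see this in action is `curl`, which accepts both forms; the explicit `:443` and the implied default lead to the same request (the target here is just a placeholder site):
```bash
# explicit default port
curl -sI https://example.com:443/ -o /dev/null -w 'explicit port: %{http_code}\n'
# implied default port
curl -sI https://example.com/ -o /dev/null -w 'implied port:  %{http_code}\n'
```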
## Path <a href="#path" id="path">#</a>
The path is a hierarchical naming system of **subdirectories or subfolders and files**; it reads from left to right and is required. Unlike domains, **the path is case-sensitive**!
Examples:
: `https://ittavern.com/images/logo.png`
: `https://ittavern.com/random-post/`
: `https://duckduckgo.com` # the path is missing, but implies the root directory `/`
As a side note, the first example leads to an image, and in the second example, you might have noticed that the file is missing. The browser will open the `random-post` subfolder, and the web server is configured to serve a pre-defined file to the browser. Those files are usually called `index.html`, but that can vary from setup to setup. This is also known as 'Pretty URLs'.
## Queries <a href="#queries" id="queries">#</a>
Carries optional parameters that can be used on the server or client side. Common use cases are referrer information, variables, option settings, and so on. The delimiters between parameters are `&` and `;`. A command-line example follows the list of examples below.
Examples:
: `https://www.twitch.tv/randomstream1231?referrer=raid` *# on Twitch it shows where the viewer is coming from*
: `https://youtu.be/dQw4w9WgXcQ?t=4` *# on Youtube, it tells the client where to start the video*
: `https://youtu.be/dQw4w9WgXcQ?list=PLi9drqP&t=9` # multiple parameters containing the playlist and timestamp
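If you want to build such a query string on the command line, `curl` can assemble and encode the parameters for you - a small sketch using the YouTube example from above:
```bash
# -G appends the parameters as a query string instead of sending them in the body,
# resulting in a request to https://youtu.be/dQw4w9WgXcQ?list=PLi9drqP&t=9
curl -sG "https://youtu.be/dQw4w9WgXcQ" --data-urlencode "list=PLi9drqP" --data-urlencode "t=9" -o /dev/null
```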
## Fragments <a href="#fragments" id="fragments">#</a>
Fragments are optional references for a specific location within a resource. For example, HTML anchors like <a href="#fragments">this</a> in HTML files.
`https://ittavern.com/url-explained-the-fundamentals/#fragments`
#### Difference between Absolute and Relative URL <a href="#relative-url" id="relative-url">#</a>
Until now, every URL was an absolute URL. Relative URLs are often enough just the `Path` and require a reference or base URL to work.
Examples:
: `/de-DE/same-page-different-lang`
: `/img/logo.png`
# Difference between URI and URL and URN and URC <a href="#difference-uri" id="difference-uri">#</a>
URI stands for Uniform Resource Identifier and is a unique string of characters used by web technologies to identify anything. URIs may be used to identify anything logical or physical, from places and names to concepts and information. [2]
URIs are the superset of URLs (Uniform Resource Locator), URNs (Uniform Resource Name), and URCs (Uniform Resource Characteristic). For example, every URL is a URI, but not every URI is a URL. That being said, in practice, URI and URL are often used interchangeably.
The different subsets have different tasks: a URN identifies an item, a URL lets you know how to locate and access it, and a URC points to specific metadata about it. Examples can be found in the specific sections.
#### URL
URL stands for Uniform Resource Locator and specifies where an identified resource is available and the mechanism for accessing it. Further details can be found above.
#### URN <a href="#urn" id="urn">#</a>
A URN identifies a resource by a unique and persistent name without specifying a location.
Examples:
: `urn:isbn:n-nn-nnnnnn-n` *# to identify a book by its ISBN number*
: `urn:uuid:39ab000da-3f9a-abe2-1337-123456789abc` *# globally unique identifier*
: `urn:publishing:book` *# an XML namespace that identifies the document as a type of book*
**Side note**: `isbn` - as in the first example - is a URN namespace identifier (NID), not a URN scheme or a URI scheme [1]. Some people call the `NID` (see the following list) a URI scheme, equivalent to the URL, which is not correct.
Every URN should have the following structure:
: **URN** *# scheme specification prefix.*
: **NID** *# namespace identifier (letters, digits, dashes)*
: **NSS** *# namespace-specific string that identifies the resource (can contain ASCII codes, digits, punctuation marks and special characters)*
#### URC <a href="#urc" id="urc">#</a>
URC stands for Uniform Resource Characteristic or Uniform Resource Citation. According to [Wikipedia](https://en.wikipedia.org/wiki/Uniform_Resource_Characteristic), the former is the currently used name.
A URC points to the metadata of a resource rather than the resource itself. A quick example would be a URC that points to the source code of a homepage:
`view-source:http://example.com/`
That said, there was never a final standard produced, and URCs were never widely adopted.
---
# References <a href="#reference" id="reference">#</a>
- https://cv.jeyrey.net/img?equivocal-urls
- https://developer.mozilla.org/en-US/docs/Learn/Common_questions/Web_mechanics/What_is_a_URL
- https://stackoverflow.com/questions/4913343/what-is-the-difference-between-uri-url-and-urn
- http://www.ietf.org/rfc/rfc3986.txt
- [1] https://www.w3.org/TR/uri-clarification/
- [2] https://en.wikipedia.org/wiki/Uniform_Resource_Identifier
---
# Getting started with netcat on Linux with examples
In this blog post, I'll focus on the basics of netcat. More advanced options and scenarios will follow in separate posts at some point.
Netcat is available on almost any Linux host and is easy to use. It is an excellent tool for troubleshooting network issues or gathering information and a great addition to any tool portfolio.
# Basics of netcat <a href="#basics" id="basics">#</a>
Netcat and nc can be used interchangeably. I've decided to use `nc` for this blog post. On RHEL, it is often called ncat and is part of the nmap package.
The basic syntax is:
: `nc [ options ] host port`
: `nc 10.20.10.7 22`
: Netcat uses TCP by default, which would be a simple TCP connection comparable to telnet.
: You can close the connection with `CTRL` + `d` or `c`.
Get some help:
: `nc --help`
: `man nc`
Don't resolve hostnames:
: `-n`
Get more verbose output:
: `-v`
Use a specific internet protocol:
: `-4` *# IPv4 only*
: `-6` *# IPv6 only*
Use UDP instead of TCP:
: `-u`
: *I don't focus on UDP in this post, but I might add more related content in the future*
#### Interfaces & source port <a href="#interface" id="interface">#</a>
Sometimes it is necessary to specify an interface since hosts often have more than one. You can choose the source/interface IP on both sides with the `-s` flag and the source port on the client with the `-p` flag.
Example as a client:
: `nc -p 10101 -s 10.20.10.8 10.20.10.7 9999`
: `-p 10101` *# use the source port `10101`*
: `-s 10.20.10.8` *# use the source IP to connect to the server*
: `10.20.10.7` *# IP of the server*
: `9999` *# destination port of the server*
#### Destination Ports <a href="#ports" id="ports">#</a>
You can choose multiple destination ports for most Netcat functions on the client side.
Range of ports:
: `1-90`
Multiple ports:
: `80 443`
: *separated by a space*
Examples of service names:
: `http`
: `ssh`
: `smtp`
Combination:
: `ssh 2222 10022-10080`
# Simple port scan <a href="#port-scan" id="port-scan">#</a>
There are better options like nmap, but often it is all you need.
Example:
: `nc -v -z example.com 20-23`
: `-z` *# scan instead of initiating a connection*
: `-v` *# get a more verbose output*
**Output**
```markdown
nc -vz 10.20.10.8 20-23
nc: connect to 10.20.10.8 port 20 (tcp) failed: Connection refused
nc: connect to 10.20.10.8 port 21 (tcp) failed: Connection refused
Connection to 10.20.10.8 22 port [tcp/ssh] succeeded!
nc: connect to 10.20.10.8 port 23 (tcp) failed: Connection refused
```
**Side note:** The results are being sent to standard error, and not standard out. If you want to filter the results with `grep`, you need to redirect standard error to standard out with `2>&1` like in the following example:
```markdown
nc -vz 10.20.10.8 20-23 2>&1 | grep succeeded
Connection to 10.20.10.8 22 port [tcp/ssh] succeeded!
```
#### More information about the running service <a href="#service" id="service">#</a>
You can get more information about the running service with the following command:
Information about the SSH Service
```markdown
kuser@pleasejustwork: echo "QUIT" | nc 10.20.10.8 22
SSH-2.0-OpenSSH_8.0
```
Information about the Web server
```markdown
kuser@pleasejustwork: echo "QUIT" | nc ittavern.com 443
HTTP/1.1 400 Bad Request
Server: nginx
Date: Sun, 23 Jul 2023 09:18:01 GMT
Content-Type: text/html
Content-Length: 150
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
```
# Simple chat via netcat <a href="#chat" id="chat">#</a>
Two Netcat instances can connect to each other in a server-client relationship to let you transfer text and files in both directions. Which host is the server and which is the client is only relevant for the initial connection.
You can use the `-l` flag to let Netcat listen on a specific port like this:
Server:
: `nc -l 9999`
Client:
: `nc 10.20.10.7 9999`
**Side note:** Netcat will listen on all interfaces by default. To limit it to a single one, use the `-s` flag with the desired IP as a parameter. You'll then need to add the `-p` flag in front of the port, or it will run into a syntax error - a side note within the side note.
There won't be any notifications on either side, but after successfully connecting to the server, you can send messages to each host. You can close the connection with `CTRL` + `d` or `c`.
When the connection is closed, the server will stop listening by default. You can use `-k` to keep the server listening. There can be only one active TCP connection per server and port.
**Side note:** Non-root users are by default limited to ports above 1023 for security reasons, and all communication is unencrypted by default!
# File Transfer <a href="#file-transfer" id="file-transfer">#</a>
With Netcat, we are not limited to chat messages and can use it to **transfer files** in both directions. Just as a reminder: the transfer will be **unencrypted** by default!
Server / receiver:
: `nc -l 9999 > random-config.txt`
Client / sender:
: `nc -N 10.20.10.8 9999 < random-config.txt`
: `-N` *# shuts down the network socket after the transfer*
In this example, we would transfer the file `random-config.txt` from the sender to the receiver in the current directory. The files don't need to have the same name.
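Netcat itself does not verify the transferred data, so it can make sense to compare a checksum on both sides after the transfer - a simple sketch:
```markdown
# run on the sender and on the receiver; the two hashes should be identical
sha256sum random-config.txt
```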
# Conclusion <a href="#conclusion" id="conclusion">#</a>
Netcat is one of my most used tools for my day-to-day work as it is easy to use and installed on almost any Linux host. I've provided you with the basics of Netcat, so you can add it to your portfolio of tools.
In the future, I plan to provide you with more advanced Netcat functions like a simple web server, bandwidth check, TCP proxy, encryption, and reverse shell.
---
# Getting started with Fail2Ban on Linux
I want to show you how to get started with Fail2Ban to keep your Linux servers more secure. For this blog post, I've used **Ubuntu 22.04 LTS** as a reference and will use it to secure my **SSH service** with **iptables** as the firewall. I assume that you have already installed Fail2Ban on your system.
# General
Every internet-facing service will be attacked, and brute-forcing login attempts to gain access is one of the most common attacks you will encounter. Fail2Ban will follow certain rules and place any suspicious IP on the deny list of the firewall to minimize the risks.
In short:
: Fail2Ban checks logs > sees suspicious logs in reference to set rules > adds suspicious IP to the deny list of the firewall
: (jails > logs > filters > actions > configuration > firewall)
# Running Fail2Ban with systemd <a href="#running" id="running">#</a>
Make sure systemd starts Fail2Ban automatically:
: `sudo systemctl enable fail2ban`
If you have never worked with Fail2Ban, it will probably be deactivated. You can check the status with:
`sudo systemctl status fail2ban`
Result:
```bash
○ fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:fail2ban(1)
```
It is currently not running!
You can start the Fail2Ban service with:
: `sudo systemctl start fail2ban`
After changes to the configuration, you have to restart the service with:
: `sudo systemctl restart fail2ban`
# Configuration file <a href="#configuration-file" id="configuration-file">#</a>
The **default configuration file** can be found in the `/etc/fail2ban/` directory and is called `jail.conf`. You can modify options within this file, create a new file called `jail.local` or create a new `*.conf` file in the `jail.d/` directory.
For this blog post, we'll create a **new configuration file** in the directory `jail.d/` called `custom.conf`:
`sudo cp jail.conf ./jail.d/custom.conf`
The file itself states that you should **NOT customize** `jail.conf` and should rather use the above-mentioned alternatives, since the default file might get overwritten by upcoming updates.
```bash
# Changes: in most cases, you should not modify this
# file but provide customizations in jail.local file,
# or separate .conf files under jail.d/ directory, e.g.:
#
# YOU SHOULD NOT MODIFY THIS FILE.
#
# It will probably be overwritten or improved in a distribution update.
#
# Provide customizations in a jail.local file or a jail.d/customisation.local.
# For example to change the default bantime for all jails and to enable the
# ssh-iptables jail, the following (uncommented) would appear in the .local file.
# See man 5 jail.conf for details.
```
As you can see in this example, lines that begin with a `#` are ignored by Fail2Ban and won't change any configuration. This can be used for **comments** or for **disabling options**.
Important: you have to restart Fail2Ban after changes. If you run Fail2Ban with systemd, you can restart it with `sudo systemctl restart fail2ban`.
#### Configuration syntax <a href="#configuration-syntax" id="configuration-syntax">#</a>
**Example:**
```bash
[DEFAULT]
# "bantime" is the number of seconds that a host is banned.
bantime = 10m
# A host is banned if it has generated "maxretry" during the last "findtime"
# seconds.
findtime = 10m
# "maxretry" is the number of failures before a host gets banned.
maxretry = 5
#
# JAILS
#
[sshd]
#mode = normal
port = ssh
logpath = %(sshd_log)s
backend = %(sshd_backend)s
```
So, at the beginning of the configuration file, you can find the `[DEFAULT]` settings. Those apply to all `JAILS` unless they are overwritten within a jail's own configuration. Those `JAILS` are the services whose logs Fail2Ban checks. In this example, the jail is the SSH server.
Important: you have to enable a jail by adding the following line:
: `enabled = true`
: that said, `sshd` is enabled by default
A jail requires a configuration segment, an action, and a filter, but that is beyond the scope of this blog post.
# Configurations <a href="#configuration" id="configuration">#</a>
I'll focus on the basics since Fail2Ban provides you with many options. You can check out the manual or default configuration for more options.
Adjust the length of the ban:
: `bantime = 10m` # *10 minutes is the default*
: you can change it to whatever time you want.
Rules for Fail2Ban when to set an IP on the ban list:
: `findtime = 10m`
: `maxretry = 5`
: this means if Fail2Ban finds 5 unsuccessful attempts to access a service within the last 10 minutes, the source host will be placed on the ban list for the duration of the configured `bantime`.
Exclude certain hosts from Fail2Ban:
: `ignoreip = 127.0.0.1/8 ::1`
: You can use IP addresses, networks with CIDR notation, or DNS hosts.
: By default, the loopback addresses are already excluded. You can add your trusted hosts.
: Multiple hosts or networks can be separated by using space and/or commas.
You can find more configuration options with `man jail.conf`.
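Putting the options from this section together, a minimal `jail.d/custom.conf` could look like the following sketch - the values and the trusted network are just examples, so adjust them to your environment:
```bash
[DEFAULT]
# ban for 1 hour after 5 failures within 10 minutes
bantime  = 1h
findtime = 10m
maxretry = 5
# never ban localhost or the trusted network 10.20.10.0/24 (placeholder)
ignoreip = 127.0.0.1/8 ::1 10.20.10.0/24

[sshd]
enabled = true
```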
Keep in mind to **restart Fail2Ban** after your changes!
#### Log level
You can set the verbosity of the logs with `loglevel` in the configuration.
Example:
: `loglevel = INFO`
: the default level is `INFO`
: your options are: ` CRITICAL, ERROR, WARNING, NOTICE, INFO, DEBUG, TRACEDEBUG and HEAVYDEBUG`
# Status & Logging <a href="#status" id="status">#</a>
After starting the service, you can check the current status with:
: `sudo fail2ban-client status`
```bash
sudo fail2ban-client status
Status
|- Number of jails: 1
`- Jail list: sshd
```
This is a simple way to check what jails are active.
To get more information about one specific jail, simply add the name at the end like this:
```bash
sudo fail2ban-client status sshd
Status for the jail: sshd
|- Filter
| |- Currently failed: 0
| |- Total failed: 0
| `- File list: /var/log/auth.log
`- Actions
|- Currently banned: 0
|- Total banned: 0
`- Banned IP list:
```
#### Logs <a href="#logs" id="logs">#</a>
The Fail2Ban logs can be found at `/var/log/fail2ban.log` and look like the following example:
```bash
sudo tail /var/log/fail2ban.log
2023-08-04 09:11:46,772 fail2ban.filter [2144579]: INFO Added logfile: '/var/log/auth.log' (pos = 0, hash = 74ba42c53c7cf9f04857abd99b024ee7018ac4be)
2023-08-04 09:11:46,777 fail2ban.jail [2144579]: INFO Jail 'sshd' started
2023-08-04 09:16:44,756 fail2ban.filter [2144579]: INFO [sshd] Found 170.64.141.213 - 2023-08-04 09:16:44
2023-08-04 09:21:53,547 fail2ban.filter [2144579]: INFO [sshd] Found 170.64.141.213 - 2023-08-04 09:21:53
2023-08-04 09:27:03,055 fail2ban.filter [2144579]: INFO [sshd] Found 170.64.141.213 - 2023-08-04 09:27:03
```
#### Check what hosts are banned <a href="#check-banned" id="check-banned">#</a>
There are multiple ways to do so.
As previously shown, you can see the banned hosts of a certain jail.
```bash
sudo fail2ban-client status perm
Status for the jail: perm
|- Filter
| |- Currently failed: 0
| |- Total failed: 0
| `- File list:
`- Actions
|- Currently banned: 1
|- Total banned: 1
`- Banned IP list: 170.64.141.213
```
---
Another method would be to **show all banned hosts in all jails**.
```bash
sudo fail2ban-client banned
[{'sshd': []}, {'perm': ['170.64.141.213']}]
```
If you want to search for a **single IP**, simply add it after 'banned' and only the jails containing this IP will be shown.
---
The third way would be to check it on the firewall itself.
```bash
sudo iptables -nL
[...]
Chain f2b-sshd (1 references)
target prot opt source destination
REJECT all -- 170.64.141.213 0.0.0.0/0 reject-with icmp-port-unreachable
RETURN all -- 0.0.0.0/0 0.0.0.0/0
```
I bet there are way more ways to show your banned hosts, but those should be good enough for now.
# Banning or unbanning hosts manually
Sometimes you have to work on Fail2Ban manually.
#### Manually unbanning hosts <a href="#manual-unban" id="manual-unban">#</a>
It will most likely happen at some point that Fail2Ban bans one of your own hosts. To **remove a host from the deny list**, just use the following command:
`sudo fail2ban-client set sshd unbanip 170.64.141.213`
You don't need to restart Fail2Ban after this command.
#### Manually banning hosts <a href="#manual-ban" id="manual-ban">#</a>
There might be a need to place hosts on a ban list. You can do it with the following command:
```bash
sudo fail2ban-client -vvv set sshd banip 170.64.141.213
+ 61 7F1DE7AAE1C0 fail2ban.configreader INFO Loading configs for fail2ban under /etc/fail2ban
+ 61 7F1DE7AAE1C0 fail2ban.configreader DEBUG Reading configs for fail2ban under /etc/fail2ban
+ 62 7F1DE7AAE1C0 fail2ban.configreader DEBUG Reading config files: /etc/fail2ban/fail2ban.conf
+ 63 7F1DE7AAE1C0 fail2ban.configparserinc INFO Loading files: ['/etc/fail2ban/fail2ban.conf']
+ 63 7F1DE7AAE1C0 fail2ban.configparserinc TRACE Reading file: /etc/fail2ban/fail2ban.conf
+ 64 7F1DE7AAE1C0 fail2ban.configparserinc INFO Loading files: ['/etc/fail2ban/fail2ban.conf']
+ 64 7F1DE7AAE1C0 fail2ban.configparserinc TRACE Shared file: /etc/fail2ban/fail2ban.conf
+ 65 7F1DE7AAE1C0 fail2ban INFO Using socket file /var/run/fail2ban/fail2ban.sock
+ 65 7F1DE7AAE1C0 fail2ban INFO Using pid file /var/run/fail2ban/fail2ban.pid, [INFO] logging to /var/log/fail2ban.log
+ 65 7F1DE7AAE1C0 fail2ban HEAVY CMD: ['set', 'sshd', 'banip', '170.64.141.213']
+ 151 7F1DE7AAE1C0 fail2ban HEAVY OK : 1
+ 151 7F1DE7AAE1C0 fail2ban.beautifier HEAVY Beautify 1 with ['set', 'sshd', 'banip', '170.64.141.213']
1
+ 151 7F1DE7AAE1C0 fail2ban DEBUG Exit with code 0
```
In this example, we use `-vvv` for more verbose output and place a random IP on the ban list for our `sshd` service.
You can check the ban list with `sudo fail2ban-client status sshd` or `sudo iptables -nL`.
**Important**: the length of the ban will depend on the `bantime` configured for the jail.
#### Permanently ban hosts <a href="#permanent-ban" id="permanent-ban">#</a>
From what I know, you can't ban hosts permanently. You could create a new jail with the same configurations as a reference jail and change the `bantime` to let's say, `999y` - I'd say 999 years is more or less permanent.
```bash
[perm]
enabled = true
port = ssh
filter = sshd
action = iptables-multiport[name=sshd, port="ssh", protocol=tcp]
bantime = 999y
```
You now can use `sudo fail2ban-client -vvv set perm banip 170.64.141.213` to ban a host for a long time.
# Testing your configuration <a href="#testing" id="testing">#</a>
That is probably the easiest part. Make sure you have access to the server, open the logs, and try to connect with the wrong credentials.
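A rough sketch of how such a test could look (the user and IP are placeholders):
```bash
# terminal 1 - on the server: watch Fail2Ban react in real time
sudo tail -f /var/log/fail2ban.log

# terminal 2 - from another host: produce a few failed logins
# by entering a wrong password several times
ssh wronguser@10.20.10.8
```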
**Important**: make sure that you don't lose access to your remote machine!
# Conclusion
This blog post shows you how to get started with Fail2Ban. There are some more specific topics that I am going to write about later, like filters and actions to customize it even more for custom services or sending emails out as soon as an IP gets banned.
---
# Getting started with dig - DNS troubleshooting
Please note that this blog post is not an in-depth guide on DNS and dig. It will provide you with the basics; more advanced topics are out of scope. Some of those advanced topics are DNS over HTTPS/TLS, the various ways to format the results, DNSSEC, and so on. I'll go into more detail in separate posts.
# Basic usage
Dig stands for 'Domain Information Groper' and is a great tool to troubleshoot DNS issues or get information about certain domains. It is an excellent alternative to `nslookup` and `host` and generally presents results that are more script-friendly.
The typical syntax is the following:
: `dig @server name type`
: `@server` - is the IP or name of the name server you want to handle the request. It is optional and if it is not specified, dig checks `/etc/resolv.conf`.
: `name` - is the host or domain name for the request
: `type` - the DNS type that is requested. It is optional and if it is not specified, dig will use the `A` record.
Basic example with line numbers added:
```bash
kuser@pleasejustwork:~$ dig ittavern.com
1 ; <<>> DiG 9.18.12-0ubuntu0.22.04.3-Ubuntu <<>> ittavern.com
2 ;; global options: +cmd
3 ;; Got answer:
4 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64814
5 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
6
7 ;; OPT PSEUDOSECTION:
8 ; EDNS: version: 0, flags:; udp: 65494
9 ;; QUESTION SECTION:
10 ;ittavern.com. IN A
11
12 ;; ANSWER SECTION:
13 ittavern.com. 600 IN A 95.216.194.187
14
15 ;; Query time: 40 msec
16 ;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
17 ;; WHEN: Fri Oct 13 20:26:34 CEST 2023
18 ;; MSG SIZE rcvd: 67
```
Without providing too many options, we already get a lot of information, and I'll try to get into more detail in the following sections.
Let us start with **line 4**: the `status` field is the first indicator of the request's success.
`NOERROR`:
: There was no problem. All requested information was delivered.
`SERVFAIL`:
: The requested name exists, but there's no data available or the data is invalid.
`NXDOMAIN`:
: The requested name doesn't exist.
`REFUSED`:
: The zone doesn't exist at the name server.
I'll go into more detail about the other information when we talk about usage.
---
## Basic commands
To get the version of dig:
: `-v`
To get more information
: `-h`
: `man dig`
Choose the DNS record type:
: `dig ittavern.com mx`
: this would be an example of requesting an `MX` record. The default is an `A` record.
: you can add the flag `-t` in front of it to separate it from the rest and make it more verbose
: the `ANY` request to get all entries [won't be answered](https://datatracker.ietf.org/doc/html/rfc8482) from most name servers
: I couldn't find a way to request all records for a domain without a script - but see the small loop below for a rough workaround
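This is a minimal sketch that simply queries the most common record types one after the other:
```bash
# rough substitute for the blocked ANY query; extend the list of types as needed
for type in A AAAA MX NS TXT SOA; do
    echo "== $type =="
    dig +short ittavern.com "$type"
done
```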
Start a reverse lookup:
: `-x`
: use this if you want to look up the name behind an IP
: you don't have to specify the `PTR` type or `IN` class - a short example follows below
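For example, a reverse lookup of the IP from the answer section above (output omitted, as it depends on the PTR record the host has set):
```bash
dig -x 95.216.194.187 +short
```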
---
Choose a specific name server with `@`:
: `dig @9.9.9.9 ittavern.com`
Specify the source IP and source port:
: `-b address[#port]`
: `dig ittavern.com -b 10.10.10.10#12345`
Specify the destination port:
: `-p port` # the default port is 53, but some name servers listen to another one.
---
Send query over TCP:
: `+tcp`
: the default is UDP
Specify the query class:
: `-c CLASS` # default is `IN`
Specify the IP version:
: `-4` # IPv4
: `-6` # IPv6
# Multiple queries
You can write them in a single command one after the other, like the following example, or use a **batch file** as described in the following section.
`dig ittavern.com ittavern.com mx brrl.net`
#### Using a batch file
Simply use batch files when you have a high number of requests. Every request should be on its own line.
Use the `-f` flag to do so.
Sample file:
```bash
kuser@pleasejustwork: $ cat batch.txt
ittavern.com a
ittavern.com mx
brrl.net a
```
You then can tell dig to use this file to send the queries:
: `dig -f batch.txt`
You can use the usual options to shorten the output:
```
kuser@pleasejustwork: $ dig -f batch.txt +short
95.216.194.187
10 mxext2.mailbox.org.
10 mxext1.mailbox.org.
20 mxext3.mailbox.org.
94.130.76.189
```
# Verbosity
As mentioned before, without additional options, dig provides you with a lot of information by default - more than `nslookup` or `host`.
To get **less information**, simply use `+short`:
```
kuser@pleasejustwork:~$ dig +short ittavern.com
95.216.194.187
```
To get even **more information**, use `+trace`:
```
kuser@pleasejustwork: $ dig +trace ittavern.com
; <<>> DiG 9.18.12-0ubuntu0.22.04.3-Ubuntu <<>> +trace ittavern.com
;; global options: +cmd
. 40164 IN NS l.root-servers.net.
. 40164 IN NS m.root-servers.net.
. 40164 IN NS f.root-servers.net.
. 40164 IN NS d.root-servers.net.
. 40164 IN NS e.root-servers.net.
. 40164 IN NS b.root-servers.net.
. 40164 IN NS c.root-servers.net.
. 40164 IN NS a.root-servers.net.
. 40164 IN NS h.root-servers.net.
. 40164 IN NS k.root-servers.net.
. 40164 IN NS g.root-servers.net.
. 40164 IN NS j.root-servers.net.
. 40164 IN NS i.root-servers.net.
;; Received 239 bytes from 127.0.0.53#53(127.0.0.53) in 40 ms
;; communications error to 199.7.91.13#53: connection refused
;; communications error to 199.7.91.13#53: connection refused
;; communications error to 199.7.91.13#53: connection refused
;; communications error to 202.12.27.33#53: connection refused
;; communications error to 192.112.36.4#53: connection refused
[...]
```
It gives you more insight into the DNS process.
# Conclusion
I hope this blog post will help you to get started with dig. It provides even more options to troubleshoot certain issues, but I'll tackle those topics in a separate post.
---
# Getting started with rclone - Data transmission
Rclone is an [open-source](https://github.com/rclone/rclone) cross-platform data synchronization application focusing on cloud services. It can act as the CLI for your cloud storage. Rclone provides a broad set of features, from simple data transfer to mounting your cloud storage. It provides so many features that I will work on more posts and concentrate on the initial setup and data transfer in this post.
I wish I had looked into rclone earlier.
I will use **Linux** as a reference system in this post.
# Further information
Instructions for the [download](https://rclone.org/downloads/) and [installation](https://rclone.org/install/) can be found in the official documentation.
After the installation, you can get more help via `rclone --help`, `man rclone` or the [official docs](https://rclone.org/docs/).
# Configuration File
With `rclone config file` you can check the configuration file that rclone would use. In my case, it hasn't been created yet.
```markdown
$ rclone config file
Configuration file doesn't exist, but rclone will use this path:
/home/kuser/.config/rclone/rclone.conf
```
You don't have to create this file. Rclone will create it after you've added the first 'Remote'.
# Remotes
`Remotes`, as the name implies, are remote storages, and rclone supports a large [number of providers](https://github.com/rclone/rclone#storage-providers).
You can check the current list of remotes with `rclone listremotes` or the interactive `rclone config` command.
```markdown
$ rclone listremotes
hetzner-storagebox:
```
```markdown
$ rclone config
Current remotes:
Name Type
==== ====
hetzner-storagebox sftp
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q>
```
#### Adding a new remote
I've just added one of my StorageBoxes from Hetzner as a test. You can choose `n` and add a new 'remote'. Rclone will suggest a list of providers, and if your provider of choice is not listed, you have to choose a protocol like `sftp`, `ftp`, `webdav`, or something else to synchronise your data. Rclone will then ask you for all necessary information.
```markdown
[...]
25 / Pcloud
\ "pcloud"
26 / Put.io
\ "putio"
27 / SSH/SFTP Connection
\ "sftp"
28 / Sugarsync
\ "sugarsync"
29 / Transparently chunk/split large files
\ "chunker"
[...]
```
The information can then be found in the configuration file.
```markdown
$ rclone config file
Configuration file is stored at:
/home/kuser/.config/rclone/rclone.conf
$ cat ~/.config/rclone/rclone.conf
[hetzner-storagebox]
type = sftp
host = u123123.your-storagebox.de
user = u123123
pass = fdsfasggsghsgsdrgrsgsgsrgsgrg
use_insecure_cipher = true
```
#### Removing Remote
You can either use the interactive menu of `rclone config` or delete the specific remote's section from your configuration file.
```markdown
$ rclone config
Current remotes:
Name Type
==== ====
hetzner-storagebox sftp
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> d
Choose a number from below, or type in an existing value
1 > hetzner-storagebox
remote> 1
```
# Show content of Remote
You can check the content of the root directory of the added 'remote' to verify a successful connection with the `ls` options.
```markdown
ls List the objects in the path with size and path.
lsd List all directories/containers/buckets in the path.
lsf List directories and objects in remote:path formatted for parsing.
lsjson List directories and objects in the path in JSON format.
lsl List the objects in path with modification time, size and path.
```
As an example:
```markdown
$ rclone lsd hetzner-storagebox:/
-1 2023-02-02 14:47:07 -1 023
-1 2023-11-05 11:09:05 -1 2023
-1 2023-02-02 15:08:55 -1 cam-2
```
# Basic usage
The basic functions of rclone are to **copy**, **move**, **sync** and **check for changes** of files and directories.
The basic syntax is:
`rclone [options] subcommand <parameters> <parameters...> src:source_path dst:destination_path`
**Side note**: Use the `-v` flag to get a more **verbose** output, which I highly recommend! - You can increase the verbosity further by using `-vv` or `-vvv`.
Use the `--dry-run` and `--interactive / -i` flags before larger transfers to check the connection and the actions rclone would take to **prevent data loss.**
```markdown
$ rclone -v --dry-run sync ./test-dir hetzner-storagebox:/test-dir
2023/11/07 22:52:57 NOTICE: another-test.txt: Skipped copy as --dry-run is set
2023/11/07 22:52:57 NOTICE: test-image.png: Skipped delete as --dry-run is set
2023/11/07 22:52:57 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 1 / 1, 100%
Deleted: 1
Transferred: 1 / 1, 100%
Elapsed time: 0.3s
```
#### Copying data
One of the most used functions is the simple data transfer from a source to a destination.
In the following example, we transfer a test image from our local directory to our 'remote' Hetzner StorageBox.
Usual data transfer:
```markdown
┌──────────────────┐
│ │
│ local─────►cloud │
│ │
└──────────────────┘
```
```markdown
rclone copy ./test-image.png hetzner-storagebox:/
```
I then deleted the file with rclone with the following command `rclone deletefile hetzner-storagebox:/test-image.png` and transferred it again with a more verbose output:
```markdown
$ rclone -v copy ./test-image.png hetzner-storagebox:/
2023/11/07 22:22:01 INFO : test-image.png: Copied (new)
2023/11/07 22:22:01 INFO :
Transferred: 261.574k / 261.574 kBytes, 100%, 920.919 kBytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 0.5s
```
If you copy a **directory**, make sure to name the folder on the destination, or you'll only copy the content of the source directory.
```markdown
$ rclone lsd hetzner-storagebox:/
-1 2023-02-02 14:47:07 -1 023
-1 2023-11-05 11:09:05 -1 2023
-1 2023-02-02 15:08:55 -1 cam-2
$ rclone -v copy ./test-dir hetzner-storagebox:/test-dir
2023/11/07 22:29:25 INFO : test-image.png: Copied (new)
2023/11/07 22:29:25 INFO :
Transferred: 261.574k / 261.574 kBytes, 100%, 815.043 kBytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 0.6s
$ rclone lsd hetzner-storagebox:/ -1 2023-02-02 14:47:07 -1 023
-1 2023-11-05 11:09:05 -1 2023
-1 2023-02-02 15:08:55 -1 cam-2
-1 2023-11-07 22:29:25 -1 test-dir
```
Just to have it written down: the source files and directories remain unchanged.
If you want to copy things from a **remote storage to your local system**, simply swap the source and destination and choose the file or directory in the source.
```markdown
┌──────────────────┐
│ │
│ cloud─────►local │
│ │
└──────────────────┘
```
`rclone copy hetzner-storagebox:/test-image.png ./`
Another common use case for rclone is to **copy or sync data from one cloud storage provider to another**, which works pretty well!
```markdown
┌──────────────────┐
│ │
│ cloud─────►cloud │
│ │
└──────────────────┘
```
```markdown
$ rclone -v copy hetzner-storagebox:/test-dir onedrive-storage:/test-dir-onedrive
2023/11/08 20:01:15 INFO : another-test.txt: Copied (new)
2023/11/08 20:01:15 INFO :
Transferred: 261.574k / 261.574 kBytes, 100%, 975.168 kBytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 1.9s
```
#### Moving files
I'll keep this one short. Just replace `copy` with `move` from the previous section, and rclone will remove the source files after the data transfer.
```markdown
$ rclone -v move ./test-image.png hetzner-storagebox:/
2023/11/07 22:46:26 INFO : test-image.png: Copied (new)
2023/11/07 22:46:26 INFO : test-image.png: Deleted
2023/11/07 22:46:26 INFO :
Transferred: 261.574k / 261.574 kBytes, 100%, 975.168 kBytes/s, ETA 0s
Checks: 2 / 2, 100%
Deleted: 1
Renamed: 1
Transferred: 1 / 1, 100%
Elapsed time: 0.5s
```
#### Syncing files
Syncing - with `sync` - makes the source and destination identical but only changes the destination. It works on your local device and remote storage.
**Side note**: Since syncing deletes data on the destination that is not present in the source, it can cause data loss. Use the `--dry-run` flag beforehand and check what actions rclone would take.
```markdown
$ rclone -v sync ./test-dir hetzner-storagebox:/test-dir
2023/11/07 23:06:47 INFO : another-test.txt: Copied (new)
2023/11/07 23:06:47 INFO : test-image.png: Deleted
2023/11/07 23:06:47 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 1 / 1, 100%
Deleted: 1
Transferred: 1 / 1, 100%
Elapsed time: 0.4s
```
#### Deleting files
You can delete files and directories with `rclone delete` and `rclone deletefile`.
```markdown
$ rclone -v delete onedrive-storage:/test-dir-onedrive
2023/11/08 20:16:10 INFO : another-test.txt: Deleted
```
---
# Port Knocking with knockd and Linux - Server Hardening
Port knocking is like a **secret handshake** or **magic word** between client and server. It can be used in various ways, but most commonly as a security feature to deny all contact to a specific service - like SSH - and only allow connections if the client has used the correct port knocking sequence. But it is not limited to that and can be used to run other commands, too. In this article, I've decided to use `knockd`.
It works like this:
: `knockd` server checks logs for a specific sequence/pattern *(e.g. TCP SYN packets to 3 specific ports)*
: `knockd` client runs the specific sequence against the server *(e.g. to open up SSH access through the FW)*
: `knockd` server recognises the sequence and runs a specified command *(e.g. to open access via SSH for a specific IP)*
---
**Advantages**:
- adds another security layer (e.g. against automated attacks and information gathering)
- could run commands in a secure way for third parties
---
**Disadvantages**:
- **limited compatibility/availability** on some clients and servers (like AS400, etc.), and not possible on third-party machines
- knocking sequence could be **captured** (on client or network traffic)
- additional work on **network firewalls and IPS solutions** required, since the knocking ports must be reachable, and knocking could be interpreted as a **(malicious) network/port scan**
- additional **software/configuration on client** needed
- **unreliable on certain networks** as high latency and packet loss can interfere with the knocking process/sequence
- complexity might require additional **user training**
- make it more **difficult to use automation services** like Ansible
- if the **knocking listener service dies or is misconfigured**, access could be impossible without further preparation *(no long-term experience)*
- because of the reasons above, **troubleshooting is a pain**!
# My setup
Linux Ubuntu 22.04 LTS as Client and Server, using `knockd` as a port knocking service, iptables as firewall, and my goal is to secure my SSH access.
## knockd Configuration - Server
Install the service with `sudo apt install knockd`. The service is not running after installation, but let us check the configuration before we start it.
The first configuration file can be found here `/etc/knockd.conf`:
```markdown
[options]
UseSyslog
[openSSH]
sequence = 7000,8000,9000
seq_timeout = 5
command = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags = syn
[closeSSH]
sequence = 9000,8000,7000
seq_timeout = 5
command = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags = syn
[openHTTPS]
sequence = 12345,54321,24680,13579
seq_timeout = 5
command = /usr/local/sbin/knock_add -i -c INPUT -p tcp -d 443 -f %IP%
tcpflags = syn
```
Make sure to **change the sequences** since they are known and, therefore, not secure. For this article, I am going to remove the `[openHTTPS]` section as it is not needed for now.
**The sequence** is a number of ports - 3 TCP ports by default, but you can change the number of ports and the protocol to UDP. In this article, I'll just use the default.
This is the new configuration file:
```markdown
[options]
UseSyslog
[openSSH]
sequence = 22222,33333,44444
seq_timeout = 5
command = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags = syn
[closeSSH]
sequence = 44444,33333,22222
seq_timeout = 5
command = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags = syn
```
Let me explain the command in `[openSSH]`:
: `/sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT`
: `/sbin/iptables` - run `iptables`
: `-A INPUT` - append the rule to the `INPUT` chain (last position); replace it with `-I` to insert the rule at the first position of the chain
: `-s %IP%` - specifies the source with the `knockd` variable of the IP of the 'knocker'. You can change it to a specific IP, network, or all source IPs
: `-p tcp` - choose TCP as a protocol for SSH
: `--dport 22` - specify the destination port (SSH)
: `-j ACCEPT` - tells iptables what to do with this packet
The second command in `closeSSH` deletes the previous rule with `-D` and the rule specifications.
---
The second configuration file can be found here `/etc/default/knockd` - in which we 'enable' `knockd` and specify the interface that we are going to use.
Open it in your favorite text editor, change the value of `START_KNOCKD` from `0` to `1` to enable knockd.
In the next option, you can specify the interface on which `knockd` is listening. Uncomment `KNOCKD_OPTS` and change `eth1` with the interface of your choice. You can use `ip -br a` to find the name:
```markdown
$ ip -br -c a
lo UNKNOWN 127.0.0.1/8 ::1/128
eth0 UP 111.222.111.222/32 metric 100
ens10 UP 10.20.10.7/32
```
#### Starting knockd - Server
So, after the configuration, let us start `knockd`.
`sudo systemctl start knockd` and `sudo systemctl enable knockd` to **start the service** and make sure that it **autostarts** after rebooting.
Use `sudo systemctl status knockd` to ensure the service is running.
The **logs** can be found in the `syslog` of the server:
```markdown
user@test-ubu-01:/var/log$ sudo tail -f syslog
[sudo] password for user:
[...]
Nov 12 15:20:24 test-ubu-01 knockd: 146.70.225.159: openSSH: Stage 1
Nov 12 15:20:24 test-ubu-01 knockd: 146.70.225.159: openSSH: Stage 2
Nov 12 15:20:24 test-ubu-01 knockd: 146.70.225.159: openSSH: Stage 3
Nov 12 15:20:24 test-ubu-01 knockd: 146.70.225.159: openSSH: OPEN SESAME
Nov 12 15:20:24 test-ubu-01 knockd: openSSH: running command: /sbin/iptables -A INPUT -s 146.70.225.159 -p tcp --dport 22 -j ACCEPT
```
You can check the firewall rules with `sudo iptables --list`:
```markdown
sudo iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 146.70.225.159 anywhere tcp dpt:ssh
```
**Important**: Make sure that you remove all 'allow-all' accept rules for SSH and keep one session open so you don't get locked out.
## knockd - Client
Install `knockd` on the client, and use the `knock` command:
: `knock -d 5 -v 195.201.49.99 22222 33333 44444`
: `-d 5` - add a delay of 5 milliseconds between the port hits. It can prevent you from getting hit by various security solutions that block network scans
: `-v` - increase the verbosity
: `195.201.49.99` - destination IP
: `22222 33333 44444` - the sequence, TCP is the default, use `-u` to change all ports to UDP, or add `:udp` or `:tcp` directly behind the port to specify the protocol per port
: more information can be found via `man knock`
As this is the sequence for `[openSSH]`, the logs and rule set should look like the examples in the previous section. To remove the rule, run the command with the sequence of the `[closeSSH]` section.
As a side note: I bet you could do the knocking part without any dedicated client and use netcat, nmap, or some other tool, but I have not tested it yet - an untested sketch follows below.
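For reference, this is how such a knock could look with plain netcat, using the sequence from above (untested, so treat it as a rough sketch; the SSH user is a placeholder):
```markdown
# send one connection attempt (SYN) to each port of the sequence, with a short timeout
for port in 22222 33333 44444; do
    nc -z -w 1 195.201.49.99 "$port"
done
# afterwards, SSH should be reachable again
ssh user@195.201.49.99
```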
# Troubleshooting
Just a small list of tips in case something is not working:
- make sure there is no other firewall ruleset active, UFW, for example
- make sure that the order of rules is correct
- check the logs on the server `/var/log/syslog`
- make sure the client can reach the necessary ports of the knocking itself (network firewall, IPS, etc)
- make sure `knockd` is enabled, listening to the correct interface and is running
# Conclusion
Port knocking is an interesting idea, but I don't plan to implement it anywhere. I've got some ideas for some niche cases, but that's it.
---
# SSH Server Hardening Guide v2
*This is an updated version from last year. Thank you for the great feedback!*
---
This article covers mainly the **configuration of the SSH service** and only references ways to protect the service on the **host machine** or via **policies**.
I'll use Linux with an SSH server as a reference (`OpenBSD Secure Shell server` according to systemd).
---
**Important**: Please test the configuration changes in a test environment or a single user or group to limit the lockout risk!
Additionally, DO NOT copy any configuration mindlessly! - Some configuration changes are just recommendations and work in most cases, but make sure those work for your system, too.
# SSH Server Configuration <a href="#config-file" id="config-file">#</a>
The following configurations can be changed in the `/etc/ssh/sshd_config` file or in a separate configuration file that can be created in a subdirectory `/etc/ssh/sshd_config.d/*.conf`.
**Side note**: It is recommended to create a separate configuration file as the default file is at risk of getting overwritten with a future software update. Just make sure that the default configuration file references the subdirectory with `Include /etc/ssh/sshd_config.d/*.conf`.
That said, you can **check the configuration file** with `sudo sshd -t`; no output means that it is okay, and errors will be displayed if something is not working out, like in the following example:
```bash
$ sudo sshd -t
/etc/ssh/sshd_config: line 49: Bad configuration option: DebianBanner
/etc/ssh/sshd_config: terminating, 1 bad configuration options
```
Use `sudo sshd -T` for a more **verbose output**, which additionally displays all the options that are used.
Almost every config file change **requires a restart of the SSH server service**.
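As a sketch of how such a separate configuration file could be created - the option values shown here are only examples that are explained in the sections below, so adjust them to your needs before copying anything:
```bash
# drop-in file with a few of the hardening options discussed in this article
sudo tee /etc/ssh/sshd_config.d/99-hardening.conf > /dev/null <<'EOF'
PasswordAuthentication no
PermitRootLogin no
MaxAuthTries 3
LoginGraceTime 20
EOF

# validate the configuration and restart the service
# (the service may be called 'ssh' or 'sshd' depending on the distribution)
sudo sshd -t && sudo systemctl restart ssh
```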
## Public key authentication <a href="#public-key-auth" id="public-key-auth">#</a>
You can find a guide on how to use public key authentication [in this linked article](https://ittavern.com/ssh-how-to-use-public-key-authentication-on-linux/). I highly recommend securing your server with public key authentication instead of password authentication.
After enabling it, make sure to turn off password authentication:
`PasswordAuthentication no`
It requires some configuration on the server and client, but it is worth it as it is one of the best ways to protect your server.
## Changing the ssh port <a href="#changing-ssh-port" id="changing-ssh-port">#</a>
`Port 2222`
Change the default SSH port `22` of your host to something else. Some people think it is a must; some think it is useless. There is no perfect answer, but I hope the following list of pros and cons will help you to decide:
**Pros**:
- **Reduce exposure to automated attacks and bots**
- **Reduce noises in the logs**
- **Attackers need to port scan to find the correct port,** which makes it easier to detect targeted attacks with IPS and firewalls. Great for internal servers as port scans are uncommon; for internet-facing servers, rather useless as port scans are inevitable and common
**Cons**:
- **It does not protect against targeted attacks**, as a simple port scan can detect the correct port
- **Compatibility issues**, since some clients or applications might not work with a non-default port
- **Adds complexity**, as clients and scripts must be configured differently, must be documented, users must be informed, etc
**Side note:** choosing a port below 1024 (system or well-known port) is recommended to make it more difficult for an unprivileged user to hijack the service, as by default, non-root processes can only open ports above 1023. Just make sure to avoid **conflicts with already used ports**.
## Disable root login <a href="#disable-root-login" id="disable-root-login">#</a>
`PermitRootLogin no`
Prohibits connecting as `root` as it is recommended to work with a separate user with optional `sudo` permissions.
I've got some feedback that it is unnecessary to disable this since users with `sudo` permissions could do the same damage, but I disagree. Most - if not all - systems have a `root` user, and this is known, which makes it easy to run brute-force or dictionary attacks against the system. Most attackers don't know the available users on a system, which makes the `username` a kind of password.
## Disable login attempts with empty passwords <a href="#disable-empty-passwords" id="disable-empty-passwords">#</a>
`PermitEmptyPasswords no`
It is fairly self-explanatory, but just to make sure: allowing any account without a password to log into the system is a big no-no and should be turned off immediately.
## Disable SSHv1 and use SSHv2 <a href="#disable-sshv1" id="disable-sshv1">#</a>
`Protocol 2`
SSHv2 is usually the default, but it is worth ensuring SSHv1 is disabled.
There are multiple ways to check if SSHv1 is still enabled, and I'll show you some. The following commands can be done **remotely or on the server with localhost as destination**:
```markdown
ssh -1 remoteuser@remoteserver
SSH protocol v.1 is no longer supported
```
**or** with a verbose output like this:
```markdown
ssh -v remoteuser@remoteserver
[...]
debug1: Remote protocol version 2.0, remote software version OpenSSH_8.9p1 Ubuntu-3ubuntu0.3
[...]
```
**or** netcat:
```markdown
echo ~ | nc remoteserver 22
SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.3
```
**Important:** If you see `SSH-1.99` as version, it means that SSHv1 is enabled and it should be disabled!
## Restrict access to specific users or/and groups <a href="#restrict-users-access" id="restrict-users-access">#</a>
`AllowUsers a_this a_that`
`AllowGroups ssh_login`
This option is pretty straightforward and limits the users or groups that can access the server via SSH.
## Restrict access to specific IP or network <a href="#restrict-network-access" id="restrict-network-access">#</a>
`AllowUsers *@10.10.10.10` *# affects all users*
`AllowUsers provider-a@111.111.111.111 provider-b@222.222.222.222`
`AllowUsers internal-user@10.10.0.0/16`
You can further limit the access to specific IPs or networks.
## Restrict access to specific interfaces <a href="#specific-interface" id="specific-interface">#</a>
`ListenAddress 10.10.10.10`
Most servers have multiple interfaces. If the server has one interface for the internal network and one for the internet, and you don't need to reach the server over the internet, it is recommended to make the SSH server listen only to the internal IP. The default is `0.0.0.0`, which allows the service to listen to all interfaces.
## Set an authentication timer <a href="#authentication-timer" id="authentication-timer">#</a>
`LoginGraceTime 20`
The authentication must happen in 20 seconds before the connection gets closed. The default is 2 minutes. It helps to prevent specific denial-of-service attacks where authentication sessions are kept open for a period of time and prevent valid authentications.
**Side note:** make sure that this limit works for you. This limit won't be a problem for Public Key Authentication, but if you have to wait for mail to arrive with the MFA token, 20 seconds might be too short.
## Limit maximum number of attempted authentications <a href="#limit-authentication-attempts" id="limit-authentication-attempts">#</a>
`MaxAuthTries 3`
The default is `6`, and lowering it makes it a little bit more difficult to brute-force a password since the server drops the unauthenticated connection after 3 failed attempts.
**Side note:** Every SSH key loaded into the ssh-agent counts as one attempt each. Keep this in mind if you have a bunch of keys loaded! Additionally, if the Kerberos/GSSAPI authentication method is enabled, the look-up of whether the client is authenticated counts as one attempt.
## Limit the number of concurrent unauthenticated connections <a href="#limit-unauthenticated-conn" id="limit-unauthenticated-conn">#</a>
`MaxStartups 10:30:100`
This is the default and good enough, but I thought explaining this option makes sense.
Explanation `MaxStartups start:rate:full`:
: `10` - number of allowed concurrent unauthenticated connections
: `30` - percentage chance of randomly dropping connections attempts after reaching `start` (`10`) and the chance increases linearly until the `full` (`100`) connections are reached
: `100` - maximum number of unauthenticated connections after every new attempt is getting dropped
The randomized connection dropping makes it more difficult to DOS the service with unauthenticated connections.
**Side note:** Please remember that this option can cause problems with automation and configuration tools or transferring tools that might authenticate slowly and require separate connections.
This option only affects pre-authentication connection and does not limit anything else. Additionally, it has nothing to do with the following option.
## Restrict Multiplexing <a href="#restrict-multiplexing" id="restrict-multiplexing">#</a>
`MaxSessions 10`
This is the default, limits the 'sessions' for one SSH session to `10`, and is fine for most cases. Nevertheless, I thought I'd write about some options.
This option simply limits the 'sessions' - as in shell, login, or subsystems like sftp - of a single network connection (TCP)/ SSH authentication. If this limit is reached, a user could simply open another connection.
---
You can **disable multiplexing** by setting this option to `1`. Every shell, sftp connection, and so on, will then require its own TCP connection, which adds overhead, but it can help limit the damage a hijacked session can do and increases the visibility of the connections since they are separate, which makes troubleshooting certain issues easier.
But please keep in mind that **disabling multiplexing can cause problems** and **limits some functions**! Automation and configuration tools like `Chef`, parallel transfers via `scp`, and other tools that need multiplexing could stop working.
---
Setting `MaxSessions` to `0` disables all shell, login, and subsystem sessions but still allows tunneling, port forwarding, or SOCKS proxying.
This option can be used to **limit the permissions of a bastion/jump host user or group** to a single task.
## Set up a session timeout <a href="#session-timeout" id="session-timeout">#</a>
`ClientAliveCountMax 3`
`ClientAliveInterval 120`
The configuration above means that the session is terminated after 6 minutes of client inactivity. After `120` seconds without receiving any data from the client, the server will ask if the client is still there. If the client does not respond, the server will try again in `120` seconds. If the client fails to answer `3` times, the session is terminated.
## Hide Linux Version in identification string <a href="#hide-linux-version" id="hide-linux-version">#</a>
`DebianBanner no`
The Linux distribution version is added as a comment to the identification string. Debian and Debian derivatives add it by default; RHEL distros do not (from my experience, CentOS and RockyOS).
It changes the identification string pre-authentication from `SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.3` to `SSH-2.0-OpenSSH_8.9p1`.
Please note that the rest of the identification string must remain unchanged according to [RFC 4253](http://www.openssh.com/txt/rfc4253.txt):
```markdown
4.2. Protocol Version Exchange
When the connection has been established, both sides MUST send an
identification string. This identification string MUST be
SSH-protoversion-softwareversion SP comments CR LF
```
## Disable tunneling and port forwarding <a href="#disable-tunneling" id="disable-tunneling">#</a>
`AllowAgentForwarding no`
`AllowTcpForwarding no`
`PermitTunnel no`
Disabling those functions makes it more difficult to use the server as a jump host to gain access to the connected networks, malicious or not. Most servers do not need those functions enabled, but to learn more, feel free to check my article about [SSH tunneling and port forwarding](https://ittavern.com/visual-guide-to-ssh-tunneling-and-port-forwarding/).
## Disable unused authentication methods <a href="#disable-unused-auth-methods" id="disable-unused-auth-methods">#</a>
`KerberosAuthentication no`
`GSSAPIAuthentication no`
`ChallengeResponseAuthentication no`
It highly depends on your needs, but if an authentication method is unused, it should be disabled, as every enabled method increases the attack surface for exploits and vulnerabilities.
**Side note:** Please ensure you don't disable the only method you can log in with, to prevent a lockout.
## Disable X11 Forwarding <a href="#disable-x11" id="disable-x11">#</a>
`X11Forwarding no`
The security concern here is that X11 forwarding opens a channel from the server to the client. In an X11 session, the server can send specific X11 commands to the client, which can be dangerous if the server is compromised. [Source](https://security.stackexchange.com/a/14817)
## Disable SFTP subsystem <a href="#disable-sftp" id="disable-sftp">#</a>
If you do not need SFTP, disable it. It decreases the attack surface and makes the system less vulnerable to security flaws.
Just comment the `Subsystem sftp [...]` line out of the config by placing a `#` at the beginning of the line.
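On Debian/Ubuntu, the commented-out line would typically look like the sketch below; the path to the sftp-server binary may differ on other distributions.
```markdown
# Subsystem sftp /usr/lib/openssh/sftp-server
```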
## Disable insecure ciphers and MACs <a href="#disable-ciphers" id="disable-ciphers">#</a>
```markdown
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
KexAlgorithms curve25519-sha256@libssh.org
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256
```
There are even some more restrictive options, but I have not tested them myself.
**Side note**: Please note that some clients could encounter problems connecting to the server if they don't support the same ciphers. Either update the software on the client or add ciphers that are supported by the client to the server.
Auditing tools like [ssh-audit](https://github.com/jtesta/ssh-audit) can tell you what is secure and what is not.
# Host Server configurations <a href="#host-server-config" id="host-server-config">#</a>
I won't go into detail in this section as it is not in the scope. I just reference methods that I have already covered and name others that can help you secure your server even further.
---
**Methods**:
- use Fail2Ban or similar software to **monitor the access logs to ban IPs that failed too many login attempts**. This helps to prevent brute forcing and DOS attacks > [Getting started with Fail2Ban on Linux](https://ittavern.com/getting-started-with-fail2ban-on-linux/)
- use **Port Knocking** to hide the service and decrease the attack surface > [Port Knocking with knockd and Linux - Server Hardening](https://ittavern.com/port-knocking-with-knockd-and-linux/)
- **get notified as soon as someone logs** into the machine - practical for critical infrastructure > [SSH - run script or command at login](https://ittavern.com/ssh-run-script-or-command-at-login/)
- **Enable MFA** with TOTP-modules like `libpam-google-authenticator` for your SSH access
- use **host and/or network firewalls to limit the hosts or networks** that can access the server. Feel free to add this in addition to the whitelisting in the SSH server configuration
- use **host and/or network firewalls to add a rate limit for network connections** to help prevent brute-forcing and DOS attacks.
---
**General policies**:
- **keep your OS and software up-to-date**
- use a **remote logging** instance to keep the logs safe in case of an incident
- audit those logs with a **threat/anomaly detection** of your choice
- don't expose your services to the internet. Use **VPNs** instead
- **use hardened bastion/jump hosts** in front of hosts that can't be properly secured
- **do scheduled and regular audits of your hardening procedure**. Use tools like [ssh-audit](https://github.com/jtesta/ssh-audit) and make sure that your systems are still secure
- check regularly whether services, user accounts, permissions, etc. are still required and remove them if not
- if you use password authentication, **force secure password or passphrases**
---
Special thanks to ruffy for recommending disabling X11 forwarding and the SFTP subsystem.
---

View file

@ -0,0 +1,379 @@
# Cron Jobs on Linux - Comprehensive Guide with Examples
In this article, I'll use **Ubuntu 22.04** (Debian-derivative) and **rockyOS 9.2** (RHEL-derivative) as references. If it is not mentioned, commands are the same for both systems.
# Basics <a href="#basics" id="basics">#</a>
Cron jobs are scheduled and automated tasks that run commands or scripts on Linux. Common **use cases** are backups, updates, health checks, and so on. Those tasks can run in root (`sudo`) or user context.
Cron is the daemon that runs in the background. The running service is called `cron` and `crond` on Ubuntu and rockyOS, respectively.
#### Make sure the daemon is running <a href="#daemon" id="daemon">#</a>
Make sure that the service is running:
**Ubuntu / Debian**
`sudo systemctl status cron`
or
`ps aux | grep cron`
**rockyOS / RHEL**
`sudo systemctl status crond`
or
`ps aux | grep crond`
#### Show cron jobs <a href="#show-cron-jobs" id="show-cron-jobs">#</a>
Before we start, there are several places where you can look for cron jobs.
1. via the `crontab` command, as described in the following section
Cron tables are saved in `/var/spool/cron/crontabs/*` in Ubuntu and `/var/spool/cron` in rockyOS.
2. in the system-wide `/etc/crontab` or `/etc/cron.d/*` configuration files
3. as script or executable in one of the following directories:
`/etc/cron.hourly/`
`/etc/cron.daily/`
`/etc/cron.weekly/`
`/etc/cron.monthly/`
List existing cron jobs of `crontab`:
: `crontab -l` *# cron jobs of current user*
: `sudo crontab -l` *# cron jobs of `root`*
: `sudo crontab -u USERNAME -l` *# cron jobs of specific user*
---
Check **all cron jobs of all users** on a machine:
So, there are multiple ways to do so. I'll show you my preferred way that should cover all cron jobs.
**Ubuntu / Debian**
```markdown
root@test-ubu-01:~# grep "^[^#;]" /var/spool/cron/crontabs/* /etc/crontab /etc/cron.d/*
```
```markdown
/etc/crontab:SHELL=/bin/sh
/etc/crontab:17 * * * * root cd / && run-parts --report /etc/cron.hourly
/etc/crontab:25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
/etc/crontab:47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
/etc/crontab:52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
/etc/cron.d/e2scrub_all:30 3 * * 0 root test -e /run/systemd/system || SERVICE_MODE=1 /usr/lib/x86_64-linux-gnu/e2fsprogs/e2scrub_all_cron
/etc/cron.d/e2scrub_all:10 3 * * * root test -e /run/systemd/system || SERVICE_MODE=1 /sbin/e2scrub_all -A -r
```
```markdown
root@test-ubu-01:~# ls /etc/cron.*
```
```markdown
/etc/cron.d:
e2scrub_all
/etc/cron.daily:
apport apt-compat dpkg logrotate man-db
/etc/cron.hourly:
/etc/cron.monthly:
/etc/cron.weekly:
man-db
```
---
**rockyOS / RHEL**
```markdown
[root@test-rocky-01 ~]# grep "^[^#;]" /var/spool/cron/* /etc/crontab /etc/cron.d/*
```
```markdown
/var/spool/cron/remotesuser:* * * * * echo "hello world as a random ass user
/var/spool/cron/root:* * * * * echo "hello world"
/etc/crontab:SHELL=/bin/bash
/etc/crontab:PATH=/sbin:/bin:/usr/sbin:/usr/bin
/etc/crontab:MAILTO=root
/etc/crontab:* * * * * remotesuser whoami >> /home/remotesuser/logy.logs
/etc/cron.d/0hourly:SHELL=/bin/bash
/etc/cron.d/0hourly:PATH=/sbin:/bin:/usr/sbin:/usr/bin
/etc/cron.d/0hourly:MAILTO=root
/etc/cron.d/0hourly:01 * * * * root run-parts /etc/cron.hourly
/etc/cron.d/cronywhat:SHELL=/bin/bash
/etc/cron.d/cronywhat:PATH=/sbin:/bin:/usr/sbin:/usr/bin
/etc/cron.d/cronywhat:MAILTO=root
/etc/cron.d/cronywhat:* * * * * root whoami >> /home/remotesuser/logy.logs
[root@test-rocky-01 ~]#
```
```markdown
[root@test-rocky-01 etc]# ls /etc/cron.*
```
```markdown
/etc/cron.deny
/etc/cron.d:
0hourly
/etc/cron.daily:
/etc/cron.hourly:
0anacron
/etc/cron.monthly:
/etc/cron.weekly:
```
#### Add and edit cron jobs <a href="#edit-cron-jobs" id="edit-cron-jobs">#</a>
**Side note:** `crontab` will ask you what editor you want to use to edit the file.
Edit cron jobs with `crontab` user-specific:
: `crontab -e` *# edit cron jobs of current user*
: `sudo crontab -e` *# edit cron jobs of `root`*
: `sudo crontab -u USERNAME -e` *# edit cron jobs of specific user*
The default syntax is `* * * * * command`. A detailed description and examples follow in the following section.
---
A second method would be to add the cron job to the system-wide `/etc/crontab` file or create a new file in the `/etc/cron.d/` directory. The latter is recommended as `/etc/crontab` is at risk of getting overwritten by an update.
The default syntax is slightly different as it adds the user name on the sixth position `* * * * * user command`.
---
**Another way to run scripts** is to use the `/etc/cron.*` directories. Save a script in one of the following directories to run it directly as root, system-wide, on the corresponding schedule:
```markdown
/etc/cron.hourly/
/etc/cron.daily/
/etc/cron.weekly/
/etc/cron.monthly/
```
**Side note:** `/etc/cron.yearly/` / `/etc/cron.annually/` are not there by default but can be added. I have not looked into it.
---
**Important:** Make sure that the script is executable: `sudo chmod +x script.sh`
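For example, a daily script could be installed like this - a minimal sketch, assuming a script named `backup.sh`; note that Debian's `run-parts` skips file names containing a dot, which is why the `.sh` extension is dropped:
```markdown
sudo cp backup.sh /etc/cron.daily/backup
sudo chmod +x /etc/cron.daily/backup
```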
#### Remove cron job <a href="#remove-cron-jobs" id="remove-cron-jobs">#</a>
You can either delete single cron jobs with `crontab -e` or all cron jobs with the following commands:
Removing ALL cron jobs with `crontab -r`:
: `crontab -r` *# removes all cron jobs of current user WITHOUT a prompt*
: `crontab -r -i` *# `-i` adds a yes/no prompt before removing all cron jobs*
: `sudo crontab -r` *# removes all cron jobs of `root`*
: `sudo crontab -u USERNAME -r` *# removes all cron jobs of specific user*
**Example:**
```markdown
[root@test-rocky-01 ~]# crontab -u remotesuser -r -i
crontab: really delete remotesuser's crontab?
```
# Cron Expressions with Examples <a href="#cron-jobs-expressions" id="cron-jobs-expressions">#</a>
**Side note:** I am going to use the `crontab` command syntax for further references.
There are **four things** you can add to the table:
Active cron job for a command or script:
: `0 */12 * * * /path/to/backup.sh` *# runs a backup script every 12 hours*
: `0 */12 * * * rsync -avh /source/ /destination/` *# runs a backup command every 12 hours*
A declaration of an environment variable for the following cron jobs:
: `result="HELLO WORLD"`
A comment that starts the line with a hash `#` and is ignored by `cron`:
: `# just a comment`
Or an empty line, which is also ignored:
: ` `
**Side note**: it is recommended to use absolute paths for all scripts or executables.
---
Explanation from the manual:
```markdown
Example of job definition:
.---------------- minute (0 - 59)
| .------------- hour (0 - 23)
| | .---------- day of month (1 - 31)
| | | .------- month (1 - 12) OR jan,feb,mar,apr ...
| | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
| | | | |
* * * * * command to be executed
```
Examples:
: `* * * * * command` - every minute, which is the lowest possible interval
: `0 * * * * command` - every full hour
: `20 4 * * * command` - every day at 4:20 am
: `0 4 1 * * command` - at 4 am on the first day of the month
: `0 4 1 1 * command` - at 4 am on the first day of January
: `0 4 * * 1 command` - at 4 am every Monday
There are some operators to specify the timing even more.
Note: the following options **can be combined**! I'll add an example at the end.
The asterisk (`*`) means every possible value.
Lists:
: Lists of values can be created with a comma (`,`)
: `0 4,16 * * * command` - every day at 4 am and 4 pm
: `0 4 * * 1,5 command` - at 4 am every Monday **and Friday**
Ranges:
: Ranges of values can be created with a hyphen (`-`)
: `0 9-17 * * * command` - every hour from 9 am to 5 pm
: `0 4 * * 1-5 command` - at 4 am every **weekday from Monday to Friday**
Steps:
: Step values can be defined with a slash (`/`)
: `*/30 * * * * command` - **every 30 minutes** *(00:00,00:30,01:00,[...])*
: `0 */12 * * * command` - **every 12 hours** *(00:00 and 12:00)*
As mentioned before, you can combine these options like in the following example:
`0 9-17/2 * jan-mar 1,5` - every two hours between 9 am and 5 pm, on Monday and Friday from January to March. It's not the best example, but you get the idea.
The following options are **limited** to only some fields and might **not be compatible** with every other option. Additionally, they might not be available in all cron implementations.
The (L)ast x of:
: does only work for 'Day of month' and 'Day of week'
: `0 4 L * * command` - at 4 am on **the last day of the month**
: `0 4 * * 5L command` - at 4 am on **the last Friday of the month**
The nearest (w)eekday within the month:
: does only work for 'Day of month'
: `0 4 15W * * command` - at 4 am on **nearest weekday (Mon-Fri) to the 15th of the month**
: *must be a single day and not a list or range*
The `n`th day of the month with a hash (`#`):
: does only work for 'Day of week'
: `0 4 * * 5#2` - at 4 am on **the second Friday of every month**
**Side note:** Certain values (besides `*`) in the 'Day of month' and 'Day of week' fields can cause an `OR` condition, which creates multiple timings.
---
#### Nonstandard Special Strings <a href="#special-strings" id="special-strings">#</a>
Most implementations support special strings, but some behave a little bit differently. They replace the usual expressions `* * * * *`.
Special strings
: `@hourly` - every full hour - same as `0 * * * *`
: `@daily` or `@midnight` - daily at midnight - same as `0 0 * * *`
: `@weekly` - weekly at midnight on Sunday - same as `0 0 * * 0`
: `@monthly` - monthly at midnight on the first day of the month - same as `0 0 1 * *`
: `@yearly` or `@annually` - yearly at midnight on the 1st of January - same as `0 0 1 1 *`
: `@reboot` - when the cron daemon is started. Depending on the implementation, some daemons would run the command again after a service restart, and some prevent it. Additionally, it can be beneficial to delay the command for a bit to make sure everything is up and running.
: Example: `@reboot sleep 300 && command`
#### Environment Variables <a href="#env-variables" id="env-variables">#</a>
Cron **does not source any startup files**. We therefore have to add any environment variable we need to the crontab.
It was mentioned before, but just declare the environment variable in a new line like this:
`result="HELLO WORLD"` *# this environment variable will be available for all commands or scripts of this cron file*
If you want to add an environment variable **for just one cron job**, you could add it like this:
`20 4 * * * TZ="Europe/Berlin" command`
#### Timezones <a href="#timezones" id="timezones">#</a>
By default, cron uses the system timezone, which can be found in the file `/etc/timezone`.
```markdown
cat /etc/timezone
Etc/UTC
```
Systems often have **multiple users** that might work in **different timezones**. You can add `CRON_TZ=TIME/ZONE` to the cron file of specific users to specify the timezone.
`CRON_TZ=Europe/Berlin`
**Side note:** I've read that it works for Ubuntu and rockyOS, but I only tested it successfully on rockyOS.
Available options can be found in the `/usr/share/zoneinfo` directory:
```markdown
ls /usr/share/zoneinfo
Africa EST5EDT Iceland PRC Zulu
America Egypt Indian PST8PDT iso3166.tab
Antarctica Eire Iran Pacific leap-seconds.list
Arctic Etc Israel Poland leapseconds
Asia Europe Jamaica Portugal localtime
Atlantic Factory Japan ROC posix
Australia GB Kwajalein ROK posixrules
Brazil GB-Eire Libya Singapore right
CET GMT MET Turkey tzdata.zi
CST6CDT GMT+0 MST UCT zone.tab
Canada GMT-0 MST7MDT US zone1970.tab
Chile GMT0 Mexico UTC
Cuba Greenwich NZ Universal
EET HST NZ-CHAT W-SU
EST Hongkong Navajo WET
```
#### Cron Job Permissions <a href="#permissions" id="permissions">#</a>
There are two configuration files to allow or deny users the use of cron jobs.
`/etc/cron.deny` # it exists by default, but is empty. Any user that is listed in this file can not use cron jobs.
`/etc/cron.allow` # if this file exists, users must be listed in this file to be able to use cron jobs. Just for clarification: **an empty `cron.allow` file means that no user can use cron jobs.**
**If both files are missing**, all users on the system **can** use cron jobs.
**If a user is mentioned in both**, the affected user **can not** use cron jobs.
**In case you want to deny all users** the use of cron jobs, you can either add `ALL` to the `/etc/cron.deny` file or create an empty `/etc/cron.allow` file.
# Cron Jobs Logging <a href="#logging" id="logging">#</a>
The cron daemon writes logs into the following files by default:
**Ubuntu / Debian**
`/var/log/syslog` or `/var/log/auth.log`
You can filter the logs with `grep`:
`sudo grep -i cron /var/log/syslog /var/log/auth.log`
**rockyOS / RHEL**
`/var/log/cron`
The logs are not that detailed, and only the basics are logged.
---
That said, the output / stdout of the command or script is not logged and must be added to the command or script itself.
Example:
`0 4 * * 5L command -v >> /var/log/command.log`
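If error output should end up in the same log, stderr can be redirected as well - a minimal sketch based on the example above:
```markdown
0 4 * * 5L command -v >> /var/log/command.log 2>&1
```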
**Side note:** when the cron daemon is not able to run a job - for example when the server is down - all missed cron jobs won't be repeated and must be run manually if they are important.
---

View file

@ -0,0 +1,363 @@
# Getting started with rsync - Comprehensive Guide
rsync is a CLI tool that covers various use cases: transferring data, creating backups or archives, mirroring data sets, integrity checks, and many more.
Reference for this article: rsync version 3.2.7 and two Ubuntu 22.04 LTS machines.
If you want to transfer files to a remote host, rsync must be installed on both sides, and a connection via SSH must be possible.
**Side note:** rsync can be used via `rsh` or as a daemon/server over TCP873, but I won't cover those in this article and concentrate on the transfer over SSH
# Basic File transfer <a href="#basics" id="basics">#</a>
You can transfer files locally, from local to a remote host or from a remote host to your local machine. Unfortunately, you can't transfer files from one remote host to another remote host.
The used syntax for rsync is the following:
`rsync [options] source destination`
The syntax for the remote-shell connection is:
: `[user@]host:/path/to/data`
Example of transferring a directory to a remote host:
: `rsync ./data user@192.0.2.55:/home/user/dir/`
**Side note:** `./data/` copies only the files in the directory, `./data` copies the directory as well.
Rsync does not update or preserve metadata like ownership or timestamps of an item. More information about this is in the metadata section below.
---
It is highly recommended to use various options with rsync to get the results you want.
Common options:
: `--recursive`/`-r` *# copy directories recursively*
: `--ipv4`/`-4` *# use IPv4*
: `--ipv6`/`-6` *# use IPv6*
: `--human-readable`/`-h` *# output numbers in a human-readable format*
: `--quiet`/`-q` *# decreases the output, recommended for automation like cron*
: `--verbose`/`-v` *# increase the output of information*
: `--archive`/`-a` *# recursive + keeps all the metadata. Further information in the 'metadata' section*
#### Specify a different SSH port <a href="#specify-ssh-port" id="specify-ssh-port">#</a>
The default TCP port for SSH is `22` but some servers listen on another port. That is not a problem, and you can tell rsync to connect to another port:
`-e "ssh -p 2222"` *# connection to TCP2222 instead of TCP22*
#### Mirroring data <a href="#mirror" id="mirror">#</a>
You can simply mirror a directory to or from a remote host with the `--delete` option. Rsync compares the source and destination directories, and if it finds files in the destination directory that are missing in the source, it will delete those to keep both sides the same. Please use it with caution and start with a dry run.
```markdown
kuser@pleasejustwork:~/9_temp/rsync$ rsync -ah --delete --itemize-changes ./data user@192.0.2.55:/home/user/
sending incremental file list
*deleting data/small-files-7
[...]
```
#### Deleting source files after transfer <a href="#deleting-source-data" id="deleting-source-data">#</a>
The option `--remove-source-files` - as the name already implies - removes all data after transferring the data to the destination. Please use it with caution and start with a dry run.
#### Update-behaviour <a href="#update-behaviour" id="update-behaviour">#</a>
There are some options to make sure that rsync does not overwrite data on the destination.
Examples:
: `--update`/`-u` *# don't update files that are newer on the destination*
: `--existing` *# don't create new files on the destination*
: `--ignore-existing` *# don't update files that exist on the destination*
: `--size-only` *# only update when the size changes, but not the timestamp*
This can be helpful when the files are used or modified by another application and you don't want to overwrite anything.
# Item Metadata <a href="#metadata" id="metadata">#</a>
As mentioned before, rsync does not preserve the metadata of a file or directory. You can set various options to decide what you keep.
Your options:
: `--perms` / `-p` *# permissions*
: ` --owner` / `-o` *# owner*
: `--group` / `-g` *# group*
: `--times` / `-t` *# modification time*
: `--atimes` / `-U` *# access time*
: `--crtimes` / `-N` *# create time*
: `-A` *# ACLs*
: `-X` *# extended attributes*
One of the most common options is `--archive`/`-a`, which will preserve all metadata and adds recursion. It is in fact a shortcut for `-rlptgoD`. It additionally preserves symbolic links, special files, and device files.
You can use the `--no-*` syntax to remove single attributes like `--no-perms`.
# Exclude directories and files <a href="#exclusion" id="exclusion">#</a>
Rsync makes it easy to **exclude files and directories**. I'll show you some examples in the following list.
Example:
: `--exclude "*.iso" --exclude "*.img"`
: `--exclude={"/tmp/*","/etc/*"}`
You can use `--exclude-from=` to reference a file with a list of exclusions to make it more manageable.
Every line is one exclusion, and lines starting with `;` or `#` are interpreted as comments and ignored:
`--exclude-from='/exclude.txt'`
```bash
$ cat exclude.txt
.git
*.iso
# Temp
/tmp
/cache
```
#### Exclusion by file size
You can exclude files from being transferred if they're too small or too large with `--max-size=` and `--min-size=`:
Examples:
: `--max-size=500m` # max file size of 500 mebibytes
: `--min-size=5kb` # min file size of 5 kilobytes
Common scheme:
: `b byte`
: `k kilo/kibi`
: `m mega/mebi`
: `g giga/gibi`
: `t tera/tebi`
: `p peta/pebi`
A single letter or three letters ending with `ib` like `kib` tell rsync to use the binary prefix (multiples of 1024) - kibibytes - while two letters like `kb` tell rsync to use the decimal prefix (multiples of 1000) - kilobytes.
# Limit transfer bandwidth <a href="#limit-bandwidth" id="limit-bandwidth">#</a>
Sometimes, it is necessary to limit the transfer speed of rsync. You can do it with `--bwlimit=`, which uses KB/s by default.
Some examples:
: `--bwlimit=100` *# Limits bandwidth to 100 KB/s*
: `--bwlimit=250k` *# Limits bandwidth to 250 KB/s*
: `--bwlimit=1m` *# Limits bandwidth to 1 MB/s*
# Data Compression <a href="#compression" id="compression">#</a>
You can **compress your data transfer**, which is great for slow connections. **Activate compression** with `--compress`/`-z`; if you do not specify a method, rsync will choose one that is compatible with the server side.
You can check the available algorithms with `rsync --version`:
```markdown
$ rsync --version
rsync version 3.2.7 protocol version 31
[...]
Compress list:
zstd lz4 zlibx zlib none
[...]
```
You can **choose the compression algorithm** with `--compress-choice=`/`--zc=`.
Besides the algorithm, you can choose the compression level with `--compress-level=`/`--zl=`. Every algorithm has its own list of levels, and it is recommended to look them up.
**Side note:** you can choose `--zl=999999999` to get the maximum compression no matter which algorithm you choose, as rsync silently caps this value at the algorithm's maximum.
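A sketch combining these flags, assuming `zstd` is available on both sides (see the `rsync --version` output above):
```markdown
rsync -ah -z --zc=zstd --zl=3 ./data user@192.0.2.55:/home/user/
```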
# Showing Transfer Progress <a href="#progress" id="progress">#</a>
By default, rsync does not show any progress at all.
`$ rsync -ah ./data user@192.0.2.55:/home/user/` > nothing
With `-v` you get a more verbose output and show at least the file that rsync is transferring at the moment:
```markdown
$ rsync -avh --delete ./data user@192.0.2.55:/home/user/
sending incremental file list
data/
data/big-file
[...]
```
---
With `--progress` you get the **progress and transfer speed per file**:
```markdown
$ rsync -ah --progress ./data user@192.0.2.55:/home/user/
sending incremental file list
data/
data/big-file
17,92M 1% 2,59MB/s 0:06:27
```
---
To see only the **total progress**, use `--info=progress2`:
```markdown
$ rsync -ah --info=progress2 ./data user@192.0.2.55:/home/user/
4,42M 0% 4,07MB/s 0:04:10
```
The number behind `progress` is the verbosity level: `0`=no output; `1`=per file; `2`=total.
This progress is better than nothing, but it can be vague as rsync is still checking the rest of the files for changes. With `--no-inc-recursive`/`--no-i-r` you can tell rsync to create the file list first and then start the transfer to **make it more precise**. That said, it delays the initial transfer.
---
You can use `--stats` to get the transfer results at the end of the transfer.
# Start a dry run <a href="#dry-run" id="dry-run">#</a>
**Side note:** the following method can be used to perform an **integrity check**. For example, you used another tool to transfer a large data set, and you want to check if everything was transferred right. You can double-check it with rsync and even correct things.
Depending on your use case, there is a chance of deleting data by mistake. To avoid that, we can use two features to check the steps rsync would perform in a safe way.
I am talking about `--dry-run`/`-n` and `--itemize-changes`/`-i`. The former performs a read-only run, and the latter shows you all the changes rsync will perform.
Let me show you an example, and don't worry about the other options for now:
```markdown
kuser@pleasejustwork:~/9_temp/rsync$ rsync -ah --delete --itemize-changes --dry-run ./data user@192.0.2.55:/home/user/
sending incremental file list
*deleting data/small-files-7
.d..t...... data/
0 0% 0,00kB/s 0:00:00 (xfr#0, to-chk=0/32)
<f.st...... data/small-files-1
<f+++++++++ data/small-files-14
cd+++++++++ data/new-data/
5 0% 4,88kB/s 0:00:00 (xfr#5, to-chk=0/32)
sent 583 bytes received 61 bytes 429,33 bytes/sec
total size is 1,05G speedup is 1.628.223,61 (DRY RUN)
```
Explanation for this example of `--itemize-changes`:
: `*deleting data/small-files-7` *# deletes file on destination*
: `.d..t...... data/` *# timestamp of directory `data` changed*
: `<f.st...... data/small-files-1` *# changing size and timestamp on destination of file `small-files-1`*
: `<f+++++++++ data/small-files-14` *# file will be created in destination*
: `cd+++++++++ data/new-data/` *# new directory in source detected; will be created on destination*
The syntax of this string is `YXcstpoguax` and is explained as follows:
: `Y` *# type of update performed*
: `X` *# is the file type*
: `cstpoguax` *# are the attributes that could be modified*
Explanation of update types `Y`:
: `<` *# file is being SENT*
: `>` *# file is being RECEIVED*
: `c` *# local change or creation of an item (directory, sym-link, etc)*
: `h` *# item is a hard link*
: `.` *# item is not getting updated*
: `*` *# the rest of the output contains a message (e.g. `deleting`)*
Explanation of file types `X`:
: `f` *# stands for file*
: `d` *# stands for directory*
: `L` *# stands for sym-link*
: `D` *# stands for device*
: `S` *# stands for 'special', e.g. named sockets*
Explanation for the attributes `cstpoguax` of an item:
: `c` *# checksum*
: `s` *# size*
: `t` *# timestamp*
: `p` *# permissions*
: `o` *# owner*
: `g` *# group*
: `u | n | b` *# `u` = access (use) time ; `n` = create time ; `b` = both access and create times*
: `a` *# ACL information*
: `x` *# extended attributes*
Explanation of the status of the attribute:
: A letter means the attribute is being updated
: `.` *# attribute unchanged*
: `+` *# item newly created*
: `?` *# change is unknown (happens when working with older rsync versions)*
# Transfer Logging <a href="#logging" id="logging">#</a>
Rsync does not log anything by default. There are multiple ways to enable logging.
You can **create a log file** with `--log-file=`:
```markdown
$ rsync -ah --info=progress2 --log-file=./rsync.log ./data user@192.0.2.55:/home/user/
29,43M 2% 3,13MB/s 0:05:18
[...]
```
and the logs would look like this:
```markdown
$ cat rsync.log
2024/01/14 18:24:26 [647220] building file list
2024/01/14 18:24:26 [647220] cd+++++++++ data/
2024/01/14 18:24:34 [647220] sent 29630071 bytes received 585 bytes total size 1048576005
[...]
```
You can modify the name of the logs, for example, by adding a timestamp. That is great for automation like daily cron jobs.
```markdown
$ rsync -ah --info=progress2 --log-file=./rsync-`date +"%F-%I%p"`.log ./data user@192.0.2.55:/home/user/
32,28M 3% 4,84MB/s 0:03:24 ^C
[...]
$ ll
[...]
-rw-r--r-- 1 user user 577 Jan 14 18:31 log-2024-01-14-06.log
[...]
```
---
Another option is to save your console output to a log file like this:
`rsync command >> ./rsync.log`
This is a quick and dirty version.
---
Rsync provides a large set of logging options and lets us decide what to show and hide. As it is out of the scope of this article, I won't go into detail, but I wanted to share the `--info=help` output to give you an idea of the options.
```markdown
$ rsync --info=help
Use OPT or OPT1 for level 1 output, OPT2 for level 2, etc.; OPT0 silences.
BACKUP Mention files backed up
COPY Mention files copied locally on the receiving side
DEL Mention deletions on the receiving side
FLIST Mention file-list receiving/sending (levels 1-2)
MISC Mention miscellaneous information (levels 1-2)
MOUNT Mention mounts that were found or skipped
NAME Mention 1) updated file/dir names, 2) unchanged names
NONREG Mention skipped non-regular files (default 1, 0 disables)
PROGRESS Mention 1) per-file progress or 2) total transfer progress
REMOVE Mention files removed on the sending side
SKIP Mention files skipped due to transfer overrides (levels 1-2)
STATS Mention statistics at end of run (levels 1-3)
SYMSAFE Mention symlinks that are unsafe
ALL Set all --info options (e.g. all4)
NONE Silence all --info options (same as all0)
HELP Output this help message
Options added at each level of verbosity:
0) NONREG
1) COPY,DEL,FLIST,MISC,NAME,STATS,SYMSAFE
2) BACKUP,MISC2,MOUNT,NAME2,REMOVE,SKIP
```
---

View file

@ -0,0 +1,110 @@
# Bandwidth Measurement using netcat on Linux
There are various implementations. I am using nmap-ncat on rockyOS 8 on both hosts.
Netcat uses **TCP by default**, and this test is **not limited by disk I/O**, from what I understand. That said, it is not the best solution, but it is a great 'quick and dirty' method. Additionally, there is **no encryption overhead** and **no compression** involved.
**Important:** Please use with caution. You can lose access to a host while performing the test.
---
Server / Receiver:
: `nc -k -v -l 33333 > /dev/null`
: `-k` # keeps listening after connection ends *(might not be available e.g. gnu-netcat)*
: `-v` # verbose output
: `-l 33333` # listen on port 33333 (default TCP)
: `> /dev/null` # send incoming data into the void to avoid disk I/O
Client / Sender:
: `dd if=/dev/zero bs=500M count=1 | nc -v 192.0.2.5 33333`
: `dd` # convert/copy files
: `if=/dev/zero` # read from file, only zeros in this case
: `bs=500M` # sets the block size, 500 mebibytes; use `500MB` for megabytes
: `count=1` # set the maximum number of blocks, just leave it at `1`
: `|` # 'pipes' all data to the next command
: `nc` # netcat command
: `-v` # set a more verbose output
: `192.0.2.5` # set destination server IP
: `33333` # set destination port
---
**Result on the client side**
```markdown
[user@test-rocky-01 ~]$ dd if=/dev/zero bs=500M count=1 | nc -v 192.0.2.5 33333
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: Connected to 192.0.2.5:33333.
1+0 records in
1+0 records out
524288000 bytes (524 MB, 500 MiB) copied, 19.6253 s, 26.7 MB/s
Ncat: 524288000 bytes sent, 0 bytes received in 19.71 seconds.
```
---
**Result on the server side**
```markdown
[user@test-rocky-02 ~]$ nc -k -v -l 33333 > /dev/null
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: Listening on :::33333
Ncat: Listening on 0.0.0.0:33333
Ncat: Connection from 198.51.100.19.
Ncat: Connection from 198.51.100.19:42822.
Ncat: Connection from 198.51.100.19.
Ncat: Connection from 198.51.100.19:43088.
[...]
```
**Side note:** It is recommended to test both directions.
#### Additional options
**Side note:** For security reasons on most systems you need **higher permissions to use ports in the range of 0-1023** (reserved port range).
```markdown
[user@test-rocky-02 ~]$ nc -k -v -l 444 > /dev/null
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: bind to :::444: Permission denied. QUITTING.
```
---
Specify source interface/IP:
: `-s 10.20.10.8`
Specify source port:
: `-p 45454` # on the client obviously
: Tip: change the source port with every run to find a specific run faster in a packet capture
Using UDP instead of TCP:
: `-u` # must be used on both hosts and might not be compatible with other options
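A minimal sketch of the UDP variant, reusing the hosts from above; `-k` is left out since it may not work together with `-u` on every implementation:
```markdown
# server / receiver
nc -u -v -l 33333 > /dev/null
# client / sender
dd if=/dev/zero bs=500M count=1 | nc -u -v 192.0.2.5 33333
```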
# Troubleshooting
#### Large transfer / longer test
`[user@test-rocky-01 ~]$ dd if=/dev/zero bs=4G count=1 | nc -p 5555 -v 192.0.2.5 33333`
`dd: memory exhausted by input buffer of size 4294967296 bytes (4.0 GiB)`
You are limited by your RAM when you want to send more data. You can decrease `bs=4G` to `bs=1G`, and increase the counter `count=1` to `4` to transfer 4GiB of data.
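Applied to the example above, the adjusted command could look like this:
```markdown
dd if=/dev/zero bs=1G count=4 | nc -v 192.0.2.5 33333
```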
#### Connection refused
`Ncat: Connection refused.`
`Ncat: TIMEOUT.`
Make sure:
- that the netcat server is running
- double-check the destination host and port of the command
- make sure that you can reach the destination over this port
- network firewalls
- routing
- check both host firewalls and make sure the inbound and outbound traffic is allowed
# Caution
As mentioned before, you can lose access to your hosts. Additionally, please **announce tests to your network and security team** as you can disrupt a productive network or trigger some kind of IDS system in place.
---

View file

@ -0,0 +1,115 @@
# Adding a trash can to Linux with trash-cli
There is no trash can for the Linux CLI. `rm` removes the data permanently, and there is practically no way of recovering deleted files reliably. `trash-cli` fills this role and lets you 'trash' files and directories and lets you recover 'trashed' items.
#### Installation
There are multiple ways to install `trash-cli`. It is open source and instructions can be found [on Github](https://github.com/andreafrancia/trash-cli?tab=readme-ov-file#installation).
#### Working with Aliases
As a side note: In this article, I will work with aliases. You can pick whatever alias you want, but it is not recommended to overwrite `rm` with `trash-cli`. Overwriting `rm` can cause issues with scripts, applications, and other features. That said, make sure not to overwrite an already-used command.
**Add the aliases** by adding them to your `~/.bashrc` file and load it with `source ~/.bashrc`. It may vary depending on your setup.
# Moving files into the trash can
You can move files into the trash can with `trash` or `trash-put`. It works with files and directories. I've been using it with the alias `tm` as it is close to `rm`.
Alias:
: `alias tm="trash"`
# Showing files and dirs in the trash can
You can use `trash-list` to show the content of the trash can.
```
$ trash-list
2024-02-03 22:53:27 /home/user/data/file2
2024-02-03 22:53:27 /home/user/data/file4
```
Alias:
: `alias tmls="trash-list"`
#### Looking for specific files in the trash can
```
$ trash-list | grep -i file4
2024-02-03 22:53:27 /home/user/data/file4
```
**Side note:** `-i` in `grep` makes the search case-insensitive.
Alias:
: `alias tmgr="trash-list | grep -i"`
#### Disk Space
The following directories store the trashed items:
`~/.local/share/Trash/files` and `/root/.local/share/Trash/files` *# trashed with `sudo`*
You can check the used space of the trash can with the following command:
: `du -sh ~/.local/share/Trash/files`
Alias:
: `alias tmdu="du -sh ~/.local/share/Trash/files"`
# Getting things out of the trash
The advantage of trash-cli is the possibility to recover 'trashed' items:
```
$ trash-restore
0 2024-02-03 23:05:54 /home/user/data/file5
1 2024-02-03 23:05:54 /home/user/data/dir3
2 2024-02-03 23:05:54 /home/user/data/file7
3 2024-02-03 23:05:54 /home/user/data/dir4
4 2024-02-03 22:53:27 /home/user/data/file4
What file to restore [0..4]:
```
Choose a single file or directory or **multiple items** with e.g. `2-3`. The chosen items will be **restored to their original destination**.
Alias:
: `alias tmre="trash-restore"`
You can't restore an item when an item with the same name is in the original path.
`Refusing to overwrite existing file "file3".`
There is an `--overwrite` option, but it is not working for me and I haven't really looked into it as I don't need it that often.
# Emptying the trash can
There are multiple ways to do so. I haven't added any aliases for those options, but feel free to do so.
Removes all items from trash can:
: `trash-empty`
: There is no confirmation prompt!
Removes all items that have been deleted more than `n` days ago:
: `trash-empty n`
: `trash-empty 30`
#### Removing specific items
Removes specific items from the trash can:
: `trash-rm NameOfItem` *# removes all items called `NameOfItem`*
: `trash-rm '*.iso'` *# removes all `.iso` files*
: `trash-rm /path/of/items` *# should remove all items with a specific path, but it is not working for me*
#### Cron
Emptying the trash can be automated with [cron jobs](https://ittavern.com/cron-jobs-on-linux-comprehensive-guide/).
I run it once a day to delete all items that have been trashed more than 7 days ago, but please modify as you wish:
`crontab -e` > add `20 4 * * * trash-empty 7` - runs every day at 4:20 am
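For reference, a minimal `~/.bashrc` block with the aliases used throughout this article could look like this - pick whatever names work for you:
```
alias tm="trash"
alias tmls="trash-list"
alias tmgr="trash-list | grep -i"
alias tmdu="du -sh ~/.local/share/Trash/files"
alias tmre="trash-restore"
```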
# Conclusion
It saved me multiple times, and I can recommend it. I've gotten used to using `tm` instead of `rm`, which can be annoying on systems I don't manage, but this is a small price to pay. The source code can be found [on Github](https://github.com/andreafrancia/trash-cli).
---

View file

@ -0,0 +1,130 @@
# iperf3 - User Authentication with Password and RSA Public Keypair
**Compatibility + Security Notice**
The newest version 3.17 has fixed a side-channel attack ([CVE-2024-26306](https://nvd.nist.gov/vuln/detail/CVE-2024-26306)) which makes the authentication process incompatible with older versions. For backwards-compatibility use the `--use-pkcs1-padding` flag. More information can be found in the [iperf3 Github Repo](https://github.com/advisories/GHSA-x8qh-8j65-v4j9).
In this article I am using iperf3 version 3.9.
## Introduction
For a general introduction visit my [iperf3 guide](https://ittavern.com/getting-started-with-iperf3-network-troubleshooting/) or the [official documentation](https://iperf.fr/iperf-doc.php).
There are some things we have to prepare before we can use the authentication feature of iperf3. We are going through all the steps in the following sections.
#### Overview
![iperf3 auth overview](/images/blog/iperf3-auth-overview.png)
# Usage <a href="#usage" id="usage">#</a>
The following commands are simple examples - the explanation of all the things we need follows in the next sections.
#### Server
```
iperf3 -s 10.20.30.91 -p 1337 --authorized-users-path users.csv --rsa-private-key-path private_unprotected.pem
```
- `iperf3 -s` *# start iperf3 as a server*
- `10.20.30.91` *# listen on a specific IP*
- `-p 1337` *# listen on a specific port (default is 5201)*
- `--authorized-users-path users.csv` *# choose the .csv file with the hashed credentials*
- `--rsa-private-key-path private_unprotected.pem` *# choose the unprotected private key file*
**Side note**: use absolute paths for files when you use iperf3 version `<3.17` in `--daemon` mode as it changes the working directory [Source](https://github.com/esnet/iperf/pull/1672).
---
#### Client
```
iperf3 -c 10.20.30.91 -p 1337 --username iperf-user --rsa-public-key-path public.pem
```
**Side note:** *We must enter the password of the `iperf-user` or provide it via the Linux environment variable `IPERF3_PASSWORD` (see the example after the option list)*
- `iperf3 -c` *# start iperf3 as a client*
- `10.20.30.91` *# IP of the iperf3 server*
- `-p 1337` *# destination port of the iperf3 server*
- `--username iperf-user` *# name of the authorized user*
- `--rsa-public-key-path public.pem` *# choose the public key file*
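To avoid the interactive password prompt - for example in scripts - the password can be provided via the environment variable mentioned in the side note above. A minimal sketch, assuming the password `hunter2` used later in this article:
```
export IPERF3_PASSWORD='hunter2'
iperf3 -c 10.20.30.91 -p 1337 --username iperf-user --rsa-public-key-path public.pem
```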
As mentioned before, in the next sections we find everything we need to set it up.
# RSA Keypair Generation <a href="#keypair-generation" id="keypair-generation">#</a>
The **RSA keypair is used to encrypt and decrypt the user credentials**. The client will receive the public key in the `.pem` format, and the server side needs the private key in the `.pem` format without a password.
**Side note**: *In my tests it was working with an encrypted private key too, but as the docs say it has to be unprotected, I went with it too.*
On Linux you can **generate the needed key pair** with the following commands.
Generate RSA private key:
: `openssl genrsa -des3 -out private.pem 2048`
: *You'll be asked to enter a password; it can't be left empty, otherwise the command runs into an error and generates a file without content. The chosen password is needed for the following two steps.*
Generate the RSA public key from the private key:
: `openssl rsa -in private.pem -outform PEM -pubout -out public.pem`
: *Enter the chosen password. The `public.pem` file can be sent to the iperf3 client.*
Remove the password from the private key:
: `openssl rsa -in private.pem -out private_unprotected.pem -outform PEM`
: *The unprotected private key will be used.*
[Source](https://man.archlinux.org/man/iperf3.1.en#Authentication_-_RSA_Keypair)
# Authorized Users List <a href="#user-list" id="user-list">#</a>
On the server we need a `.csv` file with the hashed credentials of the user. For our example we are going to use username `iperf-user` and the password `hunter2`.
```
cat users.csv
# Format: username,sha256hash
iperf-user,e8c37ee89b09dd23ec6658a80caaa941df4de8dd946482d861fa37b52338226a
```
You have to create the SHA256 hash with the username and password in the following format: `{username}password`. You can use **GUI tools like Cyberchef**: [click here to visit the example](https://baked.brrl.net/#recipe=SHA2('256',64,160)&input=e2lwZXJmLXVzZXJ9aHVudGVyMg)
Or use the **Linux CLI**:
```
echo -n "{iperf-user}hunter2" | sha256sum
e8c37ee89b09dd23ec6658a80caaa941df4de8dd946482d861fa37b52338226a -
```
[Source](https://man.archlinux.org/man/iperf3.1.en#Authentication_-_Authorized_users_configuration_file)
# Built with OpenSSL support <a href="#openssl-support" id="openssl-support">#</a>
Authentication is only available when both iperf3 installations are built with OpenSSL support. That seems to be the default and the following methods should be enough to check if authentication is available:
**iperf3 --version**
Print the iperf3 version and check if the `authentication` feature is available:
```
$ iperf3 --version
iperf 3.9 (cJSON 1.7.13)
Linux pleasejustwork 5.15.0-107-generic #117-Ubuntu SMP Fri Apr 26 12:26:49 UTC 2024 x86_64
Optional features available: CPU affinity setting, IPv6 flow label, SCTP, TCP congestion algorithm setting, sendfile / zerocopy, socket pacing, authentication
```
**Dependencies**
Use the Linux command `ldd` to print shared object dependencies and look for the `libcrypto.so` dependency:
```
$ ldd /usr/bin/iperf3
linux-vdso.so.1 (0x00007ffc9c7bc000)
libiperf.so.0 => /lib/x86_64-linux-gnu/libiperf.so.0 (0x00007fc13ba2f000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc13b806000)
libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007fc13b3c2000)
libsctp.so.1 => /lib/x86_64-linux-gnu/libsctp.so.1 (0x00007fc13b3bc000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fc13b2d5000)
/lib64/ld-linux-x86-64.so.2 (0x00007fc13ba7e000)
```
[Source](https://github.com/nathancrjackson/iperf3-windows-builds/issues/1) and [openssl github](https://github.com/openssl/openssl)

View file

@ -0,0 +1,101 @@
# My Personal Backup Strategy - August 2024
# Intro
I won't go into technical details, but rather talk about the organization and reasoning behind my current backup strategy. I'll post an update with the technical aspect as soon as I finish the todo list.
As always, feel free to share your feedback and questions in the comments section below.
For reference, my [previous strategy from last year](https://ittavern.com/my-offsite-backup-2023-03/) and my [General Backup Guide](https://ittavern.com/backup-guide/).
# Goals & Criteria
This strategy focuses on the most important data I have. This includes family pictures from the last 20 years, various cryptographic keys, passwords and MFA tokens, notes and documents, and much more. **Losing files is not an option, which makes a good backup strategy essential**.
To make a long story short, here are some goals:
- Encrypted transit & storage
- Data validity & integrity checks
- Simple & well thought out recovery strategy
- Long term backup (5-10 years)
# Overview
![back-strat-overview](/images/blog/backup-strat-20240803.webp)
This is a simplified overview of my strategy. I've chosen to remove certain information from the chart for obvious reasons. I'll go into more detail in the following sections.
# The Process
At the moment, everything is done manually as I am working on a decent process that I want to automate. I may change or add cloud providers over time, but this is the rough plan for the near future.
## Data Categories
As mentioned before, only important stuff is backed up. I then created two categories: **frequently changed/accessed** and **rarely changed/accessed** (long-term storage).
**Examples of frequently used data** are passwords and MFA, keys, coding projects, configuration files, etc. - currently about **2 GB**.
**Examples of rarely used data** include family pictures, old projects, family backups, etc - which is currently about **170 GB**.
I am still in the process of adding and removing data and I see this as an ongoing and never ending process as many things will change over time.
## Used Software
For all backups I'm using [**borgbackup**](https://www.borgbackup.org/). I am familiar with it and it allows me to store my backups **encrypted, compressed and easily recoverable**. The keyfiles and associated passphrase are stored locally. By default, borg stores the keyfiles at the remote location, but I've decided to keep them local to increase security. The keyfiles and passphase are required to access a borg repository.
There is currently **no fixed schedule** as I know when big changes have been made and a backup is needed. I plan to automate this at some point, but for now it's all I need.
Almost all frequently used data is mirrored to local devices via [**Syncthing**](https://syncthing.net/). I don't really consider it a backup, as human error is a huge risk factor, but it's still part of the plan as it prevents data loss in case of hardware failure. Poor man's distributed RAID?
## Cloud Backups & Third Party Storage
All **remote sites support borg and are only accessible via SSH**. There are a lot of providers out there and I think I'll stick with three for now.
At the moment I have a fairly **slow upload speed at home**, which is the bottleneck. The initial upload took a couple of hours per provider, but borg deduplicates everything from here, which will save a lot of time for all subsequent backup runs.
Question: What happens if your upload is faster than the trusted third party's download? - Just keep this in mind before you DOS that party's Internet access.
## Rotating Cases
![back-strat-case](/images/blog/backup-202408-case-content.jpg)
Each case contains a **1TB SSD, 1TB 3.5" HDD** and a spare **Yubikey**. The case itself has anti-shock padding and things can't move when the case is closed. After the backups are done with borg, I put all the drives in **antistatic bags with silica drybags**. Then I fix everything with a **cable tie**, put a **seal** on it - to make sure nobody opens it - and put the case in a **flame and water resistant bag**.
![back-strat-case-seal](/images/blog/backup-202408-case-seal.jpg)
Something I think is a bit overkill, but fun to work on and improve over time. In the end, it won't hurt anything but my wallet.
## Documentation
Everything is documented with **drawio and simple text files**. Each backup contains an encrypted LUKS container with the manual and further instructions.
Since I am still in the process of getting everything done, it is not finished or pretty and not worth sharing. I'll go into more detail when I share the technical part.
## Recovery
Even though the recovery process is working, I know that I need to improve certain things to make everything final. There are still **many open what-if scenarios** that I need to address and document. For now, I'm set and happy, but I don't want to spend so much time and energy only to end up with flaws in my strategy. The recovery process will be a big part of the technical follow-up post.
## Regular Health Checks
The plan is to keep these backups running for a long time. To ensure this, regular checks are essential.
- Is the recovery process still working?
- Recovery instructions up to date?
- Is the hardware OK?
- Are the cloud providers still reliable?
- What is the status of the software in use? Still under development?
- Is the available storage still large enough? E.g. 1TB in the case.
At the moment I don't have a list and I check it from time to time, but I plan to implement a fixed schedule and automate certain tasks.
# Upcoming Improvements
Even though the first backup is done, there is a lot of room for improvement. I plan to **automate** things like most backups, do **health checks** of the hardware, check the **software in use for updates**, and so on. Once I automate things, some form of **monitoring, alerting, and logging** is required.
Besides automation, I need to **rework my local server where the data is currently stored**. Add a **RAID level and harden it** a bit more. This is part of my homelab rework, but that is another topic.
Something I haven't mentioned yet is the **backup retention time**. Right now there is no need to delete any backup as borg deduplicates everything, but at some point there will be a need to delete data or entire backups. I think time will tell what makes the most sense.
# Conclusion
I'm pretty happy with the current setup and have been sleeping much better since the first backup. It is not perfect, but working on it is fun, challenging, and a constant reminder to keep an eye on my most important data.

View file

@ -0,0 +1,193 @@
# mtr - More Detailed Traceroute - Network Troubleshooting
`mtr` is a great tool for troubleshooting connection problems and is one of the first things I install on a Linux machine. It is a `traceroute` on steroids. It provides additional information and can pinpoint problems with specific nodes on the network.
We'll focus on `mtr` on Linux and ICMP only, and I hope I can give you some insight into this simple but helpful tool.
## The Basics
To get started, run the following command to get an interactive/live view of the results:
`mtr DESTINATION`
```
My traceroute [v0.95]
mtr-server-name (192.168.10.175) -> dest-server-name (10.0.10.95) 2024-08-27T13:47:28+0000
Keys: Help Display mode Restart statistics Order of fields quit
Packets Pings
Host Loss% Snt Last Avg Best Wrst StDev
1. _gateway 0.0% 138 0.4 2.5 0.3 75.8 10.1
2. 10.254.3.254 0.0% 138 0.2 0.2 0.1 3.8 0.3
3. 10.254.1.254 0.0% 138 0.3 0.2 0.2 0.4 0.0
4. 10.254.28.70 0.0% 138 20.0 20.1 20.0 20.4 0.1
5. 10.0.10.95 0.0% 138 20.2 20.4 19.9 45.0 2.5
```
## Results Explained
```
Host Loss% Snt Last Avg Best Wrst StDev
1. 10.11.0.1 0.0% 2 18.1 18.3 18.1 18.4 0.2
[...]
```
Explained:
: `Host` - Hop information, which can be changed
: `Loss%` - percentage of packet loss
: `Snt` - number of packets/cycles sent
: `Last` - Round-Trip-Time (`RTT`) of the last packet sent
: `Avg` - average `RTT` of all packets sent
: `Best` - fastest `RTT` of all sent packets
: `Wrst` - worst `RTT` of all sent packets
: `StDev` - standard deviation of all sent packets
This is the default output explained, and it is all I need often enough. However, you can **change the columns**: reorder them, remove them, and even add additional ones with `-o FIELDS, --order FIELDS`:
```
│L │ Loss ratio │
├──┼─────────────────────┤
│D │ Dropped packets │
├──┼─────────────────────┤
│R │ Received packets │
├──┼─────────────────────┤
│S │ Sent Packets │
├──┼─────────────────────┤
│N │ Newest RTT(ms) │
├──┼─────────────────────┤
│B │ Min/Best RTT(ms) │
├──┼─────────────────────┤
│A │ Average RTT(ms) │
├──┼─────────────────────┤
│W │ Max/Worst RTT(ms) │
├──┼─────────────────────┤
│V │ Standard Deviation │
├──┼─────────────────────┤
│G │ Geometric Mean │
├──┼─────────────────────┤
│J │ Current Jitter │
├──┼─────────────────────┤
│M │ Jitter Mean/Avg. │
├──┼─────────────────────┤
│X │ Worst Jitter │
├──┼─────────────────────┤
│I │ Interarrival Jitter │
└──┴─────────────────────┘
```
# Common Options
`mtr` gives us more options. I'll show you the most common options here:
Display the help menu:
: `-h, --help`
Choose the Internet Protocol Version:
: `-4` *# IPv4*
: `-6` *# IPv6*
Don't resolve any host names:
: `-n, --no-dns`
Show host name and IPs:
: `-b, --show-ips`
Choose a specific interface:
: `-I NAME, --interface NAME`
Choose a source IP address:
: `-a ADDRESS, --address ADDRESS`
Manage the number of cycles and interval:
: `-c COUNT, --report-cycles COUNT` *# number of cycles*
: `-i SECONDS, --interval SECONDS` *# time in seconds between ICMP requests, default is 1 second*
: `-s PACKETSIZE, --psize PACKETSIZE` *# payload in bytes, including IP and ICMP headers. A negative number will randomize the size up to that number*
: `-f NUM, --first-ttl NUM` *# set start TTL*
: `-m NUM, --max-ttl NUM` *# set maximum TTL, default is 30*
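Combining a few of the options above into one command - a sketch with placeholder values and destination:

```
# 10 cycles, 2 seconds between requests, 100-byte payload, no DNS resolution
mtr -n -c 10 -i 2 -s 100 example.com
```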
There are more specific options for MPLS, Autonomous System (AS) numbers and so on.
# Interactive Mode
By default, `mtr` starts in interactive or live mode.
The most important shortcuts to control this mode are `p` to **pause**, `SPACE` to **resume**, `r` to **reset all counters**, `n` to **toggle hostname resolution**, `d` to switch the display mode, and `h` to **show help and all other options**.
The display modes you can choose:
![](/images/blog/mtr-displaymode-1.png)
![](/images/blog/mtr-displaymode-2.png)
![](/images/blog/mtr-displaymode-3.png)
# Report mode
This is not the official name, but it makes things a little bit clearer. If you want to **automate your workflow and save the results to a file**, use the `-r` / `--report` option. This will only **print the final results and let you pipe** them elsewhere. By default, the `--report` option sets the number of cycles to `10`.
Use `-F FILENAME, --filename FILENAME` to import a list of hosts that get processed one after the other.
## Saving results to file
I haven't had any luck with showing the results live and saving them to a file at the same time. Instead, the following example runs 5 cycles, displays the final results in the terminal, and additionally saves them to a file named `results`:
```
user@pleasejustwork:~$ mtr -n -r -c 5 server-name | tee results
Start: 2024-07-08T15:57:45+0000
HOST: server-name Loss% Snt Last Avg Best Wrst StDev
1.|-- 192.168.10.254 0.0% 5 0.4 0.3 0.3 0.4 0.0
2.|-- 10.254.3.254 0.0% 5 0.2 0.2 0.2 0.3 0.0
3.|-- 10.254.1.254 0.0% 5 0.3 0.3 0.3 0.4 0.1
4.|-- 198.51.100.44 0.0% 5 13.2 13.6 13.2 14.8 0.7
5.|-- 10.44.193.73 0.0% 5 13.5 13.5 13.5 13.6 0.1
6.|-- 100.64.48.248 0.0% 5 13.5 13.8 13.5 14.0 0.2
7.|-- 10.44.204.26 0.0% 5 18.9 18.9 18.9 19.0 0.0
8.|-- 10.254.32.2 0.0% 5 19.0 19.0 19.0 19.1 0.0
9.|-- 10.0.10.95 0.0% 5 19.8 20.9 18.9 27.5 3.7
user@pleasejustwork:~$ cat results
Start: 2024-07-08T15:57:45+0000
HOST: server-name Loss% Snt Last Avg Best Wrst StDev
1.|-- 192.168.15.254 0.0% 5 0.4 0.3 0.3 0.4 0.0
2.|-- 10.254.3.254 0.0% 5 0.2 0.2 0.2 0.3 0.0
3.|-- 10.254.1.254 0.0% 5 0.3 0.3 0.3 0.4 0.1
4.|-- 198.51.100.44 0.0% 5 13.2 13.6 13.2 14.8 0.7
5.|-- 10.44.193.73 0.0% 5 13.5 13.5 13.5 13.6 0.1
6.|-- 100.64.48.248 0.0% 5 13.5 13.8 13.5 14.0 0.2
7.|-- 10.44.204.26 0.0% 5 18.9 18.9 18.9 19.0 0.0
8.|-- 10.254.32.2 0.0% 5 19.0 19.0 19.0 19.1 0.0
9.|-- 10.0.10.95 0.0% 5 19.8 20.9 18.9 27.5 3.7
```
## Further Processing
If you want to process the data in another system, it makes sense to save the results of `mtr` in a different format. `mtr` gives you some options:
```
-x, --xml
-C, --csv
-j, --json
```
Examples for the `--csv` format:
```
Mtr_Version,Start_Time,Status,Host,Hop,Ip,Loss%,Snt, ,Last,Avg,Best,Wrst,StDev,
MTR.0.95,1720455178,OK,server-name,1,192.168.15.254,0.00,5,0,0.46,8.96,0.33,43.25,19.16
MTR.0.95,1720455178,OK,server-name,2,10.254.3.254,0.00,5,0,0.26,0.25,0.22,0.26,0.02
MTR.0.95,1720455178,OK,server-name,3,10.254.1.254,0.00,5,0,0.23,0.71,0.23,2.29,0.89
MTR.0.95,1720455178,OK,server-name,4,198.51.100.44,0.00,5,0,13.33,13.48,13.24,14.27,0.44
MTR.0.95,1720455178,OK,server-name,5,10.44.193.73,0.00,5,0,19.36,16.65,13.57,22.82,4.24
MTR.0.95,1720455178,OK,server-name,6,100.64.48.248,0.00,5,0,17.40,15.67,13.96,18.63,2.19
MTR.0.95,1720455178,OK,server-name,7,10.44.204.26,0.00,5,0,21.51,21.03,19.04,22.59,1.57
MTR.0.95,1720455178,OK,server-name,8,10.254.32.2,0.00,5,0,18.90,19.87,18.90,21.62,1.22
MTR.0.95,1720455178,OK,server-name,9,10.0.10.95,0.00,5,0,19.07,22.28,19.07,33.95,6.53
```
# Conclusion
So, I hope you found this short primer helpful and can use it in your next troubleshooting session.

View file

@ -0,0 +1,118 @@
# ssh-audit Primer - Audit your SSH Server
As the name already implies, `ssh-audit` helps you audit SSH clients and servers. It is lightweight, [open-source](https://github.com/jtesta/ssh-audit), supports both versions of SSH, and is available for Linux and Windows.
In this article we'll use **Linux** on both sides.
# The Basics
After installation, the **basic syntax** of an audit is `ssh-audit DESTINATION`. By default it uses the standard SSH port `22` - to change the port, simply add it to the destination `DESTINATION:2222`.
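As a minimal example, based on the syntax above:

```
# default port 22
ssh-audit 10.10.50.50
# custom port
ssh-audit 10.10.50.50:2222
```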
![](/images/blog/ssh-audit-default-output.png)
On the left, it starts with the referenced information, categorizes it with `[info]`, `[warn]` and other labels, and adds a short description on the right.
It has **many sections** - starting with key, cipher, and algorithm information, followed by fingerprint details, recommendations, and additional information at the end.
![](/images/blog/ssh-audit-default-output-end.png)
## General options
Force a certain **SSH version**:
: `-1, --ssh1`
: `-2, --ssh2`
Choose an **Internet Protocol Version**:
: `-4, --ipv4`
: `-6, --ipv6`
**Removes the colors** in the terminal:
: `-n, --no-colors`
For a **more detailed output**:
: `-v, --verbose`
: `-d, --debug`
: In particular, the debug option is useful if you need to troubleshoot a connection or want the raw data.
![](/images/blog/ssh-audit-debug.png)
# Batch Audit of multiple Servers
Especially in larger environments, it makes sense to work with the following options if you want to **automate processes**.
#### Target list of hosts from file
Create a simple **list of hosts**:
```
cat ssh-servers.txt
10.10.50.50:22
10.20.30.40:22
```
Now you can use the `-T, --targets=<hosts.txt>` option to run through the list of hosts:
`ssh-audit -T ssh-servers.txt`
There are some options that might be helpful:
Add a **timeout** in case a host is unavailable so the run can continue:
: `-t, --timeout=<secs>`
Set the **minimum output level** for the logs:
: `-l, --level=<level>` *# (info|warn|fail)*
Formatting the results in **JSON**:
: `-j, --json`
This option removes a lot of formatting and empty lines, which makes it easier to work with the results:
: `-b, --batch`
![](/images/blog/ssh-audit-batch.png)
#### Saving Results to file
Simply redirect the results into a new file with `>`, append to a file with `>>` or pipe it to `tee` when you need to display the results at the same time.
`ssh-audit -T ssh-servers.txt > ssh-servers-results.txt`
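One way to combine the batch options from this section - treat it as a sketch, using only the flags listed above:

```
ssh-audit -T ssh-servers.txt -t 5 -l warn -b | tee ssh-servers-results.txt
```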
# Work with Policy Sets
You can **create custom policy sets** and test against them. This is helpful if you have a standard that you want to enforce and audit your environment against.
Let's say you already have a **hardened SSH server** that you want to use as a reference. Use `-M, --make-policy=reference-policy-ssh.txt` to create a reference policy set that you can use later.
```
ssh-audit -M reference-policy-ssh.txt 10.10.50.51
Wrote policy to reference-policy-ssh.txt. Customize as necessary, then run a policy scan with -P option.
```
You then can use `-P, --policy=reference-policy-ssh.txt` to run the **reference policy against another server**:
```
ssh-audit -P reference-policy-ssh.txt 10.10.50.50
Host: 10.10.50.50
Policy: Custom Policy (based on 10.10.50.50 on 2024/08/28) (version 1)
Result: ✔ Passed
```
**Deviations** will be listed and you can make changes accordingly.
The **policy file looks like this**:
```
cat reference-policy-ssh.txt
# Custom policy based on 10.10.50.51 (created on 2024/08/28)
# The name of this policy (displayed in the output during scans). Must be in quotes.
name = "Custom Policy (based on 10.10.50.51 on 2024/08/28)"
# The version of this policy (displayed in the output during scans). Not parsed, and may be any value, including strings.
version = 1
# When false, host keys, kex, ciphers, and MAC lists must match exactly. When true, the target host may support a subset of the specified algorithms and/or algorithms may appear in a different order; this feature is useful for specifying a baseline and allowing some hosts the option to implement stricter controls.
allow_algorithm_subset_and_reordering = false
{...}
```

View file

@ -0,0 +1,100 @@
# Generate a Vanity v3 Hidden Service Onion Address with mkp224o
Let us start with some history: as of July 2021, only V3 addresses will be allowed and [V2 are deprecated](https://blog.torproject.org/v2-deprecation-timeline/) for security and privacy reasons, as the algorithm used in V2 is no longer secure. I'll write more about this when I've done my homework.
V3 onion addresses are **56 characters long** and end with `.onion`. V2 only had 16 characters, and the reason V3 addresses are so long is that they contain the full ed25519 public key, not just a hash of it.
I **plan to make ITTavern available over the Tor network**, and in preparation I have been looking into vanity onion addresses. An example of a vanity address is the Facebook onion address:
`facebookwkhpilnemxj7asaniu7vnjjbiltxjqhye3mhbshg7kx5tfyd.onion`.
Even if it is still impossible to remember, it makes it a little better.
# mkp224o
There are dozens of tools to generate V3 onion addresses, but [mkp224o](https://github.com/cathugger/mkp224o) was the most recommended, so I thought I'd give it a try.
## Installation
The installation is straightforward and I won't go into too much detail, as it's described in more detail in the [repository](https://github.com/cathugger/mkp224o).
Assuming you are using Debian or Ubuntu, here are the simple steps:
```
sudo apt install gcc libc6-dev libsodium-dev make autoconf
git clone https://github.com/cathugger/mkp224o.git
cd ./mkp224o
./autogen.sh
./configure
make
```
That's it.
## Generating Keys
Let us start by creating a new directory in which we save the keys for the upcoming example.
`mkdir vanity-addresses`
**We are going through the most common options using an example**:
`./mkp224o -t 2 -d ./vanity-addresses -f wordlist.txt -o ./vanity-addresses/list-of-hits.log`
Explained:
: `./mkp224o` *# run the program*
: `-t 2` *# limit to 2 threads - otherwise it tries to get all available*
: `-d ./vanity-addresses` *# directory where keys will be saved in*
: `-f wordlist.txt` *# list of filter words, one word per line - see the sketch below*
: `-o ./vanity-addresses/list-of-hits.log` *# optional log file for found entries - use `-O` to overwrite instead of appending*
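The wordlist passed with `-f` could look like this - the words themselves are just made-up examples:

```
cat wordlist.txt
beer
tavern
onion
```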
That said, it all depends on your needs. It could be as simple as `./mkp224o beer`
```
./mkp224o beer
sorting filters... done.
filters:
beer
in total, 1 filter
using 4 threads
beerduqu5tb5m3h75dkjv5kcqfyirivtia6vmnpqatzjfq54pkohcryd.onion
beeri7cj2ba7jlz4hhvbhv3lydfiawmonslz7yv63dagl3abrvx3xgyd.onion
beerykugi53rz4rvpelywafzgscot5div5g4soe677xetli2ee7vgmyd.onion
beerxyjhfc5lltenz4q6at3yguqx3gh5m737aht44qvs57vjzycpe2qd.onion
beeraqevtz5dhnkzees2bhvldja4li57mcmqz3gzvlipk6holttevaqd.onion
beerxhx32jgjnov6mdjuhkhbttem2cw6pgrla53vajwddb6ful2xdsid.onion
[...]
```
To get **all options** simply run `./mkp224o -h`.
# Chances
Just to give you an idea about the chances of getting your favorite name: note that this reference is from a similar program, and performance will vary.
Source [katmagic/Shallot](https://web.archive.org/web/20230331011246/https://github.com/katmagic/Shallot) on Github (via Archive.org) with 1.5 Ghz:
| characters | time to generate (approx.) |
| ---------- | -------------------------- |
| 1 | less than 1 second |
| 2 | less than 1 second |
| 3 | less than 1 second |
| 4 | 2 seconds |
| 5 | 1 minute |
| 6 | 30 minutes |
| 7 | 1 day |
| 8 | 25 days |
| 9 | 2.5 years |
| 10 | 40 years |
| 11 | 640 years |
| 12 | 10 millenia |
| 13 | 160 millenia |
| 14 | 2.6 million years |
As you can see, length indeed matters.
# Conclusion
It turned out to be easier than I thought. As I mentioned before, I'll try to keep this site available over Tor and will share new things I learn along the way.

View file

@ -0,0 +1,239 @@
# Deploying ISSO Commenting System for Static Content using Docker
I was looking for a **simple and lightweight commenting system with moderation** for my static blog. There are dozens of solutions out there, but [ISSO](https://isso-comments.de/) seemed like a perfect fit for me. **I decided to host it on another server, using a subdomain and Podman or Docker**. However, there are different ways to [install](https://isso-comments.de/docs/reference/installation/) it - like same server, same domain, without Docker and so on.
**Illustration of our plan**:
![](/images/blog/isso-comments-overview.webp)
You can add the JS snippet in **Hugo, Jekyll, documentation**, or wherever you want.
---
**My assumptions before we begin**:
- Two Linux servers running Ubuntu 24.04 LTS *(Distro doesn't really matter)*
- 1 **Backend** to host ISSO, 1 **Frontend** where the comment feature is used.
- Backend server has **Docker/Podman and nginx** installed and ready to go *(or another reverse proxy)*.
- Both servers are **accessible to the visitor** (Internet, Intranet, etc.) via HTTPS (TCP/443).
- **DNS entry for the secondary domain pointing to the backend server**
- In this article we will use `example.com` for the frontend / static blog and `comments.example.com` for the backend / ISSO server.
- **Certificate** for secondary domain
---
Feel free to test the comment section at the end of this article.
# Setting everything up
Let us start with the Backend server.
## Backend Server Configuration
**Create two directories** for the container - one for the configuration file and one for the database:
`mkdir -p config/ db/`
---
#### ISSO Server Configuration
Download and save the [default configuration](https://github.com/isso-comments/isso/blob/master/isso/isso.cfg) `isso.cfg` into the `/config` directory:
Using `curl`:
`curl -L https://raw.githubusercontent.com/isso-comments/isso/master/isso/isso.cfg -o config/isso.cfg`
---
Now we need to **modify this config file according to our setup and preferences**.
We'll **add the frontend domain and enable moderation** - I'll leave the rest as is for this article.
**Open** the configuration file `config/isso.cfg` in your favorite editor and change the following options:
```
[general]
[...]
# Frontend domain
host = https://example.com
[...]
# Enabling moderation
[admin]
enabled = true
# Admin access password
password = reallylongandsecurepasswordfortheadminaccess
[...]
[moderation]
enabled = true
[...]
```
The **configuration file does a good job of showing and describing many features** - take your time and configure it to your liking. A more **detailed overview** can be found in [its official documentation](https://isso-comments.de/docs/reference/server-config/).
---
#### Docker Container
Next, we will **start the container**. As mentioned before, I use Podman, but we will use Docker for this example:
```
docker run -d --name isso-comments \
-p 127.0.0.1:8080:8080 \
-v /path/to/storage/config:/config \
-v /path/to/storage/db:/db \
ghcr.io/isso-comments/isso:0.13.0
```
Explained:
: `docker run -d` *# run detached container*
: `--name isso-comments` *# set container name*
: `-p 127.0.0.1:8080:8080` *# exposes port `8080` of the container to the `localhost:8080` of the host system*
: `-v /path/to/storage/config:/config` *# creates a bind mount for persistent storage for the container (same for the `db` directory)*
: `ghcr.io/isso-comments/isso:0.13.0` *# pull and use the [ISSO image](https://github.com/isso-comments/isso/pkgs/container/isso) hosted on GitHub with a specific tag*
**Important**: Make sure to **change the path** and that `127.0.0.1:8080` is available or change the host port.
**Check locally with curl if container is working**:
`curl -L 127.0.0.1:8080/admin`
```
<html>
<head>
<title>Isso admin</title>
<link type="text/css" href="http://127.0.0.1:8080/css/isso.css" rel="stylesheet">
<link type="text/css" href="http://127.0.0.1:8080/css/admin.css" rel="stylesheet">
</head>
<body>
<div class="wrapper"
[...]
```
**This is looking fine!**
If you get an **error message**, check the logs:
`docker container logs isso-comments`
---
#### Reverse proxy / nginx
Currently, the ISSO backend is only available from the host itself. We need to **add a reverse proxy** to forward the requests and make it available to other hosts. I will be using **nginx, but feel free to use Caddy, Apache**, or something else. The **certificate configuration** is handled by **certbot**.
We won't go into much detail, but the **following config works for me** (*some privacy modifications*):
```
server {
server_name comments.example.com;
access_log /var/log/nginx/comments.example.com.access.log;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass "http://127.0.0.1:8080";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
#error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/comments.example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/comments.example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = comments.example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name comments.example.com;
return 404; # managed by Certbot
}
```
If everything went well, the **admin interface should be available**:
`https://comments.example.com/admin/`
---
If it doesn't work:
- check the **logs of your reverse proxy**
- **reverse proxy** up and running
- **secondary domain pointing to Backend server** *(DNS)*
- **Container** up and running
- **port configuration** is correct *(container > Docker > nginx)*
- check **network and host firewalls**
---
Regarding the Backend server, this should be everything!
---
## Frontend Configuration
In the next step, we just have to **add some Javascript to all pages that should receive a comment section**. ISSO will do the rest automatically.
```
<script data-isso="//comments.example.com/"
data-isso-max-comments-top="10"
data-isso-max-comments-nested="5"
data-isso-reveal-on-click="5"
data-isso-sorting="newest"
data-isso-avatar="false"
data-isso-vote="false"
src="//comments.example.com/js/embed.min.js"></script>
<section id="isso-thread">
<noscript>Javascript needs to be activated to view comments.</noscript>
</section>
```
Feel free to change those options! - A **full overview of all options** can be found in [their official documentation](https://isso-comments.de/docs/reference/client-config/).
#### Hiding certain Fields
Currently, it is [not possible to hide certain fields with the ISSO options](https://github.com/isso-comments/isso/issues/916).
A simple but not perfect **workaround is to hide them with CSS**. If you want to hide the website field, you could use the following CSS snippet:
```
/* ISSO COMMENTS */
label[for=isso-postbox-website] {
display: none !important;
}
input#isso-postbox-website {
display: none;
}
```
That hides the website field. Replace `website` with `email` to hide the email field.
#### Using lazy-load
I won't cover it in this article, but if you have many comments, it makes sense to implement some kind of lazy-load, so the comments only load when they are actually displayed, to improve the user experience and reduce the page loading time.
#### Moderation
As a reminder, with this setup you have to approve new comments via admin interface:
`https://comments.example.com/admin/`
# Conclusion
So, as you can see, it is really straightforward. Feel free to test it in the comment section below.

View file

@ -0,0 +1,72 @@
# Dummy IP & MAC Addresses for Documentation & Sanitization
In this article, we are going to look at a topic that, in my experience, is not well known. There are IP and MAC ranges that are reserved for documentation and can't be routed. There are many use cases for these ranges.
# Use Cases
There are several reasons why you don't want to use real or accessible host addresses. Perhaps the biggest reasons are for **Security & Privacy**: you **want to avoid sharing sensitive information** or **running scripts against 'real' host addresses**.
More examples:
: **Documentation** - in scripts, manuals, articles, ...
: **Placeholder** - in firewall policies, scripts, templates, ...
: **Mock-up** - in product demonstrations, presentations, flyers, ...
: **Sanitize / Anonymize Data** - logs, packet captures, configuration files, ...
There are more, but you know where I am going with this.
**Important**: Just to mention it again, please do not use the following addresses for anything else to avoid problems.
# IP Addresses
**IPv4** has the following IP address ranges:
`TEST-NET-1` - **192.0.2.0/24** *(192.0.2.0 - 192.0.2.255)*
`TEST-NET-2` - **198.51.100.0/24** *(198.51.100.0 - 198.51.100.255)*
`TEST-NET-3` - **203.0.113.0/24** *(203.0.113.0 - 203.0.113.255)*
Referenced in the [RFC5737](https://datatracker.ietf.org/doc/html/rfc5737#section-3).
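As a quick sketch of the sanitization use case mentioned above - the file names and the internal address are placeholders:

```
# replace an internal host address with a TEST-NET-1 address before sharing a log
sed 's/10\.20\.30\.40/192.0.2.10/g' app.log > app-sanitized.log
```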
---
For **Any-Source Multicast** (ASM):
`MCAST-TEST-NET` - **233.252.0.0/24** *(233.252.0.0 - 233.252.0.255)*
Note that it is part of the normal multicast space and referenced in [RFC6676](https://datatracker.ietf.org/doc/html/rfc6676#section-2).
---
You have two ranges in **IPv6**:
- **2001:db8::/32**
- Start: `2001:db8::`
- End: `2001:db8:ffff:ffff:ffff:ffff:ffff:ffff`
- Referenced in [RFC3849](https://datatracker.ietf.org/doc/html/rfc3849#section-2)
- **3fff::/20**
- Start: `3fff::`
- End: `3fff:fff:ffff:ffff:ffff:ffff:ffff:ffff`
- Referenced in [RFC9637](https://datatracker.ietf.org/doc/html/rfc9637#section-6)
# Ethernet MAC Addresses
For **Ethernet** you have the following ranges for documentation purposes:
**Unicast EUI-48**:
`00-00-5E-00-53-00` - `00-00-5E-00-53-FF`
**Multicast EUI-48**:
`01-00-5E-90-10-00` - `01-00-5E-90-10-FF`
Both ranges are referenced in [RFC7042](https://datatracker.ietf.org/doc/html/rfc7042#section-2.1.2).
---
I have rarely used the **EUI-64** format, so I will just share the [RFC7042](https://datatracker.ietf.org/doc/html/rfc7042#section-2.2.3) reference link rather than plastering half the article with it.
# Conclusion
I hope you found this article helpful and apply what you have learned at some point.

View file

@ -0,0 +1,236 @@
# How to: Cisco ISE backup to SFTP repository with public key authentication
> **Side note**: It can happen that after an **update/upgrade of the Cisco ISE the communication with the SFTP repository isn't possible anymore**. It should be sufficient to generate a new SSH key via the GUI ([**#**](#ise-gui-key)) and add it to the `authorized_keys` of the SFTP server ([**#**](#sftp-authorized_keys)) again.
<details>
<summary>> <b>Related error messages</b><br><span class="light-gray">click to unfold</span></summary>
<img src="/images/blog/ise-error-1.png" style="max-height:200px;">
<img src="/images/blog/ise-error-2.png" style="max-height:200px;">
</details>
---
## Update 15.10.24
This article has been updated to a new version. The process is the same, but all screenshots have been replaced.
Previous: **Cisco ISE**, Version 3.1.0.518, ADE-OS Version 3.1.0.135, VM cluster.
New: **Cisco ISE**, Version 3.3.0.430, ADE-OS Version 3.3.0.181, VM cluster.
## Requirements
Let me start with a list of things that are required:
* Access to Cisco, via GUI and CLI as admin
* SFTP server + user, and root access
* Network access: ISE > SFTP server over TCP/22 *(SSH - as SFTP transfers data over it)*
My setup is explained in the next section.
Some prior Linux knowledge will help you, but I've tried to keep it as simple as possible with additional explanations.
# Overview <a href="#overview" id="overview">#</a>
I've replaced some internal data with dummy data. You can find the overview here.
**Cisco ISE**, Version 3.3.0.430, ADE-OS Version 3.3.0.181, VM cluster.
**Ubuntu** 22.04.3 LTS as **SFTP server** accessible via **backup-server (10.10.10.10)**. The **user account** for the backups is called `ise`, and `Mysecureshell` was used to set it up. The backups will be saved in the home directory of said user under `/home/ise/ise-bk-new`.
More specific information will follow. Keep in mind that some configurations might change depending on your setup.
On the ISE, we are mainly working within these two menus.
![ise-1-ise-overview](/images/blog/ise-1-ise-overview.png)
# SFTP Server - Create a backup directory on the SFTP server <a href="#sftp-backup-directory" id="sftp-backup-directory">#</a>
We are going to save the backups in the home directory of the SFTP user `ise`. As root, we are going to switch to that directory, create a new directory for the backups, and change the necessary permissions.
Switch to home directory of user `ise`:
: `root@backup-server:# cd /home/ise/`
Create a new directory:
: `root@backup-server:# mkdir ise-bk-new`
Change ownership from `root` to user `ise`
: `root@backup-server:# chown ise:ise ise-bk-new`
Change permissions of directory:
: `root@backup-server:# chmod 755 ise-bk-new`
: `755` translates to:
: **user**: *read (r) 4, write (w) 2, execute (x) 1*
: **group**: *read (r) 4, execute (x) 1*
: **others**: *read (r) 4, execute (x) 1*
You can confirm the changes with `ll`:
```markdown
root@backup-server:# ll
drwxr-xr-x 2 ise ise 4096 Okt 1 01:11 ise-bk-new
```
# ISE GUI - Create new repository <a href="#ise-gui-repository" id="ise-gui-repository">#</a>
Open the GUI of the ISE as an admin and go to the following menu: `System > Maintenance > Repository`.
![ise-2-add-repo](/images/blog/ise-2-add-repo.png)
Click on `+Add` and choose `SFTP` as the protocol.
![ise-3-repo-config](/images/blog/ise-3-repo-config.png)
Choose a name for the repository, enter the IP or hostname of the SFTP server, and the path relative to the home directory of the SFTP user `ise`. *As a side note: I haven't checked whether a full absolute path would work too*.
Finally, enable PKI authentication by checking the box for `Enable PKI authentication`, enter the SFTP user and keep the password field **empty** as we will use a public key for authentication.
Click save to confirm the creation.
# ISE GUI - Create a new key pair for GUI user <a href="#ise-gui-key" id="ise-gui-key">#</a>
Right after, we create a key pair for this repository/GUI user and export the public key of that key pair.
Choose the new repository, click on `+ Generate Key pairs`, and choose a strong passphrase.
*Don't forget to store that passphrase in a secure place!*
![ise-5-generate-gui-key](/images/blog/ise-5-generate-gui-key.png)
Next, we will export the public key by choosing the repository again and clicking `Export public key`. Save it somewhere on your computer for now; we are going to need it later.
![ise-6-export-gui-key](/images/blog/ise-6-export-gui-key.png)
# ISE CLI - Add SFTP server host key to ISE <a href="#ise-cli-hostkey" id="ise-cli-hostkey">#</a>
> **Important**: In a cluster, this section has to be done on every ISE node so it keeps working after a fail-over! [Source](https://community.cisco.com/t5/network-access-control/ise-nodes-unable-to-see-sftp-repository/m-p/4520332/highlight/true#M571772)
Next, open the CLI of the ISE as `root`.
Add the host key of the SFTP server to the ISE with the following command:
: `crypto host_key add host ipAddressOrHostnameOfTheSFTPServer`
: **Important**: the IP or hostname must match the `Server Name` that has been specified for the repository in a previous section!
You can check if the host key was added with:
: `show crypto host`
# ISE CLI - Create a second key pair in CLI <a href="#ise-cli-key" id="ise-cli-key">#</a>
We now have to create another key pair.
Create a key pair in the CLI:
: `crypto key generate rsa passphrase ReallyStrongAndSecurePassphrase`
: *Again, save this passphrase in a secure place as well*
We now have to export this public key to a local repository and can download it via ISE GUI.
Export key to the local disk:
: `crypto key export ChooseNameForPublicKey repository LOCALDISK`
: `ChooseNameForPublicKey` - just in case, replace with a random name
: `LOCALDISK` - name of the local repository on ISE
Now you need to **download** the second public key in the GUI under `System > Maintenance > Localdisk Management`.
![ise-7-download-cli-key](/images/blog/ise-7-download-cli-key.png)
Save it to your computer.
# SFTP Server - change SSH server configuration <a href="#sftp-ssh-config" id="sftp-ssh-config">#</a>
So, from my experience, most SSH servers have authentication via public key disabled by default. We have to check it and might have to change some configurations.
We want to modify the configuration file of the SSH server to enable public key authentication. The default file is `/etc/ssh/sshd_config`, and sudo permissions are required.
**Side note**: Depending on your environment, it is not recommended to change the default configuration file as some management solutions or servers could overwrite them regularly. You could create a new `.conf` file in the `/etc/ssh/sshd_config.d/` directory to keep the ISE configuration separate. Just make sure that the reference to this directory is present in the default configuration file: `Include /etc/ssh/sshd_config.d/*.conf`.
**Open the configuration file** `config/isso.cfg` - wait, no: **open the SSH server configuration file** in your favorite text editor and change or add the following options.
Enable public key authentication:
: `PubkeyAuthentication yes`
Enable RSA authentication:
: `RSAAuthentication yes`
Make sure that the `authorized_keys` file is included:
: `AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2 ~/.ssh/authorized_keys`
**Important**: Don't forget to remove the leading `#` as it marks the whole line as a comment and will be ignored by the server!
Restart the SSH server with `sudo systemctl restart sshd` - which should be the default way to run your SSH server.
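If you prefer the separate drop-in file mentioned in the side note above, a minimal sketch could look like this - the file name is an assumption, and the RSA-related lines from above can be added if your setup needs them:

```
sudo tee /etc/ssh/sshd_config.d/ise-sftp.conf > /dev/null <<'EOF'
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
EOF
```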
**Side note:** Depending on your version of the SSH server, you have to accept RSA keys with:
```
HostkeyAlgorithms +ssh-rsa
PubkeyAcceptedAlgorithms=+ssh-rsa
HostKey /etc/ssh/ssh_host_rsa_key
```
As a reference and troubleshooting tips: [SSH - How to use public key authentication on Linux](https://ittavern.com/ssh-how-to-use-public-key-authentication-on-linux/)
# SFTP Server - add public keys to authorized_keys file <a href="#sftp-authorized_keys" id="sftp-authorized_keys">#</a>
So, we've created our key pairs on the ISE. We now have to make sure that the SFTP server trusts these.
To make this happen, we have to add the previously downloaded public keys to the `authorized_keys` file of the user `ise`. You can find the file in the `.ssh` directory in the home directory of the user.
`/home/ise/.ssh/authorized_keys`
**Important:** If it does not exist, you have to create it and make sure that the permissions are right.
```markdown
sudo chmod 700 /home/ise/.ssh # permission of the .ssh directory
sudo touch /home/ise/.ssh/authorized_keys # create file
sudo chown ise:ise /home/ise/.ssh/authorized_keys # set file ownership
sudo chmod 644 /home/ise/.ssh/authorized_keys # set file permissions
```
---
Now, we have to add the content of the public keys into the `authorized_keys` file. **One key per line!**
You can use the CLI text editor `nano` to do so, but feel free to use your favorite method. Alternatives: WinSCP, `ssh-copy-id`
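A quick sketch of appending one of the downloaded public keys from the command line - the key file name is a placeholder:

```
cat ise-gui-key.pub | sudo tee -a /home/ise/.ssh/authorized_keys
```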
# ISE GUI - Test your setup <a href="#testing" id="testing">#</a>
There are two simple ways to confirm that it is working.
First, on ISE GUI, on the page with the repositories, you can choose the new repository and click `Verify` to make sure that the communication with the SFTP server is possible.
The second option would be to create a manual backup. Visit `System > Administration > Backup & Restore `
![ise-8-create-man-backup](/images/blog/ise-8-create-man-backup.png)
Choose `Configuration Data Backup`, and click on `Backup Now`. In the pop-up window, choose a name, the new repository, a secure passphrase and click on `Backup`.
The status and result of the backup are present on the same page:
![ise-81-status-man-backup](/images/blog/ise-81-status-man-backup.png)
# Conclusion
This should do the trick! - I created this guide sometime after the implementation, so feel free to let me know if I missed anything.
Additionally, I'd like to share some other articles that might help to troubleshoot and harden the system:
* [SSH Troubleshooting Guide](https://ittavern.com/ssh-troubleshooting-guide)
* [SSH - How to use public key authentication on Linux](https://ittavern.com/ssh-how-to-use-public-key-authentication-on-linux)
* [SSH server hardening](https://ittavern.com/ssh-server-hardening)
#### References
*No order, just a bookmark dump of the day of the implementation:*
* https://www.cisco.com/en/US/docs/security/ise/1.0/user_guide/ise10_backup.html
* https://www.cisco.com/c/en/us/td/docs/security/ise/2-2/admin_guide/b_ise_admin_guide_22/b_ise_admin_guide_22_chapter_01011.html
* https://jmcristobal.com/2022/07/27/configuring-an-sftp-repository-in-ise/
* https://www.cisco.com/c/en/us/support/docs/security/identity-services-engine/215355-how-to-take-configuration-and-operation.html
* https://www.cisco.com/c/en/us/td/docs/security/ise/3-1/admin_guide/b_ise_admin_3_1/b_ISE_admin_31_maintain_monitor.html
* https://www.cisco.com/c/en/us/td/docs/security/ise/3-1/cli_guide/b_ise_cli_reference_guide_31/b_ise_CLIReferenceGuide_31_chapter_01.html
---

View file

@ -0,0 +1,24 @@
# About
**This is a personal blog.**
I enjoy testing new software and tools, and writing about them. I try to test as much as possible and keep it short. **I create articles that I'd like to read**.
Also, **performance and clean design are really important to me**. There are *(almost)* no JS dependencies and third party requests *(except for my self-hosted comment function)*, fast page loading speeds without any CDN, and overall a really small footprint.
**Privacy is important**. There are no trackers, no cookies, and no connections to third parties. Access logs are stored for security purposes, but are not shared with anyone.
---
This version is using the static site generator Hugo. The previous version used [Nikola](https://getnikola.com/).
---
**Special thanks** to ruffy and Frank!
---
If you have questions or comments, feel free to reach out.
<b>E-Mail</b>
hello<span style="display:none">foo</span>@itta<span style="display:none">foo</span>vern.<span style="display:none">com</span>com
<br>

View file

@ -0,0 +1,158 @@
# Sending nginx Logs to Loki with Grafana Alloy
![header](/images/blog/grafana-alloy-header.png)
I've decided to use Loki and Grafana to aggregate and display the nginx logs of my applications. That requires some configuration changes. This article helps you to set everything up, but it is not perfect or finished!
In this article I assume that you have 2 servers - both running Linux, one for nginx and the other for the Loki-Grafana stack. I try to keep it as general as possible to allow you to use configs for different platforms as well.
## nginx configuration
We have to adjust the logging format to structured JSON.
**Side note**: there are many formats you could choose. It highly depends on your needs and what you want to analyze. The format in this article works for me at the moment.
First, we need to create a `log_format` definition that allows us to choose the format for the logs we want to have.
In the `/etc/nginx/nginx.conf` file, we need to add the following config in the `http` block:
```nginx
log_format json_analytics escape=json '{'
'"msec": "$msec", ' # request unixtime in seconds with a milliseconds resolution
'"connection": "$connection", ' # connection serial number
'"connection_requests": "$connection_requests", ' # number of requests made in connection
'"pid": "$pid", ' # process pid
'"request_id": "$request_id", ' # the unique request id
'"request_length": "$request_length", ' # request length (including headers and body)
'"remote_addr": "$remote_addr", ' # client IP
'"remote_user": "$remote_user", ' # client HTTP username
'"remote_port": "$remote_port", ' # client port
'"time_local": "$time_local", '
'"time_iso8601": "$time_iso8601", ' # local time in the ISO 8601 standard format
'"request": "$request", ' # full path no arguments if the request
'"request_uri": "$request_uri", ' # full path and arguments if the request
'"args": "$args", ' # args
'"status": "$status", ' # response status code
'"body_bytes_sent": "$body_bytes_sent", ' # the number of body bytes exclude headers sent to a client
'"bytes_sent": "$bytes_sent", ' # the number of bytes sent to a client
'"http_referer": "$http_referer", ' # HTTP referer
'"http_user_agent": "$http_user_agent", ' # user agent
'"http_x_forwarded_for": "$http_x_forwarded_for", ' # http_x_forwarded_for
'"http_host": "$http_host", ' # the request Host: header
'"server_name": "$server_name", ' # the name of the vhost serving the request
'"request_time": "$request_time", ' # request processing time in seconds with msec resolution
'"upstream": "$upstream_addr", ' # upstream backend server for proxied requests
'"upstream_connect_time": "$upstream_connect_time", ' # upstream handshake time incl. TLS
'"upstream_header_time": "$upstream_header_time", ' # time spent receiving upstream headers
'"upstream_response_time": "$upstream_response_time", ' # time spend receiving upstream body
'"upstream_response_length": "$upstream_response_length", ' # upstream response length
'"upstream_cache_status": "$upstream_cache_status", ' # cache HIT/MISS where applicable
'"ssl_protocol": "$ssl_protocol", ' # TLS protocol
'"ssl_cipher": "$ssl_cipher", ' # TLS cipher
'"scheme": "$scheme", ' # http or https
'"request_method": "$request_method", ' # request method
'"server_protocol": "$server_protocol", ' # request protocol, like HTTP/1.1 or HTTP/2.0
'"pipe": "$pipe", ' # "p" if request was pipelined, "." otherwise
'"gzip_ratio": "$gzip_ratio", '
'"http_cf_ray": "$http_cf_ray",'
'}';
```
I've got this format from the [Grafana Dashboard](https://grafana.com/grafana/dashboards/12559-loki-nginx-service-mesh-json-version/) I want to use, minus the Geodata.
**Side note**: This addition does not change anything at this point as we do not 'use' this log format yet.
---
Next, we have to choose this logging format in an nginx `server` block to create logs in the required format.
You can have **multiple logging streams at once with different formats**, and for this article I've created a **new logging directory** to make the export easier:
`/var/log/nginx-json`
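The directory has to exist before nginx can write to it - creating it is a one-liner:

```bash
# create the new logging directory; nginx will create the log files inside it
sudo mkdir -p /var/log/nginx-json
```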
Open the nginx config file with the `server` block and add the following line:
`access_log /var/log/nginx-json/ittavern.com.access.log json_analytics;`
We create a new destination for this `server` block in the new directory, with the name of the page and the previously created log format `json_analytics`.
---
Check the correct syntax with `sudo nginx -t` and reload the nginx config with `sudo systemctl reload nginx` to finish the nginx configuration.
## Configuring Grafana Alloy
**Grafana Alloy** is the successor of **Promtail** and **is being used to push the local logs to Loki**.
As a reference, I've used this [article from the official documentation](https://grafana.com/docs/grafana-cloud/send-data/logs/collect-logs-with-alloy/).
There are many ways to run Grafana Alloy - **Please do check [the official installation documentation](https://grafana.com/docs/grafana-cloud/send-data/alloy/set-up/install/) and install it with your preferred method**. I've decided to use the native installation on Ubuntu for this article.
---
Before we configure anything, let me explain how it works - or at least how I understand Grafana Alloy:
The **main components of Grafana Alloy** are the **Collector, the Transformer and the Writer**.
The **collector** ingests logs locally, via an HTTP endpoint, or through other methods.
The **transformer** allows you to process the logs - filter out certain lines or terms, deduplicate, add labels, etc.
The **writer** pushes the processed logs to the destination.
You can have multiple components and build your own little pipeline.
That is the short form - all functions can be found in the [official documentation](https://grafana.com/docs/alloy/latest/reference/components/loki/) - and this is only for Loki. Grafana Alloy can be used for Prometheus and other endpoints as well!
---
The **default configuration file** can be found here:
`/etc/alloy/config.alloy`
The default config can be removed and we add the following components or 'pipeline':
```
local.file_match "local_files" {
path_targets = [{ "__path__" = "/var/log/nginx-json/*.log", job = "nginx", host = "1-prod-mnsn-net" }]
sync_period = "5s"
}
loki.source.file "log_scrape" {
targets = local.file_match.local_files.targets
forward_to = [loki.write.grafana_loki.receiver]
tail_from_end = true
}
loki.write "grafana_loki" {
endpoint {
url = "http://loki.lo.mnsn.net:3100/loki/api/v1/push"
}
}
```
In the first section, we provide the source of the logs we want to forward, add two labels, and set the sync period.
In the second section, we choose the source and forward them to the **writer** we want.
In the last section, we choose the destination for the logs - in this case our Loki instance.
This is good enough for this article. For more configuration options please check out the [official documentation](https://grafana.com/docs/alloy/latest/reference/components/loki/).
Feel free to check the syntax with `sudo alloy validate /etc/alloy/config.alloy`.
In the beginning, or for tests, you can run it manually via `sudo alloy run /etc/alloy/config.alloy` and see if it works; if it does, start the systemd service with `sudo systemctl start alloy`. Additionally, Grafana Alloy publishes a simple GUI at `127.0.0.1:12345`.
**Side note**: Please make sure that Grafana Alloy has read-access to the logs and the server with nginx running can reach the Loki instance over TCP/3100.
```bash
nc -vz loki.lo.mnsn.net 3100
Connection to loki.lo.mnsn.net (10.20.30.56) 3100 port [tcp/*] succeeded!
```
---
That is all! Now you can use the logs in Loki. I currently use [this Grafana dashboard](https://grafana.com/grafana/dashboards/12559-loki-nginx-service-mesh-json-version/), or simply build your own!

View file

@ -0,0 +1,73 @@
# ETag in nginx - Simple Resource Caching
An **ETag (Entity Tag) is an HTTP response header** that identifies a version of a resource. This allows efficient caching by providing the client a method to check for a new version before downloading it again.
In general, the ETag is based on the file's last modification time and size and is usually handled by the reverse proxy. This is called the **weak ETag** / weak validation.
Alternatively, there is a **strong or content-based ETag** that hashes the resource and provides a new version with every change. In most cases, the application has to manage the ETag.
Weak ETag headers can have the prefix `W/` - this is optional and often only displayed with the initial `200` HTTP response and not the `304` HTTP response.
![/images/blog/etag-browser-weak.png](/images/blog/etag-browser-weak.png)
More information in the [RFC9110 8.8.1 - Weak versus Strong](https://datatracker.ietf.org/doc/html/rfc9110#section-8.8.1).
---
## Enabling ETag in nginx
Enabling (weak) ETags in nginx is simple - add the following config to the `http`, `server` or `location` block to enable ETag headers.
```nginx
[...]
etag on;
add_header Cache-Control "no-cache" always;
[...]
```
The config can be overwritten in a more specific block, in case you want to change the caching for images or other resources.
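As a rough sketch of such an override - the snippet path, the `location`, and the `max-age` value are assumptions, not part of this setup:

```bash
# hypothetical override: longer caching for images, kept in a separate snippet file
sudo tee /etc/nginx/snippets/etag-images.conf > /dev/null <<'EOF'
location /images/ {
    etag on;
    add_header Cache-Control "public, max-age=86400" always;
}
EOF
# remember to include this snippet from the relevant server block
```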
Check syntax with `sudo nginx -t` and reload the nginx config.
That is it.
## Testing it with curl
In this article we are going to do some testing with curl. The ETag header can be seen in the browser dev tools as well.
```bash
curl -I https://ittavern.com
HTTP/2 200
server: nginx
date: Mon, 20 Oct 2025 15:42:14 GMT
content-type: text/html
content-length: 20665
last-modified: Sat, 18 Oct 2025 20:55:34 GMT
vary: Accept-Encoding
etag: "68f3fec6-50b9"
[...]
```
This ETag won't change as long as neither the size (*content-length*) nor the time of last modification (*last-modified*) of the resource changes.
We can now request the resource again with the ETag to check for changes.
```bash
curl -H 'If-None-Match: "68f3fec6-50b9"' -I https://ittavern.com
HTTP/2 304
server: nginx
date: Mon, 20 Oct 2025 16:24:47 GMT
last-modified: Sat, 18 Oct 2025 20:55:34 GMT
etag: "68f3fec6-50b9"
[...]
```
The `304` HTTP code stands for `Not Modified`.
---
This is how it looks in the browser
![/images/blog/etag-browser.png](/images/blog/etag-browser.png)

View file

@ -0,0 +1,160 @@
# Encryption using SSH Keys with age in Linux
In this article I want to share a method to use your SSH keypair to encrypt messages. We are going to use [age](https://github.com/FiloSottile/age) in Ubuntu 24.04.
The **installation guide** can be found in [the official repo](https://github.com/FiloSottile/age?tab=readme-ov-file#installation).
---
## Limitations
Before we start with usage, let me share some limitations. Not all SSH key types are suited for encryption - even though there seem to be workarounds. In a [Github comment](https://github.com/str4d/rage/issues/272#issuecomment-970193691), 'str4d' mentioned that `sk-* SSH keys` won't work as they only provide support for authentication.
The same seems to be the case for **ECDSA** (Elliptic Curve Digital Signature Algorithm) SSH keys as I got the following error message while testing:
`age: warning: recipients file "./age-testing.pub": ignoring unsupported SSH key of type "ecdsa-sha2-nistp521" at line 1`
---
**In this article I'll be working with EdDSA-ed25519 and RSA SSH keys.**
```bash
# RSA (Rivest-Shamir-Adleman):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/nameofthekey
# EdDSA ed25519:
ssh-keygen -t ed25519 -f ~/.ssh/nameofthekey
```
Additionally, the `ssh-agent` is **[not](https://github.com/FiloSottile/age?tab=readme-ov-file#ssh-keys)** supported.
---
## Usage
Common use cases are encrypting data so you can store or transfer it securely in an untrusted or unknown environment. You can make sure that only recipients with the right private key can decrypt the files, messages, or whatever else.
---
### Simple Examples
Used version:
```bash
age --version
1.1.1
```
**Encryption of a simple string with SSH public key:**
```bash
echo "Cheers" | age -R ~/.ssh/id_ssh.pub > cheers.txt
```
Encrypted content:
```bash
cat cheers.txt
age-encryption.org/v1
-> ssh-ed25519 7uu5gg 4ivp9LPXTVu6ryrhuSskhL5A3RuQWL8XAg5mxbx6v0s
kGJzFPj2TiwrvrWmVonCsGcWeYmQ7qsV5WXNrf6c0H0
--- Rr+SI6g+73XM6R3CTa7WVp4eEDBgdmZMlsjhHihwjz4
```
Decrypt file with SSH private key:
```bash
cat cheers.txt | age -d -i ~/.ssh/id_ssh
Cheers
```
---
**To encrypt files**, we build upon the example from the official documentation:
```bash
tar cvz ./data | age -R ~/.ssh/id_ssh.pub | base64 > data.tar.gz.age
./data/
./data/random-video.mp4
```
Remove the source files:
```bash
rm -r ./data
```
Decrypt files:
```bash
cat data.tar.gz.age | base64 --decode | age -d -i ~/.ssh/id_ssh | tar xzv
./data/
./data/random-video.mp4
```
**Side Note:** I'll use `base64` encoding to make it compatible with more services, as some tools might not like binary data.
**Addition:** Instead of the `base64` encoding, you could use `-a, --armor` to get a more compatible format like this:
```bash
# Encryption
tar cvz ./data | age -a -R ~/.ssh/id_ssh.pub > data.tar.gz.armor.age
# Format
head -5 data.tar.gz.armor.age
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IHNzaC1lZDI1NTE5IDd1dTVnZyBjL0Zo
eTJUWDdDR0YzdzdjdDJEZjVxV0NRS1kxMlJWZEt5Y1hONGp4OHljCmJCdjBrUWlZ
QitTcC9Na1BDV2NCbHJsSlhaVHRJMElJMkJxUVdSd3ZFRjAKLS0tIGdSRHJFRVBV
NkFFbjNTbStBLzg0THJCM0lCMHdKbzhvRS9YemhISlRUY0kKK7Cxnu172NVbpBaa
# Decryption
cat data.tar.gz.armor.age | age -d -i ~/.ssh/id_ssh | tar xzv
```
[Thank you technomancy for pointing it out!](https://lobste.rs/s/vt2gjb/encryption_using_ssh_keys_with_age_linux)
---
### Practical Example
There are multiple CLI paste services that allow you to share configs, error messages, and so on. Examples are [0x0.st](https://0x0.st/) or [linedump.com](https://linedump.com/).
Encrypting the payload makes sure that nobody else can read the content.
**Upload**
String:
: `echo "Cheers" | age -R ~/.ssh/id_ssh.pub | base64 | curl -X POST --data-binary @- https://linedump.com`
File:
: `age -R ~/.ssh/id_ssh.pub -o - file.txt | base64 | curl -X POST --data-binary @- https://linedump.com`
Command:
: `ip -br a | age -R ~/.ssh/id_ssh.pub | base64 | curl -X POST --data-binary @- https://linedump.com`
**Download**
Save to file:
: `curl -s https://linedump.com/{paste_id} | base64 -d | age -d -i ~/.ssh/id_ssh > output3.txt`
**Side Note**: A disclaimer: Linedump is a project of mine, which was one reason to look into encryption with SSH key pairs for some automation.
---
### Multiple recipients
`age` allows you to encrypt for multiple recipients, each of whom can decrypt it individually - great for a team or for automation and syncing.
Simply use multiple `-r/--recipient` flags - which requires the public keys in the command - or add the public keys to a file (one key per line) and use `-R`.
Official documentation:
```bash
cat recipients.txt
# Alice
age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
# Bob
age1lggyhqrw2nlhcxprm67z43rta597azn8gknawjehu9d9dl0jq3yqqvfafg
age -R recipients.txt example.jpg > example.jpg.age
```

View file

@ -0,0 +1,96 @@
# Tools and Hosting
**IMPORTANT**: Will change a lot of things and have to kill 1-2 services!
**Services are provided for free**, and are either public or invite-only. I use those services myself and I think it is a great way to support the project by hosting an instance.
**Privacy is important**. I don't share or sell any information.
<span style="color: green;"><strong>NEW</strong></span> **Status Page** is available via [**mettwork.com**](https://mettwork.com/status/ittavern). - This is a NEW NEW status page!
**Feedback** is welcome either via hello<span style="display:none">foo</span>@itta<span style="display:none">foo</span>vern.<span style="display:none">com</span>com or comment below.
---
# Hosted Services
*Click for more details*
**NEW = Experimental**
<details>
<summary class="summary-service"><b><span style="color: green;"><strong>NEW</strong></span> <a href="https://itt.sh">itt.sh</a></b><br>Link shortener with QR code generation.<br><span class="light-gray">click to unfold</span></summary>
## [itt.sh](https://itt.sh/) <a href="#itt.sh" id="itt.sh">#</a> - Flink
Links:
: [itt.sh](https://itt.sh/)
: [Source Code](https://gitlab.com/rtraceio/web/flink)
</details>
<details>
<summary class="summary-service"><b><span style="color: green;"><strong>NEW</strong></span> <a href="https://convert.ittavern.com">convert.ittavern.com</a></b><br>Converts all kinds of formats - PLEASE NOTE: Videos are currently send to a third party server as it is not part of the current container. I'll try to implement their solution at some point.<br><span class="light-gray">click to unfold</span></summary>
## [convert.ittavern.com](https://convert.ittavern.com/) <a href="#convert.ittavern.com" id="convert.ittavern.com">#</a> - vert.sh
Links:
: [convert.ittavern.com](https://convert.ittavern.com/)
: [Source Code](https://github.com/VERT-sh/VERT)
</details>
---
<details>
<summary class="summary-service"><b><a href="https://share.ittavern.com">share.ittavern.com</a></b><br>securely sharing of passwords, code, snippets and files up to 30MB for a limited time<br><span class="light-gray">click to unfold</span></summary>
## [share.ittavern.com](https://share.ittavern.com/) <a href="#share.ittavern.com" id="share.ittavern.com">#</a> - PrivateBin
Links:
: [share.ittavern.com](https://share.ittavern.com/)
: [Source Code](https://github.com/PrivateBin/PrivateBin)
</details>
<details>
<summary class="summary-service"><b><a href="https://ntfy.ittavern.com">ntfy.ittavern.com</a></b><br>Open push notifcation platform for your devices<br><span class="light-gray">click to unfold</span></summary>
## [ntfy.ittavern.com](https://ntfy.ittavern.com/) <a href="#ntfy.ittavern.com" id="ntfy.ittavern.com">#</a> - ntfy
Links:
: [ntfy.ittavern.com](https://ntfy.ittavern.com/)
: [Official Homepage](https://ntfy.sh/)
: [Source Code](https://github.com/binwiederhier/ntfy)
</details>
<details>
<summary class="summary-service"><b><a href="https://cc.ittavern.com">cc.ittavern.com - CyberChef</a></b><br>The Cyber Swiss Army Knife<br><span class="light-gray">click to unfold</span></summary>
## [cc.ittavern.com](https://cc.ittavern.com/) <a href="#cc.ittavern.com" id="cc.ittavern.com">#</a> - CyberChef
Links:
: [cc.ittavern.com](https://cc.ittavern.com/)
: [Source Code](https://github.com/gchq/CyberChef)
</details>
<details>
<summary class="summary-service"><b><a href="https://draw.ittavern.com">draw.ittavern.com - draw.io</a></b><br>whiteboarding / diagramming software application<br><span class="light-gray">click to unfold</span></summary>
## [draw.ittavern.com](https://draw.ittavern.com/) <a href="#draw.ittavern.com" id="draw.ittavern.com">#</a> - Draw.io Instance
Links:
: [draw.ittavern.com](https://draw.ittavern.com/)
: [Source Code](https://github.com/jgraph/docker-drawio?tab=readme-ov-file#quick-start)
</details>
<details>
<summary class="summary-service"><b><a href="https://pdf.ittavern.com">pdf.ittavern.com - StirlingPDF</a></b><br>PDF editing in your browser<br><span class="light-gray">click to unfold</span></summary>
Work in Progress
</details>